Google’s AI may replace traditional websites and content creators, leading to potential monopolization and a diminished user experience - Mrwhosetheboss

  • rottingleaf@lemmy.world · +11/-3 · 2 months ago

    For the same reason it became more popular than other search engines in the ’00s: those gave honest search results, while Google applied various kinds of complex foolery, approaching ML, to give people what they wanted faster.

    One could say their corporate culture shows signs of overfitting to that situation. And not just theirs: in general, attempts to make products more competitive through ever more complex foolery outside the main functionality are exactly that.

    Except that when products become plainly less usable for said main functionality, people use them less, even if they don’t consciously realize it.

    Also - what is Google, in essence? It’s the claim that some computation is too smart for you to run at home or self-host, that it can only be done by the very smart and important people at companies with trillions in capitalization. And because you can’t run it yourself - forbidden by cultural taboo as much as by anything technical - they get to manipulate results so your money flows to the people partnering with them.

    We all know there’s nothing fundamentally or practically impossible about building a search engine. If you don’t have to cache pages, it’s actually easy.
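
    A minimal sketch of the core, assuming we skip crawling, caching and ranking (the names and toy pages are made up, just to show the data structure):

    ```python
    # Toy inverted index: the heart of a search engine, minus caching/crawling.
    from collections import defaultdict

    index = defaultdict(set)              # term -> set of page URLs

    def add_page(url, text):
        for term in text.lower().split():
            index[term].add(url)

    def search(query):
        # Return pages containing every query term (set intersection).
        terms = query.lower().split()
        if not terms:
            return set()
        results = index[terms[0]].copy()
        for term in terms[1:]:
            results &= index[term]
        return results

    add_page("http://a.example", "cheap flights to oslo")
    add_page("http://b.example", "oslo city guide")
    print(search("oslo"))                 # both pages
    print(search("cheap oslo"))           # only http://a.example
    ```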

    The issue is in the service requirements. What the Internet needs is a technically transparent p2p market of services, where storage and computing power can be transparently donated (or sold), just as in some countries you can sell power back to the electric grid.

    OK, I’ve described the magic wand. That’s the strategy. The tactics are for someone actually capable of conceiving the thing. LOL
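
    Still, to make the market half slightly less hand-wavy, here’s a toy shape of it - every name here is made up, and none of this is a real protocol:

    ```python
    # Hypothetical sketch: matching donated/sold resources to a request,
    # the way surplus power gets sold back to the grid.
    from dataclasses import dataclass

    @dataclass
    class Offer:
        peer: str
        resource: str          # e.g. "storage_gb" or "cpu_core_hours"
        amount: float
        price: float           # 0.0 means donated

    def match(offers, resource, needed):
        """Greedily take the cheapest matching offers until the need is met."""
        taken = []
        for o in sorted((o for o in offers if o.resource == resource),
                        key=lambda o: o.price):
            if needed <= 0:
                break
            taken.append(o)
            needed -= o.amount
        return taken

    offers = [Offer("peer-a", "storage_gb", 50, 0.0),
              Offer("peer-b", "storage_gb", 200, 0.01)]
    print(match(offers, "storage_gb", 100))   # donated space first, then paid
    ```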

    • asdfasdfasdf@lemmy.world · +5 · edited · 2 months ago

      The search-engine magic isn’t just about caching pages. It’s also extremely expensive and complex to:

      • maintain an index of all the websites in the world - an extremely high cost;
      • refresh that index in near real time - how long would your self-hosted crawler take to find new content across every website in the world?
      • rank the results - the ordering and relevance of results is not easy at all, and how many times a word appears on a page is a terrible metric (see the sketch after this list).
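
      Even the “classic” first fix for raw word counts, TF-IDF, already needs statistics over the whole corpus - which is exactly the expensive shared thing. A toy sketch (the documents are made up):

      ```python
      # TF-IDF sketch: term frequency is damped by how rare the term is
      # across the whole corpus, so ranking needs index-wide statistics,
      # not just the one page you are scoring.
      import math

      docs = {
          "a": "the cat sat on the mat",
          "b": "the dog sat",
          "c": "the cat videos cat pictures cat",
      }

      def tf_idf(term, doc_id):
          words = docs[doc_id].split()
          tf = words.count(term) / len(words)
          df = sum(1 for text in docs.values() if term in text.split())
          return tf * (math.log(len(docs) / df) if df else 0.0)

      # "the" is frequent but appears everywhere, so it scores exactly 0;
      # "cat" scores highest in "c", where it is frequent *and* selective.
      for d in docs:
          print(d, round(tf_idf("cat", d), 3), round(tf_idf("the", d), 3))
      ```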

      A hosted search is also a lot more environmentally friendly: that gigantic search index, and all the energy poured into building it, is shared by everyone. If everyone did it themselves at home, you’d spend that same amount of energy once per household.

      • rottingleaf@lemmy.world · +2 · 2 months ago

        “Almost real time” is not what I’d call necessary.

        Weighting results - I’d expect user feedback (good result / bad result, combined with the keywords from the query) to be good enough, similar to how ed2k handled file reputation.
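
        Something like this sketch (the smoothing is arbitrary, and a real system would have to resist vote spam):

        ```python
        # Sketch: rank results by accumulated user feedback per (keyword, url),
        # Laplace-smoothed so new pages are not pinned to 0 or 1.
        from collections import defaultdict

        votes = defaultdict(lambda: [0, 0])   # (keyword, url) -> [good, bad]

        def feedback(keyword, url, good):
            votes[(keyword, url)][0 if good else 1] += 1

        def score(keyword, url):
            good, bad = votes[(keyword, url)]
            return (good + 1) / (good + bad + 2)

        feedback("oslo", "http://a.example", True)
        feedback("oslo", "http://a.example", True)
        feedback("oslo", "http://b.example", False)
        print(score("oslo", "http://a.example"))   # 0.75
        print(score("oslo", "http://b.example"))   # ~0.33
        ```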

        The index is going to be big, yes. But if we want a p2p system with split storage and computation - something between Freenet and Ceph - it may be doable.
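
        The split itself is the easy part: hash each term to a peer so nobody stores the whole index. A toy sketch (real systems like Ceph’s CRUSH or Kademlia-style DHTs add replication and rebalancing on top):

        ```python
        # Toy sketch: deterministically assign each index term to a peer.
        import hashlib

        peers = ["peer-a", "peer-b", "peer-c"]

        def owner(term):
            digest = hashlib.sha256(term.encode()).digest()
            return peers[int.from_bytes(digest[:8], "big") % len(peers)]

        for term in ["cat", "oslo", "search"]:
            print(term, "->", owner(term))
        ```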

        > A hosted search is also a lot more environmentally friendly: that gigantic search index, and all the energy poured into building it, is shared by everyone. If everyone did it themselves at home, you’d spend that same amount of energy once per household.

        With some kind of such a p2p system I can imagine the overhead being like 10 or maybe 100 times Google’s - but not what you said, the full cost repeated for every single household.

    • cybergazer@sh.itjust.works · +4 · 2 months ago

      When you go to search for something and you literally have to append site:reddit.com to the query to get a human response, it’s no wonder it’s falling apart.