• GooberEar@lemmy.wtf · 8 hours ago

    I definitely miss the cached pages. I found that I was using the feature very frequently. Maybe it’s just the relative obscurity of some of my hobbies and interests, but a lot of the information that shows up in search engines seems to come from old forums. Oftentimes those old forums are no longer around or have migrated to new software (obliterating the old URLs and old posts as well).

  • bassomitron@lemmy.world · 12 hours ago

    I was super annoyed when they first took away the links. “Pages are more dependably available now” is such a lazy excuse. Storing the cached content probably wasn’t even that expensive for them, as it didn’t retain anything beyond basic HTML and text. Their shitty AI-centric web search was likely the main reason for getting rid of it.

    • frezik@midwest.social · 6 hours ago

      They used to have a “cache” link on search results. It occasionally came in handy when the original site was down or changed their link or something.

    • WindyRebel@lemmy.world · 6 hours ago

      It was a tool for seeing what Google had cached, letting you check a web page for changes since Google’s last crawl.

      It also had a nice habit of bypassing those pop-ups that would prevent scrolling. 😂

    • General_Effort@lemmy.world · 11 hours ago

      Shocked? You’d think all the people outraged at having their websites scraped would be delighted. That’s probably the real reason for this.

      • subignition@fedia.io · 8 hours ago

        It’s not the scraping itself, but the purpose of the scraping, that can be problematic. There are good reasons for public sites to allow scraping.

        • General_Effort@lemmy.world · 51 minutes ago

          I have the distinct impression that a number of people would object to the purpose of re-hosting their content as part of a commercial service, especially one run by Google.

          Anyway, now no one has to worry about Google helping people bypass their robots.txt or IP-blocks or whatever counter-measures they take. And Google doesn’t have to worry about being sued. Next stop: The Wayback Machine.

      • WoahWoah@lemmy.world · 5 hours ago

      It’s a 2-trillion-dollar company; I think news of its coming demise has been exaggerated.