Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Last week’s thread

(Semi-obligatory thanks to @dgerard for starting this)

  • gerikson@awful.systems · 3 months ago

    Following up from this truth bomb: https://awful.systems/comment/4877052

    @Soyweiser: Sorry AGIbros, not even the Dutch believe AGI is near.

    For your delectation, here are the HN comments

    I’m in the other camp: I remember when we thought an AI capable of solving Go was astronomically impossible and yet here we are. This article reads just like the skeptic essays back then.

    Ah yes my coworkers communicate exclusively in Go games and they are always winning because they are AI and I am on the street, poor.

    There’s not that much else to sneer at though, plenty of reasonable people.

    Here’s the lobste.rs discussion: https://lobste.rs/s/4xzxqk

    • David Gerard@awful.systems · 3 months ago

      oh i dunno, there was

      Honestly - Computer Science has given us more clues about how the human mind might work than cognitive science ever did.

      • barsquid@lemmy.world · 3 months ago

        I think the one thing LLMs have shown us is that coherent English is less complicated than we previously believed. I don’t think we learned anything about actual cognition.

      • Soyweiser@awful.systems · 3 months ago

        This remark is actually part of a long fight between CS and CS people. And it is really frustrating in various ways, as CS always thinks they did better than CS while being blind to the actual accomplishments of CS they don’t know, and to just how complex the subject matter is. It is an annoying failure to communicate between both disciplines. (A lot of people don’t fall victim to this btw, but it can be really annoying to encounter an ‘our CS is good, and theirs is bad because strawman’ type, who often doesn’t even realize that various words have different meanings in the different fields.)

        • bitofhope@awful.systems · 3 months ago

          For the record, I think the Counter-Strike people are correct on this one, mainly because heuristically Confederate States advocates are wrong by default.

          • Soyweiser@awful.systems · 3 months ago

            Exactly the problem I’m talking about. What about all the good things the confederate state … no wait.

            • gerikson@awful.systems · 3 months ago

              Before you further impugn my sneer-hunting: the quote I posted was literally the first one on top in the thread. I thought it was gonna be easy pickings before I realized a lot of people were making sense and I got bored.

    • Sailor Sega Saturn@awful.systems · 3 months ago

      Well that’s quite the confused comment chain, given that neither Go nor chess is solved. “Remember that thing everyone said wouldn’t happen? Well it still hasn’t happened! 🫨”

      • Soyweiser@awful.systems · 3 months ago

        They’re confusing ‘solved’ with ‘a computer can beat high-level human players a high % of the time’, because they don’t know that ‘solved’ actually has a specific meaning.

        Tech reporting has massively fucked this up as well over the years btw, so I’m not that annoyed that random HN people also don’t get it. But there is a Wikipedia page for it: https://en.wikipedia.org/wiki/Solved_game
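        For concreteness: a game is ‘solved’ when the outcome under perfect play from the starting position is known, not when a computer merely beats humans at it. A toy sketch of my own (not from the thread): tic-tac-toe can be solved by exhaustive minimax in a few lines, and the game-theoretic value of the empty board is a draw. Go’s state space is far too large for anything like this, which is exactly why it remains unsolved even though engines beat professionals.

```python
def lines(b):
    """All 8 winning lines of a 9-char board string ('.' = empty)."""
    rows = [b[0:3], b[3:6], b[6:9]]
    cols = [b[0::3], b[1::3], b[2::3]]
    diags = [b[0::4], b[2:7:2]]
    return rows + cols + diags

def winner(b):
    for line in lines(b):
        if line[0] != '.' and line[0] == line[1] == line[2]:
            return line[0]
    return None

def value(b, player):
    """Game-theoretic value for X under perfect play:
    +1 = X wins, 0 = draw, -1 = O wins. `player` is next to move."""
    w = winner(b)
    if w:
        return 1 if w == 'X' else -1
    if '.' not in b:
        return 0  # board full, no winner: draw
    other = 'O' if player == 'X' else 'X'
    vals = [value(b[:i] + player + b[i + 1:], other)
            for i, c in enumerate(b) if c == '.']
    # X maximizes the value, O minimizes it
    return max(vals) if player == 'X' else min(vals)

print(value('.' * 9, 'X'))  # 0: tic-tac-toe is a draw under perfect play
```

        Swapping in the board-string encoding is just for brevity; the point is that ‘solved’ means this exhaustive computation (or a proof standing in for it) has been carried out for the whole game.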

    • gerikson@awful.systems · 3 months ago

      The best thing about the lobste.rs thread is identifying the prompt fondlers among the brethren.

      Here’s something I’ve never heard of before:

      https://en.wikipedia.org/wiki/Moravec%27s_paradox

      Moravec wrote in 1988: “it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers[…]”

      Apparently he had GPT back then!

      Anyway is this anything anyone takes seriously? Steven Pinker makes an appearance in the wiki page, which is a bit of a red flag.

      • YourNetworkIsHaunted@awful.systems · 3 months ago

        So to throw my totally-amateur two cents in: it seems like it’s definitely part of the discussion in actual AI circles, based on the for-public-consumption reading and viewing I’ve done over the years, though I’ve never heard it mentioned by name.

        I think a bigger part of the explanation has less to do with human cognition (it’s probably fallacious to assume that AI of any method effectively reproduces those processes) and more to do with the more abstract cognitive tests and games being much more formally defined. Our perception and model of a game of Chess or Go may not be complete enough to solve the game, but it is bounded by the explicitly-defined rules of the game. If your opponent tries to work outside of those bounds by, say, flipping the board over and storming off, the game itself can treat that as a simple forfeit-by-cheating.

        But our understanding of the real world is not similarly bounded. Things that were thought to be impossible happen with impressive frequency, and our brain is clearly able to handle this somehow. That lack of boundedness requires different capabilities than just being able to operate within expected parameters like existing English GenAI or image generators do; I suspect it relates to handling uncertainty or lacking information. The assumption that what AI is doing is a mirror to the living mind is wholly unproven.

      • corbin@awful.systems · 3 months ago

        Yeah, it’s a real thing that happens when programming robots. Kinematics is more difficult than route planning, for example.

      • imadabouzu@awful.systems · 3 months ago

        Moravec’s Paradox is actually more interesting than it appears. You don’t have to take his reasoning or Pinker’s seriously, but the observation is salient. Also, the paradox gets stated in other ways by other scientists; it’s a common theme.

        One way I often think about it: for you to survive, the intelligence of moving in unknown spaces and managing numerous fuzzy energy systems is way more important to prioritize and master than, like, the abstract conceptual spaces that are both not full of calories and also cheaper to externalize anyway.

        It’s part of why I don’t think there is a globally coherent hierarchy of intelligence, or potentially even general intelligence at all. Just the distances and spaces that a thing occupies, and the competencies that define being in that space.

    • swlabr@awful.systems · 3 months ago

      I remember when

      I don’t think AI will ever be able to get me to lick my own elbow (while my body is undamaged). Boom AGI will never happen. Logic’ed