• ImplyingImplications@lemmy.ca · 1 year ago

    There are thousands of sci-fi novels where sentient robots are treated terribly by humans, and apparently the people at Boston Dynamics have read absolutely zero of them, as they spend all day finding new ways to torment their creations.

    • NιƙƙιDιɱҽʂ@lemmy.world · 1 year ago

      Those are just brainless bodies, currently. They don’t have sentience and have no ability to suffer. They’re nothing more than hydraulics, servos, and gyros. I’d be more concerned about the mistreatment of advanced AI in disembodied form, something we’re potentially dabbling close to right now.

        • NιƙƙιDιɱҽʂ@lemmy.world · 1 year ago

          I disagree. I care greatly about not mistreating anything with consciousness, and I worry about where that line is and how we’ll even be able to tell that we’ve crossed it.

          I also recognize that a mechanized body without a brain is exactly that: a cluster of unthinking matter. A true artificial intelligence wouldn’t be offended by the mistreatment of inanimate gears and servos any more than I would be. The mistreatment of an intelligent entity, however, is a different story.

      • LillyPip@lemmy.ca · 1 year ago

        Food for thought, though: we thought the same thing about all other animals until only a couple of decades ago, and are still struggling over the topic.

        • NιƙƙιDιɱҽʂ@lemmy.world · 1 year ago

          …Just no. Animals are complex organic beings. Of course we don’t fully understand them. Machines, though? We built machines from the literal Earth. Their level of complexity doesn’t come close to that of anything made by nature.

          Now, take a sufficiently advanced neural network, essentially a black box that no human can possibly understand entirely, and put it inside that machine? Then you’re absolutely right. We’ll get there soon, I’m sure. For now, however, a physical robotic body is just a machine, no different than a car.

    • LillyPip@lemmy.ca · 1 year ago

      People think I’m crazy for apologising to my roomba when I trip on it and for saying please and thank you to Alexa and Siri, but I won’t be surprised at all when the robots rise up, considering how our scientists are treating them. I’ll have a track record of being nice, and that has to count for something, right?

          • Hupf@feddit.de · 1 year ago

            Doctor Bashir: They broke seven of your transverse ribs and fractured your clavicle.

            Garak: Ah, but I got off several cutting remarks which no doubt did serious damage to their egos.

  • sleepy@reddthat.com · 1 year ago

    Isn’t that part of the AI marketing, though? That whole “this thing could destroy us” stuff?

      • visak@lemmy.world · 1 year ago

        The current stuff is smoke and mirrors and not intelligent in any meaningful sense, but that doesn’t mean it isn’t dangerous. It doesn’t have to be robots with guns to screw over people. Just imagine trying to get PharmaGPT to let you refill your meds, or having to deal with BankGPT trying to figure out why it transferred your rent payment twice. And companies are sure as hell thinking about using this stuff to get rid of human decision-makers.

        • Square Singer@feddit.de · 1 year ago

          That is totally true, but it’s a different kind of danger than the one in the marketing discussed above.

          The media is full of “AI is so amazingly great, we are all going to lose our jobs and it will take over the world.”

          That’s quite a different message from what’s really the case, which is: “AI is so shitty that it will literally kill people with bad advice when given the chance. And business leaders are so shit that they willingly trust AI, just because it’s cheaper.”

          • visak@lemmy.world · 1 year ago

            Yeah, they got the “will take our jobs” part, just not the “will take our jobs, be worse at it, and companies will still prefer it” part.

            I was around in the 80s when we were losing all the manufacturing jobs, mostly to outsourcing, though they blamed automation, and they said “don’t worry, there will be lots of good-paying jobs in the new service economy!” Guess what: they outsourced those too, and now they’ll automate them.

            • Square Singer@feddit.de · 1 year ago

              It will be interesting to see how it plays out. For some jobs, mainly stuff that wasn’t important but needed doing anyway (e.g. writing product listings on Amazon), this will be fatal. Those jobs aren’t coming back.

              But for more skilled jobs, it will be interesting to see how companies deal with it when the AI messes up important stuff every single time.

              On the other hand, managers have been doing the same consistently for a much longer time and they still exist. Let’s see what happens.

      • Comment105@lemm.ee · 1 year ago

        Do you see any reason to think enough iterations of random nodes in a large enough network could result in emergent conscious intelligence?

        Or are you more of a spiritualist than a materialist when it comes to the mind?

        • Square Singer@feddit.de · 1 year ago

          I can’t say anything about the spiritualist/materialist thing, but there are two things that are clear:

          First: just as you won’t ever get a work of Shakespeare by randomly stringing letters together in any reasonable time frame, you won’t get consciousness that way either. Even if it’s possible in principle, the number of incorrect permutations is so massive that random trying will never be enough in any realistic amount of time.
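
          To put rough numbers on the Shakespeare point (just a back-of-the-envelope sketch; the target line, the 27-character alphabet, and the guess rate are arbitrary assumptions, not anything measured):

              # Rough estimate: how long brute-force random guessing would need
              # to hit even one short Shakespeare line by pure chance.
              target = "to be or not to be that is the question"
              alphabet = 27                           # 26 letters plus the space
              permutations = alphabet ** len(target)  # ~6.6e55 candidate strings

              guesses_per_second = 1e9                # a generous billion tries per second
              seconds_per_year = 60 * 60 * 24 * 365
              years = permutations / guesses_per_second / seconds_per_year

              print(f"{permutations:.1e} candidate strings")
              print(f"~{years:.1e} years to try them all")  # ~2e39 years; the universe is ~1.4e10 years old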

          Second: transformer networks and all the other generative AI concepts we have today aren’t even trying to create consciousness. They are not the path to general AI.