• sleepy@reddthat.com · 1 year ago

    Isn’t that a part of the ai marketing though? That whole “this thing could destroy us” stuff?

      • Comment105@lemm.ee · 1 year ago

        Do you see any reason to think enough iterations of random nodes in a large enough network could result in emergent conscious intelligence?

        Or are you more of a spiritualist than a materialist when it comes to the mind?

        • Square Singer@feddit.de · 1 year ago

          I can’t say anything about the spiritualist/materialist thing, but there are two things that are clear:

          First: Just as you could never produce a work of Shakespeare by randomly stringing letters together in any reasonable time frame, you won’t get consciousness that way either. Even if it’s possible in principle, the number of incorrect permutations is so vast that random trial alone will never succeed in any realistic amount of time.
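The typewriter argument above can be made concrete with a quick back-of-the-envelope calculation (a sketch only; the phrase, alphabet, and guessing rate are illustrative assumptions, not anyone's real benchmark):

```python
# Rough odds of randomly typing a single Shakespeare line.
# Assume a 27-symbol alphabet (26 lowercase letters plus space).

phrase = "to be or not to be that is the question"  # 39 characters
alphabet_size = 27

# Number of possible strings of that length: 27^39, roughly 6.6e55.
permutations = alphabet_size ** len(phrase)

# Even guessing a billion random strings per second, the expected
# wait dwarfs the age of the universe (~4.4e17 seconds).
guesses_per_second = 1e9
expected_seconds = permutations / guesses_per_second

print(f"possible strings: {permutations:.1e}")
print(f"expected seconds at 1e9 guesses/s: {expected_seconds:.1e}")
```

And that is one 39-character line, not a whole play, let alone a mind: the search space grows exponentially with length, which is the point being made.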

          Second: Transformer networks and all the other generative AI concepts we have today aren’t even trying to create consciousness. They are not the path to general AI.

      • visak@lemmy.world · 1 year ago

        The current stuff is smoke and mirrors and not intelligent in any meaningful sense, but that doesn’t mean it isn’t dangerous. It doesn’t take robots with guns to screw people over. Just imagine trying to get PharmaGPT to let you refill your meds, or having to deal with BankGPT trying to figure out why it transferred your rent payment twice. And companies are sure as hell thinking about using this stuff to get rid of human decision-makers.

        • Square Singer@feddit.de · 1 year ago

          That is totally true, but that’s a different danger from the one in the marketing discussed above.

          The media is full of “AI is so amazingly great, we are all going to lose our jobs and it will take over the world.”

          That’s quite a different message from what’s really the case, which is: “AI is so shitty that it will literally kill people with bad advice when given the chance. And business leaders are so shit that they willingly trust AI, just because it’s cheaper.”

          • visak@lemmy.world · 1 year ago

            Yeah, they got the “will take our jobs” part, just not the “will take our jobs, be worse at them, and companies will still prefer it” part.

            I was around in the 80s when we were losing all the manufacturing jobs, mostly to outsourcing, though they blamed automation, and they said, “Don’t worry, there will be lots of good-paying jobs in the new service economy!” Guess what: they outsourced those too, and now they’ll automate them.

            • Square Singer@feddit.de · 1 year ago

              It will be interesting to see how it plays out. For some jobs, mainly stuff that wasn’t important but needed doing anyway (e.g. writing product listings on Amazon), this will be fatal. Those jobs aren’t coming back.

              But for more skilled jobs, it will be interesting to see how companies deal with it when AI messes up important stuff every single time.

              On the other hand, managers have been doing the same consistently for a much longer time and they still exist. Let’s see what happens.