• iAvicenna@lemmy.world · 4 months ago

    I think tech CEOs can empathise with ChatGPT on how uninformed its opinions are and how well it can bullshit.

  • Snowclone@lemmy.world · 4 months ago

    They put new AI controls on our traffic lights. Cost the city a fuck ton more money than fixing our dilapidated public pool. Now no one tries to turn left at a light. They don’t activate. We threw out a perfectly good timer no one was complaining about.

    But no one from Silicon Valley is lobbying cities to buy pool equipment, I guess.

    • lazynooblet@lazysoci.al · 4 months ago

      Whilst it’s a shame this implementation sucks, I wish we would get intelligent traffic light controls that worked. Sitting at a light for 90 seconds in the dead of night without a car in sight is frustrating.

      • lemmyvore@feddit.nl · 4 months ago

        That was a solved problem 20 years ago lol. We made working systems for this in our lab at Uni, it was one of our course group projects. It used combinations of sensors and microcontrollers.

        It’s not really the kind of problem that requires AI. You can do it with AI and image recognition or live traffic data but that’s more fitting for complex tasks like adjusting the entire grid live based on traffic conditions. It’s massively overkill for dead time switches.
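        A dead-time switch like the one described really is just a sensor plus a tiny state machine. A minimal sketch in Python, purely illustrative (the phase names, `side_sensor` input, and timing constants are hypothetical, not from any real controller):

```python
# Demand-actuated signal sketch: the main road stays green until a
# vehicle actually trips the side-road loop sensor.

MIN_MAIN_GREEN = 10  # seconds the main road must hold green before yielding
SIDE_GREEN_TIME = 6  # fixed short green for the side road

def next_phase(phase, elapsed, side_sensor):
    """Return the phase for the next tick.

    phase:        "MAIN_GREEN" or "SIDE_GREEN"
    elapsed:      seconds spent in the current phase
    side_sensor:  True when a vehicle is waiting on the side road
    """
    if phase == "MAIN_GREEN":
        # No demand, no switch: an empty side road never steals the green.
        if side_sensor and elapsed >= MIN_MAIN_GREEN:
            return "SIDE_GREEN"
        return "MAIN_GREEN"
    else:
        # Serve the side road briefly, then hand the green back.
        if elapsed >= SIDE_GREEN_TIME:
            return "MAIN_GREEN"
        return "SIDE_GREEN"
```

        That is the whole "intelligence" needed for the dead-of-night case: at 3 a.m. with no side traffic, the sensor never fires and the main road never stops.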

        Even for grid optimization you shouldn’t jump into AI head first. It’s much better long term to analyze the underlying causes of grid congestion and come up with holistic solutions that address those problems, which often translate into low-tech or zero-tech solutions. I’ve seen intersections massively improved by a couple of signs, some markings and a handful of plastic poles.

        Throwing AI at problems is sort of a “spray and pray” approach that often goes about as badly as you can expect.

        • jeffhykin@lemm.ee · 2 months ago

          (I know I’m two months late)

          To back up what you’re saying, I work with ML, and the guy next to me does ML for traffic signal controllers. He basically established the benchmark for traffic signal simulators for reinforcement learning.

          Nothing works. All of the cutting-edge reinforcement learning algorithms, all the existing publications, some of which train for months, perform worse than “fixed policy” controllers. The issue isn’t the brains of the system, it’s the fact that stoplights are fricken blind to what is happening.
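          For context, the “fixed policy” baseline those RL agents lose to is nothing more than a hard-coded timetable that ignores traffic entirely. A hypothetical sketch (the phase names and durations are made up for illustration):

```python
# Fixed-policy controller: cycle through phases on a hard-coded
# timetable, repeating forever, with no sensing at all.

CYCLE = [("NS_GREEN", 30), ("EW_GREEN", 25)]  # (phase, duration in seconds)

def fixed_policy(t):
    """Map elapsed time t (seconds) to the active phase."""
    period = sum(duration for _, duration in CYCLE)
    t = t % period  # the timetable simply repeats
    for phase, duration in CYCLE:
        if t < duration:
            return phase
        t -= duration
```

          Beating a controller this dumb should be trivial, which makes it striking that sophisticated learned policies still lose to it when they can’t actually see the traffic.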

  • uis@lemm.ee · 4 months ago

    CEOs (dumbasses who are constantly wrong): rush to replace everyone with AI before everyone replaces them with AI.

  • schnurrito@discuss.tchncs.de · 4 months ago

    LLMs aren’t virtual dumbasses who are constantly wrong, they are bullshit generators. They are sometimes right, sometimes wrong, but don’t really care either way and will say wrong things just as confidently as right things.

      • Swedneck@discuss.tchncs.de · 4 months ago

        It’s like how techbros constantly want to reinvent transportation: if they assigned an AI to give them an answer, it would just say “build more railways and trains” and they’d throw it out a window in anger.

        • AggressivelyPassive@feddit.de · 4 months ago

          They re-invent everything for no reason. Every mundane device has been “re-invented” using big data, blockchain, VR, now AI and in a few years probably quantum-something.

          The entire tech world has fundamentally run out of ideas. The usual pipeline is basic research > applied research > products, but since money only gets thrown at products, there’s nothing left to fund research. So the tech bros have to reiterate on the same concepts again and again.

    • abracaDavid@lemmy.today · 4 months ago

      Oh come on. It’s called AI, as in artificial intelligence. None of these companies have ever called it a text generator, even though that’s what it is.

  • StaySquared@lemmy.world · 4 months ago

    Lots of companies jumping the gun… laying off so many people only to realize they’re going to need those people back. AI is still in its infancy, using it to replace an actual human is a dumb dumb move.

    • Match!!@pawb.social · 4 months ago

      Generative AI is amazing for some niche tasks that are not what it’s being used for

        • Waraugh@lemmy.dbzer0.com · 4 months ago

          Creating drafts for white papers my boss asks for every week about stupid shit on his mind. It used to take a couple of days; now it’s done in one day at most, and I spend my Friday doing chores, checking my email and chat every once in a while, until I send him the completed version before logging out for the weekend.

    • darthelmet@lemmy.world · 4 months ago

      Yeah. It’s more like:

      Researchers: “Look at our child crawl! This is a big milestone. We can’t wait to see what he’ll do in the future.”

      CEOs: Give that baby a job!

      AI stuff was so cool to learn about in school, but it was also really clear how much further we had to go. I’m kind of worried. We already had one period of AI overhype lead to a crash in research funding for decades. I really hope this bubble doesn’t do the same thing.

      • ed_cock@feddit.de · 4 months ago

        The sheer waste of energy and mass production of garbage clogging up search results alone is enough to make me hope the bubble will pop reeeeal soon. Sucks for research but honestly the bad far outweighs the good right now, it has to die.

    • eee@lemm.ee · 4 months ago

      It CAN BE amazing in certain situations. CEO tomfoolery is what’s making generative AI a joke to the average user.

      • ChaoticNeutralCzech@feddit.de · 4 months ago

        Yes. It’s not wrong 100% of the time, otherwise you could make a fortune by asking it for investment advice and then doing the opposite.

        What happened is like the current robot craze: they made the technology resemble humans, which drives attention and money. Specialized “robots” can indeed perform tedious tasks (CNC, pick-and-place machines) or work safely with heavier objects (construction equipment). Similarly, we can use AI to identify data forgery or fold proteins. If we try to make either human-like, they will appear to do a wide variety of tasks (which drives sales and investment) but not be great at any of them. You wouldn’t buy a humanoid robot just to reuse your existing shovel if excavators are cheaper. (And no, I don’t think a humanoid robot with digging capabilities will ever be cheaper than a standard excavator.)

        • Match!!@pawb.social · 4 months ago

          It’s actually really frustrating that LLMs have gotten all the funding when we’re finally at the point where we can build reasonably priced purpose-built AI, and instead the CEOs want to push trashbag LLMs on everything.

          • ChaoticNeutralCzech@feddit.de · 4 months ago

            Well, a conversational AI with sub-human abilities still has some uses. Notably scamming people en masse so human email scammers will be put out of their jobs /s