https://www.reuters.com/technology/artificial-intelligence/openai-co-founder-sutskevers-new-safety-focused-ai-startup-ssi-raises-1-billion-2024-09-04/

http://web.archive.org/web/20240904174555/https://ssi.inc/

I have nothing witty or insightful to say, but figured this probably deserved a post. I flipped a coin between sneerclub and techtakes.

They aren’t interested in anything besides “superintelligence,” which strikes me as an optimistic business strategy. If you are “cracked,” you can join them:

We are assembling a lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else.

  • imadabouzu@awful.systems · 4 months ago

    I don’t get it. If scaling is all you need, what does a “cracked team” of 5 mean in the end? Nothing?

    What’s the difference between super intelligence being scaling, and super intelligence being whatever happens? Can someone explain to me the difference between what is and what SUPER is? When someone gives me the definition of super intelligence as “the power to make anything happen,” I always beg, again, “and how is that different, precisely, from not that?”

    The whole project is tautological.

    • Soyweiser@awful.systems · 4 months ago

      I’m just amused that their scaling program doesn’t scale properly, due to the hungry hungry AI needing more and more data.

    • conciselyverbose@sh.itjust.works · 4 months ago

      Superintelligence is an AI meaningfully beyond human capability.

      It pretty obviously can’t be achieved by brute forcing something already way past diminishing returns, though.

      • imadabouzu@awful.systems · 4 months ago

        I’m actually not convinced that AI meaningfully beyond human capability makes any sense, either. The most likely thing is that after stopping the imitation game, an AI developed further would just… have different goals than us. Heck, it might not even look intelligent at all to half of human observers.

        For instance, does the Sun count as a super intelligence? It has far more capability than any human, or humanity as a whole, on the current time scale.