• GBU_28@lemm.ee
    1 year ago

    Off the shelf models do this, yes.

    Sophisticated locally trained models on expensive private hardware are already dunking on publicly available versions. The problem of hallucination is generally resolved in those contexts.

    • amki@feddit.de
      1 year ago

      Sure, but until I see such a thing I choose not to believe in fairy tales.

      Decompiling arbitrary-architecture machine code is quite a few levels above everything I’ve seen so far, which is generally pretty basic pattern recognition paired with statistics and reinforcement training.

      I’d argue decompiling arbitrary machine code into either another machine code or legible higher-level code is in a whole other league beyond what AI has proven to be capable of.

      Especially because a tool like this being 90% accurate is useless.
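
      To see why 90% isn’t good enough here, a quick back-of-the-envelope sketch (my own illustration, not from the thread): if each decompiled instruction is independently correct with probability 0.9, the odds that a whole function comes out fully correct collapse fast as the function grows. The independence assumption is a simplification, but it shows the compounding problem.

      ```python
      # Hypothetical illustration: per-instruction accuracy of 0.9,
      # assumed independent across instructions (a simplification).
      p_per_instruction = 0.9

      for n in (10, 100, 1000):
          # Probability that all n instructions are decompiled correctly.
          p_all_correct = p_per_instruction ** n
          print(f"{n:>5} instructions: P(all correct) = {p_all_correct:.2e}")
      ```

      Already at 100 instructions the chance of a fully correct result is on the order of 10⁻⁵, and a single wrong instruction can change the program’s behavior, so the output still needs a human to verify everything.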