• TiKa444@feddit.de
    link
    fedilink
    English
    arrow-up
    0
    ·
    11 months ago

    A little bit offside.

    Today I tried to host a large language model locally on my windows PC. It worked surprisingly successfull (I’m unsing LMStudio, it’s really easy, it even download the models for you). The most models i tried out worked really good (of cause it isn’t gpt-4 but much better than I thought), but in the end I discuss 30 minutes with one of the models, that it runs local and can’t do the work in the background at a server that is always online. It tried to suggest me, that I should trust it, and it would generate a Dropbox when it is finish.

    Of cause this is probably caused by the adaption of the model from a model that is doing a similiar service (I guess), but it was a funny conversation.

    And if I want a infinite repetition of a single work, only my PC-Hardware will prevent me from that and no dumb service agreement.

    • misophist@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      11 months ago

      And if I want a infinite repetition of a single work, only my PC-Hardware will prevent me from that and no dumb service agreement.

      That is entirely not the point. The issue isn’t the infinitely repeated word. The issue is that requesting an infinitely repeated word has been found to semi-reliably cause LLM hallucinations that devolve into revealing training data. In short, it is an unintended exploit and until they have it reliably patched, they are making it against their TOS to try to exploit their systems.

      • TiKa444@feddit.de
        link
        fedilink
        English
        arrow-up
        0
        ·
        11 months ago

        Of cause you’re right. I tried to take it with humor. As I said. A little bit off topic.