• Lucy :3 · 12 hours ago

    If you have a decent GPU or CPU, you can just set up ollama with ollama-cuda/ollama-rocm and run llama3.1 or llama3.1-uncensored.
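    As a rough sketch of the setup described above (assuming an Arch-style system, since ollama-cuda/ollama-rocm are the package names mentioned; adjust the install step for your distro):

    ```shell
    # Install ollama with NVIDIA GPU support (use ollama-rocm for AMD GPUs,
    # or plain ollama for CPU-only inference)
    sudo pacman -S ollama-cuda

    # Start the local ollama server
    sudo systemctl enable --now ollama

    # Pulls the model on first use, then drops into an interactive chat
    ollama run llama3.1
    ```

    The model download is several gigabytes, and inference speed depends heavily on available VRAM; smaller quantized variants exist if the full model doesn't fit.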

      • Lucy :3 · 11 hours ago

        I bet even my Pi Zero W could run such a model*

        * with 1 character per hour or so

        • 1985MustangCobra@lemmy.ca · 11 hours ago

          Interesting, well it’s something to look into, but I’d like a place to communicate with like-minded people.