Very effective at translating between different (human) languages. Best if you can find a native speaker to double-check the output. Failing that, reverse translate with a couple of different models to verify the meaning is preserved. Even this sometimes fails though – e.g. two words with similar but subtly different definitions might trip you up. For instance, I’m told “the west” refers to different regions in English and Japanese, but translating and reverse translating didn’t reveal this error.
It’s helping me understand how I think so that I can create frameworks for learning, problem solving, decision making etc. I’m neurodivergent.
I thought they would reject it, but my band friends and their peers all like to use AI to brainstorm and draft songs, then go from there to make their own songs.
I thought that was interesting. I’ve asked them a few times about the lazy way of using AI to just churn out slop, and yeah, they’re against that.
I don’t have any close friends who are drawing artists, though I know a few through mutual hobbies on Discord. They don’t seem to be using AI as a tool, from what I can tell.
My dad and his circle are definitely churning out slop, though he says it’s mostly for in-group joking and shooting the shit, so I guess that’s fine.
Me personally, I’m still hesitant to use it. I’m an “everything” consultant who hates his place in the small IT company but is riding his BPD II wave too much to change it. Everyone around me is fine using AI to help analyze documents and whatnot to help them work. I can see how these tools are useful once you know how to ask the thing, but I just don’t want to.
As a DJ with ADHD, it’s great for helping me decide what to play next when I forget where I was going with the set, and mix myself into a corner. That said, it’s not very good at suggesting songs with a compatible BPM and key, but it works well enough for finding tunes with a similar vibe to what I’m already playing. So I just go down the list until I find a tune that can be mixed in.
As for the usual boring stuff, I’m learning how to code by having it write programs for me, and then analyzing the code and trying to figure out how it works. I’m learning a lot more than I would from studying a textbook.
I also used to use it for therapy, but not so much anymore since I figured out that it will just tell you what you want to hear if you challenge it enough. Not really useful for personal growth.
One thing it’s useful for is learning how stuff works, using metaphors comparing it to subjects I already understand.
I’ve used them both a good bit for D&D/TTRPG campaigns. The image generation has been great for making NPC portraits and custom magic item images. LLMs have been pretty handy for practicing my DM-ing and improv, by asking one to act like a player and reacting to what it decides to do. And sometimes in reverse, by asking it to pitch interesting ideas for characters/dungeons/quest lines. I rarely took those in their entirety, but there would often be bits and pieces I’d use.
Good for gaining an outside perspective/insight on an argument, discussion, or other form of communication between people. I fed my friend’s and their ex’s text conversation to it (with permission), and it was able to point out emotional manipulation in the text when asked neutrally about it:
Please analyze this conversation between A and B and tell me what you think of their motivations and character in this conversation. Is there gaslighting? Emotional manipulation? Signs of an abusive communication style? Etc. Or is this an example of a healthy communication?
It is essential not to ask a leading question that frames A or B in particular as the bad or the good guy. For best results, ask neutral questions.
It would have been quite useful for my friend to have this when they were in that relationship. It may be able to spot abusive behaviors from your partner before you and your rose-colored glasses can.
Obvious disclaimers about believing anything it says are obvious. But having an outside perspective analyze your own behavior is useful.
Great for giving incantations for ffmpeg, imagemagick, and other power tools.
“Use ffmpeg to get a thumbnail of the fifth second of a video.”
Anything where the syntax is complicated, lots of half-baked tutorials exist for the AI to read, and you can immediately confirm whether it worked. It does hallucinate flags, but it fixes them if you say “There is no --compress flag” etc.
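For reference, here’s roughly what a correct answer to that thumbnail prompt looks like, wrapped in Python so it’s easy to drop into a script (a sketch: the filenames are made up, and you’d still confirm the output yourself):

```python
# Equivalent shell one-liner: ffmpeg -ss 5 -i input.mp4 -frames:v 1 thumb.jpg
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-ss", "5",          # seek to the 5-second mark before decoding
        "-i", "input.mp4",   # hypothetical input video
        "-frames:v", "1",    # grab a single frame
        "thumb.jpg",         # write it out as a JPEG thumbnail
    ],
    check=True,  # raise if ffmpeg exits with an error
)
```

The nice part is exactly what the comment says: you run it once and instantly know whether the incantation was real or hallucinated.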
This is the way.
With mixed results, I’ve used it for summarising the plots of books when I’m about to go back into a book series I haven’t read for a while.
Legitimately, no. I tried to use it to write code and the code it wrote was dog shit. I tried to use it to write an article and the article it wrote was dog shit. I tried to use it to generate a logo and the logo it generated was both dog shit and a raster graphic, so I wouldn’t even have been able to use it.
It’s good at answering some simple things, but sometimes even gets that wrong. It’s like an extremely confident but undeniably stupid friend.
Oh, actually it did do something right. I asked it to help flesh out an idea and turn it into an outline, and it was pretty good at that. So I guess for going from idea to outline and maybe outline to first draft, it’s ok.
Crappy but working code has its uses. Code that might or might not work also has its uses. You should primarily use LLMs in situations where you can accept a high error rate. For instance, in situations where output is quick to validate but would take a long time to produce by hand.
The output is only as good as the model being used. If you want to write code, then use a model designed for code. Over the weekend I wrote an Android app to connect my phone to my Ollama instance from outside my network. I’ve never done any coding beyond scripts, and the AI walked me through setting up the IDE and a git repository before we even got started on the code. Three hours after I had the idea, I had the app installed and working on my phone.
I didn’t say the code didn’t work. I said it was dog shit. Dog shit code can still work, but it will have problems. What it produced looks like an intern wrote it. Nothing against interns, they’re just not gonna be able to write production quality code.
It’s also really unsettling to ask it about my own libraries and have it answer questions about them. It was trained on my code, and I just feel disgusted about that. Like, whatever, they’re not breaking the rules of the license, but it’s still disconcerting to know that they could plagiarize a bunch of my code if someone asked the right prompt.
(And for anyone thinking it, yes, I see the joke about how it was my bad code that it trained on. Funny enough, some of the code I know was in its training data is code I wrote when I was 19, and yeah, it is bad code.)
One day I’m going to get around to hooking a local smart speaker to Home Assistant with ollama running locally on my server. Ideally, I’ll train the speech to text on Majel Barrett’s voice and be able to talk to my house like the computer in Star Trek.
I’m piping my in-house camera feed to Gemini. Funny how it comments on our daily lives. I should turn the best of it into a book or something.
Another one:
Do you take any precautions to protect your privacy from Google or are you just like, eh, whatever?
yeah that looks creepy as fuck
AI’m on Observation Duty
Absolutely “whatever.” I became quite cynical after working for a while in telco / intelligence / data and AI. The small addition of a few pics just adds a few contextual clues to what they already have.
“Night Hall Motion Detected: you left the broom out again, it probably slid a little against the wall. I bemoan my existence. Is this what life is about? Reporting on broom movements?”
Yeah I have a full collection of super sarcastic shit like that.
ChatGPT kind of sucks but is really fast. DeepSeek takes a second but gives really good or hilarious answers. It’s actually good at humor in English and Chinese. Love that it’s actually FOSS too
LLMs are pretty good at reverse dictionary lookup. If I’m struggling to remember a particular word, I can describe the term very loosely and usually get exactly what I’m looking for. Which makes sense, given how they work under the hood.
I’ve also occasionally used them for study assistance, like creating mnemonics. I always hated the old mnemonic I learned in school for the OSI model because it had absolutely nothing to do with computers or communication; it was some arbitrary mnemonic about pizza. Was able to make an entirely new mnemonic actually related to the subject matter which makes it way easier to remember: “Precise Data Navigation Takes Some Planning Ahead”. Pretty handy.
On this topic, it’s also good at finding an acronym expansion that spells out a specific thing you want. Like if you want your software’s name to spell out your name or some fun word but still have an expansion related to what it does, AI can be useful.
I bought a cheap barcode scanner, scanned all my books and physical games, and put them into a spreadsheet. I gave the spreadsheet to ChatGPT and asked it to populate the titles, ratings, and genres. That lets me keep everything in storage and still find what I need quickly.
Before it was hot, I used ESRGAN and some other stuff for restoring old TV. There was a niche community that finetuned models just to, say, restore classic SpongeBob or DBZ or whatever they were into.
These days, I am less into media, but keep Qwen3 32B loaded on my desktop… pretty much all the time? For brainstorming, basic questions, making scripts, an agent to search the internet for me, a ‘dumb’ writing editor, whatever. It’s a part of my “degoogling” effort, and I find myself using it way more often since it’s A: totally free/unlimited, B: private and offline on an open source stack, and C: doesn’t support Big Tech at all. It’s kinda amazing how “logical” a 14GB file can be these days, and I can bounce really personal/sensitive ideas off it that I would hardly trust anyone with.
…I’ve pondered getting back into video restoration, with all the shiny locally runnable tools we have now.
Do you have any recommendations for a local Free Software tool to fix VHS artifacts (bad tracking etc., not just blurriness) in old videos?
One that works well out of the box? Honestly, I’m not sure.
Back in the day, I’d turn to vapoursynth (or avisynth+) filters and a lot of hand editing: basically go through the trouble sections one by one and see which combination of VHS-specific correction and regeneration looks best.
These days, we have far more powerful tools. I’d probably start by training a LoRA for Wan 2B or something, then use it to straight up regenerate damaged sections with video-to-video. Then I’d write a script to detect those sections, and mix in some “traditional” vapoursynth filters.
…But this is all very manual, like python dev level with some media/ml knowledge, unfortunately. I am much less familiar with, like, a GUI that could accomplish this. Paid services out there likely offer this, but who knows how well they work.
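To make the “write a script to detect them” step a bit more concrete, here’s a rough sketch of what I mean, using plain OpenCV frame differencing. The filename and threshold are placeholders, and real tracking-glitch detection would need per-source tuning and probably something smarter than a mean-difference check:

```python
# Rough sketch: flag frames with abnormal frame-to-frame differences,
# which VHS tracking glitches and dropouts tend to produce.
import cv2

cap = cv2.VideoCapture("tape_capture.mkv")  # placeholder capture file
prev_gray = None
suspect_frames = []
idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev_gray is not None:
        # Mean absolute difference between consecutive frames.
        diff = cv2.absdiff(gray, prev_gray)
        if diff.mean() > 25:  # arbitrary threshold, tune per tape
            suspect_frames.append(idx)
    prev_gray = gray
    idx += 1

cap.release()
print(f"{len(suspect_frames)} frames flagged for repair or regeneration")
```

The flagged ranges would then be the ones you hand off to the video-to-video model or to the vapoursynth filter chain.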
Do you run this on NVIDIA or AMD hardware?
Nvidia.
Back then I had a 980 Ti. Right now I’m lucky enough to have snagged a 3090 before they shot up in price.
I would buy a 7900, or a 395 APU, if they were even reasonably affordable for the VRAM, but AMD is not pricing their stuff well…
But FYI you can fit Qwen 32B on a 16GB card with the right backend/settings.
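For one concrete (different-backend) example of what “the right backend/settings” can look like: with llama-cpp-python and an aggressive quant you can offload most or all of a ~32B model onto a 16GB card. This is a sketch, not a tested config, and the file name and numbers are placeholders:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="Qwen3-32B-Q3_K_M.gguf",  # placeholder ~3-bit GGUF quant
    n_gpu_layers=-1,  # offload every layer that fits; lower this if you OOM
    n_ctx=8192,       # a shorter context keeps the KV cache small
)

out = llm("Say hello in five words.", max_tokens=32)
print(out["choices"][0]["text"])
```

The tradeoff is quality loss from the heavy quantization and a modest context window; the exllamav2/TabbyAPI route mentioned below plays the same game with different knobs.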
How do you get it to search the internet?
The front end.
Some UIs (Like Open Web UI) have built in “agents” or extensions that can fetch and parse search results as part of the context, allowing LLMs to “research.” There are in fact some finetunes specializing in this, though these days you are probably best off with regular Qwen3.
This is sometimes called tool use.
I also (sometimes) use a custom python script (modified from another repo) for research, getting the LLM to search a bunch of stuff and work through it.
But fundamentally the LLM isn’t “searching” anything; you are just programmatically feeding it text (and maybe fetching results for the search terms it asks for).
The backend for all this is a TabbyAPI server, with 2-4 parallel slots for fast processing.
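A minimal sketch of that “feed it text” loop against an OpenAI-compatible endpoint like the one TabbyAPI exposes. The URL, port, model name, and fetched page are all assumptions; a real agent would hit a proper search API, strip the HTML, and likely need an API key header:

```python
import requests

API = "http://localhost:5000/v1/chat/completions"  # assumed local endpoint
question = "What causes VHS tracking errors?"

# 1. The "search": here just one hard-coded page fetch. The model never
#    touches the network itself; the script does.
page = requests.get("https://en.wikipedia.org/wiki/VHS", timeout=30).text[:4000]

# 2. Paste the fetched text into the prompt as context.
resp = requests.post(API, json={
    "model": "qwen3-32b",  # whatever the backend has loaded
    "messages": [
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{page}\n\nQuestion: {question}"},
    ],
}, timeout=120)

print(resp.json()["choices"][0]["message"]["content"])
```

Everything “agentic” about it lives in the script; the model just sees one big prompt per call.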