Black Mirror creator unafraid of AI because it’s “boring”::Charlie Brooker doesn’t think AI is taking his job any time soon because it only produces trash
The thing with AI is that, for now, it mostly only produces trash.
But look back five years: what were people saying about AI then? Hell, many thought the kind of art AI can make today would be impossible for it to create! …And then it suddenly could. Well, it wasn’t actually sudden, and people in the space probably saw it coming, but still.
The point is, we keep building AIs that do things we thought were impossible a few years earlier, things we said would demonstrate true intelligence if an AI could do them. And yet, every time some impressive new AI gets developed, people say it sucks, it’s boring, it’s far from good enough, and so on, while each time it creeps a little closer, replacing a few jobs here and there on the fringes. Sure, it’s not true intelligence, and it still doesn’t beat the best humans, but it beats most of them, on demand. And what happens when inevitably better AIs get created?
Maybe we’re in for another decades-long AI winter… or maybe we’re not, and plenty more AI revolutions are just around the corner. I think AI’s current capabilities are frighteningly good, and not something I expected to happen this soon. The last decade or so has seen massive progress in this area; who’s to say where the current path stops?
Nah, nah to all of it. LLMs are a parlor trick, and not a very good one. If we are ever able to make a general artificial intelligence, that’s an entirely different story. But text prediction on steroids doesn’t move the needle.
In humans, abstract thinking developed hand in hand with language. So despite their limitations, I think that at least early AGI will include an LLM in some way.
I’ve been having a lot of vague thoughts about the unconscious parts of our brains and bodies in relation to LLMs: the parts of our brains/neurons that started evolving back in simple animals as basically super-primitive ways to process visual/audio/whatever input.
Our brains do a LOT of signal processing and filtering that never reaches conscious thought, that we can’t even reach with our conscious thought if we tried, but which is necessary for our squishy body-things to take in input from our environment and turn it into something useful instead of drowning in a screeching eye-searing tangled mess of chaotic sensory input all the time.
LLMs strike me as that sort of low-level input processing: the pattern recognition and filtering. I think truly generalized AI would have to be built on pieces like this, probably a lot of them, each a way to pluck patterns out of complex but repeated input. This stuff definitely isn’t self-aware, but it could eventually end up as some sort of processing library for something else far down the line.
Now might be a good time to pick up Peter Watts’ sci-fi book Blindsight. He doesn’t exactly write about AI in it, but he does write about a creature that responds to input without being conscious the way you or I are.
This is what I meant.
I just got the EPUB, thanks. Looking forward to reading it.