There are multiple reasons:

for plenty of use cases where it's supposed to help, it just plain doesn't work (software engineering being my main use case, but it doesn't help with finding obscure songs/games either).
it's fundamentally unsafe, which matters in a lot of the places where the evangelists want to put AI.
the resource usage to train models is crazy, to the point of delaying Google's carbon-neutrality plans, for instance. It's also expected to put a significant strain on energy grids worldwide for years to come, which is the last thing we need on a burning planet.
it's being pushed by evil actors like big tech billionaires, who can't be trusted to do the right thing with the tech.
it's already proven harmful (cf. Air Canada's chatbot, or the idiots in today's other HN LLM thread saying they use it for medical advice or electrical work, among many examples).
it's overhyped, much like crypto: way too many promises, and it doesn't deliver.
My sentiment on its reliability is shared across my team, among people who have used it a bit more: it's a garbage machine.
I do fear it might train a generation of software professionals who don't know how to code, which is going to harm either them (making them unemployable) or the people they serve. But I might be overreacting, since the only person I knew who claimed to use LLMs professionally was a hack using them as a palliative for a general lack of skill and low output. Come to think of it, that's precisely the kind of person who should be cautious around LLMs, because they can't review the LLM's output accurately for dangerous hallucinations.
I do ask ChatGPT questions sometimes, but honestly pretty rarely. I use it as a complement to regular search, and nothing more, because it can’t get the basics right most of the time.