ChatGPT is hilariously incompetent… but on a serious note, I still firmly reject tools like Copilot outside demos and the like, because they drastically reduce code quality for short-term acceleration. That’s a terrible trade-off in terms of cost.
I enjoy using Copilot, but it is not made to think for you. It’s a better autocomplete, but don’t ever let it do more than a line at once.
they drastically reduce … quality for short term acceleration
Western society is built on this principle
Tell me about it…
I left my more mature company for a startup.
I feel like Tyler Durden sometimes.
How you liking it? How many years have you aged in the months working at your startup?
My hairline has started receding very rapidly. There are these fine hairs all over my desk, and every meeting, right before turning on my camera, I see the photo I took when I joined.
Doesn’t sound good at all. I’m sorry to hear that, friend. I really hope there are enough upsides there compared to working at a more mature company for you.
Biggest problem with it is that it lies with the exact same confidence it tells the truth. Or, put another way, it’s confidently incorrect as often as it is confidently correct - and there’s no way to tell the difference unless you already know the answer.
It’s kinda hilarious to me, because one of the FIRST things AI researchers did was get models to identify things and output answers together with the confidence of each potential ID, and now we’ve somehow regressed from that point.
Did we really regress from that?
I mean, giving a confidence for recognizing a certain object in a picture is relatively straightforward.
But LLMs put words together by how likely they are to belong together given your input (terribly oversimplified). The confidence behind that has no direct relation to how likely the statements are to be true. I remember an example where someone made ChatGPT say that 2+2 equals 5 because his wife said so. So ChatGPT was confident that something was right because the wife said it, simply because it thinks those words belong together.
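To make that contrast concrete, here’s a toy sketch in plain Python (made-up numbers, not from any real model): a classifier’s softmax gives a per-class confidence for one well-posed question, while an LLM-style next-token probability only measures how well a word fits the context, not whether the resulting claim is true.

```python
import math

def softmax(logits):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Classic image classifier: one question ("what is in the picture?"),
# one confidence value per class.
classes = ["cat", "dog", "toaster"]
logits = [4.1, 1.3, -2.0]          # toy numbers, not from a real model
for label, p in zip(classes, softmax(logits)):
    print(f"{label}: {p:.2%}")

# LLM-style next-token probability: the model scores continuations of
# "2 + 2 = " purely by how plausible the text is in context. If the
# context says "my wife insists the answer is 5", the token "5" can get
# a high probability; that number measures linguistic fit, not truth.
next_token_logits = {"4": 2.0, "5": 2.3, "fish": -3.0}  # toy numbers
probs = softmax(list(next_token_logits.values()))
for token, p in zip(next_token_logits, probs):
    print(f"next token '{token}': {p:.2%}")
```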
I’m still convinced that GitHub Copilot is actively violating copyleft licenses. If not in letter, then in spirit.
I predict that, within the year, AI will be doing 100% of the development work that isn’t total and utter bullshit pain-in-the-ass complexity, layered on obfuscations, composed of needlessly complex bullshit.
That’s right, within a year, AI will be doing .001% of programming tasks.
Can we just get it to attend meetings for us?
Legitimately could be a use case
“Attend this meeting for me. If anyone asks, claim that your camera and microphone aren’t working. After the meeting, condense the important information into one paragraph and email it to me.”
Here is a summary of the most important information from that meeting. Since there were two major topics, I’ve separated them into two paragraphs.
- It is a good morning today.
- Everyone is thanked for their time. Richard is looking forward to next week’s meeting.
The rest of the information was deemed irrelevant to you and your position.
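Joking aside, the summarize-and-email half is the doable part. A minimal sketch of just the summarization step, assuming the OpenAI Python SDK (pip install openai) and an API key in OPENAI_API_KEY; the model name and prompt wording are placeholders I picked, not anything from the thread:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_meeting(transcript: str) -> str:
    """Condense a meeting transcript into a short summary per major topic."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you have access to
        messages=[
            {"role": "system",
             "content": "Condense the important information from this meeting "
                        "transcript into one short paragraph per major topic."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize_meeting("Richard: Good morning everyone! ..."))
```

Whether the summary turns out any better than the two bullet points above is left as an exercise.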
Engineering is about trust. In other, generally more formalized engineering disciplines, the actual job of an engineer is to provide confidence that something works. Software engineering may end up employing fewer people because the tools are better and make people much more productive, but until everyone else trusts the computer more, the job will exist.
If the world trusts AI over engineers then the fact that you don’t have a job will be moot.
People don’t have anywhere near enough knowledge of how things work to make their choices based on trust. People aren’t getting on the subway because they trust the engineers did a good job; they’re doing it because it’s what they can afford and they need to get to work.
Similarly, people aren’t using Reddit or Adobe or choosing their car’s firmware based on trust. People choose what is affordable and convenient.
In civil engineering, public works are certified by an engineer; it’s literally them saying “if this fails, I am at fault.” The public is trusting the engineer to say it’s safe.
Yeah, people may not know that the subway is safe because of engineering practices, but if there were a major malfunction every other day, potentially involving injuries or loss of life, they would know, and I’m sure they would think twice about using it.
“look i registered my own domain name all by myself!”
the domain: “localhost”
I’m an elite hacker and I grabbed your IP address from this post. It’s 192.168.0.1 just so you know I’m not bluffing.
Dude, you need to use the broadcast address.
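For anyone who missed the joke: 192.168.0.1 is a private (RFC 1918) address, typically your home router, so it reveals nothing about the poster. Python’s stdlib ipaddress module shows that, plus the broadcast address being alluded to (the /24 netmask is my assumption):

```python
import ipaddress

# 192.168.0.1 with an assumed /24 netmask
iface = ipaddress.ip_interface("192.168.0.1/24")

print(iface.ip.is_private)              # True: not routable on the public internet
print(iface.network)                    # 192.168.0.0/24
print(iface.network.broadcast_address)  # 192.168.0.255 for this subnet
```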
These morons are probably going to train AI wrong, so job security for the next 100 years.
The only thing ChatGPT etc. is useful for, in every language, is getting ideas on how to solve a problem in an area you don’t know anything about.
ChatGPT, how can I do xy in C++?
“You can use the library ab, like …”

That’s where I usually search for the library and check the docs to see whether it’s actually possible to do it this way. And often, it’s not.
On a more serious note, ChatGPT, ironically, does suck at webdev frontend. The one task that pretty much everyone agrees could be done by a monkey (given enough time) is the one it doesn’t understand at all.
I don’t think it’s very useful at generating good code or answering anything about most libraries, but I’ve found it to be helpful answering specific JS/TS questions.
The MDN version is pretty great too. I’ve never done a Firefox extension before, and MDN Plus was surprisingly helpful at explaining the limitations on mobile. Only downside is that it’s limited to 5 free prompts/day.
ChatGPT is also great if you have problems with Linux. It’s my number one troubleshooting tool.