- cross-posted to:
- hackernews@lemmy.bestiver.se
Earlier this year, WIRED asked AI detection startup Pangram Labs to analyze Medium. It took a sampling of 274,466 recent posts over a six-week period and estimated that over 47 percent were likely AI-generated. “This is a couple orders of magnitude more than what I see on the rest of the internet,” says Pangram CEO Max Spero. (The company’s analysis of one day of global news sites this summer found 7 percent as likely AI-generated.)
The best part about this is that new models will be trained on the garbage from old models, and eventually LLMs will just collapse into garbage factories. We’ll need filter mechanisms, just like in a Neal Stephenson book.
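A minimal sketch of what that kind of filtering could look like, assuming you have some detector that scores text for how machine-like it is. The `ai_likelihood` callable and the 0.5 cutoff here are purely illustrative stand-ins, not any real vendor's API:

```python
# Sketch: filter a training corpus by an AI-likelihood score before retraining.
# `ai_likelihood` is a stand-in for any detector returning a score in [0, 1];
# the 0.5 threshold is arbitrary and would need tuning against labeled data.

from typing import Callable, Iterable


def filter_corpus(
    documents: Iterable[str],
    ai_likelihood: Callable[[str], float],
    threshold: float = 0.5,
) -> list[str]:
    """Keep only documents the detector thinks are probably human-written."""
    return [doc for doc in documents if ai_likelihood(doc) < threshold]


if __name__ == "__main__":
    # Toy detector: pretend longer, blander passages look more machine-generated.
    def toy_detector(text: str) -> float:
        return min(1.0, len(text.split()) / 100)

    corpus = [
        "short human note",
        "a very long suspiciously uniform paragraph " * 10,
    ]
    kept = filter_corpus(corpus, toy_detector)
    print(f"kept {len(kept)} of {len(corpus)} documents")
```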
People are learning and writing code with the help of AI. Let that sink in for a moment.
I’m in university and I’m hearing this more and more. I keep trying to steer folks away from it, but I also understand the appeal: an LLM can analyze your code in seconds and passes no judgment.
It’s not a good tool to rely on, but the further I get into my degree, the more people I see relying on it.
The true final exam would be writing code on an air-gapped system.
I’m going into my midterm in 30 minutes, where we will be desecrating the corpses of trees.