Article from The Atlantic, archive link: https://archive.ph/Vqjpr
Some important quotes:
The tensions boiled over at the top. As Altman and OpenAI President Greg Brockman encouraged more commercialization, the company’s chief scientist, Ilya Sutskever, grew more concerned about whether OpenAI was upholding the governing nonprofit’s mission to create beneficial AGI.
The release of GPT-4 also frustrated the alignment team, which was focused on further-upstream AI-safety challenges, such as developing various techniques to get the model to follow user instructions and prevent it from spewing toxic speech or “hallucinating”—confidently presenting misinformation as fact. Many members of the team, including a growing contingent fearful of the existential risk of more-advanced AI models, felt uncomfortable with how quickly GPT-4 had been launched and integrated widely into other products. They believed that the AI safety work they had done was insufficient.
Employees from an already small trust-and-safety staff were reassigned from other abuse areas to focus on this issue. Under the increasing strain, some employees struggled with mental-health issues. Communication was poor. Co-workers would find out that colleagues had been fired only after noticing them disappear on Slack.
Summary: Tech bros want money, tech bros want speed, tech bros want products.
Scientists want safety, researchers want to research…
They believed that the AI safety work they had done was insufficient.
Considering that every new model seems to be getting worse for anything but highly sanitized corporate usage, I’m not sure that I want more AI safety…
For my usage, I use GPT-3.5 Turbo pinned to the March checkpoint, because I can’t get the current one to stop moralizing about bullshit instead of doing what it’s supposed to (I run two Twitch bots with it). GPT-4 used to be okay there, but the new preview is starting to show the same issue, with more frequent “I can’t do that, Dave”-style answers. It’s still mostly circumventable with enough prompt massaging, but it’s getting harder.
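For anyone wondering what “pinned to the March checkpoint” means in practice: the rolling `gpt-3.5-turbo` alias silently moves to newer checkpoints, while the dated `gpt-3.5-turbo-0301` snapshot stays frozen. A minimal sketch, assuming the pre-1.0 `openai` Python library that was current at the time; the Twitch-bot system prompt is made up for illustration:

```python
import openai

openai.api_key = "sk-..."  # your API key

response = openai.ChatCompletion.create(
    # "gpt-3.5-turbo-0301" is the snapshot frozen on March 1, 2023.
    # The bare "gpt-3.5-turbo" alias is a moving target that gets
    # swapped to newer checkpoints, which is where the behavior drift
    # complained about above comes from.
    model="gpt-3.5-turbo-0301",
    messages=[
        # Hypothetical bot persona, not from the original post.
        {"role": "system", "content": "You are a snarky Twitch chat bot."},
        {"role": "user", "content": "Roast the streamer's aim."},
    ],
)
print(response["choices"][0]["message"]["content"])
```

The trade-off is that dated snapshots get deprecated on a schedule, so pinning only buys time until the old checkpoint is retired.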
If trajectories hold, in a year I don’t see anything but self-hosted models being usable for anything beyond corporate glitz, so fuck all that AI safety.
On top of all of this, those efforts to tame and control outputs from the developer side could be abused simply to appease investors or totalitarian markets. So we might see a Disneyfication like we’re seeing on other platforms such as YouTube, whose horrendous filters have spawned ridiculous euphemisms like “unalived”. And just imagine the level of censorship we’d see if they ever try to get into the Chinese market, because clearly the ‘non’ in non-profit is becoming more and more silent.