the AI what now?
Here’s what they write:
AI alignment via the power of videogames:
“We’re starting with a singular focus on video game development, because we think that will offer the best feedback loop for testing new AI models. Over the next decade or so, we expect an increasing number of researchers — both inside and outside our company — will transition to developing safety and alignment solutions for AI technology, and through our platform and products, we’re aiming to provide them with a rich and interesting testbed for increasingly challenging experiments and benchmarks.”
Healthcare pivot:
“Originally, when Encultured was founded as a gaming-oriented AI research company, our immediate goal was to make research progress on human–AI interaction that would ultimately benefit humanity well beyond the entertainment sector. Since then, we’ve considered healthcare as a likely next step for us after gaming.”
Couldn’t find any details beyond that. Perhaps one of them read way too much Friendship is Optimal, but they didn’t actually have any gaming chops, so they never got anywhere.
EDIT: More details here: https://www.lesswrong.com/posts/ALkH4o53ofm862vxc/announcing-encultured-ai-building-a-video-game
Wow, they actually succeeded at their plans. I’m impressed. “we expect to be much more careful than other companies to ensure that recursively self-improving intelligent agents don’t form within our game and break out onto the internet!” Well done!
It’s real, all of it. John Titor the time traveler? He’s real. AI gods? We could build them.
John Titor came back in time to stop the creation of a superintelligence. He does this by secretly founding, co-founding, or co-co-founding various Silicon Valley startups that don’t actually do anything, but that sound good to venture capitalists with too much money.
The money is secretly funneled to good causes like food banks, adopting puppies, and maintaining the natural habitat of burrowing owls, thus averting the end of the world. Encultured AI is part of this plan. They do nothing -- for the good of the earth.
Now that is a name I have not heard in a long long time.
@V0ldek @sailor_sega_saturn remember all those awful NFT games? I imagine it’s like that, but stupider