Moore’s law is the observation that the number of transistors in an integrated circuit (IC) doubles about every two years.
Is there anything similar for the sophistication of AI, or AGI in particular?
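For intuition, a two-year doubling period is just exponential growth, N(t) = N0 · 2^(t/2) with t in years. A minimal Python sketch of how quickly that compounds (the starting count of ~2,300 transistors, roughly the Intel 4004 of 1971, is an illustrative assumption, as is the `transistors` helper name):

```python
# Sketch of the doubling implied by Moore's law: N(t) = N0 * 2**(t / 2),
# where t is elapsed time in years and the doubling period is 2 years.
# N0 = 2300 (approx. the Intel 4004) is an assumed starting point.

def transistors(years_elapsed: float, n0: float = 2300.0) -> float:
    """Projected transistor count after `years_elapsed` years."""
    return n0 * 2 ** (years_elapsed / 2)

for year in (0, 10, 20, 50):
    print(f"after {year:2d} years: ~{transistors(year):,.0f} transistors")
```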
That is a pretty wild assumption. There’s absolutely no reason why a larger model wouldn’t produce drastically better results. Maybe not next month, maybe not with this architecture, but it’s almost certain that models will keep growing.
This has hard “640K ought to be enough for anybody” vibes.
What drastically better results are you thinking of?
Actual understanding of the prompts, for example? LLMs are just text generators; they have no concept of what’s behind the words.
Thing is, you seem to be completely uncreative, or rather you deny the designers and developers any creativity, if you just assume “now we’re done.” Would you have thought the same about Siri ten years ago? “Well, it understands that I’m planning a meeting; AI is done.”