Stephen Wolfram explores minimal models and their visualizations, aiming to explain the underlying workings of neural nets and, ultimately, machine learning.
I think the main issue is that ML isn’t useless just because we don’t understand how it works. It clearly does work and we can use that. It’s hardly unique in that way either. There are a gazillion medicines that work but we don’t really know how. We’re not going to abandon them just because we don’t understand them.
And it’s not like people aren’t trying to understand how they work; it’s just really difficult.
The calculator analogy also makes no sense. You can’t build a working speech recognition engine by manually entering equations for phonemes or whatever. That’s actually not a million miles away from how speech recognition worked in the 90s and 2000s… or I should say “didn’t work”.
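For what it's worth, the "manually entering equations for phonemes" approach really was tried, roughly in the spirit of the toy sketch below: classify a vowel by comparing measured formant frequencies against hand-entered reference values. (The numbers are rough textbook averages for American English vowels, assumed here purely for illustration; no real system's code.) It works on clean, average inputs and falls apart the moment speakers, accents, or coarticulation vary, which is more or less why those systems "didn't work":

    # Toy "hand-entered equations" phoneme classifier (Python).
    # Reference formants (F1, F2) in Hz -- rough averages, for illustration only.
    VOWEL_FORMANTS = {
        "i (beet)": (270, 2290),
        "ae (bat)": (660, 1720),
        "a (father)": (730, 1090),
        "u (boot)": (300, 870),
    }

    def classify_vowel(f1: float, f2: float) -> str:
        """Pick the vowel whose reference formants are nearest (squared Euclidean)."""
        return min(
            VOWEL_FORMANTS,
            key=lambda v: (VOWEL_FORMANTS[v][0] - f1) ** 2
                          + (VOWEL_FORMANTS[v][1] - f2) ** 2,
        )

    print(classify_vowel(280, 2250))  # "i (beet)" -- fine on near-average input
    print(classify_vowel(500, 1500))  # between categories: the rule guesses anyway

The hard part isn't writing rules like this; it's that real speech never sits neatly at the reference points, and no amount of manual equation-entering covers the variation. That's the gap the learned models closed.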
Yeah fair enough. That was a bit mean, sorry.