I’ve been researching key findings and data on AI for a few years. With that context, this post describes one of the more important findings I’ve seen; yet it was written in such a dry, scientific fashion that most people won’t stop to process the details.
“The new paper demonstrates how ultra-fast learning rates work surprisingly well for large and small networks alike.
So ‘the disadvantage of the complicated brain’s learning scheme is actually an advantage,’ the researchers say.”
Here’s a related post…
How Google DeepMind is learning like a child: DeepMind uses videos to teach itself about the world
“Just as DeepMind taught an AI to interpret its surroundings via its Symbol-Concept Association Network, the team behind this DeepMind project is following a similar path.
This method of learning closely mirrors how humans think and learn to understand the world around them.”