“The only reason a small company like you.com can compete with a large company like Google is because of the progress we’ve seen in AI. In particular, when it comes to so-called unsupervised and transfer learning. The idea here is that you can train very large neural networks on unsupervised text – basically all of Wikipedia, Common Crawl, and as much web text as you can find, while keeping in mind that not everything on the web is great in terms of training AI.”
- “Three emerging areas within AI that are poised to redefine the field—and society—in the years ahead.
- Unsupervised Learning – An approach to AI in which algorithms learn from data without human-provided labels or guidance.
- Federated Learning – Rather than requiring one unified dataset to train a model, federated learning leaves the data where it is, distributed across numerous devices and servers on the edge. Instead, many versions of the model are sent out—one to each device with training data—and trained locally on each subset of data.
- Transformers – Transformers’ great innovation is to make language processing parallelized: all the tokens in a given body of text are analyzed at the same time rather than in sequence.”
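The federated learning bullet above can be made concrete with a minimal sketch of the averaging idea: each "device" trains its own copy of the model on private data, and only the resulting weights are pooled. The helper names (`local_update`, `federated_average`) and the toy linear model are illustrative assumptions, not code from the article.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: plain least-squares gradient steps."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(global_w, client_data):
    """One round of averaging: train a copy on each client, pool the results."""
    updates = [local_update(global_w, X, y) for X, y in client_data]
    sizes = np.array([len(y) for _, y in client_data], dtype=float)
    # Weight each client's model by its share of the total data.
    return np.average(updates, axis=0, weights=sizes)

# Toy setup: three "devices", each holding a private slice of data drawn
# from the same underlying linear relationship y = 2*x0 + 3*x1.
rng = np.random.default_rng(0)
true_w = np.array([2.0, 3.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(30):
    w = federated_average(w, clients)
```

The raw data never leaves the clients; only weight vectors travel, which is the point of the technique.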
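The transformer bullet's claim about parallelism can likewise be sketched: in self-attention, every token's output is computed from all tokens in one batched matrix product, with no step-by-step recurrence. This is a minimal single-head sketch with assumed toy dimensions, not a full transformer.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention: all tokens are analyzed at once.

    Each row of the output mixes information from every token via one
    (tokens x tokens) score matrix -- there is no sequential loop over
    positions, which is what makes the computation parallelizable.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # (tokens, tokens)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
d = 8
X = rng.normal(size=(5, d))                  # 5 token embeddings, one batch
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)          # shape (5, 8), computed at once
```

Contrast this with a recurrent network, which would have to process the five tokens one after another.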
- “Nobody tells the baby that objects are supposed to fall,” said Yann LeCun, the chief AI scientist at Facebook and a professor at NYU…, “a lot of what they learn about the world is through observation.”
- While algorithms based on supervised and reinforcement learning are taught to achieve an objective through human input, unsupervised ones extract patterns in data entirely on their own.
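The pattern-extraction point above can be illustrated with the classic unsupervised example, k-means clustering: the algorithm is never told which group any point belongs to, yet it recovers the groups from the data's own structure. The two-blob toy data below is an assumption for illustration.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: clusters emerge from the data alone, no labels."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign every point to its nearest center...
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # ...then move each center to the mean of its assigned points.
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

# Two well-separated blobs; the algorithm discovers them on its own.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)),
               rng.normal(5, 0.5, (50, 2))])
labels, centers = kmeans(X, k=2)
```

A supervised learner would need the 100 true group labels up front; here the grouping is extracted entirely from the geometry of the data.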
- “Everything we learn as humans—almost everything—is learned through self-supervised learning.”
- Ultimately, unsupervised learning will help machines develop a model of the world that can then predict future states of the world, he said. It’s a lofty ambition that has so far eluded AI research but would open up an entirely new set of capabilities. LeCun is confident: “The next revolution of AI will not be supervised.”