From a broad review of AI market dynamics:
"Yann LeCun, a deep-learning pioneer and the current head of Facebook’s AI research wing, agrees with many of the new critiques of the field. He acknowledges that it requires too much training data, that it can’t reason, that it doesn’t have common sense. “I’ve been basically saying this over and over again for the past four years,” he reminds me. But he remains steadfast that deep learning, properly crafted, can provide the answer. He disagrees with the Chomskyite vision of human intelligence. He thinks human brains develop the ability to reason solely through interaction, not built-in rules. “If you think about how animals and babies learn, there’s a lot of things that are learned in the first few minutes, hours, days of life that seem to be done so fast that it looks like they are hardwired,” he notes. “But in fact they don’t need to be hardwired, because they can be learned so quickly.” In this view, to figure out the physics of the world, a baby just moves its head around, data-crunches the incoming imagery, and concludes that, hey, depth of field is a thing.How to Teach Artificial Intelligence Some Common Sense | Wired
Still, LeCun admits it’s not yet clear which routes will help deep learning get past its humps. It might be “adversarial” neural nets, a relatively new technique in which one neural net tries to fool another neural net with fake data—forcing the second one to develop extremely subtle internal representations of pictures, sounds, and other inputs. The advantage here is that you don’t have the “data hungriness” problem. You don’t need to collect millions of data points on which to train the neural nets, because they’re learning by studying each other. (Apocalyptic side note: A similar method is being used to create those profoundly troubling “deepfake” videos in which someone appears to be saying or doing something they are not.)"
Source: How to Teach Artificial Intelligence Some Common Sense | Wired
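For context on the adversarial technique described above, here is a minimal sketch (not from the Wired piece) of a generative-adversarial training loop in PyTorch: a generator tries to fool a discriminator, and both improve by training against each other. The toy 1-D data, network sizes, and hyperparameters are all illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" data the generator must learn to imitate: samples from a shifted Gaussian.
def real_batch(n=64):
    return torch.randn(n, 1) * 1.5 + 4.0

# Generator maps random noise to a sample; discriminator scores "realness" in (0, 1).
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Train the discriminator: label real samples 1, generated (fake) samples 0.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()  # detach: don't update the generator here
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator: try to make the discriminator call its fakes "real".
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

print("mean of generated samples:", generator(torch.randn(1000, 8)).mean().item())
```

Note that the discriminator is still shown real examples (the real_batch call above), so adversarial training reduces the need for labeled data rather than eliminating the need for data altogether.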