A study suggests that deep neural networks have a built-in "Occam's razor" and prefer simpler solutions when learning from data.
Researchers at Oxford University have been studying why deep neural networks (DNNs) are so good at learning from data.
DNNs are AI systems that use multiple layers to process information. The study suggests that these networks have a built-in “Occam’s razor,” which means they prefer simpler solutions when learning from data. Occam’s razor is the idea that simpler explanations are usually better than complex ones.
The researchers found that this bias precisely counterbalances the exponential growth in the number of possible complex solutions, making it easier for the networks to pick out simple, effective patterns. This helps DNNs make accurate predictions on new data, even though they have a huge number of adjustable parameters. The methods and results are described in a paper published in Nature Communications.
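To get a feel for the counting problem this bias has to overcome, here is a rough back-of-the-envelope sketch (the numbers are illustrative and not taken from the paper): the total number of Boolean functions on n inputs grows as 2^(2^n), while only a tiny fraction of them can be described in few bits, so a learner with no preference for simplicity would almost always land on a complex function.

```python
# Back-of-the-envelope illustration (illustrative numbers, not from the paper):
# on n binary inputs there are 2**(2**n) possible Boolean functions in total,
# but at most 2**(k+1) - 1 of them have descriptions of k bits or fewer.
for n in range(1, 6):
    total = 2 ** (2 ** n)       # all Boolean functions on n inputs
    k = 2 ** (n - 1)            # an arbitrary "short description" budget, in bits
    simple = 2 ** (k + 1) - 1   # upper bound on functions describable in <= k bits
    print(f"n={n}: {total:,} functions in total, "
          f"at most {simple:,} with descriptions of {k} bits or fewer")
```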
The study examined how DNNs learn Boolean functions, basic rules whose outcome is simply true or false. The networks naturally lean toward simpler functions, those that are easier to describe. This preference helps them avoid overfitting, where a model fits the training data so closely that it then fails on new data.
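As a concrete, heavily simplified illustration of what such a bias looks like, the sketch below (our own toy experiment, not the paper's code) samples many small random-weight networks on every 7-bit Boolean input and records which truth tables they produce. Using compressed length as a crude proxy for how easy a function is to describe, the truth tables that appear most often tend to be the most compressible, i.e. the simplest, ones.

```python
# Toy experiment (a sketch of the idea, not the paper's methodology): sample
# random two-layer networks, read off the Boolean function each one computes
# on all 2**n inputs, and compare how compressible the frequently occurring
# functions are versus the rarely occurring ones.
import zlib
from collections import Counter
import numpy as np

n = 7
# All 2**n input patterns, encoded as +/-1 vectors.
inputs = np.array([[(i >> b) & 1 for b in range(n)] for i in range(2 ** n)],
                  dtype=float) * 2 - 1

def random_truth_table(rng, hidden=40):
    """Boolean function computed by one random-weight two-layer network."""
    w1 = rng.normal(size=(n, hidden))
    w2 = rng.normal(size=hidden)
    return (np.tanh(inputs @ w1) @ w2 > 0).astype(np.uint8).tobytes()

def complexity(table_bytes):
    """Compressed length of a truth table: a crude stand-in for description length."""
    return len(zlib.compress(table_bytes, 9))

rng = np.random.default_rng(0)
counts = Counter(random_truth_table(rng) for _ in range(100_000))

ranked = counts.most_common()
frequent = np.mean([complexity(t) for t, _ in ranked[:20]])
rare = np.mean([complexity(t) for t, _ in ranked[-20:]])
print(f"mean complexity of the 20 most frequent functions: {frequent:.1f}")
print(f"mean complexity of the 20 least frequent functions: {rare:.1f}")
```

The exact numbers depend on the architecture and the complexity proxy; the point is only the qualitative gap between the functions these networks produce often and the ones they almost never produce.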
The study also explored how changing a network's learning process affects its performance. Even small alterations to the simplicity bias can make a DNN much worse at generalizing, underscoring how important the bias is.
A strong parallel between artificial intelligence and fundamental principles of nature
This research helps explain how AI makes decisions, though it doesn't explain why some DNNs work better than others on particular kinds of data. The researchers suspect that other biases may be at play as well.
Interestingly, the simplicity bias in DNNs resembles patterns in nature and evolution. According to the researchers, the findings suggest a strong parallel between artificial intelligence and fundamental principles of nature.
“The bias we observe in DNNs has the same functional form as the simplicity bias in evolutionary systems that helps explain, for example, the prevalence of symmetry in protein complexes,” says research leader Ard Louis in an Oxford press release. “This points to intriguing connections between learning and evolution, a connection ripe for further exploration.”