Stanford researchers improve robot training by adding audio
For robots to move into homes, they’ll need to learn to listen, suggests MIT Technology Review.
Researchers at the Robotics and Embodied AI Lab at Stanford University have built a system for collecting audio data, consisting of a GoPro camera and a gripper with a microphone.
“Thus far, robots have been training on videos that are muted,” says Zeyi Liu, a PhD student at Stanford and lead author of the study. “But there is so much helpful data in audio.”
The results are published in a paper on arXiv: “When using vision alone in the dice test, the robot could tell 27% of the time if there were dice in the cup, but that rose to 94% when sound was included.”
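For readers curious what learning from combined audio-visual data can look like in practice, here is a minimal sketch of a policy network that fuses a camera frame with an audio spectrogram before predicting an action. This is an illustrative assumption, not the architecture from the ManiWAV paper: the layer sizes, input shapes, and the 7-dimensional action output are all hypothetical.

```python
# Minimal sketch (hypothetical, not the paper's implementation):
# fusing vision and audio features for a manipulation policy.
import torch
import torch.nn as nn

class AudioVisualPolicy(nn.Module):
    def __init__(self, action_dim: int = 7):  # action_dim is an assumption
        super().__init__()
        # Vision branch: encodes an RGB frame into a feature vector.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Audio branch: encodes a single-channel spectrogram the same way.
        self.audio = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fusion head: concatenated features -> predicted action.
        self.head = nn.Sequential(
            nn.Linear(32 + 32, 64), nn.ReLU(),
            nn.Linear(64, action_dim),
        )

    def forward(self, image: torch.Tensor, spectrogram: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.vision(image), self.audio(spectrogram)], dim=-1)
        return self.head(fused)

# Usage: one dummy camera frame plus one dummy spectrogram -> one action.
policy = AudioVisualPolicy()
image = torch.randn(1, 3, 96, 96)        # placeholder RGB frame
spectrogram = torch.randn(1, 1, 64, 64)  # placeholder audio spectrogram
action = policy(image, spectrogram)
print(action.shape)  # torch.Size([1, 7])
```

The point of the sketch is simply that an audio channel is one extra input branch: a policy trained on muted video discards that branch's signal, which is the gap the Stanford work addresses.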
Citation: Zeyi Liu et al. ManiWAV: Learning Robot Manipulation from In-the-Wild Audio-Visual Data. arXiv, https://arxiv.org/pdf/2406.19464 (open access)
One thought on “Robots learn better by also listening”
The integration of auditory learning in robots is a fascinating development that opens new possibilities for AI. By incorporating listening capabilities, robots can better understand their environment and interact more naturally with humans. This advancement could lead to significant improvements in areas like customer service, healthcare, and even personal assistants, where nuanced understanding and responsive communication are crucial. The article highlights how this technology could also enhance machine learning processes, making AI systems more adaptive and efficient. It's exciting to see how these innovations will shape the future of human-robot interactions.