Electronic tongue decodes flavors with advanced AI

2024-10-10
2 min read.
Researchers have developed an electronic tongue that analyzes and interprets flavors using AI, along with a way to look inside how the AI reaches its decisions.

A team led by Penn State researchers has developed an innovative electronic tongue that mimics human taste, using artificial intelligence (AI) to analyze various flavors. The device can identify tastes such as sweet, salty, sour, and bitter. By training the electronic tongue on a diverse range of samples, the team has enabled it to learn and predict how new substances might taste, significantly enhancing its accuracy and reliability.

The implications of this technology are substantial for the food industry. It could streamline the product development process, allowing companies to create new flavors or refine existing ones without the extensive time and resources typically required for human taste tests. This efficiency could lead to faster innovations in food products, benefiting both manufacturers and consumers.

Beyond food science, the researchers envision applying the technology in other fields, such as medicine, to analyze the chemical composition of various materials. This versatility makes the electronic tongue a broadly useful analytical tool.

The scientists have described the research methods and results in a paper titled "Robust chemical analysis with graphene chemosensors and machine learning," published in Nature.

The Nature podcast has an audio explanation titled "This AI-powered ‘tongue’ can tell Coke and Pepsi apart."

Looking inside what a neural network is thinking

A key feature of the AI system is the incorporation of Shapley additive explanations, a method rooted in cooperative game theory. The method helps explain how each individual taste contributes to the overall flavor profile, providing valuable insights into the decision-making process of the AI. This transparency is crucial because it allows researchers to understand not just what the electronic tongue recognizes, but also why it makes those distinctions.

When asked to define its own assessment parameters, the AI could more accurately interpret the data generated by the electronic tongue. "[W]e used a method called Shapley additive explanations, which allows us to ask the neural network what it was thinking after it makes a decision," explains researcher Andrew Pannone in a Penn State press release.

Shapley additive explanations thus enhance the system's interpretability, making it easier to understand the complex interactions that define taste.
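To make the game-theoretic idea concrete, here is a minimal, self-contained Python sketch of the Shapley-value calculation that underlies Shapley additive explanations. It is not the authors' code or a production SHAP library: the flavor_model function, the four taste features, and the baseline values are hypothetical stand-ins for the trained neural network and the electronic tongue's sensor readings.

```python
from itertools import permutations

# Hypothetical sensor readings for one sample, plus a baseline used to
# represent "absent" features. None of these numbers come from the paper.
BASELINE = {"sweet": 0.0, "salty": 0.0, "sour": 0.0, "bitter": 0.0}
SAMPLE = {"sweet": 0.8, "salty": 0.1, "sour": 0.4, "bitter": 0.05}

def flavor_model(features):
    # Toy scoring rule standing in for the trained neural network's output.
    return (2.0 * features["sweet"]
            - 1.5 * features["bitter"]
            + 0.5 * features["sour"] * features["salty"])

def model_with(present):
    # Evaluate the model with only the features in `present` taken from the
    # sample; the rest stay at the baseline (a common way to "remove" features).
    inputs = {k: (SAMPLE[k] if k in present else BASELINE[k]) for k in SAMPLE}
    return flavor_model(inputs)

def shapley_values():
    names = list(SAMPLE)
    totals = dict.fromkeys(names, 0.0)
    orderings = list(permutations(names))
    for order in orderings:
        present = set()
        for name in order:
            before = model_with(present)
            present.add(name)
            after = model_with(present)
            totals[name] += after - before  # marginal contribution in this ordering
    # Averaging the marginal contributions over all orderings gives each
    # feature's Shapley value: its share of the model's overall output.
    return {k: total / len(orderings) for k, total in totals.items()}

if __name__ == "__main__":
    for taste, value in shapley_values().items():
        print(f"{taste}: {value:+.4f}")
```

Exact enumeration of orderings, as above, is only feasible for a handful of features; practical SHAP implementations approximate these values. The interpretation is the same either way: each attribution says how much a given taste feature pushed the model's output away from the baseline.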

Neural networks are often described as black boxes whose inner workings cannot be inspected, but this research suggests that one can, and should, look inside.

#AIApplications


