Simple modified-earbud biosensors can record brain activity and exercise levels

Screen-printed sensors attached to earbuds on a flexible, stamp-like surface can be used for health monitoring and diagnosis of neurodegenerative conditions (credit: Erik Jepsen/University of California San Diego)

A multidisciplinary research team of engineers at the University of California San Diego has developed biosensors that can be used for health monitoring and diagnosis of neurodegenerative conditions during exercise.

Data from an electroencephalogram (EEG), which measures electrical activity in the brain, and sweat lactate, an organic acid the body produces during exercise and normal metabolic activity, can now be combined for a variety of purposes. For example, they can be used to diagnose epileptic and other seizures or to monitor effort during physical exercise and levels of stress and focus. 
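As a rough sketch of how such a fusion might work in software (our illustration only: the function names, thresholds, and classification rule below are hypothetical, not the team's pipeline):

```python
import numpy as np

def alpha_band_power(eeg: np.ndarray, fs: float) -> float:
    """Crude estimate of 8-12 Hz (alpha-band) EEG power via an FFT."""
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    band = (freqs >= 8.0) & (freqs <= 12.0)
    return float(spectrum[band].sum())

def classify_state(eeg: np.ndarray, lactate_mm: float, fs: float = 250.0) -> str:
    """Toy fusion rule combining brain activity with metabolic load.
    All thresholds are illustrative placeholders, not clinical values."""
    alpha = alpha_band_power(eeg, fs)
    if lactate_mm > 10.0:                 # high sweat lactate: heavy exertion
        return "intense exercise"
    if alpha > 1e4 and lactate_mm < 2.0:  # strong alpha rhythm at rest
        return "relaxed / at rest"
    return "active / focused"

# Example with one second of synthetic EEG sampled at 250 Hz
rng = np.random.default_rng(0)
print(classify_state(rng.normal(size=250), lactate_mm=1.5))
```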

“Being able to measure the dynamics of both brain cognitive activity and body metabolic state in one in-ear integrated device that doesn’t intrude on the comfort and mobility of the user opens up tremendous opportunities for advancing health and wellness of people of all ages, anytime and anywhere,” said Gert Cauwenberghs, a professor in the Shu Chien-Gene Lay Department of Bioengineering at UC San Diego.

Building the sensors

The first step in building the in-ear sensors was confirming that EEG and lactate data could be gathered in the ear. Researchers had to design smaller, more compact instruments to gather electrophysiological signals, such as EEG data, that would fit on an earbud.

They also had to find a suitable material to collect sweat and sense lactate. After preliminary experiments on human subjects, the researchers determined that the best location to collect lactate data was the tragus, where sweat accumulates at the entrance of the ear. The team also knew from previous experience that collecting EEG data required high-performance physiological electrodes pointed toward the temporal lobe.

“The primary technical challenge was fitting two sensors in the ear, which is a small space that varies from one individual to another, while also reliably acquiring signals from both the EEG and lactate sensors,” said Yuchen Xu, co-first author of the paper and a postdoctoral researcher in Cauwenberghs’s lab. “It’s a natural entry point: people are used to wearing earbuds.”

“We also had to accommodate earbud integration and reduce crosstalk. That’s when we landed on the idea of a stamp-like stretchable sensor, which is a simple addition to the earbud itself, but has all the necessary functions we needed and gave us enough freedom for our designs.”

To make sure that the electrophysiological sensors had firm contact with the ear, researchers designed 3D spring-loaded sensors that hold contact but can adjust as earbuds move. To improve sweat collection, researchers covered the electrochemical sensors with a see-through hydrogel film. “It’s sponge-like and hydrophilic,” Xu said. “It acts as a mechanical cushion between skin and sensors and also helps collect sweat.” 

“This new and powerful in-ear multimodal wearable bioelectronic platform offers a rich source of real-time information on the health of the users by recording physical and biochemical information simultaneously and dynamically,” said Joseph Wang, a professor in the Department of NanoEngineering and director of the Center for Wearable Sensors at the Jacobs School.

The researchers foresee a future in which neuroimaging and health monitoring systems work with wearable sensors and mobile devices, such as phones, earbuds, and watches, to track brain activity and the levels of many health-related metabolites throughout the day. This would allow users to enhance brain and body capabilities.

Citation: UC San Diego Shu Chien-Gene Lay Department of Bioengineering: Yuchen Xu, Akshay Paul, Min Lee, Abhinav Uppal, William Chen, Stephen Deiss, Gert Cauwenberghs; UC San Diego Department of NanoEngineering and Chemical Engineering: Ernesto De la Paz, Kuldeep Mahato, Juliane R. Sempionatto, Nicholas Tostado, Muyang Lin, Srishty Dua, Lu Yin, Sheng Xu, Joseph Wang; UC San Diego Department of Electrical and Computer Engineering: Gopabandhu Hota, Brian L. Wuerstle, Patrick Mercier. (28-Sep-2023). Unobtrusive in-ear integrated physiological and metabolic sensors for continuous brain-body activity monitoring. Nature Biomedical Engineering. https://www.nature.com/articles/s41551-023-01095-1 (open access)



You can learn to control a robotic arm in one hour

One hour of training is enough for people to carry out tasks with one or more extra robotic arms, according to a new study by researchers at Queen Mary University of London, Imperial College London and The University of Melbourne.

The study, published in the journal IEEE Open Journal of Engineering in Medicine and Biology, investigated the potential of multiple robotic arms to help people perform tasks that require more than two hands.

“Many tasks in daily life, such as opening a door while carrying a big package, require more than two hands,” said Dr. Ekaterina Ivanova, lead author of the study, from Queen Mary University of London. “Supernumerary robotic arms have been proposed as a way to allow people to do these tasks more easily, but until now, it was not clear how easy they would be to use.” 

Perhaps like Doctor Octopus in Spider-Man 2 (2004)?

Dr. Octopus, with four long tentacles as extra hands. After a failed fusion experiment, eccentric and obsessive scientist Dr. Otto Octavius is transformed into super-villain Doctor Octopus (credit: Sony Pictures)

The study involved 24 participants who were asked to perform a variety of tasks requiring more than two hands. The participants were either given one hour of training in how to use a supernumerary robotic arm or asked to work with a partner.

The results showed that the participants who had received training on the supernumerary arm performed the tasks just as well as the participants who were working with a partner.

“Our findings are promising for the development of supernumerary robotic arms,” said Ivanova. “They suggest that these arms could be used to help people with a variety of tasks, such as surgery, industrial work, or rehabilitation.” 

Citation: Yanpei Huang, Jonathan Eden, Ekaterina Ivanova, Etienne Burdet (16-Aug-2023). Can Training Make Three Arms Better Than Two Heads for Trimanual Coordination? IEEE Open Journal of Engineering in Medicine and Biology. Vol. 4. 10.1109/OJEMB.2023.3305808 (open-access)


The Tong test for AGI

Researchers in China have developed a test platform for evaluating and benchmarking AGI models.

Funded by the National Key Research and Development Program of China, the Tong* test architecture consists of infrastructure, “dynamic embodied physical and social interactions” (DEPSI), and evaluation tools, the researchers report in the journal Engineering.

Reasonable, but not AGI in general

“The Tong test looks fairly reasonable,” says AGI pioneer Dr. Ben Goertzel, CEO of SingularityNet, “but it’s plainly a test of what I’d call ‘human-capable AGI,’ not AGI in general.

“This sort of test can be valuable, but I don’t think it’s really meaningful to have a single litmus test for ‘general AGI’ nor for ‘human-capable AGI.’ These are multidimensional things that can be measured in a lot of different ways. And different in-progress AGI systems are going to perform differently on different dimensions.   

Useful evaluation tools

“However, it’s useful to have some standardized environments and tests for evaluating in-progress AGI systems, as long as one interprets these as useful tools to play with, rather than as definitive measurement devices,” says Goertzel. 

“General intelligence is not something highly precisely defined, like temperature or pressure, which can be accurately measured if one just builds the right instrument (and I say this being very aware of how hard it is to build an accurate calorimeter…!).”

* “Tong” is the pronunciation of the Chinese character for “general,” as in “artificial general intelligence.”

Citation: Peng, Y., Han, J., Zhang, Z., Fan, L., Liu, T., Qi, S., Feng, X., Ma, Y., Wang, Y., & Zhu, S. (2023). The Tong Test: Evaluating Artificial General Intelligence Through Dynamic Embodied Physical and Social Interactions. Engineering. https://www.sciencedirect.com/science/article/pii/S209580992300293X (open access)


Finger-worn device can ‘translate’ text messages to braille on the fly

Engineers and chemists at Lawrence Livermore National Laboratory (LLNL) and Meta have developed a new kind of 3D-printed material that can replicate some characteristics of biological tissue.

The researchers say the advancement could impact the future of “augmented humanity,” where technology enhances human performance by closely interfacing engineered devices with our anatomy. 

Emulating the range of biological materials

As described in a paper in the journal Matter, the method patterns smooth gradients in stiffness to approximate gradients found in biology, such as where bone meets muscle. This overcomes “mechanical mismatch” (natural tissues are soft; electronic devices are usually made of rigid materials).

“For engineers, it’s very hard to get a softer material combined with a stiffer material such as is common in nature,” explained lead author and LLNL engineer Sijia Huang. “Engineers make a part that is stiff and another part that is soft, and then manually assemble them together, so we have a very sharp interface that compromises the mechanical property.”

The new technique works by manipulating the intensity of light projected into a liquid photopolymer resin, rapidly building parts layer by layer. Lower light intensity results in a softer material; higher intensity, a stiffer one.
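A minimal sketch of the idea, assuming a simple linear intensity-to-stiffness relation (real resins need an empirically measured calibration curve, and the moduli below are placeholders, not values from the paper):

```python
import numpy as np

def intensity_mask(width: int, lo: float = 0.2, hi: float = 1.0) -> np.ndarray:
    """Grayscale projection mask: dim on the left (cures soft),
    bright on the right (cures stiff), with a smooth ramp between."""
    return np.linspace(lo, hi, width)

def predicted_modulus(intensity: np.ndarray,
                      e_soft: float = 0.1, e_stiff: float = 100.0) -> np.ndarray:
    """Hypothetical linear map from normalized light dose to elastic
    modulus (MPa), standing in for a measured dose-stiffness curve."""
    t = (intensity - intensity.min()) / (intensity.max() - intensity.min())
    return e_soft + t * (e_stiff - e_soft)

mask = intensity_mask(512)
print(predicted_modulus(mask)[[0, 255, 511]])  # soft edge, middle, stiff edge
```

Because the mask varies smoothly, the printed part has no sharp soft-stiff interface to fail at, which is the point of the gradient approach.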

Wearable braille display

To demonstrate the potential, engineers at Meta used the material to 3D-print an inexpensive braille display that could be worn on a single finger and connected to a smartphone and an air pump.

When text is transmitted via the phone, sections of the wearable fill with air, causing it to deform and create braille letters, enabling a sightless person to “read” the text through the device.
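The text-to-touch mapping itself is straightforward; here is a sketch (our own, with only a few letters filled in) of how characters could be translated into six-dot braille cells driving pneumatic channels:

```python
# Standard six-dot braille cells: dots 1-3 run down the left column,
# dots 4-6 down the right. Only a few letters are shown here.
BRAILLE_DOTS = {
    "a": {1}, "b": {1, 2}, "c": {1, 4},
    "h": {1, 2, 5}, "i": {2, 4},
}

def to_pump_channels(text: str) -> list[list[bool]]:
    """Translate text into per-cell actuation patterns: True means the
    air pump inflates that dot into a raised bump the wearer can feel."""
    cells = []
    for ch in text.lower():
        dots = BRAILLE_DOTS.get(ch, set())  # unknown characters -> blank cell
        cells.append([d in dots for d in range(1, 7)])
    return cells

for cell in to_pump_channels("hi"):
    print(cell)
```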

The material is stretchable to around 200 times its original length, and as its gradient transitions from soft to stiffer material, its toughness increases by 10 times. Huang said the material could be tailored for energy-absorbing materials, soft robotics and wearable electronic devices.

Citation: Sijia Huang et al. (June 20, 2023) One-pot ternary sequential reactions for photopatterned gradient multimaterials. Matter. DOI: https://doi.org/10.1016/j.matt.2023.05.040 (open-access)


Does a brain in a dish have moral rights?

Inventors of DishBrain—a collection of 800,000 living brain cells in a dish that learned to play Pong—are exploring ethical applications of biocomputing with an international team of ethicists and medical researchers.

In a paper published in Biotechnology Advances, the research team maps such a framework.

“Combining biological neural systems with silicon substrates to produce intelligence-like behavior has significant promise, but we need to proceed with the bigger picture in mind to ensure sustainable progress,” says lead author Dr. Brett Kagan, Chief Scientific Officer of biotech start-up Cortical Labs.

“We haven’t adequately addressed the moral issues of what is even considered ‘conscious’ in the context of today’s technology,” says Dr. Julian Savulescu, author and Uehiro Chair in Practical Ethics at the University of Oxford. “There are still many ways of describing consciousness or intelligence, each raising different implications for how we think about biologically based intelligent systems.”

But can they suffer?

The paper cites English philosopher Jeremy Bentham, who argued that “the question is not, ‘can they reason?’ or ‘can they talk?’ but, ‘can they suffer?’”

“From that perspective, even if new biologically based computers show human-like intelligence, it does not necessarily follow that they have moral status,” says co-author Dr. Tamra Lysaght, Director of Research at the Centre for Biomedical Ethics, National University of Singapore.

“Our paper doesn’t attempt to definitively answer the full suite of moral questions posed by bio-computers, but it provides a starting framework to ensure that the technology can continue to be researched and applied responsibly,” says Lysaght.

Understanding diseases

The paper further highlights the ethical challenges and opportunities offered by DishBrain’s potential to greatly accelerate our understanding of diseases such as epilepsy and dementia.

“Current cell lines used in medical research predominately have European-type genetic ancestry, potentially making it harder to identify genetic-linked side effects,” says co-author Dr. Christopher Gyngell, Research Fellow in biomedical ethics from the Murdoch Children’s Research Institute and The University of Melbourne.

“In future models of drug screening, we have the chance to make them more representative of real-world patients by using more diverse cell lines, and that means potentially faster and better drug development.”

The researchers point out that it is worth working through these moral issues, as the potential impact of bio-computing is significant.

Biocomputing’s energy efficiency

“Silicon-based computing is massively energy-hungry, with a supercomputer consuming millions of watts of energy. By contrast, the human brain uses as little as 20 watts of energy—biological intelligences will show similar energy efficiency,” says Kagan.

“As it stands, the IT industry is a massive contributor to carbon emissions. If even a relatively small number of processing tasks could be done with biocomputers, there is a compelling environmental reason to explore these alternatives.”
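To put rough numbers on Kagan’s comparison, take an illustrative 20 MW draw for a large supercomputer (roughly exascale class; this figure is our assumption, not the paper’s) against the brain’s 20 W:

$$\frac{P_{\text{supercomputer}}}{P_{\text{brain}}} \approx \frac{2 \times 10^{7}\,\text{W}}{20\,\text{W}} = 10^{6}$$

On those illustrative figures, the power draw differs by about a factor of a million, which is the scale of saving behind the environmental argument.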

Citation: Kagan, B. J., Gyngell, C., Lysaght, T., Cole, V. M., Sawai, T., & Savulescu, J. (2023). The technology, opportunities, and challenges of Synthetic Biological Intelligence. Biotechnology Advances, 68, 108233. https://doi.org/10.1016/j.biotechadv.2023.108233.



Possible new chemical combinations for the origin of life on distant planets

A team led by scientists at the University of Wisconsin-Madison has written a “cookbook” with hundreds of recipes that have the potential to give rise to life.

“The origin of life really is a something-from-nothing process,” says Betül Kaçar, a NASA-supported astrobiologist and UW–Madison professor of bacteriology. “But that something can’t happen just once. Life comes down to chemistry and conditions that can generate a self-reproducing pattern of reactions.”

Autocatalytic reactions

Chemical reactions that produce molecules that encourage the same reaction to happen again and again are called “autocatalytic reactions.” In a new study published in the Journal of the American Chemical Society, Zhen Peng, a postdoctoral researcher in the Kaçar laboratory, and collaborators compiled 270 combinations of molecules. These involve atoms from all groups and series across the periodic table that have the potential for sustained autocatalysis.

Comproportionation reactions

But to be autocatalytic, the outcome of the reaction also needs to provide starting materials for the reaction to occur again, allowing the output to become a new input.

The researchers focused their search on “comproportionation reactions,” in which two compounds containing the same element in different oxidation states (that is, with different numbers of electrons) combine to create a new compound in which the element’s oxidation state lies between the two starting states.

Comproportionation reactions result in multiple copies of some of the molecules involved, providing materials for the next steps in autocatalysis.
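A textbook example (ours, not necessarily among the paper’s 270 combinations) is iron comproportionation:

$$\mathrm{Fe^{0}} + 2\,\mathrm{Fe^{3+}} \longrightarrow 3\,\mathrm{Fe^{2+}}$$

Iron enters at oxidation states 0 and +3 and ends at the intermediate +2, and a single reaction event yields three copies of Fe²⁺, the kind of output multiplication that can feed subsequent cycles of an autocatalytic loop.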

“We will never definitively know what exactly happened on this planet to generate life,” Kaçar says. “But, in a test tube, we can create multiple planetary conditions to understand how the dynamics to sustain life can evolve in the first place.”

Kaçar also leads a NASA-supported consortium called MUSE (Metal Utilization & Selection Across Eons).

Citation: Zhen Peng, Zachary R. Adam, Albert C. Fahrenbach, and Betül Kaçar. 18-Sep-2023. Assessment of Stoichiometric Autocatalysis across Element Groups. Journal of the American Chemical Society. DOI: 10.1021/jacs.3c07041 (open access)


Making AI smarter: a multisensory, integrated artificial neuron

Sensors in robots and other devices typically run separately. However, “allowing sensors to communicate directly with each other can be more efficient in terms of energy and speed,” emulating the human brain, says Saptarshi Das, associate professor of engineering science and mechanics at Penn State.

Penn State researchers applied this synergistic approach in developing the first multisensory integrated artificial neuron, which they announced today (Sept. 15) in the open-access journal Nature Communications.

Creating a tactile-light neuron

The team focused on integrating a tactile sensor and a visual sensor. This combination allows the output of one sensor to modify the other, with the help of visual memory. (For example, a flash of light could enhance the chance of successful movement through a dark room.)

The researchers fabricated the multisensory neuron by connecting a tactile sensor to a phototransistor, based on a monolayer of molybdenum disulfide, a compound that exhibits unique electrical and optical characteristics, useful for detecting light and supporting transistors.

The sensor generates electrical spikes in a manner reminiscent of neurons processing information, allowing it to integrate both visual and tactile cues.

Visual and tactile stimuli are integrated in this bio-inspired artificial neuron (credit: Tyler Henderson/Penn State)

Simulations

To simulate touch, the tactile sensor used the triboelectric effect, in which two layers slide against one another to produce electricity (meaning the touch stimuli were encoded into electrical impulses). To simulate visual input, the researchers shined a light into a monolayer molybdenum disulfide photo memtransistor (which can remember visual input), similar to how a person can hold onto the general layout of a room after a quick flash illuminates it.

They found that the sensory response of the neuron—simulated as electrical output—increased when both visual and tactile signals were weak.
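A simple way to see why weak inputs benefit most from integration is a threshold (sigmoid) neuron model; the sketch below is our toy illustration, not the device’s measured physics:

```python
import numpy as np

def response(visual: float, tactile: float,
             threshold: float = 1.0, gain: float = 6.0) -> float:
    """Sigmoidal 'spiking' output driven by the summed stimulus."""
    drive = visual + tactile
    return 1.0 / (1.0 + np.exp(-gain * (drive - threshold)))

weak = 0.4
separate = response(weak, 0.0) + response(0.0, weak)
combined = response(weak, weak)
print(f"sum of separate responses: {separate:.3f}")  # ~0.05
print(f"combined response:         {combined:.3f}")  # ~0.23
```

Two weak cues together push the drive past a threshold that neither crosses alone, so the joint response exceeds the sum of the individual ones, mirroring the enhancement the team measured for weak visual-plus-tactile stimulation.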

Das explained that an artificial multisensory neuron system could enhance sensor technology’s efficiency, paving the way for more eco-friendly AI uses. As a result, robots, drones and self-driving vehicles could navigate their environment more effectively while using less energy.

Combining sensors to mimic how our brains actually work

“Biology enables small organisms to thrive in environments with limited resources, minimizing energy consumption in the process,” said Das, who is also affiliated with the Materials Research Institute.

“The requirements for different sensors are based on the context. In a dark forest, you’d rely more on listening than seeing, but we don’t make decisions based on just one sense,” he noted. “We have a complete sense of our surroundings, and our decision-making is based on the integration of what we’re seeing, hearing, touching, smelling, etc.

“The senses evolved together in biology, but separately in AI. In this work, we’re looking to combine sensors and mimic how our brains actually work.”

The Army Research Office and the National Science Foundation supported this work.

Citation: Sadaf, M. U., Sakib, N. U., Pannone, A., Ravichandran, H., & Das, S. (2023). A bio-inspired visuotactile neuron for multisensory integration. Nature Communications, 14(1), 1-12. https://doi.org/10.1038/s41467-023-40686-z (open-access)



Your brain’s amygdala may release cannabinoid molecules during stress, researchers find

When you’re under stress, your brain’s amygdala (a key emotional brain center) may release cannabinoid molecules*, activating the same brain receptors that THC** does, Northwestern Medicine researchers have discovered.

These endogenous (body’s own) feel-good cannabinoid molecules dampen incoming stress alarms from the hippocampus, a memory and emotion center in the brain, the researchers report in a study with mice, published Sept. 12 in the journal Cell Reports. 

Overcoming stress disorders: new drug development

Stress heightens the risk for many psychiatric disorders. The new finding opens an avenue for developing drugs to treat conditions such as generalized anxiety, major depression and post-traumatic stress disorder (PTSD), the researchers suggest.

“Understanding how the brain adapts to stress at the molecular, cellular and circuit level could provide critical insight into how stress is translated into mood disorders and may reveal novel therapeutic targets for the treatment of stress-related disorders,” said study author Dr. Sachi Patel, chair of psychiatry and behavioral sciences at Northwestern University Feinberg School of Medicine and a Northwestern Medicine psychiatrist.

“The endocannabinoid system is one of the leading signaling systems that has been identified as a prominent drug-development candidate for stress-related psychiatric disorders,” Patel said.

Removing cannabinoid receptor decreases pleasure and ability to cope

When the scientists removed the target of these cannabinoids, cannabinoid receptor type 1, the mice showed a poorer ability to cope with stress, along with motivational deficits. Specifically, when this receptor was removed at hippocampal-amygdala synapses, mice adopted more passive and immobile responses to stress and showed a lower preference for sweetened sucrose water after stress exposure.

This finding may relate to anhedonia (decrease in pleasure), often experienced by patients with stress-related disorders such as depression and PTSD.

“Determining whether increasing the levels of endogenous cannabinoids can be used as potential therapeutics for stress-related disorders is a next logical step from this study and our previous work,” said Patel. “Also, there are ongoing clinical trials in this area that may be able to answer this question in the near future.” 

* Cannabinoids are a class of biological compounds that bind to cannabinoid receptors. They are most frequently sourced from and associated with the plants of the Cannabis genus, including Cannabis sativa, Cannabis indica, and Cannabis ruderalis.

** THC (tetrahydrocannabinol) is the substance that’s primarily responsible for the effects of marijuana on a person’s mental state.

Citation: Kondev, V., Najeed, M., Yasmin, F., Morgan, A., Loomba, N., Johnson, K., … & Patel, S. Sept. 12, 2023. Endocannabinoid Release at Ventral Hippocampal-Amygdala Synapses Regulates Stress-Induced Behavioral Adaptation. Cell Reports. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=433


‘Lightning’ system combines light and electrons to unlock faster, greener computing for machine-learning models

The growing demand for high-performance computers that can support increasingly complex (and expensive) AI models has led engineers to explore new methods for expanding the computational capabilities of their machines.

Now Manya Ghobadi, an associate professor in MIT’s Department of Electrical Engineering and Computer Science (EECS) and a CSAIL member, and her colleagues have developed a solution: a system (dubbed “Lightning”) that connects photons (produced by lasers) to the electronic components (transistors and wires) of computers, creating hybrid photonic-electronic, reconfigurable network interface cards (“SmartNICs”).

First photonic-computing prototype to serve real-time, machine-learning inference requests

The new system allows deep neural networks (machine-learning models that imitate how brains process information) to complete inference tasks, such as image recognition and the text generation behind chatbots like ChatGPT.

Machine-learning services completing inference-based tasks, like ChatGPT and BERT, currently require heavy computing resources and are expensive—some estimates show that ChatGPT requires $3 million or more per month to run.

They’re also environmentally detrimental. Lightning instead uses photons, which move faster than electrons do in wires while generating less heat.
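As a back-of-the-envelope illustration of why per-operation energy matters at inference scale (the per-MAC figures below are made up for comparison, not measurements from the Lightning paper):

```python
from dataclasses import dataclass

@dataclass
class Accelerator:
    name: str
    joules_per_mac: float  # energy per multiply-accumulate (illustrative)

# Placeholder efficiency numbers chosen only to show the shape of the
# comparison between electronic and photonic compute.
electronic = Accelerator("electronic", 1e-12)
photonic = Accelerator("photonic", 1e-14)

MACS_PER_QUERY = 1e11  # ballpark forward-pass cost of a mid-sized network

for acc in (electronic, photonic):
    energy = acc.joules_per_mac * MACS_PER_QUERY
    print(f"{acc.name}: {energy:.3f} J per inference query")
```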

The team is currently (Sept. 10–14) presenting their findings at the Association for Computing Machinery’s Special Interest Group on Data Communication (SIGCOMM).


How ecology and AI could collaborate

A paper published September 11 in the Proceedings of the National Academy of Sciences argues for a synergy between AI and ecology—one that could strengthen AI and also help to solve complex global challenges, such as disease outbreaks, loss of biodiversity, and climate-change impacts.

Dealing with complex systems

The paper argues that there are many more possibilities for applying AI in ecology, such as in “synthesizing big data and finding missing links in complex systems,” said co-author Shannon LaDeau, a disease ecologist at Cary Institute.

Ecologists are also using AI to search for patterns in large data sets and to make more accurate predictions, such as whether new viruses might be capable of infecting humans and which animals are most likely to harbor those viruses.

Smarter AI

Inspired by ecological systems, a more robust AI might include feedback loops, redundant pathways, and decision-making frameworks, the researchers suggest. These flexibility upgrades could also contribute to a more “general intelligence” for AIs that could enable reasoning and connection-making beyond the specific data that the algorithm was trained on.

Ecology could also help the AI-driven large language models that power chatbots block “hallucinations” (when an AI generates false information). Because ecology examines complex systems at multiple levels and in holistic ways, it’s good at capturing emergent properties and can help reveal the mechanisms behind such behaviors, the researchers suggest.

The research was funded by the National Science Foundation.

Citation: Barbara A. Han et al. September 11, 2023. A synergistic future for AI and ecology. Proceedings of the National Academy of Sciences. https://doi.org/10.1073/pnas.2220283120 (open access)
