World’s fastest electron microscope

Imagine a camera so powerful it can take freeze-frame photographs of a moving electron—an object traveling so fast it could circle the Earth multiple times in a single second.

Researchers at the University of Arizona, led by Mohammed Hassan, associate professor of physics and optical sciences, have now developed just that: the world’s fastest electron microscope. They believe this development will lead to groundbreaking advancements in physics, chemistry, bioengineering, materials sciences and more.

A transmission electron microscope (TEM) is a tool scientists and researchers use to magnify objects up to millions of times their actual size by directing beams of electrons through a sample, revealing details too small for a traditional light microscope to detect. The interaction between the electrons and the sample is captured by lenses and detected by a camera sensor to generate detailed images of the sample.

The new speed: “attomicroscopy”

To see an electron frozen in place, U of A researchers generated a single attosecond electron pulse, matching the timescale on which electrons move, thereby enhancing the microscope’s temporal resolution, like a high-speed camera capturing movements that would otherwise be invisible.

“The improvement of the temporal resolution inside of electron microscopes has been long anticipated and the focus of many research groups—because we all want to see the electron motion,” Hassan said. “These movements happen in attoseconds. But now, for the first time, we are able to attain attosecond temporal resolution with our transmission electron microscope, and we coined it ‘attomicroscopy.’

“For the first time, we can now see pieces of the electron in motion.”

Citation: Hui, D., Alqattan, H., Sennary, M., Golubev, N. V., & Hassan, M. T. (2024). Attosecond electron microscopy and diffraction. Science Advances. https://www.science.org/doi/10.1126/sciadv.adp5805 (open access)

AI links heat-wave events to global warming

Researchers at Stanford University and Colorado State University have developed a rapid, low-cost machine-learning approach for studying how individual extreme weather events have been affected by global warming.

They found that heat waves as severe as some of the worst in Europe, Russia, and India over the past 45 years could occur multiple times per decade if global temperatures reach 2.0 C above pre-industrial levels. According to the Copernicus Climate Change Service, global warming is currently approaching 1.3 C above pre-industrial levels.

Their method, detailed in an August 21 study in the journal Science Advances, uses machine learning to determine how much global warming has contributed to heat waves in the U.S. and elsewhere in recent years.

The approach proved highly accurate and could change how scientists study and predict the impact of climate change on a range of extreme weather events, according to the researchers. The results can also help to guide climate adaptation strategies and are relevant for lawsuits that seek to collect compensation for damages caused by climate change.

Training AI models to predict daily maximum temperatures

“We’ve seen the impacts that extreme weather events can have on human health, infrastructure, and ecosystems,” said study lead author Jared Trok, a PhD student in Earth system science at the Stanford Doerr School of Sustainability, in a statement. “To design effective solutions, we need to better understand the extent to which global warming drives changes in these extreme events.”

Trok and his co-authors trained AI models to predict daily maximum temperatures, based on the regional weather conditions and the global mean temperature, using data from a large database of climate model simulations extending from 1850 to 2100.

The researchers then used the actual weather conditions from specific real-world heat waves to predict how hot the heat waves would have been if the same weather conditions occurred, but at different levels of global warming. They then compared these predictions at different global warming levels to estimate how climate change influenced the frequency and severity of historical weather events.
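
To make that counterfactual logic concrete, here is a minimal sketch in Python. The synthetic data, the gradient-boosted model and every number below are illustrative stand-ins, not the study’s actual data or architecture:

```python
# Minimal sketch of ML-based counterfactual attribution (illustrative only).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for a large archive of climate-model simulations.
n = 5000
regional_weather = rng.normal(size=(n, 8))      # hypothetical predictor fields
gmt = rng.uniform(0.0, 3.0, size=n)             # global mean warming, C
tmax = (30.0 + 3.0 * regional_weather[:, 0]     # weather-driven variability
        + 1.2 * gmt                             # assumed warming response
        + rng.normal(scale=0.5, size=n))        # noise

X = np.column_stack([regional_weather, gmt])
model = GradientBoostingRegressor().fit(X, tmax)

# Counterfactual: the same weather pattern as an observed heat wave,
# evaluated at different levels of global warming.
event_weather = regional_weather[:1]            # pretend this is a real event
for level in (0.0, 1.3, 2.0):
    pred = model.predict(np.column_stack([event_weather, [[level]]]))
    print(f"warming +{level:.1f} C -> predicted Tmax {pred[0]:.2f} C")
```

The key move is the final loop: the weather pattern is held fixed while only the global warming level changes, which isolates warming’s contribution to the event’s severity.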

Case studies and beyond

The researchers first put their AI method to work analyzing the 2023 Texas heat wave, which contributed to a record number of heat-related deaths in the state that year. The team found that global warming made the historic heat wave 1.18 to 1.42 degrees Celsius (2.12 to 2.56 F) hotter than it would have been without climate change. The researchers also found that their new technique was able to accurately predict the magnitude of record-setting heat waves in other parts of the world. The results were consistent with previously published studies of those events.

Based on these findings, the researchers used AI to predict how severe heat waves could become if the same weather patterns that caused previous record-breaking heat waves instead occurred under higher levels of global warming. They found that events equal to some of the worst heat waves in Europe, Russia, and India over the past 45 years could happen multiple times per decade if global temperatures reach 2.0 C above pre-industrial levels.

Accurate, low-cost analyses of extreme events in more parts of the world

The new AI method addresses some limitations of existing approaches, including those previously developed at Stanford, by using actual historical weather data when predicting the effect of global warming on extreme events. It does not require expensive new climate model simulations because the AI can be trained using existing simulations.

Together, these innovations will enable accurate, low-cost analyses of extreme events in more parts of the world, which is crucial for developing effective climate adaptation strategies. The approach also opens up new possibilities for fast, real-time analysis of the contribution of global warming to extreme weather, according to the researchers. The study was funded by Stanford University and the U.S. Department of Energy.

Citation: Trok, J. T., Barnes, E. A., Davenport, F. V., & Diffenbaugh, N. S. (2024). Machine learning–based extreme event attribution. Science Advances. https://www.science.org/doi/10.1126/sciadv.adl3242 (open access)

‘Metacrimes’ in the metaverse

Australian researchers are investigating “metacrime”—attacks, crimes or inappropriate activities that occur within the metaverse, the virtual world where users of VR headsets can choose an avatar to represent themselves as they interact with other users’ avatars or move through other 3D digital spaces.

Griffith University’s Ausma Bernot has teamed up with researchers from Monash University, Charles Sturt University and the University of Technology Sydney to study the problem. “Unfortunately, while those new environments are very exciting, they also have the potential to enable new crimes. We’ve seen reports of sexual harassment or assault against both adults and kids,” Bernot said in a statement.

The Australian eSafety Commissioner has estimated that around 680,000 adults in Australia are engaged in the metaverse. And the UK’s Center for Countering Digital Hate recorded 11 hours and 30 minutes of user interactions conducted with Meta’s Oculus headset in the popular VRChat app (which hosts more than 25,000 community-created worlds and counting).

Impacts on mental or emotional well-being

The researchers found that most users had faced at least one negative experience in that virtual environment. Those experiences include being called offensive names, receiving repeated unwanted messages or contact, being challenged about cultural identity, being sent unwanted inappropriate content, or being provoked to respond to something or to start an argument.

Eleven percent of the participants had been exposed to a sexually graphic virtual space and nine percent had been touched (virtually) in a way they didn’t like. Of these respondents, 49 percent said the experience had a moderate to extreme impact on their mental or emotional well-being.

Monitor children’s activity

The two largest user groups are minors and men, so Bernot said it was “important for parents to monitor their children’s activity or consider limiting their access to multi-player games. Minors are more vulnerable to grooming and other abuse.

“They may not know how to deal with these situations, and while there are some features like a ‘safety bubble’ within some games (or just take the headset off), once immersed in these environments it does feel very real. It’s somewhere in between a physical attack and a social media harassment message—you’ll still feel that distress and it can take a significant toll on a user’s well-being. It is a real and palpable risk.”

Virtual rape

Monash University’s You Zhou said there had already been many reports of virtual rape, including one in the United Kingdom, where police opened an investigation into the case of a 16-year-old girl whose avatar was attacked, causing psychological and emotional trauma similar to that of an attack in the physical world.

“When immersed in this world of virtual reality, and particularly when using higher quality VR headsets, users will not necessarily stop to consider whether the experience is reality or virtuality,” Zhou said in a statement.

“While there may not be physical contact, victims (mostly young girls) strongly claim the feeling of victimization was real. Without physical signs on a body, and unless the interaction was recorded, it can be almost impossible to show evidence of these experiences.”

The metaverse is expected to grow exponentially in coming years, and the team’s findings highlight the need for metaverse companies to establish clear regulatory frameworks that make their virtual environments safe for everyone to inhabit, the researchers advised.

Citation: Zhou, Y., Tiwari, M., Bernot, A. et al. Metacrime and Cybercrime: Exploring the Convergence and Divergence in Digital Criminality. Asian J Criminol (2024). https://doi.org/10.1007/s11417-024-09436-y (open access)

Thumbnail image credit: A. Angelica/ChatGPT4

How to detect covert AI-generated text

Machine-generated text has been fooling humans since the release of GPT-2 in 2019. Large language model (LLM) tools have gotten progressively better at crafting stories, news articles, student essays and more. So humans are often unable to recognize when they are reading text produced by an algorithm.

Misuse and harmful outcomes

These LLMs are being used to save time and even boost creativity in ideating and writing, but their power can lead to misuse and harmful outcomes that are already showing up where we consume information.

One way both academics and companies are trying to improve detection is by employing machine learning models that can identify subtle patterns of word choice and grammatical construction, recognizing LLM-generated text in ways that human intuition cannot.
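
As a rough illustration of that approach (not any particular commercial detector), a bare-bones detector can be built as a text classifier over word-choice features. The toy training texts below are placeholders; a real system would train on many thousands of labeled documents:

```python
# Bare-bones sketch of a machine-generated-text detector (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "The committee will reconvene after the summer recess.",     # human (toy)
    "As an AI language model, I can certainly help with that.",  # machine (toy)
]
labels = [0, 1]  # 0 = human-written, 1 = machine-generated

detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word-choice and phrasing features
    LogisticRegression(),
)
detector.fit(texts, labels)

print(detector.predict(["Certainly! Here is a short story about a robot."]))
```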

Today, many commercial detectors claim to be highly successful at detecting machine-generated text, with up to 99% accuracy—but are these claims verified? Chris Callison-Burch, professor in computer and information science at the University of Pennsylvania, and Liam Dugan, a doctoral student in Callison-Burch’s group, aimed to find out in their recent paper, published at the 62nd Annual Meeting of the Association for Computational Linguistics.

“As the technology to detect machine-generated text advances, so does the technology used to evade detectors,” said Callison-Burch in a statement. “It’s an arms race, and while the goal to develop robust detectors is one we should strive to achieve, there are many limitations and vulnerabilities in detectors that are available now.”   

Testing AI detector ability

To investigate those limitations and provide a path forward for developing robust detectors, the research team created Robust AI Detector (RAID), a data set of more than 10 million documents across recipes, news articles, blog posts and more, including AI-generated text and human-generated text.

RAID is the first standardized benchmark to test detection ability in current and future detectors. The team also created a public leaderboard that ranks, in an unbiased way, the performance of all detectors evaluated with RAID.

“The concept of a leaderboard has been key to success in many aspects of machine learning like computer vision,” said Dugan in a statement. “The RAID benchmark is the first leaderboard for robust detection of AI-generated text. We hope that our leaderboard will encourage transparency and high-quality research in this quickly evolving field.”

Dugan has already seen the influence this recent paper is having on companies that develop detectors. “Originality.ai is a prominent company that develops detectors for AI-generated text,” he says. “They shared our work in a blog post, ranked their detector in our leaderboard and are using RAID to identify previously hidden vulnerabilities and improve their detection tool.”

How detectors fail

So do current detectors hold up to the task at hand? RAID shows that not many do as well as they claim.

“Detectors trained on ChatGPT were mostly useless in detecting machine-generated text outputs from other LLMs such as Llama and vice versa,” says Callison-Burch. “Detectors trained on news stories don’t hold up when reviewing machine-generated recipes or creative writing. What we found is that there are a myriad of detectors that only work well when applied to very specific use cases and when reviewing text similar to the text they were trained on.” 

Detectors can identify AI-generated text when it contains no edits or “disguises,” but once the text is manipulated, they cannot do so reliably. Faulty detectors are not only an issue because they don’t work well; they can be as dangerous as the AI tool used to produce the text in the first place.

“If universities or schools were relying on a narrowly trained detector to catch students’ use of ChatGPT to write assignments, they could be falsely accusing students of cheating when they are not,” says Callison-Burch. “They could also miss students who were cheating by using other LLMs to generate their homework.”   

Adversarial attacks and tricks

The team also looked into how adversarial attacks (such as replacing letters with look-alike symbols) can easily derail a detector and allow machine-generated text to fly under the radar.

“It turns out, there are a variety of edits a user can make to evade detection by the detectors we evaluated in this study,” says Dugan. “Something as simple as inserting extra spaces, swapping letters for symbols, or using alternative spelling or synonyms for a few words can cause a detector to be rendered useless.”
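
Here is a toy illustration of two such edits, homoglyph substitution and invisible-character insertion. The character mappings and probabilities below are examples for demonstration, not the study’s exact attack set:

```python
# Toy demonstration of detector-evading edits (illustrative only).
import random

HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e"}  # Cyrillic look-alikes
ZWSP = "\u200b"  # zero-width space, invisible when rendered

def perturb(text: str, prob: float = 0.3, seed: int = 0) -> str:
    rng = random.Random(seed)
    out = []
    for ch in text:
        if ch in HOMOGLYPHS and rng.random() < prob:
            out.append(HOMOGLYPHS[ch])   # swap a letter for a look-alike symbol
        else:
            out.append(ch)
        if ch == " " and rng.random() < prob:
            out.append(ZWSP)             # insert an invisible character
    return "".join(out)

evasive = perturb("This essay was generated by a large language model.")
print(evasive)           # looks identical on screen...
print(evasive.encode())  # ...but the underlying bytes no longer match
```

To a human reader the perturbed sentence looks unchanged, but the token patterns a narrowly trained detector relies on are broken.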

While current detectors are not yet robust enough to be of significant use in society, openly evaluating detectors on large, diverse, shared resources is critical to accelerating progress and building trust in detection. That transparency will lead to the development of detectors that hold up across a variety of use cases, the study concludes.

Reducing harms

“My work is focused on reducing the harms that LLMs can inadvertently cause, and, at the very least, making people aware of the harms so that they can be better informed when interacting with information,” Dugan continues. “In the realm of information distribution and consumption, it will become increasingly important to understand where and how text is generated, and this paper is just one way I am working towards bridging those gaps in both the scientific and public communities.”

This study was funded by the Intelligence Advanced Research Projects Activity (IARPA), part of the Office of the Director of National Intelligence, under the Human Interpretable Attribution of Text Using Underlying Structure (HIATUS) program.

Citation: Dugan, L., Hwang, A., Trhlik, F., Ludan, J. M., Zhu, A., Xu, H., & Ippolito, D. (2024). RAID: A Shared Benchmark for Robust Evaluation of Machine-Generated Text Detectors. arXiv. https://arxiv.org/abs/2405.07940 (open access)

Smart fabric for sensing and energy harvesting

University of Waterloo researchers have developed a smart fabric that can convert body heat and solar energy into electricity, potentially replacing batteries and other power sources in energy harvesting, health monitoring, and movement tracking applications.

Sensors integrated into the fabric can monitor temperature, pressure, chemical composition, and more. For example, a smart face mask made from the material could track breath temperature and rate and detect chemicals in the breath to help identify viruses, lung cancer, and other conditions.

The researchers say the material is more stable, durable, and cost-effective than other fabrics on the market. It integrates advanced materials such as MXene and conductive polymers with cutting-edge textile technologies to advance smart fabrics for wearable technology.

The study is published in the Journal of Materials Science & Technology.

The next phase of research will focus on further enhancing the fabric’s performance and integrating it with electronic components, in collaboration with electrical and computer engineers. Future developments may include a smartphone app to track and transmit data from the fabric to healthcare professionals, enabling real-time, non-invasive health monitoring and everyday use.

Citation: Peng, J., Ge, F., Han, W., Wu, T., Tang, J., Li, Y., & Wang, C. (2024). MXene-based thermoelectric fabric integrated with temperature and strain sensing for health monitoring. Journal of Materials Science & Technology, 212, 272-280. https://doi.org/10.1016/j.jmst.2024.06.011

Scientists find oceans of water on Mars

Using seismic data, geophysicists have found evidence for a large underground reservoir of liquid water on Mars—enough to fill oceans on the planet’s surface.

The Scripps Oceanography scientists estimate that the amount of groundwater could cover the entire planet to a depth of between 1 and 2 kilometers (about a mile), based on data from NASA’s InSight lander.
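
As a quick back-of-the-envelope check of what that “global equivalent layer” implies, here is a rough sketch that simply treats Mars as a sphere; the radius is a standard reference value:

```python
# Rough estimate of the implied water volume (illustrative arithmetic).
import math

R_MARS_KM = 3389.5                            # mean radius of Mars, km
surface_area = 4 * math.pi * R_MARS_KM ** 2   # ~1.44e8 km^2

for depth_km in (1, 2):
    volume = surface_area * depth_km          # implied water volume, km^3
    print(f"{depth_km} km layer -> ~{volume:.2e} km^3 of water")
```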

However, the water is located in tiny cracks and pores in rock in the middle of the Martian crust, between 11.5 and 20 kilometers below the surface, making it a challenge to reach by drilling.

Water on the planet’s surface

Study co-author Michael Manga of the University of California, Berkeley noted that lots of evidence—river channels, deltas and lake deposits, as well as water-altered rock—supports the hypothesis that water once flowed on the planet’s surface more than 3 billion years ago, before Mars lost its atmosphere.

The researchers note that understanding the water situation on Mars will help us get closer to knowing if life exists there. The research appears this week in the journal Proceedings of the National Academy of Sciences.

The Canadian Institute for Advanced Research, the National Science Foundation and the U.S. Office of Naval Research supported the work.

Citation: Wright, V., Morzfeld, M., & Manga, M. (2024). Liquid water in the Martian mid-crust. Proceedings of the National Academy of Sciences, 121(35), e2409983121. https://doi.org/10.1073/pnas.2409983121 (open access)

New, more accurate way to deliver medicine to the brain

Houston Methodist researchers have discovered a more accurate and timely way to deliver life-saving drug therapies to the brain, laying the groundwork for more effective treatment of brain tumors and other neurological diseases than with the current convection-enhanced delivery (CED).

Delivering the correct dosage of drugs to the right place in the brain has long been a challenge. The natural blood-brain barrier that protects our brains from toxins and pathogens can also block the delivery of important medical treatments. CED also follows the path of least resistance, so therapeutics don’t always hit the target.

Electric field infuses medicine accurately

So the researchers developed a new process called “electrokinetic convection-enhanced delivery” (ECED), which allows surgeons to design the appropriate delivery path and potentially reach brain lesions and tumors more accurately. Instead of relying on the continuous pressure that CED applies to push a fluid containing therapeutics into the brain, ECED uses an electric field to infuse medicine from a reservoir outside the brain to specific targets inside it.

In the Springer Nature journal Communications Biology, Houston Methodist research scientist and co-author Jesus G. Cruz-Garza explains how ECED infuses macromolecules into the brain from a hydrogel reservoir placed at the brain’s surface. “The brain acts as a charged porous scaffold that in the presence of an electric field allows for electroosmosis—bulk fluid flow in a porous media.” From the hydrogel reservoir, Cruz-Garza explains, this bulk flow of fluid enables the delivery of therapeutic agents.
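
For a sense of scale, the standard Helmholtz-Smoluchowski relation for electroosmotic flow, u = (ε ζ / μ) E, gives a rough estimate of how fast such bulk flow moves. The parameter values in the sketch below are generic order-of-magnitude assumptions, not figures from the Houston Methodist study:

```python
# Rough electroosmotic-flow estimate via Helmholtz-Smoluchowski (illustrative).
EPS0 = 8.854e-12           # vacuum permittivity, F/m
eps = 80 * EPS0            # permittivity of a water-like medium
zeta = -0.02               # assumed zeta potential of the tissue scaffold, V
mu = 1e-3                  # viscosity of water, Pa*s
E = 100.0                  # assumed applied electric field, V/m

u = (eps * zeta / mu) * E  # electroosmotic velocity, m/s
print(f"bulk flow speed ~ {abs(u) * 1e6:.1f} micrometers per second")
```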

This invention improves on the 30-year-old process of injecting therapeutics into the brain via convection-enhanced delivery, say the researchers.

“Delivering therapeutics by way of ECED has many applications,” explained Dr. Amir Faraji, principal investigator and Houston Methodist neurosurgeon, in a statement. “It has the potential to improve gene therapy and to treat traumatic brain injury and degenerative diseases in a more targeted manner.”

The research was conducted at the Houston Methodist Department of Neurosurgery, the Houston Methodist Research Institute, the Center for Neural Systems Restoration and the Center for Neuroregeneration, in conjunction with the Texas A&M University College of Medicine and School of Engineering.

Further research is needed before Faraji and team can bring this investigational therapy to humans, the researchers advise.

Citations:

Cruz-Garza, J.G., Bhenderu, L.S., Taghlabi, K.M. et al. (2024) Electrokinetic convection-enhanced delivery for infusion into the brain from a hydrogel reservoir. Commun Biol 7, 869. https://www.nature.com/articles/s42003-024-06404-1 (open access)

Eid, F., Chen, A. T., Chan, K. Y., Huang, Q., Zheng, Q., Tobey, I. G., Pacouret, S., Brauer, P. P., Keyes, C., Powell, M., Johnston, J., Zhao, B., Lage, K., Tarantal, A. F., Chan, Y. A., & Deverman, B. E. (2024). Systematic multi-trait AAV capsid engineering for efficient gene delivery. Nature Communications, 15(1), 1-14. https://www.nature.com/articles/s41467-024-50555-y (open access)

Tracking how psychedelics affect neurons

Researchers at the University of California, Davis have developed a rapid, noninvasive tool to track the neurons and biomolecules activated in the brain by psychedelic drugs.

The protein-based tool, called Ca2+-activated Split-TurboID, or CaST, is described in research published in Nature Methods.

The new tool could help scientists unlock the benefits of psychedelic treatments for patients with brain disorders. 

There has been mounting interest in the value of psychedelic-inspired compounds as treatments for brain disorders, including depression, post-traumatic stress and substance use. Psychedelic compounds like LSD, DMT and psilocybin promote the growth and strengthening of neurons and their connections in the brain’s prefrontal cortex.

“It’s important to think about the cellular mechanisms that these psychedelics act upon,” said Christina Kim, an assistant professor of neurology at the UC Davis Center for Neuroscience and School of Medicine, and an affiliate of the UC Davis Institute for Psychedelics and Neurotherapeutics, in a statement.

The new technique could be used to track, step by step, the molecular signaling processes responsible for these compounds’ beneficial neuroplastic effects. It takes 10 to 30 minutes, rather than the hours typical of other tagging methods.

“We designed these proteins in the lab that can be packaged into DNA and then put into harmless adeno-associated viruses,” Kim said. “Once we deliver the CaST tool and these proteins into neurons, they incubate inside the cells and start expressing.”

A snapshot of the brain

The CaST tool measures changes in intracellular calcium concentrations, a nearly universal marker of activity in a neuron. When neurons are highly active, their calcium levels rise. CaST uses this cue to tag the cell with a small biomolecule called biotin.
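
The tagging logic can be pictured with a toy sketch: a neuron gets labeled only if its calcium signal is high while the tagging window is open. This is a schematic of the decision rule only, not the actual biochemistry of CaST; all traces and thresholds are made up:

```python
# Toy schematic of calcium-gated tagging (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_timepoints = 5, 100
calcium = rng.normal(loc=1.0, scale=0.3, size=(n_neurons, n_timepoints))
calcium[1] += 1.5   # pretend neuron 1 is strongly activated by the drug

window = slice(40, 80)   # hypothetical period when the tool is active
threshold = 2.0          # hypothetical "high activity" calcium level

tagged = (calcium[:, window] > threshold).any(axis=1)
print("biotin-tagged neurons:", np.flatnonzero(tagged))
```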

In the study, Kim and her colleagues dosed mice with the psychedelic psilocybin. They then used CaST in tandem with biotin to identify neurons with increased calcium in the prefrontal cortex (an area affected by many brain disorders and where psychedelics promote neuronal growth and strengthening).

The researchers also monitored head-twitch responses in freely behaving mice, the primary behavioral correlate of hallucinations caused by psychedelics.

Next steps

Kim and her colleagues are now working on methods to enable brain-wide cellular labeling with the CaST tool and ways to enrich the signature of individual proteins produced by neurons affected by psychedelics. 

“We can send those samples to the UC Davis Proteomics Core Facility and they can give us an unbiased picture of all the proteins we identified,” Kim said.

The goal is to identify how psychedelics benefit the cellular profiles of those with brain disorders, elucidating the step-by-step cellular process of their therapeutic effects. 

The work was supported by grants from the Brain and Behavior Research Foundation, Kinship Foundation, Arnold and Mabel Beckman Foundation, NIH, NSF and the Boone Family Foundation. 

Citation: Zhang, R., Anguiano, M., Aarrestad, I. K., Lin, S., Chandra, J., Vadde, S. S., Olson, D. E., & Kim, C. K. (2024). Rapid, biochemical tagging of cellular activity history in vivo. Nature Methods, 1-11. https://doi.org/10.1038/s41592-024-02375-7 (open access)

Sustainable AI for healthcare

The rapid development of AI has brought it into healthcare—particularly the field of radiology.

AI-based diagnostic systems are flourishing, with hospitals quickly adopting the technology to assist radiologists, for example. However, there are concerns about the environmental impact of increasingly complex AI models.

Energy consumption of medical AI systems

So Associate Professor Daiju Ueda of Osaka Metropolitan University’s Graduate School of Medicine led a research team investigating the energy consumption of AI systems in the medical field, the carbon emissions of data centers, and electronic-waste issues. The goal: developing energy-efficient AI models, implementing green computing, and using renewable energy.

In the study, the researchers proposed specific guidelines for the sustainable deployment of AI in the medical field in an environmentally responsible manner.

“The challenge for the future will be to verify and further elaborate these recommendations in actual medical practice,” said Ueda in a statement. “They are also expected to contribute to the standardization of methods for assessing AI’s environmental impact and the development of an international regulatory framework.”

The results were published in Diagnostic and Interventional Imaging.

Citation: Ueda, D., Walston, S. L., Fujita, S., Fushimi, Y., Tsuboyama, T., Kamagata, K., Yamada, A., Yanagawa, M., Ito, R., Fujima, N., Kawamura, M., Nakaura, T., Matsui, Y., Tatsugami, F., Fujioka, T., Nozaki, T., Hirata, K., & Naganawa, S. (2024). Climate change and artificial intelligence in healthcare: Review and recommendations towards a sustainable future. Diagnostic and Interventional Imaging. https://doi.org/10.1016/j.diii.2024.06.002 (open access)

Mapping the body’s internal sensory communication highway

Running from the brain to the large intestine, the vagus nerve is the body’s longest cranial nerve, encoding sensory information from the visceral organs. It plays a key role in respiratory, gastrointestinal, cardiovascular, endocrine and immune system functions.

Interoception—the body’s ability to sense its internal state in a timely and precise manner—is facilitated by the vagus sensory neurons, which independently code the three critical features of a body signal: the involved visceral organ, the tissue layer where the signal originates, and the type of sensory modality.

This was just discovered by Qiancheng Zhao, an assistant professor of medicine-endocrinology and McNair Scholar at Baylor College of Medicine, who also holds an affiliation with the Department of Neuroscience at the Yale University School of Medicine.

Encoding sensory information

Until now, however, it had not been clear how this nerve encodes sensory information from the visceral organs.

Applying a variety of techniques, Zhao discovered that vagal sensory neurons independently code the three critical features of a body signal: the involved visceral organ; the tissue layer where the signal originates; and the type of sensory modality.

Decoding vagal sensory nerve traffic

The vagal highway is crowded with traffic, with sensory and motor pathways intermingling in the nerve bundle. One of the challenges of expanding vagus nerve stimulation (VNS) is finding ways to identify and target specific vagal signals instead of broadly stimulating the nerve, which can create unwanted side effects.

“We know that vagal sensory neurons can project to the visceral organs,” said Zhao. “So our question was: what signals from those visceral organs need to be sensed by vagal sensory neurons?”

To decipher the complexity of vagal sensory nerve traffic, Zhao and his colleagues focused on three aspects: the visceral organ sending the signal, the tissue layer in the organ where the signal originates, and the kind of sensory stimulus.

To identify organ-projecting neurons, they combined a viral tracing approach with single-cell RNA sequencing. The analysis revealed that vagal sensory neurons use different gene modules to code specific visceral organs. The researchers also traced neuronal projections in transparent, whole-mounted mouse organs to determine how vagal sensory neurons innervate the layers of specific tissues.

Zhao and his colleague Chuyue Yu at Yale University also developed an in vivo calcium imaging technique. It allowed them to identify the molecular features of vagal sensory neurons responding to different types of stimuli, such as mechanical inflation in the lung and chemical stimuli of nutrients from a protein shake in the mouse GI tract.

“When we have the anatomical map together with the molecular information and the inputs from functional imaging, then we can really have a full picture to understand the sensory logic,” Zhao said.

Clinical applications

Zhao’s research could inform future therapies that stimulate the vagus nerve to treat a variety of physical and psychiatric disorders, using VNS through an implantable electrical pulse generator (which has been approved by the U.S. Food and Drug Administration to treat drug-resistant epilepsy and depression).

Zhao said his findings could provide potential new vagal targets, so future researchers might use this genetic information to help them access different visceral organs precisely. The more detailed map also could help clarify the vagus nerve’s role in interoception and whether there might be neural modulatory applications for treating interoceptive disorders.

Aging

Zhao is also interested in discovering more about how vagal sensory neurons might behave differently across the lifespan and under different disease conditions. “We know that our physiological parameters, such as heart rate and blood pressure will change during aging so we might want to look, for example, at how the aging state changes the interaction between the sensory nerve and different organs.”

For his work investigating and mapping this internal information highway, Zhao is the 2024 grand prize winner of the Science & PINS Prize for Neuromodulation.

Citation: Zhao, Q. (2024). Navigating internal senses: A road map for the vagal interoceptive system. Science. https://www.science.org/doi/10.1126/science.adq8578 (open access)
