New energy-efficient brain-like transistor mimics human intelligence

Taking inspiration from the human brain, researchers have developed a new synaptic transistor capable of higher-level thinking.

Researchers at Northwestern University, Boston College and MIT have developed a device that both processes and stores information, like the human brain. The transistor goes beyond simple machine-learning tasks: it can categorize data and perform associative learning.

Previous transistor devices could not function outside cryogenic temperatures (-310 degrees Fahrenheit or lower). The new device works at room temperature. It also operates at fast speeds, consumes very little energy and retains stored information even when power is removed, making it ideal for real-world applications.

The study was published today (Dec. 20) in the journal Nature.

Co-locating memory and information processing

“The brain has a fundamentally different architecture than a digital computer,” said Northwestern’s Mark C. Hersam, who co-led the research. “In a digital computer, data move back and forth between a microprocessor and memory, consuming a lot of energy and creating a bottleneck when attempting to perform multiple tasks at the same time.

In the brain, by contrast, “memory and information processing are co-located and fully integrated, resulting in orders of magnitude higher energy efficiency. Our synaptic transistor similarly achieves concurrent memory and information processing functionality to more faithfully mimic the brain,” he said.

High power consumption

Recent advances in artificial intelligence (AI) have motivated researchers to develop computers that operate more like the human brain. Conventional digital computing systems have separate processing and storage units, causing data-intensive tasks to devour large amounts of energy. 

Currently, the memory resistor, or “memristor,” is the most well-developed technology that can perform combined processing and memory function. But memristors still suffer from energy-costly switching.

“For several decades, the paradigm in electronics has been to build everything out of transistors and use the same silicon architecture,” Hersam said. “Significant progress has been made by simply packing more and more transistors into integrated circuits.”

But this comes “at the cost of high power consumption, especially in the current era of big data, where digital computing is on track to overwhelm the grid. We have to rethink computing hardware, especially for AI and machine-learning tasks.”

Based on moiré patterns

To rethink this paradigm, Hersam and his team explored new advances in the physics of moiré patterns, a type of geometrical design that arises when two patterns are layered on top of one another. When two-dimensional materials are stacked, new properties emerge that do not exist in one layer alone. And when those layers are twisted to form a moiré pattern, unprecedented tunability of electronic properties becomes possible.

For the new device, the researchers combined two different types of atomically thin materials: bilayer graphene and hexagonal boron nitride. When stacked and purposefully twisted, the materials formed a moiré pattern. By rotating one layer relative to the other, the researchers could achieve different electronic properties in each graphene layer, even though they are separated by only atomic-scale dimensions. With the right choice of twist, researchers harnessed moiré physics for neuromorphic functionality at room temperature.
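For a rough sense of the length scales involved, the moiré period can be estimated from the twist angle and the small (roughly 1.8%) lattice mismatch between graphene and hexagonal boron nitride. The short Python sketch below uses the standard geometric approximation and textbook lattice constants; the numbers are illustrative and are not taken from the paper.

```python
import math

def moire_period(a_nm: float, delta: float, theta_deg: float) -> float:
    """Approximate moire superlattice period for two hexagonal lattices with
    lattice constants a and a*(1 + delta), twisted by theta degrees."""
    theta = math.radians(theta_deg)
    return (1 + delta) * a_nm / math.sqrt(2 * (1 + delta) * (1 - math.cos(theta)) + delta**2)

# Graphene lattice constant ~0.246 nm; hBN's is ~1.8% larger.
a_graphene = 0.246
mismatch = 0.018

for twist in (0.0, 0.5, 1.0, 2.0):
    period = moire_period(a_graphene, mismatch, twist)
    print(f"twist = {twist:>3.1f} deg -> moire period ~ {period:.1f} nm")

# At zero twist the period is ~14 nm, and it shrinks quickly as the twist angle
# grows, which is why a small rotation gives such strong control over the
# resulting electronic properties.
```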

“With twist as a new design parameter, the number of permutations is vast,” Hersam said. “Graphene and hexagonal boron nitride are very similar structurally but just different enough that you get exceptionally strong moiré effects.”

Higher-level “associative learning”

To test the transistor, Hersam and his team trained it to recognize similar—but not identical—patterns. Earlier this month, Hersam introduced a nanoelectronic device capable of analyzing and categorizing data in an energy-efficient manner, but this new synaptic transistor takes machine learning and AI a leap further.

“If AI is meant to mimic human thought, one of the lowest-level tasks would be to classify data, which is simply sorting into bins,” Hersam said. “Our goal is to advance AI technology in the direction of higher-level thinking. Real-world conditions are often more complicated than current AI algorithms can handle, so we tested our new devices under more complicated conditions to verify their advanced capabilities.”

First, the researchers showed the device one pattern: 000 (three zeros in a row). Then, they asked the AI to identify similar patterns, such as 111 or 101. “If we trained it to detect 000 and then gave it 111 and 101, it knows 111 is more similar to 000 than 101,” Hersam explained. “000 and 111 are not exactly the same, but both are three digits in a row. Recognizing that similarity is a higher-level form of cognition, known as associative learning.”
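The release does not describe how the device encodes these patterns, but the distinction it draws, bit-by-bit matching versus recognizing the shared “three of a kind” structure, can be illustrated with a toy Python comparison. The two scoring functions below are hypothetical and purely illustrative; they say nothing about the device's actual physics or training protocol.

```python
def hamming_similarity(a: str, b: str) -> int:
    """Low-level similarity: count of positions where the digits match."""
    return sum(x == y for x, y in zip(a, b))

def feature_similarity(a: str, b: str) -> int:
    """Higher-level similarity: compare an abstract feature of each pattern,
    here simply whether all of its digits are identical."""
    def all_same(s: str) -> bool:
        return len(set(s)) == 1
    return int(all_same(a) == all_same(b))

trained = "000"
for probe in ("111", "101"):
    print(probe,
          "hamming:", hamming_similarity(trained, probe),
          "feature:", feature_similarity(trained, probe))

# Bit-by-bit matching ranks 101 closer to 000 than 111 is, but the
# feature-level comparison groups 000 with 111 ("three in a row"),
# which is the kind of association the transistor was shown to make.
```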

In experiments, the new synaptic transistor successfully recognized similar patterns, displaying its associative memory. Even when the researchers threw curveballs — like giving it incomplete patterns—it still successfully demonstrated associative learning.

“Current AI can be easy to confuse, which can cause major problems in certain contexts,” Hersam said. “Imagine if you are using a self-driving vehicle, and the weather conditions deteriorate. The vehicle might not be able to interpret the more complicated sensor data as well as a human driver could. But even when we gave our transistor imperfect input, it could still identify the correct response.”

Hersam co-led the research with Qiong Ma of Boston College and Pablo Jarillo-Herrero of MIT. He is the Walter P. Murphy Professor of Materials Science and Engineering at Northwestern’s McCormick School of Engineering and chair of the department of materials science and engineering, director of the Materials Research Science and Engineering Center and member of the International Institute for Nanotechnology.

The study, “Moiré synaptic transistor with room-temperature neuromorphic functionality,” was primarily supported by the National Science Foundation.

Citation: Yan, X., Zheng, Z., Sangwan, V.K. et al. Dec. 20, 2023. Moiré synaptic transistor with room-temperature neuromorphic functionality. Nature 624, 551–556 (2023). https://doi.org/10.1038/s41586-023-06791-1


Nuke that asteroid! Is that a good idea?

Situation: An asteroid is heading toward Earth. Scientists calculate that a conventional kinetic impactor cannot deflect it in time to prevent a catastrophic impact, because of limits on the mass that can be lifted into space.

Solution: Researchers at Lawrence Livermore National Laboratory (LLNL) say they have developed a modeling tool for assessing the potential use of a nuclear device. The research, published today (Dec. 19, 2023) in Planetary Science Journal, introduces a novel approach to simulating the energy deposition from a nuclear device on an asteroid’s surface.

This new tool improves our understanding of how radiation from a nuclear device interacts with an asteroid’s surface. It also opens the door to new research on the shockwave dynamics affecting the asteroid’s interior.

This model will allow researchers to build upon the insights gained from NASA’s Double Asteroid Redirection Test (DART) mission of September 2022, in which a spacecraft was deliberately crashed into an asteroid to alter its trajectory.

[DART animated infographic]

Nuclear devices have the highest ratio of energy density per unit of mass of any human technology, making them an invaluable tool in mitigating asteroid threats, said LLNL physicist Mary Burkey, who led the research.

A long-range decision: to smash or deflect?

“If we have enough warning time, we could potentially launch a nuclear device, sending it millions of miles away to an asteroid that is headed toward Earth,” Burkey said. “We would then detonate the device and either deflect the asteroid, keeping it intact but providing a controlled push away from Earth, or we could disrupt the asteroid, breaking it up into small, fast-moving fragments that would also miss the planet.”

Accurate predictions for the effectiveness of nuclear deflection missions rely on sophisticated multiphysics simulations, Burkey said, explaining that LLNL simulation models cover a wide range of physical factors, which makes them complex and computationally demanding.

Asteroid x-rays

The paper also introduces an efficient and accurate library of X-ray energy deposition functions. High-fidelity simulations tracked photons penetrating surfaces of asteroid-like materials such as rock, iron, and ice, while accounting for more complex processes, such as reradiation.

The model also considers a diverse set of initial conditions, including different porosities, source spectra, radiation fluences, source durations, and angles of incidence. This comprehensive approach makes the model applicable to a wide range of potential asteroid scenarios. 

High-fidelity simulation modeling

Should a real planetary defense emergency arise, high-fidelity simulation modeling will be critical in providing decision-makers with actionable, risk-informed information that could prevent asteroid impact, protect essential infrastructure and save lives, explained Megan Bruck Syal, LLNL’s planetary defense project lead.

“While the probability of a large asteroid impact during our lifetime is low, the potential consequences could be devastating,” Bruck Syal said.

Citation: Mary T. Burkey, Robert A. Managan, Nicholas A. Gentile, Megan Bruck Syal, Kirsten M. Howley and Joseph V. Wasem. December 19, 2023. Planetary Science Journal, Vol. 4, No. 12. https://iopscience.iop.org/article/10.3847/PSJ/ad0838 (open access).


AI’s memory-forming mechanism found to be strikingly similar to the brain’s

Researchers at the Institute for Basic Science (IBS) in South Korea have discovered a striking similarity between AI memory processing of transformer models and the hippocampus of the human brain.

The finding provides a novel perspective on memory consolidation, the process that transforms short-term memories into long-term ones, as it applies to AI systems.

Hippocampus role in memory

The team focused on memory consolidation through the NMDA receptor in the hippocampus, which facilitates learning and memory formation. When a brain chemical called glutamate is present, the nerve cell undergoes excitation, but a magnesium ion acts as a small gatekeeper blocking the door. When this ionic gatekeeper steps aside, substances are allowed to flow into the cell. This is the process that allows the brain to create and keep memories.

Low-cost, high-performance AI systems

The team discovered that large language models, such as ChatGPT, seem to use a gatekeeping process similar to the brain’s NMDA receptor.

In the animal brain, a low magnesium level is known to weaken memory function. The researchers found that long-term memory in the transformer can be improved by mimicking the NMDA receptor. Similar to how changing magnesium levels affect memory strength, tweaking the transformer’s parameters to reflect the gating action of the NMDA receptor led to enhanced memory in the AI model.
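The announcement gives no implementation details, so the Python/PyTorch fragment below is only a loose, hypothetical sketch of what an NMDA-style gate on a model's hidden activations could look like. The function name `nmda_gate` and the `mg` and `beta` parameters are invented for illustration and are not from the IBS paper.

```python
import torch

def nmda_gate(x: torch.Tensor, mg: float = 1.0, beta: float = 5.0) -> torch.Tensor:
    """Hypothetical NMDA-style gate: activations pass in proportion to a
    sigmoidal 'unblocking' term whose threshold is set by a magnesium-like
    parameter mg. Larger mg -> harder to unblock -> weaker retention."""
    return x * torch.sigmoid(beta * (x - mg))

# Toy usage: gate the hidden activations of a feed-forward block.
hidden = torch.randn(4, 8)          # stand-in for a transformer's hidden layer
gated = nmda_gate(hidden, mg=0.5)   # lowering mg lets more activation through
print(gated.shape)
```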

CHA Meeyoung, a data scientist in the team, notes that “the human brain is remarkable in how it operates with minimal energy, unlike the large AI models that need immense resources. Our work opens up new possibilities for low-cost, high-performance AI systems that learn and remember information like humans.”


Deep neural networks mimic human hearing

A new study from MIT has found that computational models derived from machine learning are moving closer to mimicking the structure and function of the human auditory system.

This could help researchers design better hearing aids, cochlear implants, and brain-machine interfaces, the researchers suggest.

In this largest study yet of deep neural networks trained to perform auditory tasks, the MIT team showed that most of these models share properties seen in the human brain when people are listening to the same sounds.

Mimicking the human auditory system

When trained on auditory inputs, including background noise, the models closely mimic the activation patterns of the human auditory cortex, matching those seen in fMRI scans of people listening to the same sounds.

“This is the most comprehensive comparison of these kinds of models to the auditory system so far,” says Josh McDermott, an associate professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines, and the senior author of the study, which appears in PLOS Biology.

Auditory models

For this study, the researchers analyzed nine publicly available deep neural network models that had been trained to perform auditory tasks. They also created 14 models of their own, based on two different architectures.

Most of these models were trained to perform a single task, such as recognizing words, identifying the speaker, recognizing environmental sounds, or identifying musical genre, while two of them were trained to perform multiple tasks.

The researchers found that models that had been trained on different tasks were better at replicating different aspects of audition. For example, models trained on a speech-related task more closely resembled speech-selective areas.

Better models

McDermott’s lab now plans to develop models that are even more successful at reproducing human brain responses. Such models could help scientists learn more about how the brain may be organized, and could also be used to help develop better hearing aids, cochlear implants, and brain-machine interfaces.

“A goal of our field is to end up with a computer model that can predict brain responses and behavior. We think that if we are successful in reaching that goal, it will open a lot of doors,” McDermott says.

The research was funded by the National Institutes of Health, an Amazon Fellowship from the Science Hub, an International Doctoral Fellowship from the American Association of University Women, an MIT Friends of McGovern Institute Fellowship, a fellowship from the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT, and a Department of Energy Computational Science Graduate Fellowship.

Citation: Greta Tuckute, Jenelle Feather, Dana Boebinger, Josh H. McDermott. December 13, 2023, PLOS Biology, https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3002366 (open-access)


Scientists unveil complete cell map of a whole mammalian brain

An NIH-funded international team of researchers has created the first complete cell atlas of a whole mammalian brain. The atlas, which maps the mouse brain, describes the type, location, and molecular information of more than 32 million cells and provides information on the connectivity between these cells.

The mouse is the most commonly used vertebrate experimental model in neuroscience research, and this cellular map paves the way for a greater understanding of the human brain. The cell atlas also lays the foundation for developing a new generation of precision therapeutics for people with mental and neurological disorders of the brain.

The findings were funded by the National Institutes of Health’s Brain Research Through Advancing Innovative Neurotechnologies® Initiative, or The BRAIN Initiative®, and appear in a collection of 10 papers published in Nature.

“The mouse atlas has brought the intricate network of mammalian brain cells into unprecedented focus, giving researchers the details needed to understand human brain function and diseases,” said Joshua A. Gordon, M.D., Ph.D., Director of the National Institute of Mental Health, part of the National Institutes of Health.

Unprecedented map of cellular organization and diversity

The cell atlas describes the types of cells in each region of the mouse brain and their organization within those regions. It also provides a detailed catalog of the cell’s transcriptome—the complete set of gene readouts in a cell, which contains instructions for making proteins and other cellular products.

The atlas also characterizes the cell epigenome—chemical modifications to a cell’s DNA and chromosomes that alter the way the cell’s genetic information is expressed—detailing thousands of epigenomic cell types and millions of candidate genetic regulation elements for different brain cell types.

A blueprint of transmitted chemical signals

The atlas also provides an accounting of the neurotransmitters and neuropeptides used by different cells and the relationships among cell types within the brain. This information can be used as a detailed blueprint for how chemical signals are initiated and transmitted in different parts of the brain. Those signals are the basis for how brain circuits operate and how the brain functions overall.

“This product is a testament to the power of this unprecedented, cross-cutting collaboration and paves our path for more precision brain treatments,” said John Ngai, Ph.D., Director of the NIH BRAIN Initiative.

Of the 10 studies included in this collection, seven are funded through the NIH BRAIN Initiative Cell Census Network (BICCN), and two are funded through the larger NIH BRAIN Initiative.

The core aim of the BICCN, a groundbreaking, cross-collaborative effort to understand the brain’s cellular makeup, is to develop a comprehensive inventory of the cells in the brain—where they are, how they develop, how they work together, and how they regulate their activity—to better understand how brain disorders develop, progress, and are best treated.

Next big step: completing the cell maps of the human brain and the nonhuman primate brain

“By leveraging the unique nature of its multi-disciplinary and international collaboration, the BICCN was able to accomplish what no other team of scientists has been able to before,” said Ngai. “Now we are ready to take the next big step—completing the cell maps of the human brain and the nonhuman primate brain.”

The BRAIN Initiative Cell Atlas Network (BICAN) is the next stage in the NIH BRAIN Initiative’s effort to understand the cells and cellular functions of the mammalian brain.

Aim: revolutionize neuroscience research

According to NIH, BICAN is a transformative project that, together with two other large-scale projects (the BRAIN Initiative Connectivity Across Scales and the Armamentarium for Precision Brain Cell Access), aims to revolutionize neuroscience research by illuminating foundational principles governing the circuit basis of behavior and informing new approaches to treating human brain disorders.

Citation: Yao, Z., van Velthoven, C. T. J., Kunst, M., Zhang, M., McMillen, D., Lee, C., Jung, W., Goldy, J., Abdelhak, A., Aitken, M., Baker, K., Baker, P., Barkan, E., Bertagnolli, D., Bhandiwad, A., Bielstein, C., Bishwakarma, P., Campos, J., Carey, D., … Zeng, H. (2023). A high-resolution transcriptomic and spatial atlas of cell types in the whole mouse brain. Nature. https://www.nature.com/articles/s41586-023-06812-z (open-access)


A brain-scale neuromorphic supercomputer

Would it be possible one day to create a supercomputer capable of emulating networks at the scale of the human brain?

Yes. It’s called DeepSouth. And researchers from the International Centre for Neuromorphic Systems (ICNS) at Western Sydney University plan to have it operational by April 2024, they announced at this week’s 2023 ICNS NeuroEng Workshop.

Mimics biological processes, so less power required

DeepSouth uses a neuromorphic system that mimics biological processes, using hardware to efficiently emulate large networks of spiking neurons at 228 trillion synaptic operations per second—rivaling the estimated rate of operations in the human brain.
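To see why 228 trillion synaptic operations per second is described as brain-scale, here is a back-of-envelope comparison using widely quoted ballpark figures for the human brain. The neuron count, synapses-per-neuron estimate, and firing-rate range below are rough assumptions, not numbers from the announcement.

```python
# Back-of-envelope comparison (assumed ballpark figures, not from the announcement):
neurons = 86e9               # commonly cited count of neurons in the human brain
synapses_per_neuron = 1e4    # order-of-magnitude estimate
firing_rates_hz = (0.1, 1.0) # plausible range of average firing rates

total_synapses = neurons * synapses_per_neuron
low, high = (total_synapses * r for r in firing_rates_hz)
print(f"brain estimate: ~{low:.2e} to ~{high:.2e} synaptic events per second")
print("DeepSouth's quoted figure: 2.28e+14 operations per second")

# 228 trillion ops/s sits inside this very rough range, which is the sense in
# which the machine "rivals" the brain's estimated rate of operations.
```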

DeepSouth is purpose-built to operate like networks of neurons, so it will require less power and enable greater efficiencies, says ICNS Director Professor André van Schaik. This contrasts with supercomputers optimized for more traditional computing loads, which are power-hungry.

“Progress in our understanding of how brains compute using neurons is hampered by our inability to simulate brain-like networks at scale,” he said. “Simulating spiking neural networks on standard computers using graphics processing units (GPUs) and multicore central processing units (CPUs) is just too slow and power intensive. Our system will change that.”

Large-scale AI and other applications

“This platform will progress our understanding of the brain and develop brain-scale computing applications in diverse fields, including sensing, biomedical, robotics, space, and large-scale AI applications.”

Practically, this will lead to advances in smart devices, such as mobile phones, sensors for manufacturing and agriculture, and less power-hungry and smarter AI applications, he said. It will also enable a better understanding of how a healthy or diseased human brain works.

Western Sydney University’s ICNS team collaborated with partners across the neuromorphic field in developing this ground-breaking project, with researchers from the University of Sydney, University of Melbourne, and University of Aachen, Germany.

The supercomputer is aptly named DeepSouth, paying homage to IBM’s TrueNorth system, which initiated efforts to build machines simulating large networks of spiking neurons, and to Deep Blue, the first computer to defeat a world chess champion. The name is also a nod to its geographical location.


Mind-reading AI turns thoughts into text

Researchers at the University of Technology Sydney (UTS) have developed a portable, non-invasive system that can decode silent thoughts and turn them into text.

The technology could aid communication for people who are unable to speak due to illness or injury, including stroke or paralysis. It could also enable seamless communication between humans and machines, such as operating a bionic arm or robot.

The research at the GrapheneX-UTS Human-centric Artificial Intelligence Centre was led by Distinguished Professor CT Lin, Director of the GrapheneX-UTS HAI Centre.

AI learning process

In the study, participants silently read passages of text while wearing a cap that recorded electrical brain activity through their scalp, using an electroencephalogram (EEG). An AI model called DeWave, developed by the researchers, translates these EEG signals into words and sentences.


“It is the first to incorporate discrete encoding techniques in the brain-to-text translation process, introducing an innovative approach to neural decoding,” Lin said.
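The release does not detail DeWave’s architecture, but “discrete encoding” generally means snapping continuous features onto a learned codebook of tokens. The PyTorch fragment below is a hypothetical, minimal illustration of that idea; the codebook size, feature dimensions, and variable names are made up for the sketch and are not from the paper.

```python
import torch

# Hypothetical illustration of "discrete encoding": map continuous EEG
# features to the nearest entry of a learned codebook (vector quantization).
codebook = torch.randn(512, 64)          # 512 discrete codes, 64-dim each (made-up sizes)
eeg_features = torch.randn(1, 100, 64)   # 100 EEG time-steps of features (made-up shape)

distances = torch.cdist(eeg_features, codebook.unsqueeze(0))  # (1, 100, 512)
token_ids = distances.argmin(dim=-1)                          # one discrete code per time-step
quantized = codebook[token_ids]                               # embeddings handed to a text decoder

print(token_ids.shape, quantized.shape)  # torch.Size([1, 100]) torch.Size([1, 100, 64])
```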

No need for invasive Neuralink-type implants or an MRI machine

He also said the new system avoids the methods previously required to translate brain signals into language: surgery to implant electrodes in the brain, as in Elon Musk’s Neuralink, or scanning in an MRI machine, which is large, expensive, and difficult to use in daily life.

The UTS research was carried out with 29 participants, so it’s likely to be more robust and adaptable than previous decoding technology that has only been tested on one or two individuals at a time.

Accuracy to be improved

The translation accuracy score is currently around 40% on BLEU-1 (a number between zero and one that measures the similarity of the machine-translated text to a set of high-quality reference translations). The researchers hope to see this improve to a level that is comparable to traditional language translation or speech recognition programs, which is closer to 90%.
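For readers unfamiliar with the metric, BLEU-1 simply measures unigram (single-word) overlap between a system’s output and reference text. A minimal Python illustration with NLTK, using made-up sentences rather than data from the study:

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog"]]
candidate = ["the", "fast", "brown", "fox", "leaps", "over", "a", "lazy", "dog"]

# weights=(1, 0, 0, 0) scores unigram overlap only, i.e. BLEU-1.
score = sentence_bleu(reference, candidate, weights=(1, 0, 0, 0),
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU-1 = {score:.2f}")  # ~0.67 for this toy pair
```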

The research follows on from previous brain-computer interface technology developed by UTS in association with the Australian Defence Force. It uses brainwaves to command a quadruped robot, demonstrated in this ADF video.

A paper will be presented at the NeurIPS conference in New Orleans on Dec. 12, 2023, and will be linked here.



Ultrasound microrobots guide medications to treat brain tumors, other conditions

Brain tumors, brain hemorrhages and neurological and psychological conditions are often hard to treat with medication. Even when effective drugs are available, they tend to have severe side effects because they circulate throughout the brain, not just the area they are meant to treat.

So researchers at ETH Zurich, the University of Zurich and University Hospital Zurich have now developed mini-transporters that can be guided through the dense maze of blood vessels in the brain of an animal, using ultrasound. Compared to magnetic fields and other methods, “ultrasound is widely used in the medical field, is safe, and penetrates deep into the body,” said Daniel Ahmed, Professor of Acoustic Robotics at ETH Zurich and supervisor of the study.

Ultrasound-guided microbubbles “likely to be approved”

The gas-filled microbubbles, about 1.5 micrometers in diameter, are coated in lipids, the substances that biological cell membranes are made of. The bubbles are used as contrast agents and are guided through blood vessels.

“Since these bubbles, or vesicles, are already approved for use in humans, it’s likely that our technology will be approved and used in treatments for humans more quickly than other types of microvehicles currently in development,” says Ahmed, who was awarded a Starting Grant by the European Research Council ERC in 2019 for his project to research and develop this technology.

Ultrasound-guided microbubbles also dissolve in the body once they’ve done their job, unlike magnetic-based microvehicles, and are small and smooth. “This makes it easy for us to guide them along narrow capillaries,” says Alexia Del Campo Fonseca, a doctoral student in Ahmed’s group and lead author of the study.

Tests in mice

Ahmed’s team has tested this method on blood vessels in the brains of mice. They also used ultrasound to hold the vesicles in place and guide them through the brain vessels against the direction of blood flow, including convoluted blood vessels.

The researchers attached four small transducers to the outside of each mouse’s skull. These devices generate vibrations in the ultrasonic range, which spread through the brain as waves.

To create the imaging for this study, the researchers used two-photon microscopy; in the future, they want to use ultrasound itself for imaging. The researchers plan to use these microvehicles for treating cancer, stroke and psychological conditions. Their next step will be to attach drug molecules to the outside of the bubble casing for transport. They want to advance the entire method to the point at which it can be used in humans, hoping it will one day provide the basis for the development of new treatments.

Citation: Alexia Del Campo Fonseca, Chaim Glück, Jeanne Droux, Yann Ferry, Carole Frei, Susanne Wegener, Bruno Weber, Mohamad El Amki, Daniel Ahmed (2023). Ultrasound trapping and navigation of microrobots in the mouse brain vasculature. Nature Communications, volume 14, Article number: 5889 (21-Sep-2023). https://www.nature.com/articles/s41467-023-41557-3 (open-access)


Blue light exposure can damage neuronal function and induce aging

In a study on fruit flies, daily low-intensity blue-light exposure—similar to that experienced daily by billions of humans in the form of LED lighting and device screens—changed flies at the sub-cellular level, affecting processes related to aging and circadian rhythms.

Xiaoyun Wang and colleagues exposed fruit flies (Drosophila melanogaster) to different durations of daily low-intensity blue light, and then analyzed the consequences to the cellular makeup of the insects, compared to flies raised in darkness.

Blue light’s effects on flies

The authors measured blue light’s effects on N6-Methyladenosine (m6A), an RNA modification that is common across the tree of life and plays a role in a wide range of processes.

The authors found that blue light induced transcriptomic** (m6A epitranscriptomic) and metabolomic reprogramming.

Damage to neuronal function

The RNA profiles of 25-day-old male fly heads exposed to blue light were significantly different from 25-day-old male dark-raised flies’ heads, say the authors, and the types of genes that were up- and down-regulated* suggest that blue light exposure can damage neuronal function.

In comparisons of whole-body transcriptomes**, however, age created bigger differences between groups than light exposure did, suggesting that the eye and brain tissues of Drosophila are the major tissues affected by blue light.

Differences in mRNA m6A levels and other m6A-related markers between light- and dark-raised flies also indicate that m6A methylation (which regulates reproductive system development) was involved in the impacts of blue light on Drosophila.

Hazards with blue light

According to the authors, attention should be paid to the potential hazards of cumulative blue light exposure in humans because blue light can influence the circadian rhythm and processes mediated by melanopsin-expressing intrinsically photosensitive retinal ganglion cells.

* The process by which a cell increases and decreases (respectively) the production and quantities of its cellular components, such as RNA and proteins, in response to an external stimulus

** The set of all the ribonucleic acid (RNA) molecules (called “transcripts”) expressed in some given entity, such as a cell, tissue, or organism

Citation: Jia Huang, Fan Zhou, Huanchan Zhou, Xiaoqi Zheng, Zhengyi Huo, Meng Yang, Zihe Xu, Runzhou Liu, Luoluo Wang, Xiaoyun Wang, Systematic assessment of transcriptomic and metabolic reprogramming by blue light exposure coupled with aging, PNAS Nexus, Volume 2, Issue 12, December 2023, pgad390, https://doi.org/10.1093/pnasnexus/pgad390 (open-access)


Solar-powered atmospheric water harvesting

Researchers from Shanghai Jiao Tong University in China have developed a new solar-powered atmospheric water harvesting technology that could help provide enough drinking water for people to survive in dryland areas in water-stressed countries.

The United Nations estimates that 3.5 million people die every year from water-related diseases, and the areas most in need of improved drinking water are also located in some of the sunniest places in the world.

Multiple uses

“This atmospheric water harvesting technology can be used to increase the daily water supply, such as household drinking water, industrial water, and water for personal hygiene,” said author Ruzhu Wang.

The researchers synthesized a super-hygroscopic gel using plant derivatives and hygroscopic salts that were capable of absorbing and retaining an unparalleled amount of water. One kilogram of dry gel could adsorb 1.18 kilograms of water in arid atmospheric environments and up to 6.4 kilograms in humid atmospheric environments, they estimate. This hygroscopic gel was simple and inexpensive to prepare and would consequently be suitable for large-scale preparation.
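As a quick, idealized back-of-envelope calculation based on the figures quoted above, assuming one full adsorption-desorption cycle per day, perfect water recovery, and a hypothetical two-kilogram batch of dry gel (none of which is claimed in the paper):

```python
# Rough daily-yield arithmetic from the figures quoted above.
# Assumes one full adsorption/desorption cycle per day and perfect recovery,
# which is an idealization, not a claim from the paper.
uptake_arid_kg_per_kg = 1.18
uptake_humid_kg_per_kg = 6.4
dry_gel_kg = 2.0   # hypothetical batch size

print(f"arid climate:  ~{dry_gel_kg * uptake_arid_kg_per_kg:.1f} kg of water per day")
print(f"humid climate: ~{dry_gel_kg * uptake_humid_kg_per_kg:.1f} kg of water per day")
```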

In addition to daily water production, these sorbent materials that harvest atmosphere water could also play an important role in future applications such as dehumidification, agriculture irrigation, and thermal management for electronic devices, according to the researchers.

In an outdoor prototype demonstration, the team found it released adsorbed water even in the morning or afternoon when the sun is weak. The system could also achieve simultaneous adsorption and desorption during the daytime.

The team now plans to work to achieve simultaneous adsorption and desorption, using renewable energy to maximize daily water yield per unit mass of adsorbent.

Citation: Chengjie Xiang, Xinge Yang, Fangfang Deng, Zhihui Chen, and Ruzhu Wang, Dec. 5, 2023, Applied Physics Reviews. https://doi.org/10.1063/5.0160682 (open-access)
