Solar-powered atmospheric water harvesting

Researchers from Shanghai Jiao Tong University in China have developed a new solar-powered atmospheric water harvesting technology that could help provide enough drinking water for people to survive in dryland areas in water-stressed countries.

The United Nations estimates that 3.5 million people die every year from water-related diseases. The areas most in need of improved drinking water are also among the sunniest places in the world.

Multiple uses

“This atmospheric water harvesting technology can be used to meet daily water supply needs, such as household drinking water, industrial water, and water for personal hygiene,” said author Ruzhu Wang.

The researchers synthesized a super-hygroscopic gel from plant derivatives and hygroscopic salts, capable of absorbing and retaining an unparalleled amount of water. One kilogram of dry gel could adsorb 1.18 kilograms of water in arid atmospheric environments and up to 6.4 kilograms in humid atmospheric environments, they estimate. The gel was simple and inexpensive to prepare, making it suitable for large-scale production.
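As a rough sense of scale, the reported uptake figures translate into daily drinking-water supply as follows. This is a back-of-the-envelope sketch: the 2-liter-per-person daily need and the one-cycle-per-day assumption are ours, not from the paper.

```python
# Rough estimate of drinking water supplied per kg of dry gel,
# based on the uptake figures reported in the article (illustrative only).

UPTAKE_ARID = 1.18   # kg water adsorbed per kg dry gel (arid air)
UPTAKE_HUMID = 6.40  # kg water per kg dry gel (humid air)
DAILY_NEED = 2.0     # assumed liters (~kg) of drinking water per person per day

def people_supplied(gel_kg: float, uptake: float, cycles_per_day: int = 1) -> float:
    """Water harvested per day divided by one person's assumed daily need."""
    return gel_kg * uptake * cycles_per_day / DAILY_NEED

# 10 kg of gel, one adsorption/desorption cycle per day:
print(people_supplied(10, UPTAKE_ARID))   # arid climate
print(people_supplied(10, UPTAKE_HUMID))  # humid climate
```

Running more than one adsorption/desorption cycle per day, as the simultaneous daytime operation suggests, scales the yield proportionally.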

In addition to daily water production, these sorbent materials that harvest atmosphere water could also play an important role in future applications such as dehumidification, agriculture irrigation, and thermal management for electronic devices, according to the researchers.

In an outdoor prototype demonstration, the team found that the gel released adsorbed water even in the early morning and late afternoon, when sunlight is weak. The system could also achieve simultaneous adsorption and desorption during the daytime.

The team now plans to refine simultaneous adsorption and desorption, using renewable energy to maximize the daily water yield per unit mass of adsorbent.

Citation: Chengjie Xiang, Xinge Yang, Fangfang Deng, Zhihui Chen, and Ruzhu Wang, Dec. 5, 2023, Applied Physics Reviews. https://doi.org/10.1063/5.0160682 (open access)

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Wearable helmet with magnetic sensors records brain functions while patient is moving

A new wearable brain scanner can accurately record magnetic fields generated by brain activity while people are in motion. It will enable researchers to learn more about brain development and neurological conditions that can affect movement, including autism, epilepsy, stroke, concussion, and Parkinson’s disease, say researchers. 

“This advance could also make it easier to conduct brain scans in young children and individuals with neurological disorders, who can’t always remain still in conventional scanners,” said Niall Holmes, Ph.D., a Mansfield Research Fellow in the School of Physics and Astronomy at the University of Nottingham, who led the research. “Unconstrained movement during a scan opens a wealth of possibilities for clinical investigation and allows a fundamentally new range of neuroscientific experiments.”

How magnetic fields are recorded

When neurons interact, they generate a small electric current. This current produces a magnetic field that can be detected, recorded, and analyzed by sensitive magnetic sensors, using a technique called magnetoencephalography (MEG).

MEG technology can record brain signals every millisecond. By overlaying the neuronal sources of these magnetic fields onto an anatomical image of the brain, clinicians can visualize where and when specific brain activities originate. However, current MEG systems are bulky and rigid, like an old-fashioned salon hair dryer under which you must keep your head still. And because conventional MEG sensors require cryogenic cooling, they can’t be placed directly on the scalp.

Now researchers at the University of Nottingham have used a new generation of magnetic field sensors called “optically pumped magnetometers” (OPMs), which operate at room temperature and can be placed close to the head, enhancing data quality. They are also flexible, allowing children and adults to move during scanning. 

The researchers also designed a magnetic shielding system that would cancel out or compensate for external magnetic fields (see “Technical background: Designing a matrix coil system” below).

Research applications

A company co-founded by Holmes and his colleagues is selling the OPM-MEG systems (which include a magnetically shielded room) to research centers in North America and Europe to conduct a variety of neuroscientific experiments.

One of the U.S. centers using the OPM-MEG system is Virginia Polytechnic Institute and State University (Virginia Tech), which collaborated with the Nottingham team on another study to determine how well the helmet worked when two individuals each wore one and then interacted. For this proof-of-concept study, the team conducted two experiments involving social interaction.

“To really study how the human brain works, we have to embed people in their favorite natural environment—that’s a social setting,” said Read Montague, Ph.D., the principal investigator of the Virginia Tech team and director of the university’s Center for Human Neuroscience Research. The research was published this year in Sensors.

In one experiment, the two participants stroked each other’s hands; in the other, they played a game of ping pong. Both experiments showed that despite large and unpredictable motions by the participants, each person’s brain activity was clearly recorded.

The company is collecting data to obtain approval from regulatory bodies, including the FDA, to deploy the system in clinical populations, which can take up to five years.

Technical background: Designing a matrix coil system

To shield against background magnetic-field noise, the Nottingham research team constructed a system of electromagnetic coils positioned on panels around the participant. Prior research published in Nature showed that eight large coils could cancel the background magnetic fields, but only at a fixed position that allowed just small head movements.

Holmes and his team instead designed a new matrix coil system that features 48 smaller coils on two panels positioned around the participant. The coils can be individually controlled and continually recalibrate to compensate for the magnetic field changes experienced by the moving sensors, ensuring high-quality MEG data are recorded.

“This enables magnetic field compensation in any position, which makes OPM-MEG scans more comfortable for everyone and allows people to walk around,” said Holmes.
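The core idea of field compensation can be illustrated as a small least-squares problem: given a coupling matrix that maps each coil’s current to the field seen at each sensor, solve for the currents that cancel the measured background. This is only a toy sketch with invented numbers, not the team’s calibration code.

```python
import numpy as np

# Toy model: 3 sensors, 4 coils. A[i, j] = field at sensor i per unit
# current in coil j (nT/mA). In the real system this coupling changes
# as the wearer moves; the values here are invented.
A = np.array([
    [0.8, 0.1, 0.3, 0.0],
    [0.2, 0.9, 0.1, 0.4],
    [0.0, 0.3, 0.7, 0.6],
])
background = np.array([5.0, -3.0, 2.0])  # measured background field (nT)

# Least-squares coil currents that drive the sensed field toward zero:
currents, *_ = np.linalg.lstsq(A, -background, rcond=None)
residual = background + A @ currents
print(residual)  # near zero: background cancelled at the sensors
```

Because the coupling matrix changes whenever the wearer moves, a system like the matrix coil must keep re-measuring it and recomputing the currents, which is what the continual recalibration described above accomplishes.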

The researchers demonstrated the capabilities of the new matrix coil system with four experiments. They first wanted to show that the stationary helmet (not worn by anyone) placed inside the two coil panels could reduce background magnetic fields, which it did. Then a healthy participant wore the helmet, demonstrating that the OPMs recorded his brain function when he moved his head and that the coils cancelled the magnetic fields.

A third experiment used a wire coil as a proxy for brain cell activity because it produces magnetic fields when electric currents are applied. The wire coil attached to the helmet with OPM sensors showed that the matrix coil compensated for motion-related changes, ensuring accurate measurements. The last experiment showed that the helmet worn by a second healthy participant could produce a high-quality recording of brain activity when walking around.

“By taking advantage of recent OPM-MEG technology and designing a new magnetic shielding system, this helmet represents a novel magnetoencephalography approach that could help reveal more about how the brain works,” said Shumin Wang, Ph.D., a program director in the U.S. National Institute of Biomedical Imaging and Bioengineering (NIBIB) Division of Applied Science & Technology (Bioimaging).

Citations: Holmes, N., et al. Enabling ambulatory movement in wearable magnetoencephalography with matrix coil active magnetic shielding. NeuroImage (2023). https://www.sciencedirect.com/science/article/pii/S1053811923003087?via%3Dihub (open access); and Holmes, N., et al. Naturalistic Hyperscanning with Wearable Magnetoencephalography. Sensors (2023). https://www.mdpi.com/1424-8220/23/12/5454 (open access)


Scientists build tiny healing biological robots from human cells

These multicellular bots move around and help heal “wounds” created in cultured neurons (and other possible uses)

Researchers at Tufts University and Harvard University’s Wyss Institute have created tiny biological robots, which they call Anthrobots, from human tracheal cells. The bots can move across a surface and encourage the growth of neurons across a region of damage in a lab dish.

The self-assembling, multicellular robots, which range in size from the width of a human hair to the point of a sharpened pencil, were shown to have a remarkable healing effect on other cells. The discovery is a starting point for the researchers’ vision of using patient-derived biobots as new therapeutic tools for regeneration, healing, and treatment of disease.

The work follows from earlier research in the laboratories of Michael Levin, Vannevar Bush Professor of Biology at Tufts University School of Arts & Sciences, and Josh Bongard at the University of Vermont in which they created multicellular biological robots (biobots) from frog embryo cells called Xenobots. These are capable of navigating passageways, collecting material, recording information, healing themselves from injury, and even replicating for a few cycles on their own.

In the current study, published in Advanced Science, Levin, along with PhD student Gizem Gumuskaya, discovered that such bots can be created from adult human cells without any genetic modification and demonstrate some capabilities beyond what was observed with the Xenobots.

Repairing neurons

The researchers prompted human tracheal cells to self-assemble into new multicellular shapes that move over a surface of human neurons grown in a lab dish, encouraging new growth to fill in gaps created by scratching the layer of cells. (We all experience the work of ciliated cells when we take the final step of expelling particles and excess fluid by coughing or clearing our throats.)

Exactly how the Anthrobots encourage growth of neurons is not yet clear, but the researchers confirmed that neurons grew under the area covered by a clustered assembly of Anthrobots, which they called a “superbot.”

“The cellular assemblies we construct in the lab can have capabilities that go beyond what they do in the body,” said Levin, who also serves as the director of the Allen Discovery Center at Tufts and is an associate faculty member of the Wyss Institute.

“It is fascinating and completely unexpected that normal patient tracheal cells, without modifying their DNA, can move on their own and encourage neuron growth across a region of damage,” he said. “We’re now looking at how the healing mechanism works, and asking what else these constructs can do.”

The advantages of using human cells include the ability to construct bots from a patient’s own cells to perform therapeutic work without the risk of triggering an immune response or requiring immunosuppressants.

How to make Anthrobots

Each Anthrobot starts out as a single cell, derived from an adult donor. The cells come from the surface of the trachea and are covered with hairlike projections called cilia that wave back and forth. The cilia help the tracheal cells push out tiny particles that find their way into the air passages of the lungs.

Earlier studies by others had shown that when the cells are grown in the lab, they spontaneously form tiny multicellular spheres called organoids.

Potential other vital uses

Levin says that if other features could be added to the Anthrobots (for example, contributed by different cells), they could be designed to respond to their environment, travel to and perform functions in the body, or help build engineered tissues in the lab.

Other uses of the bots could include clearing plaque buildup in the arteries of atherosclerosis patients, repairing spinal cord or retinal nerve damage, recognizing bacteria or cancer cells, or delivering drugs to targeted tissues.

A swarm of Anthrobots. Credit: Gizem Gumuskaya, Tufts University

Citation: Gumuskaya, G., Srivastava, P., Cooper, B. G., Lesser, H., Semegran, B., Garnier, S., & Levin, M. Motile Living Biobots Self-Construct from Adult Human Somatic Progenitor Seed Cells. Advanced Science, 2303575. https://doi.org/10.1002/advs.202303575 (open access)


Astronomers discover star with six planets in orbits with ‘rhythmic beat’

Astronomers have discovered six planets that orbit the bright star HD 110067 with a rhythmic beat, a rare case of an “in sync” gravitational lockstep that could offer “deep insight into planet formation and evolution,” according to NASA.

An international team of astronomers, led by Rafael Luque, of the University of Chicago, published a paper online on the discovery, “A resonant sextuplet of sub-Neptunes transiting the bright star HD 110067,” in the journal Nature on Nov. 29.

Precise musical patterns

The strange family of six “sub-Neptune” planets (possibly smaller versions of our own Neptune) is moving in a cyclic rhythm. “This orbital waltz repeats itself so precisely it can be readily set to music,” notes NASA.

“The planet closest to the star makes three orbits for every two of the next planet out—called a 3/2 resonance, a pattern that is repeated among the four closest planets.” A pattern of four orbits for every three of the next planet out (a 4/3 resonance) is repeated twice.

“And these resonant orbits are rock-solid: The planets likely have been performing this same rhythmic dance since the system formed billions of years ago.”
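The resonance chain makes each outer period a simple multiple of the innermost one. A quick sketch (the roughly 9.1-day innermost period is an approximate published value, used here as an assumption):

```python
# Orbital periods implied by the resonance chain (illustrative).
# Period ratios between neighboring planets, inner to outer:
RATIOS = [3/2, 3/2, 3/2, 4/3, 4/3]

def period_chain(innermost_days: float, ratios=RATIOS) -> list:
    """Each planet's period is the previous planet's period times the ratio."""
    periods = [innermost_days]
    for r in ratios:
        periods.append(periods[-1] * r)
    return periods

# Innermost planet of HD 110067 orbits in roughly 9.1 days:
print([round(p, 1) for p in period_chain(9.1)])
```

Measuring any one planet’s period therefore pins down all six, which is part of why resonant chains are so valuable for characterizing a system.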

This animation shows six “sub-Neptune” exoplanets in rhythmic orbits around their star, with a musical tone as each planet passes a line drawn through the system. The line is where the planets cross in front of (“transit”) their star from Earth’s perspective. In these rhythms, known as “resonance,” the innermost planet makes three orbits for every two of the next planet out. Among the outermost planets, a pattern of four orbits for every three of the next planet out is repeated twice. Animation credit: Dr. Hugh Osborn, University of Bern

Citation: Luque, R., Osborn, H. P., Leleu, A., Pallé, E., Bonfanti, A., Barragán, O., Wilson, T. G., Broeg, C., Cameron, A. C., Lendl, M., Maxted, P. F., Alibert, Y., Gandolfi, D., Delisle, J., Hooton, M. J., Egger, J. A., Nowak, G., Lafarga, M., Rapetti, D., . . . Zingales, T. (2023). A resonant sextuplet of sub-Neptunes transiting the bright star HD 110067. Nature, 623(7989), 932-937. https://doi.org/10.1038/s41586-023-06692-3


Google DeepMind invents nearly 400,000 new compounds for use by researchers

Google DeepMind has contributed nearly 400,000 new compounds to the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) Materials Project for use by researchers.

Think future materials like lighter alloys that improve fuel economy in cars, more efficient solar cells to boost renewable energy, and faster transistors for the next generation of computers. 

The Materials Project, an open-access database founded at Berkeley Lab in 2011, computes the properties of both known and predicted materials for use in future technologies. The new data describe each compound’s crystal structure (how its atoms are arranged) and its stability (formation energy).

“We have to create new materials if we are going to address the global environmental and climate challenges,” said Kristin Persson, the founder and director of the Materials Project at Berkeley Lab and a professor at UC Berkeley. “With innovation in materials, we can potentially develop recyclable plastics, harness waste energy, make better batteries, and build cheaper solar panels that last longer, among many other things.”

Deep learning tool

To generate the new data, Google DeepMind developed a deep learning tool called Graph Networks for Materials Exploration, or GNoME. Researchers trained GNoME using workflows and data that were developed over a decade by the Materials Project, and improved the GNoME algorithm through active learning.
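In outline, an active-learning loop of this kind alternates between predicting which candidates the model is least sure about, verifying them with an expensive ground-truth calculation, and folding the results back into training. The sketch below is schematic, with stand-in predict and verify functions, not DeepMind’s actual pipeline.

```python
def active_learning(candidates, predict, verify, rounds=3, batch=2):
    """Schematic active-learning loop: query the candidates the model is
    most uncertain about, verify them with an expensive oracle (e.g. a
    DFT calculation), and grow the labeled training set."""
    labeled = {}
    for _ in range(rounds):
        unlabeled = [c for c in candidates if c not in labeled]
        if not unlabeled:
            break
        # Query the most uncertain candidates first.
        ranked = sorted(unlabeled, key=lambda c: -predict(c)["uncertainty"])
        for c in ranked[:batch]:
            labeled[c] = verify(c)  # expensive ground-truth computation
        # A real system would retrain the model on `labeled` here.
    return labeled

# Toy stand-ins: uncertainty from a hash; verification labels evens "stable".
cands = [f"compound_{i}" for i in range(10)]
result = active_learning(
    cands,
    predict=lambda c: {"uncertainty": hash(c) % 100},
    verify=lambda c: int(c.split("_")[1]) % 2 == 0,
)
print(len(result))  # 6 candidates verified over 3 rounds of batch 2
```

The payoff is that the expensive verification budget is spent where the model learns the most, rather than uniformly across millions of candidates.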

GNoME researchers ultimately produced 2.2 million crystal structures, including 380,000 predicted to be stable (and therefore potentially useful in future technologies) that are being added to the Materials Project. The new results from Google DeepMind are published today in the journal Nature.

Quick discovery

Some of the computations from GNoME were used alongside data from the Materials Project to test A-Lab, a facility at Berkeley Lab where artificial intelligence guides robots in making new materials. A-Lab’s first paper, also published today in Nature, showed that the autonomous lab can quickly discover novel materials with minimal human input. 

To make the novel compounds predicted by the Materials Project, A-Lab’s AI created new recipes by combing through scientific papers and using active learning to make adjustments. Data from the Materials Project and GNoME were used to evaluate the materials’ predicted stability.

The Materials Project is the most widely used open-access repository of information on inorganic materials in the world. The database holds millions of properties on hundreds of thousands of structures and molecules, information primarily processed at Berkeley Lab’s National Energy Research Scientific Computing Center.

More than 400,000 people are registered as users of the site and, on average, more than four papers citing the Materials Project are published every day. The contribution from Google DeepMind is the biggest addition of structure-stability data from a group since the Materials Project began.

New data freely available

The Materials Project is now processing the compounds from Google DeepMind and adding them into the online database. The new data will be freely available to researchers, and also feed into projects such as A-Lab that partner with the Materials Project.

By following promising leads from data in the Materials Project over the past decade, researchers have experimentally confirmed useful properties in new materials across several areas.

These include carbon capture (pulling carbon dioxide from the atmosphere), photocatalysts (materials that speed up chemical reactions in response to light and could be used to break down pollutants or generate hydrogen), thermoelectrics (materials that could help harness waste heat and turn it into electrical power) and transparent conductors (which might be useful in solar cells, touch screens, or LEDs).

Citation: Szymanski, N. J., Rendy, B., Fei, Y., Kumar, R. E., He, T., Milsted, D., McDermott, M. J., Gallant, M., Cubuk, E. D., Merchant, A., Kim, H., Jain, A., Bartel, C. J., Persson, K., Zeng, Y., & Ceder, G. (2023). An autonomous laboratory for the accelerated synthesis of novel materials. Nature, 1-6. https://doi.org/10.1038/s41586-023-06734-w (open access)


Immersive ‘presence’ in mixed reality can be measured using reaction time

With mixed-reality devices such as Meta’s Quest 3 or Apple’s forthcoming Vision Pro, which combine virtual and augmented reality, the feeling that virtual content is real is called “presence.”

Now researchers at the University of Massachusetts Amherst have identified reaction time as a possible way to measure presence in real time. That’s better than asking the user to respond to a questionnaire after exiting, they note.

To test this, the researchers put participants in modified “Fruit Ninja” mixed-reality scenarios. In one experiment, they adjusted the place illusion: the fruit appeared either as lifelike fruit or as abstract cartoons. In another, they manipulated the plausibility illusion by showing mugs filling up with coffee, either in the correct upright position or sideways.

They found that people reacted more quickly to the lifelike fruit than to the cartoonish-looking fruit.
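The underlying comparison is straightforward: average reaction time per condition, plus an effect size. The numbers below are invented for illustration; the study’s actual analysis is more involved.

```python
from statistics import mean, stdev

# Invented reaction times in milliseconds for the two renderings.
lifelike = [412, 398, 430, 405, 391, 420]
cartoon = [455, 470, 441, 462, 480, 449]

def cohens_d(a, b):
    """Effect size: mean difference over the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
              / (na + nb - 2)) ** 0.5
    return (mean(b) - mean(a)) / pooled

# Lower mean reaction time = faster response = higher presence.
print(mean(lifelike), mean(cartoon))
print(round(cohens_d(lifelike, cartoon), 2))
```

A system monitoring presence internally would track a statistic like this continuously and adjust the rendering whenever responses slow down.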

Important in medical procedures

Yasra Chandio, doctoral candidate in computer engineering and lead study author, gives medical procedures as an example of the importance of this real-time presence calibration: If a surgeon needs millimeter-level precision, they may use mixed reality as a guide to tell them exactly where they need to operate.  

“If we just show the organ in front of them, and we don’t adjust for the height of the surgeon, for instance, that could be delaying the surgeon and could have inaccuracies for them,” she says. Low presence can also contribute to cybersickness, a feeling of dizziness or nausea that can occur when a user’s bodily perception does not align with what they’re seeing.

However, if the mixed-reality system is internally monitoring presence, it can make adjustments in real time, like moving the virtual organ rendering to eye level. 

Citation: Y. Chandio, N. Bashir, V. Interrante and F. M. Anwar, “Investigating the Correlation Between Presence and Reaction Time in Mixed Reality,” in IEEE Transactions on Visualization and Computer Graphics, doi: 10.1109/TVCG.2023.3319563


Hybrid biological transistors behave like living tissue

A team at Tufts University Silklab has created microprocessor-scale transistors that can respond directly to the environment and change like living tissue.

To do that, the researchers replaced the transistors’ insulating material with “silk fibroin,” the structural protein of silk fibers, as they reported in the journal Advanced Materials.

The silk fibroin material can be precisely deposited onto surfaces, where it can be easily modified with other chemical and biological molecules to change its properties, according to the researchers. This allows the material to pick up and detect a wide range of components from the body or environment.

Highly sensitive breath sensor detects hidden diseases

A prototype device developed by the researchers used hybrid transistors to make a highly sensitive and ultrafast breath sensor that could detect changes in humidity.

This allowed the devices to detect some cardiovascular and pulmonary diseases, as well as sleep apnea. The sensors could also pick up carbon dioxide levels and other gas molecules in the breath, which could provide diagnostic information. Used with blood plasma, they could potentially report levels of oxygenation and glucose, circulating antibodies, and more.

Bioactive inks for fabrics to detect changes in the environment or body, such as the COVID-19 virus

Prior to the development of the hybrid transistors, the Silklab, led by Fiorenzo Omenetto, the Frank C. Doble Professor of engineering, had already used fibroin to make bioactive inks for fabrics that detect changes in the environment or on the body; sensing tattoos that can be placed under the skin or on the teeth to monitor health and diet; and sensors that can be printed on any surface to detect pathogens such as the virus responsible for COVID-19.

How It Works

A transistor is simply an electrical switch, with a metal electrical lead coming in and another going out. In between the leads is the semiconductor material.

Another source of electrical input, called a gate, is separated from everything else by an insulator. The gate acts as the “key” to turn the transistor on and off. It triggers the on-state when a threshold voltage (which we will call “1”) creates an electric field across the insulator, priming electron movement in the semiconductor and starting the flow of current through the leads.

In a biological hybrid transistor, a silk layer is used as the insulator, and when it absorbs moisture, it acts like a gel carrying whatever ions (electrically charged molecules) are contained within. The gate triggers the on-state by rearranging ions in the silk gel. By changing the ionic composition in the silk, the transistor operation changes, allowing it to be triggered by any gate value between zero and one.
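A toy transistor model makes the contrast concrete: a conventional device switches at one fixed threshold, while, on the researchers’ description, changing the ions in the silk insulator effectively shifts that threshold, so intermediate gate values between 0 and 1 can switch the device. The model below is a standard square-law sketch, not the paper’s device physics.

```python
def drain_current(v_gate, v_threshold, k=1.0):
    """Toy transistor model: no current below threshold; above it,
    current grows with the square of the overdrive voltage."""
    overdrive = v_gate - v_threshold
    return 0.0 if overdrive <= 0 else k * overdrive ** 2

# Conventional digital transistor: fixed threshold, switched by a "1".
print(drain_current(1.0, v_threshold=0.7))

# Hybrid silk transistor (illustrative): a different ionic composition
# in the silk insulator shifts the effective threshold, so the same
# intermediate gate value (0.6 here) may or may not switch the device.
for vt in (0.2, 0.5, 0.8):
    print(round(drain_current(0.6, v_threshold=vt), 2))
```

In the analog-computing framing of the next section, that tunable threshold is what lets the silk composition, rather than a binary gate voltage, carry information.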

Analog computing with microprocessors

“You could imagine creating circuits that make use of information that is not represented by the discrete binary levels used in digital computing, but can process variable information, as in analog computing, with the variation caused by changing what’s inside the silk insulator,” said Omenetto.

“This opens up the possibility of introducing biology into computing within modern microprocessors,” he said. Of course, the most powerful known biological computer is the brain, which processes information with variable levels of chemical and electrical signals.

Self-training hybrid biological transistors

The technical challenge in creating hybrid biological transistors was to achieve silk processing at the nanoscale, down to 10nm, or less than 1/10000th the diameter of a human hair. “Having achieved that, we can now make hybrid transistors with the same fabrication processes that are used for commercial chip manufacturing,” said Beom Joon Kim, postdoctoral researcher at the School of Engineering. “This means you can make a billion of these with capabilities available today.”

Having billions of transistor nodes with connections reconfigured by biological processes in the silk could lead to microprocessors that could act like the neural networks used in AI. “Looking ahead, one could imagine having integrated circuits that train themselves, respond to environmental signals, and record memory directly in the transistors, rather than sending it to separate storage,” said Omenetto.

Devices detecting and responding to more complex biological states and large-scale analog and neuromorphic computing are yet to be created. Omenetto is optimistic for future opportunities. “This opens up a new way of thinking about the interface between electronics and biology, with many important fundamental discoveries and applications ahead.”

Citation: Kim, B. J., Bonacchini, G. E., Ostrovsky-Snider, N. A., & Omenetto, F. G. (2023). Bimodal Gating Mechanism in Hybrid Thin-Film Transistors Based on Dynamically Reconfigurable Nanoscale Biopolymer Interfaces. Advanced Materials, 35(45), 2302062. https://doi.org/10.1002/adma.202302062


Bacteria store memories and pass them on for generations, researchers discover

Bacteria can create something similar to memories that indicate when to cause dangerous infections in people or when to form bacterial swarms, in which millions of bacteria come together on a single surface.

This remarkable discovery by researchers at The University of Texas at Austin (UT) has potential applications for preventing and combatting bacterial infections and addressing antibiotic-resistant bacteria, say the researchers.

Bacterial cells use a common chemical element, iron, to form and pass along these “memories” to their progeny over later generations. E. coli bacteria use iron levels as a way to store information about different behaviors that can then be activated in response to certain stimuli.

Why bacteria swarm

Scientists previously observed that bacteria that had a prior experience of swarming (moving on a surface as a collective, using flagella) improve subsequent swarming performance. So the UT-led research team set out to learn why.

“Bacteria don’t have brains, but they can gather information from their environment, and if they have encountered that environment frequently, they can store that information and quickly access it later for their benefit,” explained Souvik Bhattacharyya, lead author and a member of the Department of Molecular Biosciences at UT.

The iron-man bacterium

It’s about iron—one of the most abundant elements on Earth. Singular, free-floating bacteria have varying levels of iron.

And scientists observed that bacterial cells with lower levels of iron were better swarmers. In contrast, bacteria that formed biofilms—dense, sticky mats of bacteria on solid surfaces—had high levels of iron in their cells.

Bacteria with antibiotic tolerance also had balanced levels of iron. These iron memories persist for at least four generations and disappear by the seventh generation.

“Before there was oxygen in the Earth’s atmosphere, early cellular life was utilizing iron for a lot of cellular processes. Iron is not only critical in the origin of life on Earth, but also in the evolution of life,” Bhattacharyya said. “It makes sense that cells would utilize it in this way.”

Forming a biofilm

Researchers theorize that when iron levels are low, bacterial memories are triggered to form a fast-moving migratory swarm to seek out iron in the environment. But when iron levels are high, memories indicate they should stick around and form a biofilm.

“Iron levels are definitely a target for therapeutics because iron is an important factor in virulence,” Bhattacharyya said. “Ultimately, the more we know about bacterial behavior, the easier it is to combat them.”

The findings are published in the Proceedings of the National Academy of Sciences. The research was funded by the National Institutes of Health.

Citation: Bhattacharyya, S., Bhattarai, N., Pfannenstiel, D. M., Wilkins, B., Singh, A., & Harshey, R. M. (2023). A heritable iron memory enables decision-making in Escherichia coli. Proceedings of the National Academy of Sciences, 120(48), e2309082120. https://doi.org/10.1073/pnas.2309082120


Children’s brains shaped by their time on tech devices

Time spent watching television or playing computer games has measurable, long-term negative and positive effects on children’s brain function, according to a review of 23 years of neuroimaging research.

Published Nov. 16 in the peer-reviewed journal Early Education and Development, the analysis of 33 studies used neuroimaging technology to measure the impact of digital technology on the brains of more than 30,000 children under the age of 12. 

Brain changes

Among the findings: screen time leads to changes in the brain’s prefrontal cortex, the base of executive functions such as working memory and the ability to plan or respond flexibly to situations. The review also found impacts on the parietal lobe, which helps us process touch, pressure, heat, cold, and pain; the temporal lobe, which is important for memory, hearing, and language; and the occipital lobe, which helps us interpret visual information.

The research team, which included experts from the Education University of Hong Kong, the Shanghai Normal University in China and Macquarie University in Australia, wanted to know how digital activity affected the brain’s plasticity—or malleability—in critical periods of development. It is known that visual development mostly takes place before the age of eight, while the key time for language acquisition is up to 12.

They synthesized and evaluated studies on children’s digital use of screens, games, virtual scenes and associated brain development published between January 2000 and April 2023, with the ages of participants ranging from six months upwards.

Negative impacts

The paper concludes that these early digital experiences are having a significant impact on the shape and functioning of children’s brains, with effects that can be both positive and negative, but mainly negative.

For example, some studies found that screen time negatively influences the brain function required for attention, executive control, inhibitory control, cognitive processing, and functional connectivity. Other studies suggested that higher screen time is associated with lower functional connectivity in brain areas related to language and cognitive control, potentially adversely affecting cognitive development.

Some device-specific research was also assessed in the research pool. Tablet users were found to have poorer brain function and performance on problem-solving tasks. In four studies, video gaming and heavy internet use were found to produce negative changes in brain areas, impacting intelligence scores and brain volume.

And general “intensive media usage” was shown to potentially impact visual processing and higher cognitive function regions.

Positive impacts

There were six studies, however, demonstrating how these digital experiences can positively impact a child’s brain functionality. One found improved focusing and learning abilities in the frontal lobe of the brain. Another study suggested playing video games can increase cognitive demand, potentially enhancing children’s executive functions.

CITATION: Dandan Wu, Xinyi Dong, Danqing Liu & Hui Li (2023) How Early Digital Experience Shapes Young Brains During 0-12 Years: A Scoping Review, Early Education and Development, DOI: 10.1080/10409289.2023.2278117 (open-access)


Boosting vaccines and immunotherapies with machine learning

Small molecules called “immunomodulators” can help create more effective vaccines and stronger immunotherapies to treat cancer.

But finding the molecules that trigger the right immune response is difficult. The number of drug-like small molecules has been estimated at 10⁶⁰, far greater than the number of stars in the visible universe.

Using machine learning to screen molecules

So a team from the Pritzker School of Molecular Engineering (PME) at The University of Chicago tackled the problem by using machine learning to guide high-throughput experimental screening of this vast search space.

In a potential first for the field of vaccine design, machine learning guided the discovery of new immune pathway-enhancing molecules. It found one particular small molecule that could outperform the best immunomodulators on the market. The results are published in the journal Chemical Science.

“We used artificial intelligence methods to guide a search of a huge chemical space,” said Prof. Aaron Esser-Kahn, co-author of the paper, who led the experiments. “In doing so, we found molecules with record-level performance that no human would have suggested we try. We’re excited to share the blueprint for this process.”

Immunomodulators work by changing the signaling activity of innate immune pathways within the body. In particular, the NF-κB pathway plays a role in inflammation and immune activation, while the IRF pathway is essential in antiviral response.

Earlier this year, the PME team conducted a high-throughput screen of 40,000 combinations of molecules to see if any affected these pathways. They then tested the top candidates and found that when those molecules were added to adjuvants (ingredients that help boost the immune response in vaccines), the molecules increased antibody response and reduced inflammation.

To find more candidates, the team used these results combined with a library of nearly 140,000 commercially available small molecules to guide an iterative computational and experimental process.

Active learning

They used a machine learning technique called “active learning,” which blends both exploration and exploitation to efficiently navigate the experimental screening through molecular space. This approach learns from the data previously collected and finds potential high-performing molecules to be tested experimentally while also pointing out areas that have been under-explored and may contain some valuable candidates.

The process was iterative; the model pointed out potential good candidates or areas in which it needed more information, and the team conducted a high-throughput analysis of those molecules and then fed the data back into the active learning algorithm.
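The iterative loop described above can be sketched in a few lines of Python. This is a minimal illustration only, not the authors' actual pipeline: the molecular descriptors, surrogate model, and acquisition rule below are all assumptions, with a random-forest ensemble standing in for the surrogate and an upper-confidence-bound rule balancing exploitation (high predicted activity) against exploration (high model uncertainty). The `assay` function is a hypothetical stand-in for the high-throughput experiment.

```python
# Sketch of an active-learning screening loop (illustrative assumptions
# throughout; not the paper's actual model, features, or acquisition rule).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Stand-in "chemical library": each row is one molecule's feature vector.
library = rng.uniform(-1, 1, size=(2000, 8))

def assay(x):
    # Hypothetical immune-pathway readout playing the role of the
    # high-throughput experiment (with some measurement noise).
    return np.sin(3 * x[:, 0]) + x[:, 1] ** 2 + 0.1 * rng.normal(size=len(x))

# Seed the model with a small random initial batch of tested molecules.
tested = rng.choice(len(library), size=20, replace=False)
scores = {int(i): y for i, y in zip(tested, assay(library[tested]))}

for _ in range(5):  # iterative screen: fit model -> pick batch -> assay
    X = library[list(scores)]
    y = np.array(list(scores.values()))
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    # Per-tree predictions give a cheap ensemble uncertainty estimate.
    preds = np.stack([t.predict(library) for t in model.estimators_])
    mean, std = preds.mean(axis=0), preds.std(axis=0)

    # Upper-confidence-bound acquisition: exploit high predicted activity,
    # explore regions where the model is uncertain.
    ucb = mean + 1.0 * std
    ucb[list(scores)] = -np.inf          # never re-test a molecule
    batch = np.argsort(ucb)[-10:]        # next batch for the "assay"
    for i, v in zip(batch, assay(library[batch])):
        scores[int(i)] = float(v)

best = max(scores, key=scores.get)
print(f"best molecule index: {best}, score: {scores[best]:.2f}")
```

Each pass corresponds to one round of the team's loop: the surrogate proposes candidates (or flags under-explored regions via high uncertainty), the batch is "assayed," and the new data are fed back into the model before the next round.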

The team expects to continue this process to search for more molecules, and hopes others in the field will share datasets to make the search even more fruitful. The goal is to screen molecules for more specific immune activity, like activating certain T-cells, or find a combination of molecules that gives them better control of the immune response.

“Ultimately, we want to find molecules that can treat disease,” Esser-Kahn said.

Citation: Yifeng Tang, et al., Data-driven discovery of innate immunomodulators via machine learning-guided high-throughput screening, Chem. Sci., 2023, 14, 12747-12766. https://pubs.rsc.org/en/Content/ArticleLanding/2023/SC/D3SC03613H (open-access)
