Hybrid biological transistors behave like living tissue

A team at Tufts University Silklab has created microprocessor-scale transistors that can respond directly to the environment and change like living tissue.

To do that, the researchers replaced the transistors’ insulating material with “silk fibroin,” the structural protein of silk fibers, as they reported in the journal Advanced Materials.

The silk fibroin material can be precisely deposited onto surfaces, where it can be easily modified with other chemical and biological molecules to change its properties, according to the researchers. This allows the material to pick up and detect a wide range of components from the body or environment.

Highly sensitive breath sensor detects hidden diseases

A prototype device developed by the researchers used hybrid transistors to make a highly sensitive and ultrafast breath sensor that could detect changes in humidity.

This allowed the devices to detect some cardiovascular and pulmonary diseases and sleep apnea. They could also detect carbon dioxide levels and molecules of other gases in the breath, which could provide diagnostic information. Used with blood plasma, they could potentially provide information on levels of oxygenation and glucose, circulating antibodies, and more.

Bioactive inks for fabrics to detect changes in the environment or body, like the COVID-19 virus

Prior to the development of the hybrid transistors, the Silklab, led by Fiorenzo Omenetto, the Frank C. Doble Professor of Engineering, had already used fibroin to make bioactive inks for fabrics that can detect changes in the environment or on the body; sensing tattoos that can be placed under the skin or on the teeth to monitor health and diet; and sensors that can be printed on any surface to detect pathogens like the virus responsible for COVID-19.

How it works

A transistor is simply an electrical switch, with a metal electrical lead coming in and another going out. In between the leads is the semiconductor material.

Another source of electrical input called a gate is separated from everything else by an insulator. The gate acts as the “key” to turn the transistor on and off. It triggers the on-state when a threshold voltage (which we will call “1”) creates an electric field across the insulator, priming electron movement in the semiconductor and starting the flow of current through the leads.

In a biological hybrid transistor, a silk layer is used as the insulator, and when it absorbs moisture, it acts like a gel carrying whatever ions (electrically charged molecules) are contained within. The gate triggers the on-state by rearranging ions in the silk gel. By changing the ionic composition in the silk, the transistor operation changes, allowing it to be triggered by any gate value between zero and one.
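To make that contrast concrete, here is a toy model in Python. It is purely illustrative: the functions, the normalization, and the ionic_factor knob are assumptions made for this sketch, not values from the paper. A conventional transistor switches fully on only at the threshold, while the hybrid device’s response can be tuned continuously by what is dissolved in the silk insulator.

```python
# Toy illustration only, not the device physics reported in the paper.

def binary_transistor(v_gate: float, v_threshold: float = 1.0) -> float:
    """Conventional switch: full (normalized) current above threshold, none below."""
    return 1.0 if v_gate >= v_threshold else 0.0

def hybrid_transistor(v_gate: float, ionic_factor: float = 1.0) -> float:
    """Hypothetical analog response: the assumed ionic composition of the silk gel
    (ionic_factor) scales how strongly intermediate gate values turn the channel on,
    so any gate level between 0 and 1 can produce a usable, distinct output."""
    return max(0.0, min(1.0, v_gate * ionic_factor))

for v in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"gate={v:.2f}  binary={binary_transistor(v):.1f}  hybrid={hybrid_transistor(v, 1.5):.2f}")
```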

Analog computing with microprocessors

“You could imagine creating circuits that make use of information that is not represented by the discrete binary levels used in digital computing, but can process variable information, as in analog computing, with the variation caused by changing what’s inside the silk insulator,” said Omenetto.

“This opens up the possibility of introducing biology into computing within modern microprocessors,” he said. Of course, the most powerful known biological computer is the brain, which processes information with variable levels of chemical and electrical signals.

Self-training hybrid biological transistors

The technical challenge in creating hybrid biological transistors was to achieve silk processing at the nanoscale, down to 10nm, or less than 1/10000th the diameter of a human hair. “Having achieved that, we can now make hybrid transistors with the same fabrication processes that are used for commercial chip manufacturing,” said Beom Joon Kim, postdoctoral researcher at the School of Engineering. “This means you can make a billion of these with capabilities available today.”

Having billions of transistor nodes with connections reconfigured by biological processes in the silk could lead to microprocessors that could act like the neural networks used in AI. “Looking ahead, one could imagine having integrated circuits that train themselves, respond to environmental signals, and record memory directly in the transistors, rather than sending it to separate storage,” said Omenetto.

Devices detecting and responding to more complex biological states and large-scale analog and neuromorphic computing are yet to be created. Omenetto is optimistic for future opportunities. “This opens up a new way of thinking about the interface between electronics and biology, with many important fundamental discoveries and applications ahead.”

Citation: Kim, B. J., Bonacchini, G. E., Ostrovsky-Snider, N. A., & Omenetto, F. G. (2023). Bimodal Gating Mechanism in Hybrid Thin-Film Transistors Based on Dynamically Reconfigurable Nanoscale Biopolymer Interfaces. Advanced Materials, 35(45), 2302062. https://doi.org/10.1002/adma.202302062

Bacteria store memories and pass them on for generations, researchers discover

Bacteria can form something similar to memories that indicate when to trigger dangerous infections in people and when to form bacterial swarms, in which millions of bacteria come together on a single surface.

This remarkable discovery by researchers at The University of Texas at Austin (UT) has potential applications for preventing and combatting bacterial infections and addressing antibiotic-resistant bacteria, say the researchers.

Bacterial cells use a common chemical element, iron, to form and pass along these “memories” to their progeny over later generations. E. coli bacteria use iron levels as a way to store information about different behaviors that can then be activated in response to certain stimuli.

Why bacteria swarm

Scientists previously observed that bacteria that had a prior experience of swarming (moving on a surface as a collective, using flagella) improve subsequent swarming performance. So the UT-led research team set out to learn why.

“Bacteria don’t have brains, but they can gather information from their environment, and if they have encountered that environment frequently, they can store that information and quickly access it later for their benefit,” explained Souvik Bhattacharyya, lead author and a member of the Department of Molecular Biosciences at UT.

Iron man bacterium

It’s about iron—one of the most abundant elements on Earth. Individual, free-floating bacteria have varying levels of iron.

And scientists observed that bacterial cells with lower levels of iron were better swarmers. In contrast, bacteria that formed biofilms—dense, sticky mats of bacteria on solid surfaces—had high levels of iron in their cells.

Bacteria with antibiotic tolerance also had balanced levels of iron. These iron memories persist for at least four generations and disappear by the seventh generation.

“Before there was oxygen in the Earth’s atmosphere, early cellular life was utilizing iron for a lot of cellular processes. Iron is not only critical in the origin of life on Earth, but also in the evolution of life,” Bhattacharyya said. “It makes sense that cells would utilize it in this way.”

Forming a biofilm

Researchers theorize that when iron levels are low, bacterial memories are triggered to form a fast-moving migratory swarm to seek out iron in the environment. But when iron levels are high, memories indicate they should stick around and form a biofilm.
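As a toy illustration of that decision rule and its heritability (not the authors’ model; the decay constant and threshold below are invented, chosen only so the bias fades on roughly the four-to-seven-generation timescale described above):

```python
# Toy model only: a heritable "iron memory" that dilutes each generation and
# biases descendants toward swarming (low iron) or biofilm formation (high iron).

def inherit(parent_memory: float, decay: float = 0.85) -> float:
    """Each generation inherits a diluted copy of the parent's remembered iron level."""
    return parent_memory * decay

def behavior(iron_memory: float, threshold: float = 0.5) -> str:
    """High remembered iron -> settle into a biofilm; low -> swarm to seek iron."""
    return "biofilm" if iron_memory > threshold else "swarm"

memory = 1.0  # founder cell with high iron
for generation in range(1, 8):
    memory = inherit(memory)
    print(f"generation {generation}: memory={memory:.2f} -> {behavior(memory)}")
```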

“Iron levels are definitely a target for therapeutics because iron is an important factor in virulence,” Bhattacharyya said. “Ultimately, the more we know about bacterial behavior, the easier it is to combat them.”

The findings are published in the Proceedings of the National Academy of Sciences. The research was funded by the National Institutes of Health.

Citation: Bhattacharyya, S., Bhattarai, N., Pfannenstiel, D. M., Wilkins, B., Singh, A., & Harshey, R. M. (2023). A heritable iron memory enables decision-making in Escherichia coli. Proceedings of the National Academy of Sciences, 120(48), e2309082120. https://doi.org/10.1073/pnas.2309082120

Children’s brains shaped by their time on tech devices

Time spent watching television or playing computer games has measurable, long-term negative and positive effects on children’s brain function, according to a review of 23 years of neuroimaging research.

Published Nov. 16 in the peer-reviewed journal Early Education and Development, the analysis of 33 studies used neuroimaging technology to measure the impact of digital technology on the brains of more than 30,000 children under the age of 12. 

Brain changes

Among the findings: screen time leads to changes in the brain’s pre-frontal cortex—the base of executive functions such as working memory and the ability to plan or to respond flexibly to situations. It also finds impacts on the parietal lobe, which helps us to process touch, pressure, heat, cold, and pain; the temporal lobe, which is important for memory, hearing and language; and the occipital lobe, which helps us to interpret visual information.

The research team, which included experts from the Education University of Hong Kong, the Shanghai Normal University in China and Macquarie University in Australia, wanted to know how digital activity affected the brain’s plasticity—or malleability—in critical periods of development. It is known that visual development mostly takes place before the age of eight, while the key time for language acquisition is up to 12.

They synthesized and evaluated studies on children’s digital use of screens, games, virtual scenes and associated brain development published between January 2000 and April 2023, with the ages of participants ranging from six months upwards.

Negative impacts

The paper concludes that these early digital experiences are having a significant impact on the shape of children’s brains and their functioning, with both positive and negative effects, though mainly negative.

For example, some studies found negative impacts of screen time on the brain function required for attention, executive control, inhibitory control, cognitive processing, and functional connectivity. Other studies suggested that higher screen time is associated with lower functional connectivity in brain areas related to language and cognitive control, potentially adversely affecting cognitive development.

Device-specific research was also assessed in the research pool. Tablet users were found to have worse brain function and performance on problem-solving tasks. In four studies, video gaming and heavy internet use were found to produce negative changes in brain areas, affecting intelligence scores and brain volume.

And general “intensive media usage” was shown to potentially impact visual processing and higher cognitive function regions.

Positive impacts

There were six studies, however, demonstrating how these digital experiences can positively impact a child’s brain functionality. One found improved focusing and learning abilities in the frontal lobe of the brain. Another study suggested playing video games can increase cognitive demand, potentially enhancing children’s executive functions.

Citation: Wu, D., Dong, X., Liu, D., & Li, H. (2023). How Early Digital Experience Shapes Young Brains During 0–12 Years: A Scoping Review. Early Education and Development. https://doi.org/10.1080/10409289.2023.2278117 (open-access)

Boosting vaccines and immunotherapies with machine learning

Small molecules called “immunomodulators” can help create more effective vaccines and stronger immunotherapies to treat cancer.

But finding the molecules that trigger the right immune response is difficult. The number of drug-like small molecules has been estimated at 10⁶⁰, much higher than the number of stars in the visible universe.

Using machine learning to screen molecules

So a team from the Pritzker School of Molecular Engineering (PME) at The University of Chicago tackled the problem by using machine learning to guide high-throughput experimental screening of this vast search space.

In a potential first for the field of vaccine design, machine learning guided the discovery of new immune pathway-enhancing molecules. It found one particular small molecule that could outperform the best immunomodulators on the market. The results are published in the journal Chemical Science.

“We used artificial intelligence methods to guide a search of a huge chemical space,” said Prof. Aaron Esser-Kahn, co-author of the paper, who led the experiments. “In doing so, we found molecules with record-level performance that no human would have suggested we try. We’re excited to share the blueprint for this process.”

Immunomodulators work by changing the signaling activity of innate immune pathways within the body. In particular, the NF-κB pathway plays a role in inflammation and immune activation, while the IRF pathway is essential in antiviral response.

Earlier this year, the PME team conducted a high-throughput screen that looked at 40,000 combinations of molecules to see if any affected these pathways. They then tested the top candidates and found that when those molecules were added to adjuvants—ingredients that help boost the immune response in vaccines—the molecules increased antibody response and reduced inflammation.

To find more candidates, the team used these results combined with a library of nearly 140,000 commercially available small molecules to guide an iterative computational and experimental process.

Active learning

They used a machine learning technique called “active learning,” which blends both exploration and exploitation to efficiently navigate the experimental screening through molecular space. This approach learns from the data previously collected and finds potential high-performing molecules to be tested experimentally while also pointing out areas that have been under-explored and may contain some valuable candidates.

The process was iterative; the model pointed out potential good candidates or areas in which it needed more information, and the team conducted a high-throughput analysis of those molecules and then fed the data back into the active learning algorithm.
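The loop below is a minimal sketch of that kind of active-learning cycle, not the team’s actual pipeline: the descriptor vectors, the stand-in run_assay scoring function, the batch sizes, and the Gaussian-process surrogate with an upper-confidence-bound acquisition are all assumptions made for illustration.

```python
# Minimal active-learning sketch: surrogate model + acquisition + batched "experiments".
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
library = rng.uniform(-1, 1, size=(5000, 8))   # stand-in descriptors for a candidate library

def run_assay(x):
    """Placeholder for the high-throughput experiment (e.g., an NF-kB/IRF reporter readout)."""
    return -np.sum(x**2, axis=1) + 0.05 * rng.normal(size=len(x))

tested = list(rng.choice(len(library), 32, replace=False))  # initial random batch
scores = list(run_assay(library[tested]))

for _ in range(5):
    model = GaussianProcessRegressor().fit(library[tested], scores)
    mean, std = model.predict(library, return_std=True)
    acquisition = mean + 1.0 * std          # exploit high predictions, explore high uncertainty
    acquisition[tested] = -np.inf           # never re-test a molecule
    batch = np.argsort(acquisition)[-16:]   # next batch to screen experimentally
    tested += batch.tolist()
    scores += run_assay(library[batch]).tolist()

print("best score found:", round(max(scores), 3))
```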

The team expects to continue this process to search for more molecules, and hopes others in the field will share datasets to make the search even more fruitful. The goal is to screen molecules for more specific immune activity, like activating certain T-cells, or find a combination of molecules that gives them better control of the immune response.

“Ultimately, we want to find molecules that can treat disease,” Esser-Kahn said.

Citation: Tang, Y., et al. (2023). Data-driven discovery of innate immunomodulators via machine learning-guided high throughput screening. Chemical Science, 14, 12747–12766. https://pubs.rsc.org/en/Content/ArticleLanding/2023/SC/D3SC03613H (open-access)

Transforming the future of music creation

Today, in partnership with YouTube, Google DeepMind announced Lyria, “our most advanced AI music generation model to date,” and two AI experiments designed to open a new playground for creativity:

  • Dream Track – an experiment in YouTube Shorts designed to help deepen connections between artists, creators, and fans through music creation.
  • Music AI tools – a set of tools that Google DeepMind is designing with artists, songwriters, and producers to help bolster their creative processes.

To develop these projects, Google DeepMind brought together Google technical experts with “a diverse range of world-renowned artists and songwriters to explore how generative music technologies can responsibly shape the future of music creation, enhance the work of professional musicians and the artist community, and deliver a positive contribution to the future of music.”

The Lyria model

Built by Google DeepMind, the new Lyria model “excels at generating high-quality music with instrumentals and vocals, performing transformation and continuation tasks, and giving users more nuanced control of the output’s style and performance.”

An early look at the possibilities as we experiment with AI and music (credit: Google DeepMind)

“Music contains huge amounts of information — consider every beat, note, and vocal harmony in every second. When generating long sequences of sound, it’s difficult for AI models to maintain musical continuity across phrases, verses, or extended passages. Since music often includes multiple voices and instruments at the same time, it’s much harder to create than speech.”

Dream Track

The team is trialing Lyria in an experiment called Dream Track, which is designed to test new ways for artists to connect with their fans and developed in collaboration with YouTube. “A limited set of creators will be able to use Dream Track for producing a unique soundtrack with the AI-generated voice and musical style of artists including Alec Benjamin, Charlie Puth, Charli XCX, Demi Lovato, John Legend, Sia, T-Pain, Troye Sivan, and Papoose. Each participating artist has partnered with us and will have a hand in helping us test and learn to shape the future of AI in music.

“Dream Track users can simply enter a topic and choose an artist from the carousel to generate a 30-second soundtrack for their Short. Using our Lyria model, Dream Track simultaneously generates the lyrics, backing track, and AI-generated voice in the style of the participating artist selected.”

Here are a couple of samples generated in the styles of Charlie Puth or T-Pain:

Exploring music AI tools with the industry

“Our researchers have been exploring with artists, songwriters, and producers in YouTube’s Music AI Incubator how generative AI can best support the creative process, and working together to responsibly design a suite of music AI tools.

“Imagine singing a melody to create a horn line, transforming chords from a MIDI keyboard into a realistic vocal choir, or adding an instrumental accompaniment to a vocal track.

“With our music AI tools, users can create new music or instrumental sections from scratch, transform audio from one music style or instrument to another, and create instrumental and vocal accompaniments. This work draws on our history of research and experimentation with AI and music, and we’ll continue testing our music AI tools with incubator participants throughout their development.”

Example user interface of music AI tools (credit: Google DeepMind)
Transforming beatboxing into a drum loop (credit: Google DeepMind)
Transforming singing into an orchestral score (credit: Google DeepMind)
Transforming MIDI keyboard chords into a vocal choir (credit: Google DeepMind)

Watermarking AI-generated audio with SynthID

The team is also “pioneering responsible deployment of our technologies with best-in-class tools for watermarking and identifying synthetically generated content. Any content published by our Lyria model will be watermarked with SynthID, the same technology toolkit we’re using for identifying images generated by Imagen on Google Cloud’s Vertex AI.”

SynthID converts audio into a visual spectrogram to add a digital watermark (credit: Google DeepMind)

“SynthID embeds a watermark into AI-generated audio content that’s inaudible to the human ear and doesn’t compromise the listening experience. It does this by converting the audio wave into a two-dimensional visualization that shows how the spectrum of frequencies in a sound evolves over time.

“The watermark is designed to maintain detectability even when the audio content undergoes many common modifications such as noise additions, MP3 compression, or speeding up and slowing down the track. SynthID can also detect the presence of a watermark throughout a track to help determine if parts of a song were generated by Lyria.”
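For reference, the time-frequency view the passage describes can be computed with ordinary signal processing, as in the short sketch below. This is generic spectrogram code for illustration only; SynthID’s actual watermark-embedding algorithm is not public, and the sine tone stands in for generated audio.

```python
# Convert an audio wave into a spectrogram (frequency content over time).
import numpy as np
from scipy.signal import spectrogram

sample_rate = 16_000
t = np.linspace(0, 2.0, 2 * sample_rate, endpoint=False)
audio = 0.5 * np.sin(2 * np.pi * 440 * t)   # stand-in for a generated track (440 Hz tone)

freqs, times, spec = spectrogram(audio, fs=sample_rate, nperseg=512, noverlap=256)
print(spec.shape)  # (frequency bins, time frames): the 2-D representation described above
```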

The future of generative music tools

“Generative music technologies could transform the future of music creation and use,” the team continues. “Our cutting-edge work in this space will unlock an exciting new wave of artist tools that can inspire creativity for artists, songwriters, producers, and fans everywhere.

“We’ve only just begun to explore how AI can bolster people’s musical creativity and we can’t wait to see what we can accomplish next in partnership with artists, the music industry, and wider creative community.”

Read more on YouTube’s blog

How to 3D-print a complex robot at high precision and speed

Multimaterial 3D printers can create complex shapes, but making functioning objects from multiple materials in a single print run has proved challenging.

An MIT team has now combined inkjet printing with an error-correction system guided by machine vision. They used this method to make a bio-inspired, 3D-printed robotic gripper that’s shaped like a human hand and controlled by a set of reinforced, flexible tendons.

Tendon-driven robotic hand has 19 independently actuatable tendons, soft fingers with sensor pads, and rigid, load-bearing bones (credit: Wojciech Matusik, Robert Katzschmann, Thomas Buchner, et al.)

They combined soft and rigid plastics to make mechanical bones, ligaments, and tendons, as well as a pump based on a mammalian heart.

Limited types of materials with current systems

With 3D inkjet printing systems, engineers can fabricate hybrid structures that have soft and rigid components, like robotic grippers that are strong enough to grasp heavy objects but soft enough to interact safely with humans. 

These multimaterial 3D printing systems use thousands of nozzles to deposit tiny droplets of resin, which are smoothed with a scraper or roller and cured with UV light. But the smoothing process could squish or smear resins that cure slowly, limiting the types of materials that can be used. 

The MIT team’s printer instead uses computer vision to automatically scan the 3D printing surface and adjust the amount of resin each nozzle deposits in real time, ensuring no areas have too much or too little material.

Since it does not require mechanical parts to smooth the resin, this contactless system works with materials that cure more slowly than the acrylates which are traditionally used in 3D printing. Some slower-curing material chemistries can offer improved performance over acrylates, such as greater elasticity, durability, or longevity.

660 times faster

In addition, the automatic system makes adjustments without stopping or slowing the printing process, making this production-grade printer about 660 times faster than a comparable 3D inkjet printing system.

“Our key insight here was to develop a machine vision system and completely active feedback loop. This is almost like endowing a printer with a set of eyes and a brain, where the eyes observe what is being printed, and then the brain of the machine directs it as to what should be printed next,” says co-corresponding author Wojciech Matusik, a professor of electrical engineering and computer science at MIT who leads the Computational Design and Fabrication Group within the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

He is joined on the paper by lead author Thomas Buchner, a doctoral student at ETH Zurich, co-corresponding author Robert Katzschmann, PhD ’18, assistant professor of robotics who leads the Soft Robotics Laboratory at ETH Zurich; as well as others at ETH Zurich and Inkbit. The research will appear in Nature.

This paper builds off a low-cost, multimaterial 3D printer known as MultiFab that the researchers introduced in 2015. By utilizing thousands of nozzles to deposit tiny droplets of resin that are UV-cured, MultiFab enabled high-resolution 3D printing with up to 10 materials at once.

“Vision-controlled jetting” for composite systems and robots

With this new project, the researchers sought a contactless process that would expand the range of materials they could use to fabricate more complex devices. 

The computer vision system converts the image into a high-resolution depth map, a computation that takes less than a second to perform. It compares the depth map to the CAD (computer-aided design) model of the part being fabricated, and adjusts the amount of resin being deposited to keep the object on target with the final structure.

The automated system can make adjustments to any individual nozzle. Since the printer has 16,000 nozzles, the system can control fine details of the device being fabricated.
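The control idea (scan the surface, build a depth map, compare it with the CAD model, correct each nozzle) can be sketched as a simple proportional feedback loop. The grid size, gain, and units below are invented for illustration and are not the controller used in the paper.

```python
# Illustrative per-nozzle correction loop: deposit more resin where the scan says
# the layer is short of the CAD target, less where it is already high.
import numpy as np

nozzle_grid = (100, 160)                       # toy grid standing in for the 16,000-nozzle head
target = np.full(nozzle_grid, 1.00)            # mm, layer height from the sliced CAD model
scanned = target + np.random.default_rng(1).normal(0, 0.02, nozzle_grid)  # depth map from the camera

error = target - scanned                       # positive -> under-deposited, negative -> over-deposited
gain = 0.8                                     # proportional correction applied on the next pass
next_deposit = np.clip(gain * error, 0.0, None)  # each nozzle only adds material where it is short

print("largest correction this pass:", float(next_deposit.max().round(4)), "mm")
```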

“Geometrically, it can print almost anything you want made of multiple materials. There are almost no limitations in terms of what you can send to the printer, and what you get is truly functional and long-lasting,” says Katzschmann.

The level of control afforded by the system enables it to print very precisely with wax, which is used as a support material to create cavities or intricate networks of channels inside an object. The wax is printed below the structure as the device is fabricated. After it is complete, the object is heated so the wax melts and drains out, leaving open channels throughout the object.

Because it can automatically and rapidly adjust the amount of material being deposited by each of the nozzles in real time, the system doesn’t need to drag a mechanical part across the print surface to keep it level. This enables the printer to use materials that cure more gradually, and would be smeared by a scraper. 

Superior materials

The researchers used the system to print with thiol-based materials, which are slower-curing than the traditional acrylic materials used in 3D printing. However, thiol-based materials are more elastic and don’t break as easily as acrylates. They also tend to be more stable over a wider range of temperatures and don’t degrade as quickly when exposed to sunlight.

“These are very important properties when you want to fabricate robots or systems that need to interact with a real-world environment,” says Katzschmann.

Complex devices now possible

The researchers used thiol-based materials and wax to fabricate several complex devices that would otherwise be nearly impossible to make with existing 3D printing systems. For one, they produced a functional, tendon-driven robotic hand that has 19 independently actuatable tendons, soft fingers with sensor pads, and rigid, load-bearing bones.

“We also produced a six-legged walking robot that can sense objects and grasp them, which was possible due to the system’s ability to create airtight interfaces of soft and rigid materials, as well as complex channels inside the structure,” says Buchner.

The team also showcased the technology through a heart-like pump with integrated ventricles and artificial heart valves, as well as metamaterials that can be programmed to have non-linear material properties. 

“This is just the start. There is an amazing number of new types of materials you can add to this technology. This allows us to bring in whole new material families that couldn’t be used in 3D printing before,” Matusik says. 

The researchers are now looking at using the system to print with hydrogels, which are used in tissue-engineering applications, as well as silicon materials, epoxies, and special types of durable polymers.

They also want to explore new application areas, such as printing customizable medical devices, semiconductor polishing pads, and even more complex robots.

This research was funded, in part, by Credit Suisse, the Swiss National Science Foundation, the Defense Advanced Research Projects Agency (DARPA), and the National Science Foundation (NSF).

Nature Podcast: How to 3D-print fully-formed robots

Citation: Buchner, T. J., Rogler, S., Weirich, S., Armati, Y., Cangan, B. G., Ramos, J., Twiddy, S. T., Marini, D. M., Weber, A., Chen, D., Ellson, G., Jacob, J., Zengerle, W., Katalichenko, D., Keny, C., Matusik, W., & Katzschmann, R. K. (2023). Vision-controlled jetting for composite systems and robots. Nature, 623(7987), 522-530. https://doi.org/10.1038/s41586-023-0668

How to make oxygen catalyst on Mars from meteorites via AI-powered robot chemist

This AI-generated process would take a human researcher 2,000 years, says scientist

Researchers in China have developed an AI-powered robot chemist that could use materials found on Mars to produce catalysts. These chemicals would break down water, releasing oxygen, which is needed on Mars for burning (heating) and breathing.

The study, published in Nature Synthesis, was led by Jun Jiang at the University of Science and Technology of China, in Hefei.

The workflow

Workflow of an all-encompassing system for the on-site design and production of an OER electrocatalyst on Mars by an AI chemist (credit: AI-Chemist Group at University of Science and Technology of China)
  1. A mobile machine the size of a refrigerator used a robotic arm to analyze five meteorites that either came from Mars or were collected on Earth from sites mimicking the Martian surface.
  2. The AI-powered system used acid and alkali to dissolve and separate the material, then analyzed the resulting compounds.
  3. These then formed the basis of a search of more than 3.7 million formulae for a chemical that could break down water (as ice at Mars’ poles and under the planet’s surface)—a process the team said would have taken a human researcher 2,000 years.
  4. The result: an oxygen-evolution reaction catalyst that could release oxygen from water.

This video showcases the capabilities of the robotic AI chemist in synthesizing and optimizing oxygen-producing catalysts from Martian meteorites. The process involves automated analysis of Martian ore, catalyst synthesis, characterization, intelligent computing, and oxygen evolution reaction performance testing, highlighting the integration of robotics and AI for complex materials design and manufacture under challenging circumstances. (Credit: Qing Zhu et al.)

NASA’s approach

The Mars Oxygen In-Situ Resource Utilization Experiment (MOXIE) on board NASA’s Perseverance rover, which touched down on Mars in February 2021, has already successfully demonstrated the production of oxygen from the Martian air, which is mostly carbon dioxide.

Citation: Zhu, Q., Huang, Y., Zhou, D., Zhao, L., Guo, L., Yang, R., Sun, Z., Luo, M., Zhang, F., Xiao, H., Tang, X., Zhang, X., Song, T., Li, X., Chong, B., Zhou, J., Zhang, Y., Zhang, B., Cao, J., . . . Luo, Y. (2023). Automated synthesis of oxygen-producing catalysts from Martian meteorites by a robotic AI chemist. Nature Synthesis, 1-10. https://doi.org/10.1038/s44160-023-00424-1 (open-access)

Twisted magnets make machine learning more adaptable, reduce energy use

An international team of researchers has found that by applying an external magnetic field generated by chiral (twisted) magnets and changing temperature, the physical properties of these materials could be adapted to different machine-learning tasks and dramatically reduce energy use.

“This work brings us a step closer to realizing the full potential of physical reservoirs to create computers that not only require significantly less energy, but also adapt their computational properties to perform optimally across various tasks, just like our brains,” said Dr. Oscar Lee (London Centre for Nanotechnology at UCL and UCL Department of Electronic & Electrical Engineering), lead author of a paper. “The next step is to identify materials and device architectures that are commercially viable and scalable.”

Reducing energy required for machine learning

Traditional computing consumes large amounts of electricity. This is partly because it has separate units for data storage and processing, meaning information has to be shuffled constantly between the two, wasting energy and producing heat. This is particularly a problem for machine learning, which requires vast datasets for processing. Training one large AI model can generate hundreds of tons of carbon dioxide, say the researchers.

Physical reservoir computing is one of several neuromorphic (or brain-inspired) approaches that aim to remove the need for distinct memory and processing units, facilitating more energy-efficient ways to process data.

Customizing computation tasks

The researchers found that different magnetic phases of chiral magnets excelled at different types of computing tasks. The skyrmion phase, where magnetized particles are swirling in a vortex-like pattern, had a potent memory capacity apt for forecasting tasks. The conical phase, meanwhile, had little memory, but its non-linearity was ideal for transformation tasks and classification, for instance, identifying if an animal is a cat or dog.
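For readers unfamiliar with the paradigm, the snippet below is a minimal software analogue of reservoir computing (an echo-state network): a fixed, random recurrent system is driven by the input, and only a linear readout is trained. In the study the reservoir is the physical magnetic texture itself; the matrix sizes, scaling, and the sine-forecasting task here are illustrative assumptions.

```python
# Minimal echo-state-network sketch of reservoir computing.
import numpy as np

rng = np.random.default_rng(42)
n_res = 200
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1 gives fading memory

def run_reservoir(u):
    """Drive the fixed random reservoir with input sequence u and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.array([u_t]) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Forecasting task: predict the next value of a sine wave from the reservoir state.
signal = np.sin(np.linspace(0, 20 * np.pi, 2000))
X, y = run_reservoir(signal[:-1]), signal[1:]
readout, *_ = np.linalg.lstsq(X, y, rcond=None)   # only this linear readout is trained
print("mean squared training error:", float(np.mean((X @ readout - y) ** 2)))
```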

The work also involved researchers at Imperial College London, the University of Tokyo and Technische Universität München and was supported by the Leverhulme Trust, Engineering and Physical Sciences Research Council (EPSRC), Imperial College London President’s Excellence Fund for Frontier Research, Royal Academy of Engineering, the Japan Science and Technology Agency, Katsu Research Encouragement Award, Asahi Glass Foundation, and the DFG (German Research Foundation).

Citation: Lee, O., Wei, T., Stenning, K. D., Gartside, J. C., Prestwood, D., Seki, S., Aqeel, A., Karube, K., Kanazawa, N., Taguchi, Y., Back, C., Tokura, Y., Branford, W. R., & Kurebayashi, H. (2023). Task-adaptive physical reservoir computing. Nature Materials, 1-9. https://doi.org/10.1038/s41563-023-01698-8 (open-access)

Potential evolutionary dead ends for humanity and ways to avoid them

Scientists have identified 14 “evolutionary traps” for human societies at large, ranging from global climate tipping points to misaligned artificial intelligence, chemical pollution, and accelerating infectious diseases.

The Anthropocene—the geological epoch shaped by us humans—is showing more and more cracks, says Peter Søgaard Jørgensen, researcher at the Stockholm Resilience Centre at Stockholm University and the Royal Swedish Academy of Sciences.

Multiple global crises, such as the Covid-19 pandemic, climate change, food insecurity, financial crises, and conflicts have started to occur simultaneously in a “polycrisis.”

Evolutionary traps

“The human species has been too successful and, in some ways, too smart for its own future good,” says Jørgensen, lead author of a new landmark study published today in the open-access journal Philosophical Transactions of the Royal Society B, with insights from a wide range of different scientific disciplines across the natural and social sciences and humanities.

The new study shows how humanity could get stuck in “evolutionary traps”—dead ends that occur from initially successful innovations. The study identifies 14 of these, including the simplification of agriculture, economic growth that does not deliver benefits for humans or the environment, the instability of global cooperation, climate tipping points, and artificial intelligence.

Simplifying agricultural systems is an example of such a trap. Relying on a few highly productive crops such as wheat, rice, maize, and soya means that the calories produced have skyrocketed over the past century, but it has also left the food system very vulnerable to environmental change, such as weather extremes or new diseases.

On the verge of getting stuck

Of the 14 evolutionary traps, 12 are in an advanced state, meaning that humankind is on the verge of getting stuck to a degree where it becomes very difficult to get out. These evolutionary traps tend to reinforce one another.

Two dead ends that are currently less advanced, the researchers suggest, are the autonomy of technology (AI/robotics) and the loss of social capital through digitalization.

Evading the traps

The new assessment also looks into why societies struggle so hard to move out of these traps. “The evolutionary forces that created the Anthropocene do not work well in today’s global systems, where social and environmental problems grow in places that seem distant to the societies that could prevent them.”

Also, “addressing them often requires global collaboration on a scale that many evolutionary forces often do not align well with,” says co-author Lan Wang-Erlandsson, researcher at the Stockholm Resilience Centre at Stockholm University and the Royal Swedish Academy of Sciences’ Anthropocene laboratory.

Transforming societies

This doesn’t mean that humanity is doomed to fail, argue the researchers. But we must start to transform our societies actively.

“It’s time for humans to become aware of the new reality and to collectively move where we want to as a species,” explains Jørgensen. “Our creativity, and our power to innovate and collaborate equip us with the perfect tools to actively design our future. We can break out of dead ends and business-as-usual, but for that, we must nurture the capacity for collective human agency and design settings where it can flourish.”

Citation: Jørgensen, P. S., et al. (2023). Evolution of the polycrisis: Anthropocene traps that challenge global sustainability. Philosophical Transactions of the Royal Society B: Biological Sciences, 379(1893). https://doi.org/10.1098/rstb.2022.0261 (open-access)

New AI noise-canceling headphone tech could let you pick which sounds you hear

If you’ve used noise-canceling headphones, you know that hearing the right noise at the right time can be vital. You might want to block car horns when working indoors, but not when walking along busy streets. This feature is not currently available.

So a team led by researchers at the University of Washington has developed deep-learning algorithms that could let you pick which sounds filter through your headphones in real time. Using voice commands or a smartphone app, you could select which sounds you want to hear from 20 classes, such as sirens, baby cries, speech, vacuum cleaners, and bird chirps.

The team presented its findings Nov. 1 at UIST ’23 in San Francisco. The researchers plan to develop a commercial version of the system.

“Understanding what a bird sounds like and extracting it from all other sounds in an environment requires real-time intelligence that today’s noise-canceling headphones haven’t achieved,” explained senior author Shyam Gollakota, a UW professor in the Paul G. Allen School of Computer Science & Engineering.

Audio-visual synchronization

“The sounds headphone wearers hear need to sync with their visual senses. You can’t be hearing someone’s voice two seconds after they talk to you. This means the neural algorithms must process sounds in under a hundredth of a second.”

So instead of cloud servers, a local device such as a connected smartphone is required. Also, sounds from different directions arrive in your ears at different times, so the system must preserve these delays and other spatial cues.
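To give a feel for that timing constraint, here is a small sketch of the streaming setup implied above: audio arrives in short binaural chunks and must be processed on-device within a few milliseconds each. The chunk size and the placeholder passthrough “model” are assumptions; the paper’s neural network is not reproduced here.

```python
# Streaming-latency sketch: each chunk must be processed well inside its own duration.
import time
import numpy as np

sample_rate = 44_100
chunk_ms = 8                                    # per-chunk budget consistent with a <10 ms target
chunk_len = int(sample_rate * chunk_ms / 1000)  # samples per chunk, per ear

def extract_target_sounds(left, right, target_classes):
    """Placeholder for the binaural extraction network. It must return two channels so
    interaural time and level differences (the spatial cues) are preserved."""
    return left, right                           # identity stand-in

rng = np.random.default_rng(0)
left = rng.normal(size=chunk_len)
right = np.roll(left, 5)                         # crude interaural delay between ears

start = time.perf_counter()
out_l, out_r = extract_target_sounds(left, right, {"sirens", "bird chirps"})
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"processed a {chunk_ms} ms chunk in {elapsed_ms:.3f} ms")
```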

Training on real-world data

Tested in environments such as offices, streets and parks, the system was able to extract sirens, bird chirps, alarms and other target sounds while removing all other real-world noise. When 22 participants rated the system’s audio output for the target sound, they said that on average the quality improved, compared to the original recording.

In some cases, the system struggled to distinguish between sounds that share many properties, such as vocal music and human speech. The researchers note that training the models on more real-world data might improve these outcomes.

Additional co-authors on the paper were Bandhav Veluri and Malek Itani, both UW doctoral students in the Allen School; Justin Chan, who completed this research as a doctoral student in the Allen School and is now at Carnegie Mellon University; and Takuya Yoshioka, director of research at AssemblyAI.

Citation: Veluri, B., Itani, M., Chan, J., Yoshioka, T., & Gollakota, S. (2023). Semantic Hearing: Programming Acoustic Scenes with Binaural Hearables. UIST ’23: Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology. https://doi.org/10.1145/3586183.3606779 (open-access)
