Google DeepMind invents nearly 400,000 new compounds for use by researchers

Google DeepMind has contributed nearly 400,000 new compounds to the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) Materials Project for use by researchers.

Think future materials like lighter alloys that improve fuel economy in cars, more efficient solar cells to boost renewable energy, and faster transistors for the next generation of computers. 

The Materials Project, an open-access database founded at Berkeley Lab in 2011, computes the properties of both known and predicted materials for use in future technologies. The data for each new compound include how its atoms are arranged (the crystal structure) and how stable it is (formation energy). 

“We have to create new materials if we are going to address the global environmental and climate challenges,” said Kristin Persson, the founder and director of the Materials Project at Berkeley Lab and a professor at UC Berkeley. “With innovation in materials, we can potentially develop recyclable plastics, harness waste energy, make better batteries, and build cheaper solar panels that last longer, among many other things.”

Deep learning tool

To generate the new data, Google DeepMind developed a deep learning tool called Graph Networks for Materials Exploration, or GNoME. Researchers trained GNoME using workflows and data that were developed over a decade by the Materials Project, and improved the GNoME algorithm through active learning.
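In broad terms, an active-learning loop of this kind alternates between screening candidate structures with a model, verifying the most promising ones with more expensive calculations, and retraining on the results. Below is a minimal, self-contained sketch of such a loop; the random feature vectors, toy "DFT oracle," and random-forest model are hypothetical stand-ins, not GNoME's actual graph networks or data.

```python
# Minimal sketch of an active-learning loop for stability screening.
# Candidate "structures" are random feature vectors and the expensive
# verification step is a stub; both are hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def generate_candidates(n, dim=8):
    return rng.normal(size=(n, dim))      # stand-in for proposed crystal structures

def expensive_oracle(x):
    # stand-in for a DFT formation-energy calculation
    return x @ np.linspace(-1, 1, x.size) + rng.normal(scale=0.05)

X = generate_candidates(50)                          # seed training data
y = np.array([expensive_oracle(x) for x in X])
model = RandomForestRegressor(n_estimators=100, random_state=0)

for _ in range(5):                                   # a few active-learning rounds
    model.fit(X, y)
    pool = generate_candidates(1000)
    predicted = model.predict(pool)
    picks = pool[np.argsort(predicted)[:20]]         # screen: 20 lowest predicted energies
    labels = np.array([expensive_oracle(x) for x in picks])    # verify the picks
    X, y = np.vstack([X, picks]), np.concatenate([y, labels])  # feed results back in
```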

GNoME researchers ultimately produced 2.2 million crystal structures, including 380,000 that they are adding to the Materials Project and predict are stable, making them potentially useful in future technologies. The new results from Google DeepMind are published today in the journal Nature.

Quick discovery

Some of the computations from GNoME were used alongside data from the Materials Project to test A-Lab, a facility at Berkeley Lab where artificial intelligence guides robots in making new materials. A-Lab’s first paper, also published today in Nature, showed that the autonomous lab can quickly discover novel materials with minimal human input. 

To make the novel compounds predicted by the Materials Project, A-Lab’s AI created new recipes by combing through scientific papers and using active learning to make adjustments. Data from the Materials Project and GNoME were used to evaluate the materials’ predicted stability.

The Materials Project is the most widely used open-access repository of information on inorganic materials in the world. The database holds millions of properties on hundreds of thousands of structures and molecules, information primarily processed at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC).

More than 400,000 people are registered as users of the site and, on average, more than four papers citing the Materials Project are published every day. The contribution from Google DeepMind is the biggest addition of structure-stability data from a group since the Materials Project began.

New data freely available

The Materials Project is now processing the compounds from Google DeepMind and adding them into the online database. The new data will be freely available to researchers, and also feed into projects such as A-Lab that partner with the Materials Project.

By following promising leads from data in the Materials Project over the past decade, researchers have experimentally confirmed useful properties in new materials across several areas.

These include carbon capture (pulling carbon dioxide from the atmosphere), photocatalysts (materials that speed up chemical reactions in response to light and could be used to break down pollutants or generate hydrogen), thermoelectrics (materials that could help harness waste heat and turn it into electrical power) and transparent conductors (which might be useful in solar cells, touch screens, or LEDs).

Citation: Szymanski, N. J., Rendy, B., Fei, Y., Kumar, R. E., He, T., Milsted, D., McDermott, M. J., Gallant, M., Cubuk, E. D., Merchant, A., Kim, H., Jain, A., Bartel, C. J., Persson, K., Zeng, Y., & Ceder, G. (2023). An autonomous laboratory for the accelerated synthesis of novel materials. Nature, 1-6. https://doi.org/10.1038/s41586-023-06734-w (open access)

Immersive ‘presence’ in mixed reality can be measured using reaction time

With mixed-reality devices (which combine virtual and augmented reality), such as Meta’s Quest 3 or Apple’s forthcoming Vision Pro, the sense that what you are experiencing is real is called “presence.” 

Now researchers at the University of Massachusetts Amherst have identified reaction time as a possible way to measure presence in real time. That’s better than asking the user to respond to a questionnaire after exiting, they note.

To test this, the researchers put participants in modified “Fruit Ninja” mixed-reality scenarios. In one experiment, they adjusted the place illusion: the fruit appeared either as lifelike fruit or as abstract cartoons. In another experiment, they adjusted the plausibility illusion by showing mugs filling with coffee either in the correct upright position or sideways.  

They found that people reacted more quickly to the lifelike fruit than to the cartoonish-looking food.
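As a rough illustration of how such a reaction-time measure might be analyzed, here is a minimal sketch comparing response times under the two rendering conditions; the numbers are invented and the simple t-test is only an assumed analysis, not the authors' method.

```python
# Toy comparison of reaction times (seconds) under a lifelike versus a cartoonish
# rendering. Values are made up; faster reactions in the lifelike condition
# would be read as a sign of higher presence.
import numpy as np
from scipy import stats

lifelike = np.array([0.42, 0.39, 0.45, 0.41, 0.38, 0.44])
cartoonish = np.array([0.51, 0.48, 0.55, 0.50, 0.47, 0.53])

t_stat, p_value = stats.ttest_ind(lifelike, cartoonish)
print(f"lifelike mean {lifelike.mean():.3f}s, cartoonish mean {cartoonish.mean():.3f}s, p = {p_value:.4f}")
```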

Important in medical procedures

Yasra Chandio, doctoral candidate in computer engineering and lead study author, gives medical procedures as an example of the importance of this real-time presence calibration: If a surgeon needs millimeter-level precision, they may use mixed reality as a guide to tell them exactly where they need to operate.  

“If we just show the organ in front of them, and we don’t adjust for the height of the surgeon, for instance, that could be delaying the surgeon and could have inaccuracies for them,” she says. Low presence can also contribute to cybersickness, a feeling of dizziness or nausea that can occur when a user’s bodily perception does not align with what they’re seeing.

However, if the mixed-reality system is internally monitoring presence, it can make adjustments in real time, like moving the virtual organ rendering to eye level. 

Citation: Y. Chandio, N. Bashir, V. Interrante and F. M. Anwar, “Investigating the Correlation Between Presence and Reaction Time in Mixed Reality,” in IEEE Transactions on Visualization and Computer Graphics, doi: 10.1109/TVCG.2023.3319563

Hybrid biological transistors behave like living tissue

A team at Tufts University Silklab has created microprocessor-scale transistors that can respond directly to the environment and change like living tissue.

To do that, the researchers replaced the transistors’ insulating material with “silk fibroin,” the structural protein of silk fibers, as they reported in the journal Advanced Materials.

The silk fibroin material can be precisely deposited onto surfaces, where it can be easily modified with other chemical and biological molecules to change its properties, according to the researchers. This allows the material to pick up and detect a wide range of components from the body or environment.

Highly sensitive breath sensor detects hidden diseases

A prototype device developed by the researchers used hybrid transistors to make a highly sensitive and ultrafast breath sensor that could detect changes in humidity.

This allowed the devices to detect some cardiovascular and pulmonary diseases, as well as sleep apnea. They could also pick up molecules of carbon dioxide and other gases in the breath, which could provide diagnostic information. Used with blood plasma, they could potentially provide information on levels of oxygenation and glucose, circulating antibodies, and more.

Bioactive inks for fabrics to detect changes in the environment or body, like the COVID-19 virus

Prior to the development of the hybrid transistors, the Silklab, led by Fiorenzo Omenetto, the Frank C. Doble Professor of engineering, had already used fibroin to make bioactive inks for fabrics that can detect changes in the environment or on the body; sensing tattoos that can be placed under the skin or on the teeth to monitor health and diet; and sensors that can be printed on any surface to detect pathogens like the virus responsible for COVID-19.

How it works

A transistor is simply an electrical switch, with a metal electrical lead coming in and another going out. In between the leads is the semiconductor material.

Another source of electrical input called a gate is separated from everything else by an insulator. The gate acts as the “key” to turn the transistor on and off. It triggers the on-state when a threshold voltage (which we will call “1”) creates an electric field across the insulator, priming electron movement in the semiconductor and starting the flow of current through the leads.

In a biological hybrid transistor, a silk layer is used as the insulator, and when it absorbs moisture, it acts like a gel carrying whatever ions (electrically charged molecules) are contained within. The gate triggers the on-state by rearranging ions in the silk gel. By changing the ionic composition in the silk, the transistor operation changes, allowing it to be triggered by any gate value between zero and one.
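As a toy illustration of that gating idea (not the paper's model), the sketch below treats the transistor as a simple threshold switch and shows how lowering the effective threshold, standing in for a change in the silk layer's ionic content, lets an intermediate gate value between 0 and 1 turn the device on.

```python
# Toy threshold-switch model of the gating behaviour described above.
# Numbers are illustrative only.

def drain_current(v_gate, v_threshold=1.0):
    """Simplified switch: off below the threshold, on at or above it."""
    return 1.0 if v_gate >= v_threshold else 0.0

# Conventional insulator: only the full threshold voltage ("1") switches the device on.
print(drain_current(0.6, v_threshold=1.0))   # 0.0 -> off

# Ion-loaded silk insulator (hypothetical): the effective threshold is shifted,
# so an intermediate gate value is now enough to trigger the on-state.
print(drain_current(0.6, v_threshold=0.5))   # 1.0 -> on
```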

Analog computing with microprocessors

“You could imagine creating circuits that make use of information that is not represented by the discrete binary levels used in digital computing, but can process variable information, as in analog computing, with the variation caused by changing what’s inside the silk insulator,” said Omenetto.

“This opens up the possibility of introducing biology into computing within modern microprocessors,” he said. Of course, the most powerful known biological computer is the brain, which processes information with variable levels of chemical and electrical signals.

Self-training hybrid biological transistors

The technical challenge in creating hybrid biological transistors was to achieve silk processing at the nanoscale, down to 10nm, or less than 1/10000th the diameter of a human hair. “Having achieved that, we can now make hybrid transistors with the same fabrication processes that are used for commercial chip manufacturing,” said Beom Joon Kim, postdoctoral researcher at the School of Engineering. “This means you can make a billion of these with capabilities available today.”

Having billions of transistor nodes with connections reconfigured by biological processes in the silk could lead to microprocessors that could act like the neural networks used in AI. “Looking ahead, one could imagine having integrated circuits that train themselves, respond to environmental signals, and record memory directly in the transistors, rather than sending it to separate storage,” said Omenetto.

Devices detecting and responding to more complex biological states and large-scale analog and neuromorphic computing are yet to be created. Omenetto is optimistic for future opportunities. “This opens up a new way of thinking about the interface between electronics and biology, with many important fundamental discoveries and applications ahead.”

Citation: Kim, B. J., Bonacchini, G. E., Ostrovsky-Snider, N. A., & Omenetto, F. G. (2023). Bimodal Gating Mechanism in Hybrid Thin-Film Transistors Based on Dynamically Reconfigurable Nanoscale Biopolymer Interfaces. Advanced Materials, 35(45), 2302062. https://doi.org/10.1002/adma.202302062

Bacteria store memories and pass them on for generations, researchers discover

Bacteria can create something similar to memories that indicate when to cause dangerous infections in people or when to form bacterial swarms, in which millions of bacteria come together on a single surface.

This remarkable discovery by researchers at The University of Texas at Austin (UT) has potential applications for preventing and combatting bacterial infections and addressing antibiotic-resistant bacteria, say the researchers.

Bacterial cells use a common chemical element, iron, to form and pass along these “memories” to their progeny over later generations. E. coli bacteria use iron levels as a way to store information about different behaviors that can then be activated in response to certain stimuli.

Why bacteria swarm

Scientists previously observed that bacteria that had a prior experience of swarming (moving on a surface as a collective, using flagella) improve subsequent swarming performance. So the UT-led research team set out to learn why.

“Bacteria don’t have brains, but they can gather information from their environment, and if they have encountered that environment frequently, they can store that information and quickly access it later for their benefit,” explained Souvik Bhattacharyya, lead author and a member of the Department of Molecular Biosciences at UT.

Iron man bacterium

It’s about iron—one of the most abundant elements on Earth. Individual, free-floating bacteria have varying levels of iron.

And scientists observed that bacterial cells with lower levels of iron were better swarmers. In contrast, bacteria that formed biofilms—dense, sticky mats of bacteria on solid surfaces—had high levels of iron in their cells.

Bacteria with antibiotic tolerance also had balanced levels of iron. These iron memories persist for at least four generations and disappear by the seventh generation.

“Before there was oxygen in the Earth’s atmosphere, early cellular life was utilizing iron for a lot of cellular processes. Iron is not only critical in the origin of life on Earth, but also in the evolution of life,” Bhattacharyya said. “It makes sense that cells would utilize it in this way.”

Forming a biofilm

Researchers theorize that when iron levels are low, bacterial memories are triggered to form a fast-moving migratory swarm to seek out iron in the environment. But when iron levels are high, memories indicate they should stick around and form a biofilm.

“Iron levels are definitely a target for therapeutics because iron is an important factor in virulence,” Bhattacharyya said. “Ultimately, the more we know about bacterial behavior, the easier it is to combat them.”

The findings are published in the Proceedings of the National Academy of Sciences. The research was funded by the National Institutes of Health.

Citation: Bhattacharyya, S., Bhattarai, N., Pfannenstiel, D. M., Wilkins, B., Singh, A., & Harshey, R. M. (2023). A heritable iron memory enables decision-making in Escherichia coli. Proceedings of the National Academy of Sciences, 120(48), e2309082120. https://doi.org/10.1073/pnas.2309082120

Children’s brains shaped by their time on tech devices

Time spent watching television or playing computer games has measurable, long-term negative and positive effects on children’s brain function, according to a review of 23 years of neuroimaging research.

Published Nov. 16 in the peer-reviewed journal Early Education and Development, the analysis of 33 studies used neuroimaging technology to measure the impact of digital technology on the brains of more than 30,000 children under the age of 12. 

Brain changes

Among the findings: screen time leads to changes in the brain’s prefrontal cortex—the seat of executive functions such as working memory and the ability to plan or to respond flexibly to situations. The analysis also found impacts on the parietal lobe, which helps us process touch, pressure, heat, cold, and pain; the temporal lobe, which is important for memory, hearing, and language; and the occipital lobe, which helps us interpret visual information.

The research team, which included experts from the Education University of Hong Kong, the Shanghai Normal University in China and Macquarie University in Australia, wanted to know how digital activity affected the brain’s plasticity—or malleability—in critical periods of development. It is known that visual development mostly takes place before the age of eight, while the key time for language acquisition is up to 12.

They synthesized and evaluated studies on children’s digital use of screens, games, virtual scenes and associated brain development published between January 2000 and April 2023, with the ages of participants ranging from six months upwards.

Negative impacts

The paper concludes that these early digital experiences have a significant impact on the shape and functioning of children’s brains, with effects that can be both positive and negative but are mainly negative.

For example, some studies found negative impacts of screen time on the brain functions required for attention, executive control, inhibitory control, and cognitive processing, as well as on functional connectivity. Other studies suggested that higher screen time is associated with lower functional connectivity in brain areas related to language and cognitive control, potentially adversely affecting cognitive development.

Device-specific studies were also assessed in the research pool. Tablet users were found to perform worse on brain-function and problem-solving tasks. In four studies, video gaming and heavy internet use were found to produce negative changes in brain areas, affecting intelligence scores and brain volume.

And general “intensive media usage” was shown to potentially impact visual processing and higher cognitive function regions.

Positive impacts

There were six studies, however, demonstrating how these digital experiences can positively impact a child’s brain functionality. One found improved focusing and learning abilities in the frontal lobe of the brain. Another study suggested playing video games can increase cognitive demand, potentially enhancing children’s executive functions.

Citation: Wu, D., Dong, X., Liu, D., & Li, H. (2023). How Early Digital Experience Shapes Young Brains During 0-12 Years: A Scoping Review. Early Education and Development. https://doi.org/10.1080/10409289.2023.2278117 (open access)

Boosting vaccines and immunotherapies with machine learning

Small molecules called “immunomodulators” can help create more effective vaccines and stronger immunotherapies to treat cancer.

But finding the molecules that trigger the right immune response is difficult. The number of drug-like small molecules has been estimated to be 10⁶⁰, much higher than the number of stars in the visible universe.

Using machine learning to screen molecules

So a team from the Pritzker School of Molecular Engineering (PME) at The University of Chicago tackled the problem by using machine learning to guide high-throughput experimental screening of this vast search space.

In a potential first for the field of vaccine design, machine learning guided the discovery of new immune pathway-enhancing molecules. It found one particular small molecule that could outperform the best immunomodulators on the market. The results are published in the journal Chemical Science.

“We used artificial intelligence methods to guide a search of a huge chemical space,” said Prof. Aaron Esser-Kahn, co-author of the paper, who led the experiments. “In doing so, we found molecules with record-level performance that no human would have suggested we try. We’re excited to share the blueprint for this process.”

Immunomodulators work by changing the signaling activity of innate immune pathways within the body. In particular, the NF-κB pathway plays a role in inflammation and immune activation, while the IRF pathway is essential in antiviral response.

Earlier this year, the PME team conducted a high-throughput screen that looked at 40,000 combinations of molecules to see if any affected these pathways. They then tested the top candidates and found that when those molecules were added to adjuvants—ingredients that help boost the immune response in vaccines—the molecules increased antibody response and reduced inflammation.

To find more candidates, the team used these results combined with a library of nearly 140,000 commercially available small molecules to guide an iterative computational and experimental process.

Active learning

They used a machine learning technique called “active learning,” which blends both exploration and exploitation to efficiently navigate the experimental screening through molecular space. This approach learns from the data previously collected and finds potential high-performing molecules to be tested experimentally while also pointing out areas that have been under-explored and may contain some valuable candidates.

The process was iterative; the model pointed out potential good candidates or areas in which it needed more information, and the team conducted a high-throughput analysis of those molecules and then fed the data back into the active learning algorithm.
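A minimal sketch of that exploration/exploitation trade-off is shown below, using an upper-confidence-bound-style acquisition over a candidate pool: the ensemble's mean prediction rewards exploitation, its spread rewards exploration. The toy features, toy activity values, and the random-forest surrogate are assumptions for illustration, not the team's actual models or data.

```python
# Toy active-learning acquisition step: rank candidate molecules by predicted
# activity (exploitation) plus model uncertainty (exploration). All data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X_screened = rng.normal(size=(200, 16))                     # molecules already measured (toy features)
y_screened = X_screened[:, 0] + 0.1 * rng.normal(size=200)  # measured pathway activity (toy)
pool = rng.normal(size=(5000, 16))                          # unmeasured commercial candidates

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_screened, y_screened)

per_tree = np.stack([tree.predict(pool) for tree in model.estimators_])
mean, spread = per_tree.mean(axis=0), per_tree.std(axis=0)
acquisition = mean + 1.0 * spread           # exploration weight is a free design choice

next_batch = np.argsort(acquisition)[-96:]  # e.g. fill one 96-well plate per round
```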

The team expects to continue this process to search for more molecules, and hopes others in the field will share datasets to make the search even more fruitful. The goal is to screen molecules for more specific immune activity, like activating certain T-cells, or find a combination of molecules that gives them better control of the immune response.

“Ultimately, we want to find molecules that can treat disease,” Esser-Kahn said.

Citation: Tang, Y., et al. (2023). Data-driven discovery of innate immunomodulators via machine learning-guided high throughput screening. Chemical Science, 14, 12747-12766. https://pubs.rsc.org/en/Content/ArticleLanding/2023/SC/D3SC03613H (open access)

Transforming the future of music creation

Today, in partnership with YouTube, Google DeepMind announced Lyria, “our most advanced AI music generation model to date,” and two AI experiments designed to open a new playground for creativity:

  • Dream Track – an experiment in YouTube Shorts designed to help deepen connections between artists, creators, and fans through music creation.
  • Music AI tools – a set of tools that Google DeepMind is designing with artists, songwriters, and producers to help bolster their creative processes.

To develop these projects, Google DeepMind brought together Google technical experts with “a diverse range of world-renowned artists and songwriters to explore how generative music technologies can responsibly shape the future of music creation, enhance the work of professional musicians and the artist community, and deliver a positive contribution to the future of music.”

The Lyria model

Built by Google DeepMind, the new Lyria model “excels at generating high-quality music with instrumentals and vocals, performing transformation and continuation tasks, and giving users more nuanced control of the output’s style and performance.”

An early look at the possibilities as we experiment with AI and music (credit: Google DeepMind)

“Music contains huge amounts of information — consider every beat, note, and vocal harmony in every second. When generating long sequences of sound, it’s difficult for AI models to maintain musical continuity across phrases, verses, or extended passages. Since music often includes multiple voices and instruments at the same time, it’s much harder to create than speech.”

Dream Track

The team is trialing Lyria in an experiment called Dream Track, which is designed to test new ways for artists to connect with their fans and developed in collaboration with YouTube. “A limited set of creators will be able to use Dream Track for producing a unique soundtrack with the AI-generated voice and musical style of artists including Alec Benjamin, Charlie Puth, Charli XCX, Demi Lovato, John Legend, Sia, T-Pain, Troye Sivan, and Papoose. Each participating artist has partnered with us and will have a hand in helping us test and learn to shape the future of AI in music.

“Dream Track users can simply enter a topic and choose an artist from the carousel to generate a 30-second soundtrack for their Short. Using our Lyria model, Dream Track simultaneously generates the lyrics, backing track, and AI-generated voice in the style of the participating artist selected.

A couple of audio samples generated in the styles of Charlie Puth and T-Pain accompany the original post.

Exploring music AI tools with the industry

“Our researchers have been exploring with artists, songwriters, and producers in YouTube’s Music AI Incubator how generative AI can best support the creative process, and working together to responsibly design a suite of music AI tools.

“Imagine singing a melody to create a horn line, transforming chords from a MIDI keyboard into a realistic vocal choir, or adding an instrumental accompaniment to a vocal track.

“With our music AI tools, users can create new music or instrumental sections from scratch, transform audio from one music style or instrument to another, and create instrumental and vocal accompaniments. This work draws on our history of research and experimentation with AI and music, and we’ll continue testing our music AI tools with incubator participants throughout their development.”

Example user interface of music AI tools (credit: Google DeepMind)
Transforming beatboxing into a drum loop (credit: Google DeepMind)
Transforming singing into an orchestral score (credit: Google DeepMind)
Transforming MIDI keyboard chords into a vocal choir (credit: Google DeepMind)

Watermarking AI-generated audio with SynthID

The team is also “pioneering responsible deployment of our technologies with best-in-class tools for watermarking and identifying synthetically generated content. Any content published by our Lyria model will be watermarked with SynthID, the same technology toolkit we’re using for identifying images generated by Imagen on Google Cloud’s Vertex AI.”

SynthID converts audio into a visual spectrogram to add a digital watermark (credit: Google DeepMind)

“SynthID embeds a watermark into AI-generated audio content that’s inaudible to the human ear and doesn’t compromise the listening experience. It does this by converting the audio wave into a two-dimensional visualization that shows how the spectrum of frequencies in a sound evolves over time.

“The watermark is designed to maintain detectability even when the audio content undergoes many common modifications such as noise additions, MP3 compression, or speeding up and slowing down the track. SynthID can also detect the presence of a watermark throughout a track to help determine if parts of a song were generated by Lyria.”
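The spectrogram step mentioned above is standard signal processing; the sketch below shows it for a synthetic tone using a short-time Fourier transform. How SynthID actually embeds and detects its watermark in that representation has not been published, so only the audio-to-spectrogram conversion is illustrated.

```python
# Convert an audio waveform into a 2-D time-frequency picture (spectrogram).
# The 440 Hz tone is a stand-in for a real track.
import numpy as np
from scipy.signal import stft

sample_rate = 16_000
t = np.arange(0, 2.0, 1 / sample_rate)
audio = 0.5 * np.sin(2 * np.pi * 440 * t)

freqs, times, Z = stft(audio, fs=sample_rate, nperseg=1024)   # short-time Fourier transform
spectrogram = np.abs(Z)                                       # frequency content over time
print(spectrogram.shape)                                      # (frequency bins, time frames)
```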

The future of generative music tools

“Generative music technologies could transform the future of music creation and use,” the team continues. “Our cutting-edge work in this space will unlock an exciting new wave of artist tools that can inspire creativity for artists, songwriters, producers, and fans everywhere.

“We’ve only just begun to explore how AI can bolster people’s musical creativity and we can’t wait to see what we can accomplish next in partnership with artists, the music industry, and wider creative community.”

Read more on YouTube’s blog

How to 3D-print a complex robot at high precision and speed

Making functional objects from multiple materials in a single print run has proved challenging for 3D printers. Machine vision now enables multi-material printers to create such complex shapes.

An MIT team has now combined inkjet printing with an error-correction system guided by machine vision. They used this method to make a bio-inspired, 3D-printed robotic gripper that’s shaped like a human hand and controlled by a set of reinforced, flexible tendons.

Tendon-driven robotic hand has 19 independently actuatable tendons, soft fingers with sensor pads, and rigid, load-bearing bones (credit: Wojciech Matusik, Robert Katzschmann, Thomas Buchner, et al.)

They combined soft and rigid plastics to make mechanical bones, ligaments, and tendons, as well as a pump based on a mammalian heart.

Limited types of materials with current systems

With 3D inkjet printing systems, engineers can fabricate hybrid structures that have soft and rigid components, like robotic grippers that are strong enough to grasp heavy objects but soft enough to interact safely with humans. 

These multimaterial 3D printing systems use thousands of nozzles to deposit tiny droplets of resin, which are smoothed with a scraper or roller and cured with UV light. But the smoothing process could squish or smear resins that cure slowly, limiting the types of materials that can be used. 

Their printer uses computer vision to automatically scan the 3D printing surface and adjust the amount of resin each nozzle deposits in real time to ensure no areas have too much or too little material.

Since it does not require mechanical parts to smooth the resin, this contactless system works with materials that cure more slowly than the acrylates which are traditionally used in 3D printing. Some slower-curing material chemistries can offer improved performance over acrylates, such as greater elasticity, durability, or longevity.

660 times faster

In addition, the automatic system makes adjustments without stopping or slowing the printing process, making this production-grade printer about 660 times faster than a comparable 3D inkjet printing system.

“Our key insight here was to develop a machine vision system and completely active feedback loop. This is almost like endowing a printer with a set of eyes and a brain, where the eyes observe what is being printed, and then the brain of the machine directs it as to what should be printed next,” says co-corresponding author Wojciech Matusik, a professor of electrical engineering and computer science at MIT who leads the Computational Design and Fabrication Group within the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

He is joined on the paper by lead author Thomas Buchner, a doctoral student at ETH Zurich, co-corresponding author Robert Katzschmann, PhD ’18, assistant professor of robotics who leads the Soft Robotics Laboratory at ETH Zurich; as well as others at ETH Zurich and Inkbit. The research will appear in Nature.

This paper builds off a low-cost, multimaterial 3D printer known as MultiFab that the researchers introduced in 2015. By utilizing thousands of nozzles to deposit tiny droplets of resin that are UV-cured, MultiFab enabled high-resolution 3D printing with up to 10 materials at once.

“Vision-controlled jetting” for composite systems and robots

With this new project, the researchers sought a contactless process that would expand the range of materials they could use to fabricate more complex devices. 

The computer vision system scans the print surface and converts the image into a high-resolution depth map, a computation that takes less than a second to perform. It compares the depth map to the CAD (computer-aided design) model of the part being fabricated, and adjusts the amount of resin being deposited to keep the object on target with the final structure.

The automated system can make adjustments to any individual nozzle. Since the printer has 16,000 nozzles, the system can control fine details of the device being fabricated.
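A minimal sketch of that correction loop is given below: compare the scanned depth map against the target heights derived from the CAD model and scale what each nozzle deposits on the next pass. The array sizes, feedback gain, and clipping range are illustrative assumptions, not values from the paper.

```python
# Toy per-nozzle feedback step for contactless inkjet printing: measure the error
# between target and scanned layer heights, then adjust each nozzle's deposition.
import numpy as np

rng = np.random.default_rng(0)
n_rows, n_nozzles = 64, 16_000
target = np.full((n_rows, n_nozzles), 0.02)                    # desired layer height (mm)
scanned = target + rng.normal(scale=0.002, size=target.shape)  # stand-in for the measured depth map

error = target - scanned                 # positive where too little material was deposited
gain = 0.8                               # feedback gain (a free design choice)
correction = gain * error.mean(axis=0)   # one adjustment per nozzle column
next_pass_volume = np.clip(1.0 + correction / 0.02, 0.0, 2.0)  # relative droplet volume per nozzle
```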

“Geometrically, it can print almost anything you want made of multiple materials. There are almost no limitations in terms of what you can send to the printer, and what you get is truly functional and long-lasting,” says Katzschmann.

The level of control afforded by the system enables it to print very precisely with wax, which is used as a support material to create cavities or intricate networks of channels inside an object. The wax is printed below the structure as the device is fabricated. After it is complete, the object is heated so the wax melts and drains out, leaving open channels throughout the object.

Because it can automatically and rapidly adjust the amount of material being deposited by each of the nozzles in real time, the system doesn’t need to drag a mechanical part across the print surface to keep it level. This enables the printer to use materials that cure more gradually, and would be smeared by a scraper. 

Superior materials

The researchers used the system to print with thiol-based materials, which are slower-curing than the traditional acrylic materials used in 3D printing. However, thiol-based materials are more elastic and don’t break as easily as acrylates. They also tend to be more stable over a wider range of temperatures and don’t degrade as quickly when exposed to sunlight.

“These are very important properties when you want to fabricate robots or systems that need to interact with a real-world environment,” says Katzschmann.

Complex devices now possible

The researchers used thiol-based materials and wax to fabricate several complex devices that would otherwise be nearly impossible to make with existing 3D printing systems. For one, they produced a functional, tendon-driven robotic hand that has 19 independently actuatable tendons, soft fingers with sensor pads, and rigid, load-bearing bones.

“We also produced a six-legged walking robot that can sense objects and grasp them, which was possible due to the system’s ability to create airtight interfaces of soft and rigid materials, as well as complex channels inside the structure,” says Buchner.

The team also showcased the technology through a heart-like pump with integrated ventricles and artificial heart valves, as well as metamaterials that can be programmed to have non-linear material properties. 

“This is just the start. There is an amazing number of new types of materials you can add to this technology. This allows us to bring in whole new material families that couldn’t be used in 3D printing before,” Matusik says. 

The researchers are now looking at using the system to print with hydrogels, which are used in tissue-engineering applications, as well as silicon materials, epoxies, and special types of durable polymers.

They also want to explore new application areas, such as printing customizable medical devices, semiconductor polishing pads, and even more complex robots.

This research was funded, in part, by Credit Suisse, the Swiss National Science Foundation, the Defense Advanced Research Projects Agency (DARPA), and the National Science Foundation (NSF).

Nature Podcast: How to 3D-print fully-formed robots

Citation: Buchner, T. J., Rogler, S., Weirich, S., Armati, Y., Cangan, B. G., Ramos, J., Twiddy, S. T., Marini, D. M., Weber, A., Chen, D., Ellson, G., Jacob, J., Zengerle, W., Katalichenko, D., Keny, C., Matusik, W., & Katzschmann, R. K. (2023). Vision-controlled jetting for composite systems and robots. Nature, 623(7987), 522-530. https://doi.org/10.1038/s41586-023-0668

How to make oxygen catalyst on Mars from meteorites via AI-powered robot chemist

The AI-driven search would have taken a human researcher 2,000 years, says scientist

Researchers in China have developed an AI-powered robot chemist that could use materials found on Mars to produce catalysts. These chemicals would break down water, releasing oxygen, which is needed on Mars for burning (heating) and breathing.

The study, published in Nature Synthesis, was led by Jun Jiang at the University of Science and Technology of China in Hefei.

The workflow

Workflow of an all-encompassing system for the on-site design and production of an OER electrocatalyst on Mars by an AI chemist (credit: AI-Chemist Group at University of Science and Technology of China)
  1. A mobile machine the size of a refrigerator, equipped with a robotic arm, analyzed five meteorites that either came from Mars or were collected on Earth and mimic the Martian surface, as source material for useful catalysts.
  2. The AI-powered system used acid and alkali to dissolve and separate the material, then analyzed the resulting compounds.
  3. These compounds then formed the basis of a search of more than 3.7 million formulae for a chemical that could break down water (present as ice at Mars’ poles and under the planet’s surface)—a process the team said would have taken a human researcher 2,000 years (see the sketch after this list).
  4. The result: an oxygen-evolution reaction catalyst that could release oxygen from water.
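The search in step 3 can be pictured as scoring candidate compositions with a fast surrogate model and keeping the best one; the sketch below does this over a coarse composition grid. The element list, weights, and scoring function are purely hypothetical stand-ins for the learned models the team actually used.

```python
# Toy version of the catalyst-formula search: enumerate compositions of a few
# elements (assumed to be extractable from the meteorites) and rank them with a
# stub surrogate model. Lower "overpotential" is better; all numbers are invented.
from itertools import product

elements = ["Fe", "Mn", "Ni", "Mg", "Al", "Ca"]
weights = {"Fe": 0.42, "Mn": 0.40, "Ni": 0.36, "Mg": 0.55, "Al": 0.60, "Ca": 0.58}

def predicted_overpotential(composition):
    """Stub surrogate: a real system would use a trained model here."""
    return sum(frac * weights[el] for el, frac in composition.items())

best = None
for fracs in product(range(11), repeat=len(elements)):   # coarse 10% composition grid
    if sum(fracs) != 10:
        continue
    comp = {el: f / 10 for el, f in zip(elements, fracs) if f}
    score = predicted_overpotential(comp)
    if best is None or score < best[0]:
        best = (score, comp)

print(best)   # lowest predicted overpotential and the composition that achieves it
```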

This video showcases the capabilities of the robotic AIchemist in synthesizing and optimizing oxygen-producing catalysts from Martian meteorites. The process involves automated analysis of Martian ore, catalyst synthesis, characterization, intelligent computing, and oxygen evolution reaction performance testing, which highlights the integration of robotics and AI for complex materials design and manufacture under challenging circumstances. (Credit: Qing Zhu et al.)

NASA’s approach

The Mars Oxygen In-Situ Resource Utilization Experiment (MOXIE) on board NASA’s Perseverance rover, which touched down on Mars in February 2021, has already successfully demonstrated the production of oxygen from the Martian air, which is mostly carbon dioxide.

Citation: Zhu, Q., Huang, Y., Zhou, D., Zhao, L., Guo, L., Yang, R., Sun, Z., Luo, M., Zhang, F., Xiao, H., Tang, X., Zhang, X., Song, T., Li, X., Chong, B., Zhou, J., Zhang, Y., Zhang, B., Cao, J., . . . Luo, Y. (2023). Automated synthesis of oxygen-producing catalysts from Martian meteorites by a robotic AI chemist. Nature Synthesis, 1-10. https://doi.org/10.1038/s44160-023-00424-1 (open-access)

Twisted magnets make machine learning more adaptable, reduce energy use

An international team of researchers has found that by applying an external magnetic field to chiral (twisted) magnets and changing their temperature, the physical properties of these materials can be adapted to suit different machine-learning tasks while dramatically reducing energy use.

“This work brings us a step closer to realizing the full potential of physical reservoirs to create computers that not only require significantly less energy, but also adapt their computational properties to perform optimally across various tasks, just like our brains,” said Dr. Oscar Lee (London Centre for Nanotechnology at UCL and UCL Department of Electronic & Electrical Engineering), lead author of a paper. “The next step is to identify materials and device architectures that are commercially viable and scalable.”

Reducing energy required for machine learning

Traditional computing consumes large amounts of electricity. This is partly because it has separate units for data storage and processing, meaning information has to be shuffled constantly between the two, wasting energy and producing heat. This is particularly a problem for machine learning, which requires vast datasets for processing. Training one large AI model can generate hundreds of tons of carbon dioxide, say the researchers.

Physical reservoir computing is one of several neuromorphic (or brain-inspired) approaches that aim to remove the need for distinct memory and processing units, facilitating more energy-efficient ways to process data.

Customizing computation tasks

The researchers found that different magnetic phases of chiral magnets excelled at different types of computing tasks. The skyrmion phase, where magnetized particles are swirling in a vortex-like pattern, had a potent memory capacity apt for forecasting tasks. The conical phase, meanwhile, had little memory, but its non-linearity was ideal for transformation tasks and classification, for instance, identifying if an animal is a cat or dog.
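For readers unfamiliar with reservoir computing, the sketch below shows the general scheme in software: a fixed, untrained dynamical system (here a random recurrent map, standing in for the magnetic material's response) transforms an input signal, and only a simple linear readout is trained. The task, sizes, and parameters are toy choices, not the experimental setup.

```python
# Minimal software analogue of reservoir computing: fixed random dynamics plus a
# trained linear readout. The toy task is to recall the input from two steps earlier,
# a memory-style task of the kind the skyrmion phase was suited to.
import numpy as np

rng = np.random.default_rng(0)
n_res, T = 200, 1000
W_in = rng.normal(scale=0.5, size=n_res)
W = rng.normal(scale=1.0 / np.sqrt(n_res), size=(n_res, n_res))

u = rng.uniform(-1, 1, size=T)                     # input signal
x = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):
    x = np.tanh(W_in * u[t] + W @ x)               # reservoir dynamics: fixed, never trained
    states[t] = x

target = np.roll(u, 2)                             # recall the input from 2 steps ago
readout, *_ = np.linalg.lstsq(states[10:], target[10:], rcond=None)   # train only the readout
print(np.corrcoef(states[10:] @ readout, target[10:])[0, 1])          # recall accuracy (correlation)
```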

The work also involved researchers at Imperial College London, the University of Tokyo and Technische Universität München and was supported by the Leverhulme Trust, Engineering and Physical Sciences Research Council (EPSRC), Imperial College London President’s Excellence Fund for Frontier Research, Royal Academy of Engineering, the Japan Science and Technology Agency, Katsu Research Encouragement Award, Asahi Glass Foundation, and the DFG (German Research Foundation).

Citation: Lee, O., Wei, T., Stenning, K. D., Gartside, J. C., Prestwood, D., Seki, S., Aqeel, A., Karube, K., Kanazawa, N., Taguchi, Y., Back, C., Tokura, Y., Branford, W. R., & Kurebayashi, H. (2023). Task-adaptive physical reservoir computing. Nature Materials, 1-9. https://doi.org/10.1038/s41563-023-01698-8 (open-access)
