AI for faster quantum mechanical simulations in materials science

Researchers at KAIST have found a way to speed up quantum mechanical simulations with an artificial intelligence (AI) model called DeepSCF. The model was developed to dramatically accelerate the calculations needed to understand how materials behave at the atomic level.

Methods used for quantum mechanical simulations of materials (Credit: KAIST).

One of the key methods used in quantum mechanical simulations is density functional theory (DFT), which calculates the properties of materials from first principles, meaning it starts from the basic laws of quantum physics without any empirical parameters. However, DFT calculations are computationally intensive because they involve solving complex equations to determine the electron density in three-dimensional space, a process known as the self-consistent field (SCF) method. This method iteratively recalculates the electron density until the results stabilize, which can take many iterations and becomes very time-consuming, especially for larger systems.
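To see why this loop is costly, here is a minimal, purely illustrative sketch of an SCF-style fixed-point iteration in Python; a toy update rule stands in for the real Kohn-Sham solve, which is vastly more expensive in an actual DFT code.

```python
import numpy as np

def scf_loop(update_density, rho0, tol=1e-6, max_iter=200, mixing=0.3):
    """Generic self-consistent field (SCF) fixed-point iteration.

    `update_density` maps a trial electron density to a new one (in real
    DFT this step builds an effective potential and solves the Kohn-Sham
    equations). The loop repeats until the density stops changing.
    """
    rho = np.asarray(rho0, dtype=float)
    for iteration in range(1, max_iter + 1):
        rho_new = update_density(rho)
        if np.linalg.norm(rho_new - rho) < tol:
            return rho_new, iteration                     # converged
        rho = (1 - mixing) * rho + mixing * rho_new       # simple linear mixing
    raise RuntimeError("SCF did not converge")

# Toy update rule with a known fixed point at 1.0 (stand-in for the real solver).
rho, n_iter = scf_loop(lambda r: 0.5 * (r + 1.0 / r), rho0=[2.0])
print(rho, n_iter)
```

Each pass through the loop corresponds to one expensive rebuild-and-solve step in a real DFT calculation, and it is exactly this repeated work that DeepSCF aims to bypass.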

The KAIST researchers introduced the DeepSCF model to overcome these limitations. DeepSCF uses a neural network similar to those used for 3D computer vision to learn the 3D patterns of electron density and chemical bonding.

The researchers describe the DeepSCF project and early results in a paper published in npj Computational Materials.

Instead of repeatedly solving equations through SCF, DeepSCF recognizes chemical bonding patterns, learning from a dataset of known molecular structures what the electron density should look like for new or modified materials.
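The actual DeepSCF architecture and inputs are not reproduced here, but conceptually the model resembles a 3D convolutional network that maps a structure-derived density grid to a corrected electron density. A deliberately tiny PyTorch stand-in, for illustration only:

```python
import torch
import torch.nn as nn

# A deliberately tiny 3D convolutional network that maps a coarse "initial
# guess" density grid to a corrected density grid. This is only a schematic
# stand-in for the 3D-vision-style model described in the article; the real
# DeepSCF architecture and inputs are not reproduced here.
model = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, kernel_size=3, padding=1),
)

guess = torch.rand(1, 1, 32, 32, 32)        # one 32x32x32 density grid
predicted_density = model(guess)
print(predicted_density.shape)              # torch.Size([1, 1, 32, 32, 32])
```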

AI predictions replace intensive computations

This innovation replaces the traditional, computationally heavy steps with AI predictions. The researchers trained the neural network on data from a variety of organic molecules, applying random rotations and deformations to the structures to improve the model’s ability to generalize across different chemical scenarios. As a result, the model can predict not only electron densities but also other quantum mechanical properties, with accuracy comparable to conventional DFT calculations at a fraction of the computational cost.
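The exact augmentation pipeline is not spelled out in this summary, but random rigid rotations plus small coordinate jitter applied to a molecular geometry can be sketched as follows (hypothetical helper functions, not the KAIST code):

```python
import numpy as np

def random_rotation_matrix(rng):
    """Draw a uniformly random 3D rotation (QR decomposition of a Gaussian matrix)."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q *= np.sign(np.diag(r))          # make the distribution uniform over rotations
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1                 # ensure a proper rotation (determinant +1)
    return q

def augment(coords, rng, jitter=0.02):
    """Apply a random rigid rotation plus small coordinate noise (in angstroms)."""
    return coords @ random_rotation_matrix(rng).T + rng.normal(scale=jitter, size=coords.shape)

rng = np.random.default_rng(0)
water = np.array([[0.00, 0.00, 0.0],     # O
                  [0.96, 0.00, 0.0],     # H
                  [-0.24, 0.93, 0.0]])   # H (approximate geometry)
print(augment(water, rng))
```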

According to the researchers, DeepSCF represents a leap forward in materials science and quantum chemistry, offering a tool that can quickly predict material properties without the need for extensive computational resources. DeepSCF opens up new possibilities for simulating larger and more complex material systems efficiently.

“Since quantum mechanical electron structure calculations underpin materials simulations across all scales,” says research leader Yong-Hoon Kim in a KAIST press release, “this research establishes a foundational principle for accelerating material calculations using artificial intelligence.”

Synthetic genes build complex DNA structures inside cells

Researchers from UCLA and the University of Rome Tor Vergata have created synthetic genes that mimic the function of genes in living cells.

These artificial genes can construct complex structures inside cells by following a sequence similar to how furniture is assembled from modular parts at IKEA.

The process relies on self-assembly, in which small parts come together to form larger, organized structures. Suitably programmed, simple building blocks can form complex biomolecular materials; the same components can also be directed to take those materials apart.

“Our work suggests a way toward scaling up the complexity of biomolecular materials by taking advantage of the timing of molecular instructions for self-assembly,” says research co-leader Elisa Franco in a UCLA press release.

She adds that this approach can generate distinct materials that can “spontaneously ‘develop’ from the same finite set of parts by simply rewiring the elements that control the temporal order of assembly.”

The researchers describe the methods and results of this study in a paper published in Nature Communications.

The researchers used building blocks of DNA tiles and created a solution containing millions of these tiles. The DNA tiles interacted with one another to form micron-scale tubular structures. These structures only form in the presence of a specific RNA molecule that triggers the formation. A different RNA trigger molecule can also induce the disassembly of the same structures. 

Synthetic genes form or disassemble simple synthetic structures at specific times (Credit: UCLA).

Timing is crucial

The researchers programmed the synthetic genes to release these RNA triggers at specific times, thus controlling when and how the DNA structures assemble or disassemble.
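As a cartoon of this timed control (purely illustrative, and not the kinetics reported in the paper), the toy model below relaxes the assembled fraction of tile structures toward 1 while an assembly trigger is present and back toward 0 once a disassembly trigger is released:

```python
import numpy as np

def simulate(t_on=2.0, t_off=8.0, t_end=14.0, dt=0.01, k=1.5):
    """Assembled fraction of tile structures under timed RNA triggers.

    Before t_on there is no trigger; between t_on and t_off an assembly
    trigger drives the fraction toward 1; after t_off a disassembly
    trigger drives it back toward 0. First-order relaxation with rate k.
    """
    times = np.arange(0.0, t_end, dt)
    assembled = np.zeros_like(times)
    x = 0.0
    for i, t in enumerate(times):
        target = 1.0 if t_on <= t < t_off else 0.0
        x += k * (target - x) * dt
        assembled[i] = x
    return times, assembled

times, assembled = simulate()
print(f"assembled fraction at t=6:  {assembled[int(6 / 0.01)]:.2f}")   # close to 1
print(f"assembled fraction at t=13: {assembled[int(13 / 0.01)]:.2f}")  # close to 0
```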

This method could apply to other materials and biological systems where the timing of chemical signals is crucial.

By controlling these signals, the same DNA parts could make different structures or materials that change over time.

This research could lead to new developments in synthetic biology, potentially impacting medicine and biotechnology.

Modified AlphaFold combines AI with new experimental data

Proteins are essential in all living things, controlling everything from muscle movement to food digestion. They’re made up of amino acids, kind of like tiny beads on a string, which can come in many different orders and lengths, creating a huge number of possible proteins. How these amino acids are folded into a 3D shape determines what a protein does.

For a long time, figuring out how proteins fold was difficult, slow, and expensive. Then came AlphaFold, an artificial intelligence (AI) tool from DeepMind released in 2020. It uses neural networks to predict how proteins fold, making it much easier to understand their functions and to design new proteins for medical drugs.

This was such a big deal that Demis Hassabis and John Jumper, respectively CEO and senior research scientist at DeepMind, shared half of the 2024 Nobel Prize in Chemistry.

However, AlphaFold has its limits: it struggles with very large proteins and cannot easily take experimental data into account. Researchers at Linköping University have now modified AlphaFold, creating a tool called AF_unmasked.

The researchers describe the development of AF_unmasked and some early results of the project in a paper published in Nature Communications.

Comparison of AF_unmasked and standard AlphaFold predictions of the structure of a protein (Credit: Nature).

Endless possibilities for protein design

AF_unmasked can take in partial or experimental data, refining protein designs. The idea is to combine what we learn from experiments with what AI predicts, helping researchers design proteins more effectively.

“We’re giving a new type of input to AlphaFold,” says researcher Claudio Mirabello in a Linköping University press release. “The idea is to get the whole picture, both from experiments and neural networks, making it possible to build larger structures.”

“The possibilities for protein design are endless, only the imagination sets limits,” adds researcher Björn Wallner. “It’s possible to develop proteins for use both inside and outside the body.”

Mirabello explains that AlphaFold “encodes the evolutionary history of a protein inside the neural network.” He developed this idea with Wallner. “So, you could say that AlphaFold was based on our idea, and now we are building on AlphaFold.”

Nanoscale transistors overcome the limits of silicon semiconductor technology

Silicon transistors are vital components of electronic devices such as smartphones and cars, but physical limits prevent silicon transistors from switching effectively below a certain voltage. This caps the energy efficiency of electronics, a growing problem as the demands of AI technologies increase.

To address this issue, researchers at MIT have engineered a new type of transistor using gallium antimonide and indium arsenide. These materials exploit quantum tunneling, in which electrons pass through energy barriers rather than having to climb over them, allowing the transistors to switch on and off with less energy. As a result, the transistors can operate efficiently at lower voltages.
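For a rough sense of why nanometer-scale dimensions matter, the textbook transmission probability for an electron tunneling through a rectangular barrier falls off exponentially with barrier width. The sketch below uses a standard WKB-style estimate with illustrative numbers, not the device model from the paper:

```python
import numpy as np

HBAR = 1.054571817e-34    # reduced Planck constant, J*s
M_E = 9.1093837015e-31    # electron mass, kg
EV = 1.602176634e-19      # joules per electronvolt

def tunneling_probability(barrier_eV, energy_eV, width_nm):
    """WKB-style estimate T ~ exp(-2*kappa*d) for a rectangular barrier."""
    kappa = np.sqrt(2 * M_E * (barrier_eV - energy_eV) * EV) / HBAR
    return np.exp(-2 * kappa * width_nm * 1e-9)

# Illustrative numbers only: a 0.5 eV barrier seen by a 0.1 eV electron.
for width in (0.5, 1.0, 3.0, 10.0):   # barrier widths in nanometers
    print(f"{width:5.1f} nm -> T ~ {tunneling_probability(0.5, 0.1, width):.1e}")
```

Shrinking the barrier from 10 nm to below 1 nm raises the tunneling probability by many orders of magnitude, which is why such thin structures can carry useful current at low voltage.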

The researchers describe the methods and results of this project in a paper published in Nature Electronics.

The researchers built these new transistors on a very small scale, with nanowires just a few nanometers wide. According to the researchers, these are among the smallest transistors built to date.

At this size, quantum effects come into play, altering electron behavior and material properties to allow high current flow at low voltages. Additionally, by designing the transistors in three dimensions, the researchers could fit more of them onto a chip, promising more powerful and energy-efficient electronics.

These transistors not only switch at lower voltages but also outperform similar devices in other tests.

The researchers are now focusing on improving the uniformity of these transistors across an entire chip. When working at the nanoscale, even tiny variations can affect performance. They are also experimenting with different transistor shapes to enhance consistency and efficiency.

Replacing silicon with more energy-efficient alternatives

This advancement could revolutionize electronics by replacing silicon with more energy-efficient alternatives. This is particularly important as the demand for faster and more efficient computing continues to grow, driven by technologies like AI.

“This is a technology with the potential to replace silicon, so you could use it with all the functions that silicon currently has, but with much better energy efficiency,” says researcher Yanjie Shao in an MIT press release.

“With conventional physics, there is only so far you can go… conceptually, it really is a breakthrough,” adds lead researcher Jesús del Alamo.

Soft polymer devices controlled by light wrap around neurons and cells

MIT researchers have developed “wearable” devices that can interact with individual cells within the body, similar to how smartwatches or fitness trackers work with human physiology.

These devices, made from a soft polymer called azobenzene, wrap around neurons’ axons and dendrites gently, without causing damage.

The technology behind these devices allows them to respond to light; when activated, they roll into shapes that conform to the complex structures of cells.

The manufacturing process for these tiny wearables begins by placing azobenzene onto a water-soluble layer and molding the polymer into thousands of microstructures. The structures are then baked to evaporate solvents and etched to remove excess material, after which the water-soluble layer is dissolved, leaving the devices free-floating in water.

Researchers can control these devices with precision through light, allowing adjustments in how they wrap around cells. This capability opens up various applications.

One potential use is as synthetic myelin, providing artificial insulation for axons in conditions like multiple sclerosis where natural myelin is lost.

Another application involves modulating the activity of neurons by integrating the devices with materials capable of stimulating cells. This could prove beneficial in treating brain diseases by allowing for precise electrical activity modulation at a subcellular level.

Additionally, these devices could target specific cell types or subcellular regions, enhancing their utility in biological research and medical treatments.

Great potential for future research

The researchers describe the methods and results of this study in a paper published in Nature Communications Chemistry.

“To have intimate interfaces with these cells, the devices must be soft and able to conform to these complex structures,” says research leader Deblina Sarkar in an MIT press release. “That is the challenge we solved in this work. We were the first to show that azobenzene could even wrap around living cells.”

This study has shown compatibility with living cells, with tests on rat neurons indicating no damage from the application of these devices. Researchers envision a future where thousands of these devices roam the body, controlled noninvasively with light.

This could offer new ways to interface with neural systems at a minute scale, thus revolutionizing treatments for neurological conditions and deepening our understanding of cellular processes.

“The concept and platform technology we introduce here is like a founding stone that brings about immense possibilities for future research,” says Sarkar.

Mathematical analysis of causality in complex systems

Researchers at Caltech and MIT are developing mathematical methods to understand cause and effect in complex systems. Named synergistic-unique-redundant decomposition of causality (SURD), this approach helps determine how different variables contribute to a specific effect, especially in scenarios where multiple factors are involved. This method has broad applications across various fields including climate science, economics, medicine, and engineering.

The researchers developed SURD to tackle complex causal questions like how climate variables might influence weather patterns, what triggered the 2008 stock market downturn, or what factors lead to material failure.

“Previous methods will only tell you how much causality comes from one variable or another,” explains research leader Adrián Lozano-Durán in a Caltech press release. “What is unique about our method is its ability to capture the full picture of everything that is causing an effect.”

The researchers describe SURD in a paper published in Nature Communications.

According to the researchers, SURD enhances traditional causality analysis by not just identifying the magnitude of causality from individual variables but by categorizing it into unique, redundant, and synergistic causality.

Unique causality happens when a single variable alone causes an effect. Redundant causality involves multiple variables where each could produce the same effect, but only one is necessary. Synergistic causality occurs when two or more variables must work together to produce an effect, which none could achieve alone.
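A toy information-theoretic illustration of the three categories, using standard textbook examples rather than the SURD algorithm itself: when the target simply copies one driver, only that driver is informative (unique); when two drivers carry the same signal, each alone is fully informative (redundant); and when the target is the XOR of two drivers, neither alone reveals anything but the pair together reveals everything (synergistic).

```python
import numpy as np
from itertools import product

def mutual_information(samples, xs, y):
    """I(X;Y) in bits, estimated from a list of sample dictionaries."""
    n = len(samples)
    joint, px, py = {}, {}, {}
    for s in samples:
        xv, yv = tuple(s[k] for k in xs), s[y]
        joint[(xv, yv)] = joint.get((xv, yv), 0) + 1
        px[xv] = px.get(xv, 0) + 1
        py[yv] = py.get(yv, 0) + 1
    return sum(c / n * np.log2((c / n) / ((px[xv] / n) * (py[yv] / n)))
               for (xv, yv), c in joint.items())

drivers = [dict(x1=a, x2=b) for a, b in product([0, 1], repeat=2)]

unique      = [dict(d, y=d["x1"]) for d in drivers]            # only x1 matters
redundant   = [dict(x1=a, x2=a, y=a) for a in (0, 1)]          # x1 and x2 carry the same signal
synergistic = [dict(d, y=d["x1"] ^ d["x2"]) for d in drivers]  # need both (XOR)

for name, data in [("unique", unique), ("redundant", redundant), ("synergistic", synergistic)]:
    print(f"{name:12s}",
          f"I(x1;y)={mutual_information(data, ('x1',), 'y'):.2f}",
          f"I(x2;y)={mutual_information(data, ('x2',), 'y'):.2f}",
          f"I(x1,x2;y)={mutual_information(data, ('x1', 'x2'), 'y'):.2f}")
```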

According to the researchers, SURD prevents the misidentification of causal relationships, offering a clearer, more comprehensive picture of how variables interact to drive outcomes.

Multidisciplinary applications

The researchers validated SURD by testing it on a range of scenarios with known outcomes, demonstrating that the method consistently provides accurate results without mixing up the different types of causality.

One practical application of SURD involved studying turbulence near a wall. The method helped clarify how the flow velocity at different distances from the wall influences the velocity elsewhere in the flow, showing how the tool can pinpoint which variables are truly influential. This kind of insight could be crucial for aerospace design, for example by identifying the real causes of increased drag on an aircraft.

More broadly, causal inference “is very multidisciplinary and has the potential to drive progress across many fields,” says researcher Álvaro Martínez-Sánchez.

Quantum simulator for electrons and electromagnetic fields in materials

MIT researchers have found ways to generate synthetic electromagnetic fields on superconducting quantum processors. This opens the way to better methods to simulate how electrons move in materials.

The researchers generated a synthetic magnetic field inside a quantum processor comprising 16 qubits. The synthetic magnetic field exists only within the computation and is not “real.” However, it acts just as a magnetic field would in nature.

The synthetic magnetic field allows the researchers to study how electrons would behave if there was an actual magnetic field. This is important because electrons in materials can move in unique ways when influenced by magnetic fields. The response of electrons to magnetic fields can change how materials conduct electricity or turn into different states, like from metal to insulator.

By controlling how the qubits interact with each other, the researchers can simulate how electrons jump from one atom to another in a material. They can tweak the energy levels of the qubits, how the qubits talk to each other, and the frequencies of microwaves used to control the qubits.
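A common mental model for a synthetic magnetic field in such lattice simulations is a tight-binding hopping Hamiltonian whose hopping amplitudes carry complex Peierls phases. The NumPy sketch below is conceptual and is not the MIT processor’s implementation:

```python
import numpy as np

def square_lattice_hamiltonian(nx, ny, flux_per_plaquette, t=1.0):
    """Tight-binding Hamiltonian on an nx-by-ny open square lattice.

    A uniform synthetic magnetic field is encoded by attaching a Peierls
    phase exp(2j*pi*flux*x) to hops along y (Landau gauge), so that every
    plaquette encloses `flux_per_plaquette` flux quanta.
    """
    n = nx * ny
    H = np.zeros((n, n), dtype=complex)
    idx = lambda x, y: x * ny + y
    for x in range(nx):
        for y in range(ny):
            if x + 1 < nx:                                  # hop along x
                H[idx(x, y), idx(x + 1, y)] = -t
            if y + 1 < ny:                                  # hop along y, with phase
                H[idx(x, y), idx(x, y + 1)] = -t * np.exp(2j * np.pi * flux_per_plaquette * x)
    return H + H.conj().T                                   # make it Hermitian

for flux in (0.0, 0.25):
    energies = np.linalg.eigvalsh(square_lattice_hamiltonian(4, 4, flux))
    print(f"flux = {flux}: lowest energies {np.round(energies[:4], 3)}")
```

Turning on the flux reshuffles the energy spectrum, which is the kind of field-dependent behavior a qubit lattice with tunable couplings can emulate.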

The researchers describe the methods and results of this study in a paper published in Nature Physics.

Better simulations of material physics

The quantum processor is not a large-scale quantum computer but a smaller system designed to mimic the behavior of electrons in materials.

The researchers found that the behavior of the qubits mimicked the basic rules of electromagnetism, confirming that the simulated synthetic magnetic field works like a real magnetic field.

A better understanding of the behavior of electrons in materials can lead to finding new materials for making electronics that work faster or with less energy. Electronics like computers or phones could become much more efficient if we learn how to control materials at this microscopic level.

This research work shows that small quantum simulators can be very effective for studying material properties in a controlled way. It helps explore new possibilities in material science, potentially leading to breakthroughs in electronics and energy technology.

“Our work enables us to simulate much more of the rich physics that has captivated materials scientists,” says researcher Ilan Rosen in an MIT press release. “We are in a very exciting place for the future.”

Big data and AI will power autonomous robot scientists

Scientists at the Institute of Tibetan Plateau Research of the Chinese Academy of Sciences are studying how big data and artificial intelligence (AI) are changing the way science works.

Scientists used to focus on finding out why things happen (causation), but now, with so much data, they often look at how things relate to each other (correlation).

In a paper published in Science Bulletin, Xin Li and Yanlong Guo explain how science is moving from being just about looking at data to a new way of data-intensive scientific discovery, where AI does much of the work.

Li and Guo analyze the full process of science: watching things happen (observation), figuring out what the data mean (data analysis), coming up with ideas (hypothesis generation), predicting what will happen next (prediction), checking whether those predictions hold (hypothesis testing), and building general ideas about how things work (theorization). They argue that AI tools assist and improve the usual scientific methods, but do not take them over completely.

Totally new science

Li and Guo believe that, with the growth of AI, “robot scientists” will soon become a reality. Robot scientists would observe the world, analyze data, come up with new ideas, test those ideas, and even develop new theories with little human involvement.

Future robot scientists could use lots of sensors to watch the world, analyze what they see, make guesses about how things work, test those guesses, and then come up with new theories, all on their own.

Li and Guo stress that AI will make science faster and more automated, but that it is important for AI’s work to be transparent, interpretable, and trustworthy. Traditional ways of doing science remain valid, they argue, but big data and AI make the whole process faster and more powerful.

Robot scientists, with their ability to process huge amounts of information and think in new ways, could do science in ways we can’t even imagine now. Li and Guo envision how AI robots can go from just helping scientists to being scientists themselves, exploring new areas of knowledge.

We are facing, Li and Guo conclude, “a totally new science realm in which everything is changing or has changed.” In the new science process, “correlation supersedes causation.”

Low power neuromorphic electronics for AI applications

Researchers at Seoul National University have developed hardware for artificial intelligence (AI) that uses very little power. This hardware is called “neuromorphic hardware” because it is designed to work like the human brain.

The human brain has about 100 billion neurons (nerve cells) and 100 trillion synapses (connections between neurons). These synapses store information and help the brain perform tasks like thinking and learning.

Today’s standard computers for AI applications such as large language models (LLMs) use a lot of power to process large amounts of data. These computers are based on silicon and use a von Neumann architecture, which separates memory from processing; shuttling data between the two slows computation and wastes energy.

The new neuromorphic hardware developed by the researchers uses a different approach. It mimics the way the brain works by using memristors. A memristor is a type of electronic component that can store multiple resistance states, similar to how synapses store information. This allows the hardware to perform computations more efficiently and with less power.
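The efficiency argument can be made concrete with a crossbar picture: a grid of memristor conductances performs a vector-matrix multiplication in place, via Ohm’s and Kirchhoff’s laws, instead of shuttling weights between memory and a processor. A conceptual NumPy sketch, not the Seoul National University device:

```python
import numpy as np

# Each memristor in a crossbar stores one analog conductance (siemens).
# Applying voltages V to the rows yields column currents I = G^T @ V, so the
# array computes a vector-matrix product in place, with no separate memory
# fetch as in a von Neumann machine. Conceptual sketch only.
rng = np.random.default_rng(1)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # 4 input rows x 3 output columns
V = np.array([0.2, 0.0, 0.1, 0.3])         # input voltages (volts)

I = G.T @ V                                 # output currents (amperes)
print("output currents:", I)
```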

The researchers have developed a new type of memristor, based on hybrid organic-inorganic materials, that operates more reliably and uses less energy. This advancement could lead to more efficient AI systems that are better for the environment because they use less power and produce fewer carbon emissions.

Low power neuromorphic hardware for AI (Credit: Seoul National University).

The researchers describe the methods and results of the study in a paper published in Nature Nanotechnology.

Next generation semiconductor devices for AI

“This study provides crucial foundational data for solving the fundamental problems of next-generation intelligent semiconductor devices,” says research leader Ho Won Jang in a Seoul National University press release.

He added that the significance of this work, compared to previous methods, “lies in demonstrating that uniform ion movement across the surface of the material is more important for developing high-performance neuromorphic hardware than creating localized filaments in semiconductor materials.”

This work could help overcome the limitations of current computer systems and lead to the development of more advanced and energy-efficient AI technologies. This is especially relevant as AI continues to play a larger role in various fields.

“This breakthrough in intelligent semiconductor materials boosts commercialization potential and marks a leap forward for AI tech,” Seoul National University posted to X.

Eye implant developed by BCI startup helps AMD patients see better

A brain-computer interface (BCI) startup called Science has shared some exciting news about a BCI device called PRIMA.

This device is an eye implant meant to help people who have lost their ability to see well because of a condition called geographic atrophy (GA), which is an advanced form of age-related macular degeneration (AMD). AMD affects the part of the eye that helps us see clearly, called the macula.

PRIMA is a tiny chip, about the size of a small grain of rice. This chip has around 380 pixels that change light into electrical signals.

Once PRIMA is implanted in the back of the eye, the recipient wears special glasses with a camera. The camera sends what it sees to the PRIMA chip through infrared light. The chip then turns this light into electrical signals and sends the signals to the brain, helping the person see shapes and objects again.
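For a sense of scale only, the toy snippet below reduces a camera-resolution frame to a grid of roughly 380 values, one per stimulation pixel; the grid dimensions are illustrative and this is not the PRIMA processing pipeline:

```python
import numpy as np

# For a sense of scale only: reduce a camera-resolution grayscale frame to a
# grid of roughly 380 values, one per stimulation pixel. Grid dimensions are
# illustrative; this is not the PRIMA processing pipeline.
frame = np.random.rand(480, 640)            # stand-in for a camera image
rows, cols = 20, 19                         # 20 x 19 = 380 "pixels"
h, w = frame.shape
cropped = frame[:h - h % rows, :w - w % cols]
downsampled = cropped.reshape(rows, cropped.shape[0] // rows,
                              cols, cropped.shape[1] // cols).mean(axis=(1, 3))
print(downsampled.shape)                    # (20, 19)
```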

The company has tested this technology in a study called PRIMAvera. In the trial, 38 people with GA received the PRIMA implant. After a year, those who stayed in the study could read almost five more lines on a vision chart than they could before.

A turning point

“The results demonstrate a milestone in the treatment of blindness caused by geographic atrophy due to age-related macular degeneration. For the first time, it was possible to restore real form vision in a retina that has deteriorated due to age-related macular degeneration,” says PRIMAvera scientific coordinator Frank Holz. “Prior to this, there have been no real treatment options for these patients.”

These promising results suggest that PRIMA could be a big step towards helping people with AMD see better.

The goal of the trial was not just to see if PRIMA works but also to make sure it’s safe for people to use. The hope is to get a safety and performance certification from European authorities.

“This represents an enormous turning point for the field,” adds Science founder Max Hodak, “and we’re incredibly excited to bring this important technology to market over the next few years.”

Hodak is a co-founder and former President of Elon Musk’s BCI company Neuralink.
