AI designs new proteins for medicine and other applications

Scientists at Argonne National Laboratory have introduced a new Artificial Intelligence (AI) framework for supercomputers to quickly find new proteins for use in medicines, catalysts, and other applications.

The new MProt-DPO framework can use different types of information together, like data from experiments, computer simulations, and even text descriptions of proteins. This mix of information helps the AI better understand and design proteins.

Proteins are large molecules made from amino acids. Each protein has a specific shape and function, and changing the amino acids can change how the protein works. The AI helps find the right changes to make proteins work better for specific uses, like breaking down plastics or helping in medical treatments.

MProt stands for Multiple Protein, and DPO stands for Direct Preference Optimization. This means the system learns to make better proteins by figuring out which designs are good or bad from the results of experiments and simulations.

The number of possible ways to arrange amino acids in a protein is astronomical. Testing all these combinations in a lab would take forever, but with AI and supercomputers, scientists can predict which combinations are likely to work best without having to test each one.
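
To get a feel for the scale, here is a back-of-the-envelope count. The protein length used below is an assumption chosen for illustration, not a figure from the Argonne work:

```python
# Back-of-the-envelope count of the protein design space (illustrative only).
ALPHABET = 20            # standard amino acids
length = 300             # a typical protein length (assumed for illustration)

total_sequences = ALPHABET ** length
print(f"Sequences of length {length}: roughly 10^{len(str(total_sequences)) - 1}")

# Even single-point mutations of one known protein add up quickly:
single_mutants = length * (ALPHABET - 1)
print(f"Single-point mutants of that one protein: {single_mutants}")
```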

DPO means the AI learns from results. If a protein works well, the AI remembers what made it good and tries to make more like it next time. If it doesn’t work, it learns from that too.
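
For readers curious about the mechanics, here is a minimal sketch of the standard DPO objective in plain Python with made-up numbers. It only illustrates how "preferred versus rejected" designs steer a model; it is not the MProt-DPO training code.

```python
import numpy as np

def dpo_loss(logp_preferred, logp_rejected,
             ref_logp_preferred, ref_logp_rejected, beta=0.1):
    """Standard DPO loss for one (preferred, rejected) pair of sequences.

    logp_* are the trainable model's total log-probabilities of each sequence;
    ref_logp_* are the same quantities under a frozen reference model.
    """
    margin = beta * ((logp_preferred - ref_logp_preferred)
                     - (logp_rejected - ref_logp_rejected))
    return -np.log(1.0 / (1.0 + np.exp(-margin)))   # -log(sigmoid(margin))

# Invented numbers: a design that scored well in simulation vs. one that scored poorly.
print(dpo_loss(logp_preferred=-95.0, logp_rejected=-98.0,
               ref_logp_preferred=-97.0, ref_logp_rejected=-97.5))
```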

The scientists tested the MProt-DPO framework by redesigning a yeast protein and an enzyme called malate dehydrogenase, which helps cells produce energy. They used data from real experiments and simulations to teach the AI to make better versions of these proteins.

ChatGPT for protein design

Argonne computational scientist Gautham Dharuman will present MProt-DPO at the supercomputing conference SC24.

“If you think about how ChatGPT works, humans provide feedback on whether a response is helpful or not,” says computational biologist Arvind Ramanathan in an Argonne press release. “MProt-DPO works in a similar way, but we replace human feedback with the experimental and simulation data to help the AI model learn which protein designs are most successful.”

The scientists are developing AuroraGPT, a big AI model that could help with many scientific areas, not just protein design.

Researchers argue that LLMs don’t understand the world

Large language models (LLMs), like GPT-4, are good at tasks like writing poetry or generating computer code. After training, these models predict the next word in a sentence. This ability sometimes makes it seem like the models understand general truths about the world. However, a recent study questions this idea.

The researchers, from MIT and the universities of Chicago, Cornell, and Harvard, describe the methods and results of the study in a paper posted on the arXiv preprint server.

The researchers tested a transformer-based LLM that uses large amounts of text data to predict what comes next in a sequence. The researchers found that the model could give near-perfect driving directions in New York City, but it did not have an accurate map of the city. When streets were closed or detours added, the model’s performance dropped sharply.

The importance of coherent world models

The maps the model generated had fake streets and strange connections. This showed that while the model could give good directions, it didn’t truly understand the city’s layout.

This finding is important because it shows that LLMs can perform tasks well without understanding the world around them. The researchers evaluated the model on two problems: navigating the streets of New York City and playing the board game Othello. The model could predict valid Othello moves and plausible driving directions, but it had not internalized the game’s rules or a coherent map of the city.

To test whether a model truly understands the world, the researchers created two new metrics. The first, called “sequence distinction,” checks whether the model can recognize the difference between two distinct situations. The second, “sequence compression,” checks whether the model understands that identical situations share the same possible next steps. Surprisingly, the researchers found that transformers trained on sequences of randomly made choices formed more accurate world models than those trained on more strategic sequences.
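
The intuition behind the two metrics can be sketched in a few lines of code. The toy “model” below is just a lookup table standing in for an LLM, and everything in it is invented for illustration rather than taken from the paper:

```python
# Toy illustration of the two metrics (not the authors' code). A lookup table
# stands in for the LLM: it maps a prefix of moves to the set of next moves the
# model treats as valid.
toy_model = {
    "A->B": {"C", "D"},   # after reaching intersection B via A, the model allows C or D
    "X->B": {"C"},        # same true state (intersection B), but a different answer
    "A->C": {"E"},        # a genuinely different state
}

def valid_next(prefix):
    return toy_model.get(prefix, set())

def sequence_compression(prefix_1, prefix_2):
    """Prefixes ending in the SAME true state should allow the same continuations."""
    return valid_next(prefix_1) == valid_next(prefix_2)

def sequence_distinction(prefix_1, prefix_2):
    """Prefixes ending in DIFFERENT true states should allow different continuations."""
    return valid_next(prefix_1) != valid_next(prefix_2)

# Both routes end at intersection B, yet the toy model disagrees with itself about
# what comes next -- a failure of compression, i.e. no coherent underlying map.
print(sequence_compression("A->B", "X->B"))   # False
print(sequence_distinction("A->B", "A->C"))   # True
```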

These results show that LLMs can perform tasks accurately without really understanding them. The researchers argue that if we want LLMs to solve complex problems or make new discoveries, we need to rethink how we measure their understanding of the world.

The question of “whether LLMs are learning coherent world models is very important if we want to use these techniques to make new discoveries,” says research leader Ashesh Rambachan in an MIT press release.

Astrocytes play a crucial role in memory processes

Researchers at Baylor College of Medicine have studied how memories are stored and recalled in the brain.

Memory strongly depends on the activity of neurons, which are nerve cells responsible for sending and receiving signals in the brain. However, this research reveals that astrocytes, another type of brain cell known for their star-like shape, also play a crucial role in memory processes.

Neurons form networks called engrams, which are groups of cells that become active during learning and hold memories. When the brain learns something, these neurons light up, forming a memory trace.

This study shows that astrocytes, though not directly firing electrical signals like neurons, are also integral in memory storage and recall. They interact closely with neurons, both physically by being near them and functionally by influencing their activity. During learning, specific astrocytes start expressing a gene called c-Fos, which signals that these cells participate in the memory circuit at that moment.

A paper published in Nature describes the study’s methods and results.

Understanding memory and treating memory-related disorders

The researchers trained mice to feel fear in a specific situation, causing them to freeze when they encountered that context again. They developed new tools to observe and manipulate astrocytes. They found that activating these special astrocytes could trigger the memory of fear, making mice freeze even in a different, non-threatening environment.

The study also explored the role of a protein called NFIA in astrocytes. Astrocytes involved in memory have higher amounts of NFIA. After blocking the production of NFIA in these astrocytes, the mice could not recall the memory they were supposed to fear. This shows that NFIA is essential for memory recall in astrocytes.

This research suggests that memory involves a dynamic interaction between neurons and astrocytes.

This could lead to new approaches in understanding and treating memory-related disorders like Alzheimer’s, where there’s not enough memory recall, or PTSD, where there’s too much memory recall. By recognizing the role of astrocytes, scientists now have a more complex picture of how the brain handles memory, opening up new avenues for research and potential therapies.

The brain constantly refreshes and updates memories

Mount Sinai researchers have studied how the brain works with memories, showing how experiences from different times and situations can blend together. The brain doesn’t keep memories just as they are but updates and reshapes them with new information.

The brain can store memories “while flexibly updating them with new and relevant information,” says research leader Denise Cai in a Mount Sinai press release.

“This combination of stability and flexibility within neural ensembles is critical for us to make everyday predictions and decisions, and to interact with an ever-changing world,” she adds.

Neural ensembles are groups of neurons that work together to store memories. Memories are flexible: the brain updates or reorganizes them as we learn more.

The researchers detail the methods and results of this study in a paper published in Nature.

After a bad experience, like getting a shock, the brains of laboratory mice didn’t just replay that event. They also brought up memories from days before, linking these old and new experiences together. This process, called ensemble co-reactivation, helps in connecting memories over time.

A big step toward understanding memories

The brain consolidates and stabilizes a memory after an event. It goes over the event again, which helps fix the memory in place.

Surprisingly, this memory process happened more when the mice were awake than asleep, challenging the idea that sleep is the primary time for memory storage. This suggests that being awake might play a bigger role in how memories are connected than previously thought.

The researchers also found that negative experiences are more likely to be linked with past memories, suggesting that our brain might be trying to make sense of these events by relating them to what we already know. This could be why traumatic events can sometimes alter how we remember other, even unrelated, past experiences.

Cai emphasized that this study is a big step in understanding how our brains manage memory in a world where things constantly change. It shows that our memories are dynamic, continuously shaped by new experiences, which is essential for functioning in our daily lives.

AI for faster quantum mechanical simulations in material science

Researchers at KAIST have found a way to speed up quantum mechanical simulations with an artificial intelligence (AI) model called DeepSCF. The model was developed to make the calculations needed to understand how materials behave at the atomic level significantly faster.

Methods used for quantum mechanical simulations of materials (Credit: KAIST).

One of the key methods used in quantum mechanical simulations is Density Functional Theory (DFT), which calculates the properties of materials from first principles, meaning it starts from the basic laws of quantum physics without any empirical parameters. However, DFT calculations are computationally intensive because they involve solving complex equations to determine the electron density in three-dimensional space, a process known as the self-consistent field (SCF) method. This method recalculates the electron density iteratively until the results stabilize, which can take many iterations and makes the approach time-consuming, especially for larger systems.
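
The loop below is a deliberately crude, scalar caricature of that self-consistency cycle, not a real DFT code. It only shows why the calculation has to iterate until the density and the potential it generates agree:

```python
# Toy caricature of the SCF cycle: the density determines an effective potential,
# the potential determines a new density, and the loop repeats until they agree.
def effective_potential(density):
    return 1.0 + 0.3 * density          # toy dependence of potential on density

def density_from(potential):
    return 2.0 / potential              # toy dependence of density on potential

density = 1.0                           # initial guess
for iteration in range(200):
    updated = density_from(effective_potential(density))
    if abs(updated - density) < 1e-10:  # self-consistency reached
        break
    density = 0.5 * (density + updated) # mixing damps oscillations between steps

print(f"Self-consistent density {density:.6f} after {iteration + 1} iterations")
```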

The KAIST researchers introduced the DeepSCF model to overcome these limitations. DeepSCF uses a neural network similar to those used for 3D computer vision to learn the 3D patterns of electron density and chemical bonding.

The researchers describe the DeepSCF project and early results in a paper published in npj Computational Materials.

Instead of repeatedly solving equations through SCF, DeepSCF recognizes chemical bonding patterns, learning from a dataset of known molecular structures how electron density should look for new or altered materials.

AI predictions replace intensive computations

This innovation replaces the traditional, computationally heavy steps with AI predictions. The researchers trained the neural network with data from various organic molecules, enhancing the model’s ability to generalize across different chemical scenarios by applying random rotations and deformations to these structures. This allows the model to not only predict electron densities but also other quantum mechanical properties, with an accuracy comparable to conventional DFT calculations but much faster.
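
Reduced to a toy, the surrogate workflow looks like the sketch below: fit a cheap model on precomputed structure-density pairs, then predict the result for a new structure in a single pass. The descriptor, the “density,” and the linear fit are all stand-ins invented for illustration; the real DeepSCF uses a 3D convolutional network on voxelized densities, trained with the rotation and deformation augmentation described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def featurize(coords):
    # Toy descriptor: sorted pairwise distances between atoms.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return np.sort(d[np.triu_indices(len(coords), k=1)])

def expensive_reference(coords):
    # Stand-in for the quantity a full SCF calculation would produce.
    return 0.3 * featurize(coords).sum()

# Tiny fake training set of 4-atom "molecules" with precomputed reference values.
structures = [rng.normal(size=(4, 3)) for _ in range(100)]
X = np.array([featurize(s) for s in structures])
y = np.array([expensive_reference(s) for s in structures])

coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)          # "training" the surrogate

new_structure = rng.normal(size=(4, 3))
print("surrogate prediction:", featurize(new_structure) @ coeffs)
print("reference value:     ", expensive_reference(new_structure))
```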

According to the researchers, DeepSCF represents a leap forward in materials science and quantum chemistry, offering a tool that can quickly predict material properties without the need for extensive computational resources. DeepSCF opens up new possibilities for simulating larger and more complex material systems efficiently.

“Since quantum mechanical electron structure calculations underpin materials simulations across all scales,” says research leader Yong-Hoon Kim in a KAIST press release, “this research establishes a foundational principle for accelerating material calculations using artificial intelligence.”

Synthetic genes build complex DNA structures inside cells

Researchers from UCLA and the University of Rome Tor Vergata have created synthetic genes that mimic the function of genes in living cells.

These artificial genes can construct complex structures inside cells by following a step-by-step sequence, much as modular IKEA furniture is assembled from standard parts.

The process relies on self-assembly, in which small parts come together to form larger, structured items. Suitably programmed, simple building blocks can build complex biomolecular materials, and the same components can also take those materials apart.

“Our work suggests a way toward scaling up the complexity of biomolecular materials by taking advantage of the timing of molecular instructions for self-assembly,” says research co-leader Elisa Franco in a UCLA press release.

She adds that this approach can generate distinct materials that can “spontaneously ‘develop’ from the same finite set of parts by simply rewiring the elements that control the temporal order of assembly.”

The researchers describe the methods and results of this study in a paper published in Nature Communications.

The researchers used building blocks of DNA tiles and created a solution containing millions of these tiles. The DNA tiles interacted with one another to form micron-scale tubular structures. These structures only form in the presence of a specific RNA molecule that triggers the formation. A different RNA trigger molecule can also induce the disassembly of the same structures. 

Synthetic genes form or disassemble simple synthetic structures at specific times (Credit: UCLA).

Timing is crucial

The researchers programmed the synthetic genes to release these RNA triggers at specific times, thus controlling when and how the DNA structures assemble or disassemble.
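
As a cartoon of that timing logic, the toy timeline below lets programmed trigger events switch a notional “assembled fraction” of DNA tiles up or down. The times, rate constant, and kinetics are all invented and much simpler than the real system:

```python
# Toy timeline of trigger-controlled assembly (numbers and kinetics invented):
# programmed genes release an "assemble" or "disassemble" RNA trigger at set
# times, and the fraction of DNA tiles bound into tubes drifts toward the state
# set by the most recent trigger.
schedule = {30: "assemble", 120: "disassemble"}   # minute -> trigger released

assembled, target, rate = 0.0, 0.0, 0.05
for minute in range(181):
    if minute in schedule:                        # a synthetic gene fires
        target = 1.0 if schedule[minute] == "assemble" else 0.0
    assembled += rate * (target - assembled)      # simple first-order relaxation
    if minute % 30 == 0:
        print(f"t = {minute:3d} min   assembled fraction = {assembled:.2f}")
```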

This method could apply to other materials and biological systems where the timing of chemical signals is crucial.

By controlling these signals, the same DNA parts could make different structures or materials that change over time.

This research could lead to new developments in synthetic biology, potentially impacting medicine and biotechnology.

Modified AlphaFold combines AI with new experimental data

Proteins are essential in all living things, controlling everything from muscle movement to food digestion. They’re made up of amino acids, kind of like tiny beads on a string, which can come in many different orders and lengths, creating a huge number of possible proteins. How these amino acids are folded into a 3D shape determines what a protein does.

For a long time, figuring out how proteins fold was really hard and took a lot of time and money. Then came AlphaFold, an Artificial Intelligence (AI) tool by DeepMind, released in 2020. This tool uses neural networks to predict how proteins will fold, making it much easier to understand their functions and design new proteins for medical drugs.

This was such a big deal that Demis Hassabis and John Jumper, respectively CEO and senior research scientist at DeepMind, won half of the 2024 Nobel Prize in Chemistry.

However, AlphaFold had some limits: it struggled with very large proteins and with incorporating data from experiments. Researchers at Linköping University have modified AlphaFold, creating a tool called AF_unmasked.

The researchers describe the development of AF_unmasked and some early results of the project in a paper published in Nature Communications.

Comparison of AF_unmasked and standard AlphaFold predictions of the structure of a protein (Credit: Nature).

Endless possibilities for protein design

AF_unmasked can take in experimental data, even when partial or incomplete, and use it to refine protein structures and designs. The idea is to combine what we learn from experiments with what AI predicts, helping researchers design proteins more effectively.

“We’re giving a new type of input to AlphaFold,” says researcher Claudio Mirabello in a Linköping University press release. “The idea is to get the whole picture, both from experiments and neural networks, making it possible to build larger structures.”

“The possibilities for protein design are endless, only the imagination sets limits,” adds researcher Björn Wallner. “It’s possible to develop proteins for use both inside and outside the body.”

Mirabello explains that AlphaFold “encodes the evolutionary history of a protein inside the neural network.” He developed this idea with Wallner. “So, you could say that AlphaFold was based on our idea, and now we are building on AlphaFold.”

Nanoscale transistors overcome the limits of silicon semiconductor technology

Silicon transistors are vital components in electronic devices like smartphones and cars, but physical limits prevent them from operating effectively below a certain voltage. This caps the energy efficiency of devices, a growing concern as AI workloads drive up demand for computing.

To address this issue, researchers at MIT have engineered a new type of transistor using gallium antimonide and indium arsenide. These materials exploit quantum tunneling, in which electrons pass through energy barriers rather than over them, allowing the transistors to switch on and off with less energy and to operate efficiently at lower voltages.
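
The voltage floor in question is the thermionic (“Boltzmann”) limit on how sharply a conventional transistor can switch, which tunneling devices are able to beat. The quick calculation below reproduces that textbook number; it is standard device physics, not a result from the MIT paper:

```python
import math

# At room temperature a conventional (thermionic) transistor needs at least
# (kT/q)*ln(10) of gate voltage for every tenfold change in current, which is
# why supply voltages for ordinary silicon FETs cannot keep shrinking.
k_B = 1.380649e-23       # Boltzmann constant, J/K
q = 1.602176634e-19      # elementary charge, C
T = 300.0                # room temperature, K

swing_limit_mV = (k_B * T / q) * math.log(10) * 1000.0
print(f"Minimum subthreshold swing at {T:.0f} K: {swing_limit_mV:.1f} mV/decade")
# Tunneling transistors can switch more steeply than this because carriers pass
# through the barrier instead of being thermally excited over it.
```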

The researchers describe the methods and results of this project in a paper published in Nature Electronics.

The researchers built these new transistors on a very small scale, with nanowires just a few nanometers wide. According to the researchers, these are among the smallest transistors built to date.

At this size, quantum effects come into play, altering the electron behavior and material properties to allow for high current flow at low voltages. Additionally, by designing the transistors in three dimensions, they could fit more onto a chip, promising more powerful and energy-efficient electronics.

These transistors not only switch at lower voltages but also outperform similar devices in other tests.

The researchers are now focusing on improving the uniformity of these transistors across an entire chip. When working at the nanoscale, even tiny variations can affect performance. They are also experimenting with different transistor shapes to enhance consistency and efficiency.

Replacing silicon with more energy-efficient alternatives

This advancement could revolutionize electronics by replacing silicon with more energy-efficient alternatives. This is particularly important as the demand for faster and more efficient computing continues to grow, driven by technologies like AI.

“This is a technology with the potential to replace silicon, so you could use it with all the functions that silicon currently has, but with much better energy efficiency,” says researcher Yanjie Shao in an MIT press release.

“With conventional physics, there is only so far you can go… conceptually, it really is a breakthrough,” adds lead researcher Jesús del Alamo.

Soft polymer devices controlled by light wrap around neurons and cells

MIT researchers have developed “wearable” devices that can interact with individual cells within the body, similar to how smartwatches or fitness trackers work with human physiology.

These devices, made from a soft polymer called azobenzene, wrap around neurons’ axons and dendrites gently, without causing damage.

The technology behind these devices allows them to respond to light; when activated, they roll into shapes that conform to the complex structures of cells.

The manufacturing process for these tiny wearables begins by placing azobenzene onto a water-soluble layer, then molding the polymer into thousands of microstructures. After baking evaporates the solvents and etching clears away excess material, the water-soluble layer is dissolved, leaving the devices free-floating in water.

Researchers can control these devices with precision through light, allowing adjustments in how they wrap around cells. This capability opens up various applications.

One potential use is as synthetic myelin, providing artificial insulation for axons in conditions like multiple sclerosis where natural myelin is lost.

Another application involves modulating the activity of neurons by integrating the devices with materials capable of stimulating cells. This could prove beneficial in treating brain diseases by allowing for precise electrical activity modulation at a subcellular level.

Additionally, these devices could target specific cell types or subcellular regions, enhancing their utility in biological research and medical treatments.

Great potential for future research

The researchers describe the methods and results of this study in a paper published in Nature Communications Chemistry.

“To have intimate interfaces with these cells, the devices must be soft and able to conform to these complex structures,” says research leader Deblina Sarkar in an MIT press release. “That is the challenge we solved in this work. We were the first to show that azobenzene could even wrap around living cells.”

This study has shown compatibility with living cells, with tests on rat neurons indicating no damage from the application of these devices. Researchers envision a future where thousands of these devices roam the body, controlled noninvasively with light.

This could offer new ways to interface with neural systems at a minute scale, thus revolutionizing treatments for neurological conditions and deepening our understanding of cellular processes.

“The concept and platform technology we introduce here is like a founding stone that brings about immense possibilities for future research,” says Sarkar.

Mathematical analysis of causality in complex systems

Researchers at Caltech and MIT are developing mathematical methods to understand cause and effect in complex systems. Named synergistic-unique-redundant decomposition of causality (SURD), this approach helps determine how different variables contribute to a specific effect, especially in scenarios where multiple factors are involved. This method has broad applications across various fields including climate science, economics, medicine, and engineering.

The researchers developed SURD to tackle complex causal questions like how climate variables might influence weather patterns, what triggered the 2008 stock market downturn, or what factors lead to material failure.

“Previous methods will only tell you how much causality comes from one variable or another,” explains research leader Adrián Lozano-Durán in a Caltech press release. “What is unique about our method is its ability to capture the full picture of everything that is causing an effect.”

The researchers describe SURD in a paper published in Nature Communications.

According to the researchers, SURD enhances traditional causality analysis by not just identifying the magnitude of causality from individual variables but by categorizing it into unique, redundant, and synergistic causality.

Unique causality happens when a single variable alone causes an effect. Redundant causality involves multiple variables where each could produce the same effect, but only one is necessary. Synergistic causality occurs when two or more variables must work together to produce an effect, which none could achieve alone.
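
Two of these flavors are easy to see in a toy calculation. In the example below, an XOR relationship is purely synergistic (neither input alone tells you anything about the output, but the pair determines it completely), while a duplicated variable is purely redundant; unique causality is the familiar single-variable case. This is only an illustration of the intuition, computed with plain mutual information, not the SURD algorithm itself:

```python
import numpy as np
from itertools import product

def mutual_information(samples, source_cols, target_col):
    """Mutual information (bits) between the chosen source columns and the target,
    estimated from a list of equally likely samples."""
    n = len(samples)
    joint = {}
    for row in samples:
        key = (tuple(row[c] for c in source_cols), row[target_col])
        joint[key] = joint.get(key, 0) + 1 / n
    p_x, p_y = {}, {}
    for (x, y), p in joint.items():
        p_x[x] = p_x.get(x, 0) + p
        p_y[y] = p_y.get(y, 0) + p
    return sum(p * np.log2(p / (p_x[x] * p_y[y])) for (x, y), p in joint.items())

# Synergistic: T = XOR(A, B). Neither input alone is informative; together they
# determine T completely.
xor_samples = [(a, b, a ^ b) for a, b in product([0, 1], repeat=2)]
print("XOR :  I(A;T) =", round(mutual_information(xor_samples, [0], 2), 3),
      "  I(A,B;T) =", round(mutual_information(xor_samples, [0, 1], 2), 3))

# Redundant: B is a copy of A and T = A. Either variable alone already carries
# all the information; adding the other contributes nothing new.
copy_samples = [(a, a, a) for a in (0, 1)]
print("COPY:  I(A;T) =", round(mutual_information(copy_samples, [0], 2), 3),
      "  I(A,B;T) =", round(mutual_information(copy_samples, [0, 1], 2), 3))
```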

According to the researchers, SURD prevents the misidentification of causal relationships, offering a clearer, more comprehensive picture of how variables interact to drive outcomes.

Multidisciplinary applications

The researchers validated SURD by testing it on a variety of scenarios with known outcomes, demonstrating that the method consistently provides accurate results without mixing up different types of causality.

One practical application of SURD involved studying turbulence near a wall. The method helped clarify how flow velocities at different distances from the wall influence one another, showcasing how the tool can pinpoint which variables are truly influential. This kind of insight could be crucial for optimizing designs in aerospace by identifying the real causes of issues like increased drag on an aircraft.

More broadly, causal inference “is very multidisciplinary and has the potential to drive progress across many fields,” says researcher Álvaro Martínez-Sánchez.
