Generative AI for genomes

A study from UC Berkeley researchers published in Science introduces Evo, a machine learning model designed to work with DNA, RNA, and proteins.

DNA is like a long string made up of four different letters, or nucleotides (A, T, C, G), that carry all the instructions needed for life. This DNA sequence can change, and these changes can help organisms adapt to new conditions, driving what we call evolution.

Evo stands out because it can predict how changes in DNA might affect cells and even design new DNA sequences to change how cells work. This could be very useful for creating new treatments for diseases.

Evo seems to be a promising example of generative artificial intelligence (AI) for DNA.

A related Perspective, also published in Science, notes that “The ability to predict the effects of mutations across all layers of regulation in the cell and to design DNA sequences to manipulate cell function would have tremendous diagnostic and therapeutic implications for disease.”

The researchers explain that Evo is not just another machine learning model; it’s a “large-scale genomic foundation model” with 7 billion parameters.

Evo can evaluate and design DNA changes

Evo was trained on a huge dataset containing 2.7 million different microbial genomes. Microbial genomes are the complete set of DNA in bacteria and other tiny organisms. This training allows Evo to be very good at both predicting what happens when DNA changes (like mutations in bacteria) and creating new DNA sequences. For example, it can predict how a change in DNA might affect proteins or how genes are turned on or off, which is known as gene regulation.
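To make this concrete, here is a minimal sketch of how a genomic language model can score a mutation "zero-shot": compare the likelihood the model assigns to the mutated sequence with the likelihood of the original. The toy trigram model below is a stand-in for illustration only; Evo's real architecture is a 7-billion-parameter deep network with a very different interface.

```python
from collections import defaultdict
import math

def train_trigram(seq):
    """Count, for each two-base context, how often each next base follows."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(seq) - 2):
        counts[seq[i:i+2]][seq[i+2]] += 1
    return counts

def log_likelihood(seq, counts):
    """Log-probability of a sequence under the trigram model, add-one smoothed."""
    ll = 0.0
    for i in range(len(seq) - 2):
        ctx, nxt = seq[i:i+2], seq[i+2]
        total = sum(counts[ctx].values()) + 4   # +4 for the A/C/G/T alphabet
        ll += math.log((counts[ctx][nxt] + 1) / total)
    return ll

genome = "ATGGCGTACGTTAGCATGGCGTACGTTAGC" * 10  # toy training "genome"
model = train_trigram(genome)

wild_type = "ATGGCGTACGTTAGC"
mutant = wild_type[:6] + "A" + wild_type[7:]   # single T -> A substitution

# A more negative change means the model finds the mutant less "natural",
# a proxy for how disruptive the mutation is likely to be.
delta = log_likelihood(mutant, model) - log_likelihood(wild_type, model)
print(f"log-likelihood change from mutation: {delta:.2f}")
```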

One of the most impressive things about Evo is its ability to generate DNA sequences that are very long – over 1 million bases (a base is one of the nucleotides, A, T, C, or G). This is much longer than what previous models could do, making Evo capable of handling tasks at a whole-genome scale.
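Generation works the same way in miniature: sample the next base from the model's predicted distribution, append it, and repeat. The sketch below reuses the toy trigram model from the example above; Evo's actual sampling runs a far larger network over far longer contexts to reach million-base outputs.

```python
import random

def generate(counts, length, seed="AT"):
    """Autoregressively sample a DNA sequence from the trigram model."""
    seq = seed
    while len(seq) < length:
        ctx = seq[-2:]
        weights = [counts[ctx][b] + 1 for b in "ACGT"]  # smoothed counts
        seq += random.choices("ACGT", weights=weights)[0]
    return seq

print(generate(model, 60))  # Evo-scale generation runs to 1,000,000+ bases
```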

The study suggests that future versions of models like Evo might use even more diverse data, including from humans, to understand how very distant parts of DNA interact with each other over large distances within the genome. This could lead to even more advanced tools for biology and medicine by capturing the full complexity of life’s genetic blueprint.


Robot vision via radio waves

Robots often struggle to see and navigate in harsh conditions like fog or smoke because their usual vision tools, like cameras or LiDAR, depend on light. LiDAR works by sending out laser light and measuring the time it takes for that light to bounce back, but it doesn’t work well when visibility is poor.
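The time-of-flight principle behind LiDAR (and radar) is simple enough to compute directly; here is a quick illustration with made-up numbers:

```python
C = 299_792_458.0  # speed of light, and of radio waves, in m/s

def echo_range(round_trip_seconds):
    """Distance to a target from the round-trip time of a reflected pulse."""
    return C * round_trip_seconds / 2.0  # halved: the pulse goes out and back

# A pulse returning after ~66.7 nanoseconds bounced off something ~10 m away.
print(f"{echo_range(66.7e-9):.2f} m")
```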

In nature, animals like bats use sound echoes to find their way in the dark, and sharks sense electrical fields to locate prey. These examples show that there are other ways to “see” besides light.

Researchers at the University of Pennsylvania have developed PanoRadar, which uses radio waves instead. Radio waves have much longer wavelengths than light, so they can pass through smoke, fog, and even some solid materials. Traditional radar, which also uses radio waves, can see through obstacles but produces only rough images. PanoRadar takes this a step further by producing detailed 3D images.

Very clear pictures without light

PanoRadar works like a lighthouse, spinning around and sending out radio waves that bounce back carrying information about the surroundings. Unlike conventional radar, PanoRadar uses signal-processing algorithms and AI to combine these reflections from many angles at once. The result is a very clear picture, almost as good as what you’d get from LiDAR, but without needing light.
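Comparing wavelengths shows why radio imaging is normally coarser than LiDAR, and why heavy processing is needed to close the gap (the frequencies below are typical values, not PanoRadar's published specifications):

```python
C = 299_792_458.0  # m/s

def wavelength(freq_hz):
    return C / freq_hz

radar_wl = wavelength(77e9)  # 77 GHz millimeter-wave radar (assumed typical)
lidar_wl = 905e-9            # 905 nm near-infrared LiDAR laser (assumed typical)

print(f"radar: {radar_wl * 1000:.1f} mm, lidar: {lidar_wl * 1e9:.0f} nm")
print(f"radar wavelength ~{radar_wl / lidar_wl:,.0f}x longer than lidar's")
```

That much longer wavelength is also what lets radio waves slip past the smoke particles and thin obstacles that scatter or block light.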

One challenge was keeping the images sharp even while the robot moves. Another was teaching PanoRadar to recognize patterns in indoor spaces so it can understand what it is seeing, much like a child learns to recognize objects.

PanoRadar has been tested in real buildings, where it could see through smoke and even detect things LiDAR usually misses, such as glass walls. Its ability to pick out finer details, including people, makes it useful for tasks where seeing clearly through tough conditions could save lives, such as rescue missions or self-driving cars.

The future might see PanoRadar working alongside other sensors, like cameras and LiDAR, to make even smarter robots that can tackle any environment by using the best features of each sensing method.


Test of a quantum internet that transmits information as qubits

Scientists from QuTech (a collaboration between TU Delft and TNO) have connected quantum computers over a distance of 25 kilometers between Delft and The Hague, using regular internet fiber.

This connection between quantum computers is a step towards a quantum internet that transmits information as qubits.

This quantum internet could do things our current internet can’t, like making unbreakable codes for secure data sharing or connecting faraway quantum computers to work together, providing privacy and enhanced computing power.

“The distance over which we create quantum entanglement in this project, via 25 km of deployed underground fiber, is a record for quantum processors,” says research leader Ronald Hanson in a TU Delft press release. He adds that this is the first successful test establishing a quantum link between quantum processors in different cities.

Cartographic layout of the distant quantum link and the route of the deployed fiber bundle, with similar quantum processor nodes in Delft and The Hague (Credit: Science Advances).
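The resource being distributed is an entangled state shared between the two processors. In textbook notation (the generic form, not the specific state reported in the paper), a maximally entangled Bell pair between a qubit A in Delft and a qubit B in The Hague reads:

```latex
\left|\Phi^{+}\right\rangle_{AB} = \frac{1}{\sqrt{2}}
  \left( |0\rangle_A |0\rangle_B + |1\rangle_A |1\rangle_B \right)
```

Measuring either qubit gives a random outcome, but the two outcomes are perfectly correlated, and this correlation is the basic primitive behind unbreakable encryption keys and linking distant quantum computers.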

The scientists describe the methods and results of this project in a paper published in Science Advances.

Making this work outside the lab was challenging. The scientists had to preserve the fragile quantum signals carried by photons, which is tricky over long distances. They used a method that precisely stabilizes the optical path through the fiber.

“The link needed to be stable well within the wavelength of the photons (smaller than a micrometer) over 25 kilometer of optical fiber,” says researcher Arian Stolk. “That challenge compares to keeping the distance between the earth and the moon constant with the accuracy of only a few millimeter.” 
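Stolk's comparison checks out with back-of-the-envelope arithmetic (a quick sanity check, not a calculation from the paper):

```python
# "Well within the wavelength" means a fraction of a micrometer; take 0.2 um.
fiber_length = 25e3    # meters of deployed fiber
stability = 0.2e-6     # meters; illustrative fraction of a photon wavelength
relative = stability / fiber_length  # ~8e-12 relative stability

earth_moon = 3.844e8   # mean Earth-Moon distance in meters
print(f"{relative * earth_moon * 1000:.1f} mm over the Earth-Moon distance")
```

which prints about 3 mm, matching the “few millimeter” accuracy in the quote.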

Toward a quantum internet for Europe

This project required a mix of academic knowledge and practical industry help. Various experts contributed, from developing frequency converters to providing the necessary fiber infrastructure. This collaboration shows how important partnerships between universities and companies are for advancing technology.

The success of this experiment lays a strong foundation for Europe’s quantum internet ambitions. It not only demonstrates the feasibility of quantum networking over metropolitan areas but also shows how current internet infrastructure can be adapted for quantum use. This work is a stepping stone towards a real quantum internet, potentially revolutionizing how we communicate and compute in the future.


Vortices of light could transport huge amounts of data

Information in today’s digital world is often sent through light beams in optic cables. As we need to send more data, finding new ways to pack information into light is crucial. Researchers at Aalto University have developed a new way to do this by creating light vortices, which are like tiny, controlled hurricanes of light that can carry data.

Light vortices are beams with a dark center surrounded by a bright ring of light. This happens because the electric field of the light points in different directions around the center, much like winds circling a hurricane’s eye. The researchers used metal nanoparticles that interact with the light’s electric field.
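In the standard textbook description of an optical vortex (the generic form, not this paper's specific field profiles), the beam carries a phase that winds around its axis:

```latex
E(r, \varphi) \propto A(r)\, e^{i \ell \varphi}, \qquad A(0) = 0
```

The integer \ell, called the topological charge, counts how many times the phase twists per revolution. Because the phase is undefined on the axis, the amplitude must vanish there, producing the dark core; beams with different values of \ell can in principle serve as independent data channels.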

A paper published in Nature Communications describes the methods and results of this study.

The researchers arranged 100,000 metallic nanoparticles in a complex pattern called a quasicrystal. Quasicrystals are patterns that are not perfectly ordered like a grid but also not entirely random, sitting somewhere between order and chaos.

The trick was to place these particles in areas where the electric field was weakest. By doing this, they could isolate and enhance the parts of the light field they wanted to use for creating special vortices.

Potential for delivering more information through fiber cables

This method could revolutionize how we send information through light. For instance, by sending these light vortices through fiber optic cables, one could potentially fit more data into the same space compared to current methods.

“We could, for example, send these vortices down optic fibre cables and unpack them at the destination,” says researcher Kristian Arjas in a press release issued by Aalto University. “This would allow us to store our information into a much smaller space and transmit much more information at once. An optimistic guess for how much would be 8 to 16 times the information we can now deliver over optic fibre.”

However, turning this discovery into something usable in everyday technology might still take some years because it needs a lot more engineering work.


DeepMind makes AlphaFold3 open source

Google DeepMind has made AlphaFold3, an AI model for predicting protein structures, available as open source for non-commercial use, Nature News reports.

AlphaFold predicts how proteins fold into their three-dimensional shapes, which is crucial for understanding their function.

This decision came after some controversy over the initial release of the model without its source code. AlphaFold3 is an upgrade from its earlier versions, like AlphaFold2, because it can predict not only protein structures but also how proteins interact with other molecules, which is vital for drug discovery.

Previously, DeepMind made AlphaFold3 accessible only through a web server, which limited the types of predictions users could make, particularly regarding how proteins interact with drugs.

Scientists criticized this approach because it hindered reproducibility – the principle that scientific results should be replicable by others to verify their validity.

Scientists argued that for research to be trustworthy, the underlying code should be available. This led DeepMind to promise the release of the code within six months.

The importance of open source code for AI

Now, with the code released, researchers can explore how proteins might interact with potential drugs, aiding in drug discovery.

However, the model weights, which are the learned parameters that allow the AI to make predictions, are still only available to academic users upon request.

The push for openness in AI models stems from the scientific community’s desire for transparency and verifiability in research. This openness allows for further innovation and application of the technology in unexpected ways. In fact, researchers used previous versions of AlphaFold in creative ways like designing new proteins for medical targets.

“We’re very excited to see what people do with this,” says AlphaFold project leader John Jumper, as reported by Nature News. Jumper won the 2024 Nobel Prize in Chemistry with DeepMind CEO Demis Hassabis for their work on AlphaFold2. “People will use it in weird ways,” he adds. “Sometimes it will fail and sometimes it will succeed.”


Surgical robot trained with videos performs as well as human surgeons

A robot has learned to perform surgery by watching videos of experienced human surgeons. This method, called imitation learning, allows the robot to copy the actions it sees without needing detailed programming for each step of the surgery. This development makes surgical robots more autonomous, meaning they can work on their own, reducing the need for direct human control.

The project was led by researchers from Johns Hopkins University and included Stanford University researchers. The researchers trained the robot to do three basic surgery tasks: moving a needle, lifting tissue, and stitching (video). The robot learned these skills by watching videos from wrist cameras, capturing how surgeons perform these tasks.

Imitation learning

Imitation learning is a type of machine learning where a system learns by observing examples. Here, the surgical videos act as lessons or demonstrations for the robot.

The researchers used a model similar to the one behind ChatGPT, but instead of processing language, this model works with kinematics: the mathematical description of movements such as bending and rotating.
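Below is a minimal sketch of the core idea, behavior cloning: train a network to map observations to the expert's actions. The dimensions and data are made up, and the real system uses a transformer consuming wrist-camera video and tool kinematics, which this toy stand-in does not reproduce.

```python
import torch
import torch.nn as nn

# Toy behavior cloning: learn to map observations to expert actions.
policy = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 8))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Stand-in "demonstrations": observation -> expert-action pairs.
observations = torch.randn(256, 16)
expert_actions = torch.randn(256, 8)

for step in range(100):
    predicted = policy(observations)
    loss = nn.functional.mse_loss(predicted, expert_actions)  # imitate expert
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final imitation loss: {loss.item():.4f}")
```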

With imitation learning, it is possible to train a robot much faster, in just a few days, to perform various surgeries.

The researchers trained the robot on hundreds of video clips. The robot learned not just to mimic but to adapt and perform tasks even when things didn’t go as planned, like if it dropped a needle.

This learning approach allowed the robot to perform these tasks as well as human surgeons, showing that it could adapt to new situations or errors during the surgery.

This breakthrough is significant because traditionally, programming a robot for surgery would take years for even one procedure. This could lead to fewer mistakes in surgery, more precise operations, and quicker training for new procedures.

The researchers presented their work at the Conference on Robot Learning, highlighting its importance in the fields of robotics and artificial intelligence for medical applications.

The team continues to work on making the robot capable of performing entire surgeries, not just individual tasks.


DNA origami technology for efficient, sensitive biosensors

Scientists at Ludwig-Maximilians-Universität München (LMU) have found ways to make biosensors more versatile and quicker to develop.

Biosensors are devices used in medical research and diagnostics to detect substances in the body. However, these sensors typically need to be created specifically for each new use, which can be time-consuming and costly.

The scientists describe the methods and results of this study in a paper published in Nature Nanotechnology.

The scientists created a sensor using DNA origami technology, which permits building a tiny, flexible scaffold made from DNA. This scaffold has two arms that can move like a hinge. Each arm has a special tag that glows. The scientists used a method called fluorescence resonance energy transfer (FRET) to see how far apart these tags are.

When the arms are close together, the sensor is in a closed state; when they spread apart, it opens up to a 90-degree angle. This opening and closing changes the light (fluorescence) the sensor gives off, making it easier to measure with high accuracy.
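FRET efficiency falls off steeply with the distance between the two tags, which is what turns the hinge angle into a measurable light signal. A quick illustration of the standard formula (the Förster radius below is a typical value, not the one from this study):

```python
def fret_efficiency(r_nm, r0_nm=6.0):
    """Energy-transfer efficiency between two dyes separated by r_nm.
    r0_nm is the Forster radius (50% efficiency); ~6 nm is typical."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

# Closed sensor (tags close together) vs. open sensor (arms spread apart).
for r in (3.0, 6.0, 12.0):
    print(f"separation {r:4.1f} nm -> FRET efficiency {fret_efficiency(r):.2f}")
```

Doubling the separation from 6 nm to 12 nm drops the efficiency from 50% to under 2%, so even small movements of the hinge produce large, easily read changes in fluorescence.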

Enhanced sensitivity for new applications

The DNA origami can be set up with places where different molecules, like DNA, proteins, or antibodies, can attach. By changing how these molecules attach to the sensor, scientists can make it respond in different ways to different substances. This means the sensor can be fine-tuned for sensitivity without changing how it binds to the target molecule.

This ability to adjust the sensor’s sensitivity easily is what makes it special.

“It’s relatively easy to design the origami such that several molecular interactions between target molecule and sensor are queried simultaneously,” says research leader Philip Tinnefeld in an LMU press release. “This flexibility is a major advantage of our system.”

The scientists believe that an improved version of this sensor might be able to detect when to release drugs based on certain conditions inside the body. This innovation could speed up the creation of new diagnostic and therapeutic tools.


Scientist successfully treats her own cancer with viruses

Beata Halassy, a virologist at the University of Zagreb and a cancer patient, decided to treat herself with oncolytic virotherapy (OVT), Nature News reports.

OVT uses viruses to attack cancer cells directly and also stimulate the body’s immune system to fight the cancer.

Halassy chose to inject her tumor with two viruses: first, a measles virus commonly used in vaccines, and then a vesicular stomatitis virus (VSV), which can cause mild flu-like symptoms but is generally safe.

Her approach was to target her tumor with these viruses consecutively, a method not typically used in standard OVT trials, which are more focused on late-stage cancers like metastatic melanoma.

A colleague administered the treatment over two months, using viruses that Halassy herself prepared in her lab.

Halassy’s self-treatment worked; the tumor not only shrank but also became less invasive, allowing for an easier surgical removal.

After the surgery, Halassy continued with a standard cancer treatment drug called trastuzumab. She has now been cancer-free for four years.

Halassy believes her experience provides valuable insights into OVT’s potential, which hasn’t been officially approved for breast cancer treatment anywhere in the world yet.

Pointless “ethical” concerns

Halassy faced rejections from many journals when she tried to publish her findings, primarily due to the self-experimentation aspect. Eventually she published her work in Vaccines.

The case of Halassy has raised significant ethical questions regarding self-experimentation. While Halassy’s case showed success, there’s concern that publicizing such methods might encourage others to try similar unproven treatments, potentially leading them away from established medical protocols. This could be risky, especially since cancer patients might be desperate for alternatives.

Halassy’s story has sparked a debate on how scientific knowledge gained from self-experimentation should be handled ethically. While some worry about the broader implications on patient care and safety, others argue for the importance of sharing such data for potential advancements.

It can be argued that “ethical” objections to self-administered therapies that have saved lives and show potential to save more lives are pointless. If biohacking saves lives, then it is good.


Kidney and nerve cells learn and form memories like neurons

Scientists at New York University (NYU) have found that not just brain cells but cells from other parts of the body can also keep memories. This discovery could change how we think about learning and help treat memory issues.

“Learning and memory are generally associated with brains and brain cells alone, but our study shows that other cells in the body can learn and form memories, too,” says researcher Nikolay Kukushkin in an NYU press release.

The scientists describe the methods and results of this study in a paper published in Nature Communications.

Neurons in the brain change their connections based on what we learn, and this process works better when we learn things over time rather than all at once.

The scientists looked at cells from nerve tissue and kidney tissue to see if they could also learn and remember. They gave these cells different patterns of chemicals, similar to the signals neurons in the brain receive when we learn.

When these cells got these chemical signals in a pattern that looked like learning over time, they turned on a memory gene. This gene is the same one that brain cells use when they are forming a memory.

The entire body learns and remembers

To watch this happen, the scientists made these cells glow when the memory gene was active. They found that when the chemical signals came in spaced intervals, the memory gene stayed on longer and stronger than when the signals were given all at once. This showed that these non-brain cells could also benefit from spaced learning, just like brain cells do.
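A toy model (purely illustrative, not the study's analysis) of why spaced signals can outperform massed ones: if each chemical pulse drives a pathway that saturates and then slowly recovers, pulses delivered back-to-back mostly hit a saturated pathway, while spaced pulses each land on a recovered one.

```python
import math

def total_response(pulse_times, recovery_tau=30.0):
    """Sum each pulse's contribution; a pulse contributes less when the
    pathway hasn't recovered from the previous one (simple adaptation)."""
    response, adaptation, last_t = 0.0, 0.0, None
    for t in pulse_times:
        if last_t is not None:
            adaptation *= math.exp(-(t - last_t) / recovery_tau)  # recovery
        response += 1.0 - adaptation   # a saturated pathway responds less
        adaptation = 1.0               # each pulse saturates the pathway
        last_t = t
    return response

massed = [0, 1, 2, 3]     # four pulses back to back (minutes)
spaced = [0, 30, 60, 90]  # the same four pulses, spread out
print(f"massed: {total_response(massed):.2f}  "
      f"spaced: {total_response(spaced):.2f}")
```

In this toy, the spaced schedule produces nearly three times the total response from the same four pulses, echoing the spacing effect the study observed in gene activation.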

This suggests that our entire body is capable of learning and remembering, not just our brains, which might change medical treatments and educational methods in the future.

“This discovery opens new doors for understanding how memory works and could lead to better ways to enhance learning and treat memory problems,” observes Kukushkin.

Kukushkin posted a thread to X with explanations and comments on this research.


New test flight of Starship scheduled for November 18

The sixth test flight for Elon Musk’s rocket Starship is set to happen on November 18. Viewers will be able to watch it live on SpaceX’s website or through the X platform.

During the last test, the rocket’s first stage, the Super Heavy booster, came back to the launch site, where it was grabbed by the launch tower’s “chopstick” arms. The upper stage, the Starship itself, flew well, splashing down softly in the Indian Ocean after showing it could handle the journey through space and back better than before.

For this next flight, they’ll try to do this again, but with some new tests.

They’ll reignite one of Starship’s Raptor engines in space for the first time. This engine test is important for controlling Starship’s path back to Earth from orbit. They also plan to test how well the rocket handles the heat of reentry into Earth’s atmosphere, by removing some of the heat shield tiles (special tiles that protect the rocket from burning up) to see how this affects performance.

They’ll also change how the Starship comes back to Earth, making it fly at a steeper angle to see how the flaps (like wings) control the descent under tougher conditions. They’re doing all this during the day so everyone can see it better.

The booster will also carry new hardware to make it stronger and more reliable. If everything goes well, it will be caught again by the tower. If not, it will splash down safely in the Gulf of Mexico. The test includes making sure the rocket and the tower are in good condition for this catch.

To the Moon and to Mars

Overall, this test is about learning more about making the Starship reusable for future missions, including trips to Mars or back to the Moon, by testing new ways to control and protect the rocket during its journey through space and back.

NASA plans to use Starship to land Artemis 3 astronauts on the Moon, and the role of Starship in NASA programs is likely to grow. Space experts and enthusiasts are discussing test missions to Mars (unpaywalled copy). NASA should “support SpaceX on Mars while focusing NASA on fixing all the problems with Artemis,” said space policy analyst Greg Autry.
