Surgical robot trained with videos performs as well as human surgeons

A robot has learned to perform surgery by watching videos of experienced human surgeons. This method, called imitation learning, lets the robot copy the actions it sees without detailed programming for each step of the surgery. The development makes surgical robots more autonomous, reducing the need for direct human control.

The project was led by researchers at Johns Hopkins University and included researchers from Stanford University. The team trained the robot to perform three basic surgical tasks: moving a needle, lifting tissue, and stitching (video). The robot learned these skills from videos recorded by wrist-mounted cameras that capture how surgeons perform the tasks.

Imitation learning

Imitation learning is a type of machine learning where a system learns by observing examples. Here, the surgical videos serve as the lessons, or demonstrations, the robot learns from.

The researchers used a model similar to the one behind ChatGPT, but instead of processing language, it processes kinematics: the mathematical description of movements such as bending and rotating.
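As a rough illustration, such a policy can be sketched as a small transformer that reads a window of past observations and predicts the next kinematic action, much as a language model predicts the next word. This is a minimal sketch under our own assumptions (the layer sizes and the 7-dimensional action space are illustrative, not the published architecture):

```python
import torch
import torch.nn as nn

class KinematicsPolicy(nn.Module):
    """Toy transformer policy: past observations in, next kinematic action out."""
    def __init__(self, obs_dim=512, action_dim=7, d_model=256, n_heads=8, n_layers=4):
        super().__init__()
        self.embed = nn.Linear(obs_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, action_dim)  # e.g. joint angles plus gripper

    def forward(self, obs_seq):                 # obs_seq: (batch, time, obs_dim)
        h = self.encoder(self.embed(obs_seq))
        return self.head(h[:, -1])              # predict the next action

# Imitation learning is then plain supervised regression onto the
# surgeon's demonstrated actions, e.g.:
#   loss = torch.nn.functional.mse_loss(policy(observations), demonstrated_actions)
```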

With imitation learning, a robot can be trained in just a few days to perform a variety of surgical procedures.

The researchers trained the robot on hundreds of video clips. The robot learned not just to mimic but to adapt and perform tasks even when things didn’t go as planned, like if it dropped a needle.

This learning approach allowed the robot to perform these tasks as well as human surgeons, showing that it could adapt to new situations or errors during the surgery.

This breakthrough is significant because hand-programming a surgical robot has traditionally taken years for even a single procedure. Imitation learning could lead to fewer surgical mistakes, more precise operations, and quicker training for new procedures.

The researchers presented their work at the Conference on Robot Learning, highlighting its importance in the fields of robotics and artificial intelligence for medical applications.

The team continues to work on making the robot capable of performing entire surgeries, not just individual tasks.


DNA origami technology for efficient, sensitive biosensors

Scientists at Ludwig-Maximilians-Universität München (LMU) have found ways to make biosensors more versatile and quicker to develop.

Biosensors are devices used in medical research and diagnostics to detect substances in the body. However, these sensors typically need to be created specifically for each new use, which can be time-consuming and costly.

The scientists describe the methods and results of this study in a paper published in Nature Nanotechnology.

The scientists created a sensor using DNA origami, a technology for building a tiny, flexible scaffold out of DNA. The scaffold has two arms that can move like a hinge, and each arm carries a fluorescent tag. The scientists used fluorescence resonance energy transfer (FRET) to measure how far apart the tags are.

When the arms are close together, the sensor is in a closed state; when they spread apart, it opens up to a 90-degree angle. This opening and closing change the fluorescence the sensor gives off, which can be measured with high accuracy.
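The physics behind the readout is well established: FRET efficiency falls off with the sixth power of the distance between the tags, so opening the hinge produces a large, easily measured change in fluorescence. A minimal sketch (the R0 value below is a typical one, not the one from the LMU study):

```python
def fret_efficiency(r_nm: float, r0_nm: float = 6.0) -> float:
    """FRET transfer efficiency E = 1 / (1 + (r / R0)^6), where R0 is the
    dye pair's 50%-transfer distance (dye-dependent; ~6 nm is typical)."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

print(fret_efficiency(3.0))   # closed hinge, tags close together: E ~ 0.98
print(fret_efficiency(9.0))   # open 90-degree state:              E ~ 0.08
```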

Enhanced sensitivity for new applications

The DNA origami can be set up with places where different molecules, like DNA, proteins, or antibodies, can attach. By changing how these molecules attach to the sensor, scientists can make it respond in different ways to different substances. This means the sensor can be fine-tuned for sensitivity without changing how it binds to the target molecule.

This ability to adjust the sensor’s sensitivity easily is what makes it special.

“It’s relatively easy to design the origami such that several molecular interactions between target molecule and sensor are queried simultaneously,” says research leader Philip Tinnefeld in an LMU press release. “This flexibility is a major advantage of our system.”

The scientists believe that, with further development, the sensor could be used to trigger drug release in response to specific conditions inside the body. This innovation could speed up the creation of new diagnostic and therapeutic tools.


Scientist successfully treats her own cancer with viruses

Beata Halassy, a virologist at the University of Zagreb and herself a cancer patient, decided to treat her own tumor with oncolytic virotherapy (OVT), Nature News reports.

OVT uses viruses to attack cancer cells directly and also stimulate the body’s immune system to fight the cancer.

Halassy chose to inject her tumor with two viruses: first, a measles virus commonly used in vaccines, and then a vesicular stomatitis virus (VSV), which can cause mild flu-like symptoms but is generally safe.

Her approach was to target her tumor with these viruses consecutively, a method not typically used in standard OVT trials, which are more focused on late-stage cancers like metastatic melanoma.

A colleague administered the treatment over two months, using viruses that Halassy herself had prepared in her lab.

Halassy’s self-treatment worked; the tumor not only shrank but also became less invasive, allowing for an easier surgical removal.

After the surgery, Halassy continued with a standard cancer treatment drug called trastuzumab. She has now been cancer-free for four years.

Halassy believes her experience provides valuable insights into OVT’s potential, which hasn’t been officially approved for breast cancer treatment anywhere in the world yet.

Pointless “ethical” concerns

Halassy faced rejections from many journals when she tried to publish her findings, primarily because of the self-experimentation aspect. Eventually, she published her work in Vaccines.

Halassy’s case has raised significant ethical questions about self-experimentation. Although her treatment succeeded, there is concern that publicizing such methods might encourage others to try similar unproven treatments, potentially steering them away from established medical protocols. That could be risky, especially since cancer patients may be desperate for alternatives.

Halassy’s story has sparked a debate on how scientific knowledge gained from self-experimentation should be handled ethically. While some worry about the broader implications on patient care and safety, others argue for the importance of sharing such data for potential advancements.

It can be argued that “ethical” objections to self-administered therapies that have saved lives and show potential to save more lives are pointless. If biohacking saves lives, then it is good.


Kidney and nerve cells learn and form memories like neurons

Scientists at New York University (NYU) have found that not just brain cells but cells from other parts of the body can also keep memories. This discovery could change how we think about learning and help treat memory issues.

“Learning and memory are generally associated with brains and brain cells alone, but our study shows that other cells in the body can learn and form memories, too,” says researcher Nikolay Kukushkin in a NYU press release.

The scientists describe the methods and results of this study in a paper published in Nature Communications.

Neurons in the brain change their connections based on what we learn, and this process works better when we learn things over time rather than all at once.

The scientists examined cells from nerve tissue and kidney tissue to see whether these cells could also learn and remember. They exposed the cells to different patterns of chemical signals, similar to the signals brain neurons receive when we learn.

When the cells received the chemical signals in a spaced pattern resembling learning over time, they switched on a memory gene: the same gene that brain cells use when they form a memory.

The entire body learns and remembers

To watch this happen, the scientists made these cells glow when the memory gene was active. They found that when the chemical signals came in spaced intervals, the memory gene stayed on longer and stronger than when the signals were given all at once. This showed that these non-brain cells could also benefit from spaced learning, just like brain cells do.
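The spacing effect the study reports can be illustrated with a deliberately simple toy model. This is our illustration, not the paper's actual kinetics: if the signaling pathway desensitizes during stimulation and recovers at rest, spaced pulses drive a slow reporter gene harder than one massed pulse of the same total duration.

```python
import numpy as np

def reporter_level(pulse_starts, pulse_len, dt=0.01, t_end=180.0):
    """Toy model (times in minutes): a pathway resource depletes while
    stimulated (tau ~2 min) and recovers at rest (tau ~10 min); a slow
    'memory gene' reporter integrates the effective signal and decays
    slowly (tau ~60 min)."""
    resource, gene = 1.0, 0.0
    for t in np.arange(0.0, t_end, dt):
        stimulated = any(s <= t < s + pulse_len for s in pulse_starts)
        if stimulated:
            gene += dt * resource                     # reporter driven by signal
            resource -= dt * resource / 2.0           # pathway desensitizes
        else:
            resource += dt * (1.0 - resource) / 10.0  # pathway recovers
        gene -= dt * gene / 60.0                      # reporter slowly decays
    return gene

massed = reporter_level([0.0], pulse_len=12.0)         # one 12-minute pulse
spaced = reporter_level([0.0, 20.0, 40.0, 60.0], 3.0)  # four spaced 3-minute pulses
print(massed, spaced)  # the spaced schedule ends higher, as in the experiments
```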

The findings suggest that our entire body is capable of learning and remembering, not just our brains, which might change medical treatments and educational methods in the future.

“This discovery opens new doors for understanding how memory works and could lead to better ways to enhance learning and treat memory problems,” observes Kukushkin.

Kukushkin posted a thread on X with explanations and comments on this research.


New test flight of Starship scheduled for November 18

The sixth test flight for Elon Musk’s rocket Starship is set to happen on November 18. Viewers will be able to watch it live on SpaceX’s website or through the X platform.

During the last test, the first part of the rocket, the Super Heavy booster, came back to the launch site where it was grabbed by the launch tower’s “chopstick” arms. The upper part, or the Starship itself, flew well, landing softly in the Indian Ocean after showing it could handle space travel better than before.

For this next flight, they’ll try to do this again, but with some new tests.

For the first time, they’ll relight a Raptor engine on the Starship while it’s in space. This engine test is important for controlling Starship’s path back to Earth from orbit. They also plan to test how well the rocket handles the heat of reentry into Earth’s atmosphere by removing some of the heat-shield tiles that protect the rocket from burning up and observing how that affects performance.

They’ll also change how the Starship comes back to Earth, flying it at a steeper angle to see how well the flaps (which work like wings) control the descent under tougher conditions. The flight is scheduled for daytime so the descent can be observed more clearly.

The booster will also carry new hardware to make it stronger and more reliable. If everything goes well, it will be caught by the tower again; if not, it will splash down safely in the Gulf of Mexico. The test includes making sure the rocket and the tower are in good condition for the catch.

To the Moon and to Mars

Overall, this test is about learning more about making the Starship reusable for future missions, including trips to Mars or back to the Moon, by testing new ways to control and protect the rocket during its journey through space and back.

NASA plans to use Starship to land Artemis 3 astronauts on the Moon, and the role of Starship in NASA programs is likely to grow. Space experts and enthusiasts are discussing test missions to Mars (unpaywalled copy). NASA should “support SpaceX on Mars while focusing NASA on fixing all the problems with Artemis,” said space policy analyst Greg Autry.


AI designs new proteins for medicine and other applications

Scientists at Argonne National Laboratory have introduced a new Artificial Intelligence (AI) framework for supercomputers to quickly find new proteins for use in medicines, catalysts, and other applications.

The new MProt-DPO framework can combine different types of information, such as data from experiments, computer simulations, and even text descriptions of proteins. This mix of information helps the AI better understand and design proteins.

Proteins are large molecules made from amino acids. Each protein has a specific shape and function, and changing the amino acids can change how the protein works. The AI helps find the right changes to make proteins work better for specific uses, like breaking down plastics or helping in medical treatments.

MProt stands for Multiple Protein, and DPO stands for Direct Preference Optimization. This means the system learns to make better proteins by figuring out which designs are good or bad from the results of experiments and simulations.

There are astronomically many ways to arrange amino acids in a protein: with 20 standard amino acids, a chain of just 100 of them can be built in 20^100 (roughly 10^130) ways. Testing all these combinations in a lab would take forever, but with AI and supercomputers, scientists can predict which combinations are most promising without having to test each one.

With DPO, the AI learns from results: if a protein works well, the model remembers what made it good and tries to generate more designs like it; if a design fails, the model learns from that too.
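The DPO objective itself is public (Rafailov et al., 2023), so the core of such a training step can be sketched as follows. Treating designs with better experimental or simulated fitness as "chosen" and worse ones as "rejected" is our reading of the press release, not necessarily the paper's exact setup:

```python
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Direct Preference Optimization loss. Inputs are the summed token
    log-probabilities of the preferred (chosen) and dispreferred (rejected)
    sequences under the model being trained and under a frozen reference
    model; all four are torch tensors of the same shape."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    # maximize the probability that the model ranks the better design higher
    return -F.logsigmoid(beta * margin).mean()
```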

The scientists tested the MProt-DPO framework by redesigning a yeast protein and an enzyme called malate dehydrogenase, which helps cells produce energy. They used data from real experiments and simulations to teach the AI to make better versions of these proteins.

ChatGPT for protein design

Argonne computational scientist Gautham Dharuman will present MProt-DPO at the supercomputing conference SC24.

“If you think about how ChatGPT works, humans provide feedback on whether a response is helpful or not,” says computational biologist Arvind Ramanathan in an Argonne press release. ​“MProt-DPO works in a similar way, but we replace human feedback with the experimental and simulation data to help the AI model learn which protein designs are most successful.”

The scientists are also developing AuroraGPT, a large AI model intended to help with many scientific areas, not just protein design.


Researchers argue that LLMs don’t understand the world

Large language models (LLMs), like GPT-4, are good at tasks like writing poetry or generating computer code. After training, these models predict the next word in a sentence. This ability sometimes makes it seem like the models understand general truths about the world. However, a recent study questions this idea.

The researchers, from MIT and the universities of Chicago, Cornell, and Harvard, describe the methods and results of the study in a paper posted on arXiv.

The researchers tested a transformer-based LLM that uses large amounts of text data to predict what comes next in a sequence. The researchers found that the model could give near-perfect driving directions in New York City, but it did not have an accurate map of the city. When streets were closed or detours added, the model’s performance dropped sharply.

The importance of coherent world models

The maps the model generated had fake streets and strange connections. This showed that while the model could give good directions, it didn’t truly understand the city’s layout.

This finding is important because it shows that LLMs can perform tasks well without understanding the world around them. The researchers tested the model on two problems: navigating a city and playing Othello. The model could predict valid Othello moves without understanding the game’s rules, and it failed to form a coherent map of New York.

To test whether a model truly understands the world, the researchers created two new metrics. The first, called “sequence distinction,” checks whether the model can recognize the difference between two distinct situations. The second, “sequence compression,” checks whether the model understands that identical situations have the same possible next steps. Surprisingly, they found that transformers that made choices at random formed more accurate world models, possibly because they saw a wider variety of possible next steps during training.
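In spirit, the two metrics can be sketched like this (our paraphrase of the idea, not the authors' code); `model` is assumed to return a next-token probability distribution for a given prefix:

```python
from typing import Callable, Dict, Sequence

NextDist = Dict[str, float]
Model = Callable[[Sequence[str]], NextDist]

def dists_match(p: NextDist, q: NextDist, tol: float = 1e-2) -> bool:
    """True if two next-token distributions agree within a tolerance."""
    return all(abs(p.get(k, 0.0) - q.get(k, 0.0)) <= tol for k in set(p) | set(q))

def sequence_compression(model: Model, prefix_a, prefix_b) -> bool:
    """Prefixes that lead to the SAME underlying state (say, the same street
    corner) should get the same next-step distribution."""
    return dists_match(model(prefix_a), model(prefix_b))

def sequence_distinction(model: Model, prefix_a, prefix_b) -> bool:
    """Prefixes that lead to DIFFERENT states should be told apart."""
    return not dists_match(model(prefix_a), model(prefix_b))
```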

These results show that LLMs can perform tasks accurately without really understanding them. The researchers argue that if we want LLMs to solve complex problems or make new discoveries, we need to rethink how we measure their understanding of the world.

The question of “whether LLMs are learning coherent world models is very important if we want to use these techniques to make new discoveries,” says research leader Ashesh Rambachan in an MIT press release.


Astrocytes play a crucial role in memory processes

Researchers at Baylor College of Medicine have studied how memories are stored and recalled in the brain.

Memory strongly depends on the activity of neurons, which are nerve cells responsible for sending and receiving signals in the brain. However, this research reveals that astrocytes, another type of brain cell known for their star-like shape, also play a crucial role in memory processes.

Neurons form networks called engrams, which are groups of cells that become active during learning and hold memories. When the brain learns something, these neurons light up, forming a memory trace.

This study shows that astrocytes, though not directly firing electrical signals like neurons, are also integral in memory storage and recall. They interact closely with neurons, both physically by being near them and functionally by influencing their activity. During learning, specific astrocytes start expressing a gene called c-Fos, which signals that these cells participate in the memory circuit at that moment.

A paper published in Nature describes the study’s methods and results.

Understanding memory and treating memory-related disorders

The researchers trained mice to feel fear in a specific situation, causing them to freeze when they encountered that context again. They developed new tools to observe and manipulate astrocytes. They found that activating these special astrocytes could trigger the memory of fear, making mice freeze even in a different, non-threatening environment.

The study also explored the role of a protein called NFIA in astrocytes. Astrocytes involved in memory contain higher amounts of NFIA. When the researchers blocked the production of NFIA in these astrocytes, the mice could no longer recall the fear memory, showing that NFIA in astrocytes is essential for memory recall.

This research suggests that memory involves a dynamic interaction between neurons and astrocytes.

This could lead to new approaches to understanding and treating memory-related disorders such as Alzheimer’s, where memory recall is impaired, or PTSD, where unwanted memories intrude. By recognizing the role of astrocytes, scientists now have a more complete picture of how the brain handles memory, opening up new avenues for research and potential therapies.


The brain constantly refreshes and updates memories

Mount Sinai researchers have studied how the brain works with memories, showing how experiences from different times and situations can blend together. The brain doesn’t keep memories just as they are but updates and reshapes them with new information.

The brain can store memories “while flexibly updating them with new and relevant information,” says research leader Denise Cai in a Mount Sinai press release.

“This combination of stability and flexibility within neural ensembles is critical for us to make everyday predictions and decisions, and to interact with an ever-changing world,” she adds.

Neural ensembles are groups of neurons that work together to store memories. Memories are flexible: the brain updates or reorganizes them as we learn more.

The researchers detail the methods and results of this study in a paper published in Nature.

After a bad experience, like getting a shock, the brains of laboratory mice didn’t just replay that event. They also brought up memories from days before, linking these old and new experiences together. This process, called ensemble co-reactivation, helps in connecting memories over time.

A big step toward understanding memories

The brain consolidates and stabilizes a memory after an event. It goes over the event again, which helps fix the memory in place.

Surprisingly, this co-reactivation happened more when the mice were awake than when they were asleep, challenging the idea that sleep is the primary time for memory consolidation. Being awake might play a bigger role in how memories are connected than previously thought.

The researchers also found that negative experiences are more likely to be linked with past memories, suggesting that our brain might be trying to make sense of these events by relating them to what we already know. This could be why traumatic events can sometimes alter how we remember other, even unrelated, past experiences.

Cai emphasized that this study is a big step in understanding how our brains manage memory in a world where things constantly change. It shows that our memories are dynamic, continuously shaped by new experiences, which is essential for functioning in our daily lives.


AI for faster quantum mechanical simulations in material science

Researchers at KAIST have found a way to speed up quantum mechanical simulations with an artificial intelligence (AI) model called DeepSCF, developed to make the calculations needed to understand how materials behave at the atomic level significantly faster.

Methods used for quantum mechanical simulations of materials (Credit: KAIST).

One of the key methods used in quantum mechanical simulations is Density Functional Theory (DFT), which calculates the properties of materials from first principles: it starts from the basic laws of quantum physics, without empirical parameters. However, DFT calculations are computationally intensive because they must solve complex equations for the electron density in three-dimensional space using the self-consistent field (SCF) method. The SCF method recalculates the electron density iteratively until the results stabilize, which can take many iterations and becomes time-consuming, especially for larger systems.
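Schematically, the SCF loop described above looks like the following sketch; `build_hamiltonian` is a placeholder standing in for the real Kohn-Sham machinery, not a library call:

```python
import numpy as np

def scf_loop(build_hamiltonian, density, n_electrons, mix=0.3, tol=1e-6, max_iter=200):
    """Schematic self-consistent field iteration: the current electron density
    determines an effective Hamiltonian, whose solution yields a new density;
    repeat (with damping) until the density stops changing."""
    for _ in range(max_iter):
        H = build_hamiltonian(density)              # effective potential from density
        _, orbitals = np.linalg.eigh(H)             # solve the eigenproblem
        occupied = orbitals[:, : n_electrons // 2]  # fill the lowest orbitals
        new_density = 2.0 * np.sum(np.abs(occupied) ** 2, axis=1)
        if np.max(np.abs(new_density - density)) < tol:
            return new_density                      # converged
        density = (1 - mix) * density + mix * new_density  # damped update
    raise RuntimeError("SCF did not converge")
```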

The KAIST researchers introduced the DeepSCF model to overcome these limitations. DeepSCF uses a neural network similar to those used for 3D computer vision to learn the 3D patterns of electron density and chemical bonding.

The researchers describe the DeepSCF project and early results in a paper published in npj Computational Materials.

Instead of repeatedly solving equations through SCF, DeepSCF recognizes chemical bonding patterns, learning from a dataset of known molecular structures how electron density should look for new or altered materials.

AI predictions replace intensive computations

This innovation replaces the traditional, computationally heavy steps with AI predictions. The researchers trained the neural network on data from various organic molecules, applying random rotations and deformations to the structures so that the model generalizes across different chemical scenarios. As a result, the model can predict not only electron densities but also other quantum mechanical properties, with accuracy comparable to conventional DFT calculations but at much greater speed.
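Random rotations of this kind are a standard augmentation for 3D models; a minimal version (the exact recipe in the paper may differ) looks like this:

```python
import numpy as np

def randomly_rotate(coords, rng=None):
    """Apply a uniformly random proper rotation to an (N, 3) array of atomic
    coordinates. QR decomposition of a Gaussian matrix, with sign fixes,
    yields a uniformly distributed orthogonal matrix."""
    rng = rng or np.random.default_rng()
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q *= np.sign(np.diag(r))           # make the distribution uniform
    if np.linalg.det(q) < 0:
        q[:, [0, 1]] = q[:, [1, 0]]    # flip to a proper rotation (det = +1)
    return coords @ q.T
```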

According to the researchers, DeepSCF represents a leap forward in materials science and quantum chemistry, offering a tool that can quickly predict material properties without the need for extensive computational resources. DeepSCF opens up new possibilities for simulating larger and more complex material systems efficiently.

“Since quantum mechanical electron structure calculations underpin materials simulations across all scales,” says research leader Yong-Hoon Kim in a KAIST press release, “this research establishes a foundational principle for accelerating material calculations using artificial intelligence.”
