Hand-inspired DNA nanobot grabs viruses

Researchers at the University of Illinois have developed a tiny robot made entirely from a single piece of DNA, which they call the NanoGripper. The NanoGripper, inspired by human hands and bird claws, can interact with biological entities on a very small scale.

The NanoGripper is designed like a hand with four fingers. Each finger has three joints, similar to a human finger, allowing it to bend in a controlled way due to the DNA’s design.

The researchers describe the NanoGripper in a paper published in Science Robotics.

The fingers of the NanoGripper have special areas called DNA aptamers that respond to specific markers like the spike protein of the SARS-CoV-2 virus, which causes COVID-19. When these aptamers meet the virus, they trigger the fingers to close around it, effectively “grabbing” the virus.

One application of the NanoGripper is in rapid virus detection. By working with a photonic crystal sensor, which uses light to detect very small changes, the NanoGripper can hold the virus in place while a fluorescent molecule lights up, signaling the virus’s presence. This makes for a very quick and sensitive test for COVID-19, taking only 30 minutes, similar in accuracy to the slower hospital tests.

Beyond detection, the NanoGripper has potential in preventing infections. When added to cells exposed to the virus, these NanoGrippers can surround the virus, preventing its spikes from attaching to and infecting cells. This could lead to innovations like anti-viral nasal sprays where NanoGrippers could block viruses at the entry point in the nose.

Programmable DNA nanobots

Looking ahead, the NanoGripper could be tweaked to target other viruses or even cancer cells. By programming the fingers to recognize different markers, these NanoGrippers could deliver drugs directly to cancer cells or other targeted treatments.

This technology showcases how DNA, with its programmable nature, can be used to create tools that operate at the nanoscale for both medical diagnostics and treatment.

“We wanted to make a soft material, nanoscale robot with grabbing functions that never have been seen before, to interact with cells, viruses and other molecules for biomedical applications,” says research leader Xing Wang in a press release. “We are using DNA for its structural properties. It is strong, flexible and programmable.”

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Dropletronic technology for new biocompatible devices

Oxford University researchers have studied tiny, soft droplets made of hydrogel that can carry out functions similar to electronic devices but using ions instead of electrons.

Ions are atoms or molecules with an electric charge, like sodium or calcium, which are crucial in biological processes like nerve signaling.

This technology could help build devices that can interact with biological systems in a way that’s friendly to living tissues.

Traditional electronic devices can’t easily integrate with soft, living tissues because they are often rigid and not biocompatible. Ions are naturally present in and around cells, which could make these “dropletronic devices” more bio-friendly.

Dropletronic devices are made from hydrogel droplets, each about the size of a small grain of sand, manipulated using surfactants (soap-like molecules). The droplets can be assembled into structures that function as diodes, transistors, and even memory storage units. These components are key in electronics for controlling, amplifying, and storing signals, but here they work with ions.

The researchers describe the methods and results of this study in a paper published in Science.

The researchers used dropletronics to sense electrical activity from human heart cells. This shows how these devices can communicate in the same “ionic language” as biological cells.

Dropletronics has vast potential

Ions offer more versatility than electrons because they come in different sizes and charges, allowing for more complex functionality. This could lead to devices with memory capabilities, much like how our brains work, which could help build neuromorphic computers.

The researchers “demonstrated a dropletronic device with long-term memory storage,” says research leader Yujia Zhang in an Oxford University press release. This “offers an unconventional pathway to neuromorphic applications.”

Dropletronics could revolutionize drug delivery systems by precisely responding to biological signals. This fusion of electronics with biology might pave the way for devices that can sense multiple biological markers and react appropriately. This could be groundbreaking in clinical medicine.

The researchers are persuaded that dropletronics could open up a new chapter in bioengineering, where the lines between living organisms and technology blur.


AI predicts experimental science outcomes

Large language models (LLMs) trained on huge amounts of text data recognize patterns in language and predict new information. A new study from researchers at University College London (UCL) shows that LLMs are better at predicting the outcomes of neuroscience experiments than human experts.

The researchers describe the methods and results of this study in a paper published in Nature Human Behaviour.

The researchers created a tool called BrainBench to test this ability. BrainBench involves showing both real and fake versions of neuroscience research summaries to LLMs and human experts. The real summaries come from actual studies, detailing the research background, methods, and results. The fake summaries keep the background and methods the same but change the results to something that could be believable but wrong.

Here’s what the researchers found: when asked to pick which summary was real, the LLMs got it right about 81% of the time, while human experts only managed about 63%. Even when only the most knowledgeable scientists were considered, their accuracy, at 66%, was still lower than the LLMs’. The LLMs were also more accurate when they were more confident in their choices, as measured by perplexity, a score of how surprising a text is to the model.
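The perplexity-based choice described above can be sketched in a few lines of Python. This is a minimal illustration, not the BrainBench code: the per-token log-probabilities below are made-up stand-ins for what a real LLM would assign to each version of an abstract, and the rule is simply to pick the version the model finds less surprising.

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(-mean log-probability); lower means less surprising text."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Hypothetical per-token log-probabilities an LLM might assign to each version.
real_version_logprobs = [-1.2, -0.8, -1.5, -0.9]  # plausible results: low surprise
fake_version_logprobs = [-1.4, -2.6, -3.1, -1.8]  # altered results: high surprise

def pick_real(logprobs_a, logprobs_b):
    """Choose the abstract with the lower perplexity as the 'real' one."""
    return "A" if perplexity(logprobs_a) < perplexity(logprobs_b) else "B"

print(pick_real(real_version_logprobs, fake_version_logprobs))  # A
```

The gap between the two perplexities also gives a natural confidence measure, which is the signal the study found tracked accuracy.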

The researchers went a step further by customizing an LLM, which they called BrainGPT, specifically for neuroscience. This version did even better, achieving an accuracy of 86% compared to the general LLM’s 83%.

Why does this work?

This finding suggests that LLMs could be very useful for science. Instead of just looking up known facts, these models can predict new research outcomes by recognizing patterns in existing scientific literature. This could speed up scientific discovery by helping researchers design experiments that are more likely to yield meaningful results.

Researcher Ken Luo explains that this is really about using AI to predict future scientific developments. This could change how experiments are planned, making them more efficient and insightful. Research leader Bradley Love notes that this also raises questions about how novel scientific research really is if AI can predict outcomes so well, suggesting that perhaps scientists need to be more innovative.


Nanoscale robots built with Velcro-like DNA strands

Researchers at the University of Sydney have built tiny robots using a technique called DNA origami. This method folds DNA into precise shapes much like origami, the art of paper folding.

This allows engineers to create very small, custom-made DNA structures at the nanoscale. The researchers describe the methods and results of this study in a paper published in Science Robotics.

The researchers created over 50 different nanoscale DNA objects for their experiments. These structures are called DNA origami “voxels,” which are like 3D pixels that can connect and form more complex shapes or machines.

Their approach involves using extra DNA strands that act like Velcro, where only strands with matching sequences can connect. This permits building complex DNA structures with great precision.
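The "Velcro" rule above follows from Watson-Crick base pairing: a strand binds only the strand that reads as its reverse complement. The sketch below illustrates that matching rule in Python; the sequences are illustrative examples, not strands from the paper.

```python
# Watson-Crick pairing: A binds T, G binds C.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    """The sequence that pairs with `seq` when the two strands run antiparallel."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def can_connect(strand_a, strand_b):
    """Two 'Velcro' strands connect only if one is the other's reverse complement."""
    return strand_b == reverse_complement(strand_a)

# Illustrative sticky-end sequences on three hypothetical voxels.
voxel_1_tab = "ATGGC"
voxel_2_tab = "GCCAT"  # reverse complement of ATGGC: these two voxels can join
voxel_3_tab = "TTTTT"  # no match: these voxels stay apart

print(can_connect(voxel_1_tab, voxel_2_tab))  # True
print(can_connect(voxel_1_tab, voxel_3_tab))  # False
```

Because only matching sequences bind, assigning different sticky-end sequences to different voxel faces lets designers program exactly which pieces snap together, which is what makes the assembly precise.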

This technology could lead to new ways of delivering drugs directly to where they are needed in the body, like targeting cancer cells without affecting healthy ones.

The nanoscale DNA structures could change how they work based on their environment. This could be useful in making new materials or improving medical treatments.

Future nanobots

Moreover, these nanoscale DNA structures might help in creating new materials that can change their properties or in processing light signals more efficiently. This could be used in security or medical diagnostics for better imaging.

“The results are a bit like using Meccano, the children’s engineering toy,” says research leader Shelley Wickham in a press release issued by the University of Sydney. “But instead of macroscale metal or string, we use nanoscale biology to build robots with huge potential.”

“Our work demonstrates the incredible potential of DNA origami to create versatile and programmable nanostructures,” adds researcher Minh Tri Luu. “The ability to design and assemble these components opens new avenues for innovation in nanotechnology.”

Future nanobots could perform a huge range of tasks, from treating the human body to building futuristic electronic devices.


Quantum material enables clean hydrogen production

Water splitting is a way to make hydrogen gas from water, which could be a clean future energy source, but it’s not easy because the reaction that makes oxygen is slow.

This slowness makes hydrogen production expensive and not very efficient. Now, scientists at the Max Planck Institute and the Weizmann Institute of Science have found a new way to make this process work better.

The scientists describe the methods and results of this study in a paper published in Nature Energy. They have used chiral crystals, in which the atoms are arranged in a way that has a left or right-handed twist. These crystals, made from rhodium mixed with silicon, tin, or bismuth, do something amazing with electrons.

Electrons have a property called “spin,” which is like having a little bit of magnetism. Normally, this spin doesn’t affect much in chemical reactions, but these chiral crystals use this spin to make the oxygen part of the water splitting reaction happen much faster. This speeds up the whole process significantly.

Quantum machines for clean energy

In a press release issued by the Max Planck Institute, research co-leader Xia Wang described these crystals as “quantum machines” because they use quantum mechanics to work their magic. She says the new crystals perform 200 times better than previously used materials.

This could change how we think about producing hydrogen, making it cheaper and more efficient. This means we might be able to use hydrogen as a clean energy source more easily in the future.

Researcher Binghai Yan admits that these crystals use some rare materials right now, but he’s optimistic that they’ll soon find ways to make this new method sustainable and practical.

By blending advanced quantum physics with real-world problems, this research shows how science can lead to practical solutions for our energy needs, moving us towards a cleaner, more sustainable way to power our world.


AI-driven protein engineering

Nature has always been adept at creating proteins, but scientists have often found ways to enhance them. Now, Artificial Intelligence (AI) is stepping in with the potential to make proteins even more effective.

AI can be used to design proteins that could help in medicine, like creating antibodies that target diseases better or vaccines that work more efficiently against viruses. But the use of designer proteins isn’t just limited to healthcare; they could also be used to grow crops that are more nutritious or help the environment by absorbing carbon dioxide.

A new tool called EVOLVEpro, developed by researchers from Mass General Brigham and Beth Israel Deaconess Medical Center, is at the forefront of this AI-driven protein engineering. According to a study published in Science, EVOLVEpro can take existing proteins and make them more stable, accurate, and efficient.

For example, with EVOLVEpro, scientists can enhance how well a protein binds to its target, which is crucial for treatments like cancer or autoimmune diseases.

Large language models (LLMs) in AI are changing the game in protein engineering. These models work like AI systems that predict and generate human language, but for proteins, they predict how to tweak the protein “language” to get better results. EVOLVEpro uses this technology by building upon existing models to offer more refined suggestions for protein improvement.

The researchers have shown that EVOLVEpro can dramatically enhance proteins.

Protein engineering can transcend nature’s capabilities

The researchers believe that with tools like EVOLVEpro, we’re entering a new phase where protein design can transcend nature’s capabilities, potentially solving modern problems like climate change and food security by designing proteins tailored for these specific challenges.

In related news, other scientists have developed a method for designing large new proteins and producing them with the desired properties in the laboratory. Their approach involves a new way of using AlphaFold2, for which the Nobel Prize in Chemistry was awarded in 2024.


Genomic bottlenecks could help AI neural networks evolve

Soon after birth, humans and many animals show abilities that seem to come out of nowhere. The brain, with its trillions of connections, makes these behaviors possible. However, the genome, which is like the recipe book for all life, only has room for a tiny bit of that information. This has puzzled scientists for a long time. How can such limited information produce such complex behaviors?

Researchers at Cold Spring Harbor Laboratory have come up with a new idea. They think that the genome’s limited space might actually be what makes us smart. They suggest that because the genome can’t contain all the details, we’re forced to learn and adapt quickly. This concept is hard to test in real life because evolution takes billions of years, but Artificial Intelligence (AI) offers a faster way to experiment.

In a paper published in PNAS, the researchers describe their new “genomic bottleneck algorithm” for neural networks. This algorithm compacts a lot of information into a smaller space, similar to how our genome might pack the instructions for brain development.
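The compression idea can be illustrated with a toy analogy (this is not the authors' actual algorithm): a large "brain" weight matrix is not stored directly but generated from a far smaller set of "genome" parameters, here via a simple low-rank factorization. All sizes and names below are made up for illustration.

```python
import random

random.seed(0)

# A "brain": a full connection matrix between two layers of neurons.
n_pre, n_post = 200, 200
full_parameter_count = n_pre * n_post  # 40,000 weights if stored directly

# A "genome": far fewer parameters that *generate* the weights on demand.
rank = 5
genome_u = [[random.gauss(0, 1) for _ in range(rank)] for _ in range(n_pre)]
genome_v = [[random.gauss(0, 1) for _ in range(n_post)] for _ in range(rank)]
genome_parameter_count = n_pre * rank + rank * n_post  # 2,000 parameters

# "Development": expand the compact genome into the full connectivity matrix.
developed = [[sum(genome_u[i][r] * genome_v[r][j] for r in range(rank))
              for j in range(n_post)] for i in range(n_pre)]

compression = full_parameter_count / genome_parameter_count
print(f"compression factor: {compression:.0f}x")  # 20x
```

The point of the analogy: because the genome cannot encode every weight individually, it is forced to encode regular, reusable wiring rules, and that constraint acts like the regularizer the researchers describe.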

Compressed neural networks

“Our results suggest that compressing a neural circuit through the genomic bottleneck serves as a regularizer, enabling evolution to select simple circuits that can be readily adapted to important real-world tasks,” note the researchers in the paper.

They tested compressed neural networks against other AI systems, and found that their new, untrained algorithm could perform tasks like recognizing images or playing simple video games almost as well as the best trained AI models.

The algorithm’s ability to compress information could be very useful in technology. For instance, it could help run sophisticated AI models on devices with limited memory, like cell phones, by expanding the model layer by layer as needed.

This work suggests that the limitations of our genome might not be a flaw but a feature that encourages adaptability and intelligence. It’s a fascinating hint that our evolutionary journey could inform the future of AI, making it more efficient and perhaps closer to achieving human intelligence.


NASA will task Blue Origin and SpaceX to build lunar cargo systems

NASA and its commercial partners are gearing up for extensive exploration of the Moon through the Artemis program.

As part of this, NASA plans to give more tasks to two companies, Blue Origin and SpaceX, under their current contracts. These tasks involve creating landers that can take big pieces of equipment and infrastructure to the Moon.

NASA wants these companies to develop cargo versions of their existing landers, which are already being made for crewed missions like Artemis III, IV, and V. Cargo versions are landers without space for astronauts, used to transport heavy equipment.

NASA has specific plans for these cargo missions: SpaceX’s Starship will deliver a pressurized rover from Japan’s space agency JAXA to the Moon by 2032 or later. Blue Origin will send a lunar surface habitat, which is a living space for astronauts, by 2033 or later.

A NASA press release states that this is part of a broader plan to have regular Moon visits through collaboration with various partners. Having two different lander designs offers flexibility and ensures continuous Moon exploration.

“These large cargo lander demonstration missions aim to optimize our NASA and industry technical expertise, resources, and funding as we prepare for the future of deep space exploration,” says Lisa Watson-Morgan, program manager, Human Landing System, at NASA’s Marshall Space Flight Center.

NASA’s approach includes using the SLS rocket, ground systems, the Orion spacecraft, and future technologies like spacesuits and the Gateway space station.

However, there have been calls for NASA to abandon some of these projects in favor of faster and cheaper alternatives based on SpaceX technology and commercial systems. Now that Elon Musk will be a trusted advisor to U.S. President-elect Donald Trump, the SpaceX option seems likely to gain more weight.

Be that as it may, all of this contributes not just to exploring the Moon but also to preparing for missions to Mars. The plan is to learn how to sustain life on other worlds, expanding our reach into space and eventually becoming a multi-planetary species.


Supercomputer simulation of molecular dynamics wins the Gordon Bell Prize

Researchers led by Giuseppe Barca from the University of Melbourne have won the 2024 Gordon Bell Prize, which is like the “Nobel Prize” for high-performance computing (HPC).

The annual Bell Prize recognizes outstanding work in using supercomputers to solve big problems in science.

The researchers describe their work in a paper titled “Breaking the Million-Electron and 1 EFLOP/s Barriers: Biomolecular-Scale Ab Initio Molecular Dynamics Using MP2 Potentials,” presented at Supercomputing 2024. In simple terms, they developed innovative methods for supercomputers to simulate what happens to molecules in living things.

The researchers used a supercomputer called Frontier. They created software that can accurately show how drugs work inside the body, much like watching a live experiment. This helps scientists see how drugs might break or form bonds with other molecules, which is super important for making new medicines.

Thanks to this work, new drugs can be made faster and at a lower cost. It’s especially helpful for diseases that are hard to treat because now, scientists can test many more ideas on the computer before trying them in real life.

Barca co-founded a company named QDX, which uses this kind of computer simulation to help design new treatments. This company is already working with other businesses around the world to bring these new methods into practical use.

The website of the Association for Computing Machinery (ACM), the organization that awards the Bell Prize, gives more technical detail.

Molecular dynamics is a computer simulation method that helps scientists understand how atoms and molecules move. One key technique, Ab Initio Molecular Dynamics (AIMD), starts from basic physical laws to predict molecular behavior.
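At its core, molecular dynamics integrates Newton's equations of motion one small timestep at a time. The toy sketch below uses the standard velocity-Verlet integrator with a simple harmonic "bond" force as a stand-in for the quantum-mechanical (MP2) forces the researchers actually compute at every step; all values are illustrative, not from the paper.

```python
def velocity_verlet(x, v, force, dt, steps, mass=1.0):
    """Integrate Newton's equations: positions from velocities, velocities from forces."""
    f = force(x)
    for _ in range(steps):
        x = x + v * dt + 0.5 * (f / mass) * dt ** 2  # position update
        f_new = force(x)
        v = v + 0.5 * (f + f_new) / mass * dt        # velocity update, averaged force
        f = f_new
    return x, v

# Toy stand-in for ab initio forces: a harmonic bond, F = -k * x.
# (Real AIMD recomputes forces from quantum mechanics at every timestep,
# which is why it is so expensive and why supercomputers are needed.)
k = 1.0

def harmonic_force(x):
    return -k * x

x_final, v_final = velocity_verlet(x=1.0, v=0.0, force=harmonic_force, dt=0.01, steps=1000)
energy = 0.5 * v_final ** 2 + 0.5 * k * x_final ** 2
print(f"total energy after 1,000 steps: {energy:.4f}")  # stays near 0.5 (conserved)
```

Velocity Verlet is a standard choice in MD codes because it conserves energy well over long runs; the scale of the prize-winning work comes from replacing the one-line toy force with million-electron quantum calculations.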

The researchers introduced a new molecular dynamics method to simulate over a million electrons on the Frontier supercomputer. Their work achieved simulations 1,000 times larger and faster than before, setting new standards in computational chemistry.

This could allow for more precise drug development, biofuel production, and material engineering, significantly expanding what’s possible in scientific research.


AI-generated science summaries promote understanding and trust

In a world where science plays a big role in our everyday decisions, understanding it can be tricky. Scientists often write in a way that’s hard for most people to grasp, using lots of technical terms or jargon. This can make the public feel left out or distrustful of science because they can’t understand what’s being said. But a new study suggests a way to bridge this gap using artificial intelligence (AI).

David Markowitz, a professor at Michigan State University, has been looking into whether AI can make scientific writing easier to understand. His research, published in PNAS Nexus, shows that AI can indeed simplify complex scientific papers.

AI does this by creating summaries or “significance statements” that use simpler words and sentences. For example, instead of “occupation,” AI might just say “job,” which is easier for everyone to understand.

In his experiments, Markowitz found that when people read these AI-generated summaries, they understood the science better. They could explain what they read more accurately than those who only read the summaries written by scientists. This boost in comprehension also seemed to make readers trust and think more highly of the scientists involved.

The importance of better science communication

However, there’s a twist: when asked, people guessed that the simpler summaries were written by humans, not AI, showing a common misconception about how AI communicates.

While AI can make science more accessible, there are still things we need to learn. For one, as AI gets better, it could become a common tool in scientific journals, but we’re not there yet. There’s also the worry that simplifying too much might lose important details or introduce mistakes if not monitored. Plus, it’s important for readers to know when AI has been used to avoid any confusion or bias.

Ultimately, while AI offers a promising way to make science more digestible, scientists themselves could also improve how they communicate by using less jargon and clearer language, which would help everyone understand science better even without AI.

Markowitz summarized his findings in an article published in The Conversation.
