Seeing inside the black box of deep neural networks

Deep neural networks work by passing data through several layers, starting with an input layer that takes in raw data. Hidden layers then analyze this data: in image recognition, for example, early layers spot basic features like edges, while deeper layers recognize more complex patterns, like a cat or a dog.
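As a minimal sketch of this layered structure, here is a tiny PyTorch image classifier with invented sizes; real networks are far deeper, but the idea is the same:

```python
import torch
import torch.nn as nn

# A toy image classifier: early convolutional layers respond to local
# features like edges; deeper layers combine them into higher-level patterns.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # early layer: edge-like filters
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # deeper layer: larger-scale patterns
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 2),                             # output layer: e.g. cat vs. dog
)

print(model(torch.randn(1, 3, 64, 64)).shape)     # -> torch.Size([1, 2])
```

The intermediate activations of those hidden layers are exactly the part we normally never get to look at.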

But the hidden layers are like a black box; we see the inputs and outputs, but not what happens inside. This opacity becomes a problem when AI makes errors, sometimes from just a tiny change in input.

Researchers at Kyushu University have found a way to visualize how these networks organize information and described it in a paper published in IEEE Transactions on Neural Networks and Learning Systems.

Traditional visualization methods compress high-dimensional data into 2D or 3D, which loses detail and makes it hard to compare different neural networks or data classes.

Organizing the data

The new method, called the k* distribution method, keeps the data in higher dimensions, preventing information loss. It assigns each data point a k* value, showing how far it is from unrelated data. A high k* value means good separation, like cats far from dogs, while a low value indicates overlap, making classification errors more likely. This method reveals if data is clustered, where similar items group together, or if it’s fractured or overlapping, where items are scattered or mixed, potentially leading to classification mistakes.
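To make this concrete, here is a minimal NumPy sketch of one plausible reading of the k* idea: for each sample in a network's latent space, count how many of its nearest neighbors share its class before the first sample from a different class appears. The authoritative definition is in the paper; this is only an illustration.

```python
import numpy as np

def k_star(latents: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """For each sample, count same-class nearest neighbours until the first
    different-class neighbour. High k* = well separated; low k* = overlap."""
    dists = np.linalg.norm(latents[:, None, :] - latents[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)  # a point is not its own neighbour

    k_values = np.empty(len(latents), dtype=int)
    for i in range(len(latents)):
        order = np.argsort(dists[i])          # neighbours, nearest first
        same = labels[order] == labels[i]     # True while neighbours share the class
        # k* = position of the first different-class neighbour
        k_values[i] = np.argmin(same) if not same.all() else len(same)
    return k_values

# Toy data: two well-separated classes should yield high k* values
rng = np.random.default_rng(0)
latents = np.vstack([rng.normal(0, 1, (20, 8)), rng.normal(10, 1, (20, 8))])
labels = np.array([0] * 20 + [1] * 20)
print(k_star(latents, labels).min())  # -> 19 for cleanly separated clusters
```

Plotting the distribution of these k* values for each class is what reveals whether the latent space is clustered, fractured, or overlapping.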

In a Kyushu University press release, researcher Danilo Vasconcellos Vargas compares this to organizing a warehouse: if items are well sorted, finding them is easy; if not, errors occur.

This approach is crucial for AI used in safety-critical areas like self-driving cars or medical diagnosis. It helps evaluate AI’s decision-making, identify weaknesses, and improve reliability, especially when dealing with incomplete or unclear data. The goal is to make AI systems that are not only accurate but also adaptable to real-world challenges.


Deep learning system for protein design

Researchers from the University of Science and Technology of China and Harvard Medical School have developed a new artificial intelligence (AI) tool called PocketGen.

Pockets are specific regions on the surface of a protein where small molecules, like drugs or ligands, can bind.

PocketGen uses deep learning to design proteins that can bind to small molecules. Binding means the protein and molecule stick together in a useful way. This can help make new medicines or sensors.

Traditional ways to design these proteins are slow and often don’t work well. They use energy optimization, which tries to find the best shape for a protein, and template matching, where they copy the shape of known proteins. These methods struggle because they don’t handle the complexity of how proteins and molecules interact.

PocketGen solves these problems with two main parts. One is a graph transformer encoder, which learns how amino acids, the building blocks of proteins, interact with each other in 3D space and updates their positions to reflect how they would behave in reality. The other is a protein language model, which predicts the sequence of amino acids; PocketGen fine-tunes this model to make sure the sequence fits well with the protein's shape.
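As a rough illustration of this two-part design, here is a minimal PyTorch sketch; the dimensions and layers are invented stand-ins, not the architecture from the paper:

```python
import torch
import torch.nn as nn

NUM_AA = 20    # the 20 standard amino acids
D_MODEL = 64   # hypothetical embedding width

class GraphTransformerEncoder(nn.Module):
    """Stand-in for the structure branch: residues attend to each other,
    and the model predicts how their 3D positions should shift."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_AA, D_MODEL)
        self.attn = nn.MultiheadAttention(D_MODEL, num_heads=4, batch_first=True)
        self.coord_head = nn.Linear(D_MODEL, 3)    # per-residue displacement

    def forward(self, aa_tokens, coords):
        h = self.embed(aa_tokens)
        h, _ = self.attn(h, h, h)                  # residues exchange information
        return coords + self.coord_head(h)         # refined 3D positions

class SequenceHead(nn.Module):
    """Stand-in for the fine-tuned protein language model, proposing
    amino-acid identities consistent with the structure."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_AA, D_MODEL)
        self.proj = nn.Linear(D_MODEL, NUM_AA)

    def forward(self, aa_tokens):
        return self.proj(self.embed(aa_tokens))    # per-residue logits

tokens = torch.randint(0, NUM_AA, (1, 8))          # a toy pocket of 8 residues
coords = torch.randn(1, 8, 3)
print(GraphTransformerEncoder()(tokens, coords).shape)  # -> (1, 8, 3)
print(SequenceHead()(tokens).shape)                     # -> (1, 8, 20)
```

The point of the pairing, as the paper describes it, is that sequence and structure are kept consistent with each other rather than predicted independently.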

New paths in bioengineering

The researchers have described PocketGen in a paper published in Nature Machine Intelligence.

Tests show PocketGen outperforms older methods: it is about ten times faster and designs proteins that bind more effectively to molecules like fentanyl and ibuprofen. PocketGen also compared well against other top models, such as RFDiffusion and RFDiffusionAA.

The tool also offers insights into why proteins bind the way they do, which helps scientists understand and predict protein behavior better. This advancement not only speeds up drug design but also opens new paths in biology and engineering by showing how AI can improve our understanding of life at the molecular level.


Scientists propose a global project to create an AI virtual cell

Scientists see a unique chance to use Artificial Intelligence (AI) to make a virtual human cell. This virtual cell could mimic how real human cells and their parts behave.

In a paper published in Cell, the scientists argue that the AI virtual cell (AIVC) could lead to breakthroughs in biomedical research, personalized medicine, drug discovery, cell engineering, and programmable biology.

Emma Lundberg, a Stanford professor, calls this the “holy grail of biology,” suggesting AI can learn from data to uncover biology’s secrets. “AI offers the ability to learn directly from data and to move beyond assumptions and hunches to discover the emergent properties of complex biological systems,” she says.

The scientists propose that an AI virtual cell could help understand how cells work and why they get sick. It would let scientists test ideas on computers rather than living organisms, speeding up research for new medicines.

A virtual cell could help cancer biologists study how cells become cancerous, microbiologists predict virus impacts, and doctors test treatments on digital versions of patients for personalized medicine.

Bridging AI and biology

For the AI virtual cell to be successful, it must do three things: create general models for different species and cell types, predict cell behavior accurately, and allow for cost-effective experiments on computers.

The project demands a lot of data, far more than what was used for AI models like ChatGPT. It needs global teamwork across many scientific fields and a commitment to sharing results freely.

Lundberg emphasizes that this project is enormous, like the Human Genome Project, needing cooperation and time, potentially over a decade. “But, with today’s rapidly expanding AI capabilities and our massive and growing datasets, the time is ripe for science to unite and begin the work of revolutionizing the way we understand and model biology,” she says.

“By bridging the worlds of computer systems, modern generative AI and AI agents, and biology, the AIVC could ultimately enable scientists to understand cells as information processing systems and build virtual depictions of life,” concludes the Cell paper.


Scientists and philosophers discuss AI consciousness and welfare

The rapid growth of artificial intelligence (AI) has sparked debates about AI ethics, particularly whether AI might one day think and feel like humans. AI experts are now discussing whether AI systems could have subjective experiences, meaning personal feelings or consciousness, and whether they could suffer, Nature News reports.

In a recent preprint posted on arXiv, AI researchers suggest that AI companies should check whether their AI systems are conscious or can make decisions on their own. If AI systems do become conscious, there should be rules on how to treat them.

Ignoring consciousness in AI might lead to neglect or harm, causing the AI to suffer. Some find the idea of AI welfare amusing, but others, like Anil Seth from the University of Sussex, believe we should prepare, even if it seems far-fetched. He argues that ignoring the possibility of conscious AI could be dangerous, much like the issues in Frankenstein’s story.

Jonathan Mason, a mathematician from Oxford, stresses the importance of understanding AI consciousness as society relies more on these technologies. Not knowing if AI has perception could be irresponsible. Jeff Sebo, a philosopher at NYU and a co-author of the paper, warns that assuming AI is conscious when it’s not could misdirect resources meant for human or animal welfare.

A transitional moment

The AI researchers argue that this is a “transitional moment” for AI welfare. Kyle Fish, one of the paper’s authors, has been hired by Anthropic to research AI welfare, marking a new focus in the industry. This shift shows that some at top AI firms are taking the ethical implications of AI consciousness seriously.

The fact that a publication like Nature has chosen to amplify this paper suggests that it’s time to take the possibility of AI consciousness very seriously.

“We have argued that there is a realistic, non-negligible chance that some AI systems will be welfare subjects and moral patients in the near future, given current evidence,” reads the conclusion of the paper. “Given that leading AI systems already possess some markers of consciousness and robust agency, given that further AI development might be quite fast, and given that the development of an adequate policy response might be quite slow, leading AI companies should take these steps as soon as possible.”


Google DeepMind introduces new generative AI tools

Google DeepMind has introduced a series of new products and prototypes that could help them lead in the development of generative Artificial Intelligence (AI), MIT Technology Review reports (unpaywalled copy).

One major announcement is Gemini 2.0, an updated version of their large language models. These models are multimodal, meaning they can handle different types of data like text, images, and video. Gemini 2.0 focuses on powering agents, AI systems that can act on their own to complete tasks.

Alongside Gemini 2.0, Google DeepMind is working on Project Astra. This is an experimental app that aims to be a universal assistant, capable of dealing with various tasks through different inputs like speech or images. In a live demo, Astra showed it could integrate with Google’s apps, like Search and Maps, to respond to user queries. It can even remember past interactions, which adds a personal touch to its assistance.

Another new addition is Mariner, an agent that can browse the web for you. There’s also Jules, a coding assistant powered by Gemini, and a gaming assistant for tips while playing video games. These projects show Google’s push towards AI that can perform actions independently but under supervision.

Faster and better

Gemini 2.0 is said to be much faster than its predecessor, Gemini 1.5, and performs better on various benchmarks, such as MMLU-Pro, which tests AI on subjects ranging from math to philosophy.

MIT Technology Review tested Project Astra with real-world scenarios like suggesting wine for a meal or discussing art. Mistakes were easily corrected with simple spoken commands, much like teaching a child.

Researchers outside Google are intrigued by the blend of different data types and the memory capability of Astra. However, despite the excitement, there’s still a lot of work to be done before Project Astra is widely available. There’s no set release date, and while there are demos with smart glasses, that’s even further off.


Toward fast, versatile, reusable DNA computing devices

DNA, which holds life’s instructions, could power future computers. These DNA computers could be much smaller and quicker than today’s silicon-based ones. In a new study published in ACS Central Science, researchers describe a DNA computing method that is fast, works in sequence, and can be rewritten like regular computers.

In nature, DNA works in a transcription and translation sequence that produces RNA, then proteins. This happens over and over. Scientists want to use these molecular processes in computers for better performance. They’ve made some progress with DNA computing for specific tasks, but they’re aiming for more versatile, reusable DNA devices.

Previously, the researchers made DNA circuits with many logic gates. They used short DNA pieces, or oligonucleotides, to represent 0 or 1 data. In their system, oligonucleotides interacted with logic gates in a liquid tube, creating an output DNA strand. This strand then attached to an origami-like DNA structure called a register, which is like computer memory. Reading this strand told the system what to do next. But this process was slow and required manual transfers.

To improve speed and efficiency, the researchers placed the DNA register on a glass surface. Here, the output DNA from one gate could easily bind to the register, get read, and then detach, resetting the register without moving it. They also created an amplifier to help DNA pieces find each other faster. This new setup allowed all computing steps to happen in one tube in just 90 minutes.
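As a loose software analogy for that cycle, with an invented AND gate and strand names (the real system works through DNA hybridization chemistry, not code), the bind-read-reset loop looks like this:

```python
from dataclasses import dataclass
from typing import Optional

def and_gate(a: int, b: int) -> str:
    """An output strand encoding 1 is produced only when both input
    oligonucleotides (representing 1s) are present."""
    return "output_strand_1" if a and b else "output_strand_0"

@dataclass
class Register:
    """Stand-in for the DNA-origami register: it holds one output strand,
    is read, and then releases the strand so it can be reused in place."""
    bound: Optional[str] = None

    def bind(self, strand: str) -> None:
        self.bound = strand                    # output strand attaches to the register

    def read_and_reset(self) -> Optional[str]:
        strand, self.bound = self.bound, None  # read, then detach: register resets
        return strand

register = Register()
register.bind(and_gate(1, 1))
result = register.read_and_reset()             # tells the system what to do next
print(result)                                  # -> output_strand_1
```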

Toward large-scale DNA computing

“DNA computing as a liquid computing paradigm has unique application scenarios and offers the potential for massive data storage and processing of digital files stored in DNA,” says researcher Fei Wang in an ACS press release.

This research could lead to larger, faster DNA computing systems and make it easier to see and fix problems in DNA algorithms, paving the way for more advanced DNA-based computing.

“This research paves the way for developing large-scale DNA computing circuits with high speed and lays the foundation for visual debugging and automated execution of DNA molecular algorithms,” concludes Wang.


Faster magnetic memory devices based on altermagnetism

Scientists are studying a new kind of magnetism called altermagnetism.

In an altermagnet, tiny magnetic moments point in directions opposite to those of their neighbors, as in an antiferromagnet, but each moment sits in a part of the crystal structure that is slightly rotated relative to its neighbors. This twist sets altermagnetism apart from the usual types of magnetism, such as ferromagnetism and antiferromagnetism.

Researchers at the University of Nottingham have found ways to control altermagnetism in tiny devices.

“Altermagnets consist of magnetic moments that point antiparallel to their neighbours,” says research leader Peter Wadley in a press release issued by the University of Nottingham. “However, each part of the crystal hosting these tiny moments is rotated with respect to its neighbours.”

“This is like antiferromagnetism with a twist! But this subtle difference has huge ramifications,” he says.

This twist gives altermagnets unique properties that could revolutionize technology, making computer memory devices up to a thousand times faster.

The researchers have described the methods and results of this study in a paper published in Nature.

The experiments took place at the MAX IV facility in Sweden, an electron accelerator that produces X-rays for probing materials. Scientists shine X-rays on the magnetic material, and the electrons that come off reveal what is happening inside, letting them map the magnetism down to the nanoscale.

Faster and cleaner electronics

Magnetic materials are key in computer memory and electronics, but they use a lot of energy and rely on rare, harmful materials. Altermagnetism might solve these problems. It combines good traits from other types of magnetism, potentially making devices faster, more durable, and less energy-consuming.

According to the researchers, replacing the key components of microelectronic devices with altermagnetic materials could increase speed and efficiency while reducing our dependence on rare and toxic heavy elements.


Bioresorbable acoustic microrobots for precision drug delivery

In the future, tiny robots could deliver drugs exactly where they’re needed in the body. These robots aren’t like the robots you might imagine; they’re more like small bubbles. They must survive in body fluids, be directed to specific spots, release drugs only at the target, and then be safely absorbed by the body.

A research team led by Caltech has made these microrobots, called bioresorbable acoustic microrobots (BAM). The researchers successfully used these bots to shrink bladder tumors in mice. In a Caltech press release, researcher Wei Gao from Caltech explains that these robots can guide drugs directly to tumors, making treatment more effective.

A paper published in Science Robotics describes the BAM microrobots and preliminary test results.

These microrobots are made from a hydrogel, a material that can hold a lot of fluid and is safe for the body. They’re created using a precise technique called two-photon polymerization, which is like 3D printing but at a microscopic level. Each robot is about the size of a human hair.

The robots contain magnetic particles and drugs. Magnetic fields guide them to the right place, and once there, they release the drug. They have a special design where the outside attracts water to avoid clumping, while the inside repels water to hold an air bubble. This bubble is key for moving the robots and tracking them with ultrasound.

Successful tests in laboratory mice

The team tested these robots in mice, showing that they could deliver drugs more effectively than traditional methods. They’re now looking into using these robots for different treatments and possibly in humans. This could change how we do drug delivery and surgery, making them much more precise.

“We think this is a very promising platform for drug delivery and precision surgery,” concludes Gao. “Looking to the future, we could evaluate using this robot as a platform to deliver different types of therapeutic payloads or agents for different conditions. And in the long term, we hope to test this in humans.”


Google unveils Willow, a fast quantum chip with error correction

Google has announced a new quantum chip called Willow. Willow cuts errors dramatically as more qubits are added, which is a big deal because it solves a problem scientists have been trying to crack for almost 30 years.
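A toy way to see why more qubits can mean fewer errors is the repetition code, a far simpler scheme than the one Willow uses: as long as the physical error rate is below a threshold, larger codes fail less often. A minimal Monte Carlo sketch:

```python
import random

def logical_error_rate(p: float, d: int, trials: int = 100_000) -> float:
    """Probability that a distance-d repetition code fails (majority vote
    is overwhelmed) when each physical qubit flips with probability p."""
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p for _ in range(d))
        if flips > d // 2:   # more than half the qubits flipped: majority vote fails
            failures += 1
    return failures / trials

for d in (3, 5, 7):
    print(d, logical_error_rate(0.05, d))
# With p = 5% (below threshold), the logical error rate shrinks as d grows,
# echoing Willow's headline result of errors falling as qubits are added.
```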

A paper published in Nature describes the chip and some benchmarks.

Willow has done, in less than five minutes, a calculation that would take the world’s fastest supercomputer 10 septillion years, a number that vastly exceeds the age of the universe. This shows how powerful quantum computers can be compared to classical ones.

Willow was made in a Santa Barbara lab focused on quantum chips. The components of the chip (e.g., single- and two-qubit gates, qubit reset, and readout) need to work well together; if one part isn’t good, the whole chip suffers.

Willow’s qubits can hold their quantum state for longer than previous quantum chips, which is crucial for quantum computing. According to Google, this makes Willow the best in its class for reducing errors and doing quick calculations.

Willow isn’t just about having lots of qubits but about making high-quality ones. Google is excited about what Willow can do to push quantum computing forward, making it not just faster but also more reliable for real-world use.

Real-life applications of quantum computing

The goal now is to use Willow for real-life problems that regular computers can’t handle. Until now, Google scientists have either tested the chip’s speed or used it for simple scientific simulations.

Artificial Intelligence (AI) and quantum computing will be “the most transformational technologies of our time” and “advanced AI will significantly benefit from access to quantum computing,” says Google Quantum AI leader Hartmut Neven.

Potential applications of Willow include “helping us discover new medicines, designing more efficient batteries for electric cars, and accelerating progress in fusion and new energy alternatives,” concludes Neven. “Many of these future game-changing applications won’t be feasible on classical computers; they’re waiting to be unlocked with quantum computing,” he says.


New AI model uses brain-like processes to understand moving images

Scientists at Scripps Research have developed an artificial intelligence (AI) called MovieNet that can understand moving images like a human brain.

This AI, described in a study published in PNAS, works by mimicking how neurons process real-time scenes. Instead of focusing on still images, MovieNet deals with complex, changing visuals. This could change fields like medical diagnosis and autonomous driving.

MovieNet is more accurate and better for the environment than older AI models. Scripps researcher Hollis Cline explains that our brains don’t just see pictures; they understand a continuous story. By watching how tadpole neurons react to moving images, the scientists found neurons in a visual processing region of the tadpole brain that detect changes like brightness or rotation. These neurons put together bits of moving images into a whole scene, much like solving a puzzle where each piece is a moment in time.

The new AI learned to spot these moving patterns by looking at short clips, from 100 to 600 milliseconds long, rather than single frames. When tested with videos of swimming tadpoles, MovieNet could tell normal from abnormal movements with 82.3% accuracy, beating both human observers and other AI like GoogLeNet by a significant margin.
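As a toy illustration of clip-based (rather than frame-based) classification, and emphatically not the published MovieNet architecture, a sketch in PyTorch might look like this, with invented sizes throughout:

```python
import torch
import torch.nn as nn

class ClipClassifier(nn.Module):
    """Classifies a short stack of frames, so the model sees change over
    time rather than a single snapshot."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            # 3D convolution mixes information across frames as well as pixels
            nn.Conv3d(1, 8, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(8, num_classes),    # e.g. normal vs. abnormal swimming
        )

    def forward(self, clip):              # clip: (batch, channels, frames, h, w)
        return self.net(clip)

clip = torch.randn(1, 1, 9, 64, 64)       # ~300 ms of video at 30 fps
print(ClipClassifier()(clip).shape)        # -> torch.Size([1, 2])
```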

Better results with less data and power

MovieNet doesn’t just work better; it uses less energy. Older AI models need large amounts of data and computing power, but MovieNet condenses data into simpler patterns, which could make AI more sustainable and affordable for many uses.

In medicine, MovieNet might help spot subtle signs of diseases early, like Parkinson’s, by noticing tiny changes in movement. It could also improve drug testing by observing how cells react over time, rather than just at certain moments.

The scientists plan to keep improving MovieNet, making it adaptable for more uses. By learning from biology, they think AI can become more efficient and useful in ways we haven’t even thought of yet.
