AI pioneers win the Nobel Prize in Physics

John Hopfield and Geoffrey Hinton have won the Nobel Prize in Physics 2024 for foundational discoveries and inventions that enable machine learning with artificial neural networks.

John J. Hopfield is a Professor at Princeton University. Geoffrey E. Hinton is a Professor at the University of Toronto.

Two documents, titled “Popular science background: They used physics to find patterns in information” and “Scientific background: For foundational discoveries and inventions that enable machine learning with artificial neural networks,” give detailed explanations of the work of Hopfield and Hinton.

Artificial neural networks

Artificial Neural Networks (ANNs) are computer systems based on large collections of “neurons”, or nodes, connected by “synapses.” After suitable training, an ANN can perform tasks such as recognizing patterns in data.

In the 1980s, Hopfield developed an associative memory based on a simple ANN. Then Hinton co-developed a stochastic extension of Hopfield’s model called the Boltzmann machine. Hinton also used backpropagation methods to successfully implement examples of deep and dense networks, a milestone toward what is now known as deep learning.
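
To make the idea of an associative memory concrete, here is a minimal toy sketch in Python in the spirit of a Hopfield network. The patterns, the cue, and the network size are made up for illustration: a few binary patterns are stored with a simple Hebbian rule, and a corrupted cue is updated until it settles into the closest stored pattern.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: the weight matrix is the (scaled) sum of the
    outer products of the stored patterns, with a zero diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:                 # each pattern is a vector of +1/-1 values
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def recall(W, state, steps=10):
    """Asynchronously update neurons until the state settles into a
    stored pattern (a minimum of the network's energy function)."""
    state = state.copy()
    for _ in range(steps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Store two made-up binary patterns and recall one from a corrupted cue.
patterns = np.array([[ 1, -1,  1, -1,  1, -1],
                     [-1, -1,  1,  1, -1,  1]])
W = train_hopfield(patterns)
cue = np.array([1, -1, 1, -1, 1, 1])   # first pattern with its last bit flipped
print(recall(W, cue))                  # -> [ 1 -1  1 -1  1 -1]
```

The Boltzmann machine mentioned above replaces this deterministic update rule with a stochastic one, so the network samples configurations instead of settling into a single minimum.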

The video of the announcement includes a short phone interview with Hinton, who commented on the influence that his discoveries, and in particular his work on backpropagation, have had on later and current developments up to GPT-4, of which he is a frequent user.

An artist’s illustration of artificial intelligence (AI). This image was inspired by neural networks used in deep learning (Credit: Pexels)

Physics and beyond

Hopfield and Hinton “have used tools from physics to develop methods that are the foundation of today’s powerful machine learning,” notes the Nobel Foundation in a press release.

“In physics we use artificial neural networks in a vast range of areas, such as developing new materials with specific properties,” adds Ellen Moons, Chair of the Nobel Committee for Physics.

These words explain why two of the main developers of the neural networks used in Machine Learning (ML) and Artificial Intelligence (AI) at large have won the Nobel Prize in Physics.

But this year’s award also makes a lot of sense in view of the spectacular impact that the work of Hopfield and Hinton is having on today’s rapid and pervasive development of AI technology. AI could deeply change society and the world at large. And future AIs could even discover new breakthrough physics.


The discoverers of microRNA win the Nobel Prize in Medicine

Victor Ambros and Gary Ruvkun have won the 2024 Nobel Prize in Physiology or Medicine for the discovery of microRNA and its role in gene regulation.

Victor Ambros is a Professor of Natural Science at the University of Massachusetts Medical School. Gary Ruvkun is a Professor of Genetics at Harvard Medical School.

Gene regulation ensures that only the correct set of genes is active in each specific cell type and that cellular functions keep adapting to changing conditions in our bodies and environment.

Ambros and Ruvkun discovered microRNA, tiny RNA molecules that play a crucial role in gene regulation. “Their groundbreaking discovery revealed a completely new principle of gene regulation that turned out to be essential for multicellular organisms, including humans,” notes the Nobel Foundation in a press release.

In the late 1980s, Ambros and Ruvkun studied C. elegans worms. In particular, they investigated two mutant genes, lin-4 and lin-14, whose mutations caused anomalies in the timing of activation of genetic programs during development.

C. elegans, a model organism used for genetics and molecular biology research (Credit: snickclunk / Flickr)

Eventually, the two scientists discovered a new principle of gene regulation mediated by microRNA, a previously unknown type of RNA, and published their findings in 1993.

The research results, which seemed limited to C. elegans worms, attracted little attention after publication. But then, in 2000, Ruvkun’s research group discovered another microRNA, encoded by the let-7 gene, which is present throughout the animal kingdom.

Gene regulation by microRNA is universal

In the following years, hundreds of different microRNAs were identified. Today, we know that there are more than a thousand genes for different microRNAs in humans, and that gene regulation by microRNA is universal among multicellular organisms.

“As the field exploded, which is just a joy to watch, then there was a sense that this is the sort of field, the sort of sea-change that gets awards and things,” said Ruvkun in a first interview. “But that took a long time and was an unbelievable pleasure to watch, to participate in.”

Advances are being made in developing microRNA-based diagnostics and therapeutics for metabolic disorders, cardiovascular diseases, neurodegenerative conditions, and cancer.

More detailed explanations are given in a document titled “Scientific background: For the discovery of microRNA and its role in post-transcriptional gene regulation.”


Scientists reactivate memory in laboratory mice

Neuroscientists at Johns Hopkins Medicine have reactivated a specific memory circuit in the brains of laboratory mice.

The scientists stimulated neurons to reactivate a spatial memory, and confirmed that the memory circuit had been reactivated by observing the mice seek out shelter when no shelter was actually present.

The neuroscientists have detailed the research methods and results in a paper published in Nature Neuroscience.

“When we artificially reactivate those memory circuits in the brain, it triggers the mouse to do the same thing it did naturally, even without the fear stimuli that cause them to seek shelter to begin with,” says research leader Hyungbae Kwon in a Johns Hopkins University School of Medicine press release.

First, the scientists allowed the mice to explore a box with a shelter to form a mental map of their surroundings, and stimulated the mice to seek shelter.

Then, the scientists removed the shelter and activated shelter memory neurons with a system called Cal-light, which Kwon co-developed in 2017 at the Max Planck Florida Institute for Neuroscience. Cal-light allows researchers to control the activity of specific neurons with light. The scientists report that the mice sought out the area of the box where the shelter had once been, even though neither the original threat nor the shelter was present.

“The Cal-light system allowed us to selectively tag a specific function in the brain, helping us to map out memory on a cellular level,” says Kwon. 

The findings advance understanding of how memories are structured in the mammalian brain. The envisioned applications include the development of new methods to slow down or prevent memory loss in human patients with neurodegenerative diseases.

Engineering memory

Eventually, Kwon says as reported in the press release, “this research could provide a foundation for reactivating or engineering memory circuits in people with Alzheimer’s.”

The possibility of editing memory circuits, and even engineering new ones, is especially intriguing. These findings seem to indicate that one day, and perhaps soon, we’ll be able to engineer human memory just like we engineer computer memory.


NASA confirms that water ice is abundant on the Moon

A new analysis of NASA’s LRO (Lunar Reconnaissance Orbiter) data indicates that water ice is more abundant than previously thought in permanently shadowed regions (PSRs) near the lunar South Pole.

“We find that there is widespread evidence of water ice within PSRs outside the South Pole, towards at least 77 degrees south latitude,” says Timothy P. McClanahan in a NASA press release. “Our model and analysis show that greatest ice concentrations are expected to occur near the PSRs’ coldest locations below 75 Kelvin (-198°C or -325°F) and near the base of the PSRs’ poleward-facing slopes.”

A paper published in Planetary Science Journal describes the findings in detail.

This illustration shows the distribution of permanently shadowed regions (in blue) on the Moon poleward of 80 degrees South latitude (Credit: NASA/GSFC/Timothy P. McClanahan).

The importance of the lunar South Pole

“Our results are highly relevant,” conclude the researchers in the Planetary Science Journal paper, “to the planned on-surface robotic and human investigations of the lunar surface by NASA’s Artemis and Commercial Lunar Payload Services missions.”

In fact, NASA has identified 13 candidate landing regions near the lunar South Pole for Artemis III, which will be the first of the Artemis missions to bring crew to the lunar surface. Due to many considerations including the presence of water, the lunar South Pole region is considered the top candidate region for future research outposts and eventually settlements on the Moon.

“Ice would be a valuable resource for future lunar expeditions,” notes the NASA press release. “Water could be used for radiation protection and supporting human explorers, or broken into its hydrogen and oxygen components to make rocket fuel, energy, and breathable air.”

The importance of establishing crewed bases on the Moon is evident to us, the space enthusiasts persuaded that humanity must become multi-planetary as soon as possible. It should also be evident to politicians in the West, because if the West doesn’t lead the way, China will.


Scientists turn on new neuron production in mice brains

New research results published by Stanford University scientists suggest the possibility of finding ways to turn on new neuron production in old or injured brains.

Some new neurons are still produced in the adult brain by neural stem cells. But, as brains age, the neural stem cells become less and less adept at making these new neurons. This contributes to memory impairments and degenerative brain diseases such as Alzheimer’s and Parkinson’s. It also makes it more difficult to recover from stroke or other brain injury.

A research paper published in Nature provides new results on how and why neural stem cells, the cells behind the generation of new neurons in the adult brain (neurogenesis), become less active as brains age. The paper suggests new ways to make old neural stem cells more active, and the possibility of stimulating neurogenesis in younger brains in need of repair.

The researchers used CRISPR technology, which allows precise editing of the DNA of living cells, to conduct a genome-wide search for genes that, when knocked out, increase the activation of neural stem cells in cultured samples from old mice, without also increasing it in young ones. Knocking out a gene means inactivating it by editing its DNA in a way that permanently stops its expression.

One of those genes encodes the glucose transporter protein GLUT4. This suggests that elevated glucose levels in and around old neural stem cells could be keeping those cells inactive.
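
The selection logic of such a screen can be illustrated with a small, purely hypothetical Python sketch: for each knocked-out gene, compare neural stem cell activation in old versus young cultures and keep the genes that boost activation specifically in old cells. Apart from Slc2a4, the gene encoding GLUT4, the gene names, numbers, and thresholds below are invented for illustration and are not taken from the Nature paper.

```python
# Hypothetical illustration of the screen's selection criterion: keep genes
# whose knockout raises neural stem cell (NSC) activation in old cultures
# without also raising it in young ones. All numbers are made up.
screen_results = {
    # gene: (activation fold-change in old NSCs, fold-change in young NSCs)
    "Slc2a4": (2.3, 1.1),   # gene encoding the glucose transporter GLUT4
    "GeneX":  (1.9, 1.8),   # placeholder: activates both ages, not age-specific
    "GeneY":  (1.0, 1.0),   # placeholder: no effect
}

OLD_THRESHOLD = 1.5    # minimum activation boost required in old NSCs
YOUNG_CEILING = 1.3    # activation in young NSCs must stay below this value

hits = [gene for gene, (old_fc, young_fc) in screen_results.items()
        if old_fc >= OLD_THRESHOLD and young_fc < YOUNG_CEILING]
print(hits)   # -> ['Slc2a4']
```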

The scientists demonstrated that knocking this gene out indeed had an activating and proliferative effect on neural stem cells, leading to a significant increase in new neuron production in living mice. After the intervention, the scientists observed a more than twofold increase in newborn neurons in old mice.

According to the scientists, similar methods could permit treating brain injuries.

A hopeful finding

This “is a hopeful finding,” says research leader Anne Brunet. It suggests not only the possibility of designing pharmaceutical or genetic therapies to turn on new neuron growth in old or injured brains, but also the possibility of developing simpler behavioral interventions, such as a low carbohydrate diet that might adjust the amount of glucose taken up by old neural stem cells. “The next step,” Brunet continued, “is to look more closely at what glucose restriction, as opposed to knocking out genes for glucose transport, does in living animals.”


Scientists map an entire fly brain

Scientists at Princeton University, in collaboration with other scientists and citizen scientists, have built a neuron-by-neuron and synapse-by-synapse map (connectome) of the entire brain of an adult fruit fly.

“Any brain that we truly understand tells us something about all brains,” said research co-leader Sebastian Seung. “With the fly wiring diagram, we have the potential for an unprecedented, detailed and deep understanding.”

With almost 140,000 neurons and tens of millions of synapses connecting them, this is by far the most complex brain that has been mapped so far.
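
A connectome of this scale is naturally represented as a directed graph in which nodes are neurons and weighted edges are synaptic connections. The Python sketch below illustrates the idea with networkx; the file name and column layout are hypothetical placeholders, not the actual FlyWire data format.

```python
# Minimal sketch of treating a connectome as a directed graph.
# The CSV path and columns are hypothetical placeholders,
# not the actual FlyWire release format.
import csv
import networkx as nx

G = nx.DiGraph()
with open("fly_connectome_edges.csv") as f:
    reader = csv.reader(f)
    next(reader)                       # skip header: pre_id, post_id, n_synapses
    for pre_id, post_id, n_synapses in reader:
        G.add_edge(pre_id, post_id, weight=int(n_synapses))

print(f"{G.number_of_nodes()} neurons, {G.number_of_edges()} connections")

# Example query: the ten neurons with the highest total outgoing synapse count.
out_strength = {n: sum(d["weight"] for _, _, d in G.out_edges(n, data=True))
                for n in G.nodes}
print(sorted(out_strength, key=out_strength.get, reverse=True)[:10])
```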

The map was built from 21 million images. Using an Artificial Intelligence (AI) model built by Seung’s team, the images were turned into a three-dimensional map by the FlyWire Consortium.

Brain cells from the fruit fly’s auditory circuits, as mapped and annotated by FlyWire (Credit: Princeton University).

“This is a major achievement,” says research co-leader Mala Murthy. “There is no other full brain connectome for an adult animal of this complexity.”

FlyWire is a collaboration between professional scientists from several institutions and citizen scientists including gamers. The cloud computational infrastructure used by FlyWire has been developed by teams led by Seung and Murthy, in collaboration with the Allen Institute for Brain Science.

Much of this research work has been done by AI systems. Humans have checked, assembled, and annotated the AI-generated products, adding cell type labels to each neuron.

This momentous achievement is described and discussed in a special issue of Nature. The main paper is titled “Neuronal wiring diagram of an adult brain.”

It seems plausible that future developments could permit mapping an entire human brain and better understanding human brain diseases.

A step toward mind uploading?

But other, wilder ideas come immediately to mind.

“Minds differ because connectomes differ,” said Seung in “Connectome: How the Brain’s Wiring Makes Us Who We Are” (2012). “Perhaps even your memories, the most idiosyncratic aspect of your personal identity, could be encoded in your connectome.”

In other words, all that matters to personal identity and distinguishes one living being from another is encoded in the connectome. This view, shared by many (but not all) scientists, seems to open the door to the possibility of mind uploading. Humans, Seung said, could one day “discard their bodies completely and transfer their minds to computers.” Mind uploading seems a science fictional idea, but “perhaps all we have to do is wait for computers to get more powerful.”


On-board AI for robotic operations

MIT engineers have developed a method that enables robots to make intuitive, task-relevant decisions in complex environments in the real world. The applications of the new method range from factory and domestic robots to search and rescue operations.

The new method is called Clio, after the Greek muse of history, for its ability to identify and remember only the things that really matter for a given task. For example, a quadruped robot (Spot by Boston Dynamics) running Clio on board in real time identified and mapped only those parts of an office building that related to the robot’s tasks (such as retrieving a dog toy while ignoring piles of office supplies), allowing the robot to grasp the objects of interest.

MIT’s Clio runs in real-time to map task-relevant objects in a robot’s surroundings, allowing the bot (Boston Dynamics’ quadruped robot Spot, pictured) to carry out a natural language task (Credit: MIT).

The Clio system is described in a research paper published in Robotics and Automation Letters. The final manuscript of the paper is available from MIT, and a preprint is available on arXiv.

With Clio, a robot is given a list of tasks described in natural language and, based on those tasks, analyzes its surroundings to identify and remember only what is relevant to them.
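
The core idea, keeping only what matters for the tasks at hand, can be sketched as a simple relevance filter. The Python sketch below is a conceptual illustration only, not the actual Clio algorithm; the word-overlap “embedding” stands in for the learned vision-language features a real system would use.

```python
# Conceptual sketch of task-relevance filtering; not the actual Clio algorithm.
def embed(text: str) -> set[str]:
    """Toy 'embedding': the set of lowercase words in the text.
    A real system would use a learned text/image encoder instead."""
    return set(text.lower().split())

def relevance(a: set[str], b: set[str]) -> float:
    """Jaccard similarity between word sets, standing in for cosine similarity."""
    return len(a & b) / len(a | b)

def relevant_objects(task: str, detected: list[str], threshold: float = 0.2) -> list[str]:
    """Keep only the detected objects that look relevant to the task."""
    task_words = embed(task)
    return [obj for obj in detected if relevance(task_words, embed(obj)) >= threshold]

task = "retrieve the dog toy"
scene = ["dog toy", "pile of office supplies", "desk chair", "stapler"]
print(relevant_objects(task, scene))   # -> ['dog toy']
```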

“Search and rescue is the motivating application for this work, but Clio can also power domestic robots and robots working on a factory floor alongside humans,” says lead researcher Luca Carlone in an MIT press release. “It’s really about helping the robot understand the environment and what it has to remember in order to carry out its mission.”

Combining computer vision and LLMs

On-board Artificial Intelligence (AI) for robots that must navigate and operate in complex environments is critically important. Such systems would be useful not only for search and rescue operations and the other applications mentioned in this study, but also for all applications that require robots to operate as autonomously as possible, such as law enforcement operations, military operations, and operations in deep space.

Clio makes use of both computer vision and large language models (LLMs), and can be seen as an advance toward multimodal AI.


GPT-4 performs as well as human radiologists

Researchers at Osaka Metropolitan University have compared the diagnostic performance of GPT-4-based ChatGPT with that of human radiologists (two board-certified neuroradiologists and three general radiologists) on 150 preoperative brain tumor MRI reports, and concluded that GPT-4 performed as well as the radiologists.

The study is described in a research paper published in European Radiology.

“GPT-4 exhibited good diagnostic capability, comparable to neuroradiologists in differentiating brain tumors from MRI reports,” conclude the researchers. “GPT-4 can be a second opinion for neuroradiologists on final diagnoses and a guidance tool for general radiologists and residents.”

The accuracy of both sets of diagnoses was evaluated based on the actual diagnoses of the tumors after removal. The evaluation results show a 73 percent accuracy for GPT-4, compared to 72 percent for neuroradiologists and 68 percent for general radiologists.
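
As a quick arithmetic check, here are the approximate numbers of correct final diagnoses implied by those percentages, assuming (for illustration only; the paper’s exact counting is not reproduced here) that all 150 reports were scored for every reader.

```python
# Approximate correct-diagnosis counts implied by the reported accuracies,
# assuming (for illustration) that all 150 reports were scored for each reader.
n_reports = 150
accuracies = {"GPT-4": 0.73, "Neuroradiologists": 0.72, "General radiologists": 0.68}

for reader, acc in accuracies.items():
    print(f"{reader}: about {round(acc * n_reports)} of {n_reports} reports correct")
# GPT-4: about 110 of 150 reports correct, and so on.
```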

The researchers emphasize that the diagnostic accuracy of GPT-4 seems to increase with the skill level of the human source of the input report: the accuracy with neuroradiologist reports was 80 percent, compared to 60 percent when using general radiologist reports.

In the future, said graduate student Yasuhito Mitsuyama, the lead author of the paper, in the Osaka Metropolitan University press release, “we intend to study large language models in other diagnostic imaging fields with the aims of reducing the burden on physicians, improving diagnostic accuracy, and using AI to support educational environments.”

Future AI applications in medicine

The instance of GPT-4 used in the study is a few months old (May 24 version). In view of the fast pace of development of Artificial Intelligence (AI) technology, and in particular the ongoing development of AI systems with enhanced reasoning ability, it seems likely that more spectacular results could materialize soon.

It seems plausible that AI systems could play a growing and eventually leading role in medical research and clinical practice.

For a fascinating overview of current and future AI applications in medicine, written by top experts in the field, see “The AI Revolution in Medicine: GPT-4 and Beyond” (2023).


Photons that seem to travel in negative time

A paper published on arXiv a few weeks ago is making the rounds after being highlighted by Scientific American on September 30.

The Scientific American headline reads “Evidence of ‘Negative Time’ Found in Quantum Physics Experiment.” According to a group of experimental physicists, photons have been found to exit a material before entering it.

The research results, interesting to say the least, had previously been highlighted by New Scientist. The New Scientist story is paywalled, but open copies are easy to find online.

The arXiv paper is titled “Experimental evidence that a photon can spend a negative amount of time in an atom cloud.” So far, the paper has appeared only on arXiv; that is, it has not been peer-reviewed yet. However, it is authored by a team of scientists that includes well-known physicists such as Howard Wiseman and Aephraim Steinberg. “Woo-hoo!” the latter posted to X. Atoms “seem to spend a negative amount of time” in an excited state.

When photons of light pass through a material substrate, they are absorbed and then re-emitted by the excited atoms of the substrate. In this experiment, the substrate is a cloud of very cold rubidium atoms prepared in a magneto-optical trap formed using two beams of light and a magnetic field gradient. The absorption and re-emission process influences the group delay, an interval of time that can be intuitively thought of as the time that the light spends in the substrate. In certain experimental conditions, the observed group delay is negative. In other words, the photons seem to exit the cloud of cold atoms before they have entered it.
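
For reference, the group delay has a standard definition in optics; this is general background, not a formula quoted from the arXiv paper. It is the derivative of the spectral phase accumulated by the light with respect to angular frequency, evaluated at the carrier frequency, and a negative value means the peak of the transmitted pulse emerges earlier than it would if the light spent no time at all in the medium.

```latex
% Standard definition of the group delay (general optics background,
% not quoted from the paper). \phi(\omega) is the spectral phase
% accumulated in the medium and \omega_0 is the carrier frequency.
\tau_g = \left. \frac{\partial \phi(\omega)}{\partial \omega} \right|_{\omega = \omega_0}
```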

What does it mean?

The proposed explanations of the experimental results involve weird quantum phenomena. Before beginning to speculate about time travel and all that, one should bear in mind that quantum mechanics, as it is understood today, rules out the possibility of leveraging weird quantum effects to send signals faster than light and backward in time.

In this case, it can be argued that the group delay is not a physically meaningful quantity. But the sober suggestion of the paper is that it is.


Large World Models with spatial intelligence

A startup called World Labs has raised $230 million to build Artificial Intelligence (AI) technology that can understand and operate in the three-dimensional physical world, Reuters reported.

The startup is the brainchild of Fei-Fei Li, a heavyweight in the AI sector who is often called the “godmother of AI.” Among other achievements, Li developed ImageNet, a large data set that has been instrumental in waves of breakthroughs in computer vision, deep learning, and AI at large.

Fei-Fei Li (Credit: Wikimedia Commons).

World Labs will strive to build “Large World Models (LWMs) to perceive, generate, and interact with the 3D world,” notes the startup’s announcement. “We aim to lift AI models from the 2D plane of pixels to full 3D worlds – both virtual and real – endowing them with spatial intelligence as rich as our own.”

Li’s 2023 autobiography titled “The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI” tells the story of ImageNet and other recent AI milestones. At the heart of AI, explains Li, is “the power of data at large scales.” Big Data is, indeed, at the heart of both Large Language Models (LLMs) and LWMs.

The funding round was co-led by Andreessen Horowitz. In a commentary, Andreessen Horowitz partners Martin Casado and Sarah Wang note that, while LLMs have had a huge impact over the last few years, language is just one of the ways we humans reason and communicate. The physical world is spatial, and we first understand and reason about it by seeing and interacting with it. The envisioned short-term applications of LWM technology include computer games and Augmented/Virtual Reality (AR/VR). Longer-term applications could include AI-powered autonomous robots, a critical technology that is likely to be first prototyped with software robots acting in VR worlds.

In an episode of the Andreessen Horowitz podcast titled “The Future of AI is Here – Fei-Fei Li Unveils the Next Frontier of AI,” Casado interviews Li and World Labs co-founder Justin Johnson on the rapid ongoing expansion of AI technology and the prospect of LWM-powered machines.

Grokking the world

The prospect of LWM technology promises machines that understand the physical world intuitively, from inside – or, using the delicious term coined by Robert Heinlein, machines able to “grok” the world and act autonomously in the world.

LLM technology has enabled spectacular advances in understanding and generating human language, but this is not likely to be enough for Artificial General Intelligence (AGI). Recent developments with enhanced reasoning ability, such as OpenAI’s Strawberry, promise multimodal AIs with a wider range of abilities. LWMs with human-like spatial intelligence that groks the physical world like we do could soon add important new capabilities to the AI toolbox and advance toward AGI.
