NASA confirms that water ice is abundant on the Moon

A new analysis of NASA’s LRO (Lunar Reconnaissance Orbiter) data indicates that water ice is more abundant than previously thought in permanently shadowed regions (PSRs) near the lunar South Pole.

“We find that there is widespread evidence of water ice within PSRs outside the South Pole, towards at least 77 degrees south latitude,” says Timothy P. McClanahan in a NASA press release. “Our model and analysis show that greatest ice concentrations are expected to occur near the PSRs’ coldest locations below 75 Kelvin (-198°C or -325°F) and near the base of the PSRs’ poleward-facing slopes.”

A paper published in The Planetary Science Journal describes the findings in detail.

This illustration shows the distribution of permanently shadowed regions (in blue) on the Moon poleward of 80 degrees South latitude (Credit: NASA/GSFC/Timothy P. McClanahan).

The importance of the lunar South Pole

“Our results are highly relevant,” conclude the researchers in the Planetary Science Journal paper, “to the planned on-surface robotic and human investigations of the lunar surface by NASA’s Artemis and Commercial Lunar Payload Services missions.”

In fact, NASA has identified 13 candidate landing regions near the lunar South Pole for Artemis III, which will be the first of the Artemis missions to bring crew to the lunar surface. Due to many considerations, including the presence of water ice, the lunar South Pole region is considered the top candidate for future research outposts and, eventually, settlements on the Moon.

“Ice would be a valuable resource for future lunar expeditions,” notes the NASA press release. “Water could be used for radiation protection and supporting human explorers, or broken into its hydrogen and oxygen components to make rocket fuel, energy, and breathable air.”

The importance of establishing crewed bases on the Moon is evident to us, the space enthusiasts persuaded that humanity must become multi-planetary as soon as possible. It should also be evident to politicians in the West, because if the West doesn’t lead the way, China will.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Scientists turn on new neuron production in mice brains

New research results published by Stanford University scientists suggest the possibility of finding ways to turn on new neuron production in old or injured brains.

Some new neurons are still produced in the adult brain by neural stem cells. But as brains age, these stem cells become less and less adept at making new neurons. This decline contributes to memory impairments and degenerative brain diseases such as Alzheimer’s and Parkinson’s, and it makes it more difficult to recover from stroke or other brain injuries.

A research paper published in Nature provides new results on how and why neural stem cells, the cells behind the generation of new neurons in the adult brain (neurogenesis), become less active as brains age. The paper suggests new ways to make old neural stem cells more active, and the possibility of stimulating neurogenesis in younger brains in need of repair.

The researchers used CRISPR technology, which allows scientists to precisely edit the DNA of living cells, to conduct a genome-wide search for genes that, when knocked out, increase the activation of neural stem cells in cultured samples from old mice without also increasing it in young ones. Knocking out a gene means inactivating it by editing its DNA in a way that permanently stops the gene’s expression.
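The logic of such an age-specific screen can be sketched in a few lines. This is a toy illustration with invented activation scores, not the study’s actual analysis pipeline: the idea is simply to flag genes whose inactivation boosts stem cell activation in old cultures but not in young ones.

```python
# Toy sketch of selecting age-specific hits from a gene-inactivation screen.
# Scores are invented fold changes in stem cell activation vs. control.
# Slc2a4 is the gene encoding the glucose transporter GLUT4.

old_effect = {"Slc2a4": 2.3, "GeneB": 1.8, "GeneC": 1.1}
young_effect = {"Slc2a4": 1.0, "GeneB": 1.9, "GeneC": 1.0}

def age_specific_hits(old, young, threshold=1.5):
    """Genes that activate old cells (>= threshold) but not young ones."""
    return [g for g in old
            if old[g] >= threshold and young.get(g, 0.0) < threshold]

print(age_specific_hits(old_effect, young_effect))  # ['Slc2a4']
```

In this toy example, GeneB is excluded because it boosts activation in both old and young cultures, which is exactly the kind of non-specific hit the screen was designed to filter out.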

One of those genes encodes the glucose transporter known as GLUT4. This suggests that elevated glucose levels in and around old neural stem cells could be keeping those cells inactive.

The scientists demonstrated that knocking this gene out indeed had an activating and proliferative effect on neural stem cells, leading to a significant increase in new neuron production in living mice. After the intervention, the scientists observed a more than twofold increase in newborn neurons in old mice.

According to the scientists, similar methods could permit treating brain injuries.

A hopeful finding

This “is a hopeful finding,” says research leader Anne Brunet. It suggests not only the possibility of designing pharmaceutical or genetic therapies to turn on new neuron growth in old or injured brains, but also the possibility of developing simpler behavioral interventions, such as a low carbohydrate diet that might adjust the amount of glucose taken up by old neural stem cells. “The next step,” Brunet continued, “is to look more closely at what glucose restriction, as opposed to knocking out genes for glucose transport, does in living animals.”

Scientists map an entire fly brain

Scientists at Princeton University, in collaboration with other scientists and citizen scientists, have built a neuron-by-neuron and synapse-by-synapse map (connectome) of the entire brain of an adult fruit fly.

“Any brain that we truly understand tells us something about all brains,” said research co-leader Sebastian Seung. “With the fly wiring diagram, we have the potential for an unprecedented, detailed and deep understanding.”

With almost 140,000 neurons and tens of millions of synapses connecting them, this is by far the most complex brain mapped to date.

The map was built from 21 million images. Using an artificial intelligence (AI) model built by Seung’s team, the FlyWire Consortium turned the images into a three-dimensional map.

Brain cells from the fruit fly’s auditory circuits, as mapped and annotated by FlyWire (Credit: Princeton University).

“This is a major achievement,” says research co-leader Mala Murthy. “There is no other full brain connectome for an adult animal of this complexity.”

FlyWire is a collaboration between professional scientists from several institutions and citizen scientists including gamers. The cloud computational infrastructure used by FlyWire has been developed by teams led by Seung and Murthy, in collaboration with the Allen Institute for Brain Science.

Much of this research work was done by AI systems. Humans checked, assembled, and annotated the AI-generated output, adding cell-type labels to each neuron.

This momentous achievement is described and discussed in a special issue of Nature. The main paper is titled “Neuronal wiring diagram of an adult brain.”

It seems plausible that future developments could permit mapping an entire human brain and better understanding human brain diseases.

A step toward mind uploading?

But other, wilder ideas come immediately to mind.

“Minds differ because connectomes differ,” said Seung in “Connectome: How the Brain’s Wiring Makes Us Who We Are” (2012). “Perhaps even your memories, the most idiosyncratic aspect of your personal identity, could be encoded in your connectome.”

In other words, all that matters to personal identity and distinguishes one living being from another is encoded in the connectome. This view, shared by many (but not all) scientists, seems to open the door to the possibility of mind uploading. Humans, Seung said, could one day “discard their bodies completely and transfer their minds to computers.” Mind uploading seems a science fictional idea, but “perhaps all we have to do is wait for computers to get more powerful.”

On-board AI for robotic operations

MIT engineers have developed a method that enables robots to make intuitive, task-relevant decisions in complex environments in the real world. The applications of the new method range from factory and domestic robots to search and rescue operations.

The new method is called Clio, like the Greek muse of history, for its ability to identify and remember only the things that really matter for a given task. For example, a quadruped robot (Spot by Boston Dynamics) running Clio on-board in real-time identified and mapped only those parts of an office building that related to the robot’s tasks (such as retrieving a dog toy while ignoring piles of office supplies), allowing the robot to grasp the objects of interest.

MIT’s Clio runs in real-time to map task-relevant objects in a robot’s surroundings, allowing the bot (Boston Dynamics’ quadruped robot Spot, pictured) to carry out a natural language task (Credit: MIT).

The Clio system is described in a research paper published in Robotics and Automation Letters. The final manuscript is available from MIT, and a preprint is posted on arXiv.

With Clio, a robot is given a list of tasks described in natural language and, based on those tasks, it analyzes its surroundings to identify and remember only what is relevant to the given tasks.
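The core idea of filtering a scene down to what matters for a task can be caricatured in a few lines. This is a toy word-overlap sketch with invented object labels, not Clio’s actual method, which combines vision features with large language models:

```python
# Toy sketch of task-relevant filtering: keep only detected objects whose
# description shares words with the natural-language task. Clio itself
# works on vision features and LLM embeddings, not word overlap.

def relevant(objects, task, min_overlap=1):
    """Return the objects sharing at least min_overlap words with the task."""
    task_words = set(task.lower().split())
    return [o for o in objects
            if len(set(o.lower().split()) & task_words) >= min_overlap]

detected = ["dog toy", "stack of office supplies", "red toy block", "stapler"]
task = "retrieve the dog toy"

print(relevant(detected, task))  # ['dog toy', 'red toy block']
```

Even this caricature shows the hard part: “red toy block” survives the filter on word overlap alone, which is why a real system needs semantic understanding rather than keyword matching.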

“Search and rescue is the motivating application for this work, but Clio can also power domestic robots and robots working on a factory floor alongside humans,” says lead researcher Luca Carlone in an MIT press release. “It’s really about helping the robot understand the environment and what it has to remember in order to carry out its mission.”

Combining computer vision and LLMs

On-board artificial intelligence (AI) for robots that must navigate and operate in complex environments is critical. Such systems would be useful not only for search and rescue and the other applications mentioned in this study, but also for any application that requires robots to operate as autonomously as possible, such as law enforcement, military operations, and operations in deep space.

Clio makes use of both computer vision and large language models (LLMs), and can be seen as an advance toward multimodal AI.

GPT-4 performs as well as human radiologists

Researchers at Osaka Metropolitan University have compared the diagnostic performance of GPT-4-based ChatGPT with that of human radiologists (two board-certified neuroradiologists and three general radiologists) on 150 preoperative brain tumor MRI reports, and concluded that GPT-4 performed as well as the radiologists.

The study is described in a research paper published in European Radiology.

“GPT-4 exhibited good diagnostic capability, comparable to neuroradiologists in differentiating brain tumors from MRI reports,” conclude the researchers. “GPT-4 can be a second opinion for neuroradiologists on final diagnoses and a guidance tool for general radiologists and residents.”

The accuracy of both sets of diagnoses was evaluated based on the actual diagnoses of the tumors after removal. The evaluation results show a 73 percent accuracy for GPT-4, compared to 72 percent for neuroradiologists and 68 percent for general radiologists.

The researchers emphasize that the diagnostic accuracy of GPT-4 seems to increase with the skill level of the human source of the input report: the accuracy with neuroradiologist reports was 80 percent, compared to 60 percent when using general radiologist reports.
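For readers unfamiliar with how such percentages are scored, accuracy here simply means the fraction of predicted diagnoses that match the post-surgical ground truth. The diagnoses below are invented for illustration, not the study’s data:

```python
# Toy illustration of diagnostic accuracy scoring: each predicted diagnosis
# is compared with the ground-truth diagnosis established after tumor removal.
# Hypothetical example cases; the study used 150 real MRI reports.

def accuracy(predicted, actual):
    """Percentage of predictions that match the ground truth."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return 100.0 * correct / len(actual)

ground_truth = ["glioma", "meningioma", "metastasis", "glioma"]
gpt4_calls = ["glioma", "meningioma", "lymphoma", "glioma"]

print(f"{accuracy(gpt4_calls, ground_truth):.0f}%")  # 75%
```

With only 150 cases, differences of a few percentage points (73 vs. 72 vs. 68 percent) amount to a handful of reports, which is worth keeping in mind when reading the headline comparison.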

In the future, said graduate student Yasuhito Mitsuyama, the lead author of the paper, in the Osaka Metropolitan University press release, “we intend to study large language models in other diagnostic imaging fields with the aims of reducing the burden on physicians, improving diagnostic accuracy, and using AI to support educational environments.”

Future AI applications in medicine

The instance of GPT-4 used in the study is a few months old (May 24 version). In view of the fast pace of development of Artificial Intelligence (AI) technology, and in particular the ongoing development of AI systems with enhanced reasoning ability, it seems likely that more spectacular results could materialize soon.

It seems plausible that AI systems could play a growing, and eventually leading, role in medical research and clinical practice.

For a fascinating overview of current and future AI applications in medicine, written by top experts in the field, see “The AI Revolution in Medicine: GPT-4 and Beyond” (2023).

Photons that seem to travel in negative time

A paper posted on arXiv a few weeks ago is making the rounds after being highlighted by Scientific American on September 30.

The Scientific American headline reads “Evidence of ‘Negative Time’ Found in Quantum Physics Experiment.” According to a group of experimental physicists, photons have been found to exit a material before entering it.

The research results, interesting to say the least, had previously been highlighted by New Scientist. The New Scientist story is paywalled, but open copies are easy to find online.

The arXiv paper is titled “Experimental evidence that a photon can spend a negative amount of time in an atom cloud.” So far, the paper has been posted only on arXiv; that is, it has not yet been peer-reviewed. However, it is authored by a team of scientists that includes well-known physicists such as Howard Wiseman and Aephraim Steinberg. “Woo-hoo!” the latter posted on X. Atoms “seem to spend a negative amount of time” in an excited state.

When photons of light pass through a material substrate (in this case, a very cold cloud of rubidium atoms held in a magneto-optical trap formed by two light beams and a magnetic field gradient), they are absorbed and then re-emitted by the excited atoms of the substrate. This process influences the group delay, an interval of time that can be intuitively thought of as the time the light spends in the substrate. Under certain experimental conditions, the observed group delay is negative. In other words, the photons seem to exit the cloud of cold atoms before they have entered it.
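For context, the group delay mentioned above has a standard definition: it is the derivative of the phase shift the medium imprints on the light with respect to frequency,

```latex
\tau_g = \frac{d\phi}{d\omega}
```

Nothing in this definition forbids $d\phi/d\omega < 0$ near an absorption resonance, which is how a negative “time spent in the medium” can arise without any signal actually traveling backward in time.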

What does it mean?

The proposed explanations of the experimental results involve weird quantum phenomena. Before beginning to speculate about time travel and all that, one should bear in mind that quantum mechanics, as it is understood today, rules out the possibility of leveraging weird quantum effects to send signals faster than light and backward in time.

One can argue that, in this case, the group delay is not a physically meaningful quantity. But the sober suggestion of the paper is that it is.

Large World Models with spatial intelligence

A startup called World Labs has raised $230 million to build Artificial Intelligence (AI) technology that can understand and operate in the three-dimensional physical world, Reuters reported.

The startup is the brainchild of Fei-Fei Li, a heavyweight in the AI sector who is often called the “godmother of AI.” Among other achievements, Li developed ImageNet, a large data set that has been instrumental to waves of breakthroughs in computer vision, deep learning, and AI at large.

Fei-Fei Li (Credit: Wikimedia Commons).

World Labs will strive to build “Large World Models (LWMs) to perceive, generate, and interact with the 3D world,” notes the startup’s announcement. “We aim to lift AI models from the 2D plane of pixels to full 3D worlds – both virtual and real – endowing them with spatial intelligence as rich as our own.”

Li’s 2023 autobiography titled “The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI” tells the story of ImageNet and other recent AI milestones. At the heart of AI, explains Li, is “the power of data at large scales.” Big Data is, indeed, at the heart of both Large Language Models (LLMs) and LWMs.

The funding round was co-led by Andreessen Horowitz. In a commentary, Andreessen Horowitz partners Martin Casado and Sarah Wang note that, while LLMs have had a huge impact over the last few years, language is just one way we humans reason and communicate. The physical world is spatial, and we first understand and reason about it by seeing and interacting with it. The envisioned short-term applications of LWM technology include computer games and augmented/virtual reality (AR/VR). Longer-term applications could include AI-powered autonomous robots, a critical technology that is likely to be prototyped first with software robots acting in VR worlds.

In an episode of the Andreessen Horowitz podcast titled “‘The Future of AI is Here’ – Fei-Fei Li Unveils the Next Frontier of AI,” Casado interviews Li and World Labs co-founder Justin Johnson about the rapid ongoing expansion of AI technology and the prospect of LWM-powered machines.

Grokking the world

The prospect of LWM technology promises machines that understand the physical world intuitively, from the inside, or, to use the delicious term coined by Robert Heinlein, machines able to “grok” the world and act autonomously in it.

LLM technology has enabled spectacular advances in understanding and generating human language, but this is likely not enough for Artificial General Intelligence (AGI). Recent developments in AI systems with enhanced reasoning ability, such as OpenAI’s Strawberry, promise multimodal AIs with a wider range of abilities. LWMs with human-like spatial intelligence that grok the physical world as we do could soon add important new capabilities to the AI toolbox and advance the field toward AGI.

A fish with legs?

Yep. Sea robins are actual ocean fish—with leg-like appendages that make them adept at finding prey. (OK, sea robins’ “legs” are extensions of their pectoral fins, of which they have three on each side.)

Corey Allard, a postdoctoral fellow at Cape Cod’s Marine Biological Laboratory, saw some sea robins in a tank “because they know we like weird animals,” Allard said in a statement.

Framework for evolutionary adaptations

The study led to a collaboration with Stanford researchers studying the fish’s developmental genetics, and to papers in Current Biology dealing with how sea robins use their legs, what genes control the emergence of those legs, and how these animals could be used as a conceptual framework for evolutionary adaptations.

The team also explored Prionotus carolinus, a fish that digs to find buried prey and is highly sensitive to touch and chemical signals, and P. evolans, which lacks these sensory capabilities and uses its legs for locomotion and probing, but not for digging.

What walking fish could teach us

The walking fish are a potentially powerful model organism to compare specialized traits, and to teach us about how evolution allows for adaptation to very specific environments, the team suggests. For example, there are genetic transcription factors that control the development of the sea robins’ legs that are also found in the limbs of other animals, including humans.

A second study, focused on genetics, included the Max Planck Institute in Germany and comprehensively examined the genetic underpinnings of the walking fish’s unusual trait. The researchers used techniques including transcriptomic and genomic editing to identify which gene transcription factors are used in leg formation and function in the sea robins. They also generated hybrids between two sea robin species with distinct leg shapes to explore the genetic basis for these differences.

Citation: Corey AH Allard, Amy L Herbert, Stephanie P Krueger, Qiaoyi Liang, Brittany L Walsh, Andrew L Rhyne, Allex N Gourlay, Agnese Seminara, Maude W Baldwin, David M Kingsley, Nicholas W Bellono. Evolution of novel sensory organs in fish with legs. September 26, 2024. Current Biology. https://www.cell.com/current-biology/abstract/S0960-9822(24)01126-6.

Citation: Corey AH Allard, Amy L Herbert, Stephanie P Krueger, Qiaoyi Liang, Brittany L Walsh, Andrew L Rhyne, Allex N Gourlay, Agnese Seminara, Maude W Baldwin, David M Kingsley, Nicholas W Bellono. September 26, 2024. BioRxiv. https://www.biorxiv.org/content/10.1101/2023.10.14.562285v1 (preprint, open-access)

Psilocybin generates psychedelic experience by disrupting brain network

People who consume psilocybin-containing “magic mushrooms” typically undergo a surreal experience in which their sense of space, time and self is distorted.

Researchers at Washington University School of Medicine in St. Louis have now explained how it may work: psilocybin temporarily scrambles the brain’s “default mode network,” a critical network of brain areas involved in introspective thinking, such as daydreaming and remembering.

Therapies

This finding suggests psilocybin-based therapies for mental illnesses such as depression and post-traumatic stress disorder.

The study, published July 17 in the journal Nature, creates a road map that other scientists can follow to evaluate the effects of psychoactive drugs on brain function—potentially accelerating drug development efforts for any number of psychiatric illnesses, say the researchers.

Visualizing the impact of psilocybin

In an experiment, the researchers found a way to visualize the impact of psilocybin on individual participants’ neural communication pathways that connect different brain regions and to correlate changes in these networks with subjective experiences.

They found that psilocybin caused profound and widespread—yet not permanent—changes to the brain’s functional networks. It desynchronized the “default mode network,” an interconnected set of brain areas that are ordinarily simultaneously active when the brain is not working on anything in particular.

After falling out of sync, the network re-established itself when the acute effects of the drug wore off, but small differences from pre-psilocybin scans persisted for weeks, making the brain more flexible and potentially more able to come into a healthier state.
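The “desynchronization” reported here can be loosely pictured as a drop in correlation between regional activity signals. The following is a toy sketch with synthetic sine-wave signals, not the study’s fMRI methodology: regions sharing a common rhythm are highly correlated, and drowning that rhythm in noise drives the correlation down.

```python
# Toy picture of desynchronization: functional connectivity is often
# summarized as the correlation between regional activity time series.
# Synthetic signals only; the study used fMRI and more refined measures.
import math
import random

random.seed(0)
t = [i * 0.1 for i in range(200)]
base = [math.sin(x) for x in t]  # shared rhythm across "regions"

def region(noise_sd):
    """A region's signal: the shared rhythm plus independent noise."""
    return [b + random.gauss(0, noise_sd) for b in base]

def corr(a, b):
    """Pearson correlation of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

synced = corr(region(0.1), region(0.1))      # rhythm dominates
desynced = corr(region(2.0), region(2.0))    # noise dominates
print(synced > desynced)  # True
```

In the sketch, “losing synchrony” is just the shared component shrinking relative to each region’s independent activity, which is one intuitive way to read the network-level findings.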

Rating feelings of transcendence, connectedness and awe

The brains of people on psilocybin look more similar to each other than to their own untripping selves, consistent with what people say about losing their sense of self during a trip.

During the experience, participants were asked to rate their feelings of transcendence, connectedness and awe using the validated Mystical Experience Questionnaire. The magnitude of the changes to the functional networks tracked with the intensity of each participant’s subjective experience.

The researchers also said they were able to get very precise data on the effects of the drug in each individual—a step toward precision clinical trials.

Citation: Siegel JS, Subramanian S, Perry D, Kay BP, Gordon EM, Laumann TO, Reneau TR, Metcalf NV, Chacko RV, Gratton C, Horan C, Krimmel SR, Shimony JS, Schweiger JA, Wong DF, Bender DA, Scheidter KM, Whiting FI, Padawer-Curry JA, Shinohara RT, Chen Y, Moser J, Yacoub E, Nelson SM,  Vizioli L, Fair DA, Lenze EJ, Carhart-Harris R, Raison CL, Raichle ME, Snyder AZ, Nicol GE, Dosenbach NUF. Psilocybin desynchronizes the human brain. Nature. July 17, 2024. DOI: 10.1038/s41586-024-07624-5 (open-access)

State of the Future 20.0

The Millennium Project has announced State of the Future 20.0, an extensive 500-page overview of future issues and opportunities.

The book provides a “broad, detailed, and readable look at the issues and opportunities on the future of humanity, and what we should know today to avoid the worst and achieve the best for the future of civilization.”

The Millennium Project, a global participatory think-tank, distilled research reports and insights from hundreds of futurists and related experts around the world, along with 70 of its own futures research reports and an Executive Summary on the prospects for civilization.

In the State of the Future Index 2035, the book integrates 29 variables to show whether the future is getting better or worse, and where we are winning and losing.

The future of AGI

Notably, in “Governing the Transition from Artificial Narrow Intelligence (ANI) to Artificial General Intelligence (AGI),” the book distills insights from 55 of the world’s leading AGI experts on 22 questions about the future of AGI. Drawing on these insights, it identifies 40 national and international regulations and five governance models, assessed by an international 299-member panel.

In “Beneficial AGI Competition,” the book discusses the world of 2045, its issues and opportunities, offering a scenario about a day in 2045 and how AGI could greatly improve life on that day.

In “Robots 2050,” the book illustrates the future possibilities of AI robots in governments, in daily life, in bed, in the oceans, and in space.

According to the Millennium Project, “the State of the Future 20.0 is one of the largest, broadest, most detailed publications on future issues and possibilities written. It covers what the globally-minded citizen should know about the future and decision-making today.”

More in State of the Future 20.0 — Executive Summary. The book is available as a PDF download.

Full disclosure: Amara Angelica is an advisor to the Millennium Project.
