Exoplanet-hunting telescope to begin search for another Earth

Europe’s next big space telescope mission will hunt for Earth-like rocky planets outside our solar system, launching at the end of 2026 on Europe’s new rocket, Ariane-6.

PLATO (PLAnetary Transits and Oscillations of stars) is being built to find nearby, potentially habitable worlds around Sun-like stars that we can examine in detail.

The habitable zone

“PLATO’s goal is to search for exoplanets around stars similar to the Sun and at orbital periods long enough for them to be in the habitable zone,” said Dr. David Brown, of the University of Warwick, in a statement. “But it is also designed to carefully and precisely characterize the exoplanets that it finds (i.e., work out their masses, radii, and bulk density).”
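Characterizing a planet's bulk density from its measured mass and radius, as Brown describes, is a simple calculation. The sketch below is illustrative only (the Earth values are stand-ins for a hypothetical detected planet, not PLATO data):

```python
# Illustrative only: how a planet's mass and radius yield bulk density,
# the quantity used to distinguish rocky planets from gaseous ones.
import math

def bulk_density(mass_kg: float, radius_m: float) -> float:
    """Mean density in kg/m^3 from a planet's mass and radius."""
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    return mass_kg / volume

earth_mass = 5.972e24   # kg
earth_radius = 6.371e6  # m

rho = bulk_density(earth_mass, earth_radius)
print(f"{rho:.0f} kg/m^3")  # roughly 5500 kg/m^3, a rocky-planet density
```

A gas giant like Jupiter, by contrast, comes out near 1,300 kg/m^3, which is why mass and radius together are enough to separate rocky worlds from gaseous ones.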

It will also study the stars, using a range of techniques, including asteroseismology (measuring the vibrations and oscillations of stars) to work out their masses, radii, and ages.

Multiple cameras

Unlike most space telescopes, PLATO has 24 “Normal” cameras (N-CAMs) and 2 “Fast” cameras (F-CAMs). This gives PLATO a very large field of view, improved scientific performance, redundancy against failures, and a built-in way to identify “false positive” signals that might mimic an exoplanet transit, Brown explained.

Brown is giving an update on the mission at the Royal Astronomical Society’s National Astronomy Meeting at the University of Hull this week.

Image credit: An artist’s impression of the European Space Agency’s PLATO spacecraft. ESA/ATG medialab


How do neurons react to magic mushrooms?

The Allen Institute for Brain Science has just launched projects to investigate this and three other key research questions. The research is conducted via OpenScope, a shared neuroscience observatory that lets neuroscientists worldwide propose and direct experiments on the Allen Brain Observatory. The resulting data are made freely available to anyone tackling open questions about neural activity in health and disease.

Psychedelic science

One of this year’s OpenScope projects will explore how psilocybin, the psychoactive compound in “magic mushrooms,” can induce intense psychedelic experiences in humans, changing brain activity at a cellular level.

Using advanced recording techniques in mice, scientists will investigate the neural mechanisms that underlie altered cognition and perception, and observe how neurons communicate differently under the influence of psilocybin. They will also explore how those changes might influence the brain’s ability to process and predict sensory information, which is crucial to understanding how perception is constructed.

“Our interest in these compounds goes beyond their potential clinical applications,” said Roberto de Filippo, Ph.D., a postdoc at Humboldt University of Berlin, in a statement. “We believe that uncovering the biological mechanisms underlying their effects can provide fundamental insights into the processes that govern perception, cognition, and consciousness itself.”

How the past subtly shapes our worldview

The brain constantly updates its model of the world based on past experience. Another 2024 OpenScope project aims to uncover the neural underpinnings of these updates by studying a fundamental process: how the brain recognizes objects moving around us. The project will use microscopy to simultaneously observe the activity of many neurons over several weeks and in different parts of the visual cortex of mice.

Seeing the patterns

Our brains instantly recognize countless complex visual textures that surround us, from the intricate designs on a butterfly’s wings to the grain pattern of wood. But how does the brain pull off this remarkable feat of visual perception? In this OpenScope project, mice will be trained to distinguish textures while their neuronal activity is monitored in the visual cortex, linking neural responses to perception.

The key goals are to determine how certain textures are easily recognized while others pose a challenge and to map how different brain regions interact to transform visual inputs into coherent representations that guide behavior.

Those findings could uncover core principles for how the brain extracts understanding from our richly patterned visual world, the researchers said. However, the scale and complexity of the research necessitate tools and resources beyond those in a typical laboratory setting.

“Using the Allen Brain Observatory will not only increase the scope and reach of our project severalfold, but it will also allow us to compare and contextualize with all the other Open Science projects they have led in the last decade,” said Federico Bolaños, Ph.D., lead data scientist at the University of British Columbia. “As it happened in other fields like high energy physics or astronomy, research in systems neuroscience needs to move from individual laboratories into a bigger and interconnected community, in which we progress together.”

The research described here was supported by the National Institute of Neurological Disorders and Stroke of the National Institutes of Health.


Neural networks made of light 

Scientists at the Max Planck Institute for the Science of Light have proposed a new way of implementing a neural network: an optical system, which could make machine learning and AI tasks more sustainable in the future.

Machine learning and AI tasks currently require increasingly complex neural networks, some with billions of parameters. But this rapid growth in network size has put neural networks on an unsustainable path, due to their exponentially growing energy consumption and training times, the researchers say in a statement.

For example, training a large language model like GPT-3 is estimated to use just under 1,300 megawatt hours (MWh) of electricity—about as much as 130 US homes consume in a year. This trend has created a need for faster, more energy- and cost-efficient alternatives.
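The homes comparison is easy to sanity-check. One assumption is supplied here that is not in the article: an average US home uses roughly 10.5 MWh of electricity per year (a commonly cited US Energy Information Administration figure):

```python
# Sanity check of the GPT-3 energy comparison above.
# Assumption (not from the article): an average US home uses
# about 10.5 MWh of electricity per year.
gpt3_training_mwh = 1300
home_annual_mwh = 10.5

homes_equivalent = gpt3_training_mwh / home_annual_mwh
print(round(homes_equivalent))  # about 124 homes, consistent with "about 130"
```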

Light vs electrons

The researchers’ new idea is to perform the required mathematical operations physically by light in a potentially faster and more energy-efficient way. Optics and photonics are particularly promising platforms for neuromorphic computing, since energy consumption can be kept to a minimum. Computations can be performed in parallel at very high speeds, only limited by the speed of light.
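As a toy numerical sketch of this idea (not the paper's actual scheme): a linear optical element can be modeled as a complex matrix acting on input field amplitudes, with every multiply-accumulate happening "for free" as light propagates, while intensity detection supplies a nonlinearity at readout. All names and values below are illustrative:

```python
# Toy sketch: a passive linear optical element modeled as a complex
# "scattering matrix" S acting on input field amplitudes x. The optics
# would perform the matrix-vector product physically, in parallel;
# photodetectors then measure intensity |S x|^2, a nonlinear readout.
import random

random.seed(0)
n = 4
# Random complex matrix standing in for a passive optical element.
S = [[complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
     for _ in range(n)]
# Input field amplitudes (illustrative values).
x = [complex(1, 0), complex(0, 1), complex(0.5, 0.5), complex(-1, 0)]

# Output field: one matrix-vector product per pass of light.
out = [sum(S[i][j] * x[j] for j in range(n)) for i in range(n)]

# Detected intensities: a quadratic (nonlinear) function of the input.
intensities = [abs(a) ** 2 for a in out]
print(intensities)
```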

But there are two challenges: realizing the necessary complex mathematical computations requires high laser powers, and there has been no efficient general training method for such physical neural networks.

Both challenges could be overcome with a new method proposed by Max Planck Institute for the Science of Light researchers in a new paper in Nature Physics. The method avoids the complicated physical interactions otherwise needed to realize the required mathematical functions. Even so, the authors have demonstrated in simulations that their approach can perform image-classification tasks with the same accuracy as digital neural networks.

The authors plan to collaborate with experimental groups to explore the implementation of their method. Their proposal significantly relaxes the experimental requirements, so it can be applied to many physically very different systems. That opens up new possibilities for neuromorphic devices—allowing physical training over a broad range of platforms.

Citation: Wanjura, C. C., & Marquardt, F. (2024). Fully nonlinear neuromorphic computing with linear wave scattering. Nature Physics, 1-7. https://doi.org/10.1038/s41567-024-02534-9 (open access)


Learning dance moves could help humanoid robots work better with humans

University of California San Diego engineers have trained a humanoid robot to effortlessly learn and perform a variety of expressive movements. These include simple dance routines (from videos) and gestures that can range from high-fiving to hugging.

The robot movements are directed by a human operator using a game controller, which dictates speed, direction and specific motions. The team envisions a future version equipped with a camera to enable the robot to perform tasks and navigate terrains autonomously.

The team will present their work at the 2024 Robotics: Science and Systems Conference July 15 to 19 in Delft, Netherlands.

Citation: Xuxin Cheng et al. Expressive Whole-Body Control for Humanoid Robots. UC San Diego. https://expressive-humanoid.github.io/resources/Expressive_Whole-Body_Control_for_Humanoid_Robots.pdf (open access)


Generative AI for databases

MIT researchers have developed a new tool that makes it easier for database users to perform complicated statistical analyses of tabular data, without the need to know what’s going on behind the scenes.

GenSQL, a generative AI system for databases, could help users make predictions, detect anomalies, guess missing values, fix errors, or generate synthetic data with just a few keystrokes.

GenSQL combines a tabular dataset with a generative probabilistic AI model, which can account for uncertainty and adjust its decision-making based on new data.

GenSQL can also produce and analyze synthetic data that mimic the real data in a database—useful where sensitive data cannot be shared, such as patient health records, or when real data are sparse.

Extending SQL

This new tool is built on top of SQL, a programming language for database creation and manipulation that was introduced in the late 1970s and is used by millions of developers worldwide.

Compared to popular, AI-based approaches for data analysis, GenSQL is faster and also produces more accurate results, the researchers say. Also, the generated models are explainable, so users can read and edit them.

Next, the researchers want to apply GenSQL more broadly to conduct large-scale modeling of human populations. With GenSQL, they can generate synthetic data to draw inferences about things like health and salary while controlling what information is used in the analysis.

ChatGPT-like AI expert

In the long run, the researchers want to enable users to make natural language queries in GenSQL. Their goal: develop a ChatGPT-like AI expert one could talk to about any database, which grounds its answers using GenSQL queries.   

The research was recently presented at the ACM Conference on Programming Language Design and Implementation. It is funded in part by the Defense Advanced Research Projects Agency (DARPA), Google, and the Siegel Family Foundation.

Citation: Mathieu Huot et al., 20 June 2024, Proceedings of the ACM on Programming Languages, Volume 8, Issue PLDI, https://doi.org/10.1145/3656409 (open access)


First mouse model with complete functional human immune system

Scientists have created a humanized mouse model that mounts antibody responses from a fully developed and functional human immune system.

The aim of the multi-year project: overcome the limitations of currently available in vivo human models by creating a humanized mouse with a human-like gut microbiome (community of microorganisms).

The breakthrough, described in a study appearing in the August 2024 issue of Nature Immunology, promises new insight into immunotherapy development and disease modeling.

The scientists were led by Paolo Casali, MD, University of Texas Ashbel Smith Professor and Distinguished Research Professor, Department of Microbiology, Immunology and Molecular Genetics in the Joe R. and Teresa Lozano Long School of Medicine. Casali has five decades of biomedical research experience in immunology and microbiology and is a leading researcher in molecular genetics and epigenetics of the antibody response.

“Humanized” mouse model

Mice are widely used in biological and biomedical research because they are small, easy to handle, and share many immune elements and biological properties with humans. But many of the more than 1,600 mouse immune-response genes are of limited relevance to the human immune system.

The new humanized mice, called “TruHuX” (for “truly human”), or THX, possess a fully developed and fully functional human immune system, including lymph nodes, germinal centers, human thymic epithelial cells, human T and B lymphocytes, memory B lymphocytes, and plasma cells making highly specific antibodies and autoantibodies identical to those of humans.

Wide range of new experiments and developments

The THX mouse discovery opens the possibilities for human in vivo experimentation, development of immunotherapeutics such as cancer checkpoint inhibitors, development of human bacterial and viral vaccines, and modeling of many human diseases. (Casali also hopes the new approach could make obsolete the use of non-human primates for immunological and microbiological biomedical research.)

The Casali lab is also investigating the in vivo human immune response to SARS-CoV-2 (COVID-19).

Citation: Chupp, D. P., Rivera, C. E., Zhou, Y., Xu, Y., Ramsey, P. S., Xu, Z., Zan, H., & Casali, P. (2024). A humanized mouse that mounts mature class-switched, hypermutated and neutralizing antibody responses. Nature Immunology, 1-18. https://www.nature.com/articles/s41590-024-01880-3 (open access)


Robots learn better by also listening

For robots to move into homes, they’ll need to learn to listen, suggests MIT Technology Review.

Researchers at the Robotics and Embodied AI Lab at Stanford University have built a system for collecting audio data, consisting of a GoPro camera and a gripper with a microphone.

“Thus far, robots have been training on videos that are muted,” says Zeyi Liu, a PhD student at Stanford and lead author of the study. “But there is so much helpful data in audio.”

The results, published in a paper on arXiv: “When using vision alone in the dice test, the robot could tell 27% of the time if there were dice in the cup, but that rose to 94% when sound was included.”

Citation: Zeyi Liu et al. ManiWAV: Learning Robot Manipulation from In-the-Wild Audio-Visual Data. arXiv. https://arxiv.org/pdf/2406.19464 (open access)


How to drink water from the air

An innovative “Hydropanel” device that distills water directly from water vapor in the atmosphere, powered only by sunlight, has been developed by Source, a Scottsdale, Arizona company.

After distillation, the water is mineralized with essential magnesium and calcium for ideal pH and TDS (total dissolved solids), increasing absorption by the body and refining alkalinity and taste. The result, the company says, is “safe, premium-quality drinking water.”

“Each Hydropanel can produce up to three liters of drinking water a day, about the average daily intake for one person,” according to Cody Friesen, an Associate Professor at the School for Engineering of Matter, Transport and Energy at Arizona State University.

Other larger solar-powered devices for producing water are also commercially available.

The company also plans to offer a canned version.


Ant selectively amputates infected limbs of wounded nestmates

Scientists have found that Florida carpenter ants selectively treat the wounded limbs of fellow nestmates by either wound cleaning or amputation to aid in recovery—based on the injury.

“When we’re talking about amputation behavior, this is literally the only case in which a sophisticated and systematic amputation of an individual by another member of its species occurs in the animal kingdom,” says first author Erik Frank (@ETF1989), a behavioral ecologist at the University of Würzburg, in a study published July 2 in the journal Current Biology.

Assessing the type of injury to choose treatment

In a paper published in 2023, it was discovered that a different group of ants, Megaponera analis, use a special gland to inoculate injuries with antimicrobial compounds meant to quell possible infections. What makes Florida carpenter ants (Camponotus floridanus) stand out is that they have no such gland; they appear to use only mechanical means to treat their nestmates.

The researchers found that this mechanical care takes one of two routes: wound cleaning with just their mouthparts, or cleaning followed by full amputation of the leg. To choose between them, the ants appear to assess the type of injury and adjust their treatment accordingly.

“The fact that the ants are able to diagnose a wound, see if it’s infected or sterile, and treat it accordingly over long periods of time by other individuals—the only medical system that can rival that would be the human one,” Frank says.

So how are these ants capable of such precise care?

“When you look at the videos where you have the ant presenting the injured leg and letting the other one bite off completely voluntarily, and then present the newly made wound so another one can finish cleaning process—this level of innate cooperation to me is quite striking,” Frank says.

Citation: Frank et al. July 02, 2024. Wound-dependent leg amputations to combat infections in an ant society. Current Biology. https://www.cell.com/current-biology/fulltext/S0960-9822(24)00805-4 (open access)


AI model finds cancer clues at lightning speed

Researchers at the University of Gothenburg have developed an AI model that’s faster and better at detecting cancer than the current semi-manual method.

Glycans (structures of sugar molecules in cells) can be measured by mass spectrometry to detect cancer. However, the data must be carefully analyzed by one of the few experts in the world—taking up to days per sample.

Detecting cancer in seconds

Researchers at the University of Gothenburg have now developed an AI model named “Candycrunch” that does it in just a few seconds per test, as reported in the journal Nature Methods.

“The AI model was trained using a database of more than 500,000 examples of different fragmentations and associated structures of sugar molecules,” said Daniel Bojar, Associate Senior Lecturer in Bioinformatics at the University of Gothenburg, in a statement. The model identifies the correct structure in 90 percent of cases.

Detects biomarkers missed by human analyses

That means the AI model could soon reach the same levels of accuracy as the sequencing of other biological sequences, such as DNA, RNA or proteins. In addition, the AI model is fast and accurate in its answers, so it can accelerate the discovery of glycan-based biomarkers for both the diagnosis and prognosis of cancer.

The Candycrunch model is also able to identify low concentrations of biomarkers, which are often missed by human analyses.

Citation: Urban, J., Jin, C., Thomsson, K. A., Karlsson, N. G., Ives, C. M., Fadda, E., & Bojar, D. (2024). Predicting glycan structure from tandem mass spectrometry via deep learning. Nature Methods, 1-10. https://doi.org/10.1038/s41592-024-02314-6 (open access)
