Cool invention for hot AI data centers

Artificial intelligence (AI) is hot right now. Also hot: the data centers that power that technology, which require a tremendous amount of energy.

In 2022, data centers used more than 4% of all electricity in the U.S., with 40% of that energy spent to keep equipment cool. As demand on data centers increases, even more energy will be required.

To mitigate that, the U.S. Department of Energy has awarded more than $40 million to researchers to find new ways to cool data centers. University of Missouri researcher Chanwoo Park recently received nearly $1.65 million from that initiative, known as COOLERCHIPS.

Two-phase cooling system designed to efficiently dissipate heat

Currently, data centers are cooled with either air-moving fans or liquid that moves heat away from computer racks. But Park and his team are developing a new two-phase cooling system designed to efficiently dissipate heat from server chips through phase change, such as boiling a liquid into vapor in a thin, porous layer. The new system can operate passively without consuming any energy when less cooling is needed.

“Even in active mode, where a pump is used, it consumes only a negligible amount of energy. The liquid goes in different directions and evaporates on a thin metal surface,” Park said in a statement. “Using this boiling surface, we’re able to achieve very efficient heat transfer with low thermal resistance.”

“Drastically reduce” cooling energy needed

The system also includes a mechanical pump that is activated to absorb more heat only when needed. Early tests show that two-phase cooling techniques “drastically reduce” the amount of energy needed to keep equipment cool.
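
To make the passive/active idea concrete, here is a minimal Python sketch of the kind of hysteresis logic such a hybrid loop could use. The threshold values and the notion of a single fixed "passive limit" are our illustrative assumptions, not details of Park's design.

```python
# Illustrative only: a hysteresis rule for switching between passive boiling
# and the low-power pump. The threshold values are hypothetical, not Park's.
PASSIVE_LIMIT_W = 300.0  # assumed max heat load the passive mode can reject
HYSTERESIS_W = 30.0      # deadband so the pump doesn't rapidly cycle on/off

def pump_should_run(heat_load_w: float, pump_running: bool) -> bool:
    """Activate the pump above the passive limit; keep it on until the load
    drops back below the limit minus the deadband."""
    if pump_running:
        return heat_load_w > PASSIVE_LIMIT_W - HYSTERESIS_W
    return heat_load_w > PASSIVE_LIMIT_W

print(pump_should_run(350.0, pump_running=False))  # True: passive mode maxed out
```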

“Eventually, there will be limitations under current cooling systems, and that’s a problem,” Park said. “We’re trying to get ahead of the curve and have something ready and available for the future of AI computing.”

Citation: R. Kokate and C. Park (2024). Experimental analysis of subcooled flow boiling in a microchannel evaporator of a pumped two-phase loop. Applied Thermal Engineering, 249, 123154. https://www.sciencedirect.com/science/article/abs/pii/S1359431124008226?via%3Dihub (open access)

AI and satellite predict a wildfire’s next move

Researchers at USC have developed a new model that combines generative AI and satellite data to accurately forecast wildfire spread—a potential breakthrough in wildfire management and emergency response.

The model uses satellite data to track a wildfire’s real-time progression, then feeds this information into a sophisticated computer algorithm that can accurately forecast the fire’s likely path, intensity and growth rate.

Detailed in an early-release proof published in Artificial Intelligence for the Earth Systems, the study comes as California and much of the western US continue to grapple with increasingly severe wildfire seasons.

Training generative AI model

The researchers began by gathering historical wildfire data from high-resolution satellite images of past wildfires, tracking how each fire started, spread and was eventually contained. Their comprehensive analysis revealed patterns shaped by factors such as weather, fuel (trees, brush and other vegetation) and terrain.

They then trained a generative AI model known as a conditional Wasserstein Generative Adversarial Network, or cWGAN, to simulate how these factors influence the way wildfires evolve over time, teaching it to recognize patterns in satellite images that correspond to realistic fire spread.
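
For readers curious what that training step looks like in code, below is a minimal PyTorch sketch of a conditional WGAN critic/generator update, conditioned on a satellite observation. The network sizes, the 32x32 grid and the weight-clipping variant are assumptions for illustration, not the USC team's actual architecture.

```python
# Minimal PyTorch sketch of a conditional WGAN update. Architecture, grid
# size and weight clipping are illustrative assumptions, not the USC model.
import torch
import torch.nn as nn

GRID, LATENT = 32, 64  # hypothetical map resolution and noise dimension

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        # Maps (noise, satellite observation) -> candidate fire-spread map.
        self.net = nn.Sequential(
            nn.Linear(LATENT + GRID * GRID, 256), nn.ReLU(),
            nn.Linear(256, GRID * GRID),
        )
    def forward(self, z, obs):
        out = self.net(torch.cat([z, obs.flatten(1)], dim=1))
        return out.view(-1, GRID, GRID)

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        # Scores how realistic a (map, observation) pair looks.
        self.net = nn.Sequential(
            nn.Linear(2 * GRID * GRID, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )
    def forward(self, x, obs):
        return self.net(torch.cat([x.flatten(1), obs.flatten(1)], dim=1))

G, D = Generator(), Critic()
opt_g = torch.optim.RMSprop(G.parameters(), lr=5e-5)
opt_d = torch.optim.RMSprop(D.parameters(), lr=5e-5)

def train_step(real_maps, obs):
    # Critic step: widen the score gap between real and generated maps,
    # then clip weights to enforce the WGAN Lipschitz constraint.
    z = torch.randn(real_maps.size(0), LATENT)
    fake = G(z, obs).detach()
    loss_d = D(fake, obs).mean() - D(real_maps, obs).mean()
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    for p in D.parameters():
        p.data.clamp_(-0.01, 0.01)
    # Generator step: make generated maps score as realistic as possible.
    z = torch.randn(real_maps.size(0), LATENT)
    loss_g = -D(G(z, obs), obs).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```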

Anticipating future fire spread

They then tested the cWGAN model on real wildfires that occurred in California between 2020 and 2022 to see how well it predicted where the fire would spread.

“By studying how past fires behaved, we can create a model that anticipates how future fires might spread,” said Assad Oberai, Hughes Professor and Professor of Aerospace and Mechanical Engineering at USC Viterbi and co-author of the study, in a statement.

“Fuel-like grass, shrubs or trees ignites, leading to complex chemical reactions that generate heat and wind currents. Factors such as topography and weather also influence fire behavior. Fires don’t spread much in moist conditions but can move rapidly in dry conditions,” he said. “These are highly complex, chaotic and nonlinear processes. To model them accurately, you need to account for all these different factors. You need advanced computing.”

The research was funded by the Army Research Office, NASA and the Viterbi CURVE program.

Citation: Bryan Shaddy et al., 23 Apr 2024, Generative Algorithms for Fusion of Physics-Based Wildfire Spread Models with Satellite Data for Initializing Wildfire Forecasts, Artificial Intelligence for the Earth Systems, https://journals.ametsoc.org/view/journals/aies/aop/AIES-D-23-0087.1/AIES-D-23-0087.1.xml (open access)

Organs printed on demand

Researchers at the University of Virginia School of Engineering and Applied Science have developed what they believe could be the template for the first building blocks of human-compatible organs, printed on demand.

Their bioprinting “digital assembly of spherical particles” (DASP) method deposits particles of biomaterial in a supporting matrix (both water-based). This matrix allows for building 3D structures that provide a suitable environment for the cells to grow. (The assembly process uses “voxels,” the 3D version of pixels, to construct the 3D objects.)
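
As a toy illustration of voxelated assembly (not UVA's actual control software), one can picture the printing plan as a boolean 3D grid whose occupied cells become droplet coordinates; the spacing value below is a hypothetical placeholder.

```python
# Toy sketch of voxel-by-voxel assembly: place spherical droplets on a 3D
# lattice to build up a target shape, in the spirit of DASP's deposition of
# bio-ink particles in a supporting matrix. Illustrative only.
import numpy as np

PITCH = 1.0  # hypothetical center-to-center droplet spacing, arbitrary units

def droplet_centers(occupancy: np.ndarray) -> np.ndarray:
    """Return xyz coordinates for every occupied voxel in a boolean 3D grid."""
    return np.argwhere(occupancy) * PITCH

shape = np.zeros((4, 4, 4), dtype=bool)
shape[1:3, 1:3, :] = True  # a simple 2x2 column as the target object
print(droplet_centers(shape))
```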

DASP’s applications will include artificial organ transplantation, disease and tissue modeling, and the screening of new drug candidates, according to the researchers.

Printing organoids

“For example, with this level of control, we could print organoids, which are 3D cell-based models that function as human tissue, to study disease progression in the search for cures,” said Liheng Cai, an assistant professor of materials science and engineering and chemical engineering, in a statement.

The particles are polymer hydrogels engineered to mimic human tissue by tweaking the arrangement and chemical bonds of single-molecule monomers, which link together in chains to form networks. The “double network” hydrogels—formed from two intertwined molecular networks—are mechanically strong but highly tunable for mimicking the physical characteristics of human tissue. The bioprinter uses a multichannel nozzle to mix the hydrogel components on demand.

The results were published July 13 in the journal Nature Communications.

Funding: National Science Foundation, the UVA LaunchPad for Diabetes, the UVA Coulter Center for Translational Research, Juvenile Diabetes Research Foundation, Virginia’s Commonwealth Health Research Board and the UVA Center for Advanced Biomanufacturing.

Citation: Zhu, J., He, Y., Wang, Y., & Cai, L. (2024). Voxelated bioprinting of modular double-network bio-ink droplets. Nature Communications, 15(1), 1-16. https://doi.org/10.1038/s41467-024-49705-z (open access)

Exoplanet-hunting telescope to begin search for another Earth

Europe’s next big space telescope will hunt for Earth-like rocky planets outside our solar system, launching at the end of 2026 on Europe’s new rocket, Ariane 6.

PLATO (PLAnetary Transits and Oscillations of stars) is being built to find nearby, potentially habitable worlds around Sun-like stars that we can examine in detail.

The habitable zone

“PLATO’s goal is to search for exoplanets around stars similar to the Sun and at orbital periods long enough for them to be in the habitable zone,” said Dr. David Brown, of the University of Warwick, in a statement. “But it is also designed to carefully and precisely characterize the exoplanets that it finds (i.e., work out their masses, radii, and bulk density).”

It will also study the stars, using a range of techniques, including asteroseismology (measuring the vibrations and oscillations of stars) to work out their masses, radii, and ages.
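
To give a flavor of how asteroseismology yields stellar properties, the widely used scaling relations of Kjeldsen & Bedding (1995) convert two global oscillation observables, the frequency of maximum power and the large frequency separation, plus the star's effective temperature, into a mass and radius. The sketch below is that textbook simplification in Python, not PLATO's actual pipeline.

```python
# Standard asteroseismic scaling relations (Kjeldsen & Bedding, 1995),
# a textbook simplification of how oscillations yield mass and radius.
NU_MAX_SUN = 3090.0  # muHz, solar frequency of maximum oscillation power
DNU_SUN = 135.1      # muHz, solar large frequency separation
TEFF_SUN = 5772.0    # K, solar effective temperature

def mass_radius(nu_max, dnu, teff):
    """Stellar mass and radius in solar units from global seismic observables."""
    r = (nu_max / NU_MAX_SUN) * (dnu / DNU_SUN) ** -2 * (teff / TEFF_SUN) ** 0.5
    m = (nu_max / NU_MAX_SUN) ** 3 * (dnu / DNU_SUN) ** -4 * (teff / TEFF_SUN) ** 1.5
    return m, r

print(mass_radius(3090.0, 135.1, 5772.0))  # ~ (1.0, 1.0) for a Sun twin
```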

Multiple cameras

Unlike most space telescopes, PLATO has 24 “Normal” cameras (N-CAMs) and 2 “Fast” cameras (F-CAMs). This gives PLATO a very large field of view, improved scientific performance, redundancy against failures, and a built-in way to identify “false positive” signals that might mimic an exoplanet transit, Brown explained.

Brown is giving an update on the mission at the Royal Astronomical Society’s National Astronomy Meeting at the University of Hull this week.

Image credit: An artist’s impression of the European Space Agency’s PLATO spacecraft. ESA/ATG medialab

How do neurons react to magic mushrooms?

The Allen Institute for Brain Science has just launched projects to investigate this and three other key research questions. The experiments will run on OpenScope, a shared neuroscience observatory that lets neuroscientists worldwide propose and direct experiments on the Allen Brain Observatory, and the resulting data will be made freely available to anyone tackling open questions about neural activity in health and disease.

Psychedelic science

One of this year’s OpenScope projects will explore how psilocybin, the psychoactive compound in “magic mushrooms,” can induce intense psychedelic experiences in humans, changing brain activity at a cellular level.

Using advanced recording techniques in mice, scientists will investigate the neural mechanisms that underlie altered cognition and perception, and observe how neurons communicate differently under the influence of psilocybin. They will also explore how those changes might influence the brain’s ability to process and predict sensory information, which is crucial to understanding how perception is constructed.

“Our interest in these compounds goes beyond their potential clinical applications,” said Roberto de Filippo, Ph.D., a postdoc at Humboldt University of Berlin, in a statement. “We believe that uncovering the biological mechanisms underlying their effects can provide fundamental insights into the processes that govern perception, cognition, and consciousness itself.”

How the past subtly shapes our worldview

Another 2024 OpenScope project aims to uncover the neural underpinnings of how past experience subtly updates our model of the world. How does the brain recognize objects moving around us? To demystify this fundamental process, the project will study motion perception in the visual cortex of mice, using microscopy to observe the activity of many neurons simultaneously, over several weeks and in different parts of the visual cortex.

Seeing the patterns

Our brains instantly recognize countless complex visual textures that surround us, from the intricate designs on a butterfly’s wings to the grain pattern of wood. But how do they pull off this remarkable feat of visual perception? In this OpenScope project, mice will be trained to distinguish textures while their neuronal activity is monitored in the visual cortex, linking neural responses to perception.

The key goals are to determine how certain textures are easily recognized while others pose a challenge and to map how different brain regions interact to transform visual inputs into coherent representations that guide behavior.

Those findings could uncover core principles for how the brain extracts understanding from our richly patterned visual world, the researchers said. However, the scale and complexity of the research necessitate tools and resources beyond those in a typical laboratory setting.

“Using the Allen Brain Observatory will not only increase the scope and reach of our project severalfold, but it will also allow us to compare and contextualize with all the other Open Science projects they have led in the last decade,” said Federico Bolaños, Ph.D., lead data scientist at the University of British Columbia. “As it happened in other fields like high energy physics or astronomy, research in systems neuroscience needs to move from individual laboratories into a bigger and interconnected community, in which we progress together.”

The research described here was supported by the National Institute of Neurological Disorders and Stroke of the National Institutes of Health.

Neural networks made of light 

Scientists at the Max Planck Institute for the Science of Light have proposed a new way of implementing a neural network: an optical system, which could make machine learning and AI tasks more sustainable in the future.

Machine-learning tasks increasingly require complex neural networks, some with billions of parameters. But this rapid growth in network size has put the technology on an unsustainable path, the researchers say in a statement, due to exponentially growing energy consumption and training times.

For example, training a large language model like GPT-3 is estimated to use just under 1,300 megawatt-hours (MWh) of electricity, about as much energy as 130 US homes consume in a year. This trend has created a need for faster, more energy- and cost-efficient alternatives.

Light vs electrons

The researchers’ new idea is to perform the required mathematical operations physically, using light, in a potentially faster and more energy-efficient way. Optics and photonics are particularly promising platforms for neuromorphic computing: energy consumption can be kept to a minimum, and computations can be performed in parallel at very high speeds, limited only by the speed of light.

But there are two challenges: realizing the necessary complex mathematical computations requires high laser powers, and no efficient general training method exists for such physical neural networks.

Both challenges could be overcome with a new method proposed by Max Planck Institute for the Science of Light researchers in a new paper in Nature Physics. The method avoids the complicated physical interactions otherwise needed to realize the required mathematical functions, yet the authors demonstrated in simulations that it can perform image-classification tasks with the same accuracy as digital neural networks.
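
As we read the paper, the core trick is that even a linear scattering system responds nonlinearly to its own parameters; encode the input data into those parameters, and the input-to-output map becomes nonlinear without any optical nonlinearity. Here is a loose numpy sketch of that idea, with the mode count, coupling matrix and encoding all assumed for illustration.

```python
# Loose numpy sketch (assumptions throughout): a linear wave-scattering
# system responds *nonlinearly* to its parameters, so encoding the data
# into those parameters gives a nonlinear input-output map even though
# wave propagation itself stays linear.
import numpy as np

rng = np.random.default_rng(1)
N = 8                                    # hypothetical number of scattering modes
H0 = rng.normal(size=(N, N))
H0 = (H0 + H0.T) / 2                     # fixed mode couplings (trainable in principle)
drive = rng.normal(size=N)               # input port / drive vector

def scatter(x, omega=0.5, loss=0.1):
    # Encode the data vector x into the on-site detunings of the modes.
    H = H0 + np.diag(np.resize(x, N))
    # Steady-state linear response; nonlinear in H, hence nonlinear in x.
    response = np.linalg.solve((omega + 1j * loss) * np.eye(N) - H, drive)
    return np.abs(response) ** 2         # detected intensities = network output

features = scatter(np.array([0.3, -1.2, 0.7]))
print(features)
```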

The authors plan to collaborate with experimental groups to explore the implementation of their method. Their proposal significantly relaxes the experimental requirements, so it can be applied to many physically very different systems. That opens up new possibilities for neuromorphic devices—allowing physical training over a broad range of platforms.

Citation: Wanjura, C. C., & Marquardt, F. (2024). Fully nonlinear neuromorphic computing with linear wave scattering. Nature Physics, 1-7. https://doi.org/10.1038/s41567-024-02534-9 (open access)

Learning dance moves could help humanoid robots work better with humans

University of California San Diego engineers have trained a humanoid robot to learn and perform a variety of expressive movements with ease, from simple dance routines (learned from videos) to gestures such as high-fiving and hugging.

The robot movements are directed by a human operator using a game controller, which dictates speed, direction and specific motions. The team envisions a future version equipped with a camera to enable the robot to perform tasks and navigate terrains autonomously.

The team will present their work at the 2024 Robotics: Science and Systems Conference July 15 to 19 in Delft, Netherlands.

Citation: Xuxin Cheng et al. Expressive Whole-Body Control for Humanoid Robots. UC San Diego. https://expressive-humanoid.github.io/resources/Expressive_Whole-Body_Control_for_Humanoid_Robots.pdf (open access)

Generative AI for databases

MIT researchers have developed a new tool that makes it easier for database users to perform complicated statistical analyses of tabular data, without the need to know what’s going on behind the scenes.

GenSQL, a generative AI system for databases, could help users make predictions, detect anomalies, guess missing values, fix errors, or generate synthetic data with just a few keystrokes.

GenSQL combines a tabular dataset with a generative probabilistic AI model, which can account for uncertainty and adjust its decision-making based on new data.

GenSQL can also produce and analyze synthetic data that mimic the real data in a database—useful where sensitive data cannot be shared, such as patient health records, or when real data are sparse.
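
GenSQL itself extends SQL, and we won't guess its exact query syntax here; but the underlying recipe, fitting a generative model to a table and then querying the model for anomalies or synthetic rows, can be sketched in a few lines of Python. The columns and the Gaussian-mixture stand-in below are our assumptions, not GenSQL's machinery.

```python
# Sketch of the idea behind GenSQL-style queries: a generative model over a
# table, used for anomaly detection and synthetic data. The two columns and
# the mixture model are stand-in assumptions, not GenSQL's actual machinery.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical table: 1,000 rows of (age, blood pressure)
table = rng.normal(loc=[50.0, 120.0], scale=[5.0, 15.0], size=(1000, 2))

model = GaussianMixture(n_components=3, random_state=0).fit(table)

# Anomaly detection: flag rows the model considers very unlikely.
log_lik = model.score_samples(table)
anomalies = table[log_lik < np.percentile(log_lik, 1)]

# Synthetic data: sample rows that mimic the real table without exposing it.
synthetic, _ = model.sample(500)
print(len(anomalies), synthetic.shape)
```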

Extending SQL

This new tool is built on top of SQL, a programming language for database creation and manipulation that was introduced in the late 1970s and is used by millions of developers worldwide.

Compared to popular, AI-based approaches for data analysis, GenSQL is faster and also produces more accurate results, the researchers say. Also, the generated models are explainable, so users can read and edit them.

Next, the researchers want to apply GenSQL more broadly to conduct large-scale modeling of human populations. With GenSQL, they can generate synthetic data to draw inferences about things like health and salary while controlling what information is used in the analysis.

ChatGPT-like AI expert

In the long run, the researchers want to enable users to make natural language queries in GenSQL. Their goal: develop a ChatGPT-like AI expert one could talk to about any database, which grounds its answers using GenSQL queries.   

The research was recently presented at the ACM Conference on Programming Language Design and Implementation. It is funded in part by the Defense Advanced Research Projects Agency (DARPA), Google, and the Siegel Family Foundation.

Citation: Mathieu Huot et al., 20 June 2024, Proceedings of the ACM on Programming Languages, Volume 8, Issue PLDI, https://doi.org/10.1145/3656409 (open access)

First mouse model with complete functional human immune system

Scientists have created a humanized mouse model with a fully developed and functional human immune system, capable of mounting antibody responses like those of humans.

The aim of the multi-year project: overcome the limitations of currently available in vivo human models by creating a humanized mouse with a human-like gut microbiome (a community of microorganisms).

The breakthrough, described in a study appearing in the August 2024 issue of Nature Immunology, promises new insight into immunotherapy development and disease modeling.

The scientists were led by Paolo Casali, MD, University of Texas Ashbel Smith Professor and Distinguished Research Professor, Department of Microbiology, Immunology and Molecular Genetics in the Joe R. and Teresa Lozano Long School of Medicine. Casali has five decades of biomedical research experience in immunology and microbiology and is a leading researcher in molecular genetics and epigenetics of the antibody response.

“Humanized” mouse model

Mice are widely used in biological and biomedical research because they are small, easy to handle, and share many immune elements and biological properties with humans. But the match is imperfect: many of the more than 1,600 mouse immune-response genes differ from their human counterparts, limiting what mouse experiments can reveal about human immunity.

The new humanized mice, called “TruHuX” (for truly human), or THX, possess a fully developed and fully functional human immune system, including lymph nodes, germinal centers, human thymic epithelial cells, human T and B lymphocytes, memory B lymphocytes, and plasma cells that make highly specific antibodies and autoantibodies identical to those of humans.

Wide range of new experiments and developments

The THX mouse discovery opens the possibilities for human in vivo experimentation, development of immunotherapeutics such as cancer checkpoint inhibitors, development of human bacterial and viral vaccines, and modeling of many human diseases. (Casali also hopes the new approach could make obsolete the use of non-human primates for immunological and microbiological biomedical research.)

The Casali lab is also investigating the in vivo human immune response to SARS-CoV-2, the virus that causes COVID-19.

Citation: Chupp, D. P., Rivera, C. E., Zhou, Y., Xu, Y., Ramsey, P. S., Xu, Z., Zan, H., & Casali, P. (2024). A humanized mouse that mounts mature class-switched, hypermutated and neutralizing antibody responses. Nature Immunology, 1-18. https://www.nature.com/articles/s41590-024-01880-3 (open access)

Robots learn better by also listening

For robots to move into homes, they’ll need to learn to listen, suggests MIT Technology Review.

“Researchers at the Robotics and Embodied AI Lab at Stanford University have built a system for collecting audio data, consisting of a GoPro camera and a gripper with a microphone.”

“Thus far, robots have been training on videos that are muted,” says Zeyi Liu, a PhD student at Stanford and lead author of the study. “But there is so much helpful data in audio.”

The results were published in a paper on arXiv: “When using vision alone in the dice test, the robot could tell 27% of the time if there were dice in the cup, but that rose to 94% when sound was included.”
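
A minimal sketch of the general idea, late fusion of audio and vision embeddings feeding a single decision head, might look like the following in PyTorch; the embedding sizes and the two-way "dice in cup" task are our assumptions, not the ManiWAV architecture.

```python
# Hedged sketch of audio-visual fusion (not the ManiWAV architecture):
# concatenate audio and vision embeddings so contact sounds can inform
# the decision, e.g. whether dice are in the cup.
import torch
import torch.nn as nn

class AudioVisualClassifier(nn.Module):
    def __init__(self, vis_dim=512, aud_dim=128):
        super().__init__()
        # Assume upstream encoders already produced fixed-size embeddings.
        self.head = nn.Sequential(
            nn.Linear(vis_dim + aud_dim, 256), nn.ReLU(),
            nn.Linear(256, 2),  # e.g., "dice in cup" vs. "empty"
        )

    def forward(self, vis_feat, aud_feat):
        return self.head(torch.cat([vis_feat, aud_feat], dim=-1))

vis = torch.randn(4, 512)   # placeholder visual embeddings
aud = torch.randn(4, 128)   # placeholder audio embeddings
logits = AudioVisualClassifier()(vis, aud)  # (4, 2) class scores
```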

Citation: Zeyi Liu et al. ManiWAV: Learning Robot Manipulation from In-the-Wild Audio-Visual Data. arXiv. https://arxiv.org/pdf/2406.19464 (open access)
