China wants to leverage space and AI for military supremacy

The U.S. Department of Defense has released a report about China’s military growth, focusing on space and artificial intelligence (AI) technology, SpaceNews reports.

The People’s Republic of China (PRC) is using more satellites and AI to improve its military operations. Satellites help China monitor U.S. and allied forces and could aid in precision strikes. The PRC combines satellite information with AI to spot weaknesses and guide missiles accurately.

China’s space program has grown rapidly. In 2010, China had 36 satellites; by 2024, that number had jumped to over 1,000, many of them for reconnaissance and tracking. “The PRC emphasizes the importance of space-based surveillance capabilities in supporting precision strikes,” notes the Department of Defense.

China has adopted “multi-domain precision warfare,” a strategy mixing AI, big data, and space tech for better military coordination. China’s People’s Liberation Army (PLA) wants to use information from different areas like land, sea, air, space, and cyberspace for exact military strikes.

Besides satellites, China is working on anti-satellite weapons and reusable space planes. This shows China wants to lead not only in military tactics and strategy but also in technology worldwide.

“The PRC is investing in digital infrastructure abroad, including next-generation cellular networks – such as fifth-generation (5G) networks – fiber optic cables, undersea cables, and data centers,” notes the report. “The initiative includes developing advanced technologies, such as satellite navigation systems, artificial intelligence (AI), and quantum computing for domestic use and export.”

Domestic production of critical technologies

“The PRC’s increasingly limited access to advanced technologies from the West may impede the PLA’s progress toward fully achieving an ‘intelligentized’ military,” says the report. However, the report adds what seems a key point: “the PRC is pursuing domestic production of critical technologies to reduce reliance on foreign sources.”

The PRC also wants to pursue “human spaceflight and deep-space exploration missions – including to the Moon and Mars.”

The West, and the U.S. in particular, is well positioned to respond, but it seems wise to take China very seriously. What the incoming Trump administration will do remains to be seen.

I wish all readers a happy New Year! I’ll be back in January 2025. In the meantime, Conor will post some news stories, though less frequently.


SciAgents: AI helps scientists think of new research ideas

Creating a new research hypothesis is tough and can take a lot of time. MIT researchers have developed an artificial intelligence (AI) system to help scientists generate and check these hypotheses faster, especially in biologically inspired materials.

A paper published in Advanced Materials introduces a system called SciAgents, which uses multiple AI agents. Each agent has a specific job and can access data. They work together using graph reasoning, where AI models use a knowledge graph to connect different scientific ideas.

This method is inspired by how biological systems work, where many simple parts come together to do complex tasks.

AI models like large language models (LLMs) are good at answering questions but not at creating new ideas. The MIT team used these models to go beyond just recalling information. They built a knowledge graph from many research papers, which helps AI models think more like scientists by focusing on relationships and principles.
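
To make the graph-reasoning idea concrete, here is a minimal sketch, not the MIT team’s actual pipeline, of how concepts mined from papers could be stored in a knowledge graph and linked by a path that an agent then turns into a hypothesis. The nodes, relations, and use of the networkx library are illustrative assumptions.

```python
# Minimal illustration of "graph reasoning" over a knowledge graph.
# The concepts and relations are invented examples inspired by the silk /
# dandelion-pigment case discussed below, not data from the SciAgents paper.
import networkx as nx

kg = nx.Graph()

# Edges encode relationships that would be mined from research papers.
kg.add_edge("silk", "high toughness", relation="exhibits")
kg.add_edge("high toughness", "bioinspired adhesives", relation="enables")
kg.add_edge("dandelion pigment", "UV absorption", relation="provides")
kg.add_edge("UV absorption", "energy efficiency", relation="supports")
kg.add_edge("silk", "dandelion pigment", relation="can be combined with")

# Graph reasoning step: find a path linking two distant concepts, which an
# LLM agent could then elaborate into a candidate research hypothesis.
path = nx.shortest_path(kg, "silk", "energy efficiency")
print(" -> ".join(path))
# silk -> dandelion pigment -> UV absorption -> energy efficiency
```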

How SciAgents works (Credit: MIT).

In SciAgents, different AI models have roles like “Ontologist,” which defines terms and links concepts, and “Scientist 1,” which makes research proposals. “Scientist 2” expands these ideas, while a “Critic” model points out flaws for improvement. This teamwork leads to better, more creative hypotheses.
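
A rough sketch of how this division of labor might be wired together is shown below. This is not the published SciAgents code; call_llm is a placeholder for whichever language-model API is actually used, and the prompts are illustrative assumptions.

```python
# Sketch of a multi-agent hypothesis pipeline in the spirit of SciAgents.
# Each "agent" is just an LLM call with a role-specific prompt; the output
# of one agent becomes the input of the next.

def call_llm(prompt: str) -> str:
    # Placeholder: plug in your preferred LLM client here.
    raise NotImplementedError

def ontologist(concept_path: list[str]) -> str:
    return call_llm(f"Define these concepts and explain how they relate: {concept_path}")

def scientist_1(definitions: str) -> str:
    return call_llm(f"Propose a research hypothesis based on: {definitions}")

def scientist_2(proposal: str) -> str:
    return call_llm(f"Expand this proposal with mechanisms and experiments: {proposal}")

def critic(expanded_proposal: str) -> str:
    return call_llm(f"Point out flaws and suggest improvements: {expanded_proposal}")

def generate_hypothesis(concept_path: list[str]) -> str:
    definitions = ontologist(concept_path)
    proposal = scientist_1(definitions)
    expanded = scientist_2(proposal)
    critique = critic(expanded)
    # Fold the critique back into a revised, improved hypothesis.
    return call_llm(f"Revise this proposal: {expanded}\nUsing this critique: {critique}")
```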

Thousands of new research ideas for biomaterials

For example, the MIT researchers used SciAgents to explore silk and energy efficiency; the system proposed new biomaterials combining silk with dandelion pigments and suggested simulations and applications, like bioinspired adhesives.

In other tests, SciAgents generated original hypotheses about biomimetic microfluidic chips, collagen-based scaffolds, and graphene-based bioelectronic devices.

“The system was able to come up with these new, rigorous ideas based on the path from the knowledge graph,” says researcher Alireza Ghafarollahi in an MIT press release. “In terms of novelty and applicability, the materials seemed robust and novel. In future work, we’re going to generate thousands, or tens of thousands, of new research ideas.”

The researchers plan to add more tools for data retrieval and simulations, making the system even better as AI technology advances.


Open source AI model matches the performance of AlphaFold3

MIT scientists have released Boltz-1, a new artificial intelligence (AI) tool that could have a significant impact on how we do biomedical research and make new drugs.

Boltz-1 is a fully open-source model that matches the performance of AlphaFold3, a model from Google DeepMind that predicts the shapes of proteins and other biological molecules.

Proteins are vital for all biological functions, and their shape decides what they do. Knowing a protein’s shape is key to making new drugs or designing protein functions.

The scientists presented Boltz-1 at an MIT event, hoping it will encourage worldwide collaboration and speed up scientific discovery. They want Boltz-1 to be a starting point for further community work.

Boltz-1 is the state-of-the-art open-source model that predicts the 3D structure of proteins, RNA, DNA, and small molecules; it handles modified residues, covalent ligands, and glycans, and can condition generation on pocket residues (Credit: MIT).

Predicting protein shapes is tough because of the complex way amino acids fold into 3D structures. AlphaFold2, which won a Nobel Prize, used machine learning to predict these shapes very accurately. AlphaFold3 added a new AI method called a diffusion model, which helps deal with the uncertainty in these predictions. But AlphaFold3 isn’t fully open-source, leading to some pushback and a race to make an open-source version.
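
As a rough intuition for what a diffusion model does, the sketch below is a toy illustration only, not the architecture used by AlphaFold3 or Boltz-1: the model starts from random noise and repeatedly refines it toward a plausible structure. Here the learned denoiser is replaced by a hard-coded nudge toward a known target, just to show the iterative-refinement loop.

```python
# Toy illustration of the diffusion idea: iteratively refine noisy 3D
# coordinates toward a structure. Real models learn the denoising step from
# data; this hard-coded version only demonstrates the loop structure.
import numpy as np

rng = np.random.default_rng(0)
target = rng.normal(size=(10, 3))        # stand-in for "true" atom positions
coords = rng.normal(size=(10, 3)) * 5.0  # start from pure noise

for step in range(50):
    # A trained network would predict the denoising direction from the noisy
    # input; this toy simply moves 10% of the way toward the target each step.
    coords += 0.1 * (target - coords)

print("mean error after refinement:", np.abs(coords - target).mean())
```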

Democratizing research in biology

The Boltz-1 code is publicly available via GitHub. The Boltz-1 GitHub repository links to a technical report titled “Boltz-1: Democratizing Biomolecular Interaction Modeling,” published on bioRxiv.

Boltz-1 started with the same methods as AlphaFold3 but added improvements to make it more accurate. The MIT team also shared all the code used to train Boltz-1, making it easier for others to improve or use it. The project took four months, and one big challenge was handling the Protein Data Bank’s vast and varied data.

Boltz-1 matches AlphaFold3’s accuracy in predicting various biomolecular structures. The team plans to keep enhancing Boltz-1 and reducing prediction times, and invites others to join in via GitHub or Slack. Experts see Boltz-1 as a breakthrough that democratizes access to advanced tools in biology, which could lead to new medicines, and they predict a surge in discoveries thanks to this open-source model.


Nvidia launches small, cheap and powerful AI supercomputer

Nvidia has launched a new compact AI supercomputer, the Jetson Orin Nano Super Developer Kit. This device, small enough to hold in your hand, boosts performance while cutting the price to $249 from $499.

The Jetson Orin Nano Super Developer Kit improves generative AI capabilities by 1.7 times, increases performance by 70% to 67 INT8 TOPS, and raises memory bandwidth by 50% to 102GB/s compared to the older model. According to Nvidia, this kit is perfect for making AI chatbots, visual AI agents, or robots.
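
As a quick back-of-the-envelope check, the percentages quoted above imply the following previous-generation figures. The sketch only rearranges the numbers in this article; Nvidia’s official specifications for the older kit may be rounded differently.

```python
# Back-of-the-envelope check of the Jetson Orin Nano Super figures above.
new_tops = 67        # INT8 TOPS after the roughly 70% uplift
new_bandwidth = 102  # GB/s memory bandwidth after the 50% uplift

prev_tops = new_tops / 1.7            # implies ~39 TOPS for the older kit
prev_bandwidth = new_bandwidth / 1.5  # implies ~68 GB/s for the older kit

print(f"implied previous compute:   {prev_tops:.0f} INT8 TOPS")
print(f"implied previous bandwidth: {prev_bandwidth:.0f} GB/s")
print(f"price cut: ${499 - 249} (from $499 to $249)")
```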

The update isn’t just for new users; owners of the previous Jetson Orin Nano Developer Kit can also upgrade their AI performance with a software update. This kit is great for people learning about AI, robotics, or computer vision. It’s an accessible platform to turn ideas into real projects as AI moves towards more flexible models.

Powerful edge AI for applications in the field

The Jetson Orin Nano Super includes a system-on-module (SoM) with 8GB of memory and a reference carrier board for prototyping AI at the edge, that is, AI applications that run on local devices instead of in remote data centers, which is often essential for deploying AI applications in the field.

The SoM has an Nvidia Ampere GPU with tensor cores for faster processing and a 6-core Arm CPU. This setup supports up to four cameras for better visuals and performance.

Nvidia supports this kit with a broad software ecosystem. The Jetson AI Lab provides resources and tutorials for the latest AI models. The kit works with Nvidia software such as Isaac for robotics, Metropolis for vision AI, and Holoscan for sensor processing. Tools like Omniverse Replicator and TAO Toolkit help speed up development by generating synthetic data and fine-tuning AI models.

The community around Jetson is large, offering support, inspiration, and additional tools. Partners provide further software, hardware like cameras, and design services.

Everyone with a Jetson Orin Nano or NX can now upgrade their performance with the new ‘Super’ mode through the JetPack SDK, available for immediate download.


Google updates its AI models for video and image generation

Earlier this year, Google introduced Veo and Imagen 3, artificial intelligence (AI) models that help people create videos and images. Now, Google has updated these tools, introducing Veo 2 and a new version of Imagen 3, which are available on VideoFX and ImageFX (Google tools that are available in some but not all countries), plus a new tool called Whisk.

Veo 2 is a video generator that makes videos with great detail and realism. It understands how things move in real life and can create different styles of videos. You can tell Veo 2 to use certain camera lenses or effects, like filming with an 18mm lens for a wide shot or using a “shallow depth of field” to blur the background and focus on the subject. It can make videos in high-quality 4K resolution that can last for minutes. Veo 2 does well in tests against other video generators, creating fewer unwanted details like extra fingers. Google is careful about how Veo 2 is used, focusing on safety and putting watermarks on videos to prevent misinformation.

Imagen 3 now makes pictures that are brighter and more detailed. It can create images in various styles, from realistic photos to artistic impressions like anime. This model listens to your instructions better and adds more texture to images. It’s now available in over 100 countries through ImageFX.

Rave reviews

Whisk is a new fun tool where you mix images to make something new. You pick images for your subject, scene, and style, and Whisk combines them into unique creations like digital toys or stickers. It uses both Imagen 3 and another AI model, Gemini, to understand and describe the images you upload. Whisk is currently only available in the United States.

These tools are part of Google’s push to make AI more creative and useful, with Veo 2 expanding to YouTube Shorts soon.

Google’s new generative AI tools, and especially Veo 2, have already received rave reviews. “The latest release is a new version of the Veo artificial intelligence video model that has the most accurate understanding of physics I’ve seen from any video tool so far,” notes one review.


New method to build faster 3D electronic chips

The electronics industry is hitting a wall with how many transistors it can fit on a single computer chip. Instead of making transistors smaller, chipmakers are now stacking them in layers like floors in a skyscraper. This means chips can process more data and do more complex tasks. But there’s a problem: each layer sits on a silicon wafer base, which is thick and bulky and slows down communication between layers.

MIT engineers have found a new way to stack chips without using these thick silicon bases. They describe their new method in a paper published in Nature. This new method lets them grow high-quality semiconductor layers directly on top of each other, even at low temperatures that don’t damage the existing circuitry below.

Their technique involves growing materials on any type of surface, not just silicon. This direct layering makes communication between chip layers faster and more efficient. The researchers think this could make artificial intelligence (AI) hardware for laptops or wearables as powerful as supercomputers, with huge storage capabilities like data centers.

Enormous potential for the semiconductor industry

“This breakthrough opens up enormous potential for the semiconductor industry, allowing chips to be stacked without traditional limitations,” says study co-author Jeehwan Kim in an MIT press release. “This could lead to orders-of-magnitude improvements in computing power for applications in AI, logic, and memory.”

The new method uses very thin layers of materials called transition-metal dichalcogenides (TMDs), which are like super-thin sheets of atoms that can work well even when very small, unlike silicon.

The engineers have managed to grow these TMDs at low temperatures by using a technique where atoms start to form crystals at the edges of small pockets on a mask, similar to how metal cools and forms in molds. This has allowed them to stack different types of TMDs, potentially doubling the number of transistors in the same space.

The team has already started a company, FS2, to turn this research into products. They aim to scale up and show how this can work in real AI chips.


Magnetic materials research could lead to spintronics for faster electronics

Researchers from the University of Missouri have discovered a new type of quasiparticle in magnetic materials. Quasiparticles are disturbances in materials that behave like particles. The researchers have found that these new quasiparticles are present in all magnetic materials, regardless of their strength or temperature.

This finding changes our understanding of magnetism, showing it’s more dynamic than previously thought.

“We’ve all seen the bubbles that form in sparkling water or other carbonated drinks,” says research co-leader Carsten Ullrich in a press release issued by the University of Missouri. “These quasiparticles are like those bubbles, moving freely around at surprisingly high speeds.”

This could open the door to developing a new generation of electronics that are not only faster and smarter but also significantly more energy-efficient.

One area this could help is spintronics, or spin electronics. Traditional electronics rely on the electrical charge of electrons for storing and processing information. In contrast, spintronics leverages the natural spin of electrons, which is fundamentally linked to their quantum nature. New electronics based on spintronics could, for example, make devices like cell phones last much longer on one charge.

“Electrons have two properties: charge and spin,” explains research co-leader Deepak Singh. “By focusing on the spin aspect instead of the conventional charge, we can achieve greater efficiency because spin loses much less energy.”

Physics research could lead to better electronics

The researchers describe the methods and results of this study in a paper titled “Emergent topological quasiparticle kinetics in constricted nanomagnets,” published in Physical Review Research.

The results of the study suggest that universal dynamical behavior “can be prevalent in nanoscopic magnets, irrespective of the nature of the underlying magnetic material.”

This new study builds upon and extends the team’s previous work, which was published in Nature Communications, where the researchers first described this dynamic behavior at the nanoscale.

The implications of this work are vast, potentially revolutionizing how we understand and manipulate magnetic materials for technological advancements and better electronics.


Light-driven nanoreactor makes hydrogen

Scientists at the University of Liverpool have made an advance in engineering biology and clean energy. They have created a new kind of nanoreactor that uses light to make hydrogen, which is a clean energy source. This nanoreactor mixes natural processes with the precision of human-made materials.

The research, described in a paper published in ACS Catalysis, shows a new way to use sunlight for making fuel. Nature’s photosynthetic systems are very good at using sunlight, but artificial systems have not been as good until now.

This hybrid nanoreactor uses recombinant α-carboxysome shells from bacteria. These shells protect sensitive enzymes called hydrogenases, which make hydrogen. Hydrogenases work well but can easily stop working if they touch oxygen. The shells keep them safe so they can keep making hydrogen.

The scientists made a special material that works like a solar panel, capturing light and turning it into energy that the enzymes use to produce hydrogen.

The potential of artificial photosynthesis

The nanoreactor copies how photosynthesis works in nature. It uses human-made materials that can absorb lots of light and natural enzymes that are great at making hydrogen, all powered by sunlight.

This hybrid nanoreactor “combines the broad light absorption and exciton generation efficiency of synthetic materials with the catalytic power of biological enzymes,” says research co-leader Luning Liu in a press release issued by the University of Liverpool. “This synergy enables the production of hydrogen using light as the sole energy source.”

Other methods to make hydrogen with artificial photosynthesis rely on expensive metals like platinum. But this new method offers a cheaper way to make fuel with the same efficiency. Besides making hydrogen, it could help in other areas of biotechnology too.

These findings could lead to more nanoreactors that mimic nature, helping us move towards a world with no carbon emissions, and underline the vast potential of artificial photosynthesis to revolutionize energy production.


Better plastic for 3D printing, adjustable and cheap

Princeton engineers have used thermoplastic elastomers to make soft 3D printed items with adjustable stiffness. Thermoplastic elastomers are polymers that can be melted and molded but turn elastic when cooled.

The engineers can control where the material is stiff or stretchy by changing how the 3D printer lays down the plastic. This technique could be useful for making soft robots, medical devices, helmets, and shoe soles.

The engineers have described the methods and results of this study in a paper published in Advanced Functional Materials.

The secret behind this material’s versatility is its internal structure. The engineers have used block copolymers to form stiff cylinders just 5-7 nanometers wide within a soft, stretchy material. By controlling the direction of these cylinders during printing, they can make parts of the object hard in one direction while soft in others.

Controlling these nanostructures allows for tailored properties. The process starts with selecting the right polymer. Block copolymers have sections that separate like oil and water, which helps create the stiff cylinders in a stretchy matrix.

The engineers developed a 3D printing method that aligns these nanostructures by adjusting print speed and material flow. The method includes thermal annealing, where the plastic is heated and cooled to improve its properties and even self-heal damage.

Cheaper wearable tech and biomedical devices

This approach makes the material not only functional but also cost-effective, with the elastomers costing just a cent per gram compared to much pricier alternatives. The engineers have been able to add things like glowing molecules to the plastic without losing control over its properties.

They’ve made complex items like a tiny vase and printed text, demonstrating how the material can be both rigid and flexible in different ways. Annealing not only perfects the material’s structure but also allows it to repair itself if cut.

The engineers plan to explore more use cases, like in wearable tech and biomedical devices, showing how versatile and practical this 3D printing method can be.


Microfluidic device helps lab-grown neurons connect

In human brains, connected neurons strengthen their links when they activate at the same time. This is part of neural plasticity, the brain’s ability to change and adapt. However, when scientists grow neurons in a dish, these cells don’t behave the same way. They form random connections and fire in unison, which doesn’t resemble how real brains learn.

Researchers at Tohoku University have found a way to make lab-grown neurons act more like those in living brains. They have used a microfluidic device with microchannels that help neurons link up in patterns similar to those in animal nervous systems.

By adjusting the size of these microchannels, the scientists could control how the neurons connected. In experiments, they saw that networks with smaller microchannels had more complex patterns of activity. These patterns included groups of neurons, called neuronal ensembles, which are key to how the brain stores and recalls information. In real brains, these ensembles shift and change with new experiences, helping with learning and memory.

The team’s findings showed that with the right setup, lab-grown neurons could form several different ensembles, not just one as seen in simpler setups. They also observed that repeated stimulation could change these ensembles, much like how experiences shape memory in the brain. This suggests these lab-grown networks can mimic some aspects of neural plasticity.

Better models for brain functions

This research, described in a paper published in Advanced Materials Technologies, opens new doors for studying how brains learn and remember without the complexities of using live animals. By making lab-grown neurons behave more naturally, scientists can better understand brain functions under controlled conditions. This could lead to better models for exploring specific brain functions, like memory formation and retrieval, in the future.

“Lab-grown neurons allow scientists to explore how learning and memory work in highly controlled conditions,” says researcher Hideaki Yamamoto in a Tohoku University press release. “There is a demand for these neurons to be as close to the real thing as possible.”
