Quantum information theory maps how quarks and gluons are entangled inside protons

Scientists at Brookhaven National Laboratory have peered inside protons using high-energy particle collisions, applying quantum information theory to understand how quantum entanglement shapes the spray of particles that streams out of electron-proton collisions.

Quantum entanglement means that particles such as quarks and gluons, the building blocks of protons, remain correlated: measuring one reveals information about the others, however they are separated. Inside protons, this correlation plays out over extremely small distances, less than one quadrillionth of a meter.

The team’s work, detailed in a recent paper in Reports on Progress in Physics, shows how entanglement influences the spread of particles coming out of these collisions. It suggests that quarks and gluons are entangled, complicating our view of protons’ inner workings.

Physicists have usually treated quarks and gluons as independent entities within protons. The new evidence of entanglement shows that a proton’s interior is more dynamic and interconnected than that picture suggests.

This research could help answer questions about how protons behave within larger atomic nuclei, which will be studied at the upcoming Electron-Ion Collider (EIC). The study used quantum information science to predict the effect of entanglement on collision outcomes, looking at entropy as a sign of entanglement.

A messy room is a good picture of high entropy, and the particle distribution from entangled protons looks similarly “messy”: if quarks and gluons are highly entangled, collisions produce a high-entropy spread of particles. The scientists confirmed this by checking their calculations against data from past experiments at the HERA collider in Germany.
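
As a rough illustration of what a “high-entropy particle spread” means quantitatively, the sketch below computes the Shannon entropy of a particle-multiplicity distribution. The probability values are invented for illustration; they are not the HERA data or the paper’s actual calculation.

```python
import numpy as np

def multiplicity_entropy(probabilities):
    """Shannon entropy S = -sum p_n ln p_n of a particle-multiplicity distribution."""
    p = np.asarray(probabilities, dtype=float)
    p = p / p.sum()               # normalize, just in case
    p = p[p > 0]                  # ignore zero-probability bins
    return -np.sum(p * np.log(p))

# Hypothetical distributions over the number of produced particles (illustrative only).
narrow = [0.7, 0.2, 0.05, 0.03, 0.02]   # most collisions give few particles -> low entropy
broad  = [0.2, 0.2, 0.2, 0.2, 0.2]      # many outcomes equally likely -> high entropy

print(f"narrow spread: S = {multiplicity_entropy(narrow):.2f}")
print(f"broad spread:  S = {multiplicity_entropy(broad):.2f}")  # ln(5) ≈ 1.61, the maximum for 5 bins
```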

Global entanglement

The findings reveal that entanglement is not just between two particles but among all particles in a system, potentially simplifying complex nuclear physics problems. The study shows that the collective behavior of quarks and gluons in protons is more telling than individual behaviors, much like how water’s temperature arises from the collective vibrations of its molecules.

Future experiments at the EIC will explore how entanglement behaves when protons are part of a nucleus, potentially revealing more about quantum coherence or decoherence in nuclear environments.

This research opens new ways to understand the structure of matter and could impact other fields where quantum entanglement is key.

Printed e-tattoo reads brainwaves for health monitoring and BCI

For over a decade, scientists have been working on electronic tattoos, or e-tattoos, which stick to the skin. These tattoos can measure heart activity, check for pneumonia, and gauge stress levels. But applying them to hairy skin has been tricky, because they don’t stick or work as well there.

Researchers from The University of Texas at Austin and UCLA came up with a solution by looking at regular tattoos. They created a special ink that can print directly onto a patient’s scalp to measure brainwaves.

This e-tattoo acts as a sensor for electroencephalography (EEG). EEG can diagnose brain tumors, sleep issues, and other brain problems. Nanshu Lu, who leads the project, says the goal is to make EEG sensors that people can wear comfortably for a long time, even outside hospitals.

Current EEG gear includes caps with rigid or soft electrodes and lots of wires, which can take hours to set up and maintain. The new method involves using a camera to map the shape of a person’s head. Then, a computer algorithm designs the EEG sensors for that specific head, and a robot printer places the conductive ink. This printer shoots the ink through hair without touching the patient, though it works best with short hair.
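
To make the “algorithm designs the sensors for that specific head” step concrete, here is a minimal sketch of one piece such a pipeline might include: converting a measured head dimension into individualized electrode targets along the scalp midline, using the standard 10-20 system fractions. The arc-length value is an assumed example, and this is not the UT Austin/UCLA team’s actual software.

```python
# Standard 10-20 midline sites as fractions of the nasion-to-inion arc.
MIDLINE_FRACTIONS = {"Fpz": 0.10, "Fz": 0.30, "Cz": 0.50, "Pz": 0.70, "Oz": 0.90}

def midline_targets(nasion_to_inion_cm: float) -> dict[str, float]:
    """Distance (cm) from the nasion to each midline 10-20 electrode site."""
    return {name: frac * nasion_to_inion_cm for name, frac in MIDLINE_FRACTIONS.items()}

if __name__ == "__main__":
    arc_cm = 36.0  # example nasion-to-inion arc length taken from a head scan (assumed)
    for site, dist in midline_targets(arc_cm).items():
        print(f"{site}: print target at {dist:.1f} cm along the midline")
```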

The e-tattoo design is thin and barely noticeable, making it possible to wear under hats or helmets. This means longer, less invasive monitoring. The researchers will try to make the e-tattoo work on people with longer hair and to make the ink tougher. This would be crucial for monitoring conditions like epilepsy or sleep disorders in real-life settings.

Applications to brain-computer interfaces

The researchers have described the methods and results of their study in a paper published in Cell Biomaterials.

José del R. Millán, another key researcher, notes that these e-tattoos could change brain-computer interfaces (BCI), where people control devices using their thoughts. This could greatly help those with brain impairments. Millán has already used this tech for a wheelchair and a decision-making game controlled by the mind.

Generative AI for faster climate modeling

Researchers from UC San Diego and the Allen Institute for AI have developed a new climate model using the kind of generative artificial intelligence (AI) behind image generators like DALL-E. This model, Spherical DYffusion, can project 100 years of climate patterns 25 times faster than existing models.

Using Spherical DYffusion, it takes just 25 hours to simulate what would take weeks with other models. Usually, top models need supercomputers, but this one runs on GPU clusters in labs.

“Deep learning models are about to change how we model weather and climate,” say the researchers. They will share their findings at the annual Conference on Neural Information Processing Systems (NeurIPS) in Vancouver in December 2024. In their conference paper, the researchers describe the methods and results of this study.

Climate simulations are costly because they’re complex. This limits how many scenarios scientists can explore.

The team used diffusion models, which are types of AI that generate data by learning patterns, combined with a Spherical Neural Operator, a network for handling spherical data like Earth’s surface.

This new model uses known climate data to predict future changes through a series of transformations. It’s both faster and almost as accurate as other methods but uses less computing power.
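
The “series of transformations” works autoregressively: a trained model repeatedly maps the current climate state to the next one. The toy sketch below shows that rollout loop with a stand-in step function (simple relaxation toward a mean plus noise); it is an illustration of the idea, not the Spherical DYffusion architecture.

```python
import numpy as np

def emulator_step(state: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Stand-in for a trained model that predicts the next climate state from the current one."""
    climatology = 288.0  # rough global-mean temperature (K), illustrative only
    return state + 0.1 * (climatology - state) + rng.normal(0.0, 0.2, size=state.shape)

def rollout(initial_state: np.ndarray, n_steps: int, seed: int = 0) -> np.ndarray:
    """Apply the step model repeatedly, feeding each prediction back in as the next input."""
    rng = np.random.default_rng(seed)
    states = [initial_state]
    for _ in range(n_steps):
        states.append(emulator_step(states[-1], rng))
    return np.stack(states)

trajectory = rollout(np.full((4,), 290.0), n_steps=120)  # e.g., 120 simulated steps
print(trajectory.shape)                                   # (121, 4)
```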

The importance of better climate models

However, the model isn’t perfect yet. The researchers plan to add more elements to their simulations, like atmospheric responses to CO2.

The researchers note they’ve focused on emulating the atmosphere, a key climate element.

This advancement is crucial because faster, more accessible climate modeling allows scientists to run more simulations.

This means that researchers can assess a broader range of climate scenarios, helping to predict impacts like sea-level rise or extreme weather events more accurately. Better predictions can guide policy decisions, enhance disaster preparedness, and contribute to effective climate change strategies, all of which are vital for our planet’s future.

With this technology, we’re better equipped to tackle the challenges of climate change head-on.

Training AI with human feedback

When you start learning to drive, your instructor gives you instant tips on every move. They might even grab the wheel and yell “Brake!” to keep you safe. This direct teaching builds your driving skills over time. However, teaching artificial intelligence (AI) to drive or perform tasks is different. AI usually learns from huge amounts of data, not from real-time advice like you get from a driving coach.

Now, researchers from Duke University and the Army Research Laboratory have created GUIDE, a new AI teaching method. GUIDE helps AI learn like a human would, with real-time feedback. They will present GUIDE at the Conference on Neural Information Processing Systems (NeurIPS 2024) in Vancouver. Boyuan Chen, a professor at Duke, explains that AI often struggles with tasks needing quick decisions based on little information. GUIDE changes this by using ongoing human feedback instead of big data sets.

Fast human-assisted AI training

In GUIDE, humans watch what the AI does and give detailed instructions, much like a driving coach. In one study, they used GUIDE to teach AI to play hide-and-seek. Here, a human trainer guides an AI-controlled red player to find a green one on a playing field with obstacles. The human provides feedback using a scale, not just saying if it’s good or bad, allowing for more precise guidance.
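
To show the idea of learning from a continuous feedback scale (rather than a simple good/bad signal), here is a toy loop in which a trainer scores each action from -1 to +1 and the agent nudges its value estimates toward higher-scoring actions. The environment, the feedback function, and the update rule are all hypothetical placeholders, not GUIDE itself.

```python
import random

ACTIONS = ["up", "down", "left", "right"]
values = {a: 0.0 for a in ACTIONS}   # the agent's running estimate of each action's worth
LEARNING_RATE = 0.1
EPSILON = 0.2                        # how often the agent explores a random action

def human_feedback(action: str) -> float:
    """Hypothetical stand-in for a human trainer's score in [-1, 1]."""
    return 1.0 if action == "right" else -0.3   # pretend "right" moves toward the target

for step in range(200):
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)              # explore
    else:
        action = max(values, key=values.get)         # exploit the current best guess
    score = human_feedback(action)
    values[action] += LEARNING_RATE * (score - values[action])  # move estimate toward the feedback

print(max(values, key=values.get))   # converges to "right" under this toy feedback
```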

This study, involving 50 people with no special training, showed that just ten minutes of feedback could boost AI performance by 30%. After human coaching, the AI could continue learning from a simulated coach, showing that human guidance can be brief yet effective. Chen points out that even if the coach isn’t the best at the task, they can still teach well, just like many sports coaches.

The researchers found that people’s different skills, like quick thinking or understanding space, affect how well they can guide AI. This discovery opens up new ways to train both humans and AI better.

GUIDE could make AI more adaptable and intuitive, working well even with limited information, thus making AI more useful in real-life situations where things change quickly.

LLMs and machine learning for genomics research

Researchers at UC San Diego have shown that large language models (LLMs) like GPT-4 can speed up functional genomics.

Functional genomics studies what genes do and how they work together. A common method called gene set enrichment compares new gene groups to known databases to find their function. However, this method misses out on new biology not in these databases.
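
For context, the database comparison that gene set enrichment relies on is often an over-representation test such as the hypergeometric test. The sketch below shows that standard calculation on made-up numbers; it is background for the method the article names, not the UC San Diego study’s code.

```python
from scipy.stats import hypergeom

# How surprising is the overlap between a new gene set and a known pathway?
# All numbers below are invented for illustration.
total_genes   = 20000   # background (e.g., all annotated human genes)
pathway_genes = 150     # genes annotated to one known pathway
query_genes   = 100     # genes in the new experimental set
overlap       = 12      # genes shared by the two sets

# Probability of seeing an overlap this large or larger by chance
p_value = hypergeom.sf(overlap - 1, total_genes, pathway_genes, query_genes)
print(f"enrichment p-value: {p_value:.2e}")
```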

Using artificial intelligence (AI), specifically LLMs, could cut down the time researchers spend on this task.

The researchers describe the methods and results of this study in a paper published in Nature Methods.

The team tested five LLMs and found that GPT-4 was the best, with a 73% success rate in naming gene functions correctly. When given random genes, GPT-4 correctly refused to name functions 87% of the time, avoiding made-up answers or hallucinations. It also gave explanations for its choices.
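
Roughly, a query of this kind might be framed as below: give the model a gene list, ask for a shared function, and explicitly allow it to decline. The prompt wording and the `call_llm` helper are hypothetical illustrations, not the team’s actual pipeline or any specific API.

```python
def build_prompt(genes: list[str]) -> str:
    """Assemble a gene-set naming prompt that permits a 'no confident function' answer."""
    return (
        "You are assisting with functional genomics.\n"
        f"Gene set: {', '.join(genes)}\n"
        "Propose a short name for the biological process these genes most likely share, "
        "briefly explain your reasoning, and answer 'no confident function' if the set "
        "does not appear to have a common function."
    )

def call_llm(prompt: str) -> str:
    """Placeholder for a call to whichever LLM service is being used."""
    raise NotImplementedError

# Example with real mitotic-checkpoint genes, just to show the prompt shape.
print(build_prompt(["CDC20", "BUB1", "MAD2L1", "CDC27"]))
```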

The study suggests more work is needed, but LLMs could transform genomics by quickly creating new scientific ideas. The researchers made a website to help others use LLMs in their work.

Machine learning finds new patterns in the genome

In related news, researchers at the University of Toronto are using machine learning to study how human chromosomes are organized. This can affect health and disease, like cancer.

The researchers developed a method called “Signature,” which uses machine learning to find new patterns in the genome, the complete set of genetic material in a human. A paper published in Nature Communications describes the development of Signature and some preliminary tests.

Signature combines imaging with chromosome conformation capture (Hi-C), a technique that gives billions of reads of genetic data, allowing researchers to study many interactions at once. The researchers analyzed 62 data sets, each with over 3.8 million possible chromosome interactions.

“In supervised learning, you know your target. In unsupervised, you let the data speak,” notes a researcher. The team used network clustering in the unsupervised approach to find patterns in the data.
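
As a toy illustration of “letting the data speak,” the sketch below clusters genomic bins directly from a synthetic contact matrix, so that bins that interact often end up grouped together. The matrix and the spectral-clustering choice are illustrative assumptions, not the Signature method.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(1)
n_bins = 40
contacts = rng.poisson(2, size=(n_bins, n_bins)).astype(float)   # background contacts
contacts[:20, :20] += rng.poisson(15, size=(20, 20))             # one densely interacting block
contacts[20:, 20:] += rng.poisson(15, size=(20, 20))             # a second block
contacts = (contacts + contacts.T) / 2                           # Hi-C matrices are symmetric

# Unsupervised: no labels, just the contact frequencies used as similarities.
labels = SpectralClustering(
    n_clusters=2, affinity="precomputed", random_state=0
).fit_predict(contacts)

print(labels)   # bins 0-19 and 20-39 should fall into two different clusters
```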

Micron-sized robot acts like a walking microscope

At Cornell University, scientists have built the world’s tiniest walking robot. This robot, measuring just 2 to 5 microns, can move around freely. Its goal is to reach and study tiny places, like inside tissue samples, to take pictures and measure tiny forces.

Paul McEuen, who led the project, compared this robot to putting a microscope’s lens right into the world of the very small. This robot can perform tasks a regular microscope can’t do up close.

In a study published in Science, the researchers explained how they control the robot. It uses magnets to move in a pinching motion, allowing it to “walk” or even swim through liquids. This movement lets the robot reach places where it can do its job of imaging or sensing.

These robots are far smaller than any previous walking robots, which measured 40 to 70 microns. The new robots can be steered precisely by controlling the magnetic fields that drive them.

Robots that work at very small scales

The robots carry tiny magnets of different shapes – long and thin, or short and stubby. This allows the robots to move in response to magnetic fields, with short ones needing less force to change direction.

Screenshot from a video showing the robot in operation (Credit: Science).

This technology blends robotics with optics at the micro level, allowing the robot to not just move but also to manipulate light for better imaging.

By changing shape, the robot can enhance how it captures images or senses forces by altering how light diffracts around it.

The term “diffraction” refers to how light bends or spreads out when it passes through or around something.
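
A back-of-the-envelope calculation shows why micron-scale features matter here: the characteristic diffraction angle is roughly the wavelength divided by the feature size. The numbers below are generic, not measurements from the Cornell robots.

```python
import math

wavelength_m = 500e-9   # green light, about 500 nm
feature_m = 5e-6        # a roughly 5-micron robot or aperture

theta_rad = wavelength_m / feature_m   # characteristic diffraction angle
print(f"~{theta_rad:.2f} rad ≈ {math.degrees(theta_rad):.1f} degrees of spreading")
```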

The team sees future uses of these robots in science, like studying DNA, or in medicine, where they could be used for precise tasks inside the body.

These micro robots could one day perform detailed microscopic work by moving over surfaces to inspect, measure, or repair at a scale previously unimaginable.

Progress toward self-assembling electronic devices

Researchers at North Carolina State University have developed a new way to make electronic devices self-assemble. This means the materials arrange themselves into the right shapes without needing detailed human control.

The traditional way to make chips for electronics is very complex, involving many steps and expensive machinery. The new approach is much quicker and cheaper. Unlike the usual methods, which can produce a lot of faulty chips, this technique builds devices more consistently, reducing waste.

The process starts with tiny droplets of metal that stays liquid at low temperatures. Next to these droplets, there’s a mold. After adding a solution with special molecules called ligands, these molecules grab metal ions from the liquid metal and start building structures as the solution moves into the mold. As the liquid part of the solution evaporates, the process packs these structures tightly together.

The mold shapes how these structures form, making sure they line up neatly rather than randomly. After removing the mold and applying heat, the heat breaks down the ligands, allowing the metal ions to react with oxygen to form semiconductor materials wrapped in layers of graphene.

Potential for large, complex, 3D electronics

The researchers have used this method to make simple electronic components like diodes and transistors, which are key parts of gadgets like computers and smartphones.

This method makes it possible to control how much energy is needed to make the material conduct electricity. It can also make the material react to light, which is useful for making devices that work with light, like solar cells.

The technique can produce materials at a scale limited only by the size of the mold, and the researchers can also tune the properties of the semiconductors by changing the liquid or the evaporation rate.

This self-assembly method offers a promising new way to build electronic devices, potentially leading to the creation of more complex and three-dimensional chips in the future.

Photonic hardware could make neural networks run faster and use less energy

Deep neural networks, which power many of today’s smart technologies such as image recognition and speech understanding, are growing so large and complex that they are starting to stretch the capabilities of conventional computer chips.

Researchers from MIT and elsewhere have developed hardware that uses light instead of electricity for computing. This “photonic” hardware could handle these complex tasks much faster and with less power. The catch has been that some of the calculations neural networks need couldn’t be done with light alone, requiring slow, less efficient electronic components.

Now MIT researchers have built a photonic chip that can do all the necessary calculations for deep neural networks using only light. This chip has a design that allows it to perform both the simple (linear) and more complex (nonlinear) operations of neural networks right on the chip.

The researchers have described the new photonic chip in a paper published in Nature Photonics.

The chip was able to process and classify information in less than half a nanosecond with over 92% accuracy, which is as good as or better than traditional electronic chips.

Faster data processing

This is “an end-to-end system that can run a neural network in optics, at a nanosecond time scale,” says researcher Saumil Bandyopadhyay in an MIT press release.

The chip uses light to encode data, then uses beam splitters to manipulate this light in a way that mimics how neural networks multiply and process data. For the nonlinear operations, they’ve made special devices called Nonlinear Optical Function Units (NOFUs). These devices turn some light into electrical signals for processing, but then quickly turn it back into light.
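
Conceptually, a mesh of beam splitters and phase shifters applies a programmable matrix to the light’s amplitudes, which is how the linear part of a layer is computed optically. The toy model below builds one 2x2 Mach-Zehnder-style block from those pieces, checks that it is a valid (unitary) linear operation, and tacks on a stand-in nonlinearity on the detected intensities. This is the generic textbook construction, not MIT’s chip design or the NOFU circuit.

```python
import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # ideal 50:50 beam splitter

def phase_shift(phi: float) -> np.ndarray:
    """Phase shifter acting on the first of two waveguide modes."""
    return np.diag([np.exp(1j * phi), 1.0])

def mzi(theta: float, phi: float) -> np.ndarray:
    """2x2 Mach-Zehnder unit: splitter, internal phase, splitter, external phase."""
    return BS @ phase_shift(theta) @ BS @ phase_shift(phi)

U = mzi(theta=0.8, phi=1.3)
print(np.allclose(U @ U.conj().T, np.eye(2)))    # True: the programmed matrix is unitary

# Input optical amplitudes on two waveguides, transformed by the programmed matrix,
# then a toy "nonlinear" step applied to the measured intensities |a|^2.
x = np.array([0.9, 0.4], dtype=complex)
y = U @ x
activated = np.maximum(np.abs(y) ** 2 - 0.1, 0.0)
print(activated)
```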

This new chip was made using the same manufacturing processes as regular computer chips, which means it could be mass-produced easily.

Looking forward, the researchers are thinking about making this chip bigger and connecting it with everyday electronic devices.

They also want to explore new ways to train these optical systems to be even faster and more energy-efficient.

This could lead to breakthroughs in many areas, for example in real-time image processing for cars and devices.

Lunar helium 3 for clean nuclear energy and other technologies

Helium-3 is incredibly valuable because it’s used in many important technologies. The problem is, there’s not much helium-3 on Earth.

But there’s a lot of helium-3 on the Moon.

A company called Interlune is working on ways to get helium-3 from the Moon. If we could bring back helium-3, it would open up amazing possibilities for energy, quantum computing, security, and health.

Helium-3 could fuel future fusion reactors. Nuclear fusion with helium-3 would produce energy with less harmful waste compared to other methods. This could mean a world powered by a clean, endless energy source, reducing our need for fossil fuels and helping the environment.

Interlune co-founder and Apollo astronaut Harrison Schmitt, the only geologist to walk on the Moon, wrote the book “Return to the Moon” (2006), which laid the early foundation for the Interlune business plan.

Schmitt developed a comprehensive end-to-end plan for mining helium-3 on the Moon and shipping it to the Earth to power next-generation nuclear fusion reactors.

The fusion reaction between deuterium and helium-3, which produces a proton and helium-4, doesn’t produce energetic neutrons that would damage the fusion reactor and make it radioactive. This is the aneutronic nuclear reaction that requires the least input energy, and therefore is the easiest to achieve.
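
For reference, the reaction at the heart of this plan, written with the standard textbook energy figures, is:

```latex
\mathrm{D} + {}^{3}\mathrm{He} \;\longrightarrow\; \mathrm{p}\,(14.7\ \mathrm{MeV}) + {}^{4}\mathrm{He}\,(3.6\ \mathrm{MeV}), \qquad Q \approx 18.3\ \mathrm{MeV}
```

Because the energy emerges as charged particles rather than neutrons, it is easier to capture and does not make the reactor structure radioactive.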

Other critical technologies

A good supply of helium-3 would enable other important technologies as well:

Helium-3 is used to cool quantum computers to temperatures very close to absolute zero, which many quantum processors require to operate. With more helium-3, we could build and run more quantum computers.

Helium-3 is used in detectors at borders and ports to catch smuggled nuclear materials. These detectors work by capturing neutrons that come from dangerous materials like plutonium. If we had plenty of helium-3, we could protect our countries better, ensure nuclear facilities follow rules, and even help find landmines in war zones.

Helium-3 can be used in magnetic resonance imaging (MRI) to look at lungs in great detail, spotting diseases like cancer or asthma early on without using harmful radiation. When helium-3 is breathed in by a patient, it helps create pictures of the lungs that show problems that regular imaging can miss.

The skin’s electrical properties can reveal what we’re feeling

Researchers at Tokyo Metropolitan University are exploring how our skin’s electrical properties can reveal what we’re feeling.

The skin changes how it conducts electricity based on how much we sweat, which happens quickly when we feel emotions. The researchers conducted an experiment in which volunteers watched different types of videos – scary, emotional (family bonding), and funny – while changes in their skin conductance were recorded.

The researchers describe the method and results of this study in a paper published in IEEE Access.

Every video had specific moments meant to trigger emotions. The scientists noticed that the skin’s response to fear took the longest to return to normal. This might be because our bodies are wired to hold onto fear longer for survival reasons.

When volunteers watched family bonding scenes, which likely mixed feelings of sadness and happiness, the skin’s response was slower. This might be because these mixed emotions interfered with each other.

By looking at how fast the skin’s conductance changed and how it returned to baseline after each emotional peak, the researchers found patterns.
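
The kind of “dynamics” features described here can be made concrete with a small sketch: how quickly a skin conductance response rises to its peak, and how long it takes to decay halfway back to baseline. The signal below is synthetic and the sampling rate is assumed; this is not the Tokyo Metropolitan University data or analysis code.

```python
import numpy as np

fs = 20.0                                  # sampling rate in Hz (assumed)
t = np.arange(0, 20, 1 / fs)
baseline = 2.0                             # microsiemens
scr = baseline + 0.8 * (np.exp(-t / 4.0) - np.exp(-t / 1.0))   # toy response shape

peak_idx = int(np.argmax(scr))
rise_time_s = t[peak_idx]                  # time from stimulus (t = 0) to the peak

# Time for the signal to fall halfway from its peak back toward baseline.
half_level = baseline + 0.5 * (scr[peak_idx] - baseline)
after_peak = np.where(scr[peak_idx:] <= half_level)[0]
half_recovery_s = t[peak_idx + after_peak[0]] - t[peak_idx] if after_peak.size else np.nan

print(f"rise time: {rise_time_s:.2f} s, half-recovery time: {half_recovery_s:.2f} s")
```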

They could tell, for instance, if someone was scared or feeling connected with family, though not perfectly.

New technology to sense emotions

This means that by studying the ‘dynamics’ or the way the skin conductance changes over time, scientists might predict what emotion someone is experiencing.

The results “indicate that some of the differences in human emotions are evident in skin conductance response waveforms,” reads the conclusion of the paper. “The results of this study are expected to contribute to the development of technologies that can be used to accurately estimate emotions, when combined with other physiological signals.”

This could lead to future tech like phones or wearables that respond to your mood. However, they’re not there yet; this research just suggests it’s possible to understand emotions better by looking at these signals, moving us away from relying only on facial expressions, which aren’t always available or accurate.
