AI gets a keen sense of touch

Artificial Intelligence (AI) has become very good at seeing, talking, and making things. However, it has not been great at sensing touch or feeling how rough or smooth something is.

At Stevens Institute of Technology, researchers have now found a way to teach AI how to feel surfaces. They do this by firing a laser at an object and analyzing the light that bounces back.

The researchers describe the methods and results of this study in a paper published in Applied Optics.

When laser beams hit a surface and bounce back, they create a pattern known as speckle noise, which is usually seen as interference in images. However, the team at Stevens uses this noise to learn about the surface. They’ve trained their AI on how these patterns change with different textures.
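As a rough illustration of the idea (a stand-in sketch, not the Stevens team’s actual pipeline), the statistics of a speckle pattern shift with surface roughness, so a model can learn to map speckle images back to a roughness value. In the toy example below, the “camera” is a crude simulator that scrambles the optical phase more for rougher surfaces, and the speckle contrast it produces rises with roughness:

```python
# Toy sketch, not the Stevens pipeline: rougher surfaces scramble the
# optical phase more, which changes the statistics of the speckle image.
import numpy as np

rng = np.random.default_rng(0)

def speckle_image(roughness_um: float, size: int = 128) -> np.ndarray:
    sigma = roughness_um / 20.0            # assumed phase spread per micron
    phase = rng.normal(0.0, sigma, (size, size))
    field = np.exp(1j * phase)             # scattered optical field
    spectrum = np.fft.fft2(field)
    spectrum[size // 8:, :] = 0            # crude blur: finite optics aperture
    return np.abs(np.fft.ifft2(spectrum)) ** 2

# Speckle contrast (std / mean) grows with roughness; this is the kind of
# signature a neural network can be trained to pick up from real images.
for r_um in [5, 10, 20, 40, 80]:           # microns, like sandpaper grades
    img = speckle_image(r_um)
    print(f"{r_um:>3} um -> contrast {img.std() / img.mean():.2f}")
```

In a real system, many such images with known roughness would serve as training data for a neural network instead of a single hand-picked statistic.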

A Stevens press release describes this setup as a blend of AI and quantum science, because quantum interactions provide critical information to the AI. “This is a marriage of AI and quantum,” says researcher Daniel Tafone.

The researchers tested their method on sandpapers with different roughness levels. The roughness they measured was in microns (one micron is a thousandth of a millimeter). The researchers found that their system could tell very fine differences in the surface texture, down to about 4 microns after some improvements, which is as good as the best tools currently used in industry.

Applications to medicine, manufacturing, and robotics

This technology could have lots of uses. For instance, in medicine, it might help doctors tell whether a mole on the skin is a harmless mark or something dangerous like melanoma. AI could make that distinction by sensing tiny differences in the mole’s surface that are too small for the eye to see.

In manufacturing, where even the smallest flaw can cause big problems, this technology could check the quality of parts by measuring how smooth or rough they are.

This could be integrated into devices like self-driving cars or robots that already use similar light-based technologies for navigation and mapping.

AI systems with touch-like sensory capabilities would better understand the world around them.


Photonic time crystals could exponentially amplify visible light

Scientists at Aalto University, University of Eastern Finland, Karlsruhe Institute of Technology, and Harbin Engineering University have designed photonic time crystals that exponentially amplify light.

In time crystals, the properties of light change over time, not just in space as with traditional crystals. Time crystals have a pattern that repeats over and over, but instead of repeating in space, this pattern repeats in time. Nobel laureate Frank Wilczek introduced the concept of time crystals in 2012, and subsequent experiments have succeeded in building them.

In photonic time crystals, light can get stuck and amplified in special zones where light pauses while its intensity grows exponentially over time.

These special zones work like a medium that switches between air and water quadrillions of times per second.
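A toy analogy for this amplification (an illustrative assumption, not the paper’s model): a wave in a medium whose properties are modulated in time behaves like a parametrically driven oscillator, and when the modulation runs at twice the wave’s natural frequency, the amplitude grows exponentially:

```python
# Parametric resonance demo: a toy stand-in for amplification in a
# time-modulated medium (not the photonic-time-crystal equations).
import numpy as np

omega0 = 1.0            # natural frequency of the wave (arbitrary units)
eps = 0.2               # modulation depth of the medium
dt = 0.001
x, v = 1.0, 0.0         # initial amplitude and velocity
peak = abs(x)

for t in np.arange(0.0, 100.0, dt):
    # medium modulated at twice omega0 -> exponential growth of x
    w2 = omega0**2 * (1.0 + eps * np.cos(2.0 * omega0 * t))
    v += -w2 * x * dt   # semi-implicit Euler keeps the integration stable
    x += v * dt
    peak = max(peak, abs(x))

print(f"amplitude grew from 1.0 to about {peak:.0f}")
```

The analogue of the modulation rate here is the quadrillions of switches per second mentioned above, which is why building such media for visible light is so demanding.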

The scientists describe the methods and results of this study in a paper published in Nature Photonics.

Previously, Aalto University scientists had developed time crystals that worked with microwaves. However, making time crystals for visible light is hard. Working with theoretical models and electromagnetic simulations, the scientists concluded that an array of tiny silicon spheres would overcome these challenges.

In a press release issued by Aalto University, Viktar Asadchy explains that this research could lead to the actual creation of photonic time crystals in the lab, which could change how we use light in different technologies.

Applications to nanosensing for environmental monitoring and medicine

Photonic time crystals could be used in making sensors that detect very small things like viruses, pollutants, or cancer markers. When these tiny targets emit light, the crystal can catch that light and make it stronger, helping us see or measure these particles more easily.

“This work could lead to the first experimental realization of photonic time crystals, propelling them into practical applications and potentially transforming industries,” says Asadchy.

“From high-efficiency light amplifiers and advanced sensors to innovative laser technologies, this research challenges the boundaries of how we can control the light-matter interaction,” he adds.


Personalized, blood-based, 3D-printed implants for regenerative medicine

Scientists at the University of Nottingham have developed a new type of “biocooperative” material, which is made from blood and can help fix broken bones.

The new biocooperative material can help the body heal from injuries or diseases, and it makes it possible to create personalized healing products from a person’s own blood.

The scientists have described the methods and results of this study in a paper published in Advanced Materials.

The researchers focused on small proteins called peptides, which guide the healing process in our bodies. They mixed these peptides with blood to create the biocooperative material, which works like the body’s natural way of healing.

After an injury, blood turns from a liquid into a solid, creating a regenerative hematoma (RH). This RH is like a busy workshop where different cells and substances work together to repair the damage.

The new material not only copies the natural RH but also makes it better at healing. It does all the things that the RH does, like helping blood cells called platelets to work properly, making growth factors that help cells grow, and bringing the right cells to the injury to fix it.

Promising test results

The researchers tested this on animals, using the animals’ own blood to heal their bones, and it worked well.

In a University of Nottingham press release, research leader Alvaro Mata explains that instead of trying to make artificial versions of the body’s healing environment, they’re working with what’s already there.

Researcher Cosimo Ligorio adds that blood is readily available and using it like this could mean we could quickly and safely turn a patient’s blood into something that can heal them.

This method could lead to new ways to help tissues regenerate. The researchers hope to create a set of tools that doctors can use right in hospitals to make healing implants from blood. Doctors would be able to shape or even 3D print the implants.


New VR/AR interface uses AI to turn your skin into a touch-sensitive controller

Imagine a future where the way you control virtual or augmented reality (VR/AR) isn’t with a device in your hand, but by touching your own skin. This idea is becoming a reality with a new technology called EgoTouch, developed by researchers at Carnegie Mellon University. EgoTouch uses artificial intelligence (AI) to turn your skin into a touch-sensitive controller.

AI helps the AR/VR headset recognize when and how you touch your skin. Before EgoTouch, there was OmniTouch, which also aimed to let users control interfaces by touching their skin, but it needed a special camera that was big and cumbersome.

EgoTouch, however, uses the cameras that are already part of AR/VR headsets. Here’s how it works: when you touch your skin, it creates little shadows and slight changes in how your skin looks. These changes are what the AI model, trained by the researchers, looks for to detect touch. They collected data using a special sensor on the finger and palm, teaching the AI to understand different touches, like light or hard pressing, without needing to label each touch manually.
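A rough sketch of what such a model could look like (an assumption for illustration; the actual EgoTouch architecture may differ): a small convolutional network classifies fingertip crops from the headset camera as touching or hovering, with training labels supplied by the finger-mounted sensor rather than by human annotators:

```python
# Illustrative sketch (not CMU's published EgoTouch model): a tiny CNN
# that labels fingertip camera crops as "touch" vs "no touch".
import torch
import torch.nn as nn

class TouchNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)    # logits: touch / no touch

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TouchNet()
patches = torch.randn(8, 3, 64, 64)     # stand-in fingertip camera crops
labels = torch.randint(0, 2, (8,))      # stand-in labels from the pressure sensor
loss = nn.CrossEntropyLoss()(model(patches), labels)
loss.backward()                         # gradients for one training step
print(f"demo loss: {loss.item():.3f}")
```

The same setup extends naturally to more classes (press, lift, drag) or to regressing contact force, which is how light versus firm touches could be distinguished.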

Simple hand controls for VR/AR

This system can detect touch with over 96% accuracy, meaning it gets it right almost every time. It can tell if you’re pressing down, lifting off, or dragging your finger across your skin, and it’s smart enough to differentiate between a light touch and a firm one, which could be used like a right-click on a computer mouse.

EgoTouch is simple and broadly applicable. It works well across different skin types and conditions, though it’s less effective over bony parts like knuckles because the skin there doesn’t deform much when touched. The researchers are also looking into making EgoTouch work in the dark by using night vision technology.

With EgoTouch, the future of interacting with VR/AR might just be at our fingertips, quite literally, as we might soon control virtual environments simply by touching our own skin.


Light and sound entangled for better quantum technology

Quantum entanglement is when two particles, like photons (particles of light) or phonons (quasiparticles of sound), are connected in such a way that what happens to one immediately affects the other, no matter how far apart they are.

This connection, which Einstein called “spooky action at a distance,” is key for new technologies like secure communication that can’t be hacked, and for powerful quantum computers that leverage weird quantum effects.

Scientists at the Max Planck Institute have found a new way to entangle photons with phonons, the quasiparticles that carry sound.

Normally, entanglement is sensitive to things like heat or vibration (noise) from outside, which can mess up the quantum states. But this new method is tough; it can keep the entanglement stable even when there’s noise around.

The scientists have described the methods and results of this study in a paper published in Physical Review Letters. An open preprint was previously published on arXiv.

The process they use is called Brillouin scattering, where light (photons) interacts with sound waves (phonons) in a way that links their quantum states. Think of it like a dance where light and sound waves move together in a special structure called a photonic waveguide. Here, sound travels much more slowly than light, but because the photons and phonons are entangled, measurements on one are instantly correlated with measurements on the other.
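In textbook form (a standard description of Brillouin-Stokes scattering, given here as background rather than the paper’s exact notation), the interaction that produces the entanglement is a two-mode squeezing Hamiltonian,

$$\hat{H}_{\mathrm{int}} = \hbar g \left( \hat{a}^{\dagger}\hat{b}^{\dagger} + \hat{a}\hat{b} \right),$$

where $\hat{a}^{\dagger}$ creates a Stokes photon, $\hat{b}^{\dagger}$ creates a phonon, and $g$ is a coupling rate set by the pump laser. Evolving the vacuum under this Hamiltonian yields a two-mode squeezed state whose photon and phonon numbers are perfectly correlated, which is exactly the entanglement resource described above.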

Robust entangled states

What makes this research stand out is that it works at higher temperatures, around tens of degrees above absolute zero, rather than needing ultra-cold conditions that are expensive to maintain. This makes it practical for real-world applications, because it could be integrated into everyday devices like optical fibers or small chips used in quantum tech.

The findings open up possibilities for using entanglement in more practical, less fragile quantum technologies, potentially making quantum communication and computing more accessible and efficient.

There is “great potential for applications in quantum computation, quantum storage, quantum metrology, quantum teleportation, entanglement-assisted quantum communication, and the exploration of the boundary between classical and quantum worlds,” the scientists conclude in the preprint.


Generative AI for genomes

A study from UC Berkeley researchers published in Science introduces Evo, a machine learning model designed to work with DNA, RNA, and proteins.

DNA is like a long string made up of four different letters or nucleotides (A, T, C, G) that carry all the instructions needed for life. This DNA sequence can change, and these changes can help organisms adapt to new conditions, driving what we call evolution.

Evo stands out because it can predict how changes in DNA might affect cells and even design new DNA sequences to change how cells work. This could be very useful for creating new treatments for diseases.

Evo is a promising example of generative Artificial Intelligence (AI) for DNA.

A related Perspective, also published in Science (this X post has parts of the text), notes that “The ability to predict the effects of mutations across all layers of regulation in the cell and to design DNA sequences to manipulate cell function would have tremendous diagnostic and therapeutic implications for disease.”

The researchers explain that Evo is not just another machine learning model; it’s a “large-scale genomic foundation model” with 7 billion parameters.

Evo can evaluate and design DNA changes

Evo was trained on a huge dataset containing 2.7 million different microbial genomes. Microbial genomes are the complete set of DNA in bacteria and other tiny organisms. This training allows Evo to be very good at both predicting what happens when DNA changes (like mutations in bacteria) and creating new DNA sequences. For example, it can predict how a change in DNA might affect proteins or how genes are turned on or off, which is known as gene regulation.
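As a hypothetical illustration of how such a model can score mutations (an assumption about the interface; the model identifier below is a placeholder, and the actual Evo release may be packaged differently), one can compare the log-likelihood a genomic language model assigns to a wild-type sequence against a mutated one:

```python
# Hypothetical sketch of mutation-effect scoring with a genomic language
# model. The model name is a placeholder, not Evo's actual distribution.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "some-org/genomic-causal-lm"  # placeholder identifier
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

def log_likelihood(seq: str) -> float:
    """Total log-probability the model assigns to a DNA string."""
    ids = tok(seq, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return -out.loss.item() * ids.shape[1]   # mean NLL * length, negated

wild_type = "ATGGCGTTAGCCTGA"                # short toy sequence
mutant = wild_type.replace("GCG", "GAG", 1)  # a single-codon change

# A mutant the model scores as much less likely than the wild type is
# predicted to be more disruptive to the organism.
print(log_likelihood(wild_type) - log_likelihood(mutant))
```

The same likelihood machinery, run in generation mode instead of scoring mode, is what allows models of this kind to propose new DNA sequences.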

One of the most impressive things about Evo is its ability to generate DNA sequences that are very long – over 1 million bases (a base is one of the nucleotides, A, T, C, or G). This is much longer than what previous models could do, making Evo capable of handling tasks at a whole-genome scale.

The study suggests that future versions of models like Evo might use even more diverse data, including from humans, to understand how very distant parts of DNA interact with each other over large distances within the genome. This could lead to even more advanced tools for biology and medicine by capturing the full complexity of life’s genetic blueprint.


Robot vision via radio waves

Robots often struggle to see and navigate in harsh conditions like fog or smoke because their usual vision tools, like cameras or LiDAR, depend on light. LiDAR works by sending out laser light and measuring the time it takes for that light to bounce back, but it doesn’t work well when visibility is poor.
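The relation behind LiDAR ranging is the standard round-trip time-of-flight formula,

$$d = \frac{c\,\Delta t}{2}, \qquad \text{e.g.}\ \Delta t = 66.7\ \text{ns} \ \Rightarrow\ d \approx \frac{(3\times 10^{8}\ \text{m/s})(6.67\times 10^{-8}\ \text{s})}{2} \approx 10\ \text{m},$$

and radar ranging uses the same formula with radio waves in place of laser light.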

In nature, animals like bats use sound echoes to find their way in the dark, and sharks sense electrical fields to locate prey. These examples show that there are other ways to “see” besides light.

Researchers at the University of Pennsylvania have developed PanoRadar, which uses radio waves. Radio waves are much longer than light waves, so they can go through smoke, fog, and even some solid materials. Traditional radar, which also uses radio waves, gives rough images but can see through obstacles. PanoRadar takes this a step further by producing detailed 3D images.

Very clear pictures without light

PanoRadar works like a lighthouse, spinning around and sending out radio waves, which then bounce back to give information about the surroundings. Unlike the simple radar, PanoRadar uses smart math and AI to look at these reflections from many angles at once. This process creates a very clear picture, almost as good as what you’d get from LiDAR, but without needing light.
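The underlying trick resembles synthetic-aperture imaging: echoes recorded at many positions along the rotation are combined coherently, so that they reinforce at the true target location and cancel elsewhere. The toy delay-and-sum sketch below illustrates the principle (an illustrative stand-in, not UPenn’s actual pipeline; the 77 GHz frequency and sweep radius are assumptions):

```python
# Toy delay-and-sum imaging: coherent combination of echoes from many
# rotation angles, the principle behind synthetic-aperture sharpening.
import numpy as np

c = 3e8                       # speed of light, m/s
f = 77e9                      # assumed mmWave radar frequency, Hz
lam = c / f                   # wavelength, ~3.9 mm
angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
radius = 0.05                 # assumed antenna sweep radius, m

target = np.array([2.0, 1.0])                       # true target position, m
ant = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
d = np.linalg.norm(ant - target, axis=1)
echo = np.exp(-2j * np.pi * (2 * d) / lam)          # round-trip echo phase

def image_point(p):
    """Back-project all echoes to candidate pixel p and sum coherently."""
    dp = np.linalg.norm(ant - p, axis=1)
    return abs(np.sum(echo * np.exp(2j * np.pi * (2 * dp) / lam)))

print(image_point(target))                   # strong coherent peak on target
print(image_point(np.array([2.0, 1.3])))     # echoes decohere off target
```

The “smart math and AI” in PanoRadar go well beyond this sketch, but the coherent-combination idea is why a spinning radio sensor can approach LiDAR-like resolution.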

A challenge was making sure the images stay sharp even when the robot moves. Another challenge was to teach PanoRadar to recognize patterns in indoor spaces to understand what it’s seeing, much like how a child learns to recognize objects.

PanoRadar has been tested in real buildings, where it showed it could see through smoke and even see things like glass walls that LiDAR usually misses. This ability to detect finer details, like people, makes it useful for tasks where seeing clearly through tough conditions could save lives, like in rescue missions or with self-driving cars.

The future might see PanoRadar working alongside other sensors, like cameras and LiDAR, to make even smarter robots that can tackle any environment by using the best features of each sensing method.


Test of a quantum internet that transmits information as qubits

Scientists from QuTech (a collaboration between TU Delft and TNO) have connected quantum computers over a distance of 25 kilometers between Delft and The Hague, using regular internet fiber.

This connection between quantum computers is a step towards a quantum internet that transmits information as qubits.

This quantum internet could do things our current internet can’t, like making unbreakable codes for secure data sharing or connecting faraway quantum computers to work together, providing privacy and enhanced computing power.

“The distance over which we create quantum entanglement in this project, via 25 km of deployed underground fiber, is a record for quantum processors,” says research leader Ronald Hanson in a TU Delft press release. He adds that this is the first successful test to establish a quantum link between quantum processors in different cities.

Cartographic layout of the distant quantum link and the route of the deployed fiber bundle, with similar quantum processor nodes in Delft and The Hague (Credit: Science Advances).

The scientists describe the methods and results of this project in a paper published in Science Advances.

Making this work outside the lab was challenging. The scientists had to preserve the fragile quantum signals carried by photons, which is tricky over long distances. They used active stabilization to keep the optical path through the fiber constant to within a fraction of the photons’ wavelength.

“The link needed to be stable well within the wavelength of the photons (smaller than a micrometer) over 25 kilometer of optical fiber,” says researcher Arian Stolk. “That challenge compares to keeping the distance between the earth and the moon constant with the accuracy of only a few millimeter.” 
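A quick check of that analogy (taking “well within the wavelength” to mean roughly a quarter of a micrometer, which is an assumption): the required fractional stability is

$$\frac{0.25\ \mu\text{m}}{25\ \text{km}} = 10^{-11}, \qquad 10^{-11} \times 384{,}400\ \text{km} \approx 4\ \text{mm},$$

so holding the link that steady is indeed like pinning the Earth–Moon distance to a few millimeters.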

Toward a quantum internet for Europe

This project required a mix of academic knowledge and practical industry help. Various experts contributed, from developing frequency converters to providing the necessary fiber infrastructure. This collaboration shows how important partnerships between universities and companies are for advancing technology.

The success of this experiment lays a strong foundation for Europe’s quantum internet ambitions. It not only demonstrates the feasibility of quantum networking over metropolitan areas but also shows how current internet infrastructure can be adapted for quantum use. This work is a stepping stone towards a real quantum internet, potentially revolutionizing how we communicate and compute in the future.


Vortices of light could transport huge amounts of data

Information in today’s digital world is often sent through light beams in optic cables. As we need to send more data, finding new ways to pack information into light is crucial. Researchers at Aalto University have developed a new way to do this by creating light vortices, which are like tiny, controlled hurricanes of light that can carry data.

Light vortices are beams where the center is dark, surrounded by a bright ring of light. This happens because the electric field of the light points in different directions, much like winds in a hurricane. The researchers used metal nanoparticles that interact with an electric field.
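In the standard textbook description of an optical vortex (background, not this paper’s specific result), the field carries a phase that winds around the beam axis,

$$E(r, \varphi) \propto A(r)\, e^{i\ell\varphi},$$

where the integer $\ell$ is the topological charge. The phase is undefined on the axis, so the intensity must vanish there, which produces the dark core, and each photon carries orbital angular momentum $\ell\hbar$. In principle, different values of $\ell$ can act as independent data channels, which is what makes vortices attractive for packing more information into light.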

A paper published in Nature Communications describes the methods and results of this study.

The researchers arranged 100,000 metallic nanoparticles in a complex pattern called a quasicrystal. Quasicrystals are patterns that are not perfectly ordered like a grid but also not entirely random, sitting somewhere between order and chaos.

The trick was to place these particles in areas where the electric field was weakest. By doing this, they could isolate and enhance the parts of the light field they wanted to use for creating special vortices.

Potential for delivering more information through fiber cables

This method could revolutionize how we send information through light. For instance, by sending these light vortices through fiber optic cables, one could potentially fit more data into the same space compared to current methods.

“We could, for example, send these vortices down optic fibre cables and unpack them at the destination,” says researcher Kristian Arjas in a press release issued by Aalto University. “This would allow us to store our information into a much smaller space and transmit much more information at once. An optimistic guess for how much would be 8 to 16 times the information we can now deliver over optic fibre.”

However, turning this discovery into something usable in everyday technology might still take some years because it needs a lot more engineering work.


DeepMind makes AlphaFold3 open source

Google DeepMind has made AlphaFold3, an AI model for predicting protein structures, available as open-source for non-commercial use, Nature News reports.

AlphaFold predicts how proteins fold into their three-dimensional shapes, which is crucial for understanding their function.

This decision came after some controversy over the initial release of the model without its source code. AlphaFold3 is an upgrade from its earlier versions, like AlphaFold2, because it can predict not only protein structures but also how proteins interact with other molecules, which is vital for drug discovery.

Previously, DeepMind made AlphaFold3 accessible only through a web server, which limited the types of predictions users could make, particularly regarding how proteins interact with drugs.

Scientists criticized this method because it hindered reproducibility – the idea that scientific results should be able to be replicated by others to verify their validity.

Scientists argued that for research to be trustworthy, the underlying code should be available. This led DeepMind to promise the release of the code within six months.

The importance of open source code for AI

Now, with the code released, researchers can explore how proteins might interact with potential drugs, aiding in drug discovery.
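For readers who want to experiment with the released code, jobs are described in a JSON input file; the sketch below shows the rough shape of such an input built as a Python dict (an assumption based on the public repository’s documented format; consult its README for the authoritative field names):

```python
# Rough sketch of an AlphaFold3-style input job (field names are an
# assumption; check the official repository's documentation).
import json

job = {
    "name": "demo_fold",
    "modelSeeds": [1],
    "sequences": [
        {"protein": {"id": "A", "sequence": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"}}
    ],
    "dialect": "alphafold3",
    "version": 1,
}
print(json.dumps(job, indent=2))
```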

However, the model weights, which are the learned parameters that allow the AI to make predictions, are still only available to academic users upon request.

The push for openness in AI models stems from the scientific community’s desire for transparency and verifiability in research. This openness allows for further innovation and application of the technology in unexpected ways. In fact, researchers used previous versions of AlphaFold in creative ways like designing new proteins for medical targets.

“We’re very excited to see what people do with this,” says AlphaFold project leader John Jumper, as reported by Nature News. Jumper won the 2024 Nobel Prize in Chemistry with DeepMind CEO Demis Hassabis for their work on AlphaFold2. “People will use it in weird ways,” he adds. “Sometimes it will fail and sometimes it will succeed.”
