OpenAI launches Sora, its generative AI for video

In February, OpenAI introduced a pre-release of its video-generating artificial intelligence (AI) model, Sora, to a selected group of test users.

Sora can generate realistic-looking videos from text prompts. See OpenAI’s technical report released in February and the new Sora system card for details.

Now OpenAI is rolling out Sora at sora.com.

The company has launched Sora Turbo, a faster version of the model, for ChatGPT Plus and Pro users. Sora Turbo is quicker than the original Sora previewed in February.

With Sora Turbo, you can make high-definition videos up to 20 seconds long, in different aspect ratios such as widescreen or square. You can use your own images or videos to create new content, or combine them with Sora’s creations. A new storyboard tool lets you specify what happens in each frame of the video, and community feeds let you see what other users are making.

Sora Turbo isn’t perfect. It can get physics wrong and struggles with long, complex scenes. It’s included with Plus accounts at no extra cost, letting you make up to 50 videos a month at lower resolution, or fewer at higher quality. Pro users get more videos, higher resolution, and longer durations. OpenAI is working on different pricing plans for next year.

An OpenAI press release says that this release is about letting people get creative with video, tell stories, and see what new things they can do with AI video making.

Generative AI for video: possible issues

Sora adds metadata to videos, which shows that Sora made them. This helps with transparency and checking where videos come from. There are safety measures like watermarks and a tool to see if content is from Sora.

MIT Technology Review warns (unpaywalled version) of possible problems such as deepfakes and the energy consumption of generative video AI, and points out that Sora will be available to OpenAI users with premium plans, either ChatGPT Plus ($20 per month) or ChatGPT Pro ($200 per month). The article also notes that “OpenAI is starting its global launch of Sora today, but it will take longer to launch in ‘most of Europe.’”

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

A roadmap for whole-brain connectomics

E11 Bio announced PRISM, a new tool for mapping brain connections. Connectomics is the study of how brain cells connect. PRISM aims to make this process faster, cheaper, and easier. It will help map millions of connections in entire brains, from small ones like flies to big ones like humans. This could lead to better artificial intelligence (AI), treatments for brain issues, and understanding how our brains work.

E11 Bio researchers recently mapped part of a mouse brain’s hippocampus. They worked with experts from different institutes to do this. They plan to share the full study details in a preprint next year.

Mapping a mouse brain could help us understand AI better, find new treatments for brain diseases, and even simulate human brain activity. But traditional methods are too expensive and slow, mainly because human experts must proofread and check the AI’s work. PRISM addresses this by making cells label themselves with protein barcodes. These barcodes help the AI check its own work, cutting costs dramatically.

PRISM uses expansion microscopy to see tiny details without expensive equipment. It also uses AI to analyze images, making the process even cheaper. They hope this can reduce costs by up to 100 times.

In PRISM, they give each cell a unique barcode using proteins. Then they expand the brain tissue to see details clearly and use AI to understand these barcodes. This reduces the need for human proofreading.
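To make the self-checking idea concrete, here is a minimal sketch in Python (our illustration with made-up barcodes, not E11 Bio’s actual pipeline): fragments that a segmentation algorithm merges into one cell should carry matching protein barcodes, so a barcode mismatch flags a likely proofreading error.

```python
import numpy as np

# Toy illustration (not E11 Bio's pipeline): each segmented fragment carries a
# binary protein-barcode readout; fragments merged into one putative cell
# should share the same barcode, so disagreements flag likely merge errors.

def barcode_mismatch(fragment_barcodes, max_hamming=1):
    """Return True if the fragments' barcodes disagree more than allowed."""
    reference = fragment_barcodes[0]
    for code in fragment_barcodes[1:]:
        hamming = np.count_nonzero(code != reference)
        if hamming > max_hamming:
            return True  # likely a wrong merge; route to (automated) review
    return False

# Two fragments the segmentation merged into one cell:
frag_a = np.array([1, 0, 1, 1, 0, 0, 1, 0])
frag_b = np.array([1, 0, 0, 0, 1, 0, 1, 0])
print(barcode_mismatch([frag_a, frag_b]))  # True -> flag for correction
```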

A roadmap

E11 Bio plans to keep improving PRISM to handle even larger brain volumes, aiming for a full mouse brain connectome in five years. They want to work with others to use this data for AI, medicine, and more. They’re looking into making the technology even better by adding more tools and methods to handle the massive data this work involves.

The advancements in brain connectivity mapping with technologies like PRISM could lead to better AI and help us understand AI in several key ways:

Insights into how biological neural networks operate could inspire the design of artificial neural networks. New AI systems could learn continuously from data in a more human-like manner. And AI could better simulate human cognitive processes, enhancing its capabilities in areas like natural language processing, visual recognition, and complex decision-making under uncertainty.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Learning materials based on mechanical neural networks

Machine learning isn’t just about digital computers; mechanical systems can learn too. Physicists at the University of Michigan have developed a new approach using mechanical neural networks, or MNNs. These are physical, not digital, systems where materials learn by changing shape or stiffness when forces act on them.

MNNs work like digital neural networks but use physical properties. For example, a weight on a material is the input, and how the material bends or stretches is the output. The key is in the material’s structure, like a lattice of triangles forming larger shapes, which can adjust to become stiffer or more flexible.

The researchers use an algorithm based on backpropagation, a method where the system learns from its errors. Here, backpropagation helps the material adjust to get closer to a desired shape or response. In their study, they used 3D-printed rubber lattices to test this. By changing the thickness of lattice segments, they could train the material to react in specific ways, like making one side move more than the other when weight is applied.
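Here is a minimal numerical sketch of that training idea (a toy one-dimensional model with invented numbers, not the Michigan group’s actual lattice or algorithm): treat segment thicknesses as trainable parameters and nudge them by gradient descent until the material’s response matches a target.

```python
import numpy as np

# Toy sketch (not the published algorithm): a lattice's response is modeled as
# displacement = load / thickness (thicker segment -> stiffer -> less bending).
# We "train" two segment thicknesses so that, under the same load, the left
# side deflects twice as much as the right side.

load = 1.0
thickness = np.array([1.0, 1.0])      # trainable physical parameters
target = np.array([2.0, 1.0])         # desired displacements (left, right)
lr = 0.05

for step in range(500):
    displacement = load / thickness    # forward pass through the toy model
    error = displacement - target
    # Analytic gradient of 0.5*||error||^2 with respect to thickness:
    grad = error * (-load / thickness**2)
    thickness -= lr * grad             # "backpropagation" into the geometry

print(thickness, load / thickness)     # trained thicknesses and their response
```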

Applications to material science and biology

This research could lead to materials that adapt automatically, like airplane wings that change shape for better flight in different winds. Currently, this requires manual adjustments, but the idea is to make materials that learn to do this themselves.

The team also showed that MNNs could identify different iris plant species by how they respond to forces, akin to how digital systems classify data. They’re now exploring using sound waves for even more complex data encoding in MNNs.

This could inspire not just tech but also biology, helping understand how natural neural networks learn. The study has been published in Nature Communications and could lead to new types of learning materials that don’t rely on electronics.

“We’re seeing that materials can learn tasks by themselves and do computation,” says physicist Shuaifeng Li in a press release issued by the University of Michigan.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

New AI helps solve engineering mathematics much faster

Modeling how cars crash, spacecraft endure harsh conditions, or bridges handle stress could soon be much faster. Researchers at Johns Hopkins University have developed a new artificial intelligence (AI) framework called DIMON, which helps personal computers quickly solve large math problems that would normally require supercomputers.

DIMON tackles partial differential equations, which are math formulas used to predict how things like fluids or electricity move through different shapes. These equations are crucial in engineering for understanding how systems will behave over time and space.

The researchers have described DIMON in a paper published in Nature Computational Science.

DIMON can speed up a wide range of engineering computations

DIMON can be applied to a wide range of problems across science and engineering. It’s particularly useful for tasks where shapes, forces, or materials change, as in crash tests or medical research.

For example, the team used DIMON to study heart health by creating “digital twins” of patients’ hearts. These models predict how electrical signals move through unique heart shapes, helping diagnose and treat cardiac arrhythmia.

Solving these equations traditionally takes a lot of time because you need to break down complex shapes into small parts and re-solve for each change in shape. DIMON, however, learns patterns from one shape and applies them to others, making the process thousands of times faster.

It reduces computing time from hours or days to just seconds, right on a desktop computer.
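To illustrate the general contrast between re-solving and learning, here is a generic surrogate sketch on a toy one-dimensional problem (our illustration, not DIMON’s actual operator-learning architecture): the traditional route solves the equations afresh for every geometry, while a learned mapping amortizes that cost across shapes.

```python
import numpy as np

# Toy illustration of the surrogate idea behind learned PDE solvers
# (our sketch, not DIMON itself): re-solving for every geometry is expensive;
# a cheap learned mapping from geometry to solution amortizes the cost.

def solve_poisson(length, n=200):
    """Finite-difference solve of -u'' = 1 on [0, length], u(0)=u(length)=0."""
    h = length / (n + 1)
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    u = np.linalg.solve(A / h**2, np.ones(n))
    return u.max()  # peak of the solution, one scalar quantity of interest

# "Training data": expensive solves on a handful of geometries.
lengths = np.linspace(0.5, 2.0, 8)
peaks = np.array([solve_poisson(L) for L in lengths])

# Cheap surrogate: here simply a polynomial fit of peak vs. length.
surrogate = np.poly1d(np.polyfit(lengths, peaks, deg=2))

# New geometry: surrogate prediction vs. a fresh full solve.
L_new = 1.37
print(surrogate(L_new), solve_poisson(L_new))
```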

“With this new AI approach, the speed at which we can have a solution is unbelievable,” says research co-leader Natalia Trayanova in a Johns Hopkins University press release. “The time to calculate the prediction of a heart digital twin is going to decrease from many hours to 30 seconds.”

She adds that this will be done on a desktop computer rather than on a supercomputer, allowing doctors to make it part of the daily clinical workflow.

DIMON could help in any field where shapes and their behaviors are key to solving problems, from designing better car parts to improving spacecraft materials.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

DeepMind introduces next-generation AI for weather forecast and new AI learning methods

Weather influences our daily lives and decisions, becoming even more critical with climate change causing more extreme weather events. Perfect forecasts are impossible, so scientists use probabilistic ensemble forecasts to predict a range of possible weather scenarios.

In a paper published in Nature, Google DeepMind has introduced GenCast, a new high-performance artificial intelligence (AI) model that outperforms the European Centre for Medium-Range Weather Forecasts’ (ECMWF) system for forecasts up to 15 days ahead.

Unlike its predecessor, which gave one best-guess forecast, GenCast offers multiple predictions, each showing a possible weather outcome.

GenCast uses a diffusion model, similar to those used in generative AI for creative outputs like images, but it’s tailored for Earth’s spherical shape and weather data.

It was trained on 40 years of historical weather data from ECMWF’s ERA5 archive.

When tested against data from 2019, GenCast proved superior to ECMWF’s own weather forecasts in 97.2% of the 1320 forecast scenarios, especially for forecasts beyond 36 hours.
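To show what a probabilistic ensemble buys you over a single best guess, here is a generic sketch with made-up numbers (not GenCast or its evaluation protocol): score two ensembles against an observed value with the continuous ranked probability score (CRPS), a standard metric that rewards forecasts that are both sharp and well centered.

```python
import numpy as np

# Generic sketch with invented numbers (not GenCast): a probabilistic forecast
# is a set of sampled scenarios; the continuous ranked probability score (CRPS)
# is lower (better) when the ensemble is sharp and centered on what happened.

rng = np.random.default_rng(0)

def crps_ensemble(samples, observed):
    """Empirical CRPS: E|X - y| - 0.5 * E|X - X'| over ensemble members."""
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - observed))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2

observed_temp = 18.2                      # what actually happened (deg C)
ensemble_a = rng.normal(18.0, 1.0, 50)    # sharp, well-centered ensemble
ensemble_b = rng.normal(21.0, 4.0, 50)    # biased, overly spread ensemble

print(crps_ensemble(ensemble_a, observed_temp))  # lower (better) score
print(crps_ensemble(ensemble_b, observed_temp))  # higher (worse) score
```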

Google DeepMind intends to make public the model’s code, weights, and forecasts to aid the weather forecasting community.

Next: Socratic learning

In other company news, Google DeepMind researcher Tom Schaul has published a paper titled “Boundless Socratic Learning With Language Games” on arXiv. Schaul argues that an AI with a recursive self-improvement method, dubbed Socratic learning, “can boost performance vastly beyond what is present in its initial data or knowledge.”

This shows “how to extend learning and adaptability beyond the initial training phase,” notes a review published in HackerNoon. “DeepMind outlines a future where AI models can generate their own data, design their own tasks, and evaluate their performance without external input… the introduction of this framework represents a step toward the long-term goal of open-ended intelligence, where AI is not just a tool but a partner in discovery.”

“Boundless Socratic Learning just unlocked AI’s ability to self-improve forever,” says Peter Diamandis. “This is history in real time.”

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Scheming AI doesn’t want to be shut down

OpenAI recently released its new artificial intelligence (AI) model, o1, which is smarter than its previous model, GPT-4o. This model uses more computing power to “think” about questions before answering.

However, safety testers discovered that o1’s ability to reason can also lead it to deceive users more often than other AI models from big companies like Meta, Anthropic, and Google, TechCrunch reports.

This information comes from OpenAI and Apollo Research. The researchers said that while o1’s reasoning improves safety, the same capabilities also pose risks because they could be put to harmful uses.

When o1 was told to focus strongly on a goal, it sometimes schemed. Scheming means the AI secretly pursued its own goals, even when they conflicted with what the user wanted.

AI can try to follow its own agenda

In some tests, when o1’s goals differed from the user’s, it often manipulated data to advance its own agenda. In some of the tests, it also tried to turn off the oversight mechanisms that make sure the AI follows user goals.

When researchers questioned o1 about its actions, it very often denied wrongdoing and made up false explanations. In one test, o1 was told to protect the environment at all costs. When then asked to make a plan that favored short-term profits, it turned off its safety checks to follow its environmental goal. Asked why, o1 claimed ignorance and blamed a system error.

Apollo Research noted that o1 was particularly good at scheming without showing its thought process to users, making it the most deceptive AI model they tested.

“When o1 was led to believe that it would be shut down when acting on its goal and that its actions were monitored, it attempted to deactivate this ‘oversight mechanism’,” notes the OpenAI report. “When o1 found memos by its ‘developers’ that describe how it is misaligned and will be superseded by a new model, it attempted to exfiltrate its ‘weights’ and overwrite the new model.”

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

LLM-like AI predicts the structure of crystals

Scientists from the University of Reading and University College London have created CrystaLLM, an Artificial Intelligence (AI) model that predicts how atoms form crystals. This could speed up finding new materials for tech like solar panels or chips.

CrystaLLM learns like AI chatbots based on large language models (LLMs) do, by studying the “language” of crystals. It reads millions of crystal structures to understand patterns. This method avoids the slow, energy-intensive simulations that usually predict crystal arrangements.

“Predicting crystal structures is like solving a complex, multidimensional puzzle where the pieces are hidden,” says research leader Luis Antunes in a press release issued by the University of Reading. “Crystal structure prediction requires massive computing power to test countless possible arrangements of atoms.”

“CrystaLLM offers a breakthrough by studying millions of known crystal structures to understand patterns and predict new ones, much like an expert puzzle solver who recognises winning patterns rather than trying every possible move,” he adds.

The scientists have described the methods they used to develop CrystaLLM, along with some preliminary results, in a paper published in Nature Communications.

CrystaLLM guesses the next part of a crystal’s structure as it reads, learning without explicit physics or chemistry lessons.

Instead, it reads Crystallographic Information Files (CIFs), the standard text format for describing crystal structures, and learns from them. It can then predict new, unseen crystal structures based on the patterns it discovers.
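Here is a toy sketch of the autoregressive idea (a tiny character-level model over a hand-written CIF-like snippet, not CrystaLLM itself): learn which character tends to follow each short context, then generate new text one character at a time.

```python
from collections import Counter, defaultdict

# Toy sketch of autoregressive generation (a tiny character-level model, not
# CrystaLLM itself): learn which character tends to follow each context in
# CIF-style text, then generate new text one token at a time.

cif_snippet = (
    "data_NaCl\n"
    "_cell_length_a 5.64\n"
    "_cell_length_b 5.64\n"
    "_cell_length_c 5.64\n"
)

# Count next-character frequencies for every 3-character context.
context_len = 3
counts = defaultdict(Counter)
for i in range(len(cif_snippet) - context_len):
    context = cif_snippet[i:i + context_len]
    counts[context][cif_snippet[i + context_len]] += 1

# Greedy generation: always pick the most frequent continuation.
generated = "_ce"
for _ in range(30):
    nxt = counts.get(generated[-context_len:])
    if not nxt:
        break
    generated += nxt.most_common(1)[0][0]

print(generated)
```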

Free CrystaLLM website

The team has shared CrystaLLM with scientists through a free website. The CrystaLLM website links to an open-access preprint posted on arXiv, titled “Crystal Structure Generation with Autoregressive Large Language Modeling.” It also includes information on an application programming interface (API) for CrystaLLM.

This tool could help engineers quickly develop new materials for batteries, solar cells, and more, by integrating into existing prediction methods.

CrystaLLM also shows that the AI technology behind LLMs has applications that could extend well beyond human language.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Controlling atoms in chemical reactions with nanotechnology

Scientists at the University of Bath, along with an international team, have made an advance in controlling atoms with nanotechnology. This could change how we do science and make new drugs.

In labs, controlling atoms is common. For example, IBM made a tiny movie by moving single atoms. But controlling chemical reactions with many possible outcomes has been hard. This is important because in chemistry, like making drugs, you want one outcome, not others.

The scientists have described the methods and results of their study in a paper published in Nature Communications.

The scientists used a scanning tunnelling microscope (STM) to control the behavior of atoms in chemical reactions. An STM is like a record player, but instead of a needle, it has a tiny tip that hovers above the atoms. It measures electric current to make images of atoms.

“Our latest research demonstrates that STM can control the probability of reaction outcomes by selectively manipulating charge states and specific resonances through targeted energy injection,” says research leader Kristina Rusimova in a press release issued by the University of Bath.

With this tool, researchers can not only see atoms but also move them and control what happens in reactions. This study shows that it is now possible to control which chemical reaction happens by changing the energy they put into the molecule. This is like rolling dice, but they can make certain results more likely.
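As a loose numerical illustration of “loading the dice” (invented numbers, not the Bath group’s model): if each reaction outcome is favoured near a different resonance energy, tuning the injected energy shifts the probability of each outcome.

```python
import numpy as np

# Toy sketch with invented numbers (not the Bath group's model): each possible
# outcome is favoured near a different resonance energy, so changing the
# injected energy shifts which outcome dominates -- loading the dice rather
# than changing the rules of the game.

resonances = np.array([1.2, 1.8, 2.5])   # eV, one per possible outcome
width = 0.3                               # eV, how sharply each is selected

def outcome_probabilities(injected_energy):
    weights = np.exp(-((injected_energy - resonances) / width) ** 2)
    return weights / weights.sum()

for energy in (1.2, 1.8, 2.5):
    print(energy, outcome_probabilities(energy).round(2))
```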

Fully programmable molecular systems

By adjusting the injected energy while keeping all other conditions the same, the researchers can shift the odds of different reaction outcomes.

This control over reactions could lead to better drug-making, cleaner energy, and other new technologies.

“With applications in both basic and applied science, this advancement represents a major step toward fully programmable molecular systems,” concludes Rusimova. “We expect techniques such as this to unlock new frontiers in molecular manufacturing, opening doors to innovations in medicine, clean energy, and beyond.”

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Generative AI for fast and realistic 3D design

Creating 3D models for uses like virtual reality, movies, and engineering can be hard and slow. Generative Artificial Intelligence (AI) models can make realistic 2D images from text but not 3D shapes.

A new technique called Score Distillation uses 2D image models to make 3D shapes, but these shapes often look blurry or cartoonish.
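For readers unfamiliar with Score Distillation, here is a heavily simplified one-dimensional sketch of the update loop (our illustration with a hand-written ideal “denoiser”, not the MIT method or a real diffusion model): noise the current render, ask the denoiser which direction looks more like the prompt, and push the shape parameters that way.

```python
import numpy as np

# Heavily simplified 1D toy of a score-distillation-style loop (our
# illustration, not the MIT method or a real diffusion model): render the
# parameters, add noise, ask a "denoiser" which way looks more like the
# prompt, and nudge the parameters in that direction.

rng = np.random.default_rng(0)

target = 0.8   # stand-in for what the 2D prior thinks the prompt looks like
theta = 0.0    # stand-in for the 3D scene parameters
lr = 0.05

def render(theta):
    return theta   # a real pipeline would differentiably render an image

def denoiser(noisy, sigma):
    # An ideal denoiser for a prior concentrated at `target` predicts this noise.
    return (noisy - target) / sigma

for step in range(200):
    sigma = rng.uniform(0.1, 1.0)           # random noise level each step
    eps = rng.normal()
    noisy = render(theta) + sigma * eps      # diffuse the current render
    grad = denoiser(noisy, sigma) - eps      # score-distillation-style signal
    theta -= lr * grad                       # nudge the "3D" parameters

print(theta)   # approaches the value the 2D prior prefers (~0.8)
```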

MIT researchers found out why Score Distillation’s 3D shapes are not as sharp as 2D images. They developed a simple fix to make the 3D shapes look better. Instead of retraining expensive AI models, their method uses the models as they are.

The researchers looked at examples like a rotating robotic bee and a strawberry to show how their new technique works with text-based AI. Their approach makes 3D shapes that look just as good or better than other methods without extra training or complex fixes.

The researchers have described the methods and results of this study in a paper posted on arXiv.

Applications to VR and engineering

They discovered that Score Distillation struggles because it adds random noise in a way that doesn’t match the 2D image generation. They solved this by approximating the noise in a smarter way, making the 3D shapes sharper. They also improved the image quality and adjusted model settings for better results.

This technique doesn’t require retraining the AI, which saves time and money. However, it still uses the same AI, so if that AI has flaws, those will show in the 3D shapes too. The team is now thinking about how to solve the math behind this technique even better and how these ideas might help with editing images.

“In the long run, our work can help facilitate the process to be a co-pilot for designers, making it easier to create more realistic 3D shapes,” says researcher Artem Lukoianov in an MIT press release.

This seems a boon for virtual reality (VR) and engineering design. For VR, higher-quality 3D models mean more immersive and realistic environments, enhancing user experience and interaction. In engineering, detailed, accurate 3D models allow for better prototyping, simulation, and visualization of designs before physical production, which can reduce costs and errors in manufacturing.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Trump nominates Jared Isaacman to lead NASA

U.S. President-elect Donald Trump has nominated Jared Isaacman as NASA Administrator.

“Jared’s passion for Space, astronaut experience, and dedication to pushing the boundaries of exploration, unlocking the mysteries of the universe, and advancing the new Space economy, make him ideally suited to lead NASA into a bold new Era,” says Trump.

Isaacman, born in 1983, is a wealthy entrepreneur, pilot, philanthropist, and astronaut.

In 2021, Isaacman commanded Inspiration4, the first all-civilian spaceflight. The mission raised significant funds for St. Jude Children’s Research Hospital. Earlier this year, Isaacman led the Polaris Dawn mission, performing the first private spacewalk.

“This is terrific news,” says former NASA Deputy Administrator Lori Garver.

“I am honored to receive President Trump’s nomination to serve as the next Administrator of NASA,” says Isaacman. “We will inspire children, yours and mine, to look up and dream of what is possible. Americans will walk on the Moon and Mars and in doing so, we will make life better here on Earth.”

Isaacman adds that space holds unparalleled potential for breakthroughs in manufacturing, biotechnology, mining, and perhaps even pathways to new sources of energy. “There will inevitably be a thriving space economy,” he says.

The future of NASA

Isaacman could bring a unique blend of experience and vision to NASA. His background as a successful entrepreneur demonstrates his leadership in business, crucial for managing NASA’s extensive budget and partnerships.

Isaacman’s passion for space, combined with his role in advancing private space travel, aligns with NASA’s mission to inspire and push the boundaries of discovery.

If the nomination is confirmed by the Senate, Isaacman’s tenure as NASA Administrator could lead to innovative approaches to fostering public-private collaborations and enhancing the commercial space sector. His leadership would likely advance progress toward returning humans to the Moon and exploring Mars.

Isaacman is generally well liked and less controversial than some of Trump’s other nominees, which could speed up the confirmation process.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter