Artificial intelligence inspired by ChatGPT designs novel proteins that destroy germs

Profluent, a California-based biotech startup, has used a ChatGPT-like AI model to design novel antimicrobial proteins that have been shown to kill bacteria in the lab. The AI platform, ProGen, was trained on 280 million protein sequences to learn the language of proteins and can generate new proteins with a desired shape or function. The researchers selected five of the generated proteins and tested them against E. coli; two of them killed the bacteria. The results point to the potential of language-model AI for the precise design of proteins in fields including biology, medicine, and the environment. Proteins themselves are chains of amino acids that fold into complex structures, built from a set of 20 different amino acids that can be linked together in any order.
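For a sense of how such protein language models are used in practice, here is a minimal sketch of sampling candidate sequences from a ProGen-style autoregressive model via the Hugging Face transformers library. The checkpoint name, prompt, and sampling settings are placeholders for illustration, not the setup used in the study.

```python
# Hypothetical sketch: sampling protein sequences from a ProGen-style
# autoregressive language model. The checkpoint name and prompt below are
# placeholders, not the authors' actual pipeline.
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_ID = "some-org/protein-causal-lm"  # placeholder checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# Prompt with the first few residues (single-letter amino-acid codes);
# the model continues the sequence one residue at a time.
prompt = "MKV"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=200,      # roughly the length of a small enzyme
    do_sample=True,          # sample rather than greedy-decode for diversity
    top_p=0.95,
    temperature=0.8,
    num_return_sequences=5,  # the study screened a handful of candidates
)

for i, seq in enumerate(tokenizer.batch_decode(outputs, skip_special_tokens=True)):
    print(f"candidate {i + 1}: {seq}")
```

In the real workflow, generated candidates are filtered computationally and then synthesized and tested in the lab, as the researchers did with the five proteins they pitted against E. coli.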

Source.

Interesting story? Please click on the ? button below!

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Green hydrogen produced by splitting seawater

A group of international researchers led by the University of Adelaide has made a significant advance in the production of green hydrogen. The researchers split seawater into hydrogen and oxygen with nearly 100% efficiency, eliminating the need for pre-treatment steps such as reverse osmosis, purification, or alkalization. In a commercial electrolyzer, they used an inexpensive, non-precious-metal catalyst: cobalt oxide with chromium oxide on its surface. The results were comparable to those of platinum/iridium catalysts operating in purified, deionized water. Seawater is a nearly infinite resource and a natural feedstock electrolyte, making the approach well suited to regions with long coastlines and abundant sunlight. The team will now work on scaling the system up for commercial uses such as hydrogen generation for fuel cells and ammonia synthesis.
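If the "nearly 100% efficiency" figure refers to Faradaic efficiency, a standard way of reporting electrolysis performance, it means essentially all of the electric charge passed through the cell goes into making hydrogen rather than side reactions (such as chlorine evolution from the salt water). A back-of-the-envelope illustration with made-up numbers, not the paper's data:

```python
# Illustrative Faradaic-efficiency calculation for hydrogen evolution.
# The current and gas amounts below are invented example numbers, not
# measurements from the Adelaide study.
F = 96485.0           # Faraday constant, C/mol of electrons
ELECTRONS_PER_H2 = 2  # 2 H+ + 2 e- -> H2

current_A = 0.50           # example cell current (amperes)
duration_s = 3600.0        # one hour of electrolysis
h2_collected_mol = 0.0093  # example amount of H2 actually collected

charge_C = current_A * duration_s                 # total charge passed
ideal_h2_mol = charge_C / (ELECTRONS_PER_H2 * F)  # H2 if every electron made H2
faradaic_efficiency = h2_collected_mol / ideal_h2_mol

print(f"ideal H2: {ideal_h2_mol:.4f} mol")
print(f"Faradaic efficiency: {faradaic_efficiency:.1%}")
```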

Source.

Interesting story? Please click on the ? button below!

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Researchers claim that artificial intelligence may reach the Singularity within the next seven years

Based on its “Time to Edit” (TTE) metric, which measures how long it takes a professional human translator to edit a translation produced by another human or by an AI, Translated.com predicts that artificial intelligence (AI) will reach the “singularity” within seven years. TTE for AI-translated text has fallen steadily over the years; extrapolating that trend, the company predicts AI will reach the singularity, the point at which editing machine output takes no more time than editing a human translation. The team believes that language translation is the problem closest to Artificial General Intelligence (AGI), and that translating on par with humans would be a major AI milestone, even if machines will not be free of human control by 2030. According to Translated’s CEO, this is the first time the speed of AI’s progress toward the singularity has been quantified.
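The prediction is essentially an extrapolation exercise: track average TTE for machine-translated text over time and estimate when the trend line meets the time needed to edit a human translation. A toy version with invented numbers follows; Translated's real dataset and fitting method differ.

```python
# Toy extrapolation of a Time-to-Edit (TTE) trend. The yearly values are
# invented placeholders, not Translated's measurements.
import numpy as np

years = np.array([2015, 2017, 2019, 2021, 2023])
tte_seconds_per_word = np.array([3.4, 3.0, 2.6, 2.2, 1.9])  # editing MT output
human_baseline = 1.0  # assumed TTE when editing a human translation

# Linear fit: TTE ~ slope * year + intercept
slope, intercept = np.polyfit(years, tte_seconds_per_word, 1)

# Year at which the fitted line reaches the human baseline
crossing_year = (human_baseline - intercept) / slope
print(f"trend reaches human parity around {crossing_year:.0f}")
```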

Source.

Interesting story? Please click on the ? button below!

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

“In the AGI era, effective international collaboration is key,” says Esti Peshin, VP and General Manager of the Israel Aerospace Industries (IAI) Cyber Division

According to Esti Peshin, VP and General Manager of the Israel Aerospace Industries (IAI) Cyber Division, effective international collaboration and information sharing are essential for cyber resilience in the age of artificial general intelligence (AGI). Peshin argues that cyber is a strategic threat, one already being used effectively to exert power in the international arena, and that the central challenge of the AGI era is balancing cyber security with business continuity. She also stressed the importance of ethics, saying that next-generation Security Operation Centers must cover the entire ecosystem using AI-powered data. Peshin concluded that cyber resilience is critical and requires international cooperation to be effective.

Source.

Interesting story? Please click on the ? button below!

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

DJED, a new Cardano-based stablecoin, quickly gathered over 27,000,000 ADA in reserves

DJED, a new decentralized stablecoin built on the Cardano (ADA) smart-contract platform, has quickly amassed a reserve of over 27 million ADA. DJED’s peg is maintained by over-collateralization rather than trust in a governance protocol: the SHEN reserve coin backs the stablecoin and keeps the collateralization ratio between 400% and 800%. The stablecoin was developed by payments platform COTI Network and officially launched on the Cardano blockchain this week. The launch is regarded as a significant step forward for the crypto industry, with the potential to accelerate the widespread adoption of crypto payments. It is also a milestone for COTI, which has made significant changes to the stablecoin, such as moving it to a multi-chain network and lowering deposit fees by 50%.
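A simplified sketch of the over-collateralization idea, not the actual on-chain Djed contract: the protocol only lets users mint new stablecoins while the reserve stays at or above the minimum ratio, and only lets them mint the reserve coin while the ratio is below the maximum.

```python
# Simplified over-collateralization check for a Djed-style stablecoin.
# Parameters and logic are an illustrative sketch, not the real contract;
# actual Djed also handles fees, price oracles, SHEN burn limits, etc.
MIN_RATIO = 4.0  # 400%
MAX_RATIO = 8.0  # 800%

def reserve_ratio(reserve_ada: float, ada_usd: float, stablecoin_supply: float) -> float:
    """Value of the ADA reserve divided by the $1-pegged stablecoin liabilities."""
    if stablecoin_supply == 0:
        return float("inf")
    return (reserve_ada * ada_usd) / stablecoin_supply

def can_mint_stablecoin(reserve_ada, ada_usd, supply, mint_amount, ada_paid):
    """Minting stablecoins is allowed only if the ratio stays at or above the minimum."""
    new_ratio = reserve_ratio(reserve_ada + ada_paid, ada_usd, supply + mint_amount)
    return new_ratio >= MIN_RATIO

def can_mint_reserve_coin(reserve_ada, ada_usd, supply, ada_paid):
    """Reserve-coin (SHEN-like) minting is allowed only while the ratio is below the maximum."""
    new_ratio = reserve_ratio(reserve_ada + ada_paid, ada_usd, supply)
    return new_ratio <= MAX_RATIO

# Example: 27M ADA reserve at an assumed $0.40 per ADA, 2M stablecoins outstanding
print(reserve_ratio(27_000_000, 0.40, 2_000_000))  # 5.4 -> 540%
print(can_mint_stablecoin(27_000_000, 0.40, 2_000_000,
                          mint_amount=100_000, ada_paid=250_000))
print(can_mint_reserve_coin(27_000_000, 0.40, 2_000_000, ada_paid=1_000_000))
```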

Source.

Interesting story? Please click on the ? button below!

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

ChatGPT has surpassed 100 million users just two months after its launch

ChatGPT, the new AI-powered chatbot, has reached 100 million monthly active users just two months after its launch at the end of November 2022. That is faster growth than TikTok and Instagram, which took nine months and two and a half years, respectively, to reach the same number of users. According to UBS equities analyst Lloyd Walmsley, it is the fastest consumer application he has seen reach 100 million monthly active users in his career. For now the tool is free to use, but OpenAI, the company behind it, is testing a $20 monthly subscription for access to a pro version called “ChatGPT Plus.” ChatGPT can write essays and generate documents, and it is even being used by corporate leaders for strategic brainstorming. Its growing popularity raises concerns about the implications for knowledge workers with lower skill levels, and its use in higher education has raised many questions about how to ensure academic integrity. If the hype fades, however, its future growth may slow.

Source.

Interesting story? Please click on the ? button below!

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Microsoft’s Azure-hosted OpenAI language models – an enterprise-ready platform

Microsoft has made its Azure OpenAI Service generally available, allowing customers to call OpenAI’s language models from their own code. OpenAI’s generative AI tools, including the GPT-3 text-generation models and the Codex code model, were built and trained on Microsoft Azure servers. However, Microsoft is restricting access to the models in order to ensure that projects adhere to ethical AI usage policies and to prevent impersonation or privacy violations. Once their account has been approved, users can build against Azure OpenAI’s API endpoints and deploy models through Azure OpenAI Studio. When designing an application, Microsoft recommends starting with the most capable model and customizing it using training examples of inputs and desired outputs. The company also continuously monitors its AI language models to avoid problems.
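Once a subscription is approved and a model deployment is created, calling the service is an ordinary REST request. A rough sketch follows; the resource name, deployment name, environment variable, and API version are placeholders to be replaced with your own values.

```python
# Minimal sketch of a completion request to an Azure OpenAI endpoint.
# Resource name, deployment name, API version, and the env var holding the
# key are placeholders; use the values from your own approved resource.
import os
import requests

RESOURCE = "my-openai-resource"    # placeholder Azure resource name
DEPLOYMENT = "my-gpt3-deployment"  # placeholder model deployment name
API_VERSION = "2022-12-01"         # example API version

url = (
    f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
    f"{DEPLOYMENT}/completions?api-version={API_VERSION}"
)
headers = {
    "api-key": os.environ["AZURE_OPENAI_KEY"],  # key from the Azure portal
    "Content-Type": "application/json",
}
payload = {
    "prompt": "Summarize the benefits of managed AI services in one sentence.",
    "max_tokens": 60,
    "temperature": 0.2,
}

response = requests.post(url, headers=headers, json=payload, timeout=30)
response.raise_for_status()
print(response.json()["choices"][0]["text"].strip())
```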

LINK TO ANNOUNCEMENT

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

C3 AI Launches Enterprise-Ready C3 Generative AI Product Suite

C3 AI, the enterprise AI application software company, has announced the first product in its C3 Generative AI Product Suite: C3 Generative AI for Enterprise Search. It provides a natural-language interface that lets enterprise users locate and present relevant data from across an organization’s information systems. The suite incorporates the latest AI capabilities from organizations such as OpenAI, Google, and academia, embedding advanced transformer models into C3 AI’s pre-built AI applications so that customers can leverage these models more quickly. C3 Generative AI is slated for general availability in March 2023 and will be showcased at a conference in Boca Raton, Florida. The technology is expected to revolutionize industries such as defense, intelligence, and healthcare, and investors can expect to learn more about the product suite when it is released.

LINK TO PRESS RELEASE

Interesting story? Please click on the ? button below!

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Discovery could improve learning speed by tuning into your brainwaves

Delivering information at the natural tempo of your neural pulses could accelerate your ability to learn, University of Cambridge neuroscientists have discovered.

They used EEG brain sensors attached to the head to measure participants’ alpha brain waves (8 to 13 Hz). The researchers then used these readings to create a 1.5-second optical “pulse”: a white square flickering on a dark background at the same tempo as each person’s individual alpha waves.

The mid-range of the brainwave spectrum, the alpha frequency range, tends to dominate when we are awake and relaxed.
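As a rough illustration of the signal-processing step involved, here is how an individual alpha frequency might be estimated from an EEG trace by locating the spectral peak in the 8 to 13 Hz band. The synthetic signal and parameters below are illustrative, not the study's actual pipeline.

```python
# Illustrative estimate of an individual alpha frequency (IAF) from EEG.
# The synthetic signal stands in for a real recording; the study's actual
# acquisition and analysis will differ.
import numpy as np
from scipy.signal import welch

fs = 250.0                    # sampling rate in Hz (typical for EEG)
t = np.arange(0, 60, 1 / fs)  # one minute of data

# Synthetic "EEG": a 10.2 Hz alpha rhythm buried in noise
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10.2 * t) + 2.0 * rng.standard_normal(t.size)

# Power spectral density, then the peak within the 8-13 Hz alpha band
freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))
alpha_band = (freqs >= 8) & (freqs <= 13)
iaf = freqs[alpha_band][np.argmax(psd[alpha_band])]

print(f"individual alpha frequency ~ {iaf:.2f} Hz")
print(f"flicker period for entrainment ~ {1000 / iaf:.0f} ms per cycle")
```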

Three times faster in a cognitive task

Participants who received this visual cue (or “entrainment”) at their personal brainwave frequency improved at least three times faster at a cognitive task (identifying specific shapes within a barrage of visual clutter). The improvement persisted the next day.

Priming brains for optimal rhythms could help us remain quick learners (with “neuroplasticity”) throughout life, help people with learning difficulties, and give professionals an edge in training simulations, the neuroscientists suggest.

EEG headband kit (credit: OpenBCI)

Their study used complex EEG machines, but there are simple headband systems that could allow you to measure brain frequencies quite easily, say the researchers. As Tim Cook would say, “We can’t wait to see what you’ll discover!”

How it works

A brainwave cycle consists of a peak and trough. Some participants received pulses matching the peak of their waves, some the trough, while some got rhythms that were either random or at the wrong rate (a little faster or slower). Each participant repeated over 800 variations of the cognitive task, and the neuroscientists measured how quickly people improved.

When participants returned the next day to complete another round of tasks, those who learned much faster under entrainment had maintained their higher performance level. 

“It was exciting to uncover the specific conditions you need to get this impressive boost in learning,” said first author Dr. Elizabeth Michael, now at Cambridge’s Cognition and Brain Sciences Unit. “The intervention itself is very simple, just a brief flicker on a screen, but when we hit the right frequency plus the right phase alignment, it seems to have a strong and lasting effect.”

Importantly, entrainment pulses need to synch with the trough of a brainwave. Scientists believe this is the point in a cycle when neurons are in a state of “high receptivity.”
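To make the phase idea concrete, here is a toy calculation of when trough-aligned flashes would occur for a given alpha frequency and starting phase. The values are illustrative; real entrainment systems estimate phase continuously from the live EEG rather than assuming it.

```python
# Toy scheduling of flicker onsets aligned to the trough of an alpha cycle.
# Frequency and starting phase are illustrative placeholders.
import numpy as np

iaf_hz = 10.0                  # individual alpha frequency
phase_at_t0 = 0.3 * 2 * np.pi  # estimated phase (radians) at time zero

# With phase 0 at the peak, the trough occurs at phase pi. Solve
# 2*pi*f*t + phase_at_t0 = pi + 2*pi*k for the first few trough times.
k = np.arange(15)  # 15 cycles at 10 Hz ~ a 1.5-second pulse train
trough_times = (np.pi + 2 * np.pi * k - phase_at_t0) / (2 * np.pi * iaf_hz)

print(np.round(trough_times, 3))  # onset times (s) for each flash
```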

Wide range of potential uses

The researchers note that while the new study tested visual perception, these mechanisms are likely to be “domain general,” applying to a wide range of tasks and situations, including auditory learning. Other early applications of brainwave entrainment to boost learning could involve training for professions in which fast learning and quick decision-making are vital, such as piloting or surgery.

“Our brain’s plasticity is the ability to restructure and learn new things, continually building on previous patterns of neuronal interactions. By harnessing brainwave rhythms, it may be possible to enhance flexible learning across the lifespan, from infancy to older adulthood,” said Prof Zoe Kourtzi of the University of Cambridge.

The findings, published in the journal Cerebral Cortex, will be explored as part of the Centre for Lifelong Learning and Individualised Cognition, a research collaboration between Cambridge and Nanyang Technological University (NTU), Singapore.  

“We feel as if we constantly attend to the world, but in fact our brains take rapid snapshots and then our neurons communicate with each other to string the information together,” said co-author Prof Victoria Leong, from NTU and Cambridge’s Department of Paediatrics.  

“Our hypothesis is that by matching information delivery to the optimal phase of a brainwave, we maximize information capture because this is when our neurons are at the height of excitability.”

Starts with mother/baby communication

Previous work from Leong’s Baby-LINC lab shows that brainwaves of mothers and babies will synchronize when they communicate. Leong believes the mechanism in this latest study is so effective because it mirrors the way we learn as infants.  

“We are tapping into a mechanism that allows our brain to align to temporal stimuli in our environment, especially communicative cues like speech, gaze and gesture that are naturally exchanged during interactions between parents and babies,” said Leong.

“When adults speak to young children they adopt child-directed speech — a slow and exaggerated form of speaking. This study suggests that child-directed speech may be a spontaneous way of rate-matching and entraining the slower brainwaves of children to support learning.”

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

AI Startup Anthropic Close to Securing $5 Billion Valuation with New Funding Round

Anthropic, an artificial intelligence startup, is making waves in the tech industry with its latest funding round. The San Francisco-based company is on the verge of securing a $300 million investment that could value it at $5 billion. Since its founding in 2021, Anthropic has pursued research into making machine learning systems easier to control and understand. The new funding will allow the company to delve deeper into AI’s predictable scaling properties and investigate how capabilities and safety issues can emerge at scale. The startup’s success reflects the continued excitement around the new class of AI startups and Silicon Valley’s ongoing focus on generative AI, making Anthropic a rare bright spot in an otherwise bleak tech investment market.

From its CEO and co-founder, Dario Amodei:

“With this fundraise, we’re going to explore the predictable scaling properties of machine learning systems, while closely examining the unpredictable ways in which capabilities and safety issues can emerge at scale.”

Source.

Interesting story? Please click on the ? button below!

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter