Get Ready for the AI Revolution: GPT-4 Is Coming in 2023 with New Language Capabilities!

While no official announcement has been made, rumors about GPT-4’s impending release keep circulating. The model is expected to arrive this year, and some believe it is already available, powering Bing’s newly launched chat functionality. It remains unclear whether GPT-4 will have 100 trillion parameters, as previously rumored, or be only somewhat larger than GPT-3, as OpenAI CEO Sam Altman has hinted. Either way, algorithmic efficiency will be critical, since it determines the running costs of GPT-4 and ChatGPT and, ultimately, how accessible they are.

GPT-4 is expected to push AI even further, particularly in computer code generation, and OpenAI has been hiring programmers and software developers to support that effort. While some speculated that GPT-4 would handle images as well as text, Altman has said it will remain a text-only model. Improved code generation could lead to more powerful versions of tools like GitHub Copilot. As with any new technology, some people may be disappointed with GPT-4, but it is undeniable that it will have a significant impact on society, bringing AI closer to becoming “more profound than fire or electricity,” as Google CEO Sundar Pichai once put it.

Source: Forbes (link)
Images: MidJourney, Prompts by Lewis Farrell

Interesting story? Please click on the 👍 button below!

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Singapore Lab Breeding 24 Million Sterile Mosquitoes Weekly

The Singapore National Environment Agency (NEA) is breeding Aedes aegypti, a mosquito species that can transmit viruses to humans, including dengue — a “growing global threat which, by some estimates, infects 100 to 400 million and kills about 21,000 people each year,” according to an article in UNDARK, cited by The Atlantic.

Since 2016, NEA scientists have been releasing male Wolbachia-carrying mosquitoes across Singapore.

“By releasing male Wolbachia-infected mosquitoes into the community, Singapore is following a protocol that aims to suppress the population of native mosquitoes. When such males mate with local Wolbachia-free females, the females lay eggs that won’t hatch, and in time, the number of mosquitoes decreases. Additionally, under some circumstances, the bacterium can interfere with mosquitoes’ ability to reproduce.”

Racks of larvae in the NEA lab are kept under carefully controlled conditions so that males and females grow into pupae that differ in size as much as possible. NEA scientists have refined the process, keeping diet, temperature, and humidity constant. (Credit: NEA)


Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Why Giving AI Rights Is Not That Simple

For years, experts have debated whether AI should have rights, and no clear answer has emerged. Some believe AI should be granted human rights because it could become sentient in the future. Others counter that the question is not that simple, since consciousness is not a binary property but exists on a wide spectrum. Passing the Turing test is not enough to classify an AI as sentient; in some cases, even humans fail it.

The ELIZA effect, our tendency to anthropomorphize machines, is common, and we must be careful not to read too much into AI’s behavior. AI is a reflection of the developers who coded it and the datasets it was trained on. While it is important to have a contemporary discussion about theories of consciousness and morality, it is equally important to recognize that AI is not the same as humans. According to Stuart Russell, the UC Berkeley computer scientist and AI researcher, giving AI rights would greatly complicate matters: sentience may confer some rights, but it does not follow that AI should be granted human rights.

Source: PCGamer (link)
Images: MidJourney, Prompts by Lewis Farrell

Interesting story? Please click on the 👍 button below!

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Qualcomm & the AI World: Stable Diffusion Generative AI Now Runs on Your Smartphone!

Qualcomm has made a significant breakthrough for generative AI by finding a way to run Stable Diffusion on Android smartphones. Previously, generative AI models like Stable Diffusion had to run in the cloud, but Qualcomm has shrunk the model, using a combination of software techniques and hardware optimization, so that inference can run on common Android handsets. Stable Diffusion, created by the startup Stability AI, is one of the most popular generative AI models for image creation and a frequent rival to OpenAI’s DALL-E.

Users can now generate Stable Diffusion images directly on an Android smartphone, with no need to call out to the cloud to do the heavy lifting. Because prompts and generated images stay solely on the device, this has significant implications for privacy and security. The advance matters even more for enterprise use of generative AI, since confidential company information never has to leave the device.
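
Qualcomm has not published its on-device pipeline as open code, so the following is only a rough sketch of what “local” inference looks like using the open-source diffusers library on an ordinary computer: the prompt and the image never leave the machine. The checkpoint name, precision, and step count are illustrative assumptions, and Qualcomm’s phone port additionally relies on its own quantization and compilation through the Qualcomm AI Stack, which is not shown here.

```python
# Minimal sketch of local Stable Diffusion inference with the open-source
# `diffusers` library. The prompt and the generated image stay on this machine.
# Checkpoint, precision, and step count are illustrative, not Qualcomm's setup.
import torch
from diffusers import StableDiffusionPipeline

MODEL_ID = "runwayml/stable-diffusion-v1-5"  # illustrative checkpoint

# Half precision keeps the memory footprint closer to what a phone-class
# accelerator could handle; fall back to float32 on CPU-only machines.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

pipe = StableDiffusionPipeline.from_pretrained(MODEL_ID, torch_dtype=dtype)
pipe = pipe.to(device)

prompt = "a watercolor painting of a hummingbird at sunrise"
image = pipe(prompt, num_inference_steps=20, guidance_scale=7.5).images[0]
image.save("hummingbird.png")  # everything stays on the local device
```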

Qualcomm’s achievement is a critical step toward efficient edge AI processing. It protects user privacy while running Stable Diffusion and other generative AI models because the input text and generated image are never sent outside of the device. This is significant for both consumer and enterprise application adoption. Lessons learned from this project can be applied to bring other large generative AI models from the cloud to the device, benefiting future iterations and Qualcomm AI Stack users.

Source: VentureBeat (link)
Images: MidJourney, Prompts by Lewis Farrell

Interesting story? Please click on the 👍 button below!

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Is OpenAI’s ChatGPT at Risk? The Truth About Its Legal Protection

OpenAI’s ChatGPT is at risk of losing out on Section 230 protections, argues University of North Carolina at Chapel Hill scholar Matt Perault. Under Section 230 of the US Communications Decency Act, ChatGPT and other large language models (LLMs) may not receive the same legal shield as social media behemoths like Meta and Twitter. According to Perault, courts may treat LLMs as information content providers, which would mean that companies using generative AI tools could not invoke Section 230 in cases involving AI-generated content.

Social media companies have used Section 230 to shield themselves from legal liability for user-generated content. If ChatGPT and other LLMs are not protected by Section 230, platforms that use these tools will be subject to a wide range of lawsuits under federal and state law. The compliance minefield may even force them to modify their products on a state-by-state basis, or to withdraw entirely from certain states. This could pose significant difficulties in the deployment of LLMs.

Perault’s analysis highlights the legal risks businesses face when deploying AI technologies and underscores the importance of addressing those risks proactively. It will be interesting to see how this issue develops over the coming months and years.

Source: ZDNET (link)
Images: MidJourney, Prompts by Lewis Farrell

Interesting story? Please click on the 👍 button below!

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

China Cracks Down on Access to ChatGPT

According to reports, Chinese authorities have restricted access to the popular chatbot ChatGPT. The bot was created by OpenAI, a company based in the United States, and while it is not officially available in China because of the government’s strict internet censorship, many people have been using VPNs to reach it, and third-party developers built apps offering limited access to the service. Those apps have now vanished from WeChat, and major tech companies such as Tencent and Ant Group have reportedly been ordered to cut off access. The crackdown is not surprising: state media had warned that ChatGPT could become a tool for the US to “spread false information,” and the bot can give uncensored answers on subjects the Chinese government considers off-limits, such as the human rights abuses it has been accused of, and denies, in Xinjiang. Searches for ChatGPT on Chinese platforms now yield no results, and workaround programs have been disabled or replaced with a notice stating that they have been suspended for “violating relevant laws and regulations.”

The crackdown comes amid widespread interest in ChatGPT, which has fueled a race among Chinese tech companies to build domestic chatbots. The bot’s popularity in China raised the prospect of users asking about topics the government regards as politically sensitive, such as human rights in Xinjiang, Taiwan, and the Diaoyu Islands. And although the model was trained largely on open data from Western countries, the clampdown could further escalate the AI race between China and the US.

Source: The Guardian (link)
Images: MidJourney, Prompts by Lewis Farrell

Interesting story? Please click on the 👍 button below!

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Researchers grow electronic circuits in living tissue, a step toward replacing rigid implants

Researchers at three universities in Sweden have successfully grown electrodes in living tissue, using the body’s own molecules as triggers. The results, published in the journal Science, pave the way for forming fully integrated, biocompatible electronic circuits in living organisms.

By injecting a gel containing enzymes as the “assembly molecules,” the researchers were able to grow soft, conductive electrodes in the living tissue of zebrafish and medicinal leeches. In the future, it may be possible to fabricate fully integrated electronic circuits in living organisms, including humans, according to the researchers.

In experiments, the researchers achieved electrode formation in the brain, heart, and tail fins of zebrafish and around the nervous tissue of medicinal leeches. Contact with the body’s substances changed the structure of the gel and made it electrically conductive.

The animals were not harmed by the injected gel and were not affected by the electrode formation, which was accepted by the brain tissue and immune system, says Xenofon Strakosas, a researcher at the Laboratory for Organic Electronics and Lund University, and one of the study’s authors.

Body’s own tissues trigger formation of bio-friendly electrodes

With the new method, the body’s own molecules are enough to trigger the formation of electrodes. There is no need for genetic modification, implanted objects, or external signals, such as light or electrical energy, which have been required in previous experiments.

Linking electronics to biological tissue is important in combating diseases in the brain, developing future human-machine interfaces, and understanding complex biological functions, according to the researchers. Conventional bioelectronics systems are based on a fixed design that makes them difficult or impossible to combine with living biological signal systems.

Citation: Xenofon Strakosas et al. Metabolite-induced in vivo fabrication of substrate-free organic bioelectronics. Science. 23 Feb 2023 Vol 379, Issue 6634 pp. 795-802 https://www.science.org/doi/10.1126/science.adg4761

From a related Perspective in the same issue of Science:

Electronic devices implanted into a tissue close to neurons of interest are meant to exchange signals with the nervous system. Such bioelectronic devices not only facilitate the study of neural communication, they can also hijack neural circuitry in a therapeutic approach known as bioelectronic medicine. The success of these applications relies on the robustness of the implanted devices and their compatibility with the body. Conventional bioelectronic devices have solid substrates that carry conducting films. Their rigidity can damage soft tissues and reduce an implant’s long-term performance. On page 795 of this issue, Strakosas et al. (1) address the mechanical mismatch between soft and wet biological matter and solid-state electronics and describe an approach that generates electronics directly inside a tissue without a substrate, causing little damage to the tissue.


Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

The Costly Challenge of Chatbots for Google: A Marketing Expert’s Analysis

According to analysts, Google’s new chatbot, Bard, which is powered by the company’s own LaMDA large language model, could cost Google billions of dollars more in processing costs. The cost of search is a real concern because Google must handle billions of queries every day, and the back-and-forth nature of chatbots requires far more computation than a standard keyword search. Morgan Stanley and SemiAnalysis put the additional annual cost at $6 billion and $3 billion, respectively. While Google is being cautious about the scale of Bard’s rollout, it is unclear how the company will monetize a chatbot-driven search engine, and inserting ads into chatbot responses may not be successful.
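
The headline figures come from straightforward back-of-the-envelope arithmetic: an assumed incremental cost per chatbot-handled query multiplied by an assumed share of Google’s daily query volume. The sketch below uses purely illustrative numbers, not the inputs Morgan Stanley or SemiAnalysis actually used, just to show how quickly per-query costs compound into billions per year.

```python
# Back-of-the-envelope chatbot cost model with illustrative inputs.
# None of these numbers come from Morgan Stanley or SemiAnalysis; they
# only show how small per-query costs compound into annual billions.

queries_per_day = 8.5e9          # assumed total Google searches per day
chatbot_share = 0.50             # assumed fraction answered with an LLM
extra_cost_per_query = 0.004     # assumed incremental cost in USD per LLM answer

daily_cost = queries_per_day * chatbot_share * extra_cost_per_query
annual_cost = daily_cost * 365

print(f"Extra cost per day:  ${daily_cost / 1e6:,.0f} million")
print(f"Extra cost per year: ${annual_cost / 1e9:,.1f} billion")
# With these assumptions: roughly $17 million per day, about $6.2 billion per year.
```

The published estimates differ mainly because each analyst assumes different adoption rates and per-query costs, which is why the totals range from $3 billion to $6 billion.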

As Alphabet chairman John Hennessy put it, cutting costs will be a “couple year problem at best.” Google has a history of engineering its way out of cost problems, as when it bought YouTube and built its own server chips for machine learning. Still, given the company’s recent cost-cutting efforts, the prospect of skyrocketing costs for “a few years” is not ideal. For marketers, the question is whether chatbots will increase or decrease ad revenue, and whether users will end up spending less time on Google than they do with the traditional “10 blue links” results page.

Source.

Interesting story? Please click on the 👍 button below!

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Microsoft is using ChatGPT to teach robots to follow commands in plain English, but what could go wrong?

Microsoft is collaborating with OpenAI, the creators of ChatGPT, to teach robots to obey simple commands in plain English, and the implications of this development are enormous. The collaboration’s goal is to make it easier for people to interact with robots without having to learn complex programming languages or details about robotic systems.

Microsoft’s new set of design principles lets ChatGPT direct robots that respect physical constraints and take their instructions in plain English. The framework begins by defining a library of high-level functions the robot can perform and writing a prompt that describes the task; ChatGPT then translates that prompt into calls against those functions, and the result is run in simulation before it reaches a real robot.
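
In practice, the pattern resembles ordinary prompt engineering wrapped around a small, robot-specific function library. The sketch below is not Microsoft’s actual framework; the function names, prompt wording, model name, and use of the OpenAI Python client are assumptions meant only to illustrate the “function library plus prompt yields a generated plan” flow described above.

```python
# Hypothetical sketch of the "function library + plain-English prompt ->
# generated robot plan" flow. Not Microsoft's actual code; function names,
# prompt, and model are illustrative. Generated plans should be checked in
# simulation before running on real hardware.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. A small library of high-level actions the robot exposes (hypothetical).
ROBOT_API = """
move_to(x, y)        # drive to coordinates in metres
pick_up(object_name) # grasp a named object
place_on(surface)    # put the held object on a surface
"""

# 2. A prompt that describes the task in plain English.
task = "Pick up the red cup from the table and put it on the shelf."
prompt = (
    "You control a robot through these functions only:\n"
    f"{ROBOT_API}\n"
    "Write a short Python script, using only those functions, that performs "
    f"this task: {task}"
)

# 3. ChatGPT translates the request into calls against the robot's API.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
plan = response.choices[0].message.content

# 4. Review the plan and run it in simulation before any deployment.
print(plan)
```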

While this is an exciting development, it also raises concerns because it implies that robots could potentially command themselves, which could have unintended consequences. Before any significant deployment, it is critical to ensure that the technology has been thoroughly tested and that any potential risks have been addressed. Nonetheless, this advancement has the potential to transform how people interact with robots, making it easier to develop and deploy new capabilities.

Source.

Interesting story? Please click on the 👍 button below!

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

How Hugging Face and Amazon Web Services are democratizing AI with their expanded partnership

Hugging Face and Amazon Web Services (AWS) have expanded their collaboration to make artificial intelligence (AI) more open and accessible. The goal of the partnership is to accelerate the availability of next-generation machine learning models by making them easier for the machine learning community to use and by helping developers get the best performance at the lowest cost.

Machine learning is becoming mainstream, and its impact on every sector of the economy is becoming clear. That makes it important for every developer to be able to access and evaluate the latest models. Hugging Face’s collaboration with AWS paves the way for this by making it faster and easier to build, train, and deploy state-of-the-art machine learning models in the cloud using purpose-built tools.
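
One concrete face of the partnership is the Hugging Face integration in Amazon SageMaker. The sketch below deploys an off-the-shelf Hub model to a managed endpoint; the model ID, container versions, and instance type are illustrative assumptions rather than recommendations, and a real deployment needs an AWS account plus an IAM role with SageMaker permissions.

```python
# Rough sketch: deploying a Hugging Face Hub model to a SageMaker endpoint.
# Model ID, versions, and instance type are illustrative; requires AWS
# credentials and an IAM execution role with SageMaker permissions.
import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()  # or pass an explicit IAM role ARN

hub_config = {
    "HF_MODEL_ID": "distilbert-base-uncased-finetuned-sst-2-english",
    "HF_TASK": "text-classification",
}

model = HuggingFaceModel(
    env=hub_config,
    role=role,
    transformers_version="4.26",  # placeholder versions; check which
    pytorch_version="1.13",       # SageMaker Hugging Face containers exist
    py_version="py39",
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",  # illustrative CPU instance
)

print(predictor.predict({"inputs": "This partnership makes deployment easy."}))
predictor.delete_endpoint()  # clean up to avoid ongoing charges
```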

The expanded partnership will have significant ramifications for the AI industry and could reshape how machine learning models are developed and deployed. With AI becoming more accessible and affordable, the scope for innovation widens considerably. It is an important step toward democratizing AI and making it more inclusive for everyone.

Source.

Interesting story? Please click on the 👍 button below!

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter