Why Giving AI Rights Is Not That Simple

For years, experts have debated whether AI should have rights, but no clear answer has emerged. Some people believe that AI should be granted human rights because it has the potential to become sentient in the future. Others, however, argue that this is unlikely because consciousness is not a binary concept and exists on a wide spectrum. The ability to pass the Turing test is insufficient to classify AI as sentient. Furthermore, in some cases, even humans fail the Turing test. 

The ELIZA effect, or anthropomorphizing machines, is a common tendency, and we must be careful not to make assumptions about AI. We must understand that AI is a reflection of the developers who coded it and the datasets on which it was trained. While it is critical to have a contemporary discussion about the theory of consciousness and morals, it is also critical to recognize that AI is not the same as humans. According to Stuart Russell, an Alan Turing Institute lecturer, giving AI rights would greatly complicate matters. While sentience may confer some rights, it is not necessary to grant AI human rights.

Source: PCGamer (link)
Images: MidJourney, Prompts by Lewis Farrell

Interesting story? Please click on the ? button below!

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Qualcomm & the AI World: Stable Diffusion Generative AI Now Runs on Your Smartphone!

Qualcomm has made a significant breakthrough in the world of generative AI models by developing a way to run Stable Diffusion on Android smartphones. Previously, generative AI models like Stable Diffusion had to run in the cloud, but Qualcomm has shrunk the model using a combination of software techniques and hardware optimization to run inference models on common Android smartphones. Stable Diffusion, created by startup Stability AI, is one of the most popular generative AI models for image creation, frequently competing with OpenAI’s DALL-E.
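Qualcomm has not published the exact recipe, but one of the standard "software techniques" for shrinking a model to fit on-device is post-training quantization: storing weights as 8-bit integers instead of 32-bit floats. The sketch below is purely illustrative (a single weight vector, not a whole network; real toolchains operate on entire model graphs), but it shows the core trade-off of 4x smaller storage for a bounded rounding error.

```python
# Minimal sketch of symmetric INT8 post-training quantization, the kind
# of model-shrinking technique on-device inference pipelines rely on.
# Illustrative only: real tools quantize whole networks, not one tensor.

def quantize_int8(weights):
    """Map float weights to int8 values plus a single scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.42, -1.3, 0.07, 0.99, -0.55]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# int8 storage is 4x smaller than float32; the price is a small
# per-weight rounding error, bounded by about half a quantization step.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Each weight now fits in one byte, and the reconstruction error is at most roughly `scale / 2`, which is why aggressive quantization can preserve image quality while cutting memory and bandwidth enough for a phone-class chip.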

Users can now generate Stable Diffusion generative AI images directly on an Android smartphone, eliminating the need to call out to the cloud to do the heavy lifting. Because users’ information or ideas remain solely on the device, this has significant implications for privacy and security. This advancement is even more important for enterprise use of generative AI, as company confidential information can now be protected.

Qualcomm’s achievement is a critical step toward efficient edge AI processing. It protects user privacy while running Stable Diffusion and other generative AI models because the input text and generated image are never sent outside of the device. This is significant for both consumer and enterprise application adoption. Lessons learned from this project can be applied to bring other large generative AI models from the cloud to the device, benefiting future iterations and Qualcomm AI Stack users.

Source: VentureBeat (link)
Images: MidJourney, Prompts by Lewis Farrell

Is OpenAI’s ChatGPT at Risk? The Truth About Its Legal Protection

“OpenAI’s ChatGPT at Risk of Losing Section 230 Protections,” says University of North Carolina at Chapel Hill scholar Matt Perault. Under Section 230 of the US Communications Decency Act, ChatGPT and other large language models (LLMs) may not receive the same legal protection as social media behemoths Meta and Twitter. According to Perault, courts may consider LLMs to be information content providers, which means that companies that use generative AI tools could be barred from invoking Section 230 in cases involving AI-generated content.

Social media companies have used Section 230 to shield themselves from legal liability for user-generated content. If ChatGPT and other LLMs are not protected by Section 230, platforms that use these tools will be subject to a wide range of lawsuits under federal and state law. The compliance minefield may even force them to modify their products on a state-by-state basis, or to withdraw entirely from certain states. This could pose significant difficulties in the deployment of LLMs.

This disclosure highlights the potential risks that businesses face when deploying AI technologies. It emphasizes the importance of businesses taking proactive steps to address the risks associated with the use of these technologies. It will be interesting to see how this issue develops over the next few months and years.

Source: ZDNET (link)
Images: MidJourney, Prompts by Lewis Farrell

China Cracks Down on Access to ChatGPT

According to reports, Chinese authorities have restricted access to the popular chatbot ChatGPT. The bot was created by OpenAI, a company based in the United States, and while it is not officially available in China due to the government’s strict internet censorship, many people have been using VPNs to access it. Third-party developers also created programs that provided limited access to the service. However, these apps have vanished from WeChat accounts, and major tech companies such as Tencent and Ant Group have been ordered to disable access to them. The crackdown is unsurprising: the bot can surface topics the government suppresses, such as the massive human rights violations in Xinjiang of which China has been accused and which it denies. State media had already warned of the dangers of ChatGPT as a potential tool for the US to “spread false information.” Searches for ChatGPT on Chinese platforms now yield no results, and workaround programs have been disabled or replaced with a notice stating that they have been suspended for “violating relevant laws and regulations.”

The crackdown comes amid widespread interest in ChatGPT, fueling a tech race in China to develop domestic chatbots. The bot’s popularity in China raised the prospect of users asking about topics the government regards as politically sensitive, such as human rights violations in Xinjiang, Taiwan, and the Diaoyu Islands. The model was trained largely on open data from Western sources, and the crackdown on ChatGPT could escalate the AI race between China and the US.

Source: The Guardian (link)
Images: MidJourney, Prompts by Lewis Farrell

Neuroscientists grow electronic circuits in living brain tissue, replacing implants

Researchers at three universities in Sweden have successfully grown electrodes in living tissue, using the body’s own molecules as triggers. The results, published in the journal Science, pave the way for forming fully integrated, biocompatible electronic circuits in living organisms.

By injecting a gel containing enzymes as the “assembly molecules,” the researchers were able to grow soft, conductive electrodes in the living tissue of zebrafish and medicinal leeches. In the future, it may be possible to fabricate fully integrated electronic circuits in living organisms, including humans, according to the researchers.

In experiments, the researchers achieved electrode formation in the brain, heart, and tail fins of zebrafish and around the nervous tissue of medicinal leeches. Contact with the body’s substances changed the structure of the gel and made it electrically conductive.

The animals were not harmed by the injected gel and were not affected by the electrode formation, which was accepted by the brain tissue and immune system, says Xenofon Strakosas, a researcher at the Laboratory for Organic Electronics and Lund University, and one of the study’s authors.

Body’s own tissues trigger formation of bio-friendly electrodes

With the new method, the body’s own molecules are enough to trigger the formation of electrodes. There is no need for genetic modification, implanted objects, or external signals, such as light or electrical energy, which have been required in previous experiments.

Linking electronics to biological tissue is important in combating diseases in the brain, developing future human-machine interfaces, and understanding complex biological functions, according to the researchers. Conventional bioelectronics systems are based on a fixed design that makes them difficult or impossible to combine with living biological signal systems.

Citation: Xenofon Strakosas et al. Metabolite-induced in vivo fabrication of substrate-free organic bioelectronics. Science. 23 Feb 2023 Vol 379, Issue 6634 pp. 795-802 https://www.science.org/doi/10.1126/science.adg4761

Abstract

Electronic devices implanted into a tissue close to neurons of interest are meant to exchange signals with the nervous system. Such bioelectronic devices not only facilitate the study of neural communication, they can also hijack neural circuitry in a therapeutic approach known as bioelectronic medicine. The success of these applications relies on the robustness of the implanted devices and their compatibility with the body. Conventional bioelectronic devices have solid substrates that carry conducting films. Their rigidity can damage soft tissues and reduce an implant’s long-term performance. On page 795 of this issue, Strakosas et al. (1) address the mechanical mismatch between soft and wet biological matter and solid-state electronics and describe an approach that generates electronics directly inside a tissue without a substrate, causing little damage to the tissue.

The Costly Challenge of Chatbots for Google: A Marketing Expert’s Analysis

According to analysts, Google’s new chatbot, “Bard,” which is powered by the company’s LaMDA large language model, could add billions of dollars to Google’s processing costs. Cost matters because Google must process billions of searches every day, and the back-and-forth nature of chatbots requires more computation than standard keyword searches. Morgan Stanley and SemiAnalysis estimate the additional annual costs at $6 billion and $3 billion, respectively. While Google is cautious about the scale of Bard’s rollout, it is unclear how the company will monetize the new search experience, and inserting ads into chatbot responses may not be successful.

As Alphabet chairman John Hennessy put it, cutting costs will be a “couple year problem at best.” Google has a history of addressing cost issues, such as when it purchased YouTube and built its own server chips for machine learning. However, given the company’s recent cost-cutting efforts, the prospect of skyrocketing costs for “a few years” is not ideal. For marketers, the question is whether chatbots will increase or decrease ad revenue and whether they will result in less time spent on Google than the traditional 10 blue links search engine.
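The article reports only the analysts' totals, but the shape of the estimate is easy to reconstruct as a back-of-envelope calculation. Every number below is an illustrative assumption (the daily query volume, the fraction of queries routed through an LLM, and the incremental cost per LLM answer are not figures from the article):

```python
# Back-of-envelope sketch of why chatbot-style search is expensive.
# All three inputs are illustrative assumptions, not reported figures;
# the article cites only the analysts' totals of roughly $3B-$6B/year.

searches_per_day = 8_500_000_000   # assumed daily query volume
llm_share = 0.5                    # assumed fraction answered by the LLM
cost_per_llm_query = 0.004         # assumed incremental $ per LLM answer

annual_cost = searches_per_day * llm_share * cost_per_llm_query * 365
# With these assumptions the incremental bill lands in the same
# multi-billion-dollar range as the analysts' estimates.
```

The point of the exercise is sensitivity, not precision: because the total is (queries x share x unit cost), halving any one input halves the bill, which is why both Google's rollout caution and its chip-level cost engineering matter so much.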

Source.

Microsoft’s ChatGPT is teaching robots to follow commands in plain English, but what could go wrong?

Microsoft is collaborating with OpenAI, the creators of ChatGPT, to teach robots to obey simple commands in plain English, and the implications of this development are enormous. The collaboration’s goal is to make it easier for people to interact with robots without having to learn complex programming languages or details about robotic systems.

Microsoft’s new set of design principles for ChatGPT enables robots to understand physical constraints and receive instructions in plain English. The framework begins by defining a set of high-level tasks that a robot can perform, then writes a prompt that ChatGPT translates into robot code, and then runs a simulation of the robot following the user’s instructions.
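The three-step pattern described above can be sketched in a few lines. Everything here is hypothetical scaffolding: the function names, the prompt wording, and the canned "model reply" are invented for illustration, and a real system would call an actual ChatGPT API where the stand-in plan string appears.

```python
# Illustrative sketch of the prompting pattern: expose a small set of
# high-level robot functions, describe them to the language model, and
# dry-run the returned plan in simulation before touching hardware.
# Function names and the canned "model reply" are hypothetical.

HIGH_LEVEL_API = {
    "move_to(x, y)": "drive the robot base to coordinates (x, y)",
    "grasp(obj)": "close the gripper on the named object",
    "release()": "open the gripper",
}

def build_prompt(task):
    """Step 1-2: describe the allowed functions and the task to the model."""
    api_doc = "\n".join(f"- {sig}: {doc}" for sig, doc in HIGH_LEVEL_API.items())
    return (
        "You control a robot with only these functions:\n"
        f"{api_doc}\n"
        f"Task: {task}\n"
        "Reply with one function call per line."
    )

def simulate(plan):
    """Step 3: dry-run the plan instead of sending it to hardware."""
    return [f"SIM: {step}" for step in plan.splitlines()]

prompt = build_prompt("pick up the red block and place it at (2, 3)")
# Stand-in for the model's reply (a real system would query ChatGPT here):
plan = "move_to(1, 1)\ngrasp('red_block')\nmove_to(2, 3)\nrelease()"
trace = simulate(plan)
```

Constraining the model to a fixed, whitelisted API and simulating every plan before execution is exactly the kind of guardrail the safety concerns in the next paragraph call for.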

While this is an exciting development, it also raises concerns because it implies that robots could potentially command themselves, which could have unintended consequences. Before any significant deployment, it is critical to ensure that the technology has been thoroughly tested and that any potential risks have been addressed. Nonetheless, this advancement has the potential to transform how people interact with robots, making it easier to develop and deploy new capabilities.

Source.

How Hugging Face and Amazon Web Services are democratizing AI with their expanded partnership

Hugging Face and Amazon Web Services (AWS) have expanded their collaboration in order to make artificial intelligence (AI) more open and accessible to all. The partnership’s goal is to speed up the availability of next-generation machine learning models by making them more accessible to the machine learning community and assisting developers in achieving the best performance at the lowest cost.

Machine learning is becoming more common, and its impact on all sectors of the economy is becoming clear. As a result, it is critical that every developer has access to and evaluates the most recent models. Hugging Face’s collaboration with AWS paves the way for this future by making it faster and easier to build, train, and deploy the most advanced machine learning models in the cloud using purpose-built tools.

Hugging Face and AWS’s collaboration will have significant ramifications for the AI industry, and it is expected to revolutionize the development and deployment of machine learning models. With more accessible and affordable AI, the possibilities for innovation are limitless. This collaboration is an important step toward democratizing AI and making it more inclusive, accessible, and affordable to all.

Source.

The Scary Future According to AGI Safety Researcher Eliezer Yudkowsky

David Wood, Chair of London Futurists:

There’s been a significant development in the global conversation about finding a pathway to safe AGI.

“In this video, released last night, pioneering AGI safety researcher Eliezer Yudkowsky explains why he has taken a sabbatical from MIRI (which he co-founded) and now expects to ‘go down fighting with dignity’ in his quest to prevent a misconfigured AGI from killing everyone.”

I’ll warn you in advance that it’s 90 minutes of existential despair. It’s not for the faint-hearted.

[Eliezer starts speaking at ~15 minutes]

Dr. Ben Goertzel responds:

My view on this is summarized in this tweet I just wrote:

“We should build a neural-symbolic-evolutionary AGI with rich self-reflective and compassionate capability, educate it well, work with it on benefit projects, put it under decentralized control, and have some of us fuse with it. Working on it. No guarantees but odds seem good!”

For a longer quote, I would say: “Eliezer is a very serious thinker on these matters and was the core source of most of the ideas in Nick Bostrom’s influential book Superintelligence. But ever since I met him, and first debated these issues with him, back in 2000, I have felt he had a somewhat narrow view of humanity and the universe in general.

There are currents of love and wisdom in our world that he is not considering and seems to be mostly unaware of, and that we can tap into by creating self-reflective, compassionate AGIs and doing good, loving works together with them. There are no guarantees to be found regarding the future of AGI, but taking big risks is how humanity diverged from our apelike ancestors; it’s what we do. Despondency is unwarranted and unproductive. We need to focus on optimistically maximizing the odds of a wildly beneficial Singularity together.

A few things I agree with Eliezer on: AGI may very likely be just a few years off, and this fact is not taken nearly seriously enough by society today. And yes, the odds of existential disaster are not zero (but I don’t agree with him that they are high).”

_______________________________________________________________________________

Wood continues:

Please do not think Eliezer is merely repeating centuries-old (or decades-old) worries.
Famously, he also started out as an optimist about the prospects for human flourishing in partnership with AI. But he gradually realised the scale of the difficulties we are facing.

To be absolutely clear, the risks from AGI have NOTHING TO DO WITH POSSIBLE AI SENTIENCE.

If you’re not familiar with the arguments about that, kindly do yourself a HUGE favor and take the time to bring yourself up to speed.

Moreover, sometimes FEAR IS APPROPRIATE. Without an element of fear, minds would have failed to concentrate sufficiently regarding the threats posed by Covid-19, and the impact of that pathogen upon human civilization would have been even worse. Yes, fear sometimes plays into the hands of people who would manipulate us. But, equally, complacency and self-assurance and glib pseudo-philosophical platitudes can also play into the hands of people who would manipulate us.

This is not the time for naive optimism, nor for panicked pessimism. This is the time for clear-eyed realism.

If you don’t have the patience to listen to Eliezer’s explanations (and I concede that his personal style of communication is sometimes awkward), you can find a simpler, more straightforward explanation of the risks in a number of chapters of my book, The Singularity Principles.

— David W. Wood, Chair of London Futurists, Author of The Singularity Principles

How Artificial Intelligence can help conserve biodiversity

The International Telecommunication Union (ITU) hosted a webinar to discuss the potential of artificial intelligence (AI) in supporting the detection and attribution of biodiversity change, as well as its role in implementing a biodiversity conservation framework. The AI for Good Discovery Series on AI for Biodiversity, co-organized with the Convention on Biological Diversity (CBD), seeks to bring together the AI and biodiversity communities in order to protect the natural world.

CBD Acting Executive Secretary David Cooper emphasized the importance of AI in identifying species, drivers of biodiversity loss, and mapping suitable areas for ecosystem restoration. He did, however, emphasize the importance of human intelligence in ensuring that AI does not disregard local knowledge or violate the rights of stakeholders and rights holders, including Indigenous Peoples and local communities.

McGill University’s Andrew Gonzalez proposed a framework for the detection and attribution of biodiversity change, as well as the potential of AI in tracking progress and linking action on drivers to biodiversity outcomes. He emphasized the importance of theory-guided data science, which would combine big and small data to guide AI applications.

Bottlenecks in advancing AI in the detection and attribution system were discussed, including AI literacy, data availability, and the disconnect between the human, the analytical tool, and the inferences made. Participants endorsed public-private partnerships to leverage tools for biodiversity change, peer-to-peer data sharing, and combining causal inference frameworks in machine learning with traditional and theoretical knowledge.

AI has the potential to play a significant role in biodiversity conservation; however, it must integrate human intelligence, combine big and small data, and address existing bottlenecks in AI advancement.

Source.
