What Will Be the Next Big Tech Wave? Lawmakers worry that generative AI will repeat the mistakes of social media.

As generative AI continues to grow and advance, lawmakers face a new challenge. Many people are concerned about the lack of consumer protections and standards surrounding AI products and systems, which could expose users to a variety of risks. Concerns have been raised that lawmakers will be as ill-equipped to regulate this new technology as they were with social media, so some have called for an effective regulatory regime that forces tech companies to prioritize consumer protections.

Several bills addressing specific applications of AI have been introduced in Congress, but none offers comprehensive consumer protections and development standards for AI products and systems. Some argue that AI must be developed and deployed in a way that is safe, ethical, and respectful of all Americans’ rights and privacy.

Concerns have also been raised about the potential impact of generative AI, which could transform the tech industry by generating effectively unlimited customized, personalized content on demand. While this may open up new opportunities for users, there are concerns about privacy and security, and content owners claim harm when AI companies use their content to train models without permission. To avoid repeating the mistakes made with social media, lawmakers must be proactive in regulating generative AI.


Source.


Babies Beat Bots

In a new study, researchers conducted a series of experiments with 11-month-old infants, and compared their responses to those of state-of-the-art learning-driven neural-network models.

Infants are fascinated by other people and observe their actions. They attribute goals to others and expect others to pursue goals rationally and efficiently. Conversely, “commonsense AI” — driven by machine-learning algorithms — predicts specific actions directly.

Infant understanding vs. algorithms

In the research, infants watched a series of videos, presented over Zoom, in which simple animated shapes moved around the screen, similar to a video game. The shapes’ actions simulated human behavior and decision-making. The researchers then compared the infants’ responses to those of state-of-the-art learning-driven neural-network models shown the same videos.

Results showed that infants recognize human-like motivations even in the simplified actions of animated shapes, while the algorithm-based models showed no such evidence of understanding the motivations underlying the actions. (Perhaps like the notorious Bing chatbot saying “I love you!”)
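
To see the shape of the comparison, here is a hypothetical sketch with invented numbers (not the study’s data): in a violation-of-expectation design, goal understanding shows up as a markedly larger response to unexpected outcomes — measured as looking time for infants and as prediction error for models.

```python
# Hypothetical illustration (numbers invented, not the study's data):
# both infants and models see "expected" vs "unexpected" endings; goal
# understanding shows up as a larger response to the unexpected ending.
import numpy as np

infant_looking_s = {"expected": np.array([4.1, 3.8, 4.5]),
                    "unexpected": np.array([7.2, 6.9, 7.8])}
model_pred_error = {"expected": np.array([0.31, 0.29, 0.33]),
                    "unexpected": np.array([0.32, 0.30, 0.31])}

def surprise_gap(resp):
    """Mean response to unexpected outcomes minus expected ones."""
    return resp["unexpected"].mean() - resp["expected"].mean()

print(f"infants: {surprise_gap(infant_looking_s):+.2f} s")  # large gap
print(f"model:   {surprise_gap(model_pred_error):+.3f}")    # roughly no gap
```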

Citation: Stojnić, G., Gandhi, K., Yasuda, S., Lake, B. M., & Dillon, M. R. (2023). Commonsense psychology in human infants and machines. Cognition, 235, 105406. https://www.sciencedirect.com/science/article/pii/S0010027723000409


Why Language and Thought Shouldn’t Be Equated: A Cognitive Perspective on Large Language Models

University of Texas at Austin and Massachusetts Institute of Technology (MIT) researchers have proposed a new way of thinking about large language models (LLMs). They argue in their paper “Dissociating language and thought in large language models: a cognitive perspective” that to understand the capabilities and limitations of LLMs, we must distinguish between “formal” and “functional” linguistic competence. Formal linguistic competence covers the capacities required to produce and comprehend a given language, whereas functional linguistic competence concerns using language to do things in the world.

In their paper, the researchers examine two common fallacies about language and thought. The first is that an entity proficient in language must also be proficient in thought. The second is that a language model is not a good model of human language if it cannot fully capture the richness and sophistication of human thought. Both fallacies, the researchers argue, stem from the same misconception: equating language with thought.

LLMs excel at formal linguistic competence, which encompasses linguistic rules as well as statistical regularities that rules cannot capture. They have a long way to go, however, in functional linguistic competence. To approach human performance, LLMs require unrealistic amounts of data, and they lack the pre-existing inductive biases that would help them learn quickly from sparse and noisy input. A promising research direction is inductive biases that could help LLMs learn faster from less data, along with architectures that can capture those biases.
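
As a concrete (and invented) illustration of the distinction, here are two hypothetical probes, not drawn from the paper: the first targets formal competence (mastery of linguistic form), the second functional competence (using language to track the world).

```python
# Hypothetical probes (invented here, not taken from the paper) that make
# the formal/functional split concrete.
probes = [
    {
        "competence": "formal",
        "prompt": "Which is grammatical: 'The keys to the cabinet is on "
                  "the table' or 'The keys to the cabinet are on the table'?",
        "signal": "long-distance subject-verb agreement (linguistic form)",
    },
    {
        "competence": "functional",
        "prompt": "I put my keys in my coat pocket, then left the coat in "
                  "the car. Where are my keys?",
        "signal": "world knowledge and situation tracking, not grammar",
    },
]

for p in probes:
    print(f"[{p['competence']}] {p['prompt']}")
```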

Source.


How to turn plastic waste into profitable nanomaterials

It would also be cool if the supply-chain and manufacturing-scale issues with nanotubes could be solved, bringing us into the next age of materials: space habitats, invulnerable cars, and more.

Plastic waste (credit: Organization for Economic Cooperation and Development)

The amount of plastic waste produced globally has doubled over the past two decades, and plastic production is expected to triple by 2050. Most of this waste ends up in landfills, is incinerated, or is otherwise mismanaged, according to the Organization for Economic Cooperation and Development.

When you put that soda bottle or takeout container into the recycling bin, you have no clue if it’s going to be converted into something new and useful. Some estimates suggest only 5% is actually being recycled.

Turning plastic into nanotubes and hybrid nanomaterials

Now Kevin Wyss, a Rice graduate student and lead author of a study published in Advanced Materials, describes how he and colleagues in the lab of chemist James Tour developed a solution: using their flash Joule heating technique to turn plastic into valuable carbon nanotubes and hybrid nanomaterials.

By controlling the reaction parameters, a hybrid carbon nanomaterial consisting of nanotubes with graphene-sheet bits attached to their ends was obtained from mixed waste plastics. When used in composites, these hybrids performed better than either graphene or carbon nanotubes alone. (Credit: Tour lab/Rice University)

“We were able to make a hybrid carbon nanomaterial that outperformed both graphene and commercially available carbon nanotubes,” Wyss said.

Graphene, carbon nanotubes and other carbon-based nanomaterials are generally strong and chemically robust, have low density and lots of surface area, and possess conductivity and broadband electromagnetic absorption abilities. This makes them useful in a variety of industrial, medical and electronics applications such as composites, coatings, sensors, electrochemical energy storage and more.

The structure of the new hybrid carbon nanomaterial is responsible for its enhanced performance. “Let’s say I was trying to pull a string of yarn out of a sweater,” Wyss said. “If the string is straight and smooth, it can come out quite easily, sometimes ruining the weave. It’s the same with the carbon nanotubes; having these masses of graphene attached at the ends helps make them much harder to remove, thereby strengthening the composite.”

Flash Joule heating can be used to make carbon nanotubes and carbon nanofibers from mixed waste plastics, a method that is 90% more efficient than existing production processes. The nanotube diameter can be controlled by changing the power or catalyst used. (Credit: Tour lab/Rice University)

“Upcycling” for an economic incentive

“Recycling plastic costs more than just producing new plastic,” Wyss said. “There’s very little economic incentive to recycle plastic. That’s why we turned to upcycling, or turning low-value waste materials into something with a higher monetary or use value. If we can turn waste plastic into something more valuable, then people can make money off of being responsible in how they deal with discarded plastics.”

A life cycle analysis of the production process also revealed that flash Joule heating was considerably more energy-efficient and environmentally friendly than existing nanotube production processes. “Compared to commercial methods for carbon nanotube production that are being used right now, ours uses about 90% less energy and generates 90%-94% less carbon dioxide,” Wyss said.

The plastic, which does not need to be sorted or washed as in traditional recycling, is “flashed” at temperatures over 3,100 kelvins (about 5,120 degrees Fahrenheit). “All we do is grind the material into small, confetti-sized pieces, add a bit of iron and mix in a small amount of a different carbon — say, charcoal — for conductivity,” Wyss said.

Machine learning

The researchers describe in Advanced Materials how machine-learning models that adapt to process variables are helping them optimize procedures and push the work forward. “Machine-learning algorithms will be critical to making the flash process rapid and scalable without negatively affecting the graphene product’s properties,” said James Tour, the T. T. and W. F. Chao Professor of Chemistry and a professor of materials science and nanoengineering in Rice’s George R. Brown School of Engineering.
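
As a hedged sketch of what such optimization could look like (not the Tour lab’s actual models), one could fit a simple surrogate from flash parameters to yield and pick the best predicted setting. The data, parameter ranges, and quadratic model below are all invented for illustration.

```python
# Hedged sketch (not the Tour lab's models): a machine-learning surrogate
# steering flash Joule heating parameters. Data are invented; real work
# would fit measured yields.
import numpy as np

rng = np.random.default_rng(1)

# Invented experiments: (voltage in V, flash duration in ms) -> yield %.
X = rng.uniform([50, 10], [200, 500], size=(40, 2))
true_opt = np.array([120.0, 150.0])  # pretend ideal setting
y = 80 - 0.002 * ((X - true_opt) ** 2).sum(axis=1) + rng.normal(0, 1, 40)

# Quadratic least-squares surrogate: y ~ b0 + b.x + c.x^2 (elementwise).
A = np.column_stack([np.ones(len(X)), X, X ** 2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Pick the best predicted setting on a grid of candidate parameters.
v, d = np.meshgrid(np.linspace(50, 200, 60), np.linspace(10, 500, 60))
grid = np.column_stack([v.ravel(), d.ravel()])
pred = np.column_stack([np.ones(len(grid)), grid, grid ** 2]) @ coef
best = grid[pred.argmax()]
print(f"suggested flash: {best[0]:.0f} V for {best[1]:.0f} ms")
```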

Citation: Wyss, K., Li, J., Advincula, P., Bets, K., Chen, W., Eddy, L., Silva, K., Beckham, J., Chen, J., Meng, W., Deng, B., Nagarajaiah, S., Yakobson, B., & Tour, J. (2023). Waste Plastic into Hybrid Carbon Nanomaterials. Advanced Materials. https://doi.org/10.1002/adma.202209621


Peeking Inside the Enigmatic World of AI with Nineteenth-Century Math

Using a well-established mathematical technique known as Fourier analysis, scientists were able to examine the complex inner workings of deep neural networks, a type of artificial intelligence. The analysis enables them to identify patterns and to see which of the calculations the AI performs resemble human ones. Thanks to deep neural networks, AI can now transcribe human speech into words, colorize images, detect financial fraud, and design drug molecules. One challenge with these networks, however, is that they operate as black boxes: even their creators do not fully understand them.

The success of this recent attempt may allow scientists to gain a better understanding of how their neural networks work and possibly adapt them for other applications. The new data could lead to improved weather forecasting models and other climate applications, as well as assist renewable energy operators in determining what to put into the grid in the coming hours.

Scientists have been able to lift the veil on the enigmatic world of AI by using Fourier analysis, revealing a powerful tool inspired by centuries-old mathematical concepts.
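
To make the idea concrete, here is a minimal, hypothetical sketch (not the study’s actual method) of probing a network with a Fourier transform: sample a small randomly initialized network on a grid and inspect which frequencies dominate its output. All weights and parameters below are invented for illustration.

```python
# Hypothetical sketch (not the study's method): probe what a small neural
# network computes by taking the Fourier transform of its output on a grid
# and seeing which frequencies dominate.
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network f(x) = W2 @ tanh(W1 x + b1) with random weights.
W1 = rng.normal(size=(64, 1))
b1 = rng.normal(size=(64,))
W2 = rng.normal(size=(1, 64)) / np.sqrt(64)

def f(x):
    """Forward pass for a batch of scalar inputs x with shape (n,)."""
    h = np.tanh(x[:, None] * W1.T + b1)  # (n, 64) hidden activations
    return (h @ W2.T).ravel()            # (n,) scalar outputs

# Sample the network on a uniform grid and inspect its spectrum.
n = 1024
x = np.linspace(-np.pi, np.pi, n, endpoint=False)
y = f(x)

spectrum = np.abs(np.fft.rfft(y - y.mean())) / n
freqs = np.fft.rfftfreq(n, d=x[1] - x[0])

# Print the five strongest frequency components. Low frequencies usually
# dominate ("spectral bias"): the network mostly encodes smooth,
# slowly varying structure.
for k in np.argsort(spectrum)[::-1][:5]:
    print(f"freq {freqs[k]:8.3f}  amplitude {spectrum[k]:.5f}")
```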

Source.


Artificial Intelligence Search Engines Under Legal Fire

Could AI-powered search engines such as Google Bard and Microsoft’s new Bing face legal action? Section 230, a fundamental internet law, is set to be reconsidered by the United States Supreme Court. While it has been used to protect traditional search engines, AI-powered ones remain uncharted territory. The case before the Supreme Court, Gonzalez v. Google, includes the question of whether algorithmic recommendations deserve full legal protection under Section 230, which could have broader implications for AI search engines. While tech behemoths like OpenAI promote large language models like ChatGPT as the future of search, these models summarize information in ways that can be ambiguous or inaccurate, especially when the sources they rely on are false.

Section 230 gives old-school search engines some protection when they link to inaccurate information, because they are simply posting links to other sources’ content. The question is murkier for AI-powered chatbot search interfaces. If Section 230 remains largely unchanged, many hypothetical future cases will turn on whether an AI search engine was repeating someone else’s illegal speech or producing its own, potentially exposing search providers to legal action.

Source.


DARPA’s AI Algorithms Can Now Control an Actual F-16 in Flight

DARPA, the research arm of the US Department of Defense, recently announced that its AI algorithms can now control a full-scale fighter jet. This accomplishment was demonstrated in early December 2022, over the course of several days, at the Air Force Test Pilot School in California. The AI software was loaded into the X-62A, also known as VISTA, a modified F-16 test aircraft. DARPA’s Air Combat Evolution (ACE) program, which focuses on human-machine collaboration in dogfighting, began in 2019. The AlphaDogfight Trials, a competition between companies to see who could create the most advanced algorithm for an AI-powered aircraft, took place in 2020.

The ACE program is one of more than 600 Defense Department projects that incorporate artificial intelligence into the nation’s defense programs. In 2018, the government committed to spending up to $2 billion on AI investments over the next five years, with $2.58 billion spent on AI research and development alone in 2022. DARPA hopes to use AI to allow the “human pilot to focus on larger battle management tasks in the cockpit” while the AI controls the jet and provides live-flight data. The ACE program will enable American defense to develop “much smaller autonomous aircraft” while also providing valuable live-flight data. In 2024, four AI-powered L-39s will compete in a live dogfight over Lake Ontario. The Air Force Test Pilot School is investigating how well pilots trust the AI agent and how to calibrate trust between humans and AI.

This technological breakthrough is significant because it demonstrates how artificial intelligence systems can control a fighter jet in flight. It has implications for defense systems, particularly in terms of human-machine collaboration in dogfighting. Pilots can focus on larger battle management tasks by allowing the AI to control the jet. This technology has the potential to become commonplace in defense programs with continued research and development, leading to smaller autonomous aircraft and invaluable live-flight data.

Source.


Superhuman Robotic Boots to Enhance Balance – A Promising Solution for Fall Prevention

Robotic boots have been shown by researchers to improve balance in healthy young adults. The key to improving balance, according to the study, is boots that can respond faster than human reaction times. The motorized floors used in the study “pulled the rug out” from under participants, forcing them to stay balanced with their feet in place. The exo-boots were programmed to provide participants with a burst of assistance faster than humanly possible, and the results revealed that only the mode that beat human reaction times helped users recover balance faster and kept them from having to take a recovery step.
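
As a rough illustration of that timing constraint, here is a hypothetical control-loop sketch (not the study’s controller): the sensor is simulated, and the threshold, torque, and latency budget are invented values.

```python
# Hypothetical control sketch (not the study's controller): the boot must
# sense a perturbation and fire an assistive torque burst faster than a
# typical ~150 ms human reaction time. Sensor is simulated; all numbers
# are invented.
import random
import time

REACTION_BUDGET_S = 0.150   # human reaction time the boots must beat
SWAY_THRESHOLD = 0.10       # rad/s of ankle sway treated as a slip
BURST_TORQUE_NM = 40.0      # assumed assistive torque burst

def read_ankle_velocity() -> float:
    """Simulated IMU read: quiet stance with occasional slip-like spikes."""
    spike = 0.5 if random.random() < 0.01 else 0.0
    return random.gauss(0.0, 0.02) + spike

def command_torque(nm: float) -> None:
    """Stand-in for the motor interface; just logs the command."""
    print(f"torque -> {nm:.1f} N*m")

def balance_loop(n_ticks: int = 1000) -> None:
    for _ in range(n_ticks):
        t0 = time.monotonic()
        if abs(read_ankle_velocity()) > SWAY_THRESHOLD:
            command_torque(BURST_TORQUE_NM)  # burst of assistance
            command_torque(0.0)
            # The whole detect-to-assist path must beat the human.
            assert time.monotonic() - t0 < REACTION_BUDGET_S

random.seed(7)
balance_loop()
```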

Because falls are a leading cause of health decline in older adults, developing assistive technology that improves balance could prevent falls and allow people to stay active longer. The research is a proof-of-concept demonstration that exoskeletons can improve balance in a controlled laboratory setting during a simple upright standing task for young adults without balance issues. The findings also show that the nervous system gathers information from throughout the body to stay upright while standing or walking.

Although much work remains to be done before wearable robots can assist with balance in everyday life and improve the lives of people with balance impairments, the results of this study are promising. Future work could extend wearable robots to other lower-limb joints, such as the knee and hip, and pair them with artificial intelligence that predicts the risk of falling, enabling personalized, superhuman balance.

Source.


Digital humans and AI chatbots

In our current AI chat frenzy, we may have overlooked another kind of chatbot that has been quietly emerging: “digital humans.”

These are images on phones, computers and other screens that mimic people. They function in roles like sales assistants, corporate trainers, and social-media influencers, such as “Lil Miquela,” an online influencer with nearly 3 million Instagram followers who bills herself as a “19-year-old Robot living in LA.“

AI with a human face

Thanks to rapid progress in computer graphics and AI, human faces are now being added to chatbots and other computer-based interfaces for customers, employees, and others.

Alan Dennis, professor of information systems and the John T. Chambers Chair of Internet Systems at the Indiana University Kelley School of Business, has been observing this emerging trend for seven years with colleagues at the University of Sydney and Iowa State University.

In a Feb. 14 Harvard Business Review article, “AI with a Human Face: The case for — and against — digital employees,” Dennis and associates explain how different types of digital humans interact with us in various ways, and when they are most appropriate.

“Digital humans can be a much better choice when it comes to communicating complex instructions or describing features of a product. This is why YouTube instruction videos — rather than pages of text — are so successful,” they note.

Types of digital humans

The researchers identify four types of digital humans:

Virtual agents, whose role is to complete specific, one-time tasks, such as providing instructions to travelers at international airports.

Virtual influencers, who “supply their human followers with experiences” but are not “personalized,” like those employed by the fashion industry.

Virtual assistants, who help users in completing specific tasks, often developing personal relationships with users, which enables them to function as rehabilitation therapists, personal assistants, and coaches, for example.

Virtual companions, who enable older people to stay in their homes longer, which is known to be better for their physical and mental health. They are also much cheaper than assisted living or nursing homes. “Similar opportunities exist in education. Children are more engaged when they watch other children. Thus, a child-aged digital human could, at times, be a more effective teacher than a human adult teacher.”

Reality check

To further explore these ideas, I spoke with Professor Dennis. One question: “How real could or should these images be?”

Dennis said he was concerned about the “uncanny valley” effect and pointed to Soul Machines, an autonomous animation software company that has deployed about 50 digital humans in organizations around the world.

“In a study, about a third of our students found their productions to be in the uncanny valley,” he said. “Some also said they don’t want to talk, they want to type, while others don’t like listening, because people talk so slow … we turn the speed up on videos to 1.5 or 2.”

Smart chatbot faces … and some issues

“Anywhere there’s a chatbot, we could put a digital face on it, and drop it onto your phone, and suddenly, we have an AI agent that looks and sounds like a real human,” Dennis said.

Other possible uses of digital faces he mentioned include customer support and apps like Siri, Alexa and Google Voice. “And we could drop a face into a Zoom chat, where it becomes “another” team member [assuming we can fix ‘Zoom face‘].”

Adding ChatGPT or other language models

“Now imagine combining that with ChatGPT, and it could function as an entry-level employee. People who collaborate with AI will go far. In the future, we’ll be talking about partnering with AI (not just using it), because technology is becoming more and more like a human,” said Dennis.

“Wherever you use a chatbot, we will be able to see a human face, and hopefully, you’ll be able to customize it so you get the look and feel that you want,” said Dennis.

However, “when a digital human can look like any person, there will be avatars to look like famous people, and of course, imposter issues,” he notes.


Out-of-this-world spacecraft parts: NASA designs with AI

NASA is designing specialized spacecraft parts known as “evolved structures” in as little as an hour or two using commercially available AI software. The evolved structures are intended to be more durable and lighter, and to require less human assistance. According to Ryan McClelland, the research engineer who pioneered the design of these parts, evolved structures can save up to two-thirds of the weight and reduce the risk of failure compared with standard components. Starting with the mission requirements, a computer-assisted design specialist draws the surfaces where the part connects to the instrument or spacecraft, and the AI software then connects the dots to create complex structure designs.

While AI software can generate complex designs, McClelland admits that algorithms require a human eye because human intuition knows what looks right, and the algorithm can sometimes produce structures that are too thin. The technology is currently being used in a number of NASA missions, including the EXoplanet Climate Infrared TElescope (EXCITE) mission, which is a balloon-borne telescope that studies hot Jupiter-type exoplanets orbiting other stars. AI-assisted design at NASA could allow for the production of larger components in orbit or the construction of structures on the Moon or Mars using materials found there, advancing “In-space Servicing, Assembly, and Manufacturing” (ISAM) capabilities.
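
To illustrate the generate-and-review idea in miniature, here is a hypothetical toy sketch (not NASA’s software): it randomly proposes candidate parts, filters out members thinner than an assumed manufacturability limit (the kind of flaw McClelland says a human eye must catch), and keeps the lightest survivor. All numbers are invented.

```python
# Toy generate-and-check sketch (not NASA's software), in the spirit of
# evolved-structure design: propose candidates, reject too-thin members,
# keep the lightest feasible design.
import random

MIN_THICKNESS_MM = 2.0  # assumed manufacturability limit

def random_candidate(n_struts: int = 6) -> list[float]:
    """Invent a candidate part as a list of strut thicknesses (mm)."""
    return [random.uniform(0.5, 8.0) for _ in range(n_struts)]

def mass_proxy(struts: list[float]) -> float:
    """Lower is better: thickness^2 stands in for cross-section area."""
    return sum(t ** 2 for t in struts)

def acceptable(struts: list[float]) -> bool:
    """Human-review rule: no strut thinner than the build limit."""
    return min(struts) >= MIN_THICKNESS_MM

random.seed(42)
candidates = (random_candidate() for _ in range(10_000))
feasible = [c for c in candidates if acceptable(c)]
best = min(feasible, key=mass_proxy)
print([round(t, 2) for t in best])
```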

Source.


Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter