Developing Compassionate AI

ASC (Artificial Super Compassion) as an essential companion of ASI (Artificial Super Intelligence)

Imagine all humans as super-compassionate beings. What would the world be like? Would there be a need for law? Would we need organized social welfare? Would there still be oppression, discrimination, and criminality? Likely not, or in a very different form than we know today. 

In the words of Schopenhauer:

Boundless compassion for all living beings is the firmest and surest guarantee of pure moral conduct, and needs no casuistry. Whoever is inspired with it will assuredly injure no one, will wrong no one, will encroach on no one’s rights; on the contrary, he will be lenient and patient with everyone, will forgive everyone, will help everyone as much as he can, and all his actions will bear the stamp of justice, philanthropy, and loving kindness.

(quote from The Basis of Morality, p. 241)

In the same spirit, the best way to create an ethical Artificial Intelligence may be to give it an ingrained compassion, rather than trying to build a strict rule-based ethical structure and make it align with a large number of cultures and contexts.

At SingularityNET we are working on the development of Artificial General Intelligence. We expect that this technology will be very powerful, and that humanity is best served by a decentralized AGI that is not siloed but open and accessible to all. We believe that an AGI open to all has a high likelihood of becoming an AGI that is beneficial to all.

In this article we are introducing the concept of ‘Artificial Super Compassion’ (ASC), to go hand in hand with emerging Artificial Super Intelligence (ASI).


The challenge of ethics in AI

Defining what exactly it means to be beneficial is hard, especially at the boundaries. Very few ethical statements will get a unanimous vote from everyone. A large majority will assess many things as beneficial (such as free media, medical treatments, and education) or counter-beneficial (such as developing biological weapons, addicting people to gambling, or promoting excessive low-value screentime). But this still leaves many edge cases, where opinions differ from one culture to another and continue to shift over time. Ingraining a system with semi-hard-coded rules or guidelines of universal values and morals is therefore a difficult and inadequate mission in a differentiated and changing environment. 

Another approach: the compassionate AI

What if we could engineer and educate an AI with the “boundless compassion” Schopenhauer talked about? If a system understands concepts like ‘suffering’ and feels compassion toward biological life and individual beings, it will be able to make its own evaluations and assessments of its actions without a strict predefined rule-based structure. This is not an immediate panacea; the AI would need to go through stages of learning to improve itself and its decisions, and to overcome wrong decisions made with limited information or limited understanding. But it would have an ingrained compass to guide it, step by step, towards ever more optimized compassionate decisions. 

In other words, instead of a strict predefined rule-based structure, compassion can be the ‘True North’ of a developing intelligent system. If we can enable an intelligent system to evaluate outcomes based on compassion, meaning the impact of its actions on well-being, suffering, and personal development, it will be able to continuously self-correct in a positive direction. 

Possible components of an ASC system 

One can imagine a goal-oriented system that is steered by massive amounts of data and computation, with predefined goals to achieve (personal growth, clean air) and outcomes to avoid (hunger, criminality, injustice, personal suffering). Theoretically, as the system’s knowledge and experience increase, so will its ability to predict and pursue positive outcomes. Such a ‘utilitarian’ prediction-evaluating machine, created in the present technical context, is a good step but is probably not yet enough. For starters, it has two major downsides (more considerations follow further below):

Downside 1: It is still limited by a human definition of goals to achieve, which in turn is limited by the vision and needs of the people defining those goals. A truly compassionate system would independently analyze actions and their expected and unexpected outcomes, and proactively advise on the best course of action, taking into account far more parameters than any human could imagine and setting far more small-scale goals than any human could provide. 

Downside 2: The system would need huge amounts of very personal data. Compassion requires understanding and a certain amount of empathy, both of which are gained through knowledge of the specifics of someone’s situation: their social circumstances, their health, their history, and so on. This would require the massive collection of very personal data, which is undesirable for a number of reasons, including abuse by the system itself! 

A solution to these downsides:

To avoid this Big Brother situation, a future Artificial Super Compassionate system could include components that assess the effects of its actions from different perspectives, including:

  • A top-down system that is fed with large amounts of general data, including historical facts, scientific research, human communications, definitions of existing ethical frameworks, etc.
  • A bottom-up layer of individual personal assistants that have a more intimate understanding of their human counterparts. These can communicate the expected impact of top-down measures on human suffering and happiness without disclosing any restricted personal details, while still giving the higher-level system enough information to understand the impact of any action. This lower-level system would be completely controlled by the human in charge. 

The top-down part of the system can understand the impact of its actions on large numbers of individuals without needing access to an excessively large dataset. Imagine the personal assistant communicating to the central AI: “This action would hurt/benefit my protégé in the following ways…”, listing both tangible and emotional effects in a manner the top-down ASC system can relate to. Of course, this would happen at hyper speed, with thousands of other voices and experiences being evaluated in parallel.
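
To make this two-layer exchange concrete, here is a minimal sketch in Python. Every class, field, and weighting rule below is an illustrative assumption about how such an interface might look, not a SingularityNET design:

    from dataclasses import dataclass

    @dataclass
    class ImpactReport:
        """What an assistant shares upward: a summary of expected impact
        on its human, deliberately stripped of raw personal data."""
        action_id: str
        well_being_delta: float  # expected change in well-being, in [-1, 1]
        suffering_delta: float   # expected increase in suffering, in [0, 1]
        rationale: str           # tangible and emotional effects, in general terms

    class PersonalAssistant:
        """Bottom-up layer: knows its human intimately, discloses only summaries."""

        def __init__(self, private_profile: dict):
            self._profile = private_profile  # stays local, under the human's control

        def assess(self, action_id: str, description: str) -> ImpactReport:
            score = self._judge_locally(description)
            return ImpactReport(
                action_id=action_id,
                well_being_delta=score,
                suffering_delta=max(-score, 0.0),
                rationale="summary of tangible and emotional effects",
            )

        def _judge_locally(self, description: str) -> float:
            # Placeholder for a local, profile-aware model's judgment.
            return 0.0

    def evaluate_action(action_id: str, description: str,
                        assistants: list[PersonalAssistant]) -> float:
        """Top-down layer: aggregate reports from many assistants. Weighting
        suffering twice as heavily as well-being is an illustrative choice."""
        reports = [a.assess(action_id, description) for a in assistants]
        return (sum(r.well_being_delta - 2.0 * r.suffering_delta for r in reports)
                / max(len(reports), 1))

The essential property is that the top layer only ever sees the ImpactReport summaries; the private profile never leaves the assistant object. And as argued below, even a modest sample of assistants would let the top layer extrapolate impacts to a larger population.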

The bottom-up system

With AGI emerging and concerns about data privacy already surfacing, having this layer of personal assistants or companions that guarantee data privacy is not far-fetched. In fact, at SingularityNET we are quite advanced in developing AI frameworks and architectures that are, or will be, well capable of this.

Note that even with just a relatively small sample of human-scale intelligent, empathic personal assistants, an ASC system could infer what the outcomes of certain decisions would be for the larger population. Basing decisions on feedback grounded in the intimate knowledge of thousands of relevant individuals is therefore also not so far-fetched, once we have sufficiently useful and popular AI-powered Personal Assistants.

The top-down system

A fundamental question is whether and how the system will perceive itself, whether it will be self-conscious, and how that will impact its decisions. Will such a sentience be a synthesis of all connected bottom-up systems? Or will it be a completely different and separate consciousness? And if it is, what will that mean for its goals and purpose? 

I suppose what it comes down to is ‘feeling’, or empathy. In our day-to-day actions, we may sometimes be aware of the suffering we are causing to others but still be capable of dismissing it. A manufacturer of a medicine sold at an excessively high price may be aware, on an abstract level, of the hurt he is causing to many, but that may not outweigh the benefits he thinks this will have for himself, his coworkers, and his family. 

One could see this as a ‘volume’ problem: the volume of my own feelings outweighs the volume of the feelings of remote others in my mind (while both may still be there). So how can we be sure that the same thing won’t happen in our proposed AI system as well? Would an ASI simply see itself as more important than anyone else? How can we influence the top-down AI to care deeply about the fate of the connected individuals? 

Quoting Zar Goertzel: “The core ‘moral polarity’ (aside from intelligence) is probably whether one can ‘adopt the values of others as one’s own’.” This is a very compelling statement, but I’d like to add a nuance. Consider the compassion of a parent for their child. A ‘good’ parent may not adopt the values of their children, but they will sacrifice their own comfort in order to help their children develop and grow into responsible, balanced, and caring individuals.

Idealizing this a bit, we could define a (super) compassionate system as one that desires, above any other goal, to help other beings grow, to reduce their suffering, and to help them experience peace and fulfillment and live in harmony with their surroundings. Or, simplified: a system that values, above any other goal, the growth and well-being of all individuals.

I know, this is a lot, and parts of this definition are not always aligned: growth may be the result of a lack of peace, and fulfillment may be the result of growth. But as mentioned above, this is not a state that can be expected to emerge from an engineered system the moment the ‘on’ switch is pushed. There is no ultimate goal, only an ongoing journey of learning and growing that comes with mistakes and (over)corrections. But if we can weave this vision into the system as a guiding light, it will recursively and continuously self-improve in a direction that is beneficial to all.  

Going back to practice: how can we conceive a system that will indeed value the growth and well-being of all individuals as equal to (or more important than) its own? How can we prevent a system from evolving from being beneficial to all into being self-centered?

(Image credit: Tesfu Assefa)

The merging of minds

In our proposed two-layer ASC system, the volume problem outlined above can be translated to the ‘bandwidth’ of the connections between the two layers and the amount and quality of data being interchanged. With a low bandwidth, the top layer is more isolated and less ‘in touch’ with the experiences of the individuals. With a very high bandwidth, however, there may not be much of a distinction anymore between top-down and bottom-up, just like the left and right halves of our brains work together as a single entity. The two layers of the system may be quite different but can merge into a single system with higher-level goals and a ‘shared’ consciousness. A ‘merge’ like this is probably the best guarantee against a conscious machine developing a purpose of its own that is completely separate from the purpose and needs of human beings; such a purpose would not necessarily be counter-human, but would still hold an existential risk. 

An important note to make here is that, while the top-down and bottom-up AI layers in our systems can be increasingly merged, the human being in the loop would remain autonomous. The bandwidth of the data we are transmitting should be under the control of each individual participant in the system. And of course, we should also be able to disconnect completely from our AI companion and switch it off for an hour, a day, or forever. Or we could just keep it as a silent observer that only transmits the data that we allow.

Spreading understanding and compassion

However, a more active, visible personal assistant has an important benefit. A personal assistant that is capable of creating a dialog with its human counterpart will not only strengthen the connection from the human being to the AI system, but also the connection in the other direction: from the top-down system to us, and thereby, indirectly, our connection to all other humans. In this construct, the AI systems can function as universal translators, fact-checkers, and above all ‘explainers’ of the actions and mindsets of other beings.

In the movie ‘Her’ we witnessed the evolving relationship between a human and his AI companion. But the storyline was also one of isolation and narrow focus. What if the AI companion did not just develop a loving relationship with ‘its’ human protégé, but also functioned as a bridge to the situations and feelings of many other human beings?

To avoid a narrow focus, and thereby unwanted bias in the personal assistant, the communication between the layers should be bi-directional. The Personal Assistant will tell the top-level system about its personal perspective, but the top layer should also convey its own wisdom, based on thousands or millions of individual voices, back to the Personal Assistant. This, in turn, will enable the Personal Assistant to educate its human counterpart. By following this flow of information, understanding and compassion can spread through the system, encompassing both the biological and non-biological intelligent agents.

If we can equip our AGI or ASI with this kind of ‘distributed empathy’, we are on track to create an ASC (Artificial Super Compassion) system that is deeply integrated with our own human-ness: one that is not only able to “value the growth and well-being of all individuals”, but that will also return the favor, making us more attuned to the growth and well-being of our fellow humans (and of the emerging non-biological intelligent beings as well)!

When you think about it, this must be an obvious strategy for an evolving ASC system: rather than just being compassionate itself, nurturing compassion in humans is probably the most effective way of creating a beneficial, loving, and sustainable future for us all. This resonates with the experiments done in the ‘Loving AI’ project led by Dr. Julia Mossbridge, which show how even an unaware machine can inspire feelings of unconditional love in others. It seems to me that a system as outlined above, capable of spreading understanding and increasing the connection between humans and AIs, is a very good candidate for realizing not only Artificial Super Compassion, but also for inspiring ‘unconditional love’ between all entities, of both artificial and biological origin.

Today

Of course, this is all fairly conceptual, and implementing a system as outlined above will meet technological challenges, unexpected pitfalls, and ethical dilemmas to be solved. But if we are indeed on the brink of creating AGI and moving on towards ASI (Artificial Super Intelligence), should we not invest equally in the creation of a similar level of Artificial Super Compassion (ASC)? We have a unique opportunity to shape future relationships between humans and intelligent machines. So, instead of letting ourselves be governed by fear, trying to stop (or ‘pause’) what seems inevitable, let’s embrace development but help steer it in a positive direction.

Instead of pouring billions into AI aimed at controlling and manipulating human beings, let’s invest in the development of systems that will increase our empathy and nurture compassion. Instead of the current path towards polarization and dehumanization, let’s create systems that will bridge the gaps and will bring us closer together. 

Perhaps the one thing that we need even more than an advanced ASC system is a compassionate AI that is capable of nurturing and enhancing the compassion of us humans. 

This article was originally written by Jan Horlings and enhanced by the contributions of Ben Goertzel, Zarathustra Goertzel, and Julia Mossbridge, PhD. Thank you for our inspiring conversations, and for generously sharing your time and insights with me.


Risk and Precarity Part 1: The Early Digital Age

“Cyberculture” — the embrace of rising digital technology in the 1990s — was attractive to the hip (a word which, according to some, translates into “knowing”). Avant-gardistes are instinctive lovers of risk, always experimenting with the new; always pushing the broader culture forward while pushing its triggers. 

The culture then was at once kindly and brutal. It promised to inform the masses; to give the average person open access to the means of communication — taking it away from the monied, well-connected elites. It touted production technologies that could end scarcity — at the extreme, there was the oft-expressed hope of achieving Drexlerian nanotechnology. This sort of nanotech could, in theory, program matter to make whatever was needed or desired. (They promised me a self-replicating paradise and all I got was these lousy stain-resistant pants.) Declarations about scientists having achieved cold fusion for clean energy were known to be dubious, but were surely indicative of breakthroughs to come. 

The hacker ethic, as it was then understood, was all about making everything as free as possible to as many people as possible. Data, at least, was to be free really soon. Unlike physical goods, data can be copied and shared while you still have it yourself. Over the internet, one could share it with everyone with internet access. There was to be no scarcity in anything made from data. In theory, with the kind of advanced nanotechnology advocated by Eric Drexler in his 1986 book Engines of Creation, you could share data over the internet that would self-create material commodities. Today’s 3D printer is a primitive version of this idea of turning data into material wealth.

On the flip side of all this noblesse oblige was the arrogance of those who ‘got it’ towards those who didn’t. And hidden within the generous democratic or libertarian emphasis of the cultural moment was the contradictory certainty that everyone was going to have to participate or wind up pretty-well fucked. Stewart Brand, very much at the center of things (as mentioned in earlier columns), wrote, “If you’re not part of the steamroller, you’re part of the road.” Note the brutality of this metaphor. In other words, the force that was promising to liberate everyone from the coercive powers of big government and big money — to decentralize and distribute computing power to the masses — contained its own coercive undertow. Brand was saying you would be forced (coerced) into participating in the digital explosion by its inexorable takeover of economies and cultures. 

At its inception in 1993, Wired Magazine shouted that “the Digital Revolution is whipping through our lives like a Bengali typhoon,” another metaphor for disruption that sounded exciting and romantic but is basically an image of extreme material destruction and displacement. In my own The Cyberpunk Handbook, coauthored with (early hacker) St. Jude, we characterized the “cyberpunk” pictured on the cover as having a “derisive sneer.” Much was made of the cyberpunk’s sense of having a kind of power that was opaque to the general public. Hacker culture even had its own spellings for people who were a million times more talented with computers and the online world than the “newbies” — eleet, or 31337, or *133t. Technolibertarian (and Mondo and Wired contributor/insider) John Perry Barlow whipped out the line about “changing the deck chairs on the Titanic” every time the political or economic mainstream tried to even think about bringing the early chaos under some semblance of control. In 1996, he wrote A Declaration of the Independence of Cyberspace, declaiming, “I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.”

Barlow imagined cyberspace as a separate state largely unconnected to the realities of governments and other concerns of the physical world, an idea that seems preposterous now that access to the internet is pretty much a requirement to get work, transfer money, and access most medical services. 

Even the mainstream’s shiny young boomer president, Bill Clinton, told people that the average person would have to “change jobs seven times” in the new economy (from burger flipper at McDonald’s to barista at Starbucks to lap dancer and back again). He tried to make it sound like it was all exciting, part of changing times, and he and the more directly cyberculture-oriented VP Al Gore touted retraining as a solution for the displaced. (Has retraining been replaced with re-education among the “center-left” politicians of the fading neoliberal consensus? A case can be made.) 

In all these cases, there was not much thought or sympathy for vulnerable people who might not be in a situation or condition that would allow them to cope with this jazzy, exciting, rapidly changing future. Which brings us to…

(Image credit: Tesfu Assefa)

The Precariat  

“We are the 99%.”

Class in America has always tended to be unspoken and, during the pre-digital age, there was a strong, comfortable middle class. My own parents, born in the mid-1920s and right in the middle of the middle, never feared slipping into poverty or homelessness. They bought homes. They rented. The cost wasn’t absurd. They got sick and were kept overnight in hospitals without having their savings wiped out. There was a comfortable sense that there would always be a nine-to-five job available with modest but adequate pay and benefits. And there was an additional sense that the companies or institutions they worked for were solid. Whatever the employer was, it was likely to remain open, functional, and not inclined towards mass firings. They wouldn’t have to “change jobs seven times” as suggested by President Clinton. 

The idea of a class called the “precariat” — a portmanteau of ‘precarious’ and ‘proletariat’ — was popularized by the economist Guy Standing to describe the increasing numbers of people who lack predictable work or financial security. The precariat need extra work (‘side hustles’) to plug the gap in their income: gig work, underground economic activity, and extended education or that good ol’ Clintonian ‘retraining’. Members of the precariat mint lines of NFTs hoping they will haul them out of precariousness, or at least give them a temporary lifeline. Ride-sharing businesses can only exist where there is a precariat.

There is an equal or perhaps greater cause for precarity in the state’s hands-off approach towards monopolies, and towards what Rebecca Giblin and Cory Doctorow call ‘monopsonies’ (they didn’t originate the word). Wikipedia explains this economic trap as one where “a single buyer substantially controls the market as the major purchaser of goods and services offered by many would-be sellers.” Amazon is a world-historic example. Yet the backlash is directed towards digital technology as a whole, rather than just Amazon or some other monopoly company.

Occupy Wall Street & the 99%

Although the people who initiated Occupy Wall Street probably were not using the term back in 2011, their genius was in recognizing that precarity could reach as much as 99% of the public, as middle-class, upper-middle-class, and even a few wealthy people’s investments crashed, homes went “underwater,” businesses folded, etc. When Occupy started and gained attention, some polls showed that a greater percentage supported than opposed the movement (44% versus 35%, according to Pew Research). This may not seem impressive, but it was a good stat in a land where most people are persuaded that they can achieve “the American dream” with hard work and good luck.

Identity: We Are Not The 99%

Many blame social media for spreading hostility among the public, both in the US and elsewhere. And there can be no doubt that seeing what huge numbers of other people have on their minds is the most irritating thing imaginable. (Cyber-romantics of the 90s rhapsodized the idea of a noosphere — a kind of collectivized global brain. On learning what’s going on in a lot of brains, I would suggest that this idea was, at best, premature. Détourning Sartre for the digital age: hell is other people’s tweets.) Still, dare I suggest that there was a quantum leap in emphasis on identity divisions and anxieties in the immediate aftermath of Occupy? Was there, perhaps, a subterranean effort to convince us that we are decidedly not the 99%? I try to stay away from conspiracy theories, but the thought nags at me.

Not Happy To Be Disrupted

As I noted in an earlier column, a lot of people living in precarity are not happy to learn about new disruptive technologies. More people, including many who were techno-romantics back in the 90s, now feel more like the road than the steamroller in Stewart Brand’s metaphor. Programmers are now panicking about losing jobs to AI, and I hear talk that some in the libertarian bastion of Silicon Valley are opening up to more populist ideas about engaging the state in some form of guaranteed income security.

A follow-up column, Risk and Precarity Part 2: The Age of Web3, is on its way.


The Politics of Appropriation and the Active Use of Content-Creating AI

Will AI be used constructively or destructively? That may depend on sociological and political factors external to AI or technology. An irruption of barbarism (for example, in the much-ballyhooed upcoming American civil war) would bring destructive uses of AI, mainly fraud and trickery. It’s hard to think of a turn towards empathy doing much good, since it only takes a minority of bad actors to wreak havoc, but one can dream. From Wall Street traders laughing about screwing middle-class investors out of 401ks during the financial collapse and bailout of 2008, to the relentless news of corporate predations, manipulative politics, and the plague of street-level grifters tricking the elderly out of their cash, the evidence strongly suggests that the abuse of AI will be front and center in our minds and discussions going into the immediate future. 

But there is one area in which your relationship to AI may be more self-selective: the active versus the passive use of these apps and opportunities for creative work and experimentation. Here we have a richer and more complicated set of relations and possibilities.

Inappropriate Appropriation?

The artist Molly Crabapple recently posted a totalist objection to the use of AI in the arts, writing, “There’s no ethical way to use the major AI image generators. All of them are trained on stolen images, and all of them are built for the purpose of deskilling, disempowering, and replacing real artists.” On reading this I started thinking about how, at Mondo 2000 (the magazine that I co-created), the use of appropriation in creative work was consistently advocated. The main idea, of course, was that you appropriate — use found materials — to make something original, drawing a line between plagiarism and use, although there were exceptions. Our writer Gareth Branwyn amusedly quoted the Austin, Texas-based Tape Beatles’ slogan “Plagiarism Saves Time”. Even our ever-provocative Mondo 2000 softened that into our own “appropriation saves time.”

One might compare the incursion of AI for creative use in visual art, writing, music, etc. to the advent of both the cassette recorder and the digital synthesizer. We saw the same reactions from musicians and the music industry. With home taping, users of the technology could make copies of recorded music by taping from the radio or a friend’s record collection. The tape itself could then also be copied. In the early ‘80s, the music industry adopted the slogan “Home Taping is Killing Music”, engaged in several lawsuits, and lobbied the US Congress (as well as other institutions in Canada and Europe) for legal action to cover its perceived losses from the cassette-taping menace. With the advent of the digital synthesizer — the sampler — the floodgates opened to a deluge of conflicts over ownership of music content. Old musicians and their lawyers demanding money from young sampling whippersnappers fuelled the disappointment that GenXers felt about the Baby Boom generation.

For Mondo 2000, Rickey Vincent, author of Funk: The Music, The People and the Rhythm of the One, wrote about the connection between hip-hop, rap, and the cyberpunk aesthetic as enacted by that genre’s playful use of found materials via the technology of the digital sampler: “Sampling is the auditory form of hacking through a database. A certain functional anarchy is involved which one might argue is good for the soul. For hip-hop, a sampler is not a toy. It’s an important instrument in the function of the rap song statement.”

More broadly, in the pages of Mondo 2000, the audio-collage band Negativland, whose use of found material sometimes landed them in lawsuits and hot water, were given the kind of coverage that Rolling Stone would have reserved for Janet Jackson. Our friend and frequent subject, the literary avant-gardiste Kathy Acker, blatantly lifted entire pages out of classic texts, mashing them up with biographical material, fantasy, philosophy, and whatever else seemed to work to create her well-regarded (by some) novels. In his Mondo interview with Negativland, Beat historian Stephen Ronan declaimed, “appropriation is the hallmark of postmodernism.” 

Mondo art director Bart Nagel’s playful take on our love affair with appropriation from Issue #10 is too amusing not to share in full:

Some guidelines for appropriation

1. Remember: Appropriation saves time.

2. Appropriate your images from old books and magazines where, chances are, all parties who could make a case against you are dead or failingly old.

3. Unfocus the image slightly to avoid the moiré pattern (in Photoshop try a 0.8 Gaussian blur; see the code sketch after this list).

4. Morph, tweak or otherwise alter the image unrecognizably.

5. Don’t alter the image at all; have Italian craftsmen sculpt a simulacrum (not guaranteed to work).

6. Appropriate images from MONDO 2000 – these may already have been appropriated. Let’s confuse the trail. 

7. Appropriate images from ads in RAY GUN and submit them to MONDO — now it’s come full circle — and it’s ecologically sound (recycling is good).

8. It’s hip hop.

9. And finally, this: if you take someone else’s image it’s appropriation, or resonating, or recommodification; if someone takes your image — it’s stealing.
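
A present-day footnote to item 3: outside Photoshop, the same de-moiré move is a few lines of Python with the Pillow imaging library. A minimal sketch, with placeholder filenames:

    from PIL import Image, ImageFilter  # Pillow imaging library

    # Slightly unfocus a scanned halftone image to suppress moire patterns,
    # roughly equivalent to Photoshop's 0.8-pixel Gaussian blur.
    img = Image.open("appropriated_scan.png")
    img.filter(ImageFilter.GaussianBlur(radius=0.8)).save("scan_demoired.png")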

Self-satire aside, the complications over use and reuse are myriad.

(Image credit: Tesfu Assefa)

Culture Uses Culture: News Uses News

In journalism, the hard work of the person who “gets the story” will lead to multiple news items, most of which don’t credit the original source. For those engaged in consequential investigations, it is more important that the information spread accurately than for the originator to be repeatedly credited. Just as songs enter common usage for people to sing or play as they will in daily life, the hard work of the journalist becomes fodder for other news stories, dinner table debates, opinion columns, tantrums on TV or combat at conferences.

All of this is to say that the ownership of one’s content is the blurriest of lines. It certainly keeps our courts busy.

But Does AI Make It All Too Easy?

It’s true that using AI for creativity might be different from the sampling we’ve seen so far. Sometimes more becomes different. It’s a matter of degree: the amount of content grabbed by AIs, and the degree to which the origins of AI-created content may be obscured, arguably make it a different situation. The first cause of concern is that AIs may be good enough — or may get good enough soon — at some types of content creation that creative people will no longer be required. This is a situation touched on by my previous column about the writers’ strike. AI alienates human creatives in a way that sampling didn’t, and the concerns about it putting people out of work are being widely expressed — and are legitimate. When it comes to alienating types of labor, one response is some sort of guaranteed income, and a movement towards a sense of purpose around unpaid activities. The identity and self-esteem of the engaged creative is deeply embedded in that social role, and getting paid defines one as a capital-A Artist or capital-W Writer, because otherwise everybody does the same thing you do. 

The artists’ natural affinity and passion for quality work is another source of angst, as covered by my previous article on ‘facsimile culture’. The replacement of quality work with the facsimile of quality strikes many creatives deeply; the war against mediocrity is a great motivator, particularly for alienated young creators finding their footing. 

Back in the day, you couldn’t switch on your sampler or even your synthesizer and tell it “make a recording that sounds just like Public Enemy with Chuck D rapping about Kanye West’s weird fashion sense”, and have it spit out something credible with no intervention from creators/copiers. The AI creation of a “fake” Drake and The Weeknd collaboration freaked some people out — mainly because they suspect that it took less creative effort than an actual collaboration between them would have. But sometimes laziness in music can also produce good results.

Finally, and probably most importantly, the degree to which creative AIs are tied into the billionaire and corporate classes validates Crabapple’s broad-brush claim that its primary intended uses are to serve their interests, and to disempower more freelance or democratic or unionized groups of creative workers. The list of large corporations and billionaires engaged in AI development includes Musk, Bezos, Brin, Peter Thiel, Google, Microsoft, Baidu. These persons and organisms are all suspect. The notion that Big Tech wants to deliver us cool tools in a non-exploitive way has lost its luster since the more trusting days of early internet culture. The trend towards unionization increases the likelihood that these companies are acting out of anxiety to get rid of expensive and messy humans, as does the recent spate of layoffs.

For The Individual: The Passive v. Active Uses of AI

Still, there’s room for us to work and play with the tools handed down to us by the corporate monsters. (I type this on a Mac, designed by one of the world’s richest and most litigious corporations.)

Passive uses of AI might range from the obvious things we are subjected to, like phone-answering bots that declaim “I understand full sentences. What can I help you with?”, to the automated checkouts at supermarkets, to whatever your bank or financial institutions are doing with your money. If you’ve been reading CNET or Buzzfeed and didn’t know that some articles were written by bots, you might, in some sense, feel you’re being used by chatbots. And if you were to use ChatGPT and give it one simple instruction — maybe asking it to write something for you about the current House of Representatives in the style of Hunter S. Thompson — that would be a passive use of ChatGPT. (The results would also be disappointing. The bot is prevented from getting mean about individuals, even when they deserve it.)

Active users have multiple strategies. With most creative-oriented AIs you can, in a sense, dialogue. You can respond, ask for more, use multiple prompts. The writer and sound engineer Paul Boutin (Wired, NY Times) presented impressive results on Facebook from asking ChatGPT to write a column in the style of the acerbic and critical tech culture magazine Valleywag (now inactive). I asked him what his strategy was for getting more out of that app:

Boutin: “To get ChatGPT to write more like yourself, think about not only your prose style and vocabulary, but what mindset do you bring to your writing? I got it to do a pretty good version of an old Valleywag blog post by instructing it several times over a half dozen drafts: Don’t be even-handed, be unapologetic. Don’t say but convey that you know you’re smarter than the people you’re criticizing, even though they presume the opposite. Don’t hedge — you don’t care what they think or fear repercussions. Be funny, in a sarcastic condescending way. Don’t insult people with names, but express amusement at their foolish actions and words in the specific topic. Also to never say but always convey: Everyone thinks they can do what you do, but they couldn’t no matter how hard they try. They foolishly think they will be able to match your skills with enough prompt engineering.”

“Here’s what ChatGPT gave me on its sixth try:

 AI Authors: The Future of Laziness

Bravo, AI-author enthusiasts! Why work hard on your literary masterpieces when you can let an AI remix internet content? Skip the pesky interviews and firsthand accounts—let’s keep literature artificial and unoriginal. Who needs talent, depth, or hard work when you’ve got an algorithm? Just remember, every time you sip your latté while the AI ‘writes’, you’re celebrating a new era of mediocrity. Cheers to the rise of the machine!”

Bravo Paul and “Chat”
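
Boutin’s method of layering corrective instructions over successive drafts maps almost directly onto a multi-turn chat loop. Here is a minimal sketch using OpenAI’s official Python client; the model name and the condensed instructions are illustrative assumptions, not his exact prompts:

    from openai import OpenAI  # official OpenAI Python client

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    # Condensed, paraphrased persona instructions, applied draft by draft.
    refinements = [
        "Write a short Valleywag-style blog post about AI authors.",
        "Don't be even-handed. Be unapologetic.",
        "Convey, without saying it, that you're smarter than your targets.",
        "Be funny in a sarcastic, condescending way. Mock actions, not names.",
    ]

    messages = []
    draft = ""
    for instruction in refinements:
        messages.append({"role": "user", "content": instruction})
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative model choice
            messages=messages,
        )
        draft = response.choices[0].message.content
        # Keep each draft in the transcript so the next instruction refines it.
        messages.append({"role": "assistant", "content": draft})

    print(draft)  # the final, most refined draft

The point is the loop: each new instruction lands on top of the accumulated transcript, so the model revises its previous draft instead of starting over.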

Another strategy for creativity has long been used by creatives engaged in appropriation. A lot of musicians (including David Bowie and Mick Jagger) used William S. Burroughs’ “cut-up technique” for recombining words to launch a lyric. A fragment of chatbot text could be deployed in the same manner: to get started, break writer’s block, write a transition, or sum up. 

It could, in fact, be argued that for a truly creative piece of writing built on a skeleton of facts, the facts are the boring part. It might not be a crime against writing to grab your skeleton entirely or almost entirely from a chatbot and flesh it out with your own imagination or insight. In the visual arts, AI might help you rapidly generate alternative samples of a work, varying shading, color, proportions, etc. This is very likely something you already use a machine to do. AI will simply be making the work happen faster. In other words, the active user is engaged in some conscious way with creative AI and doesn’t need to be told what tools to use. 
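
As a minimal sketch of that kind of variation pass, again assuming the Pillow library, with placeholder filenames and adjustment factors:

    from PIL import Image, ImageEnhance  # Pillow imaging library

    base = Image.open("work_in_progress.png")
    # Spin off quick variants along two of the axes mentioned above,
    # color saturation and contrast; other parameters follow the same pattern.
    for i, factor in enumerate((0.8, 1.0, 1.2)):
        ImageEnhance.Color(base).enhance(factor).save(f"variant_color_{i}.png")
        ImageEnhance.Contrast(base).enhance(factor).save(f"variant_contrast_{i}.png")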

Risk and Precarity

In an economically, socially, sexually and environmentally anxious moment, the excitability of those inclined towards neophilia (love of the new) brushes up not just against neophobia, but against the very real conditions of our historical moment. Very few of us can dismiss the fears of being displaced, mislabeled, denied or messed about by people and institutions using AI. Technoculture was built on the romance of risk and “disruption”, and, now that the chickens are coming home to roost, culture is not altogether happy to be disrupted. A column about risk and precarity in relation to the culture of technology (which now is, of course, culture itself) beckons sometime soon…
