The Politics of Appropriation and the Active Use of Content-Creating AI

Will AI be used constructively or destructively? The answer likely depends on sociological and political factors external to AI or technology. An irruption of barbarism (for example, in the much-ballyhooed upcoming American civil war) would bring destructive uses of AI, mainly fraud and trickery. It’s hard to imagine a turn towards empathy doing much good, since it only takes a minority of bad actors to wreak havoc, but one can dream. From Wall Street traders laughing about screwing middle-class investors out of their 401(k)s during the financial collapse and bailout of 2008, to the relentless news of corporate predation and manipulative politics, to the plague of street-level grifters tricking the elderly out of their cash, the evidence strongly suggests that the abuse of AI will be front and center in our minds and discussions for the immediate future.

But there is one area in which your relationship to AI may be more self-selective: the active versus the passive use of these apps and opportunities for creative work and experimentation. Here we have a richer and more complicated set of relations and possibilities.

Inappropriate Appropriation?

The artist Molly Crabapple recently posted a totalist objection to the use of AI in the arts, writing, “There’s no ethical way to use the major AI image generators. All of them are trained on stolen images, and all of them are built for the purpose of deskilling, disempowering, and replacing real artists.” On reading this, I started thinking about how, at Mondo 2000 (the magazine that I co-created), the use of appropriation in creative work was consistently advocated. The main idea, of course, was that you appropriate — use found materials — to make something original, drawing a line between plagiarism and use, although there were exceptions. Our writer Gareth Branwyn amusedly quoted the Austin, Texas-based Tape Beatles’ slogan “Plagiarism Saves Time”. Even the ever-provocative Mondo 2000 softened that into our own “appropriation saves time.”

One might compare the incursion of AI into creative work in visual art, writing, music, etc. to the advent of both the cassette recorder and the digital sampler. Each provoked the same reactions from musicians and the music industry. With home taping, users of the technology could make copies of recorded music by taping from the radio or a friend’s record collection, and the tape itself could then be copied in turn. In the early ’80s, the music industry adopted the slogan “Home Taping Is Killing Music”, engaged in several lawsuits, and lobbied the US Congress (as well as other institutions in Canada and Europe) for legal action to cover its perceived losses from the cassette-taping menace. With the advent of the digital sampler, the floodgates opened to a deluge of conflicts over ownership of music content. Old musicians and their lawyers demanding money from young sampling whippersnappers fueled the disappointment that Gen Xers felt about the Baby Boom generation.

For Mondo 2000, Rickey Vincent, author of Funk: The Music, The People and the Rhythm of the One, wrote about the connection between hip-hop, rap, and the cyberpunk aesthetic as enacted by that genre’s playful use of found materials via the technology of the digital sampler: “Sampling is the auditory form of hacking through a database. A certain functional anarchy is involved which one might argue is good for the soul. For hip-hop, a sampler is not a toy. It’s an important instrument in the function of the rap song statement.”

More broadly, in the pages of Mondo 2000, the audio-collage band Negativland, whose use of found material sometimes landed them in lawsuits and hot water, were given the kind of coverage that Rolling Stone would have reserved for Janet Jackson. Our friend and frequent subject, the literary avant-gardiste Kathy Acker, blatantly lifted entire pages out of classic texts, mashing them up with biographical material, fantasy, philosophy, and whatever else seemed to work to create her well-regarded (by some) novels. In his Mondo interview with Negativland, Beat historian Stephen Ronan declaimed, “appropriation is the hallmark of postmodernism.”

Mondo art director Bart Nagel’s playful take on our love affair with appropriation from Issue #10 is too amusing not to share in full:

Some guidelines for appropriation

1. Remember: Appropriation saves time.

2. Appropriate your images from old books and magazines where, chances are, all parties who could make a case against you are dead or failingly old.

3. Unfocus the image slightly to avoid the moiré pattern (in Photoshop try a 0.8 Gaussian blur).

4. Morph, tweak or otherwise alter the image unrecognizably.

5. Don’t alter the image at all; have Italian craftsmen sculpt a simulacrum (not guaranteed to work).

6. Appropriate images from MONDO 2000 – these may already have been appropriated. Let’s confuse the trail. 

7. Appropriate images from ads in RAY GUN and submit them to MONDO — now it’s come full circle — and it’s ecologically sound (recycling is good).

8. It’s hip hop.

9. And finally, this: if you take someone else’s image it’s appropriation, or resonating, or recommodification; if someone takes your image — it’s stealing.

Self-satire aside, the complications over use and reuse are myriad.

Credit: Tesfu Assefa

Culture Uses Culture: News Uses News

In journalism, the hard work of the person who “gets the story” will lead to multiple news items, most of which don’t credit the original source. For those engaged in consequential investigations, it is more important that the information spread accurately than for the originator to be repeatedly credited. Just as songs enter common usage for people to sing or play as they will in daily life, the hard work of the journalist becomes fodder for other news stories, dinner table debates, opinion columns, tantrums on TV or combat at conferences.

All of this is to say that the ownership of one’s content is the blurriest of lines. It certainly keeps our courts busy.

But Does AI Make It All Too Easy?

It’s true that using AI for creativity might be different from the sampling we’ve seen so far. Sometimes more becomes different. It’s a matter of degree: the sheer amount of content grabbed by AIs, and the degree to which the origins of AI-created content may be obscured, arguably make this a different situation. The first cause for concern is that AIs may be good enough — or may soon get good enough — at some types of content creation that creative people will no longer be required. This is a situation touched on in my previous column about the writers’ strike. AI alienates human creatives in a way that sampling didn’t, and the concerns about it putting people out of work are being widely expressed — and are legitimate. When it comes to alienating types of labor, one response is some sort of guaranteed income, along with a movement towards finding a sense of purpose in unpaid activities. But the identity and self-esteem of the engaged creative is deeply embedded in that social role, and getting paid is part of what defines one as a capital-A Artist or capital-W Writer, because otherwise everybody does the same thing you do.

Artists’ natural affinity and passion for quality work are another source of angst, as covered in my previous article on ‘facsimile culture’. The replacement of quality work with the facsimile of quality strikes many creatives deeply; the war against mediocrity is a great motivator, particularly for alienated young creators finding their footing.

Back in the day, you couldn’t switch on your sampler or even your synthesizer, tell it “make a recording that sounds just like Public Enemy with Chuck D rapping about Kanye West’s weird fashion sense”, and have it spit out something credible with no intervention from creators/copiers. The AI-created “fake” Drake and The Weeknd collaboration freaked some people out — mainly because they suspected it took less creative effort than an actual collaboration between the two would have. But sometimes laziness in music can also produce good results.

Finally, and probably most importantly, the degree to which creative AIs are tied into the billionaire and corporate classes validates Crabapple’s broad-brush claim that their primary intended uses are to serve those classes’ interests, and to disempower freelance, democratic, or unionized groups of creative workers. The list of large corporations and billionaires engaged in AI development includes Elon Musk, Jeff Bezos, Sergey Brin, Peter Thiel, Google, Microsoft, and Baidu. These persons and organizations are all suspect. The notion that Big Tech wants to deliver us cool tools in a non-exploitative way has lost its luster since the more trusting days of early internet culture. The trend towards unionization increases the likelihood that these companies are acting out of an anxiety to get rid of expensive and messy humans, as does the recent spate of layoffs.

For The Individual: The Passive v. Active Uses of AI

Still, there’s room for us to work and play with the tools handed down to us by the corporate monsters. (I type this on a Mac, designed by one of the world’s richest and most litigious corporations.)

Passive uses of AI might range from the obvious things we are subjected to, like phone-answering bots that declaim “I understand full sentences. What can I help you with?”, to the automated checkouts at supermarkets, to whatever your bank or financial institutions are doing with your money. If you’ve been reading CNET or Buzzfeed and didn’t know that some articles were written by bots, you might, in some sense, feel you’re being used by chatbots. And if you were to use ChatGPT and give it one simple instruction — maybe asking it to write something for you about the current House of Representatives in the style of Hunter S. Thompson — that would be a passive use of ChatGPT. (The results would also be disappointing. The bot is prevented from getting mean about individuals, even when they deserve it.)

Active users have multiple strategies. With most creative-oriented AIs you can, in a sense, hold a dialogue: you can respond, ask for more, use multiple prompts. The writer and sound engineer Paul Boutin (Wired, NY Times) posted impressive results on Facebook from asking ChatGPT to write a column in the style of the acerbic and critical tech culture blog Valleywag (now inactive). I asked him what his strategy was for getting more out of that app:

Boutin: “To get ChatGPT to write more like yourself, think about not only your prose style and vocabulary, but what mindset do you bring to your writing? I got it to do a pretty good version of an old Valleywag blog post by instructing it several times over a half dozen drafts: Don’t be even-handed, be unapologetic. Don’t say but convey that you know you’re smarter than the people you’re criticizing, even though they presume the opposite. Don’t hedge — you don’t care what they think or fear repercussions. Be funny, in a sarcastic condescending way. Don’t insult people with names, but express amusement at their foolish actions and words in the specific topic. Also to never say but always convey: Everyone thinks they can do what you do, but they couldn’t no matter how hard they try. They foolishly think they will be able to match your skills with enough prompt engineering.”

“Here’s what ChatGPT gave me on its sixth try:

AI Authors: The Future of Laziness

Bravo, AI-author enthusiasts! Why work hard on your literary masterpieces when you can let an AI remix internet content? Skip the pesky interviews and firsthand accounts—let’s keep literature artificial and unoriginal. Who needs talent, depth, or hard work when you’ve got an algorithm? Just remember, every time you sip your latté while the AI ‘writes’, you’re celebrating a new era of mediocrity. Cheers to the rise of the machine!”

Bravo, Paul and “Chat”. (For the technically inclined, a sketch of how this kind of iterative prompting might look in code follows.)
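This is a rough, hypothetical sketch of Boutin’s draft-by-draft approach, translated into calls to the OpenAI Python client. The model name, persona text, and feedback lines are stand-ins of my own, not Boutin’s actual prompts:

```python
# Hypothetical sketch of iterative "mindset" prompting, not Boutin's code.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

messages = [
    # The "mindset" lives in the system message.
    {"role": "system", "content": (
        "Write like a sarcastic, condescending tech-culture blogger. "
        "Don't be even-handed. Don't hedge. Never say, but always convey, "
        "that you're smarter than the people you're criticizing."
    )},
    {"role": "user", "content": "Write a short blog post about AI authors."},
]

for _ in range(6):  # roughly the half dozen drafts Boutin describes
    reply = client.chat.completions.create(
        model="gpt-4", messages=messages
    ).choices[0].message.content
    # Keep each draft in context and push the next one further.
    messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user",
                     "content": "More unapologetic. Amused, never insulting."})

print(reply)  # the sixth draft
```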

Another strategy has long been used by creatives engaged in appropriation. A lot of musicians (including David Bowie and Mick Jagger) used William S. Burroughs’ “cut-up technique” for recombining words to launch a lyric. A fragment of chatbot text could be deployed in the same manner: to get started, break writer’s block, write a transition, or sum up.

It could, in fact, be argued that for a truly creative piece of writing built on a skeleton of facts, the facts are the boring part. It might not be a crime against writing to grab your skeleton entirely or almost entirely from a chatbot and flesh it out with your own imagination or insight. In the visual arts, AI might help you rapidly generate alternative samples of a work, varying shading, color, proportions, etc. This is very likely something you already use a machine to do. AI will simply be making the work happen faster. In other words, the active user is engaged in some conscious way with creative AI and doesn’t need to be told what tools to use. 

Risk and Precarity

In an economically, socially, sexually and environmentally anxious moment, the excitability of those inclined towards neophilia (love of the new) brushes up not just against neophobia, but against the very real conditions of our historical moment. Very few of us can dismiss the fears of being displaced, mislabeled, denied or messed about by people and institutions using AI. Technoculture was built on the romance of risk and “disruption”, and, now that the chickens are coming home to roost, culture is not altogether happy to be disrupted. A column about risk and precarity in relation to the culture of technology (which now is, of course, culture itself) beckons sometime soon…


Holographic Memories

What’s the first thing that comes to mind when I mention holograms? Is it Artoo-Detoo’s projection of Princess Leia? Perhaps Salvador Dali’s The Crystal Grotto? Maybe you’re thinking of those shiny postcards that seem to change pictures when you tilt them in different directions. Or, if you’re a bit more modern, you might be thinking of Magic Leap’s video of a humpback whale jumping through a high school gymnasium. These are all exceptionally creative examples of using a medium to display more than what meets the eye. The trouble is, none of them are true holograms.

Credit: Star Wars/Disney

By definition, a hologram is created by superimposing wavefronts on a physical medium. In physics, a wavefront is simply the set of points on a wave that share the same phase and move in unison over time. Although it is possible to make holograms with non-optical waves, such as acoustic ones, we’ll mostly focus on holography by laser light. The technique is called transmission holography; it was invented back in the 1940s by Dennis Gabor and became practical once lasers arrived. A laser (Light Amplification by Stimulated Emission of Radiation) emits photons that are all lined up, meaning they’re in phase with one another in the emitted wave of light. So that’s what one laser does, and they’re pretty great for playing with cats. But if you have one laser, a beam splitter, a lens, a mirror, and a special material called a photographic plate, you can make a hologram.

The laser is shone at the beam splitter, which splits the light into two beams. The first beam passes through a lens that spreads out the light, and it is projected onto the object you’re trying to make a hologram of. Some of the light bounces off the entire surface of the object towards the photographic plate, which is placed to capture where all of that reflected light lands. The second beam is redirected by a mirror so that it, too, lands on the photographic plate. The two light waves thus arrive at the plate from different directions: one reflected off the surface of the object, the other off the mirror. Where they meet on the plate, the two waves mix together; this mixing is called interference, and the resulting pattern of reinforcement and cancellation is what gets recorded.

As an analogy, imagine that inside the plate there are tiny mirrors that partially reflect the light. As light interferes in the plate, the pattern is captured by the tiny mirrors. Microscopically, you can see the interference in the form of abstract patterns in the plate. If you’re very careful and nothing moves during this process, the hologram is complete. To view the hologram, you shine the original laser beam through the plate. When this happens, the recorded pattern redirects the light so as to reconstruct the wave that originally came off the object, and the plate lights up to show the object.

Now here is where the magic happens: you can take that hologram plate, cut out a tiny square, and throw away the rest. What will you see when you shine the light through the tiny square? You’ll still see the whole object. The interference pattern for the object is distributed over the entire plate. The information for reconstructing the image of the object is mixed all over the plate, which means you only need a piece of it to get the whole image back.

Distributed information isn’t a new concept, and it’s more relevant today than ever. Blockchain technologies have been rapidly gaining adoption among cryptocurrency users and in internet-of-things applications, banking, supply chain management, and more. One of the core principles of blockchain is to distribute the information shared between parties in such a way that it increases the security of the entire collection of information: the information is secured not in one location, but over the entire blockchain network. If you’re not familiar with blockchain, this works because many computers, called nodes, each hold the record of the blockchain. Even if a very large portion of the nodes are deleted, the remaining nodes still hold the entire blockchain record. This distribution of information is very similar to the distribution of light patterns all over the photographic plate in a hologram. In blockchain, the distribution is curated by a mathematical algorithm, whereas in a hologram it arises from interference.
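Here is a toy sketch in Python of that replication property. It is not a real blockchain (no hashing, consensus, or networking, and the ledger contents are invented for illustration); it shows only the point that full copies survive heavy node loss:

```python
# Toy model of blockchain-style replication: every node keeps a full
# copy of the ledger, so deleting most nodes loses no information.
ledger = ["block-0: genesis", "block-1: alice->bob 5", "block-2: bob->carol 2"]

# Ten nodes, each holding the complete record.
nodes = {f"node-{i}": list(ledger) for i in range(10)}

# Delete a very large portion of the nodes.
for name in list(nodes)[:9]:
    del nodes[name]

# The lone survivor still holds the entire record.
survivor = next(iter(nodes.values()))
assert survivor == ledger
print(survivor)
```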

However, there is a type of math that can describe the interference. Fourier (pronounced “Four-Yay!”) math is often used to describe how light information is transformed through holography. Fourier transforms convert a waveform into a pattern of frequencies and vice versa. In the case of our hologram, a wave was converted to a pattern of interference and then back into a wave. This process is often referred to as frequency analysis, since what is tracked between the pattern state and the wave state is frequency content.
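You can see the plate-cutting trick in miniature with a two-dimensional Fourier transform. The sketch below is a loose analogy (assuming NumPy is installed), not a physical simulation of a plate: it keeps only a small patch of an image’s frequency-domain representation and still recovers the whole image, just blurrier.

```python
# The "cut the plate, keep the whole image" property, with a 2D FFT
# standing in for the hologram's interference pattern.
import numpy as np

# A simple test "object": a bright square on a dark background.
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0

# Forward transform: every frequency coefficient mixes together
# information from every pixel, like every region of the plate.
plate = np.fft.fftshift(np.fft.fft2(image))

# "Cut out a tiny square": keep only the central 16x16 patch of
# coefficients and throw the rest away.
kept = np.zeros_like(plate)
c = plate.shape[0] // 2
kept[c - 8:c + 8, c - 8:c + 8] = plate[c - 8:c + 8, c - 8:c + 8]

# Reconstruct: the whole square is still there, only blurrier.
# We lost resolution, not a region of the image.
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(kept)))
print(f"inside the square:  {recon[24:40, 24:40].mean():.2f}")  # near 1
print(f"outside the square: {recon[:16, :16].mean():.2f}")      # near 0
```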

In 1979, a vital discovery made Fourier transforms an interesting topic amongst neuroscientists. Karen and Russell De Valois discovered that brain cells in the visual cortex respond to the waveform representation of a pattern, not only to the pattern itself. This led to a series of experiments by many neuroscientists trying to determine whether brain cells detect features of patterns in the world, or whether they detect patterns of frequencies from waveforms in the world. While the jury is still out on this debate, the concept is still important today, as many deep learning models rely on feature detection.

Setting aside whether or not it’s true, the idea lends itself to an interesting debate about the brain, which may be using the same mathematical principles as holography. So the question arises: is the brain holographic? This question is explored in depth in Michael Talbot’s good read “The Holographic Universe”, which describes the “holographic theory” first and independently introduced by David Bohm and Karl Pribram in the ’80s. For the sake of this discussion it serves less as a theory and more as an analogy, but the holographic theory can give us important insights into our own minds.

Let’s think about memories. When the brain is slightly damaged, the person doesn’t just lose a neat chunk (remembering, say, only the first 12 letters of the alphabet); instead, memories get fuzzier. This characteristic is a signature of distributed information. A memory is not located in a single neuron or a small group of neurons; it’s mixed about in various sectors of the brain. In an iconic and ethically questionable series of experiments, Paul Pietsch diced, sliced, and reconfigured salamander brains some 700 times. After each salamander healed, it remembered how to feed and returned to normal behavior (see his book “Shuffle Brain” for more details). The information of how to be a salamander was distributed across the brain in such a way that it was recoverable, even after being all jumbled up.

Talbot describes how proponents of the holographic theory suggest that neurons fire according to the frequency of input stimulus. For vision, the stimulus is a light wave; the cochlea in the ear captures sounds according to their frequencies; and skin responds to frequencies of vibration. Neurons then fire according to these input frequencies and interfere with surrounding neurons. And, as we know, when neurons fire together, the connections between them get stronger. This system is then thought to store the pattern of interference of firing neurons.

The implications of memories being holographically represented in the brain help us understand memory better. First, a memory is distributed. Unlike in a von Neumann computer, there’s no central location where the memory of an object is stored. You can’t go to neuron A26 and find a memory, which means you can’t delete a memory like you would an unwanted file on your computer, nor can you upload it to the web. This supports the idea that the brain is so beautifully adept at alternating between learned tasks with a finite number of cells because individual neurons most likely contribute to many different memories. The number of neurons needed is conserved because they are being multiplexed.

Secondly, when we remember something, many of the neurons that fired during the original experience fire again. We can understand this through our analogy: just as the image is reconstructed when the laser is shone back through the interference-patterned plate, so the memory will flood the person’s mind if the right selection of neurons is stimulated. The right light reflects the corresponding object; a different light can reveal a different object in the plate. Likewise, our brains encode lots of information that doesn’t get mixed into one indistinct blob; it gets mixed into nearly precise and reconstructable representations. This matters if you want to remember not to stick your hand over a hot stove, versus setting a pan on the hot stove to cook some pancakes. And it is consistent with what we know about learning, in which memories play a central part. If you saw a hot stove, stuck your hand over it, and got burned, you learned that doing so again is probably undesirable. If later in life you learn that you can cook on a hot stove, you don’t suddenly forget not to touch it. The new memories do not replace the old memories; they are carefully mixed in.

Third, when only partial stimulus is present, the brain will fill in the gaps with what it expects to be there. This concept is the basis of many optical illusions, and it is why we perceive vision as a continuous experience even though each eye has a blind spot where the optic nerve exits the retina. If partial or very weak light is shone through the hologram, the image will be fuzzy and only partially reconstructed; to see the whole picture, we have to increase the amount of light going through the plate. The same is true for our brains. There is a mechanism for filling in the gaps, reproducing the stimulus we expect to be there, even when all of the information isn’t available. This allows us to operate in a seemingly continuous flow of information while conditions are radically changing (think billions of photons crashing into our retinas throughout the day).

But Talbot and others who have tackled the holographic theory have expanded on these points to ask whether reality itself is a hologram. If we really are experiencing reconstructions of waveforms in our minds, then what exactly is the nature of reality? Is nature itself a hologram? The emerging concept of “simulation theory” (again, still technically not a theory), which proposes that we are actually living in a simulation, coincides well with the holographic theory. The simulation hypothesis can be traced back to ancient origins in Zhuangzi’s “The Butterfly Dream” (circa the fourth century BC), but it has recently gained traction due to its formulation by the philosopher Nick Bostrom. In 2003, Bostrom proposed the simulation argument, a trilemma which holds that at least one of three propositions is true: that almost all civilizations die out before gaining the ability to run such simulations, that advanced civilizations have little interest in running them, or that we are almost certainly living in a simulation. Several prominent physicists have debated the topic, and the discussion has lent the simulation hypothesis a degree of plausibility. Popular participants in the discussion range from Elon Musk to Jürgen Schmidhuber to Neil deGrasse Tyson.

In essence, the hypothesis proposes that all of reality is being computed by a simulation: that our lives are created for us by a computing device capable of simulating not only the world we live in but even ourselves, and that this simulated reality is computed so convincingly that we believe it is real. The connection between simulation theory and holographic theory lies in the nature of reality being abstracted away from our subjective experience of it. We experience patterns and images, but the true source of reality is something different. Whatever the innate source of our experience is, it is not at all like what we experience.

In holography, this is easier to understand. We experience an image of an object through our vision, but the object does not actually exist on the plate in that form; it exists as a waveform. Or, according to simulation theory, it exists as some computed source, akin to the bits that compose the letters you’re reading now. Until more science is served, we may never know the real reality.


Hollywood Writers Strike Versus Facsimile Culture

Since I first made fun of AI panic back in my second column, I’ve been growing more disturbed. Still, I never thought I’d join the Luddites. Yet the striking writers of the entertainment industry are demanding that studios “regulate use of material produced using artificial intelligence or similar technologies”, and these writers are the first line of resistance against cultural productions being turned into facsimiles of creative brilliance. The demand has become a point of emphasis among the signs carried on the picket lines, an indication of its populist appeal. It’s likely that the strike will actually make entertainment bigwigs more attracted to the possibility of ditching the humans for compliant chatbots with no demands and few needs. The fight against AI taking over TV writing is one that should be taken up ‘by viewers like you’ (as PBS likes to say). If you like complex TV shows and films with brilliant dialogue, it’s in your interest to keep the humans, and not to let the products of their minds be replaced by an AI-created facsimile.

In the Art World, Facsimiles Have Become a Field of Play in Which Toying with Financial Valuation Serves as a Sort of Content

In the art world, the distinction between the real thing and a copy of the real thing has been obscured for many years, with a wide variety of consequences. In the visual arts, copying became a field of play. The philosopher Walter Benjamin set the terms of the discourse in 1935 with his essay ‘The Work of Art in the Age of Mechanical Reproduction’. Benjamin was dealing with physical objects, and he theorized that an original artwork carried an ‘aura’ that gave it a high capital valuation. In an age of increased reproducibility, Benjamin conjectured, the value of the original would diminish. This hasn’t happened: originals both old and new fetch huge prices. At least since the Pop Art movement of the 1960s, the art world has toyed with this predicted tug-of-war between the original and the facsimile by just saying yes: delivering both originals and multiples. Warhol started mass-distributing postcards of his most widely-recognized works in the early 1960s, while the original maintained its ‘aura’ and could be sold to collectors (although it took the pale man’s actual demise for the aura to glow enough to attract really spectacular sums).

An odd twist comes into play in the land of NFTs. The work is infinitely replicable and can be distributed in moments to billions of internet users, and the NFT-collector may or may not be purchasing exclusive access. What the collector seems to be after is not the aura of the artwork, but the aura of ownership in and of itself – or of a particular relationship to the work.

The Mass Distribution of the Facsimile of Recorded Music 

In the world of recorded music, Neil Young stood out as the loudest early curmudgeon complaining that digital CDs, and later music files, offer a pallid facsimile of what a recording artist intends. (Of course, it could be argued that recorded music itself is a facsimile of the way music was experienced for millennia prior to its invention, but I’m not going to try to unpack that here. In 1931, the American Federation of Musicians denounced recorded music as, basically, a facsimile of live music that would debase the art.) Over the years, Young’s complaint has become generally accepted wisdom: we trade quality for the facsimile that is easily distributed and conveniently available.

My friend ChatGPT agrees: “Digital audio compression works by removing parts of the audio signal that are perceived as less important or less noticeable, in order to reduce the file size and make it more convenient for storage and distribution. However, this process can also result in the loss of subtle nuances and details that contribute to the overall richness and depth of the sound.

“Studies have shown that digital audio compression can result in a loss of dynamic range, which is the difference between the loudest and softest parts of a recording. This can make the music sound less dynamic and less engaging, particularly in genres such as classical music that rely on subtle changes in volume and tone.”

Will the ironies never cease?

Is All Cultural Production A Facsimile of Life?

Taking a sharply anarchic left turn in this exploration, we might take up the view of the European radicals of the 1960s, the Situationists, who viewed all cultural production as contributing to the ‘society of the spectacle’. In his book ‘Society of the Spectacle’, Guy Debord wrote, “The spectacle is a social relation between people that is mediated by an accumulation of images that serve to alienate us from a genuinely lived life. The image is thus an historical mutation of the form of commodity fetishism.” In other words, all art (think ‘artifice’, ‘artificial’) alienates us from direct experience. Among the practices used by the Situationists, the one most familiar to us today would probably be actions that contemporary people would call pranks. These were actions designed to break the trances of citizens going through their normal routines. The Situationists called this tactic the ‘construction of situations’, and it involved creating situations that would disrupt the status quo and encourage spontaneous excitement, joy or, for that matter, anxiety.

Situationism pretty much abandons mediation completely for intensely lived daily lives, what Situationist Raoul Vaneigem called ‘the revolution of everyday life’.

An eruption of this sort of consciousness would pretty much put the striking writers out to pasture. But this is not our world today.

The boss needs you, you don’t need him! (Credit: Wikimedia)

Remember When YouTube Was Going To Wipe Out Professional Television Production?

Aside from AI creativity — or possibly in combination with it — another specter looming up to challenge TV writers is the democratization of video production. This was, first of all, the dream of avant-gardists like Nam June Paik: that everyone could be a video artist, that video would become a medium of creative self-expression and break up the confining linearity of storytelling. And back in the earlier years of this century, Wired magazine pundits like Kevin Kelly and Chris Anderson predicted that the “long tail” of small-scale content creators (video in particular) would create niche audiences that would seriously impact, and begin to replace, the big movie and television productions. This doesn’t appear to have happened, although it could be that TikTok is grabbing them while they’re young, and a generation will emerge that prefers 30-second clips of someone having their cat speak in a funny voice to the complex plots and dialogue of shows like ‘Succession’ or ‘Atlanta’.

Maybe Our Lives Are A Facsimile

Finally we come to Simulation Theory, that favorite 21st-century cosmology holding that our very lives themselves may be, in a sense, a facsimile, a mediated creation… a computer simulation. In this case, we may as well carry on by emphasizing that which gives us pleasure – at least until we find a way to bust out of The Matrix without switching off our facsimile universe. Like Pinocchio and Zuckerberg, we all long to be real boys (or others).

What Is To Be Done?

I’ve seen mixed results from attempts to get chatbots to achieve authentic creative madness. So I think we should place our bets on a proven winner: the screenwriters who have managed to send some wonders to our screens in this century, from the aforementioned ‘Succession’ and ‘Atlanta’ to ‘Fleabag’, ‘Black Mirror’, ‘Mad Men’, ‘Fargo’… the list of well-written shows goes on. (I won’t mention the unfunny comedy writing of ‘SNL’ or ‘The Daily Show’. Nope. Won’t mention it.)

I mean, assuming there won’t be a revolution in everyday life in which we achieve some kind of unmediated, intensely experienced existence, I suggest we try to keep these writer-freaks employed, well-paid, and supplying us with cool content. (Why do I imagine that a Situationist revolution of unmediated, intensely experienced existence could end up being as exhausting as having to work all the time? It’s a bit like being asked to constantly engage in leisure activity as a participant when sometimes you just want to kick back and watch a good TV show. Sometimes we choose slack over engagement.) Speaking of which, after I completed the first draft of this piece, it was announced that Facebook’s ‘Metaverse’ had failed and was being shut down. It’s unclear whether the failed attempt to bring VR to Facebook’s 3 billion users represents a rejection of VR as a participatory medium that some, as far back as the early 1990s, thought would replace TV, or whether the technology is still too raw for people to want to climb in, or whether Facebook’s particular attempt was somehow flawed.

In any case, we should support our striking writers, lest the profiteers of television decide that they don’t need any pesky humans and can fill the air with cheap reality programming, some of it possibly featuring dumb AIs and even dumber humans engaged in banal contests, even if it means the award-winning smart shows disappear. After all, reality TV gets a big viewership and is extremely profitable. I fear this may be the ultimate result of the great battle of the Hollywood writers against the entertainment machines.


Paul Cockshott on the Economics of Automation

Paul Cockshott is a computer scientist and economist, with an interest in how advanced computerised planning can supplant our existing economic order. I spoke with him about how Artificial Intelligence will automate jobs that people currently do. The discussion focuses on the economic costs of training AI models, how they weigh up against labour costs, and the economic circumstances under which human jobs will be automated. Over to Paul Cockshott:

* * * * * * *

The use of AI requires a great deal of processing power. It needs it in two distinct ways. The first is in training, and the second is in application. 

Let’s look at the training aspect. This has become feasible because of two developments over the last 15 years, in data and in power. 

Data

The build-up of large collections of images and text on the internet that can be used as training data for neural networks. I recall that back in the ’90s, when a team I worked with was developing neural-network video encoding, one of our problems, pre-internet, was just getting collections of image data to train with. We resorted to capturing TV broadcasts and training neural nets on those. Now, of course, thanks to Android camera phones, Google has an almost unbounded collection of images from around the world on which neural nets can be trained for vision purposes. In addition, there are vast quantities of indexed images on the web, with dubious ownership, that smaller groups like Stability.AI can use. The same applies to text. It is the ready availability of a vast corpus of academic papers and books that makes systems like ChatGPT and Bard able to answer questions, if not like an expert, at least like a third-year student.

Power 

Actual nervous systems work by electrochemical means to aggregate multiple discrete impulses to produce a discrete response. The Church–Turing–Deutsch principle states that any physical system can be emulated to an arbitrary degree of accuracy by a universal computing machine. This includes the semi-analogue, semi-digital processes that occur in nervous systems. Whilst this has been theoretically known at least since the 1980s and in informal terms since the 1950s, it was, until recently, impractical to apply on a large scale.

To emulate the analogue aspects of synaptic responses requires a great deal of floating-point arithmetic; more specifically, it requires a lot of matrix–vector multiplication. A lot of work from the 1960s onwards went into developing supercomputers for matrix mathematics, since these techniques turn out to be of very general applicability in the physical sciences. By the end of the last century, this had produced machines that were well able to handle tasks like climate simulation and weather forecasting.
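To make the connection concrete, here is a minimal sketch (illustrative sizes only, assuming NumPy) of why neural emulation reduces to matrix–vector multiplication: each row of a weight matrix holds one neuron’s synaptic weights, and one update step for the whole population is a single matrix–vector product.

```python
# One population of model neurons as a matrix-vector multiply.
import numpy as np

neurons, inputs = 1_000, 10_000
W = 0.01 * np.random.randn(neurons, inputs)  # synaptic weights, one row per neuron
x = np.random.randn(inputs)                  # incoming activity

# Weighted sum per neuron, then a nonlinear response; the
# floating-point workload is dominated by the W @ x product.
y = np.tanh(W @ x)
print(y.shape)  # (1000,): one output per neuron per update step
```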

But the scale of maths required by artificial intelligence was considerably greater. The human brain contains tens of billions of neurons, and each neuron would have to be represented by a vector of synaptic weights. If each neuron has on the order of 10,000 synaptic weights and can fire about 10 times a second, we would require a vector processor of from 10¹⁵ to 10¹⁶ operations per second to emulate the brain: that is to say, it would have to reach the petaflop range.
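As a back-of-the-envelope check of those figures (round numbers assumed, not measurements):

```python
# One multiply-accumulate per synaptic weight per firing.
neurons = 8e10            # tens of billions of neurons (~80 billion)
weights_per_neuron = 1e4  # ~10,000 synaptic weights each
firings_per_second = 10   # ~10 firings per second

ops_per_second = neurons * weights_per_neuron * firings_per_second
print(f"{ops_per_second:.0e} ops/s")  # 8e15: squarely in the petaflop range
```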

The first machines in this range became available about 12 years ago. Last year, Tesla launched its Dojo supercomputer complex with a processing power of 10¹⁸ operations per second. That makes it equal to around 100 human brains in processing rate. The downside is the power usage – in the region of 1–2 megawatts. In comparison, the metabolic energy consumption of 100 human brains would be on the order of 1.5 kilowatts, so the Dojo system is about 1,000 times as energy-intensive.
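Again with the round numbers above (assumed for illustration, including a roughly 15 W metabolic budget per brain):

```python
dojo_ops = 1e18      # Dojo's quoted operations per second
brain_ops = 1e16     # upper estimate per brain, from above
brain_equivalents = dojo_ops / brain_ops       # ~100 brains

dojo_watts = 1.5e6                             # ~1-2 MW
brains_watts = brain_equivalents * 15          # ~1.5 kW metabolic
print(f"{brain_equivalents:.0f} brain-equivalents, "
      f"{dojo_watts / brains_watts:.0f}x the energy")  # 100, ~1000x
```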

The machine is built of 120 individual ‘training tiles’ as shown below. 


Credit: Paul Cockshott

However, at this point we are just comparing operations per second, not information storage. A brain with 80 billion neurons, each with 15,000 connections, would have 1.2 quadrillion weights. Tesla stores its weights in cfloat8 format, so each of their training tiles can store about 11 billion weights, or about 1/100,000 of a human brain.

So the current best Tesla technology is 5 orders of magnitude behind the human brain in storage, and 3 orders of magnitude behind in energy efficiency: overall, about 8 orders of magnitude away from the storage × power efficiency of the human brain.
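The same arithmetic, spelled out (round numbers from the text):

```python
neurons = 8e10                 # 80 billion neurons
connections = 1.5e4            # 15,000 connections each
brain_weights = neurons * connections       # 1.2e15 (1.2 quadrillion) weights

tile_weights = 1.1e10          # ~11 billion cfloat8 weights per training tile
storage_gap = brain_weights / tile_weights  # ~1e5: 5 orders of magnitude
energy_gap = 1e3               # ~1,000x from the power comparison above

print(f"storage gap:  {storage_gap:.0e}")
print(f"combined gap: {storage_gap * energy_gap:.0e}")  # ~1e8: 8 orders
```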

The consequence is that whilst it is possible, by consuming megawatts of power, to train a system on a specialist skill like driving, it is not yet possible to incorporate a human level of intelligence and knowledge into the car itself.  

A human can be taught to drive with a few tens of hours of driving instruction, and they can still do other jobs after they have driven to work. Tesla must spend years of processing time, at a huge power bill, to obtain the set of neural weights that a person needs to drive.

Credit: Tesfu Assefa

Of course, the Tesla business plan is to train once and then replicate the information in all their cars. But the size and power-hungry nature of the chips at present prevents them from being put in each car.

It will take some time, one or two decades, before the energy × storage efficiency of chips reaches the point where mobile robot devices with general intelligence comparable to humans are likely to be available. So, to harness general AI, a lot of effort must go into improving the power consumption and memory capacity of chips. Until that point, general AI will only be available as remote online services running in big data centers.

These in turn will increase demand for electricity at a time when, due to environmental considerations and political instability, energy is becoming scarce and expensive. The implication is that an ever-increasing share of GDP is going to have to be directed to producing non-fossil fuel energy infrastructure. 
