Holographic Memories

What’s the first thing that comes to mind when I mention a hologram? Is it Artoo-Detoo’s projection of Princess Leia? Perhaps Salvador Dalí’s The Crystal Grotto? Maybe you’re thinking of those shiny postcards that seem to change pictures when you tilt them in different directions. Or, if you’re a bit more modern, you might be thinking of Magic Leap’s video of a humpback whale jumping through a high school gymnasium. These are all exceptionally creative examples of using a medium to display more than what meets the eye. The trouble is, none of them are true holograms.

Credit:  Star Wars/Disney

By definition, a hologram is created by superimposing wavefronts on a physical medium. In physics, a wavefront is simply the set of points on a wave that are all at the same phase of their cycle, moving in unison over time. Although it is possible to create a hologram with non-optical waves, such as acoustic waves, we’ll mostly focus on holography by laser light. Holography itself was invented by Dennis Gabor back in the late 1940s, before lasers even existed; the laser-based technique we’ll describe here is called transmission holography. A laser (Light Amplification by Stimulated Emission of Radiation) emits photons that are all lined up, meaning they are all in phase within the wave of light being emitted. So that’s what one laser does, and they’re pretty great for playing with cats. But if you have one laser, a beam splitter, a lens, a mirror, and a special material called a photographic plate, you can make a hologram.

The laser is shone at the beam splitter, which splits the light into two beams. The first beam is passed through a lens, which spreads out the light so that it illuminates the object you’re trying to make a hologram of. When this happens, light bounces off every point of the object’s surface. The photographic plate is placed to capture where the bounced light lands. The second beam of laser light is redirected with a mirror so that it, too, lands on the photographic plate. When the two light waves reconvene on the plate, they arrive from different directions: one from the reflection off the surface of the object, the other from the mirror. Where they meet, the two light waves mix together, which we call interference: the waves superpose, reinforcing each other in some places and cancelling in others. The video below shows this process well.
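To make that interference concrete, here is a minimal numerical sketch (not a physical simulation) of two coherent beams meeting at a small strip of the plate. The wavelength, beam angle and amplitudes below are illustrative assumptions; the point is that the intensity fringes the plate records depend on the phase relationship between the two waves.

```python
import numpy as np

# Illustrative, assumed values -- not taken from any real setup.
wavelength = 633e-9                    # a typical red laser wavelength (m)
k = 2 * np.pi / wavelength             # wavenumber
x = np.linspace(0, 50e-6, 2000)        # positions across a 50-micron strip of the plate (m)

theta = np.deg2rad(5)                                             # angle between the two beams
reference   = np.exp(1j * k * np.sin(theta) * x)                  # beam redirected by the mirror
object_wave = 0.8 * np.exp(1j * (k * np.sin(-theta) * x + 1.0))   # light scattered off the object

# The plate can only record intensity, but the fringe pattern it records
# encodes the phase difference between the two waves -- that is the hologram.
intensity = np.abs(reference + object_wave) ** 2
contrast = (intensity.max() - intensity.min()) / (intensity.max() + intensity.min())
print(f"fringe contrast: {contrast:.2f}")   # ~0.98 for these amplitudes
```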

As an analogy, imagine that inside the plate there are tiny mirrors that partially reflect the light. As the light interferes inside the plate, the pattern is captured by these tiny mirrors. Microscopically, you can see the interference in the form of abstract patterns in the plate. If you’re very careful and don’t move at all during this process, the hologram is complete. Now, when you want to view the hologram, you shine the original laser beam through the plate. When this happens, the tiny mirrors redirect the light according to the recorded pattern, reconstructing the light wave that originally came from the object and showing you its image.

Now here is where the magic happens: you can take that hologram plate, cut out a tiny square, and throw away the rest. What happens when you shine the light through the tiny square? You’ll still see the whole object (dimmer and fuzzier, but whole). The pattern of interference for that object is distributed over the entire plate. The information for reconstructing the image of the object is mixed all over the plate, which means you only need a piece of it to get the whole image back.
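You can get a feel for this with a toy numerical analogy (only an analogy, not real holography): treat the 2D Fourier transform of an image as a stand-in for the plate. Keep only a small patch of it, throw away the rest, and transform back, and the whole image reappears, just blurrier, because the information about every pixel is spread across the entire transform. All the sizes below are arbitrary illustrative choices.

```python
import numpy as np

image = np.zeros((128, 128))
image[40:90, 50:80] = 1.0                      # a simple stand-in "object"

plate = np.fft.fftshift(np.fft.fft2(image))    # stand-in for the hologram plate

keep = 12                                      # half-width of the tiny square we keep
mask = np.zeros_like(plate)
c = plate.shape[0] // 2
mask[c - keep:c + keep, c - keep:c + keep] = 1 # "cut out a tiny square, throw away the rest"

reconstruction = np.fft.ifft2(np.fft.ifftshift(plate * mask)).real

# The whole object is still visible, just softened: the region where the
# object sits remains much brighter than the background.
print(f"object-region mean : {reconstruction[40:90, 50:80].mean():.2f}")
print(f"background mean    : {reconstruction[image == 0].mean():.2f}")
```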

Distributed information isn’t a new concept, and it’s more relevant today than ever. Blockchain technologies have been rapidly gaining adoption among cryptocurrency users and in internet-of-things applications, banking, supply chain management, and more. One of the core principles of blockchain is to distribute the information shared between parties in a way that increases the security of the entire collection of information. Here, the information is stored not in one location but across the entire blockchain network. If you’re not familiar with blockchain, this works because many computers, called nodes, each hold the record of the blockchain. Even if a very large portion of the nodes is deleted, the remaining nodes still hold the full blockchain record. This distribution of information is very similar to the distribution of light patterns all over the photographic plate in a hologram. In blockchain, the distribution is curated by a mathematical algorithm, whereas in a hologram it is curated by interference.
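As a minimal sketch of that redundancy (a toy illustration, not a real blockchain protocol; every name and value here is invented for the example): give every node a full copy of a hash-linked record, wipe out almost all of the nodes, and any survivor still holds, and can verify, the whole chain.

```python
import hashlib

def make_chain(entries):
    # Build a simple hash-linked record: each block commits to the previous one.
    chain, prev_hash = [], "0" * 64
    for entry in entries:
        block_hash = hashlib.sha256((prev_hash + entry).encode()).hexdigest()
        chain.append({"entry": entry, "prev": prev_hash, "hash": block_hash})
        prev_hash = block_hash
    return chain

def is_valid(chain):
    # Recompute every link to check the record hasn't been tampered with.
    prev_hash = "0" * 64
    for block in chain:
        expected = hashlib.sha256((prev_hash + block["entry"]).encode()).hexdigest()
        if block["prev"] != prev_hash or block["hash"] != expected:
            return False
        prev_hash = block["hash"]
    return True

ledger = make_chain(["alice->bob:5", "bob->carol:2", "carol->alice:1"])
nodes = {f"node{i}": list(ledger) for i in range(100)}   # 100 nodes, each with a full copy

# Now wipe out 99 of the 100 nodes.
for name in list(nodes):
    if name != "node42":
        del nodes[name]

survivor = nodes["node42"]
print(len(survivor) == len(ledger), is_valid(survivor))  # True True
```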

However, there is a type of math that can describe the interference. Fourier math (pronounced “four-yay!”) is often used to describe how light information is transformed through holography. A Fourier transform converts a waveform into a pattern of frequencies, and vice versa. In the case of our hologram, a wave was converted into a pattern of interference and then back into a wave. This process is often referred to as frequency analysis, since the transform breaks the signal down into the frequencies it contains, whether it is in its pattern state or its wave state.

In 1979, a vital discovery made Fourier transforms an interesting topic amongst neuroscientists. Karen and Russell De Valois found that brain cells in the visual cortex respond to the spatial-frequency (waveform) components of a pattern, not only to the pattern itself. This led to a series of experiments by many neuroscientists trying to determine whether brain cells detect features of patterns in the world or the frequencies of the waveforms those patterns are built from. While the jury is still out on this debate, the concept is still important today, as many deep learning models rely on feature detection.

Setting aside whether or not it’s true, the idea lends itself to an interesting debate about the brain: the brain may be using the same mathematical principles as holography. So the question arises, is the brain holographic? This question is explored in depth in Michael Talbot’s good read “The Holographic Universe”, which describes the “holographic theory” introduced independently by David Bohm and Karl Pribram in the 1970s and ’80s. For the sake of this discussion it works less as a theory and more as an analogy, but the holographic theory can still give us important insights into our own minds.

Let’s think about memories. When the brain is slightly damaged, the person doesn’t lose a neat slice of memory (remembering, say, only the first 12 letters of the alphabet); the memory might just get fuzzier. This characteristic is similar to distributed information. A memory is not located in a single neuron or a small group of neurons; it’s mixed across various regions of the brain. In an iconic and ethically questionable series of experiments, Paul Pietsch sliced, diced, shuffled and reimplanted salamander brains in more than 700 operations. After the salamanders healed, they remembered how to feed and returned to normal behavior (see his book “Shuffle Brain” for more details). The information of how to be a salamander was distributed across the brain in such a way that it was recoverable, even when it was all jumbled up.

Talbot describes how proponents of the holographic theory suggest that neurons fire according to the frequency of the input stimulus. For vision, the stimulus is a light wave; the cochlea in the ear captures sounds according to their frequencies; and the skin responds to frequencies of vibration. Neurons then fire according to these input frequencies and interfere with surrounding neurons. As we know, when neurons fire together, the connections between them get stronger. This system is then thought to store the pattern of interference of the firing neurons.

The implications of memories being holographically represented in the brain help us understand memories better. First, a memory is distributed. Unlike in a von Neumann computer, there’s no central location where the memory of an object is stored. You can’t go to neuron A26 and find a memory, which means you can’t delete a memory like you would an unwanted file on your computer, nor can you upload it to the web. This also helps explain how the brain, with a finite number of cells, is so beautifully adept at alternating between learned tasks: each neuron most likely contributes to many different memories. The number of neurons needed is conserved because they are multiplexed.

Secondly, when we remember something, many of the neurons that fired during the original experience fire again. We can understand this with our analogy by thinking back to the hologram: when the laser is shone through the exposed plate, the image is reconstructed. Likewise, if a random selection of the right type of neurons is stimulated, the memory will flood the person’s mind. The right light reconstructs the corresponding object, and a different light can reconstruct a different object stored in the same plate. Likewise, our brains encode lots of information that doesn’t get mixed into one undifferentiated blob; it gets mixed into nearly precise and reconstructable representations. This is important if you want to remember not to stick your hand over a hot stove versus setting a pan on the hot stove to cook some pancakes. And this is consistent with what we know about learning. Memories are used as part of the learning process. If you saw a hot stove, stuck your hand over it, and got burned, you learned that it’s probably not something to do again. If later in life you learn that you can cook on a hot stove, you don’t suddenly forget not to touch the stove. The new memories do not replace the old memories; they are carefully mixed in.

Third, when only a partial stimulus is present, the brain fills in the gaps with what it expects to be there. This concept is the basis of optical illusions, and it’s why we perceive vision as a continuous experience even though each eye has a blind spot where the optic nerve exits the retina. If partial or very weak light is shone on the hologram, the image will be fuzzy and only partially reconstructed. To see the whole picture, we have to increase the amount of light going through the plate. The same is true for our brains. There is a mechanism for filling in the gaps, reproducing the stimulus we expect to be there even when all of the information isn’t available. This allows us to operate in a seemingly continuous flow of information while conditions are radically changing (think of the billions of photons crashing into our retinas throughout the day).

But Talbot and others who have tackled the holographic theory have expanded on these points to ask whether reality itself is a hologram. If we really are experiencing reconstructions of waveforms in our minds, then what exactly is the nature of reality? Is nature itself a hologram? The emerging concept of “simulation theory” (again, still technically not a theory), which proposes that we are actually living in a simulation, dovetails nicely with the holographic theory. The simulation hypothesis can be traced back to ancient origins in “The Butterfly Dream” by Zhuangzi, around the 4th century BC, but it has recently gained traction thanks to its formulation by the philosopher Nick Bostrom. In 2003, Bostrom proposed the simulation argument trilemma, which posits that at least one of the following must be true: that civilizations go extinct before they can run such simulations, that advanced civilizations have little interest in running them, or that we are almost certainly living in a simulation. Several prominent physicists have debated the topic, and the discussion has framed the simulation hypothesis as at least plausible. Popular voices in the discussion include Elon Musk, Jürgen Schmidhuber, and Neil deGrasse Tyson.

In essence, the hypothesis proposes that all of reality is being computed by a simulation: that our lives are created for us by a computing device capable of simulating not only the world we live in, but even ourselves, and that this simulated reality is computed so convincingly, we believe it is real. The connection between simulation theory and holographic theory relies on the nature of reality being abstracted from our subjective experience of it. We experience patterns and images, but the true source of reality is something different. Or rather, whatever the underlying source of our experience is, it is not at all like what we experience.

In holography, this is easier to understand. We experience an image of an object through our vision, but the object itself does not actually exist in this form; it exists as a waveform. Or, according to simulation theory, it exists as some computed source, akin to the bits that compose the letters you’re reading now. Until more science is served, we may never know the real reality.


Hollywood Writers Strike Versus Facsimile Culture

Since I first made fun of AI panic back in my second column, I’ve been growing more disturbed. But I never thought I’d join the Luddites. However, the striking writers of the entertainment industry are demanding to “Regulate use of material produced using artificial intelligence or similar technologies”. These writers are the first line of resistance against cultural productions being turned into facsimiles of creative brilliance. This has become a point of emphasis among the signs being carried on the picket lines, an indication of its populist appeal. It’s likely that the strike will actually make entertainment bigwigs more attracted to the possibility of ditching the humans for compliant chatbots with no demands and few needs. The fight against AI taking over TV writing is one that should be taken up ‘by viewers like you’ (as PBS likes to say). If you like complex TV shows and films with brilliant dialogue, it’s in your interests to keep the humans and not let the products of their minds be replaced by an AI-created facsimile. 

In the Art World, Facsimiles Have Become a Field of Play in Which Toying with Financial Valuation Serves as a Sort of Content

In the art world, the distinction between the real thing and a copy of the real thing has been obscured for many years, with a wide variety of consequences. In visual arts, copying became a field of play. The philosopher Walter Benjamin set the terms of the discourse in 1935 with his essay ‘The Work of Art in the Age of Mechanical Reproduction’. Benjamin was dealing with physical objects, and he theorized that an original artwork carried an ‘aura’ that gave it a high capital valuation. In the age of increased reproducibility, Benjamin conjectured that the value of the original would diminish. This hasn’t happened, as originals both old and new fetch huge prices. At least since the Pop Art movement of the 1960s, the art world has toyed with this trope, the predicted tug-of-war between the original and the facsimile, by just saying yes: delivering both originals and multiples. Warhol started mass-distributing postcards of his most widely-recognized works in the early 1960s, while the original maintained its ‘aura’ and could be sold to collectors (although it took the pale man’s actual demise for the aura to glow enough to attract really spectacular sums).

An odd twist comes into play in the land of NFTs. The work is infinitely replicable and can be distributed in moments to billions of internet users, and the NFT-collector may or may not be purchasing exclusive access. What the collector seems to be after is not the aura of the artwork, but the aura of ownership in and of itself – or of a particular relationship to the work.

The Mass Distribution of the Facsimile of Recorded Music 

In the world of recorded music, Neil Young stood out as the loudest early curmudgeon complaining that digitally-created CDs, and later music files, offer a pallid facsimile of what a recording artist intends. (Of course, it could be argued that recorded music itself is a facsimile of the way music was experienced for millennia prior to its invention, but I’m not going to try to unpack that here. In 1931, the American Federation of Musicians denounced recorded music as, basically, a facsimile of live music that would debase the art.) Over the years, Young’s complaint has become generally accepted wisdom. We trade quality for a facsimile that is easily distributed and conveniently available.

My friend ChatGPT agrees: “Digital audio compression works by removing parts of the audio signal that are perceived as less important or less noticeable, in order to reduce the file size and make it more convenient for storage and distribution. However, this process can also result in the loss of subtle nuances and details that contribute to the overall richness and depth of the sound.

“Studies have shown that digital audio compression can result in a loss of dynamic range, which is the difference between the loudest and softest parts of a recording. This can make the music sound less dynamic and less engaging, particularly in genres such as classical music that rely on subtle changes in volume and tone.”

Will the ironies never cease?

Is All Cultural Production A Facsimile of Life?

Taking a sharply anarchic left turn in this exploration, we might take up the view of the European radicals of the 1960s, the Situationists, who viewed all cultural production as contributing to the ‘society of the spectacle’. In his book ‘Society of the Spectacle’, Guy Debord wrote, “The spectacle is a social relation between people that is mediated by an accumulation of images that serve to alienate us from a genuinely lived life. The image is thus an historical mutation of the form of commodity fetishism.” In other words, all art (from the word artifice, or artificial) alienates us from direct experience. Among the practices used by the Situationists, the one most familiar to us today would probably be actions that contemporary people would call pranks. These were actions designed to break the trances of citizens going through their normal routines. The Situationists called this tactic ‘construction’, and it involved creating situations that would disrupt the status quo and encourage spontaneous excitement, joy or, for that matter, anxiety.

Situationism pretty much abandons mediation completely for intensely lived daily lives, what Situationist Raoul Vaneigem called ‘the revolution of everyday life’.

An eruption of this sort of consciousness would pretty much put the striking writers out to pasture. But this is not our world today.

The boss needs you, you don’t need him! (Credit: Wikimedia)

Remember When YouTube Was Going To Wipe Out Professional Television Production?

Aside from AI creativity — or possibly in combination with it — another specter looming up to challenge TV writers is the democratization of video production. This was, first of all, the dream of avant-gardists like Nam June Paik: that everyone could be a video artist, that it would become a medium of creative self-expression and break up the confining linearity of storytelling. And, back in the earlier years of this century, Wired-magazine-affiliated pundits like Kevin Kelly and Chris Anderson predicted that the ‘long tail’ of small-scale content creators (video in particular) would create niche audiences that would seriously impact, and begin to replace, the big movie and television productions. This doesn’t appear to have happened, although it could be that TikTok is grabbing them while they’re young, and a generation will emerge that prefers 30-second clips of someone having their cat speak in a funny voice to the complex plots and dialogue of shows like ‘Succession’ or ‘Atlanta’.

Maybe Our Lives Are A Facsimile

Finally we come to Simulation Theory, that favorite 21st-century cosmology holding that our very lives themselves may be, in a sense, a facsimile, a mediated creation… a computer simulation. In this case, we may as well carry on by emphasizing that which gives us pleasure – at least until we find a way to bust out of The Matrix without switching off our facsimile universe. Like Pinocchio and Zuckerberg, we all long to be real boys (or others).

What Is To Be Done?

I’ve seen mixed results from attempts to get Chatbots to achieve authentic creative madness. So I think we should place our bets on a proven winner. That would be the screenwriters who have managed to send some wonders to our screens in this century, from the aforementioned ‘Succession’ and ‘Atlanta’ to ‘Fleabag’, ‘Black Mirror’, ‘Mad Men’, ‘Fargo’… the list of well-written shows goes on. (I won’t mention the unfunny comedy writing of ‘SNL’ or ‘The Daily Show’. Nope. Won’t mention it.) 

I mean, assuming there won’t be a revolution in everyday life in which we achieve some kind of unmediated, intensely experienced existence, I suggest we try to keep these writer-freaks employed, well-paid and supplying us with cool content. (Why do I imagine that a Situationist revolution of unmediated intensely experienced existence could end up being as exhausting as having to work all the time? It’s a bit like being asked to constantly engage in leisure activity as a participant when sometimes you just want to kick back and watch a good TV show. Sometimes we choose slack over engagement.) Speaking of which, after I completed the first draft of this piece, it was announced that Facebook’s ‘Metaverse’ had failed and was being shut down. It’s unclear whether the failed attempt to bring VR to Facebook’s 3 billion users represents a rejection of VR as a participatory medium that some, as far back as the early 1990s, thought would replace TV, or whether the technology is still too raw for people to want to climb in, or whether Facebook’s particular attempt was somehow flawed.

In any case, we should support our striking writers lest the profiteers of television decide that they could fill the air with cheap reality programming, some of it possibly featuring dumb AIs and even dumber humans engaged in banal contests, and that they don’t need any pesky humans, even if the award-winning smart shows disappear. After all, reality TV gets a big viewership and is extremely profitable. I fear this may be the ultimate result of the great battle of the Hollywood writers against the entertainment machines.


Paul Cockshott on the Economics of Automation

Paul Cockshott is a computer scientist and economist, with an interest in how advanced computerised planning can supplant our existing economic order. I spoke with him about how Artificial Intelligence will automate jobs that people currently do. The discussion focuses on the economic costs of training AI models, how they weigh up against labour costs, and the economic circumstances under which human jobs will be automated. Over to Paul Cockshott:

* * * * * * *

The use of AI requires a great deal of processing power. It needs it in two distinct ways. The first is in training, and the second is in application. 

Let’s look at the training aspect. This has become feasible because of two developments over the last 15 years, in data and in power. 

Data

The build-up of large collections of images and text on the internet that can be used as training data for neural networks. I recall back in the 90s, when a team I worked with was developing neural network video encoding, one of our problems, pre-internet, was just getting any collections of image data to train with. We resorted to capturing TV broadcasts and training neural nets on those. Now, of course, thanks to Android camera phones, Google has almost unbounded collections of images from around the world on which neural nets can be trained for vision purposes. In addition, there are vast quantities of indexed images on the web, of dubious ownership, that smaller groups like Stability AI can use. The same applies to text. It is the ready availability of a vast corpus of academic papers and books that makes systems like ChatGPT and Bard able to answer questions, if not like an expert, at least like a third-year student.

Power 

Actual nervous systems work by electrochemical means to aggregate multiple discrete impulses to produce a discrete response. The Church–Turing–Deutsch principle states that any physical system can be emulated to an arbitrary degree of accuracy by a universal computing machine. This includes the semi-analogue, semi-digital processes that occur in nervous systems. Whilst this has been theoretically known at least since the 1980s and in informal terms since the 1950s, it was, until recently, impractical to apply on a large scale.

To emulate the analogue aspects of synaptic responses requires a great deal of floating-point arithmetic; more specifically, it requires a lot of matrix-vector multiplication. A lot of work from the 1960s went into developing supercomputers for matrix mathematics, since these techniques turn out to be of very general applicability in the physical sciences. By the end of the last century, this had produced machines that were well able to handle tasks like climate simulation and weather forecasts.
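To make the core operation concrete, here is a minimal sketch (with arbitrary illustrative sizes, not a model of any real brain or chip) of one layer of neurons approximated as a matrix-vector multiplication followed by a nonlinearity:

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_neurons = 10_000, 1_000                            # e.g. ~10,000 "synapses" per neuron
weights = rng.standard_normal((n_neurons, n_inputs)) * 0.01    # synaptic weight matrix
inputs = rng.standard_normal(n_inputs)                         # incoming activity

# One update of the whole layer is a single matrix-vector multiply plus a cheap
# nonlinearity -- this is the operation that has to be repeated billions of
# times per second to emulate a nervous system.
activations = np.tanh(weights @ inputs)
print(activations.shape)   # (1000,)
```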

But the scale of maths required by artificial intelligence is considerably greater. The human brain contains tens of billions of neurons, and each neuron would have to be represented by a vector of synaptic weights. If each neuron has of the order of 10,000 synaptic weights and can fire about 10 times a second, we would require a vector processor delivering from 10¹⁵ to 10¹⁶ operations per second to emulate the brain: that is to say, it would have to reach the petaflop range.
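As a quick back-of-the-envelope check on those figures (the constants below are just the rough assumptions quoted above, not measurements):

```python
neurons = 8e10             # "tens of billions" of neurons
synapses_per_neuron = 1e4  # ~10,000 synaptic weights each
firing_rate_hz = 10        # fires about 10 times a second

# One multiply-accumulate per synapse per firing event.
ops_per_second = neurons * synapses_per_neuron * firing_rate_hz
print(f"{ops_per_second:.0e} ops/s")   # 8e+15, i.e. within the 10^15 to 10^16 (petaflop) range
```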

The first machines in this range became available about 12 years ago. Last year, Tesla launched its Dojo supercomputer complex with a processing power of 10¹⁸ operations a second. That makes it equal to around 100 human brains in processing rate. The downside is the power usage – in the region of 1-2 megawatts. In comparison, the metabolic energy consumption of 100 human brains would be of the order of 1.5 kilowatts, so the Dojo system is about 1,000 times as energy intensive. 

The machine is built of 120 individual ‘training tiles’ as shown below. 

 

Credit: Paul Cockshott

However, at this point we are just comparing operations per second, not information storage. A brain with 80 billion neurons, each with 15,000 connections, would have 1.2 quadrillion weights. Tesla stores its weights in cfloat8 format, so each of their training tiles can store about 11 billion weights, or about 1/100,000 of a human brain.

So the current best Tesla technology is 5 orders of magnitude behind the human brain in storage, and 3 orders of magnitude behind in energy efficiency: overall, about 8 orders of magnitude away from the storage × power efficiency of the human brain.
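The arithmetic behind those orders of magnitude, using only the approximate figures quoted above:

```python
import math

brain_weights = 80e9 * 15_000     # ~1.2e15 synaptic weights in a human brain
tile_weights = 11e9               # weights one Dojo training tile holds in cfloat8
storage_gap = brain_weights / tile_weights        # ~1.1e5

dojo_power_w = 1.5e6              # ~1-2 MW for the Dojo complex
brains_power_w = 1.5e3            # metabolic power of the ~100 brains it matches in ops/s
energy_gap = dojo_power_w / brains_power_w        # ~1e3

print(f"storage gap : ~{math.log10(storage_gap):.0f} orders of magnitude")                 # ~5
print(f"energy gap  : ~{math.log10(energy_gap):.0f} orders of magnitude")                  # ~3
print(f"combined    : ~{math.log10(storage_gap * energy_gap):.0f} orders of magnitude")    # ~8
```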

The consequence is that whilst it is possible, by consuming megawatts of power, to train a system on a specialist skill like driving, it is not yet possible to incorporate a human level of intelligence and knowledge into the car itself.  

A human can be taught to drive with a few tens of hours of driving instruction, and they can still do other jobs after they have driven to work. Tesla must spend years of processing time, and a huge power bill, to obtain the set of neural weights that a person needs to drive.

Credit: Tesfu Assefa

Of course, the Tesla business plan is to train once and then replicate the information in all their cars. But the size and power-hungry nature of the chips at present prevents them from being put in each car.

It will take some time, one or two decades, before the energy × storage efficiency of chips reaches a point where mobile robot devices with general intelligence comparable to humans are likely to be available. So, to harness general AI, a lot of effort must go into improving the power consumption and memory capacity of chips. Until that point, general AI will only be available as remote online services running in big data centers.

These in turn will increase demand for electricity at a time when, due to environmental considerations and political instability, energy is becoming scarce and expensive. The implication is that an ever-increasing share of GDP is going to have to be directed to producing non-fossil fuel energy infrastructure. 


Steal This Singularity Part One: The Yippies Started The Digital Revolution

Every fourth one of these Mindplex articles will be an annotated and edited excerpt from my multipart piece titled Steal This Singularity, originally written some time in 2008. This will continue until I get to the end of the piece or the Singularity comes. Annotation is in gray italics.

Part One: Steal This Singularity

1: The notion that the current and future extreme technological society should not be dominated by Big Capital, Authoritarian States or the combination thereof. Also related, a play on the title of a book by 1960s counterculture radical Abbie Hoffman. Abbie may be an obscure figure to today’s young people. Let’s start to fix that here.

2: The notion that in our robotized future, human beings shouldn’t behave robotically. The response to AI isn’t to blow up or hack down AIs. Become so unique and original that no program, however sophisticated, can perform you. Let AI thrive. You have less clichéd paths to follow!  

 A few years ago, I proposed Big Dada as a response to Big Data. Big Data is the corporate/state/organization tool for exploitation and control, and/or for making effective policy for human benefit. (Life’s rich in ambiguity.)

With Big Dada, I suggested confusing the data by liking what you hate; hating what you like; by lying; by engaging serious issues with surrealistic gestures and language and by generally fucking with data’s logic circuits. I didn’t suspect at that time that a power-hungry, orange-faced, grifter-trickster reality show host would capture Big Dada in a sort of chaos-fascism. Clearly, there were bigger, richer Big Dadas to contend with. Who knew?   

The well-rounded posthuman — if any — should be able to wail like a banshee, dance like James Brown, party like Dionysus, revolt like Joan of Arc and illuminate the irrational like Salvador Dalí. Sadly, the ones that aren’t mythological are dead, so a smart-ass immortalist might argue that even being able to wag a finger would be an improvement over the passions or mobility of these three losers. 

3: The title for a website in which R.U. Sirius says and does as he pleases. As it turned out, it pleased me to not do much with that website.  

The Singularity is, of course, conceived of as the time at which the artificial intelligences that we create become smarter than us. And then they make themselves even smarter and smarter still and yet smarter again and so forth… at an ever-accelerating pace, until they become incomprehensibly something other to our wormy little minds.

I have to be honest. I’m not sure how seriously to take this. But ‘Steal This Singularity’ has much more of a ring to it than ‘Steal This Future’ or ‘Steal This Transhumanity’. Good sloganeering is intellectually disreputable… but fun. Plus anything that can fit on a T-shirt can be sold. My friend Timothy Leary used to advocate for getting your philosophy down to a bumper sticker. Tim was disreputable… but fun. And the way I see it, The Singularity has become a buzzword for the rad techno-future brought on by NBIC (Nano-Bio-Info-Cogno) or GNR (Genetics, Nanotech, and Robotics) or — to put it in more populist terms, the coming of the machine overlords.

Look, for example, at Singularity University. (SU had just been established when I wrote this.) Here we have the establishment Singularitarians, all hooked up with NASA and Google and Cisco and Genentech. And how seriously did they take the Singularity label? Well, when Alex Lightman and I interviewed founder Peter Diamandis for h+, he made it clear that they were using the word for the same reason that I was: COOL BUZZWORD! That… and to make Ray Kurzweil happy. OK, he didn’t blatantly say “cool-ass buzzword, dude!” He said: “to be clear, the university is not about The Singularity. It’s about the exponentially growing technologies and their effect on humanity… You know, we toyed with other terms… like Convergence University and others. But in homage to Ray…” Why do I suspect investment capital was involved?

So, in equivalent homage to SU, I call this project ‘Steal This Singularity’ and admit straight out that it may or may not have jackshit to do with ‘The Singularity’, depending on accidents, random moods and possible funding.

The question, then, may be asked: smarter-than-human AIs aside, does ‘Steal This Singularity’ presume the rather technotopian possibilities promised by transhumanism, but believe that it will be necessary to STEAL it from the so-called 1%? Is that what I’m up to here? Well, maybe. How does one steal a Singularity (or something like it) from corporate ownership? I think this is a serious question. It’s almost certain that, barring a revolution, the digital other-life will be privately owned (in case of a revolution, it will probably be controlled by the vanguard state… and then, eventually, also privately owned). If, for example, humans can upload themselves into data-based quasi-immortality, it will be owned, and the options will be more locked in than North Korea on a bad day. And one fine day, the powers that be or some nasty 12-year-old hacker will drag you into the garbage icon. (Yes, the garbage icon is eternal.) OK, fun’s fun, but let’s get back to the real, old-school fun, i.e. the Yippies.

Part Two: The Yippies Started The Digital Revolution

In 1971, a revolutionary prankster/celebrity named Abbie Hoffman, who had started the radical group the Yippies (Youth International Party), released STEAL THIS BOOK, a manual for living on the fringes of a wealthy society by grabbing up some free shit from corporate powers while committing some Blows Against the Empire (another influence on this project, btw).

Credit: Tesfu Assefa

See, 1971 was the last year that the vanguard of the counterculture thought that they were going to make a total cultural and political psychedelic/anarchistic/left wing revolution before realizing… fuck it. Let’s campaign for McGovern. But more to my point here and the milieu it attempts … true story… the Yippies started the phreakin’ digital revolution! To wit: The hacker culture started as the phone phreak culture. The phone phreak culture came out of the Steal This Book attitude about getting free stuff from the detritus of corporate culture, in this case, the phone company. I wonder how shoplifting and other forms of gutter-freak theft play today among some leftists – the ones who seem to have become “inlaws in the eyes of Amerika” (Jefferson Airplane reference)… inclined towards lawful good behavior and even occasional pompous respect for American institutions. This must have emerged in reaction to a lawless lunatic right that has taken a much more visible and colorful role in the zeitgeist. There’s some extreme code-switching when it comes to the romance of insurrection (Yippies, for example, dug the Weather Underground… which, in those days, wasn’t a website for following weather conditions). And QAnon Shaman – with his war paint and animal howls – seems like someone who would only have been at home in a Yippie! prank back in ’71. There’s so much more I could say about code-switching. Maybe some other column. The first legendary phone phreak, John Draper aka Captain Crunch, who built the blue boxes, used to hang out at 9 Bleecker Street, NYC, Yippie headquarters. The first magazine that focused primarily on phone phreaking was YIPL (Youth International Party Line), which was started by Hoffman and “Al Bell.” In 1973, it transmogrified into TAP, which is more broadly remembered as the initiatory phone phreak periodical.

Phone phreaks were computer hackers. Draper famously noted that the phone system “is a computer.” From this milieu, the personal computer arose. Famously, Steve Jobs and Steve Wozniak funded the birth of Apple by selling Blue Boxes for phone phreaking.

Another Yippie contribution is the use of McLuhanism as a weapon in the countercultural revolution. Hoffman, Jerry Rubin and the other original YIPs took an idealistic, youthful new left that was sort of basic and organic, a mirror of the folk music that they loved, and made it “go electric” (a term used for when Bob Dylan started using rock ’n’ roll to communicate his increasingly surrealistic cultural critique). That the medium is the message was central to their strategy for an anarchic left-wing sex, drugs & rock ’n’ roll youth revolution. Hoffman’s 1969 book ‘Revolution For the Hell of It’ is saturated with McLuhan references and strategies for how a freak left could take over America, end war and racism, and bring about a post-work celebratory psychedelic utopia. ‘Do It!’, yippie prankster/leader Jerry Rubin’s 1969 book, was ‘zapped’ (i.e. designed) by Quentin Fiore, the same force behind ‘The Medium is the Massage’, McLuhan’s most successful incursion into the popular mind. The YIPs had faith that, being native to television and rock ’n’ roll radio, they had an intuitive understanding of the era that outmatched the dinosaurs of the establishment. They could bring the already rebellious rock ’n’ roll media babies into their utopian revolution.

As things evolved (or devolved), young people did become increasingly rebellious, and even riotous. The counterculture drifted from the intellectual class in the leading colleges out into the broader youth culture, and the emblem of rebellion shifted from Jane Fonda’s progressive activism to Peter Fonda giving the world the finger in ‘Easy Rider’. I bet some of those tangled up in this inchoate rebellion reemerged in the Capitol Building on January 6, 2021, as hairy old dudes being disrespectful to Nancy Pelosi’s desk.

McLuhan wrote, “The global village absolutely ensures maximal disagreement on all points.” Wow! Sure seems to have called modern digital culture! This lineage can be traced to the hippie/yippie reframing and idealization of mediated pop-cultural hipness, and then on through Stewart Brand, who became obsessed with the idea that a picture of the whole earth would create a shift in human consciousness that would have us identify as citizens of earth (the global village) rather than members of a tribe or nation. Brand, with his Whole Earth Catalogs in tow, went on to become, arguably, the central figure of the emerging digital revolution in the late 1980s, sponsoring the first hackers’ conference and the first intellectual (maybe the last) social media site — a BBS called The WELL — and helping create ‘Wired’ magazine, which idealized accelerated change as a world-improving hip cultural and business revolution. This may seem like a long distance from the Yippies’ original intentions — although it may be that where we landed was inevitable, the view of the 1995 essay ‘The Californian Ideology’ by Richard Barbrook and Andy Cameron.

Indeed, the rise of the computer-enthusiastic hacker movement of the 1980s, which was made up pretty much entirely of counterculture enthusiasts, was well-timed to the Reaganite agenda for setting the entrepreneurial impulse free from regulation. It was these two forces in tandem that made the digital revolution happen. But I’m trying to cover too much ground in one column – a rant for another time. 

Read the follow-up article Steal This Singularity Part 2: The More Things Change, The More You’ll Need To Save
