Technoshaman: from worldbuilding to mindbuilding: Part 3

As the ancient adage goes, “with great power comes great responsibility.” This leads us to the question: If used wisely, can the XR metaverse actually elevate human consciousness? Can worldbuilders craft virtual environments and experiences that make us happier and healthier as individuals and as a society? 

We’re not talking about influencing participants with ideologies, philosophies, morality, political propaganda, brands, or even social causes for that matter. Can the power of XR help transform us into better people? More responsible citizens? Can it motivate us to unify in building a better world?

The need for transformation

We don’t need to dwell on the need for individual and social transformation — the signs are all around us. The last century of technological innovation has given us vast personal, computational and industrial power that continues to grow exponentially. Individuals can now mislead the masses with a single tweet. Nations have the power to annihilate cities with the press of a button. And our hunger for energy and material wealth is choking the planet with our own waste. 

Gus Speth, author and top U.S. Advisor on Climate Change, put it this way:

I used to think that top environmental problems were biodiversity loss, ecosystem collapse and climate change. I thought that thirty years of good science could address these problems. I was wrong. The top environmental problems are selfishness, greed, and apathy, and to deal with these we need a cultural and spiritual transformation. And we scientists don’t know how to do that.

Technology is but a natural extension of human consciousness and intent. We currently have the technological solutions needed to create an abundant world with clean, renewable power, materials, and resources. We have solutions for mitigating, and possibly reversing climate change. And we have the capacity for the altruism needed to overcome selfishness and act in our mutual best interests. 

It’s time to transform, either consciously and intentionally, or involuntarily as a victim of our own ignorance. If we are to create a better world — or even just maintain our present quality of life  — a “cultural and spiritual transformation” within this generation would appear to be essential. We all get to go on this journey of transformation.

The science of transformation

Neuroscientist Dr. Paula Tallal said, “You create your brain from the input you get.” Shuler and Bear showed that we not only create our brain — we craft our perception from the beliefs and expectations we choose:

Visual neurons in the brain’s primary visual cortex — long thought to conduct purely sensory, value-free visual information — can also modulate their response as a function of expected reward. In a clever study that sharply revises the view of the fundamentals of how we see, Marshall Shuler and Mark Bear show that visual neurons once considered to be mere feature detectors are affected by complex cognitive influences such as reward expectancy. Even at the most fundamental level, it seems, our expectations influence how and even what we see. — Susana Martinez-Conde

To the extent that our beliefs, expectations, or worldviews are substantially changed due to exposure to media (of any kind), we can legitimately say that our brains have been “rewired” by the experience. Facilitating positive transformation on a personal, social, and ultimately, global level is the highest expression of worldbuilding. 

Transformative experiences substantially alter a person’s “possibility space” or life’s path — ideally in a positive sense. L.A. Paul describes the transformative experience as:

…a kind of experience that is both radically new to the agent and changes her in a deep and fundamental way; there are experiences such as becoming a parent, discovering a new faith, emigrating to a new country, or fighting in a war. Such experiences can be both epistemically and personally transformative. An epistemically transformative experience is an experience that teaches you something you could not have learned without having that kind of experience. 

Having that experience gives you new abilities to imagine, recognize, and cognitively model possible future experiences of that kind. A personally transformative experience changes you in some deep and personally fundamental way, for example, by changing your core personal preferences or by changing the way you understand your desires and the kind of person you take yourself to be. A transformative experience, then, is an experience that is both epistemically and personally transformative.

Many of the best stories ever told involve personal transformations of the story’s characters — timeless story themes such as the criminal who redeems himself, the scrooge who is re-awakened, or the hero who falls from grace. In XR, we ourselves can become the character who is transformed.

Paul’s epistemic and personal categories of transformation readily map into the two educational modalities: cognitive (intellectual realization through thoughts, facts, semantic language, narrative, and visuals) and affective (emotions, inspiration, and motivation transmitted through storytelling, drama, music, and semiotic language). 

Worldbuilders are discovering that XR experiences can evoke a wider range of affective states, including flow states, mindfulness, unity consciousness, and mystical states. 

Artechouse: Walkthrough immersive installation featuring a fractal world from technoshaman Julius Horsthuis. Phenomenological cinematic experiences can communicate the ineffable. (Credit: Artechouse)

Technoshaman: Maestro of transformation
In his 1985 book The Death and Resurrection Show: From Shaman to Superstar, Rogan Taylor traces our modern entertainment industry back to the earliest of all religions — shamanism. The shaman uses rituals, feats, songs, stories, power objects, and performances to “fine-tune the psyche of his tribe.” Shamanism — like religion — was a “tool for both surviving and accomplishing transformation.” Taylor sees the shaman as a “maestro of transformation.”

While modern entertainment likely emerged from shamanism as Taylor suggests, he is quick to point out that, unlike today’s audiences…

… the tribal audience certainly would not have arrived at the shaman’s healing séance in the expectation of being mildly amused or merrily entertained. They came to witness, and take part in, something powerful and sacred.

According to evolutionary anthropologist Michelle Scalise Sugiyama, shamanic storytelling in indigenous cultures is a form of pedagogy used to communicate social norms and traditions. Storytelling is often accompanied by visual, auditory, and/or gestural modes, including drumming, singing, chanting, eye-gazing, vocal mimicry, and variations in volume, rhythm, timbre, pitch, and stress in speech that “cast a sort of awe on the audience,” according to one anthropologist she cites, who studied the Dena storytellers.

Taylor also details how these storytelling elements of shamanism survived as modern showbusiness while leaving behind the deeper sacred, mystical, and ecstatic dimensions of shamanism.

Showbusiness looks like the orphaned child of a divorce between art and ecstasy. Forever hiding its shameful origins, while, at the same time, secretly attempting the reconciliation of its separated parents.

With XR’s ability to form a wideband neural interface between worldbuilder and participant, future masters of XR experience design — technoshamans — can bring the power of awe, mystery, and ecstasy back into mainstream arts and entertainment. The technoshaman is a maestro conducting the nervous systems of their audience.

However, like the shamans of old, the technoshaman must go within to access visions, or actually embody elevated, transcendent, or other beneficial states of consciousness in order to transmit them. They must themselves transform if they are to inspire transformation in others. 

A 360 cinematographer with a deep love of nature will transmit this love through their work. A musician with years of mindfulness practice will bring their audience into deep contemplative states. A visionary artist who journeys into altered states will evoke psychedelic experiences in their participants. When the technoshaman’s consciousness is laid bare for all to experience through multisensory XR technologies, there is no room for inauthenticity. The artist becomes their work.

The same goes for larger worldbuilding teams that require a wide range of skills, including programmers, audio engineers, animators, lighting designers, composers, producers, directors, performers, and more. Participants will receive a transmission from the combined consciousness of the worldbuilding team. 

This has always been the case in the entertainment industry, of course. But with XR, the transmission of consciousness is more profound. Worldbuilding teams that want to optimize positive impact must work to achieve authentic coherence of heart, mind and vision as they create and perform. 

MSG Sphere – The world’s largest digital dome, opening in 2023 in Las Vegas with 17,500 seats, is an arena for next-generation technoshamans. (Credit: MSG Entertainment, LLC)

In their highest expression, all performers, storytellers, journalists, filmmakers, and influencers are contemporary shamans who wisely tend to the psyches of their tribes. Emerging XR technologies are supercharging their crafts, turning storytellers into worldbuilders. 

The technoshaman knows that worldbuilding is mindbuilding and has pledged to use their power wisely for the upliftment and evolution of human consciousness. And, as the power of the XR metaverse grows, turning us all into worldbuilders — let us use these tools wisely for the betterment of humankind.

Technoshaman Resources

Here are select examples of experiences, venues, and platforms of interest to aspiring technoshamans.

| Artist, Project or Platform | Dome | Immersive LBE | Mobile App | VR | AR | Metaverse | Transformative Mode |
| --- | --- | --- | --- | --- | --- | --- | --- |
| AI World Building | | | | | | | |
| Jason Silva’s Cyberdelic Dreaming | | | | X | | X | AI Art, Inspirational Narratives |
| Promethian AI | | | | X | | X | AI World Building Application |
| Builder Bot | | | | X | | X | Meta’s AI World Building Demo |
| Dome Shows/Events | | | | | | | |
| Michael & Jahna’s The Journey | X | | | X | | X | Enchantment, Mindfulness |
| James Hood’s Mesmerica | X | | | | | | Happiness, Well-Being |
| Jhené Aiko’s Modern Mantra Immersive | X | | | | | | Sound Healing, Well-Being |
| Metaverse Worlds & Events | | | | | | | |
| BRCvr | | | | X | | X | Creative Community/Festival |
| Evolvr | | | | X | | X | Mindfulness Community |
| Domelandia | X | | | X | | X | Artist Empowered Worlds |
| Domensions (in BRCvr) | | | | X | | X | Transformative Art Community |
| Mobile & VR Experiences | | | | | | | |
| Tripp | | | | X | | | Mindfulness, Well-Being |
| Insight Timer | | | X | | | | Mindfulness, Well-Being |
| Laurie Anderson’s To The Moon | | | | X | | | Contemplative Journey |
| Space VR | | X | | X | | | Overview Effect |
| VR Metaverse Platforms | | | | | | | |
| AltspaceVR | | | | X | | X | Creative/Social |
| Engage | | | | X | | X | Creative/Social/Business |
| NEOS VR | | | | X | | X | Creative/Social |
| VR Chat | | | | X | | X | Creative/Social |
| RedpillVR | | | | X | | X | Live Music |
| XR/Dome Venues | | | | | | | |
| ARTECHOUSE | | X | | | | | Immersive Art Gallery |
| Unreal Garden | | X | | | X | | Art, Wonder |
| Vortex Domes | X | | | X | | X | Wonder, Awe, Well-Being |
| Wisdome | X | | | | | | Immersive Art Park |
| Frequency | X | | | | | | Breathwork, Well-Being |
| Holodome | X | | | | | | Holodeck-Like Environment |

Eight principles of the technoshaman

  1. The technoshaman uses advanced digital technologies to craft experiences that are positive, life-affirming, healing, or awakening, and that support or accelerate the natural evolution of human consciousness. 
  2. The technoshaman awakens minds. Rather than propagating memes, philosophies, ideologies, propaganda, brand messaging, and other forms of influence, the technoshaman seeks to evoke mental states that empower, embrace ambiguity and diversity, awaken latent faculties, transcend dualistic thinking, slow the internal dialog, or leave room for deep contemplation.
  3. The technoshaman opens hearts. Rather than gratuitously evoking emotions such as fear, shock, horror, or loss, the technoshaman focuses on positive emotions, including love, empathy, a sense of connection, awe, or joy. Storytelling is used as a path towards empowerment, mood elevation, epiphany, communal unification, or liberation of the human spirit, not simply to manipulate emotions or behaviors.
  4. The technoshaman embodies the states of consciousness that they seek to evoke in others. The experiences they create are authentic and not contrived solely for influence or personal gain.
  5. The technoshaman’s creative medium is the nervous systems of their audience, most powerfully accessed through multisensory immersive media. As such, technoshamans are attuned to and respect the nervous systems of their participants.
  6. Technology is simply a medium for transmitting states of consciousness and is never an end in itself. 
  7. The technoshaman promotes evolutionary communities by evoking a sense of unity, coherence, harmony, and connection.
  8. The technoshaman seeks not just to entertain, but to transform.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

The History of BGAN Punks and a discussion with their Creator

“REKT DEKT DEKT, GELD I’VE BEEN WALKING HOME IN SONA, I’M NOT GOING HOME I’VE BEEN WALKING HOME, THERE’S NO MONEY.” 

Such are the indelible lyrics of Mark Cuban’s Bastard. More specifically, one of his 12 BGANs (short for Bastard GAN Punks). Created by Berk Özdemir, the V1 and V2 collections are renowned for being fashioned by a specialized branch of AI known as Generative Adversarial Networks (hence the acronym “BGAN”). 

(One of Mark Cuban’s V2 BGANs)

With notable holders like Cobie and Punk 6529 (not to mention recently gaining a place in FloorDAO’s Treasury), BGANs are a homage to – and evolution of – the original CryptoPunks by Larva Labs. Overflowing with a layered creative ecosystem that predates Art Blocks, they are considered by some to be true digital art. 

The V1 collection of 76 NFTs was released in 2020, followed by the much larger V2 BGANs (11,305 in total), minted on March 7, 2021. This was one month earlier than the much talked-about Bored Ape Yacht Club art NFTs. Both series include a range of genderless, carefully curated traits (unlike CryptoPunks). And, as mentioned, the core stylistic influence and aesthetics are determined entirely by AI. 

(BASTARD GAN PUNKS V1)

What are Generative Adversarial Networks (GAN)? 

A generative adversarial network (GAN) is a machine-learning model introduced by Ian Goodfellow in 2014, in which two neural networks compete to produce ever more accurate, or better fitting, data. GANs are used to generate new, synthetic data in scenarios across many sectors. Ian Goodfellow’s appearance on the Lex Fridman Podcast explains more.

Use cases are also found in contemporary art: content like images, audio, video etc. can be given to the neural networks to learn from, as the two models compete with each other to output higher-scoring answers. After a training session, new file(s) are generated that contain the AI’s interpretation of the original dataset and what it learned. 
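
To make that competition concrete, here is a minimal sketch of a vanilla GAN training loop in PyTorch. It is purely illustrative — the layer sizes, names, and fully-connected architecture are invented for this example, and Berk’s actual pipeline used StyleGAN-family tooling, as he mentions later in the interview:

```python
# Illustrative vanilla GAN in PyTorch (invented example, not Berk's actual pipeline).
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 32 * 32 * 3  # e.g. flattened 32x32 RGB pixel art

# Generator: maps random noise to a fake image.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())
# Discriminator: scores how "real" an image looks.
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images):
    """One adversarial round: real_images is a (batch, img_dim) tensor in [-1, 1]."""
    batch = real_images.size(0)
    real_labels, fake_labels = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1. Train the discriminator to separate real images from generated ones.
    fakes = G(torch.randn(batch, latent_dim)).detach()
    loss_d = bce(D(real_images), real_labels) + bce(D(fakes), fake_labels)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2. Train the generator to fool the discriminator.
    fakes = G(torch.randn(batch, latent_dim))
    loss_g = bce(D(fakes), real_labels)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Stand-in "real" batch; in practice this would be, e.g., CryptoPunk images.
print(train_step(torch.rand(8, img_dim) * 2 - 1))
```

The two losses pull against each other — the discriminator improves at spotting fakes, the generator improves at producing them — which is exactly the competition described above.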

Berk’s integration of GAN technology was a case of creative experimentation while undertaking a Master’s programme at ArtScience Interfaculty.

Unable to afford an original CryptoPunk, Berk decided to make “interesting enough” remixes of them. BGANs remix “genes” representing a variety of traits from several original CryptoPunks, so there is no direct equivalent or correlation between them and the original NFTs by Larva Labs. Nobody knows which specific Punks were used to create the new interpretations by Berk, thus the origin of the name and concept “bastards.”

BASTARD GAN PUNK V2 #11188

When detailing his creative process, Berk states: 

“The first iteration of Bastard Gan Punks (v1) were trained on 10,000 original Cryptopunks and gave birth to many weirdo adorable motherfuckin’ bastards. The process worked very well and I was so happy with getting weird interesting results. However, since some traits of Cryptopunks exist on just a few bastards (like Alien, Ape, Zombie, or Beanie etc), most of the rare traits were lost during the model training.

For BGANPUNKS V2, I chopped all Cryptopunks’ attributes, made cross-gender copies of ones made for only one gender (BGANPUNKs are queer!) categorized them in different folders (Hairs/Heads, Eyes, Glasses, Smokes etc.), and wrote a script to make all possible combinations of those traits. I also added skin and accessory color randomization + a couple of custom traits I’ve drawn myself. 

To be honest, it was a painful and shitty process because I had to do all the cuts by hand and had to optimize the script via trial and error, maybe hundreds of times and not make a million unnecessary images at the end, which would take a lifetime to train with my shitty GPU.”

Separate smart contracts for IP and licenses (commercial rights):

Alongside AI, BGANs have also catered to the obfuscated and at times contentious nature of intellectual property by granting full commercial rights to holders. Through a separate smart contract, holders have the option to tailor specific conditions of the IP for each NFT directly on the Ethereum blockchain.  

Traits and the BGAN Ecosystem:

Unlike the static metrics that help collectors filter through an ever-expanding constellation of NFTs, BGANs were released mostly without any pre-determined trait categories except for CALM AF, HYPE AF, and ‘Bastardness’ (a broader category with varying rarity metrics). 

Instead, over the ensuing months and through a meritocratic process aligned with the ethos of Web3, the community decided on the specific traits they wanted BGANs to embody. This initiative was later streamlined through a living document (authored by Snooplion) known as the Trait Constitution. In essence, if any holder believes that BGANs should change or modify their traits, all they need to do is follow the process in the trait constitution and submit a snapshot proposal for community voting.

The current definitive traits of BGANs include:

  • HYPE TYPE (animated/still). There are 847 animated or “HYPED AF” vs. 10,458 still or “CALM AF” BGANS.  
  • SONG WORD COUNT
  • BASTARDNESS
  • BACKGROUND (still BGANS)
  • GLITCH PATTERN (still BGANS)
  • BAD HABITS (still BGANS)
  • FACING DIRECTION (still BGANS)
  • TYPE (still BGANS)
  • BACKGROUND GLITCH LEVEL (animated BGANS)
  • BACKGROUND MOOD (animated BGANS)
  • FLOATY HEAD (animated BGANS)
  • SPEEDOMETER (animated BGANS)
  • NUM OF FRAMES (animated BGANS)
  • HEAD TURNS (animated BGANS)

A more detailed explainer on how to sort through and understand these traits has been written by 888mooncat.

Adding to the world of BGANs, Berk also created the Glicpixxx (V1 & V2) collections, Artglixxx collection, and an associated $Glix Utility token.

Glicpixxx V1 

Glicpixxx V1 is the first of the Glicpixxx experiments by Berk. As with the lighthearted origins of BGANs, the idea of Glicpixxx started in 2020 after a streaming session on Twitch when Berk opened a favored encoder software (a processing program) called “GLIC” and started playing around with images. After taking one of the encoded images, he divided it into 36 individual squares on Photoshop and minted them (as a test) using OpenSea’s storefront contracts. 

GLICPIXVER001 CANVAS – #0

However, before releasing the GLICPIXXXVER002 GRAND COLLECTION, Berk updated the parameters for the original smaller series (V1) so that holders of the originals (created using the OpenSea storefront) could transfer them to a new contract and mint an equivalent copy. This was largely because, at the time, OpenSea used unverified contracts and stored all data on their API — and because the tokens functioned as singular ERC-1155 tokens rather than following the conventional ERC-721 standard for NFTs.

A unique aspect of the new contract for GLICPIXXXVER001, which Berk created with his friend CANER, is the added NFT (TOKEN ID #00). It functions as a programmable canvas that reflects all 36 GLICPIXXXVER001 in 6x6 tiles. 

Similar to async.art (or related programmable blockchain projects), the separate 36 GLICPIXs function as layers for the master #00 token. Any holder of the original series can change the render of the master token by flipping their NFT, rotating it, or swapping its position in the canvas with someone else. 

Each time a token holder changes the parameter of the layer they own, they also change the state of the (master) contract, and as Berk stated, “a P5 sketch rendering the tile canvas will start reading from these new parameters.” The code can be found inside the smart contract. 
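
As a rough mental model — hypothetical Python, not the actual Solidity contract — the canvas can be pictured as 36 tiles whose parameters only their owners may change, with a renderer (like the P5 sketch) reading the shared state:

```python
# Toy model of the GLICPIXXXVER001 programmable canvas (illustrative only;
# the real logic lives in the Ethereum contract and a P5 renderer).
from dataclasses import dataclass

@dataclass
class Tile:
    owner: str
    flipped: bool = False
    rotation: int = 0   # degrees: 0 / 90 / 180 / 270
    position: int = 0   # slot 0..35 in the 6x6 canvas

class Canvas:
    def __init__(self):
        self.tiles = [Tile(owner=f"holder{i}", position=i) for i in range(36)]

    def set_params(self, owner, *, flipped=None, rotation=None):
        # Only a tile's owner may change its parameters, as the contract enforces.
        for t in self.tiles:
            if t.owner == owner:
                if flipped is not None: t.flipped = flipped
                if rotation is not None: t.rotation = rotation

    def swap(self, owner_a, owner_b):
        # Two holders can swap their tiles' positions on the canvas.
        a = next(t for t in self.tiles if t.owner == owner_a)
        b = next(t for t in self.tiles if t.owner == owner_b)
        a.position, b.position = b.position, a.position

# A renderer reads these parameters to redraw the 6x6 grid of the master token.
canvas = Canvas()
canvas.set_params("holder3", rotation=90)
canvas.swap("holder3", "holder17")
```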

Glicpixxx V2 – Grand Collection

On the same day he experimented with GLICPIXXXVER001, Berk generated an additional 48 images (1024 x 1024) with GLIC – all with a different range of presets, codecs, and parameters. A Python script was then used to explore the visuals in more detail, resulting in an additional 40k (32 x 32) source images, each with a unique and complex variety of patterns and colors. 

GLICPIXXXVER002 #5770

While preparing for the launch of BGANs V2 in 2021, Berk used a random selection of the GLICPIXXX as backgrounds for transparent Bastards and found an organic synergy between the aesthetics of the two collections. It wasn’t long before positive acclaim spread throughout the BGAN community, with people hunting for the most striking GLICPIXXX combinations. 

Although the 40k source images were originally going to be combined into a series of artistic GIFs for VIV3, after a lull in communication, Berk decided to release them as a grand collection and a gift for Bastard Gan Punk holders. 

As such, GLICPIXXXVER002 was born as an ERC-721 collection consisting of 11,500 images of 32 x 32 pixels. As with V1, V2s are programmatically encoded with the open-source software GLIC. The collection consists of several subcategories and unique traits that reflect the native lore of BGANs. More details can be found here.

Mirroring the creative process for BGAN Punks, GLICPIXXXVER002 images were created without the predefined traits or patterns usually associated with the “layer merging” generative method. Instead, Berk added traits according to his analysis of each separate image, and the patterns were reverse-engineered, categorized, and assigned to BOOMER CALM AF GLICPIXXX by the BGAN Punk community (see https://glicpixxx.love/readme). 

As an ERC-721 token with an open license, GLICPIXXX can be used by anyone as a raw asset for creative initiatives or projects like virtual worlds, games, smart watches, fashion, texture packs in Minecraft, weapon skins on an FPS game, or in combination with other unrelated NFTs. Further ideas and example use cases can be found on the main site. 

Artglixxx

ARTGLIXXX is powered by GLICPIXXX. It extends the concept and ecosystem of generative, algorithmic, programmable art by remixing the original designs of the GLICPIXXX collection. The first ARTGLIXXX series is ARTGLIXXX GENESIS, which can be minted by holding an NFT from the GLICPIXXXVER002 grand collection. 

https://opensea.io/collection/artglixxx

$Glix Utility token: 

$GLIX is an ERC-20 utility token for the GLICPIXXX ecosystem. 

Token IDs 0–14999 of the ARTGLIXXX GENESIS series (aka ELDEST ARTGLIXXX) automatically yield 10 $GLIX tokens per day for holders until 2032 (10 years from the launch date). $GLIX tokens can also be burned (in increasing quantities) to mint or collect upcoming generations (or versions) of the ARTGLIXXX series. 

Further BGAN tokens were created in the ecosystem, known as $BGAN and $BASTARD. $BGAN is the token used in the NFTX protocol and on SushiSwap.

With a more mysterious origin, the $BASTARD token was created and airdropped anonymously to all BGAN holders. After thorough due diligence by Qexd and Rosco, the community voted to add liquidity for it on SushiSwap. It can also be used for internal projects like minting NFTs from Berk’s copymint collection.

The official $BASTARD address is: 0x78d9a9355a7823887868492c47368956ea473618 

BGAN DAO:

As befits any trustless cohort of degens, the BGAN DAO (established 15 Nov. 2021) is governed through a meritocratic system of proposals that align with the constitution.

At the time of writing, there are nine ‘Signers’ (elected by the community) who help execute valid proposals submitted on snapshot.org. 

The current signers include: 

  • Wassoshi – an artist, musician, and postdoc researcher at the University of Copenhagen.
  • Dirtbaglu – an NFT collector and cryptocurrency expert with a focus on DeFi, liquidity-provisioning systems such as nftx.org, and modern contemporary digital art.
  • Crypto Texan – works in Decentralized Finance Community & Marketing at @0xPolygon and is an experienced podcast host specializing in crypto markets.
  • Qexd – an NFT and cryptocurrency investor with a focus on credit-based stablecoins, and host of several innovative BGAN Twitter spaces.

The DAO also includes a respected group of community moderators such as MoonOverlord and ShroomDaddy. Further moderators include JU7URNA, Masal, Rooster, and Lyssandros. The Bastard DAO is also currently the largest liquidity provider of the xBGAN-WETH SushiSwap pool. 

Interview with Berk

Given the unique ecosystem and growing developments of BGANs, it was only fitting to interview the creator, Berk, and dive further into the genesis, the creative process, and his extended thoughts on AI, NFTs, and Web3 in general. 

Mason: Why did you choose CryptoPunks for the training data? Would you still use them as a focal point today?

Berk: When I started with GAN, I had just discovered NFTs (around late 2019), and two significant projects caught my interest: CryptoKitties and CryptoPunks. I wanted to train my dataset, and such large-scale (10000+) collections seemed pretty straightforward to work on. 

I had an NVIDIA GTX 1660 Super as the graphics card, which would be like a snail for training GANs at big resolutions. So I had to be pragmatic and create a dataset with the smallest source images possible: pixel art. Choosing CryptoPunks was an arbitrary decision, but when I started getting the first results, ideas for the BASTARDS concept came, and I started minting them to create new personas on interwebz.

Mason: What made you settle on ‘11,305’ V2s instead of the conventional 10,000 collection size?

Berk: When preparing for V2, I wasn’t sure how big the collection could be. I had a model for infinite numbers, where people can concatenate a background-less bastard and a glicpixxx and mint it. But then I got feedback from Discord users that a fixed number like 10k would be better. 

Then 11,305 came kind of arbitrarily. I was curating the collection by generating outputs, putting them in the collection, and vice versa. In the end, when I said I would stop, there were 11,305 pieces.

Mason: What makes BGANs more than another CryptoPunk derivative like V3 Punks or Phunks?

Berk: Firstly, it introduces a new paradigm to PFPs and a narrative/concept aligned with the creative methodology. And in a market where a digital “Punk” became an elite flexing object, ugly productions from GAN (aka bastards without parents) have a new saying, introducing the new generation of people decentralizing this corrupted world. I think the most undervalued part of BGANs is that they are more real and humane than any of the PFPs, which claim to have a worldly character.

Mason: How did you get the ‘HYPED AF’ GIFs to loop so well?  

Berk: It’s not that complicated. Anyone who uses libraries like STYLEGAN can create those by outputting latent space interpolation. However, on V2 specifically, the gifs weren’t looping perfectly, so I had to open the source code and tweak some parameters to get almost perfect loops.
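
For the curious, the looping trick Berk describes can be pictured as a closed walk through a generator’s latent space — a hypothetical sketch (the generator here is a stand-in, not STYLEGAN):

```python
# Hypothetical sketch of a looping latent-space interpolation.
import torch

def loop_frames(G, waypoints, steps_per_leg=10):
    """Walk through latent waypoints and back to the start, yielding image frames."""
    frames = []
    closed = waypoints + [waypoints[0]]  # returning to the start makes the GIF loop
    for z0, z1 in zip(closed, closed[1:]):
        for t in torch.linspace(0, 1, steps_per_leg):
            z = (1 - t) * z0 + t * z1    # blend smoothly between latent codes
            frames.append(G(z.unsqueeze(0)))
    return frames

G = torch.nn.Linear(64, 3 * 32 * 32)  # stand-in generator for demonstration
frames = loop_frames(G, [torch.randn(64) for _ in range(4)])
print(len(frames))  # 40 frames tracing a closed loop through latent space
```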

Mason: The term ‘NFT’ has been corrupted by more than a few influencers, celebrities and brands. With that in mind, other than CryptoArt, how would you define BGANs?  

Berk: Bastards constantly want to make the world beautiful. 

Mason: The visuals of the V1 and V2 collections are quite different. Stylistically, you could say that V1 resembles Francis Bacon’s paintings, whereas V2 leans more towards the conventional pixelated design of NFTs. Why is this?  

Berk: I made a mistake with V1s. Normally, CryptoPunks assets are 24x24, and for GAN training, you have to set up a dataset consisting of images whose resolutions are multiples of 32 — like 32x32, 64x64, 256x256, 1024x1024, etc. So, I upscaled the images from 24x24 to 32x32, and it caused some distortions on some pixels. I learned from those mistakes for V2, and they looked more like original CP and pixel art avatars (I think).

Mason: What are some of your biggest creative influences? Is there anyone in AI or blockchain that you recommend learning more about?

Berk: John Cage, John Oswald of Plunderphonics, Nick Briz, Cory Arcangel (maker of Super Mario Clouds), and Chris Novello who created the ‘Illucia’ project using code-bending.

For any artist new to the Crypto and NFT space, I highly recommend  Rhea Myers – a True OG CryptoArtist. 

And the Trash Art movement has been pretty influential.

Mason: How did Mark Cuban end up with several V2 Bastard Gan Punks?  

Berk: As far as I know, Haralabos Voulgaris, the former Director of Quantitative Research and Development for the Mavericks, minted a shitton of BGANs (and dumped most of them afterwards, lol) and told Mark Cuban about it.

Mason: What inspired you to start programming and move into AI, and do you have any advice for readers who want to learn more about it?

Berk: I have a piano and classical composition background, so when I saw the possibilities of programming in electronic music, I dived into the subject. Then I took a Python course and followed some other courses online. I realized that I have a super-powerful tool with programming that can do enormous tasks and experiments I can’t do myself. When I was doing my Master’s degree at ArtScience Interfaculty, I was genuinely interested in machine learning methods, so I started playing with object detection, natural language processing, GAN, audio datasets etc. I didn’t follow a specific source or book. I just jumped into everything I found interesting and was capable of fiddling with.

Mason: Are you working on any other projects?

Berk: I am currently trying to utilize Glicpixxx for fashion textile experiments. So let’s see what kind of stuff this will create!

And there is a multiplayer art experience I am working on with a small group of people, which will utilize Glicpixxx and possibly BGANs in a weirdo way. Other than those, I don’t have specific projects I am working on. Ideas come to me spontaneously, depending on the moment.

Mason: Are you proud of any particular moment in the BGAN timeline?

Berk: When I reversed the bonding curve and sent all the minting price to donations, we managed to contribute to saving a baby with spinal muscular atrophy (alongside other charitable causes).

Mason: Lastly, what would be your ideal future for generative art, NFTs, and artificial intelligence?

Berk: I think the landscape needs more fun, goofiness and weirdness. Of course, I am always excited to see new technology utilizations. Still, sometimes NFTs are utilized just to satisfy the collectors, to make their digital objects ‘worth holding something’. Now NFTs are a part of this world, and we will see so many new uses of digitally created objects, so I am excited to see what kind of craziness we will witness.

Conclusion:

With a pool on NFTX funded with over 10 ETH, a coordinated DAO and Treasury, more than 2,800 unique holders, and over 8,200 ETH in trading volume on OpenSea, BGANs have gained a reputation as something of a stablecoin equivalent in the ever-transient landscape of NFTs. 

But with an eclectic community of technologists, artists, and musicians that form the collective mosaic of bastards, perhaps no better title could be had. Especially when, regardless of ETH prices, there will always be a band playing for the fam in BGAN land.  


Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Large Language Models – From Banality to Originality

Looking at the insanely popular Large Language Models (LLMs) like ChatGPT with an AGI researcher’s eye, what I see are some potentially interesting components for future AGI systems. 

Economically, I see the seed of a family of technologies that, over the next few years, is very likely to obsolete a majority of white-collar jobs, wreaking incredible disruption and ultimately moving us further toward a world in which people are liberated from the necessity to work for a living.

On the other hand, looking at them from an end user’s view, what I see are tools that are already pretty useful for some things – but are marred by a few really major and frustrating flaws.

The first major flaw is a bizarre lack of reality discrimination – LLMs are so full of shit it’s almost unbelievable, given how intelligent they are in some ways.

The second is a mind-numbing boringness and banality – such powerful cliché machines have never previously even been imagined.

I do believe, though, that both of these issues can be overcome with a moderate amount of R&D effort — effort that various parties will probably put in during the next couple of years.

To fully overcome these weaknesses will require a true breakthrough to Artificial General Intelligence. But I suspect they can be remedied to a significant degree even without AGI, as a parallel track of development.

Generative AI vs. AGI

As an AGI guy, the first thing I need to clarify when talking about LLMs is that they are certainly not AGIs. They achieve a fair degree of generality in their intelligence, but it’s not because they generalize beyond their training and programming – it’s because their training data is so huge it covers almost everything. A tremendous variety of queries can be answered via minor variations and combinations on things in the training data. 

A generally intelligent artificial mind will have lots of uses for statistically recognizing patterns in huge datasets, and synthesizing new forms via merging these patterns together in a manner appropriate to some narrow context – the things that LLMs are good at. But I don’t think these capabilities can form the core of an AGI mind. That needs much more abstract thinking, bound up with agency, will, self-reflection, modeling of self and other. If AGI systems have these core functions fulfilled (by the OpenCog Hyperon AGI architecture my colleagues and I are working on, or something else), I think LLMs could add a lot of wonderful peripheral functions.

Toward LLMs with Reality Discrimination

LLMs trained to produce text currently have serious problems with truth-telling – they basically can’t distinguish truth from falsehood. I’ve gathered a few funny or instructive examples in a blog post on ChatGPT and AGI. There are many others easy to dig up online. I have a particular liking for the conversation where ChatGPT calls me the founder of OpenAI, the company that created it (in which I have no direct role).

A variety of approaches are being pursued to overcome this problem and make LLMs less full of shit, and I think a bunch of them will probably succeed to some degree. As an example, we at SingularityNET and TrueAGI are trying to address this with a neural-symbolic architecture: LLMs are hybridized with an OpenCog-based symbolic reasoning engine. 

This symbolic reasoning engine can extract logical relationships from an LLM, and put the LLM’s knowledge in a more structured form. It then compares the relationships between concepts with each other to identify inconsistencies. Where possible, it also compares these relationships with existing background knowledge – common-sense truths about the world like ‘water is wet’ or ‘Ben Goertzel is not the founder of OpenAI’. Inconsistencies can then be used as negative evidence to fine-tune the model – the model can be explicitly trained to unlearn the lies it contains.
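
A toy illustration of the idea — hypothetical code, not TrueAGI’s actual engine — is to treat claims extracted from the LLM as (subject, predicate, object) triples, compare them against background knowledge, and keep any clashes as negative evidence for fine-tuning:

```python
# Toy sketch of the consistency-checking idea (hypothetical; not the real
# neural-symbolic engine). Relations extracted from an LLM are compared
# against background knowledge; contradictions become negative evidence.

BACKGROUND = {
    ("water", "is", "wet"): True,
    ("Ben Goertzel", "founded", "OpenAI"): False,  # known-false background fact
}

def find_contradictions(extracted):
    """Return extracted (subject, predicate, object) claims that clash with background knowledge."""
    return [claim for claim, believed in extracted.items()
            if claim in BACKGROUND and BACKGROUND[claim] != believed]

# Suppose the reasoning engine pulled these claims out of the LLM:
llm_claims = {
    ("water", "is", "wet"): True,
    ("Ben Goertzel", "founded", "OpenAI"): True,  # hallucination
}

negative_evidence = find_contradictions(llm_claims)
print(negative_evidence)  # [('Ben Goertzel', 'founded', 'OpenAI')] -> fine-tune against this
```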

I suspect these hybrid symbolic-LLM structures can greatly increase the generality and truthfulness of things like ChatGPT, even without taking us all the way to AGI.

The Banality of LLMs

But LLMs also have other problems besides their rampant BS generation – like the fact that so much of what they produce is boring clichés. Eric Hoel has called this “the banality of ChatGPT”. 

As a so-bad-it’s-good example of this banality, consider the following excerpt from a recent dialogue I had with the system:

BEN: How would you advise to manage and coordinate the project of creating a replica of the planet Jupiter with 1000x human intelligence? 

ChatGPT: For managing and coordinating a project of creating a replica of Jupiter with 1000x human intelligence, I would suggest the following steps:

  1. Define the project scope: Clearly define the goals, objectives, and deliverables of the project.
  2. Assemble the team: Recruit individuals with relevant skills and experience to form a diverse and effective team.
  3. Develop a project plan: Determine the tasks, resources, timelines, and budgets required to complete the project.
  4. Set up project management systems: Establish tools and processes to track project progress, communicate with team members, and manage risks.
  5. Monitor progress and adjust as needed: Continuously assess the project’s status and make changes as needed to ensure it stays on track.
  6. Ensure communication and collaboration: Encourage regular communication among team members and stakeholders to ensure everyone is on the same page.
  7. Anticipate and manage risks: Identify and plan for potential risks and obstacles to minimize their impact on the project.
  8. Celebrate success: Recognize and celebrate milestones and achievements along the way to keep team morale high.

Remember, creating a replica of Jupiter with 1000x human intelligence is a complex and challenging project, so it’s important to be organized, communicate effectively, and continuously adjust as needed to ensure success.

Not exactly wrong. But, shall we say, not really grappling with the particulars of the question…

Sometimes boring clichés are exactly what you want in life. If you’re writing a cover letter for a job application, or an essay for university English class… originality and quirkiness can sometimes be dangerous.

But one worries about a near-future in which boring clichés become even more common than they are now, because they’re so quick and cheap to generate using AI models. A boring dystopia.

Credit: Edward Keller

The Glory and Horror of Music LLMs

Google’s recently announced MusicLM neural model provides an intriguing case study of the intersection between “fascinatingly impressive” and “horrifyingly boring.”

You can give it a query like –

We can hear a choir, singing a Gregorian chant, and a drum machine, creating a rhythmic beat. The slow, stately sounds of strings provide a calming backdrop for the fast, complex sounds of futuristic electronic music.

– and it will generate music that fits the bill. Amazing stuff.

Except the effect is a bit like having a workaday lounge band improvise a musical passage for you. It’s very rarely artistically thrilling.

Given how impressive the functionality is, you might say this is a pretty lame complaint.

However, if such technology were used to generate music for people to listen to, the result would be an even more cliché-ridden and repetitious music sphere than record execs have already inflicted on us! Dentist’s office muzak++ forever!

Credit: Tesfu Assefa

The problem here is that averaging together everybody’s art produces art that is itself average. For some commercial purposes – e.g. background music for ads or video games – average, passable, competent music may be fine.

As a lifelong composer and improviser, I’ve generally been more interested in creating sounds that diverge from the average and the expectation – even if they get a little jarring or discordant in the process.

Of course, current neural models can be jarring and discordant too – but they will tend to do it in a way quite similar to something from their training dataset, or combining surface-level features of a few things in their training datasets.

Music is the domain in which I’ve thought most about how to overcome the banality of LLM output – because as a musician, I would really love to have an AI musician to improvise with. We already have a robot singer in our Jam Galaxy Band, and some AI-composed riffs, but a real-time AI improviser jamming alongside me is what I dream of. I don’t want boring lowest-common-denominator MusicLM-style in my band, not at all…

One approach that can be taken here is to formally introduce a theory of ‘interestingness’ – make a mathematical model of what constitutes interesting music, and then condition a MusicLM-type model to bias it to produce outputs meeting this interestingness criterion. This is not that far off from work I did in the 1990s using genetic algorithms to evolve music maximizing a ‘fitness function’ encoding a theory of musical interestingness. But LLMs allow the evolved music to incorporate patterns of various sorts from human music in a much more refined way than was possible back then.
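
As a toy example of what such an ‘interestingness’ criterion might look like — an invented scoring rule, not the actual fitness function from that 1990s work — one could reward unpredictable pitch transitions while penalizing jarring leaps:

```python
# Toy "interestingness" fitness function for a melody (hypothetical scoring rule).
import math
from collections import Counter

def interestingness(melody):
    """Score a melody (list of MIDI pitches): reward transition variety, penalize huge leaps."""
    if len(melody) < 2:
        return 0.0
    transitions = Counter(zip(melody, melody[1:]))
    total = sum(transitions.values())
    # Shannon entropy of pitch transitions: higher = less predictable.
    entropy = -sum((n / total) * math.log2(n / total) for n in transitions.values())
    # Penalize melodies that leap more than an octave too often (jarring).
    leap_penalty = sum(abs(b - a) > 12 for a, b in zip(melody, melody[1:])) / total
    return entropy - leap_penalty

print(interestingness([60, 60, 60, 60]))          # monotone: score 0.0
print(interestingness([60, 62, 64, 67, 65, 72]))  # varied: higher score
```

A genetic algorithm would evolve melodies that maximize such a score — or, in the conditioning approach described above, a MusicLM-type model could be biased to prefer high-scoring outputs.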

LLMs vs. Hmmmmmm

Credit: Tesfu Assefa

Of course, this would still be a very different thing from how an AGI system would approach music. 

AGI and music could intersect in a variety of ways, but one way or another, it would involve an AGI system creating and understanding music in the context of its experience of being an agent in the world, like when the AGI in the 2013 film Her says, “I’m trying to write a piece of music that’s about what it feels like to be on the beach with you right now.”

Steven Mithen’s book The Singing Neanderthals presents a hypothesis about the origin of language and music. He posits that human communication began with a communication system he refers to as “Hmmmmm” because it had the following characteristics: it was Holistic, manipulative, multi-modal, musical and mimetic. Basically, Hmmmmm combined sound and gesture and action and imitation – somewhat like the pre-verbal/semi-verbal communication one sees in one-year-old children, but with more adult-level cognitive sophistication underlying it. His proposal is that Hmmmmm came first and then spawned both language and music, which evolved from Hmmmmm in their own different directions.

Cambridge Archeological Journal ran a fascinating feature presenting various criticisms of the hypothesis along with Mithen’s responses. 

An interesting and fairly difficult challenge would be to coax AI agents living in a virtual world – let’s say Minecraft enhanced with plugins, or the in-process Sophiaverse virtual world – to invent language and music along the lines of the Hmmmmm theory. This could be an interesting and valuable thing for AGI researchers to do regardless of how fully accurate Mithen’s theory of evolution is. 

We could stock the virtual world with a few easy-to-use musical instruments, let’s say –

  • drums that make rhythms when hit
  • flutes that they breathe into (modulating volume and timbre with breath) while pushing buttons to make notes
  • piano-type instruments that make notes when their keys are hit

One would then ultimately want these virtual-world proto-AGI agents – I like to think of them as “Neoterics” (new people) – to do things like:

  • Discover that dancing to music is pleasurable and creates a feeling of togetherness which fosters collective action and communication
  • Discover that drumming enthuses a group to carry out physical tasks together
  • Discover that listening to melodic music puts the mind in a state conducive to creativity

Given that the Neoterics’ emotion models will be similar to ours, yet different in the particulars, it may be that the music they create to express their own emotions and influence each others’ emotions will be significantly different from human music. Perhaps one could then train music LLMs on music made by Neoterics and get a fascinating sort of hybrid – a truly new genre of music!

Whether or not this Neoterics experiment ever gets done in precisely this form, it does highlight the big difference between an LLM approach and an AGI approach – to music or anything else. LLMs are munging and merging data patterns, and with cleverness one can work around the most immediate issues that emerge from this approach, issues such as tendencies to hallucinate or converge on clichés. AGI, however, requires a totally different approach.

Narrow AI systems like LLMs may be useful for feeding patterns into the core cognition of AGI systems, or for helping them speak fluently in the lingo of a given domain. But at core, AGI systems will necessarily be very different from LLMs – they will be what Weaver has called Open-Ended Intelligences – complex self-organizing systems that engage richly with their environments, driven by complementary-and-contradictory drives to individuation and self-transcendence. AGI systems will achieve generalization via abstraction, and the need to balance individuation and self-transcendence, while working with limited resources, will drive them to intelligent abstract understanding. When they generate language or make music, they will build it on this abstract understanding, formed from their experience, and as a result will be imaginative and truth-telling naturally – organically – rather than with an additional trick.

LLMs and other deep neural nets are going to have a big impact on society, disrupting the business models of today’s tech giants and potentially eliminating a significant percentage of human jobs. But their biggest contribution ultimately may be waking up the world to the potential of AI and thus attracting more cognitive, financial and cultural resources toward the development of AGI, which will have a far larger direct impact than even the coolest narrow AI systems.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter