The question of what it means to be human has perplexed scientists, philosophers, and scholars throughout history. While we don’t have the answers to this question, it’s a thought that arises when meditating on the progression of artificial intelligence.
Like all species, humans have changed radically in an infinitesimally short span of time, as has the technology that has enabled this evolution. In less than a million years, humans have evolved from hunting animals into sentient beings, a rapid transformation considering the age of the universe.
When speculating on the evolution of human intelligence, Dr. Carl Sagan once pondered, “What is the information content of the brain?” Unsurprisingly, there is much that connects humans with the rest of the animal kingdom, but where we differ, at least to our knowledge thus far, is in a number of key characteristics, among them the large areas of the brain dedicated to the fingers, especially the thumbs, and to the mouth.
“Our learning and our culture would never have developed without speech; our technology and our monuments would never have evolved without hands. In a way, the map of the motor cortex is an accurate portrait of our humanity.” Equally important are the small acts of “creative genius” that characterize our civilization and define us as a species, as Sagan emphasizes.
Creativity is often ascribed to artistic fields, but long ago, it was simply a method for survival. It was a creative act to carve weapons out of stone, just as it is to prompt an algorithm to recreate the letter A in the pastiche of a Post-Impressionist. Like the humans who’ve hardwired it, AI, too, has dramatically evolved in a short amount of time and, as a result, is hardwiring us. “The human is never simply human,” wrote Beatriz Colomina and Mark Wigley in their meditation on the archaeology of design. “Tens of thousands of different species are suspended within each human body and the body is itself suspended within a dense environment of countless species outside it. It is never clear where the human begins and ends.”
Much like the Industrial Revolution several centuries ago, AI is redefining civilization as we know it, and it’s being met with equal waves of hysteria and excitement. From faux-Drake songs to AI-generated images winning at the Sony World Photography Awards, the line between human and machine continues to blur, creating unpredictable and anxiety-inducing circumstances. So much so that a former Google engineer went on record to call AI the most “powerful tech created since the atomic bomb.”
Jesse Damiani is an American writer, curator, and AI expert, and like all of us, he is excited but somewhat uncertain about how all this will unfold. As a curator for Nxt Museum and the digital art gallery Vellum LA, Damiani has written extensively on this intersection of art and AI. He recently provided a thorough exposé on what he describes as the “Creative Singularity,” the inflection point where art and AI can no longer be separated.
In it, he alludes to Canadian philosopher Marshall McLuhan, who saw art as a “Distant Early Warning” system, which can “always be relied on to tell the old culture what is beginning to happen to it.” As such, “the contributions of artists and other creative professionals who have been researching, experimenting, and working with AI offer crucial signals for this new world,” noted Damiani.
To make sense of it all, Hypeart caught up with Damiani to discuss the intersection of art and AI, the potential risks brought on by the technology, along with the age-old question: ‘Will the robots really take over?’
“It is urgent that we establish new standards as we enter the era of generative engines.”
How long have you worked with AI?
Sci-fi has always been my favorite genre of fiction, so machine intelligence, consciousness, and sovereignty have been in my head since I was a kid. During my MFA in the early 2010s, I became obsessed with how algorithmic social media was changing culture, and my first book of poetry channeled these anxieties and curiosities. In 2016, I founded a startup with the goal of creating a writing and project management tool for immersive and interactive storytelling. At the time, I imagined a future for the tool where it would function as a generative thought partner and story collaborator.
Ultimately the startup didn’t work out, but through it I wound up at Boost VC, an accelerator for early-stage startups, where I was surrounded by companies across the spectrum of emerging technologies. That’s when I started to better understand what was brewing in machine learning, and it’s been part of my writing and curatorial practices since. It’s led me to work with rigorous artists and expansive minds, who have inspired my own thinking.
What has been the most inspiring piece you’ve worked on with AI? How about the most disturbing?
I’m currently working on a book called I Create Like the Word: Poetry in the Age of Machine Intelligence. It includes essays, interviews, and creative exercises with text and image generators, all of which are meant to examine the promise and peril of human-machine collaboration in poetry. It grew out of conversations I had with editor Daniel Lisi in 2018 and 2019 about spawned poetry, poems generated in the style of a given poet. The goal with I Create Like the Word is to situate the novel, exciting possibilities of machine intelligence as a collaborator, copilot, or sparring partner alongside the thornier areas where critical approaches will help ensure better outcomes for the artform and the poetry-reading public (however niche that may be).
As a general practice, I don’t knowingly work on or with projects I find ethically disturbing, but in terms of artistic efforts that occupy thought-provoking gray areas, the project I’d highlight is !Mediengruppe Bitnik’s Random Darknet Shopper. I wasn’t a creator of the piece, but I was lucky to curate it into PROOF OF ART. It was originally a “live Mail Art piece” in which the collective created a bot and gave it $100 worth of Bitcoin every week to make purchases on the darknet. They didn’t interfere with its choices, which ranged from cigarettes to Nike shoes to a counterfeit Louis Vuitton handbag. But where it got into really complicated territory was when it bought illicit drugs, which caused quite a stir with Swiss authorities. I love this project for many reasons, but particularly for its prescience in surfacing key accountability issues that working with machines provokes.
Painting to printing, photography to video, web to AI. Whether a skeptic or a believer, communication always evolves from one medium to the next. Do you see AI as a threat to the creative industry? Why or why not?
At face value, these tools should be exciting, particularly in how they make creative expression more accessible and facilitate new modes of communication. But the (stochastic) elephant in the room is that they’re also poised to displace swaths of the creative workforce. Under current conditions, I see AI accelerating existing problems in creative industries, namely the low value often associated with creative labor, which is why it’s critical for creative professionals to organize and advocate for better regulation while norms aren’t yet fixed. Anything that doesn’t require high-fidelity final outputs, or that is used within larger processes, is under threat: illustration, concept art, storyboards, graphic design, instrumentation, UX writing…the list goes on, and will grow. There will still be human hands managing, curating, and refining, but if new tools deliver outputs at even a quarter of the quality for 1% of the cost, unfortunately many employers will take those odds.
Traditional fine artists and others whose work involves conceptual approaches will be less impacted by these tools. Creating art isn’t just aesthetics; it involves artists developing an idiosyncratic relationship with reality and undercurrents in the zeitgeist, which then informs the art they create. We can assume that the technical aspects of many crafts will be automated and even refined by generative tools, but the embodied experience of the human artist—not to mention the simple fact that the public will mostly prefer to engage with human artists over machines—isn’t so easily automated.
While AI isn’t necessarily a new technology, contrary to popular belief, it is still, in many ways, in the ‘Wild Wild West’ phase in terms of regulation. Do you believe regulations should be put in place to protect artists, consumers, and intellectual property? What would your suggestions be?
Regulation is one of the most critical aspects of a functioning democracy, and the United States is sorely lacking in its regulation of technology. This has been the case for many years. Of course, these are extremely tricky arenas to regulate, often without precedents or clear pathways to follow. But it is urgent that we establish new standards as we enter the era of generative engines. I’m encouraged that organizations like Creative Commons and EQTY Lab are taking a leading role in this new context, and recent cases are setting the initial standards and norms for how intellectual property will function in the era of AI.
We’ve already witnessed over the past 15 years how the tech giants, locked in an arms race with each other, are largely incapable of self-regulating to mitigate the harmful aspects of their products. It’s one of these wicked systems-level problems where it’s impossible to hold any of those companies or individuals accountable—which is why sensible regulation is so critical. If properly established and evenly enforced, new policies could both level the playing field and give corporations more space to build safety into their offerings.
“AI tools are very much classic artistic materials.”
What are your recommendations in how artists can better work with AI, such as the type of prompts they use to assist their practice?
I’m fascinated by the latent space of neural networks—there are so many patterns lurking there. One great example I like to reference is Minne Atairu, whose Hair Studies and Blonde Braids series reveals the underlying biases and shortcomings of data and algorithms by prompting image generators to portray Black hairstyles. The outputs speak for themselves: in some cases, the generators are incapable of creating imagery of Black people with naturally blonde hair, such as Melanesians, instead producing figures with black braids and a sort of blonde wig, in the tone and texture of white people’s hair, affixed on top. This project is effective on multiple levels. It manages both to critique the way these models were created, raising questions about how data is gathered, cleaned, and analyzed, and to quite literally visualize cultural stereotypes and beauty standards. The creators of these models likely didn’t intend for the tools to be used this way, but I see so much potential for artists to approach them with similar goals in mind.
As a related point, there’s also a growing body of research around made-up language, probing what other relationships exist among different letters, syllables, words, and phrases in these models. Last year, researchers Giannis Daras and Alexandros G. Dimakis of the University of Texas at Austin claimed to have uncovered hidden phrases in what they called DALL-E 2’s secret language, such as “Apoploe vesrreaitais” (which purportedly means birds) and “Contarra ccetnxniams luryca tanniounons” (which purportedly means insects). This was a contested claim and I’m not endorsing it, but the moment speaks to the complex latent spaces housed in these models, the many different patterns they’ve deduced, and the possibility we have to use creative approaches to discover them.
Whether it’s used to reveal societal biases embodied in large datasets, as a probe to find emergent machine languages, or some other form of complex linguistic interaction, prompting is a new form of poetry—but we have to use it that way if we want these engines to return poetic outputs. Lots of trial and error is involved. In this way, AI tools are very much classic artistic materials; you have to put in the time to develop proficiency and ultimately a distinct aesthetic capability.
Friedrich Nietzsche once observed that maybe the “whole human race is only a temporally limited, developmental phase of a certain species of animal, so that man evolved from the ape and will evolve back to the ape again, while no one will be there to take any interest in this strange end of the comedy.”
Do you see AI as a legitimate threat, one in which this transition from humans to the next dominant species is beginning to take shape? Put more simply, will the robots indeed take over?
Nietzsche is a salient reference point in this discussion because of the role his writing plays in accelerationism, a blurry reactionary political schema that essentially holds that the only way to resolve what’s not working in late capitalism is to speed up the breakdown. Ted Chiang recently argued that AI as currently imagined only feeds into accelerationist ideologies, and though I remain excited about many of the new creative capabilities available to us now, I find his argument compelling.
As I’ve said, this has less to do with the technology itself than it does the socio-political and economic contexts it’s entering. Will the robots take over in Terminator or Matrix-like fashion? Feels farfetched. But the near-term risk that feels most palpable is that these tools will accelerate the worst of late capitalism, displacing and disempowering many people in service of concentrating wealth in the hands of a select few.
This is not the future most of us want. It’s why focusing on regulation and organizing is so important right now. We won’t have a do-over moment to build a strong foundation for a human-machine reality, one that both ensures human wellbeing in the AI era and encourages the promise of these technologies.
All artwork courtesy of Jesse Damiani via Midjourney.