We’ll begin with the visual arts. Money talks, and in the art world, paintings made by AI programs are starting to make good money.
The Portrait of Edmond de Belamy is a somewhat blurry and unremarkable painting created in 2018 by the French art collective Obvious. The image was cobbled together using some open-source code downloaded from GitHub. The auction house Christie’s in New York valued the painting at between US$7,000 and US$10,000 and advertised it as the first art piece created by AI that they had ever auctioned. When the painting went under the hammer, it beat the estimate many times over, selling for a remarkable US$432,500.
Not to be outdone, Sotheby’s in London sold an AI art piece in 2019. In this case, it was not the AI-generated art but the AI itself that was up for auction. German artist Mario Klingemann built an AI system called Memories of Passersby I, which continuously generates new but fake portraits. He encased the hardware in a specially made but suitably retro cabinet.
The AI was trained on a public-domain database of portraits from the 17th to the 19th centuries painted by Western artists. Due to a complex feedback loop between one picture and the next, Klingemann has claimed that the program will never repeat itself. The art it produces is thus poignantly ephemeral, appearing slowly before whoever is watching and then disappearing forever.
The piece sold for £40,000 (US$51,012). This was at the top of Sotheby’s estimate.
These recent headline-making sales disguise over half a century of AI painting. It’s a history that starts with Harold Cohen, a pioneer in computer art. Cohen was a graduate of London’s famous Slade School of Fine Art and a contemporary of British painters such as David Hockney. After a successful career as an artist – Cohen represented Great Britain at major art events such as the Venice Biennale and the Biennale de Paris – he became tired of the contemporary art scene, and at the start of the 1970s took up programming.
He used his computer and artistic skills to develop AARON, an AI program that could paint and that would become his focus for the rest of his life. AARON’s paintings have been exhibited around the world, in galleries including the San Francisco Museum of Modern Art and London’s Tate Modern, and are held by museums such as the Victoria and Albert Museum in London, and the Stedelijk Museum in Amsterdam.
Like any artist’s, AARON’s style evolved over the 45 years until Cohen’s death in 2016. AARON moved from the sort of abstract art that Cohen himself had painted to more representational painting. Eventually, AARON’s artistic journey turned full circle, returning to rich and colourful forms of abstraction.
Cohen exhibited AARON’s art at the University of California at San Diego in 2011 under the title “Collaborations with My Other Self”. Asked if AARON was creative, Cohen observed that it was not as creative as he had been in creating the program. And when asked if he or AARON was the artist, Cohen would compare his relationship with AARON to that between a Renaissance master and his assistants.
As to whether an AI system like AARON could make genuine art, Cohen left the question unanswered, and in some ways unanswerable:
If [philosophers like Hubert] Dreyfus, [John] Searle, [Roger] Penrose, whoever, believe that art is something only human beings can make, then for them, obviously, what AARON makes cannot be art. That is nice and tidy, but it sidesteps a question that cannot be answered with a simple binary: it is art or it is not.
AARON exists; it generates objects that hold their own more than adequately, in human terms, in any gathering of similar, but human-produced, objects, and it does so with a stylistic consistency that reveals an identity as clearly as any human artist’s does. It does these things, moreover, without my own intervention. I do not believe that AARON constitutes an existence proof of the power of machines to think, or to be creative, or to be self-aware: or to display any of those attributes coined specifically to explain something about ourselves. It constitutes an existence proof of the power of machines to do some of the things we had assumed required thought, and which we still suppose would require thought – and creativity, and self-awareness – of a human being.
If what AARON is making is not art, what is it exactly, and in what ways, other than its origin, does it differ from the “real thing”? If it is not thinking, what exactly is it doing?
I doubt that I can give you a better answer than this.
But I can at least show you some examples of AI-generated artwork. A number of neural networks have been created to turn text prompts into images – well-known examples include DALL-E and Stable Diffusion. The technology is similar to large language models such as GPT-3, but instead of being trained only on text, it is trained on many millions of images and text captions scraped from the internet.
Where better to start than with some cats? The internet was, after all, invented to propagate cute pictures of cats. I asked one of this new generation of text-to-image AI tools to paint some cats in the style of a famous painter. All you need do is type a prompt for what you’d like the image to show. You might try, as I did, “Cat in the style of a Picasso cubism painting”, “Cat in the style of Van Gogh’s Starry Night”, and “Cat in the style of Rembrandt”. Here are the results:
You might want to have a go yourself. You can try out the Stable Diffusion text-to-image model for free. Note that even if you use the same prompts as me, you won’t get exactly the same images. Stable Diffusion produces different images each time it is run. There’s a certain randomness about its output – perhaps this is to be expected from any tool helping you to be creative.
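If you are comfortable with a little Python, the same experiment can be run on your own machine. The sketch below is only illustrative: it assumes the open-source Hugging Face diffusers library and a publicly released Stable Diffusion checkpoint (here “runwayml/stable-diffusion-v1-5”, an assumption on my part rather than anything specified in the text). It also shows the randomness mentioned above: each call starts from fresh noise, so you only get the same picture back if you fix the random seed.

# Minimal sketch, assuming the Hugging Face "diffusers" library is installed
# (pip install diffusers transformers torch) and a GPU or CPU is available.
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained Stable Diffusion pipeline (assumed checkpoint name;
# the weights are downloaded on first use).
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")

prompt = "Cat in the style of Rembrandt"

# Each call begins from fresh random noise, so repeated runs give different images.
image = pipe(prompt).images[0]
image.save("rembrandt_cat.png")

# Fixing the random seed makes the output reproducible from run to run.
generator = torch.Generator(device=pipe.device).manual_seed(42)
same_image = pipe(prompt, generator=generator).images[0]
same_image.save("rembrandt_cat_seed42.png")

Even with an identical prompt and seed, results can still differ between library versions and hardware, which is part of why no two people trying this tend to end up with quite the same cats.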
I’m not going to describe the three images produced by these prompts as art. But the images are undoubtedly a better pastiche of Picasso, Van Gogh and Rembrandt than I could possibly paint.
Even if we put aside the artistic merit of these pictures, there’s a host of other important questions raised by tools like Stable Diffusion and DALL-E. Was appropriate consent obtained from the many (human) artists on whose artworks these AI systems were trained? Can these AI-generated images be copyrighted? Equally, have we violated the copyright of any human artists? Would we even know, given the billions of images on which the systems have been trained?
There’s big money to be made with these AI painting tools. Stable Diffusion cost around US$600,000 to train. Stability AI, the London-based company behind the tool, raised US$101 million of venture capital after releasing Stable Diffusion to the public. This valued the company at over US$1 billion, making it Europe’s newest “unicorn”.
Despite this incredible valuation, Stability AI has fewer than 50 employees and – as far as I can tell – little or no revenue. I suspect very little of the money generated by Stable Diffusion will flow back to the many human (and mostly poor) artists whose art was used to train it. How can this be sustainable? How can this be right?
Excerpted with permission from Faking It: Artificial Intelligence In a Human World, Toby Walsh, Speaking Tiger Books.