In late October 2018, a distinctly odd painting appeared at the fine art auction house Christie’s. At a distance, the painting looks like a 19th-century portrait of an austere gentleman dressed in black. Contained in a gilt frame, the portly gentleman appears middle-aged; his white collar suggests that he is a man of the church. The painting seems unassuming, the kind of thing expected at an auction house that sells billions of dollars of art each year.
However, upon closer inspection, things get a bit odd. The work appears to be unfinished. The facial features are indistinct, as if the subject had been captured in motion, and the whole composition is displaced toward the top left. The painting is soft but surreal.
If you were at the auction house, you would read that the painting is part of a series of portraits of the Belamy family. The aforementioned piece is of Edmond de Belamy. But, who is Edmond de Belamy? Some famous family head? A renowned preacher? Someone of great wealth? Well, Edmond de Belamy does not exist.
The answer to our mystery can be found at the bottom right of the portrait, where an artist’s signature would normally go. Instead of a name in cursive script, there is a line of mathematics. It reads: min G max D Ex[log(D(x))] + Ez[log(1 − D(G(z)))].
Our painter is a machine: an intelligent machine. Though initial estimates had the portrait selling for under $10,000, it went on to fetch an incredible $432,500. The portrait was not created by an inspired human mind but was generated by artificial intelligence, in the form of a Generative Adversarial Network, or GAN. That’s right; machines are starting to take over the art world.
Artificial Intelligence able to comprehend and create art would represent a big step in developing intelligent machines
The AI “painter” was engineered by the Paris-based collective Obvious. They fed their GAN (more on this later) a data set of 15,000 portraits painted between the 14th and 20th centuries. Their algorithm analyzed the human-made images and proceeded to create its own art based on what it had learned from the thousands of portraits.
AI art is nothing new. More than 150 years ago, the mathematician Ada Lovelace dreamt of a computer able to compose music. Indeed, the rise of intelligent machines now seems imminent. AI is growing more prevalent, helping to analyze and categorize data and solve problems across a wide range of fields. It is also moving into the creative world, where it is being used to develop music, paintings, and poetry.
Aside from its potential financial value, the research driving this field could take AI further than we’d previously thought possible. Artificial Intelligence that is able to create art indistinguishable from that created by humans could represent a big step in building machines that can think more like humans. After all, what is more human than making art?
However, commercial projects like the Edmond de Belamy portrait and similar experiments in computational creativity have sparked debates among engineers, artists, philosophers, and concerned citizens.
Can artificial intelligence really create art?
So, how would you go about teaching a computer algorithm to draw a dog? Like a child drawing a dog for the first time, it could start by studying various images of dogs to get a general idea of what a dog looks like and which features make one up. By creating its own image of a dog and comparing it against the images in the data set, the algorithm would, over time, “learn” how to create a puppy portrait. This process of getting a machine to learn from past data without being explicitly reprogrammed is the essence of machine learning.
This process uses what is known as a neural network, or a series of algorithms designed to recognize underlying relationships in a set of data through a process that mimics the way the human brain operates. As new data is added, the neural network can adapt to generate the best possible result without needing to redesign the output criteria.
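To make the idea of “learning from data” concrete, here is a deliberately tiny sketch: a single artificial neuron (one weight and one bias, the smallest possible “network”) that fits the hypothetical rule y = 2x + 1 purely by nudging its parameters against example data. Real neural networks stack many such units with nonlinearities, but the adapt-from-feedback loop is the same.

```python
def train_neuron(data, lr=0.05, epochs=200):
    """Fit y = w*x + b to (x, y) pairs by gradient descent on squared error."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = w * x + b
            err = pred - y        # gradient of 0.5*(pred - y)^2 w.r.t. pred
            w -= lr * err * x     # chain rule: d(pred)/dw = x
            b -= lr * err        # d(pred)/db = 1
    return w, b

# "training data" drawn from the rule y = 2x + 1
data = [(x / 10, 2 * (x / 10) + 1) for x in range(-10, 11)]
w, b = train_neuron(data)
print(round(w, 2), round(b, 2))   # close to 2.0 and 1.0
```

The neuron is never told the rule; it recovers it from examples, which is exactly the behavior the paragraph above describes.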
There are many different types of machine learning techniques and architectures used by researchers. However, when creating art, one commonly used technique is called Generative Adversarial Networks (GAN). For the sake of simplicity, we will primarily focus on GANs in this article.
What is a Generative Adversarial Network?
Originally developed by Ian Goodfellow and colleagues and set out in a 2014 paper, GANs are a type of machine learning technique that uses two neural networks, pitting one against the other in order to generate an output that can pass for real data. GANs can be effectively used to create art, among other things. But how do they work?
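The “pitting one against the other” idea has a precise form. Goodfellow’s paper frames the two networks as players in a minimax game over a single value, sketched here in standard notation (D is the discriminator, G the generator, z random noise):

```latex
\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```

D is rewarded for assigning high probability to real samples and low probability to G’s fakes, while G is rewarded for fooling D.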
Simply put, the two neural networks are called a Generator and Discriminator. Say we want to train our model to create its own 19th-century portrait of a dog. For the sake of our example, think of the Generator as an art forger and the Discriminator as an art authenticator. We will first need to show the Generator thousands of paintings of dogs in various sizes and breeds, so that it can learn what different elements can make up a dog.
The Generator uses the information in the data set to create a painting of a dog. The Discriminator will then try to spot the difference between the synthetic painting and the human-created ones from the data set. When the Discriminator spots the “fake,” the Generator “learns” how its attempt failed and tries again. In the beginning, the Generator makes many paintings that do not look dog-like enough to fool the Discriminator.
However, the Generator also learns from the Discriminator’s constant feedback. Eventually, it creates dog portraits that look more and more like dogs, until it can eventually fool the Discriminator into thinking that the new images are real-life portraits. The end result is our art.
One of the most extraordinary things about GANs is that you can take the underlying architecture and train the model on any data set that you want
This method can be used not just for art, but also for voices, text, and even faces. The viral website This Person Does Not Exist creates eerily realistic human faces using Generative Adversarial Networks, creating, as the name implies, faces of humans that do not exist, but which are almost indistinguishable from those that do. Factors like the size of the dataset, the underlying features of the data, the time you spend training your model, and the type of GAN model all affect the final output. By using different types of datasets, you can create something hyperrealistic like the images on the Does Not Exist website, or something dreamy and abstract like the Edmond de Belamy portrait.
There are various methods to this machine madness
Artists, researchers, and data scientists are exploiting the power of Generative Adversarial Networks to create artistic masterpieces. One of the most prominent figures in this growing artistic field is the New Zealand artist and lecturer in computational design, Tom White. His artwork investigates “the Algorithmic Gaze: how machines see, know, and articulate the world.” White has collaborated with AI systems to create art which depicts the world not as humans see it, but as algorithms do.
To humans, the pictures White’s algorithms create look like random arrangements of lines and blobs. But to the algorithms, they can be identified as specific objects: a shark, binoculars, a lawnmower. The images are created by choosing an object and then having a drawing system generate some abstract lines. This image is fed into a machine vision classifier, which tries to guess what the chosen object might be. Based on the guess, the drawing system then tweaks the image and feeds it through again. The process continues until the classifier guesses correctly.
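The tweak-and-retry loop behind White’s drawings can be caricatured as a simple hill climb. In this sketch (all names hypothetical), the “drawing” is a short list of stroke parameters and the “classifier” is a stand-in scoring function that measures how close the drawing is to a target feature vector; White’s real system uses actual machine-vision classifiers and a stroke-based renderer, but the keep-the-change-if-the-classifier-likes-it structure is the same.

```python
import random

def classifier_score(drawing, target):
    """Stand-in for a machine-vision classifier: confidence (0..1] that
    `drawing` depicts the target 'object' (a feature vector here)."""
    dist = sum((d - t) ** 2 for d, t in zip(drawing, target)) ** 0.5
    return 1.0 / (1.0 + dist)   # 1.0 means a perfect match

def draw_until_recognized(target, threshold=0.9, seed=0):
    rng = random.Random(seed)
    drawing = [rng.uniform(-1, 1) for _ in target]   # random abstract strokes
    score = classifier_score(drawing, target)
    while score < threshold:
        # tweak one stroke parameter; keep the change only if the
        # classifier becomes more confident about the target object
        i = rng.randrange(len(drawing))
        candidate = drawing[:]
        candidate[i] += rng.uniform(-0.2, 0.2)
        new_score = classifier_score(candidate, target)
        if new_score > score:
            drawing, score = candidate, new_score
    return drawing, score

target = [0.3, -0.7, 0.5]        # the "object" the classifier knows
drawing, score = draw_until_recognized(target)
print(score >= 0.9)
```

Nothing in the loop knows what the object “looks like” to a human; the image only has to satisfy the classifier, which is why the real results read as abstract blobs to us.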
However, much like the works of Kandinsky, Picasso, and Miró, these are abstract paintings. They are not a human’s idea of what the object looks like, but a machine’s idea — they represent how the algorithm “sees” the world. And this, it turns out, is very different from how a human sees the world.
White’s work is just the tip of the iceberg. The German artist Mario Klingemann has developed neural networks that produce dreamlike, antique-looking portraits that evolve and “come to life” in real time. Another AI artist is Google’s former artist in residence, Sougwen Chung, who has created a system that draws alongside her to make stunning duet paintings.
Outside the realms of painting, researchers have also managed to train AI to write poetry. In a paper published by the University of Toronto and IBM, the researchers describe how they used 3,000 sonnets to train an algorithm to write its own Shakespearean-style sonnets. This is one end result:
“With joyous gambols gay and still array
No longer when he was, while in his day
At first to pass in all delightful ways
Around him, charming and of all his days”
Not bad for a machine, right? But what if music is your preferred form of expression? AI can do that too. In the spring of 2019, classical music enthusiasts gathered for an unusual event. The musical experience featured music composed by Bach and by artificial intelligence. Audience members were tasked with deciphering which composition was created by the human and which by the machine.
Using a form of Generative Adversarial Network, the researchers behind the project were able to train an AI to compose music that sounded as if Bach himself had come back to life. Even the project’s lead, Marcus du Sautoy, an Oxford mathematician, struggled to tell the two compositions apart.
But, is AI truly creating art?
Can artificial intelligence be creative? This question lies at the heart of the research driving this field. Let’s go back to our painting examples. Who is truly the author behind the paintings: the algorithm itself, or the person behind it? For many, it is the latter. We humans like to think that our creativity makes us unique, that it separates us from animals and machines. However, others argue that the vastness of human creativity boils down to a complex process of problem-solving.
Can a machine be taught to mimic the creative process? Professor Marcus du Sautoy, the author of The Creativity Code, doesn’t necessarily think so. If anything, Professor du Sautoy believes we are asking the wrong question. Instead of thinking about AI as replacing human creativity, it’s more productive to examine ways that AI can be used as a tool to augment human creativity.
In the examples here, AI has been used to explore new perspectives on existing mediums. The machines we have seen so far may not be truly creative, as they still rely on humans for their initial data and parameters, their “sources of inspiration.” In practice, this new creative process is collaborative rather than adversarial.
So rather than saying “this art was created by AI,” it would be more accurate to say “this painting was created with the help of AI.” It is just not as catchy.
Even if machines grow more intelligent, reaching some form of general intelligence, Professor du Sautoy believes their role in creating art will remain collaborative: humans and machines exploring entirely new creative realms that neither would likely reach working alone.
You can create your own AI art
Perhaps you could even sell one as an NFT? A wide range of tools allow those with little to no background in programming or machine learning to create their own art. Tools like GANBreeder let you take two images and create a new one using various GAN models and datasets. If you want a little more control without coding, check out the partly free tool Runway ML. You can import massive data sets and synthesize everything from galaxies that do not exist to your own Pokémon.
It is machine magic.