It’s brushes at dawn as artists feel the pressure of AI-generated art – TechCrunch
If you have been anywhere near the interwebs lately, you will have heard of DALL-E and MidJourney. The kinds of art that neural networks can generate – along with a deeper understanding of the technology’s strengths and weaknesses – mean we face a whole new world of hurt. Often the butt of tired jokes (How do you get an artist’s attention? Shout “Hey, waiter!”), computer-generated art is another punch in the “they took our jobs” narrative of man against machine.
What’s interesting to me is that robots and machines taking certain jobs has been reluctantly accepted, because those jobs are repetitive, boring, dangerous, or just plain horrible. Car-body welding machines do a much better, faster, and safer job than humans ever could. Art, however, is another matter.
In the recent movie “Elvis,” Baz Luhrmann puts a quote in Colonel Tom Parker’s mouth, saying a great act “gives the audience feelings they weren’t sure they should enjoy.” To me, that’s one of the greatest quotes I’ve heard about art in a while.
Commercial art is not new; whether you think of Pixar movies, music, or the prints that come with Ikea picture frames, art has been peddled on a grand scale for a long time. But what almost all of it has in common is that it was created by humans who had some sort of creative vision.
The image at the top of this article was generated using MidJourney; I fed the algorithm a slightly ridiculous prompt: A man dances like Prozac is a cloud of laughter. As someone who has had a lifetime of mental health issues, including somewhat severe depression and anxiety, I was curious what a machine might come up with. And, my God – none of these generated graphics are something I would have conceptually imagined myself. But, I’m not going to lie, they did something to me. I feel more graphically represented by these machine-generated artworks than by almost anything else I’ve seen. And the wild thing is, I did that. These illustrations were not drawn or conceptualized by me – all I did was type a weird prompt into Discord – but these images wouldn’t have existed if not for my wacky idea. Not only did the algorithm come up with the image at the top of this article, it spat out four completely different – and oddly perfect – illustrations of a hard-to-grasp concept:
It’s hard to put into words exactly what this means for concept illustrators around the world. When someone can, with a single click, generate artwork of anything, emulating any style, in minutes – what does it mean to be an artist?
Over the past week, I may have gone a bit too far, generating hundreds and hundreds of images of Batman. Why Batman? I have no idea, but I wanted a theme to help me compare the different styles that MidJourney is capable of creating. If you really want to go down the rabbit hole, check out Dark Knight Rises AI on Twitter, where I share some of the best generated pieces I’ve come across. There are hundreds and hundreds of candidates, but here is a selection showing the breadth of styles available:
Generating all of the above – and hundreds more – ran into only three bottlenecks: the amount of money I was willing to spend on my MidJourney subscription, the depth of creativity I could muster for prompts, and the fact that I could only generate 10 designs at a time.
Now, I have a visual mind, but there isn’t an artistic bone in my body. It turns out I don’t need one. I come up with a prompt – for example, Batman and Dwight Schrute are in a fight – and the algo spits out four versions of something. From there, I can rerun the prompt (i.e., generate four new images from the same prompt), render a high-res version of one of the images, or iterate on one of the versions.
The only real flaw with the algorithm is that it favors a “take what you get” approach. Of course, you can write much more detailed prompts to gain more control over the final image – what goes into it, the styling, and other settings. If you’re a visual director like me, the algorithm is often frustrating, because my creative vision is hard to capture in words, and even harder for the AI to interpret and render. But what’s scary (for artists) and exciting (for non-artists) is that we’re at the very beginning of this technology, and we’re going to get a lot more control over how images are generated.
For example, I tried the following prompt: Batman (left) and Dwight Schrute (right) have a fist fight in a parking lot in Scranton, Pennsylvania. Dramatic lighting. Photo realistic. Monochrome. High detail. If I had given this prompt to a human, I’d expect them to tell me to get lost for talking to them like a machine, but if they did create a drawing, I suspect a human would be able to interpret that prompt in a way that makes conceptual sense. I did a whole bunch of testing, but there weren’t many illustrations that made me think “yes, that’s what I was looking for.”
What about copyright?
There’s another interesting quirk here: many styles are recognizable, and some faces are, too. Take this one, for example, where I invited the AI to imagine Batman as Hugh Laurie. I don’t know about you, but I’m very impressed; it has Batman’s style, and Laurie is recognizable in the design. What I have no way of knowing, however, is whether the AI basically ripped off another artist, and I wouldn’t want to be MidJourney or TechCrunch in a courtroom trying to explain how that went horribly wrong.
This kind of problem arises in the art world more often than you might think. One example is the Shepard Fairey case, in which the artist allegedly based his famous Barack Obama “Hope” poster on a photograph by freelance AP photographer Mannie Garcia. The whole thing became a fantastic mess, especially when a bunch of other artists started creating art in the same style. Now we have a layered plagiarism sandwich, where Fairey both allegedly plagiarized someone else and was plagiarized himself. And, of course, it’s possible to generate AI art in the style of Fairey, which complicates things further still. I couldn’t resist taking a look: Batman in Shepard Fairey style with the text HOPE at the bottom.
Kyle has many more thoughts on the legal future of this technology:
So what about artists?
I think the most frightening thing about this evolution is how quickly we have moved from a world where creative feats such as photography, painting and writing were squarely out of reach of machines, to a world where that is no longer quite so true. And, as with all technology, there will soon come a time when you can no longer trust your own eyes or ears; machines will learn and evolve at breakneck speed.
Of course, it’s not all doom and gloom; if I were a graphic artist, I would start using these latest-generation tools for inspiration. Countless times I’ve been surprised by how good something is and then thought, “but I wish it was a little more [insert creative vision here]” – if I had the graphic design skills, I could take what I have and turn it into something closer to my vision.
It might not be as common in the art world, but in product design, these technologies have been around for a long time. For circuit boards, machines have been generating first-pass trace layouts for many years – often to be tweaked by engineers, of course. The same goes for product design; five years ago, Autodesk showed off its generative design prowess:
It’s a brave new world for every job (including mine – I had an AI write most of a TechCrunch story last year), as neural networks get smarter and smarter and gain increasingly comprehensive datasets to work with.
Let me wrap up with this extremely disturbing image, where several of the people the AI placed in the image are recognizable to me and other TechCrunch staff:
MidJourney images used in this article are all licensed under a Creative Commons Attribution-NonCommercial License. Used with explicit permission from the MidJourney team.