05/17/2023

Fast, smart, but not quite there—why we’re not sold on AI for image creation

By Emily Zheng, Jane Dornemann

Image by Emily Zheng and DALL-E 2

When we shared what our writers learned from using generative AI tools like ChatGPT, our design team naturally decided to use generative AI to create the blog image. That led us down another rabbit hole around the pros and cons of integrating smart platforms into our design process—from choosing among the latest offerings, like Midjourney and DALL-E 2, to wrestling with their ethics.

As of now, here’s what we know about generative AI for image creation:

These things are freaking fast. When we say we’re wowed by the speed of generative AI, we don’t just mean it can whip up an image in mere seconds—we’re thinking about how quickly it gets our minds going. Working with technology companies means we need to generate images for a lot of abstract concepts versus physical items. How does one depict the Internet of Things (IoT) or access management?

Typically, if we’re really stuck, we might run a Google image search on these terms to get some inspiration. But now, we can enter those terms into a tool like DALL-E and it spits out visual representations. These get us thinking of more original design concepts in a fraction of the time, making these tools ideal for brainstorming sessions and mood board creation. Kind of like Google…but on steroids.
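To show how quick that first pass can be, here’s a minimal sketch of the idea, assuming the official openai Python package (v1 interface) and an OPENAI_API_KEY in the environment. The prompts, sizes, and counts are ours for illustration, not a recipe:

```python
# A minimal sketch, assuming the official openai Python package (v1+) and an
# OPENAI_API_KEY set in the environment. Prompts, sizes, and counts are
# illustrative, not our exact workflow.
from openai import OpenAI

client = OpenAI()

# Abstract concepts we often need to visualize for technology clients.
concepts = ["Internet of Things", "access management"]

for concept in concepts:
    response = client.images.generate(
        model="dall-e-2",
        prompt=f"abstract illustration of {concept}",
        n=2,                # a couple of starting points per concept
        size="512x512",
    )
    for image in response.data:
        print(concept, image.url)  # collect the URLs for a mood board
```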

They ignore a crucial part of our process. One thing the 2A design team treasures and sees as essential to producing a stellar product that aligns with a client’s ask is the feedback loop. No first-crack design, whether human-created or AI-generated, is going to be the final product. Design is a process—and this is where generative AI is of no help.

You can ask the AI to change a shade of blue to be darker or lighter, but that leaves a lot of room for the AI—not you—to choose. Sometimes you ask it to change just a few pixels and it ends up changing other aspects of the design you didn’t want. To really address feedback with our signature eagle-eye attention to detail, we would’ve had to import our AI-generated works into more traditional design software and edit them manually. Since DALL-E 2 only lets users download flat PNGs with no layers to work with, it’s hard to manipulate these images effectively. Not only does this defeat the fast, at-the-ready promise of AI, but it can ultimately take up more time. The limited 1:1 aspect ratio of DALL-E 2’s images also required us to continue our work in Outpainting, which extends the borders of artwork beyond its original frame. That extra step also ate up all our credits.
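To give a sense of the kind of manual workaround this pushes you toward, here’s a minimal Pillow sketch—not our exact workflow, and the file names and dimensions are hypothetical—that drops the square DALL-E 2 PNG onto a wider canvas so it can be finished in traditional design software:

```python
# A minimal sketch, not our exact workflow: place DALL-E 2's square PNG on a
# wider canvas with Pillow so the extra space can be filled in later, either
# by hand or with Outpainting. File names and dimensions are hypothetical.
from PIL import Image

square = Image.open("dalle_output.png")      # 1024 x 1024 PNG from DALL-E 2
canvas_w, canvas_h = 1820, 1024              # roughly 16:9 target frame

canvas = Image.new("RGB", (canvas_w, canvas_h), "white")
offset = ((canvas_w - square.width) // 2, (canvas_h - square.height) // 2)
canvas.paste(square, offset)                 # center the square artwork

canvas.save("dalle_output_wide.png")         # hand off for manual editing
```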

We must find the words. Having design vocabulary and training is extremely helpful in crafting prompts, because how you word a request will entirely determine what you get in return. Infusing design concepts into your prompt will not only get you something closer to what you want, but also help create a visual that is more distinct from what everyone else is getting.

For example, we found that Midjourney tends to generate images that have a similar underlying style. (To see for yourself, check out this Instagram account that generates AI images based solely on headlines from The New York Times.) The results that felt unique came from prompts that asked for inspiration from particular artists or that included design terminology. For our ChatGPT blog image, we asked DALL-E 2 to create an image of “women looking at computer” in Corporate Memphis style, but the results didn’t quite hit the mark. So we asked it to mimic the works of Magdalena Koźlicka, a Polish digital artist. While the result was neither Corporate Memphis nor that of our chosen artist, we liked what it gave us. Getting to the final product took more than 30 iterations.
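For illustration only, here’s roughly how those prompt variants line up side by side. The wording is ours, and each variant would go to DALL-E 2 as its own generation request:

```python
# Illustrative prompt variants: the point is how design terminology and artist
# references change what a text-to-image model returns for the same subject.
subject = "women looking at a computer"

prompts = [
    subject,                                                          # plain description
    f"{subject}, Corporate Memphis style, flat vector illustration",  # design terminology
    f"{subject}, inspired by the digital art of Magdalena Koźlicka",  # artist reference
]

for prompt in prompts:
    print(prompt)  # each variant is submitted as a separate generation request
```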

Here’s a peek at what we got throughout the process:

How do we issue credit? After all is said and done, who deserves the credit? As creatives, we want to honor the rights that other artists have to their own creations. But generative AI has resulted in a grey area where images have more than one creator. For the blog visual we published, we decided to credit both our designer and DALL-E to show that our designer used AI in her creative process. While DALL-E did most of the work, the final product would not exist without the designer’s carefully crafted prompts, edits in Outpainting, and overall creative direction (and none of it can be copyrighted).

But with AI pulling inspiration from existing art—and likely influenced by all the prompts that others submit—it’s clear this is an ethical question that doesn’t have an answer yet. And while this question may be new to AI-generated art, there are plenty of notable visual artists who conceive of a piece but don’t create it themselves, such as Sol LeWitt and Ai Weiwei, yet the credit is theirs alone.

To sum it up, generative AI can speed up the creative process, but that involves an element of luck in how on-point its image generation is. And sometimes that saved time is spent editing files that are challenging to manipulate. We see generative AI in design today much like what the introduction of the calculator must have been like: did mathematicians feel like they were cheating? Was it still their work if they had assistance from a machine? It’s true that generative AI has helped us do our jobs—but is it doing our jobs? That’s one question we can answer—and the answer is no. For now, at least.