Artist uses AI to extract color palettes from text descriptions

A series of four example color palettes extracted from short written prompts by Matt DesLauriers.

A London-based artist named Matt DesLauriers has developed a tool that generates color palettes from any text prompt, allowing someone to type in “beautiful sunset,” for example, and get a series of colors that matches a typical sunset scene. Or you could get more abstract, finding colors that match “a sad and rainy Tuesday.”

To achieve the effect, DesLauriers uses Stable Diffusion, an open source image synthesis model, to generate an image that matches the text prompt. Next, a JavaScript GIF encoder named gifenc extracts the palette information by analyzing the image and quantizing its colors down to a small representative set.
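The quantization step works by repeatedly splitting the image's pixels into groups of similar colors and averaging each group. The article doesn't reproduce gifenc's internals, but the classic median-cut algorithm it's based on can be sketched in a few lines of Python (a hypothetical illustration, not DesLauriers' actual code):

```python
# Minimal median-cut color quantization: reduce a set of RGB pixels
# to n_colors representative palette entries. Illustrative sketch only;
# gifenc's real implementation is more sophisticated.

def median_cut(pixels, n_colors):
    """Reduce a list of (r, g, b) tuples to n_colors palette colors."""
    buckets = [list(pixels)]
    while len(buckets) < n_colors:
        # Pick the bucket whose widest channel range is largest.
        bucket = max(buckets, key=lambda b: max(
            max(p[c] for p in b) - min(p[c] for p in b) for c in range(3)))
        if len(bucket) < 2:
            break  # nothing left to split
        # Split it at the median of its widest channel.
        channel = max(range(3), key=lambda c:
                      max(p[c] for p in bucket) - min(p[c] for p in bucket))
        bucket.sort(key=lambda p: p[channel])
        mid = len(bucket) // 2
        buckets.remove(bucket)
        buckets += [bucket[:mid], bucket[mid:]]
    # Average each bucket to produce one palette color per bucket.
    return [tuple(sum(p[c] for p in b) // len(b) for c in range(3))
            for b in buckets]
```

Fed the pixels of a generated “beautiful sunset” image, a routine like this would return a handful of averaged oranges, pinks, and purples — the palette the tool displays.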

DesLauriers has posted his code on GitHub; it requires a local Stable Diffusion installation and Node.js. It’s a bleeding-edge prototype at the moment that requires some technical skill to set up, but it’s also a noteworthy example of the unexpected graphical innovations that can come from open source releases of powerful image synthesis models. Stable Diffusion, which went open source on August 22, generates images from a neural network that has been trained on tens of millions of images pulled from the Internet. Its ability to draw from a wide range of visual influences translates well to extracting color palette information.

Other palette examples DesLauriers provided include “Tokyo neon,” which suggests colors from a vibrant Japanese cityscape, “living coral,” which echoes a coral reef with deep pinks and blues, and “green garden, blue sky,” which suggests a saturated pastoral scene. In a tweet earlier today, DesLauriers demonstrated how different quantization methods (reducing the vast number of colors in an image down to just a handful that represent the image) could produce different color palettes.
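Why would different quantization methods disagree? Each makes a different trade-off about which colors matter. A toy comparison in Python (hypothetical method names, chosen here for illustration) shows how a popularity-based method and an averaging method can produce different palettes from the same pixels:

```python
from collections import Counter

def popularity_palette(pixels, n):
    # Keep the n most frequent exact colors; rare colors vanish entirely.
    return [color for color, _ in Counter(pixels).most_common(n)]

def averaged_palette(pixels, n):
    # Partition the sorted pixels into n equal cells and average each one;
    # rare colors survive by pulling the cell averages toward themselves.
    pixels = sorted(pixels)
    size = -(-len(pixels) // n)  # ceiling division
    cells = [pixels[i:i + size] for i in range(0, len(pixels), size)]
    return [tuple(sum(p[c] for p in cell) // len(cell) for c in range(3))
            for cell in cells]

# Mostly red and green pixels, with a couple of blue outliers:
pixels = [(200, 50, 50)] * 9 + [(50, 200, 50)] * 9 + [(50, 50, 200)] * 2
print(popularity_palette(pixels, 2))  # blue is dropped entirely
print(averaged_palette(pixels, 2))    # blue tints the averaged colors
```

The popularity method discards the rare blue pixels outright, while the averaging method blends them into its palette entries — which is why swapping quantizers changes the palette a prompt produces.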

Different color quantization modules can produce different color palettes from the same image.

It’s not the first time an artist has used AI to extract color palettes from text. In May, an artist named dribnet published a generative art series called “Homage to the Pixel,” inspired by Josef Albers. He simultaneously released an online tool that anyone can use to produce a six-color palette based on text inputs.

Why use AI to find color palettes? Aside from the novelty factor, you could potentially extract matching colors from unconventional sources or abstract feelings like “the day after my last day in high school,” “the discarded wrapper on a fast food burger,” or “Star Wars and Lord of the Rings mash-up.”

The ability to extract color palettes from written prompts seems like something that popular art tools might duplicate in the future since picking groups of colors that go together well can be notoriously difficult. Many more unexpected applications of image synthesis models are likely on the way.


Biz & IT – Ars Technica

Author: Benj Edwards