No, AI Image Generators are not a Threat to the Arts

Over the past few weeks, I’ve read several somewhat negative (somewhat fearful) posts regarding the new “AI image generator” MidJourney. For those who may not know what this is: MidJourney is one of many impressive “AI art generators” that “create” a specific type of artistic output (visual, audio, or otherwise) through what some describe as a “machine learning process.” That is, colloquially, a machine has “learned” some information and used it to generate something new. (Note: humans collected the data and wrote the instructions for the machine to use, but the process of generating the output is left to the machine.)

Users interact with generators like MidJourney by entering some text and watching as the generator produces what appears to be a fascinating pictorial “interpretation” of the description. Knowing this, you might not be surprised to learn that creating such output from text input became a popular activity last year following the release of OpenAI’s CLIP (Contrastive Language–Image Pre-training), which was designed to evaluate how well images align with text descriptions. After its release, people quickly realized the process could be reversed: by providing text input, you could get image output with the help of other AI models, with CLIP scoring the candidates a generative model produces.
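As a rough, illustrative sketch (not CLIP’s actual API), the scoring step at the heart of this can be pictured as cosine similarity between embedding vectors. The embeddings below are made up for illustration; a real system would produce them with CLIP’s image and text encoders, and a guided generator would nudge its output toward higher-scoring candidates:

```python
import numpy as np

def cosine_similarity(a, b):
    # CLIP-style models rank image/text pairs by the cosine
    # similarity of their embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings standing in for real encoder outputs
text_embedding = np.array([0.9, 0.1, 0.2])
candidate_images = {
    "image_a": np.array([0.88, 0.12, 0.25]),  # close to the text embedding
    "image_b": np.array([0.10, 0.90, 0.30]),  # far from the text embedding
}

scores = {name: cosine_similarity(text_embedding, emb)
          for name, emb in candidate_images.items()}
best = max(scores, key=scores.get)
print(best)  # prints "image_a"
```

Run in a loop, with the generator adjusting its output to raise that score, this is (very roughly) how “text in, image out” emerged from a model that was only ever trained to judge matches.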

So that’s what it is, and as you can see from the images I’ve included (created with MidJourney and DALL-E 2), the results are really impressive! The question, however, is whether the many posts arguing that such technology is a threat to art and artists are actually valid.

Absolutely not. And here’s why:

The human art experience is a highly complex process that goes far beyond the inherent, objective properties of a product. In fact, just like experiences of pain and color, while environmental elements may initiate processes of experience, what we understand as “art” truly takes place in the mind. We can point to a painting or sculpture and say, “that’s art,” but in reality, it’s akin to pointing at an apple and saying, “that’s red.” If this were not true, we would not be able to perceive any difference between what we understood to be a photograph and a hand-painted photorealistic image (even if the content appeared identical). Humans are ABSOLUTELY required for what we currently understand or experience as “art.” (I recommend reading something like Dutton’s Art Instinct to get an idea of just how complex the experience of “art” is and how it evolved over our phylogenetic lifetime.) In short: you should not fear machines replacing humans in the realm of art any more than you fear squares replacing circles.

Next, to be as clear as possible (as much as I can without broaching the Sisyphean task of defining intelligence): the tech available today is not true “AI” in the sense of a conscious entity passing some Turing test. That is still many decades away. There is no agency or creative intention (as we understand it) in play from the generator. There may, at some point, be an artificial consciousness that can create something like “art,” but given how our own art experience evolved over our phylogenetic lifetime, there is no real reason to think it would even “want” to.

The real problems with these new technologies lie in the legal arena. For example: “In terms of Midjourney output, current US jurisprudence denies the possibility of granting copyright to AI-generated images. In February, the US Copyright Office Review Board rejected a second request to grant copyright to a computer-generated landscape titled “A Recent Entrance to Paradise” because it was created without human authorship.” It will be interesting to see how the legal landscape unfolds for images generated with AI generators and related future tech.

So yes, innovations and new technologies will always pour forth. Use them if you wish!!! Have fun! Explore, experiment, and create—let these new technologies be an incredible new tool for you to grow your ideas, expand your options, and navigate new landscapes of possibilities. And yes, new technologies will always impact the marketplace in some way—both positive AND negative. But humans adapt…that’s what we do.

So in summation, 5G is not a mind-control project, the earth is absolutely not flat, and Skynet is not coming for your paintbrushes.

(Images shown were created with MidJourney and DALL-E 2.) Please share this with someone who may need to read it. :heart:


I would be curious whether that copyright could be achieved if a painter took that AI-generated image and painted it.


DALL-E, etc. are likely a much better, more native fit for ‘art NFTs’ than, say, JPEGs. You still can’t easily put a JPEG onto a blockchain, but you can very easily put a 50-word key phrase on there to reproduce that exact image, and do all sorts of interesting stuff around tracking derivative works. Reverse engineering that, scanning your original art and producing the private key for it, could give us old-school media folk far more power in the digital domain.

Still breaking new ground, but there’s already a market for buying specific combinations of keywords which yield a specific auto-generated result. Basically a radical new way for graphic designers to shortcut content creation; check this for an example: Prompt Marketplace | PromptBase

Currently, I see the tech as a new way for artists to create their own references, in the same way as getting a photo of a model, going en plein air, or setting up a still life.

Still prefer paint over pixels, 100%.


Additionally, in DALL-E a new update was released this week called ‘outpainting’, which delivers context-dependent scene expansion: you can go beyond the field of view of any existing picture and add anything you like, all while maintaining the style of the original. Clever Skynet:


That’s awesome!!! :smile:
