I was thinking this morning about the dialogue around AI tools like ChatGPT, and the claim that they will make human writing completely irrelevant. In a few years, the argument goes, computers will be able to create text that is indistinguishable from human-created text; some would argue that they already can.
I was also thinking about the dialogue around AI art programs like Stable Diffusion, and how they can take prompts and create reasonable (if typically anatomically incorrect) facsimiles of human-created art.
In the second conversation, a key element is the issue of plagiarism. AI art programs often function by taking existing art, analyzing it, and replicating it based on a prompt. So if I find ten paintings involving pomegranates and feed them into the program, the program will find commonalities in these paintings and synthesize those into a new painting.
One important problem with this occurs if the art I feed in involves intellectual property that I don’t own. If I limit myself to public domain works or to cases where the artist consents, there’s no problem if the computer creates what is effectively a random collage.
But if I feed in works by artists who are actively trying to make a living, and who did not consent to their art being used in this fashion, I’m creating an ethical problem at best, and possibly a legal one.
It is true that humans create collages involving copyrighted work. However, one key factor in the fair use doctrine is whether the allegedly infringing work is “transformative”: Does the new work have a clear creative purpose beyond merely copying an existing one?
Being “transformative” implies a level of cognitive deliberation that computers have yet to demonstrate. Could they, eventually? Yes. Do they, now? No.
There’s also the matter of commercial use: Collages by high school students, for instance, usually look like “collages by high school students” and have the commensurate lack of commercial viability. In contrast, one important goal of AI art programs is to create works that are undetectable as machine-made. That has a far greater potential for fraud.
Which brings me back to my first paragraph: The claim that AI systems will soon render human-created works obsolete.
I saw a step-by-step explanation of how to make an almost completely AI-generated instructional video: Ask one AI app for a script based on a simple prompt. Feed the result into an automated script editor. Feed the edited script into a video maker, and into a script reader for narration. Have yet another app create an interesting thumbnail. Boom! New content with minimal human interaction. And it’s only a short step from there to chaining the AI together (with more AI) to create a visually and aurally engaging video from a single prompt.
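If you want to picture what that chain looks like, here is a minimal sketch in Python. Everything in it is hypothetical: the function names, the dummy return values, and the wiring are placeholders standing in for whichever text, speech, image, and video generators someone might actually string together.

```python
# A minimal sketch of the chained-AI video pipeline described above.
# Every function is a hypothetical placeholder: swap in whatever
# text, speech, image, or video service you actually use. The dummy
# return values just let the chain run end to end.

def generate_script(prompt: str) -> str:
    # Step 1: ask a text-generation app for a script based on a simple prompt.
    return f"[draft script about: {prompt}]"

def edit_script(draft: str) -> str:
    # Step 2: feed the draft into an automated script editor.
    return draft.replace("[draft", "[edited")

def narrate(script: str) -> bytes:
    # Step 3: have a text-to-speech "script reader" produce narration audio.
    return script.encode("utf-8")  # stand-in for audio data

def assemble_video(script: str, narration: bytes) -> bytes:
    # Step 4: feed the script and narration into a video maker.
    return narration + b" + generated visuals"  # stand-in for video data

def make_thumbnail(prompt: str) -> bytes:
    # Step 5: have yet another app create an interesting thumbnail.
    return f"[thumbnail for: {prompt}]".encode("utf-8")

def prompt_to_video(prompt: str) -> tuple[bytes, bytes]:
    # Chain it all together: one simple prompt in, a finished video
    # and thumbnail out, with minimal human interaction.
    script = edit_script(generate_script(prompt))
    video = assemble_video(script, narrate(script))
    return video, make_thumbnail(prompt)

if __name__ == "__main__":
    video, thumbnail = prompt_to_video("how to repot a succulent")
    print(len(video), len(thumbnail))
```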
Lingering under this discussion is the question: What is the purpose of art?
To the extent that content is created for profit, I agree that AI is a problem. Most artists (including writers) depend on their art satisfying a commercial demand, and to that degree the threat is real.
At the same time, though, it’s important to keep in mind that creative arts are, well, creative. Part of their value, and in many cases their entire value, lies in the act of creation itself. How many creative works wind up in a closet, never to be shared with the world, because the point of their existence was solely in their creation?
I’m not talking about frustrated artists who can’t find their market, and who languish in unintentional obscurity. I’m talking about works created for the sole purpose of having been created.
That’s what I fear will get lost in this conversation: that we as a culture have placed all the value of “art” on its commercial viability. The commercial viability of art is not trivial, but it is not the exclusive reason art exists.