On Modes of Generative AI…
When people talk about generative AI, I think they're often just lumping all the usage patterns together. I don't feel like organizations are doing an actual, tactical examination of how they might get AI to co-exist with humans in a process.
I sat down this morning and tossed around some ideas for "modes" of using gen AI.
✏️ Inspiration: “Come up with an outline of how to explain James Bond to the average person”
✏️ Generation: “Write me a formal essay about James Bond and his travels in the communist countries of Europe in the 1980s.” (This can also be true for image and video generation.)
✏️ Assistance: “Give me a metaphor to explain how much James Bond likes martinis.”
✏️ Proofing: “Recommend changes to this blog post about how much I love James Bond.”
✏️ Summation: “Write a 100-word teaser from this article about James Bond.”
✏️ Experimentation: “Give me five different possible headlines for this review of a James Bond movie that I gave five stars.”
✏️ Recommendation: “If someone were to enjoy this article about James Bond, provide 3-5 recommendations of other content they might like as well.” (…is this even generative? I’m not sure – should a recommendation be considered “content”?)
I feel like these are characterized by the relationship between the AI output and the “core content.” Only in the case of pure generation are they the same thing (assuming you didn’t edit much).
In the case of inspiration, assistance, proofing, etc., the AI output is really just an "enhancement" of the core content. So it's generating something, just not the primary content of the work effort – it's generating the opening acts, not the headliner.
And maybe this is a way to look at gen AI when trying to figure out how to fit it into an editorial process. The “core content” is the work of humans; your usage of AI is really characterized as “Enhance-ative AI” (I clearly just made that word up).
Any other usage modes I’m missing from my list?