On the Risks of Editorial AI…

tags: ai, content-creation

I ran an executive roundtable for Optimizely at the Gartner Marketing Symposium a couple of weeks ago, where a group of 30 or so marketing executives discussed six risks of generative/editorial AI.

(I’m trying to make a term happen, BTW: “Editorial AI.” The use of AI in content generation goes far beyond just writing words. “Generative AI” is too restrictive.)

Here are the six risks I identified, which we discussed:

  • Inaccuracy: content generated by AI will say something wrong, or legally bind you to a position that’s not correct

  • Exposure: it will become obvious that your organization is using AI to create content (whether or not that's a bad thing is up to you, I guess). Closely related is DISCLOSURE risk, meaning you will fail to appropriately disclose that AI is the source of your content (thanks to a roundtable participant for carving that out)

  • Regression: your generated content will “regress to the mean,” meaning it will become average or worse – it will start to sound like everything else

  • Morale: your employees' morale will suffer when responsibilities they value (creating content, for example) are taken from them, or when colleagues start to lose their jobs to AI

  • Atrophy: you will over-depend on AI and neglect your overall process; when the AI playing field levels out again (when everyone has it), you'll have taken a net step backwards

  • Pre-emption: other people's AI will "repackage" your content for delivery in their own context, depriving you of engagement (this risk is different from the others: it isn't related to YOUR internal use of AI, just to the use of AI in general)

What am I missing?

(Also note that I’m not condemning any use of AI. Successful adoption and incorporation of anything simply requires that we consider and mitigate risks.)
