How To Work AI Into Content Marketing (in a Way That Works for You)

Does the company you work for have a policy on AI use? Several brands (Amazon, Apple, Verizon, and Wells Fargo) have set strict parameters around the technology.

Once you get the OK to give it a go, you’ll need to develop specific policies to guide AI use in content and marketing.

An AI operations plan will let you make (and share) sound decisions about its use and governance in your department – and help mitigate the risks.

For a recent article in CCO, CMI’s content leadership publication, I asked industry AI experts for their advice on operationalizing AI for content (and what to watch out for along the way). I’ve recapped the highlights here. You can also read the original article, The Power of the Prompt: How to Plug AI Into Your Content Engines, on CCO.

Develop generative AI strategy and standards

Your first question on AI’s role in your content operations should be, “Where does it make the most sense to use it?”

Focus on business impact

First, make sure incorporating AI aligns with your company values, says Trust Insights CEO Katie Robbert. “I would start with your mission and values and see if using artificial intelligence contradicts them,” she says.

Then, consider how AI tools work with your marketing and business priorities. “Think about the questions you’ve been unable to answer or problems you’ve struggled to solve,” she suggests.

Next, consider where these AI tools can help increase brand value or marketing impact. Will they help increase audience reach or enable you to branch out to new creative areas?


Measure for problems solved as well as marketing impact

Most companies measure AI’s impact in terms of time – how much they can save or how much more they can do. That approach measures efficiency but not effectiveness, says Meghan Keaney Anderson, head of marketing at Jasper, an AI content generation tool.

Meghan recommends A/B testing to pit AI-assisted content against human-created content on comparable topics. “Figure out which one fared better in terms of engagement rates, search traffic, and conversions to see if [AI] can match the quality at a faster pace,” she says.
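
To make that comparison concrete, here is a minimal sketch of how an analyst might score such a test in Python, using a two-proportion z-test on conversions. All of the figures are hypothetical placeholders; swap in the numbers from your own analytics platform.

```python
# A minimal sketch: compare conversion rates for AI-assisted vs.
# human-written articles on comparable topics (hypothetical numbers).
from statsmodels.stats.proportion import proportions_ztest

conversions = [132, 118]   # AI-assisted, human-written
sessions = [4800, 4650]    # visits to each variant

z_stat, p_value = proportions_ztest(count=conversions, nobs=sessions)
print(f"AI-assisted rate: {conversions[0] / sessions[0]:.2%}")
print(f"Human-written rate: {conversions[1] / sessions[1]:.2%}")
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
# A p-value below your chosen threshold (commonly 0.05) suggests the
# difference in conversion rate is unlikely to be random noise.
```

The same approach works for engagement rates or any other yes/no outcome; for search traffic, compare sessions per article instead.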


Set unified policies

Develop a unified set of generative AI governance policies for managing potential risks and simplifying cross-team content collaborations.

When each team uses different tools or sets its own guidelines, safeguarding company data becomes more difficult, says Cathy McPhillips, chief growth officer at the Marketing AI Institute (MAII).

“If one team uses ChatGPT while others work with Jasper or Writer, for instance, governance decisions can become very fragmented and challenging to manage,” she says. “You’d need to keep track of who’s using which tools, what data they’re inputting, and what guidance they’ll need to follow to protect your brand’s intellectual property.”

Consider AI for content operations

Creating content is one way to use generative AI, but it may not be the most beneficial. Consider using it to streamline production processes, amplify creative resources, or augment internal skills.

For example, use AI to tackle smaller, time-consuming assignments like writing search-optimized headlines, compiling outlines and executive summaries, and repurposing articles for social posts and promotions.
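
To illustrate how lightweight those assignments can be to script, here is a minimal sketch that repurposes a published article into social posts using the OpenAI Python SDK. The model name, file path, and prompt wording are placeholders – any chat-capable tool your company has sanctioned would work the same way.

```python
# A minimal sketch: repurpose a published article into social posts.
# Assumes the openai Python SDK (v1+) with OPENAI_API_KEY set in the
# environment. The model, file path, and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

with open("published_article.txt") as f:  # hypothetical exported article
    article_text = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # use whichever model your team has approved
    messages=[
        {"role": "system",
         "content": "You repurpose long-form marketing articles for social media."},
        {"role": "user",
         "content": "Write three short social posts, each under 280 characters, "
                    f"promoting this article:\n\n{article_text}"},
    ],
)
print(response.choices[0].message.content)
```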

Jasper’s Meghan Keaney Anderson says this approach frees your team to explore new creative avenues and focus on work they’re more passionate about.

You also can incorporate AI to help with tasks that aren’t part of your team’s core skills. For example, MAII’s Cathy McPhillips uses AI tools to help produce the company’s weekly podcast, The Marketing AI Show.

Using AI tools to transcribe the podcast, help with sound editing, and create snippet videos for social media saves her up to 20 hours a week. “Working with AI tools reduced the time I had to spend on marketing tactics that are critically important to the business – but not things I love doing,” Cathy says. “That allows me to do more strategic, critical thinking I want and need to focus on but didn’t previously have the bandwidth for.”
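
For a sense of how approachable the transcription step is, here is a minimal sketch using OpenAI’s open-source Whisper model. The file names are placeholders for your own episode audio and output.

```python
# A minimal sketch: transcribe a podcast episode with open-source Whisper.
# Requires: pip install openai-whisper (plus ffmpeg installed on the system).
import whisper

model = whisper.load_model("base")  # larger models trade speed for accuracy
result = model.transcribe("episode_42.mp3")  # placeholder file name

# Save the raw transcript for editing, show notes, or picking video snippets.
with open("episode_42_transcript.txt", "w") as f:
    f.write(result["text"])
```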

Incorporate generative AI into editorial with care

When you use AI for content, implement guardrails to maintain content quality.

Establish or update your fact-checking process

Generative AI tools can produce content with misleading or inaccurate information. So, publishing AI-generated content without careful editorial oversight isn’t wise.

“AI is good at stringing together words, but it doesn’t understand the meaning behind those words. Make sure you’ve got humans watching out for inaccuracies before your content goes out the door,” says Jasper’s Meghan Keaney Anderson.

To manage this risk, Meghan recommends investing in the journalistic skills involved in content creation – editing, fact-checking, and verifying sources – and building those steps into your production workflow.

Be mindful of mediocrity

Even if your AI-created copy is factually impeccable, it can still come off as generic, bland, and uninspiring.

“Today’s audiences can tell the difference between content created by a person and generic copy created by artificial intelligence,” says Trust Insights’ Katie Robbert. She recommends careful human review and rework of AI content output to ensure it conveys your brand’s distinct voice, warmth, and human emotion.

Watch out for biases and ethical issues

Both AI- and human-generated content that includes biased or outdated views can damage your brand’s reputation and audience trust.

Make sure your team keeps an eye out for bias in the editing process. “It’s about making sure that you can stand by what you’re putting out in the world and that it’s representative of your customers,” Meghan Keaney Anderson says.

Address legal and IP security concerns

Generative AI tools also introduce tricky legal challenges – and the content team may be held accountable for them.

Copyright concerns

AI tools can potentially violate copyrights because of the way data gets collected and used to train the underlying models. Concerns in this area cut both ways: Brands risk becoming the bad actor that publishes copyrighted material without appropriate attribution. They also can have their own copyrights violated by others.

Several class-action lawsuits are challenging the way OpenAI acquired data from the internet to train its ChatGPT tool. Earlier this year, stock image provider Getty Images sued Stability AI, the company behind Stable Diffusion, for copyright infringement. More recently, Sarah Silverman and two other authors alleged that OpenAI’s ChatGPT and Meta’s LLaMA were trained on copyrighted material from their books.

While the U.S. Copyright Office has issued guidance that works containing AI-generated material aren’t eligible for the same copyright protections as human-created works, this issue is complex – and far from settled.

Privacy violations

Other external issues include maintaining the privacy of audience data typed into your content prompts. “Inputting protected health information or personally identifiable information is a big concern, and it’s something that companies need to be educated on,” Katie Robbert says.

“Make sure your team members aren’t using ‘rogue’ tools – ones their business hasn’t sanctioned or that are built by unknown individuals,” Meghan Keaney Anderson recommends. “They may not have the same strict security practices as other AI systems.”

Brand secrets

And there’s another security-related concern: When you type your brand’s proprietary insights into AI prompts and search fields, that information may become part of the tool’s training data. It could then surface in the results generated for someone else’s prompt on a similar topic.

If your prompt details unannounced products and services, your organization may view it as a leak of trade secrets. It could put you in legal jeopardy and harm your team’s reputation.
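
One practical guardrail is a lightweight scrubber that checks prompts against a denylist of confidential terms and obvious PII patterns before anything leaves your environment. Here is a minimal sketch; the code names and patterns are hypothetical, and a production policy would cover far more.

```python
# A minimal sketch of a prompt scrubber: flag obvious PII patterns and
# confidential project names before a prompt is sent to an AI tool.
# The denylist and patterns below are hypothetical placeholders.
import re

DENYLIST = ["project falcon", "q4 launch roadmap"]  # hypothetical code names
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # U.S. SSN format
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
]

def check_prompt(prompt: str) -> list[str]:
    """Return the reasons a prompt should be blocked (empty list = OK)."""
    problems = [f"confidential term: {term}" for term in DENYLIST
                if term in prompt.lower()]
    problems += [f"possible PII: {p.pattern}" for p in PII_PATTERNS
                 if p.search(prompt)]
    return problems

issues = check_prompt("Tease Project Falcon in an email to jane@example.com")
if issues:
    print("Blocked:", "; ".join(issues))
else:
    print("OK to send")
```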

Exercising caution and discretion with proprietary data is vital to the safe use of generative AI. “We must be the stewards of our company, data, and customers because legal precedents will lag far behind,” says Cathy McPhillips.


Consider implementing formal guidance on what teams can and can’t include in generative AI prompts. The City of Boston and media brand Wired have published interim guidelines covering internal activities like writing memos, public disclosures, proofreading, and fact-checking AI-generated content.

The Marketing AI Institute has published its Responsible AI Manifesto for Marketing and Business. Cathy also recommends forming an internal AI council.

“It’s a way to gather with change agents from every department regularly to discuss the positive and negative business impacts of onboarding AI technology,” she says.

Use operational expertise to roll out generative AI

Generative AI tools promise process efficiency and creative flexibility. But turning that potential into positive marketing outcomes still takes human operational intelligence.

Get more advice from Chief Content Officer, a monthly publication for content leaders. Subscribe today to get it in your inbox.


Cover image by Joseph Kalinowski/Content Marketing Institute