AI in marketing, media, and communications
The Coca-Cola Experiment That Changed How a 130-Year-Old Brand Thinks About Content
In early 2023, Coca-Cola became one of the first Fortune 500 brands to publicly commit to AI-generated marketing content at scale. Working with Bain & Company and OpenAI, the company deployed GPT-4 and DALL-E to help create personalized ad copy, social media variants, and visual concepts across dozens of markets simultaneously. The "Create Real Magic" campaign invited consumers to generate artwork using Coca-Cola's iconic imagery through an AI platform built on OpenAI's tools. Within weeks, over 120,000 pieces of AI-assisted content had been created — a volume that would have required an army of agencies and months of production time using traditional methods. The campaign generated significant press coverage and won a Cannes Lions award. But the more interesting story isn't the award. It's the tension the campaign exposed inside Coca-Cola's own marketing teams.
Coca-Cola's in-house creative directors were simultaneously excited and unsettled. The AI tools could generate on-brand visual concepts in seconds, iterate across 40 language markets without a translator on call, and produce hundreds of social caption variants for A/B testing overnight. But the outputs required heavy curation. Brand safety issues surfaced — AI-generated images occasionally drifted from Coca-Cola's strict visual identity guidelines, and copy sometimes felt tonally flat despite being grammatically correct. The teams discovered that AI didn't replace creative judgment; it massively amplified the need for it. Someone still had to decide what "on-brand" actually meant, which outputs were brilliant and which were subtly wrong, and how to sequence content for emotional resonance across a campaign arc. Those decisions required experienced marketers, not more prompts.
The Coca-Cola case crystallized something that marketing and communications teams across industries are now grappling with: AI collapses the cost of content production but does not collapse the cost of content strategy. Generating 500 social media captions takes minutes with ChatGPT or Claude. Knowing which 12 to use, in what order, for which audience segment, timed to which cultural moment — that's still entirely human work. The teams that thrive are those who redirect time saved on production toward the strategic and editorial judgment that AI genuinely cannot replicate. This is the core tension of AI in marketing: abundance of output versus scarcity of discernment.
Where AI Sits in the Marketing Stack
How the Newsroom Learned to Stop Worrying and Brief the Bot
The Associated Press has been using AI to write financial earnings summaries since 2014 — long before most marketers had heard of GPT. By 2023, AP was producing roughly 4,000 AI-assisted stories per quarter using Automated Insights' Wordsmith platform, primarily for corporate earnings reports and minor league sports recaps. These are high-volume, data-rich, low-narrative stories where speed and accuracy matter more than voice. A quarterly earnings release from a mid-cap company produces a 400-word summary within seconds of the data hitting AP's system. No journalist writes it. No editor touches it unless something flags as anomalous. AP's human journalists, freed from this mechanical work, focus on investigative pieces, source relationships, and complex narratives that require genuine contextual understanding. The math is straightforward: AI handles the commodity content, humans handle the differentiated content.
What AP figured out early is that AI writing tools work best when they operate within tightly constrained formats. An earnings summary follows a predictable structure: headline figure, comparison to analyst expectations, CEO quote, segment breakdown, guidance. The AI knows the template. The data is structured. The output is reliable. Contrast this with a feature story about a CEO's leadership style during a crisis — that requires interviews, narrative judgment, source credibility assessment, and emotional intelligence in writing. No current AI tool does this well. The distinction AP draws — structured, data-driven content versus narrative, relationship-driven content — is one of the most useful mental models any communicator can apply when deciding where to deploy AI in their own workflow.
This structured-versus-narrative distinction shows up everywhere in marketing and communications. Email subject line testing is structured work — you can generate 30 variants with ChatGPT in under two minutes and let your ESP's A/B testing determine the winner. Writing a brand manifesto is narrative work — it requires understanding organizational history, founder intent, competitive positioning, and cultural aspiration in ways that a language model will get superficially right and subtly wrong without significant human editorial direction. The AP model teaches us to sort our tasks before reaching for an AI tool, asking: is this task primarily about pattern completion in a known format, or does it require genuine contextual and emotional judgment?
Prompt
I'm sending a promotional email for a B2B SaaS product — project management software for marketing teams. The email announces a new AI-powered reporting feature. The primary CTA is to book a demo. Our audience is marketing managers and directors at companies with 50-500 employees.

Generate 15 subject line variants for A/B testing. Include:
- 5 that lead with the benefit
- 5 that use curiosity or a question format
- 5 that create urgency or scarcity

Each subject line should be under 50 characters. Flag any that might trigger spam filters.
AI Response
Benefit-led:
1. Your reports write themselves now (33 chars)
2. AI reporting is live — see it today (35 chars)
3. Cut reporting time by 80% this week (35 chars)
4. Marketing reports in under 2 minutes (36 chars)
5. Finally: reports your CEO will read (35 chars)

Curiosity/Question:
6. What if reporting took 90 seconds? (34 chars)
7. Your team hasn't seen this yet (30 chars)
8. How are top teams saving 5hrs/week? (35 chars)
9. We quietly shipped something big (32 chars)
10. Ever wish reports wrote themselves? (35 chars)

Urgency/Scarcity:
11. Demo slots filling — grab yours (31 chars)
12. Early access closes Friday (26 chars)
13. 47 teams joined this week. You? (31 chars)
14. Last week to get onboarding support (35 chars)
15. This offer expires Sunday night (31 chars)

⚠️ Spam filter flags: #13 uses a specific number, which occasionally triggers filters in older ESPs. #12 and #15 use time-based urgency — ensure the deadline is real, or deliverability scores may drop over time.
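Before loading AI-generated variants into an ESP, it helps to sanity-check them programmatically rather than trust the model's own character counts. The sketch below assumes the 50-character limit from the brief above; the spam-trigger word list is purely illustrative — real spam filters use far richer signals, so treat it as a placeholder, not an authoritative list.

```python
# Sanity-check AI-generated subject lines before A/B testing.
# MAX_LEN mirrors the brief; TRIGGER_WORDS is an illustrative
# assumption, not a real ESP filter list.
MAX_LEN = 50
TRIGGER_WORDS = {"free", "act now", "limited time", "expires"}

def check_subject_line(line: str) -> list[str]:
    """Return a list of issues found in one subject line."""
    issues = []
    if len(line) > MAX_LEN:
        issues.append(f"too long ({len(line)} > {MAX_LEN} chars)")
    lowered = line.lower()
    for word in TRIGGER_WORDS:
        if word in lowered:
            issues.append(f"possible spam trigger: '{word}'")
    return issues

variants = [
    "Your reports write themselves now",
    "This offer expires Sunday night",
]
for v in variants:
    print(v, "->", check_subject_line(v) or "OK")
```

A check like this takes seconds and catches the mechanical errors; the human still makes the editorial call on which variants fit the brand.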
Spotify's Editorial Team and the Scale Problem Every Media Brand Faces
Spotify's editorial team curates thousands of playlists across 180 markets in 30+ languages. For years, this meant a relatively small group of music editors making subjective calls about playlist descriptions, mood tagging, and contextual copy — the text that tells you why a playlist exists and why you should care. By 2022, Spotify had begun integrating AI-assisted writing tools into this editorial workflow, using language models to generate first-draft descriptions that human editors then refined. The efficiency gain was significant: editors who previously spent 40% of their time on first-draft copy could now spend that time on the curatorial and cultural decisions that actually differentiated Spotify's editorial voice from algorithmic competitors like Apple Music. The AI didn't replace the editors. It removed the blank-page problem.
Spotify's approach reveals a pattern that media companies from The New York Times to BuzzFeed have independently discovered: AI is most valuable in editorial workflows not as a replacement for editorial voice but as a first-draft accelerant that eliminates the cognitive overhead of starting from nothing. A Spotify editor describing a jazz playlist for the Brazilian market doesn't need to invent a structure — they need to apply cultural sensitivity, genre knowledge, and brand voice to a structural scaffold. AI provides the scaffold. The editor provides the soul. This division of labor shows up identically in corporate communications, where PR teams use Claude or ChatGPT to generate press release drafts from a fact sheet, then spend their actual expertise on narrative positioning, spokesperson alignment, and media relationship strategy.
| Task Type | AI Suitability | Best Tool | Human Role |
|---|---|---|---|
| Email subject line variants | High | ChatGPT, Claude | Select and test winners |
| Press release first draft | High | Claude, ChatGPT | Positioning, tone, approval |
| Earnings/data summaries | Very High | Automated Insights, GPT-4 | Anomaly review only |
| Playlist/content descriptions | Medium-High | ChatGPT, Claude | Cultural and brand refinement |
| Social media caption variants | High | ChatGPT, Jasper | Select, schedule, monitor |
| Campaign concept development | Medium | Claude, ChatGPT | Strategic direction, selection |
| Brand manifesto / voice guide | Low | Claude (research assist) | Primary author throughout |
| Investigative / narrative journalism | Very Low | Perplexity (research only) | Full authorship and judgment |
| Visual campaign concepts | Medium-High | Midjourney, Adobe Firefly | Brand safety, selection, refinement |
| Media strategy and channel planning | Low | ChatGPT (data synthesis) | Strategic ownership |
What a Solo Consultant Taught Us About Punching Above Your Weight
Not every AI-in-marketing story involves a Fortune 500 brand or a global newsroom. Consider a freelance brand strategist working with three to five clients simultaneously — the kind of professional who previously had to choose between doing deep research and meeting a deadline. With Perplexity AI for competitive landscape research, Claude for synthesizing client interview notes into strategic frameworks, and ChatGPT for generating first-draft positioning statements and messaging hierarchies, a single strategist can now produce deliverables of a quality that previously required a team of three. One consultant described her workflow: she spends 90 minutes in client interviews, drops her notes into Claude with a structured prompt, gets a synthesized brief back in minutes, then spends her remaining time on the insight layer — the provocative reframe or the unexpected competitive angle that makes her advice worth paying for.
This solo consultant example matters because it illustrates the democratization effect that AI creates in marketing and communications. The cost of producing professional-grade content assets — research briefs, messaging frameworks, content calendars, competitive analyses — has dropped dramatically. A small agency can now compete on output volume with a larger one. A brand manager at a mid-size company can produce the kind of strategic documentation that previously required an external consultant. The competitive moat in marketing is shifting from "who can produce the most content" toward "who has the sharpest strategic judgment about what to produce and why." AI raises the floor for everyone. The ceiling is still determined by the quality of human thinking applied to AI outputs.
The Briefing Principle: Garbage In, Garbage Out — At Scale
What This Means in Practice for Marketing and Communications Teams
The practical implication of everything above is a fundamental reallocation of professional time, not a replacement of professional roles. Marketing and communications teams using AI effectively are spending less time on content production and more time on content strategy, editorial judgment, and audience insight. A content marketing manager who previously spent Tuesday writing three blog post drafts can now use AI to generate those drafts in an hour and spend Tuesday on distribution strategy, SEO architecture, and editorial calendar planning. The work doesn't disappear — it upgrades. The professionals who adapt fastest are those who recognize that their value was never in the typing; it was always in the thinking that directed the typing.
There's a skills implication here that's easy to underestimate. The ability to brief AI tools effectively — to write prompts that constrain outputs toward useful, on-brand, strategically sound content — is becoming a core marketing competency as consequential as knowing how to brief a designer or write a creative platform. Marketers who can translate strategic intent into precise AI prompts, evaluate outputs critically against brand and audience standards, and iterate efficiently are dramatically more productive than those who use AI tools casually and accept mediocre first outputs. This skill isn't technical. It's editorial. It's the same muscle you use when giving feedback to a junior copywriter — you just need to apply it faster and more precisely, because AI iterates in seconds rather than days.
The third practical implication is about workflow integration rather than tool adoption. The marketing teams seeing the biggest productivity gains from AI aren't the ones who added ChatGPT to their browser bookmarks — they're the ones who embedded AI into their existing processes at specific, high-volume friction points. HubSpot users running AI-assisted email subject line testing inside their existing campaign workflows. Salesforce users letting Einstein generate first-draft sales email sequences that reps then personalize. Notion AI users summarizing meeting notes and extracting action items automatically. The tool matters less than the integration point. Finding the three to five places in your current workflow where you spend time on high-volume, structured, low-stakes content creation — that's where AI delivers immediate, measurable return.
Goal: Build a personal AI opportunity map for your specific marketing or communications role, grounded in real tasks rather than hypothetical use cases, and validate your first AI-assisted content output against a real brief.
1. Open a blank document and list every recurring content or communications task you personally complete in a typical week — include email drafts, social posts, briefs, reports, summaries, and internal updates. Aim for at least 12 tasks.
2. For each task, estimate the average time you spend on it per week (e.g., 45 mins, 2 hours). Total these up.
3. Apply the structured-versus-narrative test from the AP example: mark each task as S (structured/data-driven/format-dependent) or N (narrative/relationship-dependent/contextual judgment required).
4. For every S-rated task, identify which AI tool could assist: ChatGPT or Claude for copy and text, Perplexity for research, Midjourney or Adobe Firefly for visual concepts, your existing platform's built-in AI (HubSpot, Klaviyo, Notion AI) for workflow-integrated tasks.
5. Pick the single S-rated task that consumes the most time. Write a 4-line brief for it: (a) specific audience, (b) desired action or outcome, (c) brand voice in three adjectives, (d) one thing you're NOT trying to say.
6. Use that brief to write a prompt in ChatGPT or Claude. Run the prompt and evaluate the output against your brief. Note what worked and what required human adjustment.
7. Estimate the time saved if you used this AI-assisted workflow every time you completed this task. Multiply by 48 working weeks to get an annual time-saving estimate.
8. Identify one N-rated task from your list where you believe AI would produce subtly wrong outputs without heavy human oversight. Write two sentences explaining specifically why — what does this task require that current AI cannot reliably provide?
9. Share your structured task prompt and your N-rated task explanation with a colleague or manager and discuss whether your assessments align.
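Step 7's annualization is simple arithmetic, but it is worth making explicit so the estimate is reproducible. A minimal sketch, where the 90-minute figure is an illustrative placeholder rather than a benchmark from the text:

```python
# Step 7 arithmetic: convert weekly minutes saved into annual hours.
# The 48-week year comes from the exercise; the 90-minute input
# below is a hypothetical example.
WORKING_WEEKS = 48

def annual_hours_saved(minutes_saved_per_week: float) -> float:
    """Annualize weekly time savings, expressed in hours."""
    return minutes_saved_per_week * WORKING_WEEKS / 60

# e.g. an AI-assisted draft workflow that saves 90 minutes a week
print(annual_hours_saved(90))  # 72.0 hours per year
```

Seventy-plus hours a year from a single task is the kind of concrete number that makes the reallocation argument in this chapter persuasive to a manager.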
Principles Extracted from These Stories
- AI collapses the cost of content production, not content strategy. Coca-Cola could generate 120,000 pieces of content; they still needed experienced marketers to decide which 200 were worth using.
- Structured, format-dependent tasks are where AI delivers the most reliable value. The AP model — AI for earnings summaries, humans for investigative narrative — is directly applicable to marketing workflows.
- The blank-page problem is AI's clearest win in editorial and creative work. Spotify's editors didn't lose their jobs; they lost the least valuable part of their jobs.
- AI democratizes production capacity, shifting competitive advantage from output volume to strategic judgment. The solo consultant competing with a three-person team is the new normal.
- Briefing quality determines output quality. A 90-second audience-action-voice-exclusion brief transforms AI outputs more reliably than advanced prompt engineering.
- The highest-value integration points are high-volume, structured, low-stakes content tasks already embedded in existing workflows — not experimental standalone AI projects.
- The skills that matter most are editorial, not technical: the ability to brief precisely, evaluate critically, and iterate efficiently against a strategic standard.
Key Takeaways
- Major brands including Coca-Cola and media organizations like the AP are already deploying AI at scale in content and communications workflows — this is current practice, not future speculation.
- The structured-versus-narrative distinction is the most practical mental model for deciding where AI adds value in your specific role.
- AI tools like ChatGPT, Claude, Jasper, and platform-native AI (HubSpot, Notion AI, Klaviyo) each serve different integration points in the marketing stack.
- The professional skill being rewarded in AI-enabled marketing teams is editorial judgment — the ability to brief, evaluate, and direct AI outputs toward strategic outcomes.
- Time saved on production should be reinvested in strategy, audience insight, and the narrative work that AI consistently underperforms on.
- Workflow integration beats tool adoption: finding where AI fits into your existing high-volume tasks delivers faster, more measurable returns than exploring AI in isolation.
How the Washington Post Writes 850 Articles a Year Without a Reporter
In 2016, the Washington Post quietly deployed an AI system called Heliograf to cover election results. Nobody noticed — because the articles read like articles. By 2017, Heliograf had published over 850 stories, covering high school football scores, congressional races, and earnings reports that would never have justified a reporter's time. The Post didn't fire journalists. Instead, reporters who used to spend Fridays transcribing box scores started using that time to investigate police misconduct and housing policy. The editorial team had discovered something that sounds counterintuitive: automating the low-value work made the high-value work better, not redundant.
What made Heliograf work wasn't just the technology — it was the Post's clarity about what machines should do versus what humans must do. Structured, data-dense, format-predictable content (scores, results, financial summaries) went to the AI. Narrative, judgment, source relationships, and accountability journalism stayed with humans. That division of labor wasn't accidental. It required editors to explicitly map which tasks depended on human judgment and which were essentially translation jobs — converting structured data into readable prose. Most marketing and communications teams have never done that mapping. They're either ignoring AI entirely or using it indiscriminately, and both approaches cost them.
The principle the Post demonstrated is now being applied across marketing departments worldwide: AI excels at scale and consistency, humans excel at judgment and relationship. The practical question isn't whether to use AI for content — it's where the handoff happens. Get that boundary wrong in one direction and you produce generic, on-brand-but-soulless content that audiences tune out. Get it wrong in the other direction and you waste skilled writers on tasks that don't require their skills. The Post's framework — map the task, assign appropriately, measure output quality — is the operating model that separates teams using AI well from teams just using AI.
The Structured Data Advantage
Personalization at a Scale Humans Can't Match
Persado is an AI platform that generates emotionally targeted marketing language — and in 2019, JPMorgan Chase ran a headline experiment that became a case study in business schools. Human copywriters produced headlines like 'Access cash from the equity in your home.' Persado's AI produced 'It's true — You can unlock cash from the equity in your home.' The Persado version outperformed the human version by 450% on click-through rate. This wasn't because the AI was more creative. It was because Persado had analyzed millions of previous customer interactions and identified that this audience responded to validation language ('It's true') before the offer. The AI knew something the copywriter didn't — not because it was smarter, but because it had processed more signals.
This is the second major advantage AI brings to marketing: pattern recognition at a scale no human team can replicate. A senior copywriter has internalized feedback from maybe a few hundred campaigns over a career. An AI system trained on millions of ad variations has effectively internalized feedback from campaigns that would take a thousand careers to accumulate. That doesn't mean the AI's output is always better — it means its probabilistic guesses about what will resonate are better calibrated to large audiences. For niche audiences, unusual brands, or culturally specific contexts, human judgment still frequently outperforms. The JPMorgan result works for mass retail banking. It may not transfer to a boutique spirits brand targeting collectors.
The practical implication for marketing teams is to treat AI-generated copy as a high-volume hypothesis generator, not a final-draft machine. Persado doesn't replace the copywriter — it gives the copywriter ten variants to evaluate, each optimized for a different emotional register. The copywriter's job shifts from blank-page creation to editorial judgment: which of these variants fits our brand voice, which might alienate a segment we care about, which needs a human rewrite to feel authentic. That's a genuinely different skill set than traditional copywriting, and teams that develop it early are building a durable advantage.
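If AI copy is a hypothesis generator, each variant still needs statistical validation before it is declared a winner. One standard approach (an illustrative choice here, not something the Persado case specifies) is a two-proportion z-test on click-through rates; the send and click numbers below are hypothetical:

```python
import math

def two_proportion_z(clicks_a, sends_a, clicks_b, sends_b):
    """Z statistic for the difference between two click-through rates."""
    p_a, p_b = clicks_a / sends_a, clicks_b / sends_b
    pooled = (clicks_a + clicks_b) / (sends_a + sends_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    return (p_b - p_a) / se

# Hypothetical test: variant B (AI-generated) vs variant A (human baseline)
z = two_proportion_z(clicks_a=120, sends_a=10_000, clicks_b=180, sends_b=10_000)
print(round(z, 2))  # |z| > 1.96 is roughly significant at the 5% level
```

The point is the workflow, not the formula: the AI proposes, the test disposes, and the copywriter decides whether the statistically winning variant is also the one the brand can live with.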
Prompt
I'm writing email subject lines for a B2B SaaS product that helps finance teams close their books faster. Our audience is CFOs and financial controllers at mid-market companies. Write 6 subject line variants, each using a different emotional appeal: (1) fear of falling behind, (2) peer validation, (3) curiosity, (4) urgency, (5) aspiration, (6) specificity/data. Keep each under 50 characters. Label each variant with its emotional appeal.
AI Response
Fear of falling behind: 'Still closing books manually?'
Peer validation: 'How 400 CFOs cut close time by 3 days'
Curiosity: 'The close process nobody talks about'
Urgency: 'Q4 close is 6 weeks away'
Aspiration: 'Finance teams that close in 3 days'
Specificity/data: 'Cut month-end close from 8 days to 5'
When Spotify Personalized 400 Million Listeners Simultaneously
Spotify's Wrapped campaign — the annual personalized year-in-review every user receives in December — is now one of the most shared marketing moments on social media. In 2022, Wrapped generated over 400 million social shares within its first week. The content itself is AI-generated: your top artists, your listening personality type, your obscure genre obsession. Each of 400 million users gets a unique version. No human team could produce that. But here's what's easy to miss: the creative concept, the visual design system, the emotional framing ('Your 2022 Wrapped'), and the social-sharing mechanic were all human decisions. The AI handled the personalization engine. Humans handled the reason anyone wanted to share it.
Spotify's model is increasingly the template for large-scale marketing: human-designed campaign architecture filled with AI-generated personalized content. Nike uses a similar approach for its training app, where AI generates personalized workout summaries and progress narratives for each athlete. HubSpot's AI tools generate personalized email sequences based on CRM behavior data. In each case, the brand voice, the campaign concept, and the emotional hook are human-authored. The AI's job is to instantiate that concept across millions of individual contexts without degrading quality or coherence. It's a production role, not a creative director role — and understanding that distinction prevents a lot of strategic mistakes.
| Task Type | Human Advantage | AI Advantage | Best Practice |
|---|---|---|---|
| Campaign concept & strategy | Brand intuition, cultural context, competitive positioning | Limited — can generate options but lacks strategic judgment | Human-led, AI can assist with research and option generation |
| Long-form editorial content | Narrative arc, source relationships, original reporting | Speed, structural consistency, SEO optimization | Human writes, AI edits and optimizes |
| Personalized messaging at scale | Emotional authenticity for niche audiences | Pattern recognition across millions of interactions | AI generates variants, human sets guardrails and approves |
| Visual content creation | Original artistic direction, cultural sensitivity | Speed, variation, style transfer, asset generation | Human art directs, AI produces variations |
| Social media copy | Brand voice nuance, real-time cultural awareness | Volume, A/B variant generation, hashtag optimization | AI drafts, human reviews before publishing |
| Performance reporting narratives | Stakeholder context, strategic interpretation | Data synthesis, consistent formatting, speed | AI generates first draft, analyst adds strategic commentary |
| Crisis communications | Judgment, empathy, stakeholder relationships | Not recommended for primary drafting | Human-led entirely, AI can assist with research only |
A Social Media Manager's Workflow, Rebuilt
Consider what a typical social media manager's week looks like without AI: Monday spent writing posts for the week, Tuesday in a creative review, Wednesday scheduling, Thursday monitoring, Friday pulling performance data and writing a report. That's roughly 40% of the week on content production — writing tasks that are repetitive but require brand knowledge. With tools like Jasper, Copy.ai, or even ChatGPT trained on brand guidelines, that 40% compresses to about 15%. The manager still writes, but they're editing and approving AI drafts rather than starting from blank documents. The recovered time goes somewhere — and where it goes determines whether the role gets more strategic or just more output gets produced.
The managers who have navigated this shift well report a consistent pattern: they reinvested the recovered time into community engagement, influencer relationships, and deeper performance analysis — work that directly strengthened brand equity rather than just feeding the content calendar. Those who used the recovered time to simply publish more content often found engagement rates declining, because volume without added value trains audiences to ignore you. AI can produce more posts. It cannot produce more attention. The social manager who understands this becomes more valuable as AI adoption grows; the one who measures success by post count becomes less so.
Train the AI on Your Voice Before You Trust It
What This Means for How Marketing Teams Are Structured
The skills that AI makes less scarce in marketing are the ones that were already abundant: producing adequate copy quickly, generating standard visual assets, writing performance summaries. The skills it makes more valuable are the ones that were always scarce: brand judgment, strategic positioning, audience empathy, creative direction, and the ability to know when AI output is subtly wrong in ways that will matter later. Marketing departments that restructure around this reality are shifting hiring toward strategists and editors rather than producers. A team that was ten writers and two strategists two years ago may now function more effectively as four writers, four strategists, and a prompt engineer who maintains the AI workflow.
Media companies are ahead of this curve because they felt the pressure first. BuzzFeed's 2023 experiment with AI-generated quiz content was widely covered, but less discussed was the simultaneous expansion of their investigative unit. The Guardian uses AI to generate structured data stories while protecting headcount for long-form journalism. Reuters uses an AI system called Lynx Insight to detect anomalies in financial data and alert journalists to potential stories — the AI does the pattern detection, the journalist does the verification and narrative. In each case, the organizational logic is the same: AI handles scale and speed, humans handle judgment and trust.
For communications professionals — PR, internal comms, corporate affairs — the implications are slightly different but follow the same logic. AI tools can draft press releases, generate Q&A documents for media briefings, monitor sentiment across thousands of news sources, and produce first drafts of executive speeches. Cision and Meltwater now integrate AI summarization directly into their media monitoring dashboards. But the judgment calls — how to frame a sensitive story, which journalist to brief first, how a CEO's word choice will land with employees — remain deeply human. The professionals who thrive will be those who use AI to clear the administrative and production work so they can spend more time on the relationship and judgment work that actually moves the needle.
Goal: Produce a concrete, evidence-based map of where AI can recover meaningful time in your team's workflow, with two validated pilot candidates and a written recommendation ready to share.
1. Open a spreadsheet and list every recurring content or communications task your team does in a typical month — aim for at least 15 tasks.
2. For each task, estimate the average hours spent per month across the team.
3. Add a column labeled 'Structure Level' and rate each task 1-5: 1 = highly structured/predictable output (like a product description from specs), 5 = highly judgment-dependent (like a crisis statement).
4. Add a column labeled 'AI Readiness' and mark each task as High, Medium, or Low based on your Structure Level rating — tasks rated 1-2 are High, 3 is Medium, 4-5 are Low.
5. For all High AI Readiness tasks, write one sentence describing what the AI input (prompt + data) would look like and what the human review step would be.
6. Calculate total hours currently spent on High AI Readiness tasks. This is your maximum time recovery estimate if AI tools were fully deployed.
7. Identify the two High AI Readiness tasks with the highest monthly hours — these are your pilot candidates.
8. For each pilot candidate, spend 20 minutes testing outputs using ChatGPT or Claude with your existing content as style examples. Document the quality gap between AI output and your current standard.
9. Write a one-paragraph recommendation to your manager or team: which task to pilot first, which AI tool to use, and how you'll measure whether it's working.
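Steps 3 through 6 amount to a mapping plus a sum, which can be sketched in a few lines. The task names and hours below are hypothetical examples, not figures from the chapter; only the 1-5 rating scale and the High/Medium/Low buckets come from the exercise:

```python
# Steps 3-6: map structure ratings to AI readiness, then total the
# hours on High-readiness tasks. Task data is illustrative only.
def ai_readiness(structure_level: int) -> str:
    """Map a 1-5 structure rating to an AI readiness bucket."""
    if structure_level <= 2:
        return "High"
    if structure_level == 3:
        return "Medium"
    return "Low"

tasks = [
    {"name": "Product descriptions", "hours": 12, "structure": 1},
    {"name": "Press release drafts", "hours": 8, "structure": 2},
    {"name": "Crisis statements", "hours": 3, "structure": 5},
]
high = [t for t in tasks if ai_readiness(t["structure"]) == "High"]
print("Max time recovery:", sum(t["hours"] for t in high), "hrs/month")  # 20
```

Running this over a real task inventory turns the exercise's "maximum time recovery estimate" into a single defensible number for the recommendation in step 9.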
Lessons from the Field
- The Washington Post's Heliograf showed that automating low-judgment content tasks frees human talent for high-judgment work — but only if you explicitly define which tasks belong in which category.
- JPMorgan Chase's Persado experiment demonstrated that AI's pattern recognition across millions of interactions can outperform experienced human copywriters on mass-market emotional targeting — while likely underperforming on niche or culturally specific audiences.
- Spotify's Wrapped campaign established the template for modern personalization at scale: human-designed campaign architecture, AI-powered content instantiation across millions of individual contexts.
- Social media managers who used AI-recovered time for community and relationship work outperformed those who used it to publish more content — volume without added value reduces attention rather than increasing it.
- Marketing team structures are shifting from producer-heavy to strategist-heavy, with AI handling production volume and humans handling brand judgment, creative direction, and editorial oversight.
- Communications professionals using AI for monitoring, drafting, and Q&A preparation are freeing capacity for the relationship and judgment work — media briefing strategy, stakeholder framing, executive counsel — that AI cannot replicate.
- The most durable skill in AI-augmented marketing isn't prompting — it's knowing when AI output is subtly wrong in ways that will matter to real audiences, which requires deep brand and audience knowledge.
Key Takeaways
- Map tasks by structure level before deploying AI — the structured/predictable vs. judgment-dependent distinction determines where AI adds value vs. where it creates risk.
- AI personalization tools like Persado operate through pattern recognition across massive datasets, not creativity — their advantage is calibration to large audiences, not universal applicability.
- Personalization at Spotify's scale requires AI for production and humans for creative architecture — neither works without the other.
- The question for every marketing and communications role is not 'will AI replace this job' but 'which parts of this job should AI be doing, and what does that free me to do better.'
- Crisis communications and culturally sensitive content should remain human-led — AI assistance is acceptable for research, not for primary drafting of high-stakes messages.
- The competitive advantage in AI-augmented marketing is shifting from content production speed to editorial judgment, brand stewardship, and strategic thinking.
When the Algorithm Becomes the Editor
In 2023, Sports Illustrated published articles under bylines that didn't exist. The names were AI-generated, the headshots were stock images run through face-synthesis tools, and the content — while technically accurate — had the texture of something assembled rather than written. When the story broke, the backlash wasn't just about AI. It was about trust. Readers felt deceived not because a machine had helped write the articles, but because the publication had hidden that fact behind fictional human faces. The lesson wasn't that AI content is bad. It was that transparency is the new editorial standard.
Sports Illustrated's parent company, The Arena Group, lost significant advertiser confidence in the aftermath. What made the situation worse was that dozens of legitimate publishers were already using AI tools openly — disclosing AI assistance, keeping human editors in the loop, and maintaining audience trust. The contrast was damning. Two organizations using the same underlying technology produced completely different outcomes based on one variable: honesty about the process. That's the core tension in AI-assisted media right now, and it plays out daily in newsrooms, agencies, and brand content teams worldwide.
The principle extracted from this story is blunt: AI amplifies your editorial choices, good and bad. If your process is rigorous, AI makes it faster and more scalable. If your process cuts corners, AI industrializes those shortcuts at a pace that makes accountability nearly impossible. The tool doesn't create the problem — the workflow does. Every marketing and media team using AI today is implicitly making editorial policy decisions, whether they've formalized them or not.
The Personalization Paradox at Spotify
Spotify's AI DJ feature, launched in 2023, does something deceptively simple: it combines personalized playlist curation with a synthesized voice that contextualizes song choices with short commentary. The voice sounds warm and knowledgeable. It references your listening history. It feels like a conversation. Within months of launch, Spotify reported that users who engaged with the AI DJ feature had longer sessions and lower churn rates than those who didn't. The feature worked not because the AI was more accurate than a human DJ, but because it was present, consistent, and endlessly patient.
What Spotify understood is that personalization at scale isn't just about data — it's about voice. The AI DJ didn't just pick songs; it narrated a relationship with the listener. That's a communications insight, not a technical one. The engineering was sophisticated, but the product decision was editorial: give the algorithm a personality, and users will engage with it as if it has one. This is the same principle behind Netflix's recommendation copy, Amazon's 'because you watched' framing, and the tone of Duolingo's in-app notifications, which are famously written with character and consequence.
The risk Spotify managed carefully was the uncanny valley of AI intimacy. If the DJ voice had been too human — too specific, too emotionally presumptuous — it would have felt invasive. Instead, it stayed warm but general, informed but not omniscient. That calibration is something human writers did. The AI handled volume and consistency; the humans handled tone and trust boundaries. That division of labor is worth internalizing for any team building AI-assisted communications at scale.
Prompt
Here is our brand voice guide: [paste 3-4 sentences describing tone]. Below are five short pieces of marketing copy written by different team members. Score each one from 1-5 for brand voice consistency and explain what's off in the ones that score below 4. Then rewrite the lowest-scoring piece in our correct voice.
AI Response
Scores: Copy A — 4/5 (strong, minor formality mismatch in sentence 2). Copy B — 2/5 (overly casual, uses slang inconsistent with your 'confident expert' positioning). Copy C — 5/5. Copy D — 3/5 (passive voice undercuts your direct tone). Copy E — 4/5. Rewrite of Copy B: [rewritten version maintaining brand voice with specific structural changes noted].
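For teams that run this review repeatedly, the prompt itself can be assembled programmatically so every batch of copy gets identical instructions. A minimal sketch, assuming a placeholder voice guide and sample copy; the actual call to ChatGPT or Claude is deliberately omitted:

```python
# Sketch: assembling the brand-voice review prompt from the example above
# so it can be reused across batches of copy. The voice guide and sample
# copy are hypothetical placeholders; the instructions mirror the lesson's
# example prompt.

VOICE_GUIDE = "Confident expert tone: direct, warm, no slang, active voice."

def build_voice_review_prompt(voice_guide: str, copies: list[str]) -> str:
    """Label each piece of copy (Copy A, Copy B, ...) and wrap it in the review instructions."""
    labeled = "\n".join(
        f"Copy {chr(65 + i)}: {text}" for i, text in enumerate(copies)
    )
    return (
        f"Here is our brand voice guide: {voice_guide}\n\n"
        "Below are short pieces of marketing copy written by different team "
        "members. Score each one from 1-5 for brand voice consistency and "
        "explain what's off in the ones that score below 4. Then rewrite the "
        "lowest-scoring piece in our correct voice.\n\n"
        f"{labeled}"
    )

prompt = build_voice_review_prompt(VOICE_GUIDE, [
    "Our platform delivers measurable results in weeks.",
    "OMG this tool is a total game-changer, no cap.",
])
print(prompt)
```

The point of scripting the prompt is consistency: a tested, approved prompt becomes a reusable team asset rather than something each person improvises.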
Reuters and the Speed Advantage
Reuters has used automated journalism tools since 2018, when it deployed its Lynx Insight system to help reporters identify story angles in large datasets. By 2023, Reuters was using AI to auto-generate first drafts of earnings reports within seconds of financial data becoming available — drafts that journalists then verified, enriched, and published. The result was a measurable competitive advantage in breaking financial news. Reuters wasn't replacing journalists; it was removing the mechanical first step so reporters could focus on interpretation, context, and the calls that turn data into news.
For marketing and communications teams, the Reuters model translates directly. Think of AI as the entity that drafts the first version of your monthly performance report, your post-event recap email, or your product update announcement — all of which follow predictable structures and draw from known data sources. Your team then adds judgment, nuance, and the organizational context that no model can access. The time savings are real: teams using AI-assisted drafting report 40-60% reduction in first-draft production time, according to McKinsey's 2023 State of AI report.
| Use Case | AI Handles | Human Handles | Risk to Watch |
|---|---|---|---|
| Earnings/performance reports | Data extraction, draft structure | Interpretation, tone, accuracy check | Numerical errors in AI output |
| Social media content | Volume, variations, scheduling copy | Brand voice, cultural sensitivity | Repetitive or off-brand phrasing |
| Personalized email campaigns | Segmentation copy, subject line variants | Strategy, offer logic, compliance | Over-personalization feeling invasive |
| Press releases | Boilerplate sections, SEO formatting | News judgment, spokesperson quotes | Generic language reducing pickup |
| Crisis communications | Background research, draft options | All final decisions and approvals | Speed pressure bypassing review |
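The "AI handles / human handles" split in the table above can be sketched as a mechanical first-draft generator that always flags its output for review. The field names and template below are illustrative assumptions, not Reuters' actual Lynx Insight schema.

```python
# Sketch of Reuters-style automated first drafting: a structured report is
# generated mechanically from known data, then explicitly flagged for the
# human verification step. Schema and wording are hypothetical.

from dataclasses import dataclass

@dataclass
class EarningsData:
    company: str
    quarter: str
    revenue_m: float        # revenue this quarter, in $ millions
    prior_revenue_m: float  # revenue last quarter, in $ millions

def draft_earnings_report(d: EarningsData) -> str:
    """Produce a templated first draft; humans own interpretation and accuracy."""
    change = (d.revenue_m - d.prior_revenue_m) / d.prior_revenue_m * 100
    direction = "rose" if change >= 0 else "fell"
    return (
        f"{d.company} reported {d.quarter} revenue of ${d.revenue_m:.0f}M, "
        f"which {direction} {abs(change):.1f}% from the prior quarter. "
        "[DRAFT - requires human verification: numbers, context, tone]"
    )

draft = draft_earnings_report(EarningsData("ExampleCo", "Q3", 520.0, 480.0))
print(draft)
```

The draft never ships as-is: the review flag is baked into the output, mirroring the "speed pressure bypassing review" risk in the table.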
The Solo Consultant Who Competes With Agencies
Consider a freelance brand strategist working with mid-sized B2B clients. Before AI tools, she could manage two clients simultaneously — research, strategy decks, messaging frameworks, and copy all took time she simply didn't have more of. With Claude for long-document synthesis, Perplexity for competitive research, and ChatGPT for copy drafts, she now manages four clients at the same billing rate. Her actual strategic work — the client interviews, the insight synthesis, the recommendations — takes the same time. Everything around it compresses. She's not doing less thinking; she's doing far less typing.
This pattern repeats across communications roles. A one-person PR team at a startup using AI tools can produce the volume of media materials that previously required an agency retainer. An in-house content team of three can publish at the cadence of a team of eight. The competitive implication is significant: organizational size is no longer a reliable proxy for content output. What matters now is how intelligently a small team deploys AI tools within a clear editorial process — which means strategy and taste have become the scarce resources, not headcount.
What the Sports Illustrated case, the Spotify DJ feature, and the Reuters automation model share is that they all required deliberate human decisions about where AI operates and where it doesn't. None of them succeeded or failed because of the technology alone. They succeeded or failed based on the editorial and strategic frameworks wrapped around the technology. That's the pattern that holds across every real-world example in marketing and media: AI expands capacity, but human judgment sets the boundaries that make that capacity trustworthy.
For professionals in marketing, media, and communications, the practical implication is that your most important AI skill isn't prompt writing — it's process design. Which tasks to route to AI, which outputs require human review before publication, and which decisions should never be delegated to a model: these choices determine whether AI makes your work better or just faster. Speed without accuracy loses audience trust. Volume without voice loses brand equity. These aren't new problems; AI just makes them arrive faster and at greater scale.
The professionals who will get the most from AI in this space are those who treat it as a capable but junior collaborator — one that needs clear briefs, consistent feedback, and firm guardrails. Claude, ChatGPT, and Gemini don't know your audience the way you do. They don't know your client's sensitivities, your publication's editorial standards, or the cultural context your campaign is landing in. You do. That asymmetry is your value. AI handles the labor of language; you supply the judgment that makes language matter.
Goal: Produce a practical, one-page AI content policy your team can actually use — one that clarifies roles, protects brand trust, and removes ambiguity about when and how AI tools are appropriate.
1. Open a blank document titled 'AI Content Policy — [Your Team/Organization Name]'.
2. List the five content types your team produces most frequently (e.g., social posts, email campaigns, reports, press releases, blog articles).
3. For each content type, write one sentence describing what AI can draft without restriction.
4. For each content type, write one sentence describing what must be human-written or human-verified before publication.
5. Write a single paragraph defining your team's disclosure standard — when and how you will indicate that AI assisted in content creation.
6. Add a 'Red Lines' section listing two or three content scenarios where AI should not be used at all (e.g., crisis statements, executive quotes, legal disclosures).
7. Write a one-sentence statement of your team's overall AI philosophy that could be shared with a client or stakeholder.
8. Share the draft with one colleague and ask them to identify any gaps or scenarios you haven't covered.
9. Save the final version as a standing team reference document and review it every six months.
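The policy those steps produce can also live as a small data structure, so a pre-publication check is unambiguous rather than a judgment call made under deadline pressure. The content types, rules, and helper below are illustrative assumptions, not a standard:

```python
# Sketch: an AI content policy captured as data, with a lookup that
# defaults to human-only for red-lined or unlisted content types.
# All entries are hypothetical examples.

POLICY = {
    "social posts":      {"ai_may_draft": True,  "human_step": "brand-voice review"},
    "email campaigns":   {"ai_may_draft": True,  "human_step": "offer logic and compliance check"},
    "press releases":    {"ai_may_draft": True,  "human_step": "news judgment and quote approval"},
    "crisis statements": {"ai_may_draft": False, "human_step": "human-written end to end"},
}

DISCLOSURE = "We disclose AI assistance whenever it materially shaped published content."
RED_LINES = ["crisis statements", "executive quotes", "legal disclosures"]

def may_use_ai_draft(content_type: str) -> bool:
    """Red-lined or unlisted content types default to human-only drafting."""
    if content_type in RED_LINES:
        return False
    return POLICY.get(content_type, {"ai_may_draft": False})["ai_may_draft"]

print(may_use_ai_draft("social posts"))       # True
print(may_use_ai_draft("crisis statements"))  # False
```

Defaulting unlisted content types to human-only is the conservative design choice: the policy only permits what it explicitly names.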
Lessons from the Field
- Transparency is now an editorial standard: audiences and regulators increasingly expect disclosure when AI has materially shaped content, and hiding that erodes trust faster than admitting it.
- AI amplifies your existing process quality — rigorous workflows scale well; corner-cutting workflows scale badly and quickly.
- The most effective AI deployments split labor clearly: AI handles volume and structure, humans handle voice, judgment, and cultural context.
- Personalization at scale requires editorial decisions about tone and trust boundaries — these can't be outsourced to the model.
- Organizational size no longer determines content output; the quality of your AI-integrated process does.
- Your prompt library is a strategic asset — teams that invest in tested, approved prompts for recurring tasks move faster and more consistently than those who start from scratch each time.
- The scarce resource in AI-assisted communications is not content production — it's the human judgment that makes content trustworthy and strategically sound.
Key Takeaways
- AI in marketing and media is an editorial and process challenge as much as a technical one.
- Disclosure and transparency are becoming baseline expectations, not optional best practices.
- Effective teams use AI for volume, structure, and first drafts — humans own voice, accuracy, and final judgment.
- Solo professionals and small teams can now compete on content output with much larger organizations.
- Your AI content policy is as important as your style guide — build it before you need it.
What was the core reason the Sports Illustrated AI content scandal damaged audience trust?
A marketing manager wants to use AI to draft personalized email campaigns for 12 audience segments. Based on the principles in this lesson, which task should remain entirely human-led?
Spotify's AI DJ feature succeeded partly because it maintained what the lesson calls 'calibration.' What does this mean in practice?
According to the Reuters example, what is the most accurate description of how AI changed their journalism workflow?
A solo communications consultant doubles her client load after adopting AI tools. According to the lesson, what does this reveal about the current competitive landscape?
