Issue #115 | The AI Issue


Happy Sunday, Everyone!

I hope you all are having a fantastic start to May (difficult to believe that we’re already more than 35% of the way through 2025). For me, it’s been an incredible week in South Africa - fantastic food, wonderful, welcoming people and a remarkable event (Converge 2025).

In a surprise to precisely no-one, the most-discussed topic among the speakers and the attendees was (by far) GenAI - followed somewhat distantly by tariffs and ad platform evolutions. What was surprising, though, was the growing chasm between the AI hype and the on-the-ground, in-the-marketing-department reality.

To put it simply, I think marketers (in-house and agency alike) have a signal-to-noise crisis:

GenAI has become the most effective smoke‑machine the tech industry has ever developed (and that’s saying something). Conference keynotes promise fully autonomous marketing departments. Press releases talk about “thinking agents” that will double revenue in a quarter. Meanwhile, walk the halls (or zoom with the practitioners) of any mid-to-large brand and you will still see marketers hand‑typing product copy, pulling Looker reports the old-fashioned way and fiddling around with tomorrow’s social graphics in Photoshop. While that may seem diametrically opposed to the Zeitgeist of the Twitter-verse (err…X-Verse), it’s the daily reality for most marketers. I’m aware that X-Verse sounds like a TV-MA website, but we’ll go with it.

This isn’t just opinion or crowd-sourced wisdom, either - the hard data is equally sobering.

Pew Research’s February 2025 workforce study shows almost two‑thirds of American employees rarely use artificial‑intelligence tools, and fewer than one in five uses them weekly. Surveys aggregated by the US Federal Reserve say the same thing at the enterprise level: only a single- to low-double-digit percentage (5% - 20%) of firms report what they call “meaningful adoption.” And the St. Louis Fed found that only 0.5% to 3.5% of work-hours nationally are “AI-assisted.” If those stats aren’t quite enough to persuade you, consider Sam Altman’s own claim: ChatGPT processes a billion messages a day. That sounds wildly impressive. But, when you break it down, it’s less so:

  • In its analysis of 80M+ clickstream records, Semrush found that the average ChatGPT “conversation” includes 8 messages.
  • Quick math (also done by Rand Fishkin at Friends of Search in March) says that 1,000,000,000 messages a day * 30% “search-like” intent rate / 8 messages/conversation = ~37,500,000 “searches” per day on ChatGPT. That’s a BIG number. But…it’s not that big.
  • Google, in contrast, processes 5 trillion (yes, with a “T”) queries per year. That’s ~14B searches per day. If you’re doing quick math, you’ve probably already come to this conclusion: Google processes roughly as many search-like queries in a single day as ChatGPT does in an entire year (quick sanity check below).
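
If you want to sanity-check that math yourself, it’s a few lines of arithmetic - a quick Python sketch, using only the figures cited above:

```python
# Back-of-the-envelope: ChatGPT "searches" vs. Google queries.
chatgpt_msgs_per_day = 1_000_000_000   # Altman's claimed daily message volume
search_intent_rate = 0.30              # share of messages with search-like intent
msgs_per_conversation = 8              # Semrush's average conversation length

chatgpt_searches_per_day = chatgpt_msgs_per_day * search_intent_rate / msgs_per_conversation
google_searches_per_day = 5_000_000_000_000 / 365  # 5T queries/year

print(f"ChatGPT 'searches' per day:  {chatgpt_searches_per_day:,.0f}")        # 37,500,000
print(f"Google searches per day:     {google_searches_per_day:,.0f}")         # ~13.7B
print(f"ChatGPT 'searches' per year: {chatgpt_searches_per_day * 365:,.0f}")  # ~13.7B
```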

None of this is to say that GenAI tools aren’t impressive, powerful and likely to play a massive role in the future of work and commerce. They are. But, as every coach in history has said: potential does not equal production. And, as of today, the marketing story around GenAI is running laps around the implementation reality.

While that may sound negative, I think the opposite is true. The opportunity remains. We’re still in the early innings. If you haven’t adopted these tools or integrated GenAI into your processes, team and workflows, there is still time to do so while remaining well ahead of the curve.

Four Cohorts That Predict Your Future More Reliably Than Any Vendor Road Map

One of the most useful mental models floating around the industry splits talent into three buckets: heavy users, daily users and dabblers. I see a fourth camp every time I audit a client, and it is usually the group creating the loudest noise internally. Put the four together and you get a diagnostic tool that tells you more about your readiness than any slide deck ever will.

Category 1: Power Users. These are the builders. They keep their prompt libraries in Git, version prompts the way software engineers version code, and wire retrieval‑augmented generation into core processes. These are the people actively looking for avenues to integrate GenAI tools deeper into company processes and to turbo-charge their own productivity. Today, this group is less than 1% of the workforce, but it is driving both the hype train and the productivity train forward.

Category 2: Everyday Operators. These individuals treat GenAI like a seat belt: automatic, invisible, essential but not transformative (at least, in isolation). They are more productive now than they’ve ever been, but that’s mostly a product of modest productivity increases and a superior work ethic. Some of the underlying research, part of the outline and perhaps a draft headline are all generated from a model, but the final copy, voice and tone are still theirs. Meeting notes, translation clean‑ups, evergreen keyword clusters: all accelerated, none abdicated.

Category 3: Casual Experimenters. Think weekend Peloton warriors. Enthusiastic bursts after reading an AI-thoughtboi LinkedIn post, followed by long stretches of inactivity, far too many screenshots and, ultimately, very little net productivity gained. They can spot‑solve a problem or create a piece of content with GenAI, but rarely build sustained muscle.

Category 4: Skeptical Holdouts. Intelligent, experienced and often the most vocal. Their pushback can be useful. Hallucinations are real, model drift is a thing, not all problems are well-suited to probabilistic models, finding product-market fit is difficult. But the objection is usually ideological first and empirical second. Most of the people in this group have never spent an afternoon playing around with ChatGPT or Mistral or Gemini, and it shows.

An organization or marketing team composed of Category 1 players will rapidly build + fortify moats. Category 2 delivers steady efficiency but can still be copied. Category 3 burns discretionary budget on novelty. Category 4 quietly donates market share to whoever is brave enough to automate the grunt work.

To me, the single most important leadership task for the next twelve months is to migrate talent north on that continuum.

Capability Curves Are Crooked, Not Hockey Sticks

The marketing industry loves a simple, linear narrative: double the parameters (read: power) of a model and the errors fall by 50%. It seems logical and sounds compelling - but it’s completely false. A recent article in New Scientist found that the most advanced models - o3, Gemini 2.5, DeepSeek R1 - all had triple-digit increases in hallucination rates vs. their predecessor models. OpenAI’s o3, for instance, had a 33% hallucination rate - a 106.25% increase vs. o1 (16%).

This isn’t just limited to basic use-cases, either - a systematic review of medical citations from 2024 found that the latest OpenAI flagship models invented 28.6% - nearly 3-in-10 - of their references. In legal benchmarking, a Stanford University study found hallucination rates comparable to the medical review (17% - 34%), and those rates increased as prompts became more complex and nuanced.

At the same time, some tasks, such as translation between high‑resource languages, classification of user‑generated content, image creation, research and multi-step reasoning are getting shockingly good. Progress is jagged, domain‑specific, and occasionally regressive. Treat every major model release like an experimental drug: read the label, run a controlled trial, and reserve the right to downgrade if side‑effects spike.

The great news for marketers is that many of the “skills” where GenAI systems are improving - research, image generation, ideation, copy creation, audience insights/research, categorization, pattern recognition - are the very ones marketers use in their day-to-day work. And, unlike errors in law or medicine, marketing errors are far less consequential (in most cases).

For better or worse, GenAI is here to stay. It will get better, though it’s unlikely to reach “superintelligence” without multiple new breakthroughs – and predicting those is about as reliable as predicting tomorrow’s Powerball numbers.

SPEED: The Operating System for Human + Machine

Over the past two years I have watched brilliant teams sabotage themselves by treating a language model as the new boss instead of the new power tool. The quickest diagnostic I know is the SPEED framework: five checkpoints that keep responsibility where it belongs and leverage where it matters.

Strategy comes first and stays human. Decide the hill worth taking, the customer you intend to delight, the constraint/rule you refuse to break. Then ask the model to surface hidden patterns in your purchase data or your brand’s comment threads that validate or challenge your thesis.

Planning is where resource trade-offs live. Channel weights, flighting windows, risk buffers all remain leadership (read: human) territory. Let a GenAI system act as your Avengers Dr. Strange, running thousands (or millions) of scenarios so you can uncover edge cases a human is likely to miss. Armed with those insights, create a one-page rationale for your chosen plan/strategy. The goal is to ensure that everyone understands why you made the choice you did.

Execution is the assembly line: drafting copy, resizing assets, clustering keywords, scrubbing call transcripts. Give the machine a style guide and a red-team checklist, then let it burn through the monotony. Reinvest the hours you get back into sharper angles, richer stories, more robust landing page development, or anything else you believe will move the needle.

Evaluation is where most roll-outs stall. People report vanity metrics, models spit out scorecards, and no one links the two. Flip it: leadership sets three non-negotiable KPIs, then has the model mine all of the available data for anomalies, causal hints and next questions. The goal should be for GenAI to do the number-crunching grunt work, so your smart people can spend more of their time interpreting, contextualizing and transforming those numbers from reporting to real insights that help your (or your client’s) brand move forward.

Development is what transforms one-off GenAI interactions into workflows, and previously skeptical team members into power users. The goal should be to demonstrate the power of these tools, then democratize their adoption. As a leader, this is the biggest pain point: how do we encourage every team member to use these tools, share their findings (good and bad), and be open to making new mistakes? What do we have to do to convince people that these tools are not here to replace them - but that someone who uses these tools will, if they don’t start?

Without fail, every time I’ve heard about a GenAI deployment/project gone wrong, the cause was the same: somewhere along the chain, intention was handed to the model and humans were demoted to janitors. Keep SPEED in order and you can mitigate a significant portion of the risk associated with deploying these tools.

Proven, Emerging, Speculative: Where to Aim Your Next Dollar

I divide commercial use‑cases into three bins:

Bin #1: Proven.

This is (to be frank) boring stuff - tCPA/tROAS bidding, Meta’s Advantage+ campaigns, routine if/then workflows, predictive churn models that have been around since 2018 but suddenly feel exciting because an LLM writes the commentary. If you are a mature brand and these workflows are still in pilots, the issue is decisiveness, not due diligence. Other examples that fall into this category:

  • Advertising optimization: Performance Max, Advantage+, and retail-media smart bidding that tunes placements, budgets and creative sequencing while your media buyer sleeps.
  • Search and site health: modern SEO suites that use LLMs to surface intent clusters, analyze heatmap data, identify content gaps, write schema, and review landing pages for opportunities.
  • Real-time personalization: recommendation engines that rewrite the homepage on a visitor’s third click, swap hero images by cohort, and bump margin through segmented/tailored pricing.
  • Predictive analytics: lead scoring, customer-lifetime-value forecasting, propensity models, etc. Basically - all the number-crunching grunt work that your team currently has to do, but likely doesn’t do as often as they should.
  • Conversational support: tier-one chatbots that handle password resets, order tracking and lead qualification 24/7, freeing humans for nuance.
  • Audience Insights: often done once (or, at best, once a year) - this is an opportunity to get more (and more current) insights on your target audiences, from what they like and where they interact, to how they speak and who else they’re considering.
  • BI accelerators: natural-language querying that lets a VP ask, “Show me gross margin by SKU versus ad spend week-over-week,” and get a chart before the latte cools.
  • Competitive Intelligence: the thing that everyone - from the CEO + CMO to the media buyer - really wants to know: what’s our competition doing? The reality is that most brands don’t do this nearly enough, because it’s manual, tedious and there simply isn’t time. Fortunately, this can be automated relatively easily, the results piped into a shared Google Sheet, and that sheet analyzed to uncover patterns in the discounts/offers your competitors are running (a minimal sketch follows this list).
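
To make the competitive-intelligence point concrete, here’s a minimal sketch of that automation - assuming a gspread service account is already configured, with placeholder URLs and a deliberately naive offer-extractor (in practice, you’d hand the page text to your model of choice):

```python
# Minimal sketch: pull competitor promo pages and log any offers to a shared
# Google Sheet. URLs, the sheet name and the extractor are placeholders.
import datetime
import requests
import gspread

COMPETITOR_PAGES = {
    "Competitor A": "https://example.com/promotions",
    "Competitor B": "https://example.org/deals",
}

def extract_offers(page_text: str) -> str:
    # Naive stand-in: keep lines that look like discounts. In practice, send
    # the page text to an LLM with a prompt like "list every offer you see."
    hits = [ln.strip() for ln in page_text.splitlines()
            if "%" in ln or "off" in ln.lower()]
    return " | ".join(hits[:10]) or "No offers detected"

gc = gspread.service_account()               # reads your service-account JSON
sheet = gc.open("Competitive Intel").sheet1  # the shared tracking sheet

for brand, url in COMPETITOR_PAGES.items():
    text = requests.get(url, timeout=30).text
    sheet.append_row([datetime.date.today().isoformat(), brand, extract_offers(text)])
```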

Bin #2: Emerging Edge.

This is where the fun starts. Campaign briefs drafted by agents that understand your historical conversion data, media analysts writing SQL with a copilot, computer‑vision quality control running in near real‑time on the factory floor. Pick one, set a hard service‑level target - cycle time down by 30% or creative‑testing velocity up by a factor of 3 - and give the experiment 90 days. If it does not hit the number, kill it and move resources to the next hypothesis. Other ideas that might fall into this bucket include:

  • Agent-drafted campaign briefs that convert your first-party data into targeting grids, spend ladders, and testable creative hooks - planners become editors, not scribes.
  • Multivariate creative engines like CreativeOS or Midjourney V6 that can generate 100s of on-brand visuals and copy variants, pre-scored for predicted engagement. Many more “available” tools like Gemini + ChatGPT can be used for copy ideation and headline generation at scale - with a human reviewing, of course.
  • Code and data copilots for analysts - SQL autocompletion, anomaly sniffing, automated QA - so “spreadsheet archaeology” stops throttling real-time decisions.
  • Customer-journey orchestrators that choose next-best-action across email, SMS, ads, and on-site modules. This transforms personalization into an integrated experience vs. a channel-specific silo.
  • Predictive trend engines mining earnings calls, customs manifests, Google search data, news articles, etc. to flag micro-categories months before your syndicated report.
  • Brand-risk early-warning systems that sweep social and fringe channels for mention spikes, sentiment shifts or other oddities, then feed alerts straight to your Slack/inbox. If nothing else, these can easily replace the $30k to $100k “brand monitoring” systems that currently gobble enterprise-level budgets despite offering prehistoric capabilities.
  • AI compliance sentries scanning every ad, landing page, and product claim against FTC, FDA, or vertical-specific rule sets. This is another simple automation, but one that can serve as a warm, comforting safety blanket for regulated sectors.
  • Synthetic human panels that let your team pre-test positioning, packaging, or pricing at 1/100th the cost and 100× the speed of live research panels.

Bin #3: Speculative + Strategic Opportunities.

This is the “future-y” stuff - fully autonomous chief marketing officers, hyper‑real virtual influencers negotiating their own sponsorships, quantum‑accelerated trend scanning. Read the papers, keep a hobby repo, but do not budget anything your comptroller would notice.

  • Autonomous “micro-marketer” agent stacks that own and optimize a single channel end-to-end before graduating to higher levels of control. These are amazing in theory, but in practice come with significant drawbacks and risks. Also - most of the “replace your CMO” platforms really don’t understand what a CMO does, so replacing them will be difficult.
  • Hyper-real virtual influencers that pass photoreal Turing tests and localize hooks/scripts in minutes. This is (potentially) the next “big thing” in creator content - especially since it doesn’t come with any of those annoying royalty fees or unresponsive creators.
  • Emotion-adaptive experiences. I’ve seen several of these, which claim to adjust copy, color, or other content (UGC, photos) in real-time based on camera or voice sentiment. It’s super creepy. It’s legally ambiguous. It’s not worth investing in right now.
  • Predictive culture-trendspotting fusing Nielsen panels, Discord transcripts, and geo-social data to flag fringe memes before they mainstream.
  • Privacy-preserving joint data models. For the data-science nerds, these are federated-learning frameworks that let non-competing brands pool insights without sharing raw customer data.

Your goal should be to implement everything that’s possible in Bin #1 immediately – this isn’t controversial stuff, and these items have a high probability of improving your margins, increasing efficiency and making your team’s life better. The risk/reward profile for these items skews heavily into the “reward” camp - and the biggest risk (to be exceedingly blunt) is your getting fired for not doing them.

After you’ve accomplished that, move into the Emerging Edge category. Most of this can be done with a basic implementation (I’ve outlined a basic process below).

Retrieval‑Augmented Generation: The Safety Valve Most Teams Skip

Retrieval‑augmented generation (RAG) is a simple trick: force the model to ground answers in a trusted vector store before it opens its synthetic mouth. Spin up an open‑source database, chunk your policy PDFs into embeddings, and hand the model a “cite or stay silent” instruction. For two United States dollars an hour in graphics‑processing power you slash hallucinations, maintain version control over the truth and give your legal team a digestible audit trail. No enterprise‑scale rollout should go live without this valve in place.
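
Here’s a minimal sketch of that loop - toy chunks in place of your policy PDFs, sentence-transformers for the embeddings, and plain cosine similarity standing in for a production vector database like Chroma or pgvector:

```python
# Minimal RAG sketch: embed policy chunks, retrieve the closest ones for a
# question, and force the model to answer only from those chunks.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# In production these chunks come from your policy PDFs; toy examples here.
chunks = [
    "Refunds are issued within 14 days of purchase with a valid receipt.",
    "Our loyalty program awards 1 point per dollar spent.",
    "Shipping to Canada takes 5-7 business days.",
]
chunk_vecs = model.encode(chunks, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k chunks most similar to the question (cosine similarity)."""
    q_vec = model.encode([question], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q_vec  # dot product == cosine on unit vectors
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]

def build_prompt(question: str) -> str:
    context = "\n".join(f"[{i+1}] {c}" for i, c in enumerate(retrieve(question)))
    return (
        "Answer ONLY from the sources below. Cite the source number, "
        "or reply 'I don't know' if the sources don't cover it.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("How long do refunds take?"))
```

This is the whole “cite or stay silent” valve in miniature: the model never sees a question without the retrieved, versioned sources attached.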

Implementation Without the Buzzwords

Start by identifying the “ripe-for-automation” tasks. The easiest way to do this, if you track hours, is to pull your team’s time logs for the last 90 days. Find the 20(ish) tasks that consume the most hours, then score them for frequency, complexity/nuance, cognitive drag, risk (brand, financial, etc.) and seniority (i.e., how senior the people doing the task are). If you don’t have an hours system, create an internal survey that asks each person for the 5-10 regular/repetitive tasks they loathe the most, then score them using the same framework:

  • Frequency: 0 (one-off or exceedingly uncommon) to 10 (done daily/weekly)
  • Complexity/Nuance: 0 (highly complex and nuanced) to 10 (templated and simple)
  • Cognitive Drag: 0 (virtually none) to 10 (a royal PITA)
  • Risk: 0 (an error is catastrophic) to 10 (very low-stakes; errors are easily reversed)
  • Seniority: 0 (often done by interns) to 10 (often done by your senior-most people)
  • Add the scores together, and the tasks with the highest score(s) are the ones most ripe for automation (a quick scoring sketch follows this list).
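
If it helps to operationalize the rubric, the scoring is deliberately trivial - a sketch with hypothetical example tasks and scores:

```python
# Rank tasks by automation-readiness using the 0-10 rubric above.
# Tuples: (frequency, simplicity, cognitive_drag, low_risk, seniority).
# All example tasks and scores below are hypothetical.
tasks = {
    "Search terms report review": (9, 8, 9, 8, 3),
    "Monthly reporting email":    (8, 7, 8, 7, 6),
    "Quarterly brand strategy":   (2, 1, 3, 2, 10),
}

for name, scores in sorted(tasks.items(), key=lambda kv: sum(kv[1]), reverse=True):
    print(f"{sum(scores):>2}  {name}")  # highest total = ripest for automation
```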

A practical example: search terms report (STR) management. Anyone who runs Google Ads knows that search terms management is simultaneously tedious and essential. No-one likes doing it, it can take hours a month for even lower-spending accounts, the risk is limited (at worst you remove the conflicting negative KW), and 95% of queries can be analyzed by a junior marketer (there are always 5-10% that require significant knowledge or interpretation). This is an ideal task to be thrown to GenAI – something that we’ve done. Our process was rudimentary at first, but it worked: we exported both the “positive” KWs and the STR to Google Sheets, created a prompt that explained the brand + target audience, and asked the model to assign a “score” to each search term, from 0 (not at all relevant to any keyword; exceptionally high probability the query should be excluded) to 10 (highly relevant or identical to a keyword; the query should be added to the campaign if not already live in the account). Once this ran, a person reviewed what the model produced, created a final negative list, and uploaded it. Result: a boatload of hours saved.
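
If we were rebuilding that rudimentary process today, it might look like the sketch below, using the OpenAI Python SDK as one option. The model name, brand blurb and keywords are all placeholders, and a human still reviews every score before anything is uploaded, exactly as in the original workflow:

```python
# Minimal sketch: score a Google Ads search term 0-10 for relevance.
# Assumes OPENAI_API_KEY is set; brand context and model are placeholders.
from openai import OpenAI

client = OpenAI()

BRAND_CONTEXT = (
    "Brand: Acme Running Co., a DTC performance running-shoe brand. "
    "Target audience: serious recreational runners, ages 25-45."
)

def score_search_term(term: str, keywords: list[str]) -> str:
    prompt = (
        f"{BRAND_CONTEXT}\n\nActive keywords: {', '.join(keywords)}\n"
        f"Search term: {term}\n\n"
        "Assign a relevance score from 0 (irrelevant - should be excluded as "
        "a negative) to 10 (identical/highly relevant - add as a keyword if "
        "not live). Reply with the number and a one-line reason."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever tier you've approved
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(score_search_term("best carbon plate running shoes",
                        ["running shoes", "carbon plate shoes"]))
```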

Next: prove the concept. You just identified the biggest time-sinks for your team. There’s a temptation to immediately go to “build out AI infrastructure!” - but, let’s pump the brakes. Before you brief DevOps or draft a Cloud GPU purchase order, prove value in a single day with something everyone already knows how to use, like ChatGPT or Gemini.

Your goal should be to start small, but replicate reality. Grab a high-scoring task from above, such as rewriting product-spec boilerplate, stitching three dashboards into one status email, or translating a 40-minute sales call into Salesforce notes.

Next, record your team doing the task from end-to-end: what do they do? Where do they source data? What is their process? Document everything, including the total time to complete the task. Your goal should be to understand each step of the process in detail, so you can communicate those details to the model and design the relevant automations.

Once you have that, purchase a handful of “Pro” licenses (the cost is less than what you spent on last Friday’s team pizza lunch that no-one liked anyway) and give the AI the task breakdown (from above), along with the same raw inputs, style guide, product info, and whatever else your team faces every week.

Ask the model to work the project end-to-end while a Loom recorder runs in the corner. Do not optimize prompts, do not add Zapier glue code. Treat the model like an eager intern who’s read the manual but never touched your business. Measure the time it takes ChatGPT or Gemini to accomplish the task, and compare it with what you found in your initial audit of the task, review of the time logs and/or the self-reported time from the survey. This is inherently an unfair comparison: this was the model’s first time doing the task, and (presumably) your team’s thousandth – so you’re comparing a rookie to an old pro. This is the worst-case scenario for the GenAI model.

Next, note every error in the output. Were these trivial “curly quotes” and brand-tone tweaks, or did the model invent French sizing conversions or manufacture performance data that never existed in the source reports or dashboards? If the model is making substantial errors (i.e. inventing sources or completely fabricating data), that’s a big problem. If the errors are mostly cosmetic, that’s far less of a problem.

Finally, translate the time delta into reclaimed senior-talent hours. If the task previously took your Senior Digital Strategist 4 hours per week, and now s/he only needs to spend 30 minutes reviewing the output from the system, that’s 3.5 hours saved every week.

Calculate that role’s hourly rate (total comp / 2,040), multiply the two together, and voila! Instant economic impact. If your senior strategist makes $100,000 per year + $35,000 in taxes/fringe benefits, that’s $66.18/hr. Reducing the time spent from 4 hours to 30 minutes therefore saves about $232 per week, or roughly $12,044 per year. Not too shabby.
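
The same math, scripted so you can drop in your own comp and time figures:

```python
# Translate reclaimed hours into dollars, using the example figures above.
total_comp = 100_000 + 35_000    # salary + taxes/fringe benefits
hourly_rate = total_comp / 2040  # ~= $66.18/hr
hours_saved_per_week = 4 - 0.5   # 4 hours of doing -> 30 minutes of reviewing

weekly = hourly_rate * hours_saved_per_week
print(f"${weekly:,.2f}/week -> ${weekly * 52:,.2f}/year")  # $231.62 -> $12,044.12
```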

Candidly, if this exercise buys back even half a day per week from a mid-level copy lead, you just uncovered the budget line to fund a more robust deployment. The best part is that the process you’ve just undergone provides the proof – you can show the time saved, the deliverable quality, the rudimentary process used (which can, presumably, be improved quite significantly) and the economic impact of implementing this for the team/company/organization.

Walk that evidence into the next leadership/IT meeting and the conversation flips from, “Why are we doing this?” to “How fast can we roll this out to the full team in a repeatable, secure and optimized way?”

And if the lift is negligible? Celebrate the cheap failure. You saved yourself three weeks of procurement theatre and proved the workflow needs data hygiene, not more compute. Either way, you replaced opinion with evidence, which is exactly what separates a leadership team that talks from one that walks.

Next - a private sandbox. Azure OpenAI, Gemini via Google Cloud in a dedicated tenancy or, if budgets are tight, an open‑weight model behind a virtual private cloud. Nothing leaves the company’s jurisdiction, every prompt is logged, and rollback is one environment variable away. This is the setup that allows for more robust integrations (i.e. API calls that automatically feed data from your systems into the model and back again), improved security and far more control. It sounds complex, but it really isn’t – it’s (for all intents and purposes) much the same underlying process in a more structured and secure environment.

Prompts should be cataloged and organized. If you have a design‑system repo, the prompt guide lives next to the brand style guide and voice and tone documents. My personal preference is to upload all prompts to a Notion database, along with notes on their use – this makes it easy for the full team to access and comment, while keeping everything searchable and organized.

My preference for governance is a red‑team/green‑team loop. The green team creates with GenAI; the red team reviews for hallucinations, bias, errors and brand/compliance/legal holes. Nothing gets rolled out to end users or clients until both groups are satisfied with the outputs.

Finally, we optimize for time saved, deliverable accuracy or economic impact. If an automated workflow cannot post a significant (20% or better) improvement in at least one of those metrics, document the points of failure, then place it on the back burner. Celebrate the win (saying “no” to something that doesn’t work quickly is a win - don’t ever forget it), then pick up the next task from the list above.

Culture Eats Model Architecture for Breakfast

Technology rarely kills these projects - behavior does. That is why your Chief People Officer (or whatever we’re calling the head of HR these days) suddenly matters as much as your Chief Technology Officer. Institutionalize an AI “standup” - a weekly 15-to-30-minute meeting where your team demos what worked and (especially) what flopped. Ask attendees for feedback. Encourage skeptical team members to share their pain points/frustrations. Invite everyone in, and celebrate the wins, no matter what form they come in – a quick no is better than a long yes.

This exercise is what moves the Casual Experimenters into Everyday Operators and drains fear from Skeptical Holdouts by showing - not telling - what the tools actually do.

Five Brutal Truths to Keep Handy

  1. You will not solve adoption by “hiring an AI person.” Fluency must spread horizontally, the way Excel did 20-30 years ago.
  2. Bigger models are not automatically safer. Hallucination is probabilistic, not moral. Guardrails (+ humans) are what will save you from errors. Remember the SPEED framework.
  3. Waiting for perfect intellectual‑property clarity is a seductive excuse for inertia. Private tenancy plus encryption will cover most of your exposure. Risk/Reward is a balance, not an ultimatum.
  4. Return on investment is not fuzzy - your instrumentation is. Measure hours saved, errors prevented and revenue added. Focus on using GenAI to improve your efficiency and production.
  5. Head‑count rarely drops; it shifts. GenAI does supercharge productivity - but virtually no organization has a marketing team with no backlog. If 1 marketer can do the work of 5, then an AI-enabled team of 5 can do the work of 25. Jevons’ Paradox is a real thing.

One Hour, One Habit

Look at tomorrow’s calendar. Highlight a task that burns 30 minutes of senior talent. Open up ChatGPT or Gemini, write a five‑line prompt, pair the sharpest mind in the room with the model and push to production. Measure the time saved. Rinse, repeat, iterate, evolve.

Velocity still beats perfection. Always has, always will.

The simple truth is this: we’re all still early to this technology. The chatter about what these tools can do far exceeds what they actually are doing for most organizations on most days – and that’s where the opportunity lies.

Instead of worrying about falling behind, just start. As the Twitter Bois say, “You can just do things.”

Until next week,

Cheers,

Sam

