Issue #153 | ChatGPT Ads Are Here. The Research Says Rankings Don't Matter. Now What?


Happy Sunday, Everyone!

I hope you're all having a fantastic final week of January. If your inbox looks anything like mine, it's been flooded with hot takes about OpenAI's announcement that ChatGPT ads are coming. Everyone has an opinion. Let’s be honest: most of them are wrong.

First, the facts: OpenAI announced they will be testing ads in the US for free users and on their new $8/month "Go" tier. The operative phrase is "will be" - the ads aren't live yet. They'll be rolled out "in the coming weeks" (in Sam Altman time, which could well be a few months in real-person time) - but the X hype machine is already in overdrive.

When they do launch, ads will appear at the bottom of answers and will (allegedly) be clearly labeled as sponsored. Plus, Pro, Business, and Enterprise plans will all remain ad-free. And OpenAI is making some big promises about how this will all work.

What OpenAI Is Saying

Sam Altman, on X: "We will not accept money to influence the answer ChatGPT gives you, and we keep your conversations private from advertisers. It is clear to us that a lot of people want to use a lot of AI and don't want to pay, so we are hopeful a business model like this can work."

Fidji Simo, OpenAI's CEO of Applications, framed it differently: "AI is reaching a point where everyone can have a personal super-assistant that helps them learn and do almost anything. Who gets access to that level of intelligence will shape whether AI expands opportunity or reinforces the same divides."

OpenAI outlined five principles for their ad approach: mission alignment, answer independence, conversation privacy, choice and control, and long-term value. The key promise: "Ads do not influence the answers ChatGPT gives you." This is, in many ways, the same stance Google has maintained forever: Google Ads and Google Organic are entirely separate entities (though that firewall had some water poured on it during the Google antitrust case this past year).

Now, let's be clear about OpenAI's financial picture. The company is not profitable. According to Microsoft's Q3 earnings, OpenAI lost a dizzying $11.5B in Q3 2025 alone. Internal documents (optimistically) project a $14 billion loss in 2026, which would bring cumulative losses to $44B-$50B - and with an expected $50-60B in further losses after that, total cumulative losses reach roughly $115B before the company turns profitable in 2029. Obviously, OpenAI is in a unique position: they're incinerating money, but they also have a $250 billion Azure commitment with Microsoft representing roughly 45% of Microsoft's $625 billion cloud backlog, and they've generated ~$20 billion in annualized revenue (2025 numbers). They have the resources - but they can't continue to lose $7.70 for every $1 in revenue forever.

That makes the ad math simple. OpenAI has ~800M users, 95% of whom don't pay. Ads are the only near-term way to boost revenue per user (RPU) without selling more subscriptions.
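To put rough numbers on that, here's the back-of-envelope version. The user counts come from above; the ad load is a made-up assumption, and the $60 CPM is the reported asking price I'll get to later in this issue:

```python
# Back-of-envelope ad math. Only the user counts come from the paragraph above;
# the ad load is a hypothetical assumption and the CPM is the reported asking price.
free_users = 800_000_000 * 0.95           # ~800M users, ~95% on free/ad-supported tiers
ads_per_user_per_month = 30               # hypothetical: roughly one ad impression per day
cpm = 60                                  # reported asking price per 1,000 impressions

monthly_impressions = free_users * ads_per_user_per_month
annual_ad_revenue = monthly_impressions / 1_000 * cpm * 12

print(f"Monthly impressions: {monthly_impressions:,.0f}")              # ~22.8 billion
print(f"Implied annual ad revenue: ${annual_ad_revenue / 1e9:.1f}B")   # ~$16.4B
```

Under those (made-up) assumptions, ads pencil out to real money against that ~$20 billion revenue base - which is exactly why this is happening.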

Following the announcement, the marketing world immediately split into two camps: those rushing to figure out how to "win" at ChatGPT ads, and those dismissing the whole thing as irrelevant. Both are missing the point.

This week, I want to talk about what ChatGPT ads actually mean for marketers, and why some new research from Rand Fishkin should fundamentally change how you think about AI visibility. If you're already spending money on "AI tracking" tools or planning to pour budget into ChatGPT ads the moment they open up, this is worth your time to read.

The Research That Should Make You Pause

Last week, Rand Fishkin (of SparkToro fame) published research that should be required reading for anyone thinking about AI advertising. The headline finding is brutal for anyone selling "AI rankings" as a service:

If you ask ChatGPT for a list of brand recommendations 100 times, there's less than a 1 in 100 chance you'll get the same list twice.

Let that sink in.

Rand and his research partner ran thousands of prompts across ChatGPT, Claude and Google's AI Overviews. They tested everything from chef's knives to cancer hospitals to digital marketing consultants. Six hundred volunteers ran 12 different prompts through each of the three tools - nearly 3,000 runs in total. The results were consistent across categories: AI tools produce essentially randomized lists every single time.

It gets worse.

When it comes to the order of recommendations (the thing most "AI ranking" tools claim to track), the odds of getting the same order twice are less than 1 in 1,000. Not 1 in 100. One in a thousand. That’s (roughly) the odds of the ball landing on 2 specific numbers on a European roulette wheel on consecutive spins.
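If you want to sanity-check this kind of claim in your own category, the measurement itself is simple. Here's a minimal sketch - not Rand's actual methodology, just the general idea: collect the recommendation lists from repeated runs of the same prompt, then count how often any two runs agree on the brands, and on the order.

```python
from itertools import combinations

# Hypothetical recommendation lists collected from repeated runs of one prompt.
runs = [
    ["Bose", "Sony", "Sennheiser", "Apple"],
    ["Sony", "Bose", "Apple", "JBL"],
    ["Bose", "Sony", "Sennheiser", "Apple"],
    ["Sennheiser", "Sony", "Bose", "Beats"],
]

pairs = list(combinations(runs, 2))
same_brands = sum(set(a) == set(b) for a, b in pairs)   # same brands, any order
same_order = sum(a == b for a, b in pairs)              # same brands, same order

print(f"Pairs with identical brand sets: {same_brands}/{len(pairs)}")
print(f"Pairs with identical order:      {same_order}/{len(pairs)}")
```

Run that against a few dozen real responses and you'll likely see the pattern Rand describes: heavy overlap in the brand sets, but exact matches are rare.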

If you've been paying for tools that promise to track your "ranking position in AI," I have bad news: those metrics are, in Rand's words, "full of baloney."

This isn't a minor methodological quibble. Companies are already estimated to be spending $100M+ annually on AI visibility tracking. And the fundamental premise - that there's a stable "ranking" to track - appears to be false.

The research also surfaced something else worth noting: the way people prompt AI tools is wildly different from how they search Google. When Rand asked 142 people to write prompts about the same topic (buying headphones for a family member), the semantic similarity between their prompts was almost zero. People don't reduce their queries to keywords anymore; they get creative, specific and weird. That may sound like a minor thing, but it's not. One of the reasons search results are as stable as they are is that the inputs are relatively stable: millions of near-identical queries ("Best Family Dinner Recipes," "What is the best blender," "Best Advertising Agency for [Industry]") are entered into Google every month. The results for those queries are relatively stable, which helps visibility tools predict and extrapolate rankings. But with AI searches, that stability evaporates.

But Wait - AI Visibility Isn't Entirely Useless

Here’s where the nuance matters. While rankings are essentially random, visibility percentage is not.

When Rand's team ran the same prompts dozens of times, certain brands consistently showed up more often than others. City of Hope hospital appeared in 97% of ChatGPT's responses about West Coast cancer care. Bose and Sony appeared in 55-77% of headphone recommendation responses.

The AI's consideration set - the pool of brands it draws from - is relatively stable. What's random is which brands from that pool get surfaced in any given response, and in what order.

This distinction matters enormously:

  • Tracking "rank position" = meaningless (the position changes every time)
  • Tracking "visibility percentage" = potentially useful (how often you appear across many prompts)

The implication? You can't game your way to the #1 position in AI responses, because there is no stable #1 position. But you can work to get into the consideration set - to be one of the brands the AI draws from when answering relevant queries.

And that's a fundamentally different game than SEO or paid search.

Why This Changes Everything About "AI Advertising"

That little detour through Rand's research wasn't random - there's a fascinating connection between how AI tools generate their organic answers and what that means for ads.

OpenAI has been very clear about one thing: ads will not influence ChatGPT's answers. The organic response is the organic response. Ads appear separately, at the bottom, and will be labeled as such.

This is a fundamentally different model than Google Search, where paid results often appear above organic results (with a variety of different layouts) and (essentially) compete for the same real estate.

In ChatGPT's model, you have two distinct surfaces:

  1. The organic answer: where your brand either appears or doesn't, based on whatever's in ChatGPT's training data and retrieval systems
  2. The ad unit: where you can pay to appear, but separately from the answer itself

This creates an interesting dynamic. If Rand's research is right, then it's not possible to reliably "rank" in the organic answer... but you can pay to appear in the ad unit. The question is: will users trust ads that appear below an AI-generated answer?

My guess? Trust will be lower than it is for traditional search ads, at least initially. When someone asks ChatGPT for a recommendation and gets an answer, then sees a "Sponsored" result below it, the implicit message is: "Here's what I actually think, and here's what someone paid me to show you." That's a tough sell.

The Pricing and Measurement Problem

Here's where it gets really interesting: according to The Information, OpenAI is asking advertisers to pay $60 per 1,000 views (CPM) for ChatGPT ads. For context, a Super Bowl spot runs about $63 CPM ($8 million for 127 million viewers).

Let me say that again: ChatGPT ads cost roughly the same per thousand eyeballs as the Super Bowl.

For your $63 Super Bowl CPM, you get 30 seconds of video during the most-watched event of the year in the US (sorry, Winter Olympics), complete with celebrity cameos, a Budweiser Clydesdale and water-cooler conversation for the next month. For your $60 ChatGPT CPM, you get a text ad that appears below the AI's actual recommendation. Oh, and initial advertisers are being asked to commit at least $200,000 for ChatGPT ads.
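If you want to check the math, CPM is just cost per 1,000 impressions - so both the Super Bowl comparison and what that $200,000 minimum actually buys fall out of a couple of lines (figures from the reporting above):

```python
def cpm(total_cost, impressions):
    """Cost per 1,000 impressions."""
    return total_cost / impressions * 1_000

# Super Bowl comparison from the figures above: ~$63 CPM
super_bowl_cpm = cpm(8_000_000, 127_000_000)

# What the reported $200k minimum buys at a $60 CPM: ~3.3M impressions
impressions_bought = 200_000 / 60 * 1_000

print(f"Super Bowl CPM: ${super_bowl_cpm:.2f}")
print(f"$200k at a $60 CPM buys ~{impressions_bought:,.0f} impressions")
```

Roughly 3.3 million impressions for your $200k - with, as we'll see below, very little say over which conversations they land in.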

Now, for Google advertisers who are accustomed to paying $2,000+ CPMs (yes, you read that correctly - and I have paid far more than that for some clients), $60 is a straight-up bargain if the ad performs anything like a search ad. And that is the million (well, likely billion) dollar question: will users trust ads that appear in a module below the organic answer? My inclination is “somewhat” - especially if the ads are highly relevant to the query.

This brings us to challenge #2: targeting. OpenAI has been mum on exactly how much control advertisers will have over which answers can include their ad. As any Google advertiser will tell you, leaving the machine to its own devices without control/guardrails tends to result in ads appearing for absolutely irrelevant, low-intent, low-quality queries.

From what I’ve learned so far, all ads in ChatGPT will be “thematic”. I’ve received few details on exactly how broad those themes will be. My guess? Very broad - think “travel” as a theme. Now, if you’re Marriott, you probably don’t care if someone is traveling to Maui or Dallas or London or Cape Town or Singapore - you have hotels in every one of those places, so a general ad about how great Marriott is will do nicely. But a local/regional brand could be signing up for thousands (if not millions) of impressions that are completely irrelevant to their core business. The same challenge likely applies to many other large advertisers, such as home services, consumer goods manufacturers, professional services, banks/financial institutions, healthcare/medical providers, educational institutions, etc. Will a plaintiff’s injury lawyer be OK appearing for thousands of queries about wills, trusts and estates? Will a roofer want to show up for fireplace cleaning or refrigerator searches? Will an implant dentist want to show up to someone looking for braces?

This is - in my mind - the biggest question for ChatGPT ads: can they build targeting that’s precise enough to mitigate these concerns and build a sufficiently large pool of advertisers?

Now, there are three ways to solve that problem. Option #1 is what I've alluded to above: allow advertisers to get more granular with targeting + exclusions. Option #2 is to leverage machine learning to figure it out. Option #3 (aka what Google + Meta do) is to use both.

OpenAI seems to want to do none of those things (which, to me, is a tell that they're not sure how well the ads will work). Their team has told early advertisers the platform will provide "high-level insights" - total impressions and total clicks. That's it. No conversion tracking. No detailed user data. No attribution modeling. An OpenAI spokesperson told The Information: "That's similar to what TV networks offer." (Aside: we run lots of TV. We get more than that.)

So: OpenAI wants you to pay premium rates for less-than-TV-level measurement.

I'm sure the argument OpenAI will make (and some advertisers will be sympathetic to it) is that the intent signal in ChatGPT is uniquely valuable. When someone asks "What's the best supplement for sleep?", that's a high-intent moment. The user's intent in that moment is (likely) higher than when they're scrolling through their IG feed, which justifies the price. And, in theory, they're right - IF the ad can be reliably aligned with that intent. But absent that, you're just paying for a very expensive lottery ticket.

Now - let’s put this all together and combine what we know about ChatGPT ads with Rand's research. The result is a fascinating problem:

  • Organic visibility is random: you can't reliably control whether you appear in the AI's non-sponsored / organic answer.
  • Targeting is broad: advertisers will likely have relatively little control over where ads appear, with most targeting being thematic vs. specific/keyword-based. I would expect match behavior similar to broad match on Google with short keywords (e.g. “Lawyer”).
  • We don’t know about locations: the other major challenge is understanding how much geographic control advertisers will have. If the “TV-style” model holds true, that could mean paying to advertise in DMAs like LA or NY, which can span multiple states - limiting appeal to advertisers with geographic constraints (lawyers, plumbers, contractors, franchisees, etc.).
  • Paid measurement is limited: you can't see what query led to the click, or reliably tell what happened after it within the ad platform (though you’ll likely be able to do so on your website).
  • CPMs are premium: you're paying 3x Meta rates for smaller (in terms of pixels) and less valuable (in terms of placement - a separate surface vs. integrated into the feed) real estate.

This is a lot of money for a lot of uncertainty. There will be some early adopters willing to gamble $200k on Sam Altman's wheel of misery, and a few will post absurd early results (most likely wildly inflated by OpenAI handing them loads of free / make-good inventory). The FOMO will be real. But my take is this: being first looks like a losing proposition on the whole. Being second or third, though - once the minimums drop to $10k-$25k and many of the initial bugs are resolved? That's when I'd want to get in.

The Brand Moat Just Got Deeper

If you've been reading this newsletter for a while, you know I've been beating the drum on brand for months. The argument I made in Issue #140 was that in a world of infinite content and AI-generated sameness, brand becomes the last enduring moat.

Rand's research reinforces this in a way I didn't fully anticipate.

Here's the key insight: AI tools recommend brands that are prominent in their training corpus. The brands that show up consistently across thousands of prompts aren't the ones with the best "AI SEO"; they're the ones with the strongest presence in the underlying data the AI learned from.

That means:

  • Brands that have been written about extensively
  • Brands that appear in authoritative sources
  • Brands that are mentioned in the contexts where people discuss the problems they solve
  • Brands that have built genuine awareness and consideration over time

In other words: brands that did the hard work of actually building a brand.

You can't game your way into ChatGPT's consideration set with technical tricks. The AI was trained on web-scale crawls like C4 (the Colossal Clean Crawled Corpus) - a/k/a the internet. If your brand wasn't prominent in the training data, you're not in the consideration set.

This is the opposite of early SEO, where technical optimization could vault an unknown site to the top of Google. It's closer to how PR works: you need to be notable, discussed and/or recommended by real sources before you get the prime-time coverage.

For brands that have invested in long-term brand building, this is great news: your moat just got deeper.

For brands that have relied on paid media to compensate for weak organic presence, this is a problem. You can buy ChatGPT ads, sure. But if your brand never appears in the organic answer, users will notice. They'll see that ChatGPT recommended your competitors, and you showed up in the sponsored slot. That's not a great look.

The Organic-to-Paid Pipeline (AI Edition)

A few weeks ago I introduced the concept of the Organic-to-Paid Pipeline: the idea that organic success and paid scale are both downstream effects of understanding your audience and how to connect with them. The brands winning on paid media are the same ones winning on organic social. You can't buy your way out of a bad organic strategy.

The same principle applies to AI, but with a twist.

In the AI context, "organic" means appearing in the AI's unpaid responses. And unlike social media, where you can test and iterate your way to organic success, you can't rapidly change whether you're in ChatGPT's consideration set. That is/was determined by your brand's presence in the training data, which was collected months or years ago.

This creates a longer feedback loop and a higher barrier to entry:

  1. Build a brand that gets discussed: in publications, on social media, in forums, in the places where people talk about problems you solve
  2. Get into the AI's consideration set: as a result of that organic presence
  3. Validate through visibility tracking: measure how often you appear across relevant prompts (not your "rank")
  4. Amplify with AI ads: when they become available, use paid to reinforce presence you've already earned

The brands that skip steps 1 and 2 and jump straight to paid will find themselves in a weak position. Their ads will appear below organic answers that don't include them. Users will wonder why.

What Marketers Should Actually Do

Okay, enough theory. Let's get tactical.

1. Stop paying for "AI ranking" tools (or at least understand what you're actually measuring). If a vendor is selling you "ranking position in AI," demand to see their methodology. Ask them to explain Rand's research and how their metrics account for the randomness problem. If they can't, save your money.

Visibility percentage (how often you appear across many prompts) is a defensible metric. Ranking position is not. And even visibility percentage requires running dozens or hundreds of prompts to get statistically meaningful data. Ask your vendor how many prompts they're running and how they're accounting for prompt variability.
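For a rough sense of what "statistically meaningful" looks like here: visibility percentage is just a proportion, so a standard normal-approximation confidence interval tells you how much the number will bounce around at a given number of prompt runs. A quick sketch (the sample sizes are illustrative, not something out of Rand's research):

```python
import math

def visibility_margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for an observed visibility rate p over n prompt runs."""
    return z * math.sqrt(p * (1 - p) / n)

observed_visibility = 0.60  # e.g., your brand appeared in 60% of runs
for n in (20, 50, 100, 300):
    moe = visibility_margin_of_error(observed_visibility, n)
    print(f"n={n:>3}: 60% ± {moe:.1%}")
```

At 20-30 runs, the margin of error is close to ±20 points; it takes a few hundred runs before a visibility number is tight enough to act on.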

2. Audit your brand's AI presence. Run the prompts yourself. Ask ChatGPT, Claude, and Google's AI Overviews for recommendations in your category. Do it 20-30 times. Track how often your brand appears. Notice which competitors show up consistently and which are absent.

If you're showing up consistently, great! That’s a strong signal that you are in the consideration set. If you're not, no amount of AI advertising will fix it. You have a brand problem, not a media problem.

Pro tip: Vary your prompts slightly each time. Ask "best [category]" and "top [category]" and "[category] recommendations." Ask for specific use cases. The more variation in your prompts, the more realistic picture you'll get of your actual AI visibility.
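If you'd rather script the audit than run it by hand, here's a minimal sketch of the loop, assuming the official openai Python client. The model name, prompts and brand list are placeholders - swap in your own category (and note this only covers ChatGPT; you'd repeat the idea for Claude and AI Overviews):

```python
from collections import Counter
from openai import OpenAI  # assumes the official openai Python client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder prompt variants and brands to track - swap in your own category.
prompts = [
    "What are the best wireless headphones?",
    "Top wireless headphones for commuting?",
    "Wireless headphone recommendations for a gift for my dad?",
]
brands = ["Bose", "Sony", "Sennheiser", "Apple"]
runs_per_prompt = 10

appearances = Counter()
total_runs = 0
for prompt in prompts:
    for _ in range(runs_per_prompt):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content.lower()
        for brand in brands:
            if brand.lower() in answer:
                appearances[brand] += 1
        total_runs += 1

for brand in brands:
    print(f"{brand}: visible in {appearances[brand] / total_runs:.0%} of {total_runs} runs")
```

Crude substring matching will miss some mentions (and over-count others), but even this rough version gives you a visibility percentage rather than a meaningless "rank."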

3. Invest in the things that build long-term AI visibility. This isn't "AI SEO." It's brand building; what’s old is new again. In times of change, obsess about the constants:

  • Get covered in authoritative publications in your space
  • Build a presence in the communities where your customers spend time
  • Create content that gets cited and referenced by others
  • Earn genuine recommendations from trusted sources
  • Be discussed in the contexts where people talk about problems you solve

The AI learned from the internet. Make sure you're part of the internet that matters.

AI's consideration set is heavily influenced by how concentrated or fragmented a category is. If there are only a handful of notable players in your space (like cloud computing providers), you have a higher chance of appearing consistently. If there are thousands of options (like science fiction novels), the AI's recommendations will be much more random. Understanding where your category falls on that spectrum shapes your strategy.

4. Wait before pouring budget into ChatGPT ads. I know this is contrarian for someone who makes a living in paid media, but hear me out:

ChatGPT ads will be brand new. The targeting is (at best) uncertain. The measurement is limited (remember: impressions and clicks only, no in-platform conversion tracking). The user trust dynamic is unproven. We already know the CPMs are $60+ (that's 3x Meta) for, at best, TV-level measurement. We don't know how OpenAI's "ads won't influence answers" promise will hold up in practice.

Unless you have a $200k+ budget specifically allocated for experimentation, I'd wait 2-3 quarters. Let the early adopters figure out what works. Watch the data. Then move with conviction once the landscape is clearer.

My plan is to wait until the minimums come down to $10k-$25k per month, then start pushing chips onto the table in several industries. That's enough to get a meaningful signal without having to deal with a very angry CFO if it goes the way of Sam Altman's other promises.

5. Focus on the fundamentals. The brands that will win in AI advertising are the same brands winning everywhere else: the ones with strong products, clear positioning, genuine customer love, and organic presence that paid media amplifies rather than compensates for.

Nothing about ChatGPT ads changes that. If anything, it reinforces it.

The Tool to Manage What's Coming: Optmyzr

Here's the reality: if you're going to test ChatGPT ads when they open up (and eventually, most brands will), you'll be adding yet another platform to an already complex media mix.

This is where Optmyzr becomes essential.

When ChatGPT ads launch with their own interface, metrics, and optimization levers, you'll need a way to manage cross-platform complexity without drowning in tabs. Optmyzr is built for exactly this — bringing all your paid media under one roof with automated rules, anomaly detection, and cross-platform reporting.

Cross-Platform Budget Management: As AI ads become a new line item, you'll need to allocate budget dynamically based on performance. Optmyzr's budget pacing tools help you shift spend to what's working without manual rebalancing.

Automated Rules at Scale: When you're testing a new platform like ChatGPT alongside established channels, you need guardrails. Optmyzr's Rule Engine lets you set up automated responses to performance changes — pause underperformers, scale winners, get alerts when something's off.

Unified Reporting: The last thing you need is another dashboard. Optmyzr consolidates your performance data so you can see how ChatGPT ads stack up against your existing channels.

The platforms will keep multiplying. Your management layer needs to keep up.

The Bottom Line

ChatGPT ads are coming. But the research is clear: the game isn't about rankings or positions. Those are random. The game is about being in the consideration set — being one of the brands the AI draws from when answering relevant queries.

And the way you get into the consideration set isn't through technical tricks or early ad spend. It's through building a brand that's genuinely prominent in the places that matter.

This should sound familiar. It's the same argument I've been making about the next era of marketing, about organic fueling paid wins, about customer proximity as the foundation of everything.

The platforms change. The channels evolve. But the fundamentals remain: build something people care about, earn organic presence, and use paid to amplify what's already working.

AI advertising doesn't change that. It reinforces it.

Until next week,

Cheers,

Sam
