Issue #161 | The AI Regulation Wars Begin


Happy Sunday, Everyone!

Before I get into this one, quick ask: if you’ve been enjoying the newsletter, forward this issue to someone on your team who runs creative production. They’ll want to read it before June 9th. More on why in a moment.

On December 11, 2025, something happened whose implications most brands & agencies haven't even started to think about.

That was the date that New York Governor Kathy Hochul signed S.8420-A into law. It’s the first state law in the country requiring advertisers to conspicuously disclose when an ad features a synthetic performer, which is defined as any AI-generated or algorithmically created figure designed to appear as a real human.

That definition is broader than it sounds. This isn't just about deepfakes of celebrities or digital replicas of existing people. It covers any AI-generated human figure in any visual medium. If you've used HeyGen, Synthesia, RunwayML, Kling or any similar tool to put a generated spokesperson in a video ad - and that ad serves an impression in New York without a disclosure once the law takes effect - you're in violation of this law.

The law takes effect June 9, 2026. That’s roughly 72 days from today.

Before we get into the particulars of this law, I want to highlight why this is so significant: it’s the first law attempting to regulate Generative AI creative. It is not the last. As of this writing, there are (at least) 15 states that have enacted or advanced legislation that would attempt to regulate gen AI content in advertising.

The closest parallels are in Massachusetts and California, but states as diverse as Georgia, Tennessee, Maryland, Nevada, Michigan and Idaho have all passed legislation with direct implications. If the NY State law survives judicial challenge (one is expected from the Trump administration), we could well be looking at a landscape with 50+ different compliance regimes for advertising, just in the US. That sounds hyperbolic, but the math is undeniable: at least 15 states have advanced legislation already; another 25-30 have introduced bills for their 2026 legislative sessions. Thirteen states have passed a separate regulatory regime for AI-generated political advertising. At least two states are implementing disclosure laws for specific categories like healthcare, with more on the way.

All of this is further complicated by the lack of standardization in requirements and disclosures.

The Penalty Math Is Brutal

Here’s where it gets expensive. The statute is written with mandatory language: violations “shall result in” civil penalties. There is no discretion here.

The structure:

  • First violation: $1,000
  • Each subsequent violation: $5,000
  • Every non-compliant ad is a separate violation
  • No cure period
  • No notice-and-correct safe harbor
  • No requirement that the state notify you before assessing penalties

Do the math. If you're running 50 active AI-generated video ads across Meta, YouTube and CTV without disclosure, you're not looking at a $5,000 problem. You're looking at $246,000 ($1,000 for the first ad, $5,000 for each of the other 49). Scale to 200-300 creatives across a few brands, and the exposure looks less like an annoyance and more like a fleet of brand-new Porsche 911 Turbos.
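If you want to pressure-test the exposure for your own accounts, the arithmetic is simple enough to sketch in a few lines of Python. The $1,000 / $5,000 structure comes straight from the statute; the function name and the per-ad framing are mine, not official guidance:

```python
def ny_s8420a_exposure(violations: int) -> int:
    """Civil penalty exposure under the S.8420-A structure:
    $1,000 for the first violation, $5,000 for each subsequent one,
    with every non-compliant ad treated as a separate violation."""
    if violations <= 0:
        return 0
    return 1_000 + (violations - 1) * 5_000

# 30 ads -> $146,000 | 50 ads -> $246,000 | 200 ads -> $996,000
for n in (30, 50, 200):
    print(f"{n} ads: ${ny_s8420a_exposure(n):,}")
```

The curve is linear and steep: every additional creative is another $5,000, and a high-volume account crosses six figures before anyone notices.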

And unlike most regulatory violations, there's no "fix it and we'll go easy on you" option included in the statute. The language is clear: the obligation falls on whoever "produces or creates" the advertisement - meaning the agency, the production company, the brand, or all three. Everyone involved has skin in the game. The statute doesn't define where that obligation begins or ends in the creative production process, which means the brand that briefed the agency, the agency that subcontracted the freelancer, and the freelancer who used AI may all have exposure. That ambiguity is an enforcement mechanism, not a bug.

The penalty math is the easy part. What nobody is talking about yet are the three structural problems that make this genuinely hard to solve — even if you want to.

The 3 Unsolved Problems

Everything above assumes a straightforward compliance scenario: you used AI, you didn't disclose, you get fined. That's the simple case. The harder cases are the ones nobody is talking about yet.

Problem 1: The Burden Is Backwards

The statute requires disclosure when an advertiser has "actual knowledge" that a synthetic performer appears in an ad. But enforcement doesn't work on the honor system. The state doesn't need to prove how your ad was made. It needs only to review your ad and decide the performer *looks* synthetic. Based on that assessment (however right or wrong it is), the State sends you a penalty notice. Then you (the brand, agency, freelancer, whatever) must prove the performer is real. There’s no “presumption of innocence” when it comes to regulatory violations.

Think about what that means operationally. AI-generated performers have crossed the quality threshold where most viewers - and most enforcement analysts reviewing an ad library - cannot reliably distinguish them from real humans. The flip side is also true: real human performers in well-lit, well-produced video ads can look indistinguishable from AI-generated ones. High production value is now, paradoxically, a compliance risk.

That means the law doesn't just create a disclosure obligation for AI-generated ads. It creates a documentation obligation for every piece of creative you produce. If you can't produce a signed model release, a talent contract, a call sheet or production invoices proving a human being stood in front of a camera, you are functionally in the same position as someone who used a synthetic performer without disclosure. The absence of proof becomes the violation.

Functionally, this means that every shoot, performer, asset and variation must be catalogued, stored and retrievable on demand. That’s not a creative problem - it’s an infrastructure problem. And it's one that most brands + agencies - especially the high-volume operations producing dozens of variants per concept - are not remotely set up to handle.
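To make "infrastructure problem" concrete, here's a minimal sketch of what a per-asset provenance record might need to capture to survive a penalty notice. The schema is hypothetical - mine, not the statute's - but the fields map directly to the paperwork described above:

```python
from dataclasses import dataclass, field

@dataclass
class AssetProvenanceRecord:
    """Hypothetical per-asset record: the minimum you'd want
    retrievable on demand if the state decides a performer
    'looks' synthetic."""
    asset_id: str
    campaign: str
    performers: list[str]          # named humans, or "synthetic"
    model_releases: list[str]      # signed release docs/IDs
    call_sheets: list[str]         # proof a shoot actually happened
    production_invoices: list[str]
    ai_tools_used: list[str]       # e.g. "Photoshop Generative Fill"
    disclosure_applied: bool = False
    states_served: list[str] = field(default_factory=list)

    def can_prove_human(self) -> bool:
        # The uncomfortable test: absence of paperwork = exposure
        return bool(self.model_releases and self.call_sheets)
```

Multiply that record by every variant of every concept across every platform, and the scale of the cataloguing problem becomes obvious.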

For those of you counting on the "actual knowledge" defense to save you, don't. Yes, the statute only requires disclosure when the advertiser has actual knowledge that a synthetic performer is in the ad - but Adobe sends hundreds of emails per year to every user explicitly marketing its AI features. The tools are named "Generative Fill" and "Generative Expand." The interfaces say "Powered by Adobe Firefly" and let you select which model to use. Adobe's own tool descriptions ("Generative Fill in Photoshop uses Adobe Firefly AI to add, remove, or expand image content via text prompts or empty selections. By analyzing surrounding pixels, the AI generates context-aware, non-destructive layers that match lighting, perspective, and style, offering three variations per edit.") specifically highlight that they are powered by AI. Claiming you didn't know your team was using generative AI, when the software vendor spent three years making sure you couldn't possibly miss it, is not a winning legal strategy.

Problem 2: Nobody Knows What "Conspicuous" Means

Let’s assume you solve the infrastructure problem. You know every ad that used AI-generated performers. You want to add a disclosure and remain compliant, so you add a label to each ad.

Are you compliant?

The answer is: I have no idea. Neither do you, and neither does your attorney.

The statute says the disclosure must be "conspicuous." It does not define the term. No font size. No minimum duration. No placement guidance. No contrast ratio. No required language. No guidance on what percentage of the video's runtime the disclosure must be on screen, nor how much of the screen it must occupy.

Compare that to the FTC's endorsement guidelines, which at least provide interpretive frameworks for what "clear and conspicuous" means across different media. Or compare it to Georgia's pending HB 478, which specifies disclosure text at 30% of vertical picture height, visible for at least 30% of the media's duration. You can argue those numbers are absurd (and they are)...but at least they give you a target to hit.
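To put numbers like Georgia's in perspective, here's what that 30%/30% target means in pixels and seconds (using the figures as summarized above; the bill text itself is the source of truth):

```python
def hb478_disclosure_spec(height_px: int, duration_s: float):
    """Apply the 30%-of-vertical-height / 30%-of-duration figures
    attributed above to Georgia's pending HB 478."""
    return int(height_px * 0.30), duration_s * 0.30

# A 1080p, 30-second pre-roll: disclosure text 324 px tall,
# on screen for at least 9 seconds.
print(hb478_disclosure_spec(1080, 30))   # -> (324, 9.0)
```

A disclosure occupying nearly a third of the frame for nearly a third of the spot is absurd, but it's a computable kind of absurd.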

New York gives you nothing. Which means there is no way to confirm compliance in advance. There is no pre-clearance mechanism. No advisory opinion from the AG's office. No safe harbor for good-faith efforts.

The standard will be defined retroactively through enforcement. The first penalty notices will be the guidance. Someone is going to be the test case that establishes what "conspicuous" means in a 6-second bumper versus a 30-second pre-roll versus a CTV spot versus a static display unit. And that someone is going to pay - potentially six figures - for the privilege. The first guy through the door always takes the arrows.

Now put those two problems together. You can't definitively prove your performer is real. And even if you disclose, you can't definitively know whether your disclosure is sufficient. That's at least one layer of unquantifiable risk on every asset you've produced (even if you've never used AI tools, you still have to be able to prove that), and potentially two.

This is an insurance problem. And if you don't think E&O carriers are going to start asking about AI-generated creative in their underwriting questionnaires within the next 18 months, you haven't been paying attention to how insurers respond to new categories of enforceable liability.

Problem 3: The Definition Is Going To Get Broader

New York's law is narrowly scoped. It targets synthetic performers - AI-generated human figures. That's the version of AI-generated advertising that legislators understood when they wrote the bill.

It will not stay that narrow.

Bills already working through other state legislatures aren't limited to synthetic humans. Georgia's HB 478 targets all AI-generated content used in commerce. Massachusetts' Artificial Intelligence Disclosure Act covers any content "substantially created or modified" by generative AI - video, audio, text, print, all of it. California's AI Transparency Act already requires provenance data on all AI-generated content from platforms above a certain size, regardless of whether a human figure is involved.

Now think about how generative AI actually gets used in a modern creative workflow. This goes far beyond the primary talent in the ad. It's the background generated in Midjourney. The product shot composited with AI lighting. The script drafted in Gemini and revised by a creative strategist. The voiceover run through ElevenLabs for pacing before the real talent records. The color grade suggested by an AI tool. The music generated in Suno or Udio. The headline variations tested through AI-assisted copy tools. The translations powered by Adobe or Keevx. The visual touch-ups completed using any of the 15+ AI-powered tools in Photoshop. The accessibility compliance for the final asset/experience handled by a tool that uses - gasp - AI and/or machine learning!

When - not if - a state passes a law requiring disclosure on any ad that used generative AI in its production, what percentage of your current creative library is compliant?

If you're being honest with yourself, the answer is close to zero. Not because you're doing anything wrong, but because generative AI has become so deeply embedded in production workflows that most teams couldn't tell you with certainty which assets were touched by AI and which weren't. The tools are integrated everywhere - briefing, concepting, pre-production, editing/post-production, QA/QC, accessibility checks - all of it. AI isn't a discrete step in the production process that can be labeled. It's the water the entire workflow swims in.

And here's where this gets absurd in practice.

Think about what happens in a completely standard creative production process: A real model shows up to a real place for a real shoot. A real photographer takes real photographs using real cameras. Those photographs go to a real designer, who opens them up in Adobe Photoshop - the same tool the industry has used for 35 years.

That designer uses Generative Fill to extend a background so the image fits a wider aspect ratio for a billboard placement. Uses the Remove tool to clean up a stray crack in the concrete. Uses Generative Expand to give the art director more headroom in the frame. Maybe runs a Neural Filter to smooth the lighting across the model's face, because this was shot in the real world, where sometimes the wind blows and the light ends up slightly off at the instant the shot was captured.

Every single one of those tools is, by Adobe's own description, "AI-powered."

Adobe doesn't hide this. They market it. "Powered by Adobe Firefly" is printed on the feature set. The tool names literally contain the word "Generative." And if your team is using Content Credentials - which Adobe has been aggressively pushing as an industry transparency standard - the PSD file now contains embedded metadata explicitly documenting that generative AI was used in the production of that image.

You have a real person, photographed by a real camera, retouched using tools that have been standard practice since before most junior designers were born - and the final asset may still trigger the statute. Because the law doesn't say "created by AI." It says "modified by computer, using generative artificial intelligence or a software algorithm." Generative Fill is generative AI. It's in the name.

Now ask yourself: is that model still a "real performer" or has she become a "synthetic performer" under the statute? The honest answer is that nobody knows, because the law wasn't written with this use case in mind. It was written to address the HeyGen spokesperson in a direct-to-camera testimonial. But statutes don't enforce based on legislative intent. They enforce based on the statutory language. And the statutory language is broad enough to cover a Photoshop retouch. Now, there's a plausible argument that identifiable real performers fall outside the definition even if AI was used in post-production - but there's also a compelling argument that the more effective the retouching (i.e., the more successful you are at making a real person look better), the greater the risk profile.

This is not a niche edge case. This is every professional photo shoot that goes through post-production in 2026. Generative AI features are not optional add-ons in the current version of Photoshop. They are core functionality, integrated into the tools designers reach for reflexively, dozens of times per project. Many designers couldn't tell you after the fact which edits used generative AI and which used legacy tools, because the interface doesn't force that distinction. You'd have to audit the action history of every PSD file, layer by layer, to reconstruct which pixels were AI-generated and which weren't.
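If you wanted to triage an existing library, the least-bad starting point is the metadata the tools already embed. Here's a rough sketch that scans exported assets for generative-AI markers in their Content Credentials / XMP blocks. To be clear about assumptions: the marker strings are illustrative guesses rather than a validated list, and a clean scan proves nothing, because metadata is routinely stripped on export:

```python
from pathlib import Path

# Illustrative markers only - not validated or exhaustive. C2PA
# manifests and Adobe Content Credentials can record generative-AI
# use, but exact keys vary by tool and version; verify against
# files your own pipeline actually produces.
GENAI_MARKERS = [
    b"Adobe Firefly",
    b"c2pa",
    b"compositeWithTrainedAlgorithmicMedia",  # IPTC digital source type
]

def flag_possible_genai(root: str) -> list[Path]:
    """Crude triage: flag assets whose bytes contain any marker.
    A hit means 'investigate'; a miss means nothing at all."""
    exts = {".psd", ".jpg", ".jpeg", ".png", ".tif", ".tiff"}
    return [
        p for p in Path(root).rglob("*")
        if p.suffix.lower() in exts
        and any(m in p.read_bytes() for m in GENAI_MARKERS)
    ]
```

Even a scan like this only answers "which assets might need a disclosure." It does nothing for Problem 2.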

There's an irony here worth naming. The industry's push toward AI transparency - Content Credentials, C2PA provenance standards, metadata tagging - was designed to build trust. But in a regulatory environment where "modified using generative AI" triggers a disclosure obligation, that same transparency infrastructure becomes the evidentiary trail that proves the violation. The better your provenance documentation, the easier the enforcement case against you.

The tools designed to prove your content is trustworthy are the same tools that prove you're non-compliant.

That's the compliance crisis hiding behind the current one. New York is asking you to disclose when a human figure is synthetic. The next law is going to ask you to disclose when anything is synthetic. And the law after that may ask you to prove when something isn't.

If your production infrastructure can't answer that question today - comprehensively, defensibly, on demand - then you're not just unprepared for New York. You're unprepared for everything that is coming after it.

Why This Keeps Me Up At Night

I want to step outside the analysis for a moment and say something directly.

My agency operates in some of the most regulation- and competition-heavy verticals around: pharma, healthcare, law + legal services, banking, real estate, financial services, consumer products. These are industries where advertising compliance isn't a suggestion - it's the cost of doing business. We navigate HIPAA considerations in creative production. We build disclaimers into legal advertising that satisfy state bar requirements across multiple jurisdictions simultaneously. We produce campaigns for senior living communities where a single misrepresentation about care capabilities can trigger regulatory action and, in some cases, litigation.

I am not new to compliance complexity. And I'm telling you: compared to what these AI disclosure laws are building toward, the current pharma and financial advertising compliance landscape looks downright orderly.

That's not hyperbole.

Think about what pharma advertising actually requires: the FDA mandates specific disclosures, but it also tells you what to disclose, where to place it, and how long it needs to be visible. Fair balance requirements are prescriptive. ISI placement has clear guidelines. There are review processes, legal frameworks and decades of enforcement precedent that give you a compliance target you can actually aim at. The same is true for financial advertising under SEC and FINRA rules - the requirements are burdensome, but they're defined.

Now compare that to what we're looking at with AI-generated assets.

You have mandatory penalties with no defined disclosure standard. You have a documentation burden that applies to assets where you may not even know AI was involved, because none of these statutes is clear on what "involved" actually means. You have statutes written broadly enough to capture standard post-production tools, with no interpretive guidance, no pre-clearance mechanism and no enforcement precedent to tell you what "good enough" looks like. And you have all of this multiplied across a growing patchwork of state laws that don't align on definitions, scope or requirements.

Pharma advertisers at least know the rules. They may hate those rules because they’re expensive, slow and maddening to operationalize. But they know the rules. What we're walking into with AI creative regulation is a regime where the penalties are mandatory, the definitions are ambiguous, the scope is expanding and the rulebook hasn't been written yet.

I've spent the better part of two decades working in industries where regulatory exposure is a daily consideration, and I have never seen a compliance environment this poorly defined move this fast toward enforcement. That's what keeps me up at night - not the $5,000 per violation, but the fact that there is currently no reliable way to know whether you're in violation at all.

The Compliance Paradox

I already know what some of you are thinking, because it's the same instinct I had: we'll use AI to solve this. Build a tool (or more likely, prompt an existing one) to crawl your asset libraries, flag anything that looks synthetic, then cross-reference production records and generate a compliance report. Basically: build a machine to find the stuff made by other machines.

It's a perfectly logical response. It's also potentially a violation of the very laws you're trying to comply with.

Massachusetts' proposed AI Disclosure Act covers any content "substantially created or modified" by generative AI. If you're using an AI system to generate compliance reports, risk assessments and/or audit documentation - documents that will be submitted in response to an enforcement inquiry - then (surprise) you may be producing AI-generated content in a regulated commercial context without disclosure. California's AI Transparency Act requires provenance data on AI-generated content from covered platforms. Several pending bills don't distinguish between customer-facing creative and internal operational outputs - they target AI-generated content, period.

Think about the absurdity of that for a moment. The State sends you a penalty notice. You use an AI tool to audit your creative library and prepare your response. Your response is now itself a piece of AI-generated content that may require disclosure under a different state's law. The compliance mechanism creates its own compliance obligation.

Is this likely to be enforced in practice? Probably not tomorrow. But the statutory language in several of these bills is broad enough to cover it. And if we've learned anything from the last 20+ years of digital advertising regulation, it's that the edge cases nobody worries about today become the enforcement precedents of tomorrow.

The deeper point is this: there is no clean exit from this problem. You can't AI your way out of an AI compliance crisis. The solution - to the extent one exists right now - is human-led, process-driven and manual. It requires people who understand both the production workflow and the regulatory landscape reviewing assets with actual judgment, not algorithmic pattern matching. That is slower. It is more expensive. And it is, for the moment, the only defensible approach.

All of which might be academic if you thought enforcement was unlikely. It's not.

New York Has Every Incentive To Enforce This

One refrain I’ve heard as we’ve started discussing this: the NY state government can’t get their own house in order; what makes you think they’ll be able to figure this out?

My honest response: financial incentives are one hell of a motivator. New York is staring down a cumulative 3-year budget gap of $27.5B (per the State Comptroller's February 2026 executive budget report). Federal Medicaid cuts are threatening to make it worse. In that environment, enforcement of this starts to look a lot like a low-political-cost tax on predominantly out-of-state businesses.

The discovery process is trivially easy, even for bureaucrats. Pull the public ad library on Meta, TikTok, whatever. Scan for AI-generated performers. Check for disclosure. Send the notice and invoice. The tools you used to create the ad are the same tools that prove the violation. There’s no “we didn’t know” defense when the prompt and the invoice both specify exactly what was happening.

There’s one more thing that makes this an attractive enforcement target: the overwhelming majority of these companies are out-of-state. That means relatively few upset voters. For a state looking to close a budget gap through enforcement, that’s a dream come true.

The Part That Nobody Is Talking About

This is where it gets more dangerous than most people realize.

June 9 is not the day penalties go out. It's the day liabilities start accumulating. New York has a 3-year statute of limitations on civil matters, which means the state can take its time. You could run 30 non-compliant ads from June 10 through July 6, turn them all off, and NYS could still send you a $146,000 penalty notice in November 2026… or November 2027.

The ad does not have to be live for the penalty to apply. New York only needs to prove it was live after June 9. And because the violations are documented in public ad libraries - records the state can pull at any time - there is no quietly fixing this after the fact. Turning off a non-compliant ad after June 9 doesn’t erase the violation.

This is not a Y2K situation. This is not theoretical. This is a signed statute with a specific effective date, mandatory penalties and a state that has both the tools and the fiscal motivation to enforce it.

Don’t Bet On Federal Preemption

The same day Hochul signed S.8420-A, the White House issued an Executive Order aimed at preempting state AI regulation, including a DOJ Task Force to challenge state laws. I’ve seen people point to this as a reason to wait.

Here’s the problem: an Executive Order cannot overturn a signed statute. Only Congress or the courts can do that. Anyone betting on federal preemption materializing before June 9 is making a bet I would not take. If you need a reference point for how fast the federal government moves on things like this, I have two words for you: TSA lines.

This law is on the books. Plan accordingly.

What The Agencies Paying Attention Are Doing Right Now

The agencies that are asleep on this are doing what agencies always do: running ads until someone tells them to stop. The problem is that by the time someone does - via a penalty notice, not a friendly reminder - the bill may exceed the price of a starter home in most states.

The agencies that are awake are doing three things:

  • Auditing every active and recent creative asset for AI-generated human figures
  • Building disclosure into the production workflow now, not after the first notice arrives
  • Advising clients proactively - in writing - so there's no ambiguity about who flagged this and when

That last one matters more than people think. The compliance obligation falling on whoever “produces or creates” the ad is language that will show up in client conversations, contracts, and - eventually - lawsuits. Getting ahead of it in writing isn’t just good practice. It’s protection. And if your MSA or SOW doesn't address AI disclosure liability, expect that conversation to happen across a conference table with attorneys present, not over a friendly check-in call.

There are three narrow exemptions in the law worth knowing: audio-only ads, AI used exclusively for translation, and “expressive works” like film or art where the ad content is consistent with that underlying work. If you’re running product demos with AI spokespeople or brand videos with synthetic hosts, none of those apply to you.

The clock is ticking.

Cheers,

Sam

P.S. If you're producing creative that touches AI at any point in the workflow - and after reading this, you know that's almost everyone - and you want a second set of eyes on your exposure before June 9, reply to this email.

Where Tools Like Optmyzr Make This Possible

If this issue made you think about what’s quietly accumulating in your ad accounts — that’s exactly what Optmyzr is built for. PPC audits, automated optimizations, rule-based workflows, performance monitoring — the infrastructure that catches problems before they become expensive ones. If you’re not using it yet, you should be.
