93A: The Three Characters That Should Terrify Every AI Company
Inside the Massachusetts bill that looks like a joke and reads like a lawsuit
Last month, a Massachusetts state rep filed a bill about AI. Four sections. One sponsor. No co-sponsors. The kind of bill that gets filed on a Tuesday, referred to committee, and never spoken of again. The kind of bill nobody reads.
I almost didn’t read it.
Then I saw three characters buried in Section 2, line 27: 93A.
If you work in AI, those three characters should keep you up tonight. Let me explain.
The Bill Everyone Will Ignore
Massachusetts House Bill 81, the “Artificial Intelligence Disclosure Act,” is the worst AI bill I’ve ever read.
It requires AI-generated content to carry a visible label and embedded metadata identifying the tool used and the timestamp. That’s it. That’s the whole bill. Four sections.
It defines AI as anything that “resembles human cognitive abilities when it comes to solving problems.” A definition so broad it could capture a spreadsheet formula. It draws no line between AI-assisted and AI-generated content, relying instead on a subjective “reasonable person” standard that no two courts would apply the same way. It assumes text watermarking is a solved technical problem. It isn’t. It offers zero exemptions. Not for journalists. Not for researchers. Not for artists. Not for satire. Not for personal use. Zero.
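To see how thin the technical requirement actually is, here is a minimal sketch, assuming PNG images and the Pillow library. The bill specifies no metadata format, so the key names below are my invention. Embedding the required disclosure takes a few lines. Removing it takes one.

```python
# Hypothetical H.81 compliance sketch for PNG images using Pillow.
# The bill names no format, so the "AITool" / "AITimestamp" keys
# below are invented for illustration.
from datetime import datetime, timezone

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_png(src: str, dst: str, tool: str) -> None:
    """Embed the bill's required disclosure as PNG text chunks."""
    meta = PngInfo()
    meta.add_text("AIGenerated", "true")
    meta.add_text("AITool", tool)  # "the tool used"
    meta.add_text("AITimestamp", datetime.now(timezone.utc).isoformat())
    with Image.open(src) as img:
        img.save(dst, pnginfo=meta)


def strip_labels(src: str, dst: str) -> None:
    """The Section 3 violation: Pillow drops text chunks on save
    unless they are explicitly passed back via pnginfo."""
    with Image.open(src) as img:
        img.save(dst)
```

And note what's missing: plain text has no equivalent container to write metadata into. That is the watermarking problem the bill waves away.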
No implementation timeline. No funding for enforcement. No safe harbor for companies trying in good faith to comply. Filed by a single Republican rep from Seekonk. Refiled, actually, from a version that already died in the 2023–2024 session. Nobody co-sponsored it then. Nobody co-sponsored it now.
I read it, closed the tab, and moved on with my day.
Then I went back and read it again. Because something in the penalty clause didn’t sit right.
Section 2, Line 27
Buried in the penalty clause, the part of the bill that nobody reads, is a single reference: violations “shall be punishable in the same manner as provided in Chapter 93A of the General Laws.”
For those outside Massachusetts: Chapter 93A is the state’s consumer protection statute. It is, by most accounts, the most aggressive consumer protection law in America.
Here’s what 93A unlocks. Anyone can sue, not just the government. Class actions are on the table. Damages start at a statutory floor of $25 per violation even when actual harm is nominal, and if the court finds a violation was willful or knowing, they get doubled or tripled. And the bar for what counts as “unfair or deceptive” is lower than in almost any other state.
The fine for a user who strips an AI label off their content? Five hundred dollars. A thousand for a second offense. That’s Section 3. That’s the part people will notice.
But failing to label at the system level? That’s Section 2. That’s 93A. That’s the part people will miss. And it’s a different universe entirely.
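How different? A back-of-the-envelope sketch, assuming a hypothetical class size and fee award. The statutory mechanics are real: 93A section 9 sets a floor of $25 per violation, doubles or trebles damages for willful violations, and lets a prevailing plaintiff recover attorney’s fees. Everything else below is invented for illustration.

```python
# Section 3 vs. Section 2 exposure, back of the envelope.
# The $25 floor and treble multiplier come from M.G.L. c. 93A, s. 9;
# the class size and fee award are hypothetical.

SECTION_3_FINE = 500        # per-user fine for stripping a label (first offense)
STATUTORY_FLOOR = 25        # actual damages or $25, whichever is greater
TREBLE = 3                  # "up to three but not less than two times" if willful

class_size = 100_000        # hypothetical: MA consumers shown unlabeled AI content
attorneys_fees = 2_000_000  # hypothetical: fee award, recoverable under 93A

section_3 = SECTION_3_FINE
section_2 = class_size * STATUTORY_FLOOR * TREBLE + attorneys_fees

print(f"Section 3, one user:       ${section_3:>12,}")  # $500
print(f"Section 2, one class suit: ${section_2:>12,}")  # $9,500,000
```

Same bill. Two penalty clauses. Four orders of magnitude apart. And the larger one needs no regulator to lift a finger.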
The Case That This Isn’t Stupid
Here’s where it gets uncomfortable.
I went back through the bill. And I started asking a different question: What if the things that look like drafting failures are actually design choices?
The definitions are vague. That means maximum litigation surface. Every case becomes a factual dispute about what “substantially” and “materially alters” mean, argued before a jury.
There are no exemptions. That means no safe harbor. Every person and every company using AI tools in Massachusetts is exposed. No carve-out to hide behind.
The “reasonable person” standard is subjective. That means each case lives or dies on its facts, which means each case has to be fought individually, which means each case costs money to defend even if you win.
There’s no safe harbor for good-faith compliance. Companies that tried to do the right thing get treated the same as companies that didn’t bother.
Now bolt 93A onto all of that. What do you get?
You get a bill that doesn’t need a single regulator to lift a finger. You get a bill that funds its own enforcement through plaintiff attorneys who can file class actions, collect treble damages, and recover legal fees. You get the ADA website-accessibility litigation playbook, where lawyers systematically identify technical violations and file suits at scale, applied to every piece of AI-generated content touching Massachusetts.
The bill doesn’t need to be well-drafted to be dangerous. It needs to be vague, broad, and connected to 93A.
Check, check, and check.
Two Readings. Both Should Worry You.
I want to be honest about what I don’t know. I can’t tell you whether this was intentional.
Reading one: This is a signal bill. A single legislator refiled something from last session to hold a position, secure a committee referral, and maybe generate a headline. The gaps are oversights. The 93A reference is boilerplate. It will die in committee like its predecessor. Nothing to see here.
Reading two: Someone who understands Massachusetts law, whether the sponsor, a staffer, or an outside drafter, deliberately built the vaguest possible framework and bolted the most powerful enforcement mechanism in the state onto it. The bill doesn’t need the Office of Consumer Affairs to write regulations. It doesn’t need the AG to bring cases. It just needs to exist as a statute, and 93A does the rest.
I find the second reading more interesting than most people will be comfortable with. The bill’s placement as Chapter 93M, slotted right after 93L in the General Laws, required specific knowledge of the statutory structure. The 93A reference isn’t the kind of thing that ends up in a bill by accident. Someone typed those three characters knowing what they meant.
But here’s what matters: even if reading one is correct, even if this is just a sloppy bill from a solo legislator, it changes nothing. A sloppy bill with 93A attached is still a bill with 93A attached. The statute doesn’t care about intent.
The Legal Novelty Nobody Is Talking About
Here’s what makes H.81 different from the dozens of other state AI disclosure bills floating around right now: it doesn’t create a new enforcement mechanism. It borrows one.
Most AI bills set up a new regulatory process. A state agency writes rules, investigates complaints, issues fines. That takes years. It takes funding. It takes political will to staff and sustain. Most of those bills are dead the moment they pass because no one funds the agency that’s supposed to enforce them.
H.81 skips all of that. By attaching to 93A, it plugs directly into an enforcement infrastructure that’s been running for decades. The courts are already there. The case law is already there. The plaintiff bar already knows how to file these suits. The bill doesn’t build a new machine. It feeds AI into an existing one.
That’s the legal novelty. And it has national consequences.
93A has extraterritorial reach. If your AI tool is used by someone in Massachusetts, or your AI-generated content reaches Massachusetts consumers, the question of whether you’re exposed isn’t hypothetical. Massachusetts courts have applied 93A to out-of-state companies before. You don’t need to be headquartered in Boston. You need to have touched the state.
Now multiply that. California has Section 17200. Illinois has the Consumer Fraud Act. New York has Section 349. Every state with a muscular consumer protection statute is one penalty clause away from the same play. And once a single successful 93A class action establishes that unlabeled AI content is an “unfair or deceptive act,” every plaintiff attorney in every state with a comparable statute will have the template.
H.81 might die in committee. But the precedent it’s trying to set doesn’t need to pass in Massachusetts to spread. It just needs to pass somewhere. One state. One statute. One successful case. And the playbook goes national.
The bills nobody reads. The clauses nobody checks. The rooms nobody watches. That’s where this is happening.
If you’re in AI and you’re only watching Washington, you’re watching the wrong room.


If 93A becomes the enforcement route for AI, which current AI practices do you think are most vulnerable, and which get targeted first?