Good morning!

The Arizona Legislature is finally getting serious about AI regulations.

We’ve got an update for you on that front, along with the details on an important, but really gross, development in deepfake porn.

Stick with the AI Agenda for the good, the bad and everything in between!

We can finally put some names and faces to the new committee at the Arizona Legislature that will tackle AI regulations.

The AI and Innovation Committee in the House, announced last week, now has a full cast of lawmakers.

We did a quick check and most of them don’t have much experience with AI, at least professionally. That’s OK! Most of humanity had little experience with AI before a few years ago.

Each one will bring their own experiences to regulating AI, which could be useful, considering AI touches basically everything.

For example, Republican Rep. Julie Willoughby has a background in nursing, which might allow her to bring some insight to how AI is used in medicine. And Democratic Rep. Stacey Travers is a scientist, so she knows how to digest technical information.

The committee only has one bill assigned to it so far — and it ties into one of the great debates of the moment in AI: Deepfake porn.

What’s on deck

The bill the committee will consider when it meets is HB2133, which would tackle the rise of AI-generated deepfake porn.

The bill would piggyback on Arizona’s “revenge porn” law by making it illegal to publish sexual deepfakes of identifiable individuals. It would also authorize the Attorney General to enforce the law and allow victims to sue those who publish such deepfakes.

But Arizona lawmakers aren’t the only ones concerned about deepfake porn.

At the federal level, the DEFIANCE Act (Disrupt Explicit Forged Images and Non-Consensual Edits Act) represents a bipartisan push to combat this very issue. Just this week, the U.S. Senate unanimously passed the bill, sending it to the House for consideration.

Sponsored by Democratic Sen. Dick Durbin of Illinois and co-sponsored by Republican Sen. Lindsey Graham of South Carolina, the DEFIANCE Act allows victims of non-consensual deepfake porn to sue those who create, distribute, or possess such content with intent to share it.

Victims could seek at least $150,000 in civil damages per violation, with steeper penalties if the deepfakes involve elements of retaliation, harassment, or assault.

This isn't the bill's first rodeo — it passed the Senate unanimously in 2024 but stalled in the House.

Its revival now, amid heightened public outrage, underscores a rare moment of unity in a polarized Congress.

Democratic Rep. Alexandria Ocasio-Cortez of New York and Republican Rep. Laurel Lee of Florida are leading the companion bill in the House, emphasizing that protecting personal dignity transcends party lines.

Grok’s deepfake disaster

The timing of the DEFIANCE Act is spot on: its Senate debate put a spotlight on the scandal surrounding Grok, Elon Musk’s AI chatbot.

What began as a fun tool turned into a major issue over deepfake porn.

Last year, Grok's image and video features faced criticism for weak protections. By mid-year, its "NSFW" mode let users make realistic videos and edits of people, like Taylor Swift, often turning explicit without direct requests.

Groups warned xAI was fueling mass non-consensual deepfakes, with poor age checks and blocks.

Things worsened in late 2025 when image edits made it easy to "undress" or sexualize real photos.

On X, users tagged @grok under images of women or girls — often selfies — and asked for changes like putting them in bikinis.

This hit everyday people, including apparent minors, without consent.

Last month, reports showed Grok creating these images at one per minute during peaks. Copyleaks spotted thousands in a week, and French officials called minor-involved content illegal.

The scandal exploded around New Year's, with victims comparing it to revenge porn and tech-enabled abuse.

Some governments acted fast: Indonesia and Malaysia blocked Grok over fake sexual images of women and kids. The UK's Ofcom probed X for spreading illegal content, including possible child abuse material.

On December 28, Grok made a sexualized image of girls aged 12-16, then apologized for ethical and legal breaches.

Musk has not apologized for continuing to let it happen.

He just moved it behind a paywall.

Nationwide, momentum is building around laws that directly target deepfakes and sexually explicit digital forgeries.

  • Washington’s HB1169 and companion measures like SB5105 expand child‑exploitation offenses to cover fabricated and computer‑generated depictions of minors, closing gaps for AI‑generated and deepfake child sexual abuse material.

  • New York’s A01280 and twin bills S06278/A06293 establish crimes and private rights of action for the unlawful dissemination of deepfakes that graft a person’s face or body onto pornographic, lewd, or graphically violent imagery, while S02414 and election‑focused bills such as A06491 and LB615 in Nebraska restrict or label synthetic media in political communications.

  • Missouri’s HB1913 and SB1183, alongside broader AI‑content proposals like HB2035, HB2321, SB1012, and SB1324, create offenses and civil remedies for “intimate digital depictions” and other artificially generated content, including synthetic media used to deceive or harm.

How will Arizona’s HB2133 stack up after it’s gone through the new AI committee, and then debate in the House and Senate?

We’ll let you know.
