Podcast hosting platforms are now on the hook for AI-generated spam, with the EU’s AI Act imposing fines of up to 3% of a company’s global turnover for undisclosed synthetic content. The regulation, which takes effect in August 2026, specifically targets content of “public interest,” where misleading AI slop could cause real harm.
Alberto Betella, CTO of RSS.com, argues the liability is clearest for sensitive topics. A poorly translated golf show is a nuisance, but a synthetic “Dr. XYZ” persona giving medical advice breaks listener trust and endangers people. He distinguishes this dangerous “AI slop” (content designed to seem real and harvest programmatic ad revenue) from simple copyright infringement or curated, AI-assisted production.
“The risk is that the ecosystem gets flooded with this synthetic noise, and then advertisers leave.”
- Alberto Betella, Podnews Weekly Review
In response, Betella’s company and Spreaker have already implemented voluntary AI disclosure tags in their RSS feeds, a move covering roughly 15% of new episodes. This builds an audit trail for regulators and a filter for advertisers. Betella contends that if hosting platforms don’t provide these tools, creators can’t prove compliance, leaving the entire chain exposed to fines.
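Neither company has published the exact markup here, but a disclosure tag of this kind would most plausibly ride on the Podcasting 2.0 namespace that open-protocol feeds already use. A minimal sketch of what an episode-level disclosure could look like (the `podcast:aiDisclosure` element name and its attributes are illustrative assumptions, not a ratified tag):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
     xmlns:podcast="https://podcastindex.org/namespace/1.0">
  <channel>
    <title>Example Show</title>
    <item>
      <title>Episode 42</title>
      <enclosure url="https://example.com/ep42.mp3"
                 type="audio/mpeg" length="12345678"/>
      <!-- Hypothetical disclosure element: the tag name and
           attributes are illustrative, not a published standard -->
      <podcast:aiDisclosure generated="voice,script"
                            human-reviewed="true"/>
    </item>
  </channel>
</rss>
```

A machine-readable element like this is what would make both halves of the pitch practical: regulators get a per-episode audit trail inside the feed itself, and advertisers can parse it to filter or down-rank inventory that declares, or conspicuously fails to declare, synthetic content.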
The push for transparency is colliding with a legacy industry skeptical of new standards. While some dismiss open protocols like Podcasting 2.0 as fringe, Betella’s disclosure tags represent a practical, pre-regulatory fix. The goal is to prevent a crisis where platforms are fined and advertisers abandon the medium, all because of unmarked synthetic content.
The countdown to enforcement has begun. Hosting companies are the first line of defense, and their adoption, or neglect, of disclosure tools will determine whether podcasting navigates the new rules or becomes a case study in regulatory failure.
