AI Media Integrity Act of 2025 (AMIA)
A BILL
To safeguard public trust, journalistic integrity, and democratic stability in the age of AI-generated media through transparency, accountability, and civic safeguards.
SECTION 1. SHORT TITLE
This Act may be cited as the “AI Media Integrity Act of 2025” or “AMIA.”
SECTION 2. FINDINGS
Congress finds the following:
(1) AI-generated media, including deepfakes, synthetic news content, and algorithmically generated narratives, poses unprecedented risks to truth, public trust, and civil discourse.
(2) While AI offers benefits in journalism and content creation, its misuse can undermine democratic processes, incite violence, or manipulate public sentiment.
(3) Americans have a right to know when they are viewing, reading, or otherwise engaging with AI-generated content.
(4) Media freedom must be balanced with civic responsibility, transparency, and public accountability.
SECTION 3. DEFINITIONS
As used in this Act:
The term “Synthetic Media” means content, whether text, image, audio, or video, that is created or substantially altered by artificial intelligence.
The term “AI Disclosure Label” means a standardized, user-visible notice indicating that a piece of content was generated or modified by AI.
The term “Verified Human Journalism” means content produced by credentialed journalists without algorithmic editing or narrative alteration.
SECTION 4. TRANSPARENCY REQUIREMENTS FOR AI-GENERATED MEDIA
(a) Any synthetic media published, broadcast, or distributed in the United States must include a clearly visible AI Disclosure Label at the point of first impression.
(b) Labels must:
State “This content contains AI-generated elements” or equivalent language
Be unremovable, persistent, and machine-readable
Be applied at the file or data layer, not only at the platform level
(c) Platforms and publishers must maintain internal records of:
The AI model used
The prompt or source input
The date and time of creation
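Illustrative note (not statutory text): the machine-readable label required by subsection (b) and the records required by subsection (c) could be carried together in a single structured record. The following sketch assumes a JSON serialization; all field names are hypothetical, as the Act does not prescribe a format.

```python
import json
from datetime import datetime, timezone

# Hypothetical, non-statutory sketch of a machine-readable AI Disclosure
# Label that also carries the Section 4(c) record-keeping fields.
def make_disclosure_record(model: str, source_input: str) -> str:
    record = {
        "label": "This content contains AI-generated elements",
        "ai_model": model,            # AI model used
        "source_input": source_input, # prompt or source input
        "created_at": datetime.now(timezone.utc).isoformat(),  # time/date of creation
    }
    return json.dumps(record)

serialized = make_disclosure_record("example-model-1", "summarize council minutes")
parsed = json.loads(serialized)  # machine-readable: any consumer can parse it
```

Because the record travels at the file or data layer rather than the platform layer, it would remain attached to the content when re-hosted or re-shared.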
SECTION 5. MEDIA PLATFORM ACCOUNTABILITY
(a) Social media and news platforms with over 10 million active users must:
Implement AI content detection and labeling systems
Allow users to filter or opt out of AI-generated content
Report quarterly on moderation practices related to synthetic content
(b) Failure to comply may result in:
Civil penalties up to $5 million per violation
Public warning listings maintained by the Federal AI Constitutional Authority (FACA)
Restrictions on government advertising contracts
SECTION 6. ELECTIONS AND CIVIC PROTECTIONS
(a) It shall be unlawful, within 90 days of a federal election, to distribute AI-generated content that:
Misrepresents a candidate, official, or voting procedure
Uses synthetic voice or likeness without consent
Simulates events that never occurred
(b) Violations may result in:
Felony charges for willful manipulation
Platform liability for amplified or monetized dissemination
SECTION 7. JOURNALISM STANDARDS AND CERTIFICATIONS
(a) The National Press Standards Council (NPSC) shall issue optional certifications for outlets that:
Disclose all AI use in news content
Maintain a verified human newsroom majority
Publish editorial AI policies
(b) Certified outlets may display a “Verified Human Journalism” Trustmark and be eligible for:
Government media grants
Preferential access to press briefings
Integration of public trust ratings with search and social platforms
SECTION 8. FREE SPEECH AND EDITORIAL FREEDOM
(a) Nothing in this Act shall:
Infringe on First Amendment rights
Restrict artistic or parody-based AI content
Regulate private communications or non-public media
(b) AMIA regulations apply only to public-facing content intended to inform or influence the public, or to simulate real-world events or public figures.
SECTION 9. FUNDING AND ENFORCEMENT
(a) This Act shall be funded through allocations under the American Reboot Act.
(b) No new taxes shall be levied.
(c) The Federal AI Constitutional Authority (FACA) shall oversee enforcement, compliance, and public reporting.
SECTION 10. SEVERABILITY
If any provision of this Act is held invalid, the remainder shall remain in effect.