AI-Generated Content: Bridging the Gap Between Transparency and Reality
The rise of AI-generated content brings both creative possibilities and societal risks, including the erosion of trust in online information. Jurisdictions such as the EU are responding with regulations that mandate AI transparency. Watermarking and disclosure emerge as crucial mechanisms, but ambiguities and conflicting incentives create implementation challenges. The EU AI Act, whose transparency obligations apply from August 2, 2026, requires machine-readable marking of AI-generated content and clear disclosure of deepfakes, yet questions remain about how responsibility is allocated and how key terms are defined. An investigation into widely used AI image systems reveals limited adoption of robust watermarking practices, underscoring the need for standardized, verifiable methods to ensure responsible AI deployment.
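To make the notion of a machine-readable disclosure concrete, the minimal sketch below scans an image file for the IPTC digital source type value "trainedAlgorithmicMedia", one metadata-based marker some providers embed in AI-generated media. The file path, script name, and the byte-scan heuristic are illustrative assumptions, not the detection method used in the investigation described here.

```python
import sys
from pathlib import Path

# IPTC/XMP value that some providers embed to flag AI-generated media.
AI_MARKER = b"trainedAlgorithmicMedia"


def has_ai_metadata_marker(image_path: str) -> bool:
    """Heuristic check: scan the raw file bytes for an XMP/IPTC
    'trainedAlgorithmicMedia' digital-source-type declaration.

    Metadata like this is easily stripped (e.g. by re-encoding or
    screenshotting), which is one reason robust, pixel-level
    watermarks are discussed as a complementary mechanism.
    """
    data = Path(image_path).read_bytes()
    return AI_MARKER in data


if __name__ == "__main__":
    # Usage: python check_marker.py image.jpg
    path = sys.argv[1]
    found = has_ai_metadata_marker(path)
    print(f"{path}: AI-generation metadata marker "
          f"{'found' if found else 'not found'}")
```

A check like this only covers declarative metadata; verifying an invisible watermark embedded in the pixels themselves requires the provider's own detection tooling, which is part of what makes independent verification difficult.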