(The Center Square) – A new California bill that would require the encoding of information that could be used to identify creators of AI-generated content is now at the governor’s desk for approval. The “latent disclosure” bill, which requires AI-generated content, such as an image from ChatGPT, to be easily identifiable as AI-made, along with another pending bill that would require social media users to verify their identities to prove they are not children, could significantly reduce online anonymity for users of global technology platforms based in California.
Aimed at helping users know whether content is real or AI-generated, SB 942 would require generative AI services such as ChatGPT or Claude with more than 1 million monthly users or viewers in California to provide AI content detection tools, an option for users to add labels identifying AI-generated content, and so-called “latent disclosure” that encodes the name and version of the service used, the time and date of the content’s creation, and a unique identifier. At the same time, the bill requires that “personal provenance data” identifying the user not be encoded into content.
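To make the requirement concrete, a latent disclosure payload of the kind the bill describes might look something like the following Python sketch. The field names and structure here are illustrative assumptions, not drawn from the bill text or from any provider’s implementation:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json
import uuid

@dataclass
class LatentDisclosure:
    """Illustrative latent-disclosure payload; all field names are hypothetical."""
    provider_name: str     # name of the generative AI service
    provider_version: str  # version of the system that produced the content
    created_at: str        # time and date of creation (ISO 8601, UTC)
    content_id: str        # unique identifier for this piece of content

def make_disclosure(provider: str, version: str) -> LatentDisclosure:
    # Note: nothing user-identifying is included here, mirroring the bill's
    # requirement that "personal provenance data" not be encoded into content.
    return LatentDisclosure(
        provider_name=provider,
        provider_version=version,
        created_at=datetime.now(timezone.utc).isoformat(),
        content_id=str(uuid.uuid4()),
    )

print(json.dumps(asdict(make_disclosure("ExampleAI", "2.1")), indent=2))
```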
However, given that generative AI services often keep detailed internal logs of each user’s activity, timestamps could be used to match content to a user even without personal provenance data. This could create privacy concerns, as it’s unclear whether such identifying information would remain in company hands only; just last month, a hack of a background check database reportedly exposed the Social Security numbers of nearly every American. Timestamps encoded into content could also be correlated with social media posts, allowing the identities of social media users to be inferred from the AI-generated content they create and share.
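A minimal sketch of that correlation risk, assuming a hypothetical internal log that records only a user ID and a generation timestamp (real service logs are typically far richer):

```python
from datetime import datetime, timedelta

# Hypothetical internal log entries: (user_id, generation timestamp).
generation_log = [
    ("user_1842", datetime(2024, 9, 12, 14, 3, 7)),
    ("user_0417", datetime(2024, 9, 12, 14, 3, 9)),
    ("user_2230", datetime(2024, 9, 12, 18, 45, 1)),
]

def candidate_creators(embedded_ts: datetime, tolerance_s: int = 2) -> list[str]:
    """Return user IDs whose logged generations fall within
    tolerance_s seconds of the timestamp embedded in the content."""
    window = timedelta(seconds=tolerance_s)
    return [uid for uid, ts in generation_log if abs(ts - embedded_ts) <= window]

# A timestamp read out of a shared image's latent disclosure:
print(candidate_creators(datetime(2024, 9, 12, 14, 3, 8)))
# -> ['user_1842', 'user_0417']: a small candidate set, even though the
#    content itself carries no personal provenance data.
```

Even a coarse timestamp shrinks the pool of possible creators dramatically; combined with a post’s upload time on a social platform, it could plausibly narrow to a single account.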
Opposition to SB 942 from a coalition of technology and business groups has focused on the need for federal rather than state regulation of the matter, the weakness of current watermarking technologies, and First Amendment concerns.
“Content provenance and watermarking is still incredibly unreliable and in many cases, very easy to break,” said Dylan Hoffman at a Senate committee hearing on SB 942.
Responding to an earlier speaker’s comment that AI-generated content should be labeled just as a pack of gum is, Hoffman said, “A pack of gum is not protected by the First Amendment, whereas some of the speech is. And I think the way we either preference or stigmatize certain speech is concerning.”
At a later committee session in the Assembly, Jai Jashima, an AI developer and cofounder of the Transparency Coalition, defended the need for the bill, arguing that next-generation content identification and tracking technology is ready and already in use.
“It will bring much needed transparency requirements to generative AI outputs, and it will provide the public with critical insight into which content was created in part or in whole by a generative AI system,” Jashima said. “The technology behind it is proven. Even OpenAI is supportive of C2PA and has actually started using it for DALL-E, which is one of their models.”
C2PA is a technology standard that creates a digital “fingerprint” for media content, securely documenting its origin and its editing and distribution history. While C2PA allows tampering with these records to be detected, it is still possible to delete the records entirely, or potentially to add false ones. And because C2PA maintains records of a file’s distribution, the system could affect the privacy of anyone in the chain of people who receive or share C2PA-encoded content.
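That asymmetry, where tampering is detectable but deletion is not, can be shown with a toy model. The sketch below uses an HMAC over the content and its provenance record; this is a deliberate simplification of C2PA, which actually signs manifests with certificate-based signatures:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in; C2PA itself uses X.509 certificate signing

def attach_provenance(content: bytes, record: dict) -> dict:
    """Bundle content with a signed provenance record (toy model, not real C2PA)."""
    payload = json.dumps(record, sort_keys=True).encode()
    digest = hashlib.sha256(content).digest()
    sig = hmac.new(SIGNING_KEY, payload + digest, hashlib.sha256).hexdigest()
    return {"content": content, "record": record, "sig": sig}

def verify(bundle: dict) -> bool:
    payload = json.dumps(bundle["record"], sort_keys=True).encode()
    digest = hashlib.sha256(bundle["content"]).digest()
    expected = hmac.new(SIGNING_KEY, payload + digest, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, bundle["sig"])

bundle = attach_provenance(b"image bytes", {"origin": "ExampleAI v2"})
print(verify(bundle))                   # True: record is intact

bundle["record"]["origin"] = "a human"  # editing the record is detectable...
print(verify(bundle))                   # False

stripped = bundle["content"]            # ...but stripping it is not:
# `stripped` carries no trace that a provenance record ever existed.
```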
SB 942 follows on the heels of SB 976, another internet safety bill awaiting the governor’s signature by the end of September. SB 976 is designed to reduce youth social media addiction by mandating advanced age verification, which social media companies warn would force all users to verify their identities to get full social media access. If both bills are signed into law, users’ ability to anonymously access social media and to create and share AI-generated content could be considerably constrained.