After a meeting with executives from key AI technology firms, including Amazon, Google, Meta, Microsoft, and OpenAI, President Biden announced that the companies had agreed to four commitments. These range from best practices, such as enhancing system security and product testing, to the ‘moonshot’ goals of watermarking AI content and using AI to solve critical societal challenges in areas like health care.
While solving societal challenges is aspirational, watermarking AI-produced content may prove technically difficult in practice. It also raises questions about what constitutes 'AI generated' content and whether the government should push technology providers to label content produced using their tools.
Watermarking is inherently tricky, and the techniques vary by medium. In many cases, they rely on a secret shared between the developer and those who make tools to detect the developer's watermark, such as a textual pattern, a list of special words, or the location or pattern of the watermark within a file.
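To make the shared-secret idea concrete, here is a minimal sketch of a hypothetical 'special word list' watermark for text. This is an illustrative toy, not any vendor's actual scheme: a generator would favor words whose keyed hash falls in a secret 'green list,' and a detector holding the same secret key scores text by how many green words it contains.

```python
import hashlib

# Hypothetical shared secret: known only to the AI developer and the
# makers of detection tools. If it leaks, anyone can run the detector
# (or paraphrase green words away to strip the watermark).
SECRET_KEY = b"shared-secret"

def is_green(word: str) -> bool:
    """A word is 'green' if a keyed hash of it lands in the lower half."""
    digest = hashlib.sha256(SECRET_KEY + word.lower().encode()).digest()
    return digest[0] < 128  # roughly half of all words are green by chance

def green_fraction(text: str) -> float:
    """Fraction of words in the text that are on the secret green list."""
    words = text.split()
    if not words:
        return 0.0
    return sum(is_green(w) for w in words) / len(words)

def looks_watermarked(text: str, threshold: float = 0.75) -> bool:
    # Unwatermarked text averages ~0.5 green; a generator that steers
    # toward green words produces text that scores well above that.
    return green_fraction(text) >= threshold
```

The design illustrates why secrecy matters: detection requires nothing but the key and a hash function, so once the key is reverse-engineered or leaked, removing the watermark is equally easy.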
If this secret becomes more widely known, whether through a leak or reverse engineering, AI users who wish to remove the watermark can readily do so.
For some watermark technologies, simply moving the output to an analog medium and back (such as displaying it on a screen and recording it from there) is all that is needed to defeat the watermark.