It’s clear that AI-generated content has spun out of control in many sectors: deepfake photos and videos threaten consumer trust, and thousands of sites are being spun up with scraped, and often inaccurate, information about people and topics.
It’s time to set universal standards to help ensure that AI-generated content is authentic and accurate, through strategies such as watermarking. A coalition of international standards bodies, working under the umbrella of the AI and Multimedia Authenticity Standards (AMAS) Collaboration, is focused on promoting existing AI standards and developing new ones.
Of course, AI is a fast-moving target, so these standards groups need to accelerate their slower, more deliberative process of standards development to keep up. A concerted effort to put more comprehensive AI standards in place was announced at the recent “AI for Good” conference hosted by the UN in Geneva. The multidisciplinary collective is spearheaded by the International Electrotechnical Commission (IEC), the International Organization for Standardization (ISO) and the International Telecommunication Union (ITU).
Such standards are entirely voluntary, but adoption will enable businesses and technology providers to interact with confidence, as well as assure greater interoperability between disparate AI systems, according to Gilles Thonet, deputy secretary-general of the IEC. “International standards provide guardrails for the responsible, safe and trustworthy development of AI, making them invaluable tools for regulators and policymakers worldwide. We’re laying the foundation for systems that prioritize transparency and human rights by mapping existing standards and highlighting gaps where they are needed to restore trust in AI-generated and multimedia content online.”
Other AI image standards to consider
The following are standards areas now being promoted or under development:
- Rights declarations: These include general-purpose and opt-out mechanisms.
- Watermarking: This ensures that “digital content is marked in a way that can be used to verify its authenticity, synthetic nature, and ownership. Watermarking is increasingly used to facilitate the declaration of the rights of content creators and to help ensure that their digital assets are not used without their consent.” (A minimal sketch of the embedding idea appears after this list.)
- Content provenance: This includes publishing information on the origin, history, and lifecycle of digital content. This provides “a transparent record that can be used to establish trust and authenticity. Such mechanisms are essential for maintaining the integrity of digital media, especially in environments where content manipulation is a significant concern.”
- Trust and authenticity: This consists of “methodologies for ensuring that digital content is genuine and has not been tampered with. By implementing trust and authenticity measures, organizations can protect their digital assets from unauthorized alterations and help ensure that consumers can rely on the content they receive.”
- Asset identifiers: These are “unique codes assigned to digital content to ensure proper management and identification of the asset. These identifiers help in maintaining a clear and organized record of digital assets, making it easier to organize, manage and distribute content.” By using asset identifiers, organizations can ensure that their digital media is properly tracked and accounted for, reducing the risk of loss or unauthorized use. (A sketch combining provenance, integrity checks and asset identifiers also follows this list.)
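To make the watermarking idea concrete, here is a minimal Python sketch of least-significant-bit (LSB) embedding, one of the simplest invisible-watermark techniques. It is an illustration only, not one of the schemes the AMAS bodies are standardizing, and it assumes the Pillow and NumPy libraries; production watermarks are engineered to survive compression and editing, which this one will not.

```python
import numpy as np
from PIL import Image

def embed_watermark(src: str, message: str, dst: str) -> None:
    """Hide a UTF-8 message in the least significant bits of an image."""
    pixels = np.array(Image.open(src).convert("RGB"))
    bits = np.unpackbits(np.frombuffer(message.encode("utf-8"), dtype=np.uint8))
    flat = pixels.flatten()
    if bits.size > flat.size:
        raise ValueError("message too long for this image")
    # Clear each target value's lowest bit, then set it to the message bit.
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    # Save losslessly: JPEG compression would destroy an LSB watermark.
    Image.fromarray(flat.reshape(pixels.shape)).save(dst, format="PNG")

def extract_watermark(src: str, length: int) -> str:
    """Recover a message of `length` bytes embedded by embed_watermark."""
    flat = np.array(Image.open(src).convert("RGB")).flatten()
    return np.packbits(flat[:length * 8] & 1).tobytes().decode("utf-8")
```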
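Content provenance, trust checks and asset identifiers fit together naturally, so the sketch below combines them: a hypothetical provenance record holding a unique asset ID, a SHA-256 hash that binds the record to the exact bytes of the file, and a history list for later edits. All field names are invented for illustration and are not drawn from any published standard.

```python
import hashlib
import uuid
from datetime import datetime, timezone

def sha256_of(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def make_provenance_record(path: str, creator: str) -> dict:
    """Create an illustrative provenance record for a digital asset."""
    return {
        "asset_id": str(uuid.uuid4()),      # unique asset identifier
        "sha256": sha256_of(path),          # binds the record to these bytes
        "creator": creator,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "history": [],                      # later edits append entries here
    }

def is_intact(path: str, record: dict) -> bool:
    """True if the file still matches the hash in its provenance record."""
    return sha256_of(path) == record["sha256"]
```

In a real provenance system the record would also be cryptographically signed, so consumers could verify who issued it and not merely that the content is unchanged.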
Earlier this year, IEC and ISO also published the first part of a new JPEG Trust series of international standards, which provides a framework for establishing trust in media that can also be applied to video and audio. The new standard describes tools and methodologies that individuals and organizations can use to create their own Trust Profiles. It does this by linking images with their metadata and other provenance data to highlight any attempt to tamper with them; the presence or absence of this information provides context for establishing trust in an image.
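As a rough intuition for how a Trust Profile might work, the hypothetical sketch below lets a consumer declare what evidence it requires and maps the evidence actually present to a coarse verdict. None of this is the JPEG Trust API; every name is invented, and the real standard defines much richer profiles and metadata.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrustProfile:
    require_provenance: bool = True   # must a provenance record be present?
    require_intact_hash: bool = True  # must the content hash still match?

def evaluate(profile: TrustProfile, record: Optional[dict], hash_ok: bool) -> str:
    """Map available provenance evidence to a coarse trust verdict."""
    if record is None:
        # As the standard notes, the absence of information is itself a signal.
        return "untrusted" if profile.require_provenance else "unknown"
    if profile.require_intact_hash and not hash_ok:
        return "tampered"
    return "trusted"
```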