Verance Comments on NIST Draft Report on Reducing Risks Posed by Synthetic Content

San Diego, CA, May 31, 2024 – Verance Corporation, a global watermarking company that powers AI provenance and broadband features for broadcast television, announced that it has submitted comments on the National Institute of Standards and Technology (NIST) draft report, NIST AI 100-4: “Reducing Risks Posed by Synthetic Content.”

In response to NIST’s request for comments on the completeness and clarity of the draft report, Verance underscores three critical points that warrant further attention:

1. The use of provenance authentication technologies for non-synthetic media is of equal practical importance to their use on synthetic media.

2. Digital watermarking offers a durable, practical means of identifying provenance metadata.

3. Forgery of provenance is a greater security risk than removal of provenance authentication.

Here are the full contents of Verance’s comments submitted to the National Institute of Standards and Technology:

Verance Corporation appreciates the opportunity to provide comments on National Institute of Standards and Technology (NIST) draft report NIST AI 100-4. Verance is a worldwide leader in watermark technology development, founded in 1995. We have created global standards for durable digital watermark technology for the worldwide recorded music, motion picture, and broadcast television industries in connection with the DVD-Audio, Blu-ray Disc, and NextGen TV / ATSC 3.0 entertainment formats. Verance’s watermark technology has been included in billions of media assets and consumer products distributed worldwide by leading entertainment and technology companies.

We have reviewed the draft report “NIST AI 100-4: Reducing Risks Posed by Synthetic Content” and, in response to the request for comments on the completeness and clarity of the report, we urge the committee to seriously consider and address the following three points that we believe are not adequately reflected in the draft report:

1. The use of provenance authentication technologies for non-synthetic media is of equal practical importance to their use on synthetic media.

2. Digital watermarking offers a durable, practical means of identifying provenance metadata.

3. Forgery of provenance is a greater security risk than removal of provenance authentication.

An explanation of each of these points follows.

1. The use of provenance authentication technologies for non-synthetic media is of equal practical importance to their use on synthetic media.

A fundamental promise of provenance authentication technology is to enable accurate trust signals associated with media content to be securely and reliably established by technology platforms and devices for the benefit of users. Trust signals are trustworthy indications of how and by whom the content was created that enable users to make informed decisions about the degree of trust that they will place in the content. Significant emphasis has been placed on ensuring that trust signals associated with synthetic content are available, so that users can identify this potentially harmful content and take its nature into account.

But, of course, no matter how much progress is made towards ensuring that synthetic media incorporates trust signals, it is unavoidable that disinformation agents will continue to have the ability to produce harmful content lacking trust signals, whether non-synthetic content or synthetic content created using, for example, open source, state sponsored, or other generative technology outside the reach of US policy.

So, what about non-synthetic sources of content that should be trusted by the public, such as official government communications and trusted news sources? If this content does not reliably convey trust signals, the public will come to assume that the absence of trust signals is consistent with trustworthy content, and such false trust would accrue equally to disinformation.

The strongest defense against harmful synthetic content is therefore to establish means for all classes of trustworthy content – both synthetic and non-synthetic – to convey authentic, traceable provenance, thereby reducing public trust in content lacking authenticity measures.

This approach parallels that taken on the World Wide Web, where TLS security (the lock icon in the browser bar) provides a widely employed and well-understood trust signal regarding the identity of the organization operating a website. With supporting public education, its presence can provide consumers with a meaningful trust signal and its absence can be understood as a reason for caution.
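To make the analogy concrete, the short Python sketch below (added for illustration; it is not part of the submitted comments, and the hostname is only an example) shows how a client verifies a site’s TLS certificate against its trust store and reads the identity the certificate attests to – the machine-checkable counterpart of the lock icon.

```python
# Illustrative sketch: a TLS certificate as a machine-checkable trust signal.
# The client verifies the certificate chain and hostname, then reads the
# identity fields the certificate attests to.
import socket
import ssl


def inspect_tls_trust_signal(hostname: str, port: int = 443) -> dict:
    """Connect to a host, verify its certificate against the system trust
    store, and return the attested identity fields."""
    context = ssl.create_default_context()  # verifies chain and hostname
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    return {
        "subject": cert.get("subject"),
        "issuer": cert.get("issuer"),
        "valid_until": cert.get("notAfter"),
    }


if __name__ == "__main__":
    # Example hostname chosen for illustration only.
    print(inspect_tls_trust_signal("www.verance.com"))
```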

2. Digital watermarking offers a durable, practical means of identifying provenance metadata.

Section 3.1.1 of the report provides substantial discussion of digital watermarking and its use for attaching durable labels to media content. The report’s analysis is primarily based on research done in the context of copyright communication, where the watermark carries copyright assertion information directly within the media asset.

The discussion overlooks a highly relevant and widely used application of digital watermarking, which is the use of watermarking to identify metadata by reference. In this use of the technology, a digital watermark embedded in a media asset carries information that enables retrieval of metadata associated with the asset from a separate data source; e.g., a database.
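As a simple illustration of this “identify metadata by reference” pattern (a sketch added for this posting, using hypothetical function and record names rather than any particular standard’s API), the watermark payload carries only a short identifier, and the full provenance record is looked up in a separate data store:

```python
# Illustrative sketch of identifying provenance metadata by reference:
# the watermark carries a short identifier; the metadata lives elsewhere.
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class ProvenanceRecord:
    asset_id: str
    creator: str
    created_at: str
    toolchain: str


# Stand-in for a database or network metadata service keyed by the watermark payload.
METADATA_STORE: Dict[str, ProvenanceRecord] = {
    "a1b2c3": ProvenanceRecord(
        asset_id="a1b2c3",
        creator="Example Broadcaster",
        created_at="2024-05-01T12:00:00Z",
        toolchain="studio-encoder",
    ),
}


def detect_watermark_payload(media_bytes: bytes) -> Optional[str]:
    """Placeholder for a real watermark detector, which would extract the
    embedded identifier from the media samples; returns a fixed value here
    purely so the sketch runs end to end."""
    return "a1b2c3" if media_bytes else None


def resolve_provenance(media_bytes: bytes) -> Optional[ProvenanceRecord]:
    """Detect the watermark, then identify the metadata by reference."""
    payload = detect_watermark_payload(media_bytes)
    if payload is None:
        return None  # no trust signal present in the asset
    return METADATA_STORE.get(payload)  # metadata retrieved from the separate store


if __name__ == "__main__":
    print(resolve_provenance(b"\x00\x01fake media samples"))
```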

Section 3.1.2 of the draft NIST AI 100-4 report includes a paragraph titled “Using Digital Fingerprints to Identify Metadata” that describes a related approach based on the use of digital fingerprint technology to reference metadata associated with content that is stored in a database. Use of digital watermarking to identify metadata can also provide this capability.

In fact, there exist openly-specified, interoperable standards for the use of digital watermarking to identify metadata. ATSC, the US-based international broadcast standards organization, has published a set of standards for this function (ATSC A/334, A/335, and A/336) which have been adopted by numerous television broadcasters and are widely deployed to enable recovery of timed metadata associated with broadcast video services. The draft NIST AI 100-4 report cites the elements of this standard that specify the physical data transmission layers of this watermark system (A/334 and A/335), but overlooks the highly relevant upper layer specifications (A/336) that enable use of these watermarks for publication and retrieval of timed metadata from network servers in service of authentication.

The use of digital watermarking to identify provenance metadata is receiving considerable attention currently. The Coalition for Content Provenance and Authenticity (C2PA), a leading multi-industry initiative developing provenance authentication standards, incorporates support for the use of digital watermarking to associate provenance manifests with content; the Content Authenticity Initiative (CAI) advocates for the use of digital watermarking for provenance authentication; and recent research details how the C2PA provenance authentication and ATSC watermarking standards can be used in combination to attain a fully open and interoperable means of durable provenance authentication of broadcast content.
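The sketch below (again illustrative only; the function names and the signature check are placeholders, not drawn from the C2PA or ATSC specifications) outlines the combined flow this enables: the watermark yields a resolvable reference, the referenced provenance manifest is fetched from a network server, and its signature is verified before any provenance claim is trusted.

```python
# Illustrative sketch of combining a watermark-carried reference with a
# signed provenance manifest fetched from a network server.
import json
import urllib.request
from typing import Optional


def fetch_manifest(manifest_url: str) -> dict:
    """Retrieve the provenance manifest that the watermark-carried reference points to."""
    with urllib.request.urlopen(manifest_url, timeout=10) as resp:
        return json.load(resp)


def signature_is_valid(manifest: dict) -> bool:
    """Placeholder for cryptographic verification of the manifest signature
    against a trusted certificate chain; in a real system this step is what
    guards against the forgery risk discussed in point 3 below."""
    return "signature" in manifest  # stand-in check only


def provenance_for_watermark(manifest_url: str) -> Optional[dict]:
    """Resolve a watermark-carried reference to verified provenance, or None."""
    manifest = fetch_manifest(manifest_url)
    return manifest if signature_is_valid(manifest) else None
```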

3. Forgery of provenance is a greater security risk than removal of provenance authentication.

The draft NIST AI 100-4 report includes substantial discussion of security risks associated with provenance authentication and content labeling. The discussion identifies watermark removal and tampering as relevant threats and cites numerous research papers related to watermark security. This discussion, as currently presented, would benefit from additional clarity and context related to the risks associated with forgery of provenance. Specifically:

• The draft report would benefit from an unambiguous definition for tampering, which we understand, from the context in which it is used, to mean forgery of untrue provenance or labels. A clear definition of this fundamental technical term, used throughout the document, is important for the report’s analysis and conclusions to be properly understood.

• The draft report would benefit from acknowledgement of the substantially greater harm of forgery attacks over removal attacks. As detailed in section 1 of our comments, the natural and beneficial end-state of a provenance authentication and labeling ecosystem is for authenticity labels to reach a level of widespread use for both synthetic and non-synthetic content so that their presence becomes a normalized public expectation. In this context, absence of trust signals – the result of a successful removal attack – is of only modest harm because it increases consumer suspicion about the content’s authenticity. In contrast, a false trust signal (e.g. attribution of disinformation to a trusted source) can be extremely harmful to consumer trust.

• The draft report inaccurately cites research that relates solely to watermark removal attacks as relating to both watermark tampering and removal attacks. In fact, none of the cited references related to watermark security include any analysis of tampering vulnerabilities; they consist exclusively of research on watermark removal attacks. This error is highly misleading to the reader and should be corrected.

• The draft report would benefit from discussion of removal and forgery threats facing cryptographically authenticated metadata and fingerprinting-based provenance and labeling solutions. The current draft discusses these threats in detail with respect to watermarking, while failing to consider them at all in the context of the discussion of metadata and fingerprinting. This imbalance risks leading readers to conclude that no such threats exist when, in fact, they have been widely studied.

ABOUT VERANCE

Verance® Aspect® is a global watermarking platform that powers broadband features on broadcast television by enabling sports betting, dynamic advertising, and interactivity across all screens and distribution paths. Aspect supports new and existing industry standards including ATSC 3.0 and HbbTV and works in today’s ATSC 1.0 broadcasting environment. Leading programmers across the United States, including FOX, Graham, Gray, Sinclair, PBS affiliates, and Capitol Broadcasting, have deployed Aspect.

Verance’s AI watermark as well as content measurement and enhancement technologies are at the forefront of innovation and set the industry standard for television, movies, and music. Our solutions have been adopted by over 100 leading technology and entertainment companies and deployed in over 500 million consumer products worldwide. For more information, visit: http://www.verance.com
