AI is not just an opportunity or a threat – it’s both

By Chris Black, Chief Marketing Officer, Vizrt.

In the world of live production, AI is nothing new. I’ve seen it automate repetitive tasks in production assistance, freeing teams to focus on creative work. It’s used to streamline production workflows, bring automation and speed to sports coverage, render more realistic 3D virtual environments, and more – in essence, to improve efficiency, a crucial factor in any live production.

But in the last few years, AI has expanded into another side of visual storytelling: sourcing information and producing visual media itself.

The risks of AI-generated information and media have prompted reasonable reluctance around the world, particularly among those who work in news. The primary concern is how easily AI can be used to churn out misleading content or manipulate news stories for clicks.

So, how can you guarantee that this widely available tool isn’t misused? Well, the short answer is, you can’t. But instead of focusing on the problem, I would like to talk about the efforts being made to find solutions.

An eye for AI

Educating people to identify AI-generated content is part of the solution. The BBC, for instance, has released an AI quiz online where viewers can test their ability to tell a real video or image from an AI-generated one. Even when an image was not created by AI but was altered with it, the BBC explains that use to the viewer – illustrating the many ways AI can be used to modify material, in different parts and to different extents.


Staying up to date on the ways AI can be used to create or modify content is key to staying alert to its possible use in spreading disinformation. Additionally, looking for common minor errors or contextual improbabilities – such as the widely circulated image of the Pope “wearing” a Moncler-like white puffer jacket – keeps your instinct to question what you’re seeing sharp.

Another key way to verify the use of AI is to trace where the content came from – in other words, to establish its provenance.

Understanding provenance

To find provenance is to find origin. Knowing where a piece of content originated, and if and how it has been modified, lets a news consumer trust that what they’re seeing is real.

This is exactly what Media Cluster Norway is addressing with Project Reynir, which “aims to elevate technology and solutions that ensure a sustainable democracy.” Together with tech partners like Vizrt, they’re advancing the C2PA Content Credentials standard.

What is C2PA? The Coalition for Content Provenance and Authenticity (C2PA) has developed an open technical standard that gives publishers, creators, and consumers the ability to trace the origins of different types of media.

Skipping the technical talk: in essence, C2PA allows every viewer to trace an image’s journey from its original capture to its final use in a news story, helping build confidence in the authenticity of the media they consume. Of course, to work, it must be widely adopted. Thankfully, many news outlets, companies, and organizations are embracing the effort. These include Microsoft, Nikon, and OpenAI, which has shared plans to include the open technical standard in any imagery it generates or modifies.

How does it work? A Content Credentials icon – a minimalist pin with the letters “CR” – sits in the top-left corner of the piece of media. Hovering over the icon reveals a sidebar with verified information about the image or video: the publisher of the content, where and when it was created, whether it was modified, and more. It can also flag when information is missing; for instance, it will tell the viewer if the content’s origin is unknown, in which case it is up to the viewer to decide whether or not to trust it.
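To make that sidebar concrete, here is a minimal sketch of how such a summary could be assembled, assuming a simplified manifest has already been extracted from the media file. The field names used here (publisher, captured, edits) are hypothetical stand-ins, not the official C2PA schema; real Content Credentials are cryptographically signed, embedded in the file, and read with dedicated C2PA tooling.

```python
# A minimal sketch: summarizing a simplified, hypothetical Content
# Credentials manifest. The field names ("publisher", "captured",
# "edits") are illustrative stand-ins, NOT the official C2PA schema.
from typing import Optional


def summarize_manifest(manifest: Optional[dict]) -> str:
    """Build the kind of summary a 'CR' sidebar might display."""
    if manifest is None:
        # No credentials at all: origin unknown, trust is the viewer's call.
        return "No Content Credentials found. Origin unknown."

    lines = []
    lines.append(f"Publisher: {manifest.get('publisher', 'not recorded')}")

    captured = manifest.get("captured")  # hypothetical capture record
    if captured:
        lines.append(f"Captured: {captured.get('when', 'unknown date')}, "
                     f"{captured.get('where', 'unknown location')}")
    else:
        lines.append("Origin: not recorded. Treat with caution.")

    edits = manifest.get("edits", [])  # hypothetical list of edit actions
    lines.append("Modifications: " + (", ".join(edits) if edits else "none recorded"))

    return "\n".join(lines)


# Example: a manifest with a known origin and two recorded edits.
example = {
    "publisher": "Example Newsroom",
    "captured": {"when": "2024-05-01", "where": "Bergen, Norway"},
    "edits": ["cropped", "color corrected"],
}
print(summarize_manifest(example))
```

The important design point survives the simplification: when no credentials are present, the tool doesn’t guess; it says so, and leaves the trust decision to the viewer.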

Solutions are a work in progress

Although it sounds promising, this effort is still a work in progress. Tackling the potential misuse of AI-generated media for disinformation is not simple, and there will never be a one-size-fits-all solution.

It is encouraging that big players in media and tech are stepping up to the plate and laying the groundwork to bring media authenticity to everyone, from local newsrooms to global broadcasts. We can all help counter the potentially harmful use of any available technology by engaging with the efforts to combat it and staying educated as it advances.

Chris Black, Chief Marketing Officer at Vizrt, writes exclusively for Tech For Good.

Chris Black

Chris Black is Chief Marketing Officer at Vizrt. With a career in the media industry spanning more than 25 years, Chris has worked for software vendors and as an art director and technical director in several television stations across the United States. At Vizrt, Chris has applied his real-world broadcasting experience and technical know-how to become a leading thinker in the fields of marketing, new media, live production, and augmented reality technology.
