Our Work on AI

For years, WITNESS has anticipated and responded to the evolving threats and opportunities of new audiovisual technologies – like generative AI.

As a trusted voice on protecting authentic video documentation, WITNESS leverages deep expertise, research, and cross-regional networks to shape the development of audiovisual technologies and infrastructure, along with the policies, laws, and regulations that govern them.

Whether confronting the weaponization of AI against vulnerable communities, its impact on elections and democracy, or even its potential to drive positive change, we work to ensure these technologies strengthen – rather than undermine – human rights and our broader information ecosystem.

Our strategy on AI centers on four pillars that support frontline information actors to:

SHAPE AI development and regulation to protect democratic values and human rights.

RESIST the harmful impacts of AI on trust.

ADOPT beneficial AI tools and tactics to strengthen their work.

REIMAGINE how to communicate the truth within an AI-mediated information ecosystem.

Our work on AI includes:

Advocating for system-level change

We engage with tech companies and policymakers to shape the design and deployment of technologies and policies that support truthful witnessing and resist manipulation. Our advocacy work complements grassroots efforts with long-term structural impact.

Learn more about our policy submissions and advisory opinions, or how frontline consultations shape our advocacy work.

Providing “future-proof” guidance and training

As AI makes it easier to dismiss real footage as fake and to harm communities fighting for human rights, we continuously “future-proof” our guidance and trainings, helping people adopt simple, practical steps that “fortify the truth” of their video documentation and make it more resistant to doubt.

Expanding access to AI detection training

We provide accessible online trainings to equip activists, journalists, fact-checkers, and other human rights defenders with the skills to use detection tools effectively and responsibly, while understanding their capabilities and limitations.

Building verification skills beyond AI detection

Before turning to AI detection tools, you can and should use a range of other verification techniques, such as cross-referencing content and verifying sources. We deliver training on these skills, with ongoing support to ensure lasting impact.

Facilitating access to expert support and tools

Some cases require deeper forensic analysis, yet most communities lack access to the necessary experts and tools. We fill this gap through our Deepfake Rapid Response Force, the first global mechanism for escalating suspected deepfakes and manipulated content for expert review and timely response.

Learn more about our Deepfake Rapid Response Force.

Turning learnings into guidance for millions

Every WITNESS initiative generates concrete learnings, which we translate into practical resources. These materials are strategically distributed – reaching millions and strengthening global resilience to deceptive audiovisual AI.

Featured work

This tipsheet offers key insights into the use of AI detection tools for images, video, audio, and text. It highlights their capabilities and limitations, including common issues like false positives, language challenges, and variable accuracy. It also encourages transparency in verification processes and clear communication about how conclusions are reached. Ethical considerations, contextual framing, and media literacy are central to promoting critical engagement with AI-generated content.

This forward-looking report investigates the evolving relationship between synthetic media and the information landscape in situations of armed conflict and widespread violence, with a particular focus on implications for conflict resolution and peace processes. It is available in English and Arabic.

As generative AI technologies progress, synthetic media is becoming more realistic. We therefore see growing demand for AI detection tools that can determine whether a piece of audio or visual content has been generated or edited using AI. This piece discusses some of the limitations of detection tools and how to decide when to use them.

We’re fast approaching a world where widespread, hyper-realistic deepfakes lead us to dismiss reality. Watch WITNESS Executive Director Sam Gregory’s TED Democracy talk, in which he highlights three key steps to protecting our ability to distinguish the human from the synthetic, and explains why fortifying our perception of truth is crucial to our AI-infused future.

Technical standards are not neutral: they shape human rights in the digital age. This paper examines WITNESS’s participation in the Coalition for Content Provenance and Authenticity (C2PA) to explore how human rights may be meaningfully embedded in technical standard-setting processes.

While generative AI and synthetic media have creative and commercial benefits, these tools are connected to a range of harms that disproportionately impact vulnerable communities. This article via TechPolicy, recently featured in President Obama’s AI reading list, explores how legislators can center human rights by drawing on the thinking of human rights organizations and professionals with regard to transparency, privacy, and provenance in audiovisual content.

On September 12, WITNESS’ Executive Director, Sam Gregory, presented testimony to the US Senate Subcommittee on Consumer Protection, Product Safety & Data Security on “The Need for Transparency in Artificial Intelligence.” The testimony, which can be read here, focused on how to maximize the benefits and minimize the harms and risks of multimodal audiovisual generative AI.

Reports and articles

While watermarks and metadata standards like C2PA gain momentum, a significant portion of online content will always exist without traceable provenance. This blog post explores the implications of this divide, which raises serious concerns about equity and human rights.

As generative AI technology evolves, so do the tools designed to detect it. Our blog post discusses the need for standards to evaluate the effectiveness of AI detection tools, informed by their application in real-world scenarios.

In this co-authored piece for Just Security, Raquel Vazquez Llorente shares high-level findings from work by WITNESS and the TRUE project, exploring how synthetic media impacts trust in the information ecosystem. 

This guide is intended to assist judges and other decision makers in their assessment of open source information, by explaining some of the most common open source investigative techniques.

This article discusses the need to ‘fortify the truth’ by fostering resilient witnessing practices that can ensure trustworthy videos and strengthen the narratives of vulnerable communities. It identifies and speculates on actions at the tactical, strategic, tools, technology, and policy levels, drawing on the human rights organization WITNESS’s work on proactive preparation for emerging technologies and technical infrastructures.

Generative AI can protect witnesses’ identities, visualize survivors’ testimonies, reconstruct places, and create political satire. Check out our blog post about using generative AI and synthetic media for human rights advocacy, the ethical challenges it poses, and the questions that organizations can ask.

How do we ensure technical solutions for enhancing confidence in media help rather than harm? In this article, Sam Gregory discusses some of the core issues in pursuing this goal.

Warning labels on AI-generated media give viewers little context. Artists and human rights advocates have forged a more effective, and more creative, path. Read more here.

Around the world, deepfakes are becoming a powerful tool for artists, satirists and activists. But what happens when vulnerable people are not “in on the joke,” or when malign intentions are disguised as humor? Read this report that focuses on the fast-growing intersections between deepfakes and satire. Who decides what’s funny, what’s fair, and who is accountable?

This report focuses on 14 dilemmas that touch upon individual, technical, and societal concerns around assessing and tracking the authenticity of multimedia. It examines the impact, opportunities, and challenges this technology holds for activists, human rights defenders, and journalists, as well as the implications for society at large if verified-at-capture technology were introduced at a larger scale. Read the full report.

This article examines the various issues that arise when using technology to document crimes, and posits that communities affected by conflict should be at the centre of how documentation tools are developed and deployed.

How do we best prepare for, and not panic over, generative AI? WITNESS’ Sam Gregory discusses one area of preparation: authenticity and provenance infrastructure, which shows the work of how media was made, where it came from, how it was edited, and how it was distributed.

Other resources

What are the ethics of using deepfakes to anonymize sources in non-fiction media? What layers of consent require consideration? What are the futures, the risks, and the opportunities of these types of manipulations? What strategies can non-fiction media makers (journalists, documentarians, and artists) implement to navigate the complex landscape of these technologies? See this conversation, which includes WITNESS’ Raquel Vazquez Llorente.

The Partnership on AI’s Glossary for Synthetic Media Transparency Methods provides definitions for a number of key synthetic media transparency terms. WITNESS took part in a series of workshops run by PAI that directly fed into the creation of this glossary.

WITNESS co-chairs the Threats and Harms task force of the C2PA, where it leads the harm assessment of the specifications, which are designed to track the source and history of multimedia across devices and platforms. WITNESS has influenced this and related initiatives from an early stage to empower critical voices globally and bolster a human rights framework. Read our blog.

Provenance and authenticity tools would enable you to show a range of information about how, where, and by whom a piece of media was created, and how it was subsequently edited, changed, and distributed. Check out this video series to learn more about provenance and authenticity, the C2PA standards, and how we may fortify truth for accountability and awareness.

Deepfakery is a series of critical conversations exploring the intersection of satire, art, human rights, disinformation, and journalism. Join WITNESS and the Co-Creation Studio at MIT Open Documentary Lab for interdisciplinary discussions with leading artists, activists, academics, filmmakers, and journalists. See the full series here.