DETECTOR

Deepfake Evidence and Technology for Forensic Content Oversight and Research
Project ID
Funding Organization:
Funding Programme:
HORIZON-CL3-2024-FCT-01
Funding Instrument:
Research and Innovation Action
Start Date:
01/09/2025
Duration:
36 months
Total Budget:
4,489,405 EUR
ITI Budget:
812,500 EUR
Scientific Responsible:
Dr. Dimitrios Zarpalas
AI is transforming law enforcement, offering new tools for policing but also enabling advanced criminal tactics that challenge traditional methods. The global nature of crime, including cyber threats, trafficking, and terrorism, calls for innovative solutions as LEAs face vast data volumes and increasingly sophisticated criminal activities. A particular concern is the rise of deepfakes: highly realistic but fabricated audio, video, or text that can depict individuals saying or doing things they never did. Deepfakes pose serious risks to politics, the economy, and social trust. Examples include fabricated videos of political figures and voice-cloned audio used for financial fraud, often spread through social networks to deceive and defraud on a large scale.

Forensic institutes and courts struggle to distinguish authentic evidence from AI fabrications, especially in cases involving national security. Despite promising detection research, existing methods fall short: current models rely on limited, non-diverse datasets and produce results with limited legal admissibility.

The DETECTOR initiative addresses these challenges, supporting LEAs and forensic experts in analyzing altered media. It offers an integrated solution through cross-border collaboration among AI researchers, LEAs, forensic scientists, legal experts, and ethicists. DETECTOR's goals include developing specialized tools for detecting media manipulation, creating comprehensive datasets, researching cross-border digital evidence exchange, engaging stakeholders, informing policymakers, and training forensic experts in digital media and AI. Through these efforts, DETECTOR seeks to safeguard the authenticity of digital evidence and strengthen forensic capabilities against AI-driven media manipulation across Europe.

Our team was tasked with: 1) project coordination, 2) head avatar reconstruction for fake data creation through reenactment, and 3) explainable deepfake detection algorithms.

Consortium

ETHNIKO KENTRO EREVNAS KAI TECHNOLOGIKIS ANAPTYXIS
FUNDACION CENTRO DE TECNOLOGIAS DE INTERACCION VISUAL Y COMUNICACIONES VICOMTECH
ENGINEERING – INGEGNERIA INFORMATICA SPA
SHEFFIELD HALLAM UNIVERSITY
CYBERCRIME RESEARCH INSTITUTE GMBH
STOWARZYSZENIE POLSKA PLATFORMA BEZPIECZENSTWA WEWNETRZNEGO
DIGINNOV – DIGITAL INNOVATION CONSULTING S.R.L.
UNIVERSITA DEGLI STUDI DI FIRENZE
SISAMINISTERIO
Netherlands Forensic Institute
Ministerio da Justica Portugal
MINISTERO DELLA DIFESA
THELOGICALLY LTD
HOME OFFICE London
UNIVERSIDAD DE LA LAGUNA

Contact

Dr. Dimitrios Zarpalas
(Scientific Responsible)
Building B - Office 0.18

Information Technologies Institute
Centre of Research & Technology - Hellas
1st km Thermis - Panoramatos, 57001, Thermi - Thessaloniki
Tel.: +30 2310 464160 (ext. 145)
Fax: +30 2310 464164
Email: zarpalas@iti.gr