Thales Unveils Deepfake Detection at European Cyber Week

At this year's European Cyber Week, held from November 19-21 in Rennes, Brittany, Thales unveiled its deepfake detection metamodel. Developed in collaboration with France's Defence Innovation Agency (AID), the model is designed to combat the growing risk of identity fraud and image manipulation fuelled by AI technologies.

As AI-generated content becomes more commonplace, the threat posed by deepfakes has grown sharply. Platforms such as Midjourney, DALL-E and Firefly can now produce hyper-realistic images and videos that are increasingly difficult to distinguish from genuine content. This has serious implications for sectors ranging from cybersecurity to media integrity, and some analysts warn that AI-driven fraud could cause substantial financial losses in the coming years.

According to Gartner, around 20 percent of cyberattacks in 2023 involved deepfake content as part of disinformation and manipulation efforts. The financial sector, in particular, has seen an uptick in the use of AI-generated images for identity theft and phishing attacks.

Christophe Meyer, Senior Expert in AI and CTO of Thales’s AI accelerator, cortAIx, explained, ‘Thales’s deepfake detection metamodel addresses the problem of identity fraud and morphing techniques. By aggregating multiple methods, such as neural networks and noise detection, we’re better equipped to protect the growing number of solutions requiring biometric identity checks.’
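The quote describes score-level aggregation across several detectors. As a rough illustration of how a metamodel might fuse such signals, here is a minimal sketch in Python; the detector names, weights and threshold are hypothetical stand-ins, since Thales has not published the internals of its metamodel.

```python
import numpy as np

# Hypothetical per-detector scores in [0, 1], where higher means
# "more likely AI-generated". The detectors, weights and threshold
# are illustrative stand-ins, not Thales's actual metamodel.
def aggregate_scores(scores: dict[str, float],
                     weights: dict[str, float],
                     threshold: float = 0.5) -> tuple[float, bool]:
    """Fuse individual detector scores into a single weighted verdict."""
    total_weight = sum(weights[name] for name in scores)
    fused = sum(scores[name] * weights[name] for name in scores) / total_weight
    return fused, fused >= threshold

scores = {"clip": 0.71, "dnf": 0.64, "dct": 0.58}   # made-up detector outputs
weights = {"clip": 0.4, "dnf": 0.35, "dct": 0.25}   # made-up fusion weights
fused, is_fake = aggregate_scores(scores, weights)
print(f"fused score {fused:.2f} -> {'deepfake' if is_fake else 'authentic'}")
```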

Advanced detection techniques

Thales's deepfake detection metamodel combines several advanced methods to verify the authenticity of images (brief illustrative sketches of each idea follow the list). These include:

  • CLIP (Contrastive Language-Image Pre-training): A method that connects image and text to analyse inconsistencies in AI-generated images by comparing them with their textual descriptions. This helps spot visual artefacts and irregularities that may indicate manipulation.
  • DNF (Diffusion Noise Feature): This method leverages current image-generation architectures to detect deepfakes by identifying the “noise” added to images during creation. By assessing this noise, the model can flag images that may have been generated by AI.
  • DCT (Discrete Cosine Transform): This technique analyses the spatial frequencies of an image. DCT can reveal hidden artefacts that often go unnoticed by the human eye but are present in AI-generated content.
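For a flavour of how the first idea can be exercised in practice, the sketch below uses the publicly available CLIP checkpoint from Hugging Face's transformers library to compare an image against two candidate descriptions. The file name and prompts are illustrative, and this zero-shot probe is only a loose approximation of the consistency analysis described above, not Thales's implementation.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Public CLIP checkpoint; "suspect.jpg" and the prompts are hypothetical.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("suspect.jpg")
prompts = ["a real photograph", "an AI-generated image"]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # similarity to each prompt
probs = logits.softmax(dim=-1)[0]
print(f"P(real)={probs[0]:.2f}  P(generated)={probs[1]:.2f}")
```

DNF proper reconstructs the noise trajectory through a diffusion model's inverse process, which is too involved to sketch here. As a crude stand-in for the underlying intuition, noise-residual analysis, one can compare an image against a denoised copy of itself:

```python
import numpy as np
from PIL import Image
from scipy.ndimage import median_filter

img = np.asarray(Image.open("suspect.jpg").convert("L"), dtype=np.float64)
residual = img - median_filter(img, size=3)   # high-frequency residual
# In a real detector this statistic would be compared against reference
# distributions for camera images versus generated images.
print(f"residual std: {residual.std():.2f}")
```

The frequency-domain idea behind the DCT method is the easiest to sketch. The example below computes a 2-D DCT with SciPy and measures how much energy sits in the highest spatial frequencies; what counts as an anomalous ratio would have to be calibrated on real data, so the statistic here is illustrative only.

```python
import numpy as np
from PIL import Image
from scipy.fft import dctn

img = np.asarray(Image.open("suspect.jpg").convert("L"), dtype=np.float64)
coeffs = dctn(img, norm="ortho")              # 2-D discrete cosine transform

# Share of energy in the highest-frequency quadrant; generated images
# often exhibit atypical spectral statistics in this region.
h, w = coeffs.shape
high = coeffs[h // 2:, w // 2:]
ratio = np.sum(high ** 2) / np.sum(coeffs ** 2)
print(f"high-frequency energy ratio: {ratio:.4f}")
```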

Thales's AI team has worked extensively on developing these methods. With over 600 AI researchers, including 150 based at the Saclay technology cluster near Paris, the company has been a key player in applying AI to mission-critical systems.

Growing need for AI protection

The Thales deepfake detection metamodel comes at a crucial time, as disinformation and the misuse of AI technologies escalate across industries. Alongside its work on deepfake detection, the company's Friendly Hackers team has developed BattleBox, a toolbox designed to assess the vulnerabilities of AI-enabled systems, particularly to adversarial attacks and data leaks.
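BattleBox's internals are not described beyond this, but the kind of adversarial probe such a toolbox might run can be illustrated with the classic fast gradient sign method (FGSM). The PyTorch sketch below assumes a hypothetical image classifier; it is a generic example of adversarial robustness testing, not BattleBox's API.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module, x: torch.Tensor,
                labels: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    """One-step FGSM: nudge each pixel in the direction that raises the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), labels)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()   # keep pixels in the valid range

# Hypothetical usage: compare accuracy on clean vs. perturbed inputs.
# x_adv = fgsm_attack(classifier, images, labels)
# The gap between the two accuracies is a simple robustness measure.
```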

Thales’s advancements in AI security were also showcased during the CAID challenge in 2023, where the company demonstrated its ability to locate AI training data even after it was deleted, highlighting its commitment to confidentiality and data protection.
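The article does not say how the training data was located, but a standard family of techniques for testing whether specific samples influenced a model is membership inference. The loss-threshold sketch below is a generic illustration of that idea under assumed names, not the method Thales demonstrated at CAID.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def membership_score(model: torch.nn.Module, x: torch.Tensor,
                     label: torch.Tensor) -> float:
    """Unusually low loss on a sample hints the model saw it in training."""
    return -F.cross_entropy(model(x), label).item()

# A sample is flagged as a likely training member when its score exceeds
# a threshold calibrated on data known not to be in the training set.
```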

With these cutting-edge solutions, Thales is positioning itself at the forefront of the fight against deepfakes and AI-driven fraud, ensuring greater security for both consumers and industries worldwide.