Over the past three years, deepfake technology has improved dramatically in quality, and with a surge in open-source software, deepfake capability is now more accessible than ever before. But how could threat actors use this technology for malicious purposes?
To find out more, Matt Lewis, Group Research Director, has been working with students from University College London (UCL) to explore the ease of use and output quality of some common open-source deepfake frameworks, and to see how they could become yet another trick in the social engineering handbook.
As part of the research, Matt tasked the students with taking a three-minute clip of a well-known film and, using two leading open-source applications, manipulating the scene to replace the lead actor’s face with his own. The aim was to create a video that could serve as a demonstration of a cyber security risk.
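To see why this is worrying, even a naive face swap takes only a few lines of code. The sketch below is purely illustrative and is not one of the frameworks used in the research: it simply detects a face in a target frame with OpenCV and blends a source face over it. The function name and file paths are hypothetical.

```python
# Naive single-frame face swap: an illustrative sketch only, far cruder
# than the neural-network-based frameworks explored in the research.
import cv2
import numpy as np

# Haar cascade face detector that ships with opencv-python.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def naive_face_swap(source_face_path: str, frame_path: str, out_path: str) -> None:
    source = cv2.imread(source_face_path)  # face to paste in
    frame = cv2.imread(frame_path)         # frame to manipulate

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise RuntimeError("no face found in target frame")

    # Resize the source face to fit the first detected face region.
    x, y, w, h = faces[0]
    resized = cv2.resize(source, (w, h))

    # seamlessClone blends the pasted region into the surrounding
    # skin tones, hiding the hard edge a plain copy would leave.
    mask = 255 * np.ones(resized.shape[:2], dtype=np.uint8)
    center = (x + w // 2, y + h // 2)
    blended = cv2.seamlessClone(resized, frame, mask, center, cv2.NORMAL_CLONE)
    cv2.imwrite(out_path, blended)
```

Real deepfake frameworks go much further, typically training an autoencoder on many images of both identities so that the swapped face tracks pose, lighting, and expression across every frame of a clip.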
Through this research, we have been able to understand the implications of fakery in the context of cyber security, the potential impact on our clients, and, more importantly, which techniques need to be researched to support deepfake detection. We are working to understand how this technology could be used and abused by adversaries in the near future, and how we can get ahead of the game through defensive measures and/or guidance for legislation and regulation around the use of such technology.
Matt says, “Working with the UCL CDT (Centre for Doctoral Training) in Data Intensive Science students is always a real joy; they are not only focused and dedicated to the task at hand, but also great fun to be around. By using non-cyber experts (the students are astrophysicists), we benefit from diversity of mindset and reduce issues around preconception, which allows for unfettered approaches to research.”
If you’d like to find out more about the risk of deepfakes, suggested mitigation techniques, and deepfake detection models, view the report in full below.