The new deepfake detection tools could have an impact on the upcoming US election in November, as well as other political campaigns around the world
Microsoft has launched a set of new tools to help combat deepfakes, which could be used ahead of important events such as the upcoming US election. Deepfakes have previously spread a huge amount of false information on the internet, affecting voters’ opinions and decisions.
So how does it work? The first tool, called “Video Authenticator”, has the ability to analyse an image or video clip to determine whether it has been edited using Artificial Intelligence (AI). The tool provides a confidence score indicating the likelihood that the media has been manipulated. For videos, it displays this confidence score in real time on each frame as the video plays. Microsoft claims that the tool “works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye”.
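Microsoft has not published the detector itself, but the per-frame scoring it describes can be sketched as a simple loop over video frames, each passed to a classifier that returns a manipulation probability. Everything here is illustrative: `detect_manipulation` is a placeholder toy heuristic standing in for the real blending-boundary model.

```python
# Hypothetical sketch of a per-frame confidence-score loop, in the spirit of
# Video Authenticator. The classifier below is a toy stand-in, NOT the real model.

def detect_manipulation(frame):
    """Placeholder classifier: return a manipulation probability in [0, 1].

    A real detector would look for blending boundaries and subtle
    fading/greyscale artefacts; this toy heuristic just flags frames
    whose average pixel value sits in an implausibly narrow band.
    """
    avg = sum(frame) / len(frame)
    return 0.9 if 127 <= avg <= 129 else 0.1

def score_video(frames):
    """Return one confidence score per frame, mirroring the real-time display."""
    return [detect_manipulation(frame) for frame in frames]

# Toy "video": each frame is a flat list of greyscale pixel values.
video = [
    [10, 240, 130, 50],    # varied pixels -> low manipulation score
    [128, 128, 128, 128],  # suspiciously uniform -> flagged by the toy heuristic
]
scores = score_video(video)  # one score per frame, e.g. [0.1, 0.9]
```

The point of the sketch is only the shape of the pipeline: frame in, score out, repeated for every frame as the video plays.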
Deepfakes are photos, videos or audio files that have been manipulated, or are used fraudulently, to show someone doing or saying something they never actually did or said. With AI, it has become relatively easy to manipulate faces from existing media to create realistic but false new images and videos. These deepfakes are then spread on social media as misinformation to influence user behaviour.
This new deepfake combatting technology was originally developed by Microsoft Research in coordination with Microsoft’s Responsible AI team and the Microsoft AI, Ethics and Effects in Engineering and Research (AETHER) Committee, which is an advisory board at Microsoft that helps to ensure that new technology is developed and released in a responsible manner.
Microsoft has also released a second tool that can both detect manipulated content and reassure people that the media they are viewing is genuine, by allowing video creators to certify that their content is authentic. The technology has two components: the first, built into Microsoft Azure, allows content creators to add certificates and digital hashes to their content. These authentication methods then “live with the content as metadata wherever it travels online”, according to Microsoft.
The second component is a reader that checks the certificates and matches the hashes, letting people know with a high degree of accuracy that the content is authentic and has not been changed, as well as providing details about who produced it. Viewers can access this reader through a browser extension or other methods.
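The certify-then-verify flow can be sketched in a few lines. This is a minimal illustration, not Microsoft's implementation (which is not public): a shared-secret HMAC stands in for the real certificate signature, and the `certify`/`verify` names and the producer field are hypothetical. The producer attaches a digest and signature as metadata; the reader recomputes the hash and compares.

```python
# Minimal sketch of certificate/hash-style provenance checking.
# Assumption: an HMAC over a SHA-256 digest stands in for the real
# certificate signature; production systems would use asymmetric keys.
import hashlib
import hmac

SIGNING_KEY = b"publisher-private-key"  # hypothetical publisher key

def certify(content: bytes) -> dict:
    """Producer side: attach a digest and a signature as metadata."""
    digest = hashlib.sha256(content).hexdigest()
    signature = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"digest": digest, "signature": signature, "producer": "Example News"}

def verify(content: bytes, metadata: dict) -> bool:
    """Reader side: recompute the hash and check it against the metadata."""
    digest = hashlib.sha256(content).hexdigest()
    expected_sig = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return (digest == metadata["digest"]
            and hmac.compare_digest(expected_sig, metadata["signature"]))

original = b"frame data of an authentic broadcast clip"
meta = certify(original)

ok_untouched = verify(original, meta)        # True: content matches its metadata
ok_tampered = verify(original + b"!", meta)  # False: any edit breaks the hash
```

Because the metadata travels with the content, any downstream edit — even a single byte — changes the recomputed hash and fails verification, which is what lets the reader vouch that the media is unchanged.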
This technology has been built by Microsoft Research and Microsoft Azure in partnership with the Defending Democracy Program. It will power a recently announced BBC initiative called Project Origin, which will help test the technology and support its drive to become a mainstream anti-deepfake standard that can be adopted across the board.
According to Tom Burt, Corporate Vice President of Customer Security & Trust, and Eric Horvitz, Chief Scientific Officer, these new tools will initially be available to organisations involved in the democratic process, including news and media outlets as well as political campaigns.
For the full announcement of this new deepfake protection technology, click here to read Microsoft’s press release.