Deep fakes are a powerful tool that has been weaponized to commit cybercrime, social engineering, fraud, and disinformation.
Deep fakes come in many forms, including face swapping, lip syncing, and audio-based deep fakes.
Low-level detection methods rely on ML models that identify artifacts or pixelation introduced by the deep fake generation process. High-level detection methods use models that identify semantically meaningful features, such as unnatural movements and phoneme-viseme mismatches.
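As an illustration of the low-level approach, generation pipelines that rely on upsampling often leave unusual high-frequency patterns in the image spectrum. The sketch below (a simplified, hypothetical heuristic, not a production detector) computes the fraction of spectral energy above a radial frequency cutoff; an anomalous ratio could flag a frame for closer inspection:

```python
import numpy as np

def high_freq_energy_ratio(image: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of an image's spectral energy above a radial frequency cutoff.

    A crude low-level feature: GAN upsampling can leave periodic
    high-frequency artifacts, so an unusual ratio may merit review.
    """
    # 2D power spectrum, shifted so the zero frequency sits at the center
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    # normalized radial distance of each bin from the spectrum's center
    r = np.sqrt(((yy - h / 2) / h) ** 2 + ((xx - w / 2) / w) ** 2)
    return float(spectrum[r > cutoff].sum() / spectrum.sum())
```

A flat image concentrates its energy at low frequencies and yields a ratio near zero, while noise-like high-frequency content pushes the ratio up. Real detectors would feed features like this (or raw pixels) into a trained classifier rather than a fixed threshold.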
Verifying the source of incoming videos and images is an important step in detecting deep fakes.
Businesses and governments should be aware of the cyber risks posed by deep fakes and take measures to protect against them, such as using a single-vendor security solution that can detect an attack at multiple choke points.