Deepfakes as a service

The future of malware delivery and biometric subversion

Realistic image synthesis has its proponents and detractors. While proponents celebrate the release of open-source tools such as Stable Diffusion as progress, cyber security practitioners highlight the risks: anyone with a reasonably affordable PC and a decent GPU can now conjure realistic images and videos. With AI-powered natural image synthesis, how will we know whether an image, voice, or video stream is real or synthetic?

Or, in other words, a deepfake?

Deepfake attacks using voice and video impersonation have already cost companies hundreds of thousands of dollars. And where there is money, the cyber criminal is never far away.

The problems posed by deepfakes include propaganda, misinformation (often intended to create political division), character attacks, deception, and fraud. Deception and fraud, in particular, must be on the radar of cyber security practitioners.

Deepfake as a Service

When ransomware hit the headlines five or six years ago, it caught attention because of the enormous sums of money demanded from affected companies. The bad actors had figured out early on that instead of collecting tiny amounts from individual victims, it makes more sense to extort a huge amount from a single company.

With the potential for substantial monetary rewards, vendors on the dark web are offering deepfake services for as little as 20 to 30 dollars per minute of video; at the more expensive end, prices run 100 to 150 dollars per minute.

Why should cyber security managers be worried?

Because open-source code and readily available materials exist on the internet, anyone with a little knowledge of deep learning can easily create deepfake videos and images.

Let us take a closer look at the ways in which deepfakes are used to attack users.

Scams targeting end-users

Deepfake scams target unsuspecting humans. For the cyber security manager, the concern is that end-users may find themselves at the receiving end of a fake voice or video message instructing them to perform a harmful act, such as a money transfer. Deepfakes are increasingly used to bolster Business Email Compromise (BEC) attacks.

Malware or ransomware delivery

Closely related to the attack vector of exploiting unsuspecting end-users is the possibility of bad actors delivering malware via fake messages. A phoney video or voice message mimicking a senior manager could instruct a junior employee to download a malware package that could open a backdoor.

Biometric subversion

Systems that rely on biometric authentication, such as facial recognition or voice-based authentication, could be compromised if bad actors turn deepfakes against them. Deepfakes could manipulate biometric data, such as the face, iris, or the loops of a fingerprint, to circumvent authentication systems.

Countermeasures

An Elon Musk deepfake on a page touting investment opportunities, and a deepfake video of cyber security expert Chris Sistrunk promoting a scam electrical device, circulated widely as advertisements on Facebook and a popular gaming platform.

Your employees may well see advertisements like these while at work. If they are unaware of the risks such ads hold, they may click on them, which could download a payload and pose a threat to your whole business.

Traditional security controls won’t work against deepfakes because, like phishing, they rely on psychological manipulation. Hence, continuous security awareness and cyber education (as cumbersome and tiring as they may sound) are essential to combat deepfakes.

Since we are in the early stages of developing comprehensive security controls against deepfakes, a change in mindset is essential. Lou Steinberg, former CTO of Ameritrade, says, “We are overfocused on authentication and under-focused on authorization, which comes down to intent.” In other words, instead of asking who the person is (authentication), ask what that person wants to do (authorization).

For example, if the system detects an unauthorised high-value financial transaction, that transaction should be automatically rejected and reported, even if the user has successfully authenticated.

With such an approach, even if deepfakes succeed in circumventing authentication controls, the authorization controls downstream could prevent the intended malicious act.
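The intent-based approach above can be sketched in code. This is a minimal, hypothetical illustration; the role names, transfer limits, and function names are assumptions invented for the example, not a real product's API. The point is that the authorization check runs after, and independently of, authentication:

```python
# Hypothetical sketch of intent-based authorization: even a session that
# passed authentication (perhaps via a deepfaked face or voice) cannot
# push a transaction past its role's limit. All names and values here
# are illustrative assumptions.

from dataclasses import dataclass

# Per-role transfer limits in dollars (illustrative values).
TRANSFER_LIMITS = {
    "junior_clerk": 5_000,
    "manager": 50_000,
    "cfo": 250_000,
}

@dataclass
class TransferRequest:
    user: str
    role: str
    amount: float
    authenticated: bool  # result of the upstream authentication step

def authorize_transfer(req: TransferRequest) -> tuple:
    """Decide on intent (what the user wants to do), not just identity."""
    if not req.authenticated:
        return (False, "rejected: not authenticated")
    limit = TRANSFER_LIMITS.get(req.role)
    if limit is None:
        return (False, "rejected and reported: unknown role")
    if req.amount > limit:
        # A deepfake that fooled authentication still stops here.
        return (False, "rejected and reported: amount exceeds role limit")
    return (True, "approved")

# A successfully authenticated session still fails the authorization check
# when the requested amount exceeds what the role may transfer.
ok, reason = authorize_transfer(
    TransferRequest(user="alice", role="junior_clerk",
                    amount=100_000, authenticated=True))
print(ok, reason)  # False rejected and reported: amount exceeds role limit
```

In a real deployment the limits would live in policy, and rejections would trigger reporting and review rather than a silent denial; the essential design choice is layering an intent check downstream of identity.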

Training Recommendations


To combat deepfakes, one must understand Artificial Intelligence, Machine Learning, and the cybercrimes related to them. This short video will help your employees gain a basic understanding of AI/ML-powered cybercrimes.
