Key Insights
Deepfake technology is increasingly used by cyber criminals to impersonate trusted individuals and trick victims into revealing sensitive information or transferring money. These attacks are on the rise, and as the technology improves, both individuals and organizations must remain vigilant to protect against them.
Who should read this?
- Individuals – Anyone using digital platforms, particularly for financial transactions or confidential communications.
- Organizations – Businesses, security teams, and employees who may be targeted by deepfake scams, especially those handling sensitive communications and financial transactions.
What are deepfake scams?
Deepfakes are manipulated videos, audio recordings, and images that look and sound like real people, created using artificial intelligence. In recent years, cyber criminals have begun using deepfake technology to impersonate executives, celebrities, and public figures to scam victims into transferring money or sharing confidential information. The rise of AI-driven tools has made these attacks increasingly accessible and convincing.
Examples of deepfake scams include:
- Impersonation of executives or leaders – Cyber criminals use deepfake videos to impersonate company leaders, instructing employees to carry out fraudulent financial transactions.
- Voice phishing (vishing) – Attackers use AI-generated voice recordings of executives to call employees and request financial transfers or other sensitive actions.
- Fake cryptocurrency schemes – Deepfake videos of prominent individuals are used to falsely promote cryptocurrency schemes, tricking viewers into investing.
A recent Forbes report highlights a significant surge in deepfake fraud attempts globally, with deepfakes now accounting for 6.5% of all fraud attempts worldwide. This sharp increase underscores the growing impact of deepfake scams on businesses and individuals alike. In fact, in 2024, deepfake attacks occurred every five minutes, making up 40% of all biometric fraud.
Despite growing awareness of deepfakes, a survey by iProov found that 43% of people didn’t believe they could tell the difference between a real video and a deepfake, and nearly one-third didn’t even know what a deepfake was. Many people still believe that live audio and video cannot be faked, even in 2026. This false sense of security is exactly what fraudsters exploit. The technology has advanced to the point where seeing, or even hearing, something no longer guarantees it is real.
Why does this happen?
- Advances in AI technology – AI and deep learning models have made it easier than ever to create convincing fake audio and video, which cyber criminals exploit to deceive victims.
- Lack of awareness – Many individuals and organizations still do not recognize deepfake scams, making them more susceptible to these increasingly realistic attacks.
- Rapid adoption of digital communications – As more businesses move online, the use of video conferencing, social media, and digital transactions creates new opportunities for deepfake scams.
What’s the risk?
- Financial loss – Deepfake scams can convince individuals and organizations to make significant financial decisions based on seemingly genuine instructions, leading to fraudulent transactions, misdirected payments, or unauthorized transfers.
- Reputational damage – When deepfake scams succeed, they can damage an organization’s reputation. Clients and customers may lose confidence in the organization’s ability to secure sensitive information, leading to long-term trust issues.
- Data breaches – By impersonating high-level executives or trusted individuals, deepfakes can deceive employees into sharing confidential information.
How to stay safe?
For individual users
- Be cautious with personal information on social media – Cyber criminals use social media to collect data for deepfake creation. They can study photos, videos, and public information to replicate your voice, appearance, and mannerisms. Limit the amount of personal information you share and consider setting your profiles to private.
- Don’t rely only on what you see or hear – Many people assume that if something looks or sounds real, it must be real. With deepfake technology, that’s no longer the case. Always verify requests through official or reliable means, such as a phone call or text, especially if they involve sensitive information or transactions.
- Report suspicious content – If you notice anything suspicious, immediately report it to your IT team for further investigation and action.
For organizations
- Consider setting up multi-channel verification processes – One of the easiest ways to fall victim to deepfakes is by relying on a single communication channel for important actions. Make sure all critical financial transactions or requests are verified through multiple communication channels, such as video calls, phone calls, or even in-person checks when possible.
- Encourage caution among employees – Employees are often the first line of defense against deepfake scams, but they might overlook suspicious requests. Train them to treat unexpected requests from executives or high-level figures with extra caution, even if they come in the form of video or voice calls.
- Keep a directory of approved communication methods – Maintain a list of trusted contacts and communication methods for each employee, including alternative emails or phone numbers, to authenticate requests.
- Verify requests through supervisors or managers – For high-risk actions, ensure that employees confirm requests with the requester’s manager or supervisor. This adds a layer of oversight that makes impersonation much harder to pull off.
- Strengthen authentication measures – Implement Multi-Factor Authentication (MFA) across critical systems to make it more difficult for attackers to bypass security even if they impersonate a trusted individual.
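To make the MFA recommendation concrete, below is a minimal, illustrative sketch of how a time-based one-time password (TOTP), one common MFA factor, is computed under RFC 6238 using only the Python standard library. The function name `totp` and its parameters are our own for illustration; in practice, organizations should rely on a vetted identity provider or authenticator library rather than rolling their own.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, digits=6, interval=30, at=None):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Number of whole time steps since the Unix epoch
    counter = int((time.time() if at is None else at) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at time 59s
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", digits=8, at=59))  # → 94287082
```

Because the code changes every 30 seconds and derives from a shared secret, an attacker who merely imitates a trusted person’s face or voice still cannot produce a valid code, which is exactly why MFA blunts deepfake-driven impersonation.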