Cybercriminals increasingly employ AI to improve their attack algorithms
Have you watched Alex Garland’s ‘Ex Machina’? If so, you already know the trouble a human-like AI could bring. That was pure fiction, but we cannot rule out such a possibility, with newer innovations surprising us every day!
Let us understand more about AI, its subfield Machine Learning (ML), and the cybercrimes associated with these technologies.
What are Artificial Intelligence (AI) and Machine Learning (ML)?
Artificial Intelligence (AI), a branch of computer science, deals with programming systems to perform tasks that normally require human intelligence. Machine Learning (ML), a subfield of AI, trains a machine to learn patterns from past data. Everyday examples include Netflix recommendations, face recognition, and Instagram feed ranking.
AI can speed up many processes that would cost a lot of time if done manually. For the same reason, cybercriminals use AI to fuel cyberattacks that outpace conventional hacking methods. By incorporating sophisticated, intelligent technology, cybercriminals can exploit loopholes in corporate IT networks, launch large-scale Denial of Service (DoS) attacks, and overwhelm the limited security capabilities of an average organization. Cyberattacks that harness AI are among the most prominent threats organizations face today.
Let us dive into some of the cybercrimes powered by AI/ML.
Deepfakes
Deepfakes are fake videos or audio clips made using advanced deep-learning and Artificial Intelligence (AI) techniques, convincing enough to fool the human eye and even bypass detection software. All threat actors need to do is supply images of the subject’s face from different angles; the deepfake model learns from this data, embeds the face into the required video, and generates the desired output.
Using a deepfake, threat actors carried out a $35 million bank heist in Hong Kong. They used AI voice cloning to disguise the cybercriminal’s voice as that of a company director the bank manager was acquainted with, manipulating him into authorizing the transfers.
AI-based spear phishing
Spear-phishing emails crafted using AI allow hackers to target victims more effectively and at scale. With AI, cybercriminals can generate subtle, well-informed, well-written phishing emails that convincingly mimic a trusted individual.
The AI does this by gathering personal information from social media feeds: team outings, details of colleagues, and their working relationships. After compiling these details, the model learns to predict and imitate the target’s tone of voice.
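To see why this kind of personalization is cheap to automate, here is a minimal, deliberately harmless sketch of the template filling a phishing-awareness team might use to simulate such lures in training exercises. All the names, profile fields, and template text below are hypothetical.

```python
# Sketch of automated lure personalization, as used in
# phishing-awareness simulations. Every name and field here is a
# made-up example, not a real toolkit or real scraped data.

scraped_profile = {
    "name": "Dana",
    "colleague": "Sam",
    "recent_event": "the Q3 team offsite",
}

TEMPLATE = (
    "Hi {name},\n"
    "Great seeing you at {recent_event}! {colleague} mentioned you "
    "handle vendor invoices - could you review the attached file?"
)

def personalize(profile: dict) -> str:
    """Fill the lure template with scraped profile details."""
    return TEMPLATE.format(**profile)

lure = personalize(scraped_profile)
print(lure)
```

The point is that once profile scraping is automated, producing thousands of individually tailored messages is a one-line loop, which is exactly what makes AI-assisted spear phishing scale.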
AI Botnet attack
AI-based botnets can overpower defence systems to launch massive DDoS attacks. The AI predicts the defenders’ strategies, helping the botnet devise new ways to exploit systems. In extreme scenarios, it can even poison the defenders’ machine-learning models, so that the targeted systems end up executing attacker-driven processes.
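To make the cat-and-mouse concrete, here is a toy version of the rate-based anomaly detection that defenders commonly run, and that adaptive botnets probe and try to stay under. The traffic counts, IP addresses, and cutoff are illustrative only, not a production detector.

```python
# Toy rate-based DDoS detection: flag source IPs whose request count
# sits far above the mean. Adaptive botnets learn to throttle each bot
# just below such thresholds. All numbers here are illustrative.
from statistics import mean, pstdev

requests_per_ip = {
    "10.0.0.1": 12, "10.0.0.2": 15, "10.0.0.3": 11,
    "10.0.0.4": 14, "198.51.100.7": 900,  # suspected bot
}

def flag_anomalies(counts: dict, z_cutoff: float = 1.5) -> list:
    """Return IPs whose request count is z_cutoff std-devs above the mean.

    Note: with few samples, one large outlier inflates the std-dev,
    so the cutoff here is deliberately low for this tiny example.
    """
    mu = mean(counts.values())
    sigma = pstdev(counts.values()) or 1.0
    return [ip for ip, n in counts.items() if (n - mu) / sigma > z_cutoff]

print(flag_anomalies(requests_per_ip))
```

A botnet that distributes the same 900 requests across 60 bots at 15 requests each would sail under this threshold, which is why static cutoffs lose to AI-driven attackers.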
An example of an AI botnet attack is the breach of TaskRabbit, an online home-services marketplace, in which millions of users’ bank account details were harvested. It reached the point where the website had to be taken offline to restore security.
AI-based password guessing
Cybercriminals are increasingly employing AI to improve algorithms for guessing users’ passwords. With neural networks and Generative Adversarial Networks (GANs), cybercriminals can quickly analyze vast password datasets and generate password variations.
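To illustrate why a model trained on leaked passwords beats blind brute force, here is a toy character-bigram model, far simpler than a GAN such as PassGAN, of the kind a password-strength auditor might use. The sample password list is made up; real attacks train on millions of leaked credentials.

```python
# Toy character-bigram model over a tiny, made-up password sample,
# showing how a learned model proposes *likely* candidates rather than
# enumerating the whole keyspace. This is a teaching sketch, not a tool.
import random
from collections import defaultdict

SAMPLE = ["password1", "passw0rd", "letmein", "iloveyou1", "password123"]

# Count which character tends to follow which ("^" = start, "$" = end).
transitions = defaultdict(list)
for pw in SAMPLE:
    for a, b in zip("^" + pw, pw + "$"):
        transitions[a].append(b)

def generate(rng: random.Random, max_len: int = 12) -> str:
    """Sample one password candidate from the bigram model."""
    out, ch = [], "^"
    while len(out) < max_len:
        ch = rng.choice(transitions[ch])
        if ch == "$":
            break
        out.append(ch)
    return "".join(out)

rng = random.Random(0)
candidates = [generate(rng) for _ in range(5)]
print(candidates)
```

Even this crude model only ever emits strings built from patterns seen in the training set, so its guesses cluster around real human habits; GANs push the same idea much further by learning whole-password structure instead of single-character transitions.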
Artificial intelligence/machine learning-driven cyberattacks are likely to spike in the coming years. Constant vigilance and staying aware of the latest cyber threat landscape will help you stay ahead of AI/ML-powered cyberattacks.