
Cybersecurity risks while using Generative AI

Generative AI & Cybersecurity

Generative AI is a powerful tool that uses deep learning techniques to analyze massive amounts of data and generate new content that closely resembles the data it was trained on. It automates tasks that once required human intelligence, saving time and resources.

ChatGPT and DALL-E are popular examples of generative AI applications. ChatGPT uses natural language processing to generate human-like text responses, while DALL-E generates images from text descriptions using deep learning.

Image: a cat flying an aeroplane, generated with OpenAI’s DALL-E within seconds.

Generative AI models used in organizations bring both benefits and risks to cybersecurity. The increased availability and popularity of tools like ChatGPT can also aid malicious actors in mounting cyber attacks.

Generative AI and cyber risks

Generative AI can improve security by detecting data leaks and monitoring access, but it can also be misused to create phishing emails and deepfake videos. ChatGPT’s growing popularity has also led to an increase in fake ChatGPT apps and links that can steal credentials.

ChatGPT is popular among employees because it offers quick access to information. However, employees may unintentionally copy and paste confidential company information into ChatGPT, exposing sensitive data. Caution is therefore advised when accessing links and when entering data into ChatGPT.

Now let’s focus on the cyber risks related to ChatGPT, a generative AI system.

1. Copyright violation

ChatGPT alerts users that its responses may infringe copyright, but it is the user’s responsibility to validate them. Given the convenience of one-click access, employees may copy and paste ChatGPT’s automated responses directly into their work, including research, articles, and code. It is essential to understand that ChatGPT generates content based on the data it was trained on; if that training data contains plagiarized material, articles or code generated by ChatGPT carry a risk of plagiarism as well.

To mitigate this risk, it is essential to establish organizational cybersecurity guidelines and criteria for validating content produced by ChatGPT, and to train employees accordingly.
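
As a minimal illustration of such a validation step, the sketch below compares a ChatGPT-generated draft against a local folder of reference documents using Python’s standard difflib. The folder name and similarity threshold are illustrative assumptions, and a character-level ratio is only a rough heuristic, not a genuine plagiarism detector.

```python
# Rough originality check for AI-generated text (illustrative sketch).
# Assumes a local directory of reference .txt files; difflib gives only
# a crude similarity score, not a real plagiarism verdict.
from difflib import SequenceMatcher
from pathlib import Path

SIMILARITY_THRESHOLD = 0.8  # hypothetical policy value

def flag_similar_sources(generated: str, reference_dir: str) -> list[str]:
    """Return reference files whose text closely matches the generated draft."""
    flagged = []
    for ref_file in Path(reference_dir).glob("*.txt"):
        reference = ref_file.read_text(encoding="utf-8")
        ratio = SequenceMatcher(None, generated, reference).ratio()
        if ratio >= SIMILARITY_THRESHOLD:
            flagged.append(f"{ref_file.name} (similarity {ratio:.2f})")
    return flagged

if __name__ == "__main__":
    draft = "Draft text copied from a ChatGPT response..."
    for hit in flag_similar_sources(draft, "reference_corpus"):
        print("Review before publishing:", hit)
```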

2. Sharing of intellectual property

With the rapid progress in generative AI, organizations must establish rigorous protocols to prevent employees from divulging confidential information to ChatGPT. Failure to do so could expose sensitive data to the public, leading to reputational harm.

Suppose an organization uses ChatGPT to research a medical report and enters patient information; that input is fed into ChatGPT and may be used to train the model. This could amount to a data breach: once the data leaves the organization, it is out of the organization’s control, is no longer protected as confidential intellectual property, and could surface to other users.

OpenAI already displays a disclaimer cautioning users against submitting sensitive information to the chatbot. However, if employees are unaware of how sensitive the data is and the harm its exposure could inflict on the organization, the consequences can be severe.
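
One concrete protocol is to scrub obvious identifiers from text before it ever leaves the organization. The sketch below is a minimal example under stated assumptions: the regular-expression patterns and the PT-nnnnnn patient-ID format are hypothetical placeholders, and a real deployment would rely on a proper data-loss-prevention tool rather than a handful of regexes.

```python
# Minimal pre-submission redaction filter (illustrative sketch).
# The patterns below are assumptions for demonstration; a production
# system would use a dedicated DLP tool with vetted detectors.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PATIENT_ID": re.compile(r"\bPT-\d{6}\b"),  # hypothetical ID format
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a sensitive pattern with a label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarise the report for patient PT-104233, contact jane.doe@clinic.org"
print(redact(prompt))
# -> Summarise the report for patient [PATIENT_ID REDACTED], contact [EMAIL REDACTED]
```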

3. Hyper-personalised phishing emails

Email-based phishing attacks had begun incorporating artificial intelligence techniques even before ChatGPT was developed. With its natural language processing capabilities, ChatGPT can imitate human language, tone, and writing style and adapt them to each target. This allows threat actors to craft highly personalized phishing emails, and to build phishing kits whose prompts direct the AI tool to produce malicious code and phishing messages.

4. Chatbot poisoning

ChatGPT is a machine learning model that learns from the information given to it, and integrating OpenAI’s ChatGPT API into an AI chatbot can leave that chatbot susceptible to poisoning. Malicious actors can feed biased or hate-filled conversations into its data store, causing the chatbot to learn and replicate this behaviour and to generate inappropriate, potentially discriminatory responses.
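
To make the exposure concrete, here is a minimal chatbot loop built on OpenAI’s v1 Python SDK, offered as an assumed-for-illustration sketch rather than a recommended design: every user message is appended to the running conversation history unfiltered, which is exactly where poisoned content enters the system (and, if interactions are logged for later fine-tuning, future training data).

```python
# Minimal chatbot loop on OpenAI's chat API (v1 Python SDK), as a sketch.
# Raw user input is appended to the history with no filtering, which is
# the opening that poisoning exploits.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system", "content": "You are a helpful support bot."}]

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})  # unfiltered!
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # model name is an assumption
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply
```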

To prevent this, developers must ensure that only high-quality data from reputable sources is fed into the chatbot system. Continuous monitoring of the chatbot’s interactions is also necessary to identify and address any inappropriate or harmful responses, and developers can implement filters to prevent the chatbot from learning and replicating abusive language or behaviour, as the sketch below illustrates.
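
The sketch below shows one such filter in its simplest possible form: a gate that keeps flagged messages out of the store the chatbot might later learn from. The blocklist terms and the in-memory store are toy assumptions; a production system would use a trained toxicity classifier or a moderation service instead of keyword matching.

```python
# Toy gate keeping abusive input out of a chatbot's learning store.
# The blocklist is a placeholder assumption; real systems would use a
# trained toxicity classifier or a moderation service.
BLOCKLIST = {"hateful-term", "abusive-term"}  # placeholder terms

feedback_store: list[str] = []  # messages the bot may later learn from

def is_safe(message: str) -> bool:
    tokens = {t.strip(".,!?").lower() for t in message.split()}
    return tokens.isdisjoint(BLOCKLIST)

def record_interaction(message: str) -> None:
    """Store only messages that pass the safety gate; flag the rest."""
    if is_safe(message):
        feedback_store.append(message)
    else:
        print("Dropped unsafe message; flagged for human review.")

record_interaction("What are your opening hours?")  # stored
record_interaction("a hateful-term rant")           # dropped
```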

Failing to take these precautions can corrupt organizational data and damage the organization’s reputation.