AI Governance

Understanding Prohibited AI Practices and How to Avoid Them

August 19th, 2025

Contributor: Aleena Jibin


Who should read this?

CEOs
CTOs
CISOs
AI Governance Managers

Artificial Intelligence (AI) has immense potential to drive innovation and improve business operations, but without proper oversight it can produce unintended and harmful outcomes. To unlock that potential, organizations must put strong governance strategies in place that turn AI from a tool with uncertain outcomes into a trusted, scalable resource aligned with business objectives. As AI becomes integral to modern business, organizations must weigh not only its technical benefits but also the ethical implications of its use. Overlooking these considerations can expose businesses to substantial legal, ethical, and reputational risks, which is why responsible AI governance matters.

In this context, the EU AI Act plays a crucial role by setting clear boundaries for AI use, outlining the dos and don’ts to ensure ethical compliance. Certain AI practices are outright prohibited due to their potential to harm individuals and society. However, despite the significance of these regulations, many organizations may overlook key aspects or assume they are not impacted by them. This blog will highlight these prohibited AI practices, explain why they are so critical, and provide actionable insights to help businesses navigate AI governance effectively and responsibly.

Why Some AI Practices Are Prohibited: The Ethical and Legal Risks

AI systems that are not properly governed can have unintended consequences, ranging from privacy violations to broader societal harm. AI can reinforce bias, manipulate vulnerable individuals, or even violate human rights, leading to mistrust and public backlash. Organizations therefore need to understand what constitutes a prohibited AI practice and take proactive steps to avoid it; misuse can bring legal penalties, damage to brand reputation, and loss of consumer trust.

Organizations should not just focus on regulatory compliance but understand why these practices are harmful. Only then can they build a responsible strategy for using AI ethically. The EU AI Act provides the necessary clarity; the key challenge lies in its practical implementation.

Prohibited AI Practices Under the EU AI Act

The EU AI Act categorizes AI systems according to the risk they pose, and practices that carry an unacceptable risk are banned outright under Article 5, regardless of context. Whether you are developing AI systems or deploying AI technologies in your organization, it is crucial to avoid these prohibited practices to maintain ethical compliance and mitigate risks.

Below are the key prohibited AI practices and actionable steps that both AI-developing and AI-using organizations can take to avoid falling into these prohibited areas:

1. Manipulative and Deceptive AI Practices

AI systems that subtly influence human behavior, particularly without users’ awareness or consent, are prohibited. These AI systems may manipulate individuals by using hidden methods or tactics to alter their decision-making, which can lead to harmful outcomes. For instance, AI-driven marketing campaigns might exploit emotional triggers to pressure consumers into buying products or services they don’t need.

Actionable Step:

For AI Developers: AI developers must ensure that AI-driven systems, particularly in customer-facing applications like advertising or product recommendations, do not manipulate users. Transparency is key—any content generated by AI, such as advertisements or product suggestions, should be clearly identified as being AI-generated to avoid misleading users.

For AI Deployers: AI deployers should ensure that the AI tools they adopt respect user preferences and avoid tactics that pressure users into decisions by exploiting their vulnerabilities. Transparent labeling of AI-driven content is a critical step in maintaining user trust.
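
To make that labeling step concrete, here is a minimal Python sketch of how a content pipeline might tag AI-generated advertising before it reaches users. The field names (ai_generated, model) and the AdContent structure are illustrative choices for this sketch, not terms mandated by the EU AI Act; adapt them to your own pipeline.

```python
# Minimal sketch: attach a machine-readable disclosure label to
# AI-generated marketing content before it is shown to users.
# Field names are illustrative assumptions, not regulatory terms.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AdContent:
    body: str
    metadata: dict = field(default_factory=dict)


def label_ai_content(content: AdContent, model_name: str) -> AdContent:
    """Tag content as AI-generated so downstream surfaces can disclose it."""
    content.metadata.update({
        "ai_generated": True,  # drives the on-screen disclosure
        "model": model_name,   # provenance for audit trails
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    })
    return content


ad = label_ai_content(AdContent(body="Try our new plan today!"), "marketing-llm-v2")
assert ad.metadata["ai_generated"] is True
```

Whatever schema you choose, the point is that the disclosure travels with the content itself, so every downstream surface can render it rather than relying on manual labeling.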

2. Exploiting Vulnerabilities of Individuals

AI systems that exploit the vulnerabilities of certain individuals, such as children, people with disabilities, or those in financially unstable situations, are banned. This involves using AI to manipulate these individuals' decisions, for example, encouraging them to make purchases they can’t afford or making unfair decisions about their eligibility for certain services based on their vulnerabilities.

Actionable Step:

For AI Developers: AI developers should regularly test systems that interact with vulnerable groups (children, elderly people, individuals with disabilities, and those in financially unstable situations) to confirm they do not take advantage of these users. AI tools should be designed to protect such individuals and treat them fairly.

For AI Deployers: AI deployers must regularly evaluate AI tools used in sensitive areas, such as education or financial services, to ensure they do not exploit individuals based on their vulnerabilities. Instituting safeguards to protect vulnerable users will help maintain fairness and avoid unethical outcomes.
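
One way to institute such a safeguard is a simple pre-serving gate that refuses to target flagged users with high-pressure offers. The sketch below is hypothetical: the vulnerability flags and the restricted offer categories are placeholders for whatever classifications your own systems maintain.

```python
# Minimal sketch of a pre-serving guardrail: block persuasive offers
# to users your systems have flagged as belonging to a vulnerable group.
# Both sets below are hypothetical examples, not a prescribed schema.
VULNERABLE_FLAGS = {"minor", "financial_distress", "disability_accommodation"}
RESTRICTED_OFFER_TYPES = {"credit", "gambling", "urgency_discount"}


def may_serve_offer(user_flags: set[str], offer_type: str) -> bool:
    """Refuse offers that could exploit a flagged vulnerability."""
    if offer_type in RESTRICTED_OFFER_TYPES and user_flags & VULNERABLE_FLAGS:
        return False  # route to a human review queue instead of serving
    return True


assert may_serve_offer({"minor"}, "credit") is False
assert may_serve_offer(set(), "credit") is True
```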

3. Biometric Categorization and Use of Sensitive Personal Data

AI systems that use biometric data, like facial recognition, to infer sensitive personal information such as race, political beliefs, or religious views are prohibited, unless there is a strict legal justification. The misuse of such AI systems can lead to discrimination, privacy violations, and the infringement of individuals' rights. This includes the improper collection or analysis of personal attributes that should remain private, potentially causing harm or exploitation.

Actionable Step:

For AI Developers: Developers should ensure that any AI systems involving biometric data comply with privacy laws such as GDPR. AI tools that process sensitive personal information should only be used under strict legal conditions, and users should always give explicit consent.

For AI Deployers: Organizations deploying AI tools involving biometric data should monitor how this data is collected and used, ensuring that it complies with GDPR and that the collection process is transparent. Periodic audits should be conducted to assess compliance with privacy laws.
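
As a rough illustration of what such a check can look like in code, the sketch below gates biometric processing on recorded explicit consent and an allowlisted purpose, and rejects prohibited inferences outright. The ConsentRecord structure and the purpose list are assumptions made for this sketch; map them onto your actual GDPR consent store.

```python
# Minimal sketch: gate biometric processing on explicit, recorded
# consent and a narrow, documented purpose. Prohibited inferences
# are refused regardless of consent. All names are illustrative.
from dataclasses import dataclass

ALLOWED_PURPOSES = {"identity_verification"}
PROHIBITED_INFERENCES = {"race", "political_opinion", "religion"}


@dataclass
class ConsentRecord:
    user_id: str
    purpose: str
    explicit: bool


def authorize_biometric_processing(consent: ConsentRecord,
                                   inference_target: str) -> bool:
    if inference_target in PROHIBITED_INFERENCES:
        return False  # banned regardless of what the user consented to
    return consent.explicit and consent.purpose in ALLOWED_PURPOSES


ok = authorize_biometric_processing(
    ConsentRecord("u1", "identity_verification", explicit=True), "identity"
)
assert ok
```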

4. Social Scoring Systems

AI systems that evaluate or rank individuals based on their social behavior or personal traits are prohibited. These systems can be used to assign scores to individuals based on their actions or social media presence, which can negatively affect their access to services such as loans, jobs, or housing.

Actionable Step:

For AI Developers: AI developers should make sure their systems, especially those used for important decisions like hiring or lending, focus on relevant and objective factors such as skills or qualifications. These systems should not rely on personal traits or social media activity, which could lead to biased or unfair decisions. For example, an AI used in hiring should prioritize professional experience and skills, not personal details like a candidate’s social media posts.

For AI Deployers: Organizations deploying AI tools should perform fairness audits on their AI systems to ensure they don’t inadvertently discriminate based on irrelevant data. AI tools should focus on job-related criteria, not social behaviors.
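
A fairness audit can start with two simple checks: confirm the model only consumes job-related features, and compare selection rates across groups. The sketch below uses the widely cited four-fifths rule as its threshold; both the feature allowlist and the 0.8 cutoff are common audit conventions chosen for illustration, not figures taken from the Act.

```python
# Minimal fairness-audit sketch: (1) verify a hiring model only uses
# job-related features, (2) apply the four-fifths selection-rate check.
# Feature names and the 0.8 threshold are illustrative conventions.
ALLOWED_FEATURES = {"years_experience", "skills_score", "qualification_level"}


def check_features(model_features: set[str]) -> set[str]:
    """Return any features the model uses that are not job-related."""
    return model_features - ALLOWED_FEATURES


def four_fifths_ok(selection_rates: dict[str, float],
                   threshold: float = 0.8) -> bool:
    """Flag adverse impact when any group's rate falls below 80% of the best."""
    best = max(selection_rates.values())
    return all(rate / best >= threshold for rate in selection_rates.values())


assert check_features({"skills_score", "social_media_sentiment"}) == {"social_media_sentiment"}
assert four_fifths_ok({"group_a": 0.50, "group_b": 0.45}) is True
assert four_fifths_ok({"group_a": 0.50, "group_b": 0.30}) is False
```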

5. Criminal Risk Profiling Based on Personality Traits

Using AI to predict an individual’s likelihood of committing a crime based solely on personality traits or profiling is prohibited. This type of profiling, often seen in predictive policing, can lead to discriminatory outcomes and violates individuals' rights by assuming that personality traits determine criminality.

Actionable Step:

For AI Developers: AI developers should ensure that their tools used for criminal risk assessments are based on verifiable, objective data. Profiling based on personality traits alone should never be used as the sole factor for making critical decisions.

For AI Deployers: If your organization uses AI for risk assessments or criminal profiling, ensure that these tools rely on verifiable data related to criminal activity rather than subjective traits like personality. Human oversight should always be involved in these decision-making processes.
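
Both requirements, verifiable inputs and mandatory human oversight, can be enforced as preconditions in code before any score is produced. The field names in the sketch below are hypothetical stand-ins for whatever your case-data schema actually contains.

```python
# Minimal sketch: refuse to produce a criminal-risk assessment unless
# the input contains verifiable, conduct-related evidence, and always
# mark the output as requiring human review. Field names are hypothetical.
OBJECTIVE_EVIDENCE_FIELDS = {"prior_convictions", "verified_incident_reports"}
PROHIBITED_INPUT_FIELDS = {"personality_profile", "social_media_activity"}


def build_risk_assessment(case_data: dict) -> dict:
    used = set(case_data)
    if used & PROHIBITED_INPUT_FIELDS:
        raise ValueError(f"Prohibited profiling inputs: {used & PROHIBITED_INPUT_FIELDS}")
    if not used & OBJECTIVE_EVIDENCE_FIELDS:
        raise ValueError("No verifiable, conduct-related evidence supplied")
    return {
        "inputs": sorted(used),
        "requires_human_review": True,  # a person, not the model, makes the call
    }


result = build_risk_assessment({"prior_convictions": 2})
assert result["requires_human_review"] is True
```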

6. Unauthorized Facial Recognition Data Scraping

AI systems that scrape publicly available images from the internet or CCTV footage to build facial recognition databases are prohibited. This practice violates privacy laws and undermines trust in AI systems by collecting biometric data without individuals' consent.

Actionable Step:

For AI Developers: Developers should ensure that any AI tools utilizing facial recognition data follow strict consent protocols. This data must be collected and used transparently, ensuring that individuals are fully informed about how their data will be used.

For AI Deployers: Organizations using facial recognition technology should obtain explicit consent from individuals before collecting or using their biometric data. They should also periodically audit these systems to ensure compliance with privacy regulations like GDPR.

7. Emotion Recognition in Workplaces and Educational Institutions

Using AI to infer emotions in workplaces or educational settings is prohibited unless it is for medical or safety reasons. Emotion-detecting AI in these environments can invade personal privacy and lead to manipulation or unnecessary stress.

Actionable Step:

For AI Developers: Developers should avoid creating emotion-monitoring AI for workplace or educational settings unless there is a clear health or safety purpose. Transparent and ethical design principles should govern the development of such systems.

For AI Deployers: If your organization uses emotion-monitoring AI, ensure that it is transparent, voluntary, and focused solely on wellness. Ensure that employees or students are fully aware of its use and that it respects privacy.

8. Real-Time Remote Biometric Identification

The use of real-time biometric identification, such as facial recognition, in public spaces for law enforcement purposes is highly restricted under the EU AI Act. While there are specific exceptions (e.g., preventing imminent threats or identifying missing persons), the use of such technologies is subject to strict regulations.

Actionable Step:

For AI Developers: AI developers must ensure that any real-time biometric identification systems they create comply with strict legal and ethical requirements. These systems should only be deployed when absolutely necessary, with full accountability for their use.

For AI Deployers: Organizations using real-time biometric identification, like facial recognition, should assess the potential risks these technologies may pose to individuals’ privacy and rights. This process, known as an impact assessment, helps identify any negative consequences or legal concerns before the technology is deployed. Organizations must also obtain necessary legal permissions, such as judicial authorization, and ensure the technology is only used in situations where it's absolutely needed, with proper oversight to protect individuals' rights.
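
Teams sometimes encode these preconditions as a deployment gate so the system cannot go live with an incomplete paper trail. The checklist below is a minimal sketch that paraphrases the steps above; whether a given document actually satisfies the legal bar is a question for counsel, not for code.

```python
# Minimal sketch of a pre-deployment gate for real-time biometric
# identification: deployment proceeds only when every precondition
# is documented. Checklist items paraphrase the steps in this post.
from dataclasses import dataclass


@dataclass
class DeploymentChecklist:
    impact_assessment_done: bool
    judicial_authorization_ref: str  # e.g. an order number; empty means missing
    necessity_justification: str     # documented reason the use is strictly needed
    oversight_contact: str           # named owner for ongoing human oversight

    def missing_items(self) -> list[str]:
        gaps = []
        if not self.impact_assessment_done:
            gaps.append("fundamental-rights impact assessment")
        if not self.judicial_authorization_ref:
            gaps.append("judicial authorization")
        if not self.necessity_justification:
            gaps.append("necessity justification")
        if not self.oversight_contact:
            gaps.append("oversight owner")
        return gaps


checklist = DeploymentChecklist(True, "", "locating a missing person", "dpo@example.org")
print(checklist.missing_items())  # ['judicial authorization']
```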

Empowering Ethical AI Use

The EU AI Act provides organizations with clear guidelines on prohibited AI practices, but it is up to both AI-developing and AI-using organizations to implement governance measures that ensure responsible and ethical use of AI. Understanding these prohibited practices, taking action to avoid them, and establishing strong AI governance strategies will help organizations reduce risk and keep their AI systems compliant with regulations.

By educating employees on these issues and promoting ethical AI use, businesses can foster a culture of accountability, trust, and transparency. When AI is governed responsibly, it has the power to drive innovation and deliver value without causing harm to individuals or society.

Book a Free Demo

Reduce human cyber and compliance risks with targeted training. Get a guided walkthrough at a time that suits your timezone.