The EU AI Act is already reshaping how AI can be built, bought, and used in policing.
- Many policing AI uses are high-risk or banned
  Predictive policing, biometric categorisation, and most real-time facial recognition in public spaces are prohibited or tightly restricted.
- Compliance is operational, not optional
  High-risk systems require fundamental rights impact assessments, logging, human oversight, and EU registration before deployment.
- Technology must be compliant by design
  Data governance, bias mitigation, robustness, cybersecurity, and post-deployment monitoring are now regulatory expectations.
- Procurement rules are tightening
  New EU contractual clauses for high-risk AI will directly shape future police tenders and vendor obligations.
- Standards lag behind reality
  Research and deployment are advancing faster than harmonised technical standards, increasing reliance on codes of practice and soft law.
Bottom line: The AI Act is redefining AI in policing as a regulated operational environment, not a plug-in technology.