Australia's ACSC publishes agentic AI security guidance warning of prompt injection, scope creep and audit trail gaps

Australia's Cyber Security Centre (ACSC) published joint guidance on 1 May 2026 on the careful adoption of agentic AI services, warning that autonomous AI tools introduce significant security and resilience risks that organisations are not yet equipped to manage.

The guidance identifies four core risks:

- agents acting outside their intended scope when given ambiguous or incomplete instructions;
- agents being manipulated through prompt injection embedded in external content they process;
- agents with overly broad permissions causing unintended consequences across connected systems;
- the difficulty of reconstructing agent decision trails for audit or incident response.

The ACSC recommends treating agent actions, prompt histories, delegated credentials and human approval records as evidential artefacts that must be logged and preserved. For EU law enforcement deployers, the guidance directly informs AI Act Article 9 risk management obligations and Article 13 transparency requirements for high-risk agentic AI systems.
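The logging recommendation can be illustrated with a minimal sketch of an append-only, hash-chained audit trail. This is not from the ACSC guidance itself: the record fields (`agent_id`, `prompt_history`, `credential_id`, `human_approver`) are hypothetical names standing in for the artefact types the guidance lists, and the in-memory store is an assumption for brevity; a real deployment would persist to write-once storage.

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass
class AgentAuditRecord:
    # Hypothetical fields mirroring the artefact types named in the guidance:
    agent_id: str          # which agent acted
    action: str            # what the agent did
    prompt_history: list   # prompts that led to the action
    credential_id: str     # delegated credential used for the action
    human_approver: str    # who approved it ("" means fully autonomous)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


class AuditTrail:
    """Append-only log: each entry includes the hash of the previous entry,
    so altering any earlier record invalidates every later hash."""

    def __init__(self):
        self._entries = []

    def append(self, record: AgentAuditRecord) -> str:
        prev_hash = self._entries[-1]["hash"] if self._entries else "0" * 64
        payload = json.dumps(asdict(record), sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self._entries.append(
            {"record": asdict(record), "prev": prev_hash, "hash": entry_hash}
        )
        return entry_hash

    def verify(self) -> bool:
        # Recompute the chain from the start; any tampering breaks it.
        prev_hash = "0" * 64
        for e in self._entries:
            payload = json.dumps(e["record"], sort_keys=True)
            expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
            if e["prev"] != prev_hash or e["hash"] != expected:
                return False
            prev_hash = e["hash"]
        return True


trail = AuditTrail()
trail.append(AgentAuditRecord(
    "agent-7", "query_case_db", ["find open cases"], "cred-123", "officer.smith"))
trail.append(AgentAuditRecord(
    "agent-7", "export_report", ["summarise results"], "cred-123", ""))
print(trail.verify())  # True on an untampered chain
```

The hash chain gives the "evidential artefact" property the guidance calls for: an investigator can verify after the fact that the recorded sequence of agent actions, prompts and approvals has not been edited since it was written.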