Body-worn cameras entered policing with a very clear promise attached. They were supposed to improve behaviour. Officers would be more professional, citizens would be less likely to escalate, complaints would drop, and the use of force would decline. That promise was never the whole case for them, of course. Cameras were also useful in more practical ways. They captured evidence, helped resolve disputes over what happened, protected officers from false allegations, and gave prosecutors, supervisors, and complaint handlers something more solid to work with when accounts conflicted.
Still, the public sales pitch leaned heavily on behaviour change because it was easy to understand and politically defensible. Ten years on, that claim looks much weaker than it once did. The evidence has not shown that cameras have no effect at all. It has shown something less tidy: sometimes they seem to matter, sometimes they do not, and the outcome depends a great deal on how they are deployed, when they are activated, how they are supervised, and what rules govern them. There is no general, repeatable outcome that justifies a procurement decision based solely on behaviour change. That is a long way from the original promise.
So if cameras did not reliably deliver behavioural reform, what did they deliver?
From behaviour to visibility
Mostly, they changed visibility. That may sound obvious, but it matters. Body-worn cameras have added a persistent layer of recorded interaction to frontline policing. The record is partial, often messy, and never complete. Even so, it is common enough now to shape expectations. It has altered what can be reviewed later, reconstructed, challenged, and disclosed. Once a police organisation has body-worn cameras at scale, people begin to assume that important incidents ought to leave some kind of visual or audio trail. That assumption affects evidential practice, complaint handling, supervision, and eventually the ordinary sense of what acceptable conduct looks like. That shift did not happen through a single policy decision. It happened during deployment.
This is one reason the evidential case for body-worn cameras has aged better than the behavioural one. Footage can be genuinely useful. It reduces reliance on conflicting witness accounts. It helps prosecutors and defence lawyers. It can shorten complaint investigations or even resolve them outright. It gives disciplinary processes a record they often lacked before.
But that is exactly where the problem begins, too. Footage is often treated as more complete than it really is. A body-worn camera records what is in front of the officer, from a single angle, at a particular moment. What it captures depends on activation policy, camera position, obstruction, movement, battery life, sound quality, stress, and simple chance. It does not show all that happened before the officer switched it on. It does not show what happened outside the frame. It may miss remarks spoken nearby. It cannot capture everything the officer saw, noticed, or understood.
None of that is a temporary defect that will disappear with better hardware. It comes with the form itself. The camera gives you a fragment. Sometimes a very useful fragment, but still a fragment.
And yet people and organisations have a habit of leaning on that fragment as if it were the whole story. When footage exists, it tends to dominate the account of an incident, even when it leaves obvious gaps. When there is no footage, the absence can quickly become suspicious in itself. Both reactions are understandable, and both can be misleading.
The accountability demand
The same tension sits inside the accountability case. Many people wanted body-worn cameras not because they thought officers would suddenly become better behaved, but because they wanted a record. A record can be checked, challenged, requested, disclosed, and used. That argument has lasted better than the old promise of behavioural transformation. But it carries the same basic weakness as the evidential case: the record is only ever partial. A great deal is then asked of that partial record. Policy has not really caught up with that fact.
If this were only about recording, the problem would be difficult enough. It is no longer only about recording.
Are we still buying cameras?
In parts of the world, certainly in the United States, and especially in larger agencies, body-worn cameras are increasingly part of a much wider digital system. They feed transcription tools, indexing systems, incident flagging, report drafting, and analytics. Not every force has moved very far in that direction, but the trend is clear enough. The camera is no longer the whole product. It is the front end of a data system.
European procurement has generally moved more slowly. It still tends to focus on the hardware itself: battery life, docking, storage, retention, redaction, and compliance. Those things matter, but they no longer tell you much about the larger system being built. The actual system now runs from capture to storage to organisation to analysis to disclosure to action. If forces keep thinking mainly in terms of cameras, they will miss the more important question of what kind of infrastructure they are putting in place and what powers come with it.
Behaviour change was never the whole vendor proposition either. What changed is where the value sits. It is no longer in the recording. It is in what can be done with what is recorded. Vendors have moved accordingly. Procurement has been slower to follow.
This is also why storage is not really the central problem, even though it is often discussed that way. Yes, footage accumulates. Yes, retention duties expand. Yes, redaction becomes expensive, and disclosure becomes heavier. But that is not just a budget issue. It is a failure to properly define and manage the system. Data that is stored and never meaningfully used is not an asset. It is a liability.
What matters now is what organisations choose to do with the data once they have it. And that is where things become more consequential.
Body-worn camera systems are increasingly tied to speech recognition, automated tagging, selective review, complaint triage, training feedback, case systems, and intelligence workflows. Once that happens, the system is no longer just recording police work. It is starting to sort it, classify it, and in limited ways interpret it.
That sounds abstract until you look at one very concrete example: automated report drafting. Several vendors are now building tools that turn body-worn camera audio and transcripts into draft incident reports using speech recognition and language models. The attraction is obvious. Officers spend a lot of time writing paperwork, often late in a shift, and any promise of speed or consistency will get attention.
But the real issue is not whether the software can produce neat prose. It is whether the resulting report can be trusted, challenged, corrected, and actually owned by the person who submits it. A machine-drafted report can only describe what it processed. It cannot supply the context the camera missed, what happened before activation, or the officer’s reasoning and judgement unless a human being adds those things back in. In any proceeding where that report is challenged, the question will be who attests to its accuracy and on what basis. Forces that have not defined that standard are accepting exposure they have not named.
Some agencies in the United States have already discovered this the hard way. Early deployments have led not only to interest but to caution: validation requirements, restrictions on machine-generated narratives in evidentiary settings, and, in some cases, pauses while policy catches up. That is not surprising. Once an official report appears to have been shaped by a generative system, the problem is no longer just efficiency. It becomes a question of credibility.
Governance is behind capability
And that, really, is where the governance gap now sits. Most of the policy architecture around body-worn cameras was written for recording. Increasingly, the systems in use are capable of analysis, pattern detection, summarisation, and workflow automation. That is a different category of power. Who gets to analyse footage at scale? Which forms of automation are allowed? How are outputs logged, corrected, and disclosed? Can they influence discipline, supervision, training, or prosecution? What limits apply to retrospective analysis? What level of human review is required before automated output enters the official record? None of those questions can be left to simple user demand and procurement by default, though in practice many organisations do exactly that.
The old public argument about cameras often turned on accountability versus officer autonomy. That framing no longer gets to the heart of the issue. The sharper trade-off now is between the value an organisation can extract from data and the exposure that same data creates. More analysis may improve training, reveal patterns of poor practice, and strengthen complaint handling. It also creates more discoverable material, more internal scrutiny, more legal risk, and more pressure to act on problems once they become visible.
This is where the discussion usually becomes uncomfortable. Do forces actually want systems capable of surfacing discourtesy, weak searches, missed procedural steps, poor victim engagement, unnecessary escalation, or inconsistent use of discretion? If those things can be found, choosing not to look is not neutral. But choosing to look is not neutral either. It creates obligations. Huge ones.
What users should now be asking
So the useful questions have changed. The starting point is no longer whether cameras reduce complaints or improve behaviour. The harder questions are about architecture, authority, and limits. What is the force's digital evidence system supposed to be? How are audio and video processed at scale? Which analytical functions are acceptable and which are off limits? What audit trail exists for automated processing? What standard of officer review applies before automated output enters a formal record? Which uses should require independent approval rather than internal sign-off alone?
These questions reach well beyond police organisations themselves. Prosecutors, defence lawyers, courts, oversight bodies, and civil society groups are all adjusting to a world in which video-based evidence is becoming expected rather than exceptional. That changes disclosure practice. It changes evidential assumptions. It changes what the presence of footage means, and what its absence means, too.
It also raises a more awkward question that procurement discussions often manage to avoid. If footage exists but nobody reviewed it, why not? If analysis was possible but not carried out, why not? If a system could have identified a pattern of concern, who decided not to ask? At some point, the most revealing governance question may be the simplest one: what do organisations prefer not to know, even when the system could tell them?
Conclusion
Body-worn cameras did not produce the clean behavioural transformation they were once sold on. What they produced instead was a persistent, searchable, and increasingly analysable record of policing activity. That record has real value. It also has hard limits. And it is steadily becoming part of a broader machinery of automated review, interpretation, and documentation that is moving faster than governance can keep pace.
Forces that still think they are mainly buying cameras are behind the curve. They are buying data systems. Forces that allow automated tools to shape the narrative of policing without a solid architecture of explicit ownership and constraint will find themselves operating within boundaries they did not choose and may not fully understand.
Further reading:
Yokum, Ravishankar, Coppock (2019). A randomized control trial evaluating the effects of police body-worn cameras. Proceedings of the National Academy of Sciences.
https://www.pnas.org/doi/10.1073/pnas.1814773116
Lum, Koper, Wilson et al. (2020). Body-worn cameras’ effects on police officers and citizen behavior: A systematic review. Campbell Collaboration.
https://onlinelibrary.wiley.com/doi/10.1002/cl2.1112
National Institute of Justice. Research on Body-Worn Cameras and Law Enforcement.
https://nij.ojp.gov/topics/articles/research-body-worn-cameras-and-law-enforcement
College of Policing. Crime Reduction Toolkit: Body-worn cameras.
https://www.college.police.uk/research/crime-reduction-toolkit/body-worn-cameras
NPCC (2024). Body Worn Video Guidance.
NPCC (2025). Artificial Intelligence Strategy for Policing 2024 to 2027.
European Commission (2022). Report on the application of Directive (EU) 2016/680.
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52022DC0364
Camp et al. (2024). Using body-worn camera footage to study police-citizen interactions. PNAS Nexus.
https://academic.oup.com/pnasnexus/article/3/9/pgae359
Ferguson (2025). Generative Suspicion and the Risks of AI-Assisted Police Reports. Northwestern University Law Review.
https://scholarlycommons.law.northwestern.edu/cgi/viewcontent.cgi?article=1621&context=nulr
Electronic Frontier Foundation (2025). Axon’s Draft One: Designed to Defy Transparency.
https://www.eff.org/deeplinks/2025/07/axons-draft-one-designed-defy-transparency
Adams (2025). Automation, AI, and body-worn camera policy.
https://www.sciencedirect.com/science/article/pii/S0047235225000224
Srbinovska et al. (2025). AI-driven analysis of body-worn camera data. Preprint.
https://arxiv.org/abs/2504.20007
Axon. Draft One product information.
https://www.axon.com/products/draft-one
Axon. Written evidence to the UK Parliament on AI and policing.
https://committees.parliament.uk/writtenevidence/132303/pdf/