Across Europe, and likely elsewhere, substantial public money goes into policing and security innovation every year. Projects are launched, consortia are assembled, pilots are conducted, and results are documented. By the formal standards of most programmes, these projects succeed, in the sense that they produce, one way or another, what they were asked to produce.
The translation into sustained operational use is a different matter. The gap between what gets funded and what becomes part of daily policing is persistent, and the usual explanations, budget pressure and institutional conservatism, account for only part of it. The deeper issue lies in how grant-funded innovation is structured and in what it assumes about how change actually happens inside police organisations.
Built for proof, not for use
Most funding instruments are designed to support exploration and validation. They bring together technical expertise and open access to operational environments, creating the conditions for testing a concept. In many cases, they do this well. A system functions in a pilot, data is processed, users interact with it, and results are demonstrated. What follows is then treated as a continuation of the same process, when it is not. Moving from a controlled pilot to a functioning capability inside a police organisation is a different phase altogether, governed by different constraints and requiring a different kind of ownership.
A pilot is a permissive environment. Data access is negotiated for a specific purpose. Integration with existing systems can be partial or bypassed. External partners stay present, solving problems as they arise. Timelines are fixed and short, and the aim is simply to show that something is possible. Operational deployment reverses most of those conditions. Data use requires formal approval within established frameworks. Systems must connect with infrastructure that was never designed for them. Procurement rules apply. Support, liability, and maintenance need to be defined upfront. Training has to be organised, and the system has to function without the people who built it. Treating a pilot's success as evidence that deployment will follow is where most grant-funded innovation goes wrong. The conditions under which something was proven simply do not carry over into the conditions under which it must operate.
The gap is organisational, not technical
The limiting factor is rarely the technology itself. Projects seldom establish who takes responsibility once funding ends. There is no identified budget holder for continuation, no defined procurement pathway, no support model, and no one accountable if implementation stalls. End users are involved, sometimes closely, but their role is contributory rather than one of ownership. Participating in a pilot does not translate into the authority or capacity to deploy. When a project concludes, what remains is a validated solution without an institutional anchor. The people who built it move on, and the organisation that might benefit from it is left to absorb the legal, financial, and technical integration on its own. The system does not fail dramatically. It simply fails to continue.
Project assessment reinforces this pattern. Proposals are evaluated on technical credibility, methodological soundness, and the ability to produce demonstrable results, all reasonable criteria, but none of them tests whether what is being built can survive beyond the project itself. A solution can perform well in a pilot and score highly even when the path to deployment is unclear or unrealistic. This is often framed as a technology readiness question, but readiness in that sense measures the wrong thing. The step from a successful demonstration to a deployable, supported capability is where governance, procurement, integration, and organisational change all converge. It is also where funding and attention tend to fall away.
What projects rarely measure is a different class of KPIs, ones that reflect actual uptake rather than technical success. Not whether a system works, but whether it is actually used. How many officers choose to use it when they are not forced to. Whether usage increases or drops once project support is withdrawn. Whether supervisors embrace it as part of the normal workflow. Whether officers, unprompted, say that it makes their job easier or that they would not want to go back to the previous situation. Whether the organisation is not just willing but eager to allocate budget, ownership, and long-term responsibility to it. These indicators are harder to formalise and less comfortable to measure, but they are far closer to the question that ultimately matters than the ones evaluators currently reward.
Not just a funding problem
Police organisations are not neutral recipients here. They operate within frameworks that are deliberately restrictive: legal obligations, evidentiary standards, budget cycles, IT governance, and institutional risk all shape what can be adopted and at what pace. These are not incidental barriers. They define the operating environment, and most projects do not systematically engage with them. The result is a recurring mismatch between what a project demonstrates and what an organisation can actually absorb.
What is largely missing is a structured way to carry a validated solution through the phase in which it becomes part of an organisation. This work is not research. It does not package well as innovation. It is integration: aligning with legal requirements, working through procurement, connecting to existing systems, defining support and liability, and preparing the organisation to use and sustain the capability over time. It is less visible, harder to frame, and usually the part nobody wants to fund. At present, it is handled inconsistently, sometimes by individual forces without sufficient capacity, sometimes by vendors with their own commercial logic, sometimes informally through networks and personal initiative, and often not at all.
The cycle that repeats
The result is a system that is effective at generating knowledge and demonstrating possibility, but inconsistent at converting either into sustained operational use. The default response has been to fund additional projects, on the assumption that more activity will yield greater impact. In practice, it repeats the same pattern. New ideas are explored, new pilots succeed, and the underlying gap remains.
None of this suggests that grant-funded innovation lacks value. Much of it contributes in less visible but still important ways: shaping understanding, informing standards, and defining what may become possible. Not every project is intended to produce immediate deployment, and it should not be judged as if it were. The problem arises when programmes are framed around operational impact while structurally stopping at validation. The expectation and the design do not align, and what gets delivered, though real, is not what the system implies it will be.
If more of this work is expected to reach practice, funding design needs to reflect the full adoption lifecycle. Validation is a milestone, not an endpoint. The transition from proven concept to operational capability is a distinct phase with its own responsibilities, constraints, and resources, and until it is treated as such, the pattern will not change. Projects will continue to deliver what they are asked to deliver. Police organisations will continue to operate largely as they did before. And the connection between the two will remain weaker than it needs to be.