Dr Kat Hadjimatheou
Government funding criteria for police tech projects currently ignore ethics and human rights, so it’s no wonder they come in for so much criticism. Here’s how they could do things differently.
Police investment in new technologies is strongly encouraged by government, with £175 million earmarked since 2016 to fund innovative ‘police transformation projects’. But police adoption of digital technologies doesn’t come without risks. Many police technology projects run into controversy, with civil liberties groups arguing that they intrude on people’s privacy, cast suspicion on innocent people, and discriminate against minority ethnic groups, among other things. Consider just a few of the headlines seen in the UK press over the last month:
Why do so many police technology projects run into criticism? Is there anything police or their government funders could do to pre-empt the problems these projects raise and to mitigate them? This blog post argues that many of the problems faced by digital policing projects could be addressed, but only if police funding bodies require applicants to carry out some basic ethical and human rights risk assessments before they release the money.
The Police Transformation Fund of £175 million was established by the Home Office in 2016 in order to “transform policing by investing in digitalisation, a diverse and flexible workforce and new capabilities to respond to changing crimes and threats”. The fund is administered by the Police Reform and Transformation Board, which is charged with delivering the Policing Vision 2025, including creating opportunities for policing to embrace digital technology for the purposes of ‘[g]athering comprehensive information about victims, offenders and locations quickly from mobile technology and using analytics to help us make decisions about where we target limited resources’ (p. 10).
Projects developing such capabilities are likely to carry ethical risks, especially with regard to the processing of data, privacy, discrimination and police accountability. These risks are not new: the kind of headlines quoted above have been appearing in the press regularly for many years. Nor are they unmanageable: terms such as ‘privacy-by-design’ and ‘privacy/ethical impact assessment’ might sound technical to the average member of the public, but they should be pretty familiar to anyone working in the field of security technology.
Unfortunately, no one on the Police Reform and Transformation Board works in the field of security technology. Instead, it is composed entirely of senior police leaders, many of whom are likely to be focused on operational concerns and to have little understanding of the privacy, ethical, or human rights issues raised by the technological developments they decide to fund. As a result, there is no mention of the need to address such issues in the guidance for applicants, let alone any requirement for it in the funding criteria. In practice, this means that police forces and Police and Crime Commissioners (PCCs) are spending millions of pounds of taxpayers’ money developing new techniques of digital policing without taking into account any of their potential ethical or human rights implications.
This is problematic, not least because it creates the following risks:
- Technologies developed may be used in ways that lead to privacy infractions, security breaches, misuse of data, etc.
- Legal challenges may be brought against the police. Such challenges could potentially involve the right to privacy, rights against discrimination, and/or the right to a fair trial.
- Even when legal challenges are not brought, a perception of insufficient care over these matters may result in serious reputational damage to the police service, and a reduction in public trust in the police to act proportionately and as responsible data custodians.
- If a project meets with sufficient public or pressure group resistance because of the above, it may have to be retrofitted for privacy/data protection or even scrapped, meaning a waste of public money and police resources. Consider the way external concern and pressure led the government to abandon the NHS care.data scheme.
The existence of a funding process presents an opportunity to get police services and PCCs (who have to sign off on project proposals) to think about these issues and take steps to address them. For that to happen, the Board needs to make it a precondition of receiving funding that project proposals include some kind of ethics and human rights risk assessment. First, it should require applicants to fill out a form designed to help them identify possible ethics and human rights issues in advance, and so distinguish projects that require a form of ethical monitoring from those that might not. Second, it should provide project leads with guidance about the ways in which they can implement and report ethical oversight so as to provide good leadership and governance of the project. Finally, the funding body should maintain an open dialogue with the project lead to enable any ethical concerns to be raised and discussed during the lifetime of the project.
Learning from what’s out there already
None of this means reinventing the wheel. On the contrary, many ready-made options are already available and most of these can be adapted to fit the requirements of any particular project. Here are some examples of the kinds of things projects could be invited to consider:
- Projects themselves could seek privacy-by-design certification for the solutions they produce (the UK Information Commissioner’s Office (ICO) is also developing a privacy seal).
- Projects could set up an ethics advisory board or a single advisor to work with the project throughout its lifetime.
- Projects could adopt measures to increase transparency and communication with the public (as far as is compatible with operational security/other relevant considerations). This could take the form of a FAQ page on the project website, it could be a public discussion in a local community centre, or it could be something more formal like a focus group.
- Projects could consider implementing a privacy/ethical/societal impact assessment. An example of a model for such an assessment is established at EU level by the SATORI project, which developed a CEN standard for ethical impact assessment.
Of course, project leads should not only be free to choose from existing options but should also be given the opportunity to define their own approach. Imposing specific requirements on projects from above risks turning the exercise into a box-ticking one, which would represent a waste of time for all involved, not to mention a waste of taxpayers’ money. Instead, the aim should be to encourage police and project leads to get used to talking openly and thinking creatively about the implications of new technologies. Doing so will also mean they’re better prepared for the questions that will inevitably come their way from civil liberties and other concerned groups. As the Information Commissioner recently pointed out, nothing will substitute for robust national coordination and strategy to address the ethical and human rights implications of police use of new technologies. But the proposals put forward in this blog post would be a step in the right direction.
Dr Kat Hadjimatheou is a researcher with the Interdisciplinary Ethics Research Group. Email: K.Hadjimatheou@warwick.ac.uk. Twitter: @surveilleethics
The author developed the ideas for this blog post with the Independent Digital Ethics Panel for Policing https://idepp.org/, a panel of experts (including serving law enforcement professionals, academics in privacy and ethics, civil society thinkers and policy advisors, technologists, engineers, and a lawyer) that provides advice and guidance to law enforcement.