The British government is developing a tool that uses an algorithm to predict who might go on to commit murder.
The UK Ministry of Justice is leading the project, initially named the “homicide prediction project” but since renamed “sharing data to improve risk assessment.”
It uses computer algorithms and personal data, including information from the Probation Service, to try to identify individuals who could go on to commit violent crimes.
According to officials, the project is still in the research stage. It is meant to help authorities better understand and manage the risk of serious violence from people already under supervision. (via: The Guardian)
The previous Conservative government launched the initiative, and it is still being pursued under the current Labour administration.
However, it became public knowledge only after the civil liberties group Statewatch filed a Freedom of Information request.
Sofia Lyall, a researcher from the group, raised serious concerns, calling the system “chilling and dystopian.”
She warned that such tools often amplify existing biases and inequalities in the criminal justice system.
Research has shown that these kinds of crime-predicting technologies are usually flawed and unreliable, yet the government is still moving forward with them.
Lyall is urging officials to stop developing the tool immediately. While predictive policing is already in use in parts of the US, it has faced backlash and increasing regulation due to ethical concerns.
Unlike the fictional Minority Report, which used psychics to predict crimes, this project relies on artificial intelligence and personal data.
Still, the central idea remains the same: anticipating and stopping crimes before they happen.
Critics fear this could lead to people being unfairly targeted, labeled as dangerous, or even punished based on what they might do rather than what they’ve actually done.
The debate raises questions about privacy, fairness, and the future of justice.
Do you think we’re anywhere close to using AI for crime detection? Would you trust a system like this?

