The detain/release task was an interesting one to work through, as I found value dynamics at play that conflicted with the information in the scenario.
My main focus for release versus detain was the violence meter, then the commit-a-crime meter, and then the failure-to-appear meter. For example, if someone was low on violence and commit-a-crime but high on failure to appear, I released them; but if they were high on violence, or medium on both violence and commit-a-crime, I detained them.
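The heuristic I followed can be sketched as a short rule. This is only my reading of it, assuming each meter reports a level of low, medium, or high; the function and level names here are illustrative, not the simulation's actual fields:

```python
# Hypothetical sketch of the detain/release heuristic described above.
# Each meter is assumed to be reported as "low", "medium", or "high".

def decide(violence: str, new_crime: str, failure_to_appear: str) -> str:
    """Violence risk dominates, then new-crime risk; failure-to-appear
    risk alone is not enough to detain."""
    if violence == "high":
        return "detain"
    # Interpretation: medium violence only tips to detain when the
    # commit-a-crime meter is also elevated.
    if violence == "medium" and new_crime in ("medium", "high"):
        return "detain"
    # Low violence and low new-crime risk: release, even when the
    # failure-to-appear meter is high.
    return "release"

print(decide("low", "low", "high"))     # release
print(decide("medium", "medium", "low"))  # detain
```

Writing the rule out this way also makes its gaps obvious: the failure-to-appear meter ends up doing almost no work, which matches how little weight I actually gave it.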
I also relied almost solely on the risk assessment rather than the comments from the prosecutor or defense. I will preface this by saying I did not feel I had enough information to adequately make these decisions. The reason I essentially threw out the prosecutor and defense statements was that I had no way to corroborate them, so taking them at face value felt useless. That being said, I did not particularly like relying on the risk assessments either, because I had no understanding of how they were determined, and some of them really did not seem to fit the crimes the defendants were accused of. There were some cases where the crimes seemed violent to me but the person was rated a low risk for violent reoffending if released, and others rated a high risk for violence who were charged with crimes that did not seem all that violent. Without knowing their criminal histories and how the risk assessments were made, it was very difficult to feel confident in my decision making.
I do think this highlights a problem with a lot of analytic data across many industries. Analytics is definitely a buzzword at the moment, whether in sports, law enforcement, education, business, or pretty much any field that can find data to use. I find there are generally three issues with relying solely on data to make decisions.
The first is how the findings are determined. Often these statistics and recommendations are based on data and methods packaged in a way that many within the industry itself do not understand, never mind a layperson. A lot of trust is being placed in the people producing the analysis.
The second is contextualizing the data. It is pretty common knowledge at this point that in Canada and the United States, people of colour are more likely to end up in prison and to receive longer sentences. If we relied solely on that data, it could lead us to conclude that racial profiling, stop-and-frisk, and policies of that nature would be good policy, since people of colour must be more violent if they more often go to jail and receive longer sentences, right? Historical context, racism in policing, prosecution, and sentencing, and all the other factors that have led to these statistics must be factored in to properly contextualize the data. We cannot just accept the numbers at face value.
The third is individual circumstances. Algorithms and data boil everything down to likelihood: how likely is this person to reoffend? How likely is this person to commit another violent crime? What they often don't factor in is the individual person in the situation. Now, I don't know how much better we are at this without the algorithms, but the big mistake many people make is treating data and algorithms as the only tool rather than seeing them for what they are: one piece of the puzzle. Factors beyond the data must be considered to make decisions properly. Data and algorithms can definitely improve our decision making but, in my opinion, cannot be the only thing we use to make the decision. The human element cannot and should not be removed from the equation either. Each individual situation must be evaluated on its own merits, using algorithms and data as one of the tools, but not the only tool, to make the decision.
I am all for using data. I love using technology, and I think many industries would find value in increasing their use of data and creating better algorithms. But without constant tweaking and evaluation of the algorithms, and without making sure they are showing us what we actually want them to show, we are lost. We must also recognize that data and algorithms cannot possibly tell the full story. There are larger societal factors and smaller individual factors that we cannot allow to be drowned out. Many will argue that data and algorithms remove bias, and when done well, I agree. However, an algorithm is only as perfect or as flawed as its creator and their own abilities and biases.