Proprietary Algorithms for Public Purposes

Author: John Gregory
Date: July 24, 2017

It is now generally recognized that “code is law”: how computers process the millions of on/off, yes/no signals in their binary universe can have legal effects beyond their obvious output. Deciding how computers handle data they receive is a matter of choice, and those choices have consequences. These consequences arise whether or not the software writers, the coders, are aware of their choices or assumptions.

Two developments have brought the coding issue back to the fore in public discussion. The first is the computerization of what used to be purely mechanical devices. The analysis of physical phenomena is now performed and communicated electronically, which increases the opportunity for hidden or unconscious assumptions to play a role in the results of the analysis.

The second is artificial intelligence (AI): as computers train themselves, draw conclusions from big data too voluminous for human-powered analysis, and build on those conclusions to ask and answer further questions, the assumptions about the data and the conclusions become ever more remote from the knowledge and control of the systems’ designers.

David Canton foresaw this issue as a key one for 2017 in his new year’s predictions on Slaw.ca:

Another AI issue we will hear about in 2017 is embedded bias and discrimination. AI makes decisions not on hard coded algorithms, but rather learns from real world data and how things react to it. That includes how humans make decisions and respond and react to things. It thus tends to pick up whatever human bias and discrimination exists. That is a useful thing if the purpose is to predict human reactions or outcomes, like an election. But it is a bad thing if the AI makes decisions that directly affect people such as who to hire or promote, who might be criminal suspects, and who belongs on a no-fly list.

And do the algorithms lie?
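To make Canton’s point concrete, here is a minimal sketch, not drawn from his column, using entirely synthetic data and hypothetical feature names, of how a model fitted to biased historical decisions reproduces that bias:

```python
# A minimal sketch (synthetic data, hypothetical features) of how a model
# trained on biased historical hiring decisions picks up that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical applicant features: a genuine qualification score and a
# proxy attribute (say, a postal-code group) unrelated to ability.
qualification = rng.normal(size=n)
proxy_group = rng.integers(0, 2, size=n)

# Historical decisions: partly merit, partly biased against group 1.
past_hired = (qualification - 0.8 * proxy_group
              + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([qualification, proxy_group])
model = LogisticRegression().fit(X, past_hired)

# The fitted model gives the proxy attribute a large negative weight,
# even though it says nothing about ability.
print(dict(zip(["qualification", "proxy_group"], model.coef_[0].round(2))))
```

The model penalizes the proxy attribute simply because past decision-makers did, which is exactly the embedded bias the column warns about.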

The present note focuses on the implications for the criminal justice system. The state prosecutes someone based on the output of a machine, such as a breathalyzer or a speed-limit radar. Even if the machine has been used the way it was supposed to be used, how does it work? How are the input data converted to output data? What are all the factors that can influence the output?
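One concrete example of such a factor: a breath-testing instrument does not measure blood alcohol directly; it converts a breath reading into a blood-equivalent figure using a built-in conversion factor, commonly a 2100:1 blood-to-breath partition ratio, even though that ratio varies from person to person. A minimal sketch, with the numbers as illustrative assumptions rather than the specification of any real device:

```python
# A hedged illustration (not any actual device's code): converting a breath
# alcohol reading into a blood-alcohol figure using a fixed partition ratio.
# The 2100:1 ratio is a common legislative assumption; individuals vary.

PARTITION_RATIO = 2100  # assumed blood:breath ratio baked into the code

def estimated_bac_g_per_dL(breath_alcohol_g_per_L: float,
                           partition_ratio: float = PARTITION_RATIO) -> float:
    # blood g/L = breath g/L * ratio; divide by 10 to express per 100 mL
    return breath_alcohol_g_per_L * partition_ratio / 10

# The same breath sample implies different "blood" results depending on the
# ratio assumed in the code (an illustrative spread of individual ratios).
for ratio in (1500, 2100, 3000):
    print(ratio, round(estimated_bac_g_per_dL(0.000381, ratio), 3))
# 1500 -> 0.057, 2100 -> 0.08, 3000 -> 0.114
```

The conversion factor is a choice made by the coder, and it is precisely the kind of choice a defendant may want to examine.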

AI is a more recent development. It is used to review and understand masses of data about human conduct, both commercial and social. Specialized uses are directed at criminal behaviour, to learn patterns of conduct and, in some places, to compare the background of a defendant with the data to predict whether the person is still a risk to society. The analysis can be used before sentencing, to judge its severity, or after, to consider eligibility for parole.
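As a rough illustration of the weighted-checklist style of tool at issue, and not the method of any deployed instrument, a risk score might be a weighted sum of background features with a threshold for flagging “high risk”; the features, weights and threshold below are hypothetical:

```python
# A hypothetical, simplified "risk score" of the weighted-sum kind.
# Features, weights and threshold are illustrative assumptions only.
WEIGHTS = {
    "prior_convictions": 0.6,
    "age_at_first_offence": -0.03,  # younger first offence -> higher score
    "unemployed": 0.4,
}
THRESHOLD = 1.0  # above this, the defendant is flagged "high risk"

def risk_score(defendant: dict) -> float:
    return sum(WEIGHTS[k] * defendant.get(k, 0) for k in WEIGHTS)

def is_high_risk(defendant: dict) -> bool:
    return risk_score(defendant) > THRESHOLD

print(is_high_risk({"prior_convictions": 2, "age_at_first_offence": 17,
                    "unemployed": 1}))  # True: score 1.09 > 1.0
```

The contested questions live in the choice of features, the weights fitted to historical data, and the threshold, none of which is visible in the single number the court sees.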

It is arguable that well-known machines that measure only physical inputs, such as breathalyzers and radar guns, can be taken as proven, though as they become driven by computers, that may change. The desire to challenge these processes has led to demands by criminal defendants in the United States to access the computer code of the breathalyzers to review their accuracy. Decisions have been divided on those demands. In some cases the devices are certified as effective by the state, usually subject to meeting...
