Artificial Intelligence and Bias: Social Impacts of a Technical Solution

Author: Sarah A. Sutherland
Date: August 13, 2019

One of the substantial concerns about the implementation of artificial intelligence (AI) in the legal space is bias, and evidence has shown that this concern is warranted. Given the urgency of this topic as these systems are being sold and deployed, I was happy to be able to speak about it at the Canadian Association of Law Libraries Conference in May and the American Association of Law Libraries Conference in July. Here are some of my thoughts on AI that may not have made it into the presentations.

First, some discussion of AI itself. While it’s fun to talk about AI broadly, it is helpful to break down what kinds of technologies people generally mean when they discuss it. Essentially, there are two types:

  1. The first runs complex statistical analyses and makes inferences and predictions based on input data. It is based on past activity, and its assumptions can be adjusted to explore different ways of predicting future outcomes.
  2. The second uses computer programs that run over data and draw their own conclusions. The input data can be in different formats, including numerical or textual sources. This type is called “self-learning” and requires less data than the first kind. (A short sketch contrasting the two follows this list.)
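
To make the distinction concrete, here is a minimal sketch in Python. Nothing in it comes from the article: the library (scikit-learn), the features, and the toy data are all assumptions chosen for illustration. The first half fits a statistical model to structured past data and “plays with” an assumption (the decision threshold); the second half runs over raw text and derives its own features from the words.

```python
# A minimal sketch contrasting the two kinds of AI described above.
# All data, features, and labels are invented for illustration.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Type 1: statistical inference over structured input data.
# Hypothetical features: [claim amount, number of prior filings];
# label: 1 = claim allowed, 0 = claim denied.
X = np.array([[10_000, 0], [250_000, 3], [5_000, 1], [400_000, 5]])
y = np.array([1, 0, 1, 0])
statistical_model = LogisticRegression().fit(X, y)

# The assumptions can be "played with": here we vary the decision
# threshold to explore different ways of predicting the same outcome.
prob = statistical_model.predict_proba([[120_000, 2]])[0, 1]
for threshold in (0.3, 0.5, 0.7):
    print(f"threshold={threshold}: allowed={prob > threshold}")

# Type 2: a model that runs over raw text and draws its own
# conclusions, deriving its features from the words themselves.
docs = [
    "the claim is allowed with costs",
    "the appeal is dismissed",
    "the application is granted",
    "the motion is denied with costs",
]
labels = [1, 0, 1, 0]  # 1 = successful outcome
vectorizer = TfidfVectorizer()
text_model = LogisticRegression().fit(vectorizer.fit_transform(docs), labels)
print(text_model.predict(vectorizer.transform(["the appeal is allowed"])))
```

With data this small the point is only the shape of the two approaches: the first needs someone to choose the features and assumptions up front, while the second extracts them from the raw material itself.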

The economic impact of AI is expected to be felt primarily in the way we perceive and value decision making, because AI is often used to match patterns in human decision making to make decisions in similar situations. Like the introduction of spreadsheets, which made bookkeeping cheap and efficient, AI is expected to reduce the effort and cost of decision making. Decision making in rare or uncommon situations is another matter: “AI cannot predict what a human would do if that human has never faced a similar situation.”[1] Machines are, and will continue to be, bad at predicting rare events. While automating decision making won’t eliminate all jobs, its economic impact is likely to change them: a school bus driver might not drive an autonomous bus, but someone will still be needed to supervise and protect the children in it.[2]
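
As a hedged illustration of that pattern matching, and of why rare events are hard, here is a short sketch. The cases, features, and outcomes are all invented, and a nearest-neighbour classifier stands in for whatever pattern-matching model a real system might use. A routine case gets a well-supported prediction; an unprecedented case still gets an answer, but only by echoing the nearest past decisions.

```python
# A sketch of decision making by matching patterns in past human
# decisions. Cases, features, and outcomes are invented; k-nearest
# neighbours stands in for any pattern-matching model.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Past decisions. Hypothetical features: [severity, prior offences];
# outcome: 0 = fine, 1 = custody.
past_cases = np.array([[1, 0], [2, 1], [3, 0], [8, 4], [9, 5], [7, 3]])
outcomes = np.array([0, 0, 0, 1, 1, 1])

model = KNeighborsClassifier(n_neighbors=3).fit(past_cases, outcomes)

# A routine case close to past ones: the prediction is well supported.
print(model.predict([[2, 0]]))  # falls in the "fine" cluster

# A rare case unlike anything in the data: the model still returns an
# answer, but it is only an echo of the nearest past cases. Large
# neighbour distances are one signal that this is extrapolation, not a
# grounded prediction of what a human decision maker would do.
rare_case = np.array([[50, 0]])
print(model.predict(rare_case))
distances, _ = model.kneighbors(rare_case)
print(distances)
```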

These predictions are still speculative, because the technology and its implementation have not caught up to people’s ideas of what might happen. In the legal space, the primary data source being used for AI is free text in the form of written prose, drawn from sources like court judgements, legislation, and other legal writing such as commentary or court filings. AI systems are not capable of understanding complex meaning and extracting facts...
