Regulating the Algorithms that Impact Our Lives
In a world increasingly shaped by technological advances in data analytics, the “algorithm” is frequently at the forefront of the discussion. Algorithms not only enable social media companies to gather data and optimize what appears on your timeline or feed, but also power complex applications like autonomously driven vehicles and automated financial advisors. Because of the rapid pace of innovation in the field, policymakers and legislators are scrambling to regulate the industry. One of the most important conversations in public policy will be where regulatory limits are placed and how they are enacted, as the algorithm is a piece of technology that will play an even larger role moving forward.
To those unfamiliar with artificial intelligence and algorithms, the former is essentially driven by the latter. Artificial intelligence, here meaning the capacity of technology to exhibit cognitive abilities similar to those of a human, is powered by algorithms that enable a machine to adapt and become more efficient over time. In his essay Knowing Algorithms, Nick Seaver cites the definition of the term from a popular computer science textbook: “Informally, an algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output. An algorithm is thus a sequence of computational steps that transform the input into the output”. The algorithm thus becomes a tool for optimizing the efficiency of a given procedure or action performed by technology.
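The textbook definition Seaver cites can be made concrete with a minimal sketch in Python. The example below is illustrative only (the function name is invented for this essay): insertion sort is a classic instance of a well-defined sequence of computational steps that transforms an input, a list of values, into an output, the same values in sorted order.

```python
def sort_values(values):
    """A textbook algorithm: well-defined steps that transform
    an input sequence of values into a sorted output sequence."""
    result = list(values)  # copy the input; the steps below transform it
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        # Shift larger elements one slot right until key fits.
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result

print(sort_values([5, 2, 4, 6, 1, 3]))  # -> [1, 2, 3, 4, 5, 6]
```

Every run on the same input follows the same steps and produces the same output, which is precisely what makes the procedure “well-defined” in the textbook's sense.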
Algorithms have quickly worked their way into many facets of modern society, from determining sentencing minimums in a court of law to piloting military drones. Although algorithms are capable of greatly expediting processes in both of these areas, there are legitimate ethical concerns. It is possible to measure the safety of an algorithm designed to drive an automobile autonomously while carrying a human occupant, but it is challenging to place liability regulations on such a new form of technology. There is little regulation or legislation addressing the circumstances in which an autonomously driven vehicle injures a person or damages property. Who is at fault when an autonomously driven vehicle crashes into another vehicle driven by a human?
Another area of concern stems from the original process of developing the algorithm. A developer can impart biases, even unknowingly, into an algorithm, and those biases will manifest when it runs. As our society adopts algorithms for processes such as determining a prospective jail sentence or calculating credit scores, biases contained within the algorithm itself become a serious problem. In a 2018 report entitled Intro to AI for Policymakers: Understanding the Shift, the Brookfield Institute in Canada notes the example of PredPol, a crime-predicting system widely used in the United States. Its algorithm was built on outdated data sources, leading it to inaccurately predict crime in lower-income areas and thus increase police presence in those communities. This example shows how a flawed inception can embed biases into the core of an algorithm and cause problems long afterward.
In his article Ethical Algorithms: How to Make Moral Machine Learning, Dan Corder identifies patterns of errors committed by AI that resulted from biases. He sees potential in research that would teach AI to model ethical behavior by exposing it to repeated instances of ethical and unethical behavior. As our society searches for ways to reconcile these issues, he posits that the problem starts with the initial programming of the algorithm. This idea is particularly important because, in many cases, decisions made by the algorithms embedded in AI systems can have dire consequences. Understanding the reasoning behind those decisions is likewise essential for identifying where an algorithm may be biased or contain an error in its programming.
Regarding the transparency of artificial intelligence systems, it is often difficult to determine the rationale by which a system reached a decision. These systems, often highly complex, analyze massive quantities of data in ways that are beyond human capabilities.
The degree to which a system's decision-making process can be understood is often referred to as its “explainability”, and it is crucial in terms of liability. In the aforementioned example of an algorithm determining the length of a sentence in the courtroom, understanding the rationale behind the decision is essential for transparency. In that circumstance, an explanation is undoubtedly owed to the accused, as the decision would have a profound impact upon their life.
In our fast-paced modern world, the turn to artificial intelligence has been so quick that many policymakers are only now beginning the conversation on regulating technologies like the algorithm. We often place high levels of scrutiny on individuals and companies in positions of power, but that scrutiny has not yet been applied to the algorithms that govern so much of our daily lives. The conversation on liability and the ethical concerns arising from the use of algorithms is one of the most important conversations not being had, and as this form of technology continues to unfold, it grows increasingly urgent. Although the most immediate regulatory concerns might focus on safety, liability, and bias, the conversation will need to be ongoing, as artificial intelligence and the algorithm are poised to play a large part in our future.