Books - Read and Enjoyed

We, the Robots?

Regulating Artificial Intelligence and the Limits of the Law

Simon Chesterman

Cambridge University Press 2021

This book is a practical, down-to-earth text concerned with the challenges that the current state of AI, and its further development, poses or will pose for national and international regulation. Its three parts discuss the challenges, the available tools, and the possibilities.

A useful lens through which many of the issues are viewed is the utility-morality-legitimacy distinction.

Utility

AI algorithms, as well as their regulation, may increase or decrease the utility of decisions or results. High-frequency trading may increase the efficiency of trading or open the door to misuse, skewing the system in the process. Autonomous driving may save costs, time, and lives but may also redistribute risks and benefits unfairly. Criminal assessment software may increase the accuracy and uniformity of offender profiles, or it may apply the bias inherent in its training data, reinforcing existing biases and leading to inferior decisions.

Understanding how AI may increase or decrease performance and quality is a precondition for putting in place regulation that consistently improves utility rather than diminishing it.

Morality

There are tasks and decisions, it can be argued, that should not be delegated to algorithms at all, at least not yet. Essentially all of today's successful AI applications rely to a large extent on machine learning, which means that assessments, categorizations, and decisions are based on historical data, i.e., on the past.

The moral problems of this approach become most apparent in the practice of the justice system. COMPAS, short for Correctional Offender Management Profiling for Alternative Sanctions, is a popular software tool used in US courts for assessing criminal offenders. Trained on data from a database of criminal offenders, COMPAS generates a score from 1 to 10 indicating the estimated probability that the offender will commit further crimes. It is used regularly in criminal cases, such as State v Loomis in Wisconsin in 2016.

 'You're identified,' Judge Scott Horne said, 'through the COMPAS assessment, as an individual
 who is at high risk to the community.' The judge then rules out
 probation 'because of the seriousness of the crime and because
 your history, your history on supervision, and the risk
 assessment tools that have been utilized, suggest that you're
 extremely high risk to re-offend'.
 
 (pp 63-64)

However, past data reflect crimes committed by people in the past, and COMPAS training relates them to attributes such as education, race, and neighborhood that per se have nothing to do with a criminal record. This is the very definition of stereotyping. A cornerstone of all advanced legal systems is the principle that offenders are punished for their own deeds and based on assessments of them as individuals. If software like COMPAS is used as an assessment tool, it violates this principle almost by definition, as long as there is no technique to remove all unwanted biases and stereotyping from the training data and algorithm.
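
How such stereotyping arises can be made concrete in a few lines of code. The following is a minimal sketch with purely synthetic data and made-up coefficients, not the actual COMPAS model: a risk model trained on historically biased labels learns to score two individuals with identical records differently, based solely on a proxy attribute.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)
    n = 20_000

    # Synthetic training data (illustrative assumptions only): one
    # behavioral feature plus a proxy attribute for group membership.
    prior_offenses = rng.poisson(1.0, n)    # individual behavior
    neighborhood = rng.integers(0, 2, n)    # 0 or 1, a group proxy

    # Historic labels: past enforcement focused on neighborhood 1, so
    # its residents were re-arrested more often at identical behavior.
    logit = 0.8 * prior_offenses + 1.2 * neighborhood - 2.0
    rearrested = rng.random(n) < 1 / (1 + np.exp(-logit))

    X = np.column_stack([prior_offenses, neighborhood])
    model = LogisticRegression().fit(X, rearrested)

    # Two individuals with identical records, different neighborhoods:
    print(model.predict_proba([[2, 0], [2, 1]])[:, 1])
    # The second person receives a markedly higher "risk" score purely
    # because of the proxy attribute: stereotyping learned from data.

Note that simply dropping the proxy column does not necessarily help, since other features (education, zip code, income) can encode much of the same information.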

The use of DNA databases shows how subtle biases may creep in. DNA found at a crime scene is routinely matched against a DNA database that has been built up from historic criminal cases. In the US, black Americans are over-represented in criminal records, arguably due to a discriminatory system. This over-representation means that black American offenders are more likely to be identified by a match in the DNA database than white offenders, which in turn means that black Americans will continue to be over-represented in the DNA database in the future, simply because they were over-represented in the past.
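
This feedback loop is easy to simulate. The sketch below is my own illustration with made-up numbers, not from the book: two equally sized groups offend at the same rate, and the only asymmetry is an initial imbalance in the database.

    import random

    random.seed(0)

    # Illustrative assumptions: two equally sized groups offending at
    # the same rate, but group A starts over-represented in the database.
    POPULATION = 10_000               # individuals per group
    in_db = {"A": 1_500, "B": 500}    # stored DNA profiles per group

    def solve_probability(group):
        # A case is solved via a DNA match (more likely the better the
        # group's database coverage) or via conventional police work.
        coverage = in_db[group] / POPULATION
        conventional = 0.10           # group-independent baseline
        return conventional + (1 - conventional) * coverage

    for year in range(1, 11):
        for _ in range(1_000):        # crimes per year, split evenly
            group = random.choice("AB")
            if random.random() < solve_probability(group):
                # Every solved case adds a profile to the database.
                in_db[group] = min(POPULATION, in_db[group] + 1)
        share_a = in_db["A"] / (in_db["A"] + in_db["B"])
        print(f"year {year}: share of group A in database = {share_a:.0%}")

Although both groups offend equally, group A's share stays far above the 50% that equal treatment would produce, because every match feeds the existing imbalance back into the database.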

Many societies acknowledge that today's state of affairs is not ideal and attempt to improve it. There is systematic discrimination against some groups based on their education, wealth, income, race, religion, etc., and this discrimination is ingrained in the databases of criminal records, employment, health, and salary statistics. Using these data (and there are no other data to use) reinforces these biases. So far, human judges, recruiters, and other decision makers are much better suited than AI algorithms to understand these biases and, by way of their decisions, nudge the system towards the envisioned ideal.

While the implications of biases in training AI systems are most striking in justice systems, they are widespread. Job recruiters, insurance companies, private schools and universities, car rental companies, banks granting loans, and many other organizations use AI systems based on machine learning for assessment and decision making. This problem thus already permeates societies and affects the majority of people in many countries. And it is a question of morality, not utility, because the practice threatens to violate two moral principles: treating individuals fairly based on assessments of them as individual persons, and moving society towards an envisioned improved state.

Legitimacy

In a third class of decision processes, the issue is neither the inferiority of the results nor the inadequacy of the available techniques to obtain a desirable output; rather, the process itself has to meet certain requirements, whatever the results.

Some decisions are only legitimate if a certain procedure is followed, independent of the quality of the result. For example, a decision by a court may not be accepted as legitimate if taken by an algorithm, but only when the court procedures are followed, with due attention given to all arguments and with careful reasoning and explanation. Thus, a given result obtained in one case by a fully transparent process and supported by detailed arguments may enjoy a high level of legitimacy, while the same result obtained by an opaque algorithm without any explanation may not be seen as legitimate at all. Many public decisions at all levels, from the district to the national government, fall into this category, because they involve the allocation of common resources, the prioritization of contested objectives, and the distribution of risks.

Another example: the appointment of a government as the result of a general and fair election is considered legitimate, but if the same government were appointed by an opaque algorithm, it would in all likelihood not be accepted as legitimate.

Autonomy

Another dimension in the discussion of AI regulation is the autonomy of the AI agent. As long as AI algorithms are simply tools that assist humans in computing and data processing, humans are the agents in charge and responsible, including in a legal sense. However, as AI becomes more sophisticated, it will also become more autonomous.

True autonomy of AI systems calls into question
long-standing assumptions that humans are the source, the means,
and the purpose of regulation.

(p 60)

The extent to which humans are in control can be categorized as in the loop, above the loop, and out of the loop; a brief sketch after the three descriptions below illustrates the distinction.

In the loop means a human actor makes all the decisions, supported only by algorithms, even if those algorithms are highly sophisticated and process vast amounts of information far beyond the capabilities of humans. Still, humans decide how to assess and use the results of the algorithms and which actions are to be pursued.

Above the loop means that algorithms make decisions but are continuously monitored by humans. If something goes wrong, humans can detect it and step in to avoid gross failures or wrong decisions.

In out-of-the-loop scenarios, humans do not monitor the algorithms' operation and will detect malfunction and misbehavior only by accident, and long after the fact.
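
To make the distinction tangible, here is a toy sketch (my own illustration, not from the book) of where the human sits relative to the decision loop:

    from enum import Enum, auto

    class Oversight(Enum):
        IN_THE_LOOP = auto()      # a human makes every final decision
        ABOVE_THE_LOOP = auto()   # the algorithm acts; a human monitors
        OUT_OF_THE_LOOP = auto()  # the algorithm acts; nobody watches

    def run(action, mode, human_ok):
        # `human_ok` stands in for whatever review interface exists.
        trace = []
        if mode is Oversight.IN_THE_LOOP:
            # Nothing happens unless the human approves beforehand.
            trace.append("executed" if human_ok(action) else "never executed")
        else:
            trace.append("executed")      # the algorithm acts on its own
            if mode is Oversight.ABOVE_THE_LOOP and not human_ok(action):
                trace.append("rolled back by monitoring human")
            # OUT_OF_THE_LOOP: no one watches; mistakes surface much later.
        return trace

    for mode in Oversight:
        print(mode.name, "->", run("approve loan", mode, lambda a: False))

With a human who rejects every proposal, the in-the-loop action never executes, the above-the-loop action executes and is then rolled back, and the out-of-the-loop action simply stands.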

Technological advances push steadily towards greater autonomy of AI systems. It is therefore mandatory that regulation keeps pace and, ideally, even prepares for fully autonomous AIs ahead of time. All three dimensions discussed above, utility, morality, and legitimacy, remain relevant for fully autonomous AI systems and become even more pronounced.

For instance, societies will have to decide which processes and decisions are handed over to AI systems, and to what degree. The better the regulatory regime of nations and the international community is prepared for this development, the higher the chances that AI can be put to beneficial use.

(AJ December 2022)