Artificial Intelligence

Modern technologies, including Artificial Intelligence (AI) and algorithms, cut costs and facilitate activities (such as internet searches and autonomous driving) that would otherwise be impossible. But they remove human involvement from decision-making.

For algorithms, all decisions are binary. Deploying them in law enforcement would therefore be a big change (in the UK at least) from our tradition of having law enforcement moderated by human police officers, jury members and judges. Katia Moskvitch commented, with some force, that 'our society is built on a bit of contrition here, a bit of discretion there'. Follow this link for a further discussion of this subject.

And then there is the related issue that algorithms are written by humans, who will almost certainly (though accidentally) import their own false assumptions, generalisations, biases and preconceptions. How easy is it to challenge decisions made by such algorithms? Does it matter, for instance, that recruitment decisions (including to the civil service) are nowadays often made by algorithms whose logic is held in a 'black box' accessible only to its designer - and perhaps not even to the client?

Self-learning AI can be just as dangerous. I was struck by this report in The Times in October 2018:

Amazon inadvertently built itself a sexist recruitment assistant in a failed experiment that demonstrated why artificial intelligence does not necessarily lead to artificial wisdom. The company set out to create a virtual hiring tool that would sift thousands of job applications far more efficiently than people can. Unfortunately, the AI algorithm taught itself to discriminate against women based on the fact that many more men had applied for and got jobs in the past. The new system began to penalise CVs that included the word “women’s”, as in “women’s chess club captain”. It downgraded applications sent by graduates of two all-female universities and prioritised applications that featured verbs more commonly found in male engineers’ CVs, such as “executed” and “captured”.

AI predominates in modern financial markets. A JP Morgan analyst has estimated that a mere 10 per cent of US equity market trading is now conducted by discretionary human traders; the rest is driven by various rules-based automatic investment systems, ranging from exchange traded funds to computerised high-speed trading programs. The FT's Gillian Tett argues that we are seeing the rise of self-driving investment vehicles, matching the auto world. But while the sight of driverless cars on the roads has sparked public debate and scrutiny, self-driving finance has attracted no equivalent attention.

And it is important to remember - when wondering what can go wrong - that artificial intelligence applies software to data sets. Either or both can be faulty.

There are reports, for instance, that over-stretched social services teams are looking to algorithms to help them make those terrible decisions about whether 'at risk' children should be removed from their parents. The rate of child abuse in the general population is so low that false positives (wrongly removing a child) are inevitable. Is any evidence base strong enough - and free enough of crude stereotypes - to support automated decisions? If it were your child that was being taken into care, would you prefer the decision to be taken by an algorithm or a human?
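The arithmetic behind that concern is easy to sketch. In the Python below, the prevalence, sensitivity and specificity figures are purely illustrative assumptions, not real child-protection statistics; the point is only that when the underlying rate is very low, even an apparently accurate screening tool flags far more families wrongly than rightly.

```python
# Illustrative only: prevalence, sensitivity and specificity are assumed
# figures, not real child-protection statistics.
def screening_outcomes(population, prevalence, sensitivity, specificity):
    """Expected results when a classifier screens an entire population."""
    at_risk = population * prevalence
    not_at_risk = population - at_risk
    true_positives = at_risk * sensitivity
    false_positives = not_at_risk * (1 - specificity)
    precision = true_positives / (true_positives + false_positives)
    return true_positives, false_positives, precision

# Even a tool that is right 95 per cent of the time, in both directions,
# produces roughly ten false alarms for every genuine case when only
# 0.5 per cent of families are actually at risk.
tp, fp, precision = screening_outcomes(100_000, 0.005, 0.95, 0.95)
print(f"True positives:  {tp:,.0f}")
print(f"False positives: {fp:,.0f}")
print(f"Chance a flagged family is genuinely at risk: {precision:.0%}")
```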

And older doctors are concerned that their younger colleagues may be placing too much reliance on technology, and not enough on patient-reported symptoms. A senior surgeon, Mr Skidmore, wrote this to the FT in 2018:

Time and again, the errors being made by a younger generation of medical specialists in many disciplines are due to excessive reliance on scans and other images when these reports are at variance with the history given by the patient, and to increasingly cursory clinical examinations. There remains no substitute whatsoever for a doctor who has been trained by good teachers carrying out a meticulous bedside assessment of a patient and thereby constructing a provisional diagnostic matrix. ... Can AI assess abdominal rigidity in a patient with peritonitis? AI cannot smell odours that accompany disease. AI cannot assimilate or validate pain on an analogue scale. ... Disease processes do not change. Meticulous assessment of a patient’s symptoms and signs remain just as relevant today. AI can be used to confirm the clinical diagnosis but should never be allowed to refute it. Unfortunately, with errors in communication and failure of continuity of care responsibility, excessive and unquestioning reliance on AI can lead to clinical delay in patient management, with disastrous consequences.

Predictor Values - or Prejudices?

Durham Police are using an AI system to help their officers decide whether to hold suspects in custody or release them on bail. Inputs into the decision-making include gender and postcode. The force stresses that the decision is still taken by an officer, albeit 'assisted by' the AI, but the Law Society has expressed concern that custody sergeants will in practice delegate responsibility to the algorithm, and will face questions from senior officers if they choose to go against it. One problem, for instance, might be that the system uses postcode data as one of its 'predictor values' - but postcodes can also be indicators of deprivation, thus possibly creating a sort of feedback loop as officers increasingly focus on deprived areas.
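To see how such a feedback loop might arise, here is a toy simulation; every number in it is invented for illustration and bears no relation to Durham's actual model.

```python
import random

# Toy simulation (all figures invented): two areas with the same true
# offending rate, but patrols are allocated in proportion to past recorded
# crime, so the record - the algorithm's input - drifts apart on its own.
TRUE_RATE = 0.05                        # identical underlying offending in both areas
recorded = {"deprived postcode": 12, "affluent postcode": 10}   # small initial gap

random.seed(1)
for year in range(10):
    total = sum(recorded.values())
    for area in recorded:
        patrols = int(100 * recorded[area] / total)    # patrols follow the record
        # more patrols mean more offences are detected and added to the record
        recorded[area] += sum(random.random() < TRUE_RATE for _ in range(patrols))

print(recorded)   # the recorded gap persists and tends to grow,
                  # even though behaviour in the two areas never differed
```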

There are already some 'no go' areas for algorithms. It would not be acceptable (in the UK) for Durham Police - or anyone else - to use ethnicity as one of their predictor values. And insurance companies are not allowed to offer cheaper car insurance to women, even though they are on average much safer drivers than men. But why then should other predictor values be acceptable? Age and gender, for instance, are used by Durham Police.

In the US, a federal judge ruled that a 'black box' performance algorithm violated Houston teachers' civil rights. But Eric Loomis, in Wisconsin, failed to persuade a judge that it was unfair that he was given a hefty prison sentence partly because the COMPAS algorithm judged him to be at high risk of re-offending. This was despite his lawyer arguing that such a secret algorithm was analogous to evidence offered by an anonymous expert whom one cannot cross-examine - and despite one analysis of the system suggesting that it was twice as likely to wrongly predict re-offending for a black defendant as for a white one.
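Findings like that come from comparing error rates across groups. A minimal sketch of the check, using a handful of invented records rather than the COMPAS data, might look like this:

```python
# Invented records purely to show the calculation:
# (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True,  False), ("A", True,  True), ("A", False, False), ("A", True,  False),
    ("B", False, False), ("B", True,  True), ("B", False, False), ("B", False, False),
]

def false_positive_rate(rows, group):
    """Of those in the group who did NOT re-offend, what share were flagged high risk?"""
    non_reoffenders = [r for r in rows if r[0] == group and not r[2]]
    wrongly_flagged = [r for r in non_reoffenders if r[1]]
    return len(wrongly_flagged) / len(non_reoffenders)

for group in ("A", "B"):
    print(group, f"{false_positive_rate(records, group):.0%}")
```

A large gap between the two percentages is exactly the kind of disparity the COMPAS analysis reported.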

Are We Entitled to Explanations?

The EU's General Data Protection Regulation (GDPR) requires organisations to tell us when automated decisions affect our lives, and allows us to challenge those outcomes, requiring companies to give meaningful information about the decision. But it is not yet clear how informative, in practice, such explanations will be. The companies that design the algorithms will in particular want to defend their intellectual property, and some AI systems learn from experience, so even their designers may be unable to explain what has happened.

Stephen Cave warns that our "biggest misapprehension about AIs is that they will be something like human intelligence. The way they work is nothing like the human brain. In their goals, capacities and limitations they will actually be profoundly different to us large-brained apes." An emerging class of algorithms makes judgments on the basis of inputs that most people would not think of as data. One example is a Skype-based job-interviewing algorithm that assesses candidates' body language and tone of voice via a video camera. Another algorithm has been shown to predict with 80 per cent accuracy which married couples will stay together - better than any therapist - after analysing the acoustic properties of their conversation.

US expert David Gunning adds that the best-performing systems may be the least explainable. This is because machines can create far more complex models of the world than most humans can comprehend. Counterfactual testing - such as changing the ethnicity of a subject to see whether the decision changes - will not work when there is a complex stream of data feeding the decision-making.
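Counterfactual testing itself is simple to express in code. A minimal sketch, assuming the black box can be called as an ordinary function and using an invented scoring rule in its place:

```python
# Hypothetical scoring rule standing in for a real black-box model.
def model(applicant):
    return "approve" if applicant["years_experience"] >= 3 and applicant["postcode"] != "X1" else "reject"

def counterfactual_test(model, record, attribute, alternative):
    """Flip one attribute and report whether the model's decision changes."""
    varied = dict(record, **{attribute: alternative})
    return model(record), model(varied)

original, flipped = counterfactual_test(
    model, {"years_experience": 4, "postcode": "X1"}, "postcode", "Y2"
)
print(original, "->", flipped)   # a changed decision shows the attribute mattered
```

Gunning's point is that when the inputs are a complex, continuous stream rather than a few discrete attributes, there is no single value to flip, and this kind of test tells you very little.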

Indeed, we may never fully understand how particular AI systems learn and work. No-one in Google, for instance, can tell you exactly why AlphaGo made the moves that it did when it started beating the best Go players in the world.

Further Reading

The ability of algorithms and AI to work together to the disadvantage of consumers is also beginning to cause concern. There is more detail in the discussion on my cartels web page.

Karen Yeung offers an interesting academic review of Algorithmic Regulation and Intelligent Enforcement on pp 50- of CARR's 2016 discussion paper Regulation scholarship in crisis?. She notes AI's three claimed advantages: 'Firstly, by replacing the need for human monitors and overseers with ubiquitous, networked digital sensors, algorithmic systems enable the monitoring of performance against targets at massively reduced cost and human effort. Secondly, it operates dynamically in real-time, allowing immediate adjustment of behaviour in response to data feedback thereby avoiding problems arising from out-of-date performance data. Thirdly, it appears to provide objective, verifiable evidence because knowledge of system performance is provided by data emitted directly from a multitude of behavioural sensors embedded into the environment, thereby holding out the prospect of 'game proof' design.' But 'All these claims ... warrant further scrutiny', and she proceeds to offer that scrutiny.

There is much to ponder in Joanna Bryson's IPR blog Tomorrow comes today: How policymakers should approach AI.

Dr Bryson's blog also says interesting things about regulation of the Technology Giants.

The House of Lords published a detailed report in 2018, AI in the UK: ready, willing and able?, which included some interesting regulatory recommendations.

Parliament's Science and Technology Committee published a thorough report - Algorithms in Decision Making - in 2018, in particular pressing the government to require algorithm operators to be transparent. The government's response was also pretty thorough, although it said it had no plans to introduce legally binding measures to allow challenges to the outcomes of decisions made using algorithms.

[Other lively regulatory issues - especially in response to innovation - are summarised here.]


Martin Stanley