My Introduction to Regulation summarises the factors that have driven much of the explosive growth in regulation since the 1980s. It now looks as though the world may be entering a further Industrial Revolution in which the physical, biological and digital worlds are coming together in the form of new technologies such as machine learning, big data, robotics and gene editing. This web page contains some initial notes on the consequential regulatory issues that are beginning to attract attention.
One preliminary comment:- It is vital that regulatory frameworks are pro-competitive so that innovators can test their ideas. It is a mistake to have a regulatory framework which requires every innovation to be challenged before it can be put into practice. But it is equally important that potentially dangerous technologies are properly evaluated before deployment.
(There was an interesting Policy Exchange roundtable in July 2017 which considered how regulation might keep up with disruptive innovation. Follow this link to download a note of the discussion.)
Please send me further information and articles etc. which might help other readers keep track of interesting regulatory developments in these or other areas. If and when the notes get too long, I will create separate pages to carry the detail. This has happened already for the discussion of the regulation of the Technology Giants such as Google and Facebook.
Gene Editing

CRISPR technology is now widely available. By tweaking individual letters of the genetic code, it allows scientists to adjust in a matter of hours what evolution has fashioned over billions of years.
A Mississippi dog breeder has already been given permission to use gene editing to fix a mutation that makes Dalmatians prone to kidney disease. But future biohackers may have less acceptable objectives, including terrorism.
Autonomous Vehicles

Lots of interesting issues here. Autonomous vehicles seem certain to be much safer (on average) than those controlled by humans. But will we hold them to higher standards? For instance:
- Who will be blamed if a half-asleep driver fails to intervene to avoid a collision caused by a mistake made by another driver?
- Should the algorithms provide that the vehicle's driver be sacrificed if that is necessary to avoid killing several pedestrians? Or to avoid killing just one pedestrian - who was jaywalking, but was only five years old?
- It has even been suggested that driverless cars should be fitted with a dashboard dial allowing you to choose whether the lives of the car's occupants should always outweigh those of others, or whether the car should always sacrifice its passengers.
The government announced in November 2017 that self-driving cars would be in use in the UK by 2021, and that insurers would be required to cover injuries to all parties whether or not a human driver had intervened before his or her vehicle was involved in a collision. This implies a fundamental shift in road traffic law away from personal liability to product liability. And which manufacturer will be held liable - that of the hardware (the car) or the software?
The first pedestrian was killed by a self-driving car in March 2018.
Follow this link to read about the psychology involved in our attitude to Risk and Regulation.
Bitcoin and other Crypto-Currencies
Decentralised digital currencies, which use blockchain technology, feel like they are only a small and attractive step from where we are now.
Apart from my share in our house and car, all my significant assets are represented by bits and bytes in the IT systems of various financial institutions. I trust those institutions, of course, partly because they are so heavily regulated, and backed by the Government in the form of the Financial Services Compensation Scheme. But then I think about the financial crisis, and the way in which the true value of my financial assets is affected by interest rates and inflation, over which I have zero control. I remember all too clearly how the value of my financial assets fell by nearly one-fifth following the Brexit referendum.
Crypto-currencies, too, are no more than bits and bytes, but they are registered in a peer-to-peer database that is controlled by no-one and with which no-one can meddle. That can't be bad. On the other hand, their value, too, currently fluctuates wildly in response to real world events such as Brexit.
The key difference, I guess, is that Bitcoin and the rest are truly international. Unlike Sterling, the Dollar or the Renminbi, they are not linked to any one country or influenced by any one government. If their use continues to grow, will governments seek to regulate them or their users? And could they succeed? Many say not.
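The claim that no-one can meddle with the peer-to-peer database rests on a simple idea: each block records a cryptographic hash of its predecessor, so altering any historical entry breaks every later link. The following is a minimal sketch in Python, using only the standard library; the names and toy transactions are invented for illustration, not drawn from any real blockchain implementation.

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents (canonical JSON, so the hash is reproducible)."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    """Append a block that commits to the hash of its predecessor."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

def is_valid(chain):
    """Valid only if every block still matches the hash its successor recorded."""
    return all(
        chain[i + 1]["prev_hash"] == block_hash(chain[i])
        for i in range(len(chain) - 1)
    )

chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
add_block(chain, "Carol pays Dan 1")
assert is_valid(chain)

# Meddling with any historical entry breaks every later link:
chain[0]["data"] = "Alice pays Bob 500"
assert not is_valid(chain)
```

In a real system the ledger is replicated across thousands of independent nodes, so a meddler would have to rewrite not just one chain but a majority of all copies; the hash-chaining above is what makes any such tampering immediately detectable.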
The Technology Giants
The numerous issues associated with Google/YouTube, Facebook, Amazon, Airbnb, Uber etc. are explored here.
The Gig Economy
Information technology is facilitating new ways of ordering goods and services to be delivered to the door, including books (and much more) from Amazon, taxis (in particular from Uber), food etc. (from supermarkets), and meals (Deliveroo etc.). This is to be welcomed. (See for instance Robert Hahn and Robert Metcalfe's The Ridesharing Revolution.) But there are some unwelcome consequences.
First, the gig economy facilitates new and arguably onerous ways of employing those who prepare and deliver many goods and services. They can be required to enter into contracts under which
- they must purchase their own vans and equipment (although the vehicles etc. are provided or specified by the 'employer'),
- they have no guaranteed work (zero hours contracts),
- they are not paid, and are sometimes responsible for providing cover, when sick or on holiday.
These arrangements can be tax efficient for both 'employer' and 'employee' and they suit many individuals very well. But they can also be exploitative, leaving workers without essential protections. It is far from clear that one-sided contracts are in the long term interests of the individuals or society. Numerous cases are working their way through the courts as lawyers seek to define the boundary between being a 'worker' and being truly self-employed.
The self-employed also pay much less by way of National Insurance Contributions (NICs) despite the fact that, since the introduction of the new state pension in 2016, they get pretty much the same state benefits as employed people. The government attempted to address this by increasing NICs in 2017 but this proved very unpopular and was abandoned. The proposal also appeared to pre-judge or forestall the recommendation of the Taylor Review of modern working practices published a few months later. (See also my web page commenting on weaknesses in HMRC.)
The gig economy can also be economically devastating for those previously sheltered from such competition. Here is an extract from a report in the New York Times.
For decades there had been no more than 12,000 to 13,000 taxis in New York but now there were myriad new ways to avoid public transportation, in some cases with ride-hailing services like Via that charged little more than $5 to travel in Manhattan. In 2013, there were [already] 47,000 for-hire vehicles in the city. Now [in 2018] there were more than 100,000, approximately two-thirds of them affiliated with Uber.
While Uber has sold that “disruption” as positive for riders, for many taxi workers, it has been devastating. Between 2013 and 2016, the gross annual bookings of full-time yellow-taxi drivers in New York, working during the day when fares are typically highest, fell from $88,000 a year to just over $69,000. Medallions, which grant the right to operate a taxi in New York City, were now depreciating assets and drivers who had borrowed money to pay for them, once a sound investment strategy, were deeply in debt. [NY Taxi Drivers representative] Ms. Desai was routinely seeing grown men cry and she had become increasingly concerned about the possibility that they would begin taking their lives.
There is a separate issue concerning the companies' willingness to adjust to local culture and regulation. The BBC commented in September 2017 that "Throughout its short, tempestuous life, Uber has clashed with regulators around the world - and more often than not it has come out on top. Its tactic has often been to arrive in a city, break a few rules, and then apologise when it's rapped over the knuckles. Some regulators have backed down, others have run the company out of town."
Algorithms & AI
There is a bit of a theme running through some of the above issues. Modern technologies, including Artificial Intelligence (AI) and algorithms, cut costs and facilitate activities (such as internet searches and autonomous driving) which would otherwise be impossible. But they remove human involvement from the decision-making. For algorithms, all decisions are binary:- a big contrast (in the UK at least) with our tradition of having law enforcement moderated by human police officers, jury-members and judges. Katia Moskvitch commented, with some force, that 'our society is built on a bit of contrition here, a bit of discretion there'. Follow this link for a further discussion of this subject.
And then there is the related issue that algorithms are written by humans, who will almost certainly (though accidentally) import their own false assumptions, generalisations, biases and preconceptions. How easy is it to challenge decisions made by such algorithms? Does it matter, for instance, that recruitment decisions (including to the civil service) are nowadays often made by algorithms whose logic is held in a 'black box' inaccessible to anyone other than its designer - and maybe not to the client?
One interesting (worrying?) example is Durham Police's use of an AI system to help their officers decide whether to hold suspects in custody or release them on bail. Inputs into the decision-making include gender and postcode. The force stresses that the decision is still taken by an officer, albeit 'assisted by' the AI, but the Law Society has expressed concern that custody sergeants will in practice delegate responsibility to the algorithm, and face questions from senior officers if they choose to go against it.
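Why does an input like postcode worry people when the protected attribute itself is never used? Because a postcode can act as a proxy: if past decisions were biased, and postcode correlates with group membership, a model trained on that history reproduces the bias. The toy Python sketch below illustrates the mechanism; the postcodes, groups and "hold rate" scoring rule are all invented for illustration and bear no relation to Durham's actual system.

```python
from collections import defaultdict

# Toy historical custody decisions. 'held' reflects a past bias:
# suspects from postcode "X1" (mostly group A) were routinely held.
history = [
    {"postcode": "X1", "group": "A", "held": True},
    {"postcode": "X1", "group": "A", "held": True},
    {"postcode": "X1", "group": "B", "held": True},
    {"postcode": "Y2", "group": "B", "held": False},
    {"postcode": "Y2", "group": "B", "held": False},
    {"postcode": "Y2", "group": "A", "held": False},
]

# A naive "risk" score that never looks at the protected attribute:
# simply the historical hold-rate for the suspect's postcode.
outcomes = defaultdict(list)
for case in history:
    outcomes[case["postcode"]].append(case["held"])
risk = {pc: sum(held) / len(held) for pc, held in outcomes.items()}

def recommend_hold(postcode, threshold=0.5):
    """Advisory output of the kind an officer might be 'assisted by'."""
    return risk.get(postcode, 0.0) >= threshold

# The model reproduces the postcode disparity - and, because postcode
# correlates with group membership, the group disparity too - even
# though 'group' never appears as an input.
assert recommend_hold("X1") and not recommend_hold("Y2")
```

Stripping the protected attribute out of the inputs therefore guarantees nothing; auditing would need to look at the outputs across groups, which is hard when the logic sits in a 'black box'.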
In the US, a federal judge ruled that a 'black box' performance algorithm violated Houston teachers' civil rights. But Eric Loomis, in Wisconsin, failed to persuade a judge that it was unfair that he was given a hefty prison sentence partly because the COMPAS algorithm judged him to be at high risk of re-offending. This was despite his lawyer arguing that such a secret algorithm was analogous to evidence offered by an anonymous expert whom one cannot cross-examine.
The ability of algorithms and AI to work together to the disadvantage of consumers is also beginning to cause concern. There is more detail in the discussion on my cartels web page.
AI now dominates modern financial markets. A JP Morgan analyst has estimated that a mere 10 per cent of US equity market trading is now conducted by discretionary human traders; the rest is driven by various rules-based automatic investment systems, ranging from exchange traded funds to computerised high-speed trading programs. The FT's Gillian Tett argues that we are seeing the rise of self-driving investment vehicles, matching the auto world. But while the sight of driverless cars on the roads has sparked public debate and scrutiny, that has not occurred with self-driving finance.
Karen Yeung offers an interesting academic review of Algorithmic Regulation and Intelligent Enforcement on pp 50- of CARR's 2016 discussion paper Regulation scholarship in crisis?. She notes AI's 'three claimed advantages. Firstly, by replacing the need for human monitors and overseers with ubiquitous, networked digital sensors, algorithmic systems enable the monitoring of performance against targets at massively reduced cost and human effort. Secondly, it operates dynamically in real-time, allowing immediate adjustment of behaviour in response to data feedback thereby avoiding problems arising from out-of-date performance data. Thirdly, it appears to provide objective, verifiable evidence because knowledge of system performance is provided by data emitted directly from a multitude of behavioural sensors embedded into the environment, thereby holding out the prospect of 'game proof' design.' But 'All these claims ... warrant further scrutiny' which she proceeds to offer.
Above all, though, it is important to remember Stephen Cave's warning that our "biggest misapprehension about AIs is that they will be something like human intelligence. The way they work is nothing like the human brain. In their goals, capacities and limitations they will actually be profoundly different to us large-brained apes." An emerging class of algorithms makes judgments on the basis of inputs that most people would not think of as data. One example is a Skype-based job-interviewing algorithm that assesses candidates' body language and tone of voice via a video camera. Another algorithm has been shown to predict with 80% accuracy which married couples will stay together - better than any therapist - after analysing the acoustic properties of their conversation.
And there is much to ponder in Joanna Bryson's IPR blog Tomorrow comes today: How policymakers should approach AI. She says, for instance, that:
- AI is already the core technology of the richest corporations on both sides of the great firewall of China.
- Already [AI is] far better at predicting individuals' behaviour than individuals are happy to know, and therefore than companies are happy to publicly reveal.
- The government's present policy of outlawing adequate encryption is a severe threat to the UK on many levels, but particularly with respect to AI.
- AI and ICT more generally have become sufficiently central to every aspect of our wellbeing that they require dedicated regulatory bodies just as we have for drugs and the environment.
- This is not the same as saying that AI cannot have proprietary intellectual property or must all be open source. Medicine is full of intellectual property, yet it is well regulated.
Dr Bryson's blog also says interesting things about regulation of the Technology Giants.
And we may never fully understand how particular AI systems learn and work. No-one in Google, for instance, can tell you exactly why AlphaGo made the moves that it did when it started beating the best Go players in the world.
The House of Lords published a detailed report in 2018 AI in the UK: ready, willing and able? which included some interesting regulatory recommendations such as:
- The monopolisation of data by big technology companies must be avoided, and greater competition is required. The Government, with the Competition and Markets Authority, must review the use of data by large technology companies operating in the UK.
- The prejudices of the past must not be unwittingly built into automated systems. The Government should incentivise the development of new approaches to the auditing of datasets used in AI, and also to encourage greater diversity in the training and recruitment of AI specialists.
- Transparency in AI is needed. The industry, through the AI Council, should establish a voluntary mechanism to inform consumers when AI is being used to make significant or sensitive decisions.
- It is not currently clear whether existing liability law will be sufficient when AI systems malfunction or cause harm to users, and clarity in this area is needed. The Committee recommend that the Law Commission investigate this issue.
The Digital Poorhouse?
Increased unregulated use of AI may also have profound social consequences. Virginia Eubanks argues that 'We all live under this new regime of data analytics, but we don’t all experience it in the same way. Most people are targeted for digital scrutiny as members of social groups, not as individuals. People of color, migrants, stigmatized religious groups, sexual minorities, the poor, and other oppressed and exploited populations bear a much heavier burden of monitoring, tracking, and social sorting than advantaged groups.'
Her full Harpers article is here.
Neurotechnology

We have all grown up believing that, although our physical behaviour can easily be constrained and dominated by others, our minds, thoughts, beliefs and convictions are to a great extent beyond external constraint. As John Milton said "Thou canst not touch the freedom of my mind". But advances in neural engineering, brain imaging and neuro-technology mean that the mind may soon not be such an unassailable fortress. Elon Musk and others are developing tools such as
- brain computer interfaces,
- lie detectors that use brain scanning to achieve very high success rates,
- brain scans that predict recidivism rates for offenders, and
- ways of altering memories.
This suggests that we will, at the very least, require improvements to laws around data analysis and collection. But some scientists argue that human rights law will need to be updated to take into account the ability of governments not only to peer into people's minds but also alter them.
Protecting Key Infrastructure
Remember the stories about the Russian hacking of Western databases, and the Stuxnet attack on Iranian nuclear industry centrifuges? Much Western infrastructure is nowadays in private hands, so whose responsibility is it to defend it? Government is understandably reluctant to take on such a massive task, but industry is understandably unwilling to foot the bill. The answer, in the UK at least, is that the owners of designated Critical National Infrastructure have a legal duty to safeguard it, advised and monitored by the Centre for the Protection of National Infrastructure or the National Cyber Security Centre.
Sex Robots

The Foundation for Responsible Robotics published an interesting report Our Sexual Future with Robots in July 2017. The report discussed whether increasingly lifelike robots, such as Sophia, might:
- negatively impact on societal attitudes to women and their body image as well as further objectify and commodify the female body,
- encourage social isolation,
- help reduce sex crime, and
- (by allowing people to live out their darkest fantasies) have a pernicious effect on society and create more danger for the vulnerable. There is, for instance, a lack of clarity about the law regarding sex robots that look like children. (Child-like sex dolls are illegal in the UK.)
The pace of change in this area certainly seems likely to require some form of regulatory response before too long.
Data Protection

New EU rules (the General Data Protection Regulation) come into force in May 2018, but one wonders whether any regulations can adequately protect the interests of consumers faced with the increasing monetisation of personal data. The following extracts from a letter to the FT summarised the concerns very well:
... The consumer will never own the data or the algorithms. ... Every moment, your data relating to browsing, calling, online, social media, location tracking and so on is being churned through a multiverse of data warehouses. If you have been browsing about a certain medicine, correlating to a call to an oncologist and a search for a nearby pharmacy, this can consequently be packaged as a data intelligence report and sold to your medical insurance company. This is just one of the myriad ways monetisation is being unleashed on unsuspecting consumers across the world.
The data protection regulations, although a step in the right direction, are usually still heavily tilted in favour of the corporate giants and still focused more on cross-border transfers than on the real risks of monetisation. The fines imposed on the Silicon Valley giants are minuscule compared with the money they have made from data monetisation efforts. And this is all achieved in the age that is still a forerunner to the era of artificial intelligence and quantum computing.
The very concept of data privacy is archaic and academic. The tech giants are moving faster than this philosophical debate about data privacy. All the sound-bites from the tech giants are mere smoke and mirrors. Unless we revisit our concepts of what is data privacy for this new age of data monetisation, we will never really grapple with the real challenges and how to enforce meaningful regulation that really sets out to protect the consumer.
Syed Wajahat Ali