Regulation - Responding to Innovation

My Introduction to Regulation summarises the factors that have driven much of the explosive growth in regulation since the 1980s. It now looks as though the world may be entering a further Industrial Revolution in which the physical, biological and digital worlds are coming together in the form of new technologies such as machine learning, big data, robotics and gene editing. This web page contains some initial notes on the consequential regulatory issues that are beginning to attract attention.

One preliminary comment:- It is vital that regulatory frameworks are pro-competitive so that innovators can test their ideas. It is a mistake to have a regulatory framework which requires every innovation to be challenged before it can be put into practice. But it is equally important that potentially dangerous technologies are properly evaluated before deployment.

(There was an interesting Policy Exchange roundtable in July 2017 which considered how regulation might keep up with disruptive innovation. Follow this link to download a note of the discussion.)

Please send me further information and articles etc. which might help other readers keep track of interesting regulatory developments in these or other areas. If and when the notes get too long, I will create separate pages to carry the detail. This has happened already for the discussion of the regulation of the Technology Giants such as Google and Facebook.

Algorithms & AI

There is a bit of a theme running through many of the issues listed below. Modern technologies, including Artificial Intelligence (AI) and algorithms, cut costs and facilitate activities (such as internet searches and autonomous driving) which would otherwise be impossible. But they remove human involvement from the decision-making. For algorithms, all decisions are binary:- a big contrast (in the UK at least) from our tradition of having law enforcement moderated by human police officers, jury-members and judges. Katia Moskvitch commented, with some force, that 'our society is built on a bit of contrition here, a bit of discretion there'. Follow this link for a further discussion of this subject.

And then there is the related issue that algorithms are written by humans, who will almost certainly (though accidentally) import their own false assumptions, generalisations, biases and preconceptions. How easy is it to challenge decisions made by such algorithms? Does it matter, for instance, that recruitment decisions (including to the civil service) are nowadays often made by algorithms whose logic is held in a 'black box' inaccessible to anyone other than its designer - and maybe not to the client?

One interesting (worrying?) example is Durham Police's use of an AI system to help their officers decide whether to hold suspects in custody or release them on bail. Inputs into the decision-making include gender and postcode. The force stresses that the decision is still taken by an officer, albeit 'assisted by' the AI, but the Law Society has expressed concern that custody sergeants will in practice delegate responsibility to the algorithm, and face questions from senior officers if they choose to go against it.

In the US, a federal judge ruled that a 'black box' performance algorithm violated Houston teachers' civil rights. But Eric Loomis, in Wisconsin, failed to persuade a judge that it was unfair that he was given a hefty prison sentence partly because the COMPAS algorithm judged him to be at high risk of re-offending. This was despite his lawyer arguing that such a secret algorithm was analogous to evidence offered by an anonymous expert whom one cannot cross-examine.

The ability of algorithms and AI to work together to the disadvantage of consumers is also beginning to cause concern. There is more detail in the discussion on my cartels web page.

AI now dominates modern financial markets. A JP Morgan analyst has estimated that a mere 10 per cent of US equity market trading is now conducted by discretionary human traders; the rest is driven by various rules-based automatic investment systems, ranging from exchange traded funds to computerised high-speed trading programs. The FT's Gillian Tett argues that we are seeing the rise of self-driving investment vehicles, matching the auto world. But while the sight of driverless cars on the roads has sparked public debate and scrutiny, that has not occurred with self-driving finance.

Karen Yeung offers an interesting academic review of Algorithmic Regulation and Intelligent Enforcement on pp 50- of CARR's 2016 discussion paper Regulation scholarship in crisis?. She notes AI's 'three claimed advantages. Firstly, by replacing the need for human monitors and overseers with ubiquitous, networked digital sensors, algorithmic systems enable the monitoring of performance against targets at massively reduced cost and human effort. Secondly, it operates dynamically in real-time, allowing immediate adjustment of behaviour in response to data feedback thereby avoiding problems arising from out-of-date performance data. Thirdly, it appears to provide objective, verifiable evidence because knowledge of system performance is provided by data emitted directly from a multitude of behavioural sensors embedded into the environment, thereby holding out the prospect of 'game proof' design.' But 'All these claims ... warrant further scrutiny' which she proceeds to offer.

Above all, though, it is important to remember Stephen Cave's warning that our "biggest misapprehension about AIs is that they will be something like human intelligence. The way they work is nothing like the human brain. In their goals, capacities and limitations they will actually be profoundly different to us large-brained apes." An emerging class of algorithms make judgments on the basis of inputs that most people would not think of as data. One example is a Skype-based job-interviewing algorithm that assesses candidates' body language and tone of voice via a video camera. Another algorithm has been shown to predict with 80% accuracy which married couples will stay together - better than any therapist - after analysing the acoustic properties of their conversation.

And there is much to ponder in Joanna Bryson's IPR blog Tomorrow comes today: How policymakers should approach AI. She says, for instance, that:

Dr Bryson's blog also says interesting things about regulation of the Technology Giants.

And we may never fully understand how particular AI systems learn and work. No-one in Google, for instance, can tell you exactly why AlphaGo made the moves that it did when it started beating the best Go players in the world.

The House of Lords published a detailed report in 2018, AI in the UK: ready, willing and able?, which included some interesting regulatory recommendations such as:

The Digital Poorhouse?

Increased unregulated use of AI may also have profound social consequences. Virginia Eubanks argues that 'We all live under this new regime of data analytics, but we don’t all experience it in the same way. Most people are targeted for digital scrutiny as members of social groups, not as individuals. People of color, migrants, stigmatized religious groups, sexual minorities, the poor, and other oppressed and exploited populations bear a much heavier burden of monitoring, tracking, and social sorting than advantaged groups.'

Her full Harper's article is here.

Data Protection

New EU rules - the GDPR - came into force in May 2018 but one wonders whether any regulations can adequately protect the interests of consumers faced with increasing monetisation of personal data. The following extracts from a letter to the FT summarised concerns very well:

... The consumer will never own the data or the algorithms. ... Every moment, your data relating to browsing, calling, online, social media, location tracking and so on is being churned through a multiverse of data warehouses. If you have been browsing about a certain medicine, correlating to a call to an oncologist and a search for a nearby pharmacy, this can consequently be packaged as a data intelligence report and sold to your medical insurance company. This is just one of the myriad ways monetisation is being unleashed on unsuspecting consumers across the world.

The data protection regulations, although a step in the right direction, are usually still heavily tilted in favour of the corporate giants and still focused more on cross-border transfers than on the real risks of monetisation. The fines imposed on the Silicon Valley giants are minuscule compared with the money they have made from data monetisation efforts. And this is all achieved in the age that is still a forerunner to the era of artificial intelligence and quantum computing.

The very concept of data privacy is archaic and academic. The tech giants are moving faster than this philosophical debate about data privacy. All the sound-bites from the tech giants are mere smoke and mirrors. Unless we revisit our concepts of what is data privacy for this new age of data monetisation, we will never really grapple with the real challenges and how to enforce meaningful regulation that really sets out to protect the consumer.

Syed Wajahat Ali

Cookies

Some of us, but far from all, know that our browsing history, saved in the form of cookies, can significantly affect the way in which companies deal with us. The most obvious example is the cookies saved by travel companies, which reveal whether we have previously enquired about a particular flight or holiday. If we have, then the company is less likely to offer us their best deal, believing that our repeated searches suggest that we are very likely to become a customer. (See for instance Travel site cookies milk you for dough in the Sunday Times - 24 June 2018.)

Does it matter? I would argue that this is no more than an automated example of the behaviour of any salesperson who has the ability to price a product (such as a car) according to their estimate of the customer's keenness to buy. But it's a bit less obvious - sneaky even.

Do We Need a New Watchdog?

Here in the UK, Doteveryone is leading some interesting thinking about the fast developing relationship between society and the internet. Their latest paper suggests the creation of a new body or bodies which would

And Sky's Chief Executive has called on Ministers to create a regulator "with sharp teeth" to rein in the Internet Giants and curb the "flow of hate, abuse and offensive, illegitimate and even dangerous content online". Sky would obviously gain commercial advantage (or rather lose commercial disadvantage) from such regulation, but this does not invalidate his plea.

Elsewhere ... It's early days as yet, but it will be interesting to see whether Australia's eSafety Commissioner is successful. The Office was established in 2015 with a mandate to coordinate and lead online safety efforts across government, industry and the not-for-profit community. It takes a particular interest in cyber-bullying:- "If you are under 18 we can help you make a complaint, find someone to talk to and provide advice and strategies for dealing with these issues".

The Technology Giants

Numerous issues associated with Google/YouTube, Facebook, Amazon, Airbnb, Uber etc. are explored here.

Gene Editing

CRISPR technology is now widely available. It takes just hours to tweak individual letters of genetic code, adjusting what evolution has fashioned over billions of years.

A Mississippi dog breeder has already been given permission to use gene editing to fix a mutation that makes Dalmatians prone to kidney disease. But future biohackers may have less acceptable objectives, including terrorism.

And some scientists are warning that CRISPR allows genetic constructions that can change the rules of inheritance in sexual reproduction, boosting the chances that a particular trait will pass from a parent to its offspring from 50-50 to nearly 100 percent. This could allow CRISPR to be used to control disease-carrying insects, invasive species and other problematic organisms. But what if the new organism escapes, and becomes invasive itself?

Self-Driving Vehicles

Lots of interesting issues here. Autonomous vehicles seem certain to be much safer (on average) than those controlled by humans. But their successful introduction requires advances on many fronts.

It has already taken a long time for the technology to become truly road-ready. I remember, in around 1993, being driven in a technology-demonstrating Jaguar on a fairly quiet French motorway. The car was able to stay in lane by tracking the two white lane markers, and could adjust its distance from the vehicle in front.

And Elliott Sclar argued, rather persuasively, that ...

All transport is interaction between moving vessel and travel medium. Land-based modes require major capital investments in suitable physical infrastructure and administrative co-ordination between vehicle and travel medium. To date it has been implicitly assumed that existing configurations of streets, roads, traffic signalling, sidewalks and so on remain about as they are; only the physical driver disappears. It is important to remember that the present configurations evolved assuming human-operated rolling stock. The effectiveness, efficiency and safety of the new mode will greatly depend upon the degree to which the infrastructure is modified to accommodate its technology. All the adjustment cannot and will not be done in the vehicle.

This seems likely to be particularly true in major British cities where cautious autonomous vehicles will find it very difficult to understand when it is safe to change lane (think Hyde Park Corner!) or slip out of a side road into a busy High Street.

And will we hold autonomous vehicles to higher standards? For instance:

The government announced in November 2017 that self-driving cars would be in use in the UK by 2021, and that insurers would be required to cover injuries to all parties whether or not a human driver had intervened before his or her vehicle was involved in a collision. This implies a fundamental shift in road traffic law away from personal liability to product liability. And which manufacturer will be held liable - that of the hardware (the car) or the software?

Follow this link to read about the psychology involved in our attitude to Risk and Regulation.

Bitcoin and other Crypto-Currencies

Decentralised digital currencies, which use blockchain technology, feel like they are only a small and attractive step from where we are now.

Apart from my share in our house and car, all my significant assets are represented by bits and bytes in the IT systems of various financial institutions. I trust those institutions, of course, partly because they are so heavily regulated, and backed by the Government in the form of the Financial Services Compensation Scheme. But then I think about the financial crisis, and the way in which the true value of my financial assets is affected by interest rates and inflation, over which I have zero control. I remember all too clearly how the value of my financial assets fell by nearly one-fifth following the Brexit referendum.

Crypto-currencies, too, are no more than bits and bytes, but they are registered in a peer-to-peer database that is controlled by no-one and with which no-one can meddle. That can't be bad. On the other hand, their value, too, currently fluctuates wildly in response to real world events such as Brexit.

The key difference, I guess, is that Bitcoin and the rest are truly international. Unlike Sterling, the Dollar or the Renminbi, they are not linked to any one country or influenced by any one government. If their use continues to grow, will governments seek to regulate them or their users? And could they succeed? Many say not.

Blockchain, by the way, is a harmless - and indeed valuable - technology. It is essentially a database which stores records of transactions in units called blocks. New blocks get added to the end of ever-growing chains of blocks, which are in turn copied to many computers on the internet. This elaborate, distributed process makes the blockchain tamper-proof - an unfalsifiable record of the transactions that made it.
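The description above can be sketched in a few lines of Python. This is a toy illustration only - real blockchains add proof-of-work, digital signatures and peer-to-peer replication - but it shows why tampering with one block invalidates everything that follows:

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's full contents (including the previous block's hash),
    # using a canonical serialisation so the hash is reproducible.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    # Each new block records the hash of the block before it.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def is_valid(chain):
    # Verify that every block still matches the hash its successor recorded.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
add_block(chain, ["Alice pays Bob 5"])
add_block(chain, ["Bob pays Carol 2"])
assert is_valid(chain)

# Quietly altering an early transaction breaks the chain of hashes.
chain[0]["transactions"] = ["Alice pays Bob 500"]
assert not is_valid(chain)
```

Copying the chain to many independent computers then makes the record effectively unfalsifiable: a forger would have to rewrite every later block on most of the copies at once.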

The Gig Economy inc. Uber

Information technology is facilitating new ways of ordering goods and services to be delivered to the door, including books (and much more) from Amazon, taxis (in particular from Uber), food etc. (from supermarkets), and meals (Deliveroo etc.). This is to be welcomed. (See for instance Robert Hahn and Robert Metcalfe's The Ridesharing Revolution.) But there are some unwelcome consequences.

First, the gig economy facilitates new and arguably onerous ways of employing those who prepare and deliver many goods and services. They can be required to enter into contracts under which

These arrangements can be tax efficient for both 'employer' and 'employee' and they suit many individuals very well. But they can also be exploitative, leaving workers without essential protections. It is far from clear that one-sided contracts are in the long term interests of the individuals or society. Numerous cases are working their way through the courts as lawyers seek to define the boundary between being a 'worker' and being truly self-employed.

The self-employed also pay much less by way of National Insurance Contributions (NICs) despite the fact that, since the introduction of the new state pension in 2016, they get pretty much the same state benefits as employed people. The government attempted to address this by increasing NICs in 2017 but this proved very unpopular and was abandoned. The proposal also appeared to pre-judge or forestall the recommendation of the Taylor Review of modern working practices published a few months later. (See also my web page commenting on weaknesses in HMRC.)

The gig economy can also be economically devastating for those previously sheltered from such competition. Here is an extract from a report in the New York Times.

For decades there had been no more than 12,000 to 13,000 taxis in New York but now there were myriad new ways to avoid public transportation, in some cases with ride-hailing services like Via that charged little more than $5 to travel in Manhattan. In 2013, there were [already] 47,000 for-hire vehicles in the city. Now [in 2018] there were more than 100,000, approximately two-thirds of them affiliated with Uber.

While Uber has sold that “disruption” as positive for riders, for many taxi workers, it has been devastating. Between 2013 and 2016, the gross annual bookings of full-time yellow-taxi drivers in New York, working during the day when fares are typically highest, fell from $88,000 a year to just over $69,000. Medallions, which grant the right to operate a taxi in New York City, were now depreciating assets and drivers who had borrowed money to pay for them, once a sound investment strategy, were deeply in debt. [NY Taxi Drivers representative] Ms. Desai was routinely seeing grown men cry and she had become increasingly concerned about the possibility that they would begin taking their lives.

There is a separate issue concerning the companies' willingness to adjust to local culture and regulation. The BBC commented in September 2017 that "Throughout its short, tempestuous life, Uber has clashed with regulators around the world - and more often than not it has come out on top. Its tactic has often been to arrive in a city, break a few rules, and then apologise when it's rapped over the knuckles. Some regulators have backed down, others have run the company out of town."

In the UK, Transport for London (TfL) have been prominent in attempts to regulate Uber in the same way as they regulate other taxi providers. The Court of Appeal, for instance, backed TfL's insistence that Uber provide a facility for passengers to talk to the company over the phone (or a voice call over the internet) if the passenger does not want to use Uber's app for this purpose. And, having been threatened with the removal of their licence, Uber eventually conceded a string of safety failings before magistrates gave it a 15-month probationary licence running from June 2018. The judge criticised the company's "gung-ho" attitude and its attempts to "whip up public outcry" by launching a public attack on TfL rather than accepting blame for its mistakes and misbehaviour.

Neuroscience

We have all grown up believing that, although our physical behaviour can easily be constrained and dominated by others, our minds, thoughts, beliefs and convictions are to a great extent beyond external constraint. As John Milton said "Thou canst not touch the freedom of my mind". But advances in neural engineering, brain imaging and neuro-technology mean that the mind may soon not be such an unassailable fortress. Elon Musk and others are developing tools such as

This suggests that we will, at the very least, require improvements to laws around data analysis and collection. But some scientists argue that human rights law will need to be updated to take into account the ability of governments not only to peer into people's minds but also alter them.

Protecting Key Infrastructure

Remember the stories about the Russian hacking of Western databases, and the Stuxnet attack on Iranian nuclear industry centrifuges? Much Western infrastructure is nowadays in private hands, so whose responsibility is it to defend it? Government is understandably reluctant to take on such a massive task, but industry is understandably unwilling to foot the bill. The answer, in the UK at least, is that the owners of designated Critical National Infrastructure have a legal duty to safeguard it, advised and monitored by the Centre for the Protection of National Infrastructure or the National Cyber Security Centre.

Sex Robots

The Foundation for Responsible Robotics published an interesting report Our Sexual Future with Robots in July 2017. The report discussed whether increasingly lifelike robots, such as Sophia, might:

The pace of change in this area certainly seems likely to require some form of regulatory response before too long.

Games & Gaming

Competitive computer games are big business nowadays. Valve Corporation, for instance, hosts an annual international DOTA tournament where professional teams compete for a prize pool which had reached $25m by 2017. Intriguingly, however, DOTA is free to play - the prize pool is instead funded through the purchase of 'compendiums' - self-updating, interactive booklets that track the tournament's statistics. Less obvious is that Valve creams off 75% of the purchase price before transferring the balance to the prize fund. Nice work if you can get it! But is it abusive? Or are customers deserving of greater protection?
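Taking the reported figures at face value (this is back-of-envelope arithmetic, not Valve's published accounts), a $25m prize pool funded by the 25% of compendium revenue that is passed on implies around $100m of gross sales:

```python
prize_pool = 25_000_000   # reported prize pool, USD
pool_share = 0.25         # fraction of compendium sales passed to the pool

gross_sales = prize_pool / pool_share   # implied total compendium revenue
valve_take = gross_sales - prize_pool   # the 75% retained by Valve

print(f"Implied compendium sales: ${gross_sales:,.0f}")  # $100,000,000
print(f"Retained by Valve:        ${valve_take:,.0f}")   # $75,000,000
```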

It is also interesting that many in-game purchases open 'loot boxes' whose contents are unpredictable. It's a relatively gentle form of gambling - but it has, as such, been banned in Belgium and the Netherlands. One (perhaps slightly exaggerated) comment explained players' reasoning as follows:- "99% of people who open these things are going for the rares; otherwise it'd be cheaper 90% of the time to just buy it off the market and not play the odds."
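The gambling comparison can be made concrete with a toy expected-value calculation. All the numbers below are invented for illustration - no real game's odds or prices are being quoted:

```python
# Hypothetical figures, chosen only to illustrate the expected-value logic.
box_price = 2.50          # cost of opening one loot box
rare_probability = 0.01   # assumed chance a box contains the sought-after rare
market_price = 40.00      # assumed price of the same item bought directly

# With a 1-in-100 chance per box, a player needs 100 boxes on average
# before the first rare drops (the mean of a geometric distribution).
expected_boxes = 1 / rare_probability
expected_spend = expected_boxes * box_price

print(expected_spend)   # 250.0 - far more than simply buying at market price
```

On these assumed numbers the average chaser of rares pays roughly six times the direct market price, which is exactly the dynamic the quoted comment describes.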

 

Martin Stanley