My Introduction to Regulation summarises the factors that have driven much of the explosive growth in regulation since the 1980s. It now looks as though the world may be entering a further Industrial Revolution in which the physical, biological and digital worlds are coming together in the form of new technologies such as machine learning, big data, robotics and gene editing. This web page contains some initial notes on the consequential regulatory issues that are beginning to attract attention.
One preliminary comment: it is vital that regulatory frameworks are pro-competitive, so that innovators can test their ideas. It is a mistake to have a regulatory framework which requires every innovation to be challenged before it can be put into practice. But it is equally important that potentially dangerous technologies are properly evaluated before deployment.
(There was an interesting Policy Exchange roundtable in July 2017 which considered how regulation might keep up with disruptive innovation. Follow this link to download a note of the discussion.)
Please send me further information and articles etc. which might help other readers keep track of interesting regulatory developments in these or other areas. If and when the notes get too long, I will create separate pages to carry the detail. This has happened already for the discussion of Algorithms and Artificial Intelligence, and the regulation of the Technology Giants such as Google and Facebook.
Algorithms & AI
There is a bit of a theme running through many of the issues listed on this web page. Modern technologies, including Artificial Intelligence (AI) and algorithms, cut costs and facilitate activities which would otherwise be impossible. But they remove human involvement from the decision-making. These issues are explored here.
The Digital Poorhouse?
Increased unregulated use of AI may also have profound social consequences. Virginia Eubanks argues that 'We all live under this new regime of data analytics, but we don’t all experience it in the same way. Most people are targeted for digital scrutiny as members of social groups, not as individuals. People of color, migrants, stigmatized religious groups, sexual minorities, the poor, and other oppressed and exploited populations bear a much heavier burden of monitoring, tracking, and social sorting than advantaged groups.'
Her full Harper's article is here.
New EU rules - the GDPR - came into force in May 2018 but one wonders whether any regulations can adequately protect the interests of consumers faced with increasing monetisation of personal data. The following extracts from a letter to the FT summarised concerns very well:
... The consumer will never own the data or the algorithms. ... Every moment, your data relating to browsing, calling, online, social media, location tracking and so on is being churned through a multiverse of data warehouses. If you have been browsing about a certain medicine, correlating to a call to an oncologist and a search for a nearby pharmacy, this can consequently be packaged as a data intelligence report and sold to your medical insurance company. This is just one of the myriad ways monetisation is being unleashed on unsuspecting consumers across the world.
The data protection regulations, although a step in the right direction, are usually still heavily tilted in favour of the corporate giants, and more focused on cross-border transfers than on the real risks of monetisation. The fines imposed on the Silicon Valley giants are minuscule compared with the money they have made from data monetisation efforts. And this is all achieved in the age that is still a forerunner to the era of artificial intelligence and quantum computing.
The very concept of data privacy is archaic and academic. The tech giants are moving faster than this philosophical debate about data privacy. All the sound-bites from the tech giants are mere smoke and mirrors. Unless we revisit our concepts of what is data privacy for this new age of data monetisation, we will never really grapple with the real challenges and how to enforce meaningful regulation that really sets out to protect the consumer.
Syed Wajahat Ali
Some of us, but far from all, know that our browsing history, saved in the form of cookies, can significantly affect the way in which companies deal with us. The most obvious examples are the cookies saved by travel companies, which reveal whether we have previously enquired about a particular flight or holiday. If we have, then the company is less likely to offer us its best deal, believing that our repeated searches suggest that we are very likely to become a customer. (See for instance Travel site cookies milk you for dough in the Sunday Times - 24 June 2018.)
Does it matter? I would argue that this is no more than an automated example of the behaviour of any salesperson who has the ability to price a product (such as a car) according to their estimate of the customer's keenness to buy. But it's a bit less obvious - sneaky even.
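The mechanism is simple enough to sketch. The snippet below is a hypothetical illustration of my own - not any travel company's actual code - in which the site counts a visitor's searches in a cookie and withholds its teaser discount once the visitor looks like a committed buyer:

```python
def quote_price(base_price, search_count):
    # Hypothetical pricing rule: first-time visitors get a teaser
    # discount; repeat searchers, judged likely to buy anyway, see
    # the full price.
    if search_count == 0:
        return round(base_price * 0.90, 2)  # 10% "new visitor" discount
    return base_price

# The search count would typically be read from, and written back to,
# a cookie such as "searches_flight_LHR_JFK" (an invented name).
assert quote_price(200.0, 0) == 180.0  # first search: discounted
assert quote_price(200.0, 3) == 200.0  # repeat searcher: full price
```

The numbers and cookie name are invented; the point is only that a few lines of server-side logic suffice to price-discriminate on browsing history.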
Do We Need a New Watchdog?
Here in the UK, Doteveryone is leading some interesting thinking about the fast developing relationship between society and the internet. Their latest paper suggests the creation of a new body or bodies which would
- encourage collaboration between existing regulators in this area, and strengthen regulators' capabilities,
- engage in horizon scanning and foresight activities,
- develop a long term vision for the sector, and
- advise and empower the public.
And Sky's Chief Executive has called on Ministers to create a regulator "with sharp teeth" to rein in the Internet Giants and curb the "flow of hate, abuse and offensive, illegitimate and even dangerous content online". Sky would obviously gain commercial advantage (or rather lose commercial disadvantage) from such regulation, but this does not invalidate his plea.
It was interesting, therefore, that there were reports in September 2018 that the government was developing plans for an internet regulator responsible for policing 'online social harm', probably focussing on penalising companies if they fail quickly to remove certain content which has been reported to them, possibly copying parts of German legislation.
Elsewhere ... It's early days as yet, but it will be interesting to see whether Australia's eSafety Commissioner is successful. The Office was established in 2015 with a mandate to coordinate and lead online safety efforts across government, industry and the not-for-profit community. It takes a particular interest in cyber-bullying: "If you are under 18 we can help you make a complaint, find someone to talk to and provide advice and strategies for dealing with these issues".
The Technology Giants
Numerous issues associated with Google/YouTube, Facebook, Amazon, Airbnb, Uber etc. are explored here.
Gene Editing
CRISPR technology is now widely available. Tweaking individual letters of genetic code, it takes just hours to adjust what evolution has fashioned over billions of years.
A Mississippi dog breeder has already been given permission to use gene editing to fix a mutation that makes Dalmatians prone to kidney disease. But future biohackers may have less acceptable objectives, including terrorism.
And some scientists are warning that CRISPR allows genetic constructions that can change the rules of inheritance in sexual reproduction, boosting the chances that a particular trait will pass from a parent to its offspring from 50-50 to nearly 100 percent. This could allow CRISPR to be used to control disease-carrying insects, invasive species and other problematic organisms. But what if the new organism escapes, and becomes invasive itself?
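The arithmetic behind that warning can be illustrated with a toy simulation - my own sketch with invented numbers, not a biological model. A gene drive "copies itself" onto the other chromosome with some efficiency, lifting the chance that a carrier parent passes on the trait from 50% towards 100%:

```python
import random

def offspring_inherits(drive_efficiency):
    # One child of a single carrier parent. With an ordinary gene the
    # child inherits the trait 50% of the time; a gene drive first tries
    # to convert the other allele with probability drive_efficiency,
    # pushing transmission towards 100%.
    if random.random() < drive_efficiency:
        return True                   # drive converted the other allele
    return random.random() < 0.5      # ordinary 50-50 Mendelian coin flip

def transmission_rate(drive_efficiency, trials=100_000):
    # Estimate the fraction of offspring carrying the trait
    return sum(offspring_inherits(drive_efficiency)
               for _ in range(trials)) / trials

# Ordinary inheritance sits near 50%; a 95%-efficient drive near 97%
assert 0.48 < transmission_rate(0.0) < 0.52
assert 0.95 < transmission_rate(0.95) < 1.0
```

The 95% efficiency figure is illustrative only; the point is how quickly even an imperfect drive would sweep a trait through a population.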
Autonomous Vehicles
Lots of interesting issues here. Autonomous vehicles seem certain to be much safer (on average) than those controlled by humans. But their successful introduction requires advances on many fronts.
It has already taken a long time for the technology to become truly road-ready. I remember, in around 1993, being driven in a technology-demonstrating Jaguar on a fairly quiet French motorway. The car was able to stay in lane by tracking the two white lane markers, and could adjust its distance from the vehicle in front.
And Elliott Sclar argued, rather persuasively, that ...
All transport is interaction between moving vessel and travel medium. Land-based modes require major capital investments in suitable physical infrastructure and administrative co-ordination between vehicle and travel medium. To date it has been implicitly assumed that existing configurations of streets, roads, traffic signalling, sidewalks and so on remain about as they are; only the physical driver disappears. It is important to remember that the present configurations evolved assuming human-operated rolling stock. The effectiveness, efficiency and safety of the new mode will greatly depend upon the degree to which the infrastructure is modified to accommodate its technology. All the adjustment cannot and will not be done in the vehicle.
This seems likely to be particularly true in major British cities where cautious autonomous vehicles will find it very difficult to understand when it is safe to change lane (think Hyde Park Corner!) or slip out of a side road into a busy High Street.
And will we hold autonomous vehicles to higher standards? For instance:
- Who will be blamed if a half-asleep driver fails to intervene to avoid a collision caused by a mistake made by another driver?
- Should the algorithms provide that the vehicle's driver be sacrificed if that is necessary to avoid killing, say, several pedestrians? Or to avoid killing just one pedestrian, who was jaywalking, but was only five years old?
- It has even been suggested that driverless cars should be fitted with a dashboard dial allowing you to choose whether the lives of the car's occupants should always outweigh those of others, or whether the car should always sacrifice its passengers.
The government announced in November 2017 that self-driving cars would be in use in the UK by 2021, and that insurers would be required to cover injuries to all parties whether or not a human driver had intervened before his or her vehicle was involved in a collision. This implies a fundamental shift in road traffic law away from personal liability to product liability. And which manufacturer will be held liable - that of the hardware (the car) or the software?
Follow this link to read about the psychology involved in our attitude to Risk and Regulation.
Bitcoin and other Crypto-Currencies
Decentralised digital currencies, which use blockchain technology, feel like they are only a small and attractive step from where we are now.
Apart from my share in our house and car, all my significant assets are represented by bits and bytes in the IT systems of various financial institutions. I trust those institutions, of course, partly because they are so heavily regulated, and backed by the Government in the form of the Financial Services Compensation Scheme. But then I think about the financial crisis, and the way in which the true value of my financial assets is affected by interest rates and inflation, over which I have zero control. I remember all too clearly how the value of my financial assets fell by nearly one-fifth following the Brexit referendum.
Crypto-currencies, too, are no more than bits and bytes, but they are registered in a peer-to-peer database that is controlled by no-one and with which no-one can meddle. That can't be bad. On the other hand, their value, too, currently fluctuates wildly in response to real world events such as Brexit.
The key difference, I guess, is that Bitcoin and the rest are truly international. Unlike Sterling, the Dollar or the Renminbi, they are not linked to any one country or influenced by any one government. If their use continues to grow, will governments seek to regulate them or their users? And could they succeed? Many say not.
Blockchain, by the way, is a harmless - and indeed valuable - technology. It is essentially a database which stores records of transactions in units called blocks. New blocks get added to the end of ever-growing chains of blocks, which are in turn copied to many computers on the internet. This elaborate, distributed process makes the blockchain tamper-proof - an unfalsifiable record of the transactions that made it.
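That description can be turned into a toy illustration in a few lines of Python - a sketch of the hash-chaining idea only, not a real cryptocurrency. Each block stores the hash of its predecessor, so tampering with any earlier block invalidates every later link:

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's entire contents (including the previous hash)
    return hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    # Each new block records the hash of the block before it
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"transactions": transactions, "prev_hash": prev_hash})

def is_valid(chain):
    # Valid only if every stored prev_hash matches the actual hash
    # of the preceding block
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, ["Alice pays Bob 5"])
add_block(chain, ["Bob pays Carol 2"])
assert is_valid(chain)

# Tampering with an early block breaks every later link
chain[0]["transactions"] = ["Alice pays Bob 500"]
assert not is_valid(chain)
```

A real blockchain adds proof-of-work and wide replication of the chain across many computers, which is what makes rewriting history impractical rather than merely detectable.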
The Gig Economy inc. Uber
Information technology is facilitating new ways of ordering goods and services to be delivered to the door, including books (and much more) from Amazon, taxis (in particular from Uber), food etc. (from supermarkets), and meals (Deliveroo etc.). This is to be welcomed. (See for instance Robert Hahn and Robert Metcalfe's The Ridesharing Revolution.) But there are some unwelcome consequences.
First, the gig economy facilitates new and arguably onerous ways of employing those who prepare and deliver many goods and services. They can be required to enter into contracts under which
- they must purchase their own vans and equipment (although the vehicles etc. are provided or specified by the 'employer'),
- they have no guaranteed work (zero hours contracts),
- they are not paid, and are sometimes responsible for providing cover, when sick or on holiday.
These arrangements can be tax efficient for both 'employer' and 'employee' and they suit many individuals very well. But they can also be exploitative, leaving workers without essential protections. It is far from clear that one-sided contracts are in the long term interests of the individuals or society. Numerous cases are working their way through the courts as lawyers seek to define the boundary between being a 'worker' and being truly self-employed.
The self-employed also pay much less by way of National Insurance Contributions (NICs) despite the fact that, since the introduction of the new state pension in 2016, they get pretty much the same state benefits as employed people. The government attempted to address this by increasing NICs in 2017 but this proved very unpopular and was abandoned. The proposal also appeared to pre-judge or forestall the recommendation of the Taylor Review of modern working practices published a few months later. (See also my web page commenting on weaknesses in HMRC.)
The gig economy can also be economically devastating for those previously sheltered from such competition. Here is an extract from a report in the New York Times.
For decades there had been no more than 12,000 to 13,000 taxis in New York but now there were myriad new ways to avoid public transportation, in some cases with ride-hailing services like Via that charged little more than $5 to travel in Manhattan. In 2013, there were [already] 47,000 for-hire vehicles in the city. Now [in 2018] there were more than 100,000, approximately two-thirds of them affiliated with Uber.
While Uber has sold that “disruption” as positive for riders, for many taxi workers, it has been devastating. Between 2013 and 2016, the gross annual bookings of full-time yellow-taxi drivers in New York, working during the day when fares are typically highest, fell from $88,000 a year to just over $69,000. Medallions, which grant the right to operate a taxi in New York City, were now depreciating assets and drivers who had borrowed money to pay for them, once a sound investment strategy, were deeply in debt. [NY Taxi Drivers representative] Ms. Desai was routinely seeing grown men cry and she had become increasingly concerned about the possibility that they would begin taking their lives.
There is a separate issue concerning the companies' willingness to adjust to local culture and regulation. The BBC commented in September 2017 that "Throughout its short, tempestuous life, Uber has clashed with regulators around the world - and more often than not it has come out on top. Its tactic has often been to arrive in a city, break a few rules, and then apologise when it's rapped over the knuckles. Some regulators have backed down, others have run the company out of town."
In the UK, Transport for London (TfL) have been prominent in attempts to regulate Uber in the same way as they regulate other taxi providers. The Court of Appeal, for instance, backed TfL's insistence that Uber provide a facility for passengers to talk to the company over the phone (or via a voice call over the internet) if the passenger does not want to use Uber's app for this purpose. And, having been threatened with the removal of its licence, Uber eventually conceded a string of safety failings before magistrates gave it a 15-month probationary licence running from June 2018. The judge criticised the company's "gung ho" attitude and its attempts to "whip up public outcry" by launching a public attack on TfL rather than accepting blame for its mistakes and misbehaviour.
Neuro-Technology
We have all grown up believing that, although our physical behaviour can easily be constrained and dominated by others, our minds, thoughts, beliefs and convictions are to a great extent beyond external constraint. As John Milton said, "Thou canst not touch the freedom of my mind". But advances in neural engineering, brain imaging and neuro-technology mean that the mind may soon not be such an unassailable fortress. Elon Musk and others are developing tools such as
- brain computer interfaces,
- lie detectors that use brain scanning to achieve very high success rates,
- brain scans that predict recidivism rates for offenders, and
- ways of altering memories.
This suggests that we will, at the very least, require improvements to laws around data analysis and collection. But some scientists argue that human rights law will need to be updated to take into account the ability of governments not only to peer into people's minds but also to alter them.
Protecting Key Infrastructure
Remember the stories about the Russian hacking of Western databases, and the Stuxnet attack on Iranian nuclear industry centrifuges? Much Western infrastructure is nowadays in private hands, so whose responsibility is it to defend it? Government is understandably reluctant to take on such a massive task, but industry is understandably unwilling to foot the bill. The answer, in the UK at least, is that the owners of designated Critical National Infrastructure have a legal duty to safeguard it, advised and monitored by the Centre for the Protection of National Infrastructure or the National Cyber Security Centre.
Sex Robots
The Foundation for Responsible Robotics published an interesting report, Our Sexual Future with Robots, in July 2017. The report discussed whether increasingly lifelike robots, such as Sophia, might:
- negatively impact on societal attitudes to women and their body image as well as further objectify and commodify the female body,
- encourage social isolation,
- help reduce sex crime, and
- (by allowing people to live out their darkest fantasies) have a pernicious effect on society and create more danger for the vulnerable. There is, for instance, a lack of clarity about the law regarding sex robots that look like children. (Child-like sex dolls are illegal in the UK.)
The pace of change in this area certainly seems likely to require some form of regulatory response before too long.
Games & Gaming
Competitive computer games are big business nowadays. Valve Corporation, for instance, hosts an annual international DOTA tournament where professional teams compete for a prize pool which had reached $25m by 2017. Intriguingly, DOTA is free to play: the prize pool is funded through the purchase of 'compendiums' - self-updating, interactive booklets that track the tournament's statistics. What is less obvious, however, is that Valve creams off 75% of the purchase price before transferring the balance to the prize fund. Nice work if you can get it! But is it abusive? Or are customers deserving of greater protection?
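On the figures quoted, the split is easy to sketch. This uses only the 75% share mentioned above; the $10 compendium price is an invented example:

```python
def prize_pool_contribution(purchase_price, valve_share=0.75):
    # Valve retains valve_share of each compendium purchase;
    # only the remainder reaches the prize pool
    return purchase_price * (1 - valve_share)

# A hypothetical $10 compendium adds $2.50 to the pool - so a $25m
# prize pool implies roughly $100m of compendium sales.
assert prize_pool_contribution(10.0) == 2.5
```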
It is also interesting that many in-game purchases open 'loot boxes' whose contents are unpredictable. It's a relatively gentle form of gambling - but it has, as such, been banned in Belgium and the Netherlands. One (perhaps slightly exaggerated) comment explained players' reasoning as follows: "99% of people who open these things are going for the rares; otherwise it'd be cheaper 90% of the time to just buy it off the market and not play the odds."
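That comment is really a claim about expected value, which can be sketched with invented numbers (the drop rate and prices below are hypothetical, not any game's actual figures). Opening boxes until the first 'rare' appears follows a geometric distribution, so the expected spend is the box price divided by the drop rate:

```python
def expected_boxes_to_get_rare(drop_rate):
    # Opening boxes until the first rare drops is a geometric
    # distribution: the expected number of attempts is 1 / drop_rate
    return 1 / drop_rate

def expected_spend(drop_rate, box_price):
    return expected_boxes_to_get_rare(drop_rate) * box_price

# Hypothetical figures: a rare with a 1% drop rate, from $2.50 boxes,
# costs $250 on average to "win" - against, say, a $40 market price.
assert expected_boxes_to_get_rare(0.01) == 100
assert expected_spend(0.01, 2.50) == 250.0
```

Which is exactly why, on these sorts of numbers, buying "off the market" is cheaper for almost everyone who plays the odds.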