Regulating the Technology Giants

Companies such as Google/YouTube, Facebook, Airbnb and Uber benefit from strong network effects - the phenomenon through which their services become increasingly attractive as more people use them, to the extent that it becomes near impossible for any other company to compete - or for governments to challenge them. They inevitably come under less pressure to remain efficient, to innovate, or to be good corporate citizens.

They also share the American 'see you in court' approach to regulation. Regulation, they believe, is for 'the guidance of wise men and the observance of fools'. Uber, for instance, is said to have 'tended to barrel into new markets by flouting local laws, part of a combative approach to expand globally'. According to the New York Times, Uber developed an app which ensured that city officials were not able to book a car and so scrutinise the service. The Times called them the 'Predators of the internet who do as they please'. Facebook certainly seemed unperturbed when it was fined €110m (£94m) by the EU for providing misleading information about its 2014 takeover of WhatsApp.

The companies' huge size and international reach certainly make it near impossible for any individual regulator to tackle them with any prospect of success, partly because their businesses are so complex and partly because their resources enable them to out-gun all but the most persistent and well-funded regulators.

There is a related issue in that Google/YouTube, Facebook, Twitter etc. claim to be mere platforms, passively hosting content that they are unwilling to assess. In practice, their algorithms to some extent choose what their readers see, and the companies are financed by advertising, much like traditional media companies. They distribute fake news and other attention-grabbing content, regardless of its quality, veracity or decency, including material which encourages terrorism. It is well established that detailed guides showing how to make nail bombs and ricin poison are freely available on Facebook and YouTube. The companies claim that it would be too technically complex to tackle the problem, but this is unconvincing: they have shown themselves to be adept at addressing copyright violation when it suits them. And there is no obvious reason why they should be exempt from the 'free speech' limitations that apply to the rest of us, and to other media companies.

But they can and do deploy the strong argument that any restriction of their behaviour threatens their customers' right to free speech.

The FT carried this interesting report in April 2017:

Facebook blamed human error for its failure to remove dozens of images and videos depicting child pornography after they were flagged to the company. The Times newspaper reported that it had alerted the social media giant, using a dummy Facebook profile, to potentially illegal content that was posted on to its website by users, including images of an allegedly violent sexual assault on a child and cartoons of child abuse. The British newspaper accused Facebook of failing to remove many of the images but the company said this was because of human oversight and that its reviewers should have spotted that the content should be removed. ... “We are always looking for other ways to use automation to make our work easy, but ultimately content review is manual,” said Monika Bickert, head of global policy management at Facebook, in a past interview with the Financial Times.

This is a clear admission of failure of quality control. I have no doubt that most media organisations would have run into severe trouble - and probably been prosecuted - if they had behaved like Facebook. But I am not aware that any formal action has yet been taken against the company.

Later that month, a man killed his daughter and then himself whilst live-streaming his actions on Facebook Live. Again, the company's attitude seemed to be that its customers' wish to view live video outweighed any moral or other argument that such content should be pre-approved. Presumably nothing will change unless and until someone prominent is killed by someone seeking Facebook fame.

The company had previously streamed a video which showed a man choosing 74-year-old Robert Godwin Snr at random and then shooting him.

Evan Williams - a Twitter founder and co-creator of Blogger - was reported as follows in 2017:

“I think the internet is broken ... And it’s a lot more obvious to a lot of people that it’s broken.” People are using Facebook to showcase suicides, beatings and murder, in real time. Twitter is a hive of trolling and abuse that it seems unable to stop. Fake news, whether created for ideology or profit, runs rampant. Four out of 10 adult internet users said in a Pew survey that they had been harassed online. “I thought once everybody could speak freely and exchange information and ideas, the world is automatically going to be a better place,” Mr. Williams says. “I was wrong about that.”

It must be irritating (to use a mild word) for the established media to watch Facebook and others publish material which would see any other publisher brought low. Imagine what would happen if the BBC decided to allow the public to broadcast murders and suicides live on one of its channels!

Optimists suggest that the companies' consumers may become a sort of regulator if they begin to desert the platforms in protest against their content or behaviour. But there is little sign of this happening, and many users are in effect locked into the products as a result of the strong network effects mentioned above.

Maybe the EU will collectively be strong enough to take the companies on, encouraged by Germany, which has a tradition of strong form-based regulation. A start has been made by the EU's Competition Commissioner, who has fined Google for abusing its dominant position in 'search'.

Artificial Intelligence may one day be the answer, but it cannot at present distinguish between the video of the murder of Robert Godwin and that of a fatal shooting by a police officer - which surely should be made available to the public.

There is also the danger that algorithmic news undermines democracy, as 1.2 billion daily Facebook users, for instance, mainly listen to louder echoes of their own voices - the so-called filter bubble. But Facebook’s relatively modest efforts to curb misinformation have been met with fury on the right, with Breitbart and The Daily Caller fuming that Facebook had teamed up with liberal hacks motivated by partisanship. If Facebook were to take more significant action, such as hiring human editors, creating a reputation system or paying journalists, the company would instantly become something it has long resisted becoming: a media company rather than a neutral tech platform. Facebook’s personalisation of its news feeds would perhaps be less of an issue if it were not crowding out every other source - but, as it is, it is at least clear that Facebook must be responsible for finding solutions to its problems.

The Home Affairs Select Committee published a report on Hate Crime in early 2017. The Committee strongly criticised social media companies for failing to take down illegal content, and for failing to take the problem sufficiently seriously - saying they were "shamefully far" from taking sufficient action to tackle hate and dangerous content on their sites. The Committee recommended that the Government should assess whether failure to remove illegal material is in itself a crime and, if not, how the law should be strengthened. It also recommended that the Government should consult on a system of escalating sanctions, to include meaningful fines for social media companies which fail to remove illegal content within a strict time frame.

Note: The 'Big Four' technology giants - Amazon, Facebook, Alphabet (i.e. Google & YouTube) and Netflix - were valued on Wall Street at around $1.5 trillion in May 2017.

Martin Stanley