Martin Lewis, the consumer advice and money-saving expert, is suing Facebook in a case that threatens the dominant business model of publishing on the internet. It raises, in a very sharp form, the question of responsibility for what appears on a user’s screen: is the owner of the site responsible for the content that appears there, even though no human eye may ever have seen it? Facebook, like virtually every other ad-supported business on the internet, maintains that it is a platform, not a publisher, and that its responsibility extends only to content it knows about. Is that enough? Should these companies also be responsible for content they might reasonably anticipate?
Facebook’s defence is that it has taken down individual adverts as they are reported; Lewis counters that they are soon, and predictably, replaced with almost identical ones. It does seem odd that Facebook, which is extremely keen on facial recognition and can label the people in its users’ photo feeds with sometimes disconcerting accuracy, is apparently unable to recognise the face of a television personality who could hardly be more recognisable, or to kill automatically any ad in which that face appears. In a similar way, YouTube, owned by Google, is far more successful at keeping pornography off the site than it is at keeping off incitements to hatred or bullying. All that really frightens these companies is the thought of driving advertisers away.
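To see how modest the technical obstacle is, consider a minimal sketch of the kind of automated check a platform could run on submitted ad creatives, here using the open-source face_recognition library. The file names, threshold, and review workflow are illustrative assumptions, not anything Facebook has disclosed about its systems.

```python
# Illustrative sketch only: flags ad images containing a known face
# using the open-source face_recognition library. File paths and the
# tolerance value are assumptions for the example.
import face_recognition

# Encode the known face once, from a verified reference photograph.
known_image = face_recognition.load_image_file("martin_lewis_reference.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

def ad_contains_known_face(ad_image_path: str, tolerance: float = 0.6) -> bool:
    """Return True if any face in the ad image matches the known face."""
    ad_image = face_recognition.load_image_file(ad_image_path)
    for encoding in face_recognition.face_encodings(ad_image):
        is_match = face_recognition.compare_faces(
            [known_encoding], encoding, tolerance=tolerance
        )[0]
        if is_match:
            return True
    return False

# A submitted ad could then be held automatically rather than waiting
# for a user report:
if ad_contains_known_face("submitted_ad_creative.jpg"):
    print("Hold for review: creative contains a protected face")
```

At platform scale the engineering is harder than this toy suggests, but the underlying capability is commodity software.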
There is a technical defence in these cases: the advertising business on the web is almost entirely automated, and human judgment appears to play no part at all. Every time you load a page on an ad-supported site, an invisible auction is conducted in milliseconds between competing programs to sell the advertising slots on it, based on everything that is known about you from across the internet and indeed elsewhere.
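Real-time bidding exchanges have historically run these as sealed-bid second-price auctions, in which the highest bidder wins the slot but pays the second-highest bid (much of the industry has since moved to first-price rules). A sketch of the second-price mechanism, with invented bidders and numbers:

```python
# Minimal sketch of a sealed-bid second-price auction, the mechanism
# classically used in real-time bidding. All names and bids are invented.
def run_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Return (winning bidder, clearing price). The winner pays the
    second-highest bid, the hallmark of a second-price auction."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    clearing_price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, clearing_price

# Hypothetical bids (dollars per thousand impressions) from competing
# demand-side platforms, each bidding on what they know about you:
bids = {"dsp_a": 4.20, "dsp_b": 3.85, "dsp_c": 2.10}
winner, price = run_auction(bids)
print(f"{winner} wins the impression and pays ${price:.2f} CPM")
# -> dsp_a wins the impression and pays $3.85 CPM
```

The entire exchange, from page load to winning ad, completes before the page finishes rendering, which is why no human judgment can intervene.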
That is why the harvesting of enormous quantities of data is so important to all these companies: not just for the personal information that you know you are giving up, but for the further information that can be extracted by looking for patterns across tens of millions of other users. This means, in theory, that advertisers can search across the whole of the internet for the cheapest place to reach the audience they have in mind, while publishers can deliver the most exquisitely segmented audiences.
In practice it works less smoothly. Hundreds of companies are involved in an ecosystem whose details are almost impossible to grasp, even for those involved. At every step money is shaved off, so that as much as 70% of an advertiser’s budget goes to people other than the publishers on whose site the ad finally appears. Procter & Gamble recently knocked $100m off its online advertising budget without seeing any loss of sales. Quite often, ads are displayed to no human at all, solely for the delectation of robots.
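To see how a figure like 70% can arise, consider a toy chain of intermediaries, each taking a cut before passing the money on. The intermediaries and their percentages here are invented for illustration; only the headline figure comes from the reporting above.

```python
# Toy illustration of how intermediary fees compound along the ad
# supply chain. The fee percentages are invented for the example.
budget = 1.00  # one advertiser dollar
fees = {
    "agency": 0.15,
    "demand-side platform": 0.20,
    "ad exchange": 0.25,
    "data and verification vendors": 0.20,
    "supply-side platform": 0.20,
}
remaining = budget
for intermediary, cut in fees.items():
    remaining *= (1 - cut)
print(f"Publisher receives ${remaining:.2f} of each advertiser dollar")
# -> Publisher receives $0.33 of each advertiser dollar
```

Each individual cut looks defensible; compounded, roughly two-thirds of the budget has evaporated before the publisher is paid, which is how losses on the scale reported can accumulate without any single party appearing greedy.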
The Guardian, too, is enmeshed in this system, just as every other newspaper is. Nonetheless, it’s clear that the complexity of the technologies involved can’t be used to remove all human or corporate responsibility. The participants in an ad-funded online world are to some extent publishers as well as platforms. The software they use did not write itself. Algorithms are not acts of God or nature. They are the product of human ingenuity and, as such, there have to be humans or corporations held responsible for their actions. This principle is obvious when it comes to the software that drives cars. Why should the software that drives advertising be any different?