Simple Lessons from Complex Cases: Facebook

December 2020 may mark the beginning of a new era of technology business ethics: forty-eight attorneys general, together with the US Federal Trade Commission, filed antitrust lawsuits against Facebook.


This is substantial.


The focus of these lawsuits is Facebook’s tendency to shut out competitors by buying them out, and its deliberate degradation of its products away from their original privacy promise. It is hard to argue against the thesis that Facebook has been defiant and has refused to see the writing on the wall, seemingly blinded by the belief that breaking things now and apologising later is a sustainable business strategy.


While many were surprised by the sheer scale of these lawsuits, I was not.

Here is why.

Antitrust cases are handled rather differently in the U.S. than in the EU. In my book Decoding AI in Financial Services, in the chapter on the Governance of AI (page 146), I wrote in 2019:


"Referring to European antitrust officials who fined Google a record $2.7 billion in 2017 for “unfairly favoring some of its own services over those of rivals”and a further €1.5 billion in 2019, Gary Reback noted that the “EU remedy [fines] on Google search [NB algorithm] manipulation case came 10 years after Google started their conduct and after they wiped out all their competition. Most of their competitors were already damaged or out of business by then.” He proposes a more efficient way to handle antitrust cases: to put CEOs of companies through court trials, instead of having politicians reviewing their companies’ business practices. The latter are yet to prove their impact, as shown by the Facebook and Cambridge Analytica manipulation case where US politicians struggled to build a conclusive case on Facebook. Gary Reback also referenced the antitrust AT&T case. This is an important example as the AT&T monopoly breakup, paired with pricing policy controlled by the Federal Communications Commission and state regulations, was able to unleash massive innovation, and the development of the internet is mentioned as the most notable consequence."


The regulators and legislators appear focused on breaking up Facebook, and I am sure they will target other large technology companies such as Google and Amazon, on the belief that regulating them could actually promote innovation by inviting more competition. Opinions differ widely on whether breaking them up is the best way forward. The complex nexus lies in the fact that innovation is stifled when monopolies exist. This is the basis on which Slack filed an antitrust complaint against Microsoft, claiming that Microsoft bundles a significantly inferior product, similar to Slack’s, with its Microsoft suite of products, thus stifling other companies’ ability to grow and serve customers with a better product.


The flip side of regulation is the big question: “if we open the market to everyone, then duplication is likely to result. Will this mean that the quality of innovation will be stifled too?” In other words, will more regulation lead to lower-quality products? Time will tell whether this is the case with Facebook or Google. The only relevant proxy is the 1990s Microsoft antitrust case, and I am in the camp which thinks that better software was created in its aftermath.


What is Facebook accused of?


Dina Srinivasan, a legal scholar, comprehensively and eloquently explains in her paper “The Antitrust Case Against Facebook” just how Facebook deployed its tactics. The paper has become a catalyst for the thinking of regulators and legislators, and for the approach used to contextualise Facebook’s dominance. It articulates Facebook’s strategy to create a monopoly.


Aggressive M&A Strategy

Facebook’s vision was to hold a monopoly in social networking. They bought Instagram and WhatsApp to accelerate this and to preclude competitors from challenging that monopoly. In a 2008 email, Zuckerberg set out the direction of travel: “it is better to buy than to compete”.

Facebook also implemented a range of commercial war tactics to undermine competitors which it could not buy, but which were prepared to challenge it.


Misleading Users on Privacy

Facebook originally differentiated themselves from Myspace by placing a large emphasis on their privacy-centred offering. Naturally, users believed them and signed up. Facebook went as far as to offer guarantees on the integrity of their privacy policy. Once Facebook had established a monopoly, they deliberately began eroding that very promise of privacy by instituting intrusive surveillance of their users and, in some cases, altering users’ privacy settings to “fully public” without users’ consent. Facebook deliberately spied on their users while misleading them into believing that their privacy was protected. Users were trapped by their dependency on the social network and, as such, had nowhere else to go.


Blocking Users with High Switching Costs

Facebook then ramped up the exploitation of their captive audience by making it very difficult to leave Facebook. One such strategy was to make it tough for users to take their photos away. Facebook devised a high-level functionality layer which enabled users to edit, organise and share their photos. Should users want to move, they would not be able to take this functionality, or the comments on their photos, with them.


Drowning Users in Adverts

Moreover, Facebook started to subject their users to a deluge of adverts which users had to wade through before they could reach the content they were interested in. Facebook placed an overwhelming amount of adverts in an indiscriminate order, meaning that ads were positioned as they came. Advertisers started to take notice and complain that this positioning of their ads would hurt their profitability.

In summary, Facebook’s strategy was to lock in their users and make it impossible for them to leave by (1) embedding functionality which users cannot exit with and (2) blocking users from leaving by eliminating competitors, all while drowning users indiscriminately in adverts and invading their privacy and degrading their experience. The indirect result was to frustrate innovation in social media and to charge advertisers more for a reduced quality of service which the advertisers did not sign up to. Furthermore, advertisers allegedly noticed that Facebook provided success metrics in the form of analytics that no one could cross-check. Advertisers were not happy.


This is economic warfare, waged through multiple illegal and subversive tactics.


People have started to pay attention, and a growing chorus of claims has built up. These people made the case for the rule of law, which was the catalyst for this week’s flurry of lawsuits. It took them almost seven years to get here. A lot will likely change once these cases are concluded. There is also an antitrust case against Google in Europe, the development of which will also have a substantial impact on the world of AI ethics.


Technology companies are no longer the favourites in political circles, and this has changed the narrative. In 2016, Sheryl Sandberg, the COO of Facebook, was a favourite in Hillary Clinton’s circle and tipped to join the Clinton team had the Democrats won. In 2020, Joe Biden’s campaign was said to have specifically refused donations from Facebook-connected people. The same political party. A different narrative in the space of four years.


Three Simple Lessons


1. Privacy is sacred

It is interesting to note that Facebook correctly identified privacy as a strong differentiator with high pull. They got it right. Privacy is sacred. Once promised, it has to be protected and respected. Facebook did not keep that promise. In one single move, Facebook irretrievably lost the trust of their users. Trust is built in the smallest of moments, and it can be destroyed in seconds. The financial services industry is swimming in personal financial data. To repeat, privacy in our industry is sacred, and in many ways we take it for granted. We shouldn’t.


2. Don’t lock in your users just so you can abuse their trust

Locking in users is unlikely in retail banking, thanks to regulation. It is possible to execute a benevolent lock-in, whereby you build products so good that your users don’t want to leave. That’s called good business, and it is possible with various uses of AI, which I have explained on many occasions. However, locking in users just so you can abuse their trust and monetise their existence is the direct opposite of good business.


3. You’ll get caught eventually

In my work, I come across ‘smart people’, particularly in the start-up space, who are experts at cutting corners and quick to deceive others by misappropriating others’ work while portraying an image of skilled business professionals. In 2020 alone, I have come across several such people. In one case they attempted to oust me, a story similar to that of Anne Boden, the CEO of Starling Bank (I might tell these stories one day, just as she did).


These ‘smart’ people think they are above society and able to outsmart everyone. Worryingly, they build technology for the financial services industry to use. Their lack of morals will inevitably be reflected in how they run their companies and will transfer into the code of the algorithms they build.


However, the invisible hand of business karma and the very visible hand of law enforcement will eventually catch up with them. It may take several years, as in Facebook’s case, but it will ultimately happen.

No one should build a business on the assumption that they’ll outsmart the law and ethics. Facebook tried. They have been successful and have amassed a monumental fortune. Time will tell whether their fortune and name survive as the regulators work hard to right the many Facebook wrongs.


Copyright Clara Durodié, 2020. All rights reserved.