Facebook’s business model is paying for toxic information

As important as the revelations in last week’s New York Times’s Facebook bombshell are, they’re more meaningful, and less surprising, when taken in context. Facebook’s move-fast-and-break-things management style, and its comfort level with various shades of subterfuge, go way back in the history of the company. Facebook’s whole business model, after all, rests on a con game: Facebook provides a “free” social network, and says as little as possible about the personal data it harvests from users as payment. That data is necessary to power Facebook’s enormously lucrative advertising machine. In classic corporate double-think, the company’s executives see this as a “virtuous cycle.”

When you hold these facts in mind, the decisions, events, actions, and inactions described in the Times piece make a lot more sense. In the wake of the realization that Russian trolls had turned Facebook into a vast disinformation platform, Facebook’s executives, we learn, were far more interested in “managing” the reaction to that bad news than in quickly stopping the infestation and informing all stakeholders.

According to the story, Facebook’s COO Sheryl Sandberg and CEO Mark Zuckerberg ignored warnings about the Russian troll invasion and then ordered the suppression of details about it. The two remained unaware of the Russian interference for as much as a year after their own security people detected evidence of it. When security chief Alex Stamos finally got the attention of the executives, Sandberg’s response was to scold him for looking into it without permission (she said it left the company legally exposed).

We also learned that Zuckerberg was absent from (or distracted from) meetings where key decisions were made on how to contain (rather than communicate) the true extent of the Russian infestation and its effects.

On numerous occasions over the last year and a half, Zuckerberg and Sandberg have contritely said that Facebook was “slow to get ahead of” the Russian hijacking of its network. That in itself suggests either dishonesty or negligence. When Facebook first found evidence of coordinated Russian meddling in spring 2016, the company had “no policy on disinformation or any resources dedicated to searching for it,” the New York Times reports. But the Russians had used Facebook in this way before. Facebook knew that the Russian government had run a disinformation campaign on its social network to undermine Ukrainian president Petro Poroshenko in spring 2015.

For most of its history, Facebook has employed skillful public relations strategists to “manage” news of controversial changes to the social network. New features have occasionally benefited users as well as advertisers, but not usually, and the advertising business always seems to come first. Facebook often has responded to user backlash by apologizing and making small concessions, but rarely by totally reversing a decision. Only during the past year, under pressure from regulators, has Facebook begun paring back some of its finer audience targeting capabilities.

Nor should it be surprising that Facebook’s executives have become more aggressive with these tactics since 2016.

More than any other single thing, the Cambridge Analytica scandal made Facebook users more aware that their data is being vacuumed up for all kinds of purposes, some of them harmful. Most people believe Cambridge Analytica used the Facebook user data to help find votes for Donald Trump during the 2016 presidential election. (The Facebook data models were not used, my sources tell me, but that hardly matters now.) To many, it was offensive that their personal data could be used, without their knowledge or permission, to support a political cause they might not agree with. In a wider sense, Facebook allowed its platform to become a key enabler of the tribalism and polarization that’s poisoned the national conversation. Because politically toxic content gets more user engagement, Facebook ends up profiting. As a result, people and lawmakers now have a much clearer understanding of Facebook’s “surveillance capitalism” business model, and Zuckerberg’s and Sandberg’s smooth “we connect the world” patter has begun to sound hollow.

The company’s deepest fear is losing users. Without them, the personal data in Facebook’s “social graph” stagnates and loses its power to target ads. And indeed, Facebook’s user growth has slowed. Young people are deleting the Facebook app from their phones in record numbers. Facebook’s executives likely feared that if news of the disastrous Russian troll infestation came out too quickly or in the wrong way, consumers would begin to see Facebook as a harmful vice and leave the network in droves. No doubt Facebook also feared new government regulations, or the growing bipartisan calls to break up the company, which is now collecting the personal data of 2.2 billion people around the world.