Facebook users who Mark Zuckerberg once referred to as “dumb fucks” in early 2004 are now facing up to the reality behind that insult. With admissions from the company that almost all its 2 billion users probably had their data harvested by “malicious actors,” and that 87 million – not 50 million – profiles were probably used by Cambridge Analytica, the business model powering Facebook is being laid bare.
Cambridge Analytica denied using any more than 30 million profiles, saying they were obtained under an agreement with a Cambridge University academic. However, Facebook’s latest disclosures suggest nearly three times that number. Perhaps more significantly, the social network acknowledged that virtually every user’s data may have been open to being gathered.
Facebook said in a blog post that “malicious actors” took advantage of its platform, admitting that “most people on Facebook could have had their public profile scraped.” Facebook didn’t disclose who the malicious actors were or how the data might have been used, but said it’s working to close the gaps in its policies and security.
That a company valued at nearly $500 billion could not afford – or rather did not try – to implement stronger privacy from the start is telling for the critics who sum up the free-services-for-personal-data model as: “You are the product” (Longform).
Facebook is scrambling to save its reputation by promising to close loopholes and roll out new privacy measures. But in doing so, the company risks inviting its critics to conclude that privacy was never its main concern. Rather, its interest all along had been the commercial exploitation of the “dumb fucks,” not its motto of “bringing the world closer together.”
Zuckerberg has agreed to appear before the United States Senate judiciary and commerce committees and the House energy and commerce committee, and will send top lieutenants to other inquiries worldwide. Since the scandal, Facebook says it will increase its privacy restrictions with new, more transparent policies. These include making it easier for users to see the data Facebook has on them, and no longer providing information from data brokers to advertisers – though of course it will continue to sell directly to advertisers.
‘They trust me…’
“They ‘trust me.’ Dumb fucks.” Those two sentences forever haunt Mark Zuckerberg. The billionaire made the statement at age 19, when his social network was in its fast-growing infancy. Now, of course, it’s a money machine with revenue of more than $40 billion in 2017, a 47 percent increase on the previous year.
That revenue is based on access to more than 2 billion users, many of whom, judging by reaction to the recent scandal, are becoming ever-more aware of Facebook’s motives behind the use of their data.
It’s been nearly 20 years since Sun Microsystems then-CEO Scott McNealy told Wired: “You have zero privacy anyway. Get over it.” Zuckerberg echoed the sentiment in 2010, predicting that as social media grew, privacy would no longer be a “social norm.”
‘The problem is the hijacking of the human mind’ – former Google ethicist
That the Cambridge Analytica scandal wasn’t technically a data leak because it was based on legal agreements is beside the point. For many of its critics, the real problem is a laissez-faire approach to user privacy that happens to be central to Facebook’s business model.
“Privacy has always been a relative issue for the companies that dominate the consumer internet economy,” wrote Richard Waters in the Financial Times (may be behind paywall). “It is not about aspiring to some absolute standard for protecting user data: instead, what matters are the generally accepted standards of the time.”
Jörg Pohle, a researcher at the Berlin-based Humboldt Institute for Internet and Society, told WikiTribune that Facebook has never shown it cared about privacy. “There are many competing understandings and interpretations of privacy, and related concepts like surveillance or data protection…
“Nothing in Facebook’s policy documents indicates that Facebook understands your privacy as something that needs to be protected against Facebook itself.”
Sheryl Sandberg, Facebook’s COO, this week acknowledged that the company may have got the “balance” between openness and privacy wrong. “I think we were very idealistic and not rigorous enough,” she told the Today show.
Facebook’s just being Facebook
Industry analysts say Facebook’s very structure has virtually ensured that the platform was ripe for exploitation, and that data misuse was inevitable. Historically, the site has offered limited privacy shields for personal data across the network. Perhaps, then, the scandal over Cambridge Analytica is better seen as a result of Facebook’s DNA, not a bug or unforeseen flaw in the system.
For someone like Andy Yen, CEO of ProtonMail, the free, encrypted and self-destructing email service famously used by Cambridge Analytica CEO Alexander Nix, Facebook’s centralization of personal data has always been the problem.
“People talk about regulation, control, security, but I don’t think it matters whether the data was sold, hacked or breached,” Yen told WikiTribune. “That’s not the key point. The fact that this data exists causes a clear, present danger.”
It’s not that Facebook lacks the means to build privacy safeguards into its system – it’s simply not in its financial interest to do so, he said. “It’s not a technical limitation of the internet,” said Yen. “It’s entirely a business limitation.”
The social network dominates the advertising technology space called “ad-tech.” Ad-tech is an umbrella term for software tools that help companies target, deliver, and analyze their advertising efforts.
Facebook invests heavily in ad-tech because it makes its money by monetizing users’ data for advertising. Lots of money. The Facebook News Feed – most people’s primary gateway into the platform – accounts for more than $30 billion of the site’s annual advertising revenue, for example, according to Fast Company. Overall, its ad revenue is more than $40 billion.
“Facebook makes money … by profiling us and then selling our attention to advertisers, political actors, and others,” wrote techno-sociologist Zeynep Tufekci in the New York Times. “These are Facebook’s true customers, whom it works hard to please.”
Tristan Harris, a former design ethicist at Google, said that subtle techniques, such as the Facebook “Like” button, are designed to keep people on the site, so that profitable attention does not slip away.
“The problem is the hijacking of the human mind: systems that are better and better at steering what people are paying attention to,” he told Wired.
Did Facebook ever care about our privacy?
“People don’t care about privacy until they do,” Brent Mittelstadt, a research fellow at the Oxford Internet Institute, told WikiTribune.
Facebook users are no different.
Over the years, Facebook’s porous and heavily commercial approach to privacy has resulted in numerous mea culpas (see list below). In 2007, Zuckerberg issued a public apology over a Facebook program called Beacon, which allowed companies to track users’ purchases without their permission. In 2011, the company came to an agreement with the Federal Trade Commission over its alleged misbehavior in creating a false impression of privacy, including accepting regular privacy audits.
These were only a few of the milestones in the recurring issue of Facebook’s handling of privacy. They were warnings, but generally the rumpus would die down as millions of people continued to “Like” and “Share.” Privacy settings were there – if not obvious or easy, as Matt McKeon of the Electronic Frontier Foundation reported in 2010. But, much like the Terms and Conditions the masses famously ignore (The Guardian), privacy settings were a chore that many simply skipped.
Andrew Keen, author of The Internet is Not the Answer, told WikiTribune that Facebook’s convenience and free service are why “people don’t want it to have bad repercussions, they’d rather not think about it.”
“It’s free, so why wouldn’t you want to believe in all the nonsense that Zuckerberg tells everyone?” Keen said. “But then when it becomes clear that they are actually mining our data, people have to make hard choices.”
To delete or not to delete
The #DeleteFacebook movement flared up briefly after the Cambridge Analytica scandal. It gained steam when Tesla and SpaceX CEO Elon Musk and WhatsApp co-founder Brian Acton joined the global chorus of critics. The lack of a viable alternative (TechCrunch), however, means such movements are unlikely to have significant impact.
“The ‘delete Facebook’ thing is one solution for a certain type of user,” said Mittelstadt at the Oxford Internet Institute. “For people concerned about their privacy, it can be a step in the right direction. But this is also ignoring the fact that, in a lot of countries, Facebook is essentially the internet. So it’s not helpful in that context.”
Facebook’s monopoly allows it to gather users’ data even when they’re not on Facebook. That reach is one reason it acquired other platforms such as Instagram, with 800 million monthly users, and WhatsApp, with 1.5 billion. Facebook itself passed 2 billion monthly users in 2017.
Knowing this, internet browser developer Mozilla announced in March an add-on called Facebook Container that hides the identity of Facebook users from the rest of the Web.
“The difference is that it will be much harder for Facebook to use your activity collected off Facebook to send you ads and other targeted messages,” according to Mozilla. The add-on is an option for those who don’t want to delete Facebook but want to protect their privacy.
Security will not change business model
As the world becomes more digitized and more data-driven, privacy will be harder to maintain and cyberattacks a more pressing risk. New regulations, such as Europe’s General Data Protection Regulation (GDPR) and Germany’s NetzDG, will set new rules for how Facebook can operate in Europe. Zuckerberg said he expects to use the European GDPR system as a guideline for the type of privacy protection the company should offer throughout the world.
But for ProtonMail’s Yen, the internet’s entire business model needs to be revised, as well as the perspective of users.
“The whole business model of mining data, to serve ads to make money, that is fundamentally a flawed model,” he said. “You cannot really add security or add safeguards to fix business models.”
The rolling debate over online privacy is becoming less about strictly keeping information secure and more about who gets to control it. Though Facebook is technically an open, social platform, its centralization of information, Yen explained, “is a huge risk to not only privacy but also to democracy as a whole.”
Tim Berners-Lee, inventor of the Web, said his intention for it was the distribution of information that could be universally accessed. But it’s perhaps the prescient words of sci-fi writer William Gibson that better reflect today: “The future is already here, it’s just not evenly distributed.”
The looming problem is that most of the world’s information is concentrated in the hands of one-half of a duopoly (WSJ). Such concentration of information in a system with weak privacy designs makes it easier for “malicious actors” to exploit information technology for propaganda and psychological operations. If Facebook continues this way, those “hellish two years” Wired described in February are far from over.
Timeline of Facebook’s privacy developments
- 2004 – Facebook launches as The Facebook, a Harvard University social network. Mark Zuckerberg makes his infamous “dumb fucks” comment, referring to its early users’ naiveté in trusting him.
- 2006 – Harvard researchers working on a psychology project use information from profiles of users without asking their permission (Chronicle of Higher Education).
- 2011 – Facebook opens user information and activity to app developers.
- 2011 – Facebook comes to agreement with U.S. Federal Trade Commission over its misbehavior in creating a false impression of privacy, including accepting regular privacy audits.
- 2014 – A psychological experiment measures subjects’ moods after they are shown negative news stories, sparking outrage from users who felt they were manipulated. Facebook announces new guidelines on how to approach such experiments in the future.
- 2018 – In January, Facebook announces new “privacy center,” which it says will make protecting privacy easier by putting all privacy settings in one place.
- 2018 – Cambridge Analytica scandal breaks in March. Facebook introduces new privacy measures and tools already in development to comply with Europe’s forthcoming General Data Protection Regulation.