As publicity-shy Facebook co-founder and CEO Mark Zuckerberg testified before both houses of Congress last week amid an avalanche of criticism over the lack of user privacy protections on the world’s biggest social media network, it became crystal clear that the entire social media industry has entered a new and perilous phase.
Trust in all social media platforms is plummeting. Facebook is the one in the spotlight now because it disclosed last week that data from as many as 87 million of its users may have been improperly shared with Cambridge Analytica, a U.K. analytics firm tied to the 2016 campaign of President Trump. The consumer surveillance model underlying the free services of Facebook, Instagram, and Google, among others, is coming under blistering attack from users, legislators, and regulators in both the United States and Europe.
Until now, social media companies have taken a reactive and subdued approach to the misuse of personal data. That is certain to end, and the reckoning could easily lead to utility-style regulation.
Psychographic profiles in jeopardy
At a minimum, the lucrative formula of selling ads based on psychographic profiles of their users is in jeopardy. While data aggregators have been collecting, packaging, and reselling consumer profiles for decades, digital social media companies collect data on an unprecedented and massive scale, with much greater accuracy and resolution.
A recent poll conducted by SurveyMonkey found that the three biggest social media companies—Facebook, Twitter, and Alphabet-owned Google—are far less popular with Americans than they were as recently as last fall. Facebook’s already-lower favorability score dropped twice as much as those of the other tech giants.
Government intervention is needed. It is true that we live in a free-market society that has served us well, making the United States the most prosperous major economy in the world. But the hands-off approach to privacy safeguards has gone too far.
Three major steps required
Facebook and its brethren must respect consumer privacy to a much greater extent. In fact, three major steps are required.
First, Facebook must strengthen its privacy controls.
The company must not assume that a default boilerplate list of data-sharing permissions is acceptable to users. Rather, users should be given the choice to expressly opt in to any terms that would allow their data to be disseminated, and Facebook must ask for permission each time it wants to share their information with a third party. The control panels users rely on to adjust their data-sharing authorizations can no longer be so baffling.
It would also be helpful if users could click a query button every time an article or ad appeared on their screen to call up a Facebook response explaining why they have been targeted to receive that item. “Trust Us” is no longer a viable policy.
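To make these two ideas concrete, here is a minimal sketch in Python of how explicit, per-recipient opt-in consent and a “why am I seeing this?” explanation might be modeled. The record fields, class names, and functions are hypothetical illustrations for this article, not Facebook’s actual systems or API.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class ConsentRecord:
        # One explicit, auditable grant per user, per data category, per recipient.
        user_id: str
        data_category: str          # e.g. "likes", "friend_list"
        recipient: str              # e.g. "acme_ads"
        granted_at: datetime
        expires_at: datetime

    class ConsentLedger:
        def __init__(self):
            self._grants = {}

        def opt_in(self, record: ConsentRecord):
            key = (record.user_id, record.data_category, record.recipient)
            self._grants[key] = record

        def may_share(self, user_id, data_category, recipient, now=None):
            # The default answer is "no": sharing is allowed only if an
            # unexpired, explicit opt-in exists for this exact combination.
            now = now or datetime.utcnow()
            record = self._grants.get((user_id, data_category, recipient))
            return record is not None and record.expires_at > now

    def explain_targeting(ad_id, matched_attributes):
        # A hypothetical "why am I seeing this?" payload returned when a
        # user clicks a query button next to an ad.
        return {
            "ad_id": ad_id,
            "reason": "Your profile matched these attributes",
            "matched_attributes": matched_attributes,
        }

The design choice worth noting is that consent is scoped to a specific recipient and data category and expires on its own, rather than being a single blanket permission buried in boilerplate.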
Next, the United States needs to adopt a domestic version of the European Union’s stringent General Data Protection Regulation, which requires organizations handling the personal data of EU residents to protect it rigorously. Businesses that fail to comply face steep fines of up to 4 percent of their total annual global revenue. EU citizens are protected no matter where their data travels.
Lastly, U.S. social media companies must begin to adopt and apply bleeding-edge technologies, such as homomorphic encryption and data provenance tracking, as soon as possible.
Homomorphic encryption, which allows computations to be performed directly on encrypted data, would allow Facebook to search user profiles to identify audience groups attractive to select advertisers without exposing any individual’s details. It would enable Facebook, for example, to generate aggregated data on groups of users who fall within specific psychographic profiles, so that advertisers could still accurately target groups of prospective customers without knowing their actual identities.
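As a rough illustration of the principle, the sketch below uses the open-source python-paillier library, an additively homomorphic scheme (simpler than the fully homomorphic systems a production deployment might require), to count how many users in a cohort match a psychographic segment without decrypting any individual’s flag. The per-user flags are invented for the example.

    from phe import paillier  # pip install phe (python-paillier)

    # Each user's membership in a psychographic segment is a 0/1 flag,
    # encrypted under the platform's public key before it is queried.
    public_key, private_key = paillier.generate_paillier_keypair()

    segment_flags = [1, 0, 1, 1, 0, 1]          # hypothetical per-user flags
    encrypted_flags = [public_key.encrypt(f) for f in segment_flags]

    # Additive homomorphism: summing ciphertexts yields a ciphertext of the
    # sum, so the aggregation step never sees any individual's value.
    encrypted_count = encrypted_flags[0]
    for ciphertext in encrypted_flags[1:]:
        encrypted_count = encrypted_count + ciphertext

    # Only the aggregate audience size is decrypted and released.
    audience_size = private_key.decrypt(encrypted_count)
    print(f"Users matching the profile: {audience_size}")

The point is that the advertiser learns only the size (or other aggregate statistics) of the matching audience, never which individuals are in it.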
Using data provenance tools, a social media company could trace and record the true identities of users. It could also track the original source of a piece of data and follow its subsequent movements among databases. These techniques could help unearth the true identities of Russian perpetrators and other malefactors.
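One simple way to implement such tracking is an append-only, hash-chained log in which every copy or transfer of a piece of data is recorded and chained to the entry before it, so the history cannot be silently rewritten. The sketch below, in Python, is an illustrative toy; the field names and example entries are invented for this article.

    import hashlib
    import json
    from datetime import datetime, timezone

    class ProvenanceLog:
        """Hash-chained, append-only record of where a piece of data came
        from and everywhere it has travelled since (illustrative sketch)."""

        def __init__(self):
            self.entries = []

        def record(self, data_id, source, destination, action):
            previous_hash = self.entries[-1]["hash"] if self.entries else "genesis"
            entry = {
                "data_id": data_id,
                "source": source,            # e.g. the original uploader or dataset
                "destination": destination,  # e.g. a partner's database
                "action": action,            # e.g. "copied", "queried", "shared"
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "previous_hash": previous_hash,
            }
            # Chaining each entry to the previous one makes tampering with
            # the recorded history detectable.
            entry["hash"] = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()
            ).hexdigest()
            self.entries.append(entry)
            return entry

    log = ProvenanceLog()
    log.record("profile:12345", "user_upload", "ads_pipeline", "copied")
    log.record("profile:12345", "ads_pipeline", "partner_analytics", "shared")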
As things stand today, there is little regulation governing companies’ cybersecurity practices, and most companies are capitalizing on that freedom. Predictably, the advanced data science techniques described above see exceedingly little use. On the regulatory front, Congress has done almost nothing. It has not required that Internet of Things devices accept security updates, for example, nor that consumer information be fully encrypted to mitigate the impact of a data breach. In fact, a Federal Communications Commission rule that would have required Internet service providers to protect customers’ information was blocked before it could be enforced.
The only bright spot is at the state level, with California and New York leading the way.
Both states regulate data protection and require that customers be notified when breaches occur. California, for instance, has expanded the definition of “personal information” to include bank card information and PIN codes, as well as medical records. The Golden State also mandates strict safeguards when companies share customer information with third parties.
Privacy not always a priority
Given the dearth of action at the federal level, it’s hardly surprising that, until recently, social media users had been largely oblivious to privacy concerns. One study that analyzed the Facebook profiles of hundreds of students at Carnegie Mellon University found that 89 percent of users posted under their real names, and 61 percent included a photograph of themselves that made identification easier. The majority of users also didn’t bother to change default privacy settings that allowed virtually uninhibited access to their data.
Privacy breaches are not the sole reason for the recent erosion of trust. Users are finally taking a hard look at their deeply asymmetric relationship with Facebook. They provide a torrent of information, but have no idea