Congress Urges Tech Firms to Control Content—But With Qualms

At Congressional hearings this week on Russia’s use of social media to interfere with the 2016 U.S. election, lawmakers pressed Facebook, Twitter, and Google to take exhaustive measures to stop the same thing from happening again.

Lawmakers urged the companies to scour their networks to root out foreign advertisers, trolls, and bots; to eliminate messages meant to sow discord among Americans; and to banish content the social media companies themselves describe as “vile” and “malicious.”

But as senators grilled tech company executives at a Judiciary Committee subcommittee hearing on Tuesday, the discussion often veered to the actions of the big three Internet companies even when Russian entities weren’t involved—and how they should control content under those circumstances as well. Sen. Dick Durbin (D-IL) called out Facebook for sharing its expertise with what he described as a bigoted anti-Muslim advocacy group.

Facebook General Counsel Colin Stretch replied, “We are tightening content guidelines on ads with respect to violence.” “Regardless of the source?” Durbin asked him. “Yes,” Stretch replied.

After scoring that point, however, Durbin himself then quailed a bit at the consequences of Facebook’s broadening commitment to purge what he called “vile content.”

“How can you sort this out [while still preserving] freedom of expression?” he asked. “I don’t suggest it’s easy,” Stretch said.

Although shielding the integrity of elections from foreign powers was the main imperative at the hearing, the possible trade-off in speech suppression was also an uneasy theme threaded through the discussion.

While the subcommittee was collectively urging the tech companies to intensify their role as content referees, several members from the Republican side of the aisle also expressed discomfort with the idea.

“If we tell you, go forth and don’t run [certain] ads, that means we’re telling you to censor content,” Sen. John Kennedy (R-LA) said. “That kind of bothers me, too.”

The probes of Russia’s extensive influence campaign, and revelations about social media’s crucial role in it, have moved the big tech companies, which once preferred to see themselves as neutral conduits for user expression, to make substantial new efforts to patrol and curate the content that moves through their channels. Stretch told the subcommittee that Facebook is doubling the number of staffers devoted to this effort, from 10,000 to 20,000.

During Facebook’s quarterly earnings report Thursday, CEO Mark Zuckerberg told financial analysts that efforts to root out fake news and foreign interference won’t come cheap, Recode reported.

“I’m dead serious about this, and the reason I’m talking about this on our earnings call is that I’ve directed our teams to invest so much in security—on top of the other investments we’re making—that it will significantly impact our profitability going forward, and I wanted our investors to hear that directly from me,” Zuckerberg said, according to a Facebook transcript.

But the tech firms had already been under pressure to take more responsibility for domestic user-generated content—even though decisions they’d taken to permit, or to suppress, messages that could be seen as harmful or offensive had come under fire from both ends of the political spectrum.

For example, the Southern Poverty Law Center criticized Facebook in July for allegedly allowing white supremacist “hate groups” to use it as a platform, Forbes reported. Meanwhile, conservative Tennessee Congresswoman Marsha Blackburn denounced Twitter last month for pulling her campaign ad as “inflammatory” over her statement that she had “stopped the sale of baby parts” by opposing medical research involving fetal tissue. Twitter quickly reversed that decision, according to Politico.

Sen. Ted Cruz brought up the Blackburn incident as he took his turn questioning the tech company executives at the Judiciary subcommittee hearing Tuesday.

“The prospect of Silicon Valley companies censoring speech is troubling,” Cruz said.

Cruz claimed that Google search results had surfaced more positive stories about Hillary Clinton while featuring more negative items on Donald Trump, and accused Facebook workers of spiking stories about conservative politicians. Cruz asked the tech executives how they would react to this charge: “You’re putting your thumb on the scale of political debate and moving it toward the views of your employees.”

Expressing similar concerns at a hearing in June, Axios reported, Cruz asked whether “these global technology companies have a good record protecting free speech, and what can be done to protect the First Amendment rights of American citizens.”

In fact, activists on the right have floated proposals that would actually curb the ability of big tech companies to become arbiters of appropriate content, forcing them to remain the neutral forums they themselves still aspire to be. Former Trump advisor Steve Bannon has advocated regulating dominant tech companies such as Facebook and Google as utilities, The Intercept reported, presumably requiring them to serve all content providers equally.

And in a draft memo obtained by Axios, conservative activist Phil Kerpen proposed that tech platforms that abandoned a stance of neutrality and exercised “editorial control” would then become liable for all the content that flowed through their channels. That would mean the loss of immunity from liability for user-generated content that Internet companies now enjoy under Section 230 of the Communications Decency Act, one of the bedrock protections that fostered the growth of social media networks.

At the Judiciary hearing, Cruz and other senators pressed the Facebook, Twitter, and Google executives to categorize their businesses—as media companies or tech platforms, content creators or hands-off content hosts.

“Do you consider your sites neutral public fora?” Cruz asked them.

Stretch said yes, “within guidelines, such as forbidding hate speech.” Facebook, he said, is “open to all ideas without regard to viewpoint.”

Author: Bernadette Tansey

Bernadette Tansey is a former editor of Xconomy San Francisco. She has covered information technology, biotechnology, business, law, environment, and government as a Bay Area journalist. She has written about edtech, mobile apps, social media startups, and life sciences companies for Xconomy, and tracked the adoption of Web tools by small businesses for CNBC. She was a biotechnology reporter for the business section of the San Francisco Chronicle, where she also wrote about software developers and early commercial companies in nanotechnology and synthetic biology.