In a Senate Judiciary Committee hearing yesterday, there was a striking change of scenery—rather than grilling the floating heads of Big Tech CEOs, senators instead questioned policy leads from Twitter, Facebook, and YouTube on the role algorithms play in their respective platforms. The panel also heard from two independent experts in the field, and the results were less theatrical and perhaps more substantive.
Both Democrats and Republicans expressed concerns over how algorithms were shaping discourse on social platforms and how those same algorithms can drive users toward ever more extreme content. “Algorithms have great potential for good,” said Sen. Ben Sasse (R-Neb.). “They can also be misused, and we the American people need to be reflective and thoughtful about that.”
The Facebook, YouTube, and Twitter execs all emphasized how their companies’ algorithms can be helpful in achieving shared goals—they are working to find and remove extremist content, for example—though all the execs admitted de-radicalizing social media was a work in progress.
In addition to policy leads from the three social media platforms, the panel heard from a couple of experts. One of them was Joan Donovan, director of the Technology and Social Change Project at Harvard University. She pointed out that the main problem with social media is the way it’s built to reward human interaction. Bad actors on a platform can and often do use this to their advantage. “Misinformation at scale is a feature of social media, not a bug,” she said. “Social media products amplify novel and outrageous statements to millions of people faster than timely, local, relevant, and accurate information can reach them.”
One of Donovan’s proposed solutions sounded a lot like community cable TV stations. “We should begin by creating public interest obligations for social media timelines and newsfeeds, requiring companies to curate timely, local, relevant, and accurate information.” She also suggested that the platforms beef up their content moderation practices.
The other panel expert was Tristan Harris, president of the Center for Humane Technology and a former designer at Google. For years, Harris has been vocal about the perils of algorithmically driven media, and his opening remarks didn’t stray from that view. “We are now sitting through the results of 10 years of this psychologically deranging process that have warped our national communications and fragmented the Overton window and the shared reality we need as a nation to coordinate to deal with our real problems.”
One of Harris’s proposed solutions is to subject social media companies to the same regulations that university researchers face when they do psychologically manipulative experiments. “If you compare side-by-side the restrictions in an IRB study in a psychology lab at a university when you experiment on 14 people—you’ve got to file an IRB review. Facebook, Twitter, YouTube, TikTok are on a regular, daily basis tinkering with the brain implant of 3 billion people’s daily thoughts with no oversight.
Apples and oranges
“What we need to do is compare what are the regulations and protections we apply in one domain and we’re not applying in a different domain,” Harris said.
The hearing quickly shifted to Section 230 of the Communications Decency Act, the reform of which has been mooted by members of both parties. One bill introduced last year would require platforms with more than 10 million users to obtain permission from people before serving them algorithmically tailored content. If the companies fail to do so, they would not receive protection under Section 230. Another bill would similarly revoke Section 230 immunity if algorithms spread misinformation that leads to violence.
The panel concluded by exploring alternative business models for social media platforms, ones that wouldn’t be as reliant on algorithms to drive engagement and ad views. Harris suggested that electric utilities might be one such model. Rather than encouraging people to leave their lights on all the time—which would make them more money but work against the societywide goal of energy conservation—regulations have been set up to discourage flagrant overuse, and a portion of the profits are put into funds to ensure the sustainability of the electric grid.
“Imagine the technology companies, which today profit from an infinite amount of engagement, only made money from a small portion of that,” he said. “And the rest basically was taxed to put into a regenerative public interest fund that funded things like the fourth estate, fact checkers, researchers, public interest technologists.”
The National Archives and Records Administration is a federal agency responsible for preserving historically significant federal records, including tweets from senior government officials. For example, former Trump White House spokeswoman Sarah Huckabee Sanders turned over control of her official Twitter account to NARA when she left office. Leaving tweets on Twitter makes them easily accessible to the public.
But Politico reports Twitter won’t allow anything like this to happen for President Donald Trump’s now-banned @realDonaldTrump account.
“Given that we permanently suspended @realDonaldTrump, the content from the account will not appear on Twitter as it did previously or as archived administration accounts do currently, regardless of how NARA decides to display the data it has preserved,” a Twitter spokesman told Politico. “Administration accounts that are archived on the service are accounts that were not in violation of the Twitter Rules.”
Twitter permanently banned Donald Trump from its platform two days after the January 6 attack on the US Capitol. Twitter concluded that his tweets on that day had promoted or glorified violence.
Since then, Trump has had to get by without his most powerful megaphone. He no longer has the ability to blast his thoughts directly onto the screens of millions of people every day.
It’s not clear if NARA was seeking to have Twitter reinstate the @realDonaldTrump account under NARA control or create a copy of the account under another name. Perhaps NARA was proposing to post copies of Trump’s tweets to a completely new Twitter account. At this point it doesn’t matter, because Twitter has ruled out having Trump’s tweets on its platform in any form.
Instead, NARA says it will post an archive of Trump’s tweets to the website of the Trump Presidential Library, which itself is under NARA’s control. The agency says the archive will include all of Trump’s tweets, including controversial tweets that got warning labels from Twitter, as well as the tweets that ultimately got Trump banned.
NARA spokesman James Pritchett told Politico that the agency is “working to make the exported content available… as a download.” That sounds like NARA may only offer the tweets as one large download as opposed to making each tweet available online individually—a much less convenient format than bringing the tweets back to Twitter.
It’s not clear how much this matters in practice. There is already at least one private website hosting copies of Trump’s tweets. But there’s no guarantee that independent sites like this will still be around a decade from now, whereas Twitter and NARA (or a successor agency), in all likelihood, will still be here.
I would have expected social media sites like Twitter to care more about not having Trump as an active user than about eradicating every trace of his old writings from their platforms. But Twitter apparently feels strongly about both Trump and his tweets.
And Facebook evidently feels the same way. Last week, the company deleted an interview between Trump and his daughter-in-law Lara Trump, warning that any content posted “in the voice of Donald Trump” wasn’t welcome.
Russia has implemented a novel censorship method in an ongoing effort to silence Twitter. Instead of outright blocking the social media site, the country is using previously unseen techniques to slow traffic to a crawl and make the site all but unusable for people inside the country.
Research published Tuesday says that the throttling slows traffic traveling between Twitter and Russia-based end users to a paltry 128kbps. Whereas past Internet censorship techniques used by Russia and other nation-states have relied on outright blocking, slowing traffic passing to and from a widely used Internet service is a relatively new technique that provides benefits for the censoring party.
Easy to implement, hard to circumvent
“Contrary to blocking, where access to the content is blocked, throttling aims to degrade the quality of service, making it nearly impossible for users to distinguish imposed/intentional throttling from nuanced reasons such as high server load or a network congestion,” researchers with Censored Planet, a censorship measurement platform that collects data in more than 200 countries, wrote in a report. “With the prevalence of ‘dual-use’ technologies such as Deep Packet Inspection devices (DPIs), throttling is straightforward for authorities to implement yet hard for users to attribute or circumvent.”
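The Censored Planet report describes the effect of the throttling rather than the exact shaping mechanism. One conventional way a middlebox could hold a flow to a fixed rate is a token-bucket limiter; the sketch below is illustrative only, and the class name and numbers are not taken from the report.

```python
import time

class TokenBucket:
    # Classic token-bucket shaper: tokens refill at `rate_bps` bytes/sec,
    # and a packet may pass only when enough tokens have accumulated.
    def __init__(self, rate_bps: float, burst: float):
        self.rate = rate_bps          # refill rate, bytes per second
        self.capacity = burst         # maximum burst size, bytes
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, nbytes: int) -> bool:
        now = time.monotonic()
        # Accrue tokens for the elapsed time, capped at the burst size
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

# 128kbps is roughly 16,000 bytes per second
bucket = TokenBucket(rate_bps=16_000, burst=16_000)
print(bucket.allow(1500))  # a first full-size packet passes immediately
```

Once the bucket drains, further packets are delayed or dropped until tokens refill, which is what makes throttled traffic feel like ordinary congestion rather than a block.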
The throttling began on March 10, as documented in tweets here and here from Doug Madory, director of Internet analysis at Internet measurement firm Kentik.
In an attempt to slow traffic destined to or originating from Twitter, Madory found, Russian regulators targeted t.co, the domain used to host all content shared on the site. In the process, all domains containing the string “t.co” (for example, Microsoft.com or reddit.com) were throttled, too.
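The over-blocking follows directly from matching on a substring rather than on the domain itself. A minimal Python sketch (the function names are illustrative, not anyone's actual filter code) shows the difference:

```python
def naive_match(domain: str, target: str = "t.co") -> bool:
    # Throttles any domain that merely contains the string "t.co" --
    # which also catches "microsoft.com" and "reddit.com"
    return target in domain

def proper_match(domain: str, target: str = "t.co") -> bool:
    # Throttles only the target domain itself or its subdomains
    return domain == target or domain.endswith("." + target)

for d in ["t.co", "api.t.co", "microsoft.com", "reddit.com"]:
    print(d, naive_match(d), proper_match(d))
```

Because “microsoft.com” ends in the five characters “t.com”, the substring “t.co” appears inside it, so a naive filter sweeps it up along with the intended target.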
Today’s outages in Russia appears to have been caused by a bad substring match by @roscomnadzor.
Intending to block Twitter’s link shortener t[.]co, Russia blocked all domains containing t[.]co, for example Microsoft[.]com and Reddit[.]com.
That move led to widespread Internet problems because it rendered affected domains effectively unusable. The throttling also consumed the memory and CPU resources of affected servers because it required them to maintain connections for much longer than normal.
Roskomnadzor—Russia’s executive body that regulates mass communications—said last month that it was throttling Twitter for failing to remove content involving child pornography, drugs, and suicide. It went on to say that the slowdown affected the delivery of audio, video, and graphics, but not Twitter itself. Critics of government censorship, however, say Russia is misrepresenting its reasons for curbing Twitter availability. Twitter declined to comment for this post.
Are Tor and VPNs affected? Maybe
Tuesday’s report says that the throttling is carried out by a large fleet of “middleboxes” that Russian ISPs install as close to the customer as possible. This hardware, Censored Planet researcher Leonid Evdokimov told me, is typically a server with a 10Gbps network interface card and custom software. A central Russian authority feeds the boxes instructions for what domains to throttle.
The middleboxes inspect both requests sent by Russian end users and responses that Twitter returns. That means the new technique may have capabilities not found in older Internet censorship regimens, such as filtering of connections that use VPNs, Tor, and censorship-circumvention apps. Ars previously wrote about the servers here.
The middleboxes use deep packet inspection to extract information, including the SNI. Short for “server name indication,” the SNI is the domain name of the HTTPS website a user is visiting, sent in plaintext at the start of a normal Internet transaction. Russian censors use the plaintext for more granular blocking and throttling of websites. Blocking by IP address, by contrast, can have unintended consequences because it often blocks content the censor wants to keep in place.
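To illustrate why a DPI box can read the SNI at all, the sketch below builds and parses a server_name extension following the RFC 6066 byte layout. A real ClientHello wraps this in more TLS framing, and the helper functions here are hypothetical; the point is that the hostname sits in the raw bytes with no decryption required.

```python
import struct

def build_sni_extension(hostname: str) -> bytes:
    # server_name extension per RFC 6066: ext type 0, then a list of
    # (name_type=0 host_name, 2-byte length, raw hostname bytes)
    name = hostname.encode("ascii")
    entry = b"\x00" + struct.pack("!H", len(name)) + name
    server_name_list = struct.pack("!H", len(entry)) + entry
    return struct.pack("!HH", 0, len(server_name_list)) + server_name_list

def parse_sni(ext: bytes) -> str:
    # What a middlebox does: read the hostname straight out of the bytes
    ext_type, _ext_len, _list_len = struct.unpack("!HHH", ext[:6])
    assert ext_type == 0  # 0 = server_name extension
    name_len = struct.unpack("!H", ext[7:9])[0]
    return ext[9:9 + name_len].decode("ascii")

ext = build_sni_extension("t.co")
print(parse_sni(ext))  # prints t.co
```

Anything that sits on the wire between client and server can run the same parse, which is exactly what makes the SNI such a convenient hook for censors.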
One countermeasure for circumventing the throttling is ECH, or Encrypted ClientHello. An update to the Transport Layer Security protocol, ECH encrypts the domain name so that censors can no longer block or throttle by domain and must resort to IP-level blocking instead. Anti-censorship activists say this leads to what they call “collateral freedom,” because the risk of blocking essential services often leaves the censor unwilling to accept the collateral damage resulting from blunt blocking by IP address.
In all, Tuesday’s report lists seven countermeasures:
TLS ClientHello segmentation/fragmentation (implemented in GoodbyeDPI and zapret)
TLS ClientHello inflation with padding extension to make it bigger than 1 packet (1500+ bytes)
Prepending real packets with a fake, scrambled packet of at least 101 bytes
Prepending client hello records with other TLS records, such as change cipher spec
Keeping the connection idle and waiting for the throttler to drop its state
Adding a trailing dot to the SNI
Any encrypted tunnel/proxy/VPN
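The trailing-dot countermeasure in the list above works because a throttler keyed on exact SNI strings treats “t.co.” and “t.co” as different values, while DNS treats the trailing-dot (fully qualified) form as the same name. A minimal sketch, assuming a hypothetical exact-match filter:

```python
# Hypothetical blocklist of SNI values, matched exactly as strings
THROTTLED = {"t.co", "twitter.com"}

def is_throttled(sni: str) -> bool:
    # Exact string lookup, as the trailing-dot countermeasure implies
    return sni in THROTTLED

def canonical(name: str) -> str:
    # DNS treats "twitter.com." (fully qualified) the same as "twitter.com"
    return name.rstrip(".").lower()

print(is_throttled("twitter.com"))   # True -- matched and throttled
print(is_throttled("twitter.com.")) # False -- slips past the filter
print(canonical("twitter.com."))    # same host as far as DNS is concerned
```

Like the other entries in the list, this exploits a bug in the current implementation: a throttler that canonicalized names before matching would close the loophole.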
It’s possible that some of the countermeasures could be enabled by anti-censorship software such as GoodbyeDPI, Psiphon, or Lantern. The limitation, however, is that the countermeasures exploit bugs in Russia’s current throttling implementation. That means the ongoing tug of war between censors and anti-censorship advocates may turn out to be protracted.
The US Supreme Court today vacated a 2019 appeals-court ruling that said then-President Trump violated the First Amendment by blocking people on Twitter. The high court declared the case “moot” because Trump is no longer president.
For legal observers, the ruling itself was less interesting than a 12-page concurring opinion filed by Justice Clarence Thomas, who argued that Twitter and similar companies could face some First Amendment restrictions even though they are not government agencies. That’s in contrast to the standard view that the First Amendment’s free speech clause does not prohibit private companies from restricting speech on their platforms.
Thomas also criticized the Section 230 legal protections given to online platforms and argued that free-speech law shouldn’t necessarily prevent lawmakers from regulating those platforms as common carriers. He wrote that “regulation restricting a digital platform’s right to exclude [content] might not appreciably impede the platform from speaking.”
Thomas doesn’t seem to be arguing for a wide-ranging application of the First Amendment to all online moderation decisions. Instead, he wrote that free-speech law could apply “in limited circumstances,” such as when a digital platform blocks user-submitted content “in response to government threats.”
“Because of the change in Presidential administration, the Court correctly vacates the Second Circuit’s decision,” Thomas wrote. “I write separately to note that this petition highlights the principal legal difficulty that surrounds digital platforms—namely, that applying old doctrines to new digital platforms is rarely straightforward. Respondents [the Twitter users who sued Trump] have a point, for example, that some aspects of Mr. Trump’s account resemble a constitutionally protected public forum. But it seems rather odd to say that something is a government forum when a private company has unrestricted authority to do away with it.”
The Trump case didn’t give the Supreme Court a chance to rule on the questions Thomas raised, but he is hoping that future cases will provide such an opportunity:
The Second Circuit feared that then-President Trump cut off speech by using the features that Twitter made available to him. But if the aim is to ensure that speech is not smothered, then the more glaring concern must perforce be the dominant digital platforms themselves. As Twitter made clear, the right to cut off speech lies most powerfully in the hands of private digital platforms. The extent to which that power matters for purposes of the First Amendment and the extent to which that power could lawfully be modified raise interesting and important questions. This petition, unfortunately, affords us no opportunity to confront them.
US Rep. Ted Lieu (D-Calif.) blasted Thomas’ opinion. “Justice Clarence Thomas wants the government to regulate speech on the Internet. If you are a Republican who supports this view, don’t ever lecture anyone on free speech ever again,” Lieu wrote on Twitter.
“That Justice Thomas has… idiosyncratic.. views about the First Amendment is not exactly news,” wrote Stephen Vladeck, a professor at University of Texas School of Law who has argued before the Supreme Court. “That none of his conservative colleagues saw fit to join his concurrence in the Twitter case is probably the bigger story, at least for now.”
Trump “had only limited control of the account”
Twitter’s decision to permanently remove Trump from the platform (for inciting violence) demonstrated that Trump himself “had only limited control of the account,” Thomas wrote.
“The disparity between Twitter’s control and Mr. Trump’s control is stark, to say the least,” Thomas wrote. “Mr. Trump blocked several people from interacting with his messages. Twitter barred Mr. Trump not only from interacting with a few users, but removed him from the entire platform, thus barring all Twitter users from interacting with his messages. Under its terms of service, Twitter can remove any person from the platform—including the President of the United States—’at any time for any or no reason.'”
Thomas acknowledged that private entities usually aren’t constrained by the First Amendment but added that the First Amendment may apply on a private company’s online platform “if the government coerces or induces it to take action the government itself would not be permitted to do, such as censor expression of a lawful viewpoint.” Thomas continued:
Consider government threats. “People do not lightly disregard public officers’ thinly veiled threats to institute criminal proceedings against them if they do not come around.” [Thomas was quoting a 1963 Supreme Court ruling in that sentence.] The government cannot accomplish through threats of adverse government action what the Constitution prohibits it from doing directly. Under this doctrine, plaintiffs might have colorable claims against a digital platform if it took adverse action against them in response to government threats.
But no such threat was alleged in the Trump case, and “[w]hat threats would cause a private choice by a digital platform to ‘be deemed… that of the State’ remains unclear,” he wrote.
Thomas also suggested that the First Amendment should not have applied to Trump blocking users because Twitter is the one that ultimately controls the digital space. “Because unbridled control of the account resided in the hands of a private party, First Amendment doctrine may not have applied to respondents’ complaint of stifled speech,” he wrote. “Whether governmental use of private space implicates the First Amendment often depends on the government’s control over that space.”
Thomas criticizes Section 230
Although Section 230 of the Communications Decency Act gives online platforms immunity from lawsuits over how they moderate user-submitted content, Thomas wrote that Congress “has not imposed corresponding responsibilities, like nondiscrimination, that would matter here.”
In a footnote, Thomas wrote that the legal immunity provided by Section 230 “eliminates the biggest deterrent—a private lawsuit—against caving to an unconstitutional government threat.” In the same footnote, Thomas cited an argument “that immunity provisions like Section 230 could potentially violate the First Amendment to the extent those provisions pre-empt state laws that protect speech from private censorship.”
Thomas’ Section 230 argument was disputed by Jeff Kosseff, assistant professor of cybersecurity law at the US Naval Academy and author of a book on Section 230. “I think that it’s very unlikely that a state must-carry law for social media would survive [First Amendment] scrutiny,” Kosseff wrote in a Twitter thread. Even if such a hypothetical state law passed First Amendment muster, it’s unlikely that Section 230 would be found to violate the First Amendment under existing interpretations of US law, he wrote.
Thomas: Online platforms are like common carriers
In addition to his First Amendment argument, Thomas wrote that digital platforms could be regulated as common carriers. “In many ways, digital platforms that hold themselves out to the public resemble traditional common carriers,” he wrote. “Though digital instead of physical, they are at bottom communications networks, and they ‘carry’ information from one user to another. A traditional telephone company laid physical wires to create a network connecting people. Digital platforms lay information infrastructure that can be controlled in much the same way.”
The similarity between online platforms and common carriers “is even clearer for digital platforms that have dominant market share,” such as Facebook, Google, and Amazon, Thomas continued.
“The Facebook suite of apps is valuable largely because 3 billion people use it,” he wrote. “Google search—at 90 percent of the market share—is valuable relative to other search engines because more people use it, creating data that Google’s algorithm uses to refine and improve search results. These network effects entrench these companies.” Thomas also wrote: “Although both companies are public, one person controls Facebook (Mark Zuckerberg), and just two control Google (Larry Page and Sergey Brin).”
“Much like with a communications utility, this concentration gives some digital platforms enormous control over speech,” Thomas wrote. Google “can suppress content by deindexing or downlisting a search result or by steering users away from certain content by manually altering autocomplete results,” while “Facebook and Twitter can greatly narrow a person’s information flow through similar means.” Amazon, “as the distributor of the clear majority of e-books and about half of all physical books… can impose cataclysmic consequences on authors by, among other things, blocking a listing,” he wrote.
Arguing that lawmakers could impose common-carrier rules on digital platforms, Thomas wrote, “The similarities between some digital platforms and common carriers or places of public accommodation may give legislators strong arguments for similarly regulating digital platforms.”
“That is especially true because the space constraints on digital platforms are practically nonexistent (unlike on cable companies), so a regulation restricting a digital platform’s right to exclude might not appreciably impede the platform from speaking,” Thomas added. Thomas also wrote that his common-carrier analysis does not mean “that the First Amendment is irrelevant until a legislature imposes common carrier or public accommodation restrictions—only that the principal means for regulating digital platforms is through those methods.”
Thomas regretted Brand X decision
If Congress took up Thomas’ call to regulate online platforms, we could end up with a system in which Internet service providers like Comcast and AT&T are not regulated as common carriers while Twitter, Facebook, and Google do face the common-carrier restrictions that traditionally applied to telecommunications companies.
Thomas has played an important role in how common-carrier regulations are applied or not applied to Internet service providers. In the 2005 Brand X case, Thomas wrote the Supreme Court opinion that lets the Federal Communications Commission classify Internet service as either an information service or telecommunications as long as it provides a reasonable justification for its decision.
The FCC can only apply common-carrier rules to Internet service if it is classified as telecommunications, and the Brand X ruling allowed the FCC to change that classification decision multiple times under different administrations, including when then-FCC Chairman Ajit Pai deregulated broadband in 2017. Thomas last year wrote that he regrets the Brand X decision because it gave federal agencies like the FCC too much leeway in interpreting US law.