Taxpayers Funding Research on How to Stealthily Censor People for Spreading 'Misinformation'


Originally published by Efthymis Oraiopoulos

Researchers at the University of Washington published a study in June 2022 focused on techniques to curb "disinformation" on social media with strategies similar to "shadow-banning."

The study was published in the academic journal "Nature Human Behaviour," and one of its researchers, Kate Starbird, acknowledged nearly $750,000 in support from the National Science Foundation, a taxpayer-funded federal agency.

The research focuses on alleged misinformation during the 2020 general election, when some Twitter accounts posted claims about specific instances of election fraud.

The research does not establish why these claims constituted misinformation.

Just the News first reported on the study, which had seemingly gone unnoticed in the year since its publication, on Monday.

The strategies researched and discussed in the academic paper are stealth censorship techniques. According to the study, they can be more effective than an overt account ban, post deletion, or other public action by a social media company, and could "minimize the public relations challenges."

The dataset was pulled from about 1 billion election-related Twitter posts collected between Sept. 1, 2020, and Dec. 15, 2020.

"To construct our dataset, we first identified 430 incidents--distinct stories that included false, exaggerated or otherwise misleading claims or narratives. Search terms were devised for each incident, extracting 23 million posts generated by 10.8 million accounts from the broader collection," the research authors said.

Virality Circuit Breakers

The first censorship technique, named the "virality circuit breaker," was deemed quicker than fact-checking. It works "by suspending algorithmic amplification." In other words, even if an account successfully posts something, the post will not be shown to many of the account's followers or to other users.

"Through simulations, we reveal how virality circuit breakers can have similar efficacy to outright removal even if the amount by which virality is reduced is small," the paper says.

The researchers found that "a 10 percent reduction in virality implemented four hours after the start of an event can reduce the spread of misinformation by nearly 45.3 percent."

The second of the two techniques discussed in the paper was called the "nudge."

"Many instances of misinformation involve claims that are partly true or require non-trivial time to debunk," the paper says, such as claims that there are "statistical irregularities in reported vote tallies," which requires a "statistician gathering and analyzing the data and determining merit."

A "nudge" reduces the following of every user who discusses the incident in question.

The paper says that nudges can be used to substantially reduce "cumulative engagement."

Final Words

The researchers concluded that their framework could be adopted in the near future without the use of large-scale censorship or major advances in cognitive psychology and machine learning.

"In our case, the gold standard would be to have Twitter implement our recommended policies in some locations but not others and examine subsequent engagement with viral misinformation," the paper says.

"Our results highlight a practical path forward as misinformation online continues to threaten vaccination efforts, equity and democratic processes around the globe."

Jevin West, one of the paper's authors, told Just the News on Monday that the findings in the paper were theoretical, and that concerns about the research being used to stifle free speech involve a "fundamental misunderstanding of the paper that appears to be based on non-factual distortion and falsehoods."

Mr. West said the research was aimed only at assessing interventions that would have prevented the spread of "COVID-19 misinformation and disinformation."

The University of Washington received a grant of $197,538 in 2020 for a project entitled "How Scientific Data, Knowledge, and Expertise Mobilize in Online Media during the COVID-19 Crisis" and a $550,000 grant for a project entitled "Unraveling Online Disinformation Trajectories: Applying and Translating a Mixed-Method Approach to Identify, Understand and Communicate Information Provenance."

The university also has a multiyear grant of $2.25 million from the NSF for a program called "Rapid-Response Frameworks for Mitigating Online Disinformation."

The researchers using this grant are Ms. Starbird, Mr. West, and another author of the paper.

Mike Benz, director of the Foundation for Freedom Online and a former State Department diplomat under the Trump administration, commented on the study in an interview with Just the News.

The study is a roadmap on "how to censor people using secret methods so that they wouldn't know they're being censored, so that it wouldn't generate an outrage cycle, and so that it'd be more palatable for the tech platforms who wouldn't get blowback because people wouldn't know they're being censored," Mr. Benz said.

The study appeared to open a new front in the disinformation wars that could further disguise censorship so it can't be contested or shamed, he added.

The groups, he argued, appear to be trying to create an "information purgatory to place largely conservative, populist, or heterodox opinions and to stop them from going viral." He added he feared the motive of such censorship was to promote liberalism and crush alternate political philosophies.

"They explicitly say that the purpose of this censorship psychology study is to eliminate resistance to vaccination efforts, equity, and democratic processes, meaning elections," Mr. Benz said. "So they want to be able to control and prevent all opposition to election procedures that they want in place, to vaccination campaigns, and to what appears to be racial and climate equity initiatives."


Missouri v. Biden

A federal judge recently ruled against the Biden administration in a case of government agencies pressuring social media companies to censor information the government did not want spreading online.

U.S. District Judge Terry Doughty, a Trump nominee, issued an injunction on Independence Day blocking multiple government agencies and administration officials from contacting social media platforms for censorship reasons.

In his words, they are forbidden to meet with or contact social media companies for the purpose of "encouraging, pressuring, or inducing in any manner the removal, deletion, suppression, or reduction of content containing protected free speech."

The lawsuit was filed by the Republican attorneys general of Missouri and Louisiana.

"If the allegations made by Plaintiffs are true, the present case arguably involves the most massive attack against free speech in United States' history," Doughty writes. "In their attempts to suppress alleged disinformation, the Federal Government, and particularly the Defendants named here, are alleged to have blatantly ignored the First Amendment's right to free speech."

The order also prohibits the agencies and officials from pressuring social media companies "in any manner" to try to suppress posts.

White House press secretary Karine Jean-Pierre said "we certainly disagree with this decision."

The lawsuit alleges that government officials used the possibility of favorable or unfavorable regulatory action to coerce social media platforms to squelch what the administration considered misinformation on a variety of topics, including COVID-19 vaccines, President Joe Biden's son Hunter Biden, and election integrity.

The injunction, along with Doughty's accompanying opinion saying the administration "seems to have assumed a role similar to an Orwellian 'Ministry of Truth,'" was hailed by conservatives as a victory for free speech and a blow to censorship.


Social media companies routinely take down posts that violate their own standards, but they are rarely compelled to do so by the U.S. government.

Meta restricted access to 27 items that it thought violated laws in the United States during the first six months of 2020, most of them involving price-gouging allegations, according to its transparency report. But it reported no U.S.-specific content restrictions during 2021 or the first six months of 2022, the most recent data available.

By contrast, Meta restricted access to more than 17,000 social media posts in Mexico during the same period, most pertaining to unlawful advertising of risky cosmetic or dietary products, and more than 19,000 posts and comments in South Korea were reported as violating national election rules.

This injunction against the federal government's extensive censorship operation is one of the most consequential decisions in First Amendment jurisprudence in 200 years, according to Aaron Siri, managing partner at Siri & Glimstad.

"You have a federal judge in a case brought by two states in America--two attorney generals [in] Missouri, Louisiana--saying that the federal government has violated the First Amendment of the United States Constitution by colluding widely with social media companies to censor speech, all forms of speech in many different areas," Mr. Siri told The Epoch Times in a previous report.

The Associated Press contributed to this report.

Disclaimer: This article is not intended to provide medical advice, diagnosis or treatment. Views expressed here do not necessarily reflect those of GreenMedInfo or its staff.
