
Misinformation research is under attack. So what’s the plan for 2024?

Between big tech’s staff cuts and ongoing political assaults, misinformation researchers and tech accountability groups have no choice but to change course.


In late September 2020, a series of photos started spreading on Twitter, showing what looked like at least 1,000 mail-in ballots sitting in dumpsters in Sonoma County, California. The photos, which were being interpreted online as clear evidence of election fraud, caught the attention of misinformation researchers at the University of Washington, who quickly put in a call to Sonoma County election officials.

The photos, they found out, actually showed empty mail-in ballots from 2018, which were being disposed of in accordance with state law. In fact, the state of California had yet to even distribute mail-in ballots for the 2020 election. Sonoma County corrected the record hours later, and the researchers, who were part of an academic coalition called the Election Integrity Partnership, shared the news with tech platforms, which then removed the original tweet.

For Kate Starbird, one of the leaders of the Partnership and co-founder of the University of Washington’s Center for an Informed Public, that incident is just one of many that illustrates how important it is for tech platforms, researchers, and government officials to keep lines of communication open during elections that are increasingly clouded by online misinformation. And yet, three years later, Starbird says, “It’s an open question going into 2024 if [election officials] are going to pick up the phone.”

Since the 2020 race, the landscape for election integrity work has changed dramatically. Over the last year, researchers doing this work — including most notably Starbird and a group of Stanford researchers who were also part of the Election Integrity Partnership — have been pummeled with subpoenas and public records requests, sued for allegedly conspiring with the government to censor people, and accused by House Republicans of being the masterminds behind the so-called “censorship industrial complex.”

At the same time, courts are questioning whether government agencies can pressure social media companies to remove misinformation without violating the First Amendment; the Supreme Court will soon take up the issue, but a lower court’s ruling in that case has already put a chill on collaboration between platforms and government officials. Tech companies, meanwhile, have undergone massive changes of their own, culling the ranks of trust and safety workers and walking back safeguards at a time when generative AI is making the mass dissemination of misleading text, audio, and imagery easier than ever.

All of it has pushed people who fight online misinformation for a living into uncharted territory. “There’s no playbook that seems to be up to the challenge of the new moment,” Starbird says. So, as the 2024 election cycle gets underway, she and others in the space are hard at work writing a new one.

One clear difference between the last presidential election’s playbook and the next one is that researchers will likely do far less rapid response reporting directly to tech companies, as it becomes increasingly politically untenable. Just this month, the House Select Subcommittee on the Weaponization of the Federal Government published hundreds of examples of reports that Stanford researchers made to tech platforms in 2020, citing them as evidence of the researchers’ alleged efforts to censor Americans at the behest of the U.S. government. “The federal government, disinformation ‘experts’ at universities, Big Tech, and others worked together through the Election Integrity Partnership to monitor & censor Americans’ speech,” Rep. Jim Jordan, who chairs the committee, wrote on X.

But the challenge isn’t just a political one; this kind of monitoring is also more technically difficult now than it was three years ago. A big reason for that is the fact that both Twitter and Reddit have hiked prices on their APIs, effectively cutting off access to tools that once offered a real-time view on breaking news. “Twitter has often been the bellwether for problems. You see this stuff starting to spread on Twitter, and then you can track how it flows on the other platforms,” says Rebekah Tromble, director of the Institute for Data, Democracy and Politics at George Washington University and co-founder of the Coalition for Independent Technology Research. “We’re just in a world right now where it’s incredibly difficult, if not impossible in many instances, to do the sort of monitoring the researchers used to do.”

Starbird, for one, says she believes direct reporting to tech companies was never the most important part of her job anyway. “I always thought the platforms should be doing their own work,” she says. She also says that the crux of her work — identifying viral rumors and tracing their spread — isn’t changing in 2024. What is changing is how she’ll share that information, which will likely happen on public feeds rather than in backchannels with tech companies and election officials. And yet, she notes, doing this kind of work publicly can also slow it down. “We have to be so careful and parse every word,” she says.

It’s not just collaboration with tech companies that will need to change in 2024. It’s also the way researchers share information with election officials. An injunction issued by a federal judge in Louisiana this summer temporarily blocked federal officials from working with social media companies on issues related to “protected speech.” The Supreme Court has since lifted the restrictions and will soon take up the underlying case, but the uncertainty surrounding it has inhibited communication between government officials and outside experts.

This, too, has prompted some election officials to come up with new approaches, says Jocelyn Benson, secretary of state of Michigan. “There have been deterrents to collaboration, but at the same time there’s been more incentive for collaboration than ever before,” she said on stage at the Aspen Institute’s Cyber Summit last week, in response to a question from Fast Company. One example of this, she noted, is a collaboration among six battleground states, as well as local government officials.

Academics and government officials aren’t the only ones shifting strategy. Jesse Lehrich, co-founder of the advocacy group Accountable Tech, says the last year has also required his organization to “adapt to the realities of the moment.” While in the past, Accountable Tech has been a particularly pugnacious critic of major tech platforms’ decisions regarding specific posts or people — going so far as to run television ads urging Meta to “keep Trump off Facebook” — now, Lehrich says, “We’re really trying to figure out ways to avoid the political and partisan landmines.”

To that end, Accountable Tech recently convened a coalition of nine other civil society groups to come up with what they call a “content-agnostic” election framework for tech companies in 2024. Rather than proposing specific rules on what kind of speech should or shouldn’t be allowed on the platforms — rules Accountable Tech has been quick to push in the past — the paper outlines a set of structural safeguards platforms should implement. That includes interventions like virality “circuit breakers,” which slow the spread of fast-moving posts, and limits on mass resharing to curb the proliferation of falsehoods.

Lehrich hopes that by proposing these technical solutions, rather than granular policies about what users can and can’t say, his coalition can help companies do more with less. “Removing reshare buttons can be implemented with code, as opposed to 30,000 content moderators,” he says.
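To make the idea concrete, here is a minimal sketch of what a velocity-based “circuit breaker” could look like in practice. The class, thresholds, and method names below are illustrative assumptions for this article, not anything specified in the coalition’s framework or used by any platform:

```python
# Hypothetical sketch of a virality "circuit breaker." The thresholds and
# names are assumptions for illustration only, not drawn from the
# Accountable Tech framework or any platform's actual systems.
import time
from collections import deque

SHARE_RATE_LIMIT = 500   # assumed: max reshares allowed per window
WINDOW_SECONDS = 600     # assumed: 10-minute sliding window


class CircuitBreaker:
    """Holds further amplification of a post whose reshare rate spikes."""

    def __init__(self):
        self.share_times = deque()  # timestamps of recent reshares
        self.tripped = False        # once tripped, reshares wait for review

    def record_share(self, now=None):
        now = time.time() if now is None else now
        self.share_times.append(now)
        # Drop timestamps that have aged out of the sliding window.
        while self.share_times and now - self.share_times[0] > WINDOW_SECONDS:
            self.share_times.popleft()
        if len(self.share_times) > SHARE_RATE_LIMIT:
            self.tripped = True

    def allow_reshare(self):
        # Content-agnostic by design: the decision looks only at velocity,
        # never at what the post says.
        return not self.tripped
```

The point of a sketch like this is that the intervention keys off how fast something is spreading rather than what it claims, which is what makes the approach “content-agnostic” in the sense Lehrich describes.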

Nathalie Maréchal, co-director of the privacy and data project at the Center for Democracy and Technology, worked with Lehrich on the paper and says she believes this “content-agnostic” approach is the right way forward for the research community. That’s not just because of the current political risks, but because, she says, the old whack-a-mole approach was always fraught, particularly in countries outside the U.S. that don’t have the same speech rights as Americans do. Groups like CDT and other free expression organizations have historically been uncomfortable with efforts to pressure platforms into censoring one form of speech or another.

“Our group has such deep, long-standing expertise in how well-intentioned efforts to control online expression go wrong and end up hurting more people,” she says. But CDT was willing to work with Accountable Tech on its most recent framework because, she says, she saw it as a way to “bridge that divide.”

Of course, both Lehrich and Maréchal realize it’s one thing to suggest “content-agnostic” changes in theory. It’s another thing entirely to actually apply them in the wild, where the nuance behind platforms’ policies is often lost. As Lehrich acknowledges, “It’s impossible for this stuff to be entirely content-agnostic.”

The question now is how responsive tech platforms will be to any of these new approaches. X is widely understood to be lost to the research community. Under Elon Musk’s leadership, the company has already gone through two trust and safety leaders, one of whom was forced to flee his home last year after Musk attacked him online. (Asked for comment, X’s press office sent an auto-response: “Busy now, please check back later.”)

But it’s not just X. Other platforms are also dialing back election integrity policies they stood by steadfastly just three years ago. Last week, The Wall Street Journal reported that Meta now allows political ads that claim the 2020 election was rigged. YouTube, meanwhile, announced in June that it would no longer prohibit videos containing false claims of “widespread fraud, errors, or glitches” related to U.S. presidential elections.

Google didn’t respond to a request for comment. In a statement, a Meta spokesperson said, “We remain focused on advancing our industry-leading integrity efforts and continue to invest in teams and technologies to protect our community – this includes our efforts to prepare for elections around the world.”

The spokesperson also pointed Fast Company to a series of new research tools and initiatives that the company unveiled Tuesday. As part of those announcements, Meta said it is giving researchers associated with “qualified institutions” access to its Content Library and API, which will enable approved researchers to search through public content on Facebook and Instagram and view engagement data on posts. “To understand the impact social media apps like Facebook and Instagram have on the world, it’s important to support rigorous, independent research,” Nick Clegg, Meta’s president of global affairs, wrote in a blog post announcing the new tools.

Supporting researchers heading into 2024, however, will require much more than just data access. It may well require things like legal defense funds and communications strategies to help people studying misinformation navigate an environment that is significantly more adversarial than anyone signed up for just a few years ago. “These things can take a heavy emotional toll,” Starbird says of the barrage of attacks she’s faced over the last year.

While she remains as committed as ever to the cause, she acknowledges these “smear campaigns” have had precisely the chilling effect on the field that they were intended to have. Not everyone has the appetite — or the legal cover — to assume the same amount of risk. “Some people are like, I can go study something else and write research papers. Why would I subject myself to this?” she says.

And yet, she says there are enough people — herself included — who continue to view election integrity as one of the biggest challenges of our time. “I don’t think we’re gonna walk away,” she says.
