Business

Notre Dame fire highlights tech companies' struggle to combat misinformation

Reuters

Men work on a statue on the facade at Notre Dame Cathedral in Paris on Tuesday after a massive fire devastated large parts of the gothic structure.

YouTube’s defenses against misinformation just backfired in a big way — and ended up contributing to baseless speculation online that the Notre Dame cathedral fire resulted from a terrorist attack.

As news organizations and others used the service to broadcast the collapse of the spire in Paris, YouTube’s algorithms mistakenly displayed details about the Sept. 11, 2001, terrorist attacks in New York in “information panels” below the videos.

While these fact-checking tools are designed to counter hoaxes, they likely fed false rumors online.

People falsely claimed Muslim terrorists caused the incident, even as Paris officials said the fire likely was due to ongoing renovations and there was no sign of a terrorist attack.

And while the boxes noted the “extensive death and destruction” from attacks that took down New York’s World Trade Center and killed thousands of people, there appeared to be few injured in the Paris fire.

Technology companies increasingly promise that investments in artificial intelligence and algorithms will be a crucial part of their arsenal for combating violent content, disinformation and other hoaxes.

But Monday’s high-profile mistake, coming on the heels of the companies’ failure last month to quickly stop the spread of violent videos of the terrorist attack in New Zealand, underscores how error-prone and unreliable this technology can still be.

And it’s raising questions about the efficacy of leaving such decisions to machines.


“At this point, nothing beats humans,” David Carroll, an associate professor of media design at the New School in New York and a critic of social media companies, was quoted as saying in a Washington Post article.

“Here’s a case where you’d be hard pressed to misclassify this particular example, while the best machines on the planet failed.”

Pedro Domingos, a machine-learning researcher and University of Washington professor, said he wasn’t surprised YouTube’s algorithms made such a mistake. Algorithms lack human comprehension of context and common sense, he said, leaving them poorly prepared for breaking news events.

“They have to depend on these algorithms, but they all have all sorts of failure modes. And they can’t fly under the radar anymore,” Domingos said.

“It’s not just Whac-a-Mole. It’s a losing game.”

YouTube’s mistake highlights the uphill challenge for companies under pressure from policymakers across the globe as they seek new ways to combat misinformation.

In recent months, YouTube has been rolling out so-called information panels to provide factual information about common hoaxes. Its algorithms likely detected visual similarities between Monday’s fire and the Sept. 11 attacks, which are a frequent target of conspiracy theories on the service.

BuzzFeed News reported that the widget appeared on at least three news organizations’ streams.

“We are deeply saddened by the ongoing fire at the Notre Dame cathedral,” YouTube said in a statement. “Last year, we launched information panels with links to third-party sources like Encyclopaedia Britannica and Wikipedia for subjects subject to misinformation.


“These panels are triggered algorithmically and our systems sometimes make the wrong call. We are disabling these panels for live streams related to the fire.”

YouTube wasn’t the only platform that struggled in its response to the cathedral fire. Twitter also was racing to address the rapid spread of hoaxes and conspiracy theories on its own platform.

Jane Lytvynenko of BuzzFeed News found numerous examples of fake claims about the fire on Twitter Monday afternoon, including an account impersonating CNN that attributed the fire to terrorists, and a fake Fox News account that posted fabricated comments purporting to be from Rep. Ilhan Omar, D-Minn.

Both those examples were removed, Lytvynenko reported.

In an interview, a Twitter spokesperson said the company is reviewing reports of disinformation related to the fires.

“The team is reviewing reports and, if they are in violation, suspending them per the Twitter rules,” the spokesperson said. “Our focus continues to be detecting and removing coordinated attempts to manipulate the conversation at speed and scale.”
