The Premise for Fact-Checking AI
Bo Brusco | June 6, 2021 (11-minute read)
Photo by Unsplash user @possessedphotography.
The modern world is experiencing a crisis of virtual epistemology. As Americans become more politically polarized, fact-checkers are losing their credibility. Is artificial intelligence the key to bringing Republicans and Democrats back into a shared reality?
The Premise
The internet might be the single most significant invention in human history, but the consequences of opening this virtual Pandora's box may be more grave than anyone anticipated. Its ever-present ability to connect us also carries with it countless voices shouting contradictory facts and stories. While this oversaturation of information may not be life-threatening, it does challenge Americans' relationship with truth.
The most salient example of how disputed truth has become is found in our current political climate. In March of this year, Pew Research published a study titled "A partisan chasm in views of Trump's legacy," which notes just how divided the right and left have become. The study asked Americans to reflect on Trump's presidency and assess the former president's performance. Likely to no one's surprise, the study found that "Americans are equally likely to say [Trump] made progress solving problems as to say he made problems worse."
Though the study focused on individual opinions, the polarization it captured is indicative of a more troublesome issue. Digging deeper, one can reason that such a divide in perspective is the result of how contested basic facts have become. As grave as it sounds, the political right and left are no longer living in a shared reality.
Such divisions are likely made worse by the sheer amount of disinformation laced throughout the internet, a problem that has only grown for internet users. Three years ago, another Pew Research study found that many Americans had already begun to notice the difficulty of locating reliable information online, with 50% agreeing that "made-up news is a critical problem that needs to be fixed."
The Business of Fake News
One of the reasons fake news and misinformation have been such enduring problems is simple supply and demand. As it turns out, publishing false information is a lucrative business. Paul Horner, arguably the most prolific publisher of fake news, earned about $10,000 a month in 2016 from his phony stories. Such profits indicate a demand.
A simple explanation of said demand is a combination of the country’s extreme political polarization and the psychological phenomenon known as confirmation bias.
In his Psychology Today article, "What Is Confirmation Bias?," Dr. Shahram Heshmat explains, "Once we have formed a view, we embrace information that confirms that view while ignoring, or rejecting, information that casts doubt on it. Confirmation bias suggests that we don't perceive circumstances objectively. We pick out those bits of data that make us feel good because they confirm our prejudices."
It follows, then, that when people align themselves with a political tribe, they instinctively adhere to their group's truth, or its biased narrative. With an ever-growing reservoir of information to draw on, such biased narratives and false truths become easier to justify, defend, and perpetuate.
Though lacking the decorum of Dr. Heshmat, Horner offered his own take on why so many of his fake pieces went viral in a 2016 interview with the Washington Post. "Honestly, people are definitely dumber. They just keep passing stuff around. Nobody fact-checks anything anymore — I mean, that's how Trump got elected. He just said whatever he wanted, and people believed everything, and when the things he said turned out not to be true, people didn't care because they'd already accepted it. It's real scary," he said. "I've never seen anything like it."
As both Dr. Heshmat and Horner explain, information that reinforces Americans' political preconceptions is likely to be believed, regardless of whether it is accurate.
Fact-Checkers Losing Trust
As Horner told the Washington Post, many people aren't "fact-checking" anymore. Worse, fact-checking itself is becoming a less surefire way of confirming whether something is true. This is especially concerning considering that social media platforms like Facebook and Twitter have been ramping up their efforts to filter or flag false information.
Unfortunately, finding the truth has become more complicated as the accuracy of fact-checkers has recently come into question. While the pandemic was unfolding, many were interested in the origins of the virus. Conspiracy theorists at the time claimed that the virus leaked from the Wuhan Institute of Virology. In response, many news outlets claimed to have "debunked" the theory.
In April 2020, for example, Vox published an article titled "Why these scientists still doubt the coronavirus leaked from a Chinese lab." That same month, NPR ran a headline reading, "Virus Researchers Cast Doubt On Theory Of Coronavirus Lab Accident." Even factcheck.org released an article titled "Report Resurrects Baseless Claim that Coronavirus Was Bioengineered" in September of that same year.
This is noteworthy because news outlets of varying credibility may have all gotten it wrong. On May 14, 2021, 18 scientists published a letter in the journal Science making the case for a formal investigation of the lab leak theory.
According to the Wall Street Journal, three of those 18 scientists had previously condemned the lab leak as a conspiracy theory — a stance of which they were so sure that they joined 24 other scientists in making an official statement on the matter.
In that initial statement, they wrote, “The rapid, open, and transparent sharing of data on this outbreak is now being threatened by rumours and misinformation around its origins. We stand together to strongly condemn conspiracy theories suggesting that COVID-19 does not have a natural origin.”
That same statement is likely what led so many news outlets and fact-checking organizations to label the lab leak a conspiracy. But, as the New York Times recently reported, President Biden has ordered an official investigation into the virus's origins. With an inquiry underway, new facts may emerge that could vindicate those whom the media labeled conspiracy theorists.
If the investigation concludes that the virus did, in fact, leak from the Wuhan Institute of Virology, then the official, tried-and-true fact-checkers got it wrong. In that case, the relationship between the general public and professional fact-checkers will be compromised. How then will anyone be able to know whether something on the internet is unmistakably true?
To be fair, the fact-checkers in this incident based their claims on the statement of 27 reputable scientists, so fact-checking the theory was more nuanced than average Americans might realize. Regardless, confidence in official fact-checkers is likely to be damaged as a result.
After a mishap like this, it becomes even easier for individuals to favor their own biases over any fact-checking entity.
Additionally, there is already a lack of trust in even quintessentially reliable sources of news. According to a Pew Research study from June 2019, Americans are split down the middle when it comes to trusting fact-checkers: 50% believe that "fact-checking efforts by news outlets and other organizations tend to deal fairly with all sides," while 48% think they favor one side over the other.
Artificial Intelligence: The Future of Fact-Checking
If government entities and news corporations can no longer be trusted to report factually, how will Americans rebuild their relationship with truth? Artificial intelligence may be the answer.
Nathan Lambert is currently researching robot learning at UC Berkeley and will earn his Ph.D. this fall. In an article he wrote for towardsdatascience.com titled "AI & Arbitration of Truth," Lambert explains how artificial intelligence could fact-check social media posts objectively and at mass scale.
He begins by explaining a subfield of artificial intelligence known as natural language processing, or NLP. This field is concerned with "manipulating and extracting information from text" and is the same technology used in translation software and search engines. Lambert believes this branch of AI is the most likely candidate for an online fact-checker.
While Lambert acknowledges the scale of the task, noting that Twitter alone sees about 6,000 new posts every second, he says the technology is up to the challenge. "Fact-checking online will mean every text-based post will pass through a model either a) prior to being posted publicly or b) soon after being publicly posted," he says. "The scale of computation here is unprecedented."
The prospect of an objective, non-human, third-party fact-checker is hopeful but not without its flaws. Lambert notes that an important question to ask about fact-checking AI is "who moderates the database?" Simply put, AI will not be checking everything down to the first principles of science, so, as Lambert says, "That means there will be some data labeling process." Lambert expects that process to be a "monstrous challenge."
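To make the idea concrete, the pipeline Lambert describes — every post passing through a model backed by a moderated database of labeled claims — can be sketched in miniature. The code below is purely illustrative: the claim database, threshold, and word-overlap similarity are hypothetical stand-ins (real systems would use trained language models, not token matching), but it shows where the human-labeled database sits in the loop.

```python
# Toy sketch of a fact-checking pass, in the spirit of Lambert's
# description. Everything here (database, metric, threshold) is a
# simplified stand-in, not a real NLP system.

def word_overlap(a: str, b: str) -> float:
    """Crude similarity between two texts via shared-word ratio (Jaccard)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

# A hypothetical moderated database of claims with known labels.
# Building and governing this is the "data labeling process"
# Lambert calls a monstrous challenge.
LABELED_CLAIMS = [
    ("the earth is flat", "false"),
    ("water boils at 100 degrees celsius at sea level", "true"),
]

def check_post(post: str, threshold: float = 0.5) -> str:
    """Return the label of the closest known claim, or 'unverified'."""
    best_label, best_score = "unverified", 0.0
    for claim, label in LABELED_CLAIMS:
        score = word_overlap(post, claim)
        if score >= threshold and score > best_score:
            best_label, best_score = label, score
    return best_label
```

Even in this toy version, the system's verdicts are only as trustworthy as the humans who labeled the database — which is precisely the moderation problem Lambert raises.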
Time Will Tell
If, hypothetically, everything goes according to plan and AI becomes the country's solution to the internet's epistemology crisis, will that be enough for the American public? Will it become the tried-and-true arbiter of truth in the cyber world that brings the left and right back into a single shared reality, or will it be yet another focal point of controversy, viewed with enough suspicion to be discredited?