Can Artificial Intelligence Combat the Spread of Misinformation?
Bo Brusco | August 20, 2021
The widespread circulation of misinformation on social media threatens America’s relationship with truth, fueling intense political polarization. And with COVID-19 and its variants sweeping across the country, misleading information about the virus and vaccines puts lives at risk. One solution may lie in using AI technology in a fact-checking capacity.
Photo by Markus Winkler on Unsplash
Online misinformation is a serious issue. Gaining traction during the Trump campaign in 2016 and then exploding during the global pandemic in 2020, misinformation and fake news have grown at an alarming rate. Their effects are harmful, both to public systems and to personal health. While social media companies have attempted to mitigate the spread of false information, their approach is far from perfect. And with fact-checkers beginning to lose credibility, a new approach is needed. Artificial intelligence might provide the fact-checking capabilities the country needs to bring political tribes into a shared reality once again.
A Virtual Pandora’s Box
The internet is arguably the most significant invention of the last century, but the consequences of opening this virtual Pandora’s box may be more grave than anyone had anticipated. Its ever-present ability to connect people also carries with it countless voices shouting contradictory facts and stories.
This over-saturation of information, not all of it accurate, complicates Americans’ ability to agree on the facts. And now, with a global pandemic as a backdrop, misleading information has also become a public health hazard.
The rise of misinformation became apparent during Trump’s 2016 presidential campaign and has only gotten worse since the onset of COVID-19. The American people went from being divided about politics to being divided about scientific facts.
Leading up to his election, Trump was often a central figure in fake news articles. Fake stories became popular during his campaign because creators of phony news realized how profitable lies could be. In the same year as Trump’s campaign, a man named Paul Horner made $10,000 a month by posting fabricated stories online. Horner’s stories attracted so much attention and often went viral because they were sensational and preyed on readers’ biases.
When asked by The Washington Post why his misleading stories sold so well, Horner said, “Honestly, people are definitely dumber. They just keep passing stuff around. Nobody fact-checks anything anymore — I mean, that’s how Trump got elected. He just said whatever he wanted, and people believed everything, and when the things he said turned out not to be true, people didn’t care because they’d already accepted it. It’s real scary,” he said. “I’ve never seen anything like it.”
The Deconstruction of a Shared Reality
The rise of misinformation surrounding Trump’s presidency severely polarized the political left and right, a fact that is evident in the aftermath of the Trump era. In March of this year, Pew Research published a study titled “A partisan chasm in views of Trump’s legacy,” noting just how sharply the two parties now disagree.
The study asked Americans to reflect on Trump’s presidency and assess his performance. Likely to no one’s surprise, the study found that “Americans are equally likely to say [Trump] made progress solving problems as to say he made problems worse.”
The polarization of public opinion is indicative of a more troublesome issue. Digging deeper, one can reason that such a divide in perspective is the result of how contradictory each side’s facts have become. As grave as it sounds, the truth is that the political right and left are no longer living in a shared reality.
This problem has been made worse by the pandemic. Pew Research gave an in-depth analysis of political divisions in its report titled “A Year of U.S. Public Opinion on the Coronavirus Pandemic,” which shows how Americans became divided not only on politics but also on science.
As the report explains, Republicans and Democrats disagreed when it came to shutdowns, mask mandates, and social distancing. But even something as simple as reporting infection-rate data became a polarizing issue: during the summer of 2020, “most Republicans accepted Trump’s claim that the growing number of cases was primarily a result of increased testing,” while “eight-in-ten Democrats pointed to more infections, not just more testing.”
Misinformation in the Public Sphere
As misinformation erupts across the internet, it has made its way into the country’s public systems. Teachers are caught in the middle of information wars in their classrooms. Healthcare professionals see the consequences of misinformation fill the beds of their ICUs. With the invasion of misinformation in full swing, public servants are voicing their concerns.
Public Education
“I haven’t been a public educator for very long,” explains Cade Buer, a high school teacher in Las Vegas, Nevada, “but some of the most pronounced misinformation I’ve seen has definitely been leading up to the pandemic and the quarantines that have gone on, especially at the beginning of March 2020.”
Recalling a specific incident in which some of his students spread misinformation in his classroom, Buer says, “I had a handful of students that came in with memes from Facebook.” Buer details how the message conveyed in the memes focused on “downplaying the severity of the virus” and how the novel coronavirus was “not able to be transmitted to the degree that the CDC was claiming.”
When asked if he felt hesitant to weigh in on the matter at that moment, Buer responded, saying, “Well yeah, especially given how young the children were here in this scenario. I think it’s important to develop critical thinking. I definitely think it’s important to point out incorrect assumptions.”
“But,” he continues, “when you have entire identities that are tied into being left or right or wearing a mask or not, it’s very hard for a public educator — a public servant — to weigh in on those discussions.”
Though a single incident might appear insignificant, Buer fears that misinformation will become more problematic for public educators in the future. “As an educator, I think that there is a problem going forward with misinformation simply because those that do support truth are often attacked for it,” he explains.
Mr. Cade Buer managing his virtual classroom in Las Vegas, Nevada, on March 12, 2021. (Photo: Bo Brusco)
Reiterating his feelings towards intervening when misinformation makes its way into his classroom, Buer says, “And it’s gotten to the point where myself, as an educator, I am hesitant to correct on some certain issues — even if it’s just a correction on where they got their information from because of all the pain and trouble I’ll have to go through defending that one correction that I’m making.”
While Buer feels uneasy addressing misinformation in the classroom as an educator, in a different public system, other professionals must address it head-on.
Public Health
Perhaps the most prominent display of misinformation’s work is found in COVID ICUs around the country. Nate Smithson has been a registered nurse in Salt Lake City, Utah, for the last four years. As the hospital he works in is a Level One Trauma Center, Smithson found himself on the front lines in the battle against the virus.
Regarding misinformation, Smithson says that he encounters it on a daily basis. “Especially in the ER,” he says, “we get people oftentimes coming in when they’re sick, and either they or a family member will start throwing out ‘well, why don’t you try this?’ or ‘what if we try hydroxychloroquine, again?’ You know, that drug that was debunked a year and a half ago.”
Nate Smithson (3rd from the left) and his colleagues at Intermountain Medical Center wearing their PAPR masks. Salt Lake City, Utah, 2020. Image provided by Nate Smithson.
Hydroxychloroquine Misinformation
Some of the more blatant occurrences of misinformation in the hospital, according to Smithson, involve the previously mentioned drug, hydroxychloroquine, and COVID-19 vaccines. “There’re still people out there holding onto it like it’s some sort of miracle cure,” Smithson says, referring to hydroxychloroquine. “That’s not the case. And they come in and they try to tell us how to do our jobs when we treat these [COVID-19] patients daily, and we know what’s best for them.”
Smithson’s hospital, Intermountain Medical Center, researched hydroxychloroquine’s effectiveness against COVID-19. “Last year, when hydroxychloroquine was taking over the airwaves, and everyone was talking about how it was a miracle drug and everything,” says Smithson, “my hospital — the unit I worked in — we were part of the first trials using it with COVID patients.”
According to Smithson, the drug caused severe heart problems. In some cases, the hearts of patients who had taken the drug would completely stop, and hospital personnel would have to rush in to resuscitate them. But, at the time, Smithson and his team were unaware that it was hydroxychloroquine and not the virus causing the heart issues.
“It was at the beginning of the outbreak of the pandemic, and we didn’t know what was going to help, so we were just trying everything,” Smithson explains. “We didn’t know much about it at first, but the first couple months of treating people [who had] COVID, we thought that all of them had terrible heart issues. We thought that COVID caused these heart problems, and once we stopped using the hydroxychloroquine, once we found that that wasn’t effective, all those things went away.”
Upon this realization, Smithson and his coworkers were frustrated that they had been making the problem worse by using the drug. “We were like, ‘are you kidding me? We were causing these problems in the first place?’” he says. “Dangerous drug. Not effective. Causes problems. Don’t use it.”
On June 15, 2020, the FDA officially “revoked the emergency use authorization (EUA) to use hydroxychloroquine and chloroquine to treat COVID-19.” In the same statement, the FDA noted the drug’s ineffectiveness in treating COVID, saying that “dosing for these medicines are unlikely to kill or inhibit the virus that causes COVID-19.”
Vaccine Misinformation
More recently, however, Smithson has been witnessing vaccine misinformation daily. “So many people think it’s implanted with a microchip,” he explains.
As Smithson’s experience can attest, believing inaccurate information can be life-threatening. “Everyone that I have seen that gets admitted to the hospital with COVID is not vaccinated,” he says.
According to Smithson, when he asks these patients why they haven’t been vaccinated, the responses allude to an “I’m not going to get that sick anyways” sentiment. To which Smithson responds with, “but you are that sick from it. You’re in the hospital.”
“Sometimes it’s somebody that’s my age,” he says. “You know, in their twenties getting admitted to the hospital. I know another person who’s been on a ventilator for the last three weeks, and he’s my exact same age. And it’s terrible, and I feel terrible for them and for their families, and the saddest part is that it didn’t have to happen in the first place.”
In addition to blatant misinformation, Smithson frequently encounters unvaccinated individuals who don’t understand the primary function of a vaccine. “People come in with COVID, and they say, ‘oh, give me the vaccine now.’ That’s not going to work. The vaccine is to prevent you from getting it. It doesn’t heal you once you have it,” he explains.
“And they’re just desperate as you’re about to intubate them. They’re like ‘no-no-no-no, it’s going to be fine! Just give me the vaccine! I’ll get the vaccine now! It’ll be ok!’ No. I’m sorry. That’s not how vaccines work.” Smithson believes these interactions further exemplify the consequences of misinformation.
For Smithson, it has taken a tragic toll on his personal life too. “I have family members even that I am very close to that haven’t gotten the vaccine because they think that it’s the government trying to control them.”
“Well, meanwhile, there’s people dying,” he says. Smithson relays how his aunt, who had adamantly refused to be vaccinated, succumbed to the virus a week ago. “She was very vocal about how COVID’s not real,” he says. “And then she’s 50 years old, she gets COVID and dies a couple weeks later.”
“And it’s terrible,” he continues. “It’s sad because it didn’t have to happen.”
Speaking to the efficacy of vaccines, Smithson explained how the evidence shows that even if a vaccinated individual becomes infected, the symptoms are significantly less severe. “And so likely,” he continues, “if she would have had the vaccine beforehand, she wouldn’t have died.”
“But because of all the misinformation out there, she didn’t get the vaccine,” he concludes. “It’s things like that that are terrible, and it’s things like that that are happening in the hospital all the time.”
Pulling from the CDC’s data, Forbes reported that 99% of COVID-19 deaths in May were unvaccinated individuals. In June, the Associated Press reported how “Nearly all COVID deaths in US are now among unvaccinated.”
The latest numbers from the CDC report that as of August 2021, more than 166 million people in America have been vaccinated. While the CDC explains that breakthrough cases among the vaccinated are to be expected, serious infections and deaths remain below 1% of the 166 million vaccinated individuals; and of the 623,244 Americans who have lost their lives to the virus, only 0.25% were vaccinated.
A Warning Against Vaccine Misinformation from Surgeon General Murthy
In an official press release last month from the U.S. Department of Health and Human Services, Surgeon General Dr. Vivek Murthy issued an advisory warning against vaccine misinformation. The press release reads, “health misinformation, including disinformation, have threatened the U.S. response to COVID-19 and continue to prevent Americans from getting vaccinated, prolonging the pandemic and putting lives at risk.” Most notably, the advisory called on social media companies “to take more responsibility to stop online spread of health misinformation.”
Fact-Checkers Losing Credibility
Misinformation has evidently put America in a dangerous position. Not only has it propagated intense political polarization, but it has also complicated public systems and, worse, led to deaths and increased COVID-19 infection rates. As it is most commonly disseminated on the internet via social media platforms, blogs, and videos, that is where the solution must be.
As Paul Horner said, “Nobody fact-checks anything anymore,” so the ideal solution would be to increase the scope and rate of online fact-checking. In an effort to realize such a solution, fact-checking entities have become increasingly popular. Though these fact-checking organizations were initially helpful, they are slowly losing their credibility.
The Wuhan Lab Controversy
A recent incident that calls the reliability of fact-checkers into question, once again, revolves around COVID-19. While the pandemic was unfolding, many were interested in the origins of the virus. Conspiracy theorists at the time were claiming that the virus leaked from the Wuhan Lab of Virology, as Forbes noted in its timeline of the theory. In response, many news outlets and fact-checkers were claiming to have “debunked” the hypothesis.
In April 2020, for example, Vox published an article titled “Why these scientists still doubt the coronavirus leaked from a Chinese Lab.” That same month, NPR ran a headline reading, “Virus Researchers Cast Doubt On Theory Of Coronavirus Lab Accident.” Even factcheck.org released an article titled, “Report Resurrects Baseless Claim that Coronavirus Was Bioengineered” in September of that same year.
This is noteworthy because different news outlets with their respective levels of credibility have potentially all gotten it wrong. On May 14, 2021, 18 scientists wrote a statement to the journal Science, making a case for why a formal investigation of the lab leak theory should be conducted.
According to the Wall Street Journal, three of those 18 scientists had previously condemned the lab leak as a conspiracy theory. They were so sure of that stance that they had joined 24 other scientists in making an official statement on the matter just a month earlier.
In that initial statement, they wrote, “The rapid, open, and transparent sharing of data on this outbreak is now being threatened by rumours and misinformation around its origins. We stand together to strongly condemn conspiracy theories suggesting that COVID-19 does not have a natural origin.”
That same statement is what likely led to news outlets and fact-checkers labeling the lab leak as a conspiracy. But, as the New York Times recently reported, President Biden has ordered an official investigation into the virus’s origins. With an inquiry underway, new facts may emerge that could vindicate those whom the media labeled conspiracy theorists.
The investigation is ongoing, but if it concludes that the virus did leak from the Wuhan Lab of Virology, then the official, tried-and-true fact-checkers got it wrong. In that case, the relationship between the general public and professional fact-checkers will be compromised. How then will internet users be able to confirm that what they’re digesting is factual information?
Because the fact-checkers in this incident based their claims on the statement of 27 reputable scientists, the process of fact-checking this theory was more nuanced than some might assume. Regardless, confidence in official fact-checkers is likely to be injured as a result.
If the internet’s last line of defense against fake news fails, it will become easier for individuals to favor their own biases over any fact-checking entity and further fuel political tribalism.
Additionally, there is already a lack of trust in the quintessentially reliable sources of news. According to a Pew Research study from June of 2019, Americans are split down the middle when it comes to trusting fact-checkers. The study found that 50% of people believe that “fact-checking efforts by news outlets and other organizations tend to deal fairly with all sides,” while 48% think they favor one side over the other.
Fact-Checking AI’s Potential
To where can society turn for accurate information if fact-checkers lose their touch? One solution may be to use artificial intelligence instead of people to fact-check online information. This seems like the best option, as AI is, ideally speaking, unbiased and capable of analyzing large data samples. But does that kind of technology exist?
While the question sounds simple, the answer is far from it. “I think it’s important to say that AI is not going to be able to determine what is true, and that’s the fundamental conflict,” says Nathan Lambert, a Ph.D. student studying Computer Science at U.C. Berkeley. While Lambert doesn’t think AI can fulfill the fact-checking role, he is confident the technology will still be necessary in helping humans monitor the sheer scale of the cyber world.
Nathan Lambert is a Ph.D. student studying Computer Science at U.C. Berkeley (Photo provided by Nathan Lambert).
“Given the scale of internet traffic and the growth,” he says, “it’s fundamentally intractable for humans to do all of it, so some computational tool will be in the loop there.”
Lambert points out that social media platforms like Facebook have become so large that they have already begun to rely on artificial intelligence. “On Facebook, for example, [most] of their moderation is already flagged or dealt with by AI,” he explains. “Even if you hire tons of people to do moderation, they cannot. There’s not enough human hours to look at every piece of content and decide is it true or is it hurtful or whatever.”
Facebook’s Utilization of AI
Facebook is an appropriate example of how large a task internet moderation really is. According to Statista, “During the first quarter of 2021, Facebook reported almost 1.88 billion daily active users.” Combine Facebook’s traffic with that of other social media tycoons like Twitter and Instagram, and it is clear that no team of humans would be able to meticulously monitor the flurry of posts, comments, streams, and tweets.
Facebook’s experimentation and evolution regarding misinformation have recently been shaped in large part by the pandemic. Anthony Miller, a former employee of Facebook who worked there from 2017 to 2019, recalls the company’s early concerns regarding the issue.
“We had a lot of company-wide discussion about the questions of to what degree we should even be censoring misinformation,” Miller explains. “At the time, Mark (Zuckerberg), and I think most of the employees, felt that it was a slippery slope, and we shouldn’t be getting too involved in deciding what is and isn't true.”
This sentiment has changed, however, due to the spread of false claims and conspiracy theories about COVID-19 on the platform. Noting that he has his own opinions about Facebook’s recent philosophical shift, Miller does believe that the “attitude has changed in recent years, both internally and in public opinion.”
“I think [COVID] was a contributing factor, but at this point, I’m getting into pure speculation and my opinion,” Miller says as he admittedly hasn’t been a Facebook employee for a few years now. “I left the company far before COVID, but I think we were starting to see a trend in that direction before COVID also.”
Offering his thoughts on Facebook’s recent attempts to mitigate the spread of misinformation, he says, “I believe Facebook has changed its stance on this in direct response to threats from the federal government that they are going to regulate social media. Facebook is much better off appeasing them through self-regulation.”
Speaking for itself, in May 2020, Facebook stated on its blog that “The COVID-19 pandemic is an incredibly complex and rapidly evolving global public health emergency. Facebook is committed to preventing the spread of false and misleading information on our platforms.”
To both Nathan Lambert’s and Anthony Miller’s points, Facebook states that it is using artificial intelligence as an advanced tool for human fact-checkers: while deploying new AI technology, it is still incorporating AI systems it has used since before the pandemic.
“AI is a crucial tool to address these challenges and prevent the spread of misinformation because it allows us to leverage and scale the work of the independent fact-checkers who review content on our services,” the article states. “Since the pandemic began, we’ve used our current AI systems and deployed new ones to take COVID-19-related material our fact-checking partners have flagged as misinformation and then detect copies when someone tries to share them.”
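The copy-detection step Facebook describes can be pictured as a similarity search against a library of already-flagged posts. The sketch below is a deliberately simplified, hypothetical version using word shingles and Jaccard similarity; Facebook’s production systems are far more sophisticated and not public, so every function name and threshold here is illustrative only.

```python
import re

# Hypothetical sketch of detecting copies of flagged misinformation via
# word shingles and Jaccard similarity. Not Facebook's actual system;
# all names and thresholds are illustrative.

def shingles(text, n=3):
    """Return the set of n-word shingles from lowercased, de-punctuated text."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def is_copy(candidate, flagged_posts, threshold=0.6):
    """True if the candidate closely matches any known flagged post."""
    cand = shingles(candidate)
    return any(jaccard(cand, shingles(post)) >= threshold
               for post in flagged_posts)

flagged = ["the vaccine contains a microchip that tracks you"]
print(is_copy("The VACCINE contains a microchip that tracks you!!", flagged))  # True
print(is_copy("Local libraries extend summer reading hours", flagged))         # False
```

Because the matching is lexical, a lightly reworded copy still scores high while unrelated text scores near zero, which is the essence of catching re-shares of flagged material at scale.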
The Three Roadblocks for Fact-Checking AI
Danny Godbout is a data scientist at Microsoft with ten years of experience in the field. While AI is already being used in various capacities on social media platforms, according to Godbout, there are three major roadblocks to developing autonomous fact-checking AI technology.
Common Sense
The first roadblock has to do with the “intelligence” side of AI. “I think the misnomer is intelligence,” says Godbout. “Intelligence implies understanding. If you talk about a person as being intelligent, they are able to come up with ideas. They have creativity. They understand how something functions, and there is a decision-making process based on that.”
“And AI to date does not have any sort of capacity for understanding,” he says. Godbout explains that AI doesn’t think the way humans do; rather, “all AI does is it fits the historical data that it’s seen and it tries to parrot that. So if you ask it a question,” he continues, “it’s going to give you the answer that is most like what it’s seen historically in the data.”
The first roadblock, then, is having to instill into AI the same kind of “common sense,” as Godbout calls it, that humans have. For humans, common sense came from centuries of evolution. Translating that into a computer would not be an easy feat.
While it is theoretically true that AI could be given access to all available data on the internet, that wealth of information would be of little use if AI is still incapable of thinking. Godbout noted that such a process would also be labor-intensive, lengthy, and extremely expensive.
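Godbout’s “parrot” description can be made concrete with a toy example. The snippet below is entirely illustrative, not any real fact-checking system: it labels a new claim by copying the label of the most word-similar claim it has already seen. It has no understanding, so a genuinely novel claim simply inherits whatever label the nearest historical example happens to carry.

```python
# Toy nearest-neighbor "fact-checker": it only parrots labels from
# historical data. All claims and labels are made up for illustration.

TRAINING = [
    ("hydroxychloroquine cures covid", "false"),
    ("vaccines reduce severe illness", "true"),
    ("the vaccine implants a microchip", "false"),
]

def word_overlap(a, b):
    """Count shared words between two lowercased claims."""
    return len(set(a.split()) & set(b.split()))

def parrot_label(claim):
    """Copy the label of the most similar historical claim -- no reasoning."""
    best = max(TRAINING, key=lambda ex: word_overlap(claim.lower(), ex[0]))
    return best[1]

print(parrot_label("does the vaccine implant a microchip"))  # false
# A claim unlike anything in the data still gets a label -- an arbitrary one:
print(parrot_label("drinking water is healthy"))
```

The second call is the point: the model cannot say “I don’t know,” which is exactly the gap between curve-fitting and the common sense Godbout describes.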
Morality
The second roadblock is that, while both humans and AI have the ability to analyze data, humans can compute that data through a second process wherein they calculate the moral implications of a matter. As Godbout says, “there’s a lot of interpretation” that goes into deciphering whether or not a statement is true “that these machines just won’t do.”
Godbout uses the statement “Hitler was a good guy” to illustrate this point. Humans generally possess the ability to reason morally, so they comprehend what “good” means, ethically speaking. After weighing this concept against the known historical facts about Hitler, the answer for most people seems objectively apparent. But how could a computer that doesn’t understand morality hope to grasp the concept of “good”?
Universal or Ground Truth
The final roadblock Godbout mentions involves universal truth, or “ground truth,” as he calls it. Godbout explains that “there is a personal bias” in every assertion and judgment humans make. This being the case, how can humans, themselves subject to bias, hope to create a technology that is wholly objective and has a firm grasp on universal truths? Furthermore, until the people of this country can agree on universal truths, how can anyone hope to teach them to AI?
While that issue is a big area of research for Facebook and Twitter, according to Godbout, it’s still “pretty much unsolved.”
AI Still Has Much to Offer
While the idealistic vision of fact-checking AI seems very far in the future, the current capabilities of the technology should not be underestimated. “I don’t think people fully realize how much they are being tracked,” says Godbout, “like how much of everything we do is instrumented.”
Godbout explains how he can see every click on someone’s computer from the last three years. Noting that Microsoft has a strict code of ethics when it comes to personal data and the protection of their users’ privacy, he says, “there’s a lot of companies that won’t [protect their customers’ data].”
What does data have to do with AI? As Godbout mentioned, AI is a misleading term. “There is sort of an internal joke amongst data scientists that when you’re trying to get venture capitalist money, it’s called AI,” he explains. “When you’re at work, it’s called machine learning. And then when you’re actually building a product, it’s called linear regression, which is the simplest, most basic formula.”
At the end of the day, AI in its current form is just a fancy term for algorithms and formulas. But these algorithms and formulas enable data scientists like Godbout to analyze and harness the data left behind by internet users. Godbout refers to these bits of information as “bread crumbs” and says that the general public often underestimates what data scientists can do with them thanks to “AI.”
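Godbout’s “it’s called linear regression” punchline is easy to demonstrate: the closed-form least-squares fit below is exactly the kind of “most basic formula” that, pointed at users’ behavioral bread crumbs, powers plenty of shipped “AI” features. The data here is invented purely for illustration.

```python
# Ordinary least-squares linear regression, the "simplest, most basic
# formula" from Godbout's joke, implemented from its closed form.

def fit_line(xs, ys):
    """Fit y = a*x + b by ordinary least squares; return (a, b)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# E.g. predicting shares from past click counts -- hypothetical
# "bread crumb" data, made up for this sketch:
clicks = [1, 2, 3, 4, 5]
shares = [2, 4, 6, 8, 10]
a, b = fit_line(clicks, shares)
print(a, b)  # slope 2.0, intercept 0.0
```

A few lines of arithmetic, yet once it is fed the tracking data Godbout describes, it becomes a product feature marketed as “AI.”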
Contemplating AI
Misinformation remains a serious issue that must be addressed. While AI may not be up to that challenge just yet, it is clear that the impacts of the technology will be significant. Hinting at AI’s enormous potential, Nathan Lambert suggests that the subject is deserving of serious contemplation. “I want to believe,” he says, “but it’s hard to have faith that people will give the thought that is necessary with these kinds of things.”