Over the past few weeks, we’ve seen how far-reaching the effects of fake news can be.
Buzzfeed found that fake election stories generated more activity on Facebook than ones that were legitimate. Then, a man entered D.C.’s Comet Ping Pong with an assault rifle to investigate one of those fake election stories, which claimed that Hillary Clinton ran a child sex ring from the pizza restaurant’s basement.
While it’s impossible to tell whether misinformation altered the outcome of the election, Pizzagate provided a harrowing example of what could happen if we don’t do something about the proliferation of fake news.
So, what can we do?
One line of thinking proposes that because most people get their news from social media, it should fall on companies like Facebook to come up with a solution.
Well before the issue of fake news rose to the forefront of the national conversation, John Herrman, in an article for The New York Times Magazine, looked into how Facebook has become the host of a huge chunk of the nation’s political discussion. He argued that hyperpartisan news pages with names like “The Angry Patriot” and “Fed-Up Americans” have taken over political discourse and become “the most disruptive, least understood force in the media.”
A Buzzfeed analysis then found that Facebook’s most popular political news pages, especially those with conservative orientations, frequently post false or misleading information.
One key factor in the success of fake news stories on Facebook is the site’s algorithm, which prioritizes engagement. The more people react to a Facebook post by liking, commenting, or sharing it, the more likely it is to pop up in others’ news feeds alongside legitimate news stories and go viral. And fake news stories often feature attention-grabbing headlines and outlandish claims to elicit a response.
Emily Bell, founding director of Columbia’s Tow Center for Digital Journalism, has been particularly outspoken in her criticism of Facebook. In the Columbia Journalism Review she criticized the company for presenting fake and real news equally. The website’s very design, she writes, “creates an environment where perception can matter more than truth.”
Bell also criticized Facebook’s original response to the proliferation of fake news, and wrote that the situation won’t improve until Facebook acknowledges the scope of the problem and comes up with a solution. In the days following the election, Facebook founder Mark Zuckerberg said it was a “pretty crazy idea” that fake news could have swayed voters.
Tim O’Reilly, a Silicon Valley investor and publisher, agrees that Facebook should bear most of the responsibility. He wrote that ever since Facebook transformed from a timeline of updates from friends into a curated news feed, the company implicated itself in the business of controlling what people see.
“It’s their job,” O’Reilly writes, “so they’d better make a priority of being good at it.”
Google has also come under fire for its search algorithm. Although the algorithm organizes search results based on several factors, the click-through rate, or the number of clicks a link receives, plays a key role.
Following the election, Google’s top news story for the search “final election results” linked to a fake news article with inaccurate numbers. The Guardian’s Carole Cadwalladr even found that Google’s search results strongly favor anti-Semitic and misogynistic answers to queries like “are Jews evil” and “are women evil.”
One approach, then, would be for Google and Facebook to alter their algorithms to block or flag fake news. As both companies have admitted, however, they don’t always get it right. The algorithm that replaced Facebook’s “trending topics” team, for example, repeatedly listed fake news stories on the social media site’s sidebar.
On Thursday, Facebook announced it would start giving users the option to flag content on their news feeds as fake news. If enough users flag a potential fake news item, it will trigger an investigation by third-party fact checkers, like Snopes, Politifact, and FactCheck.org, that adhere to the Poynter Institute’s International Fact-Checking Code of Principles. The nonprofit journalism school’s code focuses on nonpartisanship and transparency.
If fact checkers determine the content to be false, a message will appear when a user attempts to share the post, explaining that it has been “disputed” and providing a link to an explanation of why. Facebook’s word choice here is slightly troubling, though: “disputed” gives the impression that fact checkers have simply quibbled over a fake news article rather than shown it to be verifiably false.
It is unclear whether Facebook’s algorithm will be altered to make sure fake news doesn’t appear at the top of users’ news feeds in the first place. The announcement stated that “stories that have been disputed may (my emphasis) also appear lower in News Feed.”
While Facebook’s apparent commitment to solving this problem signals a step in the right direction, it’s too early to tell whether its strategy will effectively stamp out fake news.
Nicholas Lemann, former Dean of Columbia Journalism School and contributor for The New Yorker, argues that blaming social media doesn’t get to the root of the problem.
“Real news is not endangered by fake news, but by the declining economics of real news,” he said in an interview with Colorado Times Recorder. “The real news ecosystem needs a lot of shoring up right now.”
Lemann said he doesn’t believe we should censor fake news on social media. He did, however, argue that there should be substantial discussion about the concentration of power around Facebook and Google as information providers, although he’s not sure what that would entail. He added that these companies play a key role in “knocking the economic basis out from under journalism.”
In the week following the election, Google announced that it would ban fake news sites from using its advertising network. Facebook quickly followed suit and announced it had updated its policy of not displaying ads on sites that show illegal or misleading information to explicitly include fake news sites.
This does not inspire confidence, however: a Media Matters for America analysis found that ads linked to Google AdSense were still running on numerous fake news websites a month after Google announced it would ban them from using its network.
Lemann also argued that “public policy should be involved if it’s a public problem,” and suggested that we should be looking at public funding for reputable media outlets as a potential solution. But the American public, he admitted, would likely be reluctant to do so because of widespread distrust in institutions, including government and the media.
One obvious shortcoming of all these potential solutions? They take time.
If Google and Facebook decided to change their algorithms to block misinformation from going viral, there’s no telling how long it would take to get it right, or if they could do it at all. And other strategies, like dismantling the economics of fake news by taking away ad revenue, won’t have an immediate effect. And, clearly, no one really knows what to do about our country’s shrinking newsrooms.
But as Pizzagate demonstrated, the real world consequences of fake news can be immediate.
So, what can be done right now?
If strengthening real news is to be our defense against fake news, then one way for individuals to act would be to subscribe or donate to media outlets they rely on.
One problem here is that most U.S. citizens don’t know which news sources are reliable.
A Stanford study released a couple of weeks after the election found that students of all ages are ill-equipped to evaluate online sources of information. And a Buzzfeed survey found that around 75% of Americans believe fake news headlines when they see them.
Clearly, we need to do a better job of educating our students and ourselves about how to find accurate information, and how to recognize misinformation. Before that can happen, though, the problem of fake news needs to be taken seriously, especially by national and state political leaders.
Unfortunately, many of those leaders do not take it seriously, and some even contribute to the problem themselves.
For example, Donald Trump’s pick for national security adviser, Lt. Gen. Michael T. Flynn, and his son, Michael G. Flynn, have pushed Clinton conspiracies on social media, including the bogus claims about a child sex ring at the center of Pizzagate. Trump fired the younger Flynn from his transition team after the situation escalated to gunfire, but his father faced no repercussions. And the Flynns aren’t the only people close to Trump who have spread fake news on social media.
Politicians at the state level are also aiding in the spread of fake news. State Rep. Tim Couch (R-Hyden) of Kentucky recently posted a bogus news story suggesting that Sasha and Malia Obama are adopted, and that first lady Michelle Obama is a transgender man. California Assemblywoman Melissa Melendez tweeted a fake story about Trump protesters beating a homeless veteran to death.
Over the past few months, Charles Buchanan has found quite a few examples of Republican officials spreading fake news and misinformation on social media, and he’s called them out for it. In the midst of the fake news crisis, the least we can do is hold our elected officials accountable, and we will continue to do so on The Colorado Times Recorder.