Representatives of nine tech companies have met with U.S. government officials to discuss preventing the spread of misinformation on social media platforms. Facebook, Google, Twitter, Reddit, Microsoft, Verizon Media, Pinterest, LinkedIn, and the Wikimedia Foundation are coordinating with government agencies in an attempt to block the dissemination of fake news and the misuse of user data during the upcoming election. The group aims to prevent a repeat of the 2016 presidential election, when their platforms allowed foreign actors to interfere. The companies have decided to step up their efforts to close fake accounts and label misleading posts, with Facebook and Twitter removing a multitude of fake accounts on a daily basis.
“For the past several years, we have worked closely to counter information operations across our platforms. In preparation for the upcoming election, we regularly meet to discuss trends with U.S. government agencies tasked with protecting the integrity of the election,” the group said in a statement recently released to the press. While efforts are clearly being made to provide Americans with a better online environment before and during elections, there is still cause for concern. After all, if there is one thing the Facebook–Cambridge Analytica data breach proved, it is that user data can be harvested without consent and then used for specific political purposes. Even so, one important question remains: just how much does social media influence political decision making?
What Are the True Sources of Misinformation?
One recent study found that social media has changed the way U.S. citizens receive political information. New media has indeed become an important channel for spreading false information. However, when it comes to analyzing the impact of misinformation on the 2012 and 2016 U.S. presidential elections, the results are surprising. The study shows that the actual influence on citizens’ opinions remained low, despite fake news being popular on these platforms at the time. In 2012, social media use produced a limited increase in misinformation about President Barack Obama, but this failed to change opinions about his opponent.
When it comes to the 2016 elections, the results of the study are even more remarkable. Social media seems to have had a lower negative impact on belief veracity among Facebook users than among non-users, even though popular opinion might suggest otherwise. However, it’s important to note that other studies showed different results, going as far as to say that “one fake news article was about as persuasive as one TV campaign ad in the 2016 elections.” No matter where the truth lies, companies like Facebook, Google, and Twitter have been facing growing demands from both consumers and civil society to reduce the number of fake news items posted on their platforms.
How Are Tech Companies Responding?
“We’re constantly working to find and stop coordinated campaigns that seek to manipulate public debate across our apps. In 2019, we took down over 50 networks worldwide for engaging in coordinated inauthentic behavior, including ahead of major democratic elections,” Facebook announced recently on its Newsroom. According to the statement, the platform removed 1,057 accounts, 669 pages, and 69 groups in July 2020. These personal accounts and pages were found to be associated with entities working on political campaigns or holding political office in the U.S., Brazil, Ukraine, the Democratic Republic of Congo, Yemen, and many other countries around the world.
According to Reuters, Facebook is also contemplating the idea of stopping political advertising altogether after the U.S. presidential election, in an attempt to prevent post-election misinformation from spreading on social media. The New York Times seems to confirm the news in an article claiming that Facebook CEO Mark Zuckerberg and other company executives are meeting daily to examine new ways of minimizing the risk of Facebook being used to challenge election results. The possibility of stopping all political advertising after November 3 is being considered, sources say.
Facebook is not alone in taking precautionary action against the dissemination of fake news on its platform. YouTube and Twitter have also started to analyze strategies for preventing the spread of post-election misinformation, according to political analysts who have worked with the two companies. Moreover, Twitter has banned political ads on its platform since 2019 and, more recently, has labeled some of President Trump’s tweets as “manipulated media.”
The Impact of Misinformation
With social media platforms becoming increasingly popular, tech companies around the world are also becoming more conscientious about the key role they can play in political decision making. Social media content can be created and spread among users with almost none of the filtering, fact-checking, or editorial rules followed by traditional media. Yet posts created by private individuals may have an impact on social media similar to that of posts created by newspapers or magazines. Tech companies may need to find new ways of protecting the public against fake news, especially during elections, without becoming arbiters of truth or damaging freedom of speech.