How Social Media Can Make or Break Our Democracy

Tatiana Grower
7 min read · Jan 31, 2021

Imagine a conservative browsing the comments on a news article about the presidential debate. They encounter a hippie environmentalist, a socialist, and a critical race theorist. For a few hours, maybe even a day, each comment is remembered individually. But as memory fades, their brain picks out the key points of each argument before filing them away in the archives. File name: people_i_disagree_with. What did they have in common? Idealistic. Perhaps overly emotional in their arguments. They are blinded by compassion and can’t see how the real world works.

A liberal reading through the same comments would have their interest piqued by different arguments. One expressed concern about the national debt. Another stated that we should nuke Iraq. Yet another lamented that unemployment is so high because illegal immigrants are undercutting American laborers. While the liberal wonders how these people can be so cold, the same process is occurring internally: they are “other.” Not like you. These people don’t care about the plights of their neighbors.

In both of the above scenarios, a person with preconceived notions is creating a concept of “the other side.” The human mind likes to categorize things into neat little boxes. It helps us process the litany of information that we are constantly bombarded with. Our cognitive biases determine which pieces can be classified as threatening. It’s easier to imagine your enemy as a solid shape: a conglomeration of all the people whose opinions rubbed you the wrong way. Nuance is erased from the equation because a black-and-white world is easier to interpret. In addition to oversimplifying complex issues, most people are politically reactive. When they hear an opinion that triggers anger or fear, they quickly take the opposite stance, without pausing to consider whether the issue is even deserving of attention.

Few people form opinions based on “positive” views of what they want the world to look like. Perhaps the most efficient way to effect change is to form a concept of an ideal world and focus on the biggest gaps between that world and reality. This method, however, requires a firm grasp on abstract thinking. It’s much easier to identify a problem such as homelessness or unemployment, find the most directly related scapegoat, and proceed to lash out. It is human nature to react more strongly to negative emotions, and thus to focus more on problems than solutions, so our political opinions are often formed based on the stimuli around us. You see immigrants working in low-skill jobs and decide that they are responsible for high unemployment. You see low-income families getting pushed out of housing, and decide greedy landlords are to blame. Abstract thinking is not present in this kind of direct-cause reaction. But how do we push ourselves to think deeper? To consider the tides of stronger forces at play that may have little to do with immigration or unfair allocation of resources? Media consumption plays a large part.

Unfortunately, media of both the traditional broadcast news and open platform variety tend to promote reactionary thinking. The loudest, most radical voices often rise to the top because they succeed at capturing the audience’s attention. In truth, a well-reasoned analysis of current issues simply isn’t as captivating as a frenzied warning that the sky is falling. In this way, there is a strong motivation for content providers to stir the pot. This problem is inherent to both curated and open platform media because we are instinctually wired to be aware of threats. Naturally, our attention gravitates to the biggest challenges to our safety and happiness. Moderate voices are constantly drowned out by the ones screaming “the end is nigh.”

This has always been the case, but the introduction of social media has lowered the barrier to “oversharing.” Suddenly we are removed from immediate social consequences, which makes projecting our feelings much more comfortable. Our inner thoughts can now be broadcast freely with little fear of repercussions. Who cares if it offends someone behind a computer screen a thousand miles away? Tweets and Facebook posts don’t usually contain well thought out opinions, polished for public consumption.

This rapid and unchecked spread of information is a double-edged sword. Access to opinions and information from users across the world can be extremely beneficial for developing a well-rounded understanding of current events, especially in the case of candid video evidence, which is one of the least biased forms of information available. Videos can spread like wildfire across social media platforms and spark debate over social issues, as evidenced by the recent protests against police brutality following the death of George Floyd. Since its founding, America has relied on curated news media to determine what content is of interest to the public. Social media and websites like Youtube provide an important alternative to the centralized press, which is often not ideologically representative of the rest of the country. While journalists have formal training in ethics and proper representation of a story, their own personal biases are ever-present in determining a) what content is important enough to broadcast and b) how to represent said content with sound bites and visual media. The media does play an essential role in choosing which voices to lift up based on what they deem to be of interest to the public, but this power is extremely vulnerable to neglect and abuse. In contrast, the internet provides unfettered access to an abundance of (mostly useless) information, but the average person doesn’t have time to comb through every morsel to glean the relevant pieces.

Moderation of social media platforms is a contentious debate at the moment. Facebook, Twitter, and Youtube have strict policies against harassment and certain language that arguably make the platforms more attractive to users. However, as has been pointed out by many conservative pundits, these policies are not enforced equally against liberals and conservatives. Many people, regardless of political affiliation, recognize that these companies’ power to silence some voices over others is dangerous in itself. Combine this with the cooperation between Apple and Google to remove the alternative platform Parler from their respective app stores, and the public is rightfully concerned about the tech industry’s ability to unilaterally silence opposition.

But perhaps even more important than the removal of disagreeable content is the decision of which content to promote. The algorithms used by social media platforms to determine which content appears to the user, or which videos to recommend, arguably have a much larger impact on public discourse than content moderation. Naturally, these companies have an interest in protecting their intellectual property from competitors, which means users and creators are left in the dark as to how these algorithms work. The secrecy around these algorithms is extremely problematic, because it is entirely possible that no human working in tech fully understands the dangers of a blind algorithm psychologically targeting users to keep them engaged. Even the developers of this technology may not have considered the full extent to which it could affect political sentiments, for better or worse. While open-sourcing the algorithms that run social media may not be a realistic possibility, I believe tech companies have a social responsibility to inform the public of how they work and what factors they consider, as closely as possible without revealing trade secrets. Social media has simply become too powerful a force in swaying public opinion to continue allowing it to operate in the dark.
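To make the concern concrete, here is a deliberately simplified sketch of how an engagement-driven feed ranker might score posts. This is my own illustration, not any platform’s actual code; every signal and weight below is a hypothetical assumption. The point is only that a system optimizing for predicted clicks, watch time, and shares will naturally surface whatever provokes the strongest reaction.

```python
# Hypothetical illustration of an engagement-optimized feed ranker.
# None of these signals or weights come from a real platform; they are
# assumptions chosen to show why provocative content tends to win.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_click_rate: float     # model's guess at clicks per impression
    predicted_watch_seconds: float  # expected time spent on the post
    predicted_share_rate: float     # expected reshares per impression

def engagement_score(post: Post) -> float:
    """Score a post purely on expected engagement.

    Nothing here checks accuracy, nuance, or civic value; outrage is
    rewarded indirectly because it drives clicks, time, and shares.
    """
    return (
        0.5 * post.predicted_click_rate
        + 0.3 * (post.predicted_watch_seconds / 60)
        + 0.2 * post.predicted_share_rate
    )

posts = [
    Post("Measured analysis of the new budget", 0.02, 45, 0.001),
    Post("THEY are coming for everything you love", 0.09, 80, 0.020),
]

# The feed simply sorts by score, so the inflammatory post surfaces first.
for p in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(p):.3f}  {p.text}")
```

Real recommendation systems are vastly more complex, but the incentive structure is the same: whatever keeps eyes on the screen rises to the top, and no one outside the company can inspect how that trade-off is made.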

There is certainly value in a completely free and unmoderated exchange of ideas as well. When everyone has equal access to an audience, all ideas are brought to light and weighed on merit. However, there are also drawbacks to such platforms. 4chan is an excellent case study of how anonymity mixed with a laissez-faire moderation style plays out. The only rule policing content on 4chan is that posts must be relevant to the topic board, i.e. posts about television and film must be submitted to /tv/ and posts about fitness to /fit/. Boards are moderated to remove duplicate and irrelevant posts, but the posts themselves are not policed for content. Comments are practically unregulated. While this makes 4chan one of the most open platforms available, it is overrun with content that the average person would find very upsetting. Hateful sentiments, conspiracy theories, and violent/pornographic images are extremely common. While some well-reasoned civil discourse is present, the volume of trolling comments and memes usually outweighs any exchange of serious ideas.

The microcosm of 4chan raises the question: how much moderation on social media platforms is ideal? Recent events have left certain political factions without a platform on which to share their good-faith opinions without fear of being banned, but completely unmoderated platforms have their own drawbacks. Can social media be organized in a way that defeats the negative aspects of human nature? I think one of the easiest changes Facebook and Twitter can make is to write their terms of service as explicitly as possible, and enforce them as objectively as possible. This would include listing out specific words and phrases that are considered violations. There should be little room for human error in enforcing the rules. The rules should be public so that users know exactly what they can and cannot say on the platform, and if they consider the rules unfair, they can criticize the rules publicly or cite the grievance in their decision to move to a different platform.
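As a rough illustration of what that kind of objective enforcement could look like, the sketch below checks posts against an explicit, published list of prohibited phrases and reports exactly which rule was violated. It is entirely hypothetical; no platform publishes its rules in this form today, and the placeholder phrases stand in for whatever a platform would actually list.

```python
# Hypothetical sketch of deterministic terms-of-service enforcement.
# The banned-phrase list is a placeholder; the point is that the rules
# are explicit, public, and applied identically to every user.

BANNED_PHRASES = {
    "example slur": "Rule 1: hateful language",
    "example threat of violence": "Rule 2: threats of violence",
}

def check_post(text: str) -> list[str]:
    """Return the list of rules a post violates, or an empty list."""
    lowered = text.lower()
    return [rule for phrase, rule in BANNED_PHRASES.items() if phrase in lowered]

violations = check_post("This post contains an example slur.")
if violations:
    print("Post rejected:", ", ".join(violations))  # user sees the exact rule
else:
    print("Post allowed")
```

Because the check is mechanical, two moderators (or two appeals) cannot reach different conclusions about the same post, and a rejected user knows precisely which published rule to dispute.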

Better yet, Facebook, Twitter, and Youtube should publish detailed descriptions of their content promotion algorithms. Giving users the opportunity to understand how these platforms prioritize content would, at the very least, allay some users’ suspicions of suppression. If social media platforms are not consciously manipulating public opinion with their algorithms, they should have nothing to hide. If their algorithms are influencing opinion unintentionally, concerned citizens may be able to identify ways to address those unintended outcomes. And if social media companies are making a concerted effort to influence public opinion, users deserve to know how.

Overall, I think social media presents an amazing opportunity for the free marketplace of ideas. If organized and administered responsibly, it can solve many of the problems that hinder traditional media. Social media companies will only reform, however, if sufficient public pressure is applied.
