The process of opinion formation for most people does not happen in an act of stoic silent thought, but instead through conversation with others.
People bring their own ideas and perspectives, disagree, and, in the best cases, engage constructively with opposing views, refining or changing their own positions toward the most logically sound one.
Thus, changes to how these conversations happen can be of great societal importance. Each poorly formed, unquestioned opinion that an individual holds contributes to societal ignorance, or to the collective holding of contradictory opinions on important topics.
Consider, then, one of the most dangerous externalities of the digital revolution: the migration of opinion-forming conversations from water coolers to social media. Though the loss of physical proximity is the most obvious difference, there is also a reduced likelihood of constructive discussion. Important conversational data is lost in transmission, including facial expressions, body language, tone of voice, and sarcasm.
Yet the larger problem of online discourse is permanence. Standing around the water cooler, an offensive opinion shared by a colleague is heard only by those nearby. Those who hear it can ignore it, ask questions to understand more, snipe back sarcastically, or offer a logical refutation of the offending argument. But regardless of how anyone responds, the conversation eventually ends and begins to fade into the memory of its participants.
Online, this is not possible.
A disagreeable opinion shared online can exist forever. The same techniques in distributed and high-availability systems that power social networks often ensure that comments or posts deleted by users are merely hidden from public view. Rarely, if ever, does a service guarantee that deleted posts are purged from its databases, backups, and servers.
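This "hide rather than destroy" behavior is commonly called soft deletion. A minimal sketch of the pattern (a hypothetical illustration, not any particular platform's code) shows how a user's "delete" can leave the data fully intact:

```python
# Hypothetical sketch of soft deletion: the user's "delete" merely flips
# a visibility flag, so the row survives in the database (and in every
# replica and backup downstream of it).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE posts (id INTEGER PRIMARY KEY, body TEXT, deleted INTEGER DEFAULT 0)"
)
conn.execute("INSERT INTO posts (body) VALUES ('a regrettable opinion')")

# The user clicks "delete": the post is hidden, not destroyed.
conn.execute("UPDATE posts SET deleted = 1 WHERE id = 1")

# The public view excludes it...
visible = conn.execute("SELECT body FROM posts WHERE deleted = 0").fetchall()
print(visible)  # []

# ...but the data is still there for anyone with database access.
retained = conn.execute("SELECT body FROM posts WHERE id = 1").fetchone()
print(retained)  # ('a regrettable opinion',)
```

Services favor this design because it makes "undo" trivial and keeps replicas consistent, but it is exactly why a deleted post is never guaranteed to be gone.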
Thus, any poorly formed comment, any roughly articulated argument, any opinion a person no longer holds years later, can (and likely does) live forever in opaque computer systems. This unintended consequence of distributed systems might seem at first glance utterly benign; at least, it was seen as such prior to 2013.
"If Facebook keeps some of my deleted comments in some backup server in Utah, what do I care?"
But the Edward Snowden leaks of 2013 changed everything. Thousands of leaked documents revealed the far-reaching collection systems used by United States intelligence agencies to vacuum up user activity from companies including Google, Facebook, Twitter, and Microsoft. The collection and storage of user data by the biggest technology companies, and their complicity with government surveillance, meant that "the day" had finally arrived when our emails, instant messages, and even social media could be used against us.
Like a scalable, digital East German Stasi, or a real-life Thinkpol "thought police" from George Orwell's 1984, these dictatorial powers lie like a sleeping dragon deep in government data centers. Supposedly used only for hunting terrorists, these systems collect data on almost everyone, though thankfully they have led to few prosecutions... yet.
But, do we all have faith that an elected government will only use these surveillance tools to prosecute terrorism?
What will stop an administration from using these powers to prosecute crimes of thought?
Yet even without government help, the permanence of the web means that, as societal definitions of problematic opinions shift, a disgruntled colleague can search long enough to find reason to have you fired, a university can find reason to block your admission, and anyone can find digital evidence to justify a narrative for your public shaming.
The future may well be devoid of the grace of constructive conversation, amicable disagreement, and personal integrity of thought. Self-censorship out of fear of mob retribution is becoming the reality in Western societies that have long prided themselves on personal liberty. We enter an age of sanitization, where only collectively determined, corporately approved, or government-dictated opinions can be shared publicly.
If privacy means working out thoughts for yourself without fear, what can anyone do to protect it? Encryption and expiring messages in the Signal messenger or ProtonMail bring both privacy and ephemerality to online communications. Deliberate efforts to hold conversations offline, away from crowded, surveilled public spaces and from always-listening Siri-enabled smartphones and Alexa speakers, are another option to consider.
Beyond our messages, email, and conversations, social media remains the gold mine of personal details about who you are: your friends, your interests, your fears, and your political leanings.
Even if one cannot trust that a deleted account will be purged from Facebook's or the government's servers, deleting social media accounts may still be the best protection. Setting aside government surveillance and mob rule, a social engineer can use your pet's name or elementary school, gleaned from Facebook, to get through your bank's security questions and into your account.
Yet with each like, share, and up-vote that we, the social-media-addicted masses, feel compelled to make, we unceasingly feed the algorithmic surveillance beasts that eat away at our privacy, free thought, open conversation, and individuality.
And very few seem willing to seek a better way.
Written for CS492 Social Implications of Computing course at University of Waterloo.