
Information dissemination on the internet has undoubtedly evolved from a technological challenge to a contentious, political, and justice-oriented issue. In simpler times, web designers and developers worked out the best ways to code and display text and images online. We are beyond that now, as conglomerates develop technology to automatically filter and selectively display information to users for the greater purpose of revenue generation. The internet under capitalism has demonstrated a bleak reality: humans have been whittled down from users to consumers to data points, and content has become nothing more than a means to an end. That end can be anything from ideological influence to self-preservation, usually on behalf of a for-profit institution.
Perhaps you cannot pin this transformation of information on one bad actor. It would be inaccurate to ascribe this seemingly dystopian reality of the World Wide Web to a single entity, even if humans have the tendency to do so. Rather, I would argue that the faceless, robotic social media algorithms, designed by all kinds of ICT companies, were the revolutionary technical innovation that soon became the arbiter of truth and knowledge, controlling both the creation and the spread of information.
Because these observations concerned me so much, I wanted to follow my intuition and test this argument. My resulting research confirmed my suspicion: social media users have collectively identified algorithms as the culprit behind social media’s ugliest tendencies.
Before algorithms, social media was considered undisturbed and unfiltered. Your news feed was chronological, and you received information in the order it was posted. Now, the algorithm chooses which posts to show you first based on vague data about what it thinks you like and would engage with (Golino, 2021). Before the Algorithm got a chokehold on your free speech, there was arguably a more democratic process of selecting and sharing acceptable information in the form of user feedback. Now, the Algorithm knows you well enough to do it in your stead.
Algorithms are a nebulous technology if you are not familiar with the science involved, and they tend to be proprietary. The social media companies that use bots (which are essentially sets of algorithms) to moderate posts will not reveal how they truly operate. It is no surprise that humans cannot help but speculate about how algorithms have transformed the way they engage with social media, especially since algorithms have visibly impeded the way users spread and receive information about substantive issues and topics.
The darkness surrounding algorithms has made them adversarial. Users cannot understand how or why a post got removed. They inevitably notice and attempt to decipher patterns, contextualize those patterns within capitalism, and conclude that social media companies have an agenda to undermine any content, users, and even movements that do not serve their interests. This has resulted in clever and even righteous pursuits to work around, and even against, the adversary and promote a freer internet in spite of itself.
One tactic of this counteroffensive is the use of “Algospeak” by users on various social media networks, most saliently on TikTok. Perhaps you can liken Algospeak to kids speaking in code to prevent their parents from understanding their conversations. Or, conversely, it is similar to parents spelling out words to shield profanity from young ears. In the context of social media, however, it is a linguistic strategy to circumvent bots seeking out content that typically violates a site’s community guidelines.
Technology journalists have covered the rise of Algospeak in prominent news publications like the New York Times and the Washington Post. They describe it as an American phenomenon (Levine, 2022) in which content creators use codewords in place of explicit terms to talk about serious issues or topics that may be considered “red flags” by the social media platforms. Instead of saying “kill” or “suicide,” you say “unalive.” This extends beyond speech: if you want to discuss sexuality in writing, you could type “pr0n,” “seggs,” or use an eggplant emoji (Delkic, 2022). These are just a small sample of dozens of terms determined mostly by informal consensus among users. Despite the range of topics they address, these terms all serve the same purpose: avoiding removal or suppression of content by automatic moderators like bots.
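To make the mechanics concrete, here is a minimal sketch of how such substitutions work, built from the handful of terms cited above. It is illustrative only; in practice the vocabulary is ad hoc and negotiated among users, not applied by a script, and the function name is my own invention.

```python
# Illustrative algospeak substitution table, drawn from terms cited in
# this essay. Real algospeak is an informal, evolving consensus, not code.
ALGOSPEAK = {
    "kill": "unalive",
    "suicide": "unalive",
    "sex": "seggs",
    "porn": "pr0n",
    "lesbian": "le$bean",
    "queer": "qu33r",
    "gay": "g@y",
}

def encode(caption: str) -> str:
    """Swap each flagged word for its algospeak stand-in, word by word."""
    return " ".join(ALGOSPEAK.get(word, word) for word in caption.lower().split())

print(encode("a video about suicide awareness"))
# -> "a video about unalive awareness"
```

The point of the substitution is that a human reader decodes the message instantly, while an exact-match filter sees nothing on its list.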
It is only recently that researchers have delved into this phenomenon. A 2023 article in the journal Social Media + Society took on the task of exploring and revealing Algospeak by interviewing TikTok creators who have used it to circumvent what they perceived to be “unjust content restriction” (Klug et al., 2023, p. 1).
In the article, researchers Ella Steen, Kathryn Yurechko, and Daniel Klug share their findings from 19 interviews with TikTokers, conducted to “learn about [their] motivations of and experiences with using algospeak” (p. 4). They also gleaned 70 examples of algospeak from preliminary research online and contextualized algospeak within the greater sociolinguistic phenomenon of “computer-mediated language” (p. 1), or, more simply, internet speak.
Before algospeak there was textspeak, which simplified language by removing “vowels, capitalization, [and] spacing” while still sending a clear message to the receiver (Klug et al., 2023, p. 2). Then there was leetspeak (l33t speak), which the researchers described as a more “playful encryption” used to make fun of new gamers. In the 2010s came LOLspeak, which also played on language through intentional grammar and spelling errors to convey a cheerful tone.
By tracing the history of internet speak, the researchers make it clear that algospeak is a new manifestation with significant precedent. Users have always manipulated language online and through communication technologies, if only for practical or playful purposes. But algospeak is unique in that, according to the content creators who use it, it takes on the serious issues of censorship and injustice (Klug et al., 2023, p. 2).
In the interviews, the content creators shared a collective frustration with TikTok’s opaque community guidelines as their motivation to adopt or invent algospeak (Klug et al., 2023, p. 7). They perceive the algorithms responsible for moderating content as non-contextual, random, inaccurate, and biased (p. 7). They believed that topics like violence, controversial beliefs, and discussions about the LGBTQ+ community were the biggest examples of unacceptable content on TikTok. Despite not knowing the true methods, the interviewees “concluded that TikTok looked for keywords” in captions, hashtags, and even text on the TikTok itself to screen content. Even without proof, they exhibited a sense of paranoia, believing that TikTok kept an official “list of unapproved words.” The resulting censorship came in the form of removed audio, shadowbanning (removing a TikTok’s visibility in tags), or outright deletion. More often than not, the interviewees felt the punishments were unwarranted (p. 8). Nevertheless, they used algospeak as a workaround.
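Reduced to code, the interviewees’ folk theory of moderation looks something like the sketch below. To be clear, this is what creators believe happens, assuming a plain keyword list and exact matching; TikTok has never confirmed any such mechanism, and every name and word here is hypothetical.

```python
# A sketch of moderation as the interviewees imagine it: screen the
# caption, hashtags, and on-screen text against a "list of unapproved
# words." Speculative; the real pipeline is unknown and surely more complex.
UNAPPROVED_WORDS = {"kill", "suicide", "lesbian", "gay", "queer"}  # hypothetical

def flag_video(caption: str, hashtags: list[str], overlay_text: str) -> bool:
    """Return True if any screened field contains an 'unapproved' keyword."""
    tokens = set(" ".join([caption, *hashtags, overlay_text]).lower().split())
    return bool(tokens & UNAPPROVED_WORDS)

print(flag_video("my lesbian wedding", ["#pride"], ""))  # True  -> suppressed
print(flag_video("my le$bean wedding", ["#pride"], ""))  # False -> slips past
```

Under this folk model, algospeak works precisely because exact matching misses the respelled word, which is also why creators describe the filter as non-contextual.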
However, the attitude toward wrongful moderation extended beyond shallow grievances. The crux of the issue for the interviewees was that the algorithms were pushing an agenda of social injustice against marginalized groups (p. 7). This is where algospeak takes on a unique purpose compared to other forms of internet speech: the motivations behind its use are more profound and carry societal implications. Interviewees who created content related to the LGBTQ+ community felt their videos were being unreasonably restricted. One user’s video was taken down, presumably because of their use of the term “lesbian.” Another felt she was being “targeted” for being a Black lesbian. To her, “it felt like… somebody was directly targeting her…” and she came to believe that “TikTok is secretly quite homophobic, or not even secretly” (p. 10).
The issue of flawed humanity negatively affecting algorithms is well documented. A 2016 ProPublica investigation found that the algorithmic formula behind Correctional Offender Management Profiling for Alternative Sanctions, or COMPAS, falsely identified Black defendants as likely repeat criminals at twice the rate of white defendants, exposing them to more severe prison sentences. ProPublica pinned this disparity on the algorithm’s reflection of bias and inequalities within a criminal justice system that incarcerates people of color at a disproportionate rate (Angwin & Larson, 2016). There is a resulting consensus that algorithms act as extensions of human prejudice, discrimination, and ignorance, and promote injustice, whether intentionally or not. This problematic behavior predictably elicits feelings of paranoia and injustice among content creators. In their eyes, the algorithm is no longer just censoring content that may be unsavory to advertisers; the humans behind the bots are deliberately homophobic, racist, transphobic, and generally hateful toward marginalized groups.
In the context of algospeak, interviewees used terms like “le$bean,” “qu33r,” and “g@y” as codewords for lesbian, queer, and gay, respectively, even though talking about the LGBTQ+ community should not carry any liability. And while it is debatable whether TikTok should allow actual depictions of violence, interviewees expressed concern for their free speech in other contexts when a comedian’s video was removed for joking about suicide (Klug et al., 2023, p. 8). Free speech remains a moving target: the bots are evolving to catch on to algospeak, leading creators to refine their tactics further (p. 12). It is a never-ending cycle between humans and bots, and more generally between users and social media companies, that exemplifies the greater effect of algorithms on society.
MacKinnon (2012) captures the larger online landscape succinctly in Consent of the Networked: The World Wide Struggle for Internet Freedom:
“In the Internet age, the greatest long-term threat… [is] more like Aldous Huxley’s Brave New World: a world in which our desire for security, entertainment, and material comfort is manipulated to the point that we all voluntarily and eagerly submit to subjugation.”
For social media companies, the use of their services is transactional. Users give input in the form of long watch times, likes, and comments, and in return content creators get “rewarded” for popular posts that elicit “strong reactions” (McCluskey, 2022). More engagement leads to greater advertising returns. This is how the algorithm works, and it is not a balanced affair.
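As a toy model only, the transaction the essay describes can be reduced to a weighted ranking of engagement signals. The fields and weights below are invented for illustration and are not any platform’s actual formula.

```python
# Toy engagement-ranking model: not any platform's real formula. The
# invented weights show the shape of the incentive, where watch time,
# likes, and comments all push a post up the feed.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    watch_seconds: float  # total time users spent watching
    likes: int
    comments: int

def engagement_score(post: Post) -> float:
    """Weighted sum of engagement signals; higher scores surface first."""
    return 0.5 * post.watch_seconds + 2.0 * post.likes + 5.0 * post.comments

feed = [
    Post("calm_creator", watch_seconds=300, likes=40, comments=2),
    Post("rage_bait", watch_seconds=900, likes=25, comments=120),
]
# Unlike a chronological feed, the ranked feed rewards strong reactions:
feed.sort(key=engagement_score, reverse=True)
print([p.author for p in feed])  # ['rage_bait', 'calm_creator']
```

Even in this crude form, the imbalance is visible: whatever maximizes the score gets shown, regardless of what it does to the person watching.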
Users make sacrifices to the algorithms by leaving themselves vulnerable to information bubbles and “amplified,” “divisive content” (McCluskey, 2022). They unintentionally allow these bots and companies to exploit personal information, to perilous results. Researchers have connected this harmful reward cycle to social media addiction and to teen harms like “body image issues, mental health crises, and bullying” (McCluskey). According to a Pew Research Center (2018) study, 74% of Americans believe that social media creates an inaccurate picture of society at large, and 71% report that the content they see makes them angry. The materiality of algorithms produces these controversial outcomes.
This is not a hidden agenda. MacKinnon (2012) states that “companies argue that collecting a wide array of personal data is necessary to serve people better…” and that our dependence on these ICT technologies lets companies keep us in the dark about “how power works in the digital realm.” She argues for a digital commons filled with “tech-savvy” “netizens” who can use their skills to hold powerful companies accountable by dictating which policies are acceptable and which are not (p. 26). Content creators are beginning to do this by creating algospeak codewords and raising awareness of how algorithms suppress free speech in secretive and invasive ways, within the context of social justice.
But ultimately, the battle against algorithms, whether they take the form of artificial intelligence, machine learning, or some other set of instructions, is more an accepted reality of being online than a cause to outright boycott social media. It is nigh impossible to make a Facebook profile, share photos on Instagram, or generally interact with friends, family, and brands incognito. By making yourself available to these services, you are letting your ‘Self’ be visible to the algorithms. If you want to be seen and heard by the masses, you must also be willing to let social media companies exploit your content to improve their services, or whatever their stated goals are in their privacy policies. There is also the social pressure to be online in the first place. And given their opacity and apparent randomness, there is legitimate cause to be perturbed by algorithms.
But the greater societal issue is that uneasiness curdling into outright cynicism. While algospeak is one way to antagonize algorithms, the externalities it creates only make it harder to use, because bots learn from the ways users speak differently online. Social media users know this, but they use algospeak anyway. One interviewee stated that “the little community guidelines bot is gonna keep following you,” leading the researchers to conclude that “in order to keep evading unjust consequences creators must… adapt their algospeak in response to TikTok’s continuously improving algorithm” (p. 11). Bots track every move to personalize feeds and collect more engagement, and users continue to let them: 72% of Americans use social media (Pew, 2022). Despite these contradictions, and despite my frustration with the current state of the Internet, I understand that the power dynamics between users and social media companies are insurmountable, at least until greater collective interventions are executed. Given the smaller ways users are already intervening, I do have hope that will happen.