Feature image by Zackary Drucker for The Gender Spectrum Collection.
Today GLAAD released its 2022 Social Media Safety Index report, a comprehensive review of the LGBTQ user safety policies of the major social media networks. All major social media networks – Instagram, Facebook, Twitter, TikTok, and YouTube – scored below 50 out of a possible 100; they are literally failing the queer community.
Jenni Olson (she/her/TBD), Senior Director of Social Media Safety at GLAAD, led the report in partnership with Ranking Digital Rights and Goodwin Simon Strategic Research. An advisory committee of researchers and queer media leaders also contributed, including representatives from Stanford, Harvard Law School, Media Matters for America, Kairos, and Fight for the Future, as well as Nobel Prize laureate Maria Ressa, Kara Swisher, and author and activist ALOK.
“GLAAD has worked on these issues in various ways for over a decade,” Olson explained in an interview. “But it became clear in 2020 that there was a need for a focused program dedicated to LGBTQ social media safety and platform accountability.”
The experience of online hate and harassment is common in our community. A June 2022 report from the Anti-Defamation League found that 66% of LGBTQ+ respondents experienced harassment, and 54% experienced severe harassment — defined as physical threats, sustained harassment, stalking, sexual harassment, doxing, or swatting. GLAAD’s own research from May, also included in the Social Media Safety Index, found that 84% of LGBTQ adults agree there are insufficient social media protections to prevent discrimination. 40% of all LGBTQ adults, and 49% of transgender and nonbinary people, do not feel welcome and safe on social media. Of course, these experiences and harms compound where identities intersect, combining with racism, ableism, Islamophobia, xenophobia, classism, and other forms of hate.
Algorithmic Bias, Free Speech, Surveillance, and Censorship
Essential to any discussion of social media platforms is a basic understanding of algorithms. An algorithm is essentially a series of instructions that tells the platform what steps to take to solve a particular problem or make a decision. It can, for example, decide what content to promote in users’ news feeds and timelines and what content to hide. Usually, a combination of human-created rules and code and artificial intelligence (AI) learning helps the algorithm adapt over time. For example, human moderators could teach the algorithm to deprioritize phrases used by white supremacists.
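To make the idea concrete, here is a deliberately simplified sketch of how a feed-ranking algorithm can combine engagement signals with a human-defined deprioritization rule. Every name, weight, and phrase in it is hypothetical and illustrative; no platform's actual ranking code works this simply.

```python
# Toy feed-ranking sketch. All weights, names, and phrases are invented
# for illustration and do not reflect any real platform's system.

# A blocklist that human moderators maintain (hypothetical entries).
FLAGGED_PHRASES = {"example slur", "example dog whistle"}

def engagement_score(post):
    """Weight raw engagement signals; real platforms use learned models."""
    return (post["clicks"] + 2 * post["likes"]
            + 3 * post["shares"] + 4 * post["comments"])

def rank_feed(posts):
    """Order posts by engagement, demoting content moderators flagged."""
    def score(post):
        s = engagement_score(post)
        if any(p in post["text"].lower() for p in FLAGGED_PHRASES):
            s *= 0.05  # human-taught rule: push flagged content down
        return s
    return sorted(posts, key=score, reverse=True)

posts = [
    {"text": "Cute dog video",
     "clicks": 50, "likes": 30, "shares": 5, "comments": 2},
    {"text": "Example dog whistle rant",
     "clicks": 500, "likes": 200, "shares": 90, "comments": 150},
]
ranked = rank_feed(posts)
```

In this sketch, the hateful post generates far more raw engagement, so without the moderator-supplied rule it would rank first; the human-taught demotion is what pushes it down. That, in miniature, is the choice the report says platforms are failing to make.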
When prompted to take additional steps to limit hate and harassment, social media companies often cite the need for free speech to justify their failure to act. In a 2019 speech at Georgetown University, Mark Zuckerberg said, “I believe we have two responsibilities: to remove content when it could cause real danger as effectively as we can, and to fight to uphold as wide a definition of freedom of expression as possible — and not allow the definition of what is considered dangerous to expand beyond what is absolutely necessary.”
However, it’s hard to follow how this argument applies, because the platforms have proven to be biased, not the beacons of free speech Zuckerberg alleges. Multiple studies, experiments, and regular reports of the most popular posts on platforms show that across the board, the platforms prioritize right-wing content. The social media algorithms prioritize engagement — clicks, likes, views, saves, reactions, shares, et cetera — above all else. Engaging content keeps people on the platform longer, and that attention is what makes money from advertising. Controversial and upsetting content tends to generate more engagement, and the racist dog whistles, sexist name-calling, and transphobic tropes used by right-wing media are undoubtedly good at angering their target audience and provoking reactions and comments from opponents.
“When you come across some horrible, hateful YouTube video or Twitter account or TikTok post — rather than the platform’s AI algorithm design choosing to show you yet more toxic hate or disinformation, the company can make a responsible choice to surface higher quality content,” Olson explained.
To quote one of the GLAAD Social Media Safety Index advisory committee members, Kara Swisher: “enragement equals engagement.” The platforms, or more specifically their algorithms, are trained to push out hate-based content because it leads to more engagement, which leads to more profits.
Furthermore, like all forms of artificial intelligence, AI-based algorithms learn the same biases that humans teach them unless steps are actively taken to combat prejudice. For example, a Microsoft bot built to learn from Twitter quickly started tweeting hate and had to be shut down. Human and AI biases also result in the disproportionate removal and suppression of LGBTQ content and creators. Hence, LGBTQ content rarely stands a chance of reaching as far as hate-based content.
Since the algorithms recreate human biases and promote controversial content, failing to actively counter those biases and remove hateful content doesn’t just let hate spread; it actively promotes, amplifies, and profits from it. Though Facebook says this is no longer the case, a 2016 internal report at Facebook found that “64% of all extremist group joins are due to our recommendation tools.”
Social media companies’ ads also rely on gathering and selling user data, a practice often known as surveillance advertising. The companies let advertisers choose precise targeting criteria, from political beliefs and relationship status to education and income level, particular pages you like, things you’ve purchased, and everything in between.
“This is also an area where we will have to see regulatory oversight rein in the companies,” said Olson. “Thankfully, we are beginning to see some impact from the EU as they just passed the Digital Services Act (DSA), which won’t go into effect until next year, but the platforms already need to adapt. These companies need to be required to implement real data privacy and to refrain from targeting users with surveillance advertising.”
Online Hate Leads to Offline Violence and Hate-Based Policy
In the tech age, especially after January 6, the world has become more acquainted with disinformation – intentionally false or misleading information often used to target systemically marginalized people. The falsified New York Post article about Kamala Harris is an excellent example of deliberately false information used, in this case, to undermine the leadership of a powerful woman of color. The article falsely claimed that Vice President Kamala Harris’s children’s book was being given to children at the U.S. border as part of government-funded “welcome kits,” but the New York Post knew this was false at the time of publication. The story came to light when the reporter who wrote it publicly admitted to being forced to falsify information and resigned from her position.
LGBTQ people have long been the targets of disinformation. All of the lies that the right spreads claiming that queer people, and especially trans people, are a danger to children or try to convert people are disinformation. What’s clear to anyone targeted by this disinformation is that it isn’t simply a lie. It’s a form of hate.
The Social Media Safety Index reports, “What social media companies define as ‘hate’ is insufficient. An enormous amount of the anti-LGBTQ disinformation circulating amidst our current culture wars is indeed, quite simply, hate. Much of this material should rightly be evaluated against current existing policies for hate and harassment and dehumanizing speech, with corresponding enforcement. In the same way that these companies prohibit lies about COVID, the posting of inaccurate voting information, denial of the Holocaust — the intentional posting of patently false disinformation intended as a targeted attack on LGBTQ people (and other historically marginalized groups) must be mitigated.”
Further, the hate on social media never stays online. It fuels offline violence and policy changes.
“Today’s political and cultural landscapes demonstrate the real-life harmful effects of anti-LGBTQ rhetoric and misinformation online,” said GLAAD President and CEO Sarah Kate Ellis. “The hate and harassment, as well as misinformation and flat-out lies about LGBTQ people, that go viral on social media are creating real-world dangers, from legislation that harms our community to the recent threats of violence at Pride gatherings. Social media platforms are active participants in the rise of anti-LGBTQ cultural climate, and their only response can be to create safer products and policies urgently, and then enforce those policies.”
The Scores and Platform Takeaways
GLAAD’s Social Media Safety Index puts forth a set of overall recommendations for all social media platforms and more specific recommendations that focus on the areas where particular companies fail. The broader proposals seek to:
- Address algorithmic bias;
- Train moderators to understand LGBTQ needs;
- Increase transparency;
- Strengthen protections for LGBTQ people;
- Increase privacy and stop surveillance advertising.
Instagram Score: 48 out of 100, and Facebook Score: 46 out of 100
Meta owns both Facebook and Instagram, and in many cases, the policies are not differentiated between the two sites. Instagram scored slightly higher for allowing pronouns on some users’ profiles.
- Instagram and Facebook have no policy protecting users from targeted deadnaming and misgendering despite allowing some users on Instagram to add pronouns to their user profiles. This potentially dangerous combination could open users up to hate without adequate policies to support them.
- Meta prohibits targeted advertising based on sensitive topics, including topics related to sexual orientation. However, GLAAD found no similar disclosure indicating that the company prohibits detailed targeting based on users’ gender identity, raising concerns that advertisers could use this sensitive information to target ads.
In response, a Meta spokesperson said, “We prohibit violent or dehumanizing content directed against people who identify as LGBTQ+ and remove claims about someone’s gender identity upon their request. We also work closely with our partners in the civil rights community to identify additional measures we can implement through our products and policies.”
YouTube Score: 45 out of 100
- The company provides limited transparency on users’ options to control the company’s processing of information related to their sexual orientation and gender identity. The company discloses that it prohibits targeted advertising based on sensitive categories, including sexual orientation and gender identity. It’s good that they don’t allow ad targeting on these categories, but this doesn’t go far enough to protect user privacy, especially considering that Google owns YouTube.
- The company also prohibits advertising content that could be harmful or discriminatory to LGBTQ individuals. However, it stops short of banning content that promotes conversion therapy. This is especially dangerous on a platform that auto-plays related videos, helping lead users down a hate-filled rabbit hole.
YouTube did not respond to Autostraddle’s request for comment.
Twitter Score: 45 out of 100
- Twitter is one of only two platforms (the other is TikTok) that ban deadnaming and misgendering, providing recourse for users who are targeted in this way, including, recently, Elliot Page.
- Twitter prohibits targeted advertising based on sensitive categories, including sexual orientation and gender identity, but does not disclose options for users to control the company’s collection of that information. Recommendation of content based on users’ disclosed or inferred sexual orientation or gender identity is also not off by default. Being able to turn it off is helpful, but users still do not have complete control over their privacy related to their sexual orientation and gender identity.
- Twitter also prohibits advertising that could be harmful or discriminatory to LGBTQ individuals, including conversion therapy.
In response, a spokesperson for Twitter said “At Twitter, we know the public conversation only reaches its full potential when every community feels safe and comfortable participating. We welcome GLAAD’s feedback and the opportunity to better understand the experiences and needs of the LGBTQ+ communities on our service. GLAAD has been an active member of the Twitter Trust & Safety Council since 2016 and a key partner in this space. We’ve engaged with GLAAD to better understand their recommendations and are committed to open dialogue to better inform our work to support LGBTQ safety.”
Twitter’s spokesperson also shared additional details including that their “hateful conduct policy prohibits inciting behavior that targets individuals or groups of people belonging to protected categories. This policy prohibits targeting others with repeated slurs, tropes or other content that intends to dehumanize, degrade or reinforce negative or harmful stereotypes about a protected category. This includes targeted misgendering or deadnaming of transgender individuals.”
One especially interesting part of Twitter’s response includes noting that “We recognize that if people experience abuse on Twitter, it can jeopardize their ability to express themselves. Research has shown that some groups of people are disproportionately targeted with abuse online. This includes women, people of color, lesbian, gay, bisexual, transgender, queer, intersex, asexual individuals, marginalized and historically underrepresented communities. For those who identify with multiple underrepresented groups, abuse may be more common, more severe in nature and more harmful.” This is an important acknowledgment of the role of intersectionality in targeted hate and harassment, and of the fact that hate and harassment discourage people from participating and are themselves harmful to free speech and expression.
TikTok Score: 43 out of 100
- In 2022 TikTok added bans on pro-conversion therapy content, deadnaming, and misgendering. GLAAD found TikTok was the only company to provide comprehensive information on how it detects violations of this policy. Paired with the fact that TikTok allows users to specify pronouns, this is an important policy that means if you are misgendered or deadnamed, you should be able to get help and support from the platform.
- The company currently does not disclose options for users to control the company’s collection of information related to their sexual orientation and gender identity. The company also provides only limited options for users to control the recommended content they see based on their orientation. This is a crucial privacy concern, because it means advertisers may use your personal information to target you. However, a TikTok spokesperson did note that in the United States and Canada, housing, employment, and credit ads cannot target audiences using age, gender, zip code, marital or parental status, or any other protected characteristic.
- TikTok was the only company that did not disclose any information on steps it takes to diversify its workforce, an important step given that technology repeats human bias.
A TikTok spokesperson responded, “TikTok is committed to supporting and uplifting LGBTQ+ voices, and we work hard to create an inclusive environment for LGBTQ+ people to thrive.”
TikTok also provided additional information on their policies including that their “hateful behavior policies prohibit hate speech and hateful ideologies. We’ve long prohibited such content and behavior, and to make that abundantly clear, earlier this year we updated our policies to specify that deadnaming, misgendering, and content that supports or promotes conversion therapy programs is not allowed and will be removed from our platform when found. This was embraced by organizations including GLAAD, UltraViolet, and Lesbians Who Tech.”
TikTok also noted that they have prompts that encourage people to reconsider posting potentially unkind comments. Data was not readily available to confirm whether this is improving the experience of TikTok users, but it’s worth noting that Twitter has this feature too and that Twitter’s own reporting found that these prompts resulted in “34 percent of the people revis[ing] their initial reply or decid[ing] to not send their reply at all.” This demonstrates how platforms could harness their AI technology in creative ways to create a more welcoming environment.
One interesting finding in GLAAD’s report is that there can be a big difference between stated policies and reality. Facebook, for example, scored high for having an expressed commitment to protecting LGBTQ people but then scored poorly for having no protections against deadnaming. Often, if a form of hate speech isn’t explicitly named as prohibited, companies fail to act against it even when it should be covered under general hate speech policies. For example, the intentionally false New York Post article about Kamala Harris, mentioned earlier, was not considered hate speech even though it relied on sexist and racist tropes to spread false information.
Conversely, expanding policies makes it possible to hold people accountable for hate, as happened recently when Elliot Page was targeted with intentional misgendering and deadnaming on Twitter.
“In the 2021 report, we urged all platforms to follow the lead of Twitter and Pinterest to add a prohibition against targeted misgendering and deadnaming to their hateful conduct policies,” explained Olson. “Earlier this year, we were really glad to partner with the folks at UltraViolet [full disclosure, I work for UltraViolet and led this work in collaboration with Jenni at GLAAD], and through our combined, coordinated advocacy work, we were able to convince TikTok to make this change. They also added language against misogynist hate speech simultaneously, thanks to UltraViolet. GLAAD also urged them to add the prohibition on conversion therapy content. To illustrate the difference this can make for individual LGBTQ users — the existence of these policies gives users a very specific lever to report this kind of violative content.”
“In the first week of July, a well-known right-wing figure intentionally and maliciously posted a tweet misgendering and deadnaming Elliot Page. So while we are continually pressing the companies to improve policies and improve enforcement, it does make a huge difference that the policies do exist.”
“We should all be enraged at these companies and urge our legislators to work towards thoughtfully crafted regulatory solutions…These companies have an inherent conflict of interest in their decision-making process around content moderation. To some degree, they don’t mitigate hate because they are making money from it, and we as a society are paying the price.”
Olson also shared some advice and, in the report, resources for protecting yourself if you become the target of online hate, harassment, and disinformation. “The most valuable piece of guidance I’ve gotten from folks who do training on this topic… is that you should really take care of yourself and your mental health and seek support from friends or professionals if and when you experience online abuse.”
“Lastly, these are difficult times, and this is difficult work we’re all doing. This line from the report reflects something very important for us all to remember: ‘As we stand together to fight against hate, we stand also united in love — as a community: LGBTQ together.’”
Resources if you are the target of online hate, harassment, or disinformation:
Before today’s release, GLAAD held briefings with each platform named in the Social Media Safety Index to review the recommendations described in the report. GLAAD will maintain an ongoing dialogue about LGBTQ safety amongst tech industry leaders throughout 2022 and beyond.