One afternoon, as I scrolled through my favorite time-suck app, TikTok, a video of a user with a disability dancing to a trending song surfaced on my For You Page. As I read some of the thousands of comments left under the user’s video, my blood boiled at the mindless harassment and belittling rhetoric targeted at the user’s disability. Social media only served to further the commenters’ hostility: hiding behind the cowardice of anonymous, private TikTok accounts, these internet bullies were shielded from any repercussions for their actions. After tirelessly fighting back against this ableism, the user eventually turned off their comment section.
Ableism, as defined by the Center for Disability Rights, “is a set of beliefs or practices that devalue and discriminate against people with physical, intellectual, or psychiatric disabilities and often rests on the assumption that disabled people need to be ‘fixed’ in one form or the other.” Digital ableism is when this discrimination, prejudice or devaluation of disabled people occurs online, whether on social media apps, online forums or websites. Digital ableism has worsened since the explosion of the digital age at the turn of the century, expanding far beyond the cyberbullying of social media users with disabilities. Inaccessible digital platforms have hindered many people’s ability to equitably participate in online resources, resulting in thousands of lawsuits against companies such as Netflix and Five Guys that failed to make their websites accessible to all of their customers.
Social media platforms have dramatically exacerbated the inequality faced by users with disabilities by inadvertently structuring their algorithms to mimic the implicit biases held by developers. For example, one Twitter algorithm that cropped pictures to focus on faces was discovered to be exercising implicit bias, selectively cropping out the faces of people with disabilities, older people, non-white people and users who wore head coverings. TikTok also admitted to using an algorithm that suppressed content it deemed “vulnerable to cyberbullying” — i.e. “Disabled people or people with some facial problems such as birthmark, slight squint and etc.” — rather than punishing or banning the users who cyberbullied these creators.
This blatant favoritism toward non-disabled users in the digital world mirrors the full scope of digital ableism in our current society: because addressing the problem at its root by regulating ableism on their apps would lessen their user base and result in a loss of revenue, these social media platforms place profit over the safety and accessibility of their apps for users with disabilities. Plain and simple, users with disabilities on these social media platforms are expected to take the fall for the comfort of ableist and bigoted social media users.
The lack of importance placed on making digital platforms welcoming for users with disabilities sends the message to these users that their participation in our society is not profitable or meaningful enough to create environments that are conscious of their needs. The ubiquity of ableism in the digital world is a repulsive reflection of these corporations’ warped values: while touting diversity, equity and inclusion initiatives to promote their self-image, these corporations simultaneously participate in the suppression of their diverse users by sacrificing these same qualities.
Perhaps more ridiculously, artificial intelligence (AI) systems that have been incorporated into the hiring processes of companies ranging from Amazon to Facebook have been shown to exhibit heavy biases against individuals with disabilities. As explained by the AI Now Institute at NYU, “In modeling the world through data, AI systems necessarily produce and reflect a normative vision of the world” to filter out outliers in hiring processes. However, AI’s understanding of what is “normal” almost always excludes people with disabilities.
AI is programmed by human beings who have been socialized to understand normalcy and abnormalcy in binary terms that are frequently treated as interchangeable with non-disabled and disabled, meaning that ableism is consequently copied from our social fabric into AI. This has resulted in heavy ableist bias in hiring processes that employ AI to select applicants. Not only does digital ableism make online platforms more exclusionary toward users with disabilities, but it also threatens their right to equal opportunity for employment.
Online platforms can also be designed to be more inclusive of users with disabilities. Organizing a platform’s headings and content to be clear and methodical can alleviate some of the problems faced by screen reader users, who may struggle to understand the content of a platform if it is carelessly organized. Additionally, alt text should be included on images so that screen reader users can conceptualize the messages that the platform is trying to convey. For pages that are longer and content-heavy, platforms should include anchor links (otherwise known as jump lists) so that users with mobility disabilities can skip to the main content or next section of a page instead of scrolling through it manually.
Ensuring that videos on a site do not automatically play will prevent users with disabilities from becoming overwhelmed by non-consensual sound. Moreover, all the videos included on a platform must include options for closed captioning or transcripts for users who are deaf or hard of hearing. While this list is not exhaustive, it is essential for online platforms to make their sites accessible to all of their users to avoid perpetuating digital ableism.
Online platforms that include ableist users must work to support and implement initiatives that promote disability advocacy rather than working against users with disabilities. These users should not have to ask for the bare minimum from multi-billion dollar companies that could undeniably be doing more to create an inclusive and equitable environment for all users. In other words, online platforms need to start practicing what they preach. Enforcing zero-tolerance banning policies for users who engage in ableism and including people with disabilities in the process of coding AI are just two of the baseline steps toward eradicating digital ableism.
So, what can we, as individuals, do to unlearn ableism in our own lives? Most importantly, our community must center and listen to people with disabilities in discussions about disability. When people with disabilities ask our greater society to stop casting non-disabled actors for roles of disabled characters, listen. When they ask for non-disabled people to stop using accessible bathroom stalls, listen. When they ask you not to speak on their behalf, listen. When they ask you to not speak to disabled people like children, listen. When they say that invasive questions about their medical history or personal life make them uncomfortable, listen. When they ask society to stop prying into “how disabled” a person is, or to stop invalidating non-visible disabilities, listen.
Only when non-disabled people stop centering themselves in discussions about disability and stop ignoring disability advocates’ requests can we put a dent in the extensiveness of ableism in our society. We must, as a college community and online, work to create inclusive, accessible and equitable spaces for the disabled community. We cannot in good conscience stress the importance of diversity, equity and inclusion until we include people with disabilities in these measures.
Sophia Lehrbaum is an opinion columnist and can be reached at email@example.com