By: Aliya Bhatia & Ariana Aboulafia
Policy Analysts
Center for Democracy and Technology
Online communities are critical for young people with disabilities.
Many people with disabilities rely on online spaces to get information on diagnoses and healthcare, navigate career options, and even find companionship, community, and love.
Barriers to physical accessibility, including risks exacerbated by the 21st century's second pandemic, can make gathering in person dangerous or even impossible for people with disabilities, which makes online spaces imperfect yet integral proxies.
Young people with disabilities are vulnerable to the same online harms as other children, including invasive data collection, targeted advertising and engagement tactics that prey on their insecurities, and manipulation and exploitation by bad actors.
Federal and state lawmakers have introduced many ideas to keep children safe online, such as providing young users with safety tools, enabling parental monitoring, and mandating that technology companies keep children away from certain material.
All of these proposals require online services to know which users are children and which are not, so they don’t erroneously prevent adult users from accessing critical sources. That’s easier said than done. The biggest tech companies already have rich, data-filled profiles on many of their users.
Upon creating an account on a major social media platform, for example, you may have added your date of birth, credit card details, and other identifying information, giving the platform a sense of your age, or at least of whether you're over 18 by virtue of having a credit card.
Companies may also use other signals, like your search queries, purchasing habits, the ages of your connections, and the number of times you watched the Espresso music video, to roughly estimate your age. While this information may lead to imperfect (or even outright incorrect) results, it does allow larger companies to at least guess their users' ages. Smaller companies, though, may not have access to this same treasure trove of information.
Left without a vast amount of data to analyze, small companies are turning to a rapidly growing cottage industry of age verification vendors. These vendors peddle another imperfect science, asserting that machine learning technologies can approximate a user's age.
These methods rely on biometric techniques such as facial scanning or voice pattern analysis to estimate age. Even when used in contexts outside of age verification, these tools have proven discriminatory against people with disabilities, who may have facial or vocal differences.
These methods, then, not only run counter to what Americans are asking for, namely less invasive data collection; they also just don't work. Most computer vision machine learning methods are fundamentally pattern recognition tools trained to draw conclusions from training data that typically over-represents able-bodied, white, male examples. Many individuals with disabilities fall outside of these patterns, and as a result, these algorithmic systems are inherently designed to treat them unfairly.
One way to reduce inaccuracies in age verification models may be to create more representative training datasets from which models can learn. Yet there is no proof that feeding a model images of people with different disabilities leads to better performance. Even worse, doing so could expand the surveillance dragnet by requiring more data collection on people with disabilities.
There are ways to make age-appropriate online spaces, and even age verification systems, more accessible to individuals with disabilities. For example, age verification companies can and should regularly test the effectiveness of their tools for users with and without disabilities, offer alternative verification methods for users, and allow users to report when a tool doesn’t work.
Researchers have shown that restricting young people's access to online communities may have significant negative consequences, including undermining the development of critical skills and traits such as independence, empathy, and literacy.
Relying on error-prone tools to gate access to online spaces at scale risks cutting off people with disabilities from online communities that are essential to their safety and wellbeing, degrading their ability to exercise discretion and autonomy. If knowledge is power, then denying access to knowledge is inherently disempowering, and disempowering and isolating youth with disabilities can have real impacts on their health outcomes, both physical and mental.
Source: Teen Vogue