An AI-Run World Needs to Better Reflect People of Color

Much of the data used to train machine learning algorithms doesn't take ethnicity or race into consideration. Errors from incomplete AI training data already harm people of color, and the more intertwined our lives become with AI, the more those biases could compound, some with life-or-death consequences. Before AI exacerbates inequities throughout society, we must include and protect minority data.
Health care, which increasingly uses algorithms to determine diagnoses and treatments, is also problematic. Nearly 40 percent of Americans identify as nonwhite, yet 80 to 90 percent of participants in most clinical trials are white. In the US, people of color are projected to outnumber non-Hispanic white citizens by 2045, and around 50 percent of the world's population growth between now and 2050 is expected to come from Africa.
Data ownership is essential: it is not just a human right today but the key to our future rights. We must design technology that doesn't inadvertently oppress those who have already been oppressed. Coding inclusivity into algorithms is a challenge when most development teams include so few women and people of color, but we must augment initial training datasets to reflect the actual demographic makeup of our society.
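One way to read "augment training datasets to reflect demographic makeup" is stratified resampling: duplicate samples from under-represented groups until the dataset's group shares match a target distribution. The sketch below is illustrative only; the function name, group labels, and target fractions are hypothetical, and in practice collecting new, representative data is far better than oversampling what little exists.

```python
import random
from collections import defaultdict

def rebalance(samples, group_of, targets, seed=0):
    """Resample `samples` (with replacement) so each group's share of the
    output roughly matches `targets` (dict: group -> desired fraction).

    A minimal sketch of demographic rebalancing, not a production method:
    real pipelines would cap duplication, audit labels, and prefer
    gathering new data over cloning existing samples.
    """
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for s in samples:
        by_group[group_of(s)].append(s)

    total = len(samples)
    out = []
    for group, frac in targets.items():
        pool = by_group.get(group)
        if not pool:
            continue  # cannot invent data for a group with no samples at all
        # draw with replacement: under-represented groups get duplicated
        out.extend(rng.choices(pool, k=round(frac * total)))
    rng.shuffle(out)
    return out

# Toy usage with hypothetical group labels: 90 "a" samples, 10 "b" samples,
# rebalanced to a 50/50 split of the same overall size.
data = [(i, "a") for i in range(90)] + [(i, "b") for i in range(10)]
balanced = rebalance(data, lambda s: s[1], {"a": 0.5, "b": 0.5})
```

Oversampling only masks the gap in the raw data; the duplicated minority samples carry no new information, which is why the article's call to broaden who is represented in the data itself matters more than any resampling trick.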