Artificial intelligence failure and improvement were all the talk at a Rutgers-Newark conference on Tuesday, where nine experts in the fields of computers, information science, human rights and employment law took the stage.
About one hundred students and staff attended the event, where speakers gave presentations on social justice and workforce development in the context of artificial intelligence. The general consensus could best be summed up by Cornell University assistant professor Ifeoma Ajunwa: because algorithms are created by humans, they aren't always neutral.
“These algorithms are not being audited. So companies can just basically adopt a hiring platform or a hiring algorithm … and worse, they may actually be doing it based on, what are called, protected characteristics,” said Ajunwa. “So protected characteristics are things that basically indicate race, gender and algorithms may actually be using that to exclude people.”
Those types of imbalances concern students like Shreya Kotteda. The 22-year-old is worried that biased algorithms could block her from a job opportunity if, for example, one detects that she is a mother.
Patrick Shafto, Henry Rutgers Term Chair in Data Science, said the event was one way to raise awareness and start a conversation about artificial intelligence as it becomes an increasingly large part of people's lives, especially in a city like Newark, which is now considered a tech hub.