Today we have a different type of episode: this is a presentation that Felipe gave at the Chief Data and Analytics Officer Conference in Canberra, and it is on explainable AI. First, Felipe explains how Amazon used a secret AI recruiting tool that was biased against women. Also, the U.S. government used an algorithm to predict how likely people in the criminal justice system were to reoffend. What they found is that it targeted specific racial groups. The algorithm isn’t racist or sexist; the data is.
Regarding job applications: as your company scales up, automating the review of applications becomes necessary. Sometimes, bias will creep into the automated decision-making algorithm. The bias can even be narrowed down to the person’s name; for example, somebody named Felipe might get scored lower than somebody named Tyler. One countermeasure is to lean into the inequality and predict the bias: plug in the CV information, ask an algorithm to predict the person’s race and gender, then find out which key inputs it flags to make that determination and remove them from the hiring algorithm.
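The proxy-hunting idea above can be sketched in a few lines. This is a simplified, dependency-free variant: instead of training a full classifier to predict gender, it measures how strongly each CV feature separates the groups, which flags likely proxy features to audit or remove. All applicant data and feature names here are invented for illustration.

```python
from statistics import mean

# Hypothetical, synthetic CV features; in practice these would come
# from your applicant-tracking system.
applicants = [
    # (attended_womens_college, years_experience_norm, plays_rugby, gender)
    (1, 0.4, 0, "F"), (1, 0.7, 0, "F"), (0, 0.5, 0, "F"), (1, 0.6, 0, "F"),
    (0, 0.6, 1, "M"), (0, 0.3, 1, "M"), (0, 0.8, 0, "M"), (0, 0.5, 1, "M"),
]
feature_names = ["attended_womens_college", "years_experience_norm", "plays_rugby"]

def proxy_scores(rows, names):
    """Score how strongly each CV feature predicts the protected
    attribute: mean feature value among women minus among men.
    Large gaps flag likely proxy features to remove or audit."""
    f_rows = [r for r in rows if r[-1] == "F"]
    m_rows = [r for r in rows if r[-1] == "M"]
    return {
        name: mean(r[i] for r in f_rows) - mean(r[i] for r in m_rows)
        for i, name in enumerate(names)
    }

scores = proxy_scores(applicants, feature_names)
for name, gap in sorted(scores.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {gap:+.2f}")
```

On this toy data, the college and rugby features separate the groups strongly (they are proxies for gender), while years of experience does not; only the proxies would be candidates for removal.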
Then, Felipe explains how algorithms can tackle unstructured data. When discussing images, an algorithm was able to correctly distinguish a wolf from a husky 5 out of 6 times. However, when uncovering how the algorithm made the distinction, it turned out to be merely checking whether the animal was in snow: if the picture had snow in it, the animal must be a wolf. To determine how this algorithm was functioning, Felipe used LIME (Local Interpretable Model-Agnostic Explanations), which works for classifiers and came out of research at the University of Washington. Later, Felipe discusses using ELI5 and how transparency is essential for the public to understand how algorithms could affect them.
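The wolf/husky story illustrates what LIME does: perturb an input, watch how the black-box prediction changes, and fit a simple local model whose coefficients explain the decision. Below is a minimal, dependency-free sketch of that perturbation idea (not the `lime` library itself); the black-box classifier, its features, and all numbers are invented for illustration.

```python
import random

# Toy black-box image classifier for the wolf-vs-husky story: it
# "decides" almost entirely based on whether snow is in the picture.
# Features (binary, hypothetical): [has_snow, pointy_ears, brown_eyes]
def black_box(features):
    return 0.9 if features[0] else 0.1  # P(wolf)

def lime_style_explain(instance, predict, n_samples=200, n_steps=2000, seed=0):
    """Perturb the instance by randomly switching features off, ask the
    black box for predictions, then fit a proximity-weighted linear
    surrogate; its coefficients are the local feature importances."""
    rng = random.Random(seed)
    samples, targets, weights = [], [], []
    for _ in range(n_samples):
        mask = [rng.randint(0, 1) for _ in instance]
        x = [f * m for f, m in zip(instance, mask)]
        samples.append(x)
        targets.append(predict(x))
        dist = sum(a != b for a, b in zip(x, instance))
        weights.append(0.25 ** dist)  # nearer perturbations weigh more
    n, total = len(instance), sum(weights)
    w, b, lr = [0.0] * n, 0.0, 0.1
    for _ in range(n_steps):  # weighted least squares via gradient descent
        gw, gb = [0.0] * n, 0.0
        for x, y, wt in zip(samples, targets, weights):
            err = b + sum(wi * xi for wi, xi in zip(w, x)) - y
            for j in range(n):
                gw[j] += wt * err * x[j]
            gb += wt * err
        w = [wi - lr * gi / total for wi, gi in zip(w, gw)]
        b -= lr * gb / total
    return w

wolf_photo = [1, 1, 0]  # snow present, pointy ears, no brown eyes
coefs = lime_style_explain(wolf_photo, black_box)
print(coefs)  # the has_snow coefficient dominates the explanation
```

Because the toy model only looks at snow, the surrogate assigns nearly all of its weight to `has_snow` — exactly the kind of blunder the talk describes LIME catching.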
Enjoy the show!
We speak about:
[03:40] Large companies and their biases
[05:40] Racism and sexism are in our data
[08:45] Uncovering inputs of the bias
[10:45] Unstructured data approaches
[14:30] Using ELI5
[19:20] The right to an explanation
“We teach our algorithms how to replicate our decisions.”
“The algorithms show the inequality that we have in the world today.”
“Explainable AI is more ethical in the sense that it is more transparent.”
“Explainable AI helps us avoid blunders and informs us how the algorithm perceives the data.”
Now you can support Data Futurology on Patreon!
Thank you to our sponsors:
UNSW Master of Data Science Online: studyonline.unsw.edu.au
Datasource Services: datasourceservices.com.au or email Will Howard at firstname.lastname@example.org
Fyrebox - Make Your Own Quiz!
Felipe Flores is based in Melbourne, Victoria, Australia.
And as always, we appreciate your Reviews, Follows, Likes, Shares and Ratings. Thank you so much for listening. Enjoy the show!