#70 Making Black Box Models Explainable with Christoph Molnar – Interpretable Machine Learning Researcher


Christoph Molnar is a data scientist and Ph.D. candidate in interpretable machine learning. He is interested in making algorithmic decisions more understandable to humans, and he is passionate about using statistics and machine learning to make both humans and machines smarter.

In this episode, Christoph explains how he decided to study statistics at university, which eventually led him to his passion for machine learning and data. Studying under a senior researcher early on gave Christoph exposure to many different projects through a statistical consulting program — an excellent arrangement for students and companies, both of whom benefit greatly. Christoph learned more about applied statistics than he could have otherwise, while the clients received nine hours of consulting for free, which is very valuable for their businesses. One of his first consulting projects was a patient analysis assessing whether a medication was affecting the spine, which he found fascinating because it differed so much from his previous work.

When labeling data, Christoph's advice is to check consistency continuously. For instance, after a student labeled a photo, Christoph would later show the same student the same photo and see whether they labeled it identically. People will sometimes see the same image but label it differently, so this is one check you can run to ensure the labeling process is going smoothly. If you have multiple labelers, you also need to compare how each of them marks the same photo. Do not be blind to the quality of your data — always look at the process behind the numbers, not just the numbers themselves.
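The cross-labeler comparison Christoph describes is often quantified with an agreement statistic. The episode does not name a specific metric, so as an illustration only, here is a minimal sketch of Cohen's kappa — a standard measure of agreement between two labelers, corrected for the agreement you would expect by chance (the labeler names and labels below are hypothetical):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two labelers (Cohen's kappa)."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items both labelers marked the same.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each labeler's label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical labelers annotating the same ten photos.
a = ["cat", "cat", "dog", "dog", "cat", "dog", "cat", "cat", "dog", "cat"]
b = ["cat", "dog", "dog", "dog", "cat", "dog", "cat", "cat", "cat", "cat"]
print(round(cohens_kappa(a, b), 2))  # prints 0.58
```

A kappa near 1 means the labelers agree far beyond chance; a value near 0 means their agreement is no better than random guessing, which is a signal to revisit the labeling guidelines.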


Then, Christoph speaks about pursuing his Ph.D. in interpretable machine learning. He publishes his book, Interpretable Machine Learning, on his website chapter by chapter, gathering feedback on each chapter and using it to shape the ones that follow. Interpretable machine learning is not yet widely taught at universities, though some schools and professors are starting to integrate it into their curricula. Stay tuned to hear Christoph discuss accumulated local effects, deep learning, and his book, Interpretable Machine Learning.

Enjoy the show!

We speak about:

  • [02:10] How Christoph started in the data space

  • [09:25] Understanding what a researcher needs

  • [15:15] Skills learned from software engineers 

  • [16:00] Statistical consulting 

  • [19:50] Labeling data  

  • [23:00] Christoph is pursuing his Ph.D.

  • [29:00] Why is interpretable machine learning needed now? 

  • [31:00] Learning interpretability  

  • [33:50] Accumulated local effects (ALE)

  • [37:00] Example-based explanations  

  • [39:15] Deep learning  

  • [43:35] The illustrations in Interpretable Machine Learning

  • [49:50] How Christoph maximizes the impact of his time

Resources:

  • Christoph’s LinkedIn: https://www.linkedin.com/in/christoph-molnar-63777189/

  • Christoph’s Website: https://christophm.github.io

  • Interpretable Machine Learning: https://christophm.github.io/interpretable-ml-book/

Quotes:

  • “Always look at the process when labeling data.”

  • “After each chapter of my book, I publish it and get feedback.”

  • “I randomly read a lot of papers and structure the knowledge to fit them together.”

  • “I express what I want easier with illustrations in my book.”


Christoph Molnar is based in Munich, Bavaria, Germany.

And as always, we appreciate your reviews, follows, likes, shares, and ratings. Thank you so much for listening. Enjoy the show!