#169 AI Combatting Child Sexual Abuse: #dataforgood in action with Chris Wexler, CEO at Krunam


In 2020, there were 121.7 million reports of suspected online abuse of children - 6,000 times the number reported in 1998 - and the last decade has seen a hundredfold increase in files containing child sexual abuse material (CSAM). The online sexual abuse of children is facilitated by advances in technology, which provide low-cost platforms for peer-to-peer file sharing, fast live-streaming capabilities, and end-to-end encryption that lets perpetrators remain anonymous.

In this episode, we speak with Chris Wexler — CEO of Krunam. Krunam is in the business of removing digital toxic waste from the internet, using AI to identify CSAM and other indicative content to improve and speed up content moderation. Krunam’s technology is already in use by law enforcement and is now moving into the private sector.

In this sensitive but essential conversation, we learn from Chris why AI is the future of fighting child sexual abuse material.

Thanks to our sponsor Talent Insights Group!

Quotes:

  • “A typical search engine or social media platform has billions of images going through it today. It's really hard for humans to do this. So our classifier will identify and classify content to speed up moderation. It's designed to protect, of course, the kids, who are re-viewed every time one of these images is shared, but also the brand reputation of that platform and the community health of that platform.”

  • “Properly applied AI takes on the stuff humans are bad at: repetitive, psychologically gruelling work, finding the low-incidence cases in a large data set. These are all things we're terrible at. And it gives humans the things we are good at, which is pulling together all these crazy things that the police investigated. We're a long way from AI being able to replace a police investigator, because they understand human psychology, they understand local conditions, and they understand previous examples. Let humans do what humans do well, and let AI do what AI does well. That's really where we came about.”

  • “We're building a privacy-safe, deep neural network computer vision AI. I'm just glad our CTO is a heck of a lot smarter than I am, as he and the team pulled this together, because it takes some of the cutting-edge, bleeding-edge work they'd been doing in intelligence and defence proof-of-concept work. That's really what their background is, and on their own they took the initiative: even though nobody was asking for this, the world needs this technology.”

  • “The privacy side is great, the focus on the data. I agree with you, it's something that we need more of. And I've seen a bit of a shift in the industry recently, away from a sole focus on improving the algorithm and tuning the hyperparameters, towards a much greater focus on data quality, on improving the data and what that can do.”

  • “I spent about five or six years in finance, working as head of data science in banks and financial institutions. And I often say that job, while it was interesting, was using machine learning to sell money. It was just ads and messages that said: do you want money? Would you like more money? How much money would you like? I'd rather move to healthcare, much closer to public health, and do a lot of work using AI to keep people healthier.”

  • “We moved too fast, and we broke too many things. So now, it's not that I want to move slowly, and it's not that I don't want to revolutionise how we do technology. But we need to do it in a way that isn't damaging to people. Business models are fine to break; shatter a business model, go for it. But if you're damaging people, that's a very different thing. And I think it took us years to realise that's actually where we were heading, because we just didn't know. We just didn't know what was happening. I don't blame anybody for that. But now that we do know, we have a moral imperative to start thinking about that and adding not only legal frameworks but ethical frameworks into product design. And if there's one thing the world is not good at right now, it's ethics. We're good at lawyers, but we're not as good at ethics.”

What we discussed:

2:35 - So maybe first off, tell us a little bit about the business and the purpose.

6:42 - And at what point did you decide to tackle this and to jump all in?

11:22 - Can you tell us a little bit about how the process works, and a little bit about the AI side?

19:17 - Tell me a little bit about the privacy side of this.

19:35 - But overall, how does the privacy side work?

25:19 - Tell us a bit about the journey of turning this into a business and your approach to the business side. How has that journey been?

28:26 - And how has it been creating a social enterprise and going to market in this space?

33:19 - And how was your journey into getting to this point? Can you tell us a bit about your background?

37:53 - What do you think led your family, your siblings included, to have this giving nature in their careers?

41:56 - Are there any people that you look up to? Or take inspiration from on that side?

49:39 - What are you most excited about? What’s in the future for the business?

54:13 - What do you think it'll take for us to come to a good middle ground?

UPCOMING EVENTS!

At Data Futurology, we are always working to bring you use cases, new approaches, and everything related to the most relevant topics in data science, to help you get the most value out of these technologies!

Join us at upcoming events https://www.datafuturology.com/events