#138 Standardisation and Governance of AI with Aurelie Jacquet - Chair


We are joined by Aurelie Jacquet, Chair of Standards Australia, an independent not-for-profit organisation that specialises in the development and adoption of internationally aligned standards in Australia.

Aurelie is an expert in governance, data ethics, privacy and the responsible use of technology. She starts by sharing how her journey into the world of data and ethics began. A lawyer by trade, she started as a litigator and then moved into finance, working on algorithmic trading. Her interest in AI and ethics was piqued in 2016, when she realised that all legal initiatives on AI were happening overseas, so she decided to venture into the world of standards and push for Australia to participate in international AI standards work.

Enjoy the show!

We speak about:

  • [2:00] Aurelie’s journey into the data world.

  • [5:20] Aurelie’s role as Chair of Standards Australia and the work being done by the organisation.

  • [9:50] Description of their standards and the differences/similarities with ISO standards.

  • [13:30] Examples and case studies that highlight the importance of ethical standards in AI.

  • [20:00] Examples of current AI ethics scenarios.

  • [23:00] What about cases where organisations don’t have better data to improve fairness or reduce bias?

  • [30:30] Is there a certain part of the organisation that would be better suited to take the lead on embedding these principles across the organisation?

  • [33:40] Insights on quantitative ways for organisations to address bias and privacy.

  • [39:20] For companies that need to create a code of conduct or an ethical AI framework, what do you recommend, and where would be a good place to start?

  • [49:30] Trade-offs between benefiting society and individual rights.

Resources:

Aurelie’s LinkedIn: https://www.linkedin.com/in/aurelie-jacquet-94b75638/

Standards Australia on LinkedIn: https://www.linkedin.com/company/standards-australia/

Quotes:

  • "A use case I see that keeps coming back is 'how do you manage privacy, bias and accountability?'"

  • "The solution you use to address fairness and bias should strongly align with the legal principles of fairness and bias."

  • "If you don't have the right control in place, or the right filters, then your output can be problematic."

  • "Trust is essential and building good mechanisms in the first place will help gain or regain that trust."

  • "Knowing the roles and the context in which you are going to intervene with your algorithm is extremely important."

  • "There is a tendency to think that if you show that you are doing as well as individuals will do without any system then you should be fine. I think there is a need to show that it is doing better."

  • "There will always be a need to explain, not necessarily the model, but the process."

Thanks to our sponsor:

Talent Insights

And as always, we appreciate your Reviews, Follows, Likes, Shares and Ratings. It really helps new data scientists find us.

Thank you so much, and enjoy the show!