On AI Governance: How to Get Started

Towards sustainable and responsible AI, Guest Blog by Albert Suryadi, Head of Data Science and Analytics, Mirvac



We have entered a new wave of digital fear, driven by artificial intelligence (AI). This is not new. In the 1950s, increasing production demand, rising safety concerns, and declining work-life balance led to the introduction of factory machinery. The machinery drew sharp criticism over fears of job loss and product quality degradation. In retrospect, it created positive impacts, but we didn't have enough trust back then. We are observing a similar reception to AI today.

AI is known by many names, such as big data, data science, and machine learning. At its core, AI is the application of advanced analytics and logic-based mathematical techniques to interpret events, support and automate decisions, and take action. It shines where a decision has numerous and complex inputs. The computer brain (that is, AI) computes better than humans, while the human brain innovates better than computers.

Its successes have been well-documented across the media. Examples include Google’s natural language processing search and Netflix’s content recommendation engine. In addition, McKinsey’s Global AI survey claims that high-performing AI organisations enjoy at least a 20% earnings increase. The publicity itself inspires others to have a go.

In the corporate context, AI implementation remains a black box. Handling algorithms requires individuals with advanced mathematical knowledge, typically Ph.D.-level data scientists. However, the academic and corporate worlds have very different mindsets and ways of working. As a result, collaboration suffers, and the trust gap between the two has widened. Data science teams are therefore under enormous pressure to demonstrate value fast, with few resources and little support from the business.

AI vendors have innovated to demonstrate value quickly through features such as autonomous AI and MLOps. Autonomous AI enables citizen analysts to build AI models through a simple, guided user interface. MLOps allows data science models to be developed, released, and monitored faster. Together, they enable seamless, end-to-end AI production that unleashes business value.

The Trust Issue

Despite advancements in AI technologies, the trust gap between business and data science teams has not narrowed. AI governance might be an answer. AI governance seems to have been "born" out of data governance, which aims at appropriate information consumption through various processes and frameworks. Arguably, AI governance shares similar overarching objectives, but given its technical complexities, it needs a dedicated focus. They are related but different.

Given its importance, why hasn't AI governance taken off? Firstly, there is a misconception that AI governance initiatives must come after completed data governance initiatives. Secondly, there is a misconception that data governance initiatives fully cover AI governance. Thirdly, there is a misconception that data governance doesn't yield significant business value. As a result of these misconceptions, data governance initiatives struggle for funding and are limited to addressing regulatory or compliance needs only.

AI governance could run in parallel to data governance. Most modern data ecosystems adopt an ELT (Extract-Load-Transform) approach, separating storage and compute: raw data is loaded into the data lake and then curated for analytics consumption. While business users access curated data, data scientists access both raw and curated data. AI models are also consumed differently from visualisation dashboards. Core data governance therefore leans towards analytics needs, which are different from (but complementary to) AI needs.
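The raw-versus-curated access split can be made concrete with a minimal, hypothetical sketch. The zone names, roles, and policy structure below are illustrative assumptions, not taken from any specific platform:

```python
# Hypothetical sketch: separating raw and curated data-zone access by role,
# mirroring the ELT pattern described above. Role and zone names are
# illustrative assumptions only.

RAW, CURATED = "raw", "curated"

# Business users consume curated data only; data scientists need both zones.
ACCESS_POLICY = {
    "business_analyst": {CURATED},
    "data_scientist": {RAW, CURATED},
}

def can_access(role: str, zone: str) -> bool:
    """Return True if the given role may read from the given data zone."""
    return zone in ACCESS_POLICY.get(role, set())

print(can_access("business_analyst", RAW))   # business users: curated only
print(can_access("data_scientist", RAW))     # data scientists: raw as well
```

In practice such a policy would live in the platform's access-control layer rather than application code, but the separation of concerns is the same.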

Let's do a quick recap. AI presents both huge opportunity and fear. Trust is needed to address this ongoing concern. AI governance seems like a reasonable approach but is overshadowed by data governance. Due to the complexities, organisations shy away from investing resources, yet we have to generate AI value and production at lightning speed.

AI Governance Landscape

So what is currently happening with AI governance? Data governance enjoys well-known frameworks, such as the DAMA-DMBOK Functional Framework, and regulations, such as GDPR. The AI governance framework, however, is still in its infancy. Hence, many organisations don't want to initiate AI governance; they are waiting for a first mover.

Let's start with some definitions. I believe sustainable and responsible AI is a combination of AIOps and AI Governance. AIOps (synonymous with MLOps) focuses on the model production lifecycle and technical robustness. AI Governance focuses on how AI should operate in an organisation.

Sustainable and Responsible AI = AIOps + AI Governance

Some of this technical AI governance has been incorporated into AI technologies, covering areas such as model accuracy degradation, data quality, and bias. These checks are automated and served as out-of-the-box features. However, they address only the technical and mathematical aspects. AI governance encompasses broader areas, such as business processes and the human-to-AI operating model.
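One of those automated checks, model accuracy degradation, can be sketched in a few lines. This is a simplified illustration, not any vendor's implementation; the metric and the 0.05 tolerance are assumptions for the example:

```python
# Illustrative sketch of one technical-governance check mentioned above:
# flagging model accuracy degradation (drift) against a deployment baseline.
# The metric and tolerance are assumptions for the example.

def accuracy(preds, labels):
    """Fraction of predictions that match the true labels."""
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

def degraded(baseline_acc: float, current_acc: float, tolerance: float = 0.05) -> bool:
    """Flag the model for review if accuracy drops by more than `tolerance`."""
    return (baseline_acc - current_acc) > tolerance

baseline = accuracy([1, 0, 1, 1], [1, 0, 1, 1])   # accuracy at deployment
current = accuracy([1, 0, 0, 0], [1, 0, 1, 1])    # accuracy in production
print(degraded(baseline, current))                # True -> trigger review/retraining
```

Real monitoring adds scheduling, alerting, and statistical tests, but the core comparison against a baseline is this simple.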

Given the current challenges, I believe an integrated technology that develops, operates, and governs AI models is a good approach. While the industry is moving towards component-based architectures and a best-of-breed philosophy, most companies' AI scale and maturity could benefit from an all-in-one approach. It isn't easy to hire good data scientists, build MLOps, and find leading AI governance practitioners. Also, governance initiatives are not one-offs or monthly reviews; they need to be continuous. Governance is effective when it translates into everyone's daily workflows, which technology enables. There might come a time when AI technologies specialise, but let's focus on the present.

The industry is maturing in AI tools with built-in MLOps and technical governance frameworks. The early data science tools of choice were Python and R: we use them during our academic formation, and they are free (or open source). While they are our coding backbone, using them alone for an enterprise AI implementation is similar to scaling marketing campaigns using spreadsheets. Leading AI tools package the work of leading AI experts behind a user-friendly interface that does most things in a few clicks. Their algorithm options, model explainability, and performance governance are enough to build trust among classical data scientists to move beyond open-source tools alone. Their release management and integration frameworks are enough to build trust among the technology team.

Beyond automated tools, people are crucial to bringing ethical AI to life. If the Ph.D. is the stereotypical data scientist's credential, the AI governance leader is another black box. Let's learn from data governance initiatives. From my conversations with data leaders, data governance practitioners are too theoretical, with complicated frameworks. They often come from cybersecurity, policy, or risk management backgrounds, resulting in a wide empathy gap with data scientists and business leaders alike. Simply put, they add more complexity to an already complex commercial situation. Hence, companies boost their legal and compliance departments instead, as those seem to do the "same" thing.

While we don't have well-established AI governance practitioners, these key attributes may move us forward. Firstly, a quantitative background with commercial acumen. Given AI's black-box nature, only individuals well-versed in mathematics understand its core flaws and strengths, while commercial acumen brings practicality and uncovers value in the process. Secondly, an author's (or "writer's") mindset. Stephen King wrote that a writer should "read a lot and write a lot." There is a wealth of material on AI ethics, AI regulations, and the complementary data regulations that needs to be explored and communicated to wider audiences, much as the gospel authors wrote with the overarching purpose of evangelisation. Thirdly, a growth mindset. The AI field evolves rapidly, and the individual must grow with it.

A balanced mix of personal attributes is paramount to moving AI governance forward. An author's mindset alone will produce many thought-leadership articles; in a hyper-information age, that contributes to an amalgamation of high-level industry ideas, but tangible progress or application remains missing. Another example I have seen is the over-discussion of AI bias. Critical thought-leadership articles regard AI bias as inherently bad, insisting that AI must be equitable and fair. However, most business and marketing initiatives have a target audience, so the data will be biased by design. Even if the data is biased, the model build still needs to progress when significant business value is on the line, and the model can be iterated further when better data arrives. A person with bootstrapping knowledge (quantitative background) understands this issue, publishes their analysis (author's mindset), and continues to improve the methodology (growth mindset).
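The bootstrapping mentioned above can be sketched briefly. The scenario and numbers below are invented for illustration: resample a (possibly biased) dataset with replacement to see how stable a metric is before deciding whether the model build can proceed.

```python
# A minimal bootstrap sketch. The sample data are hypothetical conversion
# outcomes from a niche target audience (biased by design, per the text).
import random

random.seed(42)  # fixed seed so the illustration is reproducible

sample = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]  # illustrative outcomes, not real data

def bootstrap_means(data, n_resamples=1000):
    """Resample with replacement and collect the mean of each resample."""
    means = []
    for _ in range(n_resamples):
        resample = [random.choice(data) for _ in data]
        means.append(sum(resample) / len(resample))
    return means

means = sorted(bootstrap_means(sample))
# A rough 90% interval from the empirical distribution of resampled means.
lo, hi = means[50], means[949]
print(f"estimated conversion rate: {sum(sample) / len(sample):.2f} ({lo:.2f}-{hi:.2f})")
```

If the interval is wide, the practitioner knows the estimate is fragile and can say so in plain commercial terms, rather than blocking the initiative outright.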

First Steps

Based on my experience, a step-by-step approach to AI governance works well. Every organisation is complex, and every organisation is on a journey. Understanding its business, technology, and AI landscape, both current state and aspiration, is essential. Good AI governance seeks harmonious relationships between these elements, ensuring AI achieves its intended business purpose. AI's blind spots typically stem from the over-pursuit of algorithmic excellence. An external perspective is invaluable for bridging these diverse organisational elements.

Beyond strategy alignment, this "audit" review seeks value creation. Common industry feedback is that AI hasn't met the mark in delivering business value. Despite its potential, aligning AI with the business value process will identify re-calibration or new opportunities. We also need to understand AI's intended purpose, which allows us to design an operating model around it. Governance doesn't prevent the business from marketing to its niche target market; rather, governance substantiates why the initiative is not flawed or "biased."

A mature AI capability combines the strengths of humans and AI to achieve a common business purpose. Given evolving economic and regulatory conditions, a dedicated AI governance team is needed. Its role is to establish and implement an AI governance framework covering operating models, metrics, technologies, and culture. While all are important, culture deserves special mention: it educates and enables everyone, particularly company executives, in uncertain situations. In addition, this team serves as a proactive risk management measure in an area of growing regulatory surveillance.

Despite the urgent need, we have talent shortages, and a managed service from a trusted external partner is a viable option. An organisation may start fully managed and evolve to a hybrid model as it matures. Such a partner brings a diverse set of people with AI expertise and an invaluable peer-review perspective.

Moving Forward Together

In closing, AI governance is complex, but you are not alone. Various countries have issued public policy guides and thought-leadership papers on AI governance. AI and data technologies have also matured well enough to handle the key fundamental components. The next step is to synthesise, relate, and implement these pieces in your specific context. AI has moved from science fiction into our modern lives, and we must collaborate to manage it towards a sustainable future. And there is no better time than now.


About the author:

Albert is a proven leader in enabling advanced analytics and data science capability in blue-chip organisations. He brings extensive experience in solving complex business problems through data analytics, and strong mathematical and quantitative knowledge for delivering production-grade data science solutions. He is recognised as the leader of the Analytics CoP (Community of Practice), which empowers and motivates others beyond the status quo.

He currently leads the Data Science department at Mirvac, which spans the integrated breadth from engineering science to decision science. Day-to-day, he partners with various teams, such as construction, residential, commercial, office, design, and marketing, to achieve their business strategy through data and quantitative solutions. The aim is a modern data ecosystem that is fast, friendly, and flexible, with rapid speed-to-value for everyone at Mirvac.

Catch Albert in action sharing his insights at OpsWorld: Data Centric Operations for Business Value

Albert Suryadi, Head of Data Science and Analytics, Mirvac

This article was first published on Towards Data Science.