Many organisations are currently embarking on digital transformation projects and considering whether to leverage artificial intelligence (AI) or machine learning (ML) technologies and, if so, how to do so effectively.

Recent Australian experience indicates that, to be effective in practice, an AI system must be trusted, transparent and meaningful. Notably, this emphasis on trust and transparency as a practical success factor corresponds closely with key AI regulatory themes emerging around the world. Privacy and transparency are at the forefront of conversations about AI regulation in the Asia-Pacific region in particular, alongside related issues of accountability, safety and security, fairness and appropriate levels of human oversight.

In practical terms, an AI project designed from the outset to be trusted, transparent and meaningful will have improved prospects of success and a reduced risk of attracting adverse regulatory scrutiny.

Key practical tips to improve the effectiveness of an AI-driven project include:

  • The AI development process should start by identifying the problem the organisation wants to solve, or the value it wants to achieve, by implementing the AI-driven solution, and then examining whether AI is really the right technology for the desired outcome, rather than being technology-led (i.e. "building some algorithms hoping they may be relevant or useful"). Examples of value-driven objectives include making a better decision, improving a particular process or shaping a particular experience. A guiding question to ask: "Is this a process or activity better suited to AI or ML than to a human, or to a less sophisticated form of data analytics?"
  • Algorithm design must take into account human factors and how the AI system will actually be used: how will users interact with the system? What level of human input is involved, and at what stages, in producing the system's outputs? (A simple sketch of one such human-in-the-loop design appears after this list.)
  • Trust and transparency are crucial to long-term user acceptance of AI and to unlocking its value. A good question to ask is: "How do we design the processes, policies and capabilities we need to use the outputs of the algorithms in a meaningful, trusted and effective way?"
  • To avoid "trust failure", AI-driven processes require a supportive framework and the backing of trusted leaders who can articulate the rationale for implementing the solution, address risks and allay users' concerns, while also setting realistic expectations about project outcomes and any constraints or points of friction that may be encountered along the way. This may require cross-functional teams, AI ethics boards, and new structures and roles to ensure that risks and vulnerabilities are properly understood, addressed, documented and communicated to relevant stakeholders.
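
To make the human-oversight and transparency points above concrete, the following is a minimal illustrative sketch, not a production design, of one common human-in-the-loop pattern: model outputs below a confidence threshold are routed to a human reviewer, and every decision is written to an audit log so that the basis for each output can later be explained to users and regulators. All names and values here (`triage_prediction`, the `Decision` record, the 0.85 threshold) are hypothetical choices for illustration only; a real deployment would need to decide these thresholds, roles and retention rules as part of the governance framework described above.

```python
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical threshold chosen for illustration only; in practice this
# would be set and reviewed as part of the organisation's AI governance.
CONFIDENCE_THRESHOLD = 0.85

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

@dataclass
class Decision:
    case_id: str
    model_output: str
    confidence: float
    decided_by: str   # "model" or "human"
    decided_at: str   # ISO 8601 timestamp

def triage_prediction(case_id: str, model_output: str, confidence: float,
                      human_review) -> Decision:
    """Route low-confidence outputs to a human reviewer and audit every decision."""
    if confidence >= CONFIDENCE_THRESHOLD:
        decision = Decision(case_id, model_output, confidence, "model",
                            datetime.now(timezone.utc).isoformat())
    else:
        # Human-in-the-loop: the reviewer can accept or override the model output.
        reviewed_output = human_review(case_id, model_output, confidence)
        decision = Decision(case_id, reviewed_output, confidence, "human",
                            datetime.now(timezone.utc).isoformat())
    # Transparency: every decision is logged so its basis can be explained later.
    audit_log.info(json.dumps(asdict(decision)))
    return decision

# Example usage with a stubbed reviewer that simply confirms the model output.
if __name__ == "__main__":
    confirm = lambda case_id, output, conf: output
    triage_prediction("case-001", "approve", 0.92, confirm)  # decided by model
    triage_prediction("case-002", "approve", 0.61, confirm)  # routed to human
```

The design choice worth noting is that the audit record captures who decided (model or human) and when, which is precisely the kind of documented, explainable human oversight that the regulatory themes discussed above call for.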