In Artificial Intelligence’s most recent victory over people, a software program defeated a highly trained and experienced US fighter pilot in what has been described as a flawlessly executed series of simulated aerial dogfights. This is, of course, just the latest in a long line of successes for AI in narrow tasks ranging from recognising objects in images to more complex games of strategy, like Chess and Go.
Each new victory is heralded with fanfare by the media, leading some commentators to invoke the so-called AI ‘control problem’, commonly framed as the challenge of containing a ‘super-intelligent’ AI. While such dystopian definitions certainly make for entertaining science fiction, the control problem has a far more specific meaning for businesses, as described recently by John Zerilli, Alistair Knott, James Maclaurin, and Colin Gavaghan in the December 2019 issue of Minds and Machines. It is “the tendency of the human within a human-machine control loop to become complacent, over-reliant or unduly diffident when faced with the outputs of a reliable autonomous system.”
For CIOs keen to accelerate the adoption of AI within their enterprise, the control problem can lead to a counter-intuitive degradation of service, as the quality of professional judgement declines in lock-step with each advance in technology. And even at their most reliable, AI systems do occasionally fail in non-human-like ways, which, in business- or safety-critical settings, can have damaging consequences. So, what are the common origins of the control problem, and how should CIOs tackle it?
For many organisations, the control problem is caused and exacerbated by poor data quality. Seduced by the hype of AI, business users can all too easily assume that any sufficiently advanced technology will be able to operate no matter how bad the data. Moreover, without a thorough understanding of how poor data affects techniques like machine learning, users may be blissfully unaware of the technology’s potential for distorting outputs and making mistakes. The CIO has a critical role to play in educating business users about the technologies being deployed, but all business functions need to keep abreast of their organisation’s data quality – investing appropriate time and resources in understanding the data, and in assessing and remediating significant issues.
Even after resolving data quality problems, it can be a mistake to rush to invest in the latest and most sophisticated AI software and hardware. At this stage, business outcomes, and not technology inputs, are what matter. Across its suite of AI products, EY, like many other professional services firms, delivers technology-enabled services that focus on efficiency and quality of insight. These outcomes often mean trading off absolute algorithmic performance for explainability: the selected algorithm may not perform quite as well at a given task on its own, but its outputs are easier for professionals to scrutinise and trust.
We also use a process known as ‘active learning’, a special case of machine learning in which the computer enlists the help of a human to review its model-based predictions and, where necessary, correct the input data. In a human-machine control loop, active learning coupled with enhanced algorithmic transparency engages professionals in a more dynamic way. This allows professionals to build a more trusted, increasingly accurate, and deeper understanding of business issues than an algorithm alone could deliver.
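The loop described above can be sketched in a few lines of code. This is a minimal, illustrative example only – the toy one-dimensional threshold classifier and all function names are hypothetical, standing in for whatever model a real system would use. It shows the essential mechanic of active learning with uncertainty sampling: the machine repeatedly asks the human reviewer to label the example it is least confident about, then retrains.

```python
def train(labelled):
    """Fit a toy 1-D classifier: the threshold is the midpoint
    between the mean of the positive and negative examples."""
    pos = [x for x, y in labelled if y == 1]
    neg = [x for x, y in labelled if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def confidence(threshold, x):
    """Distance from the decision boundary acts as a confidence proxy."""
    return abs(x - threshold)

def active_learning(pool, oracle, seed_labels, rounds=5):
    """Uncertainty sampling: each round, query the human ('oracle')
    about the unlabelled point the current model is least sure of."""
    labelled = list(seed_labels)
    unlabelled = [x for x in pool if x not in dict(labelled)]
    for _ in range(rounds):
        threshold = train(labelled)
        query = min(unlabelled, key=lambda x: confidence(threshold, x))
        labelled.append((query, oracle(query)))  # human supplies the label
        unlabelled.remove(query)
    return train(labelled)

# Toy data: the true boundary is 5.0; the oracle plays the human reviewer.
oracle = lambda x: 1 if x >= 5.0 else 0
pool = [0.5, 1.2, 2.8, 4.1, 4.9, 5.2, 6.0, 7.5, 8.3, 9.1]
seeds = [(0.5, 0), (9.1, 1)]
model = active_learning(pool, oracle, seeds)
```

The key design point is that human effort is spent only where the model is uncertain, which is what makes the approach economical when reviewers are highly paid professionals.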
For example, most professionals who work in regulated industries struggle to keep up to date with the ever-shifting regulatory landscape – a problem that has become particularly acute as Covid-19 forces governments worldwide to react with a series of legislative measures. Humans, by themselves, are typically unable to gather and assimilate this vast volume of information, but AI enables professionals to monitor for specific changes around the world. Using active learning, legislative texts gathered from hundreds of official government publications every week can be processed by a combination of machine and human intelligence, ensuring timely and relevant updates reach business decision-makers, who can then prioritise and take immediate action. Active learning is essential to achieve these outcomes, given that most legislation is not classified by type of tax, or even by sector-relevance, when it is first published. When presented with samples of legislative texts by a tool, trained professionals can provide labels that a machine learning algorithm uses to construct a comprehensive classification model. And, as both language and policy shift to recognise global and local economic needs, active learning also ensures that predictions are kept continually up to date, preventing decision-makers from becoming too reliant on obsolete models.
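A hedged sketch of this labelling workflow might look as follows. The class, the example labels (‘vat’, ‘corporate’) and the scoring scheme are all illustrative assumptions, not EY’s actual tooling: professionals label sample texts, a simple bag-of-words model scores the rest, and the model is retrained as new labels arrive, so predictions can track shifts in language and policy.

```python
from collections import Counter

class TextClassifier:
    """Toy bag-of-words classifier that can absorb new human labels."""

    def __init__(self):
        self.word_counts = {}        # label -> Counter of words seen
        self.doc_counts = Counter()  # label -> number of labelled docs

    def add_labels(self, labelled_docs):
        """Incorporate human-reviewed (text, label) pairs; calling this
        again with fresh labels keeps the model up to date."""
        for text, label in labelled_docs:
            words = text.lower().split()
            self.word_counts.setdefault(label, Counter()).update(words)
            self.doc_counts[label] += 1

    def classify(self, text):
        """Pick the label whose vocabulary best matches the text,
        using Laplace smoothing so unseen words score non-zero."""
        words = text.lower().split()
        def score(label):
            counts = self.word_counts[label]
            total = sum(counts.values())
            return sum((counts[w] + 1) / (total + len(counts)) for w in words)
        return max(self.doc_counts, key=score)

clf = TextClassifier()
clf.add_labels([
    ("value added tax rate change on goods", "vat"),
    ("corporate income tax relief for companies", "corporate"),
])
clf.classify("new rate of value added tax")  # matches the 'vat' vocabulary
```

Because `add_labels` can be called at any time, newly published legislation that professionals have just reviewed immediately improves subsequent predictions – the mechanism that keeps decision-makers from relying on obsolete models.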
Although the advances being made in the field of AI right now are substantial, it’s important to understand that the business environment is typically less predictable than the pristine laboratory conditions in which many datasets and algorithms are first conceived. The hype surrounding the latest advances in deep learning or big data can, at best, lead organisations to over-estimate the financial and broader benefits of AI or, at worst, create technological complacency and a reluctance to question and probe data quality issues – in other words, the beginnings of a genuine control problem.