Fairness of little help in a new AI-led world
"What would it take for you to put the same level of trust in AI as you would extend to a human?" Australia's Chief Scientist, Alan Finkel, asked. "To chat with your child? To drive your taxi?"
Australia has bungled a lot of the pre-AI basics.
Last week's report catalogued a litany of government "tech wrecks". They include the baptism of fire that was the first online delivery of the 2016 census; the algorithmic tone-deafness of Centrelink's "robo-debt" scandal; the Australian Criminal Intelligence Commission's botched biometric technology project; repeated Australian Tax Office website crashes; blowouts to child support service delivery infrastructure upgrades; the halting of online Naplan testing; and the departure of several Digital Transformation Agency heads.
As we belatedly set our sights on highly complex artificial intelligence ethics, frameworks and services, I am reminded of a confronting John Steinbeck quote: "Man is the only varmint that sets his own trap, baits it, then steps in it." Steinbeck was greatly influenced by life in the 1930s, a period of disruption which many commentators and economists equate with the angst of these times.
Is AI a trap? A door? A trapdoor? It is all of them, precisely because, even as the power and disruption of so-called "smart tech" increase, we are only now turning our attention to its intended and unintended consequences.
It's early days for the yet-to-be-defined National AI Roadmap. The $30 million to be distributed among several organisations, including the CSIRO's Data61, Standards Australia and Co-operative Research Centres (CRCs), is pennies compared to the billions being invested by other countries, especially China.
AI is already big business globally. An estimated $US22 billion was spent on AI-related mergers and acquisitions in 2017.
Revealingly, most of this spending is about gaining access to scarce talent: AI-driven start-ups without revenue are fetching prices equal to $US5 million ($6.7 million) to $US10 million per AI expert.
Finkel estimates there are about 22,000 PhD-qualified AI experts globally. Australia has a tiny percentage of those, and some of the $30 million for the National AI Roadmap will be dedicated to funding scholarships.
It won't go very far. Finkel readily acknowledges that rare skills in high demand are "very hard to keep in universities and public agencies".
The terms of reference for the National AI Roadmap are still being worked out, and Data61 chief executive Adrian Turner says they should articulate the future we want as a country. Finkel, meanwhile, in imagining a trusted AI-centric Australia, draws analogies to the trust we already place in strangers and in systems like electricity, cars and food.
Australians didn't invent electricity, or the car, or even the components that go into them. If we don't want to be slaves to other countries' and companies' AI laws and mores, or to surrender to the might of the Googles and Facebooks, we have some work to do.
Who sets and standardises AI rules, and who audits the algorithms behind them, will really matter. So will decisions about what we create versus what we customise, and where we can have impact.
Australia is arguably good at rules and standards, and AI is highly dependent on both. Public and private sector compliance costs Australians an estimated $250 billion a year, according to Deloitte Access Economics, and represents Australia's fastest-growing industry. AI will decimate compliance work, which now employs many thousands of people.
To encourage ethical AI, Finkel proposes a "trustmark" he calls the "Turing Certificate", in honour of British scientist Alan Turing. In theory, it could work for AI much like trustmarks do in other domains such as food.
In practice, it would cost a lot of money and require a substantial global reputation in AI systems to influence international standards bodies. Given our relatively small AI skills base – and the pace of change in the field globally – that will be hard to do.
That doesn't mean we don't try. But it does highlight the dangers for a nation – one that has barely contributed to the creation of the global digital revolution to date – in maintaining any real or imagined mastery of destiny in the age of smart machines.
We are going to have to do a lot more than fall back on cultural illusions of the Fair Go. In the coming AI world, fairness may have little to do with it.
Sandy Plunkett is a tech sector commentator and analyst.