Will white-collar professionals be digitised away?

Knowledge workers need to leverage the new power of deep-thinking computers

Policy-makers should stop sticking their heads in the sand and ignoring the fact that there will be "a massive amount of jobs destroyed" by the digital revolution.

So says Mike Cannon-Brookes, speaking at a forum in Sydney. The Atlassian co-founder offers an example in the "2.5 million [Australians] who drive a car as a significant part of their job", warning "those jobs are all going away whether it takes 10 years, 15 years or 20 years".

Nick Wailes, a professor and deputy dean (digital & innovation) at UNSW Business School, is more sanguine. 

"Mike Cannon-Brookes is not the first person to make very large claims about the impact of technology on the world of work, and more often than not they don't play out that way," Wailes says.

"But I think it's good to be having those conversations now because there's clearly some drivers of change."

And while driverless cars may be on the roads in greater numbers in the not-too-distant future, a more unsettling trend in technological unemployment is the way increasingly powerful and capable computers are nipping at the heels of white-collar professionals.

"You have a large number of people who are employed for their expertise – their judgment, discernment, accumulated knowledge – and they tend to be highly paid for that," Wailes says.

"But now we're seeing the dawn of technological changes which threaten to undermine the ability of that knowledge class to command a premium – or will fundamentally change their roles."



Likely scenarios

"I think there's a lot of exaggeration about artificial intelligence (AI)," says Wailes.

"But there are now systems that are capable of holding a lot of knowledge, and are capable of learning from past experience to infer what future behaviour might be, or to make recommendations.

"And they can do that very quickly. And they can sometimes do that more reliably and at a broader scope than an individual human can do."

The implications for a range of professional roles are significant, but not necessarily dystopian. As an example, Wailes notes that it's almost impossible for a GP to keep up to date with the latest research on drugs, different drug combinations, and how they may interact with each other.

But a cognitive computing system could scan both existing and emerging research and make a recommendation once the doctor had made an initial patient assessment.

"So in that case it's not the AI replacing the human judgment and discernment, but the AI extending and enhancing it," Wailes says.   

"And I think that is a much more likely scenario."

But the lines are blurring. Fukoku Mutual Life Insurance in Japan is reportedly replacing 34 insurance claim assessors with an AI system based on IBM's Watson technology, which will scan hospital records and other documents to determine insurance payouts – factoring in injuries, patient medical histories, and procedures administered.

Or take the case of Bridgewater, the world's largest hedge fund. According to The Wall Street Journal, engineers are developing software to eventually take over the day-to-day management of the firm. The new technology would enshrine founder Ray Dalio's unorthodox management approach.

It calls to mind an anecdote Wailes tells about keen tech investor, philanthropist and former Microsoft executive Daniel Petre.

"I sat in a room with Daniel and he said to the alumni from a great MBA program, working primarily in the knowledge industries: 'You are all software, and you are software that hasn't been coded yet'. You could sort of see a chill go through the whole room," Wailes says.

The digital divide

Technological shifts have been changing industries throughout history – and people have had to find new employment roles, acquire new skill sets, and grow new businesses.

"I think we're in one of those transitions," Wailes says. "Within the industries of today that are the prime generators of employment, including a lot of the knowledge-based industries – consulting, accounting, law, medicine and so on – there are going to be less and less requirements for individuals to play roles that technology could replace.

"So in many cases that means upgrading your expertise or being able to identify how you can leverage technology, rather than try to rail against it."

Firms are already leveraging technology to strip out costs, such as when law firms automate the sorting of the thousands of emails and documents that may be considered before a court case.

Looking beyond that, "a computer can read codified law and judgments, and a lot of legal transactions are quite standardised. So you could imagine a [future] situation where you don't need a lawyer because you could ask a smart computer, or an AI, what the best form of contract for this situation would be, or what's the best advice around these things," Wailes says.

But of more immediate concern to Wailes is the digital divide.

"I think there's a new cleave of inequity that will run down whether you've had the ability to learn and have been exposed to these systems, and have had the opportunity to work on them, or whether you haven't," he says. 

"And I think it's an urgent task – for governments, but also for universities and businesses – to go back to early childhood education and think about how we stop generating a digital divide. How do we not exclude a whole lot of people from the economy of the future?

"One of the biggest social impacts of technology is that people who are unable to participate in the digital economy will find it very hard to earn a living, or to get by, and will be isolated economically and socially," Wailes says. 



Bugs in the code

Our dependence on computers, to provide essential services and to transact business, highlights how vulnerable we are to the reliability of digital infrastructure.

But systems go down – whether it's a specific site, a service provider, mobile access, or something as far-reaching as Amazon's cloud service outage in February, which caused internet havoc on the US east coast. A catastrophic systemic failure would have huge social and economic implications.

"Data, networks, and our reliance on these systems means that they are no longer just nice things that companies provide, but are becoming essential infrastructure for governments and populations," Wailes says.

"At the moment these are all privatised networks, they're privatised systems – one person's agglomerated data from other people."

A debate about whether something so powerful, so fundamental to our daily lives, needs public oversight and regulation must surely be looming.

"It's going to be very hard. Something similar happened with electricity generation, and something similar happened with roads. And it happened with water. But I think this one's going to be quite tough – to see what a regulatory regime that will ensure equity of access and consistency of supply is going to look like," Wailes says. 

