AI: friend or foe? (and what business leaders need to know)

Artificial intelligence presents significant opportunities for business – as well as serious threats to humanity – and governance frameworks are urgently needed to create a fair and equitable future under AI

Artificial intelligence is behind more technologies than most of us realise. For example, do you ever ask your virtual assistant about the weather? Or scroll through social media platforms for updates from family and friends? Or do you rely on curated playlists from music apps or movie suggestions from video streaming platforms? We all interact with AI daily, more than we think, and these interactions are only set to become more common.

AI is currently the fastest-growing tech sector in the world, according to an analysis by IDC. It found worldwide revenues for the AI market are estimated to grow 15.2 per cent year-on-year to US$341.8 billion (A$459.2 billion) in 2021, and by 2024, the market is expected to break the US$500 billion (A$671 billion) mark. And Gartner has said AI augmentation will create US$2.9 trillion (A$3.8 trillion) of business value this year alone and the equivalent of 6.2 billion hours of worker productivity globally.

And while there are obvious commercial benefits to AI, its use raises some serious ethical concerns. Philosopher Toby Ord, for example, believes it has the potential to be one of our most significant existential threats, greater even than future pandemics, climate change or nuclear war.


It’s clear that AI is a two-edged sword. In The Business of AI – the third episode in The Business of Leadership Podcast Season 2, hosted by AGSM at UNSW Business School – Professor Nick Wailes, Director of AGSM and Deputy Dean at UNSW Business School, recently interviewed Dr Catriona Wallace, technology entrepreneur and Founder/CEO of Ethical AI Advisory, and Dr Sam Kirshner, Senior Lecturer in the School of Information Systems and Technology Management at UNSW Business School.

What is AI and how can it help?

AI is any software that can mimic or replicate human intelligence, according to Dr Wallace. “A simple way is to look at the components of anything that is artificially intelligent, and that it has data, it has algorithms, it has an analytical capability, a decision-making capability, and then an automation capability,” she says. The fundamental component of most AI technologies today is machine learning, and Dr Wallace says machine learning is simply a software programme that is able to learn of its own accord without needing to be explicitly programmed or reprogrammed by humans. “With every task that it does, it will get smarter and smarter and smarter,” she says.
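To make that idea concrete, here is a minimal sketch, assuming Python with the scikit-learn library and an invented toy dataset (the customer records and the choice of a decision-tree model are purely illustrative, not anything described in the episode). The point it shows is that the program derives its own decision rules from past examples rather than following hand-written logic.

# Minimal sketch (hypothetical data): the model infers its own rules from
# past examples instead of relying on hand-coded if/else logic.
from sklearn.tree import DecisionTreeClassifier

# Invented customer records: [visits_per_month, average_spend_in_dollars]
past_customers = [[1, 20], [2, 35], [8, 150], [10, 220], [3, 40], [12, 300]]
made_purchase = [0, 0, 1, 1, 0, 1]  # 1 = went on to buy, 0 = did not

model = DecisionTreeClassifier()
model.fit(past_customers, made_purchase)  # the "learning" step: rules come from the data

# Ask for a prediction about a new customer the program was never explicitly told about
print(model.predict([[9, 180]]))  # e.g. [1], meaning likely to buy

Feed the same code more or different records and it will derive different rules of its own accord, which is the sense in which such software improves without being explicitly reprogrammed.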

Dr Wallace, who is an Adjunct Professor at AGSM and executive chair of Australia’s largest venture capital fund for startups, Boab AI, says there are three core benefits associated with AI. The main one is efficiency, as AI can automate tasks that were previously manual or fulfilled by inferior, older technology. The other main benefits are associated with analytics and better decision-making, according to Dr Wallace, who says these benefits can be seen in the world of marketing and improved customer experiences.

Dr Catriona Wallace, Adjunct Professor at AGSM and Founder/CEO of Ethical AI Advisory, believes AI still has a way to go before it understands humans better than they understand themselves. Image: supplied

“This is actually the biggest area that we’re seeing AI being deployed in. This is predominantly the personalisation of everything, we call it, which is using machine learning and algorithms to be able to assist enterprises to better understand their customers’ intentions, and then to deliver to them a marketing or a sales opportunity for them to buy,” she says.

Gartner predicts that by 2022, 70 per cent of customer interactions will involve emerging technologies such as machine learning applications, chatbots and mobile messaging. “The advantage to that is it’s likely to be fast, 24/7 [and] robots won’t go on annual leave or sick leave. They’ll always be there. They won’t complain. They won’t have any other human-like challenges. So, more and more, we will see the customer experience becoming automated using virtual assistants and robots to manage that customer experience,” says Dr Wallace.

“But I do believe we have a long way [to go before] these machines really … understand us better than we understand ourselves, to anticipate our needs before we even know we have a need, and to be able to curate or deliver really great offers or products or services to us.”

Read more: How to avoid the ethical pitfalls of artificial intelligence and machine learning

Challenges and concerns associated with AI

While the benefits of AI are promising, there are myriad ethical and other challenges associated with its development and adoption. Many leading thinkers in the field of AI agree that there is a light side to the technology, but an “equal dark side”, says Dr Wallace.

“The dark side is largely because this type of technology is very difficult to understand, to explain, and also to control,” she says. “And ... the fact that it can learn on its own accord means that often the humans who have programmed it will not be able to understand the machines over time as they learn and make their own decisions and eventually not have any need for their human masters.”

To date, there have been very few regulations, laws, rules or guidelines that provide a framework for those who are building or deploying AI to work within. While the Australian Human Rights Commission has produced some guidelines on ethical algorithmic decision-making, and Minister for Industry, Science and Technology Karen Andrews has released a set of ethical AI principles, Dr Wallace says there are no hard rules or laws that govern the technology. “I’m afraid to say that we are not particularly advanced in this field,” says Dr Wallace.

Read more: AI chatbots: How IBM and UNSW are working together to solve industry problems

“The dark side of it comes from incredibly powerful software being used by bad actors, and that could be in warfare, in bioengineering disease, in manipulation of populations and elections as we’ve seen over the past few years. Or it could come from the machines themselves not being aligned with human values and starting to behave on their own in order to perpetuate their own goals.”

Dr Wallace also pointed to the work of Toby Ord, a Senior Research Fellow in Philosophy at Oxford University, and his book The Precipice, which examines existential risks to humanity – one of which is AI (along with nuclear war, climate change, an asteroid colliding with the Earth, pandemics and bio-engineered disease). While academics in the field give the latter five risks between a one-in-100,000 and a one-in-1000 chance of destroying humanity by the end of the century, AI is given a one-in-six chance of causing the destruction of humanity. “AI is now regarded as one of the most serious threats to humanity unless it is controlled. And where’s the leadership? It’s not coming from the tech giants,” says Dr Wallace.

Dr Sam Kirshner, Senior Lecturer in the School of Information Systems and Technology Management, says organisations cannot plug AI into existing processes and expect great results. Image: supplied

How business can put AI to work

Dr Kirshner, whose work focuses on understanding when and how individuals use and engage with AI, observes that most organisations pioneering work in AI tend to be large tech firms, institutions such as banks and consulting firms, or data-driven start-ups. While many organisations in these industries are leading the way in AI, Dr Kirshner says “a lot of companies in between are just not really there.”

This can largely be put down to skill gaps in the workforce around digital capabilities and data literacy. Without these skills, “it’s really hard to actually implement AI”, according to Dr Kirshner, who notes AI is not the kind of technology that can readily be bought and used ‘off the shelf’. Organisations cannot plug AI into existing processes and expect great insights or results: “it comes down to re-imagining business and business models and taking a very structured approach, or else AI just simply won’t scale within the organisation,” says Dr Kirshner.

“What you’re looking for is ‘Goldilocks’ conditions for these organisations. You need to find an application that’s not something that’s so critical to the business that involves hundreds of people. You really need to find something that’s meaningful for a select group of business leaders or champions, where the project will move the needle – but it doesn’t have dozens of people arguing about the accountability or the direction that these projects go. And then once you demonstrate the value of AI in this niche application, more people in the organisation will be willing to actually adopt it.”

For the full interview, listen to the Business of AI episode, or catch up on all the Business of Leadership episodes.
