How leaders should weigh up the risks and rewards of AI
What do leaders need to consider to be able to take advantage of AI, while also overcoming its complex challenges?
Technology has transformed the world in more ways than one – at times with unintended consequences. When the Jumbo Jet was first introduced in 1969, it made the world smaller and travel far more accessible. But in 2020, it helped accelerate the global spread of COVID-19.
In his keynote session at the AGSM @ UNSW Business School 2022 Professional Forum, Scientia Professor Toby Walsh, UNSW Laureate Fellow and author, called on leaders to consider the unexpected outcomes that the technologies they build and implement might bring about.
“Technology is not additive but vast and unpredictable,” he said. “It's easy to predict the first order effects of introducing autonomous trucks into your business – you’re going to need to employ fewer truck drivers. That's an easy prediction. But what is the unexpected impact this is going to have in the longer term?”
It is one of the most challenging and important areas for leaders to consider – balancing the rewards of technology with its potential complex risks.
Key considerations for ethical AI
Leaders face several different challenges when it comes to developing responsible AI practices and technologies.
A panel of experts at the Professional Forum explored how leaders can manage these complex challenges, take advantage of the opportunities – and what they need to think about next when it comes to the evolving technology.
“The risks and rewards of AI: making informed decisions in the era of artificial intelligence” panel included Professor Mary-Anne Williams, UNSW Michael J Crouch Chair in Innovation and Deputy Director, UNSW AI Institute, Associate Professor Sam Kirshner, School of Information Systems and Technology Management, UNSW Business School, and Lorenn Ruster, Responsible Tech Collaborator at the Centre for Public Impact.
The first challenge they raised was the question of transparency, and how transparent leaders should make their AI. Ethical frameworks suggest businesses adopt a high level of transparency with their customers and stakeholders when it comes to AI. But one big challenge for leaders will be managing competitive pressure, according to A/Prof. Kirshner.
“If your competitors are less transparent, and you can get away with it, there is that temptation. A lot of your competitors are really going to be sneaky with AI, and it's up to you whether you want to rise above that as leaders.”
Overcoming inherent bias in AI
Another key consideration for leaders in charge of developing and implementing AI is bias. Humans have over 100 cognitive biases, which are embedded in the data we collect and store and the technologies we structure and build, Prof. Williams said.
The real challenge is that once embedded in the technology, bias is amplified and scaled. Something that seems quite small and trivial can become very significant and impact a lot of people. Ms Ruster warned leaders to consider bias and nuances from the very beginning of any technology project.
“When an AI model is created based on data coming from Western cultures, for example, that data is generalised to all countries or cultures,” she said. “If this bias is embedded in automated vehicle design for instance, this could cause issues around the world. Different cultures have different views of what safety looks like, how cars and people interact and how road rules work. These nuances shouldn’t be abstracted from how we think about and design AI.”
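The amplification the panel describes can be sketched in a few lines. This is a hypothetical illustration, not any system discussed at the Forum: a trivial model that always predicts the majority outcome in its training data turns a modest 60/40 skew in the historical record into a 100/0 skew in its decisions.

```python
from collections import Counter

# Invented historical data: 60% of past applications were approved.
historical_decisions = ["approve"] * 60 + ["decline"] * 40

majority_label = Counter(historical_decisions).most_common(1)[0][0]

def model(applicant):
    """Toy classifier: ignores its input and returns the majority label."""
    return majority_label

predictions = [model(a) for a in range(100)]

data_skew = historical_decisions.count("approve") / len(historical_decisions)
prediction_skew = predictions.count("approve") / len(predictions)
print(data_skew, prediction_skew)  # 0.6 vs 1.0 – the skew is amplified
```

A real model is rarely this crude, but the mechanism is the same: a pattern that is merely common in the data can become near-universal in the system's behaviour, which is why small biases scale into significant ones.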
One answer to overcoming bias is synthetic data – artificial information generated by a machine learning model that developers and engineers can use in place of real, historical data. Synthetic data can help reduce bias, and it also eases the challenges around data access and the concerns around privacy.
Prof. Williams added: “These new methods allow you to generate data that you can share or use to experiment. We all need to be doing more business experiments, because that's where the innovation is lurking. And that is where you can exploit it to grow your business.”
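A minimal sketch of the synthetic-data idea, using only the Python standard library. The group labels and salary figures are invented for illustration; real pipelines use far richer generative models, but the shape is the same: fit statistics from the real records, then sample artificial records with the skew corrected.

```python
import random
import statistics

random.seed(42)

# Invented "real" data: one group is heavily over-represented (80 vs 20).
real_data = [("group_a", random.gauss(75_000, 8_000)) for _ in range(80)] + \
            [("group_b", random.gauss(74_000, 8_000)) for _ in range(20)]

# Fit simple marginal statistics from the real records...
salaries = [s for _, s in real_data]
mu, sigma = statistics.mean(salaries), statistics.stdev(salaries)

# ...then generate a synthetic set with a balanced group attribute, so
# downstream experiments aren't dominated by the over-represented group.
synthetic = [(group, random.gauss(mu, sigma))
             for group in ["group_a", "group_b"] * 50]

counts = {g: sum(1 for grp, _ in synthetic if grp == g)
          for g in ("group_a", "group_b")}
print(counts)  # {'group_a': 50, 'group_b': 50} – equally represented
```

Because the synthetic records are generated rather than collected, they can be shared for the kind of business experiments Prof. Williams describes without exposing the individuals behind the original data.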
Closing the diversity gap
One of the significant challenges in AI is a lack of diversity. Bridging this gap is becoming increasingly important, not only to further reduce bias, but also to allow leaders to make better, more well-rounded decisions. Prof. Williams suggested that leaders need to think differently about how they can encourage people from a range of backgrounds and perspectives to get involved.
“We know AI is not just a technical problem, in fact, I would argue that’s the easy part. We need to bring more diverse talent and perspectives into the design and building phases. We've got to understand what gets different people excited about technology and engage them early,” she explained.
As investors, leaders also have to think carefully about the kinds of businesses they invest in – supporting those that focus on building diverse teams, rather than those that don’t.
Turning responsible AI into a competitive advantage
The broad-scale application of AI within a business context can bring a raft of different opportunities to organisations, both from a customer-facing and a process-automation perspective, according to A/Prof. Kirshner. “Typically, when we think of AI, we think of chatbots that interact with consumers, and all the product recommendations we get after every product or service we use,” he said.
“But then there’s the more boring AI, which you can use to increase revenues and create new products, achieve efficiencies and lower costs. The question is, how do you bring that into your organisation?”
Technology pioneer Reejig is using AI to help businesses reframe how they think about people’s potential. The company harnesses people’s skills, experiences and passions so that, as companies evolve, leaders can better deploy talent into different roles – helping businesses redistribute and leverage existing potential using AI.
Although A/Prof. Kirshner predicts that eventually every business will turn to AI to improve its bottom line, the technology presents a huge competitive advantage today – especially for those who take responsible and ethical considerations seriously, given consumers’ increasing awareness of data manipulation and unethical practices.
“If you meaningfully invest in sound, responsible AI principles, and lay the foundations for responsible business practices, you will have a real source of competitive advantage when it really starts to matter from the perspective of the consumer.”
While the current focus on minimising harm is an important first step in developing ethical technology practices, the journey doesn’t end there. Getting clarity around what responsibility means is increasingly critical to overcoming some of the challenges AI presents, according to Ms Ruster.
“I think the next part is, how do we move beyond just compliance to a broader notion of responsibility? How can these tools help us get to a world that we want? How do we get to a limitless upside of what's possible and think critically about that?”