Ethical AI: navigating opportunities and risks in business



Businesses need specific guidelines and leadership strategies to leverage AI ethically and drive innovation and customer satisfaction

The rapid adoption of artificial intelligence (AI) in business presents both unprecedented opportunities and significant risks. As organisations increasingly rely on AI for decision-making, customer satisfaction, and innovation, leaders must navigate the ethical implications of these technologies. The conversation about AI is no longer just about technological advancement but about responsible and ethical usage that ensures positive outcomes for society and the environment.

These issues were the subject of an episode of the UNSW Business School podcast The Business Of, which pulled together some of the best insights from previous guests. Hosted by Dr Juliet Bourke, Professor of Practice in the School of Management and Governance at UNSW Business School, the guests included Stela Solar, Director of the National Artificial Intelligence Centre at CSIRO’s Data61, Dr Catriona Wallace, Adjunct Professor at AGSM and Founder of the Responsible Metaverse Alliance, and Dimitry Tran, AGSM MBA alumnus and Co-founder and Board Director of Harrison.ai. Dr Bourke explored the ethical issues associated with the rise of AI, and what organisations can do in response.

“The more organisations use AI, the more benefits they experience: the kind of benefits include higher customer satisfaction, faster decision-making, more innovative products and services and so on,” said Ms Solar. These advantages can provide a competitive edge in the market, suggesting that AI is a critical tool for business success.

However, with these benefits come significant risks. “The higher we go up in an organisation the less they know about this, but it’s definitely something that should be at the board and at the executive team level,” Dr Wallace said. This disconnect can lead to situations where ethical responsibilities are delegated to engineers, who are already under pressure to deliver technical outcomes efficiently.

Stela Solar, Director of the National Artificial Intelligence Centre at CSIRO’s Data61, said tangible, practical examples of how to implement AI ethically are needed. Photo: supplied

The ethical challenges of AI

Delegating ethical responsibilities to technical teams without proper guidance can have negative consequences. “Often, the responsibility for doing ethical AI is pushed way down to them. And they don’t believe that the senior management really has any idea about this,” said Dr Wallace, who explained that this gap can lead to unintended harm, emphasising the need for ethical leadership at the executive level.

The existential risks associated with AI are another major concern. AI is listed among six core existential risks, alongside threats such as nuclear war and climate change. Dr Wallace referred to Toby Ord’s book The Precipice, which argues that existential catastrophe, to which AI is a leading contributor, carries a one-in-six chance of severely disrupting or destroying humanity by the end of the century.

“Artificial intelligence, however, is not a one-in-a-thousand chance,” said Dr Wallace. “It is a one-in-six chance that AI will cause or go near to causing the destruction of humanity by the end of the century.” This statistic underscores the urgent need for robust regulation and ethical oversight of AI technologies.


8 principles for ethical AI implementation

To mitigate these risks, businesses must adhere to established guidelines for ethical AI implementation. Dr Wallace outlined eight key principles:

1. AI must be built with humans, society and the environment in mind
2. AI must be built with human-centered values in mind
3. AI must be fair and not discriminate
4. AI must be reliable and safe
5. AI must adhere to privacy and security requirements
6. There must be a mechanism to challenge a decision AI has made against a person or a group
7. AI must be transparent and explainable
8. AI (and the organisation that develops it) must be accountable

Dr Catriona Wallace, Adjunct Professor at AGSM and Founder of the Responsible Metaverse Alliance, cited philosopher Toby Ord's estimate that there is a one-in-six chance that AI poses an existential threat to humanity. Photo: supplied

Practical steps for ethical AI

Implementing these principles requires a structured approach, incorporating standardised checklists and centralised governance frameworks, according to Ms Solar. She drew a parallel with the medical field, where checklists have significantly reduced errors and improved outcomes.

Similarly, businesses can adopt checklist approaches to ensure ethical AI practices are consistently applied across the organisation. “What I believe is needed is more tangible, practical examples of how to actually implement this thing and that’s where there is a gap right now, it’s in the how – how to implement things responsibly?” Ms Solar asked.
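One way to make such a checklist concrete is as a deployment gate: each principle must be signed off by a named reviewer before an AI system goes live. The sketch below is purely illustrative – the class, field names and principle labels are assumptions for demonstration, not any standard or the approach Ms Solar described:

```python
# Hypothetical sketch of a checklist-style governance gate: before an AI
# system is deployed, each principle needs a named reviewer's sign-off.
# Structure and names are illustrative only.

from dataclasses import dataclass, field

PRINCIPLES = [
    "human, societal and environmental wellbeing",
    "human-centred values",
    "fairness and non-discrimination",
    "reliability and safety",
    "privacy and security",
    "contestability of decisions",
    "transparency and explainability",
    "accountability",
]

@dataclass
class DeploymentChecklist:
    system_name: str
    signoffs: dict = field(default_factory=dict)  # principle -> reviewer

    def sign_off(self, principle: str, reviewer: str) -> None:
        # Reject anything outside the agreed principle list.
        if principle not in PRINCIPLES:
            raise ValueError(f"Unknown principle: {principle}")
        self.signoffs[principle] = reviewer

    def ready_to_deploy(self) -> bool:
        """Deployment is blocked until every principle has a reviewer."""
        return all(p in self.signoffs for p in PRINCIPLES)

checklist = DeploymentChecklist("triage-model-v2")
checklist.sign_off("fairness and non-discrimination", "ethics-review-board")
print(checklist.ready_to_deploy())  # False until all eight are signed off
```

As in the medical checklists Ms Solar drew on, the value is less in the code than in the discipline: no single team can wave a system through, and every gap is visible.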

Engaging with AI technologies and contributing to their ethical development is crucial, and Ms Solar advised leaders to actively participate in AI discussions and design processes within their organisations. “Please really lean into the AI technology, learn what it is, contribute to how it is designed and used across your organisation so that it can create that value for business and for community.”

The role of leadership in AI

Leaders play a critical role in shaping the ethical use of AI within their organisations. Dr Bourke explained: “Operationalising AI requires leaders to continually stop and ask themselves, is this ethical? And what are the implications for our business, our customers and clients?” This reflective approach ensures that ethical considerations are integrated into every stage of AI development and deployment, she said.

Dimitry Tran, AGSM MBA alumni and Co-founder and Board Director of Harrison.ai, said AI can help improve clinical workflows and patient outcomes. Photo: AGSM

Ms Solar underscored the importance of leadership in developing and implementing ethical AI. “I truly believe AI is only as good as we lead it, and that’s what we’re seeing even across industry, that when organisations adopt a mindful leadership approach to the way they’re designing systems and developing AI technology that better outcomes are had,” she said.

Case study: AI in healthcare

The healthcare sector provides a practical example of ethical AI implementation. Tran runs three healthcare technology companies, including Harrison.ai, a groundbreaking venture that combines human intelligence with artificial intelligence. He explained how he had seen AI improve clinical workflows and patient outcomes. “What we do is we provide a co-pilot that can detect findings alongside the doctors,” he said. “For example, signs of pneumonia on a chest x-ray, or signs of stroke on a brain CT, and that will help the clinician to make a more accurate diagnosis in a timelier manner.”


However, Tran also acknowledged the challenges of maintaining AI performance over time. One of the things he has witnessed is AI getting through regulatory approval and into users’ hands – but then things stop. “It has what we call performance drift over time, because the users use it in a different way to the population that the data was trained on,” explained Tran, who said continuous monitoring and updating of AI systems are crucial to ensure their effectiveness and reliability.
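The performance drift Tran describes can be watched for by comparing the score distribution a deployed model produces in production against the distribution seen at validation time. The sketch below is a minimal, assumed illustration of that idea – the function names, sample data and threshold are invented for demonstration and are not Harrison.ai’s method:

```python
# Hypothetical drift check: flag a deployed model for human review when
# its live prediction scores shift far from the validation-time baseline.
# All names, data and the threshold are illustrative assumptions.

from statistics import mean, stdev

def drift_score(baseline: list[float], live: list[float]) -> float:
    """Shift in mean prediction score, in baseline standard deviations."""
    return abs(mean(live) - mean(baseline)) / stdev(baseline)

def needs_review(baseline: list[float], live: list[float],
                 threshold: float = 2.0) -> bool:
    # A shift beyond the threshold suggests the live population differs
    # from the one the model was trained and validated on.
    return drift_score(baseline, live) > threshold

# Validation-time scores vs. scores from a new user population.
baseline = [0.10, 0.12, 0.11, 0.13, 0.09, 0.11, 0.12, 0.10]
live = [0.25, 0.27, 0.24, 0.26, 0.28, 0.25]
print(needs_review(baseline, live))  # the large mean shift prints True
```

Real monitoring pipelines use richer statistics than a mean shift, but even a check this simple turns “continuous monitoring” from an aspiration into a scheduled, auditable step.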

AI is poised to be a defining technology in the business world. However, its potential can only be fully realised if it is used responsibly and ethically. Business leaders must prioritise ethical considerations, ensure robust oversight, and engage in continuous learning to navigate the complexities of AI. As Dr Bourke put it, that means continually stopping to ask whether the technology is ethical, and what the implications are for the business, its customers and clients.
