How ethical leadership creates responsible AI

Download The AGSM Business of Leadership podcast today on your favourite podcast platform.


Artificial intelligence (AI) can only be as ethical and responsible as the humans who design, develop, and deploy it

AI and machine learning have the power to transform human lives and work for the better, but they can also amplify our worst prejudices and biases. In recent years, for example, biased AI algorithms have unfairly implicated members of minority groups in crimes they did not commit, and facial recognition systems have struggled to accurately identify people of colour. It is therefore crucial for business leaders to approach AI ethically: to recognise inherent biases, avoid such pitfalls and minimise harm.

AI ethics refers to the principles and values that guide the design, development, deployment, and use of AI systems. It involves weighing AI's potential ethical implications and consequences, and making decisions that prioritise fairness, transparency, accountability, and respect for human rights. Ethical considerations in AI include issues such as bias and discrimination, privacy and data protection, accountability and responsibility, and the impact of AI on society. To support this, the Australian Government introduced the AI Ethics Framework, which helps businesses and governments design, develop, and implement AI responsibly.

However, some experts have argued that a voluntary framework may not be enough, given the pace at which the technology is being created and deployed. In a recent episode of the AGSM Business of Leadership podcast, Stela Solar, a UNSW Sydney graduate and Director of the National Artificial Intelligence Centre at CSIRO's Data61, and Toby Walsh, UNSW Scientia Professor of Artificial Intelligence, joined host Dr Lamont Tang, Director of Industry Projects at AGSM @ UNSW Business School, to explore these critical challenges and whether AI practitioners should take a Hippocratic Oath.

Stela Solar, Director of CSIRO's National Artificial Intelligence Centre, says AI systems must be built on equitable data by diverse sets of teams to avoid propagating bias. Photo: CSIRO

Diversity to minimise AI bias and other ethical pitfalls 

According to Ms Solar, AI is “just as theoretical, creative, and philosophical as it is technological”. Ethical considerations and technology must therefore work together to improve business outcomes and make the world a better place. She also emphasised AI's benefits, such as greater efficiency and more accessible services. But she stressed that truly responsible AI requires careful consideration of the data used to train AI systems, and diverse teams to co-design and develop the technology in order to intercept the propagation of biases.

“I see a lot of benefits… one of the ones I'm most excited about is that AI could help with accessibility of services, benefits and experts,” she said. Ms Solar expressed particular fascination with generative AI: popular services like Grammarly and Wrike already use it, and its output is expected to improve exponentially through 2030, surpassing what humans can produce. But she said it was also important that developers use robust and unbiased data sets to train AI models, as prejudice and under-representation in the data can compound biases in the future.

“AI systems are really reliant on the data that they're trained on, and so one of the major areas to mindfully navigate is the data that AI systems are built upon… are there biases latent in the data, is there under-representation of certain data sets?” 

Read more: Steer it – don’t fear it: navigating AI with confidence

Ms Solar continued: “And so this is an area to very mindfully navigate, where we need to do a lot better in having complete trust in robust data sets… if AI models are built on data, and data by default is historical, our history has not been equitable or fair. And so straight away, if the models are built on this non-equitable data, it means that we risk the potential to propagate biases into the future.”
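Her point can be made concrete with a simple data audit. The Python sketch below is illustrative only (the loan-approval dataset, column names and numbers are hypothetical, not from the podcast); it shows the two checks she describes: whether a group is under-represented in the training data, and whether historical outcomes already encode an inequity that a model trained on them would learn and propagate.

```python
# A minimal, hypothetical sketch of the data audit described above.
import pandas as pd

# Toy historical loan-approval records; in practice this would be the real training set.
df = pd.DataFrame({
    "group":    ["A"] * 80 + ["B"] * 20,                   # group B is under-represented
    "approved": [1] * 60 + [0] * 20 + [1] * 5 + [0] * 15,  # historical decisions
})

# Check 1 - under-representation: how is the data split across groups?
print(df["group"].value_counts(normalize=True))  # A: 0.80, B: 0.20

# Check 2 - latent bias: do historical outcomes differ sharply between groups?
print(df.groupby("group")["approved"].mean())    # A: 0.75, B: 0.25

# A model trained naively on this data would treat the 75% vs 25% approval gap
# as ground truth, propagating the historical inequity into future decisions.
```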

The solution, she said, was to ensure more diverse teams are the ones developing AI systems. Ms Solar explained: “And so one of the ways that we can actually intercept the propagation of biases is by ensuring that there are diverse teams who are co-designing and developing the usage and the technology of AI. I really see diversity not only as a thing that we should do because everyone should thrive in this world and have opportunities.”  

Professor Walsh agreed, emphasising the importance of diversity in the field and noting that questions about fairness, equity and privacy are the ones that should trouble anyone introducing any new technology. Yet many minority groups and people of colour remain poorly represented among AI practitioners, which makes building diverse AI development teams to intercept biases all the more essential.

He explained: “Diversity is a fantastic thing to focus on, and it's worth pointing out that we really struggle to deal with diversity in the field... we still only have about one in five people in the field who are female.”  

But it’s not just gender that is poorly represented. Professor Walsh continued: “Many minority groups, people of colour, and other groups are poorly represented, and so it's something that you've really got to put a lot of effort into. If you're working on a project, right at the beginning put the effort into finding that diverse team. It's not easy, but it's worth the payback.”

UNSW Sydney's Professor of Artificial Intelligence Toby Walsh says AI is going to have a very profound impact on our society, but equally, we get to shape that technology. Photo: Supplied

Ethical leadership needed to deliver responsible AI 

Both Ms Solar and Professor Walsh stressed the importance of ethical leadership in developing and using responsible AI. For Ms Solar, that means building complete trust in robust, unbiased datasets and training AI models on them to minimise bias; for Professor Walsh, it means putting in the effort to assemble a diverse team at the start of any AI project.

“I truly believe AI is only as good as we lead it, and that's what we're seeing even across the industry, that when organisations adopt a mindful leadership approach to the way they're designing systems and developing AI technology that better outcomes are had,” said Ms Solar. “And so right now it's imperative for leaders to actually step into the role of leading and shaping how AI is used across their organisation.” 

“It almost brings the question forward of, is it responsible AI that we're talking about here? Or is it responsible human? Because so much of it is dependent on what we choose, what we decide, what values, what decisions we put in the system,” she added.

So, for leaders in charge of data-driven organisations, does this mean they should take an AI Hippocratic Oath? While there’s an ongoing debate, Ms Solar said this might not be the best approach. She explained: “What I believe is needed is more tangible, practical examples of how to implement this thing that you have taken an oath to do, and that's where there is a gap right now.  

“It’s in how to implement things responsibly: what is the checklist that people go through? What are the questions to ask? That is where the gap is. I do believe most folks have a positive intention in using AI, and unfortunately, most of the challenges with AI are inadvertent.”

She continued: “I think the Hippocratic Oath is a good symbolic gesture that makes us think first of responsible AI, but there's still this gap of how to actually do it, and that's where the standards that are coming, I think, are going to be helpful. And then we're needing a lot more work across industry, research and academia in the space of developing approaches to applying AI in a responsible way.”

Lamont Tang, Director of Industry Projects and Entrepreneur-in-Residence at AGSM @ UNSW Business School, says it is important that leaders ensure they are ethically deploying AI. Photo: Supplied

Responsible AI as a business advantage 

Professor Walsh said one important idea that can help promote more ethical and responsible AI practices is to see responsible AI as a business advantage. “I'm convinced that there will be companies in the future that gain greater business because they have taken a more responsible stance than their competitors, and you already see this unfolding,” he explained.

“You already see there are certain tech companies that treat your privacy with much greater respect than others, and that is a business advantage. Your customers, I think, are going to start increasingly voting with their feet and choosing those businesses that do behave with technology more responsibly than others,” he said. 

Another critical (but often overlooked) idea is that technology doesn’t just shape society; society also decides how to shape technology. “AI is one of those technologies that's going to have a very profound impact upon our society, but equally, society gets to shape technology. We get to choose where we introduce technology into our lives and where we don't, and how we do and how we do not. And these are really important conversations that we should all be having,” explained Professor Walsh.

Subscribe to BusinessThink for the latest research, analysis and insights from UNSW Business School

“Unfortunately, in the past, it's been too often white male people in rooms in Silicon Valley like myself who have been making many of the decisions. And since this technology is going to be touching all of our lives, all of us should be involved in the conversations, so there is a lot to be talked about, and I encourage everyone to join those conversations,” he said.  

Stela Solar is a UNSW Sydney graduate and the Director of CSIRO's National Artificial Intelligence Centre, with the mission to accelerate positive AI adoption and innovation that benefits business and the community. Professor Toby Walsh serves as Chief Scientist of the AI Institute at UNSW, a collective of approximately 300 academics from various faculties and 50 research groups dedicated to investigating and implementing AI; the institute's primary focus is to advance AI development responsibly and to enhance activities across the university. Lamont Tang is the Director of Industry Projects and Entrepreneur-in-Residence at AGSM @ UNSW Business School.

For more information, listen to the full podcast or visit the UNSW AI Institute.
