What do business leaders really need to know about ethics and AI?

While many companies are exploring how to use artificial intelligence, they also need to consider a number of important legal and ethical implications

Organisations are getting better at understanding ethical approaches to artificial intelligence (AI), but they need help to minimise human rights-related risks such as legal breaches, financial penalties and reputational damage.

“Companies are in the early stages of a long journey. Many have started to take action to understand the social and human rights implications of their use of AI, but they generally haven’t been good at taking meaningful action,” said Ed Santow, Australia’s former Human Rights Commissioner and a Visiting Professorial Fellow at UNSW Sydney.

While there is a greater awareness of the social impact of AI, Mr Santow said this impact can be positive or negative, or more frequently, both positive and negative simultaneously. “On the negative side, we’ve seen a more passive kind of awareness-raising so far. It generally hasn’t yet translated into practical action,” he observed.

Mr Santow, who spoke about embedding responsible technology practice into organisations at the AGSM 2022 Professional Forum: Ethical AI in an Accelerating World in Sydney, said there has been a “massive proliferation” of companies, governments and international organisations making ethical frameworks for the use of AI. 

However, empirical research indicates that the vast majority of these ethical frameworks have “no discernible impact at all”, said Mr Santow, who also serves as an Industry Professor at UTS. “I think our big challenge at this point is to take that growing understanding, those undoubted good intentions, and find some practical ways to help companies and government live out those good intentions.”

Regulators are late to the AI ethics party

It’s a well-established fact that government legislation regularly plays catch up with the application and impact of technology in the business world. This is particularly the case with AI, as Mr Santow observed in AHRC reports including Using artificial intelligence to make decisions: Addressing the problem of algorithmic bias and the Human Rights and Technology Final Report, the latter the culmination of a three-year national initiative that made a number of significant recommendations about the ethical use of AI.

One of the observations the report made was that when organisations breached an ethical code, standard or regulation, the negative consequences were often minimal. “Regulators (and I say this as a former regulator myself) have been too slow at enforcing those laws that are there to protect citizens and consumers,” he said. “And indeed, the broader regulatory ecosystem has been too slow to come into force. For that matter, consumers have been kept in the dark about human rights and broader implications of the rise of AI.”

However, Mr Santow said this is changing “really quickly” and regulators are stepping up and becoming much more skillful and effective in enforcing existing laws. “There are new laws coming into effect, particularly in Europe, that have a transnational effect, and we know that citizens and consumers are becoming much savvier around what’s at stake,” he said.


AI ethics risks and challenges

Significant penalties are a more tangible risk for organisations whose use of AI leads to negative outcomes. There have been many cases where algorithmic bias has landed big companies in hot water. Apple, for example, drew the ire of regulators in the US after its credit card was labelled “sexist” because it was found to offer smaller lines of credit to women than to men. Closer to home, the Federal Government had to pay more than $1.8 billion in compensation as a result of its bungled Robodebt initiative, in which an ill-thought-through algorithm targeted the wrong social security recipients in a bid to recover overpayments.

Avoiding the use of AI in ways that cause unfairness, or even discrimination, is also becoming a commercial imperative, said Mr Santow. “Not only is this a human rights issue, but it’s a problem because you as a company are making bad decisions commercially.”

Similarly, Mr Santow has observed that companies are often unable to disentangle their legal obligations from their ethical obligations. “These two things are really different,” said Mr Santow, who explained that legal obligations set out what is lawful and unlawful, and compliance with them is not optional. He gave the example of banks and home loan decisions: it is just as unlawful to discriminate against people of colour using AI as it is using conventional decision-making processes.

While companies are trying to understand the social and human rights implications of their use of AI, Ed Santow said they generally haven’t taken any meaningful action. Image: supplied

“We have tended to treat discrimination caused by AI as an ‘ethical’ problem, but it’s not. That matters. It’s not just an issue of semantics, because we have a choice about whether, let alone how, we comply with ethics. By contrast, the law states you must comply,” he said.

Another challenge is that company leaders “generally don’t know what AI is”, according to Mr Santow. “I don’t mean to say that in a rude way, but the data on this is pretty compelling. Companies are reporting that, at the very senior levels and middle management, they literally don’t know what AI is. And yet, they’re investing really heavily in it. This is a dangerous combination; how can companies set an effective strategy which is deployed accurately and accountably if they don’t know the basic building blocks of what they’re procuring and working with?”

Three ways companies are approaching AI

In his role at the AHRC, Mr Santow saw how organisations in Australia and around the world are approaching AI. Companies generally take one of three approaches, the first of which is a “Zuckerberg-like move fast and break things” approach, according to Mr Santow. “They’re largely unconcerned about ethical principles and even by how the law might apply to them. They have a really cavalier approach. If you’re operating in an area where you’re actually impacting on people’s fundamental rights, I think that is morally bankrupt and unacceptable, and from a legal perspective as well as commercially that approach is going to become more and more problematic. So you don’t want to be in category one.”


The second category involves companies that go to the other extreme, eschewing good and bad applications of AI alike. Often this is the result of a bad experience, or an abundance of caution after seeing other companies attract negative publicity. These organisations “throw the baby out with the bathwater”, and Mr Santow said this, too, is “a bad approach and does not make any sense”.

The third category sees companies take a more nuanced approach, in which they are “willing to treat AI as something more concrete than just magic”, he said. “Too many executives see AI as something that will magically deliver them a solution. While there’s powerful technology at the heart of it, they need to understand the basic building blocks of how AI works and where the risks and opportunities are. The more we see that nuanced approach by companies, the more we’ll see the market start to shift in a positive direction. And these particular companies will also be at a significant competitive advantage,” he said.
