Rolls-Royce's Aletheia Framework: pioneering safety-critical AI
With its Aletheia Framework, Rolls-Royce is at the forefront of implementing an ethical AI framework that creates safer work environments
A paradigm shift is underway in the ever-evolving landscape of artificial intelligence (AI): the emergence of safety-critical AI. This approach aims to integrate AI into business processes, such as manufacturing jet engines, where errors can affect safety. Such processes demand exceptional reliability and accountability before safety regulators will approve them. These developments in safety-critical AI raise profound ethical questions and prompt a re-evaluation of current regulatory frameworks.
Rolls-Royce, whose products power people on the ground, at sea and in the air, and provide critical backup power for hospitals and data centres, has pioneered an ethical AI framework known as Aletheia. This free-to-use tool, designed by Lee Glazier, Head of Digital Integrity, and his team, is a 32-step process that embodies a strategic approach to AI ethics. It emphasises brevity, clarity, evidence, and practicality.
With so many organisations finding it difficult to approach ethical AI and the adoption and integration of various ethical frameworks, UNSW Business School recently spoke with Mr Glazier about the company’s journey in developing and implementing Aletheia, the challenges they faced and the lessons for business.
What is the Aletheia Framework?
Rolls-Royce began developing Aletheia about four years ago to address the potential ethical implications of automation-driven job losses at the company. It is essentially a quality assurance system for AI in safety-critical tasks that fits on a single page, emphasising evidence and a central trust model. Mr Glazier explained that the framework was designed to go beyond theory: a clear one-page process any organisation can follow to ensure its AI is accurate, well managed, and has a positive impact on the world.
“It's very accessible and includes an area on trust. So, how can you trust your AI? It also goes beyond the ethics into how we can realise ethics,” he said. “We went beyond pure guidance into ‘I want a piece of evidence to show that you have realised that ethic in your project’.”
But according to Mr Glazier, the Aletheia Framework isn’t just a testament to the capabilities of AI; it is a comprehensive quality assurance system that addresses the challenges of AI’s ethical implications, and so can be used by anyone looking to incorporate automation into organisational decision-making.
Given the increasing complexities of navigating automation in an increasingly digital world, Mr Glazier said the importance of an ethical framework that goes beyond ethical principles, incorporating realisation principles and necessitating tangible evidence to prove adherence to ethical guidelines, cannot be overstated. Mr Glazier also explained that collaboration was key to creating the framework. He highlighted the collaborative effort involving departments like ethics, data science, manufacturing engineering, HR, and crucial discussions with unions and workers' councils. “It was very much a collaboration from inside Rolls-Royce,” he said.
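The pairing of each ethical principle with tangible evidence is the framework's distinctive mechanism. Rolls-Royce has not published its internal data model, so the following is purely a hypothetical sketch of how a "principle plus realisation evidence" record could be represented; the class, field names, and example values are all illustrative assumptions, not part of Aletheia itself:

```python
from dataclasses import dataclass, field

@dataclass
class Principle:
    """One ethical principle paired with the evidence required to realise it."""
    name: str
    realisation: str  # what the project must actually do to satisfy the principle
    evidence: list[str] = field(default_factory=list)  # artefacts proving it was done

    def is_realised(self) -> bool:
        # A principle only counts as met once at least one piece of evidence is attached.
        return len(self.evidence) > 0

# Illustrative example: the principle is unmet until evidence is recorded.
bias_check = Principle(
    name="Fairness",
    realisation="Test the model for bias across operator demographics",
)
print(bias_check.is_realised())   # False: guidance alone, no evidence yet
bias_check.evidence.append("bias_audit_report_v1.pdf")
print(bias_check.is_realised())   # True: the ethic has been 'realised'
```

The point of the sketch is the design choice Mr Glazier describes: a principle without attached evidence is treated as unfulfilled, which is what moves the framework from pure guidance to auditable practice.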
He also acknowledged that while most Rolls-Royce projects aren't AI-related per se, the framework provides a tailored approach for the few AI projects that could have ethical implications. He explained, “For the tiny point one of a percentage of projects in Rolls-Royce that may have an ethical impact, we will go away and fill out the ethical framework.”
What purpose does the Aletheia Framework serve?
At its core, the Aletheia Framework seeks to bridge the gap between extensive, high-level ethical guidance and the day-to-day work of agile data scientists and risk owners in organisations. Mr Glazier highlighted a key challenge: sifting through hundreds of pages of guidance on the use of AI.
“We took all the guidance and condensed it onto a single page – whether you were looking at the Good Cooperation, the EU, European Parliament – all recognised bodies producing fantastic guidance, but it was generally 100 pages long, and no data scientist is going to read all 100 pages,” he said.
Addressing concerns about AI replacing jobs, Mr Glazier argued that, in many ways, it would be more unethical to keep workers in tough conditions when a machine could do the job just as easily (and in many ways better) without putting a human at risk of serious injury.
Deploying AI for tasks like manual inspections in dark rooms is almost an ethical obligation given the nature and risk of those jobs, especially when the AI performs as well as, or in some cases better than, existing human inspectors, he said. "And we've demonstrated that capability," explained Mr Glazier. "So, it's almost ethical to stop doing that. If you can have AI do that job, you should."
So, Rolls-Royce's consideration of ethics intends to safeguard the wellbeing of human workers. Emphasising that the individuals involved in these tasks would transition to more humane and value-added roles, Mr Glazier added: “We had to then look at caring for those people whose jobs will be impacted.”
Regarding the implementation of the Aletheia Framework, Mr Glazier acknowledged initial scepticism, stating, “I think probably the first challenge would have been 'Oh no, not another piece of governance that we're going to have to comply with.'”
However, he clarified that the framework includes screening questions to determine the potential for ethical impact and aims to shorten the time needed for overall governance. "We have some screening questions up front, which means, as I said before, very, very few projects are deemed to have the potential for an ethical impact," he said.
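This up-front triage can be sketched as a simple gate: answer a handful of screening questions, and only a "yes" routes the project into the full 32-step process. Rolls-Royce's actual screening questions are not public, so the questions and function below are hypothetical illustrations of the pattern, not the framework's real content:

```python
# Hypothetical screening questions: illustrative only, not Aletheia's actual list.
SCREENING_QUESTIONS = [
    "Could the system's output affect anyone's safety?",
    "Could the system's deployment affect anyone's employment?",
    "Does the system make or recommend decisions about people?",
]

def needs_full_framework(answers: dict[str, bool]) -> bool:
    """A single 'yes' routes the project into the full ethics process."""
    return any(answers.get(q, False) for q in SCREENING_QUESTIONS)

# A routine project answers 'no' to everything and skips the extra governance.
routine_project = {q: False for q in SCREENING_QUESTIONS}
# An inspection-automation project affects jobs, so it triggers the framework.
inspection_ai = dict(routine_project, **{SCREENING_QUESTIONS[1]: True})

print(needs_full_framework(routine_project))  # False: no further governance needed
print(needs_full_framework(inspection_ai))    # True: fill out the full framework
```

The design intent matches Mr Glazier's point: because most projects fail the gate, the framework adds almost no overhead to the vast majority of work while still catching the small fraction with potential ethical impact.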
Despite initial concerns about matters like job losses, he said there was minimal pushback, with positive responses from both younger and older generations within the company's union. Mr Glazier noted, "Graduates, the people embracing all this tech, responded very positively to the idea of ethics. The fact that the unions had been involved was also seen as a positive for the older generation."
Looking ahead at what’s next for Aletheia, Mr Glazier said that advances in safety-critical AI are positioning it for integration into fully autonomous processes where errors could directly affect safety and people's lives. To address this, he is building on Aletheia to create a more comprehensive framework consisting of six components: one focusing on ethics and the others facilitating the certification of AI for direct use in safety-critical environments.
“So, in simple terms, six frameworks, one of which is free because it's the ethical framework, and then five more to be able to then look at a route to certification of AI,” Mr Glazier explained, emphasising the need for businesses and policymakers to extend their focus towards building safety-critical considerations into the very fabric of AI.
Lessons and implications for business
The Aletheia Framework acts as a catalyst for understanding and safely utilising AI, addressing ethical concerns while enhancing the efficiency of complicated business processes. As such, Mr Glazier recommended that all businesses seek to understand AI, use it ethically, and embrace it. “It makes people think it has a set of guidelines, and then people know that I'm going to be looking at that project to ensure that they've done various things in their project to make sure it is ethical,” he said.
Furthermore, Mr Glazier highlighted a key milestone for the company: negotiating a route to fully autonomous safety-critical AI with regulators, a global first. Engaging with regulators, including the Ministry of Defence, reflects a proactive approach to shaping the regulatory landscape.
“So, I've sat down with three plus the Ministry of Defence, who are very interested in what we've done there. And the regulators I've spoken with are very positive,” he shared. This achievement has broader implications for businesses, emphasising the necessity of engaging with regulators in the nascent stages of cutting-edge AI development.
The response from regulators stems from Rolls-Royce's demonstration that governance can effectively be implemented around safety-critical AI for certification, setting a precedent and signalling to the industry that such technologies can be regulated appropriately, he added.
“So that's the first thing. Safety-critical AI will need regulating and certifying where we're already sitting down with regulators to discuss that,” Mr Glazier affirmed. This represents a pivotal lesson for businesses venturing into safety-critical AI, emphasising the importance of proactive engagement with regulatory bodies.
In this way, Rolls-Royce's journey into safety-critical AI offers vital business lessons, highlighting the imperative to integrate safety considerations into the AI development process, showcasing the significance of ethical frameworks, and setting an example of proactively engaging with regulators.