Empire of AI: what OpenAI means for global governance and ethics

UNSW Sydney professors and investigative journalist Karen Hao delve into the rise of OpenAI and discuss the environmental, regulatory and ethical implications for humanity

The AI industry, along with nearly every other sector, is in the midst of a revolution fuelled by the rapid rise of large language models (LLMs) and the race toward artificial general intelligence (AGI). However, a common misconception is that OpenAI’s ChatGPT represents the only form of artificial intelligence available today. In reality, a wide range of AI models and approaches exist – some of which may be more sustainable in terms of energy use and social impact. OpenAI, however, has chosen a path that critics argue carries significant environmental and human costs.

In a new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, investigative journalist Karen Hao, a former writer for the MIT Technology Review, delivers an eye-opening exposé, praised by The New York Times and the Financial Times, arguing that OpenAI and companies striving to be like it are not neutral innovators but the architects of a new digital empire with profound moral, environmental, and political consequences.

“We really need to start thinking of companies like OpenAI as new forms of empire… more powerful than most nation states, if not all nation states in the world,” said Ms Hao, who was recently interviewed at UNSW Sydney by Scientia Professor of Artificial Intelligence Toby Walsh at her book’s launch, hosted by the UNSW Centre for Ideas.


Ms Hao, who has made the TIME100 AI list, traces how OpenAI was originally founded as a nonprofit with the mission of advancing intelligence for the benefit of humanity rather than financial gain. Yet, in recent years, the startup has transformed into a commercial powerhouse valued at US$300 billion (as of March 2025), striking up business partnerships with tech giants Microsoft and Amazon in the global arms race for AI dominance.

The human cost behind OpenAI’s success 

Ms Hao was the first journalist to profile OpenAI and its high-profile CEO, whose lack of transparency, she said, long obscured key issues – many of which have only recently come to light. “Altman is a politician. That's the best way to understand [him]. He's just really good at telling stories that persuade people that he's on their side," explained Ms Hao. "But the problem is that he ends up telling different stories to different people."

Speaking on current trends in Silicon Valley, Ms Hao said: “Everything is so surreal, because it's kind of like this weird fun-house mirror effect where all the world's extremes are kind of being amplified in Silicon Valley… the fact that you can tweak just one thing in your AI model, and suddenly it affects billions of people's experiences around the world. Everything just feels a little upside down.” 

AI systems such as those developed by OpenAI are often portrayed as self-learning, but in reality, they rely heavily on human labour. Workers, many hired in countries including Kenya and Chile, are employed to label, filter, and process the massive datasets that make training possible, as well as to moderate content. Ms Hao recounted several cases where workers in the Global South were tasked with reviewing disturbing online content to train AI moderation systems. These workers face long hours, low pay, and exposure to psychologically harmful material.


AI is not fully autonomous (at least not yet), and so each query to a system like ChatGPT draws on countless hours of human input in addition to the computing power behind it. Prof. Walsh, for example, has written an article calling the appropriation of creative work to train these models the greatest heist in human history. "How authors', artists', musicians' intellectual property is just [taken] without consent, without compensation and with callous disregard," he said.

“I mean, knowing that what they're doing is probably not fair use, by any definition of the word ‘fair.’” To illustrate, AI company Anthropic recently agreed to pay US$1.5 billion to settle a class-action lawsuit brought by a group of authors who accused it of using their content to train its AI model without permission.

Another criticism centres on the way AI-driven applications are designed. Companies often employ behavioural scientists to optimise digital products for engagement, sometimes in ways that encourage compulsive use. This dynamic raises questions about user wellbeing, attention, and the ethics of making tools intentionally “sticky”.

During a panel session at the same event, UNSW Sydney Psychology Professor Joel Pearson spoke about the impacts of AI on human health and wellbeing: “Most of the things we’ll see on social platforms are generated by AI to trigger you. We need to be careful about that. We need to be aware of the way that uncertainty and change are affecting us.”

UNSW Sydney Psychology Professor Joel Pearson said AI is used to generate social media content that is deliberately designed to trigger responses from users. Photo: UNSW Centre for Ideas

“There are lifestyle things you can do – sleeping and eating well, being with family and friends – and paying attention to the mental health side. I think the next decade or two, these things are going to become more important than ever before.”

The environmental costs of ChatGPT remain undisclosed  

Beyond labour, there are also serious environmental concerns. To date, OpenAI has not disclosed GPT-5’s energy use, which some have said is likely higher than that of previous models. Training and running large-scale AI models require substantial amounts of electricity and water, both of which have measurable environmental footprints.

Massive data centres consume energy not only for computation but also for cooling, often straining local water supplies. Experts warn that these costs are disproportionately borne by communities that host data infrastructure or provide critical mineral resources, especially, once again, in the Global South. Even a simple interaction, such as asking a chatbot a single question, consumes energy and water. At scale, these small interactions add up to significant resource use.

Investigative journalist and former writer for the MIT Technology Review, Karen Hao, said there is a limited understanding of LLMs because the companies that create them paint the models as flawless. Photo: UNSW Centre for Ideas

“There are communities around the world that are literally competing with computers for drinking water," Ms Hao said. "Technology serves people when you first centre people and the challenges they face, and then you find creative ways of solving those challenges… instead of trying to build an everything machine that now lacks product-market fit.” 

But it doesn’t have to be this way. Prof. Walsh echoed her concerns, adding: “What gets me here is that it's fresh water [being used to cool] data centres. You don't have to use people's water supply.”

How can Australia use AI tools ethically and sustainably? 

With AI’s rapid adoption, abandoning it is unrealistic. Instead, experts argue for responsible use – applying AI tools where they add real value, while supporting models that prioritise sustainability, labour rights, and social justice. This means engaging companies in open dialogue and acting on those values.

Importantly, AI can play a constructive role in tackling some of our biggest societal challenges. When designed and deployed responsibly, AI systems can help monitor environmental change, optimise energy use, and support climate resilience efforts. The key, the speakers argued, is ensuring that AI development aligns with these broader goals of human and ecological wellbeing rather than undermining them by focusing on profit and power motives.


“Open source [an AI system that is freely available to use, study, modify, and share] is a perfect way to start breaking up that monopoly," said Ms Hao. "One of the reasons why we do have a very limited understanding of the limitations of these models is because we don’t have access to the models other than the companies that have every incentive to paint the models as flawless.”

She also argued for more public AI research. “We also need development of different types of AI approaches, more AI ideas in the space," she said. "We need more certification bodies and then increased transparency across all of the aspects of what tech companies do.”

Prof. Walsh added a local perspective, asserting that Australia could be a world leader in AI readiness and integration. "There’s a massive opportunity there for Australia to be a world leader in this human side of things, getting AI ready, understanding and building these change models. We can export that," he said. “If I were in government, I would start with something like [AI change management] and focus on people and help people through these next two decades.”

Prof. Walsh concluded the talk by stressing the importance of agency, collective action, and individual responsibility in shaping the role of AI in our lives. “What role do you want AI to play in your life, in your future? We have agency," he said. 

UNSW Scientia Professor of Artificial Intelligence Toby Walsh said Australia has a "massive opportunity" to be a world leader in the human side of AI change management. Photo: UNSW Centre for Ideas

"We can come together as communities – at work, at school, at home, in our social spaces – and take collective action. If you want better policies, better systems, and better outcomes, you can act. We also have individual responsibility and power: to set boundaries, to decide where we let AI into our lives, and where we don’t, so that we feel comfortable with the future.” 

Finally, he reminded the audience not to forget that, despite all the hype, AI remains just a tool. “It allows us to do more than we could on our own, but like smartphones, there are limits to how much we should let it into our lives. We should use it to empower our decision-making – it’s still our choice," said Prof. Walsh.

Australia can lead the world in AI regulation  

Australia does not yet have an AI-specific law. In recent years, the government has introduced a set of AI Ethics Principles and a Voluntary AI Safety Standard. It has also proposed a set of mandatory guardrails for high-risk AI, though at the time of writing, the Productivity Commission has warned the government against pursuing such a proposal, recommending mandatory rules only as a “last resort”.

UNSW Law & Justice Professor Mimi Zou, who also spoke at the event, emphasised that now is the perfect opportunity for Australia to step up and lead the way: “Right now, we are seeing globally, among regulators and policymakers, almost a pause. The European Union started the process," she said. 

UNSW Law and Justice Professor Mimi Zou said laws that regulate AI need to be carefully designed, otherwise they can reinforce the power structures that AI companies depend on. Photo: UNSW Centre for Ideas

Australia recently put forward a proposal, but Prof. Zou added that AI regulation seems to be on hold. “If regulators are not doing anything right now, we're going to have to apply our existing laws, whether they're privacy laws, human rights, discrimination laws, labour laws and environmental laws, to really challenge some of these big tech companies that are causing significant harm.” 

What would this look like in practice? “Laws need to be carefully designed, because otherwise they can reinforce the power structures that these [AI companies] depend on," she said. "Existing laws are not going to do it, as most of them are triggered only after harm has occurred. We will need robust governance and oversight mechanisms to ensure greater transparency and accountability of these companies, especially where high-risk AI is being developed and deployed.” 

Prof. Zou affirmed that our own values can and should inform our approach to AI governance. “There are values that Australians embrace, such as a fair go, human rights, democracy, and the rule of law," she said. "We don’t have to follow the US or China. We can align ourselves with other countries that are like-minded, and we can certainly push for a third way.”
