Toby Walsh on the AI singularity: when AI is smarter than humans
Artificial intelligence and generative AI are prompting debate on the 'AI singularity' as large language models and neural networks reshape human-AI collaboration
“AI is not just an economic possibility. It’s perhaps the next step in our evolution,” said UNSW Sydney Laureate Fellow and Scientia Professor of Artificial Intelligence, Toby Walsh, during a talk at the Sydney Writers’ Festival. Sitting alongside acclaimed novelist and essayist Jeanette Winterson, CBE, the two thinkers took on one of the most profound questions of our time: is artificial intelligence the natural next step in human evolution, or could it mark our undoing?
This question lies at the core of today’s debate around AI technologies, spanning neural networks, generative AI, and large language models like ChatGPT. The discussion between Prof. Walsh and Ms Winterson was introduced by Verity Firth AM, Vice-President Societal Impact, Equity and Engagement at UNSW Sydney. Under the title 'The Art and Science of AI', the speakers explored how AI systems might augment human cognition, reshape social interactions, and change the course of human lives.
Human evolution or human obsolescence?
Ms Winterson suggested that the trajectory of AI deep learning might not merely mimic evolution, but be evolution. “Maybe human beings haven't finished evolving, and this is genuinely our next jumping-off point, our next chance, and we should take it. Because if we don't, when I look at the world now, say we took away all the tech and all the AI, we’d still go on killing and murdering each other and wrecking the place. We've got this far, and we're a mess,” she said.
The idea that AI might continue natural selection – enhancement through synthetic, digital means – echoes long-running debates in neuroscience and evolutionary biology. Could neural networks be the next step in brain evolution? And if so, what happens when they surpass the structure of our own neurons and genomes?

For Prof. Walsh, the pace of change is what makes AI revolutionary and potentially destabilising. “The Industrial Revolution took 50-odd years. Now, we’ve got global markets overnight and a billion customers. The challenge is greater; it’s going to be disruptive on the same sort of scale, but much quicker,” he said.
From AI-generated media to algorithmic decision-making on social media and in financial systems, these technologies are already transforming how we live, connect, and work. But are we encoding the right values into them – especially in a world struggling with climate change, misinformation, and increasing polarisation?
Artificial or alternative intelligence?
“We have to align AI with our values,” said Prof. Walsh. Is this enough to mitigate the risks? Ms Winterson suggested a more radical possibility: that AI might help us improve our values or replace them with new ones. “So that's why people panic when they say, ‘We have to align it with our values.’ And then you look at us in Gaza and in Ukraine, and you think, what values?”
Rather than treat AI as an artificial imitation of human brains and thinking, she offered a different label: “I want to call it alternative intelligence, because we need some alternatives. Let’s bring AI in to do it a bit differently.”

Ms Winterson spoke about AI as a potential creative partner – a vision that aligns with the rise of generative AI, where tools from companies like OpenAI and Microsoft are used not to replace human creativity, but to extend it – from composing music to co-authoring articles with proper attribution via DOI (digital object identifier) systems. “We’re always making up the next story; couldn’t we write a better one?” she asked.
In an ideal world, AI becomes a collaborator in storytelling, scientific discovery, and social imagination – potentially a key to understanding the emotions that shape human lives, well-being, and consciousness.
Existentialism and the AI singularity
However, no discussion of AI's future would be complete without referencing the ‘singularity’: the hypothesised point when AI surpasses human intelligence. Prof. Walsh cited science fiction author Vernor Vinge: “The problem is not simply that the singularity represents the passing of humankind from centre stage, but it contradicts our most deeply held notions of being.”
Ms Winterson expanded the point, blurring the line between science and spirituality: “Enlightenment onwards, science and religion just came apart. Parallel lines that would never meet. And now, you've got science and religion asking the same question: is consciousness obliged to materiality? Religion has always said no. But science has always said yes. And now science is saying – probably not.
“This fascinates me: that these parallel lines that shouldn't have met in space are coming together in a different way. That's why I wonder (and I do genuinely wonder this) if we've been telling the story backwards. What if it were true, but we didn’t know how to talk about it except through a religious prism? And what if this is the moment where it becomes reality?”
Whether the convergence between science and spirituality (the belief in something bigger than ourselves) signals a new human awakening – or the loss of what makes us human – is still an open question.
For Prof. Walsh, AI holds the potential to help us learn more about consciousness and imbue life with more, not less, meaning. “I'm a bit prejudiced because I'm an AI researcher, but I do actually think that one of the reasons that AI is exciting is because it’s going to address, I think, one of the most important scientific questions that remains to be answered, which is, you know, what is it that makes us?”
Challenges, disempowerment, and misinformation
For all its potential, however, both speakers acknowledged the serious risks posed by AI – among them, misinformation, embedded bias, and a lack of accountability in system design. Ms Winterson noted the absurdity of using the term “hallucination” in machine learning: “Meaningless – but it’s no more meaningless than so much of human interaction, is it? I don’t think machines hallucinate. I think they ‘machine-splain’.”

She also warned of growing public alienation: “Most of us are outside of that conversation, and that’s dangerous.” Even speed, they suggested, can be harmful. “Maybe the speed is too fast for humans,” said Ms Winterson. “We’re not as fast as machines; keeping up is bewildering and upsetting.”
That sense of disempowerment is reflected in how many people, even public figures like Elon Musk, have publicly warned about the future of AI, despite also investing heavily in its development.
Writing a better story for humanity
Ultimately, the conversation returned to an age-old question: what kind of society do we actually want to create? And who stands to gain the most from AI – and who stands to lose?
Ms Winterson said: “You’re going to need your plumbers, your gardeners – we need humans at lower levels and at higher levels, but maybe not so much in the middle. That’s only a problem if there’s no money, and people won’t do universal basic income unless we restructure society to share the goodies.
“The rich list this year is obscene. It’s as far away as we can possibly get from one person’s experience to another. That can’t be right. But again, is suffering or scarcity baked in? Is it necessary? I don’t think so. Is abundance possible? I do think so.”
Ms Winterson also said we must treat society not as fixed, but as something constructed – and therefore changeable. “The way we live – it’s not a law of physics like gravity. I think it’s propositional. We make it up as we go along. As somebody who writes stories and creates mini worlds, I think we’re always making up the next story. That’s how society changes.
“So I’m thinking – if the way we live, the way we suffer, the way we don’t, isn’t a law like gravity – couldn’t we write a better story?”