Toby Walsh debunks the top 12 artificial intelligence myths

UNSW Sydney's Professor Toby Walsh dismantles the persistent myths obscuring what artificial intelligence can and cannot achieve in business and society

For every headline celebrating artificial intelligence (AI) as the transformative technology of our era, another warns of spectacular failures and wasted investments. This contradiction leaves business leaders, policymakers, and professionals struggling to separate substance from hype.

The evidence surrounding AI presents a study in contradictions. Stanford's latest AI Index, for example, reports that 78% of organisations used AI in at least one business function in 2024 (up from 55% in 2023), while Gartner forecasts that spending on generative AI will reach US$644 billion in 2025 (an increase of 76.4% from 2024).

Yet, for every study touting the benefits of AI, there is research that presents a counterargument. An MIT study, for example, found that 95% of generative AI pilots failed to deliver measurable financial returns, with the vast majority stalling and delivering little to no impact on profit and loss statements. And the RAND Corporation found that more than 80% of AI projects fail, twice the failure rate of comparable non-AI technology projects.

UNSW Sydney's Professor Toby Walsh noted that 20% of the world's research and development budget (roughly US$1 billion per day) is being directed toward AI. Photo: Adobe Stock

At the root of many arguments about the pros and cons of AI lie a number of common myths, according to Toby Walsh, Laureate Fellow and Scientia Professor of Artificial Intelligence at the Department of Computer Science and Engineering at UNSW Sydney. In a recent presentation at UNSW Sydney, he dismantled the misconceptions that obscure an understanding of what AI can and cannot do.

His perspective combined the patience of someone who has watched the technology develop over decades with the urgency of someone witnessing its unprecedented rate of adoption. Rather than offering predictions or promoting a particular perspective, Prof. Walsh focused on correcting the record where misunderstandings have taken root.

The speed myth: AI follows historical patterns

Prof. Walsh began by acknowledging that comparing AI to electricity is sensible. Just as electricity became embedded in devices throughout society, AI is becoming integrated into technology at every level. The comparison breaks down, however, when it comes to the velocity of change.

"Other technologies entered and changed our lives, changed the nature of work, changed the nature of science, changed the nature of education, but nothing has entered our lives as quickly as artificial intelligence," explained Prof. Walsh, who also serves as Chief Scientist of the UNSW AI Institute. "That's the thing that surprises me, not the technology. The technology has got to pretty much where I thought it might have got to when I started 40 years ago, but the speed and scale that it's arriving is unprecedented."


The numbers behind this acceleration were staggering. Twenty per cent of the world's research and development budget (roughly US$1 billion per day) is being directed toward AI, according to Prof. Walsh, who noted that this level of investment has no precedent in technological history. "AI is happening overnight, because we've already put the plumbing in," Prof. Walsh said. "You just have to be told, ‘Well, there's this useful tool, ChatGPT, go try it out,’ and you can. It's not a coincidence that it was the fastest-growing app ever."

The big tech dominance myth: Innovation trumps resources

Given the billion dollars per day being spent on AI, Prof. Walsh addressed the perception that universities and smaller organisations could not possibly compete with the resources of technology giants. He challenged this assumption with evidence from the field itself.

He recounted a conversation with Peter Norvig, who served as Director of Research and Search Quality at Google. “Toby, you know, the next breakthrough in AI is not going to come out of Google. It's going to come out of some university, someone who's thinking out of the box," Norvig had told Prof. Walsh, who said this perspective revealed something about the origins of genuine innovation.


The competitive advantage in AI was not the technology itself, he argued. Google, Microsoft, OpenAI, Anthropic, DeepMind, and others were building essentially the same algorithms, and computing power had become a commodity that anyone could access through services like Amazon Web Services. The real differentiator was data: specifically, the unique datasets that organisations possess.

Bloomberg, for example, has developed a chatbot that can converse about finance, allowing users to ask questions without needing to master the intricacies of finance or global markets. Only Bloomberg and perhaps one other company in the world (Reuters) could have developed this tool, Prof. Walsh said, because they possess a treasure trove of finance-related information – which makes a monthly subscription to the service more valuable.

This pattern will define the future of AI development. "The future is going to be much more focused. It's not going to be just these generic models," said Prof. Walsh, who predicted that most professionals will need AI that can provide expertise in their specific field of work.

The one AI myth: Specialisation mirrors human intelligence

The notion that one monolithic AI system would dominate the future reflects a misunderstanding about how the technology actually works. Prof. Walsh drew a parallel to human development. Everyone attends school to learn reading and writing, but then people specialise. Intelligence in machines will likely follow the same pattern.

Learn more: How AI is changing work and boosting economic productivity

"You can expect the future is not going to be one all-singing, all-dancing model that tries to do everything (and fails to do everything). As with humans, it's going to be experts who are experts in particular domains," Prof. Walsh explained. Domain expertise and knowledge will allow these focused models to excel. As such, he said organisations should analyse and understand the specific datasets they possess and other specialised information that could form the foundation for targeted AI applications.

The generative label myth: Focusing on the wrong capability

Prof. Walsh admitted that the AI field had a poor track record with naming conventions. He pointed to ChatGPT as evidence, suggesting that OpenAI might have chosen a better name if it had anticipated the technology’s success.

Similarly, the term "generative AI" is problematic, according to Prof. Walsh, because the name puts the emphasis on generation. While the technology can indeed produce poems, pictures, and videos, he said this focus has obscured its more practical applications. "The much more useful thing that it can do, that's much more useful in a university or in a business or in many other places, is it's a fantastic tool for taking a large body of information and summarising or synthesising that information together," Prof. Walsh said. "And that's where AI really shines."

Learn more: Why AI systems fail without human-machine collaboration

The scale myth: Diminishing returns and quality data

Silicon Valley operates under the assumption that scale solves everything, Prof. Walsh explained. And this approach has delivered results, he said: throwing more computing power and more training data at problems has produced performance improvements that reshaped the field.

He shared data from The Economist, which showed exponential scaling in computing resources devoted to AI training. Throughout the 2000s and into the 2010s, the graph demonstrated Moore's Law in action, with computing power doubling roughly every two years due to advancements in hardware. Around 2012, something changed. The doubling accelerated from every two years to every three or four months, as organisations threw thousands, then tens of thousands, then hundreds of thousands of GPUs at model training.
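To see how stark the difference between those two doubling regimes is, here is a minimal back-of-envelope sketch in Python. The two-year and roughly 3.5-month doubling periods are illustrative round numbers taken from the paragraph above, not precise figures from The Economist chart.

```python
# Compare the growth implied by the two doubling regimes described above.
# The doubling periods are illustrative round numbers, not exact figures.

def growth_factor(years: float, doubling_period_years: float) -> float:
    """Total growth after `years` if the quantity doubles every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

decade = 10
moores_law_pace = growth_factor(decade, 2.0)        # doubling every two years
ai_training_pace = growth_factor(decade, 3.5 / 12)  # doubling every ~3.5 months

print(f"Doubling every 2 years:    ~{moores_law_pace:.0f}x over a decade")
print(f"Doubling every 3.5 months: ~{ai_training_pace:.1e}x over a decade")
```

On those assumptions, a decade of two-year doublings yields roughly a 32-fold increase, while a decade of 3.5-month doublings yields an increase on the order of tens of billions, which is why the post-2012 curve dwarfs the Moore's Law era.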

However, Prof. Walsh said this trajectory cannot continue indefinitely. Billions of dollars are being spent on training models, but he doubted that spending would reach the trillions, as certain constraints would eventually limit the technology's expansion. "It's been remarkable what we could get by training," Prof. Walsh acknowledged. "But those of you who played with ChatGPT 5, it seems to me we're running into diminishing returns."

Hugging Face is an example of an AI platform that utilises data that has been ethically sourced, either from out-of-copyright materials or those with appropriate licenses. Photo: Adobe Stock

Instead, he said the next wave of improvements would likely come from focusing on data quality. Just as humans learned better from higher-quality information, AI systems could improve more rapidly with curated data rather than whatever material happens to be available.

The fair use myth: Copyright and industrial-scale training

Prof. Walsh turned his attention to what he characterised as a more outrageous aspect of Silicon Valley's approach to AI development. Technology companies have argued that training models on any content they can find on the internet constitutes fair use under copyright law – but Prof. Walsh argued this position did not withstand scrutiny.

The scale of the operation bears no resemblance to how humans read books, which society does consider fair use. ChatGPT, for example, was trained on possibly millions of books, but Prof. Walsh said no human could read that volume of material in a lifetime. "When a person reads a book, they cannot recite it verbatim afterwards. These models can, because they have memorised large amounts of content," he said.

The third reason the fair use argument fails relates to its business impact. "They're taking business away from the people who created the intellectual property that these models are trained on," Prof. Walsh said. When users type a Google search query, for example, they now receive an AI-generated overview summarising the results, so fewer of them click through to the original websites, and less traffic means reduced advertising revenue for the publishers who created the content. As such, Prof. Walsh suggested Google has become a competitor to these media companies, which undermines any claim of fair use.

Learn more: Empire of AI: what OpenAI means for global governance and ethics

Prof. Walsh saved his harshest criticism for a final point. Technology companies had not even purchased the copies of books they trained on. "It's a stolen, pirated copy that they trained on," he said. "They could have bought it at least if they were trying to pretend it was fair use. They could have paid for the one copy.”

He pointed to Hugging Face, an online repository where the AI community shares data, models, and results, which now hosts datasets that could be used to build ChatGPT equivalents. All the data in these sets has been ethically sourced, either from out-of-copyright materials or from sources with appropriate licenses. "It took a bit more effort. It took another year, actually," said Prof. Walsh. "You can build a ChatGPT equivalent that performs just as well as ChatGPT on data that wasn't stolen, and that was actually used according to the copyright." Instead, he said, large tech companies had chosen the lazy, quick route to remain competitive.

The creativity myth: Machines can innovate

When AI has demonstrated new capabilities, such as reading X-rays more accurately, cheaply, and quickly than human doctors, people have often responded defensively, suggesting, for example, that AI is only doing what it was programmed to do and is incapable of creativity.

However, Prof. Walsh provided evidence that machines could indeed be creative. One of the most recently discovered antibiotics, for example, was found not by humans but by machine learning. As humanity faces a shortage of effective antibiotics, computers have stepped in to accelerate new drug discovery.

Learn more: James Cameron on how AI will impact creativity and innovation

Prof. Walsh's favourite example of machine creativity came from aerospace engineering. What he believed was the first AI-designed object to go into space was an aerial that flew on NASA's Space Technology 5 mission in the 2000s. "It looks like a bent paper clip," he said. The strange lengths in its structure exist because the design had to accommodate the particular set of wavelengths the shortwave aerial needed to handle. No human would have designed such an object, according to Prof. Walsh, who explained that a genetic evolution algorithm created it before the invention was patented.
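For readers unfamiliar with the technique, the sketch below shows the bare bones of the kind of genetic (evolutionary) algorithm Prof. Walsh describes: random candidate designs are scored, the fittest are kept, and their "offspring" are created by recombination and mutation over many generations. The segment lengths, target values, and fitness function here are hypothetical placeholders; NASA's actual evolved antenna was scored with electromagnetic simulation software, not a simple formula like this.

```python
import random

# Toy sketch of a genetic (evolutionary) algorithm: evolve a list of wire-segment
# lengths towards a hypothetical "ideal" design. The fitness function is a
# placeholder; a real evolved-antenna pipeline scores candidates with an
# electromagnetic simulator.

SEGMENTS = 6
POP_SIZE = 50
GENERATIONS = 200
TARGET = [3.1, 1.2, 4.7, 0.8, 2.9, 1.5]  # hypothetical ideal segment lengths

def fitness(design):
    # Higher is better: penalise squared distance from the hypothetical target.
    return -sum((d - t) ** 2 for d, t in zip(design, TARGET))

def random_design():
    return [random.uniform(0.5, 5.0) for _ in range(SEGMENTS)]

def crossover(a, b):
    cut = random.randrange(1, SEGMENTS)  # splice two parent designs together
    return a[:cut] + b[cut:]

def mutate(design, rate=0.1):
    # Occasionally nudge a segment length by a small random amount.
    return [d + random.gauss(0, 0.3) if random.random() < rate else d for d in design]

population = [random_design() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]  # keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("Best design found:", [round(x, 2) for x in best])
```

The striking part, as with the NASA aerial, is that nothing in the loop encodes what a "sensible" design should look like; the algorithm simply keeps whatever scores well.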

The fairness myth: Implementation determines outcomes

Prof. Walsh offered a nuanced perspective on whether AI would prove to be fairer than humans. He counselled against believing anyone who made definitive claims in either direction, as the outcome of this debate depends entirely on how carefully organisations build their AI systems.

In fact, Prof. Walsh noted that an opportunity exists for AI to surpass human fairness. Humans carry conscious and subconscious biases that compromise decision-making, whereas computers offer the possibility of making evidence-based decisions in a more consistent manner. However, numerous recent examples demonstrate that careless AI implementation can replicate, or even amplify, human biases.

Apple's credit card launch was marred by issues associated with credit rating algorithms that used historical data about different credit card limits for men and women. Photo: Adobe Stock

Prof. Walsh shared the case of Apple's credit card launch. The multinational introduced a card with a design so minimal it lacked numbers or names, featuring only the Apple logo and a chip. Apple wanted the signup process to be seamless and used algorithms to determine credit ratings and whether to issue cards at all. However, credit limits had historically been sexist, with men receiving higher limits despite not necessarily being better credit risks than women. To prevent such discrimination, Apple ensured that gender was not an input to the algorithm.

After careful testing, Apple released the card. "Terrible PR disaster for Apple. It turned out to be incredibly sexist," said Prof. Walsh, who explained that Apple's credit algorithms still discriminated, despite gender not being an input, because of many other correlated factors.
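To make that mechanism concrete, here is a minimal synthetic sketch of how a model can discriminate even when the protected attribute is excluded. Everything in it (the hypothetical "unbroken full-time employment" feature, the numbers, the simple linear model) is an illustrative assumption, not a description of Apple's or its banking partner's actual system.

```python
import random
random.seed(0)

# Synthetic illustration of proxy discrimination: gender is excluded from the
# model's inputs, yet predictions still differ by gender because a correlated
# feature carries the signal. All numbers are made up for illustration only.

def applicant():
    gender = random.choice(["F", "M"])
    # Hypothetical proxy: fewer unbroken full-time years for women in this
    # synthetic history, which the biased historical limits then reward.
    fulltime_years = max(0, random.gauss(12 if gender == "M" else 8, 3))
    historical_limit = 2000 + 400 * fulltime_years + random.gauss(0, 500)
    return gender, fulltime_years, historical_limit

data = [applicant() for _ in range(10_000)]

# "Model": a simple least-squares fit from the proxy feature to the biased label.
xs = [f for _, f, _ in data]
ys = [limit for _, _, limit in data]
mean_x, mean_y = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def predict(fulltime_years):
    return intercept + slope * fulltime_years

for g in ("F", "M"):
    preds = [predict(f) for gender, f, _ in data if gender == g]
    print(g, "average predicted limit:", round(sum(preds) / len(preds)))
```

Even though gender never enters the model, the predicted limits for the two groups differ by well over a thousand dollars in this synthetic example, because the proxy feature carries the historical bias into the predictions.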

The job displacement myth: Human advantages remain

Prof. Walsh addressed job displacement resulting from AI and warned against believing anyone who definitively stated AI would or would not take jobs. Technologies have always eliminated some jobs while creating new ones, and he said the net balance in the past had favoured job creation – but history did not guarantee this pattern would continue.

This transition differs from previous technological shifts. During the Industrial Revolution, machines took over mechanical work, leaving cognitive work for humans. If AI handles much (or all) of the cognitive work, the question becomes: what remains for humans? Prof. Walsh was optimistic that there are human characteristics machines will struggle to replicate: emotional intelligence, social intelligence, creativity, and adaptability all give humans advantages.

Learn more: When AI becomes a weapon in the cybersecurity arms race

"We forget we're social. Our superpower was not our intelligence. Our superpower was our society. We're social animals. We come together," said Prof. Walsh, who explained that, while a machine could make a cheaper and potentially better cup of coffee than a human, people pay human baristas because they remember names, engage in conversation, and provide social interaction.

Prof. Walsh also raised the possibility that machines might give humanity the gift of working less. The idea of a weekend originated in the Industrial Revolution, when workers in northeast England demanded Sunday off for church, then Saturday afternoon, and eventually all of Saturday. They stopped asking for more, and Prof. Walsh found this puzzling. "Studies examining four-day work weeks have produced consistent findings," he said.

“First, people proved just as productive in four days as in five. They eliminated unnecessary meetings and focused on essential tasks. Second, people reported greater happiness. They spent more time with families, in communities, at cultural events, and pursuing interests. That's a potential gift that the machines might give us."

The energy myth: Context matters for carbon footprint

There are widespread concerns about AI's energy consumption, but Prof. Walsh thought people worried too much about this particular issue. While society needs to remain mindful of carbon footprints, he said the scale of AI's growing energy use is more modest than many realise.

He said that data centres, as a whole, consume 1-2% of the world's electricity, and much of that usage powers the internet, streaming services, and other online activities. About one-fifth of data centre usage is related to AI, which Prof. Walsh said works out to less than 0.05% of total energy consumption.
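The arithmetic behind those figures can be checked on the back of an envelope, as in the sketch below. The only number not taken from the article is the assumption that electricity accounts for roughly 20% of the world's final energy consumption; treat that share as an illustrative assumption rather than a figure from the talk.

```python
# Back-of-envelope check on the figures quoted above.

ai_share_of_datacentres = 0.20       # "about one-fifth" of data centre usage is AI
electricity_share_of_energy = 0.20   # ASSUMPTION, not a figure from the talk

for dc_share in (0.01, 0.02):        # data centres consume 1-2% of electricity
    ai_share_of_electricity = dc_share * ai_share_of_datacentres
    ai_share_of_energy = ai_share_of_electricity * electricity_share_of_energy
    print(f"Data centres at {dc_share:.0%} of electricity: "
          f"AI is ~{ai_share_of_electricity:.2%} of electricity, "
          f"~{ai_share_of_energy:.2%} of total energy")
```

On those assumptions, AI works out to roughly 0.2-0.4% of world electricity and somewhere around 0.04-0.08% of total energy use, which is the order of magnitude behind the sub-0.05% figure Prof. Walsh cited.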


At the same time, technology companies have committed to achieving carbon neutrality by 2030 and net-zero emissions by 2050. Prof. Walsh noted they were backtracking on these commitments and believed they should be held accountable. Australia expects to double its data centres from 200 to 400 by 2030, which he said would result in a doubling of the carbon footprint. "Many of those data centres should run on renewables, but that creates a separate conversation about whether Australia is building sufficient renewable capacity for data centres and other needs like electric vehicles," he said.

The investment and regulation challenge

Prof. Walsh expressed disappointment with the Australian government's AI policy on two fronts. Every other G20 country is making billion-dollar investments in AI, recognising the opportunities ahead, yet Australia is not matching these commitments. "I can't understand why our government thinks that we are somehow special, that we get the benefits of that future, without making the investments," Prof. Walsh said. Politicians need to be persuaded to make these investments; they seem willing to bet on quantum computing, and Prof. Walsh questioned why they could not see the potentially larger returns from AI.

The government has also retreated from AI regulation. Prof. Walsh recalled a period when productive regulation seemed possible, since every technology creates new capabilities along with new potential harms. Instead, the government now appears to be copying the United States' approach, which Prof. Walsh did not consider advisable.

Australia expects to double its data centres from 200 to 400 by 2030, which would result in a doubling of the carbon footprint required to power them. Photo: Adobe Stock

He pointed to social media as a cautionary tale. Society initially did not regulate social media, assuming it would be an unbridled good. While social media did indeed bring benefits, regulation came only after the harms had become apparent: anxiety levels among young people had risen, for example, while body image issues among young girls had intensified.

“These harms required after-the-fact regulation. Can we not be on the front foot with artificial intelligence and not wait till we see the harms that AI is bringing?" Prof. Walsh asked. Early signs of harm have emerged, such as the creation and distribution of deepfakes. Prof. Walsh suggested proactive regulation could address these issues before they become entrenched problems.

Main photo credit: Maria Boyadgis