Beyond chatbots: Navigating AI's industrial transformation
AI's true revolution isn't in chatbots but in transforming industrial operations through dark factories and predictive maintenance, writes UNSW Sydney's Toby Walsh
This article is republished with permission from I by IMD, the knowledge platform of IMD Business School.
The Japanese company FANUC, one of the largest manufacturers of industrial robots, has operated a ‘dark factory’ near Mount Fuji since 2001. Guided by sensors and internal navigation systems, robots diligently assemble other FANUC robots in a fully automated facility, 24 hours a day, seven days a week. The facility operates without lights, heat, or human supervision. After all, there is no need for illumination when no humans are present – and in many areas it would be too dangerous to allow people inside anyway.
While this might sound dystopian, it represents the pinnacle of efficiency and technological achievement. Yet as these mechanical workers silently build their own kind, they also raise urgent questions about the future of human employment, purpose, and society's relationship with increasingly autonomous technology.
The hardware challenge: Why robots lag software
AI software has advanced dramatically in recent years; robotics and hardware development, however, face greater challenges. Hardware is much 'harder' than software. The progress made in AI has been orders of magnitude greater in the digital domain than in physical systems. Robots are difficult to build, prone to breaking, and expensive to maintain.

The economic models differ significantly as well. Once developed, a single software program can be distributed to billions of users via the cloud at minimal cost, which explains why ChatGPT became the fastest-growing app in history. Each physical robot, unlike software, costs money to design and build, encounters distribution hurdles, and requires ongoing maintenance.
Consider that even a reasonably priced robot might cost around $20,000 – similar to a car, which is the second most expensive purchase most people ever make. Cars took decades to become ubiquitous, and robots will likely follow a similar adoption curve. This hardware limitation explains why AI's most immediate industrial impacts will likely come from software applications that optimise existing systems rather than from new robotic or physical automation.
Predictive maintenance: AI's industrial sweet spot
One of the most promising industrial applications of AI is predictive maintenance – using data analysis to anticipate equipment failures before breakdowns occur. More broadly, AI's biggest industrial contribution will be helping businesses optimise their operations: improving logistics, enhancing preventive maintenance cycles, and making better decisions about existing systems and processes, without necessarily investing in expensive robotics.
Companies that collect operational data – from their own use of the technology, and from the manufacturer and other users – can use machine learning to predict with remarkable accuracy when components such as bearings will fail. Maintenance can then be scheduled to minimise downtime from breakdowns without replacing good parts prematurely. This transforms maintenance from reactive and unplanned to proactive and needs-based, saving money and time while increasing productivity.
An example from the industrial sector demonstrates this value. By analysing the metallic content in oil samples from rotating machinery in a factory, companies can predict when bearings need replacement and schedule maintenance before catastrophic failures occur, while avoiding unnecessary preemptive replacements.
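To make this concrete, here is a minimal sketch of how such a predictive-maintenance model might be trained, assuming a history of oil-analysis readings labelled with whether the bearing subsequently failed. The feature names, synthetic data, and model choice are illustrative assumptions, not details drawn from FANUC or any other company mentioned here.
```python
# Minimal predictive-maintenance sketch: predict whether a bearing will fail
# within the next maintenance window from oil-analysis readings.
# All data here is synthetic and the feature names are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical features: wear-metal concentrations (ppm) in oil samples,
# vibration level, and hours since the bearing was last replaced.
iron_ppm = rng.gamma(shape=2.0, scale=20.0, size=n)
copper_ppm = rng.gamma(shape=2.0, scale=5.0, size=n)
vibration_rms = rng.normal(loc=2.0, scale=0.5, size=n)
hours_in_service = rng.uniform(0, 20_000, size=n)

X = np.column_stack([iron_ppm, copper_ppm, vibration_rms, hours_in_service])

# Synthetic label: failure risk rises with wear metals, vibration and age.
risk = 0.02 * iron_ppm + 0.05 * copper_ppm + 0.8 * vibration_rms + hours_in_service / 10_000
fails_soon = (risk + rng.normal(0, 0.5, size=n) > 4.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, fails_soon, test_size=0.25, random_state=0, stratify=fails_soon
)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# In practice, the predicted failure probability would feed the maintenance
# schedule, triggering a planned replacement only when risk crosses a threshold.
print(classification_report(y_test, model.predict(X_test)))
```
A gradient-boosted classifier is just one reasonable choice; the essential ingredients are labelled historical records and features that track component wear.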
AI and automation improve productivity – and keep workers safe
In addition to productivity improvements, industrial automation also offers significant safety benefits.
Australia's mining sector already provides a compelling example. Despite Australia being a high-wage economy, its mines remain competitive thanks to automation and AI technologies such as autonomous trucks. More importantly, mining fatalities have dropped dramatically – from about 300 deaths annually a decade ago to 32 last year. That is roughly a tenth of what it was – still too many, but removing people from hazardous environments altogether is far more effective than managing the risk of human/machine accidents.
As I observed in my book, '2062', the oil industry provides an informative case study of the scale of the challenge posed by automation and AI displacing jobs. The price of oil collapsed from $115 per barrel in August 2014 to below $30 at the start of 2016. This drove the industry to cut headcount and introduce more automation, and nearly half a million jobs disappeared from the oil industry worldwide. Yet even as the price of oil has rebounded and the industry has returned to growth, fewer than half of those jobs have come back. Automation has reduced the 20 people typically employed at an oil well to just five. This reduction in on-site staffing and the accompanying increase in automation is helpful from a safety perspective, and it also stood the sector in good stead for weathering the COVID pandemic and subsequent oil price shocks.
Who is in the lead in the race to develop industrial AI?
While many focus on which companies are creating the most advanced AI models, an equally important question is: who will deliver these technologies at scale? The answer is already clear and predictable – it will be the existing tech giants that have invested billions in digital infrastructure.
The race to deploy AI at scale is being won by those who can outspend competitors. Amazon, Google, Microsoft, and their Chinese equivalents have already invested billions in the necessary infrastructure. Building the digital foundation to deliver AI technologies to billions of people requires massive investments in data centres, satellite networks, and undersea cables – infrastructure that remains invisible to most users.
This explains why AI startups typically form partnerships with these tech giants: they may develop innovative models, but they need the infrastructure heft of established companies to deploy them at scale. The barrier to entry for building this infrastructure is prohibitively high, making it almost impossible for new entrants to compete without partnering with existing platforms. But any good deal must be a 'win-win': the startup piggybacks on the tech giant's scale, while the tech giant maintains its leading-edge, first-mover credentials by buying in innovation through these partnerships.
Industrial titans: Process expertise and decades of data
While tech giants possess vast data resources, industrial leaders like ABB, FANUC, and Siemens hold crucial advantages: decades of specialised knowledge about industrial processes and massive installed bases worldwide. Their equipment already operates in thousands of factories globally, creating natural pathways for AI integration.
FANUC's expertise in discrete automation is exemplified by its ‘dark factory’, where robots build robots, creating a continuous improvement loop informed and refined by real production challenges. ABB leverages its extensive cross-industry presence in both robotics and distributed automation systems to apply lessons from one sector to another – something pure data companies cannot replicate. Siemens bridges the physical and digital realms through its Digital Twin technology, built on a deep, long-standing understanding of industrial systems.
What gives these industrial giants their edge is their unique combination of sophisticated algorithms, physical hardware, and sector expertise, coupled with their own and their customers' usage data. Their process knowledge, customer base, and ready-made supply chain and deployment channels form a formidable foundation in the race to deliver industrial AI – not an unbeatable position, but a considerable advantage that pure tech companies will struggle to replicate.
While AI technologies offer great advantages for efficiency, safety, and productivity, they also present a fundamental dual-use challenge: the same technology that powers beneficial applications can be weaponised or misused. Three significant risks stand out:
Autonomous weapons: AI could transform warfare into something far more destructive and less controllable than conventional conflict. Autonomous weapon systems may lower the threshold for military action once human casualties on the deploying side are removed from the equation.
Disinformation at scale: AI can and does generate convincing fake content that is increasingly indistinguishable from reality. This capability threatens to supercharge disinformation campaigns, undermining trust in institutions and accelerating political polarisation through tailored propaganda.
Surveillance infrastructure: The pattern-recognition capabilities that make AI useful for predictive maintenance and for spotting tumours on medical scans can also enable facial recognition and unprecedented surveillance. This raises concerns about privacy, civil liberties, and the potential for establishing or perpetuating authoritarian control.
The risks aren't about AI becoming sentient or rebelling, but about how humans might deploy these technologies in harmful ways. Addressing them requires both technical safeguards and governance frameworks that span national boundaries.
Ethical automation: The human element
Implementing industrial AI systems raises significant ethical considerations, particularly regarding workforce impacts. Two Australian examples demonstrate different approaches to this challenge.
Rio Tinto, when implementing autonomous trucks at one of its mines, committed to finding alternative employment for all 100 displaced truck drivers – a commitment they fulfilled. This responsible approach brought unions on board and facilitated a successful transition.
In contrast, Westpac Bank announced layoffs of 4000 staff while simultaneously hiring 2000 'digital natives' from outside the bank. This approach generated negative press, with critics arguing that management should have foreseen the changing skill requirements and invested earlier in retraining and reskilling existing employees, who already had domain knowledge of the financial services sector. From a financial perspective alone, it is extremely expensive and difficult to find thousands of digitally skilled staff in a competitive market, often making retraining more cost-effective than replacement.
Universal basic income: A necessary adaptation?
As automation displaces workers, we may need structural changes similar to those from the Industrial Revolution, which gave rise to welfare states and universal education. The COVID pandemic showed that once-radical ideas like universal basic income are feasible – most countries implemented versions during lockdowns without economic collapse.

The real challenge to levelling the AI playing field lies in taxation. If robots perform work instead of humans, should we tax the robots? This question grows more urgent as corporate tax contributions diminish, particularly from tech companies minimising obligations through offshore domiciling and transfer pricing.
Five key takeaways for business leaders
1. Prepare your data infrastructure now: AI implementation begins with data collection. The two biggest bottlenecks to using AI effectively are data and expertise. Companies should assess whether they have collected historical records in formats that can be used to train machine learning models. Without good data, even the most sophisticated AI systems will falter.
2. Grow AI expertise internally rather than outsourcing: While companies need few employees with PhDs in artificial intelligence, they do need people who understand both their business and AI's capabilities. The ideal approach is to train existing staff who have deep domain knowledge to apply machine learning concepts to business problems. This is more sustainable than hiring expensive external consultants who lack industry-specific knowledge.
3. Focus on decision optimisation before robotics: The most accessible and immediate AI applications involve improving operational decisions rather than implementing expensive robotics. Look for opportunities to enhance logistics, maintenance schedules, and resource allocation through data-driven insights; software improvements typically deliver faster ROI than hardware investments (a brief illustrative sketch follows this list).
4. Plan for workforce transitions: Responsible deployment of automation includes recognising and preparing for workforce impacts. Companies that commit to retraining employees for new roles not only avoid negative publicity but also retain valuable institutional knowledge while building a more adaptable workforce. The most successful transitions begin change management and workforce planning years before implementation.
5. Monitor the innovation ecosystem: While startups often drive AI innovation, large tech companies typically acquire the most promising ventures. Business leaders should track developments across the ecosystem – including the established industrial hardware and software suppliers.
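To make the decision-optimisation takeaway concrete, the sketch below picks the failure-risk threshold at which a planned replacement becomes cheaper, on average, than running a part until it breaks. The cost figures and simulated risk scores are illustrative assumptions, not figures from any company discussed above.
```python
# Illustrative decision optimisation: choose the failure-risk threshold at which
# to schedule a replacement, trading the cost of unplanned breakdowns against
# the cost of replacing parts that still had useful life. Numbers are assumed.
import numpy as np

COST_UNPLANNED_BREAKDOWN = 50_000   # downtime plus emergency repair (assumed)
COST_PLANNED_REPLACEMENT = 8_000    # scheduled part swap (assumed)

rng = np.random.default_rng(1)
# Simulated fleet: each bearing has a model-estimated failure probability for
# the next maintenance window, and a true outcome drawn from that probability.
predicted_risk = rng.beta(2, 8, size=10_000)
actually_fails = rng.random(10_000) < predicted_risk

def expected_cost(threshold: float) -> float:
    """Average cost per bearing if we replace whenever risk >= threshold."""
    replace = predicted_risk >= threshold
    cost = np.where(
        replace,
        COST_PLANNED_REPLACEMENT,                                  # replaced in time
        np.where(actually_fails, COST_UNPLANNED_BREAKDOWN, 0.0),   # left running
    )
    return float(cost.mean())

thresholds = np.linspace(0.01, 0.99, 99)
best = min(thresholds, key=expected_cost)
print(f"Replace when predicted risk >= {best:.2f}; "
      f"expected cost per bearing: ${expected_cost(best):,.0f}")
```
The same structure applies to logistics and resource-allocation decisions: estimate the outcome of each option from data, attach costs, and choose the option that minimises expected cost.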
The responsible path forward
As we navigate the industrial AI revolution, it's worth recognising that many automated tasks are dull, repetitive things that humans probably should never have been asked to do in the first place. The goal should be to liberate people from such tasks while ensuring they maintain gainful income and meaningful lives.
This requires thoughtful implementation of technology alongside social policies that distribute benefits broadly. By combining technological innovation with ethical considerations, businesses can harness AI's transformative potential while contributing to a more equitable future.
The industrial applications of AI extend far beyond the headline-grabbing language models. By focusing on data-driven decision making, predictive maintenance, and responsible automation, businesses can realise significant operational improvements while learning about and navigating the broader societal implications of this technological revolution.
Toby Walsh is Laureate Fellow and Scientia Professor of Artificial Intelligence at the Department of Computer Science and Engineering at the University of New South Wales, adjunct professor at QUT, external Professor of the Department of Information Science at Uppsala University, an honorary fellow of the School of Informatics at Edinburgh University, and an Associate Member of the Australian Human Rights Institute at UNSW.