How businesses can leverage AI agents (without crossing the line)
Businesses are increasingly adopting AI-powered systems to streamline customer engagement, but relying too heavily on artificial intelligence (AI) can backfire.
Developments in AI systems and AI-driven initiatives have given businesses and their project managers better metrics and actionable insights, informing decisions on everything from communication strategies to social media. Case studies of ChatGPT and other AI tools in recent years show that AI can improve decision-making and forecasting. And yet, there are growing risks: the same studies also highlight the need for ethical safeguards and human oversight.
Conversational AI agents – in essence, advanced chatbots that can handle complex tasks and work more independently – offer benefits such as saving time and cutting costs. But these advantages come with trade-offs. New research highlights the delicate balance companies must strike when deploying embodied conversational agents (ECAs): virtual characters designed to resemble humans and foster natural conversations with customers.
Co-authors Dr Terrence Chong, Marketing Lecturer at UNSW Business School; Dr Ting Yu, Associate Professor in Marketing at UNSW Business School; Professor Debbie Isobel Keeling, Deputy Pro-Vice Chancellor for Knowledge Exchange and Professor of Marketing at the University of Sussex; Professor Ko de Ruyter, Head of Department and Professor of Marketing at King’s Business School and UNSW Business School; and Dr Tim Hilken, Assistant Professor at the School of Business and Economics, Maastricht University, recently explored how ECAs influence customer interactions and stakeholder engagement. Their paper, Stakeholder engagement with AI service interactions, provides surprising insights that challenge traditional assumptions about how AI agents should look and behave to be effective.

Published in the Journal of Product Innovation Management, the study used an experimental design to control key variables and examine how variations in an ECA’s design influence customer responses, enabling clear cause-and-effect relationships to be identified. The researchers developed a text-based financial coaching scenario featuring an ECA named "Penny", varying her appearance (human-like versus robotic) and conversational style (emotional versus functional). Inspired by real-world ECAs from companies such as Soul Machines and UneeQ, Penny was implemented using IBM Watson Assistant to support natural language interactions.
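The study’s dialogue design is not public, but for readers curious about the plumbing, below is a minimal sketch of a text-based session using the IBM Watson Assistant v2 Python SDK – the platform the study names. The credentials, assistant ID and sample utterance are placeholders, not details from the paper.

```python
# Minimal text-based ECA session via the IBM Watson Assistant v2 SDK.
# All credentials and IDs below are placeholders, not study details.
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")        # placeholder credential
assistant = AssistantV2(version="2021-06-14", authenticator=authenticator)
assistant.set_service_url("YOUR_SERVICE_URL")           # placeholder region URL

ASSISTANT_ID = "YOUR_ASSISTANT_ID"                      # placeholder

# Each participant gets a fresh session, so conversation state is isolated.
session = assistant.create_session(assistant_id=ASSISTANT_ID).get_result()
session_id = session["session_id"]

# Send one customer utterance and print the agent's text replies.
response = assistant.message(
    assistant_id=ASSISTANT_ID,
    session_id=session_id,
    input={"message_type": "text", "text": "How should I start saving for a house?"},
).get_result()

for item in response["output"]["generic"]:
    if item["response_type"] == "text":
        print("Penny:", item["text"])

assistant.delete_session(assistant_id=ASSISTANT_ID, session_id=session_id)
```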
The study recruited 600 adult participants from the US through the online panel Prolific, randomly assigning them to interact with different versions of Penny. Following the interaction, participants completed a survey assessing their reactions, including their willingness to pay for the ECA service, measured using the Gabor-Granger pricing method to determine price sensitivity and perceived value.
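For readers unfamiliar with Gabor-Granger, the sketch below shows how such data is typically summarised: each respondent indicates at a series of candidate prices whether they would buy, yielding a demand curve from which a revenue-maximising price can be read off. The prices and response counts are invented for illustration; they are not the study’s data.

```python
# Illustrative Gabor-Granger summary. All figures are made up.
n = 100                                 # hypothetical respondents
prices = [2, 4, 6, 8, 10]               # hypothetical monthly prices for the ECA service
willing = {2: 82, 4: 65, 6: 44, 8: 27, 10: 12}  # how many of n would buy at each price

# Demand curve: share of respondents willing to buy at each price point.
demand = {p: willing[p] / n for p in prices}

# Expected revenue per respondent; its peak is a standard Gabor-Granger summary.
revenue = {p: p * demand[p] for p in prices}
best_price = max(revenue, key=revenue.get)

for p in prices:
    print(f"price {p}: demand {demand[p]:.0%}, expected revenue {revenue[p]:.2f}")
print("Revenue-maximising price:", best_price)
```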
A balancing act: Why over-reliance on AI is risky
Building on Dr Chong and colleagues’ previous research, the study’s most critical finding is the concept of the “reliance threshold”: the point at which using AI switches from helpful to harmful. According to Dr Chong, businesses must carefully manage customer dependence on AI agents such as ECAs.
“While some level of reliance on embodied conversational agents (ECAs) is helpful, too much reliance can actually be harmful,” explained Dr Chong. “Up to a certain point, reliance on the ECAs helps people achieve positive outcomes. But once they start depending on ECAs too much, the benefits drop off.”
Over-reliance on AI can lead to several negative outcomes. There is the risk that customers blindly follow the advice of AI assistants without considering personal relevance, lose confidence in their own decision-making abilities, and then blame companies for the adverse outcomes. So over-reliance on AI is a risk not only to consumers but also to the business.
To mitigate these risks, the authors recommend that businesses actively involve customers in decision-making. “Companies should design ECAs to promote customer involvement,” Dr Chong explained. “Instead of just recommending a financial product, the ECA could ask the customer to review their goals and choose between a few tailored options.”
The best way for businesses to stop people relying too much on AI? Keep them involved. Ask them to pause, to check and to think. Build in moments that remind them they still have a choice, and when it matters, offer a real human to talk to.
The surprising power of emotional robots
Perhaps the most surprising finding concerned customer perceptions of ECAs that combine a robotic appearance with an emotional conversational style. The researchers initially hypothesised that ECAs portrayed as servants (or task-focused agents) would be most effective with robotic appearances and purely functional dialogue.
However, the data showed something different. Dr Chong noted the unexpected finding: “We found that when the ECA is presented as a servant, combining a robotic appearance with an emotional conversation style actually made people feel more confident in using it. In other words, even when the ECA looked machine-like, people responded better when it conversed like it was trying to build personal connections.”
These findings challenge the traditional belief that emotional intelligence in AI should be reserved for humanlike appearances, and suggest customer expectations have evolved significantly. “These days, customers expect service providers – whether human or AI – not just to get the job done, but also to show empathy and emotional understanding," said Dr Chong. "People want to feel heard and cared for, even in routine interactions.”
Advancements in generative AI and machine learning enable robotic agents to come across as genuinely empathetic, blurring the line between human and machine communication. Businesses that embrace this shift can create AI agents that resonate emotionally with customers, improving stakeholder trust even in seemingly routine interactions.

The study also emphasised the importance of clearly defining an AI agent’s role in service interactions: either a partner who co-creates value or a servant who performs tasks. The researchers found that aligning an agent’s role with an appropriate appearance and conversational style significantly enhanced customer confidence.
“Managers should design ECAs based on their intended role as either a servant (task automation) or a partner (collaborative engagement), as this role influences how customers engage with the agent and perceive its value,” Dr Yu explained. “Aligning the ECA’s appearance and conversational style with its role builds customer confidence in its capabilities.”
When ECAs serve as collaborative partners, adopting a humanlike appearance and emotional conversational style enhances customer interactions. Conversely, for task-oriented servant roles, businesses now have the flexibility to combine robotic appearances with empathetic dialogue to achieve positive engagement outcomes.
Moreover, Dr Yu highlighted the potential of real-time sentiment analysis in ECAs. By dynamically adjusting roles based on customer speech patterns, such as assertiveness or expressiveness, businesses can further personalise interactions and meet stakeholder needs more precisely.
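The paper does not prescribe an implementation, but the idea can be sketched simply: score each incoming message for expressiveness versus assertiveness and switch the ECA’s persona accordingly. The keyword cues and decision rule below are illustrative assumptions only; a production system would use a trained sentiment model rather than keyword matching.

```python
# Toy rule for dynamic role adjustment: expressive customers get a
# collaborative "partner" persona; assertive ones an efficient "servant".
# Cue lists and the decision rule are illustrative assumptions.
EXPRESSIVE_CUES = {"worried", "excited", "love", "hate", "stressed"}
ASSERTIVE_CUES = {"just", "now", "immediately", "only", "exactly"}

def choose_role(message: str) -> str:
    words = [w.strip(".,!?") for w in message.lower().split()]
    expressive = sum(w in EXPRESSIVE_CUES for w in words) + message.count("!")
    assertive = sum(w in ASSERTIVE_CUES for w in words)
    return "partner" if expressive >= assertive else "servant"

print(choose_role("I'm really worried about my savings!"))  # -> partner
print(choose_role("Just show me the cheapest plan now."))   # -> servant
```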
Practical implications and strategies for trust
Stakeholder trust remains a critical factor in sectors such as finance and healthcare, where customers naturally approach AI interactions with caution. The study provides clear strategies for designing trustworthy ECAs, emphasising transparency, governance frameworks, and responsiveness.
“Businesses should design ECAs by clearly defining their role as either a servant (performing outsourced tasks) or a partner (collaborative engagement) to match customer expectations,” said Dr Yu. “This alignment helps foster customer confidence in the ECA’s capabilities – referred to as proxy efficacy beliefs – which in turn influences customers’ willingness to use the agent’s advice.”
To strengthen stakeholder management, Dr Yu suggests companies should communicate specific ECA benefits clearly, such as reliability and contextual awareness, through data-driven dashboards. Moreover, businesses should proactively address user concerns through education, testimonials, and transparency.
Dr Yu explained: “Because customers may sometimes be reluctant or concerned about using AI agents – especially in sensitive sectors – businesses should address these concerns through education, testimonials, and transparency to reduce perceived incompatibility with customers’ lifestyles and build trust.”
So, how can companies manage the risks of AI in practice? Dr Chong said they need to be aware that too much reliance is a risk, and should encourage active participation by designing ECAs that promote customer involvement. For example:
- In a banking or investment app, instead of just recommending a financial product, the ECA could ask the customer to review their goals and choose between a few tailored options.
- In a healthcare chatbot, rather than simply diagnosing symptoms, the system could encourage users to compare a few possibilities and think through lifestyle impacts.
- For learning platforms, the ECA could include short quizzes or interactive exercises to help customers test their understanding before making a decision (e.g. choosing a course or learning pathway).

Another practical strategy is to design checkpoints. For example, at key points in the service journey, build in moments where the ECA prompts the customer to reflect or verify their choices. This might look like: “Would you like to double-check this with a human advisor?” or “Based on your knowledge, what do you think is the best option?”
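As a rough sketch of how such checkpoints might be wired into a service journey (the prompts echo the examples above; the step names and mechanism are illustrative assumptions, not taken from the study):

```python
# Illustrative checkpoint mechanism: at designated steps in the journey,
# the ECA appends a reflection or escalation prompt to its reply.
# Step names are invented for this sketch.
CHECKPOINTS = {
    "after_recommendation": "Based on your knowledge, what do you think is the best option?",
    "before_commitment": "Would you like to double-check this with a human advisor?",
}

def respond(step: str, eca_reply: str) -> str:
    """Append a reflection prompt when the step is a designated checkpoint."""
    prompt = CHECKPOINTS.get(step)
    return f"{eca_reply}\n{prompt}" if prompt else eca_reply

print(respond("after_recommendation",
              "A high-interest savings account fits your goals."))
```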
By following these strategies when incorporating AI agents into their decision-making and workflows, companies can realise the time and cost savings these agents promise without falling prey to the pitfalls of over-reliance. Businesses must create ECAs “that support but don’t dominate the customer’s decision-making,” Dr Chong concluded. By doing so, companies can leverage AI effectively, enhancing customer satisfaction and trust in an increasingly automated world.