Why your colleagues stay silent about their ChatGPT use

AI is transforming work, but many employees stay silent – creating hidden risks and missed opportunities for organisations, writes UNSW Business School’s Frederik Anseel

Surveys show that the number of people using AI tools at work has doubled in two years, to about 45 percent of employees. No wonder ChatGPT is the fifth most visited website in the world. But you wouldn’t know it based on what you see in the workplace. Many workers remain remarkably quiet about their AI use. You don’t see ChatGPT reports or prompts flying back and forth, and no enthusiastic conversations about the time savings that AI delivers. No, AI use stays firmly under the radar. Why? Wouldn’t you expect a certain pride from people who are keeping up with the latest technology?

The hidden use of AI is one of the primary concerns for companies – if they’re even aware of it. AI stealth usage is one of the fundamental obstacles we must overcome to achieve a real productivity leap in our economy. There are three reasons why people prefer to stay silent.

Fear of job cuts

The first reason is, paradoxically, the AI productivity gain itself. Many people find that difficult tasks, which used to take them hours to complete, now take only a few minutes with platforms such as ChatGPT. Answering a few emails, generating a report or analysis, or creating a summary – add these up over a week, and the time savings amount to several hours.

UNSW Business School Dean, Professor Frederik Anseel, says companies looking for AI-enabled productivity gains must first win the trust of their employees. Photo: UNSW Sydney

Who benefits from those time savings? For now, the individual employees do. But what if the employer catches wind of them? That consideration makes many people hesitate. The danger is that companies will do the math and either give people more work or cut jobs. You can already see a shift in jobs worldwide, particularly in consultancies, marketing agencies, technology companies, and law firms.

Lack of clarity

The second reason is the threatening tone companies take with regard to privacy and security. Many people are confused about what’s allowed and what isn’t, so they choose the certainty of silence over the risk of breaking an unclear rule. ChatGPT is eagerly used in the private sphere, and many people prefer the simplicity of the free chatbots they use at home to their company’s workplace policies.

That’s a headache for companies, which urgently need to provide more permissive and workable AI support to get employees out of the grey zone. It doesn’t help that companies are still recommending the ‘safe’ versions of AI platforms while the newest AI models are far more advanced. Companies should think back 15 years to when they had to draw up ‘Bring Your Own Device’ policies, because they simply couldn’t compete with the much more sophisticated devices people were using at home.


The last reason is complex. People who use ChatGPT worry about being labelled as inauthentic, dishonest, or lazy. Look, nobody has a problem with you using a spellchecker. It’s probably also fine to ask an AI bot to proofread an email. But how do you think a colleague would react if they discovered that jovial thank-you email was written by an AI bot? And how would you feel if your manager hadn’t written your feedback report themselves?

How trustworthy are you?

A recent series of studies shows how complex the psychological problem is. In companies, the recommendation is: ‘Be transparent and always clearly state where you’ve used AI.’ But research shows that those who are transparent actually lose trust: people find colleagues less trustworthy when those colleagues admit to using ChatGPT.

AI adoption is more of a psychological than a technological problem. Companies looking for productivity gains must first win the trust of their employees. Employees don’t necessarily distrust the AI technology itself, but rather how their employer will respond to their use of it. People want assurance that their time savings won’t be exploited, that they won’t be punished for experimenting with AI, and that they won’t be looked at sideways for using AI to take a more efficient approach to their work. We urgently need to bring AI stealth usage out of the shadows.

Frederik Anseel is Professor of Management and Dean of UNSW Business School. He studies how people and organisations learn and adapt to change, and his research has been published in leading journals such as Journal of Applied Psychology, Journal of Management, American Psychologist, and Psychological Science. A version of this post was first published in De Tijd.
