Protecting your privacy: should AI write hospital discharge papers?

Download The AGSM Business of Leadership podcast today on your favourite podcast platform.


With the next big shift in AI advancements just around the corner, businesses must ensure these new tools are ethical – before it’s too late

When it first launched, ChatGPT went from zero to 100 million users in only 60 days. Within six months of that launch, Microsoft released ChatGPT-powered Bing; Google launched its next-gen language model PaLM 2; and Meta released LLaMA. With these tech companies competing to usher in the next leap in AI, experts are growing concerned about the ethical application of these technologies – particularly serious privacy issues.

Given the speed at which these tools are being deployed, there is a growing risk that they will usher in a raft of societal-level dangers. The general public – AI’s unsuspecting test subjects – deserve tools that keep their privacy and safety intact.

The latest episode of the AGSM Business of Leadership podcast examines the intersection of data privacy and responsible AI, the key issues, and what businesses and governments need to do to address privacy concerns. Hosted by Dr Lamont Tang, Director of Industry Projects at AGSM @ UNSW Business School, the podcast features a conversation between Professor Mary-Anne Williams, the Michael J Crouch Chair in Innovation at UNSW, and Professor of Practice Peter Leonard, a data and technology business consultant and lawyer. 

In the latest AGSM Business of Leadership podcast, Lamont Tang, Director of Industry Projects and Entrepreneur-in-Residence at AGSM @ UNSW Business School, explores pertinent privacy issues related to the use of ChatGPT. Photo: Supplied

ChatGPT ‘hallucinations’: the privacy and confidentiality risks 

Companies like Samsung have recently been stung by staff members inadvertently giving away sensitive business information via ChatGPT, resulting in its use being banned among employees. Professor Leonard, a part-time Professor of Practice in the Schools of Management and Governance, and Information Systems and Technology Management at UNSW Business School, explained some of these potential risks, referencing the use of ChatGPT by healthcare professionals to write discharge papers.  

“So health professionals in Australian hospitals today have to, amongst other things, write up their patient notes in the course of the day and then use those notes to prepare, amongst other things, discharge summaries for when a patient is leaving the hospital, and one of the things that ChatGPT does very well is summarising unstructured information, like the notes in a medical record, and from that can produce a pretty damn good first draft of a discharge summary,” explained Professor Leonard.  

“That's perhaps not an issue in terms of the reliance of the health professional, because one would hope that the health professional would read both the notes and the discharge summary very carefully to see whether ChatGPT had hallucinated or otherwise produced an unreliable summary. But just think for a second as to what's going on here.”

Read more: In the co-pilot's seat: how AI is transforming healthcare

To give ChatGPT the information it needs to write an effective discharge summary, healthcare professionals must disclose the patient's record to the chatbot. This raises concerns, given that until very recently no one really knew how much of that information was retained by the system.

Healthcare professionals are bound by laws that set out how medical records and information can be shared. Patient confidentiality means health professionals cannot discuss health information with anyone else without a patient's consent, and medical information must be stored in a way that protects patients' privacy.

“So there's a classic example of the kinds of patient confidentiality and privacy issues that can arise through the use of large language models such as ChatGPT, just through the kinds of information that are fed into the system in order to prompt the system to do something to help a human in their everyday work environment.” 
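One practical response to this risk is to strip direct identifiers from notes before they ever leave the hospital's systems. The Python sketch below is a minimal illustration of that idea, not a production clinical de-identification pipeline: real deployments would use a vetted de-identification tool, and send_to_llm is a hypothetical placeholder for whatever summarisation service an organisation has approved.

    import re

    def deidentify(note: str) -> str:
        # Strip obvious direct identifiers before a note leaves the hospital.
        # A vetted clinical de-identification tool would go much further;
        # these regexes only illustrate the idea.
        note = re.sub(r"\b\d{2}/\d{2}/\d{4}\b", "[DATE]", note)  # dates (DOB, admission)
        note = re.sub(r"\bMRN[:\s]*\d+", "[MRN]", note)  # medical record numbers
        note = re.sub(r"\b(?:Mr|Mrs|Ms|Dr)\.?\s+[A-Z][a-z]+", "[NAME]", note)  # titled names
        return note

    note = "Mr Smith, MRN: 483921, admitted 04/05/2023 with chest pain."
    safe_note = deidentify(note)
    print(safe_note)  # -> [NAME], [MRN], admitted [DATE] with chest pain.

    # Only the de-identified text would then be sent out, for example:
    # summary = send_to_llm("Summarise these notes into a discharge summary: " + safe_note)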

Peter Leonard, UNSW Business School Professor of Practice and Principal and Director at Data Synergies, says serious privacy issues can arise through the use of large language models such as ChatGPT. Image: Supplied

Professor Williams also spoke about the growing risks of rapidly advancing generative AI specifically. While traditional AI focuses on detecting patterns, making decisions, honing analytics, classifying data and detecting fraud, generative AI models produce new content: chat responses, designs, synthetic data or deepfakes.

Professor Williams is a distinguished scholar, innovator and world-class expert in business AI; she has a PhD in AI and a law degree in innovation and technology, and has worked in AI for the past 30 years. Last year, she facilitated the launch of the Business AI Research Lab at UNSW Sydney.

“Previously, we would train an AI with, say, images and we would tell the AI when there was a cat in that image or not, and that's called supervised learning,” she explained. Generative AI, on the other hand, entails very little human supervision. “Now, this kind of AI is very different. It is self-supervised, and that is where all the power is coming from. That is why it can just ingest millions and billions of data examples.

“So these models can use self-supervision to learn, and there is almost no limit to how much data they can ingest. And they can create new data that no one's ever seen before. That's where the generative idea comes from,” she said. 
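To make the distinction concrete, here is a minimal, illustrative Python sketch of where the training targets come from in each set-up. It trains nothing; it only shows that supervised learning needs a human-provided label per example, while self-supervised learning, as in next-token prediction, derives its targets from the raw data itself.

    # Supervised learning: every example needs a label a human provided.
    supervised_data = [
        ("photo_001.jpg", "cat"),      # a person labelled this image
        ("photo_002.jpg", "not cat"),  # and this one
    ]

    # Self-supervised learning: the targets come from the data itself
    # (here, next-token prediction), so no human labelling is needed.
    text = "the patient was reviewed and discharged"
    tokens = text.split()
    self_supervised_examples = [
        (tokens[:i], tokens[i])        # context -> next token to predict
        for i in range(1, len(tokens))
    ]

    for context, target in self_supervised_examples:
        print(context, "->", target)

Because the targets are generated automatically, the amount of usable training data is limited only by how much raw text can be collected – the scale advantage Professor Williams describes.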

Professor Mary-Anne Williams, the Michael J Crouch Chair in Innovation at UNSW, warns AI technologies like ChatGPT are being tested "in the wild". Image: Supplied

Data privacy is the ‘first base’ of AI 

According to Professor Williams, it is imperative that developers and leaders in this space view privacy as the “first base” needed to achieve responsible AI. “You can't do anything unless you can build the trust you need to gather the data you need to make better decisions, whether you're a government or a business,” she said.

“Unless you put that first, you don't... It's very hard to get to third base via means other than first base. And so there's been real movement for more than a decade around privacy by design, where you put privacy first rather than tack it on at the end after you've built your system.”

At UNSW Sydney’s Business AI Lab, for example, researchers take data privacy very seriously. Professor Williams explained: “The data is really the critical piece, and it's also the piece that really advantages different businesses. Data is the new oil, I'm sure everyone has heard that, and it's really true. And it's also kind of a limitation on what we can do in universities as well.  

“We are very good at building the models, but we don't necessarily have access to the data we need to make better models or to even investigate how good these models are or stress test them.”  

Read more: Business AI: the game-changer in predicting and enhancing employee retention

The solution? AI and ethics must work hand-in-hand 

Central to this discussion is the idea that companies like OpenAI are releasing tools that have not been stress-tested enough, said Professor Williams. In a bid to outdo competitors, these businesses are racing to deploy the next big thing in AI without the data and technology necessarily having gone through the necessary testing.

“We've seen how ChatGPT was released by OpenAI before they could really determine if it was safe,” explained Professor Williams. “It was an experiment. They were testing that technology in the wild to see what would happen, and there are examples of other companies doing that where they've had to actually pull that AI off the shelves, so to speak, because it turned out to be dangerous.”   

In agreement, Professor Leonard said there's virtually zero chance of pausing the proliferation of AI, and so developers will have to move forward carefully and deliberately. "...we don't want to stop the uses of AI, but we want the uses of AI to be responsible, and that requires a reflective view and input from a range of people around a table, including lawyers, ethicists, data scientists, AI experts, who can each bring a perspective on what being responsible means,” he said.

Subscribe to BusinessThink for the latest research, analysis and insights from UNSW Business School

What can and needs to be done to ensure businesses deploying generative AI, which relies on vast amounts of people’s data, do so in a way that doesn’t threaten to undermine people’s privacy and security? 

"You need a range of skills to evaluate and give input as to what responsible is, and it's all around how you do it, what kind of technical, legal, operational safeguards and controls you put in place about how a process is undertaken, whether the inputs have been adequately evaluated for quality and reliability, and whether the outputs from the analysis that you're doing are properly curated and presented with appropriate warnings as to their safety and reliability that whoever it is that will be using those outputs are likely to understand,” explained Professor Leonard.

Professor Williams added: “If I was deciding what I would do if I was going to uni, it would be ethics and understanding human values. We need to go deep on that. What does it even mean, and what is the relationship between ethics and the law in practical terms when it comes to technology?”

“We're going to be doing a lot of training and helping to upskill every part of society and business,” she concluded. 
