How AI recommendations can balance privacy with accuracy

New research examines how AI-driven recommendation systems can balance privacy with accuracy, protecting consumer trust and personal data while still delivering value for platforms and retailers

When you scroll through Netflix suggestions, browse TikTok’s feed, or shop Amazon’s “recommended for you” section, it probably doesn’t surprise you that artificial intelligence (AI) is working seamlessly in the background to make your life (hopefully) a little easier. But beneath the convenience lies a hidden risk: the use of AI technologies in recommendation engines can expose sensitive information, raise security risks, and even affect how much you pay.

That’s the focus of new research from Dr Xingyu Fu, lecturer in the School of Marketing at UNSW Business School, co-authored with colleagues from the University of Toronto, CUHK, and Western University. The study, Privacy-Preserving Personalised Recommender Systems, explores how to design privacy-preserving recommender systems that protect consumers while still delivering value for platforms and retailers. 

“Personalised product recommendations are crucial for online platforms but pose privacy risks,” says Dr Fu. “Our approach applies differential privacy to mitigate the risk of exposing personal information during the transmission of recommendations.” 

The hidden risks of AI recommendations 

The risks of misusing AI in personalised recommendations are very real. Target’s infamous pregnancy-prediction algorithm once exposed sensitive information by recommending maternity products to a teenager before she had told her family. Target later began deliberately adding irrelevant products to its recommendations to avoid such backlash. 

Target’s pregnancy-prediction algorithm once exposed sensitive information by recommending maternity products to a teenager before she told her family. Photo: Adobe Stock

TikTok takes a similar approach, promising to inject “disruptive” content into feeds to prevent outsiders from fully reverse-engineering user preferences. Both are examples of AI-driven recommendation systems introducing randomness – much like Dr Fu’s threshold model – to protect privacy.

These cases highlight how AI-driven recommendations, large language models, and other AI applications can reveal more than intended. Without safeguards, the secondary use of these datasets risks eroding consumer trust and drawing regulatory scrutiny.

It’s easy to assume the main threats come from the collection of personal information: things like browsing history, locations, or search habits. Companies like Google and Apple already rely on machine learning techniques such as federated learning to keep datasets on devices rather than central servers, reducing the chance of data breaches. However, this UNSW Business School-led research shows that the risk doesn’t stop there, because the AI outputs themselves (the books, films, or products recommended) can reveal sensitive data if intercepted or observed. 

“Unlike raw data, recommendation outcomes distil consumer preferences into a compact yet highly revealing form,” explains Dr Fu. “Over time, these exposures can gradually reveal a consumer’s complete preference ranking, transforming implicit personal choices into an explicit, exploitable profile.” 

This matters because generative AI systems and other AI models don’t just use training data to predict behaviour; they also generate outputs (like recommendations or rankings) that can be reverse-engineered. Attackers, retailers, or even casual observers might infer traits ranging from health conditions to political affiliations. 

Learn more: Managing the ethical risks of AI, big data and psychological profiling

Increasingly, AI and privacy are top concerns for regulators such as the Office of the Australian Information Commissioner (OAIC). Australia’s Privacy Act and the Australian Privacy Principles (APPs) already set obligations for how organisations handle personal information, but specific rules for AI technologies have not kept pace with the rapid growth of artificial intelligence.  

Reforms to the Privacy Act are under review, with proposals for stronger enforcement powers for the OAIC, yet some warn this may not be enough to safeguard consumers. They argue for clearer guardrails on the use of personal information, restrictions on its secondary use, and greater accountability in AI systems. Internationally, the regulatory landscape does appear to be shifting: for example, China’s Algorithmic Recommendation Management Provisions (2022) require socially responsible algorithms and regular audits of recommendation services.  

Differential privacy as a solution 

In practice, companies developing AI products and generative AI tools will increasingly be required to demonstrate due diligence, maintain strong data quality, and provide clear collection notices when gathering data for training purposes or any secondary use. They will also be expected to conduct privacy impact assessments (PIAs), embed human oversight into automated functions, and explain to consumers the intended purpose of their AI technologies. For the Australian Government and other government agencies, stronger data governance will be critical to prevent data breaches, ensure proper de-identification of records, and uphold the privacy of individuals. 

Unlike raw data, UNSW Business School's Dr Xingyu Fu says recommendation outcomes distil consumer preferences into a compact yet highly revealing form. Photo: UNSW Sydney

To address the specific vulnerability of recommendations, Dr Fu and colleagues turned to differential privacy, a technique widely used in computer science to obscure patterns in datasets. Instead of always recommending a user’s top-ranked product, their model introduces controlled randomness through a threshold policy: items above a certain cut-off are shown with higher probability, while less-preferred products still appear from time to time. 
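To make the idea concrete, here is a minimal sketch of such a threshold policy in Python. It is illustrative only – the cut-off, the probabilities, and the item names are hypothetical, not parameters from the study.

```python
# Illustrative sketch of a coarse-grained threshold policy: items ranked
# above a hypothetical cut-off are recommended with high probability,
# while lower-ranked items still appear occasionally. All values here are
# made up for illustration and are not taken from the study.
import random

def recommend(ranked_items, threshold=2, p_high=0.8, p_low=0.2):
    """Pick one item to show, favouring (but not guaranteeing) top picks."""
    weights = [p_high if position < threshold else p_low
               for position in range(len(ranked_items))]
    return random.choices(ranked_items, weights=weights, k=1)[0]

# A consumer's inferred preference ranking, best first (hypothetical items).
preferences = ["maternity wear", "crime novel", "headphones", "kettle", "garden hose"]
print(recommend(preferences))  # usually a top-ranked item, occasionally not
```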

In their study, the team built a computer model of online shopping platforms – where retailers set prices, consumers have their own product preferences, and the platform uses AI to recommend items. They then tested what happens when a layer of “noise” is added using differential privacy, meaning the system doesn’t always display the top choice but occasionally mixes in other options. 

By running this model, the researchers were able to see how privacy-friendly recommendations affect both the consumer experience and the way businesses set prices. “The optimal policy is a coarse-grained threshold policy, where products are randomly recommended with either high or low probability based on whether their preference rankings are above or below a certain threshold,” explains Dr Fu. 
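The accuracy side of this trade-off can be seen even in a toy simulation. The sketch below is not the authors’ model: it simply draws random product valuations, applies the threshold policy sketched earlier with varying amounts of randomness, and shows the average “match value” (how well the recommended product fits the consumer) declining as more privacy noise is mixed in, with prices held fixed.

```python
# Toy illustration (not the study's model): average match value falls as
# below-threshold items are recommended more often. Utilities are random
# and all parameters are hypothetical.
import random

random.seed(0)

def recommend_index(utilities, threshold, p_high, p_low):
    ranked = sorted(range(len(utilities)), key=lambda i: -utilities[i])
    weights = [p_high if pos < threshold else p_low for pos in range(len(ranked))]
    return random.choices(ranked, weights=weights, k=1)[0]

consumers = [[random.random() for _ in range(5)] for _ in range(10_000)]

for p_low in (0.0, 0.1, 0.3, 0.5):  # larger p_low = more noise = stronger privacy
    avg_match = sum(u[recommend_index(u, threshold=1, p_high=1.0, p_low=p_low)]
                    for u in consumers) / len(consumers)
    print(f"p_low={p_low:.1f}  average match value={avg_match:.3f}")
```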

This approach reflects the principle of privacy by design – embedding safeguards throughout the AI development lifecycle rather than treating them as an afterthought. It helps reduce the risk of re-identification and unauthorised access, while still keeping recommendations relevant and useful. 

Netflix, TikTok and Amazon all use AI-driven recommendation systems, which can potentially expose sensitive information, raise security risks, and even affect how much you pay. Photo: Adobe Stock

Balancing accuracy, prices, and consumer trust 

Hyper-targeted recommendations allow companies to charge more precisely because they know how much a customer values a product. Put simply, if businesses know exactly what you want, they can raise prices with confidence because they understand your willingness to pay. 

When prices are fixed (set without using customer data), adding privacy makes recommendations less accurate, so consumers may not always see their best-fit product. This is what Dr Fu refers to as a “lower match value.” In this situation, privacy safeguards protect data, but they can also reduce the immediate benefit of highly accurate recommendations. 

“The impact on consumer surplus is non-monotonic, reflecting a trade-off between recommendation accuracy and price inflation,” says Dr Fu. In other words, protecting privacy may lower accuracy in some cases, but it can also stop companies from overcharging when they know too much about individual preferences. This means that generative AI models offering pinpoint personalisation may not always serve consumers’ best interests. Paradoxically, adding privacy can lead to fairer prices, not higher ones. 
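A deliberately simple, hypothetical example (not drawn from the study) shows the intuition. Suppose two consumers value a product at $10 and $6. A seller who knows each valuation exactly can charge each consumer their full willingness to pay, leaving them no surplus; a seller who cannot tell them apart must pick a single price, and the revenue-maximising choice here leaves consumers better off.

```python
# Hypothetical two-consumer illustration (not from the study) of how less
# knowledge about buyers can translate into lower prices and more surplus.
valuations = [10, 6]

# Full personalisation: each consumer is charged exactly their valuation,
# so consumer surplus is zero.
surplus_personalised = sum(v - v for v in valuations)

# Under privacy, the seller picks one revenue-maximising uniform price.
uniform_price = max(valuations, key=lambda p: p * sum(v >= p for v in valuations))
surplus_uniform = sum(max(v - uniform_price, 0) for v in valuations)

print(surplus_personalised, uniform_price, surplus_uniform)  # 0 6 4
```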

For businesses, visibly integrating data protection measures is not just about meeting privacy obligations. It’s also a way to build trust, strengthen reputation, and reduce the risk of penalties from regulators such as the OAIC. The Information Commissioner has already stressed the need for privacy impact assessments (PIAs) and reasonable steps to safeguard consumers in AI applications. 


Ultimately, protecting privacy in AI development is not just about compliance. It’s about preventing tangible harms in digital marketplaces where AI models increasingly influence consumer decision-making every day. As Dr Fu puts it: “Pursuing privacy is not a free lunch. However, the cost of not protecting privacy may be even higher, as the loss of privacy can harm consumers monetarily by resulting in higher product prices.” 

To ensure best practice, he recommends businesses: 

  • Embed privacy impact assessments at every stage of the AI lifecycle. 
  • Limit the secondary use of training data to what consumers have given informed consent for. 
  • Ensure de-identified data remains robust against re-identification. 
  • Provide clear notifications and collection notices when collecting data. 
  • Take reasonable steps to prevent unauthorised access and mitigate high-risk vulnerabilities. 

By doing so, businesses can align with global privacy laws, meet their privacy obligations, and avoid scrutiny from the OAIC and other regulators. 
