Why Facebook's meta-morphosis won't fix ethics headache
Facebook is under fire (again) for its questionable business ethics – what exactly did it do wrong and what will happen as a result?
In September 2021, The Wall Street Journal published a series of damning articles on Facebook based on internal documents that revealed several questionable practices within the technology company.
Later revealed to have been leaked by whistleblower Frances Haugen, a product manager in Facebook’s civic integrity team, the documents included revelations that Facebook’s own research showed Instagram (which it owns) exacerbates poor self-image and mental health in teenage girls, as well as the existence of ‘VIP users’ who are exempt from certain platform rules.
This is just the latest big scandal to hit the tech company, which has also come under scrutiny for its poor handling of user data in the Cambridge Analytica scandal (2018), accusations of inciting genocide in Myanmar (2018), the spread of misinformation and ‘fake news’ during the 2016 US Presidential election, and consumer anger over its ‘mood manipulation’ experiment (2014).
According to Haugen, the core failing was the organisation’s decision not to act on its own findings. She told the US Senate last month this decision was part of a continued pattern that sees Facebook opt for profits over consumer wellbeing. “The company’s leadership knows how to make Facebook and Instagram safer,” she told the Senate, “but won’t make the necessary changes because they have put their astronomical profits before people.”
So, what does this latest ethical controversy mean for the Silicon Valley monolith and the other tech giants, and what lessons can it tell us about the importance of ethical decision-making in organisations?
Why are consumers and policymakers upset with Facebook?
According to Rob Nicholls, Associate Professor in Regulation and Governance at the UNSW Business School, one of the reasons policymakers and consumers alike are riled up at Facebook’s behaviour is what is perceived to be a continued lack of responsiveness to regulatory intervention. Combined with a separate issue – that Facebook was sitting on a trove of information showing it knew it was causing harm and chose not to act – this makes for a poor look.
“They didn’t use the information they had to change the approach,” he says. “You’ve got something that looks not dissimilar to big tobacco. Yes, there are harm issues, but no, we’re not going to talk about it.”
The discovery that this research was hidden is particularly alarming to regulators because it raises the question of what else we don’t know, A/Prof. Nicholls says. There is then a ‘piling on effect’ by other concerned parties. “In Australia, [you have] the ACCC’s chairman, Rod Sims, saying, ‘Well, why isn’t Facebook negotiating with SBS under the news media bargaining code?’ All of these things tend to compound when the company is front and centre in the news.”
Read more: Can the law truly protect consumers from data profiling?
We are now more aware of technology shortfalls and dangers
A/Prof. Nicholls says there is now more awareness by consumers and policymakers about the shortcomings of Facebook and other big tech companies, with the COVID-19 pandemic leading to a stronger realisation about how much we rely on Facebook’s platforms (which include Instagram and WhatsApp).
He also points out that Facebook’s takedown of Australia-based pages during the parliamentary debate over the News Media Bargaining Code has drawn attention to the lack of competition Facebook faces and the excess of control it exercises in the space.
“If Facebook can take down our Health Service website, which we’re relying on to get information on the pandemic or could cut off 1800 Respect because Facebook thinks it’s a news media business ... Suddenly there’s that realisation of how ingrained social media companies are.”
“Ten years ago, [the leadership mantra of] Facebook was ‘Move fast and break things!’ That’s great. You’re a small start-up,” says A/Prof. Nicholls. “But Facebook today, your revenue is $US85 billion and you’re part of the day-to-day life of the vast majority of people. You actually have to take some responsibility.”
Meta-bad timing: get your house in order first
To add fuel to the fire of public debate, Facebook announced a rebranding of its parent company from ‘Facebook’ to ‘Meta’. Mark Zuckerberg’s company would now shift its focus to working on creating a ‘metaverse’ – a 3D space where people can interact in immersive online environments.
While hiring thousands to create a metaverse might have been an exciting announcement at other times for the company formerly known as Facebook, the public and lawmakers were quick to mock and question the move as one that swept systemic issues under the carpet.
“It should have sounded like a really great success story: Facebook is going to invest in 10,000 new tech jobs in Europe,” says A/Prof. Nicholls. “But instead, the answer has been, ‘Just a minute, if you’re prepared to spend that much money, why not fix the problems in the services that you currently provide, with far fewer jobs?’”
He also points out how the identified issue of body image might in fact be amplified in the metaverse. “Are you now going to create a metaverse where, ‘We’re going to deal with body shape, because in your virtual reality, you’ll look perfect’? That is going to cause distress in itself.”
Read more: How Facebook could lose out on advertising over its content ban
Will the debate affect regulation around tech companies?
According to A/Prof. Nicholls, we could be seeing a moment where agencies and government come together to protect consumers.
“It’s possible, but it takes a bit of political will,” he says. “In the past, Facebook, Google, and to a lesser extent, Amazon, Microsoft and Apple have each been able to delay and deflect, partly because it takes a big political decision.”
A/Prof. Nicholls says the difference now is that there is the realisation that a political decision on this is going to be popular, making it far more likely that it will be taken. But he also points out it’s not likely to be a ‘break them up’ kind of solution.
“Part of the reason that Facebook does touch so many of us on such a regular basis is what they offer is really useful,” he says. “The issue that flows from that is, how do you make sure that the business operates effectively without stifling innovation in that area?”
How can other AI-based companies avoid this situation?
While A/Prof. Nicholls does not expect to see policy changes from big tech companies (“because policy change is an admission”), he does expect that we will see some practical changes by other companies that consider the issues faced by Facebook.
“Ultimately, if you do some research and you find really bad outcomes, if you act on those, then you’re not going to have a problem a little bit later of a whistleblower pointing out that you’ve suppressed that research,” he says, referring to Haugen.
There is a simple way to avoid this situation, A/Prof. Nicholls points out: by acting ethically, a business can sidestep these problems and achieve good outcomes without having to change too much. For businesses built around algorithms, this means ensuring ethical approaches are embedded throughout the AI design.
“Ethical behaviour can be built into the design of AI. Some of that actually means that you end up with better outcomes from your AI because you ensure that you actually think about it first. It really doesn’t matter whether you’re a small start-up, doing analysis of big data, or you’re a very big platform-based company. Actually, thinking about those design processes is really important.”
Disclaimer: Associate Professor Rob Nicholls and his team have received funding from Facebook for his research into models, people and their interaction with Facebook.