When machines decide, who is responsible?
As algorithms take on decision-making, accountability blurs and creates new risks for businesses and customers.
Artificial intelligence is transforming the way we work, but it may also be reshaping how we think about right and wrong.
Dr Hongmin (Jess) Yan from the University of Canberra is concerned that AI is changing workplace ethics: because it blurs who is accountable, it could make it easier for employees to avoid feeling responsible for unethical decisions.
"When an algorithm recommends a decision or automates a choice, employees can rationalise unethical behaviour in new ways by saying, 'the AI suggested it,'" Dr Yan explained.
"That diffusion of responsibility, where it's unclear whether the person, the algorithm, the organisation, or the developers are accountable, creates a psychological shield that makes it easier to engage in unethical behaviour without the same moral friction.
"The technology creates distance between people and the ethical implications of their choices. Unless we're very intentional about maintaining human accountability and building ethical checks into AI systems from the start, we risk amplifying Unethical Pro-Organisational Behaviour in ways we're only beginning to understand."
Dr Yan's research into Unethical Pro-Organisational Behaviour (UPB) predates the rise of AI. It examines why employees cross ethical lines at work when doing so helps their organisation succeed, as in one high-profile case that shocked the world.
"The Theranos case was a pivotal moment for me," Dr Yan said.
"Employees at Theranos genuinely believed they were revolutionising healthcare, yet their actions ultimately put patients at risk. It raised a critical question: how do well-meaning people end up crossing ethical lines when they're trying to help their organisation?
"What struck me was that this phenomenon extends far beyond high-profile corporate cases. I began noticing similar patterns in everyday service encounters, like the waiter who oversells a mediocre dish or a retail worker who exaggerates product benefits.
"These employees aren't acting out of personal greed; they're trying to help their organisation succeed."
To understand this phenomenon more deeply, Dr Yan focused her research on frontline service roles: employees who interact with customers face-to-face. Unlike corporate settings, where unethical behaviour can feel abstract, customer-facing employees deal directly with the people affected.
"You'd think this immediate proximity would make unethical behaviour less likely, that the human element would activate stronger moral restraints. Yet our research shows UPB still happens with striking frequency in these settings," she said.
"It reveals just how powerful organisational loyalty can be, strong enough to override the natural empathy and moral discomfort that comes from face-to-face interaction."
Dr Yan warns that some organisations may unintentionally encourage UPB through aggressive performance targets.
"They promote values like integrity and customer service but often overlook how these ideals can clash with competitive pressures, creating impossible positions for staff," Dr Yan stated.
The stakes are rising as AI enters the equation. Dr Yan recommends organisations cultivate genuine psychological connections between employees and stakeholders.
Her research shows such connection acts as a powerful ethical safeguard. But as algorithms increasingly mediate workplace decisions, maintaining human accountability becomes even more critical.
"Unless we're intentional about building ethical checks into AI systems from the start, we risk amplifying UPB in ways we're only beginning to understand."