When we harm someone through a robot or a game avatar — when our actions travel through an artificial body — is it still really us doing the harming? Do we feel less responsible? Are we more willing to cross moral lines? This project investigates the moral psychology of acting through artificial agents, from virtual characters in a computer game to humanoid robots in a real lab.

THE BEGINNING: A MILGRAM EXPERIMENT INSIDE A COMPUTER GAME (MASTER'S THESIS, 2010)

This line of research began with my master's thesis, in which I built a virtual version of the classic Milgram obedience experiment. In Milgram's original studies, participants were instructed to administer electric shocks to another person, and most complied simply because an authority figure told them to, even when the shocks appeared harmful. My version took place inside a computer game called Fallout: High Voltage, where participants controlled an avatar and decided whether to keep "shocking" a digital character.

I varied the avatar's appearance, its name, and whether it was associated with a soldier role, a social identity strongly linked to obedience and the use of force. All three factors modulated participants' willingness to continue, though in complex ways. The most striking finding, however, was qualitative: even though the setting was entirely virtual and obviously a game, participants showed a strong tendency to treat the digital victim as a being with feelings. An evidently artificial character was enough to engage our moral intuitions, a finding that resonates more than ever in the age of humanoid robots and AI.

Woźniak, M. (2010). Willingness to inflict suffering in the world of a computer game: Identity and social role in virtual reality. Master's thesis, Jagiellonian University, Kraków (written in Polish).

DOES ACTING THROUGH A ROBOT CHANGE HOW HONEST WE ARE?

Years later, the same question arose in a very different context: real robot teleoperation. When people control a humanoid robot via a screen — seeing the world through its camera, acting through its body — does the physical separation make them more likely to cheat?

We had participants perform a card game task either in person or while teleoperating the iCub humanoid robot, using either a first-person camera view (seeing through the robot's eyes) or a third-person view (watching the robot from behind). We tracked two types of dishonesty: egocentric lies (cheating to gain a personal reward) and altruistic lies (lying to give more to one's partner at one's own expense).

Teleoperation did not make people cheat more. It did, however, increase altruistic lying: participants lied more often to benefit their partner, especially when operating in first person. This was unexpected: we had assumed that acting through a robot might encourage more selfish behavior by reducing moral inhibitions, but instead it seemed to create a sense of closeness with the partner. The moral effects of artificial embodiment are far from simple, and they can go in surprising directions.

Woźniak, M., Scattolin, M., Provenzano, L., De Tommaso, D., Aglioti, S. M., & Wykowska, A. (2026). Non-immersive robot teleoperation does not reliably affect egocentric dishonesty but can influence altruistic dishonesty. iScience, 29, 114645.

ONGOING AND FUTURE WORK

This project is actively growing. Ongoing studies continue to explore how acting through artificial agents shapes moral judgment, perceived responsibility, and ethical behavior. Results will be added as they become available.