| Open Access | Ethical Architectures for Autonomous Driving: Reconciling Trolley Problem Thought Experiments with Practicable Decision-Making Frameworks
Dr. Esteban R. Moreno, Global Institute for Technology and Ethics, University of Lisbon

Abstract
This paper examines the persistent tension between abstract moral thought experiments—most notably the trolley problem—and the concrete engineering, legal, and social realities of autonomous vehicle decision-making. We synthesize philosophical analyses, empirical studies, technical descriptions of perception and control systems, and recent regulatory and dataset-auditing work to produce a coherent account of ethically bounded architectures for autonomous driving systems. The central argument is that a purely philosophical framing (e.g., sacrificial dilemmas) is insufficient for designing viable autonomous driving ethics; instead, ethically bounded AI must integrate formal ethical constraints, decision-theoretic planning, robust perception, human-centered accountability mechanisms, dataset auditing, and meaningful human control. We propose a layered framework that links (1) low-level safety and collision mitigation algorithms, (2) intermediate decision-theoretic planning incorporating probabilistic risk assessment and distributive ethical constraints, and (3) high-level governance mechanisms for transparency, auditing, and legal alignment. We analyze normative trade-offs between utilitarian, deontological, and contractualist approaches and show how these philosophical families map onto technical choices in architecture, datasets, and evaluation. The paper also explores responsibility attribution, the role of simulated and real-world datasets in shaping moral behavior, and the procedural mechanisms needed to operationalize “meaningful human control.” Finally, we identify research priorities and policy recommendations to move from rhetorical trolley-problem debates to implementable, justifiable systems that can be audited and regulated. The conclusions emphasize layered safeguards, multidisciplinary governance, dataset quality assurance, and a move away from binary sacrificial framing toward continuous harm-minimization under uncertainty.
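The layered interaction between deontological constraints and decision-theoretic planning described in the abstract can be illustrated with a minimal sketch. This is not code from the paper: the maneuver names, harm values, and the two-layer filter-then-minimize structure are illustrative assumptions. The sketch shows the shift the abstract advocates, from a binary sacrificial choice to continuous expected-harm minimization over permissible actions under uncertainty.

```python
# Illustrative sketch (hypothetical, not from the paper): ethically bounded
# action selection. A deontological filter first removes impermissible
# maneuvers (hard constraints are never traded off against expected harm);
# a decision-theoretic layer then minimizes expected harm over what remains.
from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    violates_hard_constraint: bool        # e.g., deliberately targeting a bystander
    outcome_probs: list                   # [(probability, harm_estimate), ...]


def expected_harm(m: Maneuver) -> float:
    """Probability-weighted harm over the maneuver's possible outcomes."""
    return sum(p * h for p, h in m.outcome_probs)


def select_maneuver(candidates: list) -> Maneuver:
    # Layer 1: deontological filter.
    permissible = [m for m in candidates if not m.violates_hard_constraint]
    if not permissible:
        # Degenerate case: every option is constrained; fall back to
        # pure harm minimization rather than refusing to act.
        permissible = candidates
    # Layer 2: decision-theoretic choice among permissible actions.
    return min(permissible, key=expected_harm)


# Hypothetical scenario: braking risks a rear collision with 30% probability;
# swerving risks a severe outcome with 10% probability; the third option
# violates a hard constraint and is filtered out regardless of its score.
brake = Maneuver("emergency_brake", False, [(0.7, 0.0), (0.3, 2.0)])
swerve = Maneuver("swerve_left", False, [(0.9, 0.0), (0.1, 8.0)])
target = Maneuver("swerve_into_bystander", True, [(1.0, 1.0)])

best = select_maneuver([brake, swerve, target])
```

Here the constrained option is excluded before any harm comparison, and the planner picks the lowest expected harm among the remainder; the governance layer in the proposed framework would sit above this, auditing how the harm estimates and constraints were derived.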
Keywords
Autonomous vehicles, ethics, trolley problem, decision-theoretic planning
Copyright License
Copyright (c) 2025 Dr. Esteban R. Moreno

This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors retain the copyright of their manuscripts, and all Open Access articles are disseminated under the terms of the Creative Commons Attribution License 4.0 (CC-BY), which permits unrestricted use, distribution, and reproduction in any medium, provided that the original work is appropriately cited. The use of general descriptive names, trade names, trademarks, and so forth in this publication, even if not specifically identified, does not imply that these names are not protected by the relevant laws and regulations.

