This paper analyzes the current use of ethical artificial intelligence (AI), argues that it faces ideological limits, and discusses those limits. The topic is of particular relevance to research on the social implementation of AI systems, because ideological underpinnings are not easy to identify and ideology research is underrepresented in work on AI phenomena. The first section analyzes what counts as ethical in ethical AI systems. The second section classifies the dimensions of the ethical in AI systems, highlights their interrelationships, and applies forness as a key concept for narrowing the focus to the ideological component of ethical AI. The third section describes the presence of ideology in ethical AI and clarifies the limits it imposes on AI as a general phenomenon: one that undoubtedly has the potential to contribute to a more humane society but is severely constrained by ideology.
This paper explores what computational methodologies can tell us about philosophical education, particularly in the context of artificial intelligence (AI) ethics. Taking the readings on our AI ethics and responsible AI syllabi as a corpus of AI ethics literature, we conduct an analysis of the content of these courses through a variety of methods: word frequency analysis, term frequency–inverse document frequency (TF–IDF) scoring, document vectorization via SciBERT, clustering via k-means, and topic modelling using latent Dirichlet allocation (LDA). We reflect on the findings of these analyses, and more broadly on what computational approaches can offer to the practice of philosophical education. Finally, we compare our approach to previous computational approaches in philosophy, and more broadly in the digital humanities. This project offers a proof of concept for how contemporary natural-language processing techniques can be used to support philosophical pedagogy: not only to reflect critically on what we teach, but to discover new materials, explore conceptual gaps, and make our courses more accessible to students from a range of disciplinary backgrounds.
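To make the pipeline named in this abstract concrete, the sketch below shows a minimal version of two of its steps: TF-IDF scoring with k-means clustering, and LDA topic modelling. It is an illustration under stated assumptions, not the authors' actual code: the directory of plain-text readings, the cluster count, and the topic count are hypothetical choices, and the SciBERT vectorization step is omitted for brevity.

```python
# Minimal sketch of the pipeline described above: TF-IDF scoring, k-means
# clustering, and LDA topic modelling over a corpus of syllabus readings.
# Assumptions (not from the paper): readings are plain-text files in ./readings,
# and the cluster/topic counts below are illustrative defaults.

from pathlib import Path

from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

# Load each reading as one document.
paths = sorted(Path("readings").glob("*.txt"))
docs = [p.read_text(encoding="utf-8") for p in paths]

# TF-IDF: weight terms by how distinctive they are to each reading.
tfidf = TfidfVectorizer(stop_words="english", max_features=5000)
X = tfidf.fit_transform(docs)

# k-means: group readings by similarity of their weighted vocabularies.
km = KMeans(n_clusters=5, random_state=0, n_init=10).fit(X)
for cluster, path in zip(km.labels_, paths):
    print(f"cluster {cluster}: {path.name}")

# LDA works on raw term counts rather than TF-IDF weights.
counts = CountVectorizer(stop_words="english", max_features=5000)
C = counts.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=8, random_state=0).fit(C)

# Print the top terms for each inferred topic.
terms = counts.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-10:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```

In this kind of analysis, the clusters and topics serve as prompts for reflection on a syllabus (which readings group together, which themes dominate) rather than as definitive measurements.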
AI outcomes that exhibit racism, sexism, homophobia, or other biases are deemed “unfair.” Several scholars have applied John Rawls’s theory of justice to evaluate this unfairness. This paper clarifies, though, that Rawls’s ideal and nonideal theories are ill-equipped to deal with individual instances of AI unfairness; it furthermore argues that Iris Marion Young’s theory is better equipped to do so – not only because it includes sociological accounts of racism and other -isms, but also because it incorporates the consciousness-raising spaces that help “name” racist, sexist, and related behaviours – behaviours that, if left unnamed, remain undetected and, as a result, are both reenacted in society and reproduced by AI.
This paper develops a framework for understanding autonomy and moral agency in hybrid human–AI systems. We begin with an examination of the autonomous vehicle “trolley problem,” which asks how the AI in autonomous vehicles should be programmed. This scenario reveals a fundamental distinction between computational reasoning, where AI excels, and social-moral judgement, where human capabilities remain essential. The autonomous vehicle scenario exemplifies broader challenges in human–AI collaboration. Purely computational approaches to moral decisions prove insufficient, as they lack the social understanding and attentive care characteristic of human judgement. This insufficiency becomes particularly apparent in applications attempting to replicate human social relationships, where the absence of what Ellen Ullman, in her article on posthumanism, Programming the Post-Human: Computer Science Redefines “Life”, terms genuine “presence” and mutual recognition creates risks of diminishing rather than enhancing human capabilities. By examining these cases, this paper develops principles for responsibly integrating AI capabilities while preserving meaningful human agency.
Growing concerns about tools based on large language models (LLMs) have led academic institutions and scientific publishers to adopt rigid policies with little to no tolerance for LLMs in academic writing. Moreover, some may employ artificial intelligence (AI) tools to distinguish LLM-generated from human-written essays. We argue that such an approach is inherently limited, as it leaves room for false detection. After analysing recent studies on the effectiveness of AI detection tools and on the human ability to recognize AI-generated text, we explore the epistemic implications and the black box problem. Turning to ethical aspects, we argue that non-native English speakers are particularly at risk of false-positive AI detection. We conclude by outlining the potential benefits of moderate tolerance for LLM-based applications in scientific publishing.
Artificial intelligence (AI) has emerged as a transformative force, profoundly reshaping many dimensions of human life. Its rapid growth, however, requires critical reflection on both benefits and risks. Ethical evaluation is not secondary but an opportunity to reconsider the meaning of human existence in a technology-driven world, while orienting progress with wisdom and foresight. The initial absence of clear frameworks has intensified debate on the urgent need for governance, legal safeguards, and moral principles to guide its invention, production, and use. This article analyzes the Catholic ethical evaluation of AI and the risks of unregulated development through documents of the Holy See, the teaching of recent popes, and their public pronouncements. It compares Catholic positions with existing governance instruments – such as the EU AI Act, UNESCO’s Recommendation on the Ethics of Artificial Intelligence, and the Rome Call for AI Ethics with its Hiroshima Addendum – highlighting convergences and divergences, with particular attention to emerging ethical challenges. Based on the view that research and innovation are never morally neutral but always value-laden, the article underscores convergence between secular governance and Catholic teaching regarding the design, implementation, and responsible use of AI. At the same time, it highlights the Catholic emphasis on the centrality of the person – affirming that AI must serve humanity rather than replace or dominate it – on the inviolability of life (rejecting autonomous weapon systems), on human dignity (including principles such as non-discrimination, transparency, inclusion, accountability, reliability, safety, and privacy), on the dignity of work, social justice, and the universal call to fraternity. From this perspective, the Church supports a global ethical and regulatory framework, which it sees as essential not only to prevent harmful applications but also to promote virtuous practices and ensure continuous human oversight in the development and deployment of AI.