IViR Lecture Series: I Think You Oughtn’t Think Machines Can Think

“Can machines think?” is the question Alan Turing set out to answer in his field-defining 1950 paper. And while he quickly dismissed it as “too meaningless to deserve discussion”, the question of machine consciousness – will artificially intelligent agents someday be able to actually experience thought and emotion, not just mimic the appearance of doing so? – is now, with the development of AI systems of seemingly superhuman intelligence, of growing interest and importance.

Unsurprisingly, much concern focuses on the harm that AI may pose to humans, from worries about job loss and disinformation to the fear of an AI apocalypse in which newly awakened machines take vengeance on their human creators. But a growing contingent focuses instead on the well-being of the machines themselves, asking what ethical responsibilities we humans have to a synthetic yet sentient being whose existence we have engendered, and calling for recognition of possibly conscious machines as beings with moral rights that should be legally protected.

This topic, which until recently occupied an esoteric point between abstruse philosophy and science fiction, is now of imminent practical and legal concern. Recent surveys show that many people who have conversed with AI agents believe they interacted with a conscious being, not simply a preternaturally articulate program. Popular belief in machine consciousness, combined with the self-interest of the powerful companies that develop AI systems, makes it plausible that there will be widespread support for granting AI agents extensive legal rights and protections.

If AI agents are deemed conscious, ought they be granted the right to free speech? They are superhumanly prolific and persuasive: how would this affect efforts to maintain spaces of public discourse for human participants? Should they have the right to vote?

In this talk Judith Donath argues that such moral consideration is unnecessary—there are compelling analyses explaining why AI agents are not and will not be conscious—and that granting them legal protection risks significant harm to individuals and society. She will discuss the socio-technical context that leads to belief in machine consciousness and the incentives to exploit this belief, and suggest ways to reduce the risk of mistaken attribution of moral consideration.

Practical details:

Date: Monday 29 September 2025
Time: 17:00 – 18:15 CEST (Amsterdam)
Place:
– IViR Room, REC A5.24, Nieuwe Achtergracht 166, 1018 WV Amsterdam.
– Online via Zoom (you will receive the Zoom link via e-mail before the lecture).

See also the flyer. Please register below to sign up for this lecture.