Can Robots Feel Pain? JCU Hosts Post-Human Studies Workshop

History and Humanities Department Professors Brunella Antomarini and Stefan Lorenz Sorgner, together with Professor Francesco Lapenta, Director of the JCU Institute of Future and Innovation Studies, organized a day-long discussion, “AI: III Post-Human Studies Workshop,” held at John Cabot University on September 21, 2019. The keynote speaker was Professor Domenico Parisi of the Istituto di Scienze e Tecnologie della Cognizione, who delivered a talk titled “Human, not humanoid, robots.” The workshop focused on Artificial Intelligence and the moral status of non-organic entities, such as robots.

In the humanist tradition, only humans count as persons, and personhood grounds a hierarchy that assigns humans a unique moral status above non-human species. Many argue that after Charles Darwin this dichotomy is no longer plausible. Any entity that has self-consciousness, cognitive abilities, and agency, and that can suffer and feel emotions, seems to deserve moral consideration. Yet entities differ in their capacity for suffering or feeling, depending on their degree of consciousness or sentience.

The decisive questions are these: Is sentience necessary for personhood? Some humans cannot feel physiological pain; should they not count as persons? Cognition may not depend on consciousness either, as there are indications (such as experimental robots) that non-conscious cognition is possible. Conversely, cognition can also give rise to a kind of cognitive suffering or emotion, which A.I.s equipped with sensors (embodied A.I.s) could likewise perceive.

Technological development in the field of A.I. seems to challenge the organic mind/brain basis of cognition, self-consciousness, and suffering, as well as the definition of cognitive abilities such as intelligence and creativity. After the failure of early cybernetics to use A.I. as a model of the brain, and after the general systems theories that grew out of second-order cybernetics, a new attempt may now be made to consider the relationship between organic cognition (probable inference) and artificial cognition (big data).

Whether A.I.s and robots are meant to reproduce human abilities, creativity, and emotions, to evolve on their own and develop human-like cognition, morals, and the ability to suffer, or to redefine biological and human qualities altogether, one question persists: what moral agency and responsibility does humanity bear for these technological developments, and for their effects on the environment in which humans and machines will eventually co-exist?