Data Ethics of Power: Institute of Future and Innovation Studies Hosts Gry Hasselbalch

On December 20, 2021, the JCU Institute of Future and Innovation Studies, in collaboration with the European think tank DataEthics.eu, hosted Gry Hasselbalch for a presentation of her latest book Data Ethics of Power: A Human Approach in the Big Data and AI Era (Edward Elgar Publishing, 2022). Hasselbalch presented her book in an open discussion with internationally renowned guests: Professor Frank Pasquale of Brooklyn Law School, author of The Black Box Society (Harvard University Press, 2015) and New Laws of Robotics (Belknap Press, 2020), and Nathalie Smuha, former Coordinator of the EU High Level Expert Group on AI and editor of the European Commission’s official “Ethics Guidelines for Trustworthy AI.”

Data Ethics of Power serves as a map of the field of data ethics. It exposes present power relations while providing a historical account and theoretical analysis of the field, and it offers a foundational study of the human-centric approach to artificial intelligence (AI) and big data. Hasselbalch explained that the existential dilemma at the center of the human struggle with AI and big data grows out of a centuries-old human anxiety about sacrificing our agency to other entities. She argued that in the attempt to extend the human body and surpass the limits of human cognition, we should ask which trade-offs we are willing to make.

Gry Hasselbalch

Instead of following the trend of discussing the values of data ethics, Hasselbalch chose to focus on three focal points in data ethics as a field: power transformation, power transparency, and power-as-data-ethics. The transformation of power relations has been a recurring phenomenon as societies progress. However, the most recent historical transformation, the conversion of currency into data, is one that requires wider recognition and awareness. Power transparency is crucial for a holistic understanding of the distribution of power in a data society: by making the structure of power relations visible, one can outline its possible new forms. Data ethics is a tool of power that shapes human relations with technology. Data ethics, as a field and a network of artifacts, is power; as Hasselbalch explained, for companies, experts, and governments, data ethics serves as a foundation for deciding what roles tech should play in society and human life. “It is about human power, our power,” Hasselbalch said.

Hasselbalch argued that as data became a resource and a locus of interest shaping socio-technological infrastructures, many people became trapped in systems where they feel powerless. This sense of human disempowerment is apparent in biased facial recognition systems and in the huge asymmetries of power between tech companies and citizens. Hasselbalch explained that the issue of human power and empowerment should be embedded in the data ethics infrastructure. As human power is set in constant negotiation with the socio-technological structure of power, data ethics can serve as a way to challenge and renegotiate the current power configuration. “Data ethics has to become culture,” Hasselbalch claimed, quoting from her book:

“…data ethics has to become more than just a moral obligation, a set of programmed rules – it has to be human. We can formulate data ethics guidelines, principles, and strategies, and we can even program artificial agents to act according to their rules. However, to ensure a human-centric distribution of power, data ethics must take the form of culture, to become a cultural process, lived and practiced as a way of being in the world” (Data Ethics of Power, 2022).

The first discussant of Hasselbalch’s book, Professor Frank Pasquale, described her work as “a meeting place between policy discourse and academic expertise, where these two worlds which are strikingly different are brought together.” He pointed out that Data Ethics of Power offers a uniquely positive and human approach, as it “helps us see a better way forward,” and referred to the human approach championed by Hasselbalch as a needed counterpart to the omnipresent tech approach, which he described as alienating and harmful for society. Pasquale distinguished the two approaches as follows. The tech approach assumes there is no metaphysical distinction between human and machine, so that people can be reduced to machines, whereas the human approach presumes that there are distinctions between humans and machine artifacts. The tech approach rejects the possibility of a global baseline for the regulation of AI, whereas the human approach assumes that there are fundamental values and normative levels of human experience that are shared. The tech approach displaces regulation and perceives ‘ethics talk’ as an obstacle, whereas the human approach dissolves the conflict between data and ethics and, as Hasselbalch points out in her book, treats ethics as the intellectual foundation for an AI regulation that has practical power. In Data Ethics of Power, Pasquale said, Hasselbalch provides a vision of a positive horizon of human values as she moves the argument forward beyond “just critique.”

The second discussant, Nathalie Smuha, considered how Hasselbalch brings to light three crucial concepts in data ethics: a holistic vision, AI systems as a distributed network of impact, and transparency of negotiation. The holistic vision in Data Ethics of Power shows in the choice of a “human” approach rather than a “human-centric” one. As Smuha argued, “it is not about humans, but it is human. If we do not take a human-ethical approach to AI, we will dehumanize ourselves.” The “human” approach makes it possible to go beyond the human-centricity of the Anthropocene while highlighting ethical human responsibility. Smuha pointed out that Hasselbalch provides a framework that makes visible how AI is a complex network with real effects. As Smuha explained, a biased AI system impacts society on three levels: individually, a biased facial recognition system affects a single person; collectively, a group of people is discriminated against by the AI system; and societally, a citizen revolts against living in an unjust society that can sooner or later discriminate against her. Smuha also highlighted how Hasselbalch demonstrates that the legal framework is an artifact of power used to address and frame data ethics issues. Because the actors who have resources frame the discussion and the laws, diversity of action is required globally. A piece of regulation such as the AI Act, currently being put forward in the EU, will have an impact on our lives and society, and having a space to shape it is crucial for democracy. Yet even where there is space for debate, citizens are often not aware of what is at stake or of what the concepts in the regulation mean.

Hasselbalch added that AI discourse involves a range of stakeholders, from businesses and governments to citizens, who all see different problems and solutions. However, some stakeholders have gained too much power in this discourse, she pointed out. “It is often said that the GDPR (the General Data Protection Regulation) is an obstacle to innovation, but innovation can take different forms,” Hasselbalch argued. She suggested that a necessary step is to focus on resources that give diverse agents access to this discourse, because activists and critics who are able to challenge and question are a healthy part of democracy. In particular, the voices coming from marginalized groups and regions have to gain space for negotiation.

In the face of this socio-technological transformation, a more reflective and critical mode is needed, Hasselbalch continued. She explained that in her search for an adequate response to the challenges posed by AI and big data, she was inspired by the concept of love as explored by philosopher Henri Bergson. Love, Hasselbalch said, could be seen as a new kind of power, and the concept of unlimited love can serve as a weapon against the liquidity of power. “Just as power is liquid,” Hasselbalch argued, referring to sociologist Zygmunt Bauman, “so is love; love is unconditional.” Hasselbalch concluded that it is within human capacity to control emerging technologies and changes: “To lead the change in AI and data, we need empowered humans. AI has to be supplementary to human agency, and not replace it.”

Gry Hasselbalch is a scholar, co-founder of the think tank DataEthics.eu, and Research Director of DataEthics.eu Research. She was a member of the EU High Level Expert Group on AI that developed the EU’s AI ethics guidelines, coined the term “Trustworthy AI,” and contributed directly to the EU’s AI strategy. Hasselbalch is the Senior Key Expert on AI Ethics in the European Commission’s International Outreach for a Human-Centric Approach to Artificial Intelligence (InTouch.AI.eu).