JCU Hosts Debate on the “Geopolitical Dimensions of the A.I. Race”

John Cabot University hosted an online roundtable titled "The Global A.I. Race. Geopolitical Dimensions of the A.I. Race and the European Human-Centric Model" on October 30, 2020. The event was organized by the JCU Institute of Future and Innovation Studies as part of "Diplomacy – Festival della Diplomazia 2020," and was moderated by Francesco Lapenta, Director of the Institute.

The discussion provided an overview of the development of A.I. and compared key geopolitical and economic approaches from around the world, with a focus on its socio-economic dimension and global risks.

Highlights
A global geopolitical race for artificial intelligence is being fought in the form of national plans designed to guide the domestic development of the next generation of A.I. technologies. These plans show that countries around the world see similar opportunities in artificial intelligence, but they also reveal profoundly different priorities across geopolitical contexts. The plans stress the importance of preserving and promoting national interests and social, economic, and cultural values, principles that need to be contextualized and legally implemented in different national, social, and entrepreneurial realities. These differing interpretations, together with the competition for business and technical leadership, will have profound effects on the plans' contributions to social justice, human rights, and fair and open economic development, as well as to national and international security.

The Speakers at the A.I. Race Roundtable

The speakers were: Paul Nemitz (Director of the EU Directorate-General for Justice and Consumers), Gry Hasselbalch (Member of the High-Level Expert Group on Artificial Intelligence), Gabriele Mazzini (Member of the European Commission), Helena Malikova (Member of the Directorate General for Competition), Enzo Maria Le Fevre Cervini (Project Leader at DIGIT), JCU Professor Stefan Lorenz Sorgner (Director and Co-founder of the Beyond Humanism Network), Carolina Aguerre (Member of the steering committee of GIGANET), Kondaine Kaliwo (Chief Information Officer at the Malawi Agricultural and Industrial Corporation), Mathias Vermeulen (Public Policy Director at AWO), and Francesco Grillo (Director of the Vision think tank).

The participants discussed how, in recent years, public interest in the ethical implications of A.I. has increased significantly, as has consensus around a set of values and principles that should guide its development. Principles of fairness, transparency, accountability, privacy, security, diversity, and inclusion are increasingly recognized as core values for the ethical development of human-centric A.I. Their implementation, however, is more complex and faces major challenges: geopolitical, cultural, and economic, as well as technical.

A.I. models
Debating the challenges of the Chinese versus the European model, Paul Nemitz recognized an urgent need to understand which rules A.I. must follow in terms of personal data collection. "China has to make up its mind whether it wants technology that does not care about protecting individual rights or to be able to trade and sell while respecting democracy and complying with European rules," he said.

Francesco Lapenta described how Europe is being squeezed between two dominant surveillance models: the US model, in which data collection is organized around a form of commercially driven consumer surveillance coordinated by industry, and the Chinese model, driven by the government and organized around systemic personal data collection and social surveillance and control. The European model suggests a very different approach.

Gabriele Mazzini explained that the so-called “European human-centric approach to A.I.,” outlined by the European Commission in February 2020, is based on values of trust and excellence, and key requirements that include transparency, robustness and accuracy. This model promotes human oversight to govern the risks of misuse and guarantee the safety and fundamental rights of individuals.

Carolina Aguerre talked about the geopolitical position of Latin America within the A.I. race. “We have to integrate the narrative of A.I. in terms of the existing divides regarding the use and development of digital technologies,” said Aguerre. Most Latin American countries are trying to adapt to the European perspective on A.I., but at the same time, they are also considering the Chinese strategic technology and infrastructure economic model, which is more accessible to developing countries.

Kondaine Kaliwo suggested that the African geopolitical position regarding the question of adopting a model for A.I. technologies is similar to Latin America’s. According to Kaliwo, most people in Africa are attracted to the Western model but see China as a cheap solution.

Enzo Maria Le Fevre Cervini commented that differences among Latin American countries need to be assessed, with countries such as Mexico, Argentina, and Chile emerging while the rest are left behind. The same logic applies to Africa.

Helena Malikova presented data showing how uneven investment in A.I. leadership is, with Europe a distant third, and the rest of the world, taken together, a strong competitor to US and Chinese A.I. market leadership.

Francesco Grillo attributed the Chinese competitive advantage to its cultural pragmatism and its roots in necessity. "Necessity is the mother of innovation," stated Grillo. China is leading because of its pragmatic approach and its committed economic investment in solving what it sees as practical problems. In this view, collective necessities trump individual rights. He gave the example of the different reactions to the COVID-19 health crisis in March 2020, when China acted with an authoritarian and pragmatic approach while the US and Europe failed to use data to manage the dramatic situation.

Gry Hasselbalch, member of the European Commission's High-Level Expert Group on Artificial Intelligence, countered that the European model is more ethical and driven by values. "I actually think that when having geopolitical conversations, it is much more important to consider that we have to build technology for democracy," stated Hasselbalch.

Stefan Lorenz Sorgner, JCU Professor and Director and Co-founder of the Beyond Humanism Network, stated that A.I.'s capacities need to be assessed based on their potential implications for humans. "It is not just advertising or social control. We need data for policy making, for natural and social sciences," said Sorgner. He argued that collecting digital data is important for health and medical research but can also undermine democratic structures. A democratic use of data can be achieved by ensuring that its interpretation is performed by human beings for the sake of public health.

Looking to the future
To conclude the conversation, Nemitz expressed his concern about what needs to change to develop geopolitical cooperation on A.I. On the one hand, individuals need to adopt a more critical attitude towards technologies "touching on the essence of being human and challenging democracy," said Nemitz. On the other hand, "we need to reinstall the ability to deliver truth both in research and the free press." According to Nemitz, the ultimate objective of the A.I. race is not establishing who will take first place but understanding whether individuals are willing to live in freedom and democracy and to invest accordingly.