Institute Hosts JCU Seminar “Social Justice in Code”
The JCU Institute of Future and Innovation Studies, in collaboration with resident and visiting faculty from Florida A&M University, the Maryland Institute College of Art, and Indiana University Bloomington, organized the seminar “Social Justice in Code” on June 7, 2022.
The seminar was hosted by the Institute’s Director Francesco Lapenta; Dean Richard Alo (Florida A&M University); Professors Carlos Theran Suarez (Florida A&M University), Yohn J. Parra Bautista (Florida A&M University), Stefan Sorgner (John Cabot University), Firmin DeBrabander (Maryland Institute College of Art & John Cabot University), and Amit Hagar (Indiana University Bloomington & John Cabot University); and Dr. Gry Hasselbalch (Bonn University Sustainable AI Lab; Research Lead, EU International Outreach on Human-Centric AI).
The seminar provided a comprehensive overview of how digital technologies have transformed the global economy and society over the past several decades, affecting every aspect of daily life. Professors Stefan Sorgner and Firmin DeBrabander argued that data-driven innovations such as AI have the potential to induce epochal transformations with immense social benefits, and that open data should form the basis of this positive evolution. Professor Richard Alo explained that these potential AI- and data-driven transformations also pose a number of risks, raising ethical and moral concerns about the unintended consequences and inherent biases of unregulated AI development. Professors Carlos Theran Suarez and Yohn J. Parra Bautista noted that these risks include the unequal distribution of the benefits that AI and data-driven innovations will generate, as well as social biases that may become systemic in AI algorithmic decision-making. They recommended that these issues be addressed during the early coding phase. To make AI development and the data-driven economy socially just and inclusive, they argued, a set of technical standards and guidelines must be developed for identifying and mitigating unintended, unjustified, or inappropriate biases in the outcomes of AI applications and algorithmic systems.
Unwarranted biases are forms of differential treatment of individuals based on criteria that lack a justifiable operational basis: race, gender, sexual orientation, or any other criterion that is legally or morally unacceptable in the social context in which the system is used, and which may shape the system’s effect on society and its social justice. The panel centered on the concept of social justice, which encompasses the elimination of barriers to social mobility, the establishment of welfare, the impartiality of the justice system, and the economic equality and fairness of a society, ensuring that individuals who fulfill their societal roles receive equitable and proportional compensation regardless of such criteria.
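As an illustration of how such differential treatment might be detected in practice, the sketch below computes one simple, commonly used fairness measure, the demographic parity difference: the gap in positive-outcome rates between groups defined by a protected attribute. The function name, group labels, and toy data are all hypothetical and chosen for illustration only; real audits of algorithmic systems involve many additional measures and considerations.

```python
def demographic_parity_difference(decisions, groups):
    """Return the largest gap in positive-decision rates across groups.

    decisions: list of 0/1 outcomes produced by an algorithmic system
    groups:    list of group labels (e.g. values of a protected attribute)
    """
    # Tally (positive decisions, total decisions) per group.
    rates = {}
    for decision, group in zip(decisions, groups):
        positive, total = rates.get(group, (0, 0))
        rates[group] = (positive + decision, total + 1)
    # Compare the per-group rates of favorable outcomes.
    positive_rates = [p / t for p, t in rates.values()]
    return max(positive_rates) - min(positive_rates)

# Toy data: group "a" receives a favorable decision 3 times out of 4,
# group "b" only 1 time out of 4, giving a gap of 0.5.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(decisions, groups))  # 0.5
```

A gap of zero means both groups receive favorable outcomes at the same rate; a large gap flags a disparity that developers would then need to examine for a justifiable operational basis.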
Applying these moral and ethical codes to technological innovation in general, and to AI development in particular, requires knowledge and ethical considerations that were not previously part of scientific education and training, stated Prof. Amit Hagar and Institute Director Francesco Lapenta, and the panel agreed. Understanding the intricate issues involved in designing AIs for decision-making processes should be a central component of any scientific curriculum and the foundation for standards that could guide the evolution of AI. Such standards should provide a framework that helps developers identify and eliminate unintended, unjustified, and inappropriate bias in their algorithmic systems, address other unintended consequences of AI applications, and recognize the intrinsic limits of all AI decision-making processes.