Ghanaian Professor Jerry John Kponyo Explores Ethical AI and Afrocentric Datasets in Global Discourse Series

(IN BRIEF) Professor Jerry John Kponyo from Kwame Nkrumah University of Science and Technology (KNUST) in Ghana delves into the concept of responsible AI as part of the “One Topic, One Loop” global discourse series. He highlights the significance of ethical, transparent, and inclusive approaches in AI development, emphasizing the importance of addressing existing biases and potential misinformation. Kponyo discusses the Responsible AI Lab at KNUST, which generates Afrocentric datasets and focuses on AI solutions tailored to the African context. He stresses the multidisciplinary nature of responsible AI and calls for collaborations between various experts to ensure ethical AI development. The goal is to achieve AI systems that prioritize public safety, fairness, inclusivity, and accountability.

(PRESS RELEASE) MUNICH, 24-Aug-2023 — /EuropaWire/ — Artificial Intelligence touches every facet of the human condition. It is having an enormous impact on our workforce, making us more productive, efficient, and impactful. However, AI also brings a host of unintended consequences, including the perpetuation of existing stereotypes against minorities and the ease with which it spreads misinformation.

The need for responsible AI

In order to design AI solutions for the public good, it is important to consider a set of principles that ensure the ethical, transparent, and accountable use of AI technologies, consistent with user expectations, organisational values, and societal laws and norms. For us at the Responsible AI Lab at KNUST, responsible AI refers to the practice of designing, developing, and deploying artificial intelligence in an ethical manner. This means ensuring that AI solutions are delivered with integrity, equity, and respect for individuals, and that developers of AI solutions are always mindful of the social impact of what they are building. AI systems are fundamentally socio-technical: they include the social context in which they are developed, used, and acted upon, with its diversity of stakeholders, institutions, cultures, norms, and spaces. In sum, responsible AI is human-centered AI.

Responsible AI means understanding how to maintain public safety, how to prevent harm against minorities, and how to ensure equal opportunity. This requires AI systems that operate reliably and with low error rates, minimising risks and preventing potential harm to individuals and society. Additionally, responsible implementation of AI is crucial to avoid reinforcing biases or discriminating against minority groups, and to promote fairness and inclusivity. By adhering to ethical principles and rigorous testing, AI can mitigate biases and promote equitable outcomes.
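One common form such rigorous testing takes is auditing a model's outputs for disparities across demographic groups. The sketch below is purely illustrative and not from the Responsible AI Lab; it assumes hypothetical binary predictions and group labels, and computes the demographic-parity gap, i.e. the largest difference in positive-prediction rates between groups:

```python
# Minimal fairness-audit sketch: measure how unevenly a classifier
# assigns positive outcomes across groups. All names and data are
# hypothetical.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between any
    two groups; 0.0 means all groups receive positives at the same rate."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Group "a" receives positives at 3/4, group "b" at 1/4 -> gap of 0.5.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)  # 0.5
```

A gap near zero does not prove a system is fair (demographic parity is only one of several competing fairness criteria), but a large gap is a concrete, testable signal that a deployment needs scrutiny.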

Generating Afrocentric datasets

While STEM professionals build AI systems, responsible AI is a multidisciplinary field and requires a variety of essential contributors to ensure a holistic and comprehensive approach to AI development. Computer scientists and engineers could focus on creating transparent, fair, and unbiased AI systems, while data scientists and statisticians ensure data integrity and develop methods to counter bias.

In addition, legal experts are just as important to the process as social scientists, economists, and humanities scholars, as well as communications and cybersecurity experts. Only a broad-based, multidisciplinary approach can succeed in developing AI that benefits society while minimizing harm.

In the Responsible AI Lab at KNUST, our core priorities revolve around the creation of Afrocentric datasets and the development of solutions specifically catered to the African continent. This commitment arises from recognizing the fact that Africa has been left behind in the AI dialogues. By curating datasets that accurately represent the diverse African context and designing AI solutions that directly address the continent’s challenges, we aim to bridge the gap and contribute to meaningful technological progress. Our emphasis on Afrocentric datasets and solutions reflects the need to ensure that AI discussions encompass and benefit the unique realities of the African continent.

Making Generative AI responsible

To design Generative AI more responsibly, it is important to foster further research aimed at comprehensively assessing the efficacy of Generative AI across a range of contexts, including critical settings like classrooms and hospitals. Currently, there is a lack of quantitative research that delves into the impact of Generative AI on education, learning outcomes, and its potential to assist individuals with learning disabilities.

The establishment of ethical guidelines stands as a critical pillar in the responsible advancement of AI technologies. Collaboration between local governments, academia, industry stakeholders, and international organizations plays a key role in formulating ethical standards that span domains like data privacy, bias mitigation, transparency, and accountability. Education, too, occupies a central position in the responsible use of AI systems. The ethical application of AI should be inculcated into the educational curriculum, running in parallel with the teaching of technical skills. Addressing key issues such as fairness, accountability, confidentiality, ethics, transparency, and safety is vital across all disciplines.

At this point, I would like to hand over to Prof. Sune Lehmann with the question: which datasets do we need to ensure responsible AI?

Global discourse series “One Topic, One Loop”

Four people from four different countries and four different universities discuss a current topic in research and teaching. The series begins with an initial question to which the first person responds and asks the next person another question on the same topic. The series ends with the first person answering the last question and reflecting on all previous answers. The topic of the first season is Large Language Models and their impact on research and teaching.

Our authors are: Enkelejda Kasneci, Head of the Chair for Human-Centered Technologies for Learning at the TUM School of Social Sciences and Technology; Aldo Faisal, Professor of AI & Neuroscience at Imperial College London; Jerry John Kponyo, Associate Professor of Telecommunications Engineering at Kwame Nkrumah University of Science and Technology; and Sune Lehmann Jørgensen, Professor at the Department of Applied Mathematics and Computer Science at Technical University of Denmark.

Publications

  • P. Mikalef, K. Conboy, J. E. Lundström, and A. Popovič, “Thinking responsibly about responsible AI and ‘the dark side’ of AI,” European Journal of Information Systems, vol. 31, no. 3, pp. 257–268, 2022. doi:10.1080/0960085X.2022.2026621
  • V. Dignum, “The role and challenges of education for responsible AI,” London Review of Education, vol. 19, no. 1, 2021. doi:10.14324/LRE.19.1.01
  • W. M. Lim, A. Gunasekara, J. L. Pallant, J. I. Pallant, and E. Pechenkina, “Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators,” The International Journal of Management Education, vol. 21, no. 2, p. 100790, Jul. 2023. doi:10.1016/j.ijme.2023.100790
  • J. Su (苏嘉红) and W. Yang (杨伟鹏), “Unlocking the Power of ChatGPT: A Framework for Applying Generative AI in Education,” ECNU Review of Education, Apr. 2023. doi:10.1177/20965311231168423

Further information and links

  • Jerry John Kponyo is Associate Professor of Telecommunications Engineering at Kwame Nkrumah University of Science and Technology. He is scientific director of the Responsible AI Lab at KNUST and co-founder of the Responsible AI Network Africa, a collaborative effort between KNUST and TUM.
  • Digital transformation at TUM

Media Contact:

Phone: +49 (0)89-289-01
Fax: +49 (0)89-289-22000

SOURCE: Technical University of Munich
