AI is at the forefront of the latest wave of innovation, and it can potentially reshape citizen-centred relationships.
The predictability and fairness of AI are central to making the applications of such intelligent approaches truly citizen centred. Many AI systems are black boxes or non-deterministic because they rely on randomness to work (Guidotti et al., 2018; Wang, Kaushal and Khullar, 2020). This means many AI approaches may produce different outputs on different runs from the same input. This intrinsic uncertainty enables AI systems to operate effectively in a non-deterministic world, i.e. in environments with incomplete or uncertain information, but it also poses challenges to the reliability of AI in the sense of minimising potential algorithmic harm to citizens, such as unequal economic, social, health, and education opportunities (Ryan, 2020; Zecchin et al., 2023). Bias is another related aspect of AI approaches that may place certain groups of people at a disadvantage (Landers and Behrend, 2023). For instance, the accuracy measure commonly used during AI model training is an aggregate over all samples, so it may drive algorithms to work better for the majority but worse for minorities. A good example of this is computer vision-based medical diagnosis: biased AI approaches can misdiagnose patient groups, such as gender and ethnic minorities, that are historically underrepresented in existing datasets (Norori et al., 2021). To address this, we must involve real people in the design, deployment, and governance of AI, in addition to developing more reliable and fair AI algorithms.
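To make the point about aggregate metrics concrete, the short Python sketch below uses invented toy labels and a hypothetical majority/minority group tag to show how an overall accuracy figure can look acceptable while performance on an under-represented group is poor. It is purely illustrative and not drawn from any of the projects or studies cited here.

```python
# Minimal illustration (toy data, invented group labels) of how an aggregate
# accuracy score can hide poor performance on an under-represented group.
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 0, 1, 0, 1, 1, 0, 1]       # ground-truth labels
y_pred = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]       # model outputs
groups = ["majority"] * 7 + ["minority"] * 3  # hypothetical demographic tag

print("overall accuracy:", accuracy_score(y_true, y_pred))  # 0.70 - looks acceptable

# Per-group accuracy exposes the disparity hidden by the aggregate figure.
for g in sorted(set(groups)):
    idx = [i for i, grp in enumerate(groups) if grp == g]
    acc = accuracy_score([y_true[i] for i in idx], [y_pred[i] for i in idx])
    print(f"{g} accuracy: {acc:.2f}")
```

With this toy data the overall accuracy is 0.70, while the majority group scores 1.00 and the minority group 0.00; it is exactly this kind of disparity that fairness-aware evaluation aims to surface.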
Explainability and transparency of AI are key enablers for the adoption of citizen-centred AI design, deployment, and governance approaches (Ehsan et al., 2021; Ferrario and Loi, 2022; Fiok et al., 2022). Explainable AI (XAI) elucidates how AI algorithms generate their outputs, thereby improving understanding of, and confidence in, these approaches. AI that laypersons can interpret and verify allows people (and their data) to be connected with government, industry and other organisations in ways that prioritise people’s thoughts, needs, rights and aspirations. In this way, AI approaches can be co-designed, co-verified, and co-governed by citizens as well as government and businesses (Latonero, 2018). This will help allay the fears around the inequality and harm described above, so that the full benefits AI may offer are not compromised. A small set of AI algorithms, such as decision trees and rule-based models, are intrinsically explainable, but others require post hoc interventions. Some successes of post hoc XAI approaches have been reported in the literature (Abdollahi and Pradhan, 2023; Chakraborty et al., 2021; Kim, 2023; Kuppa and Le-Khac, 2021; Van der Velden et al., 2022), but they are mostly algorithm specific.
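As an illustration of the distinction between intrinsically explainable models and post hoc, model-agnostic explanations, the sketch below (our own illustrative example, not taken from the cited studies) trains a shallow decision tree whose rules can be read directly, then treats a random forest as a black box and explains it post hoc with permutation feature importance. The dataset and parameter choices are assumptions made purely for demonstration.

```python
# Minimal sketch: an intrinsically interpretable model vs. a model-agnostic
# post hoc explanation. Dataset and hyperparameters are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Intrinsically explainable: a shallow decision tree can be read as if-then rules.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))

# "Black box" model plus a post hoc, model-agnostic explanation: permutation
# importance scores each feature by the drop in accuracy when it is shuffled.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(forest, X, y, n_repeats=5, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.3f}")
```

Permutation importance is only one example of a model-agnostic post hoc method; approaches such as those surveyed by Guidotti et al. (2018) differ in what they explain and for whom, which is precisely the design space this theme explores.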
This AI CDT theme aims to investigate and innovate around the inclusiveness, methodology, mechanisms and impacts of these fundamental features during the design, development, deployment, and governance of AI approaches. We will particularly focus on citizens’ involvement in these processes and the benefits to them, in the fields of health & wellbeing, climate & sustainability, communities, democracy & society, identity & security, co-design & creation, learning & education, and the future of work & organisations.
Experts
- Professor Longzhi Yang
- Professor Wai Lok Woo
- Professor Shaun Lawson
- Dr Kyle Montague
- Dr Dawn Branley-Bell
Related Peaks of Research Excellence
- Computerised Society and Digital Citizens
- Volunteering, Humanitarian Crises & Development
Related Projects at Northumbria
- Automated generative design of modular buildings through explainable artificial intelligence (XAI) – IDRT International Centre for Connected Construction (IC3)
- Deep Learning and Explainable Artificial Intelligence for Flood Prediction – Faculty of Engineering and Environment
- Making explainable AI more... explainable! An experimental comparison of AI explanations in human decision-making – Faculty of Health and Life Sciences
Suggested Literature
- Abdollahi, A. and Pradhan, B., 2023. Explainable artificial intelligence (XAI) for interpreting the contributing factors feed into the wildfire susceptibility prediction model. Science of The Total Environment, 879, p.163004.
- Chakraborty, D., Alam, A., Chaudhuri, S., Başağaoğlu, H., Sulbaran, T. and Langar, S., 2021. Scenario-based prediction of climate change impacts on building cooling energy consumption with explainable artificial intelligence. Applied Energy, 291, p.116807.
- Ehsan, U., Liao, Q.V., Muller, M., Riedl, M.O. and Weisz, J.D., 2021, May. Expanding explainability: Towards social transparency in AI systems. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-19).
- Ferrario, A. and Loi, M., 2022, June. How explainability contributes to trust in AI. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 1457-1466).
- Fiok, K., Farahani, F.V., Karwowski, W. and Ahram, T., 2022. Explainable artificial intelligence for education and training. The Journal of Defense Modeling and Simulation, 19(2), pp.133-144.
- Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F. and Pedreschi, D., 2018. A survey of methods for explaining black box models. ACM Computing Surveys, 51(5), pp.1-42.
- Kim, P.W., 2023. A Framework to Overcome the Dark Side of Generative Artificial Intelligence (GAI) Like ChatGPT in Social Media and Education. IEEE Transactions on Computational Social Systems.
- Kuppa, A. and Le-Khac, N.A., 2021. Adversarial XAI methods in cybersecurity. IEEE Transactions on Information Forensics and Security, 16, pp.4924-4938.
- Landers, R.N. and Behrend, T.S., 2023. Auditing the AI auditors: A framework for evaluating fairness and bias in high stakes AI predictive models. American Psychologist, 78(1), p.36.
- Latonero, M., 2018. Governing artificial intelligence: Upholding human rights & dignity.
- Norori, N., Hu, Q., Aellen, F.M., Faraci, F.D. and Tzovara, A., 2021. Addressing bias in big data and AI for health care: A call for open science. Patterns, 2(10).
- Ryan, M., 2020. In AI we trust: ethics, artificial intelligence, and reliability. Science and Engineering Ethics, 26(5), pp.2749-2767.
- Van der Velden, B.H., Kuijf, H.J., Gilhuijs, K.G. and Viergever, M.A., 2022. Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Medical Image Analysis, 79, p.102470.
- Wang, F., Kaushal, R. and Khullar, D., 2020. Should health care demand interpretable artificial intelligence or accept “black box” medicine? Annals of Internal Medicine, 172(1), pp.59-60.
- Zecchin, M., Park, S., Simeone, O., Kountouris, M. and Gesbert, D., 2023. Robust Bayesian learning for reliable wireless AI: Framework and applications. IEEE Transactions on Cognitive Communications and Networking.
*If you are struggling to access any of the suggested literature, please contact CCAI@northumbria.ac.uk.