
Uncertainty, Explainability, Transparency and Bias in AI

AI is at the forefront of the latest wave of innovation, and it can potentially transform the relationships between citizens and the governments, industries, and other organisations that serve them.

The predictability and fairness of AI are central to making the applications of such intelligent approaches truly citizen centred. Many AI systems are black boxes or non-deterministic because they rely on randomness to work (Guidotti et al., 2018; Wang, Rainu and Dhruv, 2020), meaning they may produce different outputs on different runs given the same input. This intrinsic uncertainty enables AI systems to operate effectively in a non-deterministic universe, i.e. in environments with incomplete or uncertain information, but it also poses challenges to the reliability of AI in terms of minimising potential algorithmic harm to citizens, such as unequal economic, social, health, and education opportunities (Ryan, 2020; Zecchin, 2023). Bias is another related aspect of AI approaches that may place certain groups of people at a disadvantage (Landers and Behrend, 2023). For instance, the accuracy measure commonly used during model training may make AI algorithms work better for the majority but worse for minorities. A good example is computer vision-based medical diagnosis: biased AI approaches can misdiagnose patient groups, such as gender and ethnic minorities, that are historically underrepresented in existing datasets (Norori et al., 2021); the sketch below illustrates how an overall accuracy figure can mask such a disparity. To address this, we must involve real people in the design, deployment, and governance of AI, in addition to developing more reliable and fair AI algorithms.
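As a minimal illustration of this point (a sketch using synthetic data and hypothetical per-group error rates, not results from any real system or from this theme's projects), the following Python snippet shows how a healthy-looking overall accuracy can conceal much weaker performance on an under-represented group:

```python
# A minimal sketch with synthetic data and hypothetical per-group error
# rates, showing how a single overall accuracy figure can hide much worse
# performance on a minority subgroup.
import numpy as np

rng = np.random.default_rng(seed=0)

n_majority, n_minority = 900, 100
group = np.array(["majority"] * n_majority + ["minority"] * n_minority)

# Hypothetical model behaviour: 95% of majority-group cases are classified
# correctly, but only 60% of the under-represented minority-group cases.
correct = np.concatenate([
    rng.random(n_majority) < 0.95,
    rng.random(n_minority) < 0.60,
])

# Overall accuracy lands around 91%, which looks acceptable in isolation...
print(f"Overall accuracy:   {correct.mean():.2%}")

# ...yet the per-group breakdown reveals a large disparity.
for g in ("majority", "minority"):
    print(f"{g:9s} accuracy: {correct[group == g].mean():.2%}")
```

Optimising only the overall figure during training would leave this disparity invisible, which is why per-group evaluation is a common first step in auditing AI systems for bias.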

Explainability and transparency are key enablers for the adoption of citizen-centred AI design, deployment, and governance (Ehsan et al., 2021; Ferrario and Loi, 2022; Fiok et al., 2022). Explainable AI (XAI) elucidates how AI algorithms generate their outputs, thereby improving understanding of, and confidence in, these approaches. When laypersons can interpret and verify AI, people (and their data) can connect with government, industry and other organisations in ways that prioritise people's thoughts, needs, rights and aspirations. AI approaches can then be co-designed, co-verified, and co-governed by citizens as well as government and businesses (Latonero, 2018), significantly reducing the fears around the potential inequality and harm described above, which would otherwise compromise the full benefit of what AI may offer. A small set of AI algorithms, such as decision trees and rule-based models, are intrinsically explainable, but others require post hoc interventions; the sketch below contrasts the two. Some successes of post hoc XAI approaches have been reported in the literature (Abdollahi and Pradhan, 2023; Chakraborty et al., 2021; Kim, 2023; Kuppa and Le-Khac, 2021; Van et al., 2022), but they are mostly algorithm specific.
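To make the distinction concrete, the following sketch (using scikit-learn and the standard iris toy dataset, purely for illustration) contrasts an intrinsically explainable decision tree, whose learned rules can be printed directly, with a model-agnostic post hoc method, permutation importance, applied to a black-box random forest:

```python
# A minimal sketch contrasting intrinsic explainability with a post hoc,
# model-agnostic explanation, using scikit-learn and the iris toy dataset.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
feature_names = load_iris().feature_names

# Intrinsically explainable: the fitted tree *is* its own explanation,
# and its learned decision rules can be printed verbatim.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))

# Post hoc: treat the forest as a black box and measure how much shuffling
# each feature degrades its predictions (permutation importance).
forest = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(forest, X, y, n_repeats=10, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Permutation importance is one of the few post hoc techniques that is genuinely algorithm agnostic; many of the approaches reported in the literature cited above are instead tied to a specific model family.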

This AI CDT theme aims to investigate and innovate the inclusiveness, methodologies, mechanisms and impacts of these fundamental features during the design, development, deployment, and governance of AI approaches. We will particularly focus on citizens' involvement in these processes and their final benefits, in the fields of health & wellbeing, climate & sustainability, communities, democracy & society, identity & security, co-design & creation, learning & education, and the future of work & organisations.

Experts

  • Professor Longzhi Yang
  • Professor Wai Lok Woo
  • Professor Shaun Lawson
  • Dr Kyle Montague
  • Dr Dawn Branley-Bell

Related Peaks of Research Excellence

  • Computerised Society and Digital Citizens 
  • Volunteering, Humanitarian Crises & Development

Related Projects at Northumbria

  • Automated generative design of modular buildings through explainable artificial intelligence (XAI) – IDRT International Centre for Connected Construction (IC3)
  • Deep Learning and Explainable Artificial Intelligence for Flood Prediction – Faculty of Engineering and Environment
  • Making explainable AI more... explainable! An experimental comparison of AI explanations in human decision-making – Faculty of Health and Life Sciences

Suggested Literature

*If you are struggling to access any of the suggested literature, please contact ccai.cdt@northumbria.ac.uk.
