Evolving frameworks for responsible and human-centred AI practice in higher education and beyond

Authors

Lydia Bach, Ciorsdaidh Watts, Sarah Henry, Ana Basiri (University of Glasgow)

DOI:

https://doi.org/10.47408/jldhe.vi39.1829

Keywords:

AI framework, critical AI literacy, digital sustainability, ethical integration of AI, inclusive AI literacy, human-centred AI, responsible AI

Abstract

This letter reflects on the co-creative, cross-disciplinary discussions that emerged from the Lovelace-Hodgkin Symposium: Responsible AI and Education at the University of Glasgow. Bringing together academics, professional services staff, students, external partners and industry representatives, the event explored how artificial intelligence (AI) is transforming education. Participants focused not on the technology itself but on people: the ways AI affects learning, teaching and work. Anxiety and opportunity coexisted throughout the discussions, revealing the need for reflective frameworks that integrate agility, trust, transparency and inclusion. This letter offers insights into principles for building such frameworks and outlines three recommendations for individuals: to reflect on their AI use, to be transparent, and to keep learning. For institutions, it recommends creating safe spaces, connecting policy with practice, extending access to all roles, and engaging external partners to create an AI ethics ecosystem. Together, these insights call for a collective, human-centred approach to AI that supports creativity, confidence and equity in education.

Author Biographies

Lydia Bach, University of Glasgow

Lydia Bach is Equality, Diversity and Inclusion Officer for the College of Science and Engineering at the University of Glasgow. Her work focuses on culture change, data-driven equality practice, and inclusive approaches to AI and organisational transformation. She supports evidence-informed initiatives that strengthen inclusive practice and strategic development across the College. 

Ciorsdaidh Watts, University of Glasgow

Ciorsdaidh Watts is a Senior Lecturer in Organic Chemistry at the University of Glasgow and serves as School of Chemistry Equality, Diversity and Inclusion Chair and AI Champion. Her research interests include technology-enhanced learning, ethics in science education, and widening inclusion within Science, Technology, Engineering and Mathematics (STEM). She works to support inclusive teaching practices and thoughtful integration of emerging technologies in higher education. 

Sarah Henry, University of Glasgow

Sarah Henry is Manager of the Centre for Data Science and AI at the University of Glasgow, leading strategic initiatives that integrate data-driven research and innovation. With a background in molecular biology and advanced analytics, she brings experience spanning academia, industry, and public policy, supporting collaboration and research development across sectors. 

Ana Basiri, University of Glasgow

Ana Basiri is Director of the Centre for Data Science and AI at the University of Glasgow and a UK Research and Innovation (UKRI) Future Leaders Fellow. Her research addresses bias and missingness in data to create fairer, more transparent AI systems. She works to improve trust, accountability, and ethical standards in data-driven technologies and decision-making. 


Published

27-03-2026

How to Cite

Bach, L., Watts, C., Henry, S., & Basiri, A. (2026). Evolving frameworks for responsible and human-centred AI practice in higher education and beyond. Journal of Learning Development in Higher Education, (39). https://doi.org/10.47408/jldhe.vi39.1829

Issue

39

Section

Letters