15 May 2026

In this article, we explore how King’s Health Partners is helping to lead a national effort to ensure artificial intelligence in health and social care is safe, equitable, and effective for all. 

What is Responsible AI UK? 

Responsible AI UK (RAi UK) is a UKRI-funded national consortium bringing together researchers, clinicians, innovators, and policymakers to shape how artificial intelligence (AI) is developed and deployed across the UK. 

Its mission is to ensure that AI systems are trustworthy, transparent, and genuinely beneficial to society, not just technically capable.

Within this wider initiative, a dedicated Health and Social Care Working Group focuses specifically on the unique opportunities and challenges that AI presents in clinical and care settings. From diagnostic algorithms to administrative automation, AI is already reshaping how health services operate. The Working Group both undertakes joint projects and draws on the independent work of its members. It creates a forum where insights from across institutions and disciplines can be shared, discussed, and translated into coordinated action.

KHP leads this Working Group. As an academic health partnership that sits at the intersection of world-class research, clinical excellence, and NHS delivery, KHP combines the academic rigour and on-the-ground perspective that responsible AI development demands, especially where the stakes are patient safety, health equity, and public trust.

What does the Working Group do?

The RAi UK Health and Social Care Working Group convenes clinicians, researchers, ethicists, and industry partners to develop practical frameworks, guidance and tools for the responsible adoption of AI in health and social care. Its work spans evaluation and governance, equity and bias, patient and public involvement, and the translation of AI research into real-world clinical benefit.

One example of this in practice is a recent policy roundtable on building trust in AI deployment in the NHS. This brought together clinicians, policymakers, and patient advocates to examine the conditions under which AI tools can be safely and confidently introduced into clinical practice. The roundtable drew on new public opinion polling exploring what patients and the public actually think about AI in their care. The findings challenge some of the assumptions often made on their behalf, and are now informing recommendations to national policymakers and senior leaders across the healthcare system.

The group has also contributed directly to the national conversation around AI adoption in health and care, through a BMJ Digital Health & AI paper, Responsible AI UK: priorities for delivering England's health and life sciences plans, which sets out a clear, evidence-based framework for what is needed to realise the NHS's AI ambitions without cutting corners on safety, equity or accountability.

A central strand of work is the Responsible AI NHS Champions network, a growing community of multidisciplinary NHS staff from across the UK. Through this network, training, resources and peer support are shared to help members identify AI risks and opportunities, engage colleagues in evidence-based conversation, develop best practice, and act as a conduit between frontline practice and national policy.

Why does responsible AI matter?

AI holds genuine promise for healthcare, from supporting early cancer diagnosis, to reducing administrative burdens on overstretched staff. But without the right safeguards, it also carries serious risks. These could include perpetuating health inequalities if trained on unrepresentative data, eroding patient trust if deployed without transparency, or causing direct harm if systems fail in unpredictable ways.

The case for responsible AI is not about slowing innovation. It is about making sure that innovation genuinely works for patients, clinicians, and communities - particularly those who have historically been underserved. That means asking hard questions about how AI tools are built, tested, monitored, and governed, and ensuring that the people most affected have a meaningful voice in those decisions.

A national network is essential to this work. AI adoption in the NHS is accelerating rapidly and unevenly, with individual organisations navigating complex decisions largely in isolation. As part of the wider RAi UK consortium, the Working Group can draw on a vast ecosystem of research and expertise spanning fields beyond health, including law, finance, and the social sciences. This brings insights to bear on shared challenges and provides the infrastructure and expert community needed to raise standards across the system as a whole.

Who leads the RAi UK Health and Social Care Working Group?

The Working Group is chaired by Nicholas Raison, Clinical Senior Lecturer at King’s College London and Honorary Consultant Urological Surgeon at King’s College Hospital NHS Foundation Trust. Fin MacAskill serves as National Specialty Advisor for RAi UK Health and Social Care and is a Urological Surgical Trainee at Guy’s and St Thomas’ NHS Foundation Trust. Strategic oversight is provided by Professor Ian Abbs, Interim Chief Medical Officer at Guy’s and St Thomas’ NHS Foundation Trust, helping ensure the Working Group’s activities align with KHP’s organisational priorities and the wider national agenda.

“As a surgical trainee, I see every day how much AI could change the way we deliver care, and how easily it can go wrong without the right scaffolding around it. The HSCWG brings the voice of those working directly with patients across the country, the RAi NHS Champions, directly to experts sharing best practices to implement responsible, trusted deployment of AI in the NHS.” - Fin MacAskill.

“We are well aware of the enormous potential AI has to help address some of the most pressing challenges facing health systems, particularly the NHS. But if we are to realise those benefits fully, we must ensure AI is implemented responsibly — protecting patients, maintaining public trust, and ultimately helping deliver the best possible care.” - Nicholas Raison.

Beyond KHP, the Working Group works with a wide range of partners and collaborators. These include the World Health Organization (WHO), with whom the group engages on global standards for AI in health; the South East London Integrated Care System, which it supports in developing its AI governance model; and the NHS England AI Lab and Ambassadors Network, with whom it works to help accelerate safe adoption, strengthen public trust, and support more efficient, patient-centred care across the NHS.

What’s next?

The Working Group has an ambitious programme ahead. This includes the development of a BMJ Collection spanning multiple specialty use-cases of AI and the key actions required to make adoption work in practice, taking forward the recommendations from the policy roundtable with national stakeholders. The NHS Champions network will also expand further, reaching more trusts and integrated care systems across the UK so that responsible AI becomes a lived reality in day-to-day NHS practice.

Longer term, the aspiration is to influence both policy and practice at the national level, helping shape a regulatory and operational environment in which AI genuinely serves patients and clinicians, grounded in evidence, equity and public accountability.

If you work across KHP or the wider NHS and want to get involved, there are several ways to do so. You can join the Responsible AI NHS Champions network, participate in Working Group events and consultations, or simply follow our work and share it with your networks via the RAi UK LinkedIn or monthly newsletter.