14 May 2026

AI is creating a significant shift in how people access healthcare, yet public attitudes to the technology remain deeply divided, according to a major new study by King's Health Partners, Responsible AI UK, and the Policy Institute at King's College London.

The research finds that one in seven members of the public (15%) have used AI chatbots for health advice instead of contacting a GP or other NHS service, and one in ten (10%) say they have used AI for mental health therapy or wellbeing support instead of seeing a trained professional.

But the findings raise questions about the risks of this shift. One in five (20%) of those who sought health advice from AI say the technology did not encourage them to seek a professional opinion – and a similar proportion (21%) report having decided against seeking professional healthcare advice because of something an AI chatbot said. This comes as recent evidence shows AI chatbots misdiagnose in up to 80% of early medical cases*.

The findings come as the government moves to accelerate the adoption of AI across the NHS, and as the National Commission into the Regulation of AI in Healthcare considers how the regulatory framework needs to evolve. At present, there is no single regulatory framework for AI in healthcare – a gap that critics including the Nuffield Trust and Royal College of Physicians have described as a "wild west" of AI adoption. The study finds:

  • The public are split on whether AI should be used in clinical decision-making, with around two in five supporting (37%) and a similar proportion opposing (38%) its use. But opposition is held more strongly than support – those strongly opposing the use of AI in clinical decision-making (15%) are almost double those who strongly support it (8%).
  • 18-to-24-year-olds are the age group most opposed to clinical use of AI in the NHS, with half (49%) saying they oppose it – compared with 36% of those aged 65 and over. Opposition is also far higher among women (46%) than men (30%).
  • The public are still most likely to hold the treating doctor or healthcare professional accountable (34%) if AI misses a health problem in a clinical image, with far fewer blaming the company that developed the AI (6%).
  • The public significantly overestimate how widely AI is already being used by GPs – guessing on average that 39% of GPs use AI in clinical decision-making, when the true figure is 8%.
  • Three-quarters (76%) say AI tools used in patient care should be officially approved and regulated, even if this slows down their adoption. Only 17% take the opposing view – that doctors should be able to choose AI tools freely without official approval.
  • The top emotion felt by the public about the NHS using AI for clinical tasks is anxiety about safety and accuracy (39%), with the public overall more than twice as likely to select a negative emotion (63%) as a positive one (28%), and with women more likely than men to feel anxious about AI in clinical settings (46% vs 31%).

One in seven already turning to AI over their GP

The most common reasons people give for using AI chatbots for health advice are convenience (46%), curiosity (45%), and uncertainty about whether their concern was serious enough to contact a GP (39%). A quarter (25%) say they did so because they were waiting too long for NHS services.

Among those who have used AI for health advice, 59% say it has been good for their physical health and 53% for their mental health. However, views on its impact on the wider public are less positive. Overall, more people believe AI chatbots are bad (42%) rather than good (31%) for mental health, while opinion is more evenly divided on physical health (33% bad, 36% good).

The age group most likely to report negative personal effects from AI health use is 18-to-24-year-olds, with one in four (25%) saying it has been bad for their mental health and one in five (19%) saying the same for their physical health – the highest of any age group. Yet even among this group, more report positive than negative effects.

Accountability, regulation, and the right to opt out

The public's demand for oversight is clear. Three-quarters (76%) say AI tools used in patient care should be officially approved and regulated, even if this slows down their adoption – with 17% taking the opposing view that doctors should be able to choose AI tools freely without official approval.

That demand sits in sharp contrast to the current reality. With no single regulatory framework for AI in healthcare in the UK, governance is lagging behind adoption. However, the National Commission into the Regulation of AI in Healthcare is now examining how regulatory oversight needs to evolve.

The public's expectation of being informed and given a choice about AI in their care also sits at odds with reality. Across four common potential uses of AI in the NHS – reading test results, reviewing X-rays, listening to appointments, and deciding queue priority – majorities of the public (58–63%) say they should be told in advance and given the option to opt out. This preference is consistent across age groups, gender and NHS employment status, suggesting a broad public consensus.

Yet in most circumstances the NHS is not required to offer an opt-out, just as it is not for non-AI technologies like digital X-rays or electronic patient records – the exception being an AI tool that listens to your appointment and writes up a summary for your healthcare records. However, all patients can choose whether their confidential patient information is used for research and planning via the National Data Opt-Out service.

When asked who should bear responsibility if AI misses a health problem in a clinical image, the public are most likely to hold the doctor or healthcare professional using the AI accountable (34%), ahead of the NHS trust that rolled out the tool (24%). One in five (20%) say responsibility should be shared, and 6% would hold the company that developed the AI primarily responsible.

On patient data, the public are more likely to feel uncomfortable (47%) than comfortable (33%) with their NHS health records being used to train AI if the data could identify them personally. Comfort rises when the data would not identify them (40% comfortable, 36% uncomfortable).

The public trust doctors over AI – but that trust is conditional

Across a range of common clinical scenarios, the public consistently place more trust in doctors than in AI. Trust is highest for psychological therapy, where 46% say they trust a doctor much more, and just 1% say the same for AI.

However, when the same question frames the doctor as being at the end of a long and busy shift, trust in the doctor falls across every scenario. For psychological therapy – the area of greatest trust in doctors – the proportion saying they trust a doctor much more drops from 46% to 25%, while for skin cancer detection from photographs it falls from 30% to 16%. The proportion willing to trust AI equally or more grows in each case.

If an NHS-approved AI system disagreed with a doctor's diagnosis, a majority (55%) say a second doctor should review the case before any decision is made, with just 7% willing to follow the AI's advice on the basis that it may be more accurate.

Whilst the public trust doctors more than AI, they are also most likely to hold the treating doctor to account if something goes wrong: if AI misses a health problem in a clinical image, like a scan or X-ray, 34% would hold the treating doctor or healthcare professional responsible, followed by the NHS trust that deployed the tool (24%). Just 6% would primarily blame the AI developer.

A consistent gender divide runs through public attitudes to AI in healthcare

Across almost every measure in the study, men are more favourable towards AI in healthcare than women – and the gap is often substantial.

Women are considerably more likely than men to feel anxious about the NHS using AI for clinical tasks (46% vs 31%), and far more likely to oppose its use in clinical decision-making overall (46% vs 30%). Discomfort with the prospect of a GP using an AI chatbot in a consultation is also markedly higher among women (65%) than men (45%).

The divide extends to consent: women are consistently more likely than men – by 7–11 percentage points across all four clinical scenarios tested – to say patients should be told in advance and given the option to opt out of AI being used in their care.

Prof Graham Lord, Executive Director, King's Health Partners, said:

“This research underlines the scale and pace at which AI is already shaping how people access healthcare. While the opportunities are significant, it also highlights concerns about safety and accountability.

“When something goes wrong with AI, responsibility is often placed on clinicians, even where they have limited control over how AI tools are introduced. To realise AI’s potential, we need greater transparency about what works, what is safe, how decisions are made, and how issues are handled – so staff and patients can feel confident in its use.”

Amy Clark, Senior Policy Fellow at the Policy Institute at King's College London, said:

“These findings reveal a striking gap between how AI is being used for health and how the public feels about it. People are already turning to AI chatbots instead of their GP – driven by convenience and stretched NHS capacity – yet the wider public remains anxious about where this is heading. What stands out is that women and young people are among the most sceptical, which challenges the assumption that familiarity with new technology creates acceptance.”

Prof Sarvapali (Gopal) Ramchurn, Chief Executive Officer, Responsible AI UK, said:

“This latest research adds to the growing evidence that the general public is trusting AI even when they should not, and not only for health-related questions but also for legal, financial, and work-related issues. By connecting researchers with AI experts, policy makers and NHS clinicians, Responsible AI UK aims to ensure that public trust is appropriately calibrated, through better guidance, better regulation, and better assurances for novel AI-powered services.”

ENDS

Notes to editors

Read the report, The use of AI in UK healthcare: Public perceptions and healthcare priorities.

In addition, the polling findings informed a policy roundtable co-hosted by King's Health Partners, the Policy Institute at King's, and Responsible AI UK on 24 April 2026. Around 50 representatives from policy, NHS, research, industry, the voluntary sector, and people with lived experience identified six priority actions to support the trusted deployment of AI in the NHS, from improving public storytelling and building a diverse AI workforce, to clarifying liability and centralising regulatory assessment. Read the full recommendations here.

Study details

Fieldwork was conducted via Focaldata's in-house platform, with API integration to an online panel network. Data collection took place between 24 and 30 March 2026, with a total of 2,093 respondents from a nationally representative group of those aged 18+ in the UK completing the survey. Data was weighted by age, gender, region, and education status.

* Rao AS, Esmail KP, Lee RS, et al. Large Language Model Performance on Clinical Reasoning Tasks. JAMA Netw Open. 2026;9(4):e264003. doi:10.1001/jamanetworkopen.2026.4003

Contact: hugh.mccann@kcl.ac.uk / 07758446216