ORCID
- Hannah Gardiner: 0000-0002-2246-8320
Abstract
Overview
In recent years, the number of AI tools available for mental healthcare and wellbeing purposes has increased. This builds on a burgeoning digital health sector in which over 20,000 wellbeing apps are reportedly available on app stores. These apps are distinct both from AI tools purpose-built for NHS use and from unintended uses of companion chatbot apps, which were never intended for mental health purposes. All the cases of severe harm identified through this research arose from unintended uses of general companion chatbot apps, but there are ethical considerations around the use of all AI tools in mental healthcare.

Public sector responses are underway to improve data availability and to support improvements in evidence generation and deployment. Collaborative responses to the ethical challenges are also underway across multiple government agencies in the UK and globally. This builds on considerable existing regulation and guidance (examples are outlined in the POSTnote).
DOI Link
Publication Date
2025-01-31
Publisher
UK Parliament Post
Deposit Date
2026-02-06
Additional Links
Recommended Citation
Gardiner, H., & Mutebi, N. (2025) 'AI and Mental Healthcare - ethical and regulatory considerations', UK Parliament Post. Available at: 10.58248/PN738
