The Patient Safety Collaborative Evaluation Study (The PiSCES Study)
Background

Having investigated avoidable deaths and other occurrences of harm to patients at Mid-Staffordshire Hospital, the Francis Inquiry made 290 recommendations for actions to reduce the likelihood of such events recurring. A prominent part of the government’s response was to ask Don Berwick to chair a National Patient Safety Advisory Group to advise the government on a ‘whole-system’ Patient Safety Improvement Programme. The Group proposed establishing Patient Safety Collaboratives (PSCs), drawing upon the experience of Quality Improvement Collaboratives, particularly the Institute for Healthcare Improvement (IHI) ‘Breakthrough Series’. From 2014, Collaboratives in the NHS were implemented through the regional Academic Health Science Networks (AHSNs).

Most research about the effects of Collaboratives has been uncontrolled and fragmented across a range of activities and target outcomes, often self-reported. Few studies report clearly how Collaboratives carried out their work, making it hard to identify what the ‘active ingredient’ is. Few contained evidence about the determinants of ‘success’ (as opposed to abundant hypotheses and conjectures). Nor is it known what kinds of clinical work (e.g. for which care groups) may be more amenable than others to improvement by PSC methods, although hospital-based Collaboratives have been the most widely reported. We evaluated how this action taken in response to the Francis Inquiry was implemented, and some of its consequences, and used our findings as the evidence base to present some policy implications and further research proposals.

Research Questions (RQ)

This study addressed six research questions:

RQ1: How has PSC implementation varied across the 15 AHSN regions?
RQ2: What organisational changes have providers made? How have they done this, and what have they learned from the PSCs?
RQ3: How were resources used for PSCs’ implementation activities?
What are the costs of participation and implementation?
RQ4: Have the PSCs made a detectable difference to rates of harm and adverse events involving patients, as measured using routine data?
RQ5: Has change in practice taken place on the front line of services?
RQ6: What generalisable knowledge can be shared about this?

Methods

We made a mixed-methods observational comparison of PSC mechanisms, contexts and outcomes. We combined three methods, each of which broadly corresponded to one stage of PSC implementation:

1. An implementation study of how PSCs were set up, of AHSN roles in establishing and maintaining regional networks, and of how provider-level NHS managers and clinicians used PSC-initiated ideas and resources to influence clinical practice and to monitor and improve clinical quality and safety. Our study looked at all 15 PSCs, studied three of them in greater detail, and within them selected different types of provider for in-depth study.
2. Patient safety culture surveys. The Francis and Berwick reports emphasised strengthening safety culture as a method for making clinical practice safer. Using the Safety, Communication, Operational Reliability and Engagement (SCORE) survey, we measured changes in patient safety ‘culture’ in six clinical teams undertaking PSC-initiated activities. We also analysed NHS Staff Survey data.
3. Analysis of routine administrative data. To assess how much patient safety and outcomes had changed, we quantitatively analysed routinely collected administrative data relevant to PSCs’ intended outcomes.
Our data sources were: 61 semi-structured in-depth interviews of key informants; SCORE survey data from 72 sites (first round) and from the six of these sites which had also made a second-round (repeat) survey during the study period; and England-wide data on in-patient satisfaction, quality improvement, managerial support for staff, fairness and effectiveness of procedures for reporting errors, recommendation of one’s own workplace, incident reporting and hospital mortality.

Findings

How PSC implementation varied across the 15 AHSNs (RQ1)

Each AHSN applied elements of three strategies for improving the quality and safety of patient care at provider level:

• A facilitative strategy, which built where possible on existing QI and safety work in healthcare providers, but was constrained by the local history and resources – or lack of them – in these areas of work. A facilitative strategy made it harder to attribute any changes in working practices and outcomes unequivocally to PSC activities.
• An educative strategy of educating, training and developing individual ‘change agents’ to implement changed working practices to improve patient safety at clinic level.
• A national-priority-focussed strategy of adopting ‘work-streams’ from among the current national priorities, resulting in several PSCs developing similar work-streams (e.g. sepsis prevention).

There were tensions between the facilitative approach and the national priority focus, which some informants thought was closer to a performance management approach. In general, PSCs and NHS staff favoured shifting from a ‘blame’ culture to a learning culture focused on service development, as more conducive to activities to improve patient safety. Where SCORE surveys were used (which was increasingly, but from a small base), they were implemented the same way everywhere. PSCs differed in which elements and mechanisms of collaboratives they emphasised.
Partly because the Francis report was a response to problems in hospital services, and because Collaboratives originated in (US) hospitals, participation was proportionately greater among acute hospitals than elsewhere, which partly reflected the technical challenges of making the Collaborative model relevant to non-hospital services. General practices apart, the only non-NHS providers participating were some care homes and pharmacies.

Organisational changes that providers made and what they have learned from the PSCs (RQ2)

Not all provider organisations participated in the PSCs. The willingness of NHS senior managers to engage with PSCs varied across settings. Even when they were willing, organisational upheaval, including leadership changes, made trusts’ engagement harder to sustain. In providers that did participate, the main organisational factors reported to aid PSC implementation were:

• Initial expenditure for start-up training and for preparing management information systems to serve (also) as a measurement system for clinical teams’ QI work.
• Recruiting trained QI and safety experts or ‘champions’ at all organisational levels, most critically at Board and clinical team levels; this was often done with PSC support and encouragement.
• Ensuring that these champions had the leadership skills to motivate and empower clinical teams and to create safe spaces for staff to speak up or suggest changes.
• Building structures and processes, at both whole-organisation and clinical team levels, to sustain the changed working practices.
• Allocating staff time not only to engage in QI and learning events, but also so that staff could subsequently apply their learning at work.
• ‘Bottom-up’ approaches to safety improvement, which promoted provider-level engagement and motivation by adapting the activities that PSCs were promoting to local needs.
• Measurement support for front-line staff.

At the time of this study, the development and use of formal measurement systems to support QI activities had not yet materialised. The other change we had expected but did not observe was in safety climate, particularly at clinical team level. Although PSC activity, including the SCORE surveys, had impacts upon clinical teams’ working practices in the sites we studied (see below), these changes occurred without measurable changes in workplace safety climate. In summary, we found:

1. Some qualitative evidence of safety climate change in the intended direction, including increased staff engagement and shifts away from a blame culture towards a more ‘open learning culture’.
2. No significant change in safety climate in the six study sites by early 2018 on most of the SCORE survey domains.
3. Change in the intended direction in the relevant NHS Staff Survey data domains, but evidence that this change began before PSCs existed.

To suggest that any safety culture changes in particular clinical teams are diluted within much larger NHS Digital data-sets might be valid for the NHS Staff Survey, but it is not applicable to the SCORE survey results, which were precisely localised to the relevant clinical teams. A possible explanation is that safety climate changes are as much a consequence as a cause of changes in working practices, in a virtuous circle of mutual reinforcement.

Organisational changes do not occur straight away; sufficient time is required to implement a complex set of activities across all levels of the NHS:

1. At least 18 months for PSCs, and then providers, to establish themselves and start to change working practices. In practice it can take much longer before any impact is seen at the patient level.
2. Allowing individual staff members time at work to attend learning events and then put what they learnt into practice.
3. Continuing the PSCs long enough to engage ‘late adopters’ as well as ‘early adopters’.
4. Time for plan-do-study-act (PDSA) cycles and other QI activities to be repeated and become institutionalised, on an open-ended time-scale.

Other major constraints on the activities of PSCs were NHS providers’ concurrent operational pressures and the concomitant resource and financial constraints, staff shortages and turnover. At an individual level the barriers included difficulties in using expertise after training, due to factors including a performance culture (i.e. conflicting priorities in the workplace), lack of time, high staff turnover (including shift rotations and moves between work locations), and psychological resistance to change.

Costs of participation in and implementation of PSCs (RQ3)

One of our study PSCs provided broad information about how spending on PSCs had been allocated at AHSN level (to which programmes, and to broad categories such as support staff, training etc.). At the time of our fieldwork, detailed information was not fully available to account for the training and network activity the PSCs provided, the monetary flows from PSCs to providers, or the indirect opportunity costs the provider organisations incurred. The same applied to information about how these extra resources affected health benefits for patients through the changes in working practices noted below, making it unfeasible to evaluate the cost-effectiveness of the PSC programme.

Have the PSCs made a detectable difference to rates of harm and adverse events involving patients as measured using routine data? (RQ4)

We analysed routine administrative data about relevant safety outcomes and found:

1. Qualitative evidence of changed working practices which one would expect (given their supporting evidence) to improve patient safety and service quality.
2. Quantitative analysis of administrative data showed no significant change by early 2018 that could plausibly be attributed to PSCs alone.
3. Longer-term changes in the intended direction were occurring.
In our judgement the reasons for these paradoxical patterns are:

1. Dilution of any effects of PSCs upon service outcomes, because the available datasets combine data about activities in which PSCs were involved with data about much larger activities in which PSCs were not yet involved, such as trust-level data.
2. PSCs’ effects were constrained by countervailing factors: demand overloads, insufficient staffing relative to demand, staff turnover and financial constraints.
3. Time lags: when our fieldwork finished, PSCs were about half-way through their initially planned life-span and had spent much of it getting their activities started. This meant the period during which routine data could have captured any relevant effects was a year or less.

We infer that PSC activity had many of its intended effects, but they were too localised and diluted to be measurable in the larger-scale, routinely reported administrative datasets.

Change in practice on the front line of services (RQ5)

In our case-study sites we found evidence of changes in practice at front-line, clinical team level. In practice the participating clinical teams had become more multidisciplinary. They had also started to undertake what was in effect the Model for Improvement: collecting information about their working practices, changing the latter, reviewing the effects, then making further adjustments: the quality improvement cycle. The SCORE survey, and its practical impacts, can be understood as a special case of such activity, and one with a relatively quick impact upon working practices. SCORE surveys developed beyond a measurement activity into a practical intervention on the part of PSCs. Changes in working practices were both clinical (e.g. falls reduction) and organisational (e.g. pathway re-design), and were reported in both hospitals and general practices.
Conclusions: Policy and management implications

The findings summarised above tend to support some of the policy-makers’ original assumptions about how PSCs would work, but suggest revisions to other policy assumptions that would lead to more effective PSCs and thus safer care for patients:

1. PSCs have not yet had sufficient time to establish and sustain the clinical team-level safety improvement activities and outcomes that current policy intends. Our evidence suggests that three years from the outset is in practice too short a time for that. In our opinion (albeit an opinion consistent with our findings so far), PSCs should continue in their current form for longer before any judgement can meaningfully be made about their impact on patients.
2. The PSCs are complex adaptive systems, reacting and responding to different local situations in varied ways. Attempts to manage PSCs uniformly and force them into particular directions (including work-streams) are likely to hamper their ability to promote the locally originating work that will ultimately lead to better patient care. In our opinion NHSI should study the emergent systems, support positive behaviours and resist the temptation to apply a ‘one size fits all’ managerial approach.
3. NHSI and the Department of Health need to provide clear and supportive timelines and financial arrangements for the PSCs. One disruptive aspect of the implementation of the PSCs was the lack of clear direction from the central NHS bodies, partly due to the perceived chaos surrounding the change from NHSE to NHSI, and to the financial uncertainty that PSC leads felt. At the time of writing there are suggestions that NHSI should review the PSCs. In our opinion it is too soon for that, and a review would again create an impeding uncertainty.
4. Recognition is needed of the influence of the wider evidence-based medicine (EBM) movement and its institutions (e.g. NICE) in promoting safety culture, something PSCs’ activity reinforced and exploited.
However, the development of EBM is uneven (for example, it is better developed in general medicine than in mental health). Start-up support for Collaboratives may be especially important in domains where EBM remains less developed and embedded.

5. Culture change is too big for PSCs alone to achieve without a massive increase in their scale. Learning by clinical teams is a discrete step linking culture change to changed working practices, and this has implications for the kind of training required. The necessary kernels for this training are quality improvement methodologies and the psychology of change (‘human factors’). As PSCs have shown, clinical teams are the critical audience for this training.
6. If providers are to become ‘learning organisations’ for PSC purposes, the requirements include: a ‘bottom-up’ approach to safety management; that provider managers allow clinical teams discretion to adapt QI activities to their local needs; and that clinical teams are allowed to take ownership of a given project or changes in work processes, something our evidence suggests also promotes staff engagement and motivation. This is a different approach from the work-stream-specific collaboratives; mandating clinical teams to work on areas they have not chosen will probably produce less effective outcomes for patient care.
7. NHSI is now addressing the absence of cross-provider measurement systems for PSC purposes (for clinical teams across different providers to compare activities and learn from each other). Caution will be needed in how these cross-provider data are used. The focus has to be on data for improvement; if the data are used for performance management (or even perceived as such), the benefits of the collaborative approach will diminish.