EXPLORING THE INFLUENCE OF SOCIAL RELATIONSHIPS ON MULTISOURCE FEEDBACK ASSESSMENTS FOR UK GENERAL PRACTITIONERS: A SOCIAL NETWORK ANALYSIS
One of the most common approaches to assessing the performance of qualified doctors is multisource feedback (MSF). Previous research often cites MSF as a valid, reliable and feasible method of assessing performance. However, potential biases in the self-selection of raters have been highlighted as a concern for the utility of MSF, particularly when it is used in high-stakes assessments such as medical revalidation.
This research uses general practice as the study setting to explore the extent to which social relationships influence the rater selection choices made by doctors. A case study approach was adopted, recruiting three GP practices varying in staff team size and geography. Social relationships between staff were measured through a network questionnaire, and rater selection data were collected for each participating GP's most recent MSF assessment. Finally, qualitative interviews were conducted, analysed using a framework approach, to provide a narrative for the network findings.
Variation in the structure of socialising and trust networks was observed across all three cases. Staff frequently socialised with and trusted the same colleague(s), largely socialising within their own occupational group. All doctors interviewed selected their own raters, and the vast majority discussed social relationships as a factor influencing their choices. A network analysis using multiplex exponential random graph models (ERGMs) demonstrated a positive tendency for GPs to request performance feedback from those with whom they had a social relationship. The rurality of the practice and the size of the workforce had no clear impact on the study results.
Biases in the selection of raters may have significant consequences for the assessment validity of MSF, with the potential to jeopardise patient safety and quality of care. Recommendations to address biases in the selection of raters are discussed, alongside the limitations of this study and the implications for future research.