In this issue of the journal, Wegwarth et al report on a study that sought to identify general practitioner (GP) characteristics that predicted prescribing of potentially hazardous medications or, as the authors put it, ‘too much medicine’.1 An online survey of 304 English GPs measured their risk literacy, conflicts of interest, and perceived benefit-to-harm ratio in low-value prescribing scenarios. National Health Service record data were used to derive prescribing volumes for the participating GPs for antibiotics, opioids, gabapentin and benzodiazepines. Risk literacy scores were dichotomised, and GPs with low risk literacy were found to prescribe more opioids, gabapentin and benzodiazepines than GPs with high risk literacy—although no difference was found for antibiotics. The other two independent variables—conflicts of interest and benefit/harm perceptions—were not associated with prescribing volumes.
The risk literacy questions in the survey gauged GPs’ ability to interpret clinical trial results with regard to treatment effectiveness in various formats, such as relative and absolute risk, and number needed to treat. On the face of it, the link between this ability and the act of prescribing appears tenuous. When prescribing potentially hazardous medications, GPs are unlikely to engage in some deliberative estimation of numerical risks but may prescribe (or not prescribe) out of habit or because the drug worked (or did not work) for other, similar patients. Fuzzy Trace Theory suggests that judgement and decision-making rely predominantly on vague and qualitative representations of information, called ‘gist’, as opposed to ‘verbatim’ representations such as risk in a probability format. Gist is a ‘subjective interpretation of information based on emotion, education, culture, worldview and level of development’ and predicts decisions better than verbatim representations.2 Numeracy—our ability to understand numbers—influences our ability to extract the right gist from verbatim information such as numbers and graphs. Thus, those with good risk literacy would be expected to extract a more accurate gist regarding the benefit-to-harm ratio of a potentially hazardous medication and, in the future, prescribe less of it or, rather, prescribe it more appropriately. Wegwarth et al’s study did indeed find that higher risk literacy was positively associated with appropriate harm/benefit perceptions.
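To see why these formats matter, consider an illustrative trial (the numbers are hypothetical, not drawn from the survey) in which a drug reduces the risk of an adverse event from 2% to 1%: the relative risk reduction is an impressive-sounding 50%, while the absolute risk reduction is one percentage point and the number needed to treat is 100. A minimal sketch of the conversions:

```python
# Illustrative conversions between the risk formats the survey tested.
# The event rates are hypothetical, not taken from Wegwarth et al.
control_risk = 0.02   # event rate without treatment
treated_risk = 0.01   # event rate with treatment

arr = control_risk - treated_risk   # absolute risk reduction
rrr = arr / control_risk            # relative risk reduction
nnt = 1 / arr                       # number needed to treat

print(f"ARR = {arr:.1%}, RRR = {rrr:.0%}, NNT = {nnt:.0f}")
# ARR = 1.0%, RRR = 50%, NNT = 100
```

The same trial result can therefore be made to look dramatic or modest depending on the format, which is precisely the ambiguity that good risk literacy resolves.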
A similar study, which also used a clinician survey and record review, attempted to measure how clinicians’ risk/benefit perceptions (measured using exploratory factor analysis on the survey responses) were related to their actual antibiotic prescribing in the emergency department.3 It found that clinicians who prescribed more antibiotics were making simple categorical (gist-based) choices between continued illness and possibly beneficial treatment, assuming antibiotics to be essentially harmless—what the authors called a ‘why not take a risk?’ approach. Those who prescribed fewer antibiotics perceived the choice as one between the patient remaining ill without antibiotics and potentially getting worse with them, agreeing more with the gist that antibiotics may be harmful—and may have no benefits. Thus, identifying the predominant gist that underlies clinicians’ decisions can help explain why they decide as they do.
In contrast, Wegwarth et al did not find an association between benefit-to-harm perceptions and prescribing volumes of any of the medications of interest. This could be due to characteristics of the convenience online sample, which may not match those of the general GP population well (eg, gender, practice location, qualifications). It might also be a statistical fluke or due to the chosen measurement and analytic methods. Harm/benefit perceptions were measured with only three questions, each referring to a different prescribing scenario (antibiotics for acute otitis media, benzodiazepines for insomnia and strong opioids for non-cancer chronic pain). Responses in each scenario were dichotomised (correct vs incorrect) and differences in prescribing volumes for a specific class of medication were measured. For example, differences in overall antibiotic prescribing volumes were compared between GPs with correct versus incorrect perceptions of the benefit-to-harm ratio of antibiotics in otitis media. The dichotomisation of the benefit/harm responses as correct or incorrect and the attempt to associate prescribing behaviour for a medication in general with harm/benefit perceptions of that medication in a specific scenario may well have reduced the chance of detecting significant differences. It is also noteworthy that although questionnaires measuring risk literacy and conflicts of interest are available and were used by the authors, there are no validated questionnaires for measuring clinicians’ weighting of harms against benefits of treatments.
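The power cost of dichotomisation is easy to demonstrate in a simulation. The sketch below uses an assumed effect size and a simple linear model (both hypothetical; the study’s actual data and analyses were different) to compare how often a genuine perception–prescribing association is detected when the perception measure is kept continuous versus split into ‘correct’ and ‘incorrect’:

```python
# Hypothetical simulation of the power lost by dichotomising a predictor.
# Sample size matches the study's (304 GPs); the effect size, distributions
# and linear model are assumptions for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, true_effect, n_sims = 304, 0.15, 2000

detected_continuous = detected_split = 0
for _ in range(n_sims):
    perception = rng.normal(size=n)                          # latent perception score
    volume = true_effect * perception + rng.normal(size=n)   # prescribing volume
    if stats.pearsonr(perception, volume)[1] < 0.05:         # continuous analysis
        detected_continuous += 1
    split = (perception > 0).astype(float)                   # 'correct' vs 'incorrect'
    if stats.pearsonr(split, volume)[1] < 0.05:              # dichotomised analysis
        detected_split += 1

print(f"detected (continuous predictor):   {detected_continuous / n_sims:.0%}")
print(f"detected (dichotomised predictor): {detected_split / n_sims:.0%}")
```

Under these assumptions the dichotomised analysis detects the association noticeably less often, consistent with the concern raised above.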
Signal detection theory proposes that, when dealing with ambiguous stimuli, detection accuracy depends on two psychological factors: ‘discrimination’ and ‘response bias’.4 First, discrimination refers to our ability to detect a ‘signal’ or target stimulus among ‘noise’ or ‘foils’, for example, explosives in the luggage going through an airport security scanner, the face of the perpetrator in a police line-up, chest pain that may indicate an impending heart attack or just anxiety and so on. Second, response bias refers to our inclination or willingness to declare ‘signal’ in such ambiguous decision situations. It can be conceptualised as a threshold value on a continuous perceptual or judgement variable describing the degree of evidence for a signal. Stimuli above that threshold will be treated as signal and acted on, while those below it will be treated as noise. Decision-makers with a high threshold need more evidence to declare a signal than those with lower thresholds. Notably, discrimination and response bias are independent of each other, which explains why two people with equally good discrimination may give different responses.
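Under the standard equal-variance Gaussian model (a textbook formalisation, not an analysis reported by Wegwarth et al), both quantities can be estimated from a decision-maker’s hit rate and false-alarm rate. The sketch below computes them for two hypothetical clinicians who discriminate equally well but differ in their willingness to act:

```python
# Textbook signal detection indices from hit and false-alarm rates.
# The two clinicians and their rates are hypothetical.
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit (inverse normal) transform

def sdt_indices(hit_rate, false_alarm_rate):
    """Equal-variance Gaussian model: discrimination d' and criterion c."""
    d_prime = z(hit_rate) - z(false_alarm_rate)              # discrimination
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))   # response bias
    return d_prime, criterion

print(sdt_indices(0.84, 0.16))  # d' ~ 2.0, c ~ 0.0: neutral threshold
print(sdt_indices(0.69, 0.07))  # d' ~ 2.0, c ~ 0.5: same discrimination,
                                #                    more conservative responder
```

The two clinicians detect equally well, yet the second demands more evidence before declaring a signal, illustrating the independence of discrimination and response bias described above.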
In two online studies of GPs’ referral decision-making in cases of suspected cancer—colorectal in one study, lung in the other—we found that while GPs’ discrimination did not correlate between the two studies (r=0.01, p=0.88), response bias did (r=0.39, p<0.001).5 6 That is, GPs who were more inclined to refer for suspected colorectal cancer in one study were also more inclined to refer for suspected lung cancer in the other. If response bias for a certain type of decision, for example, referring for suspected cancer, prescribing antibiotics or potentially hazardous medications, is a relatively stable characteristic of the decision-maker, then it is important to identify and measure its determinants and constituents.
The optimal decision threshold is the one that maximises benefits relative to costs.7 It depends on the prior probabilities of signal and noise events, the benefits of a correct response (correct detection, known as a ‘hit’, or correct rejection) and the costs of an incorrect response (a miss or a false alarm).7 Thus, one’s personal threshold, typically measured from responses to large numbers of vignettes, must depend on the perceived values of all these variables, and is likely to be experienced as a gist-based judgement rather than as the solving of a mathematical equation. In medicine, the costs of a miss typically outweigh those of a false alarm, but clinicians may well differ in their appreciation of these potential outcomes even for the same patient case.
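In the classical formalisation (a textbook result, not a formula given in the editorial or the study), the ideal observer responds ‘signal’ whenever the likelihood ratio of the evidence exceeds the optimal criterion

$$\beta^{*} = \frac{P(\text{noise})}{P(\text{signal})} \times \frac{V_{\text{correct rejection}} + C_{\text{false alarm}}}{V_{\text{hit}} + C_{\text{miss}}},$$

where $V$ denotes the benefit of each correct response and $C$ the (positive) cost of each error. When the cost of a miss is large, as it typically is in medicine, the denominator grows and $\beta^{*}$ falls, so the optimal threshold for acting is low. A clinician, of course, approximates this trade-off as gist rather than computing it.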
Wegwarth et al have gone some way to measuring GPs’ subjective weighing of benefits versus harms in low-value prescribing, but substantially more detail is needed in this field. At the most obvious level, clinicians may not be aware of all the important harms that could result from these medications and/or their likelihood. Furthermore, we tend to ask clinicians how they weigh the benefits against the harms of taking a certain action (prescribing, in this case), but we do not ask them how they weigh the benefits against the harms of not taking that action (not prescribing) or of taking a different action. Only by exploring both aspects of this type of decision will we gain a more complete and accurate understanding of behaviour, which will enable us to design targeted interventions.3
Ethics statements
Patient consent for publication: Not applicable.
Ethics approval: Not applicable.