Abstract
Background
Computerized clinical decision support is any computerized tool designed to help clinicians make clinical decisions.1 Decision support often takes the form of a computer-generated alert in response to a user’s action, for example, a warning displayed if the user prescribes a medication to a patient with a known allergy to that medication. It is one of the most promising strategies for improving patient care,2 and the ability to provide such support is viewed as one of the important benefits of using a computerized patient record system.3 However, this benefit is often not fully realized in practice: alerts are frequently overridden, in the Netherlands4 and elsewhere.5–7
There are many reasons why a clinician might override an alert. Clinicians may generally dislike guidelines and clinical rules, due to a sense of loss of autonomy8–10 or concern that their use will lead to “cookbook medicine.”8,9 Previous experience with dysfunctional decision support systems can contribute to poor acceptance of new alerts.10 Also, the presence of too many alerts, especially too many false-positive alerts,12 can lead to ignoring all alerts (“alert fatigue”).4,11 The use of less-interruptive styles of decision support for some alerts has been proposed to help reduce alert fatigue,13 which may be particularly important in light of the possible adverse effects of interruption on the quality of care.14
Dutch general practitioners (GPs) were among the first to widely adopt electronic patient records,15 and today, some form of clinical decision support is used in 89 percent of Dutch general practices.16 The HAG-net-AMC (General Practice-network Academic Medical Center), a regional network of GPs participating in research and education, is planning to implement decision support based on a set of 81 “if-then” clinical rules pertaining to geriatric medicine. The rules were selected using a Delphi method for their relevance to Dutch general practice and are in the form of “condition-action” rules (e.g. “If a (vulnerable) elder is prescribed an ACE inhibitor” (condition), “then s/he should have serum creatinine and potassium monitored within 2 weeks after initiation of therapy and at least yearly thereafter” (action)).17 The clinical rules serve as concrete examples of clinical tasks proposed for decision support.
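The condition-action structure of such rules lends itself to a simple programmatic representation. The sketch below is purely illustrative: the field names, the age threshold standing in for “(vulnerable) elder,” and the patient record are hypothetical simplifications, not the HAG-net-AMC implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ClinicalRule:
    """A condition-action rule: if condition(patient) holds, recommend action."""
    condition: Callable[[dict], bool]
    action: str

# Hypothetical encoding of the ACE-inhibitor example rule;
# age >= 65 is an assumed stand-in for "(vulnerable) elder".
ace_rule = ClinicalRule(
    condition=lambda p: p["age"] >= 65 and "ACE inhibitor" in p["prescriptions"],
    action="Monitor serum creatinine and potassium within 2 weeks, then yearly",
)

patient = {"age": 72, "prescriptions": ["ACE inhibitor"]}
if ace_rule.condition(patient):
    print(ace_rule.action)
```

Representing the condition as a predicate over the patient record keeps the rule base declarative: the decision support engine only needs to evaluate each condition and surface the associated action.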
We developed and administered a survey to learn which of these rules doctors want supported, and the reasons behind their choices. The aim is to gain insight into the motivation for prospectively accepting or declining support, as well as gain information to assist in designing support that better suits the doctors’ workflow and expectations.
Materials and methods
A summary of the survey development and deployment process is presented in Figure 1.

Survey development and deployment process. The survey was developed based on input from the literature and interviews with clinicians and experts in decision support. The content was validated by conducting a second set of interviews, including a concurrent “think-aloud” process. The final survey was deployed online.
Selection of clinical rules for the survey
The complete set of 81 rules was prioritized for the decision support implementation by a focus group of eight doctors from the practice group. This process is described in detail elsewhere,18 but briefly, participants independently indicated whether they would turn support “on” or “off” for each rule and then rated the importance of the rule on a scale from 1 to 10. The decision was made to focus on medication-related rules; 23 rules mentioned medications or specific medications. Three rules were eliminated because they were conceptually similar, leaving 20 rules to include in the survey. This selection was checked to ensure that it included rules which had been rated as both high and low importance by the focus group, so as to capture both reasons for wanting and not wanting support in the survey.
Survey development
We expected that general attitudes and previous experience with decision support could globally influence responses to individual rules; therefore, we also administered an “Attitudes and Experience” questionnaire, the details of which have been published previously.19
The survey was developed by conducting structured interviews with two experts in decision support systems, a hospital specialist, a hospital resident, and two GPs. We included experts in decision support and hospital physicians with the expectation that they had had experience with different systems than the GPs and thus could contribute a broader perspective on reasons why systems are ultimately accepted or rejected. The interviews were structured in three parts, described in detail in Table 1.
Interview structure and questions.
The four rules used in the interviews were selected to be representative of rules relating to medication management.21 We did not record the interviews, because our clinical advisors felt this would likely lead to social desirability bias. Interviewees were asked to write down their answers and the interviewer noted any verbal comments on the same paper. All interviews were conducted by one researcher (SM), with the first three interviews observed by a second researcher (SE). Results from the interviews were discussed in meetings with four medical informaticians (SM, SE, MA, and AA) and used to identify new reasons not found in the literature and eliminate reasons which interviewees said were not relevant. Comments reflecting general attitudes or experience were added to the “Attitudes and Experience” questionnaire.19
Content validity of the resulting survey was assessed by conducting a second set of interviews. We used a concurrent think-aloud verbal protocol:22 doctors who had not participated in the previous interviews were asked to “think aloud” while filling in the survey, using the four example rules. All interviews were conducted by one researcher and were not recorded. The researcher noted their comments on note paper that was visible to the interviewee. The researcher did not prompt the subject while they were filling in each section, except to remind them to “think aloud.” At the end of each section, subjects were also asked specifically if they would like to add or change anything. Any changes suggested were then incorporated into the survey for use in the next interview; if any part of the survey was unclear or if new reasons were suggested, that part of the survey was revised. The process ended after three consecutive interviews with no changes to the survey. The final formulation of the survey was reviewed by two experts in medical informatics (SE and AA) and two clinicians (SdR and EvdG) prior to deployment.
Deploying the survey
The survey was translated to the Dutch language by a native Dutch speaker and resident in geriatric medicine (EvdG) and back-translated by a native English speaker (SM). It was deployed as an anonymous, online survey among all GPs participating in the HAG-net-AMC and thus to all potential future users of the system. Participants were invited by an email from the secretary of the steering committee of the HAG-net-AMC on 6 July 2011 and reminded with two follow-up emails 2 and 4 weeks later. Responses were collected up to 3 months after the initial email. To preserve anonymity, we did not attempt to identify participants. To eliminate potential bias due to the order of presentation of the rules, the order was randomized for each participant.
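Per-participant randomization of rule order, as used here to eliminate presentation-order bias, can be sketched as follows. The rule identifiers and seeding scheme are hypothetical; the point is only that each participant sees an independent shuffle of the same 20 rules.

```python
import random

# 20 hypothetical rule identifiers standing in for the survey's clinical rules
RULES = [f"rule_{i:02d}" for i in range(1, 21)]

def order_for_participant(participant_seed: int) -> list:
    """Return the rule order shown to one participant.

    Seeding a private Random instance makes the order reproducible per
    participant without affecting any global random state.
    """
    rng = random.Random(participant_seed)
    shuffled = RULES[:]  # copy so the canonical list is never mutated
    rng.shuffle(shuffled)
    return shuffled
```

Every participant then answers the same questions, but averaged over participants no rule systematically benefits from appearing first or suffers from appearing last, when fatigue may have set in.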
Analysis
All statistical analyses were performed using R 3.0.1.23 The difference in medians of ordinal variables was assessed with the Mann–Whitney U test and difference in proportions with Pearson’s chi-square test or Fisher’s exact test. Multivariate analysis was performed using logistic regression models. Adjusted odds ratios (aORs) are adjusted for age and gender, unless otherwise noted. Generalized estimating equations (GEEs) with logistic regression were used to account for the effect of clustering by user, with results reported as the odds ratio (OR).24 The binomial test was used to determine whether the choice of reasons for wanting or not wanting support differed significantly from chance. Free-text comments were assessed separately by two researchers (SM and SE), with disagreements resolved by a third researcher (AA).
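The analyses above were performed in R, but the tests themselves are standard. As an illustrative sketch only, the same univariate tests can be expressed in Python with SciPy; the data below are invented for demonstration and do not come from the study (GEE models would additionally require a package such as statsmodels).

```python
from scipy import stats

# Invented ordinal responses for two groups, for illustration only
group_a = [3, 4, 4, 5, 6, 7]
group_b = [1, 2, 2, 3, 3, 4]

# Difference in medians of an ordinal variable: Mann-Whitney U test
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

# Difference in proportions: Pearson's chi-square on a 2x2 table,
# or Fisher's exact test when expected cell counts are small
table = [[20, 10], [8, 15]]
chi2, chi_p, dof, expected = stats.chi2_contingency(table)
odds, fisher_p = stats.fisher_exact(table)

# Binomial test: was a reason selected more often than chance (50%)?
# e.g. a reason chosen in 40 of 50 eligible responses
binom_p = stats.binomtest(k=40, n=50, p=0.5).pvalue
```

Each test returns a p-value that is interpreted exactly as in the paper; only the software differs.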
Results
Survey development: interviews
Table 2 shows the reasons for wanting or not wanting support that were suggested in the interviews.
Reasons for wanting or not wanting support suggested by the interviews, with selected quotations.
Survey development: survey instrument
The resulting survey consisted of six questions for each clinical rule, with the phrasing of the question modified to reflect the conditions and actions of each rule. The first three questions establish whether the respondent agrees in principle with the need to improve performance: “This rule constitutes minimal care that every general practitioner should know,” “This rule is followed in practice in __% of cases,” and “This rule should be followed in __% of cases.” The remaining questions address the preferred type of support, whether the respondent would turn support on or off (“On my computer, I would turn support on/off”), and the reasons for that choice.
Results from survey of GPs
Demographic characteristics of respondents are reported in Table 3. In total, 43 GPs were invited to complete the survey; 28 (65%) completed all questions for at least one clinical rule, 20 (47%) completed the survey for all clinical rules, and the remainder completed questions for 1–6 rules (median 2). The demographics of respondents were representative of the study population. Subjects who completed the survey tended to be older than those who finished only part of the survey (median age categories 50–60 years vs 40–50 years, 95% confidence interval (CI) = −20 to 0, p = 0.02).
Demographics of participants and population.
Choice of clinical rules to support
In response to the question, “On my computer, I would turn support on/off,” more than 50 percent of respondents indicated that they would turn support “on” for 17/20 rules. Responses per rule ranged from 28 to 88 percent in favor of support. Responses were not significantly different between those who did and did not complete the survey (64% positive for those who completed the survey vs 76% for those who did not (aOR = 0.90, CI = 0.67–1.20, p = 0.47)). The proportion of rules selected to be supported was not significantly associated with responses to the Attitudes and Experiences questionnaire19 (aOR = 1.02, CI = 0.99–1.05, p = 0.17).
Type of support
For each clinical rule, at least two different types of support were suggested, including at least one option for a more interruptive form of support (e.g. a pop-up) and one for a less-interruptive form of support (e.g. adding the item to a task list). More than one answer was allowed, and respondents were allowed to choose a type of support whether or not they wanted support for the rule. Results are summarized in Table 4.
Type of support selected.
More-interruptive forms of support were chosen significantly more often than less-interruptive forms of support (χ2 = 23, p < 0.0001). However, this also varied significantly between rules (χ2 = 43, p = 0.001). Noninterruptive options were preferred for four rules. A type of support was chosen in 95 percent of responses where support was wanted and 58 percent of responses where support was declined. A noninterruptive option was selected in 51 percent of answers when support was wanted and 40 percent when support was not wanted, though this difference was not significant (OR = 0.638, CI = 0.35–1.16, p = 0.14, clustered by user).
Establishing the need to improve performance
The need to improve performance was assessed through three questions, summarized in Table 5.
Perceived need to improve performance.
There was no gap for 25 percent of responses, and a negative gap for 1 percent of responses (n = 5).
Feeling that the rule constitutes minimal care was associated with wanting support, as was a larger perceived gap between how often the rule is followed and how often it should be followed.

Histogram of responses to the question, “This rule is followed in practice in __% of cases” for rules where support was wanted (left, n = 229) and declined (right, n = 140). The x-axis represents the response categories (0%, 10%, 20%, …, 100%) and the y-axis represents the percentage of responses in that category. Respondents tended to want support for rules when they estimated current compliance to be in the high-middle range and decline support when they estimated current compliance to be low or very high.
Reasons for wanting support
Of the rules where respondents indicated that they wanted support, respondents gave a median of two reasons, and 93 percent of responses included at least one reason (Figure 3). Responses that were chosen significantly more often were a sense of responsibility, concern about forgetting to perform the action, and belief that failure to perform the action will harm the patient (Table 6). Selection of all responses differed significantly from chance (p < 0.01).

Reasons for wanting support.
CI: confidence interval.
n = total number of times this response was chosen; the percentage is the percentage of times this answer was chosen out of the times it was displayed, regardless of whether any reasons were given.
Reasons for declining support
Of the rules where respondents indicated that they did not want support, respondents gave a median of two reasons, and 95 percent of responses included at least one reason (Figure 3). Responses that were chosen significantly more often were: feeling that they would know when the action is needed and would not forget to perform it, concerns about interruption, and feeling that support could be useful for others rather than themselves (Table 7). Selection of all responses differed significantly from chance (p < 0.01).
Reasons for declining support.
CI: confidence interval.
n = total number of times this response was chosen; the percentage is the percentage of times this answer was chosen out of the times it was displayed, regardless of whether any reasons were given.
The two reviewers disagreed on whether a new reason was present for only 3 of the 42 free-text comments; these were then reviewed by the third reviewer. After discussion, one comment about wanting support was interpreted as constituting a new reason, namely, “I will answer yes if it is accepted policy of the NHG.” The NHG (College of General Practitioners) produces the national guidelines for general practice; thus, this reason can be generalized as “It supports our accepted guidelines.” None of the comments about declining support were interpreted as new reasons.
Discussion
A survey was developed consisting of six questions about each of 20 clinical rules, covering whether the respondent agrees in principle with the need to improve performance on this clinical rule, alert design choices, and whether the respondent would personally want support for the rule and their reasons for that choice. Responses were received from 65 percent of the 43 practitioners invited, and 47 percent completed the survey. Despite expressing concerns about interruption, the doctors chose an interruptive style of support (e.g. pop-ups) in 54 percent of responses, compared to 35 percent for less-interruptive options. Feeling that the rule represents minimal care was associated with wanting support, as was feeling that the rule should be followed in a higher percentage of cases and a larger perceived gap between how often the rule is followed and how often it should be followed in practice. The most common reasons given for wanting support were a sense of responsibility, concern about forgetting to perform the action, and belief that failure to perform the action will lead to harm for the patient. One additional reason was suggested in the comments, namely, that the support facilitates following accepted guidelines. The most common reasons for declining support were feeling that they would already know when action is needed and would not forget to perform it, concerns about interruption, and feeling that support could be useful for others rather than themselves.
To our knowledge, this study represents the first structured, prospective investigation of clinicians’ intention to accept decision support for specific clinical rules and the reasons for those choices. An important contribution of this study is the introduction of the survey instrument. We used both literature resources and interviews in the development of this survey to ensure content validity. To reassure our interview subjects of their anonymity, we chose not to record the interviews. However, this could have introduced bias, since the interviews were conducted and documented by a single researcher. This was mitigated by having the note paper always visible to the interview subject. We also chose to focus only on medication-related rules for this survey. Medication data quality is generally sufficient to build decision support; thus, it was likely that these rules could be implemented, and the authors felt that it might be disappointing to the participants if many of the rules selected in the survey were later discovered to be unimplementable. With 20 rules, the survey took about 30 min to complete, and the authors felt that adding more rules would reduce the response rate. However, results could differ for other clinical domains, and this should be investigated in future studies.
The primary limitation is the small sample of doctors surveyed. However, we chose to limit the survey to doctors in the practices where the decision support intervention is planned, because this makes the survey highly relevant for this population. This may have also contributed to our high response rate. However, it is possible that other results would be obtained from different groups of GPs or from other types of clinicians such as hospital specialists. Further study is needed to determine whether these findings are also seen in other groups of clinicians. It is possible that responses were affected by social desirability bias, although we tried to minimize this by ensuring the anonymity of all participants. However, this meant that we were unable to detect whether the eight doctors who participated in the focus group also filled in the online survey. Given that these doctors probably did fill in the online survey, their earlier exposure to the rules in the focus group may have influenced their responses.
A recently published study describes a process for the related task of prioritizing implementation of medication-related decision support alerts.26 The authors identified six factors to determine whether an alert should be implemented: quality of care, legal and regulatory compliance, organizational liability, daily workload, patient safety, and likelihood of occurrence. In terms of these constructs, patient safety was the primary determining factor in our subjects’ intention to accept support, followed by quality of care and workload concerns. This likely partly reflects the difference in objectives (alert prioritization vs intention to accept support) and partly differences in the health-care systems (the United States and the Netherlands).
Other studies have retrospectively evaluated the reasons for alert override and acceptance.7,27 Zheng et al.27 identified six constructs affecting acceptance: performance expectancy (whether the alerts are expected to help), ease of use, effort expectancy (to perform the recommended action), social influence, facilitating conditions, and perceived use of the alerts. Seidling et al.7 conducted a retrospective analysis of drug–drug interaction alerts and found that knowledge quality, display characteristics, text, setting, patient age, dosing alerts, alert frequency, and severity level all impacted acceptance. A common theme among these works and our study is that clinicians conceptually accept support if they anticipate that the alerts will help them to provide better care.
There seemed to be a nonlinear relationship between wanting support and responses to the question, “This rule is followed in practice in __% of cases”: respondents tended to want support when they estimated current compliance to be in the high-middle range and to decline support when they estimated compliance to be very low or very high (Figure 2).
Selection of the reason “This rule is (not) relevant for performance indicators” was markedly lower than the other reasons, even though several interviewees felt it was important. This may be because the time of our interviews coincided with the introduction of new national performance indicators. We considered the free-text comment “It supports our accepted guidelines” to be different because our subjects likely think of “performance indicators” as external measures, whereas guidelines are used internally. Further research is needed to determine an optimal set of questions with utility for determining when decision support is likely to work and particularly whether a survey such as this one can help determine what kind of decision support will be most effective.
Although we encouraged our participants to “think outside the pop-up box” by describing both an interruptive alert and an alternative, most participants chose more interruptive alerts. The type of support chosen did not appear to be correlated with wanting support or any of the other factors measured in our survey. However, there could be other unmeasured attributes of our rules influencing this choice. Alternatively, this choice may be due to awareness of a higher response rate for alerts which require immediate attention,7,28 or thinking about the alert in isolation, or simply greater familiarity with this form of support. We know that too many alerts will lead to alert fatigue, but it is not currently known how many is “too many” or what circumstances might mitigate this effect. The results of this survey suggest that simply having clinicians select the type of support from a multiple-choice list will not by itself solve the issues of alert overload and alert fatigue; we must design alerts with an understanding of the clinical workflow, the needs of daily practice, and the capabilities of computers and offer solutions that integrate with the care process.29 Further research is needed to determine which rules justify interruptive forms of support and, for the others, what kind of support can be offered that is both less interruptive and effective.
Conclusion
A new survey instrument is introduced to prospectively assess clinicians’ intention to accept proposed decision support. The instrument includes measures of the perceived agreement with the rule and need to improve compliance, proposed designs for support, and whether the respondent personally wants support and their reason for that choice. The instrument was piloted with 20 clinical rules proposed for inclusion in a decision support system, in a population of 43 GPs. Feeling that the rule represents minimal required care, that the rule should be followed in a larger percentage of patients, and a perceived gap between how often the rule is currently followed and how often it should be followed were all associated with wanting support. The reasons given for wanting support included a sense of responsibility, concerns about forgetting to perform the action, and belief that failure to perform the action will lead to harm for the patient. Reasons given for declining support included feeling that they would know when to act and would not forget to perform the action, concerns about interruption, and feeling that support could be useful for others. Despite the concerns about interruption, doctors tended to choose pop-ups over other forms of support, implying that asking clinicians to choose the type of support is not sufficient to address issues of alert overload and alert fatigue.
