Introduction
Hospital organizations are increasingly asked to be transparent about and account for their performance. As part of a much broader wave of transparency and accountability, what Michael Power (1997) has termed "the audit society," hospital performance is measured by means of performance indicators, rendering information about care processes and outcomes accessible and comparable. Performance league tables or "rankings" are a case in point.
Rankings are currently hotly debated in many public service industries across the world (Black, 2015; de Rijcke, Wallenburg, Wouters, & Bal, 2016; Saisana, d'Hombres, & Saltelli, 2011). Much is expected from increasing the transparency of public service performance, as performance data are believed to enable consumer choice and to foster competition between service providers. Health care is no exception, and in the Netherlands, as in other countries, various hospital rankings coexist. With their appeal to numerical comparison of hospital organizations, rankings have been taken up by an increasing number of actors, including (social) media and patient organizations. Hospital rankings in the Netherlands are produced by patient organizations, health insurers, and professional associations. In addition, two yearly hospital rankings are published in the media: one by the national newspaper Algemeen Dagblad (AD) and one by the weekly magazine Elsevier.
Although rankings are widely used, we have very little understanding of how hospitals respond to them. In practice as well as in the wider literature on performance measurement, most attention is paid to improving methods of evaluation and to the unintended effects that performance measurement produces in spite of good intentions (Freeman, 2002). It is argued that rankings can give rise to negative unintended consequences as scrutiny and surveillance intensify and organizations become overly focused on the metrics rather than on the quality the metrics are intended to assess (Scott & Orlikowski, 2012). Yet, how rankings and their underlying indicators actually interact with organizational practices, the kinds of (organizational) practices they produce, and how rankings may impact organizations' governing abilities receive less attention. Sauder and Espeland (2009), in a study of the rankings of law schools, have shown that through surveillance and normalization, rankings shape the institutional practices of law schools, changing actors' perceptions, expectations, decisions, and actions. In a similar fashion, this article examines the actual day-to-day practices in which hospitals engage to organize for rankings.
In this article, we aim to gain in-depth insight into how rankings, as a representation of the wider and increasingly important institutional logics of the market and managerialism in health care (van de Bovenkamp, Stoopendaal, & Bal, 2016), affect hospital organizations and the people who work in them. We do so by examining the everyday organizational practices that are enacted in hospitals and how these impact upon hospitals' governing abilities, through a qualitative, comparative ethnographic study of the ways in which rankings and their underlying performance indicators influence quality and other policies and practices in three hospitals in the Netherlands. We ask to what extent and in what ways rankings are actually being used within hospitals themselves to change the institutionalized practices of doing quality work and to govern the accompanying relations between actors within the hospital organization, focusing specifically on management–professional interactions as these are key to hospital governance practices.
Theoretically, we draw on two literatures that, to our knowledge, have rarely been linked but can produce valuable insights into how rankings actually play out in hospital organizations: actor–network theory (ANT) and institutional work. More specifically, we take from ANT the focus on the performativity of rankings and the emphasis on the work that is needed to make rankings happen within organizational contexts (Callon, 1998; MacKenzie, 2009), as well as ANT's focus on materiality (i.e., "non-human entities"; Orlikowski & Scott, 2013). These theoretical insights enable us to point out how indicators define content, shape reality, and transform hospitals in their own image. However, we will argue, this transformation requires institutional work (Lawrence & Suddaby, 2006; Lawrence, Suddaby, & Leca, 2011). The notion of institutional work captures how social actors create, maintain, or disrupt institutions (Currie, Lockett, Finn, Martin, & Waring, 2012; Lawrence & Suddaby, 2006). In bringing the ANT and institutional work literatures together, we uncover that rankings "as such" may not necessarily change daily activities and vested professional work practices, but they set into motion the development of new governing tools that reconfigure both work routines and relationships between actors, evoking new interdependencies between managers and health care practitioners as well as new leverage for management over professional domains. By analyzing ranking practices in such a way, we not only attempt to shed light on the impact of rankings on hospitals but also hope to contribute to the further development of practice- and material-based theories of institutional work (Jones, Boxenbaum, & Anthony, 2013; Monteiro & Nicolini, 2015).
The main analytical questions guiding this research are as follows: How do hospitals respond to rating and ranking practices, and with what consequences for the governing abilities of hospital organizations?
This article proceeds as follows: First, we develop our theoretical frame on the governance of performance, in which we connect ANT and institutional work. Next, we present our methods, followed by our empirical findings uncovering how rankings, through their underlying practices of quantification, standardization, and commensuration, render clinical work accessible and manageable. We then show how the performativity of rankings in making hospitals into governable entities is not an "automatic" process, but is rather contingent and situated, requiring continuous work among the actors involved. We conclude by considering the implications for both theory and practice.
Developing an Institutional Work/ANT Perspective on Governing Performance
Studying governance practices in health care has a long tradition in the sociology of professions, often focusing on the ways in which health care professionals are governed by formal quality improvement systems. Those studies have, for example, shown how professionals are able to "adapt" quality systems to strengthen their own position (Currie, Humphreys, Waring, & Rowley, 2009; Waring, 2007), leading to what some have called "soft autonomy" (Levay & Waks, 2009). Waring (2007) points out how doctors resist (external) managerial quality regulations by seeking to subvert and "capture" components of the reform. As a result, he argues, these regulations are internalized within medical practice and culture, leading to new forms of self-surveillance and self-management. Similarly, Levay and Waks (2009) show how Swedish medical specialists have used accreditation systems and registries to transform professional governance while protecting the profession from outside pressures. These accounts, however, tend to emphasize the abilities of professions to incorporate and "repair" outside regulation (Micelotta & Washington, 2013), implicitly privileging the professional over the managerial account. They thereby overlook how professionals incorporate (external) surveillance mechanisms and how these reconfigure existing governance practices—often in unexpected ways (van de Bovenkamp, de Mul, Quartz, Weggelaar-Jansen, & Bal, 2014; Wallenburg, Hopmans, Buljac-Samardzic, den Hoed, & IJzermans, 2016; Waring & Bishop, 2013).
In this article, we wish to come to a more entangled and contingent understanding of how health care governance evolves. To that end, we draw on and bring together insights from the ANT and institutional work literatures. Although scholars have pointed to the potentially fruitful combination of these theoretical fields (Lounsbury & Crumley, 2007; Suddaby, Saxton, & Gunz, 2015), especially highlighting the role of materiality in institutional work (Jones & Massa, 2013; Monteiro & Nicolini, 2015), this combination has hardly been teased out in the institutional work literature yet (Suddaby et al., 2015).
The institutional work agenda sets out to redirect attention from institutions per se to the "purposive action" by which they are accomplished (Smets & Jarzabkowski, 2013). Institutional work encapsulates the activities actors conduct to maintain, disrupt, or create institutional arrangements (Lawrence & Suddaby, 2006). It focuses on how actors are continually engaged in the partial reenactment of routines and practices that may ultimately lead to field dynamism, but may also result in the strengthening of existing institutional arrangements (Currie et al., 2012; Jarzabkowski, Matthiesen, & Van de Ven, 2009; Lawrence et al., 2011). Lawrence and Suddaby (2006) characterize institutional work as "intelligent, situated institutional action" (p. 219). They set the stage for institutional work research by articulating three conceptual layers. First, institutional work highlights the awareness, skills, and reflexivity of individual and collective actors. Second, it points to the more or less conscious action of individual and collective actors. Third, institutional work encompasses a practice perspective, suggesting that we cannot step outside of action that occurs within practices with institutionalized rules. In later work, Lawrence, Leca, and Zilber (2013; Lawrence et al., 2011) stress the intentionality of the concept of institutional work. Intentionality encapsulates both the focus on actors as consciously and strategically reshaping social situations, and practical intentionality. Practical intentionality draws attention to acting upon emerging and unexpected situations: managing the exigencies of immediate situations in which goals are still unclear and strategic actions have not yet been worked out (Lawrence et al., 2013; Lawrence et al., 2011; Slager, Gond, & Moon, 2012).
Many scholars studying institutional work have concentrated on the deliberate, strategic efforts of powerful actors to create, maintain, or disrupt institutions. More recent studies, however, have shifted attention to the mundane, everyday activities through which institutional arrangements are enacted.
These more recent accounts of institutional work foreground the actual processes of institutional work. They uncover the everyday practices of social actors in which the outcomes of actions cannot be fully overseen or are unintended, as practices are emergent and recursive (Bjerregaard & Jonasson, 2014; Lounsbury, 2008; Smets & Jarzabkowski, 2013). This shifts attention from the purposive action of foresighted actors who envisage desirable institutional arrangements to a more differentiated, dynamic, and empirically grounded understanding of how different modes of agency unfold as actors develop and realize their interests in particular institutional settings (Smets & Jarzabkowski, 2013; Suddaby et al., 2015). This view thus warrants a dynamic understanding of how institutional evolvement takes place in the messy and unfolding practice of doing institutional work amid multiple other actors' work strategies (Bjerregaard & Jonasson, 2014; Slager et al., 2012; Suddaby & Viale, 2011). Smets and Jarzabkowski (2013) argue that actors' intentionality should not be understood in the narrow sense of institutional work as purposive action, but rather as the emergent accomplishment of their practical work. They suggest focusing more on the object of institutional work and the actual work carried out than on intentionality per se.
To enrich institutional work theory, and particularly its emergent and contingent features, we turn to the ANT literature. ANT is not so much a theory as a way of looking at the world, at sociality and materiality (Latour, 2005). It conceives of the world and its entities as relational; entities (whether human or non-human) have no inherent qualities but acquire their attributes as a result of their relations with other entities (Latour, 2005; Law, 1999). For example, Power, Scheytt, Soin, and Sahlin (2009) argue that irrespective of whether rankings are true or not, they are social facts that generate actions and reactions. Non-human actors (i.e., instruments, techniques, materials) play an important role herein, next to and in continuous interaction with human actors. They not only facilitate and constrain human activity but also extend it in time and place as a form of "distribution" (Latour, 2005; Monteiro & Nicolini, 2015). Hence, it is not only the abstract existence of quality systems like a registry or an accreditation system that shapes the transparency of medical performance but also (for example) the availability of computers and the usability of the necessary software, as well as the availability of "scoring forms." These instruments, as a form of "distributed cognition" (MacKenzie, Muniesa, & Siu, 2007, p. 78), are decisive for the actual use and impact of quality systems. Quality systems thus have performative (MacKenzie et al., 2007) or "constitutive" effects (Dahler-Larsen, 2013), and can be conceived of as interactive phenomena.
In this article, we aim to articulate the constitutive effects of rankings and their underlying performance indicators and to link these to the institutional work actors perform to influence the governability of hospital organizations. How this is done and how actions actually play out require insight into specific hospital practices. Before we turn to these hospital practices, we first present our research methods and provide a background account of hospital rankings in the Netherlands.
Method
Background: Introducing Rankings in Hospitals in the Netherlands
Rankings are part of a wider attempt to improve transparency in the health care sector. In the Netherlands, transparency has been a main policy goal and a hotly debated topic since the introduction of a market-based system in health care in 2006. The Dutch system of "regulated competition" is composed of three related markets: health care providers who compete for contracts with (private) health insurers, health insurers who compete for the insured, and patients who select their health care providers and health insurance (Helderman, Schut, van der Grinten, & van de Ven, 2005). Transparency of health care quality is seen as a crucial condition for the market to work, as quality information is needed by patients to choose between providers, by insurers to procure care, and by providers to improve their services (van de Bovenkamp et al., 2016). Rankings are part of this transparency aim as they make performance visible and comparable. Since the early 2000s, several rankings have been produced yearly. These include rankings produced by patient organizations, health insurers and professional associations, and the media. It is against this background that we conducted our research.
Research Design
To gain insight into the impact of rankings on hospitals' organizing activities and sociomaterial practices at the work floor level, we used a practice-based approach (Orlikowski & Scott, 2015). We conducted a comparative, ethnographic study in three Dutch hospitals (Bal, Quartz, & Wallenburg, 2013). Methods used were semi-structured interviews and observations. We selected hospitals of similar size and background (all were top-clinical teaching hospitals) but in different competitive environments. We expected that the level of competition hospitals find themselves in would influence the ways rankings affect hospitals, with more competitive regions showing higher levels of tight coupling (Berwick, 2002).1 Furthermore, we selected hospitals with different governance systems and routines (i.e., top-down, lean-oriented, collaboration- and negotiation-oriented) as we expected this would impact upon hospitals' governing abilities. The studied hospitals were roughly similar in size, with 551 (Hospital A), 673 (Hospital B), and 709 (Hospital C) beds, respectively. Although we did not quantitatively measure competitive environments, based on the number of hospitals within a range of 30 kilometers, Hospital A was in the most, and Hospital C in the least, competitive environment (see Table 1).
Table 1. Characteristics of the Researched Hospitals and Data Collection.
Within each of the hospitals, semi-structured interviews were held with quality managers, communication staff, information department staff, medical doctors, nurses, executive directors, and, where possible, members of the board of trustees.
We observed relevant meetings (e.g., quality and safety committees and/or other related committee meetings, meetings with outside stakeholders such as insurers and the health care inspectorate), registration work (e.g., registration of patient data at clinical wards, activities of coders and information and/or communication managers), and, where possible, clinical work (in Hospital A, observing clinical work was not allowed as physicians felt that observations during care provision would infringe on patients' privacy2).
Observations were jotted down during breaks or soon after meetings and were then worked up into observation reports. Observational data were supplemented by informal interviews that explored participants' work and experiences, the skills and knowledge involved, and participants' practices of sense-making. Interviews and observations were conducted over a period of about 3 months in each of the hospitals.
Based on our data collection, we wrote detailed, "thick" descriptions (Geertz, 1973) for each of the hospitals. These formed the input for the data analysis and the cross-case analysis presented in this article. The themes we discuss below emerged by combining sensitivity toward the existing sociological literature on performance measurement and institutional work (as so-called "sensitizing concepts") with emerging insights from the data, a process that may also be classified as "abduction," providing situational and theoretically based generalizations of our findings (Stoopendaal & Bal, 2013; Tavory & Timmermans, 2013). We first coded our data inductively, which eventually led to the following eight key codes (Bal et al., 2013): (a) importance of doing well on rankings, (b) reputation of rankings, (c) organizational effects of rankings, (d) quantification/administrative work, (e) unintended consequences, (f) narratives and stories, (g) social relations and steering within the hospital, and (h) strategic reputation management. We then iteratively compared these codes deductively with theoretical concepts: performativity, constitutive effects, valuation, and institutional work. Out of this coding, four themes emerged encapsulating how rankings impact on hospital governance and the work involved, to which we now turn.
Results
The results are presented in four sections. In "The Ambivalence of Being Ranked" section, we discuss rankings' ambivalence, showing how rankings are critiqued, embraced, and experimented with in the hospitals, opening up new spaces for managers to conduct institutional work in clinical practice. The "Investments in Form" and "Local Hospital Practices of Governing Data" sections describe the quantitative and qualitative work rankings evoke to standardize and quantify care processes and render these valuable. This involves the production of data, constituting new work routines and sociomaterial practices of care valuation in which quality staff, hospital managers, physicians, and nurses are engaged. In the "Governing Professional Work: Shifting Responsibilities and Changing Roles" section, we subsequently discuss how ranking practices trigger a more entangled professional–managerial development, reconfiguring hospital governance.
The Ambivalence of Being Ranked
The day after the AD ranking is published, rankings are discussed at a meeting of the steering committee on indicators, and our field notes record the kind of criticism we encountered in all our case study hospitals: "The rankings are meaningless"; "The criteria are opaque and change all the time"; "One year you're in the top 10 and the next year you are way below"; and "Patients don't understand this, they don't know where they have to be." The committee chair refers to a specific hospital: "They were as good as dead, and now they are in the top 3," indicating this to be impossible if the rankings really displayed quality of care. (Observation, Hospital A, November 12, 2012)
Rankings, we found, are often met with a discourse of insignificance. This surfaced first in the kind of criticism voiced in the observation above, where the validity and reliability of rankings are questioned. The high volatility of rankings was one way in which our respondents knew they do not represent the reality of quality of care in hospitals. Rankings "don't do much" (quality manager, Hospital C). Likewise, patients were not thought to be choosing on the basis of the rankings, as they mainly followed their general practitioner's advice or went to the nearest hospital. Criticism moreover relates to how rankings interact with practices in health care that are more complex than "can be put into numbers": Indicator thinking . . . makes you look for measurable things. The biggest mistakes in health care, however . . . are made in the realm of non-measurable things. The biggest mistakes in health care are, if you ask me, always individuals making diagnostic mistakes. Like: "You have pain in your knee, I put in an implant." While the correct question must be "You have pain in your knee, are you able to walk on it 5 more years? Then I will wait with the implant." It is much more difficult for the Inspectorate to research what goes wrong on this terrain. But whether the washing machine runs at 129 or 130 degrees heat doesn't matter. The bacteria are already dead 10,000 times over. That is not interesting, but you can very well link numbers to such questions. (Microbiologist, Hospital B, May 28, 2013)
Rankings are mainly criticized for faulty design, the inability to actually improve performance, and the difficult relationship with the complex nature of health care. As this microbiologist points out, indicators draw attention to what can be measured and define cutoff points for what is good, and what is not. Yet quality of care is much more complex and involves all kinds of qualitative valuations that are hard if not impossible to quantify. Such valuations are actively produced in ongoing practice (“can you walk with your knee for five more years?”) that is hard to capture in an indicator. Hospital administrators frequently critiqued indicators and rankings: During one of the interviews, a hospital administrator criticizes current measurement policies that, according to him, do not reflect reality: “According to the numbers we were a kind of “death hospital,” but it all depends on how you measure mortality rates.” According to him, hospitals are heavily disciplined by ranking policies, clearly objecting to the practice and even lecturing us on Foucault’s notions of discipline. After having said this, he turns to a pile of papers on his desk, showing us the figures of the performances of the different hospital wards: “You see, ward Z did an excellent job, they will have cake and a picture with me on the intranet next Monday! [smiling] They love it if we celebrate good performance.” (Observation notes, Hospital A, May 22, 2012)
Rankings seem to be ambivalent creatures. Although there was a lot of criticism, we also encountered professionals who are wary of their group's reputation and managers who use rankings to negotiate performance indicators (and other quality tools) with professionals—or to celebrate good results, as the hospital administrator in the excerpt above did. This paints a more nuanced picture: Rankings are dismissed and acted upon at the same time.
The ambivalence of rankings, we show in this article, enables organizational transition. Rankings are considered irrelevant, inescapable, and influential at the same time. They trigger hospital managers and professionals to "make the best of it" and prompt managers and professionals alike to create sensible measurement and quality improvement practices, as we will point out in more detail below. The ambiguous status of rankings and the underlying calculative practices leaves both professionals and managers with a feeling of "experimenting" rather than of applying strict rules or standards that impinge on professionals' autonomy. Experimenting, that is, searching for and probing feasible and sensible answers to "outside" requests for commensurable quality data without strictly adhering to these quantitative measurement practices (whose eventual yields are contested), we will argue in this article, provides leeway for hospital managers to conduct institutional work and render professional work, and with that care provision, more manageable. Two main practices of institutional work were central in our data: investments in form and governing professional work. In the next section, we first reveal how "investments in form," that is, the building of quality systems, allow for new ways of collecting and valuing information about care provision. Thereafter, we turn to the governing of professional work.
Investments in Form
The new requests for data on performance urge hospitals to "invest in form" (Thevenot, 1984) to make data collection possible, and to perform well on rankings. Hospitals have invested huge amounts of money in building electronic patient records (EPRs), dashboards, and data warehouses, and in training both quality staff and practitioners in using these systems to make measurement happen "in the right way" (Bal et al., 2013). As Sauder and Espeland (2006) point out, rankings are the final product of an elaborate, often messy, and arbitrary series of practices, decisions, and coordination. Getting "the right numbers" goes beyond mere registration work; it entails many added interventions to retrieve and "make up" the requested information. Qualitative information (e.g., the pain a patient suffers after surgery, a postoperative complication) must be translated into quantitative data to calculate performance on a certain indicator.
From our observations, it emerges that this information cannot easily be retrieved from hospital data, as the required information is scattered, kept in paper-based records and EPRs, hospital databases, finance systems, and the personal databases of individual medical specialists. Moreover, information is reported in various and diverse ways: in numbers, in short notes, in figures, and in Excel sheets. Some kinds of data were not collected at all (e.g., the number of pressure ulcers, the number of elderly patients with feeding disorders), or only in rather diverse ways among different departments, as with the registration of complications (cf. de Kam, Wallenburg, & Bal, 2016). Although all hospitals worked hard to get registrations going—and were largely successful in doing so—registration work also encountered many problems in getting to the data that were needed to report on indicators. A warehouse administrator explains, "Often, the same indicator is registered a number of times with dissimilar results . . . In consequence, this generates 4 times the same data; and often this data differs depending on the file. There is too much of the same registration. We work with too many sub-systems." He goes on to explain that it would be almost impossible to merge all data in one ICT system, particularly as different professional groups use dissimilar ICT systems with different demands and also with different needs with regard to data secrecy. Therefore, he manually merges all sources. (Data warehouse administrator, Hospital C)
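To make concrete what such merging work involves, consider a minimal sketch of reconciling the same indicator registered in several subsystems. This is our own illustration, not the hospital's actual procedure: the field names, the source systems, and the rule of keeping the most recent entry are all hypothetical assumptions.

```python
from collections import defaultdict
from datetime import date

# Hypothetical registrations: the same indicator recorded in several
# subsystems with dissimilar results, as the administrator describes.
registrations = [
    {"indicator": "pressure_ulcers", "source": "EPR",     "value": 12, "date": date(2013, 3, 1)},
    {"indicator": "pressure_ulcers", "source": "ward_db", "value": 14, "date": date(2013, 3, 4)},
    {"indicator": "pressure_ulcers", "source": "finance", "value": 11, "date": date(2013, 2, 27)},
    {"indicator": "malnutrition",    "source": "EPR",     "value": 7,  "date": date(2013, 3, 2)},
]

def merge_registrations(records):
    """Group duplicate registrations per indicator, keep the most recent
    value, and flag indicators whose sources disagree."""
    by_indicator = defaultdict(list)
    for rec in records:
        by_indicator[rec["indicator"]].append(rec)

    merged = {}
    for indicator, recs in by_indicator.items():
        newest = max(recs, key=lambda r: r["date"])
        merged[indicator] = {
            "value": newest["value"],
            "sources": sorted(r["source"] for r in recs),
            "conflict": len({r["value"] for r in recs}) > 1,
        }
    return merged

for name, info in merge_registrations(registrations).items():
    note = " (sources disagree)" if info["conflict"] else ""
    print(f"{name}: {info['value']} from {info['sources']}{note}")
```

Even this toy version shows why the work cannot be fully automated: someone has to decide which source counts as authoritative when values conflict, which is precisely the judgment the administrator exercises by hand.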
Collecting the data required for and to score well on rankings urges standardization work. Standardization work, as a form of institutional work, involves substantive work beyond the mere development of technical solutions (Slager et al., 2012), as is reflected in the excerpt above.
Moreover, actors need to decide what kinds of data are collected and how data are processed. Hence, organizations must be reoriented toward data collection and processing, reshaping existing, professionally dominated ways of knowledge production, knowledge sharing, and decision making. Whereas clinical (i.e., medical and nursing) knowledge used to circulate within groups of professionals, hardly accessible to nonclinicians such as hospital managers and quality staff, clinical knowledge must now be made visible to "outsiders" to measure performance. Furthermore, "other kinds" of knowledge become more important and legitimate, like knowledge of quality systems and information and communication technologies. In Hospital A, for instance, an electronic dashboard was developed displaying and comparing the scores of each nursing ward on the Inspectorate's performance indicators on nursing care (i.e., pressure ulcers, delirium, malnutrition, pain). The EPR played a key role in registering data. Electronic measuring tools were built into the EPR, enabling nurses to register patients' scores in a standardized manner. When registrations were delayed, the computer screen turned yellow and flashed "over time," urging nurses to submit the numbers. Furthermore, ward managers monitored the registration of patient scores in their own electronic management system, informing nurses when registrations had not yet been done, and keeping an eye on the results. At each ward, printed versions of the monthly dashboard scores hung on the wall of the nursing station, rendering visible how the nursing ward did compared with others. The electronic data registration was thus enacted as a way of disciplining nurses into "correct" registration. However, during observations, we also encountered nurses who "worked around" the system by filling in pain scores without asking the patient first, as they felt it to be ridiculous and "not professional" to ask a patient who was walking around how much pain they suffered. Instead, nurses used their clinical gaze to estimate patients' pain scores.
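The disciplining mechanism built into the EPR (the screen that turns yellow when registrations are overdue) can be approximated in a few lines. The 8-hour interval, mirroring the thrice-daily pain measurement respondents mentioned, and all names below are illustrative assumptions rather than the hospital's actual system.

```python
from datetime import datetime, timedelta

# Illustrative assumption: pain scores are due every 8 hours,
# corresponding to the "3 times a day" measurement requirement.
REGISTRATION_INTERVAL = timedelta(hours=8)

def registration_status(last_registration: datetime, now: datetime) -> str:
    """Mimic the EPR screen that flags delayed registrations:
    return 'OVERDUE' when the interval has elapsed, 'OK' otherwise."""
    if now - last_registration > REGISTRATION_INTERVAL:
        return "OVERDUE"  # screen turns yellow, urging nurses to submit scores
    return "OK"

# Usage sketch: 9.5 hours since the last pain score was registered.
last = datetime(2012, 9, 4, 6, 30)
now = datetime(2012, 9, 4, 16, 0)
print(registration_status(last, now))  # -> OVERDUE
```

What the sketch makes visible is how a simple time rule, once embedded in the record system, extends managerial surveillance into the rhythm of nursing work without any manager being present.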
These counting and registering practices uncover the institutional work carried out by hospital managers, quality managers, and clinical practitioners. Managers, through developing measurement tools and enacting surveillance practices, attempt to enhance the standardization of data collection and good performance on the indicators—and, as such, to come to grips with professional work. Nurses, in turn, incorporated these registering practices into their work, but also negotiated the way numbers are collected. Here, our data parallel the findings of Slager et al. (2012), who show that standardization is a process involving multiple parties in ongoing negotiation, and the constant work that goes into standardization processes. Yet, our observations also reveal the constitutive effects of the sociomaterial practices of data collection and quantifying performance. The use of the EPR, the electronic monitoring systems of the managers, and the printed versions of the dashboard in the nursing station extend and reconfigure institutional settings as they impinge on vested clinical work practices and the accompanying authority relationships. Below, we tease out further how this governing of data proceeded in the three hospitals, and how it impacted on hospitals' institutionalized ways of organizing and accounting for health care delivery.
Local Hospital Practices of Governing Data
To govern the process of data collection, all hospitals formed steering groups or similar entities of which medical staff, quality staff, information managers, controllers, and hospital managers were members. In this section, we unravel how the collection of data in the hospitals proceeded, and how such processes are embedded in the institutionalized context of hospital organizations. Data collection and quality policies are not universal but get shaped in, and are given meaning through, local institutional arrangements, for instance, in the way hospital managers and practitioners interact.
In all three hospitals, the steering groups developed and "experimented with" new structures and practices to facilitate data collection. In Hospitals A and B, so-termed "indicator steering groups" were set up that had to develop a vision on indicators and increase support for indicators in the hospital. Specific goals included minimizing the registration burden on clinical staff, monitoring national developments, and starting a movement in which external indicators would be used as internal steering information in the hospital.
Social scholars have highlighted the normalizing workings of rankings, as they set norms that introduce homogeneity "by situating the individual within a comparable grouping but also measures individual differences so that the individual is both the product of the norm and the target of normalization" (Sauder & Espeland, 2009, p. 64). In a similar vein, we found that performance indicators drive quality systems as these become more "indicator-like," demonstrating the constitutive effects of rankings. Yet, we also found significant differences. The three hospitals differed in how they organized for rankings and made sense of them in accordance with their institutionalized quality practices. Hospital B, for example, did not seek to benchmark departments but measured improvement over time, which fits with this hospital's tradition of lean management. Hospital B had an internal governing system in which care group managers together with medical directors steer departments. The organization was characterized as "informal" by many respondents, where "much information flows in corridors and only to a limited extent on paper" (care group manager, Hospital B, May 2, 2013). Respondents described this governance system as decentralized, where managers and professionals share leadership and where professionals enjoyed much freedom with regard to indicator governance. The secretary of the executive director—who played an important coordinating role in quality and safety policies throughout the hospital—argued that this "does not yet lead to quality improvement, but rather helps the organization to get a bit of an overview on what happens with regard to quality and safety" (interview, April 23, 2013). However, deviating from the "indicator logic" should be deliberate and justified: We give primacy to our professionals. If professionals say that something is wrong, then we let them argue why it is wrong. We [i.e., executive managers] then often take over that perspective. And of course one listens to their argument carefully. If their argument is "I simply don't feel like it," then that is a totally different story than "come and see, I will show you how things are related to one another." (Executive director, Hospital B, May 2, 2013)
Hospital A, in turn, aimed to centralize performance measurement, which is reflected in the design and use of the electronic dashboard. Furthermore, a quality manager made detailed analyses of the relative performance of the hospital, which were then circulated throughout the hospital.
The quality manager shows me the Excel sheets. Long lists with data, including explanations for the comparisons. Colours indicate points of attention. She shows me the ZiZo3 mirror reports. "We work with percentages: the bottom 25%, the middle 50%, and the top 25%. Then you have to look at what is good or bad; if we score badly we make it red and give that to the hospital board, if we do well we make it green," she explains. "We list the things where we are scoring badly, and where we score well too: you also have to give a positive message!" Score sheets also go to the ward managers, who discuss them at the wards. The board of trustees, which has a quality and safety committee that meets 4 to 5 times a year, is also informed. At one of their meetings, the performance indicators are discussed, as well as the policies adopted to improve the scores. (Observation quality manager, Hospital A, September 4, 2012)
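The traffic-light logic the quality manager describes, bottom 25% red, middle 50% neutral, top 25% green, can be sketched as follows. The scores, the function name, and the tie-handling are our own illustrative assumptions, not the ZiZo methodology.

```python
def band(own_score: float, all_scores: list[float], higher_is_better: bool = True) -> str:
    """Place a hospital's indicator score in the bottom 25% ('red'),
    middle 50% ('neutral'), or top 25% ('green') of the national field."""
    frac_below = sum(s < own_score for s in all_scores) / len(all_scores)
    if not higher_is_better:  # e.g., infection rates: lower scores are better
        frac_below = 1 - frac_below
    if frac_below >= 0.75:
        return "green"   # top 25%: "you also have to give a positive message!"
    if frac_below <= 0.25:
        return "red"     # bottom 25%: flagged for the hospital board
    return "neutral"     # middle 50%

# Usage sketch with hypothetical indicator scores for eight hospitals.
scores = [62.0, 70.0, 74.0, 78.0, 81.0, 85.0, 88.0, 90.0]
print(band(90.0, scores))  # 7/8 of hospitals score below -> 'green'
print(band(62.0, scores))  # no hospital scores below     -> 'red'
```

The point of the sketch is not the arithmetic but what it automates: a cutoff decision that turns a continuous distribution of performance into a red or green cell on the board's agenda.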
This excerpt reveals how rankings spur the development of management tools, enabling hospital managers to come to grips with distributed care practices, rendering these more tangible and "governable"—an argument that we will further tease out below. For now, it is relevant to note the contrast between the centralizing policy of Hospital A and the decentralizing approach of Hospital B. Furthermore, compared with Hospitals A and B, Hospital C had developed a more external focus on market-based health care. This hospital had undergone a restructuring process to generate a better link with the health care market and the capacity to react to changes more quickly. Here, the department for quality and safety was one of the first quality and safety departments nationally, and was comparably well staffed. Due to the increasing national demand to comply with indicators and other performance tools, staff for indicator compliance increased. The hospital also restructured its quality department, whose staff was decoupled from quality improvement work and (officially speaking) no longer supported improvement work on the shop floor. In that way, the department differed from many other hospital quality departments in the country, including those of Hospitals A and B, where quality improvement and quality assurance work were more integrated. Moreover, the changed structure of the quality department in Hospital C underscores the relevance the hospital ascribed to monitoring quality and safety, and to compliance management with indicators in particular.
In sum, the governing of quality data differs between hospital settings. Although rankings perform a centralizing tendency, stimulating managerial action, they play out differently. Local organizational contexts mediate the (re)production of ranking and indicator practices (Finn, Currie, & Martin, 2010), shaping the institutional work that actors (in this case, hospital managers, health care practitioners, and quality staff) conduct. How performance indicators are monitored and how rankings are worked upon depends on local institutionalized practices of governing health care quality—leaving more or less room for social actors to influence ranking practices and quality work. We spell this out further in the next section, in which we discuss the learning and governing effects of rankings, and the deliberate choices that are made within hospital organizations. We show how rankings open up space for hospital managers to (re)negotiate professional work practices, strengthening the governability of hospitals.
Governing Professional Work: Shifting Responsibilities and Changing Roles
Analyzing indicator scores regularly led to changes in processes of care delivery in all the hospitals we studied. As might be expected, those care processes that are measured in the rankings are the focus of attention (cf. Hammarfelt & de Rijcke, 2014). Nursing indicators are a prominent example. As the quality manager in Hospital A noted, the hospital now pays a lot more attention to pressure ulcers as an effect of performance indicators. Similarly, a bad score on malnutrition among the elderly on the Elsevier ranking forced this hospital to change its nutrition policies for the elderly: We had lost many points on malnutrition. We have a lot of patients suffering from malnutrition in this hospital, and we already paid a lot of attention to that. Yet, the Elsevier ranking took into consideration whether we evaluated our treatment on the fourth day of admittance. We often didn't, due to lack of capacity. The Elsevier ranking triggered us to revise our protocol. Yet, we did not have enough dieticians and the board of directors did not want to employ any more of them. We then decided to train capable nutrition assistants to do the job. (Quality manager, Hospital A, September 4, 2012)
Rankings are calculating practices that define priorities; scoring badly on rankings creates support for change. In Hospitals B and C, too, a range of quality programs and projects emerged as reactions to bad indicator scores—again indicating the constitutive effects of rankings. Yet, as Asdal (2011) points out, calculative practices are also relational. Within this relational space, rankings and their underlying indicators are negotiated. Clinical practitioners and hospital managers face an ever-increasing number of indicators and conduct "prioritization work" in response. This prioritization work reveals the room managers leave for, and the value they attach to, the clinical expertise of professionals vis-à-vis the demands of ranking systems. Some ranking requirements are relatively easy to meet and do not raise issues of professional autonomy, as in the excerpt on malnutrition above. Others are more sensitive and induce institutional work among both professionals and managers.
Hospital C is an exemplary case and front-runner in this respect. The executive director openly argued that some indicators are more relevant than others. An example is postoperative wound infection (POWI). Taking the case of hip replacement, the executive director explained that the financial system allows the hospital to get reimbursed for a hip replacement, but not for a postsurgical infection; complications come at the hospital's own expense. The financial consequences of indicators such as POWI move these high up the agenda. Observations show that POWI indeed receives much attention in management meetings and quarterly reports alike (executive director, Hospital C, November 12, 2012). Here, managers enacted the ranking as a management tool to change surgical practice. Hospital B also operated a prioritization system, yet with a different emphasis. Here, managers aim to trigger professional responsibility: I always try to give priority to such indicators where I can see a direct relationship between indicator and work practice . . . For example, the volume of repeat operations. There is a direct link between . . . let's say . . . you can push one button and that has effect. And a repeat operation builds on indicators that show a clear causality between behaviour and outcome. And in such cases, you have to have discussions . . . What you also do [with regard to choosing indicators] . . . you look at where you can generate benefit rapidly and that leads to visible improvement. (Care group manager, Hospital B, May 2, 2013)
Rankings, and particularly their underlying indicators, provide managers with the means to intervene in professional practices. Performance indicators act as tin openers for managers to come to grips with formerly closed or hidden health care practices. Power is exerted through rankings' calculating practices; rankings draw attention to those clinical practices and routines where improvement seems needed. However, this power is dynamic, contingent, and relational (Clegg, 1987; Finn et al., 2010). Clinical indicators, for instance, appeared to leave more room for deliberation than nurse-related indicators. Furthermore, a more centralized hospital such as Hospital C allowed for more management involvement and steering capacity than Hospital B, which has a history of lean management and deliberation. Moreover, these negotiations and deliberations involve prioritization work: Sometimes we decide that we don't want to comply . . . One example concerns the urologists, where there was a story like "you have to do this within seven days," but our process was organized differently and that was actually better for the patient, because we do all the diagnostics at one point in time, so we decided not to change. (Executive director, Hospital A, November 6, 2012)
Although in all three hospitals bad scores on rankings sparked more centralization, making clinicians more organization-oriented, the amount of management control differed. Both Hospitals A and B offered professionals space for negotiation and adaptation, which could also mean ignoring the indicator. But even in those hospitals, medical specialists were sometimes convinced to change their practices. For example, in the case of breast cancer care, Hospital A was losing points because of the policy of the surgeons to aim for breast-saving surgery, whereas the indicator penalized the repeat operations that this policy more often entails; confronted with the lost points, the surgeons reconsidered their practice.
These examples reveal the prioritization work both managers and physicians conduct, which can be identified as institutional work: Managers, by way of "bad scores," intervene in and question professional practice, deliberatively seeking answers to the conflicts rankings evoke. Physicians, in turn, prioritize indicators as well; perfectly aware that they cannot ignore them altogether, they draw managers' attention to the indicators that are most relevant to them. Through these (re)negotiations, clinical routines and professional and management practices are reshaped. How this institutional work is carried out, by whom, and with what results again depends on the local institutional context. This is nicely reflected in the case of Hospital C, where professional-led performance negotiation is displaced into informal circuits outside the formal governance arena. Observations from a management meeting on delirium care are insightful in this respect: Managers and doctors hotly discuss whether lowering the age from 70 to 60 [for screening on delirium] makes sense, and whether screening for delirium is a useful indicator [with regard to investment and return for patient safety] after all. The executive ends the discussion and argues: "We have to put a stop to the discussion. We lose out on important points in the ranking. It simply has to happen." (Observation, Hospital C)
This "it has to happen" argument was often encountered in meetings in Hospital C where professionals started to question the relevance of indicator sets. Although this governance mode helps to end discussions between managers and professionals, it redistributes negotiation into informal, often profession-based circuits, like the nurses mentioned above who estimated patients' pain themselves to fulfill the requirement of measuring pain 3 times a day. Hence, the governance of performance and of professionals is never a single translation, but a way in which hospitals seek to align performance measurement with hospital strategies. Overall, these practices of seeking and probing lead to more intertwined manager–professional strategies for dealing with quality requirements and accounting for care delivery, enhancing managers' influence on the organization and delivery of hospital care.
Discussion and Conclusion
Hospital rankings, as a relatively new performance measurement practice in health care, spark institutional transformation. This transformation is shaped by the institutional work social actors conduct, and is mediated by both new and institutionalized practices of performance measurement and accounting. Institutional work is played out amid, and conducted through, the ambiguous and experimental practices that hospital rankings evoke. Through their "calculating selves and calculable spaces" (Miller, 1994), rankings standardize, simplify, and quantify performance information, rendering clinical work accessible and manageable. These practices do not come naturally, however, nor are they straightforward. Our research shows that hospital rankings produce a great deal of quantifying work: valuing organizational, medical, and nursing work and putting these into numbers. Hospitals invest in form, including the introduction and use of many different information technologies, the training and disciplining of clinical staff to collect and register indicator information, and the standardization of care processes to enable data collection. Hospitals set up committees and steering groups to govern the process of data collection and use, and hire quality and information managers to aggregate data and report them to external parties. This technical, social, and quantifying work, we have shown, exerts influence on the regulatory, normative, and cognitive levels of hospital governance. Rankings give shape to new ways of collecting and governing data, create new sense-making processes, and alter the governance of professional work in hospitals. As a consequence of rankings, new interdependencies between clinicians and managers emerge.
This enhanced manageability is shaped and encouraged by rankings' ambiguity and performativity. Rankings, as we have seen in the three hospitals we studied, induce ambivalent responses. They are embraced, engaged with, and questioned at the same time. This induces prioritization work: Not all indicators are lived up to, as managers and professionals negotiate which are the important ones that need to be met—and which are not. This prioritization work demonstrates that strategic managerial choices are more common than across-the-board indicator compliance. Moreover, it reveals the leeway rankings allow both managers and professionals to conduct institutional work to pursue private goals and safeguard interests, while incorporating ranking practices and logics. This ambiguity, we have demonstrated, enables managers and professionals to collaborate in their struggle to deal with indicators—rendering it a shared effort and enhancing managers' legitimate authority to intervene in clinical practice.
Furthermore, rankings are performative; rankings and the underlying performance measurement practices both organize connections and establish the rules according to which these connections are to be organized (Hammarfelt & de Rijcke, 2014; Orlikowski & Scott, 2013). These performative or "constitutive effects" (Dahler-Larsen, 2013) are related to what Barad (2007) has described as "discursive materiality": "knowledge-making practices that are material enactments contributing to and part of the phenomena." The registries, Excel sheets, and dashboards we encountered enabled rankings, reconfigured clinical practices and relationships, and produced this relationality. Notably, actions moved beyond the direct (medical) professional scope as they triggered a wider organizational transformation of which professionals are part, indicating the increasing entanglement of professionals and hospital organizations (Noordegraaf, van der Steen, & van Twist, 2014; Wallenburg et al., 2016).
Through ranking practices, and through their constitutive effects, social actors (i.e., hospital managers, quality staff, nurses, physicians) conduct institutional work, as rankings insert new expectations and requirements and, in so doing, open up spaces to renegotiate vested practices and power relationships. Quality managers, for instance, have emerged as a new and legitimate professional group in hospitals; they have built data systems and set up improvement projects, often in close relationship with practitioners and hospital managers, thus connecting and adding to both groups. This moreover reveals that constitutive effects do not happen automatically but are actively shaped through the institutional work actors conduct. Furthermore, the constitutive effects rankings produce are mediated by local institutionalized hospital contexts: We found substantial differences in valuation practices and outcomes between the studied hospitals. Performance governance thus is never a one-to-one translation of external demands, but seeks alignment with—and evolves through—a hospital's overall organizational structure and history. The centralized approach of Hospital C, for instance, enabled more stringent regulation and "performance indicator steering" by hospital managers than the lean-oriented approach of Hospital B, which allowed more variation in dealing with indicators. However, in Hospital B too, rankings were in the end taken seriously, and noncompliance always had to be deliberate—also revealing the inescapability of rankings.
Although our research explicitly focused on hospital rankings, wider lessons can be drawn about the working of rankings in organizations. A growing body of literature exemplifies how rankings reconfigure the organization and daily work of law schools, child centers, universities, and the travel sector (among others). These studies converge in revealing how rankings redefine existing practices of organizing and working, setting new targets and redefining what "good" practice entails (Espeland & Sauder, 2007; Hammarfelt & de Rijcke, 2014; Orlikowski & Scott, 2013). Based on our research, we suggest focusing on how the influence of rankings may work out differently in different types of organizations in a field, and on the consequences thereof for the governing abilities of these very organizations.
Our findings have implications for both theory and practice. First, theory. This research contributes to the developing literature on institutional work in three important ways. First, we have highlighted the constitutive influence of materiality in institutional work (cf. Monteiro & Nicolini, 2015), demonstrating the fruitful contribution of the ANT perspective to institutional work scholarship. Materialities like Excel sheets, databases, and rankings themselves possess certain affordances that mediate and produce actions, driving institutional disruption, maintenance, or change. This can be both direct (e.g., a ranking as a list in the newspaper that forces physicians to change their clinical routines to better fit with the indicators' logics and obtain a higher score) and indirect (e.g., measurement practices that render clinical work visible and accessible to hospital managers). This finding also underpins the distributed and contingent nature of institutional work. Second, and building further on the combination of institutional work and ANT, this research has uncovered the intertwinement of institutional work and performativity. We have shown that hospital rankings are not performative in themselves. Rather, the constitutive effects that rankings produce create room for social actors to conduct institutional work and rearrange institutional arrangements. Performativity thus enables institutional work. Third, and again relating to the former theoretical arguments, we have highlighted the practice-oriented approach of institutional work. Rather than contributing to the ever-growing taxonomy of institutional work by revealing and defining new types of work, we have shown that institutional work is recursive and embedded in day-to-day practices. It is about collective problem solving and finding deliberative solutions to the issues at stake. Furthermore, institutional work involves experimenting: using the room that is produced to probe and discover new strategies and feasible practices of transition. These transitions may change existing institutional arrangements in more profound ways than imagined beforehand, reflecting the contingent and experimental nature of institutional work.
Then, practice. Our research suggests that rankings enhance manageability. Yet, we have also shown that such changes need to be both uncovered and "made" in real-time practices, in continuous negotiations between the actor groups involved. Based on this research, we argue that managers and professionals alike should invest in techniques (e.g., databases, measuring tools) to produce new arrangements. As the ensuing transitions are negotiable and contingent, they require patience and reflexive practitioners who continuously monitor and evaluate the transitions they are part of.
