Abstract
Introduction
Who has and should have power in evaluation and what is and should be the role and power of evaluation in public policy and governance are basic questions that need recurrent discussion, particularly when the conditions for evaluation are changing. In the last few years, the conditions for evaluation have changed and created new challenges for evaluation. The changes can be understood and described in different ways. Dahler-Larsen (2021) describes them as four shifts away from research-based stand-alone evaluations that inform “political or managerial decision-making” in representative democracy. There has been a shift from government to governance, from ad hoc evaluation to evaluation systems, a shift in epistemic foci (with less emphasis on how a policy works and more emphasis on performance criteria) and a shift in the status of validity and reliability (“from being seen as logical preconditions of evaluative information to being contextual factors of variable importance”) (Dahler-Larsen, 2021: 20). Evaluation has become more institutionalized and released from democratic control (Dahler-Larsen, 2021). These and other changes in the conditions for evaluation (Hanberger, 2012, 2018; Picciotto, 2015) imply a need for further discussion and exploration of power and evaluation in public policy and democratic governance.
The article contributes to knowledge of power and evaluation by developing a framework for enhancing understanding and exploring both how power manifests and evolves in the evaluation process and how the power of evaluation plays out in democratic governance.
Against this background, the article further discusses and explores power and evaluation from a democratic governance perspective: how power manifests and evolves in the evaluation and policy process through actors’ mobilization of power, and when it manifests itself as constitutive power. It contributes to enhancing understanding and knowledge of why actors act the way they do in the evaluation process and how their actions, together with the constitutive effects of evaluation, affect evaluation reports and the power of evaluation to support key functions in democratic governance.
To gain comprehensive understanding and knowledge of power and evaluation in this context, the article integrates notions of power through structure and agency. It conceives power as a multifaceted and dynamic phenomenon that manifests and affects evaluation in many complementary and sometimes conflicting ways.
The aim is to develop a framework to enhance understanding and support exploration of power in and of evaluation. Two questions guide the article: How can power be understood and explored in the evaluation and policy process? How can the power of evaluation to support key functions in democratic governance be understood and explored?
The article continues with a brief overview of how power is discussed in the political–philosophical literature and the evaluation literature. Based on this literature and research on evaluation and governance, the article then develops the framework. Next, it demonstrates how the framework can be applied to the evaluation of a Swedish teacher-training program. It ends with conclusions about the knowledge generated by the framework and a discussion of its advantages, limitations, and the need for further research.
Notions of power in the political–philosophical literature
Evaluation researchers borrow, explicitly or implicitly, concepts of power from the political–philosophical literature. Therefore, the article begins with a brief overview of how power is conceived and discussed in this literature in relation to decision-making.
This large and growing literature is characterized by deep disagreements over how power should be understood and defined. The discussion of the nature of power is ongoing, reflecting that power is a multifaceted, contested, and dynamic phenomenon.
One main disagreement is between those who understand and define power as getting someone else to do what you want them to do (referred to as “power-over”) and those who define it as the ability or capacity to act (referred to as “power-to”). Classical definitions of the former are Max Weber’s (1978) definition of power as “the probability that one actor within a social relationship will be in a position to carry out his own will despite resistance” (p. 53) and Robert Dahl’s (1957) definition “A has power over B to the extent that he can get B to do something that B would not otherwise do” (pp. 202–203).
Hannah Arendt (1970) defined power as “the human ability not just to act but to act in concert” (p. 44), which is a classic definition of the power-to notion. Amy Allen (1999) extended Arendt’s notion by defining power as “the ability of a collectivity to act together for the attainment of an agreed-upon end or series of ends” (p. 127). However, and as Steven Lukes (2005) recognized, power is a capacity that may or may not be used; he maintains that power “is a potentiality, not an actuality—indeed a potentiality that may never be actualized” (p. 69). These notions of power can be used to reflect actors’ use of power in the evaluation and policy process, but there are more notions that need attention in decision-making. According to Rye (2015), an eclectic approach provides a richer and more flexible approach to studying power in organizational contexts.
Bachrach and Baratz (1962) argued that power is exercised not only through decision-making itself (first face), but also by non-decision-making (second face), that is, by excluding issues from the political agenda. Lukes (2005) recognized a third face of power, with less focus on actors’ behavior and more on the hidden and subtle ways power manifests. Lukes argued that power evolves both through actors’ behavior and through structures, and that power favors certain interests over others. The “dominated” are often unaware of the subtle ways power operates. The fourth face of power, based on Foucault’s (1977) notion of disciplinary power, reflects the fact that societies produce modern and postmodern subjects and identities and is a form of power that is implicit, sometimes invisible and shaped through discourses, knowledge production, governance techniques, and institutional arrangements (Digeser, 1992). In addition, the systemic or constitutive conceptions, according to Haugaard (2010), view power as “the ways in which given social systems confer differentials of dispositional power on agents, thus structuring their possibilities for action” (p. 425). This notion takes into account “the ways in which broad historical, political, economic, cultural, and social forces enable some individuals to exercise power over others, or inculcate certain abilities and dispositions in some actors but not in others” (Allen, 2016: n.p.). The constitutive and disciplinary notions of power can be used to reflect the way power manifests through an evaluation system, for example.
Power has more connotations as it is often combined with other terms. Empowerment is one term frequently used and is of interest for this article as evaluation can empower or disempower actors in governance. One kind of empowerment comes from “downward delegation,” defined as when “an existing power-holder delegates some of his/her capacity for action to a subordinate” (Barnes, 1988: 71). When an agent is empowered by a power-holder, she is given discretion to decide and act under certain conditions regarding certain matters. In contrast, Wartenberg (1990: 207) discussed power that is beneficial to both parties, using the term “transformative power” when the relationship between power-holder and subordinate is more likely to be mutually beneficial (in ideal cases). It is a voluntary relationship that requires openness and trust. Most notions view empowerment as a zero-sum game, overlooking the fact that empowerment is largely a collective resource that can grow, as empowering some does not automatically entail the disempowerment of others (Ball, 1992).
As pointed out by Haugaard (2010) and Allen (2016), how we conceptualize power is highly shaped by the political and theoretical interests that we bring to the study of power. It is also recognized that our conceptions of power are themselves shaped by power relations (Allen, 2016; Lukes, 2005). Lukes (2005) underscored that “how we think about power may serve to reproduce and reinforce power structures and relations, or alternatively it may challenge and subvert them” (p. 63). Hence, how evaluation researchers conceive and manage power issues in evaluation can either reinforce established power structures or challenge them.
How evaluation researchers apply these and other notions of power is discussed next.
Notions of power in the evaluation literature
Rutkowski and Sparks (2014) recognize that evaluation occurs in a complex political terrain where international organizations can assume some sovereignty. They conceive power as a resource and as national political power-over international organizations and evaluation. Similarly, Eckhard and Jankauskas (2019) assume a resource-based notion of political power, conceived as power-over and power-to act focusing on how stakeholders can influence evaluation with agenda-setting power and other political resources. Furubo and Karlsson Vestman (2011) discuss the power of evaluation itself, claiming that the evaluator is a stakeholder with “its own interests and power dynamics to safeguard.” Evaluators’ power can manifest as safeguarding future evaluation commissions or defending the chosen evaluation approach, for example. These authors assume an actor-based notion of power viewing power as a resource that power-holders (national politicians and evaluators) can use to influence evaluation.
In contrast, Raimondo (2018) assumes a constitutive notion of power when exploring evaluation systems. She argues that “evaluation systems derive their power from their capacity to structure knowledge, establish and diffuse norms about worthwhile interventions inside and outside organizations” (Raimondo, 2018: 35). This framework assumes three main ways in which international organizations enact their power: “classification,” “meaning-making,” and “diffusion of norms.”
Evaluation (systems) can shape language, norms, what is considered valid knowledge, and social interactions among actors (Andersen, 2020; Dahler-Larsen, 2012; Furubo and Karlsson, 2011). Andersen (2020) recognizes that an evaluation system can “redistribute power and authority from politicians, interest-groups and citizens to civil servants with the most analytical capacity” (p. 270).
Some evaluation models and approaches are explicitly developed to manage power imbalances in evaluation. Baur et al. (2010) discuss how asymmetric power relations among stakeholders can be managed by evaluators in responsive evaluation. Power imbalances can be used constructively by establishing trust, including marginalized groups, and promoting dialogue and mutual learning processes among stakeholders that empower all stakeholders. The aim is not to reach consensus on problems or future actions but to gain respect and understanding:

We argue that it is essential for responsive evaluation that the evaluator does not try immediately to ease conflict or silence powerful voices in a dialogue. Instead, evaluators should work with stakeholders towards a situation in which all feel empowered to work on practical improvements together. It is a shared responsibility of all stakeholders to solve conflicts and to learn to hear silenced voices. The evaluator can help create awareness about this and he holds a mirror up to stakeholders. (Baur et al., 2010: 245)
These authors also assume an eclectic notion of power combining power-over, power-to act, and transformative power.
Haugen and Chouinard’s (2019) conceptual model of power, developed for analyzing power in culturally responsive evaluations (CREs), is of special interest here as it conceptualizes how power can manifest in the evaluation process. Their model, derived from a research synthesis of CREs and based on Foucault, frames power as a dynamic, relational, and productive concept that structures knowledge and that manifests at multiple levels in evaluation settings (Haugen and Chouinard, 2019: 378). The model recognizes that power can be visible, hidden, or invisible.
This conceptual model provides a multidimensional understanding of how power can manifest in CRE processes. Although the framework recognizes many dimensions of power that may be in play in the evaluation process, the authors do not suggest how to explore these dimensions or how the governance structure (included in political power) constitutes and affects power in the evaluation and policy processes.
Dahler-Larsen (2015) discusses what largely affects the power of evaluation. Evaluation is a social practice set up to change another practice (the evaluand), and to achieve this it must protect itself from contestability (cf. Haugaard, 2010). The evaluation must be backed up by, for example, trust in the applied methodology, the data, and the institution that carries out the evaluation, and in “the virtues related to using evaluation for good purposes such as learning or improvement” (Dahler-Larsen, 2015: 31). Moreover:

To function effectively, an evaluation must exploit the differential between the (relative) fluidity of the social material it seeks to change and the (relative) solidity of its own fixation in the world. I call this difference “the contestability differential.” All evaluation plays with the difference between what is solid and what is not solid. (Dahler-Larsen, 2015: 31)
This implies that if an evaluation is not conceived as solid, its authority and power diminish. However, this is but one factor that, under certain conditions, can influence evaluation use (Dahler-Larsen, 2021) and the power of evaluation.
Andersen (2021) recognizes that an evaluation system can both increase power-over evaluands and decrease the power of evaluation to promote change. “On the one hand, this [the evaluation system] maximises compliance with the recommendations of evaluations—thus increasing evaluations’ power over evaluands. On the other hand, this fixation of subject-positions and epistemological perspective also decreases evaluations’ power to invoke radical change and development” (Andersen, 2021: 53). His observation indicates a relation between power in and of evaluation, and the article will discuss this relation and paradox later.
The cited literature contributes to a many-sided understanding of different dimensions and aspects of power and evaluation. However, it provides limited guidance for exploring power in the evaluation process and the power of evaluation in democratic governance.
Framework
The framework, illustrated in Figure 1, is based on the cited research on power and research on evaluation and governance (Bovens et al., 2006; Chelimsky, 2009; Dahler-Larsen, 2012, 2015, 2021; Furubo and Karlsson, 2011; Hanberger, 2006, 2009, 2011, 2012; Howlett, 2014; Klijn, 2008; Leeuw and Furubo, 2008; Picciotto, 2015; Schoenefeld and Jordan, 2017; Trochim, 2009; Weiss, 1993, 1999; Widmer and Neuenschwander, 2004). It recognizes that public-sector evaluation is embedded in a governance structure (with a division of power between levels of government) and governance model (e.g. management by objectives) and guided by an evaluation policy. Political and administrative actors use their power to set the rules of the evaluation in line with the governance structure, governance model, and evaluation policy, which frame the evaluation and tie it to the functions it should support in governance (Hanberger, 2011, 2012). The framework treats power in evaluation and the power of evaluation as distinct but interrelated phenomena.

Figure 1. Framework for exploring power in and of evaluation in democratic governance.
The framework conceives power as a multifaceted and dynamic phenomenon that permeates and affects evaluation and is enacted through structure and agency. Power can manifest as power-over, power-to act, transformative power, and constitutive and disciplinary power.
First, as indicated by the downward arrows, the governance structure, governance model, and evaluation policy affect both power in the evaluation process and the power of evaluation.
The horizontal dotted arrow indicates that there is some kind of relation between power in evaluation and the power of evaluation.
Second, the framework directs attention to how the above notions of power can manifest and evolve in the following four phases of the evaluation process (first column in Figure 1): the design phase (I), the implementation phase (II), the conclusion phase (III), and the communication phase (IV). During the design phase (I), the preconditions for the evaluation are interpreted and the evaluation model put into practice, providing roles and power to actors at the outset of the evaluation process. How actors use their power to comment on the design can affect the evaluation reports. In a formative evaluation, the shaping of policy (or program) continues during the implementation phase (II) and actors can use their power to influence the development of the evaluation during its implementation. During this phase, solutions to emerging issues regarding selection of cases, data collection, and unexpected challenges may trigger actors to use their power. In the conclusion phase (III), power dynamics can manifest depending, for example, on how conclusions and recommendations in a draft report are received. Power in the communication phase (IV) often manifests in what the actors choose to highlight, overlook, or misrepresent with reference to the evaluation (system). If findings are disputed, actors can use their power to question, for example, the evaluation design, the evaluator’s competence, or the conclusions’ empirical support. At this stage, the evaluation process merges with the policy process (Hanberger, 2011). Whether and how actors in the evaluation process and the continuing policy process use their power with reference to the evaluation can affect the power of evaluation in democratic governance.
Third, the framework reflects the power of evaluation to support key functions in democratic governance.
The constitutive and disciplinary power of evaluation (systems) can support the functions that the governance model relies on and at the same time provide little support to or impede other democratic governance functions.
Table 1 lists factors and conditions that can affect power relations and trigger actors’ use of power in the evaluation process.
Table 1. Conditions and factors affecting power in the evaluation process.
Table 2 lists conditions and factors that can increase or decrease the power of evaluation to support key functions in democratic governance.
Table 2. Conditions and factors that can increase/decrease the power of evaluation.
Applying the framework in empirical research
It is suggested that empirical studies based on the framework can be carried out in five steps: (1) case description, (2) exploration of power in the evaluation process, (3) exploration of how the power of evaluation manifests, (4) analysis of the relation between power in and of evaluation, and (5) analysis of how evaluation and democratic governance interplay.
The framework does not prescribe specific methods. Participant observation, stakeholder interviews, and questionnaires (ex nunc or ex post) can be used for data collection. Analysis of the manifestations of power can draw on some kind of interpretative policy analysis (Fischer et al., 2007; Yanow, 2000), focusing on prevailing manifestations of power, how power dynamics evolve, and how evaluation and democratic governance interplay.
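The steps above concern qualitative interpretation rather than computation, but the coding that underpins a directed content analysis of power manifestations can be sketched in code. The following is a hypothetical illustration, not part of the original study: the phase and power-notion labels are taken from the framework, while the function names, data structures, and example codes are assumptions.

```python
from collections import Counter

# Analytical categories taken from the framework; the operationalization
# below (names, functions, example data) is a hypothetical sketch.
PHASES = ["design", "implementation", "conclusion", "communication"]
POWER_NOTIONS = ["power-over", "power-to", "transformative",
                 "constitutive", "disciplinary"]

def code_observation(phase, notion):
    """Validate one coded observation of a manifestation of power."""
    if phase not in PHASES:
        raise ValueError(f"unknown phase: {phase}")
    if notion not in POWER_NOTIONS:
        raise ValueError(f"unknown power notion: {notion}")
    return (phase, notion)

def tally(observations):
    """Count coded manifestations per (phase, power notion) pair."""
    return Counter(code_observation(p, n) for p, n in observations)

# Hypothetical field notes coded against the scheme
notes = [
    ("design", "transformative"),      # e.g. both commissions expand
    ("implementation", "power-over"),  # e.g. commissioner demands changes
    ("conclusion", "power-over"),      # e.g. publication approval withheld
]
counts = tally(notes)
```

A tally of this kind would only support, not replace, the interpretative analysis; its value lies in making the coding scheme and its categories explicit and auditable.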
Applying the framework to an evaluation
In this article, an evaluation of a Swedish national teacher-training program, the “Literacy Lift,” is used to demonstrate how the framework can be applied in empirical research in five steps. Steps 1–4 are developed below and Step 5 in the “Discussion” section. The data collection methods used are participant observation and the collection of policy documents. The participant observation, undertaken by the author as project leader, includes observations of power when actors take action or no action, interact with and react to one another, and other ways in which power manifests. These observations and the collected policy documents are analyzed through an interpretative policy analysis (Yanow, 2000) and a qualitative directed content analysis (Hsieh and Shannon, 2005) guided by the research questions.
The case
The Swedish government initiated the Literacy Lift national teacher-training program in 2013 in response to the country’s decade-long failings in the Program for International Student Assessment (PISA; Ministry of Education, 2013). The government commissioned the National Agency for Education (NAE) to develop the program. It started in 2015 and ended after the 2019–2020 school year for schools, but continued for preschool teachers during the 2020–2021 school year. The program mainly consists of a learning platform with around 40 web-based literacy modules; groups of teachers led by a supervisor work with two modules for one school year according to a structured training model. As of 2020, Literacy Lift has reached around 25 percent of Swedish teachers in preschool, compulsory, and upper-secondary education. The program’s aim is to enhance teachers’ continuing professional development through what is called “collegial learning” in literacy, to improve teaching and, ultimately, students’ literacy and Sweden’s scoring on future PISA tests.
After procurement, the NAE commissioned a group of researchers to undertake a real-time evaluation of the program. The evaluation, designed as an objectives-oriented stakeholder evaluation, has presented 13 interim reports and a final report. In short, the evaluation shows that the program has helped enhance teachers’ knowledge of and insight into literacy and helped improve the teaching of literacy, but it has not achieved some of the program’s ambitious goals, such as improving all students’ literacy and improving Swedish students’ performance in PISA. Nor has it succeeded in institutionalizing structures for continuing collegial learning and teaching about literacy in all subjects.
The NAE repeatedly reported program progress to the government using interim evaluation reports together with its own follow-ups. The government extended the Literacy Lift program in 2016 and the evaluation was extended as well. The staff responsible for the program and the evaluation at the NAE changed three times, whereas the evaluators stayed the same. The agreement, made at the beginning of the evaluation, to develop empirically thick interim reports and a brief analytical final report based on the interim reports was questioned by the new evaluation manager and program staff. The new demand was to explicitly describe all the data, methods, and statistical analyses in the final report. The evaluators presented a final draft report in April 2019 without objection from the reference group, but the NAE requested additional statistical analysis of the result interpretations. The evaluators at first refused; had they not eventually agreed, the report would not have been accepted for publication and the remaining money for the commission would not have been paid. The additional statistical analysis did not change any major results or conclusions, and the final report was approved in October 2019 but not approved for publication. The NAE presented its own report to the government together with the final evaluation report in June 2020, and at the same time, the evaluation report was approved for publication.
Stakeholders in the program and evaluation comprise the previous and current government, the NAE, school owners, school principals, head-teachers, and teachers/preschool teachers, reference group members, evaluators, Swedish Association of Local Authorities and Regions (SALAR), and Swedish Association of Independent Schools (SAIS).
How power manifests and evolves in the evaluation process
In the design phase (I), power manifested in the interaction between the NAE and the evaluators over the evaluation design.
As noted, the design phase of the evaluation continued during program implementation. Adjustments were continuously made in the evaluation and, for example, new case studies were developed. The NAE and the evaluators discussed adjustments and extensions in which both parties had power as to whether or not to accept adjustments and extensions. NAE’s managers and staff changed during program implementation and agreements made with the evaluators at an early stage dissolved when new managers and staff used their power to demand changes in the design of the final report. The discussion about extending the evaluation to cover the extended Literacy Lift program reflects a brief moment of transformative power—that is, the NAE’s and the evaluators’ power increased when their commissions expanded, benefiting both parties. The case illustrates the dynamic nature of power, that is, how power manifests as power-over, power-to act, and transformative power at the beginning of the evaluation process.
During the implementation phase (II), power manifested and evolved in several ways.
The NAE used its power-over the evaluators to request recurrent feedback on the progress of the evaluation and to assess the quality of the draft reports. The NAE frequently asked the evaluators to add coverage of action taken to improve the modules after the first data were collected. The evaluators used their power to judge the relevance of the NAE’s and the reference group members’ comments and suggestions and to decide what to modify, delete, or add in the draft reports. Not all evaluators always perceived the comments and how to address them in the same way. The project leader of the evaluation used his power-over colleagues to suggest how to manage the NAE’s comments, what changes to make in draft reports, and what analysis could be saved for the final report. Two stakeholders, that is, SALAR and SAIS, empowered themselves and claimed to have the right to comment on and demand changes in the electronic questionnaires. How these two organizations obtained this power-over the NAE and the evaluators is unknown. In any case, the evaluators had to obtain approval before the questionnaires could be distributed. The NAE’s power-over the evaluators during the implementation phase was mostly veiled but became obvious when the NAE repeated its comments on result interpretation and what should or should not be highlighted in the interim reports.
In the conclusion phase (III), power manifested when the NAE requested additional statistical analyses of the final draft report and withheld approval for publication until the request was met.
In the communication phase (IV), power manifested in what the NAE chose to highlight when reporting program progress to the government and in the timing of the publication of the final evaluation report.
How the power of evaluation manifests
The power of the evaluation manifested in how the NAE used the interim reports to refine the program and to govern local school actors’ implementation of the program.
The training model, that is, the systematic work procedures for the modules, of which the government and NAE expressed high expectations, was questioned in the evaluation. The evaluators expressed doubts that the model could serve as a blueprint for continuing collegial learning and teacher training after the program.
Relation between power in and of evaluation
The relation between power in evaluation and the power of evaluation manifested in the case when the NAE’s and the evaluators’ mobilization of power in the evaluation process affected the power of the evaluation to support functions in democratic governance.
Conclusion
The framework contributes to enhancing understanding and knowledge of power in and of evaluation in democratic governance.
The application of the framework shows how power manifests and evolves in the evaluation process and how it affects the power of evaluation in democratic governance.
The power of the evaluation to support key functions in democratic governance was shaped by the NAE’s and the evaluators’ mobilization of power in the evaluation process.
The case also illustrates the constitutive power of the evaluation in shaping the notion of what is valid and reliable knowledge. The NAE’s demands helped reinforce and constitute the notion of statistically significant knowledge as the most valid type of knowledge. The case also illuminates the program’s disciplinary power and how the evaluation reduced this power when questioning the feasibility of the training model to serve as a blueprint for continuing teacher training. Hence, the case demonstrates how the NAE’s and the evaluators’ mobilization of power affected the constitutive and disciplinary power of the evaluation and the program.
The relation between power in and of evaluation manifested when the NAE’s power-over the evaluators increased compliance with the evaluation but decreased its power to support collective learning and democratic discourse.
Discussion
The framework takes into account the four shifts away from research-based ad hoc evaluation informing decision-makers in representative democracy (Dahler-Larsen, 2021) and other trends that have changed the conditions for evaluation (Hanberger, 2018; Picciotto, 2015). To what extent institutionalized evaluation systems replace stand-alone evaluations and decision-makers lose democratic control over evaluation are empirical questions. The case demonstrates how power in evaluation manifests as supporting political and administrative decision-making and reflects more rather than less administrative and democratic control. The NAE continues to commission stand-alone evaluations at the same time as the government, the NAE, and quasi-governmental and private actors have institutionalized at least 30 evaluation systems for primary and secondary education in Sweden (Benerdal, 2019; Lindgren et al., 2016). Thus, stand-alone evaluations and various overlapping monitoring and evaluation systems operate in a crowded policy space in the Swedish education system. Decision-makers tend to lose control over evaluation systems (Andersen, 2021; Dahler-Larsen, 2021; Lindgren et al., 2016) but, as this case shows, not over stand-alone evaluations. Further research can contribute to knowledge of whether stand-alone evaluations are being replaced by evaluation systems in different policy fields and governance models and, if so, how and why administrative and democratic control is affected.
The case demonstrates the multiple and subtle ways power can manifest and evolve in the evaluation process. For example, new actors can enter the scene, empower themselves, and act as power-holders, as two stakeholders (i.e. SALAR and SAIS) did over the NAE and the evaluators. These actors’ empowerment illustrates how NPM governance affects evaluation (Hanberger, 2012; Schoenefeld and Jordan, 2017), shaping new “political opportunity structures” (Schoenefeld and Jordan, 2019) that provide quasi-governmental actors with power to influence the evaluation. The case also demonstrates how the NAE shaped the constitutive power of the evaluation when it used its power-over the evaluators to reinforce the conception of statistical significance-based knowledge as the most valid type of knowledge (Andersen, 2020; Dahler-Larsen, 2012). This could be a strategy to avoid a polarized and mediatized discussion of program effects, to escape blame (Howlett, 2014) for publishing evaluations of low quality, and to protect the Agency from political pressure (Chelimsky, 2009).
For an evaluation to effectively change another practice, in this case, to contribute to improving the teaching of literacy and teacher training (referred to as the policy improvement function in the framework), it must protect itself from contestability and be backed up by trust in the methodology, the data, and the institution that carries out the evaluation (Dahler-Larsen, 2015). The NAE used its power-over the evaluators not only to ensure trust in the methodology of the evaluation but also to build trust in the NAE as a competent knowledge-steering agency. With reference to the evaluation, it developed the program and governed local school actors to implement the program according to the training model. The power of the evaluation to change teaching practice was intertwined with the power of the program. However, what mattered most for changing teaching practice during the program was the application of the training model, supported by a state subsidy, the NAE’s demands on participants, and its control of program implementation. The NAE used the evaluation to refine the program and improve program implementation, indicating some power of the evaluation to support the development of teaching practice. The power of evaluation can also manifest after the program. However, it is not known whether the program material and the evaluation reports, which are free to use and accessible from the NAE’s website, are used by teachers and head-teachers to develop teaching practice after the program.
The framework recognizes that public-sector evaluation is confined by and affects governance (Step 5). The fact that the previous and current governments have the power to develop and change the conditions for evaluation, and that the NAE has the power to develop and revise terms of reference for evaluation, reflects the division of power between the political and administrative levels in representative democracy and the state model of governance (Hanberger, 2009). The NAE used its delegated power to manage the evaluation to match the policy cycle, which diminished the power of the evaluation to support further functions in democratic governance. The NAE used the evaluation to reinforce elite-democratic governance at the same time as it blocked the collective learning and discursive democratic functions (Habermas, 1996; Hanberger, 2006). The evaluation was subjected to the NAE’s evaluation policy (Habermas, 1996; Hanberger, 2006; Trochim, 2009) and response system (Hanberger, 2011) and thus confined by administrative power (Widmer and Neuenschwander, 2004). The project leader of the evaluation used his power-to act in the evaluation process and managed some but not all power imbalances (Baur et al., 2010). He failed to get the report published when it was finalized, which blocked the evaluators’ opportunity to support the collective learning and democratic discourse functions.
The framework treats power as a multifaceted and dynamic phenomenon enacted through structure and agency, which makes it possible to explore both power in the evaluation process and the power of evaluation in governance.
The framework’s advantage is that it contributes to a comprehensive, multifaceted, and dynamic understanding and exploration of power in and of evaluation in democratic governance.
The framework does not support the exploration of power in evaluation outside a democratic governance context.
Another limitation, when using the framework empirically, is that power is by nature difficult to explore; actors may not want to share why they use or do not use their power because this would reveal information that is meant to be kept secret.
As indicated, how power manifests differs not only between but also within stakeholder groups/organizations. The NAE’s senior managers, evaluation managers, and literacy staff did not use their power in the same way in interaction with the evaluators. Members of the reference group also used their power somewhat differently to comment on the program and evaluation. How power manifests in shaping and utilizing political opportunities in evaluation and in promoting (organizational/collective) learning (Schoenefeld and Jordan, 2019) is intertwined and multifaceted, and further research should reflect power differentials both between and within stakeholder groups/organizations.
To enhance understanding and knowledge of what affects the power of evaluation, the factors and conditions listed in Table 2, and other factors that could increase or decrease the power of evaluation to support key governance functions, merit further research in different policy fields and governance contexts.
Whether the paradox of evaluation systems recognized by Andersen (2021) also appears in, for example, evaluation systems that combine indicator-based monitoring with recurrent in-depth evaluations, how constitutive and disciplinary power affects the evaluation process and the power of evaluation, and how actors can affect these kinds of power also merit further research in different policies and governance contexts.
It is hoped that this article can inspire further research and discussion of power and evaluation while the conditions of evaluation and democratic governance are changing.
