Abstract
Keywords
Introduction
Qualitative research plays a vital role in political development and the design of statutes and directives, including research projects funded by the European Commission. Consequently, ensuring the quality of this research is important. Moreover, a large and still growing body of scientific literature addresses the issue of quality and quality criteria in research. Within this body, greater agreement exists regarding quality criteria in quantitative research than in qualitative research, particularly in terms of validity and reliability (Bryman, Becker, & Sempik, 2008). An explanation for this state of affairs lies in the diagnostic, exploratory nature of qualitative research (de Ruyter & Scholl, 1998; Kapoulas & Mitic, 2012), which rests on the belief that the social world comprises subjective experiences and understandings that can change over time and across social contexts (Dew, 2007). This multifaceted, contextual nature of qualitative/exploratory research makes it difficult to establish common quality criteria, as reflected in the lack of consensus among researchers regarding how to approach quality criteria in qualitative research (Cohen & Crabtree, 2008; Mays & Pope, 2000; Sandelowski, 2015; Stige, Malterud, & Midtgarden, 2009). However, clear guidelines for judging the quality of a particular research account or project remain desirable (Hammersley, 2007; Leung, 2015).
The lack of clear quality guidelines in qualitative research inspired us to design a framework for assessing the research quality of qualitative project methods employed in the European Commission financed project Secured Urban Transportation—A European Demonstration (SECUR-ED). The SECUR-ED project, which ran from April 2011 to September 2014, was a complex security demonstration project involving 40 European partners across industry, research, and transport operations. The project aimed to provide public transport operators with a set of tools to improve urban transport security, identified through development and testing of a number of security capacities (which can be security processes, technological tools, or training approaches) in demonstration scenarios in several European cities. The project came up with security solutions and recommendations that can bring value to public transport security stakeholders, as outlined in the “White Paper for Public Transport Stakeholders” (SECUR-ED, 2014b).
The SECUR-ED project applied a wide range of project methods, including common research methods such as interviews, questionnaires, and literature studies; project management methods such as mailing lists, cooperation portals, physical meetings, and teleconferences; methods that support project coordination, cooperation, and collaboration; and methods developed for the transport operator partners (capacities/solutions). However, the assessment framework presented in this article had a different focus on methods; it was designed by the authors to study the qualitative methods that the SECUR-ED project developed to handle its size and complexity. The six selected methods are qualitative by nature: common glossary, interoperability notation, scenario description with business process model and notation (BPMN), capacity mapping matrix, demo city dashboards, and capacity evaluation approach. (The following sections will discuss these evaluations and methods.)
The agenda for this article is to describe our study and approach to the development of the assessment framework, the framework itself, and the application of the framework to a selection of SECUR-ED project methods. In addition, we discuss the outcome of our assessment process, including the novelty of the framework and its possible broader application to assessing similar research projects. It is important to note that our analysis did not address whether the methods would have been sufficient or necessary to achieve the SECUR-ED project’s overall goals regardless of how well they were implemented. In other words, our assessments do not say anything about the degree of success of the SECUR-ED project as a whole.
Method
For an overview of current research into quality and quality criteria, we searched electronic online databases, such as Academic Search Elite, ScienceDirect, Web of Science, and SwetsWise, for combinations of the keywords research quality, qualitative research, validity, and reliability. Based on the resulting publications and reference lists (snowballing) (Greenhalgh & Peacock, 2005; Webster & Watson, 2002), we identified the quality criteria of transferability, systematic design/reliability, and transactional validity, which we integrated into a framework to assess the quality of the SECUR-ED project methods. The criteria identification and framework integration are described and illustrated in the next section.
As partners of the SECUR-ED consortium, we had access to working papers in progress and peer-reviewed research reports from the project that described and discussed the SECUR-ED methods evaluated in our study. These reports constituted important empirical data for our evaluation of the SECUR-ED methods and for developing and testing the framework. In addition to these documents, we had access to project partners, including researchers, industry partners, and transport operators. Specifically, individuals involved in the development and application of the methods provided oral and written input based on their experiences. Also, our assessments of each method, as well as different versions of our assessment framework, were presented to project members (including dedicated reviewers) on several occasions, such as in meetings, teleconferences, and emails. Thus, written and oral contributions were successively collected and triangulated over time. In sum, we obtained rich, specific information to enable comparison and review of a selection of SECUR-ED methods.
Developing the Assessment Framework
As described, qualitative research provides important insights and input for development of policies and statutes, which implies the need for guidelines to judge the quality of a particular research account or project. Accordingly, in this section, we argue for the identified assessment criteria and the integration of the criteria into a framework to assess project methods.
Assessment Criterion: Transferability
Polit and Beck (2010) suggested that “generalization is an act of reasoning that involves drawing broad conclusions from particular instances—that is, making an inference about the unobserved based on the observed” (p. 1451). Some qualitative researchers have questioned the generalizability of any type of findings. In their view, findings are always embedded within a context, making it problematic to extrapolate from “the particular” (Erlandson, Harris, Skipper, & Allen, 1993; Lincoln & Guba, 1985). Therefore, an alternative approach, transferability, should be considered. Transferability, or the case-to-case translation model (Firestone, 1993), assumes that the researcher’s job is to describe the time and context in which the particular findings were true, whereas the reader’s job is to determine the extent to which the findings may apply in another context (Polit & Beck, 2010). Therefore, the degree of transferability between two given contexts depends on the thoroughness of the researcher’s contextual descriptions that allow other researchers to determine the applicability of the findings (Lincoln & Guba, 1985).
Applying the same understanding to the SECUR-ED project methods, the logic becomes the following: The better the context or background described in documents, the better the reader will be able to assess the applicability and transferability of the method to another setting both within the transportation sector and in other sectors. In this study, we are the readers who will assess transferability, based on the available descriptions of context and background. On other occasions, readers may be decision makers or advisers within the private sector or public authorities.
Assessment Criterion: Systematic Design/Reliability
For quantitative research, the concept of reliability implies “dependability, stability, consistency, predictability, accuracy” (Kerlinger, 1973, p. 422), and is considered a precondition for validity. More specifically, Kirk and Miller (1986) identified three types of reliability: The degree to which a measurement, given repeatedly, remains the same; the stability of a measurement over time; and the similarity of measurements within a given time period. Careful considerations of measurements are imperative in quantitative research. However, the quantitative researcher’s focus on measurements and stability does not align well with the qualitative researcher’s view of the social world as complex, multifaceted, and contextual (Dew, 2007; Moran-Ellis et al., 2006). Therefore, an alternative perspective suggests that reliability in qualitative research can be reached through systematic operation at the design level (methods and techniques, interview protocols, and so forth; de Ruyter & Scholl, 1998, p. 13), which implies keeping “detailed account of the research steps undertaken” (Kapoulas & Mitic, 2012, p. 361).
To assess the degree of systematic research design employed in the SECUR-ED project methods, we reviewed project documents and gathered oral and written inputs from individuals involved in the project.
Assessment Criterion: Transactional Validity
Within qualitative research, validity has traditionally been defined as the researcher’s efforts to determine the degree of correspondence between claims about knowledge and the reality being investigated (Eisner & Peshkin, 1990). This definition is comparable with how quantitative research applies internal validity and is understood as a means to determine whether observations and measurements truly capture what they are intended to capture (LeCompte & Goetz, 1982). More recently, the concept of validity in qualitative research has evolved to an approach Cho and Trent (2006) labeled transactional validity, which “assumes that qualitative research can be more credible as long as certain techniques, methods, and/or strategies are employed during the conduct of the inquiry” (p. 322). Based on this development of transactional validity, we considered the following techniques or strategies to assess the SECUR-ED project methods (Onwuegbuzie & Leech, 2007):
Prolonged engagement: Entails investing enough time to understand the culture, establish trust with study participants, and check for distortions such as a priori values and constructions (Lincoln & Guba, 1985). In the context of the SECUR-ED project, prolonged engagement translates to the participants’ investment to understand and use project methods. As suggested by Billups (2015) and Matthews and Kostelis (2011), this investment cannot be measured in numbers, but must be long enough to establish a familiarity with project methods.
Persistent observations: Aims to identify the characteristics and elements in a situation that are most relevant to the phenomena under investigation and to focus on them extensively to achieve depth (Lincoln & Guba, 1985). In the context of the SECUR-ED project, persistent observations refer to the participants’ investment to identify and observe the most relevant characteristics of project methods over time, which implies that prolonged engagement to understand and use the methods is required. This investment cannot be measured but must be long enough to enable identification and observation of the most relevant characteristics of project methods.
Triangulation: Involves combining multiple and different sources of information to deepen and widen understanding of the multifaceted, complex nature of the social world and phenomena (Moran-Ellis et al., 2006; Olsen, 2004). Triangulation reduces the possibility of chance associations and prevalent, systematic biases, and thereby increases the confidence in the research data and interpretations (Denzin, 1970; Fielding & Fielding, 1986; Maxwell, 1992; Thurmond, 2001).
Member-checking/informant feedback: Member-checking, also known as informant feedback, is a continuous process in which the researcher seeks feedback on collected data, analytic categories, interpretations, and conclusions from the study group (Cho & Trent, 2006; Lincoln & Guba, 1985). In the context of the SECUR-ED project, member-checking entails participants’ active and continuous discussions of project methods with other project members.
Peer debriefing/review: Constitutes an external and logical evaluation of the research process, such as procedures, methods, interpretations, and conclusions (Glesne & Peshkin, 1992; Lincoln & Guba, 1985; Maxwell, 1996).
Rich and thick description: The researcher provides rich and thick description of the setting, participants, methods, data collection (such as observations of specific events and behaviors), data analysis, interpretations, the researcher’s role, and so forth (Becker, 1970). This detailed information strengthens the credibility of findings and enables the reader to assert whether the findings can be transferred to another setting because of shared similarities (Erlandson et al., 1993).
To summarize, we can determine the transactional validity of the selected project method and the overall validity of the project methods by assessing whether and to what degree these techniques/strategies apply to each SECUR-ED project method.
The Quality Assessment Framework
The three identified quality criteria—transferability, systematic design/reliability, and transactional validity—combined with our operationalization approaches and sources of data form a framework for assessing the quality of SECUR-ED project methods (Figure 1). The figure shows how quality was assessed through a five-step process. First, the assessment criteria were identified and then operationalized. Next, the selected SECUR-ED methods were identified and their use within the project was described, establishing a baseline against which each method’s intended use could be evaluated relative to its actual application. Data for the assessment—written project documents and oral and written input from project partners—were then documented and systematically reviewed. Finally, the quality of each project method was assessed against the three identified criteria and synthesized.
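To make the assessment flow concrete, the five steps can be sketched as a minimal data model. This is purely illustrative code written by us; the function and field names are our own invention, not artifacts of the SECUR-ED project, and the example scores are those reported for the common glossary later in this article.

```python
# Illustrative sketch (not project code) of the five-step assessment flow.
# All identifiers below are hypothetical; only the criteria and scale labels
# come from the article itself.

CRITERIA = ("transferability", "reliability", "validity")  # step 1: identify

def assess_method(name, intent, actual_use, evidence, scores):
    """Steps 2-5: operationalize the criteria, compare intent against
    actual use, review the evidence, and synthesize the quality scores."""
    assert set(scores) == set(CRITERIA), "every criterion must be scored"
    return {
        "method": name,
        "intent": intent,        # step 3: intended use ...
        "use": actual_use,       # ... evaluated against actual application
        "evidence": evidence,    # step 4: documents plus partner input
        "scores": scores,        # step 5: synthesized quality assessment
    }

record = assess_method(
    "common glossary",
    intent="create a common communication platform",
    actual_use="uneven uptake across working groups",
    evidence=["project documents", "partner input"],
    scores={"transferability": "satisfactory",
            "reliability": "unsatisfactory",
            "validity": "unsatisfactory"},
)
```

The sketch deliberately keeps intent and actual use as separate fields, because the comparison between the two is the baseline for every judgment in the framework.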

The quality assessment framework.
Applying the Quality Assessment Framework on SECUR-ED Developed Methods
To assess each project method, we reviewed and compared the data (documents and input from project members) and resolved discrepancies and disagreements in our assessments and descriptions to reach a consensus (Cho & Trent, 2006; Denzin, 1978; Eisner & Peshkin, 1990; Patton, 1990). The process started with the simple formula of identifying the method intent (the intention behind the particular method) and method use (the method’s actual use during the project), which subsequently informed our assessment of transferability, reliability, and validity in the quality assessment step illustrated in Figure 1.
We subsequently assessed each SECUR-ED method qualitatively, according to the three criteria of transferability, reliability, and validity. The results of this evaluation were summarized and merged into a few variables and presented together in a matrix (Table 1): research quality measured by transferability, reliability, and validity, using a 3-point scale of unsatisfactory, less satisfactory, and satisfactory.
An Overview of Our Quality Evaluations of the Applied SECUR-ED Project Methods.
Evaluation Results
This section presents the results of our assessment for each of the SECUR-ED project methods. It follows the schema of method intent, method use, and quality assessment.
First Method Assessed: Common Glossary
Method Intent
The common glossary document provided a set of security terminology, definitions, and acronyms used throughout the SECUR-ED project, with the main intention to create a common communication platform. The document also was intended to be a living document, and therefore to be continuously updated according to the needs and circumstances throughout the project’s life.
Method Use
Within the SECUR-ED project, a discussion about some words and their underlying concepts clarified different approaches, making the establishment of a glossary a valuable process. This discussion was important because the project involved partners from different backgrounds, such as security, public transport, and information technology.
However, active usage of the glossary among the various project working groups was diverse. Not everybody was aware of the glossary, and some conceptual structures were reinvented first and aligned later. Some teams also used terms and structures provided in other project documents. Although the glossary’s initial version was finalized in October 2011, one major update occurred in December 2012.
Furthermore, city and organizational representatives participating in the project mainly had to follow their local terminologies, often in their national language, to interact with local stakeholders. This additional challenge did not encourage a strict adherence to the terminology in the glossary.
Quality Assessment
Transferability
A number of the security terms and definitions included in the common glossary were relevant to a wide range of sectors, from transportation and industry to offshore operations and health care. Examples of transferable terms include accident, crisis, security, cybersecurity, information security, risk, risk management, risk assessment, safety, access control, incident management system, user requirements, and interoperability. In addition, most definitions were quite thorough and detailed, implying that the glossary had transferability potential both within and outside the transport sector.
Reliability
This method appeared to lack reliability because the glossary was not applied consistently and systematically during the SECUR-ED project period, and some teams were unaware of the glossary or used only local terminologies.
Validity
This method lacked peer review, and the glossary was not actively used, revisited, or refined throughout the project period, although doing so could have improved shared understanding of project tasks expressed in glossary terminology. In other words, the SECUR-ED project lacked prolonged engagement with the method. The existing understanding of terminologies also should have been validated through member-checking as data were gathered throughout the project. In sum, the method did not add to the overall validity of the SECUR-ED project.
Second Method Assessed: Interoperability Notation
Method Intent
The SECUR-ED project developed a specific notation for interoperability (the extent to which systems and organizations are able to work together) in the domain of public transport security (SECUR-ED, 2014a). The intention was to enable later subprojects to describe their components and systems in a consistent format and to use a common interface specification language that would reduce the risk of misunderstandings between SECUR-ED partners and activities.
Method Use
The method was applied to several security demonstration scenarios in the SECUR-ED project in which the method helped to identify and visualize relevant roles and systems, including their relationships. However, these scenarios were part of only two project tasks in SECUR-ED. The method also was adapted and applied in a software tool, RED (“Requirements Editor and Designer”), but the process for using this tool was neither conceptualized nor implemented during the SECUR-ED project period.
Quality Assessment
Transferability
As described in SECUR-ED project documents, the interoperability notation is rich in background and context, which facilitates its use in similar contexts within the transportation sector and across sectors.
Reliability
Although useful during two project tasks, the method saw limited usage in the SECUR-ED project as a whole. The method was also applied in the development of a software tool, but the process for using the tool was never properly established. Consequently, we found limited systematic design related to this method and a limited contribution to reliability in the SECUR-ED project.
Validity
The method was used to identify and visualize relevant roles, systems, and their relationships during two of the SECUR-ED project tasks, and thus facilitated triangulation of data and understandings in the project, which strengthened validity. However, the method was not implemented and documented more broadly in the SECUR-ED project and constituted a limited contribution to overall project validity.
Third Method Assessed: Scenario Description With BPMN
Method Intent
The BPMN method provided a graphical representation for specifying business processes; through consistent use, it was intended to improve project communication and the setup of links between the tested security capacities and the demonstration scenarios in the SECUR-ED project. Through a credible story, these demonstration scenarios illustrated how a security risk may materialize and how SECUR-ED results could improve the preparedness of the involved parties.
Method Use
BPMN process diagrams were used for different approaches and deliverables in the SECUR-ED project, but with different intentions. The quality of process documentation also varied; for example, some deliverables described the process diagrams in plain text only. In sum, the BPMN diagrams had diverse use and documentation in the SECUR-ED project.
Quality Assessment
Transferability
BPMN is an established and broadly used notation, which makes it highly transferable to other contexts and sectors. However, as applied in the SECUR-ED project, BPMN involved fragmented descriptions with varying levels of detail, which resulted in limited transferability to contexts within the transportation sectors and across sectors.
Reliability
Variations in the project’s usage of the BPMN method, accompanied by inconsistent levels of descriptions, demonstrate a lack of systematic design related to this method. Therefore, the method as used in the SECUR-ED project did not strengthen overall reliability of the project.
Validity
Variations in BPMN’s usage and inconsistent levels of descriptions implied a lack of a consistent focus, persistent observation, and “rich and thick descriptions,” prerequisites for establishing validity. Consequently, BPMN as used in the SECUR-ED project did not contribute to the project’s credibility.
Fourth Method Assessed: Capacity Mapping Matrix
Method Intent
The capacity mapping matrix (CMM) is an Excel spreadsheet listing project capacities, demonstration cities, and project deliverables, and was developed to provide an overview of the status of all the deliverables and capacities. The intention was to identify and list proposed SECUR-ED capacities and to link them to the deliverables to ensure that information about each capacity was captured in the relevant deliverables. Another reason for developing the CMM was to link the capacities to the demonstration cities to get an overview of each demonstration site. Moreover, the CMM identified the responsible capacity providers and points of contact to facilitate the handover between the capacities and the demonstration teams.
Method Use
In addition to its intended use, the CMM was used in project management to check and confirm capacity assignment to a demonstration. A project partner regularly updated the CMM, allowing project management to keep track of the status of the capacities. Various partners in the project utilized the CMM. For example, a public transport operator used the CMM to identify capacities of interest, which deliverables might provide more technical information about the capacities, and who to contact for more information. Furthermore, the CMM was used to benchmark project performance, as the evaluators updated and adjusted it to identify which capacities were planned and demonstrated in which cities. This modification helped project managers to identify any capacities that were planned but not demonstrated, and the reasons why.
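For technically minded readers, the CMM’s core logic—cross-referencing capacities, deliverables, contacts, and demonstration cities—can be sketched in a few lines. This is our own hypothetical reconstruction, not the actual Excel tool; the capacity names, deliverable numbers, and city names below are invented placeholders rather than real SECUR-ED entries.

```python
# Hypothetical sketch of the CMM as a table of rows, with the "planned but
# not demonstrated" check the project managers used. All entries are invented.

cmm = [
    {"capacity": "CCTV analytics", "deliverable": "D3.1",
     "contact": "provider A", "planned_in": {"City X", "City Y"},
     "demonstrated_in": {"City X"}},
    {"capacity": "staff training", "deliverable": "D5.2",
     "contact": "provider B", "planned_in": {"City Y"},
     "demonstrated_in": {"City Y"}},
]

def planned_not_demonstrated(rows):
    """Flag capacities that had planned demo sites where no demo took place."""
    return {r["capacity"]: r["planned_in"] - r["demonstrated_in"]
            for r in rows if r["planned_in"] - r["demonstrated_in"]}

print(planned_not_demonstrated(cmm))  # {'CCTV analytics': {'City Y'}}
```

Even in this toy form, the value of the CMM is visible: once capacities, sites, and contacts sit in one structure, status questions reduce to simple lookups.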
Quality Assessment
Transferability
The CMM is a simple tool that can easily be used by other large-scale demonstration projects to map solutions and other technologies and actions to demonstration sites or scenarios. The CMM also was regularly updated and well documented, which suggests potential transferability of the method within the transportation sector.
Reliability
The CMM gave a clear overview of all the capacities in the project and which were or were not ready. As the project progressed, the CMM increasingly served as a tool to map the capacities and to link them to the demonstration cities. Overall, the CMM method saw active/systematic use during the SECUR-ED project, and thus contributed toward strengthening reliability of the project.
Validity
The CMM benefited both project management and other partners in the project. Overall, the method’s extended use project-wide improved the overview and understanding of the status of each capacity, and thus served to triangulate data. The method also became well documented (“rich and thick descriptions”) through regular updates, indicating that the method contributed to the SECUR-ED project’s credibility.
Fifth Method Assessed: Demo City Dashboards
Method Intent
The dashboard, which is an Excel spreadsheet file summarizing the process of each of the demonstrations, initially was intended as a tool for reporting progress to project management. The intention was to regularly update these dashboards during the preparations of the demonstrations.
Method Use
The dashboard was used as a reporting tool in a project coordination report, with summarized information about project milestones completed, selected capacities for each demonstration, and potential planning and implementation risks and mitigation actions taken. In addition, information from the dashboards was used to present progress of demonstration cities during project management board meetings. However, the SECUR-ED partners did not integrate the dashboards from the beginning of planning the demonstrations and updated them only sporadically.
Quality Assessment
Transferability
The initial intention was for the dashboards to be used as a tool to report the progress of the demonstrations. Similar spreadsheet files can be used easily in other comparable projects for that aim. However, the method was not used systematically and actively, and was not well documented, which suggests limited transferability potential within the transportation sector.
Reliability
As a reporting tool, the dashboards’ design with different color schemes was simple and comprehensible and could serve a coordination purpose. However, the dashboards were neither integrated into preparations for the demonstrations nor regularly updated. We concluded that although the dashboards were used in the coordination report, city demonstrations, and presentations at project management board meetings, the tool did not achieve systematic use during the SECUR-ED project and therefore did not contribute toward strengthening reliability of the project.
Validity
Systematic usage of the dashboards was not achieved, indicating a lack of triangulation and documentation efforts; therefore, the method did not contribute toward improving understanding and overview of project status, planning, and coordination. Consequently, the method did not strengthen the SECUR-ED project’s credibility.
Sixth Method Assessed: Capacity Evaluation Approach
Method Intent
The objective of the capacity evaluation approach was to provide a comprehensive assessment of all capacities considered in SECUR-ED. Specifically, the assessment process was intended to identify the most promising capacities and methods based on their performance in the demonstrations, using a cost-effectiveness measure. A related objective was to generalize the findings from the individual demonstrations, to evaluate how the demonstrated capabilities might work in other settings and in other cities. To achieve these objectives, a common framework (method for assessing capacities) was established to align all parts of the work involved in the assessment. This framework specified how to use all available data and expertise to evaluate individual capacities, combine results from separate assessments, and generalize the conclusions.
Method Use
Project documents demonstrated active usage of the capacity evaluation approach, including preliminary assessments of numerous capacities in the SECUR-ED project derived from interviews and material produced in the project. Project documents likewise revealed active usage of the method for assessing capacities, the overarching framework for the assessment approach. However, the assessment remained incomplete for many capacities and was conducted very differently for each capacity, with little or highly uneven reflection. Assessing capacities against the suggested criteria proved difficult because the capacities are intertwined with other capacities and with existing solutions.
Quality Assessment
Transferability
The thoroughness of the assessment framework/method described in various project documents suggests potential transferability of the capacity evaluation approach. However, the assessments based on the framework were incomplete and contained little or inconsistent reflection, indicating a lack of documentation and transferability. Overall, we found limited transferability of this method.
Reliability
The capacity evaluation approach, as described in project documents, demonstrates a high degree of systematic design. In this sense, the method contributed positively to the reliability of the SECUR-ED project as a whole. However, inconsistent usage of the method, as well as incomplete reflections, demonstrated a lack of systematic design in practice, indicating an overall limited contribution to the reliability of the project.
Validity
“Rich and thick descriptions” of the assessment method and the capacities evaluated suggested this particular method’s contribution to the credibility of the SECUR-ED project. Various project documents, reports, and evaluations from operators and observers at various demonstration events—supportive of both triangulation and member-checking—further improved the credibility contribution of the method. However, the framework was not applied systematically across capacities, and the reflections were incomplete or inconsistent, which suggested a lack of persistent observation and prolonged engagement. Consequently, the credibility contribution of this method was limited.
Evaluation Summary
Our evaluations of the quality of the different methods applied in the SECUR-ED project are summarized in Table 1, using a simple 3-point scale: If the method did not comply with good practice as described by a criterion, we entered a score of “unsatisfactory” along this criterion; if there was limited compliance with a criterion, the method was scored as performing “less satisfactory” along this criterion; and if the criterion was roughly met, we entered a score of “satisfactory.”
As shown in Table 1, we assessed six SECUR-ED project methods. Only one of these (the capacity mapping matrix) scored satisfactory on all three quality criteria of transferability, reliability, and validity. One method (the common glossary) scored satisfactory on transferability but unsatisfactory on both reliability and validity. One method (interoperability notation) also scored satisfactory on transferability, but less satisfactory on both reliability and validity. Two methods (scenario description with BPMN and demo city dashboards) scored identically: less satisfactory on transferability and unsatisfactory on both reliability and validity. Finally, one method (the capacity evaluation approach) scored uniformly less satisfactory on all three quality criteria.
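The scores in Table 1 can be encoded to check that the ranking discussed in the next section follows directly from them. The ordinal encoding below (unsatisfactory = 0, less satisfactory = 1, satisfactory = 2) is our own illustrative device, not part of the assessment framework itself.

```python
# Table 1 reconstructed from the text, with an illustrative ordinal encoding
# (ours, not the framework's) to derive an overall ranking of the methods.

TABLE_1 = {  # (transferability, reliability, validity)
    "capacity mapping matrix":        ("satisfactory", "satisfactory", "satisfactory"),
    "common glossary":                ("satisfactory", "unsatisfactory", "unsatisfactory"),
    "interoperability notation":      ("satisfactory", "less satisfactory", "less satisfactory"),
    "scenario description with BPMN": ("less satisfactory", "unsatisfactory", "unsatisfactory"),
    "demo city dashboards":           ("less satisfactory", "unsatisfactory", "unsatisfactory"),
    "capacity evaluation approach":   ("less satisfactory",) * 3,
}

ORDINAL = {"unsatisfactory": 0, "less satisfactory": 1, "satisfactory": 2}

# Rank methods by total score, highest first.
ranking = sorted(TABLE_1, key=lambda m: -sum(ORDINAL[s] for s in TABLE_1[m]))
# capacity mapping matrix ranks first; BPMN and the dashboards tie for last.
```

Any monotone encoding would produce the same order here, since the capacity mapping matrix dominates on every criterion and the two lowest-scoring methods share identical scores.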
Overall Assessment of Project Methods
Three of the six SECUR-ED project methods assessed in our study were accompanied by detailed descriptions and/or represent established methods or notations, which demonstrates potential transferability to other contexts both within and outside the transportation sector (see Table 1). On the other hand, the scoring in Table 1 shows that three of the six methods had unsatisfactory reliability; clearer requirements and better internal communication and information-sharing would therefore be needed to ensure that such project-wide methods are used consistently and produce comparable results. Stability and consistency are prerequisites for rich and thick descriptions, triangulation, persistent observation, and prolonged engagement; because of the identified variations and inconsistent usage, three of the six SECUR-ED project methods therefore contributed unsatisfactorily to the validity of the project. It should be noted that four of the six methods scored somewhere in between (less satisfactory) on one or more of the quality criteria, implying that the overall picture has many nuances. Nevertheless, a ranking can be made: The capacity mapping matrix comes out best, interoperability notation second, the capacity evaluation approach third, the common glossary fourth, and scenario description with BPMN and demo city dashboards joint fifth. By scoring high on all quality criteria, the capacity mapping matrix should be of particular value as a “best practice” standard for future European Union (EU) projects. Moreover, we suggest that improved coordination and routines for project-internal information-sharing be applied to the design and implementation of methods in future projects, and that these projects replicate the documentation efforts demonstrated in relation to several of the SECUR-ED methods. It must be noted that the scoring could turn out quite differently if the framework were applied in another project.
Limitations and Strengths of the Study
Viswanathan (2005) pointed out the general difference between conceptual and operational definitions in measurement and scientific research. Concrete concepts such as time and length have conceptual definitions from which measurement follows directly; length, for example, can be defined conceptually as the shortest distance between two points. Weight and temperature involve more abstract conceptual definitions and a larger distance between the conceptual and the operational. In the social sciences, for example when measuring attitudes toward objects, the distance between the conceptual and the operational can be large. As this distance increases, so do the number of possible ways to measure the concept and the potential for measurement error, and accurate measurement is central to scientific research. This potentially large distance between conceptual and operational definitions is highly relevant for our evaluation and could be considered a weakness of our study.
However, one can also argue that qualitative data and findings are embedded in a given context and in subjective experiences, and can therefore change from time to time and place to place (Dew, 2007). This makes any extrapolation of “the particular” in qualitative research highly challenging (Erlandson et al., 1993; Lincoln & Guba, 1985), including attempts at repeated and stable measurements, which are therefore necessarily less accurate (see our discussion of the reliability criterion). Consequently, while we acknowledge that our way of assessing and measuring quality criteria (using a 3-point scale of “unsatisfactory,” “less satisfactory,” and “satisfactory”) entails limited measurement precision, such imprecision is to some degree inherent in qualitative research.
We compensated for the lack of measurement precision by triangulating data across several researchers and using several methods (meetings, teleconferences, and emails) and sources of data (documents and inputs from project members), which increased our study’s credibility. We also documented and applied our assessment framework in a systematic and consistent manner, which strengthened both the transferability and reliability of our study.
Conclusion
SECUR-ED was one of the largest demonstration projects in the EU, with 40 partners operating in different cultural and legal environments. Managing such a comprehensive consortium, with partners from different countries, mother tongues, cultures, and understandings of how to do the job, is a daunting endeavor. A large number of tasks had to be performed within a short timeline, and any failure to deliver would delay somebody’s work somewhere in the consortium. This context is important to take into consideration, as it is much easier to obtain a favorable score when one has full control of the activities and people involved.
As a contribution to existing research, we have demonstrated how to conceptualize, operationalize, and apply quality criteria in the assessment of project methods that are qualitative in nature, thereby addressing the identified need for clear quality guidelines in qualitative research. Our assessment framework and process therefore represent a novel approach that we believe can hold value in future project evaluations across research areas and sectors. More specifically, we believe that the three quality criteria at the core of the framework and assessment process constitute higher level concepts; that is, the concepts are not embedded in the particular findings and are therefore context independent and applicable to other settings and participants (Erlandson et al., 1993; Glaser, 2002; Lincoln & Guba, 1985; Misco, 2007; Morse, 2004). Findings from explorations based on the framework can thus potentially be compared across research areas and sectors. Consequently, we recommend that research projects conducted across contexts and topics employ this framework to facilitate its further development and validation. Taking this initiative further into standards development would provide additional value to policy and statute development.
