Abstract
Introduction
Rationale
Critical thinking is an important educational life skill, and there is widespread agreement about the need for critical thinking to improve achievement and deepen understanding across the disciplines. Teaching critical thinking is considered necessary in fields such as science education (Fettahlıoğlu & Kaleci, 2018), high school chemistry (Suardana, Redhana, Sudiatmika, & Selamat, 2018), business ethics, undergraduate engineering (Adair & Jaeger, 2016; Ralston & Bays, 2015), nursing/medical education (Papp et al., 2014), and music education (Kokkidou, 2013). Despite the widespread interest in and research into the teaching of critical thinking, students do not have practical guidelines for applying critical thinking in different contexts. One solution to this problem is the seven circumstances, "what, when, why, where, who, how, and what for," first described by Aristotle in the Nicomachean Ethics (Sloan, 2010).
The six critical thinking questions, "what, when, where, how, who, and why," form a scheme that is widely respected in philosophy, used in the law, practiced in journalism, central to inquiry in the sciences, and employed in everyday conversation to gather information. This study of the Aristotelian critical thinking questions fills a gap in educational practice by providing a simple yet powerful definition of critical thinking and a set of tools easily understood by students.
To the best of our knowledge, this study of patterns of critical thinking is the first experimental study of Aristotelian critical thinking questions.
Educational Practice of Teaching Critical Thinking
The 3CA (Concept Maps, Critical Thinking, Collaboration, and Assessment) model is a competency-centered approach to instruction in which students learn patterns of critical thinking. The model has four components: (a) concept maps are a visual method for displaying information as nodes with connecting links; the nodes are visual representations of the knowledge being learned; (b) the links of the concept maps are the critical thinking questions; this is an important conceptual innovation because it makes the measurement of patterns of critical thinking possible; (c) the collaborative phase includes the collaborative construction of a new shared map based on the individual concept maps; and (d) assessment occurs when students collaborate to generate multiple-reasoning items for their own assessment. The items investigate the reasoning of students as they ask the questions "what, when, where, how, who, and why." Each of these practices originates in the activities of the classroom, and through practice and use, the language games of everyday classroom life are transformed into the language games of thinking.
Students first work individually to apply critical thinking questions to concept maps. This first step or homework phase of the model is necessary to ensure that all students have a shared base of knowledge. During the collaborative phase, students work together, exchange concept maps, and then create a new shared concept map using the critical thinking questions (Zandvakili, Washington, Gordon, & Wells, 2018). In the final step of the process, students work collaboratively to construct the multiple-reasoning questions that are the basis for the assessment of their performance in the course (Figure 1 depicts the components of the model).

Components of 3CA (concept map, critical thinking, collaboration, and assessment) model conceptual framework.
As students construct and interrogate the structures of their concept maps, they are also seeing for the first time a representation of their own thinking. This combination of critical thinking and concept maps affords an opportunity to assess patterns of critical thinking by counting the frequency of use of the critical thinking questions as links between concepts (Figure 2). The collaborative component of the model shifts learning from an individual to a group mode of learning. Different patterns of critical thinking emerge as students exchange concept maps and engage in social and cognitive synthesis. Assessment is threaded throughout the model, and the culminating moment in the model is when students create multiple-reasoning questions to be used in their own evaluation for the course.

Critical thinking concept map (adapted from Zandvakili & Washington, 2019).
Overview
The aim of the study was to contrast individual and collaborative approaches to the construction of concept maps with critical thinking. We hypothesized that applying the critical thinking "WH questions" to a child development textbook would produce different patterns of critical thinking. We also investigated whether significant differences in critical thinking patterns persist across time periods.
Research Questions
Background and Literature Review
Concept Maps: The Deconstruction of Text Into Personal Knowledge
Concept maps are node-link diagrams in which each node represents a concept and each link identifies the relationship between two concepts (Schroeder, Nesbit, Anguiano, & Adesope, 2017, p. 431). Concept mapping originated at Cornell University in 1984 in the work of Bill Trochim and a doctoral student, Dorothy Torre; Trochim described concept maps as a form of visual or picture thinking that is fast, automatic, effortless, and often unconscious, brings images to mind, spreads neural activation, and enables the individual or group to respond more easily than before (Donnelly, 2017, p. 186).
When we understand something, we say that we see it. We arrive at the solution to a problem through “insight.” To better communicate our ideas, we aim to make them “clear” (Fan, 2015). Likening visual experiences to cognitive processes suggests a metaphoric connection between how we see the world and how we think (Zandvakili et al., 2018).
Concept maps are examples of what Jonassen (2000) calls "mind tools" that amplify and reorganize cognition, extending the range of the human mind. Nesbit and Adesope (2006) reported that the collaborative construction of concept maps is more beneficial than working individually, consistent with the expectation that collaborative learning is more effective than individual learning (see the later discussion of collaborative learning). As mentioned in Zandvakili et al. (2018), Schroeder et al. (2017) found that using concept maps over time increased the effectiveness of learning and the retention of concepts. In their meta-analysis, they observed that concept maps, or knowledge maps, are diagrams that represent ideas as node-link assemblies, and that there has been a steady increase in the number of published studies over the past thirty years. Concept maps are thus regarded as a good tool for instructors to organize knowledge and an appropriate tool for students to notice the important concepts in different materials (Novak, 1991; Jonassen, Beissner, & Yacci, 1993).
Critical Thinking
Research on critical thinking
There is now widespread consensus among scholars that critical thinking skills are teachable and learnable. For example, two programs to enhance critical thinking skills were described by Halpern (1998). In the first study, general problem-solving abilities and skills were taught based on the Piagetian theory of cognitive development. In the second study, the students were taught a particular type of critical thinking skill that used visual math presentations more like professionals than beginning students. Kennedy, Fisher, and Ennis (1991) concluded that pedagogical interventions aimed at enhancing students' critical thinking abilities have generally produced positive outcomes. A meta-analysis by Abrami et al. (2008) of 118 empirical studies found positive effects of critical thinking instruction, with an average effect size of 0.34. Deductive instruction, modeling, collaborative learning, and constructivist skills are different strategies proposed by different researchers for applying critical thinking. According to many researchers, such as Halpern (1998), Abrami et al. (2008), Facione (1990), and Case (2005), the impact of explicit and direct instruction in improving critical thinking is positive, beneficial, and widespread. They also found that the effect size of the interventions varied as a function of the type of intervention. One of the persisting issues in applying critical thinking is the problem of the transfer of critical thinking from one domain to another; Abrami et al. (2008) have documented this problem in their descriptions of the different interventions.
Critical thinking and asking questions
"Asking questions or interrogation is part of the natural history of what it is to be human" (Zandvakili et al., 2018). There is a small literature on individual children learning to ask questions, and that literature will be briefly reviewed. Pinker (2003) describes children as intuitive psychologists who recognize intentions before they copy them. Children have two motivations to imitate: the first, to be like others, fuels the need to benefit from the information and knowledge of others; the second is normative, the desire to follow the norms of a community. Learning to ask questions is a normative experience that children learn and imitate. The desire to acquire knowledge is present in children between 2 and 5 years of age as they learn the language games of question asking. According to Chouinard, Harris, and Maratsos (2007), children ask an average of 107 questions an hour when engaged in conversation with adults. These youngsters are using language and conversation in a purposeful and intentional way to gather information and fill in gaps in their knowledge.
Asking questions is also a signature skill of detectives and scientists. The famous English detective Sherlock Holmes continually amazes with his ability to observe the dress and self-presentation of the cast of characters in a novel. From careful observation, he notes the clues necessary to solve the baffling mystery, and he makes clear that asking "important questions" is not an easy task. Scientists, too, ask questions and use the clues from careful observations to solve problems.
Asking critical thinking “WH questions.”
According to Sloan (2010), the critical thinking questions "what, when, why, how, who, and where" were first described in Aristotle's Nicomachean Ethics. Aristotle asked how we should identify the dispositions that make for a virtuous person and how we should judge whether an action is virtuous. Aristotle identified the seven circumstances, "what, when, why, how, where, who, and what for," as a schema to determine whether an act is virtuous (Guest, 2017; Sloan, 2010). Sloan (2010) also noted that Cicero adapted the seven circumstances as a rhetorical tool that became a staple of discourse in the legal world of the courts. These same seven circumstances became the six "WH questions" with the dawn of modernity and are now the standard province of journalists.
The six "WH questions" collectively provide the information for understanding a narrative. The answer to each question is a declarative statement that provides different information. The critical thinking "WH questions" are deceptively simple, singular, and important sources of information. These singular language games merge into the development of the language games of thinking critically. Critical thinking may start with who, when, what, where, how, or why; novelists and scientists alike begin inquiry from these different perspectives. The focus upon a single "WH question" misses the point that much of information sharing involves the serial application of the different "WH questions" to a problem.
The study of patterns of data emerged with the cognitive science revolution and high-speed computers. Relying upon the cognitive and neurosciences, Mattson (2014) argues that the superior pattern processing of human minds is the foundation of imagination, thinking, and creativity. Superior pattern processing is made more salient through the use of computer technology that permits the multivariate analysis of data. Our study of the patterns of the six critical thinking questions is made possible because of the philosophical analyses of Aristotle, and the cognitive and neurosciences that have created methodologies to analyze complex data for patterns. These educational practices are direct translations of philosophical inquiry and cognitive science into educational practice.
Collaboration
“The word collaboration is derived from the Latin collaborare and means to work together” (Zandvakili et al., 2018, p. 51). A half-century of research has confirmed that collaboration results in an improvement in educational achievement. Nevertheless, in schools, the focus remains upon individual learning rather than group or team learning. This tradition is reinforced and maintained by the increasing reliance upon summative assessment of the individual. In contrast to this tradition, Scardamalia and Bereiter (2014) emphasize that in the coming decades, team learning and not individual learning will be the focus of attention. They urge researchers to go beyond collaboration to study cognitive responsibility for team learning. They make their case by beginning with the observation that all learning is group learning. Their emphasis on what they call cognitive responsibility can be described under the rubric of team learning to include a shared commitment to achieving a goal or completing a task.
According to Dillenbourg (1999), cooperation is distinguished from collaboration: collaboration involves participants working together on the same task, whereas cooperation involves working in parallel on separate portions of the task, with each person responsible for some portion. However, Dillenbourg (1999) also notes that some spontaneous division of labor may occur during collaboration; thus, the distinction between the two is not necessarily clear-cut (as mentioned in Lai, 2011).
Andrews and Rapp (2015) reviewed the literature on collaboration, citing its distinctive advantages and challenges in enhancing cognitive and psychological development. While the advantages of collaboration for the well-being of individual participants are social, affective, and psychological, the challenges include passing on incorrect information and its incorporation into existing knowledge structures. It is also worth noting that the collaborative approach to teaching critical thinking is highly recommended by several scholars (Abrami et al., 2008; Bonk & Smith, 1998; Heyman, 2008; Thayer-Bacon, 2000).
Assessment
“Assessment is a process of reasoning from evidence guided by theories, models and data on the nature of knowledge representations and the development of competence and expertise in typical domains of classroom instruction” (Pellegrino, Chudowsky, & Glaser, 2001, p. 59; Pellegrino, DiBello, & Goldman, 2016). Pellegrino and his colleagues shift our attention to a systems approach that includes data, theory, practice, and evaluation instruments. This review of the 3CA model of assessment uses a systems approach to describe the elements of critical thinking maps, collaboration, and student-made multiple-reasoning items. The Pellegrino model of assessment also includes data, and the data supporting the 3CA model will be discussed later.
The first major source of evidence for the 3CA model is the creation of the critical thinking–concept maps that result from applying critical thinking questions to concept maps, a unique, visual form of knowledge representation. Chen, Allen, and Jonassen (2018) found that combining concept maps and linguistic knowledge is a mixed-method approach that results in deeper learning. Lachner, Backfisch, and Nückles (2018) argue that the feedback received by students results in an increase in knowledge and understanding. The current approach to the application of the critical thinking questions asks students to apply a single critical thinking question to connect two concepts. The data for the efficacy of the critical thinking maps are found in the repeated measures of patterns of critical thinking from Weeks 1 through 9, which constitute a kind of formative assessment.
A second distinctive feature of the 3CA approach to assessment is the collaborative construction of critical thinking maps. This stage of instruction is also called social and cognitive synthesis; it is the moment in the model in which students exchange critical thinking maps and construct a shared map. The data for this stage are the collaborative assessment of critical thinking.
The last stage of assessment in the 3CA model is the collaborative construction of test items by students for their own assessment; the student-constructed test is later compared with a publisher-produced test. These summative test data are the evidence for the efficacy of this stage of assessment. According to Ashtiani and Babaii (2007, p. 213), "cooperative test construction is the last temptation of educational reform." They situate collaborative test construction within the alternative approaches to assessment that emphasize learner-centered education featuring students as active participants in designing the assessment process. They cite the support of the following studies for cooperative test construction: Allwright (1984) argues that putting students in charge of creating and administering their own tests reduces anxiety and helps make education a joyful experience; Murphey (1994) describes student-made tests as an "effective way to mine students' perceptions which teachers can use to build upon what a group knows as a whole" (p. 12); Rash (1997) proposes that student-constructed tests enable teachers to see where students are and provide students with information about the gaps in their knowledge; Baron-Cohen (2004) argues that student-constructed tests help learners remain accountable for their learning, recognize relevant materials, and promote positive relations between teacher and students; and Brink, Capps, and Sutko (2004) compared student-made tests with a standardized test in a manufacturing-engineering class and found that the creation of student-made tests was more beneficial for above-average students, and that students who prepared good comprehensive questions were significantly higher achievers than those who did not.
The student construction of multiple-reasoning items is a five-part process: (a) indexing the question to the content of the chapter; (b) using the critical thinking questions as stems (e.g., why or how did this event happen?); (c) constructing the alternative answers for the question; (d) posting all the questions from all the students on Moodle, the class website; and (e) giving all students access to all the items from which the test that partly determines their grade for the class will be drawn.
The data collected for formative and summative assessment in this study are convincing evidence of the efficacy of the 3CA model of critical thinking. Student learning is the ultimate aim of classroom instruction, and the patterns of critical thinking are the evidence that should be the focus of inquiry.
Method
Research Design
The design of this research study is both exploratory and experimental. It is exploratory because 3CA is a new model, and each of its components changed and improved through trial and error. The design is a comparative study with random assignment of students to groups: a 3 × 3 factorial design with repeated measures crossing three social conditions (individual, individual-collaborative, and collaborative) with three time periods, each 3 weeks in duration. Formative assessments were made weekly using the critical thinking–concept maps.
Participants
The sample included 64 undergraduate students, ranging in age from 18 to 22 years (average age 19 years), enrolled in a general education course at a major university in the Northeast and satisfying their requirements for graduation. There were 22 students in the collaborative group and 42 students in the individual group. Eighty-two percent of the students were female, 12% male, and 6% did not self-identify; 60% of the class were White, 14% Asian, and 26% Other.
Setting
The current experiment was conducted over a period of 9 weeks. Each week, all students prepared a concept map with critical thinking questions prior to coming to class, so over the 9-week period each student produced nine concept maps with critical thinking. The data were collapsed into three time periods: Time 1 (data averaged over the first three sessions), Time 2 (data averaged over the second three sessions), and Time 3 (data averaged over the last three sessions).
Instruments
The students used the textbook
“The Daily Procedures of the 3CA model” is a PowerPoint presentation to clarify the steps and procedures that students follow to meet the standards of collaborative and individual group performance.
Student-made test for final exam.
Previously used standardized test: Tests based upon test items distributed by publishers.
Experimental conditions
The collaborative group was further divided into two conditions: (a) the individual homework phase of the collaborative group, called the individual-collaborative (IC) condition; and (b) the collaborative in-class phase of the collaborative group, called the collaborative-collaborative (CC) condition.
The individual group is called the IN condition.
Procedures
Training
The class was a four-credit child development course that met for 4 hr once a week. The first day of class was devoted to teaching students the procedures for the experiment. The procedures focused on teaching guidelines for the construction of concept maps, implementation of critical thinking questions on the maps, and generation of multiple-reasoning questions.
Individual/homework phase of collaborative group (IC)
Step 1: At home, students generated one digital concept map of an entire chapter, applying the critical thinking strategies "what, when, where, how, who, and why," and brought the map to class.
Step 2: Students prioritized the key concepts in their map, ranking them from most to least important. The purpose of the prioritization list was to identify the key concepts for the construction of the multiple-reasoning questions.
Step 3: Students generated six questions based on the six critical thinking “WH questions.”
Step 4: Students posted three documents on Moodle: the concept map, the prioritized list, and the six critical thinking questions.
Collaborative/in-class phase of collaborative group (CC)
Step 1 (social synthesis): Students were randomly paired together and learned to collaborate in the classroom setting.
Step 2 (cognitive synthesis): Each pair of students exchanged the concept maps of the chapter that each had generated individually at home. They gave feedback to each other, criticized, agreed, disagreed, and identified gaps in their knowledge. While discussing and giving feedback, they applied the critical thinking strategies (what, how, why, when, where, and who) to a shared concept map. Each pair synthesized and combined the individual maps to generate a new collaborative map of the chapter.
Step 3 (collaborative prioritization): Students in pairs prioritized the concepts recognized in the collaborative maps and ranked them from the most important to the least important to be later used for the generation of questions.
Step 4 (assessment): Each pair of students constructed a total of 12 critical thinking questions. They answered each other’s question, criticized, and revised the items as a group.
Step 5: A member of the team posted all the questions on Moodle so that all items were open to public view and available to all to study for the exams.
Note 1: The semester consisted of three blocks of 3 weeks each. During each block of 3 weeks, the students produced approximately 150 items. In addition to assessing themselves formatively each session during the semester, students were also informed that their questions would be used in the final exam. Grades were determined using the following rubric:
40%: Concept maps/application of critical thinking,
35%: Collaboration/creation of criterion referenced questions,
25%: Final (student-made exam/old exam).
Note 2: The students’ test items were reviewed weekly, modified, and changed slightly, if necessary, by the instructor to improve accuracy, ease of understanding, and clarity between the stem and the multiple-reasoning alternatives.
Individual group (IN)
The individual group was the control group in which students always worked alone during the homework phase and the in-class phase. During the homework phase of the individual group, students worked alone preparing their concept maps with critical thinking questions. This phase was exactly like the individual phase of the collaborative group. Both groups completed their homework assignments of creating their individual critical thinking–concept maps followed by the construction of multiple-reasoning questions.
The second step for the individual group was different from the collaborative group because the individual group did not work together to create a shared concept map with critical thinking questions. Instead, the individual group listened to a lecture, viewed a video relevant to the chapter, and created an individual concept map. The purpose of this exercise was to help students generalize and think visually and more critically about the issues and problems of the day. After watching the video and listening to a short lecture by the instructor, each student produced a concept map of the lecture/video and then created six critical thinking items and posted them to Moodle.
The grade for the Individual group was based on the following factors:
40%: Concepts/critical thinking,
35%: Participation in class,
25%: Final (student-made exam).
Data Analysis
Analysis of the concept maps–critical thinking questions
A breakthrough moment occurred when the researchers found a way to analyze the concept maps using the critical thinking questions. The insight of joining digital concept maps with the application of critical thinking questions resulted in a new tool to analyze critical thinking skills. The use of digital maps with the critical thinking questions as links provided a database that lends itself to statistical analysis. This innovative methodology has the potential to provide a reliable and valid approach to measuring processual events in learning and achievement. The data from this research demonstrate the efficacy of the methodology.
The actual procedures for the analysis of the concept map–critical thinking data take the following steps: the frequency data were obtained by counting the frequency of each of the "WH questions" in the weekly concept maps, and SAS (Statistical Analysis System, 9.1.3, SAS Institute, Cary, NC, USA) was used for further statistical analysis. The digitalization of the critical thinking–concept maps makes it possible to digitize the complete process from data entry to data analysis.
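The counting step described above can be sketched in a few lines of Python; each student's weekly link totals then become one row of the frequency database. The triple format and the concept names here are hypothetical illustrations, not the actual export format used in the study:

```python
from collections import Counter

# The six critical thinking questions used as link labels
WH_QUESTIONS = ["what", "when", "where", "how", "who", "why"]

def count_wh_links(links):
    """Count how often each "WH question" labels a link in a concept map.

    `links` is a hypothetical export format: a list of
    (concept, question, concept) triples, one per labeled link.
    """
    counts = Counter(q.lower() for _, q, _ in links)
    return {q: counts.get(q, 0) for q in WH_QUESTIONS}

# Hypothetical links from one student's weekly map
links = [
    ("attachment", "what", "emotional bond"),
    ("attachment", "why", "promotes survival"),
    ("stranger anxiety", "when", "around 8 months"),
]
print(count_wh_links(links))
# → {'what': 1, 'when': 1, 'where': 0, 'how': 0, 'who': 0, 'why': 1}
```

The resulting per-question counts are exactly the repeated-measures data analyzed below.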
The distributions of the data were tested using the Kolmogorov–Smirnov test, and a transformation was applied when necessary to obtain homogeneous variance. The distributions were examined with PROC UNIVARIATE in SAS. The transformations were based on the Tukey Ladder of Powers, changing skewed distributions into approximately normal distributions.
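A minimal sketch of this normality check and ladder-of-powers transformation, using SciPy on simulated skewed frequency data (not the study's data); the choice of rungs and the KS-based selection criterion are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
freqs = rng.exponential(scale=3.0, size=200) + 0.1  # skewed, strictly positive

def ks_normality_p(x):
    """Kolmogorov-Smirnov p-value against a normal fitted to the sample.
    (Illustrative only; estimating the parameters from the same sample
    makes the nominal p-value anti-conservative.)"""
    z = (x - x.mean()) / x.std(ddof=1)
    return stats.kstest(z, "norm").pvalue

# Tukey Ladder of Powers: try a few rungs and keep the one that looks
# most normal by the KS criterion (lambda = 0 denotes the log transform)
ladder = {0.5: np.sqrt(freqs), 0.0: np.log(freqs), -0.5: freqs ** -0.5}
best_lam = max(ladder, key=lambda lam: ks_normality_p(ladder[lam]))
print("raw p:", ks_normality_p(freqs), "best rung:", best_lam)
```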
To answer the research questions, we used repeated-measures analysis of variance (ANOVA) with PROC GLM in SAS. Repeated-measures ANOVA was appropriate because we evaluated the same subjects at three time points (Beginner, Intermediate, and Advanced) on the same dependent variables (the "WH questions"). These measurements were made under the different levels of the independent variable, experimental condition (CC, IC, IN). Furthermore, Multidimensional Preference Analysis of the dependent variables was performed with PROC PRINQUAL to discover changes and patterns in the "WH questions" and to look for possible clusters of critical thinking questions in the 3CA model, that is, to see what attributes the "WH questions" have in common. Points tightly clustered in a region of a plot represent groupings with the same preference patterns, and vectors that point in roughly the same direction represent "WH questions" with similar preference patterns.
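The SAS code is not reproduced here, but the within-subject (time) portion of such an analysis can be sketched from first principles with NumPy/SciPy. This is a one-way repeated-measures ANOVA on simulated question counts, a simplification of the full mixed design, and the cell means are invented for illustration:

```python
import numpy as np
from scipy import stats

def rm_anova_1way(X):
    """One-way repeated-measures ANOVA.
    X: (n_subjects, k_timepoints) array of question frequencies.
    Returns the F statistic and p-value for the time effect."""
    n, k = X.shape
    grand = X.mean()
    ss_time = n * ((X.mean(axis=0) - grand) ** 2).sum()   # between time points
    ss_subj = k * ((X.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_total = ((X - grand) ** 2).sum()
    ss_error = ss_total - ss_time - ss_subj               # residual
    df_time, df_error = k - 1, (n - 1) * (k - 1)
    F = (ss_time / df_time) / (ss_error / df_error)
    return F, stats.f.sf(F, df_time, df_error)

# Simulated "what"-question counts: 20 students x 3 time periods
rng = np.random.default_rng(7)
X = rng.poisson(lam=[4, 6, 6], size=(20, 3)).astype(float)
F, p = rm_anova_1way(X)
print(f"F(2, 38) = {F:.2f}, p = {p:.4f}")
```

Partitioning out the subject sum of squares is what distinguishes this from a between-subjects ANOVA: each student serves as their own control across the three periods.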
An independent-sample
A power transformation of the student-made test data was performed, and this transformation produced a normal distribution of the skewed data from the student-made test.
To control the family-wise error rate, Fisher's least significant difference (LSD) at alpha = .05 was used for the main effects. With three groups (levels of condition) and three time points, Fisher's LSD is an appropriate method for controlling the family-wise error rate.
To partition the responses (dependent variables) into linear or quadratic trends, trend analyses using orthogonal polynomial contrasts were conducted whenever the effect of time period was significant.
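Such a trend analysis can be sketched with the standard orthogonal polynomial contrast weights for three equally spaced time points; the data below are simulated to show a rise-then-plateau (quadratic) pattern like those reported in the Results:

```python
import numpy as np
from scipy import stats

# Orthogonal polynomial contrast weights for 3 equally spaced time points
LINEAR = np.array([-1.0, 0.0, 1.0])
QUADRATIC = np.array([1.0, -2.0, 1.0])

def trend_test(X, weights):
    """Within-subject trend test: compute one contrast score per subject,
    then run a one-sample t test of the scores against zero.
    X: (n_subjects, 3) array of question frequencies."""
    return stats.ttest_1samp(X @ weights, 0.0)

# Simulated pattern: frequencies rise from Time 1 to Time 2, then level off
rng = np.random.default_rng(3)
X = rng.normal(loc=[4.0, 6.0, 6.2], scale=1.0, size=(30, 3))
lin, quad = trend_test(X, LINEAR), trend_test(X, QUADRATIC)
print(f"linear:    t={lin.statistic:.2f}, p={lin.pvalue:.4f}")
print(f"quadratic: t={quad.statistic:.2f}, p={quad.pvalue:.4f}")
```

Because the two contrast vectors are orthogonal, the linear and quadratic tests partition the time effect into independent components.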
All the graphs were created using spreadsheet programs such as Microsoft Excel 2017 and/or SAS.
Results
ANOVA With Repeated Measures
The first step in the analysis of the data was applying ANOVA with repeated measures to the six different “WH questions”: what, why, how, when, where, and who. The repeated measures were for the three different time periods: Time 1 = beginning, Time 2 = intermediate, and Time 3 = advanced.
All the significances of the comparison of means are presented in Table 1. A general picture of all the trend analyses for repeated measures for each of the critical thinking questions is presented in Figure 3.
A Comparison of Means Between Conditions Across Time Intervals of “WH Questions.”
Significances of the comparison of Means.

Trend analyses of “WH questions” among conditions as affected by time intervals.
What
With 99% confidence, there was a significant difference between student conditions for the generation of "What questions." A comparison of means shows that the individual (IN) and individual-collaborative (IC) groups had significantly higher means and generated more "What questions" than the collaborative group. With 99% confidence, there was a significant difference among the time intervals based on the repeated-measures ANOVA. A comparison of means for the time intervals revealed that students generated more "What questions" in the intermediate and advanced time intervals as compared with the beginner period (Table 1). There was, however, no significant interaction between time intervals and student groupings. A trend analysis revealed that the generation of "What questions" across time followed a quadratic trend, as students across the three conditions tended to generate more "What questions" in the intermediate period than in the beginner period. There was no significant difference between the advanced and intermediate periods (Figure 3).
Why
There were significant differences for both times and groups. With 99% confidence, there was a significant difference between student conditions for the generation of “Why questions.” A comparison of means indicated that the collaborative group (CC) had a significantly higher mean than individual (IN) and individual-collaborative (IC) groups. With 99% confidence, there was a significant difference among the time intervals based on the repeated-measures ANOVA report. A comparison of means for time interval revealed that students generated more “Why questions” in the advanced and intermediate time intervals as compared with the beginner period. However, there was no significant interaction between time intervals and experimental conditions (Table 1). A trend analysis revealed that the generation of “Why questions,” across time, followed the quadratic trend as students had a tendency to generate more “Why questions” in the advanced period in the collaborative group as compared with the intermediate and the beginner periods (Figure 3).
How
There was no significant difference between student groups for the generation of “How questions.” With 99% confidence, there was a significant difference among the time intervals based on the repeated-measures ANOVA. A comparison of means revealed that students generated more “How questions” in the advanced and intermediate time intervals as compared with the beginner period (Table 1). There was a significant interaction between time intervals and groups. A trend analysis revealed that the generation of “How questions” across time followed a quadratic trend. The collaborative group generated fewer “How questions” in the beginner period as compared with the individual (IN) and individual-collaborative (IC) groups. However, the tendency to generate “How questions” shifted in favor of the collaborative group from the intermediate period onward and increased over time (Figure 3).
When
There was a significant difference between groups in the generation of “When questions.” A comparison of means showed that the collaborative (CC) group generated more “When questions” than the individual (IN) and individual-collaborative (IC) groups. There was no significant difference among time periods, and there was no significant interaction between time periods and groups for the generation of “When questions” (Table 1). A trend analysis revealed that the generation of “When questions” across time followed a quadratic trend: the collaborative group tended to generate more “When questions” in the beginner period, but the frequency decreased toward the advanced period (Figure 3).
Where
There were no significant differences between groups, time intervals, and the interaction between time and groups.
Who
There was no significant difference between student groups for the generation of “Who questions.” With 99% confidence, there was a significant difference among the time intervals based on the repeated-measures ANOVA report. A comparison of means showed that students generated more “Who questions” in the intermediate period as compared with the beginner period. There was no significant difference between the intermediate and advanced periods, despite the students’ tendency to use fewer “Who questions” in the advanced period (Table 1). There was a significant interaction between time intervals and groups. A trend analysis revealed that the generation of “Who questions” across time followed a quadratic trend. All groups tended to generate more “Who questions” over time; however, the generation of “Who questions” leveled off after the intermediate period and then decreased during the advanced period (Figure 3).
Multidimensional Preference Analysis
Multidimensional preference analysis between the collaborative (CC) and individual (IN) groups is presented in Figure 4. The biplot shows the cluster of preferences for the two groups. As can be seen from Figure 4, the preferences of the IN group clustered around the questions “what, where, and who.” In contrast, the preferences of the CC group clustered mostly around “why, how, and when.”

Multidimensional preference analysis between CC and IN groups.
As can be seen from Figure 5, the preferences of the IC group clustered around the questions “what and where.” In contrast, the preferences of the CC group clustered mostly around “why, how, who, and when.”

Multidimensional preference analysis between IC and CC groups.
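The input to a preference analysis of this kind is each group’s profile over the six questions. A minimal sketch, using hypothetical counts (not the study’s data) and hypothetical group labels matching the ones above, shows how raw counts become the comparable proportions a biplot is built from:

```python
# Hypothetical question counts per group -- NOT the study's data.
questions = ["what", "why", "how", "when", "where", "who"]

counts = {
    "IN": [42, 10, 12, 3, 2, 9],   # individual group (hypothetical)
    "CC": [18, 24, 20, 8, 2, 7],   # collaborative group (hypothetical)
}

def profile(raw):
    """Normalize raw question counts to proportions summing to 1."""
    total = sum(raw)
    return [c / total for c in raw]

for group, raw in counts.items():
    props = profile(raw)
    favored = max(zip(props, questions))[1]  # question with the highest share
    print(group, [round(p, 2) for p in props], "-> favors:", favored)
```

With profiles like these in hand, a multidimensional preference analysis projects groups and questions into a common low-dimensional space so that a group plots near the questions it favors.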
The t Test Between STMT and OLDT
There was a statistically significant difference between the two tests.
Discussion
This research study produced three outcomes of importance: (1) the methodological innovation of combining concept maps and critical thinking, (2) the finding of patterns of critical thinking, and (3) an instructional process that makes critical thinking transparent and open to public view. Outcomes 2 and 3 would not have been possible without the finding that it is possible to teach and facilitate the learning of patterns of critical thinking.
Methodological Innovation
The first methodological breakthrough of combining concept maps and the scheme of critical thinking has several educational advantages: (a) Computerized concept maps are a reliable way of recording data on concepts and their links (critical thinking questions); (b) the concept maps are a way of, first, transforming text into a visual representation and, second, as can be seen in Figure 2, applying a scheme of critical thinking to the concepts on the map; and (c) the critical thinking maps are a compelling and interesting display of thinking. For many students, this is the first opportunity to see their thinking represented on paper.
The second important accomplishment of this research is the demonstration that it is possible to identify the patterns of the “critical thinking questions: what, when, why, where, who, and how” used by the students working independently or collaboratively. The different “critical thinking WH questions” are the source of the search for specific information necessary to solve problems.
Transforming the text into concepts linked by critical thinking questions is a way of visualizing the thinking of the students. Our data indicate that students favor the “what questions” when applying the scheme of critical thinking, and the other critical thinking questions follow as individuals set out to solve a problem or explain a particular set of circumstances. “Why and how” follow “what” when students and others are trying to understand a situation.
Patterns of Critical Thinking Questions
What
“What” is the most frequently used question by students and reflects the fact that students begin their critical thinking with the questions “what is this?” and “what happened?” Asking a “what question” invites a declarative statement as an answer. The preference for “what questions” is a request for information and is also a reflection of the style of teaching in many college courses. G. E. Forman (visual learners’ tendency in preschool children, personal communication, September 20, 2018), in a report of his ongoing research, found that “how and why” questions increased if students were shown a video of “what” happened. Without the video, students asked significantly more “what questions.” This same finding was replicated in this study with the collaborative group, which showed a drop in the frequency of “what questions” and an increase in “how and why questions.”
Why
“Why” is the most researched of the six “WH questions.” “Why” is famously used by scientists and detectives. In both cases, “why” is only one of the interrogatives applied to understand the situation. “Why” is most often used when there is a mixture of knowledge and ignorance. “Why questions” are used to explain a causal relationship between two events. In this study, “why” was used significantly more often by the collaborative group. “Why” was also used more often in the second and third time periods. This suggests that as the students became more critical thinkers, they tended to use more “why questions.”
How
“How” was the third most frequently used interrogative in this study. The interrogative “how” is used in teleological explanations when the speaker is explaining how some event came about. “How” usually refers to a process, whereas “why” emphasizes a cause. “How” is usually an interrogative about a series of events.
As can be seen in Figure 3, the trend for “how” is an increasing pattern for the students in the collaborative group; in other words, they used more “How questions” over the three time intervals. There was a significant interaction between time and group. When students collaborated, they used the critical questions “how and why” significantly more often, while using fewer “what questions”; they therefore moved beyond factual information and began to ask “why and how” something happened.
When
When an event occurs is important information in a child development class. The collaborative group used significantly more “when questions” than the individual groups. There were no significant differences with regard to time or the time-by-group interaction in the usage of “when.” This is understandable, because the collaborative group goes beyond asking “what happened” to asking “when did it happen,” focusing more on context. This finding is consistent with the higher frequency of the use of “why and how.”
Who
The three experimental groups were not significantly different from each other in the frequency of use of the “who question.” There was a significant difference due to time but not to group. Students used the question “who” significantly more often in the last two time intervals. The increasing use of “who” is attributable to the students’ growing understanding of the major theories and theorists in child development. Throughout the child development text, there is an emphasis on major figures in developmental psychology such as Erikson, Freud, and Vygotsky. As students came to understand the theories in the text, they became more aware of who the major theorists in the field are.
Where
The “where questions” were used infrequently in all groups, and there were no significant differences between the groups. This is reasonable, because the child development text used in the class did not emphasize the “where” aspect of the context; the emphasis was on theory and on decontextualizing the events.
All in all, from the ANOVA and multidimensional scaling data, we concluded that the “what question” plays a unique and influential role in critical thinking. What is it about the question “what”? In G. E. Forman’s (visual learners’ tendency in preschool children, personal communication, September 20, 2018) research on problem solving, he found that when students have video available, they do not ask “what” questions. The function of “what” is to bring to mind a picture. The video in Forman’s research took the place of the images that normally come to mind in the midst of problem solving. If the question “what” is accompanied by an image, then perhaps that explains why students were slow to change their patterns of critical thinking: it is also necessary to change the images that come to mind when the question “what” is asked.
Instructional Process: Making Thinking Visible and Transparent
The 3CA model is a student-centered approach to instruction designed to be visible and transparent to students. Students value visibility and transparency because they give the educational process a sense of fairness. There is a shifting of responsibility from the teacher to the student; this way of teaching is new to students, who are used to being told what to do. The first question asked by students entering the class was, “What is the catch? This can’t all be true. I get to make up the questions for my exam?” The answer is yes. The second question raised by students is, “Do we have to do homework every week?” Again, the answer is yes. Another question on their minds is, “Can you teach thinking? This professor claims that he can teach thinking. I am not so sure about that; I will have to wait and see.” The professor’s answer is yes, we can teach thinking and you can learn to think: Learning to think is learning a second language. Learning a second language is something that everyone can do and will do in accord with life in their communities. This course is like the first semester of a course in a second language. You will learn some new skills and some new words and concepts, but you will not yet be fluent in the language of thinking. Most importantly, you will recognize the second language when you hear or see it being used. You will also recognize that some patterns are familiar to you. The instructional pattern, collaboration, is among the oldest in history, dating back millennia. The tools of inquiry, questions and answers, are even older, dating back to the origins of being human. A healthy respect for asking questions is one of the outcomes of this class designed to teach thinking. The structure of the thinking process is a mystery to many students. They are often encouraged to think critically without specific instructions as to how to go about thinking clearly.
The 3CA model shows students a pathway to thinking as they practice their critical thinking skills in the classroom. Students acquire an increasing sense of agency as they practice creating critical thinking maps, using critical thinking to construct items for their own assessment and collaboration, creating shared maps, and creating the items using critical thinking for their grade for the semester. During each class session, students see that critical thinking is a concrete set of experiences requiring practice. As students go about practicing the activities in the classroom, they learn to practice the public language games in their heads. After practice, the language games practiced in public become language games played in the privacy of the mind.
Some students using the 3CA model state that they are visual learners and, as a result, prefer to use the visual materials in the 3CA model. According to those students who describe themselves as visual learners, the central place of critical thinking maps throughout the instructional process provides helpful support. This study was not designed to explore the fit between self-described visual learners and the 3CA model with its focus upon visual maps. In one study, about 40% of students described themselves as visual learners (Clarke, Flaherty, & Yankey, 2006). It will be interesting to see in future research whether the use of concept maps provides visual learners with a learning dividend.
The philosopher Ludwig Wittgenstein (2009) was also interested in the problem of making thinking visible. He worried that separating public language games from private language games suggests that thinking belongs to a different and obscure realm of the ethereal. The idea of the language game was an important contribution of his great book, Philosophical Investigations.
In Zettel, paragraphs 605 and 606, Wittgenstein (1970) put it this way:
One of the most dangerous ideas for a philosopher is oddly enough that we think with our heads or in our heads. The idea of thinking as a process in the head, in a completely enclosed space, gives him something occult.
Wittgenstein is not denying that there are mental events and images; he is commenting on the origins of mental states and images. He argued that mental states begin as a public practice, much as when you learned to count. He commented that before you learned to count in your head, you learned to count on your hands. Similarly, we learn to read out loud before we learn to read silently. He chose the example of counting because he wanted to show that the most abstract of language games have their origins in everyday activities that are open to public view.
Implications of Findings
The major implications of this study are practical and theoretical. On the practical side, our data indicate that teaching thinking is feasible and realistic across disciplines and ability levels. The use of digital concept maps with the critical thinking questions deserves replication. This innovation makes it possible to identify the patterns of critical thinking in a systematic way. Digital concept maps afford researchers the opportunity to measure the changing frequencies of the different critical questions. The focus of the current research using concept maps has been to emphasize a single connection between concepts. It is possible to go beyond single connections and explore multiple critical links between concepts. According to Zandvakili et al. (2018, p. 47), “Understanding is the application of multiple critical thinking questions to a single concept.” As students go about using multiple links to a concept, they deepen their understanding of the concept and become better critical thinkers.
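The distinction between single and multiple critical links can be made concrete with a minimal data-structure sketch. The concepts and link texts below are hypothetical examples, not taken from the study’s maps:

```python
# A concept map as a graph whose edges are labeled with critical thinking
# questions. Concepts and links below are hypothetical illustrations.
from collections import defaultdict

concept_map = defaultdict(list)

def link(source, question, target):
    """Connect two concepts with a critical-thinking-question label."""
    concept_map[source].append((question, target))

# A single link to a concept answers one question about it:
link("attachment", "what", "bond between infant and caregiver")

# Multiple critical links to the same concept deepen understanding:
link("attachment", "why", "supports infant survival and exploration")
link("attachment", "how", "develops through responsive caregiving")
link("attachment", "when", "emerges in the first year of life")

print(f"links from 'attachment': {len(concept_map['attachment'])}")
```

Counting the distinct question labels attached to each node gives a direct, systematic measure of how deeply a concept is elaborated, which is the kind of analysis digital maps make possible.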
On the theoretical side, there is the serendipitous possibility that the Aristotelian consequences, “what, when, who, where, why, how, and what for,” are the corpus for one of the language games of thinking. If that is true, then this particular language game of thinking has a syntax and something resembling Chomsky’s concept of “merge” (Berwick & Chomsky, 2016). There is ample evidence in our data that students do use combinations of questions in different systematic and appropriate ways. It is common sense to suppose that we all use the sequence of question asking in ways that are appropriate to the setting. The experience of merge in thinking occurs when two ideas come together in a recursive fashion and one folds into the other. The recursive function of merge lends to language the fluency that we take for granted.
Limitations
This exploratory experiment combined the components of concept maps, critical thinking, collaboration, assessment, and mastery into a single package. There is a need for a factorial study that systematically explores the inclusion of different components of the model. It is also important to explore the use of this model in different classroom settings. The beauty of the model is that it can be conducted within any classroom. An individual teacher can implement the model without having to coordinate their efforts with others. This suggests that the model can be tried out in a variety of settings without having to involve large numbers of educational personnel.
