We Can’t Teach ‘Critical Thinking’ Until We Learn How to Assess It

The Victorian era writer John Ruskin once observed that the fantastic creatures from Greek mythology were all created from different parts of familiar animals, with nothing in them created by the imagination. The modern era writer Woody Allen, noting the same thing, proposed his own mythical creatures, one of which had the head of a certified public accountant. The conclusion Ruskin drew was that humans are incapable of imagining something they have never actually experienced, and the best they can do is patch together new arrangements of that with which they are already familiar.

I believe this is one of the most important reasons that it is so hard for the teaching of thinking skills to take hold in education. Teachers and curriculum designers who were never asked to think in their own educations cannot imagine how to include it in their own teaching. More importantly, they have no idea how to assess it.

This was made painfully clear when I was asked to intervene in the unsatisfactory development process for an online world history course. Although the developer had been given clear directions that thinking skills were to be a focus of the course, no thinking was yet to be found: all assessments were fact-based multiple choice. The developer seemed not to understand what was wrong with that, and she eventually produced what she thought was wanted. For example, she wrote a short-answer question in which students were to write out what the Phoenicians believed about something, as if putting a fact into words required more thought than identifying it on a list of multiple-choice items.

She was then given an example of a project in which students compared the different Mesopotamian cultures, identified key similarities and differences, and proposed a theory that would account for them. She could not see any difference between that example and her question about the Phoenicians. The only thing she saw in the sample project was that the students had identified given facts about the different cultures. The actual thought process involved in finding, comparing, and using information to support a new idea was invisible to her.

For others, thinking means the process by which students arrive at a known answer on their own. That is a step in the right direction, but it is really just a step. Consider two ways a teacher can lead a class discussion of, say, Adrienne Rich’s powerful poem “Rape.” The teacher can point to a specific word and ask, “Why did Rich use this word here?” The wording of the question implies that there is a correct answer the teacher knows. A responding student will give either the right answer, a completely wrong answer, or an answer that is wrong but not too far wrong. In the best case, a wrong answer will lead another student to disagree, which only changes who tells the first student that he or she was mistaken. Eventually the teacher will reveal who got it right.

Let’s say, instead, that the teacher had laid the groundwork by teaching proper thinking skills for attacking literature. The teacher asks, “Does anyone see anything interesting in the poem’s diction?” Knowing that diction means word choice, a student might say something like, “I notice that in describing the policeman taking the report, Rich uses the word machine instead of typewriter twice.” Having also been trained that connotation is a key component of diction, the student, or another student, might volunteer that the word machine connotes cold and unfeeling. Another student might volunteer that Rich says the policeman rides a stallion, which is technically incorrect because a policeman would never ride a stallion. Why not? They are too wild and uncontrollable. They are too male. They are — well, they are like rapists.

The process continues, each student’s ideas and observations building collaboratively upon what went before. No one is guessing what is in the teacher’s mind. Differing ideas bring different levels of value, but they all bring value. Even an observation that leads to a dead end is not wrong — it is just an observation that turns out not to be productive, and the process of finding the dead end is helpful in itself. Students are not only learning to think, they are learning the collaborative process by which modern work teams complete their projects.

To use this technique, the teacher must be comfortable facilitating a process that may go in a totally unpredictable direction. Students may notice things the teacher hadn’t already known. Students might be puzzled by and ask questions about something the teacher doesn’t know. The skilled teacher will not be bothered by this because the act of working this out with the students is an excellent lesson in itself. A teacher who cannot say “Hmmm. I have never thought of this before. Let’s think about it” will not be happy with such an approach.

A course that focuses instruction on thinking skills needs to focus assessment on thinking skills as well, and here teachers and course designers are often equally baffled. When I proposed the example of the comparison of Mesopotamian cultures in the world history course, both the course designer and even the people who had written thinking skills into the course design requirements were confused. How could you grade such a thing? Unless you specifically identified the points of comparison so you could tick them off as the student described them, unless you could identify the specific theory that the student must propose, how could you score it?

True education leaders have no trouble understanding how to write a rubric for and score a project like this; the free response questions in the College Board’s Advanced Placement program are excellent models. The problem is that few people are trained in this process. In the organization where I once worked, I was asked to confer with a client program that was not happy with the scoring guides we had provided for the free response questions in our AP courses, guides closely modeled on those the College Board uses to score AP free response questions. The client could not understand how anyone could score a free response without being given a list of specific facts that must be included in the essay and a specific position the essay must take. They wanted a specific number of points for each itemized fact the student included, and they could not conceive of any other way to grade a free response question.

Assessment is actually the key. If students are assessed by how they use thinking skills to analyze a new work of literature, devise a scientific experiment, draw a conclusion about historical data, or apply appropriate mathematical processes to solve a problem, then they will need to be taught how to do it in the first place. Until teachers and course designers learn how to do this themselves, it simply isn’t going to happen. They will instead keep creating assessments that, like the fantastic creatures of Greek mythology, are merely unholy compilations of old and familiar body parts.

4 Responses

  1. “A teacher who cannot say ‘Hmmm. I have never thought of this before. Let’s think about it’ will not be happy with such an approach.”

    It’s exactly those moments that make teaching exciting.

    John’s point about thinking skills being invisible is good. Until people are trained to observe, much of what’s around them is invisible.

    Just as he was frustrated by “blind” history teachers, so have I been frustrated by science teachers who don’t understand science. If you don’t understand science, you will necessarily focus on content, on the words, formulas, and procedures in your books.

    It’s not the teachers’ fault, of course. They never learned science properly and so don’t know the difference. They’re blind to it. They haven’t explored the unknown with imperfect tools and so cannot share that experience with their students.

    Many state science standards just get in the way by listing too much content that must be learned for a high-stakes test. These standards may mention scientific thinking skills, but the tests do not check for them. Even if a question or two did, students could miss those questions and still get good scores.

    Yet, assessment must happen at some level, even if it’s just the teacher writing a test. You must measure if you expect to move things forward. But it’s a good question, how to assess thinking. Ideally, those grading would not have to read free responses until they went mad from bad grammar, poorly constructed paragraphs and the like.

    Many of today’s students expect (have been trained by their teachers) to be able to memorize some material and repeat it back verbatim to get a good grade. I think we can improve our tests by eliminating such questions and requiring that students put different ideas together to arrive at answers.

    That’s what I began to do with my own online science lab system. You might be amazed at the howls I got from teachers whose “excellent” students were getting grades of 30 or 40 on my quizzes. These were multiple-choice tests; I had only asked students to think beyond the narrow boundaries of the material presented to them.

    So I had to reduce the number of thinking questions and add more trivial questions whose answers could be found unambiguously in the lab support materials. I didn’t throw out the thinking questions; I just bumped them to a higher, teacher-selectable level. Now I can sell successfully. Sigh!

    We should challenge our students and especially challenge them to think for themselves. Despite the whining you may hear from them, in the end, they’ll come back and thank you.

    It’s not a college degree that leads to better jobs, it’s the thinking skills that you’re supposed to develop. If you don’t, then you’ve wasted your time and money, and the school is diminishing your country’s future.

  2. What about gathering some training material in critical thinking for teachers and course designers?
    For instance, in NPR’s “Andy Carvin on Tracking and Tweeting Revolutions” by Hari Sreenivasan, Carvin gives a great explanation of how to apply critical thinking in a fluid, uncertain situation.

  3. I believe that critical thinking assessment is about measuring what knowledge is coming out of students rather than simply what knowledge may have been put in that actually had relevance and stayed. I believe the root of the word education is the Latin educere, which can be translated as “to bring out.” The German word for education is Ausbildung, literally “out picture.” From these roots, it seems the purpose of teaching may have always been to get knowledge out of students, and that assessment should always have been about measuring the level of knowledge coming out, not so much what went in.

    In a time and place where we as educators controlled the majority of all knowledge that went into a student, measuring that input was probably a relatively good indicator of the amount of knowledge we could expect to come out. It was also easy to accomplish in “multiple choice” formats. In a world like today where knowledge is everywhere and from everyplace, educators are realizing that input measurement alone is sorely insufficient.

    I will agree that if teachers start to think about what knowledge is coming out of each student, as opposed to simply what they put in, the result will be proportional to the amount of critical thinking that must have occurred in the student. It is not the entire answer to the problem, but it is a start in the right direction.

    • Interesting redefinition of critical thinking, William. I’d just query your time line in:

      In a time and place where we as educators controlled the majority of all knowledge that went into a student, measuring that input was probably a relatively good indicator of the amount of knowledge we could expect to come out. It was also easy to accomplish in “multiple choice” formats. In a world like today where knowledge is everywhere and from everyplace, educators are realizing that input measurement alone is sorely insufficient.

      The primary school my parents, then my siblings and I went through was often boring, but that was due to a rigid bureaucratic misreading of Piaget’s writings on the stages of cognitive development by the Geneva educational powers-that-be: you were not meant to understand given things, and be able to do further things with them, before a given age, hence before a given grade. So the primary school curriculum was very slow.
      However, we were not given multiple-choice tests, at least not for grading. And we were required to deal with arithmetic problems by writing a description of what we were looking for at each stage on the left of each operation we used.
      As far as I remember, the first time I was faced with multiple choice for testing was when I applied for a postgraduate interpreters’ course in London in the 1970s. I found questions like “Do you prefer to a) listen to a violin concerto or b) watch a soccer game?” rather baffling. As my then future husband said when I told him about it: “They should have added ‘c) kick a violin to bits.’”

      End of historical reminiscing/nitpicking on your time line.

      Now as to your main suggestion, i.e. testing students’ output rather than their capacity to reproduce teachers’ input, what seems new to me are the forms this output can now take, forms traditionally trained teachers are not necessarily familiar with: video, for instance.

      I was fortunate to have a high school student in the early 90’s who later became a film director. Once I set the humdrum “French as a foreign language” task “Pick a French TV broadcast you like and present a 5-minute extract of it in class,” and he came up with a video collage of several episodes of the MacGyver series (in the French version) with his voice-over deconstruction of them.

      That was when he was in grade 9, and I could kind of cope with it, at least with his verbal comment. Then in grade 12, when the task I’d set was to pick one of the Théophile Gautier short stories we’d been reading and write an essay on a given aspect of it, he asked if he could write a film script instead.

      Though a script is verbal too, assessing it as part of a potential film requires competences I didn’t have. So I asked a colleague who taught visual arts to co-assess the script with me, and a few years later I signed up for a workshop on text-image narration. It was a great workshop, but only 6 teachers had signed up for it.

      However, that was back in the 90’s, when there were few “video-speaking” students. Now things have dramatically changed because video-speaking has become much simpler.

      What proportion of teachers have received the training necessary to assess the critical/creative thinking in students’ videos like, e.g., the ones Michael Wesch is gathering for the “Visions of Students Today” project via the VOST2011 YouTube tag, or the ones Steve Hargadon is gathering for the TEDactive project?
