Although clickers present little technical challenge, pedagogically they require careful consideration, as different instructional applications may result in vastly different instructional outcomes.
Examples of implementation variations include: displaying the distribution graph immediately after voting or delaying it until after peer discussion, revealing the correct answer right away or withholding it as long as possible, whether or not to use a second vote, and whether or not to combine it with other activities such as demonstrations or peer discussion.
Check out clicker-based pedagogies for best practice tips.
As the adoption of clickers spreads across campus, you may start asking yourself, "Should I use clickers in my courses?" If you are asking because you face challenges in your teaching that you would really like to address, clickers may well be a good solution. The key to successful use of instructional technology is to always begin with a diagnosis: identify a specific weakness in the learning environment, and then choose a particular technology to address that weakness. One such weakness that is increasingly relevant is student engagement.
Student engagement is identified by Duncan (2006) as one of the two "oldest and most fundamental challenges" faced by education, the other being keeping track of students' state of understanding. Duncan argues that clickers can address both challenges. If you feel the need to engage your students more or to know more about their thinking, considering clickers is a good first step. However, a more careful examination of your instructional goals, as well as the pedagogical means by which you want to achieve them, is needed before you ask students to pay for their clickers. Without this second step, you may soon find yourself overwhelmed by student resistance.
This question is better posed as "What can clickers potentially do?" or "What do I want clickers to do?" Here's why:
Clickers are tools, not methods. They do not automatically bring about learning. However, they do possess more enabling features than alternative feedback tools such as raising hands. For example, they collect and tally answers much faster and more accurately, with less possibility for conformity (students selecting an answer chosen by the majority). They encourage every student to participate because of the double benefit of anonymity and accountability they provide – a student’s remote is identifiable to the teacher but not to other students.
All the above are affordances of the technology which can only serve learning when used with solid instructional methods. For example, you can have all students participate in answering a question, but it does not necessarily mean they will seriously think about the question before answering or really engage with the course material. Similarly, if the instructor does not adjust his or her teaching based on the immediate feedback collected with clickers, the great benefit of formative assessment will be lost. So the advantages of using clickers lie in the fact that they support the implementation of sound, interactive instructional techniques.
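The collecting-and-tallying affordance described above is conceptually simple. Here is a minimal sketch in Python (with hypothetical remote IDs, not any vendor's actual API) of how anonymous votes become the percentage distribution an instructor would display to the class:

```python
from collections import Counter

def tally(responses):
    """Aggregate clicker responses into a percentage distribution.

    `responses` maps a (hypothetical) remote ID to the chosen option,
    so the instructor can identify who voted while classmates cannot.
    """
    counts = Counter(responses.values())
    total = len(responses)
    # Percentage of the class choosing each option, sorted by option label.
    return {choice: round(100 * n / total, 1)
            for choice, n in sorted(counts.items())}

votes = {"remote-01": "A", "remote-02": "C", "remote-03": "C", "remote-04": "B"}
print(tally(votes))  # {'A': 25.0, 'B': 25.0, 'C': 50.0}
```

The point of the sketch is the double benefit mentioned above: the mapping from remote ID to answer exists (accountability), but only the aggregated distribution is ever shown to the class (anonymity).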
When used effectively, clickers can help you do the following, and much more, depending on the instructional goals you want to achieve.
When used in pedagogically sound ways, clickers can address the three principles of student learning summarized by McConnell:
First, often neither instructors nor students themselves have a clear idea of what students know and do not know. Research shows that top students tend to underestimate their performance, while low-performing students tend to overestimate their own state of understanding. When students are not aware of their weaknesses, they see no reason to work on them. With clickers, students get immediate feedback on where they stand in their understanding, in both absolute and relative terms. This increased awareness helps them monitor their own progress and learn better.
Secondly, clickers can easily be used to stimulate peer discussion. Once students commit to an answer, they have a stronger vested interest in knowing the correct answer to that question. Listening to the reasoning of fellow students and verbalizing their own thoughts can help students correct faulty understanding, clarify nebulous thoughts, or at least consolidate what they already know. Moreover, students can learn more from multiple perspectives than from a single expert source. Students sometimes find a peer's explanation easier to understand because peers think in similar patterns. In addition, the person with the correct answer may have just struggled through similar difficulties, and that fresh memory enables him or her to explain in a manner more accessible to the still-confused students.
Thirdly, clicker questions, or ConcepTests, as termed by Eric Mazur (the pioneer of the Peer Instruction method), are best designed as conceptual questions rather than factual recall quizzes. Furthermore, when they test knowledge in the grey area where there is no single best answer, they push students to think harder and activate the cognitive processes/habits of mind used by professionals in the field. This will help them develop higher order thinking skills and make them better learners beyond the university walls.
This piece is based on Dr. David A McConnell's presentation Clicker Pedagogy in Larger Classes: How using clickers can improve learning.
If you have difficulty getting students to do their pre-class reading, you are not alone. Research suggests that only 20-30% of college students complete their reading assignments on any given day (Hobson, 2004). More strikingly, in a survey conducted by Derek Bruff, only one student out of 56 had read the textbook before coming to class.
When you implement clickers in the classroom, the issue becomes even more pressing, because preparatory reading is critical to the successful use of clickers. Are there, then, any good ways of tackling this problem?
Why do students not comply with course reading assignments?
Understanding the reasons behind low reading compliance is the first step to finding solutions. Hobson (2004) points out that the research literature focuses excessively on the students as the source of the non-compliance problem and therefore seldom looks at the other side of the "teaching and learning coin" for solutions. Based on survey results showing that students do not perceive course readings as necessary for course success, Hobson identified several drawbacks in course design as contributing factors: a weak relationship between course readings and academic success; no justification in the course syllabus for reading selections; no distinction between "required" readings and readings that are "required to succeed"; and a mismatch between text difficulty and students' background knowledge and reading abilities.
As the non-compliance problem is caused by more than one questionable feature in course design, multiple solutions are needed. Below are some of the solutions proposed by Hobson:
This piece is mainly based on Eric H. Hobson's idea paper Getting Students to Read: Fourteen Tips.
A reading quiz administered either before class or at the beginning of class is a common technique employed by instructors to encourage preparatory reading. While it works for some instructors, others are hesitant to adopt it for various reasons. Henderson and Rosenthal (2006) identified two disadvantages of reading quizzes and recommended reading questions as a better alternative. If the problems summarized below echo your concerns about reading quizzes, you may consider the alternatives that follow.
The challenges of reading quizzes
In the context of teaching physics, Henderson and Rosenthal (2006) discussed the challenge of creating quiz questions at the right level. Simple factual questions may reinforce the already prevalent student conception that physics is all about memorizing and plugging numbers into equations, while deep conceptual questions are unfair to students encountering the material for the first time. Another disadvantage of reading quizzes is that they are a teacher-centered approach. Although an experienced teacher may be well aware of common student difficulties, there is always a gap between what the teacher thinks is difficult or confusing and the reality of student understanding. Moreover, with reading quizzes, students depend on external assessment rather than the more desirable self-assessment.
Reading questions as an alternative incentive
Reading questions (questions that students formulate after reading the assigned material), on the other hand, come from students and reflect students’ perspectives and concerns about the topic in question. According to Henderson and Rosenthal (2006), reading questions have the following merits:
• provide instructors with information about student understanding,
• encourage students to read the text before class,
• promote a positive, student-centered class atmosphere,
• facilitate one-on-one interactions between the students and instructor, and
• provide instructors with readymade material for classroom discussions.
Another strength, not explicitly mentioned by Henderson and Rosenthal but arguable, is that the reading-question approach can help students hone their questioning skills and raise their metacognitive awareness, especially when their questions are graded on quality and level of thought. The abilities to ask good questions and to self-assess are among the most durable learning outcomes we can help students achieve.
It is important to incorporate students' reading questions into class activities, and occasionally mentioning the name of a student who posed a good question is even better. This sustains student motivation for writing the questions because it shows students that their concerns are taken seriously, even if not all are addressed.
A variation of reading questions
Apart from reading quizzes and reading questions, a third alternative that combines the two techniques has proven effective. It takes the form of web-based assignments consisting of both questions that test student understanding and questions that elicit student input about what they consider most difficult or interesting in the reading material. This incentive is an important part of both the Just-in-Time Teaching method and the Peer Instruction method, with the latter borrowing the idea from the former.
Crouch and Mazur (2001), discussing their ten years' experience with using PI in introductory physics, described their evolving strategies for promoting pre-class reading. They started with reading quizzes but soon found that these "do not help students think about it". They then tried short summaries, which also proved disappointing because students were unable to write effective summaries. The incentive they have found most effective is a three-question (all open-ended) web-based assignment they have used since 1998. The first two questions have students grapple with what the instructor perceives as difficult concepts in the reading, and the last asks students what they found most difficult or confusing. Derek Bruff, the author of Teaching with Classroom Response Systems: Creating Active Learning Environments, uses a similar reading incentive. Apart from getting students to read, he found reading questions a good source of clicker questions.
Crouch and Mazur (2001) also saw the benefits of reading questions beyond improving reading compliance.
"Access to the students’ responses to these questions allows the instructor to prepare for class more effectively by providing insight into what students find difficult, complementing the instructor’s ideas about what material needs most emphasis in class. Time spent preparing is comparable to that required for a traditional lecture class; the instructor spends less time reviewing other textbooks and notes for ideas on what should be covered, and more time finding out from the students what they understand. This sort of preparation produces a class better suited to the students’ identified needs. Student response to these reading assignments is particularly positive when their questions are answered (in class or by answers to FAQs posted on the course Web site)." (p. 973)
The answer is a resounding "Yes!"
The use of clickers can bring a great deal of interactivity into large lecture rooms where students have grown accustomed to being anonymous and non-participatory. Don't be surprised if your students do not share your excitement about the change: interactive teaching is not within their expectations for a large-enrolment course, and not necessarily their preference. If you use concept-based clicker questions and peer discussion (highly recommended), it collides with an even more fundamental student expectation: that learning means getting a good grade by memorizing what is taught and regurgitating it on exams (Duncan, 2006).
Trees and Jackson (2007) also note that students who prefer the flexibility in class attendance and participation afforded by a traditional lecture course may harbor negative attitudes toward the use of clickers. It is quite normal for students to resist a sudden shift in instructional method that yanks them out of their comfort zone. However, since the success of clicker use depends crucially on student attitudes, "Instructors must work to facilitate student acceptance and to frame student perceptions of the technology" (p. 38).
One of the ways to "frame student perceptions" about clickers is to explain to students the benefits of their use. Such explanation is termed "metacommunication" by Ian Beatty (2004). Beatty stresses that instructors should communicate to students the rationale behind important course components and instructional decisions, such as learning objectives, the benefits of new teaching techniques, and the rationale behind course assignments. Such higher-level communication is "the most powerful tool for changing students' attitudes about learning and enlisting them as active collaborators in their own education" (p. 11).
In our own clicker survey at Concordia, we found that some students focus their energy on locating the correct answer rather than on the reasoning behind each answer. It is essential to explain to students that answering clicker questions is not an end in itself but a means of developing their conceptual understanding and higher-order thinking skills. This will help them answer questions and participate in discussions in a more meaningful way.
Here is how Douglas Duncan explains to his students why he is using clickers and Peer Instruction and what the students are expected to do to benefit from the experience.
The criteria of a good clicker question vary across disciplines, but there is some general agreement in the literature on what makes an effective clicker question.
A good clicker question is different from a good homework or exam question (Beatty et al., 2006). Qualitative, conceptual questions that entail deep understanding and higher-order thinking are preferred to quantitative and factual questions that mainly involve calculation and recall (Beatty, 2004; Crouch et al., 2007). The best questions, in terms of their potential to stimulate discussion, are those open to multiple interpretations rather than limited to a single solution (Miller et al., 2006).
Three important common characteristics of good clicker questions are:
1. The incorrect answers (distracters) are based on common student misconceptions or difficulties.
2. The level of difficulty is appropriately challenging.
3. The questions are unambiguously worded.
1. Base incorrect answers on common misconceptions
There are many ways to identify common student mistakes. If you have taught the same course several times, you may have developed a good instinct for where students get confused. Nevertheless, it is better for that information to come directly from the students. Methods for coming up with good answer choices include investigating student mistakes in exams and homework assignments from previous years and researching the literature on common student difficulties in your discipline (Crouch et al., 2007). A good way to make your questions reflect the specific needs of your current students is to have them complete open-ended reading questions due shortly before class. You can even elicit student misunderstandings right in the middle of a lecture: when you are uncertain how students might approach a problem, ask an open-ended question and have students throw their ideas at you, then write some of those ideas down as answer choices and turn the original question into a clicker question using the "on the fly" function of the clicker software. Finally, you can always note down any misunderstanding that emerges from class activities as material for future clicker questions.
2. Target your clicker questions at the middle level
A correct answer rate of 35%-70%, suggested by Eric Mazur (the pioneer of the Peer Instruction method) and followed by many other clicker users, is generally considered an optimal level of difficulty for clicker questions. "If fewer than 35% of the students are initially correct, the ConcepTest (Mazur's term for clicker questions) may be ambiguous, or too few students may understand the relevant concepts to have a fruitful discussion (at least without some further guidance from the instructor). If more than 70% of the students can answer the question correctly alone, there is little benefit from discussion" (Crouch et al., 2007, p. 10).
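The rule of thumb above amounts to a simple decision rule after the first vote. A hedged sketch in Python (a hypothetical helper to illustrate the thresholds, not part of any clicker software; the cut-offs are adjustable):

```python
def next_step(correct_votes, total_votes, low=0.35, high=0.70):
    """Suggest what to do after the first vote, following the 35%-70%
    rule of thumb attributed to Mazur (thresholds are parameters)."""
    if total_votes == 0:
        return "no data: re-poll the question"
    rate = correct_votes / total_votes
    if rate < low:
        # Too few correct answers: discussion may be unproductive.
        return "review the concept or reword the question"
    if rate > high:
        # Most students already got it: discussion adds little.
        return "briefly confirm the answer and move on"
    # The productive middle range: have students convince each other.
    return "hold peer discussion, then re-vote"

print(next_step(18, 40))  # 45% correct falls in the middle range
```

With 18 of 40 students correct (45%), the sketch lands in the middle range and recommends peer discussion followed by a second vote, matching the Peer Instruction flow described in the surrounding text.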
3. Word your questions to avoid misinterpretation
There is a difference between intentional and unintended ambiguity in clicker question design. Clicker questions that test the grey area of the subject matter are beneficial to students because they force students to think in cognitively more complex ways. However, if a question is not clearly worded, it may fail to assess what it is intended to assess. Students may have chosen the wrong answer only because they misinterpreted the question.
A good way to avoid misinterpretation of clicker questions is to test them on a smaller group of students before bringing them to class. When you ask a question in class, being on the lookout for possible misinterpretations will enable you to react faster and adjust more easily when they do happen. You can even turn a misinterpretation to your favour when it offers an opportunity to help students explore the topic more deeply or from a broader perspective, as it sometimes does. Flexibility is one of the qualities required of instructors teaching with clickers. Creating good questions takes practice; if you habitually refine your questions based on student feedback, you will certainly improve with time.
Other resources on creating good clicker questions:
Beatty ID, Leonard WJ, Gerace WJ, Dufresne RJ (2006) Question driven instruction: teaching science (well) with an audience response system. In: Banks DA (ed) Audience response systems in higher education: applications and cases. Idea Group Inc, Hershey, pp 96–115.
(Advice on designing questions is on p. 11)
If you happen to teach physics, you are in luck: Beatty and his colleagues (Beatty et al., 2006) have developed frameworks for constructing clicker questions specifically for physics teachers. However, those who do not teach physics will also find the article a good resource.
Most instructors would agree that creating good clicker questions is hard work. What makes the process so demanding? Some say coming up with plausible incorrect answers; others, targeting the questions at the appropriate level. Both are important factors that influence the quality of a question. However, the most challenging and also the most critical part of creating a clicker question, according to Beatty et al. (2006), is determining what you want your students to learn and be able to do through that question.
Threefold goal of clicker questions
Beatty et al. (2006) made a clear distinction between constructing effective clicker questions and quiz questions: every clicker question "should have an explicit pedagogic purpose consisting of a content goal, a process goal, and a metacognitive goal". For the content goal, questions should avoid merely testing memorization of facts and instead focus on the understanding of "concepts, principles, and their inter-relationships" (p. 32). For the process goal, questions should foster cognitive skills that enable students to apply their knowledge in diverse contexts. These skills vary across disciplines; for physics, the "12 habits of mind" can serve as a good reference. Finally, an effective question should also include a metacognitive goal that influences students' perspectives on and approach to learning. After all, the ultimate goal of education is to prepare students for lifelong learning.
Four mechanisms that help achieve the goals
The authors identified four mechanisms through which clicker questions realize their threefold goals: “through focusing students’ attention by posing the question, through stimulating cognitive processes as students ponder the question, through feedback provided to students and instructor by collective viewing of the response histogram, and through articulation and confrontation of ideas during discussion”. These four mechanisms explain perfectly why and how clicker use has the potential to promote learning.
First, questioning itself is a powerful technique because it directs students’ attention to what you want them to focus on and raises their level of consciousness of the subject matter at hand, which is an essential first step to understanding.
Secondly, when the question is well designed, it prompts students to think hard and activate certain cognitive processes they need to arrive at a successful solution.
Thirdly, the tallied student responses generated by the clicker software provide feedback to both the instructor and the students about where students are in their understanding. Note that this information adds value to learning only when it is actually used by the instructor and the students to adjust subsequent teaching and learning. This is the concept of "formative assessment": assessment FOR learning, as opposed to assessment OF learning.
The fourth mechanism entails the use of peer discussion. A large body of literature suggests that successful clicker-based pedagogies usually contain a discussion component; examples are Peer Instruction and Question-driven Instruction. The rationale for peer discussion and class-wide discussion is that "the act of articulating beliefs, perceptions, assumptions, expectations, understanding, and reasoning is inherently valuable to students", because "Thinking is often ill-formed, nebulous, and inconsistent" (Beatty et al., 2006, p. 33). In addition, discussion exposes students to different perspectives, some of which will challenge their own. Confronting students' ideas with other possibilities works far better than telling students what the solutions are. Apart from small-group discussion, a whole-class discussion is also beneficial because it tells you more specifically (than the histogram does) what caused the gaps in students' understanding and how you can help fill them.
Question design tactics for activating the four mechanisms
These mechanisms can be activated by various TACTICS recommended by the authors. See the original article Designing effective questions for classroom response system teaching for a detailed description of various question design tactics that can help you meet your goals.
Bloom's Taxonomy is a framework for classifying learning objectives. It was developed by Benjamin Bloom and collaborators in the 1950s and first appeared in the handbook The Taxonomy of Educational Objectives. The taxonomy consisted of six categories (Knowledge, Comprehension, Application, Analysis, Synthesis, and Evaluation) representing increasing levels of abstraction and sophistication in cognitive learning. Bloom's taxonomy has been widely used in educational settings as a framework for designing learning activities and assessment items.
Diagram source: Dr. Robert Cassidy’s (Psychology Department, Concordia University) lecture slides.
In 2001, a revision of Bloom's Taxonomy was published under the title A Taxonomy for Learning, Teaching, and Assessing. In the revised version, the authors used verbs rather than nouns to describe the objectives, highlighting the cognitive processes the learner engages in during knowledge construction.
With the increased use of clickers in the classroom, Bloom’s taxonomy is becoming a popular framework for designing clicker questions – questions for formative assessment. Click here to access an article that describes how to use Bloom’s taxonomy for creating clicker questions.
For more information on Bloom’s taxonomy, please check the following resources:
Anderson, L. & Krathwohl, D. A Taxonomy for Learning, Teaching and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives. New York: Longman, 2001.
Bloom, B. S. (Ed.). Taxonomy of Educational Objectives: The Classification of Educational Goals. Susan Fauer Company, Inc., 1956, pp. 201–207.