I assign students to groups on the first day of class (typically three to four students in adjacent seats) and design each lecture around a series of seven to ten clicker questions that cover the key learning goals for that day. The groups are told they must come to a consensus answer (entered with their clickers) and be prepared to offer reasons for their choice.
It is in these peer discussions that most students do the primary processing of the new ideas and problem-solving approaches. The process of critiquing each other’s ideas in order to arrive at a consensus also enormously improves both their ability to carry on scientific discourse and their ability to test their own understanding.
Clickers also give valuable (albeit often painful) feedback to the instructor when they reveal, for example, that only 10 percent of the students understood what was just explained. But they also provide feedback in less obvious ways.
By circulating through the classroom and listening in on the consensus-group discussions, I quickly learn which aspects of the topic confuse students and can then target those points in the follow-up discussion. Perhaps even more important is the feedback provided to the students through the histograms and their own discussions. They become much more invested in their own learning.
When using clickers and consensus groups, I see dramatically more substantive questions per class period — more students ask questions, and those students represent a much broader distribution by ethnicity and gender — than when using the peer-instruction approach without clickers.
A third powerful educational technology is the sophisticated online interactive simulation. This technique can be highly effective and takes less time to incorporate into instruction than more traditional materials. My group has created and tested over 60 such simulations and made them available for free (www.phet.colorado.edu). We have explored their use in lecture and homework problems and as replacements for, or enhancements of, laboratories.
The “circuit construction kit” is a typical example of a simulation. It allows one to build arbitrary circuits involving realistic-looking resistors, light bulbs (which light up), wires, batteries, and switches and get a correct rendition of voltages and currents. There are realistic voltmeters and ammeters to measure circuit parameters. The simulation also shows cartoonlike electrons moving around the circuit in appropriate paths, with velocities proportional to current. We’ve found this simulation to be a dramatic help to students in understanding the basic concepts of electric current and voltage, when substituted for an equivalent lab with real components.
Circuit Construction Kit. Courtesy: Physics Education Technology Project, University of Colorado
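The physics such a simulation renders is straightforward to sketch. The following is a minimal illustration, not the PhET code itself: assuming a simple series circuit of a battery and resistors, it uses Ohm’s law to compute the loop current and the voltage drop across each component — the same quantities the simulation’s meters display. The function name and values are illustrative only.

```python
def solve_series_circuit(battery_volts, resistances_ohms):
    """Return the loop current (amps) and the voltage drop across
    each resistor (volts) for a single-loop series circuit."""
    total_r = sum(resistances_ohms)
    if total_r == 0:
        raise ValueError("short circuit: total resistance is zero")
    current = battery_volts / total_r                 # Ohm's law: I = V / R
    drops = [current * r for r in resistances_ohms]   # V_i = I * R_i
    return current, drops

# Example: a 9 V battery driving a 10-ohm bulb and a 20-ohm resistor in series.
current, drops = solve_series_circuit(9.0, [10.0, 20.0])
print(current)  # 0.3 (amps)
print(drops)    # [3.0, 6.0] (volts)
```

In the actual simulation, the electron animation speed would simply scale with this computed current, which is what lets students connect the abstract quantity to something visible.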
As with all good educational technology, the effectiveness of good simulations comes from the fact that their design is governed by research on how people learn, and the simulations are carefully tested to ensure they achieve the desired learning. They can enhance the ability of a good instructor to portray how experts think when they see a real-life situation and provide an environment in which a student can learn by observing and exploring.
The power of a simulation is that these explorations can be carefully constrained, and what the student sees can be suitably enhanced to facilitate the desired learning. Using these various effective pedagogical strategies, my group and many others have seen dramatic improvements in learning.
Comparison of Learning Results from Traditionally Taught Courses and Courses Using Research-Based Pedagogy
We now have good data showing that traditional approaches to teaching science are not successful for a large proportion of our students, and we have a few research-based approaches that achieve much better learning. The scientific approach to science teaching works, but how do we make this the norm for every teacher in every classroom, rather than just a set of experimental projects? This has been my primary focus for the past several years.
A necessary condition for changing college education is changing the teaching of science at the major research universities, because they set the norms that pervade the education system regarding how science is taught and what it means to “learn” science. The science departments at these universities produce most of the college teachers who then go on to teach science to the majority of college students, including future school teachers. So we must start by changing the practices of those departments.
There are several major challenges to modifying how they educate their students. First, in universities there is generally no connection between the incentives in the system and student learning. A lot of people would say that this is because research universities and their faculty don’t care about teaching or student learning. I don’t think that’s true — many instructors care a great deal. The real problem is that we have almost no authentic assessments of what students actually learn, so it is impossible to broadly measure that learning and hence impossible to connect it to resources and incentives.
We do have student evaluations of instructors, but these are primarily popularity contests and not measures of learning. The second challenge is that while we know how to develop the necessary tools for assessing student learning in a practical, widespread way at the university level, carrying this out would require a significant investment.
Introducing effective research-based teaching in all college science courses—by, for instance, developing and testing pedagogically effective materials, supporting technology, and providing for faculty development—would also require resources. But the budget for R&D and the implementation of improved educational methods at most universities is essentially zero. More generally, there is not the political will on campus to take the steps required to bring about cultural change in organizations like science departments.
Our society faces both a demand for improved science education and exciting opportunities for meeting that demand. Taking a more scholarly approach to education—that is, utilizing research on how the brain learns, carrying out careful research on what students are learning, and adjusting our instructional practices accordingly—has great promise.
Research clearly shows the failures of traditional methods and the superiority of some new approaches for most students. However, it remains a challenge to bring into every college and university classroom these pedagogical approaches, along with a mindset that teaching should be pursued with the same rigorous standards of scholarship as scientific research.
Although I am reluctant to offer simple solutions for such a complex problem, perhaps the most effective first step will be to provide sufficient carrots and sticks to convince the faculty members within each department or program to come to a consensus as to their desired learning outcomes at each level (course, program, etc.) and to create rigorous means to measure the actual outcomes.
These learning outcomes cannot be vague generalities but rather should be the specific things they want students to be able to do that demonstrate the desired capabilities and mastery and hence can be measured in a relatively straightforward fashion. The methods and instruments for assessing the outcomes must meet certain objective standards of rigor and also be collectively agreed upon and used in a consistent manner, as is done in scientific research.
W. Adams et al. (2005), Proceedings of the 2004 Physics Education Research Conference, J. Marx, P. Heron, S. Franklin, eds., American Institute of Physics, Melville, NY, p. 45.
R. Hake (1998), American Journal of Physics 66, 64.
D. Hammer (1997), Cognition and Instruction 15, 485.
D. Hestenes, M. Wells, G. Swackhamer (1992), The Physics Teacher 30, 141.
Z. Hrepic, D. Zollman, N. Rebello, “Comparing students’ and experts’ understanding of the content of a lecture,” to be published in Journal of Science Education and Technology. A preprint is available at http://web.phys.ksu.edu/papers/2006/Hrepic_comparing.pdf
E. Mazur (1997), Peer Instruction: A User’s Manual, Prentice Hall, Upper Saddle River, NJ.
G. Novak, E. Patterson, A. Gavrin, and W. Christian (1999), Just-in-Time Teaching: Blending Active Learning with Web Technology, Prentice Hall, Upper Saddle River, NJ.
K. Perkins et al. (2005), Proceedings of the 2004 Physics Education Research Conference, J. Marx, P. Heron, S. Franklin, eds., American Institute of Physics, Melville, NY, p. 61.
E. Redish (2003), Teaching Physics with the Physics Suite, Wiley, Hoboken, NJ.
Originally presented in Change magazine, September/October 2007.