In the world of higher education, there is a big push toward student-centered learning and teaching. Lectures are way out of style. In are small-group activities, inquiry-based projects, flipped classrooms and self-determined curricula. A growing number of institutions are flirting with online learning. Some have online platforms that can be accessed if inclement weather prevents faculty from getting to campus for class. Others are encouraging faculty to adopt at least some online components in their in-person classes, like online discussions outside of class. It’s all about putting the student and their interests at the center of your course plan. It sometimes also seems to be all about making things more convenient or more enticing for our students. Appealing to the current generation of online-savvy, text-messaging, Googling students is the imperative of the day.
So, how’s it going? Are we producing hordes of lifelong learners who are better able to obtain good jobs or pursue fulfilling careers upon graduation? Are we finding that our students like learning better, or that they are more engaged with their own learning? How do we know if our new strategies are effective?
And, perhaps more to the point: How do we know if our innovative ideas and new practices are better than the centuries-old practices of lecture and questions from the lectern?
Do our students prefer the new student-centered approaches? How do we know? Have we asked them?
Okay, that’s a lot of questions. And, of course, I do not have the answers to most of them.
Here are some standard ways for measuring teaching effectiveness.
1. Course evaluations
Most institutions require faculty to administer some form of course evaluation in order to gather data about student satisfaction with a course and with a professor’s teaching of that course. These evaluations are often an important component in determining the quality of teaching. College administrators reason that good teachers will provide courses with which students are well satisfied. Satisfied students will rate both the course and the teacher highly, right? But, does that tell us anything about whether our new innovations are an improvement over archaic methods like lecturing? I’d say “No” because the particular students in your course that semester only experienced what you provided. They have no basis for comparison with what you did in the past. The information you get out of course evaluations, then, is just whether the students liked that particular class and your particular teaching of it that semester.
You can compare your evaluations in your new-and-improved course with those you’ve received in previous renditions of that course. But what does that tell you? That you are a better teacher now (or worse)? That your new approaches are helping (or hindering) your students’ learning? Or that they are more (or less) satisfied with your approaches (compared with other courses they may have taken in their lives)? These scores probably tell you nothing about whether these students learned better than previous students who took the course under your older techniques. And I don’t believe there’s any evidence that the satisfaction captured by a course evaluation questionnaire is correlated with actual learning or mastery.
2. Grades and other end-of-course evaluations of student achievement
We tend to believe that students earning high grades have learned a lot in our courses, right? Does this mean that, if our new strategies for teaching are successful, we should see our students earning higher grades than before, or more students earning high grades than before? If you think about it, the best teaching and best learning strategies ought to produce more students who have fully mastered the material… so 100% of our students should achieve the highest possible marks (all A’s, all 100s, etc.). But doesn’t that make you cringe? If all your students are getting A’s, folks will get suspicious. Is your course hard enough? Are you grading rigorously enough? There must be something wrong if all your students achieve the goals you set for your course. (It can’t be that you are using highly effective strategies, can it?) At many institutions there are calls to cap the number of high grades to deal with grade inflation. It seems we (higher education) want our students to learn and master everything, but we don’t want them all to earn the highest marks. Is it possible that grade inflation is actually evidence that today’s teaching methods have improved student achievement? What do grades, especially final grades, tell us?
There’s a tricky link between student satisfaction and grades. This issue is truly a powder-keg in higher education, with study upon study suggesting no link between student evaluations and grades, and study upon study suggesting that student expectations of grades do influence their evaluations of a course and its professor. It’s definitely a dense thicket trying to separate satisfaction, achievement and teaching effectiveness.
3. Rubrics, standardized tests and learning outcomes
In this blog, I’ve talked about how wonderful grading rubrics are. You can develop a really detailed rubric that lays out your expectations and goals for an assignment. I recommend being completely transparent with your students by handing out this rubric in advance of the assignment. Students who care about their work will follow the rubric closely and will be more likely to achieve the goals of the assignment. They will have a higher likelihood of earning a high grade, indeed, of mastering the material or goals. If students achieve all the goals of every assignment and so demonstrate they have mastered the course, that is strong evidence that your teaching strategies and approaches have enabled that mastery, right? So, it seems to me, we WANT to see a higher percentage of high grades. It seems to me that grades and teaching prowess should correlate. But, of course, if teaching evaluations determine salary or promotion AND student satisfaction is influenced by student expectations of high grades… then you can see how grades quickly become a tricky tool for measuring teaching effectiveness (or even, perhaps, student mastery).
Standardized tests are thought to measure learning or achievement. These tests, though, are best for gauging student memory of content; it’s harder to measure things like critical thinking, analysis or research skills. Of course, there are tests that purport to assess these things, but they are not often administered after each course. Instead, tests like the GRE or MCAT are summative in nature, evaluating content achievement and retention across a field of study rather than in any one particular course. They are also taken by only a subset of the students in your courses (possibly only the cream of the crop). Should we consider standardized tests for particular courses, too? Some faculty think of final exams as a sort of achievement/mastery test. To really tell whether our students have learned as a result of our teaching strategies, though, we probably need a before/after type of exam: the improvement in scores by the end of the semester can be taken as evidence of learning. We can report these score differences as evidence of learning outcomes. I’m not sure these scores tell us much about the effectiveness of particular teaching strategies. I wonder, though: if our teaching methods are better for student learning, shouldn’t students perform better on final exams or other learning outcomes assessments? If so, shouldn’t they receive better grades?
Again, we get the heebie-jeebies if we start using the percentage of A’s as a measure of teaching effectiveness.
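For what it’s worth, one common way to summarize a before/after exam comparison, not named above but standard in education research, is the “normalized gain”: the fraction of the possible improvement each student actually achieved, which keeps a student who started at 40 from being compared unfairly to one who started at 90. A minimal sketch with made-up scores:

```python
# Hypothetical sketch: summarizing pre/post exam improvement with the
# normalized gain, g = (post - pre) / (max - pre). Scores are invented
# for illustration; this is one possible metric, not the post's method.

def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Fraction of the available improvement realized between pre- and post-test."""
    if pre >= max_score:  # student started at the ceiling; no room to improve
        return 0.0
    return (post - pre) / (max_score - pre)

# Invented pre- and post-semester exam scores for four students.
pre_scores = [40, 55, 70, 60]
post_scores = [70, 85, 88, 75]

gains = [normalized_gain(p, q) for p, q in zip(pre_scores, post_scores)]
class_gain = sum(gains) / len(gains)
print(f"average normalized gain: {class_gain:.2f}")  # ~0.54 for these scores
```

A class-average gain like this can be compared across semesters, which gets closer to the before/after question than raw grades do, though it still cannot say which teaching strategy caused the difference.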
4. Direct observation of teaching or student interviews (alumni, senior exit surveys, etc)
Some institutions gather this kind of evidence of effective teaching primarily for those faculty members coming up for tenure. Virtually no institutions, departments or professors use these methods to explore whether new teaching/learning strategies are an improvement over old ones. A few studies that include exit surveys of new teaching approaches have been published in the pedagogical literature.
Most of us try some new things and hope we can tell whether they work better than what we did in the past. How do we know our new approach is better? Well, I look at the quality of my students’ work. I also watch my students for signs of engagement in my course. Do they come to class prepared? Do they participate in discussion in a meaningful way? Do they talk about the course material outside of class? [And if more of my students are demonstrating engagement, preparation and higher-quality participation… they also tend to earn higher grades!] If I am trying something new, I tell my students explicitly and then ask them to provide feedback, in the form of a mid-semester evaluation or one at the end of the semester. This information helps me determine whether a strategy is an improvement, but, quite honestly, these students never experienced the earlier versions of my course, so they are not in a position to judge whether something is an improvement.
So, evaluating whether one teaching strategy is better than another is far from an exact science. It’s more of a quagmire.