Student Ratings: Myths vs. Research Evidence
This article was originally published in the Fall 2003 issue of the CFT’s newsletter, Teaching Forum.
Michael Theall, Ph.D.
The following article is reprinted with permission of the author and of Focus on Faculty (Fall 2002), a publication of the Brigham Young University Faculty Center, ed. D. Lynn Sorenson.
Michael Theall has twenty-six years of experience as a faculty member and as a professional in instructional design, development, and evaluation. He has founded faculty centers for teaching, learning, and evaluation at three universities: the University of Illinois, the University of Alabama, and Youngstown State University (OH). Theall and colleague Jennifer Franklin recently received a career achievement award from the American Educational Research Association (AERA). They are authors of “The Student Ratings Debate,” a monograph for New Directions for Institutional Research (2001), among numerous other research publications.
Student ratings of instruction are hotly debated on many college campuses. Unfortunately, these debates are often uninformed by the extensive research on this topic. Marsh’s often-cited review of the research on student ratings shows that student ratings data are: a) multidimensional; b) reliable and stable; c) primarily a function of the instructor who teaches the course; d) relatively valid against a variety of indicators of effective teaching; e) relatively unaffected by a variety of variables hypothesized as potential biases; and f) seen to be useful by faculty, students, and administrators. [1]
The researchers who have synthesized all the major studies of ratings have reached the same conclusions as Marsh. But even when the data are technically rigorous, one of the major problems is day-to-day practice: student ratings are often misinterpreted, misused, and not accompanied by other information that allows users to make sound decisions. As a result, there is a great deal of suspicion, anxiety, and even hostility toward ratings. Several questions are commonly raised with respect to student ratings. Current research provides answers to many of these questions.
- Are students qualified to rate their instructors and the instruction they receive?
- Are ratings based solely on popularity?
- Are ratings related to learning?
- Are ratings affected by situational variables?
- Do students rate teachers on the basis of expected (or received) grades?
- Can students make accurate judgments while still involved in their schooling?
- Guidelines for Good Evaluation Practice
- Notes & Bibliography
Are students qualified to rate their instructors and the instruction they receive?
Generally speaking, the answer is “yes.” Students can report the frequencies of teacher behaviors, the amount of work required, how much they feel they have learned, and the difficulty of the material. They can answer questions about the quality of lectures, the value of readings and assignments, the clarity of the instructor’s explanations, the instructor’s availability and helpfulness, and many other aspects of the teaching and learning process. No one else is as qualified to report what transpired during the semester, simply because no one else is there for the entire semester. Students are certainly qualified to express their satisfaction or dissatisfaction with the experience. They have a right to express their opinions in any case, and no one else can report the extent to which the experience was useful, productive, informative, satisfying, or worthwhile. While opinions on these matters are not direct measures of the performance of the teacher or the content learned, they are legitimate indicators of student satisfaction; there is a substantial research base linking this satisfaction to effective teaching and learning.
But students are not necessarily qualified to report on all issues. For example, beginning students cannot accurately rate the instructor’s knowledge of the subject. A colleague’s rating is more appropriate for this purpose. Likewise, peers are better qualified to judge content currency, curricular match, course design, or assessment methods. Both students and peers are in unique positions to provide enlightening perspectives. For effective evaluation, remember to use multiple sources of data and ask questions that respondents can legitimately answer.
Are ratings based solely on popularity?
There is no basis for this argument and no research to substantiate it. When this topic arises, the term “popular” is never defined. Rather, it is left to imply that learning should somehow be unpleasant, and the “popularity” statement is usually accompanied by an anecdote suggesting that “the best teacher I ever had was the one I hated most.” The assumption that popularity somehow means a lack of substance, knowledge, or challenge is entirely without merit. In fact, several studies show students learn more in courses in which teachers demonstrate interest in and concern for the students and their learning; of course, these teachers also receive higher ratings.
Are ratings related to learning?
The most acceptable criterion for good teaching is student learning. There are consistently high correlations between student ratings of the “amount learned” in a course and students’ overall ratings of the teacher and the course. Even more telling are the studies in multi-section courses that employed a common final exam. [2] In general, student ratings were highest for instructors whose students performed best on the exams. These studies are the strongest evidence for the validity of student ratings because they connect ratings with learning.
Are ratings affected by situational variables?
The research says that ratings are robust and not greatly affected by situational variables. But we must keep in mind that generalizations are not absolute statements; there will always be some variations. For example, we know that required, large-enrollment, out-of-major courses in the physical sciences get lower average ratings than elective, upper-level, major courses in virtually all other disciplines. Does this mean that teaching quality varies? Not necessarily. What it does show is that effective teaching and learning may be harder to achieve under certain sets of conditions. There is a critical principle for evaluation practice embedded here: to be fair, comparisons of faculty teaching performance based on ratings should use sufficient amounts of data from similar situations. It would be grossly unfair to compare the ratings of an experienced professor teaching a graduate seminar of ten students to the one-time ratings of a new instructor teaching an entry-level, required course with an enrollment of 300.
Do students rate teachers on the basis of expected (or received) grades?
This is currently the most contentious question in ratings research. There is consistent evidence of a relationship between grades and ratings: a modest correlation of about .20. The multisection validity studies (mentioned in question 3) provide the most solid evidence that ratings reflect learning (a correlation of about .43). These findings lead to the conclusion reached by most researchers: there should be a relationship between ratings and grades because effective teaching leads to learning, which leads to student achievement and satisfaction. Ratings simply reflect this sequence.
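To put those two correlations in perspective, squaring a correlation coefficient gives the proportion of variance it accounts for (the coefficient of determination). Applying that standard statistical interpretation to the figures cited above, as a rough illustration:

$r_{\text{grades}} \approx .20 \;\Rightarrow\; r^2 \approx .04$ (grades account for roughly 4% of the variance in ratings)

$r_{\text{learning}} \approx .43 \;\Rightarrow\; r^2 \approx .18$ (measured learning accounts for roughly 18%)

On this reading, learning explains more than four times as much of the variation in ratings as grades do, which is consistent with the researchers’ conclusion that ratings primarily reflect learning rather than grades.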
Can students make accurate judgments while still involved in their schooling?
Some argue that students cannot discern real quality until years after leaving the classroom. There is no research proving this statement; however, several studies compare in-class ratings to ratings by the same students the next semester, the next year, immediately after graduation, and several years later. [3] All these studies report the same results: although students may realize later that a particular subject was more or less important than they thought, student opinions about teachers change very little over time. Teachers rated highly in class are rated highly later on, and those with poor ratings in class continue to get poor ratings later on. This question is connected to the larger technical matter of the overall reliability of ratings. The research indicates that ratings are very reliable. Whether reliability is measured within classes, across classes, over time, or in other ways, student ratings are remarkably consistent.
Guidelines for Good Evaluation Practice
In addition to emphasizing that student ratings are an important part of evaluation, Theall also suggests several rules for improving the entire teaching evaluation process.
- Establish the purposes of the evaluation and who the users will be
- Include stakeholders in decisions about evaluation process and policy
- Keep in mind a balance between individual and institutional needs
- Publicly present clear information about the evaluation criteria, process, and procedures
- Establish a legally defensible process, including a system for grievances
- Be sure to provide resources for improvement and support of teaching and teachers
- Build a coherent “system” for evaluation, rather than a piecemeal process
- Establish clear lines of responsibility/reporting for those who administer the system
- Invest in a superior evaluation system and evaluate it regularly
- Use, adapt, or develop instrumentation suited to institutional/individual needs
- Use multiple sources of information for evaluation decisions
- Collect data on ratings and validate the instrument(s) used
- Produce reports that can be easily and accurately understood
- Educate the users of rating results to avoid misuse and misinterpretation
- Keep formative evaluation confidential and separate from summative decision making
- In summative decisions, compare teachers on the basis of data from similar teaching situations
- Consider the appropriate use of evaluation data for assessment and other purposes
- Seek expert, outside assistance when necessary/appropriate
The bottom line is: good practice leads to good decisions.
Notes & Bibliography
Notes
- Marsh, H. W. “Students’ Evaluations of University Teaching: Research Findings, Methodological Issues, and Directions for Future Research.” International Journal of Educational Research, 1987, 11, 253–388.
- Cohen, P. A. “Student Ratings of Instruction and Student Achievement: A Meta-analysis of Multisection Validity Studies.” Review of Educational Research, 1981, 51, 281–309.
- Centra, J. A. Determining Faculty Effectiveness. San Francisco: Jossey-Bass, 1979; and Frey, P. W. “Validity of Student Instructional Ratings: Does Timing Matter?” Journal of Higher Education, 1976, 3, 327–336.
References and Bibliography
- Arreola, R. A. Developing a Comprehensive Faculty Evaluation System. 2nd ed. Bolton, MA: Anker Publishing Company, 2000.
- Braskamp, L. A., and J. C. Ory. Assessing Faculty Work. San Francisco: Jossey-Bass, 1994.
- Centra, J. A. Reflective Faculty Evaluation. San Francisco: Jossey-Bass, 1993.
- Knapper, C., and Cranton, P., eds. “Fresh Approaches to the Evaluation of Teaching.” New Directions for Teaching and Learning 88 (Winter 2001).
- Theall, M., P. A. Abrami, and L. Mets, eds. “The Student Ratings Debate: Are They Valid? How Can We Best Use Them?” New Directions for Institutional Research 109 (2001).
- Theall, M., and J. L. Franklin. “Student Ratings in the Context of Complex Evaluation Systems.” In M. Theall and J. Franklin, eds., “Student Ratings of Instruction: Issues for Improving Practice.” New Directions for Teaching and Learning 43 (1990).