A recent article in the Chronicle of Higher Education confirms my own experience that most students enrolled in online courses do not complete standard course evaluation forms. Online teachers don't have the luxury of handing out evaluation forms on the last day of classes (naturally before marked papers are returned) and assigning a graduate student to stand over the captive audience, refusing to allow anyone to leave without completing the form. The result is return rates that, although never 100%, are typically well over 75%, good enough to yield a meaningful and representative sample of the population. Unfortunately, in online courses, identical questionnaires delivered as a component of a learning management system or emailed directly to students produce only a trickle of responses. This low response rate leads some professors and administrators to question not only the validity of the sample results, but also the value of the exercise itself.
Why does it matter? Course evaluations serve a number of useful functions. Most obviously, very poor student reviews result in a variety of formative remedial actions, including changes in the way the course is designed, evaluated or taught. Chairs and deans review the results and in some cases prescribe actions ranging from referrals to the teaching and learning centre to accent-reduction training, mentoring, and observations by the Chair. The results are also often one component of decisions related to promotion and tenure.
The validity of course evaluations has been the subject of considerable research over many years. The literature is broad enough for both supporters and opponents to find at least some evidence to support their view, but overall syntheses are consistently positive about the value and objectivity of these evaluations (Hattie & Marsh, 1996). Although participant evaluation is only the first and least useful of Kirkpatrick's levels of training evaluation, it is by far the easiest and fastest to obtain and is correlated with retention, course achievement and use in practice.
Anderson, Cain, & Bird (2005) describe a number of potential advantages of online course evaluation, including that it:
(1) provides rapid feedback;
(2) is less expensive to administer;
(3) requires less class time;
(4) is less vulnerable to professorial influence;
(5) allows students as much time as they wish to complete the form; and
(6) allows students multiple opportunities to evaluate faculty members.
Further, a meta-analysis (Murray, 1997) found that student evaluations, when accompanied by appropriate remedial activities, did lead to improvements in teaching quality and outcomes. All of these advantages, however, are of little importance if students do not complete the evaluations. Note also that few of these advantages offer any tangible benefit to students.
There are many possible reasons for non-completion of evaluations in online courses. It may be an indicator of a lack of involvement in the course, or of integration with the institution or the individual instructor. It may also be an indication that students in online courses are more empowered to exercise their independence, rejecting a system that doesn't provide enough benefit to justify the time and effort necessary to participate.
Some institutions and individual instructors have introduced a variety of incentives and structures designed to improve response rates. Taking a cue from survey research indicating that immediate rewards increase response rates (mail surveys don't include that shiny quarter in their mailouts for nothing), some offer delayed or potential rewards, such as entry in a draw for a free iPod. Unfortunately, these systems often require disclosure of personal identity (how can I get my iPod if I don't provide my name?), thus destroying the anonymity required of student evaluations. Other schools have used mechanisms within their LMSs to nag students, or even to withhold release of marks or further progress in the course or program until evaluation forms are submitted. Such coercive actions raise ethical issues about consent, as well as identity and privacy concerns.
In a very interesting control-group study with 400 students, Lovric (2006) had a return rate of 92.5% on the paper-and-pencil survey, as compared to 0.045% with the online students. However, when students were bribed with the offer of revealing questions on an upcoming test if they completed the evaluation, the return rate rose to 52%! Dommeyer, Baum, Chapman & Hanna (2003) showed that a variety of incentives raised online completion rates from 43% to the paper-based average of 75%.
Perhaps a better solution than bribes and pressure is to use social software systems such as RateMyProfessor to obtain better returns. RateMyProfessor claims to have data on "Over 6,000 schools, 1 million professors and 6 million reviews". There is little way to assess the participation rate on RateMyProfessor, and I suspect that online courses are not represented with many reviews. However, it is important to note the features that attract students to participate on these sites.
First and most important is transparency. Students get to see the reviews posted by other students and their aggregated results. They also get to compare these ratings with those of other teachers in related programs or even course sections. The results are immediate. Second, the items are created to attract the interest of students. Most academic chairs don't really care how "hot" their faculty rate, but supposedly students do, though the definition of hotness is elusive, at least to this 50-plus writer! Recently RateMyProfessor has added a video site, Professors Strike Back, with asynchronous discussion forums. This allows opportunity for discussion and the airing of disagreements, and offers a level playing field where students and professors can debate individual ratings, review procedures or any other issue of common concern. Perhaps not surprisingly, few professors choose to engage with the site, but the opportunity is available.
This transparency creates a learning opportunity for all respondents, creates opportunity for dialogue, and provides a low-cost incentive and social reward for students to participate. The system allows anonymity, but the discussion allows disclosure when participants are comfortable doing so. RateMyProfessor runs on an ad-revenue model and so provides a very low-cost service to everyone.
External systems do not allow anyone to determine the return rate (how many students are enrolled in each section of a course?), thus making any results susceptible to bias (both negative and positive response). A better solution would be for schools to operate their own public systems. Hopefully these systems would evolve standardized questions, allowing for cross-institution aggregation and comparison. Such systems would also need to be monitored so as not to provide a platform for slander, obscenity, etc. But this is a challenge met in various ways by all public social systems.
No doubt such a sanctioned system would face challenges and perhaps never pass academic senates dominated by tenured faculty with little to gain and the potential for loss of prestige and reputation. Yet this is exactly the sort of transparency that education systems need. Students aren't just customers of the education system or the schools to which they pay tuition and devote their time; rather, they are very important partners. Teachers are much like medical doctors, who can advise on healthy living and offer treatments for ailments, but the responsibility for healthy living and following treatment regimes is vested in the patient. Likewise, students are responsible for their learning, but teachers are responsible for creating meaningful paths to that learning and motivating participation. Teaching is not the only function of a faculty member, and students are not the sole judges of quality, but their opinions are essential to evolving more effective and engaging educational practice. Thus, we need to devise ways to solicit and reward their feedback, and the techniques and motivations incorporated in social sites may well be the key.
Anderson, H., Cain, J., & Bird, E. (2005). Online student course evaluations: Review of literature and a pilot study. American Journal of Pharmaceutical Education, 69(1).
Dommeyer, C. J., Baum, P., Chapman, K. S., & Hanna, R. W. (2003). An experimental investigation of student response rates to faculty evaluations: The effect of the online method and online treatments. Presented at the Decision Sciences Institute, Nov. 22-25, 2003, Washington, DC. Available at: http://www.sbaer.uca.edu/research/dsi/2003/procs/451-7916.pdf
Hattie, J., & Marsh, H. W. (1996). The relationship between research and teaching: A meta-analysis. Review of Educational Research, 66(4), 507-542.
Lovric, M. (2006). Traditional and web-based course evaluations: Comparison of their response rates and efficiency. University of Belgrade.
Murray, H. (1997). Does evaluation of teaching lead to improvement of teaching? International Journal for Academic Development, 2(1), 8-23.