A recent article in the Chronicle of Higher Education confirms my own experience that most students enrolled in online courses do not complete standard course evaluation forms. Online teachers don't have the luxury of handing out evaluation forms on the last day of classes (naturally before marked papers are returned) and assigning a graduate student to stand over the captive audience, refusing to let anyone leave without completing the form. The result is return rates that, although never 100%, are typically well over 75% and good enough to provide a meaningful and representative sample of the population. Unfortunately, in online courses, identical questionnaires delivered as a component of a learning management system or emailed directly to students produce only a trickle of responses. This low response rate causes some professors and administrators to question not only the validity of the sample results, but also the value of the exercise itself.
Why does it matter? Course evaluations serve a number of useful functions. Most obviously, very poor student reviews result in a variety of formative remedial actions, including changes in the way the course is designed, evaluated or taught. Chairs and deans review the results and in some cases prescribe actions ranging from referrals to the teaching and learning centre, to accent-reduction treatment, mentoring, or observations by the Chair. The results are also often one component of decisions related to promotion and tenure.
The validity of course evaluations has been the subject of considerable research over many years. The literature is broad enough for both supporters and opponents to find at least some evidence to support their view, but overall syntheses are consistently positive about the value and objectivity of these evaluations (Hattie & Marsh, 1996). Although participant evaluation is only the first and least useful of Kirkpatrick's levels of training evaluation, it is by far the easiest and fastest to obtain, and it is correlated with retention, course achievement and use in practice.
Anderson, Cain, & Bird (2005) describe a number of potential advantages of online course evaluations, including that the online format:
(1) provides rapid feedback;
(2) is less expensive to administer;
(3) requires less class time;
(4) is less vulnerable to professorial influence;
(5) allows students as much time as they wish to complete the form; and
(6) allows students multiple opportunities to evaluate faculty members.
Further, a meta-analysis (Murray, 1997) found that student evaluations, when accompanied by appropriate remedial activities, did lead to improvements in teaching quality and outcomes. All of these advantages, however, are of little importance if the evaluations are not completed by students. Note also that few of these reasons offer any tangible benefit to students.
There are many possible reasons for non-completion of evaluations in online courses. It may be an indicator of a lack of involvement in the course, or of a lack of integration with the institution or the individual instructor. It may also be an indication that students in online courses are more empowered to exercise their own independence and are rejecting a system that doesn't provide enough benefit to justify the time and effort needed to participate.
Some institutions and individual instructors have introduced a variety of incentives and structures designed to improve response rates. Some take a cue from survey research indicating that an immediate reward increases response rates (mail surveys don't include that shiny quarter in their mailouts for nothing); delayed or potential rewards, such as "enter your response in a draw for a free iPod", have also been shown to have an effect. Unfortunately, these systems often require disclosure of personal identity (how can I get my iPod if I don't provide my name?), thus destroying the anonymity required of student evaluations. Other schools have used mechanisms within their LMS to nag students, or even to refuse release of marks or further progress in the course or program until evaluation forms are submitted. Such coercive actions raise ethical issues about consent, as well as identity and privacy implications.
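As an aside, the reward need not be fatal to anonymity: if draw entries are collected and stored completely separately from the survey responses, the two can never be joined. A minimal sketch of that idea, with purely hypothetical names and structures:

```python
# Hypothetical sketch: collecting survey responses and prize-draw entries
# separately, so a reward can be offered without destroying anonymity.
# All names and structures here are illustrative, not any real system.
import secrets

responses = []     # anonymous answers only; no identifiers stored
draw_entries = []  # contact details only; never linked to a response

def submit(answers, contact=None):
    responses.append(answers)         # stored without any identifier
    if contact:                       # draw entry is optional and kept
        draw_entries.append(contact)  # in a completely separate list

submit({"q1": 4, "q2": "more feedback on assignments, please"},
       contact="student@example.edu")
submit({"q1": 2, "q2": "lectures were too long"})  # no draw entry

print("Draw winner:", secrets.choice(draw_entries))
```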
In a very interesting control-group study with 400 students, Lovric (2006) had a return rate of 92.5% on the paper-and-pencil survey, compared to 0.045% with the online students. However, when the students were bribed with the offer of revealing questions on an upcoming test if they completed the evaluation, the return rate rose to 52%! Dommeyer, Baum, Chapman & Hanna (2003) showed that a variety of incentives raised online completion rates from 43% to the paper-based average of 75%.
Perhaps a better solution than bribes and pressure is to use social software systems such as RateMyProfessor to obtain better returns. RateMyProfessor claims to have data on "Over 6,000 schools, 1 million professors and 6 million reviews". There is little way to assess the participation rate on RateMyProfessor, and I suspect that online courses are not represented by many reviews. However, it is important to note the features that attract students to participate on these sites.
First and most important is transparency. Students get to see the reviews posted by other students and their aggregated results. They also get to compare these ratings with those of other teachers in related programs or even other course sections, and the results are immediate. Second, the items are created to attract the interest of students. Most academic chairs don't really care how "hot" their faculty rate, but supposedly students do, though the definition of hotness is elusive, at least to this 50-plus writer! Recently RateMyProfessor has added a video site, Professors Strike Back, with asynchronous discussion forums that follow. This allows opportunity for discussion and the airing of disagreements, and offers a level playing field where students and professors can debate individual ratings, review procedures or any other issue of common concern. Perhaps not surprisingly, few professors choose to engage on this site, but the opportunity is available.
This transparency creates a learning opportunity for all respondents, creates opportunities for dialogue, and provides a low-cost incentive and social reward for students to participate. The system allows anonymity, but the discussion permits disclosure when participants are comfortable doing so. RateMyProfessor runs on an ad-revenue model and so provides a very low-cost service to everyone.
External systems do not allow anyone to determine the return rate (how many students are enrolled in each section of a course?), thus making any results susceptible to bias (both negative and positive). A better solution would be for schools to operate their own public systems. Hopefully these systems would evolve standardized questions, allowing for cross-institutional aggregation and comparison. Such systems would also need to be monitored so as not to provide a platform for slander, obscenity and the like, but this is a challenge met in various ways by all public social systems.
No doubt such a sanctioned system would have challenges, and it might never pass academic senates dominated by tenured faculty with little to gain and potential for loss of prestige and reputation. Yet this is exactly the sort of transparency that education systems need. Students aren't just customers of the education system or of the schools to which they pay tuition and in which they invest their time; rather, they are very important partners. Teachers are much like medical doctors, who can advise on healthy learning and offer treatments for ailments, but the responsibility for healthy living and following treatment regimes is vested in the patient. Likewise, students are responsible for their learning, but teachers are responsible for creating meaningful paths to that learning and motivating participation. Teaching is not the only function of a faculty member, and students are not the sole judges of quality, but their opinions are essential to evolving more effective and engaging education practice. Thus, we need to devise ways to solicit and reward their feedback, and the techniques and motivations incorporated in social sites may well be the key.
References
Anderson, H., Cain, J., & Bird, E. (2005). Online student course evaluations: Review of literature and a pilot study. American Journal of Pharmaceutical Education, 69(1).
Dommeyer, C. J., Baum, P., Chapman, K. S., & Hanna, R. W. (2003). An experimental investigation of student response rates to faculty evaluations: The effect of the online method and online treatments. Presented at the Decision Sciences Institute, November 22-25, 2003, Washington, DC. Available at: http://www.sbaer.uca.edu/research/dsi/2003/procs/451-7916.pdf
Hattie, J., & Marsh, H. W. (1996). The relationship between research and teaching: A meta-analysis. Review of Educational Research, 66(4), 507-542.
Lovric, M. (2006). Traditional and web-based course evaluations: Comparison of their response rates and efficiency. University of Belgrade.
Murray, H. (1997). Does evaluation of teaching lead to improvement of teaching? International Journal for Academic Development, 2(1), 8-23.
An interesting article, Terry, thank you.
The low response rate for online courses is one reason why UBC (Canada) and institutions in Australia (QUT and UoW) are investigating alternative evaluation methodologies. This has included the investigation of ICT data derived from student online activity. We (UBC, QUT and UoW) have been working on the application of student data derived from interactions with the institutional LMS, e.g., Blackboard or Vista, to inform and evaluate teaching practice. The research to date has established some interesting predictors of student academic performance: discussion posts, student networks, time online, frequency of sessions, etc. all provide an early indication of future student performance. Thus, this data can be readily and easily integrated to identify early signs of "at-risk" students requiring additional learning support, or to evaluate the impact of learning activities.

Additionally, the visualisation of this data assists teaching staff in making proactive and informed decisions regarding the redesign of learning activities, support interventions and the overall curriculum. (The graphical representation of student and staff networks as sociograms has been very powerful.) The aggregation of the LMS data can also inform senior management of adoption rates of various tools, and of differences in faculty, school and course teaching approaches. This data can then be correlated with other institutional standardised evaluations, e.g., NSSE, student satisfaction or, in Australia, the CEQ, to provide a rich picture of overall teaching performance from an institutional perspective.
This research and prior work by Cath Finnegan, John Campbell and others has aimed to develop a suite of data-mining tools and resources that can assist teaching and management staff in benchmarking performance and evaluating the impact of implemented learning and teaching activities in a just-in-time manner. Perhaps for online courses the standardised survey may no longer be required? Or at least, it may become less dominant among the types of tools used to evaluate our teaching practice.
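To give a flavour of the general idea, here is a toy sketch of the kind of early-warning model described above. The feature names, sample values and the logistic-regression choice are illustrative assumptions only, not our actual tools:

```python
# Toy sketch: flagging "at-risk" students from LMS activity data.
# Feature names, values and the model choice are illustrative only.
from sklearn.linear_model import LogisticRegression

# Per-student activity pulled from LMS logs for a past cohort:
# [discussion_posts, minutes_online, login_sessions]
activity = [
    [12, 540, 30],
    [0,  45,  3],
    [8,  400, 22],
    [1,  60,  5],
    [15, 700, 41],
    [2,  90,  6],
]
passed = [1, 0, 1, 0, 1, 0]  # 1 = passed the course, 0 = failed/withdrew

model = LogisticRegression().fit(activity, passed)

# Score current students early in the term; a low predicted probability
# of passing triggers a proactive learning-support intervention.
current = [[1, 70, 4], [10, 500, 25]]
for student, p in zip(current, model.predict_proba(current)[:, 1]):
    flag = "AT RISK" if p < 0.5 else "on track"
    print(f"activity={student}  p(pass)={p:.2f}  -> {flag}")
```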
I am more than happy to forward some resources regarding this work if you are interested.
Thanks
Shane
Hi Terry,
Interesting post that feels current to these times of greater openness and disclosure, and perhaps greater vulnerability of professors and instructors, as on "RateMyProfessor".
I think cross-institutional comparison via more standardized questions could be difficult but also worthwhile.
As a student at AU, I have always filled out the questionnaire, but I often thought: if people are busy, what is the incentive, especially if it is an "altruistic" act and the student does not really know if it will matter? Significant acts are more likely to feel worth doing.
Jo Ann
I mentioned this post to my partner, who works at a business school that specialises in distance education. She said they also suffer from low online feedback return rates. Here are a few of the reasons given:
1) No true anonymity: they really felt that the survey could be traced back to them
2) Too many questions
3) Not the right questions: they were too quantitative rather than qualitative
The students suggested that a better approach might be to set up a sample group and monitor it throughout the duration of the course.
Cheers.
Any time you have to coerce someone to participate (whether through bribery or waterboarding), it's not worth the effort and the information is suspect.
Students shouldn't rate teachers anyway; most students don't have a well-developed sense of their own selves, so how are they fit to judge the performance of a teacher or even a course?
I'm all for getting information surreptitiously: embed questions into the course to elicit the information or responses you need by framing the request as an opportunity to respond.
People love to give their opinion, as long as they feel it is not a requirement but an opportunity to express themselves. Here's a platform: give me your opinion . . .