The release last month of the US Department of Education’s Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies provides another salvo in the simplistic showdown between online and face-to-face learning. As expected, online learning (both at a distance and in the classroom) continues to outperform unmediated education.
First, let me repeat the standard whine accompanying every educational meta-analysis: there are far too few studies; many of the conditions between control and experimental groups are not held (and perhaps cannot be held) constant; and, as always, the term ‘online learning’ covers a very wide range of learning activities, modes of learning, types of teacher intervention and divergent focus on collaborative, cooperative or individual work, plus many other variables that have long been associated with changes in learning outcomes. So when ‘online’ learning is conceived of as the independent variable, it really means: this is the variable we are going to focus on; we make a vague attempt to control those we can and ignore the rest! This occurs even though we know there are a lot of potentially confounding variables in play. However, this variability applies to the complex face-to-face (F2F) classroom environment as much as to the online one. To be fair to the researchers on this study, attempts were made to tag studies for differences in ‘practice variables’ (those under the control of the teacher/designer) and ‘conditions’ (relatively unchanging environmental differences between experimental and control groups). However, again messiness intrudes; as noted by the authors, “Many of the reviewed studies, for example, did not indicate (a) whether or not the online instructor had received training in the method of instruction, (b) rates of attrition from the contrasting conditions and (c) contamination between conditions.” Retention and completion rates are a concern in all types of distance education, so failing to document the independent variables’ association with successful completion mars many studies.
As an example of the confusion of terms, methods and technologies, the study has a brief anecdotal section on “individualized instruction”. I went right to that section hoping it talked about changing the condition of online study from the usual cohort model to the older self-paced, independent study mode. Unfortunately, what the authors meant by the title was interventions involving more machine-interactive learning: tailored responses offering additional help for incorrect answers (positive effect) and individualized adaptation presenting different environments to different students (positive effect again). I appreciated the study’s review of the literature, which notes that “available research evidence suggests that promoting self-reflection, self-regulation and self-monitoring leads to more positive online learning outcomes. Attempts to guide the online interactions of groups of learners were less successful than the use of mechanisms to prompt reflection and self-assessment on the part of individual learners”. These results suggest the value that arises from effective use of blogs, self-awareness exercises (profile comparisons?) and metacognition training and experience.
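To make the “tailored responses” idea concrete, here is a minimal sketch in Python. The question, answer options and remedial feedback below are invented for illustration; they are not drawn from the study or from any particular courseware:

```python
# Sketch of tailored feedback: each wrong answer gets targeted
# remediation rather than a generic "incorrect". All content here
# is hypothetical, invented for demonstration.

ANSWERS = {
    "a": ("case study", "Case studies describe one context; nothing is controlled."),
    "b": ("randomized controlled trial", None),  # the correct choice
    "c": ("survey", "Surveys measure variables but do not control them."),
}
CORRECT = "b"

def respond(choice: str) -> str:
    """Return tailored feedback for the learner's choice."""
    if choice == CORRECT:
        return "Correct!"
    label, remediation = ANSWERS[choice]
    return f"Not quite: you chose '{label}'. {remediation}"

print(respond("c"))
```

The design choice the study credits with a positive effect is exactly this branching on the specific error, so the learner receives help matched to the misconception behind the wrong answer.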
I also found two older references to studies showing that when student-student interaction was increased through cooperative learning strategies, the effect of increased teacher moderation (adding teacher presence to a rich student-student collaboration context) was not related directly to the learning outcomes (see Bernard, R. M., & Lundgren-Cayrol, K. (2001). Computer Conferencing: An Environment for Collaborative Project-Based Learning in Distance Education. Educational Research & Evaluation, 7(2/3)). This seems to confirm my earlier grand “interaction equivalency theory”.
Despite these limitations (the mixing of apples, oranges and bananas, and the struggle to find coherence in the midst of a fruit salad), the mathematical manipulations involved in a meta-analysis of the 46 studies (51 effect sizes) produced some interesting results. Of most press value (and ammunition for skeptical colleagues) is this: “The overall finding of the meta-analysis is that classes with online learning (whether taught completely online or blended) on average produce stronger student learning outcomes than do classes with solely face-to-face instruction. The mean effect size for all 51 contrasts was +0.24, p < .001.” This effect size is not huge, but the significant difference shows that something is happening in these complex contexts that results in higher student learning outcomes. The study goes on to compare blended learning with face-to-face instruction (blended significantly higher) and completely online learning with face-to-face instruction (a smaller, but still significant, difference in favor of online).
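For readers unfamiliar with how a meta-analysis boils dozens of contrasts down to one number, here is a minimal sketch of the standard inverse-variance weighted mean. The effect sizes and standard errors below are invented for illustration; they are not the study’s 51 contrasts:

```python
# Fixed-effect meta-analysis aggregation, sketched with invented data.
# Each study contributes an effect size d and a standard error; more
# precise studies (smaller SE) get proportionally larger weight.

effect_sizes = [0.31, 0.18, 0.42, 0.05, 0.27]   # hypothetical d values
std_errors   = [0.10, 0.15, 0.20, 0.12, 0.08]   # hypothetical SEs

# Inverse-variance weights: w = 1 / SE^2
weights = [1 / se ** 2 for se in std_errors]

# Weighted mean effect size across the hypothetical studies
mean_effect = sum(w * d for w, d in zip(weights, effect_sizes)) / sum(weights)

print(round(mean_effect, 3))
```

The point of the weighting is that a small, noisy comparison cannot swamp a large, tight one, which is why a pooled mean like the study’s +0.24 is more informative than a simple vote count of which condition “won”.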
This type of study (under current minuscule research funding models) will never answer the really important questions regarding types and quantity of instructor, social and cognitive presence, nor tell us which types of learning activities are more effective in which contexts. However, we have been struggling for 30 years with perceptions that online learning and all other modes of distance education are less effective than classroom education at enhancing student learning outcomes. This and other recent research (see Bernard et al., 2004; Sitzmann, Kraiger, Stewart, & Wisher, 2006) shows that distance education is beginning to demonstrate consistent improvements over classroom learning (thus ending the ‘no significant difference’ debate? Not likely!).
This study shows that the affordances of networked technology enhance student learning whether at a distance or on campus (blended). That really is no surprise, but it should be useful ammunition when laggards and late adopters continue to press for ‘evidence that this stuff works’.
The study will also be useful as a defense of the most common type of network-enhanced education programming: blending online learning with face-to-face classroom activity. As expected, learning gains result when students have opportunities for both. But as Kirkpatrick’s four levels of evaluation and Phillips’ important fifth ROI addition tell us, if learning gain happens but at greater cost or time expense, such as by doubling the infrastructure or time commitment, the gains may not be cost effective. Rather, blended learning teachers have to juggle input costs to arrive at strategies for getting the best mix of asynchronous online, synchronous online, face-to-face, and individualized and cooperative learning activities.
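The cost-effectiveness point can be made concrete with Phillips’ standard Level 5 formula, ROI(%) = (net program benefits / program costs) × 100. The cost and benefit figures below are invented for illustration only; monetizing learning gains is itself a contested exercise:

```python
# Sketch of Phillips' Level 5 ROI calculation with invented figures.
# A blended course that doubles infrastructure can show learning gains
# yet still return a poor (or negative) ROI.

program_costs = 50_000.0      # hypothetical: design, platform, instructor time
program_benefits = 65_000.0   # hypothetical: monetized learning gains

net_benefits = program_benefits - program_costs
roi_percent = (net_benefits / program_costs) * 100

print(roi_percent)
```

Under these invented numbers the program returns 30 cents on each dollar spent; halve the benefits or double the delivery costs and the same measurable learning gain becomes a net loss, which is the juggling act blended learning teachers face.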