SHARE YOUR PERSPECTIVE
San Francisco Bay Area (from the top of Mt. Diablo)
Where do you learn best?
Who would have thought that such a question could be asked of a phenomenon that has only just begun? Yet the winds of hype, fanned by the popular press, seem to be blowing MOOCs down a path of inevitable “disruption” faster than anything we have seen in a long time. But here’s the thing: both the new MOOC providers (including the commercial start-ups and the prestigious universities entering the online space) and the popular press seem to be defining MOOCs as equivalent to online learning writ large. Not only do the Ivy League schools now developing MOOCs imply that they are inventing online learning, but the popular media is buying it all without really digging deeper.
Those of us who have been quietly (and successfully) building, delivering, and evaluating online learning for almost two decades seem to have been overlooked amid the MOOC sound and fury. But we as a community know a few things about online learning and what makes it work, and the main thing we know is this: a MOOC does not define online learning. It is simply a particular type of online learning, a subset of the larger whole. Online learning encompasses many different delivery strategies, and the MOOC is but one flower in the garden.
Yet, by all standards the MOOC phenomenon does qualify as a disruptive technology, extending the boundaries of the classroom concept far beyond what resulted from online learning when it was itself a disruptive technology. There is an adage that goes, “radical ideas threaten institutions but then become institutions that reject radical ideas.” MOOCs might even qualify as one of Nassim Taleb’s black swans: a completely unpredicted event characterized by its monumental impact and retrospective explanation. That is, we could have predicted the advent of the MOOC if only we had looked at the data (maybe even big data) a little more carefully.
The point is we didn’t and here they are.
Certainly, in the early days of online learning (asynchronous learning networks, or ALN), the modality faced the same intense scrutiny that MOOCs are undergoing at the moment. For ALN the standard seemed to be comparisons with face-to-face environments, resulting in the “no significant difference” phenomenon, a line of research that really didn’t go anywhere. From that early research we learned that applying old research paradigms to new approaches wasn’t very effective, but that by redesigning and transforming those paradigms we could develop next-generation evaluation models. Almost from the outset, however, online learning in the Sloan-C initiative had a firm foundation grounded in the pillars that framed it in terms of learning effectiveness, scale, access, faculty satisfaction, and student satisfaction.
In many respects the pillars gave online learning a clear evaluative definition, providing a prototype model that did not face the definitional problems encountered by blended learning and now MOOCs. Using the pillars as a strategic platform, we at the University of Central Florida were able to formulate a set of necessary conditions for effective online initiatives.
These principles, informed by the pillars, provided a clear roadmap for determining outcomes. Evaluation and assessment in these contexts were not easy because the baselines changed continually, but they were doable.
If MOOCs are now a bona fide disruptive technology, Clayton Christensen might argue that they should be spun off from our sustaining technologies and operate in markets that do not yet exist, possibly competing with the establishment. From an evaluation perspective this would present a formidable challenge because our evaluation models are based on existing paradigms. By what measure should we assess MOOCs? We know that they have an abysmal completion record. Yet, 10% of 160,000 is still a significant reach, if considered against the Sloan-C pillar of Scale. And we know that many of these MOOC participants are in less-developed parts of the world with limited access to higher education. The press is filled with anecdotes of global students from disadvantaged circumstances who are immensely grateful for the opportunity to simply learn. As assessed against the Sloan-C pillar of Access, MOOCs are clearly hitting the ball out of the park.
However, when we start looking at the other pillars and attempt to measure MOOCs against traditional student assessments, the lines grow blurrier. If someone signs up for a MOOC with no intention of completing it, yet still gains value from the parts she completes, is a completion measurement even relevant?
It seems to us that the measurement of a MOOC is based on learner-defined criteria for what participants want to get out of the experience rather than teacher-defined criteria as expressed by learning objectives, assessments, and course completion. The expectations are completely reversed. Do traditional satisfaction and engagement measures make any evaluative sense in this context? Do learning outcomes resemble anything that we have encountered to date when MOOCs seem to have lurkers, drop-ins, passive participants, and a proportionally small number of active students?
During the past two decades, the Sloan-C pillars have served as an effective template for collecting meaningful evaluation and assessment data for continuous improvement of online learning. As we consider the advent of MOOCs and the concomitant hype surrounding them, will online learning become what Nassim Taleb calls antifragile: a learning modality that becomes stronger because of the stresses MOOCs place on it? On the other hand, what can we take from our existing findings about online learning that will strengthen this new and emerging massive approach to teaching and learning? Perhaps, using the pillars as a springboard, we can develop a whole new template for evaluating outcomes from MOOCs. Possibly a reframed set of pillars will emerge, one that will be the basis of meaningful information on which we can make informed educational decisions about an environment that is playing by a whole different set of rules.
Thomas B. Cavanagh
Charles D. Dziuban
Thomas B. Cavanagh, Ph.D. is Associate Vice President of Distributed Learning at the University of Central Florida. In this role he oversees UCF’s distance learning strategy, policies, and practices, including program and course design, development, and assessment. Tom has administered e-learning development for both academic (public and private) and industrial (Fortune 500, government/military) audiences. Tom currently serves as chair of the EDUCAUSE Learning Initiative Advisory Board and on the Board of Directors of the Florida Virtual Campus (and chair of the Distance Learning and Student Services Members Council). He is also an award-winning author of several mystery novels.
Chuck Dziuban is Director of the Research Initiative for Teaching Effectiveness at the University of Central Florida (UCF), where he directs the evaluation of the distributed learning program. He has spoken on how modern technologies impact learning at more than 90 universities in the United States and throughout the world. In 2000, Chuck was named UCF’s first-ever Pegasus Professor for extraordinary research, teaching, and service, and in 2005 received the honor of Professor Emeritus. In 2005, he received the Sloan Consortium award for Most Outstanding Achievement in Online Learning by an Individual. In 2012, the University of Central Florida established the Chuck D. Dziuban Award for Excellence in Online Teaching in recognition of his contribution to teaching and learning with technology. His coedited book with Tony Picciano and Charles Graham, Blended Learning: Research Perspectives Volume II, will be released by Routledge in November 2013.