Mar 14, 2012

Learning Technology, School Reform and Classroom Practice (Part 2)

By Larry Cuban, March 14, 2012

Dilemmas in Researching Technology in Schools (Part 2)

If you are a technology advocate, that is, someone who believes in his or her heart of hearts that new devices, new procedures, and new ways of using these devices will deliver better forms of teaching and learning, then past and contemporary research findings are, to put it in a word, disappointing. How come?

For those champions of high-tech use in classrooms, two dilemmas have had technology researchers grumbling, fumbling, and stumbling.

Gap Between Self-Report and Actual Classroom Practice

Journalists’ accounts and many teacher, student, and parent surveys of 1:1 programs and online instruction in individual districts scattered across the U.S. report extraordinary enthusiasm. Teachers report daily use of laptops, elevated student interest in schoolwork, higher motivation from previously lackluster students, and more engagement in lessons. Students and parents report similarly high levels of use and interest in learning. All of these enthusiastic responses to 1:1 programs and online instruction have a déjà vu feel to those of us who have heard similar gusto for technological innovations before the initial novelty wore off. [i]

The déjà vu feeling is not only from knowing the history of classroom machines; it is also because the evidence is largely drawn from self-reports. And here is the first perennial dilemma that researchers face in investigating teacher and student use of high-tech devices.

Researchers know the dangers of the unreliable estimates that plague such survey and interview responses. When investigators examined the classrooms of teachers and students who reported frequent use, they found large discrepancies between what was reported and what was observed. The gap between what is said on a survey and what is practiced in a classroom is seldom intentional. The discrepancy often arises from what sociologists call the bias of “social desirability,” that is, respondents to a survey put down what they think the desirable answer should be rather than what they actually do. [ii]

So a healthy dose of skepticism about teachers’ claims of daily use and students’ long-term engagement is in order, because few researchers have directly observed, for sustained periods of time, classroom lessons in which students use laptops and hand-held devices. Until more researchers go into classrooms, it will be hard to say with confidence that teachers’ daily use of computers has changed considerably with abundant access to IT.[iii]

While many researchers clearly understand the limits of self-reports and prize classroom observation and direct contact with teachers and students, the high cost of sending researchers into schools prohibits such on-site studies. Instead, researchers handle this value conflict between cost and time efficiency on the one hand and direct observation on the other by fashioning compromises: they use survey questionnaires and, perhaps, interviews, all of which are self-reports. These researchers do not solve the problems of “social desirability” bias and the unreliability of self-reports; they manage this perennial dilemma.

Recurring Dilemma of Inadequate Research Design

Another dilemma is that many researchers see electronic devices in schools as hardware and software that should be efficient, speedy, reliable, and effective in producing desirable student outcomes such as higher test scores. These researchers have designed studies comparing films, instructional television, and now computers to traditional instruction in order to determine to what degree the technology makes teachers more efficient and effective in their teaching and helps students learn more, faster, and better. Such studies have dominated IT research in the U.S. for over a half-century “with the most frequent result being ‘no significant difference.’”[iv]

Other researchers, however, see the introduction of innovative technologies as interventions into complex educational systems that interact with and adapt to the institution’s goals, people, and practices. They design studies that bring practitioners and researchers together to study real-world problems of how teaching and learning can be improved through the use of high-tech innovations. They are more interested in refining the innovation and adapting it to the contours of actual schools and classrooms than in evaluating the success of the technology, which is what the dominant group of technology researchers is engaged in. While most researchers see electronic devices as tools to be judged as products, these researchers see the introduction of technology as a process: learning how institutions adapt and change the innovation.

For researchers who adopt this point of view, design-based interventions make the most sense. Here researchers and practitioners work together to identify the problem they will investigate, come up with hypotheses, design the intervention, and then implement it. They collect and analyze data on the intervention and its outcomes in actual classrooms, and teachers then decide whether to put the results into practice; the research is process-driven.

More design-based interventions might well reduce the grumbling, fumbling, and stumbling that afflicts researchers and champions of more hardware and software in classrooms.

Learning Technology, School Reform and Classroom Practice (Part 1)

[i] Education Development Center and SRI International, “New Study of Large-Scale District Laptop Initiative Shows Benefits of ‘One-to-One Computing,’” June 2004; Saul Rockman, “Learning from Laptops,” Threshold, Fall 2003; David Silvernail and Dawn Lane, “The Impact of Maine’s One-to-One Laptop Program on Middle School Teachers and Students,” Research Report #1, February 2004 (Maine Education Policy Research Institute, University of Southern Maine).

[ii] John Newfield, “Accuracy of Teacher Reports,” Journal of Educational Research, 74(2), 1980, pp. 78-82. Sociologists point out that self-reports of church attendance are similarly inflated.

[iii] Efforts to get sharper findings from different sources and methodologies, often called “triangulation,” can help reduce skepticism about self-reports, but problems remain. See Sandra Mathison, “Why Triangulate?” Educational Researcher, vol. 17, 1988, p. 13.

[iv] Tel Amiel and Thomas Reeves, “Design-Based Research and Educational Technology: Rethinking Technology and the Research Agenda,” Educational Technology and Society, 11 (4), 2008, pp. 29-40.

