STANFORD -- In a rush to meet requirements under the No Child Left Behind Act that instructional materials purchased with federal aid be scientifically proven effective, educational software companies are promoting research that is substandard and often misleading, according to a new report in the spring 2007 issue of Education Next. Fully 75 percent of the studies on education products of any kind reviewed by the federal government fail to meet its scientific standards, warns Todd Oppenheimer, author of the Education Next report.
 
The nation’s K–12 schools spent nearly $2 billion on electronic curricular products in 2006, up 4.4 percent from the previous year and surpassing the 2.6 percent growth rate of the overall instructional products market for U.S. schools. To keep pace with this growth, and to take continuing advantage of federal education subsidies, many companies pushed through questionable studies supposedly documenting their products’ effectiveness. In fact, a number of companies promoted their “scientific” research very soon after NCLB required it--an impossible feat, points out Oppenheimer, considering the many years it takes to conduct solid scientific studies.
 
“As the educational software industry booms, many products are being put to use in schools based on evidence that looks statistically rigorous but isn’t,” said Oppenheimer. “The flaws in this research are significant but hard to find. The deceptions here are therefore poorly understood, both by the products’ sales representatives and by their customers--school administrators.”
 
According to the Institute of Education Sciences (IES), the primary overseer of research within the Department of Education, scientifically based research fits the following criteria: It randomly assigns its test subjects to comparable groups; it yields reliable, measurable data; if the study makes any claims about what causes its effects, it substantially eliminates plausible competing explanations; its methods are clear enough that other researchers can repeat or extend them; and, finally, the study has been accepted by a peer-reviewed journal or equivalent panel of independent experts. So far, very few commercial programs meet these standards, writes Oppenheimer, although many claim that they do.
 
“Unfortunately, scientific proof is defined in many ways,” Oppenheimer said. “Experts don’t all agree on what constitutes good research, and while the research community debates this question, the commercial sector has felt free to devise its own interpretations.”
 
To combat this problem, the What Works Clearinghouse (WWC) was created within IES in 2002 to review studies of instructional products, both electronic and traditional, and rate the quality of their achievement data. As of December 2006, the WWC had reviewed 255 studies and published reports on 51 products. Notably, 75 percent of the studies failed to meet the agency's scientific standards, even under its more lenient "meets standards with reservations" rating.
 
The real-world experiences with software that school districts report illustrate some dramatic troubles. In July 2001, the Los Angeles Unified School District (LAUSD) decided to spend nearly $50 million on Waterford Early Reading, distributed by Pearson Digital Learning. Some teachers found the program helpful, but many did not; after several years of use, the district's own evaluation unit pronounced the program a failure. Pearson, meanwhile, prepared a preliminary evaluation of its own for then-LAUSD superintendent Roy Romer, full of numbers indicating that Waterford was producing dramatic achievement gains. Trusting its own evaluations, however, LAUSD scaled back the Waterford program, relegating it to a sideline for students with learning difficulties.
 
The WWC has not yet evaluated Waterford, but it is one of the 15 products that IES has elected to study as part of its own evaluation of educational software, using gold-standard methods of scientific research. Yet the $15 million IES study, begun in 2003 and due in early 2007, may produce problems of its own, notes Oppenheimer: ironically, those problems may limit the study's neutrality and usefulness and leave it vulnerable to further marketing hype.
 
In setting up the study, instead of identifying the products that most needed evaluation, such as those most widely used or questioned, IES sought volunteers from the educational software industry as a whole and then chose 15 products from among the companies that stepped forward. Although IES will not publicly release the individual evaluations of these 15 packages, it will give product details to their manufacturers. Those companies will then have a free hand to spin the evaluations however they like, because no one else will have the information needed to rebut them, says Oppenheimer. More important, the study will answer nothing beyond the most general question: does educational software, as a class, tend to work? For teachers or administrators deciding which individual products to buy, the study will be of little help.
 
Read “Selling Software” in the new issue of Education Next, now online at www.EducationNext.org.
 
Todd Oppenheimer is the author of The Flickering Mind: Saving Education from the False Promise of Technology, which was a finalist for the 2003 Book Award from Investigative Reporters and Editors. He is also the 1998 recipient of the National Magazine Award for reporting in the public interest.
 
Education Next is a scholarly journal published by the Hoover Institution that is committed to looking at hard facts about school reform. Other sponsoring institutions are the Harvard Program on Education Policy and Governance and the Thomas B. Fordham Foundation.