Brief highlights flaws in the use of international testing to drive educational policy
BOULDER, CO (October 30, 2015) – For 15 years, journalists, advocates and policymakers have cited scores on international tests, such as the Program for International Student Assessment (PISA), to conclude that American student achievement “lags woefully behind” other nations, threatening our future and suggesting an urgent need for education reform.
A brief published today by the National Education Policy Center at the University of Colorado Boulder explores such policy analyses and claims around PISA as well as a second prominent international test, the Trends in International Mathematics and Science Study (TIMSS).
In International Test Score Comparisons and Educational Policy: A Review of the Critiques, Stanford education professor Martin Carnoy focuses on four main critiques of analyses that use average PISA scores as a comparative measure of student achievement.
The ranking is misleading, Carnoy asserts, because:
- Students in different countries have different levels of family (not just school) academic resources;
- The larger gains reported on the TIMSS when results are adjusted for different levels of family academic resources raise questions about the validity of using PISA results for international comparisons;
- PISA test score error terms (the difference between measured achievement and actual achievement) are considerably larger than the testing agencies acknowledge, making the country rankings unstable (a point illustrated in the sketch following this list); and
- The Shanghai educational system is held up as a model for the rest of the world on the basis of data from a subset of students that is not representative of the Shanghai student population as a whole.
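To see why larger-than-acknowledged error terms destabilize rankings, consider a minimal simulation sketch (illustrative only; the country names, averages, and error size below are hypothetical assumptions, not figures from the brief). When several countries' true averages sit within a few points of one another, adding a modest amount of measurement error is enough to reshuffle their rank order from one hypothetical test administration to the next.

```python
import random

# Hypothetical "true" national averages clustered within a few points of each
# other on a PISA-style scale (assumed values, not data from the brief).
true_means = {"A": 500, "B": 498, "C": 503, "D": 497, "E": 501}

def simulated_ranking(error_sd=4.0, seed=None):
    """Rank countries after adding random measurement/sampling error to each mean."""
    rng = random.Random(seed)
    observed = {c: m + rng.gauss(0, error_sd) for c, m in true_means.items()}
    return [c for c, _ in sorted(observed.items(), key=lambda kv: -kv[1])]

# Two simulated "administrations" often produce different country orderings,
# even though the underlying averages never changed.
print(simulated_ranking(seed=1))
print(simulated_ranking(seed=2))
```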
Professor Carnoy also assesses the underlying social meaning and education policy value of international comparisons. First, he describes problems with claims that average national math scores are accurate predictors of future economic growth. Second, he explains that score data used in this manner are of limited value for establishing education policy, because the analyses involved do not support causal inference. This is the well-known “correlation is not causation” problem.
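A toy example can make the correlation-versus-causation point concrete (a minimal sketch under assumed values; the variables and numbers are illustrative, not drawn from the brief). If family academic resources drive both the adoption of some policy and student test scores, the policy will correlate strongly with scores even though, in this toy model, it has no causal effect at all.

```python
import random
import statistics  # statistics.correlation requires Python 3.10+

rng = random.Random(0)

# Hypothetical confounder: family academic resources (standardized scale).
resources = [rng.gauss(0, 1) for _ in range(1000)]

# A policy indicator that happens to be adopted more where resources are high,
# and scores that are driven by resources alone (assumed relationships).
policy = [r + rng.gauss(0, 0.5) for r in resources]
scores = [500 + 30 * r + rng.gauss(0, 20) for r in resources]

# The raw correlation between policy and scores is strong, even though the
# policy has zero causal effect on scores in this simulation.
print(round(statistics.correlation(policy, scores), 2))
```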
Third, there is a conflict of interest: the Organization for Economic Cooperation and Development (which administers the PISA) and its member governments act as the testing agency while simultaneously serving as data analyst and interpreter of results for policy purposes.
Fourth, Carnoy questions the usefulness of nation-level test score comparisons for countries, such as the United States, that have diverse and complex education systems. The differences among U.S. states are so large that employing state-level test results over time to examine the impact of education policies would be more useful and interesting.
Despite such compelling critiques of international testing, Carnoy concludes that these tests will not go away, nor will they stop being wrongly applied to shape educational policy. However, he says, “there are changes that could be made to reduce misuse.” He concludes with five policy recommendations, including reporting test results separately for subgroups of students with different levels of family academic resources.
In a companion report released today by the Economic Policy Institute (EPI), Carnoy and co-authors Emma García and Tatiana Khavenson provide detailed analyses explaining how and why comparisons using data at the level of U.S. states are more useful than comparing the U.S. with other countries for understanding and improving student performance.
Find International Test Score Comparisons and Educational Policy: A Review of the Critiques by Martin Carnoy on the web at: http://nepc.colorado.edu/publication/international-test-scores.
Reference Publication:
International Test Score Comparisons and Educational Policy: A Review of the Critiques
Contact:
William J. Mathis, (802) 383-0058, Email
Martin Carnoy, (650) 725-1254, Email
The mission of the National Education Policy Center is to produce and disseminate high-quality, peer-reviewed research to inform education policy discussions. We are guided by the belief that the democratic governance of public education is strengthened when policies are based on sound evidence. For more information on NEPC, please visit http://nepc.colorado.edu/.
This policy brief was made possible in part by the support of the Great Lakes Center for Education Research and Practice (http://greatlakescenter.org/).