Reports & Research

June 29, 2018

PPIC Report
California’s K-12 Test Scores: What Does the Data Tell Us?

(Editor’s note: On June 26, the Public Policy Institute of California released a 26-page report analyzing data from California’s standardized testing of students. The following text contains the report’s conclusions. A link to the complete report can be found at the end of this article.)

California’s test scores in mathematics and English provide important information about the state’s K-12 system. Most importantly, the results inform parents, teachers, school administrators, and state policymakers about our children’s success in mastering these two basic subjects. In addition, test scores represent the only academic performance measures for students in elementary and middle schools used in the state’s K-12 accountability system. Thus, the data are central to evaluating whether schools and districts are performing adequately.

This report uses these publicly available data to explore how students, including the major subgroups of students, have performed during the past three testing cycles. These data create a useful and detailed picture of the current status of achievement. As noted, English proficiency is low in the lower grades and gradually rises through grade 11, when about 60 percent of students test as proficient. By contrast, math proficiency declines across the grades, and by grade 11 only about one-third of students score at proficient levels. These trends are consistent with our estimates of the amount students learn each year. For instance, in mathematics, students do not learn enough each year in grades 4 through 8 to keep pace with the standards.

We also examined the performance of student subgroups. Achievement levels for low-income students are much lower than for other students, a finding that has been documented previously. In addition, our regional analysis of low-income student performance showed, on average, very small differences. None of the regions has been successful in boosting the performance of this group. By contrast, the performance of higher-income students varied significantly by region, although the definition of the higher-income group is broad and may not result in comparable groups from one region to the next.

We also found that student score growth was much lower in 2016-17 than in 2015-16. In 2015-16, students made large gains over the previous year in both English and mathematics. However, in 2016-17, English scores grew only slightly more than what was needed to maintain proficient performance levels. In mathematics, gains fell far short of keeping pace with standards.

Was 2016-17 simply a disappointing year or was 2015-16 growth unusually large? The answer must await more experience with the SBAC tests. A number of systemic factors (better understanding of the SBAC tests, continued implementation of the standards, and experience with online testing) seem to have boosted 2015-16 scores. It remains to be seen whether the 2016-17 results are representative of what we can expect in the future.

CDE’s group data also fall short in important ways. Our ability to use CDE public release files to understand the progress of EL students and students with disabilities is hindered by the movement of students between districts and programs. We showed how EL data understate the success schools are having in helping this group become fluent in English. Moreover, student movement between programs undermines our ability to use subgroup averages to assess year-to-year student growth. This represents a vexing problem for researchers and policymakers because the SBAC tests were designed to measure achievement growth from grade to grade.

School and district data are affected even more significantly than statewide data by changes in program participation and movement of students in and out of districts. These changes make the state’s public data on school and district performance unusable for generating district growth estimates. What is more, they also appear to affect the state accountability ratings for English and mathematics performance. The State Board of Education and CDE are exploring whether to change the way accountability measures are calculated, which may address these issues. Moving away from using average group scores would permit the department to develop more accurate indicators of performance levels and growth. However, the state also should revisit other questions, such as attendance rules that determine when students are included in school and district accountability data.

CDE should also reassess how it releases annual SBAC test data. CDE’s public use files provide researchers and policymakers a wealth of data on SBAC scores. Because SBAC was designed to measure the annual progress of students, data released by CDE should allow examination of the gains students make each year. Student-level data that would allow researchers to look at growth are available through a CDE application procedure. But that application process creates unnecessary barriers.

Providing educators, policymakers, and the public accurate information about the progress of K-12 students is the central reason why we test students each year. Testing data could provide essential facts to the public about whether LCFF, now in its fifth year of operation, is succeeding in its goal of improving outcomes for low-income, EL, and foster students. Accurate data on the gains made by students with disabilities would inform policymakers on the challenges districts face in educating this group. And parents and community members would have access to better information about the growth of student scores at local schools. Many of these important uses do not require student-level data but can be satisfied with group averages that are adjusted for student movement. To help realize the promise of the SBAC data, CDE should work with researchers and policymakers to revamp its test data release program.

To read the complete 26-page report, click on the following link:

Source: PPIC

A Total School Solutions publication.