Putting “failing schools” in context

At the beginning of January, the Texas Education Agency issued its new statewide summary of schools that failed to meet state standards for 2013 and whose students are eligible to seek transfers under the state Public Education Grant program. The feature of the summary that grabbed the most headlines was a near-doubling in the number of “failing” schools statewide, from 456 in 2011 to 892 in 2013.

But what does “failing” actually mean? Did Texas public schools get twice as bad in the last two years?

No. What happened is that the TEA replaced the passing standards in place in 2011 with tougher standards that took effect in 2013. Chief among the changes was the introduction of the State of Texas Assessments of Academic Readiness (STAAR) exams in 2013, which replaced the considerably less difficult Texas Assessment of Knowledge and Skills (TAKS).

Schools in which 50 percent or more of students do not attain state standards on the 2011 or the 2013 exams are placed on the “failing” list. Schools can also run afoul of the state if they fall into the bottom tier of the state’s overall accountability ratings, which include other measures like attendance and discipline, in either 2011 or 2013.
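
To make the listing rule concrete, here is a minimal sketch in Python. The function, its inputs and the threshold logic are illustrative reconstructions of the rule as described above, not TEA’s actual implementation.

```python
# Illustrative sketch of the PEG listing rule described above; the function
# and its inputs are hypothetical, not TEA's actual implementation.

def on_peg_list(pass_rate_2011: float, pass_rate_2013: float,
                bottom_tier_2011: bool, bottom_tier_2013: bool) -> bool:
    """Return True if a campus lands on the "failing" (PEG) list.

    A campus is listed if 50 percent or more of its students failed to meet
    state standards in either year, or if it fell into the bottom tier of
    the state's overall accountability ratings in either year.
    """
    # 50 percent or more failing is the same as a passing rate of 50
    # percent or less.
    low_passage = pass_rate_2011 <= 0.50 or pass_rate_2013 <= 0.50
    return low_passage or bottom_tier_2011 or bottom_tier_2013

# Example: a 45 percent passing rate in 2013 alone is enough to be listed.
print(on_peg_list(0.60, 0.45, False, False))  # True
```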

In an e-mail interview, Carla Stevens, assistant superintendent for research and accountability at the Houston Independent School District, noted that STAAR increased rigor over TAKS in a number of ways: it expanded the number of free-response mathematics and science problems (rather than relying solely on multiple choice), broadened the type and scope of the writing tasks, lengthened the tests, and raised the overall score required to pass.

Stevens said districts were struggling with several aspects of how the new accountability standards use STAAR scores to identify failing schools. First, she cited the use of state assessments as the sole measure for evaluating elementary and middle schools (with no other tests or disciplinary measures considered). Second, she noted that the accountability ratings measure only current-year performance rather than tracking cohorts over time. Finally, among other issues, she noted that former dropouts who had recently re-enrolled are counted the same as all other students. As a result, schools might not aggressively attempt to get dropouts back in school.

HISD “does not have concerns over the new STAAR assessments [themselves], but it does have several concerns regarding the use of these assessments in the new accountability system,” Stevens wrote.

See the entire e-mail interview between THESIS and Stevens here.

Breakdown of Houston-area public schools on the PEG list

Of the 19 school districts principally located in Harris County, nine had schools on the list in 2013, compared with six in 2011. Mirroring state trends, most districts saw their numbers increase. Houston ISD led the way in raw numbers, going from 26 schools on the watch list in 2011 to 53 in 2013. Aldine ISD went from zero in 2011 to 16 in 2013, while the Pasadena and Spring independent school districts went from zero and three schools, respectively, in 2011 to seven and eight schools on the list in 2013.

Among Harris County districts, Spring and Aldine had the largest shares of schools on the state failing list in 2013: 22 percent of Spring’s schools and 21 percent of Aldine’s.

Houston currently has 17 percent of its schools on the watch list.

County districts with zero schools on the list in both 2011 and 2013 were Channelview, Cypress-Fairbanks, Deer Park, Galena Park, Huffman, Humble, Katy, La Porte, Sheldon and Tomball.

Houston ISD’s overall changes in performance mirror state trends, with its 104 percent increase in the number of failing schools roughly matching the statewide increase of 96 percent (from 456 schools in 2011 to 892 in 2013). Overall, Harris County districts recorded an increase of 149 percent – considerably higher than either Houston or Texas as a whole.
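
These percentage changes follow directly from the counts cited above; a quick arithmetic check in Python, using the article’s own figures:

```python
# Quick check of the percent increases cited above, using the figures
# reported in this article.
def pct_increase(before: int, after: int) -> float:
    return (after - before) / before * 100

print(round(pct_increase(26, 53)))    # Houston ISD: 104 (percent)
print(round(pct_increase(456, 892)))  # Statewide:    96 (percent)
```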

The demographic and structural correlates of “failure”

People familiar with Houston and its suburbs have probably noticed a rather distinct pattern in that list of failure rates. Wealthier suburban districts — which tend to have more white students and invest more money per student — have very low failure rates. Districts like Katy, Deer Park and Cypress-Fairbanks have no schools on the PEG list. Contrast those rates with Houston ISD, the large urban district, which now has more than 50 schools on the list, or with other minority-heavy, poorer districts like Aldine or Spring.

To show that these observations hold up more generally, I gathered data on each of the 19 school districts located primarily in Harris County (excluding North Forest, due to its annexation by HISD in 2013) to explore how demographics relate to performance on accountability measures: the percentage of minorities among the school-age population, the percentage of households living below the poverty line, district per-pupil spending and the student-teacher ratio. Poverty estimates come from the U.S. Census Bureau’s 2010 Small Area Income and Poverty Estimates, while the student-teacher ratio, per-pupil spending and percentage-minority figures come from the National Center for Education Statistics.

Using these data, I calculated simple correlation coefficients between each variable and the percentage of district schools on the state watch list for both 2011 and 2013. Table 1 shows the correlations for each variable in both years. Positive numbers mean that failure rates rise as the variable increases, while negative numbers mean that failure rates fall as the variable increases. Correlations closer to zero indicate weaker relationships, while numbers closer to 1 or -1 indicate stronger ones (a minimal sketch of this calculation, in Python, follows the table):

Table 1. Correlation between demographics and failure rates among 19 Harris County districts

Variable                                     2011    2013
District proportion of minority students     .012    .426
District households in poverty               .408    .433
District student-teacher ratio               .285    .396
District per-pupil spending                 -.108   -.242
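
For readers who want to replicate Table 1, here is a minimal sketch of the calculation using pandas. The column names and every data value below are hypothetical placeholders, not the actual district figures, which come from the Census and NCES sources described above.

```python
# Minimal sketch of the correlation analysis behind Table 1. The column
# names and all values below are hypothetical placeholders, not the real
# district figures.
import pandas as pd

districts = pd.DataFrame({
    "pct_minority":     [0.85, 0.40, 0.62, 0.25, 0.70],   # hypothetical
    "pct_poverty":      [0.30, 0.10, 0.22, 0.08, 0.25],   # hypothetical
    "student_teacher":  [17.2, 14.5, 16.8, 13.9, 16.1],   # hypothetical
    "per_pupil_spend":  [8200, 9900, 8700, 10400, 8500],  # hypothetical
    "pct_failing":      [0.21, 0.00, 0.17, 0.00, 0.22],   # hypothetical
})

# Pearson correlation of each variable with the district failure rate,
# mirroring one column of Table 1.
correlations = districts.corr()["pct_failing"].drop("pct_failing")
print(correlations.round(3))
```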

The results confirm conventional wisdom. As district per-pupil spending increases, failure rates decrease. As student-teacher ratios increase, failure rates increase. As poverty rates increase, failure rates increase. And as the proportion of minority students increases, failure rates increase. The percentage of households in poverty is the strongest predictor of school failure in both years. All of these relationships strengthen as we move from the baseline standards in 2011 to the tougher 2013 standards.

Note that the variation in some of these variables is understated. For example, the Census and NCES report minority percentages and poverty rates at the district level, which lumps together all school-age children, including those who don’t attend public schools – and children who attend private schools tend to come from wealthier families.

Finally, these estimates cover only a small share of schools in Texas – no one should draw too many inferences from a sample of 19 districts – but it is still striking how much of what a state considers “failure” varies with underlying demographics.
