Third-Party Evaluation of TAACCCT

All rounds of the SGA asked applicants to implement and replicate evidence-based models, programs, and practices. The emphasis on evidence-based strategies seems to reflect the Obama administration’s “preference for competitive grants and evidence of effectiveness in designing its grant programs” (Haskins & Margolis, 2014, p. 192), mimicking ED’s efforts to apply gold, silver, and bronze standards of evidence. However, this approach may not have accounted for the lack of evidence on the impact of a wide range of reforms in the community college context, including numerous reforms referenced in the TAACCCT SGA such as career pathways, Prior Learning Assessment (PLA), and intensive student services. Even so, a comprehensive approach to evaluation was seen as critical to advancing the TAACCCT grants: the grants provided funding for grantees to secure third-party evaluations that could measure implementation relative to student outcomes, using designs that would produce evidence on what is working.

Round 1 of TAACCCT asked applicants to use an evidence-based framework for preparing proposals, recognizing that levels of evidence can be strong, moderate, and preliminary. These levels are aligned to the Clearinghouse for Labor Evaluation and Research (CLEAR) standards of evidence created by the United States Department of Labor (DOL). A supplementary blueprint for the TAACCCT evaluation, advocated by ED, was the i3 grant program, which sought evidence of impact consistent with the levels of rigorous evidence used by the Institute of Education Sciences (IES). In this schema, strong evidence refers to research and evaluation that addresses causal inference and conclusions (i.e., high internal validity) with sufficient sites and participants to suggest that interventions can be scaled up. Well-implemented experiments and QEDs supporting the effectiveness of programs and reform strategies fit this definition. Moderate evidence is generated by experimental designs, QEDs, and correlational designs with strong statistical controls for selection bias that provide some information useful to causal inference and conclusions but lack broader generalizability. The third level of evidence, preliminary, refers to research yielding promising evidence of limited generalizability that is based on descriptive tracking studies and pre- and post-treatment comparison studies (Zandniapour & Deterding, 2018). On their own, these studies lack sufficient quality of evidence to support scaling up.

The DOL also required grantees to provide annual performance report results for grant participants, as well as matched comparison groups. Rounds 2 through 4 continued the focus on rigorous evidence begun in Round 1 but moved to a requirement for a third-party evaluation to estimate the impact of the grant on student outcomes. Applicants were required to submit an evaluation plan and budget focusing on both implementation and impact. DOL’s directive was to use the most rigorous evaluation design feasible to estimate the effects of the grant, whenever possible using an experimental or quasi-experimental design (QED). By QED, we mean study designs that are not experimental in the form of randomized control trials but quasi-experimental in that they use alternative designs enabling researchers to estimate causal effects when randomization is determined to be infeasible or inappropriate. The DOL also required national evaluation activities involving all TAACCCT grants to assess implementation and impact, with this aspect conducted by the Urban Institute and Abt Associates. Review of the TAACCCT evaluation plans showed that most third-party evaluations intended to use propensity score matching (PSM), with 70% of Rounds 3 and 4 specifying PSM as the evaluation design most feasible for estimating causal impacts. Only about 17% of the evaluations planned to conduct correlational (non-causal) pre-post or outcomes-only studies, and even fewer included plans for experimental designs (Cohen et al., 2017).
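To make the PSM approach favored by most third-party evaluators concrete, the sketch below illustrates its core steps on synthetic data: estimate each student’s probability of grant participation from observed covariates, match participants to comparison students with similar propensity scores, and compare outcomes across matched pairs. The data, variable names, and single-nearest-neighbor matching rule are illustrative assumptions, not drawn from any actual TAACCCT evaluation.

```python
# Minimal sketch of propensity score matching (PSM) on synthetic data.
# All covariates, the selection rule, and the outcome model are invented
# for illustration; real TAACCCT evaluations used richer designs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic student records: covariates X (e.g., age, prior credits, prior
# wages), a treatment flag (grant participant or not), and an outcome with a
# built-in treatment effect of 0.3. Participation depends on X, so a naive
# treated-vs-untreated comparison would be biased by selection.
n = 1000
X = rng.normal(size=(n, 3))
treated = (X[:, 0] + rng.normal(size=n) > 0).astype(int)
outcome = 0.5 * X[:, 0] + 0.3 * treated + rng.normal(scale=0.5, size=n)

# Step 1: estimate propensity scores P(treated = 1 | X).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: 1-to-1 nearest-neighbor matching (with replacement) on the score:
# each participant is paired with the comparison student whose score is closest.
t_idx = np.where(treated == 1)[0]
c_idx = np.where(treated == 0)[0]
matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]

# Step 3: average treatment effect on the treated (ATT) from matched pairs.
att = float(np.mean(outcome[t_idx] - outcome[matches]))
print(f"Estimated ATT: {att:.2f}")
```

Because matching balances the observed covariates across groups, the estimated ATT should land near the simulated effect; the design’s key limitation, as with the planned TAACCCT evaluations, is that it cannot adjust for unobserved differences between participants and non-participants.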

The TAACCCT grant program wrapped up one year prior to this writing, but only a handful of TAACCCT evaluation studies have been published (see, for example, Bragg & Krismer, 2016). This study fills a critical void in understanding the overall impact of TAACCCT as a federal policy focusing extensively on community and technical colleges.
