The evaluation plan for the TAPDINTO-STEM Alliance uses both internal and external evaluation.

The Evaluation Team coordinates its work with the Executive Committee, the Leadership Team, and the Steering Committee to ensure that evaluation approaches and metrics align with the goals and objectives of the project and with the unique contextual features of the project hubs and their current and future regional partners.

Meet the Evaluation Team

Dr. Linda Thurston

External Evaluation Lead

The external evaluation team leader is Linda P. Thurston, Ph.D., professor emerita at Kansas State University, where she served as Associate Dean for Research and Graduate Studies in the College of Education, as a professor in the Department of Special Education, Counseling and Student Affairs, and as the Lydia E. Skeen Endowed Chair in Education.

Thurston is the founding director of the Office of Educational Innovation and Evaluation at K-State, which provides evaluation services for agencies, universities, and organizations around the country. She has been PI on NSF and U.S. Department of Education awards for gender and disability projects, was co-PI on the KS-LSAMP project, and served as an NSF program officer for more than two years. Her evaluation work includes numerous NSF projects aimed at broadening participation in STEM.

Dr. Thurston may be contacted at lpt@ksu.edu.

Dr. June Gothberg

External Evaluation

June Gothberg, Ph.D., is a member of the external evaluation team. Gothberg worked for 12 years with the National Technical Assistance Center on Transition and taught research and evaluation courses at Western Michigan University. She is currently a Senior Evaluator/Data Associate with the Yang-Tan Institute on Employment and Disability at Cornell University.

Gothberg has been an active advocate for disability inclusion in evaluation and is co-developer of the Universal Design for Evaluation Framework. Gothberg and Thurston met in the Disabilities Topical Interest Group (TIG) at the American Evaluation Association meeting in 2006 and have worked in leadership positions within the TIG for over a decade. Gothberg’s work on the team includes analyzing all research and evaluation instruments for universal design, conducting the 5-year Case Study, and leading the work on the Social Network Analysis study.

Clara Marshall

External Evaluation

Clara Marshall, M.P.H., is a member of the external evaluation team. Marshall worked in the evaluation field for six years before joining Concurrent Technology Corporation, where she is currently a Data Analyst. She is a dedicated health science professional with experience in data and research analysis, program evaluation support, and reporting. Marshall’s primary work on the team is leading the 5-year Asset/Capacity study.

Dr. David Shannon

Internal Evaluation Lead

Leading the internal evaluation is Dr. David Shannon, Distinguished Professor in educational research and evaluation at Auburn University, project Co-PI, and coordinator of the six hub research studies. He works with the external evaluation team and the Alliance Backbone on the internal evaluation. Shannon and Thurston served in the same capacities on the SEAPD-STEM INCLUDES project (DDLP), and Shannon was part of the RDE Alliance team.

The Evaluation Plan

The evaluation plan includes both formative and summative evaluation. The purpose of the formative evaluation is to provide information on (1) the ongoing development of the Alliance; (2) progress toward goals; (3) implementation of interventions and activities; and (4) the impact of activities on students. This information will be used by institutions, hubs, the Alliance Backbone, and project teams and advisory committees to move toward goals and to improve their ongoing work. The purposes of the summative evaluation are (1) to understand the extent to which the project successfully accomplishes its objectives; (2) to determine the impact of the project on students, institutions, and other partners; and (3) to understand the expansion of the Alliance and the replication of the prototype from the Collective Impact framework.

The evaluation plan employs the developmental evaluation paradigm, aligns with the Universal Design for Evaluation Framework, and follows the guidelines set forth in the Collective Impact Forum's Guide to Evaluating Collective Impact. Both qualitative and quantitative data collection and analysis methodologies are used. Qualitative approaches include semi-structured interviews and focus groups that gather in-depth information from project PIs and managers, partners, participants, and other stakeholders. Quantitative methods rely on survey instruments administered to project participants and stakeholders, drawing primarily on measures used with the Bridge Model and the DDLP (Dunn et al., 2018). The evaluators worked with the hubs and Alliance leadership to refine data collection instruments (e.g., surveys, interview protocols, asset mapping, and rubrics), protocols, and timelines.

The evaluation team works closely with the Backbone on the Shared Management System (SMS), which includes data collection for the project. All instruments are integrated into the data collection portal so that data are continuously collected, entered, analyzed, and reported for continuous improvement.

The methodology employed in the external evaluation includes quasi-experimental designs with repeated measures, asset/capacity mapping, bibliometrics, Social Network Analysis, and a landscape review. The evaluation also employs a multiple-case study, with each hub and its partners representing a case. For the case study, data will be collected and analyzed throughout the project to provide an ongoing contextual understanding of INCLUDES collaborative efforts and the implementation of programs within each case and across cases.
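To illustrate the Social Network Analysis component, the sketch below shows one common way such an analysis can be run in Python with the networkx library. The hub and partner names and the collaboration ties are hypothetical placeholders, not actual Alliance data, and the metrics shown (density, degree, betweenness centrality) are only examples of measures an evaluation team might track over time.

import networkx as nx

# Hypothetical collaboration ties reported by hubs and partners (placeholder data)
ties = [
    ("Hub A", "Partner 1"),
    ("Hub A", "Partner 2"),
    ("Hub B", "Partner 2"),
    ("Hub B", "Partner 3"),
    ("Hub A", "Hub B"),
]

# Build an undirected collaboration network
G = nx.Graph()
G.add_edges_from(ties)

# Example network-level and node-level metrics
density = nx.density(G)                     # overall connectedness of the network
degree = dict(G.degree())                   # number of ties per hub or partner
betweenness = nx.betweenness_centrality(G)  # nodes that bridge otherwise separate groups

print(f"Network density: {density:.2f}")
for node in sorted(G.nodes()):
    print(f"{node}: degree={degree[node]}, betweenness={betweenness[node]:.2f}")

Repeating this kind of calculation at each data collection point would show whether ties among hubs and partners are growing denser and which organizations serve as bridges across the Alliance.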

The evaluation team provides quarterly updates to the Alliance leadership and submits an annual report, which is included with the Alliance APPR.

Evaluation Resources

Sustainability is concerned with the long-term delivery and subsequent benefits of evidence-based interventions. To further this field, we require a strong understanding and thus measurement of sustainability and what impacts sustainability (i.e., sustainability determinants). This systematic review aimed to evaluate the quality and empirical application of measures of sustainability and sustainability determinants for use in clinical, public health, and community settings.

Hall, A., Shoesmith, A., Doherty, E., et al. (2022). Evaluation of measures of sustainability and sustainability determinants for use in community, public health, and clinical settings: A systematic review. Implementation Science, 17, 81. https://doi.org/10.1186/s13012-022-01252-1.

Systematic reviews of measures can facilitate advances in implementation research and practice by locating reliable and valid measures and highlighting measurement gaps. Our team completed a systematic review of implementation outcome measures published in 2015 that indicated a severe measurement gap in the field. Now, we offer an update with this enhanced systematic review to identify and evaluate the psychometric properties of measures of eight implementation outcomes used in behavioral health care.

Mettert, K., Lewis, C., Dorsey, C., Halko, H., & Weiner, B. (2020). Measuring implementation outcomes: An updated systematic review of measures’ psychometric properties. Implementation Research and Practice, 1. https://doi.org/10.1177/2633489520936644.
