
Michelle Brodesky, Methodist Healthcare Ministries of South Texas, Inc.
Lisa Wolff, Health Resources in Action
Like many funding organizations, Methodist Healthcare Ministries of South Texas, Inc., has been on a journey to incorporate strategic evaluation into its grantmaking, and in 2019, completed an ambitious suite of impact evaluations (Center for Evaluation Innovation 2020; Grantmakers for Effective Organizations 2017). Launched in 2015, the Sí Texas Project: Social Innovation for a Healthy South Texas was a strategic grant portfolio that collaboratively funded eight slightly different integrated behavioral health programs in various settings to improve mental and physical health outcomes. The evaluation, completed in partnership with Health Resources in Action (HRiA), had multiple aims, including:
- determining the portfolio’s impact;
- examining the impact of each intervention via program-level studies; and
- supporting grantees’ use of and capacity to engage in these and future evaluations (HRiA 2019).
At times, these goals conflicted. Throughout the evaluation, we confronted decisions that required weighing the relative importance of each one. Several themes emerged that we believe have relevance for funders considering an evaluation of a grant portfolio: Rigor, Customization, Capacity Building, and Funder Participation (Wolff et al. 2020). As we discuss our experience with each theme, we offer guiding questions to help funders design the approach that is appropriate for their organization, their grantees, and their context.
Rigor: Given Sí Texas’ first two impact evaluation goals and the requirements of federal funding, the evaluation’s rigor was high. We tend to think of rigor as the extent to which an evaluation uses a transparent, systematic, and replicable process and whether its design can link the intervention to changes in outcomes. However, the most rigorous evaluation design is not always the appropriate one; alignment with aims and feasibility is critical. A randomized study does not fit every evaluation purpose, and a rigorous design that falters in application will not produce strong results.
Sí Texas included eight separate, rigorous studies (four randomized controlled trials and four quasi-experimental designs) to determine program-specific impacts, a portfolio-level evaluation to understand the overall effect, and a process evaluation to examine the facilitators of and barriers to program implementation within and across grantees. This level of rigor provided substantial evidence regarding programmatic impact and implementation but required extensive time and capacity from all involved: the evaluator, the funder, and especially the grantees. Key questions to consider in weighing these trade-offs include:
- What are your evaluation goals? Do you need to learn about the effectiveness of individual grant programs, of the portfolio as a whole, or both?
- What evaluation approach is appropriate and feasible?
- What resources and capacity do your organization and your grantees have for evaluation?
Customization: In balancing the Sí Texas evaluation goals, questions of how much to customize program-level evaluations loomed especially large. Striking the balance between assessing the impact of a grant portfolio and of individual programs is challenging. Decisions revolve around the extent of standardization of methods and measures across the portfolio versus customization to program-specific contexts, settings, and populations.
While we sought to understand both program- and portfolio-level impacts, in the Sí Texas evaluation we often prioritized the program-specific evaluations, allowing greater variation than was ideal for a portfolio study. To best align with grantees’ unique settings and existing practices, many aspects varied, such as program enrollment criteria and data collection protocols. Other components were consistent across all grantees, including common outcome measures (identified through a consensus session), level of rigor, and analytic techniques. These consistent study characteristics allowed us to pool data across sites for the portfolio-level analysis.
Substantial grantee customization supported the relevance and feasibility of the program-specific evaluations, though it sacrificed some precision in the portfolio-level evaluation. This degree of customization also gave grantees an opportunity for greater ownership of the evaluation. The following questions can help guide funder decisions about customization versus standardization:
- How different are the goals, populations, and approaches of the programs you fund?
- How do your organization and your grantees plan to use evaluation results?
- Do you intend to learn about the effectiveness/implementation of individual grant programs or of a portfolio?
- How much control are you able to give grantees in evaluation decision-making?
Capacity Building: In keeping with the third evaluation goal, Sí Texas included significant evaluation capacity building for grantees. Capacity building is the process of enhancing grantees’ knowledge and skills to support their organizational effectiveness and sustainability. In Sí Texas, capacity building included in-person collaborative learning sessions, virtual trainings, and a partnership with a dedicated evaluator for each grantee. This approach supported grantees in meeting the expectations for rigor discussed earlier and was driven by Methodist Healthcare Ministries’ goal that grantees be equipped to take ownership of their studies. The challenge of intensive capacity building is the time it requires of all parties. Key questions when considering evaluation capacity building are:
- What is your purpose for evaluation capacity building? Supporting an immediate evaluation or grantees’ longer-term capacity?
- What evaluation knowledge, skills, and abilities do grantees have? What do they need to develop? How does this vary across grantees?
- What resources (financial, time, knowledge/skill) does your organization have to support evaluation capacity building?
Funder Participation: As the primary funding organization, Methodist Healthcare Ministries chose to be deeply engaged in nearly all aspects of this evaluation, monitoring to support federal requirements and aligning evaluation efforts with grant programs and budgets while learning from and with grantees. Methodist Healthcare Ministries was an active participant in a three-way partnership among HRiA, the grantees, and the funding organization, with all parties engaging with each other in learning, decision-making, and problem-solving (Brodesky et al. 2020). This high level of participation allowed early identification and resolution of challenges and greater consistency across the portfolio. However, intensive participation by funders requires significant time and may reinforce unequal power dynamics with grantees if the funder’s presence limits discussion or choices. Key questions when considering funder participation are:
- What external or internal accountability do you have for the evaluation? Will you need to provide oversight to meet requirements? Will you need to provide frequent updates to other stakeholders?
- What is your organization’s experience with evaluation? To what extent do you or your organization have the capacity (primarily staff skill and time) to provide oversight and support for grantee or external evaluation efforts?
- How much support will your grantees need to conduct evaluations?
Our experience with the Sí Texas evaluation is one example of the many scenarios a funder may face. Whatever the situation, we believe consideration of these factors and questions provides a framework for focusing funders’ evaluation efforts. Each factor demands clarity about evaluation goals, available resources, grantees’ capacity, and organizational values. Asking these questions early can help funders and evaluators strike the right balance for their grant portfolio evaluations.
References
Brodesky, Michelle K., K. Errichetti, M. M. Ramirez, S. J. V. Martinez-Gomez, S. Tapia, L. Wolff, and M. V. Davis. “Collaborating to Evaluate: The Sí Texas Partnership-Centered Evaluation Model.” In Researching Health Together: Engaging Patients and Stakeholders in Health Research from Topic Identification to Policy Change, edited by E. B. Zimmerman. Thousand Oaks, CA: SAGE Publishing, 2020.
Center for Evaluation Innovation. Benchmarking Foundation Evaluation Practices. Washington, DC, 2020.
Grantmakers for Effective Organizations and Harder & Co. GEO 2017 Field Survey: Major Trends in U.S. Grantmakers’ Attitudes and Practices. Washington, DC, 2017.
Health Resources in Action, Inc. Sí Texas Portfolio Final Evaluation Report: Methodist Healthcare Ministries of South Texas, Inc. Boston, MA, June 2019.
Wolff, Lisa S., K. S. Errichetti, S. Tapia Walker, M. Davis, and M. K. Brodesky. “Striking a Balance between Program-Specific and Portfolio-Level Evaluation: Lessons Learned from a Multi-Site Evaluation on the Texas-Mexico Border.” Evaluation and Program Planning 83 (2020).