Partnerships

CORE facilitates networking, communication, and education related to evaluation, evaluation systems, and systems evaluation. CORE promotes the dissemination of evaluation results in peer-reviewed scientific research publications.

Overview:
CORE's Evaluation Partnerships offer a practical, "systems approach" to evaluation planning, implementation, and the use of evaluation results in organizations. The innovations of the systems approach include attention to the full range of program stakeholders; analysis of the program as an embedded "part" within a collection of programmatic, organizational, and cultural "wholes"; comprehensive program modeling with attention to program definition and boundaries; and recognition of where a program is in its "lifecycle" and the implications for evaluation.
A fundamental principle in the approach is that a program operates within the intersection of many "systems" - the organization, the community, regional and national entities, the research field, and so on - and that recognizing these systems provides a means of understanding the program better, finding commonalities across programs, and tapping into resources within those systems and the relevant research.

Evaluation Partnerships

Many outreach professionals are experts in their fields and at conducting outreach. However, the same professionals are rarely also experts in evaluation. As budgets tighten and resources decrease, stakeholders and funders increasingly ask program managers for evidence of program impact. Yet a lack of evaluation expertise often results in inadequate or inappropriate evaluation methods, and the costs of building evaluation capability and conducting evaluation are considerable. CORE partners with many different organizations with different needs, but the goal is always to build a strong evaluation plan that contributes to program management, and to support evaluation implementation and utilization. Our role is that of an Evaluation Facilitator: we provide evaluation expertise, and the program staff bring their program expertise. The Evaluation Facilitator and the client organization together form what we refer to as an Evaluation Partnership (EP). The Evaluation Facilitator's role is to educate and inform the team members about the process of evaluation, as well as to facilitate the group’s discussions and encourage them to examine the organization’s and program’s needs and stakeholders in depth.

History:

The first Evaluation Partnership (EP) was established in 2006 with Cornell University Cooperative Extension in NYC as a systematic effort to build evaluation capacity within that organization, using an interactive approach that drew on the strengths and knowledge of program staff and the evaluation expertise of CORE, led by Professor William Trochim.


The trainings and materials developed in that first year were formalized in a "Protocol" - a series of concepts and training steps designed to help program staff develop high-quality evaluations tailored to their specific programs and needs, drawing on program and evaluation research wherever possible.


In 2007, funding from the National Science Foundation and Cornell Cooperative Extension supported the continued development of the Systems Evaluation Protocol, including the development of a web-based system for evaluation planning (the "Netway"), which is based on critical systems concepts. The Cornell Center for Materials Research (CCMR), Cornell's MRSEC, began to work through the protocol to develop evaluation plans (NSF Award #0535492). With continued support from NSF under Award #0814364, the Evaluation Partnerships were extended to 3 additional MRSECs in 2009, 5 more MRSECs in 2010, and another 5 MRSECs in 2011.

In October 2012, CORE began to collaborate with Brian Leidy (PI, Military Projects) and Marney Thomas (Co-PI, Military Projects) from the Bronfenbrenner Center for Translational Research.

Goals:
The Evaluation Partnership project aims to develop and extend organizations' capacity to make sound decisions about evaluation, which ultimately leads to improved programs. Specific goals include:

  • Build evaluation capacity
  • Develop an evaluation community
  • Develop and implement high-quality evaluation plans
  • Improve evaluation practices

 

