Evaluation Protocol - Guide and Workbook
Protocol publications are available to download here (updated 10/25/16). You will be asked for contact information and your intended use of the protocol; this information helps us monitor how the protocol is being used, and you will not be added to any email lists. You will then be redirected to a page with links to the resources, which you may wish to bookmark.
The Guide to the Systems Evaluation Protocol: Evaluation Planning (2016) (v3.1 updated 10/07/16). (80 pages, 13550 KB) This is the third version, written for outreach programs regardless of their evaluation expertise. Users are encouraged to have an Evaluation Champion help their team work through the steps of the Protocol. This version integrates with the Netway and refers to worksheets available either online at the Netway (http://evaluationnetway.com) or in the accompanying workbook.
The Workbook to the Systems Evaluation Protocol: Evaluation Planning (2015) (166 pages, 2871 KB) This is a companion piece to the Guide, with new and updated worksheets, as well as protocol descriptions and answers to frequently asked questions about program modeling and evaluation planning. These resources were developed for and taken directly from the Netway, our online software for program modeling and evaluation planning (see www.evaluationnetway.com).
Outline of Protocol Steps - Evaluation Planning, Implementation and Utilization (2010) (4 pages, 325 KB). Although most of our work has been in evaluation planning, we have found that programs frequently also need support with implementing their plan and utilizing their findings. We have put a great deal of effort into thinking about these phases of evaluation from a systems perspective, but the implementation and utilization steps have not yet been developed as fully as the evaluation planning phase.
About the Protocol
The Systems Evaluation Protocol (SEP) integrates principles from systems theories into program evaluation, so that programs using it incorporate these principles when developing program pathway models, identifying key pathways and nodes (outputs and outcomes), determining the boundary conditions for program models, assessing program lifecycles, and selecting evaluation designs appropriate to program evolution. The SEP is a standardized protocol that nevertheless enables any program to develop an evaluation uniquely tailored to that program. In this sense it addresses the administrative need to standardize evaluation approaches while respecting the variety of contexts within which programming is conducted. As a reminder, there are three phases to evaluation: Planning, Implementation, and Utilization. Currently, our Guide and Workbook present resources only for the first phase - Evaluation Planning.
Putting evaluation concepts into a simple set of steps requires that we present the Guide to the Systems Evaluation Protocol in a linear format. In fact, an important objective of this work has been to instill the idea that effective modern evaluation requires evaluators to move beyond a linear mindset. The Guide to the Systems Evaluation Protocol (SEP) is more than a sequence of steps and a list of factors to consider when designing an evaluation - it describes the process of developing an evaluation plan. Working through the Protocol consists of collaborative meetings that spiral through several focal points over time, as well as ongoing work to build a culture of evaluation in the participating organization. This process is essential to the nature of the SEP. It is through these discussions that members of the organization and its program practitioners develop a new outlook on their work, changing both their understanding of how program stakeholders perceive the program and their sense of purpose in what they are doing and why. Good evaluation requires feedback and is embedded within a dynamic, changing system. Although any written document is by definition linear, systems evaluation is a non-linear and iterative process (see "Simple Rules"). We expect that in various contexts it will be appropriate to perform steps out of the presented sequence or in tandem, and to revisit steps repeatedly throughout the process.
Throughout the Guide, we refer to the reader as the "Evaluation Champion." The Guide specifically articulates the unique facilitation techniques and strategies that the Evaluation Champion may use, as well as the role that he or she plays when conducting systems evaluation. This term is intended to be inclusive, and applicable to any professional who may be using this Guide to plan or help plan an evaluation. An Evaluation Champion should be thought of not only as a facilitator of the Systems Evaluation Protocol but as a driving force encouraging everyone to think about evaluation, and to build evaluation activities into all program management and practice within the organization. In addition to the Evaluation Champion, we will also refer to the "Working Group." This is also intended to be an inclusive term, describing any members of the organization who are working together through the steps of the Protocol. In some cases this may include collaborating program staff exclusively, while in other cases this term may refer to members of the organization from various levels in the organizational hierarchy (program staff, administrators, funders) as well as participants and related stakeholders.
The Systems Evaluation Protocol has applications well beyond the field of STEM education. The guides outline the protocol steps for evaluation planning (but not for evaluation implementation and evaluation utilization). When followed, this series of repeatable steps can lead to the creation of a project logic model, a project pathway model, and an evaluation plan. In our experience, program leaders and staff who work through the Protocol learn evaluation skills applicable to all of their organization's program activities, and their paradigms of program evaluation and development broaden to encompass the greater system within which a program is embedded.
This work has been supported by grants from the National Science Foundation (Award #0535492, Award #0814364, Award #1346848, and Award #1322861).