AEA Presentations

Evaluation 2012 Presentations

Capacity Building “Spillover”: Further Applications of Evaluation Skills
Poster Presentation 135 to be held Wednesday, Oct 24, 7:00 PM to 8:30 PM
Presenter(s): Miranda Fang, Claire Hebbard, Monica Hargraves

Abstract: After participating in evaluation capacity building (ECB) training, program and organization staff increasingly use their evaluation skills and resources in work areas beyond program evaluation. Many programs that have participated in our NSF "Phase II Trial of the Systems Evaluation Protocol for Assessing and Improving STEM Education Evaluation" have used the on-line evaluation planning tool for additional projects. This poster presentation illustrates this observed spillover: how programs that have gone through evaluation planning training have applied these skills to other programs and management activities. Adapting evaluation skills to other management activities is a step toward building an organizational "culture of evaluation."

Session Title: Overcoming Organization Culture to Adopt Evaluation Capacity Building
Multipaper Session 276 to be held in 200 D on Thursday, Oct 25, 1:00 PM to 2:30 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s): Thomas Archibald (Cornell)

Promoting Evaluative Thinking, the Missing Ingredient in Evaluation Capacity Building
Presenter(s): Thomas Archibald, Jane Buckley, Jennifer Urban, William Trochim

Abstract: When you do an evaluation for someone, she has an evaluation report. When you teach someone evaluation, she can evaluate her own program. But when you promote evaluative thinking, she sees her work and the world in a whole new light. This reworking of the common saying about fish and fishing is intended to suggest the transformative nature of evaluative thinking (ET), an essential element of evaluation capacity building that is too often absent. This paper reviews existing efforts to define, operationalize, and measure ET, and in doing so, explores ET’s significance. Then we present some pedagogical theories and practices (e.g., role plays, simulations, hands-on work) which we have found successful in promoting ET in our context, a multi-year ECB initiative with non-formal science, technology, engineering, and math (STEM) educators. We hope this paper will help people interested in ECB gain new perspectives on how to intentionally foster evaluative thinking.

Session Title: Simple Rules: A Demonstration of the Systems Evaluation Protocol for Evaluation Planning
Demonstration Session 315 to be held in 101 A-D/G-J on Thursday, Oct 25, 2:40 PM to 4:10 PM
Sponsored by the Systems in Evaluation TIG and the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s): Monica Hargraves, Jane Buckley, Thomas Archibald, Claire Hebbard, Jennifer Brown Urban, William Trochim

Abstract: The Systems Evaluation Protocol (SEP) offers a research-based approach that takes into account the complex systems within and around a program to develop high-quality program models and evaluation plans. The methodology draws on evaluation theory, evolutionary theory, and systems science. Consistent with a systems perspective, it is flexible enough to fit the needs of any program and comprehensive enough to meet the needs of multiple stakeholders. This demonstration will present the Protocol as an example of the systems concept of "simple rules" and explain the steps of the Protocol and their theoretical foundations. The Protocol integrates and operationalizes complex concepts to provide practical guidance for evaluation planning. The SEP has been utilized by diverse educational programs in the NYS Cooperative Extension system, in NSF-funded Science Outreach offices, and elsewhere. Examples from this experience will be presented.

Session Title: Gap or Trap? Rethinking Evaluation’s Response to Evidence-Based Programs and the Research-Practice Gap
Multipaper Session 742 to be held in 205 A on Saturday, Oct 27, 8:00 AM to 9:30 AM
Sponsored by the Theories of Evaluation TIG
Chair(s): William Trochim
Discussant(s): William Trochim

Abstract: Research and practice have historically operated in independent realms. We posit that evaluation is situated in a key position between these realms and, therefore, that evaluators play a key role in linking these two areas. Yet what that role is and how it can best be played are unclear. The evidence-based program (EBP) movement is perhaps the most dominant approach to bridging the research-practice gap, but EBPs and similar approaches provoke more hard questions than they purport to answer. This panel presents three distinct yet related perspectives on how evaluators might respond to the thorny questions surrounding EBPs and the research-practice gap. In turn, our papers reconceptualize EBPs to include and value a plurality of validities, explore some problematic assumptions and ethical effects of EBPs, and demonstrate a novel practical approach to research-practice integration. Given the pervasiveness of the research-practice gap and EBPs, this panel has broad implications for evaluation.

An Evolutionary and Developmental Systems Perspective on “Evidence-based Programs”
Jennifer Urban, Monica Hargraves, Thomas Archibald, Marissa Burgermaster, William Trochim

Both practitioners and researchers are responsible for program development, yet each follows a unique process, brings unique strengths, and faces unique challenges. Researchers commonly develop programs based on basic research on defined constructs and then move to establish their effectiveness through "rigorous" methods such as RCTs. Practitioners commonly develop programs based on knowledge of the local context and are responsive to local needs. Typically, programs that are deemed "evidence-based" are researcher-derived and have established effectual validity. Practitioner-derived programs less frequently achieve "evidence-based" status, and this may be due to their focus on establishing viable validity and ecological validity. This paper argues that it is our definition of evidence-based programs that is flawed, not necessarily the practitioner-derived programs themselves. By taking an evolutionary and developmental systems theory perspective on program development and evaluation, the criteria for evidence-based programming are reconceptualized to include and equally value effectual validity, viable validity, ecological validity, and transferable validity.

Mind the gap: Knowledge, Power and Tacit Assumptions in the Research-Practice Gap
Thomas Archibald, Monica Hargraves, Marissa Burgermaster, Jennifer Urban, William Trochim

The evidence-based program (EBP) and translational research (TR) movements are intended to “bridge the research-practice gap” and focus resources on doing “what works” in the “era of accountability.” Both of these movements contain, and rely on, a number of often unspoken assumptions about the nature of evidence, knowledge, and social action, putting the hierarchical division between “scientific” and “everyday” ways of knowing into sharp relief. Contrary to many dominant accounts, the work people do with EBPs and TR is not politically neutral, unbiased work focused on instrumental, technical problems; rather, it is implicated in contemporary (and contentious) transformations of social programs and social life. This paper problematizes the epistemological, ontological, and praxeological assumptions of EBPs and TR, rethinks the “research-practice gap,” and synthesizes salient theoretical perspectives on these questions in an attempt to promote more equitable and effective responses to the problems which EBPs and TR are purported to solve.

Applying the “Golden Spike” to the Research-Practice Gap: Leveraging Evidence by Linking it to Program Logic Models
Miranda Fang, Monica Hargraves, Jennifer Urban, Thomas Archibald, Marissa Burgermaster, William Trochim

There is a continued need to find mechanisms that enable a better connection between knowledge generation and application. The "golden spike" approach (Urban & Trochim, 2009) is a method that builds on theory-driven evaluation and logic modeling to help link program theory and evaluation measures to a research-derived evidence base. Using resources including the Educational Resources Information Center (ERIC), we established a structured procedure for academic literature searches based on extracting key constructs and phrases from a program's pathway model (a logic model showing hypothesized causal connections). This approach eases the undue burden often placed on practitioners and local evaluators to demonstrate evidence of long-term program impacts. In this session, we present how to apply this approach through literature searches guided by a program's pathway model. Reference: Urban, J., & Trochim, W. (2009). The role of evaluation in research–practice integration: Working toward the "golden spike." American Journal of Evaluation, 30(4), 538-553.
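As a purely illustrative sketch of the extraction step described above (the pathway model, constructs, and query format are hypothetical, not the authors' actual tool or data), each hypothesized causal link in a pathway model can be turned into a keyword query for a bibliographic database such as ERIC:

```python
# Illustrative sketch only: constructs on each hypothesized causal link of a
# pathway model are paired into keyword search queries.

pathway_nodes = {
    "A": "hands-on science activities",
    "B": "science interest",
    "C": "STEM career aspiration",
}
pathway_edges = [("A", "B"), ("B", "C")]  # hypothesized causal connections

def build_queries(nodes, edges):
    """Pair the constructs on each causal link into a search query string."""
    return [f'"{nodes[cause]}" AND "{nodes[effect]}"' for cause, effect in edges]

for query in build_queries(pathway_nodes, pathway_edges):
    print(query)  # e.g., "hands-on science activities" AND "science interest"
```

Queries like these would then be run against the literature to locate studies evidencing each link in the program's pathway model.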

Session Title: Teaching Systems Evaluation
Think Tank Session 767 to be held in 102 D on Saturday, Oct 27, 9:50 AM to 10:35 AM
Sponsored by the Systems in Evaluation TIG and the Teaching of Evaluation TIG
Presenter(s): Jennifer Urban
Discussant(s): Marissa Burgermaster, Samantha Spencer, Kelly Panchoo, Michael Puccio

Abstract: Systems evaluation is a relatively new sub-field of evaluation, and given its increased prominence and popularity, it is appropriate to consider how the next generation of systems evaluators will be trained. This think tank will convene evaluators who are interested in exploring the essential components of a course specifically focused on teaching systems evaluation. The discussion of course components will be grounded in current best practices in curriculum design and will emphasize presenting a balanced, inclusive view of systems evaluation in the development of a course on this topic. By the end of the think tank, attendees will have: (1) reached some consensus about the core components of a course on systems evaluation; and (2) identified any areas of discrepancy or disagreement and explored why these might exist and how best to present these issues to students of systems evaluation.

Session Title: Assessing Impact of Science and Technology and Media in Global Programs

Multipaper Session 959 to be held in 201 A on Saturday, Oct 27, 2:40 PM to 4:10 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s): John LaVelle, Claremont Graduate University

Enhancing the Quality of Rating Systems in International Aid Evaluation
Presenter(s): Kanika Arora, William Trochim

Abstract: Rating systems have evolved as an integral part of international aid evaluation. A wide array of donor agencies, such as the World Bank, UNDP, and USAID, routinely employ ratings with the aim of quantifying the qualitative judgments of evaluators. Ratings also enable organizations to aggregate and compare a complex portfolio of projects within and across agencies. However, there are a number of critical issues involved in the construction and implementation of a rating system, any of which can have a determinative effect on the validity or reliability of the ratings that are produced. This paper develops a conceptual framework for addressing important methodological challenges and enhancing the quality of rating systems. Results from a recent assessment of an international aid organization's rating system were used as the basis of this framework. Recommendations provided to the organization to enhance the quality of ratings across various types of evaluations are also discussed.

Session Title: Honing your Cultural Competence: Providing Practical Ways to Enhance your Practice

Multipaper Session 743 to be held in 205 B on Saturday, Oct 27, 8:00 AM to 9:30 AM
Sponsored by the Multiethnic Issues in Evaluation TIG
Discussant(s): Jennifer Williams, JE Williams & Associates

Making Culturally Responsive Decisions in Evaluation Practice
Presenter(s): Wanda D Casillas (with Rodney Hopson, Duquesne University, and Ricardo Gomez, University of Massachusetts)

Abstract: We present an organized list of culturally responsive evaluation (CRE) principles and discuss the potential for making these ideas actionable with an illustrative example of culturally responsive decision-making. We begin by validating and situating the disconnect between CRE theory and CRE practice in a recent concept mapping study. We then synthesize and organize a set of CRE principles that can be used to guide CRE action. We provide examples of how each of the principles has been addressed in practice in the evaluation literature and then offer a vignette to illustrate a constellation of the principles in context. Recognizing that a well-constructed set of principles and an example of how they unfold in context may not provide sufficient guidance for practice, we illustrate a step-by-step example of how CRE principles can be applied at different decision points in a formal evaluation methodology, the Systems Evaluation Protocol.

Session Title: Using Community to Support Evaluation Capacity Building
Multipaper Session 117 to be held in Conrad C on Wednesday, Oct 24, 4:30 PM to 6:00 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s): Syreeta Skelton, ICF International

The Community PLEDGE: Building Evaluation Capacity Among Community-Based Program Leaders
Presenter(s): Natalie Cook

Abstract: In the Program Leadership Education in Design, Growth, and Evaluation (PLEDGE) program, aspiring community-based program leaders participate in a 10-week workshop series led by an undergraduate research assistant in the Cornell Office for Research on Evaluation (CORE). Based on the Systems Evaluation Protocol (SEP), the participants assess their program names, write mission statements and descriptions, create pathway and logic models, choose measures, and compile evaluation plans. Although the participants do not read the SEP themselves, they are presented with the key concepts in a way that may be more accessible to individuals of varying education levels. At the PLEDGE symposium, participants present their programs to invited stakeholders, including executive directors and staff members of community organizations and representatives from local foundations. This paper will describe the process, including successes and challenges, of building evaluation capacity among community members in Ithaca, NY.

Session Title: Evaluation Policy and Evaluation Practice: Creating Frameworks for Quality Evaluation across Diverse Practice Settings
Multipaper Session 667 to be held in 200 A on Friday, Oct 26, 4:30 PM to 6:00 PM
Sponsored by the Evaluation Policy TIG

Classifying Evaluation Policies
Presenter(s): Margaret Johnson

Abstract: Efforts are currently underway in the US federal government to improve and strengthen evaluation practice and increase the use of evaluation results to inform policies and programs. However, these efforts remain unrealized, due partly to the lack of a comprehensive framework identifying the main types of evaluation policy an organization should consider. To generate a set of relevant types of evaluation policy for the US context, this study surveyed 600 members of the American Evaluation Association in 2009. Participants were asked to brainstorm examples of evaluation policy and then sort and rate them. Results were analyzed using a concept mapping technique. The end product is an evaluation policy inventory instrument, including step-by-step instructions for its use in organizations.
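For readers unfamiliar with the analysis step, here is a minimal sketch, with made-up data, of the kind of computation a concept mapping technique involves (aggregating participants' sorts into a similarity matrix, scaling it into two dimensions, and clustering the statements); the actual study's software and data are not reproduced here:

```python
# Minimal concept-mapping sketch with hypothetical data (6 statements,
# 3 participants' sort piles), not the study's own analysis.
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster

sorts = [
    [0, 0, 1, 1, 2, 2],
    [0, 0, 0, 1, 2, 2],
    [0, 1, 1, 1, 2, 2],
]
n = len(sorts[0])

# Count how often each pair of statements was sorted into the same pile,
# then convert counts into a dissimilarity matrix.
co = np.zeros((n, n))
for pile in sorts:
    for i in range(n):
        for j in range(n):
            if pile[i] == pile[j]:
                co[i, j] += 1
dissimilarity = len(sorts) - co

# Two-dimensional MDS map of the statements.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarity)

# Hierarchical clustering of the map into statement clusters.
clusters = fcluster(linkage(coords, method="ward"), t=3, criterion="maxclust")
print(clusters)  # cluster membership for each of the 6 statements
```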

Session Title: Novel Applications of Case Study Methods: Evaluating Complex CTSA Research Environments (Clinical and Translational Science Awards)
Multipaper Session 925 to be held in 205 D on Saturday, Oct 27, 1:00 PM to 2:30 PM
Sponsored by the Health Evaluation TIG
Chair(s): Janice A Hogle, University of Wisconsin, Madison

Translational Forensics: Presenting a Protocol for Retrospective Translational Research Case Studies
Cath Kane, Samantha Lobis, William Trochim

The Weill Cornell CTSC will present our protocol for retrospective case studies of successful translational research. Here, a case is defined as a drug, a medical device, or a surgical process/procedure that has already been translated into clinical practice. Key informants from the CTSC suggest these case examples, in addition to using FDA approval, Cochrane Reports, and Medicare/Medicaid approval as proxies for the end point of successful translation. Once the cases and end points have been identified, we work retrospectively, using a kind of "translational forensics" via literature reviews and CV data mining, to determine a genesis point for each case. This information is then used to identify key informants, who are interviewed to document the entire story of the research translation, from the genesis of an idea to clinical application. The interview data are analyzed and collated, relevant durations are calculated, and the TR pathways are depicted visually.

 

Evaluation 2011 Presentations

Evaluative Thinking: What is it? Why does it matter? How can we measure it?

Thomas Archibald, Jane Buckley and William Trochim

Evaluation capacity building (ECB) focuses on facilitating the development of individual and organizational competencies and structures—such as evaluation knowledge and an organizational culture of evaluation—that promote sustained, high quality evaluation. Evaluative thinking is also mentioned in the ECB literature as an important attribute, yet such references are often fleeting. In this paper, we first present our rationale for highlighting evaluative thinking as an important component of ECB practice and as an object of inquiry within research on evaluation. Second, we draw on cognitive and education research to help develop and clarify the construct of “evaluative thinking.” Finally, we explore some ways of operationalizing and measuring this construct, considering both qualitative and quantitative methods. Our exploratory work on measuring evaluative thinking is situated in a project designed to promote evaluative thinking and foster high quality internal evaluation among non-formal science, technology, engineering and math (STEM) educators.

Interested individuals may download the draft measures here. (Purpose and email will be requested)

 

Evaluation in the context of lifecycles: "A place for everything, everything in its place"

Jennifer Urban, Monica Hargraves, Claire Hebbard, Marissa Burgermaster, William Trochim

One of the most vexing methodological debates of our time (and one of the most discouraging messages for practicing evaluators) is the idea that there is a "gold standard" evaluation design (the randomized experiment) that is generally preferable to all others. This paper discusses the history of the phased clinical trial in medicine as an example of an evolutionary lifecycle model that situates the randomized experiment within a sequence of appropriately rigorous methodologies. In addition, we propose that programs can be situated within an evolutionary lifecycle according to their phase of development. Ideally, when conducting an evaluation, the lifecycle phase of the program will be aligned with the evaluation lifecycle.
This paper describes our conceptualization of program and evaluation lifecycles and their alignment. It includes a discussion of practical approaches to determining lifecycle phases, the implications of non-alignment, and how an understanding of lifecycles can aid in evaluation planning.

 

Using Relational Databases for Earlier Data Integration in Mixed-Methods Approaches

Natalie Cook, Claire Hebbard, William Trochim

This paper discusses the challenges of data management and analysis in a mixed-methods research project. The focus of the paper is on the use of a single MS Access database to allow for both integrated data management and efficient integrated analysis. Modern evaluation teams face many challenges, and the technology available to address them is often only marginally sufficient to manage the complexity it creates, especially when quantitative and qualitative data are integrated in the analysis. Communication is always critical and, as in this case, becomes even more challenging when team members are geographically dispersed.
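As a purely illustrative sketch of the underlying idea (the paper itself used MS Access; SQLite stands in here, and the tables and fields are hypothetical), related tables keyed to a single participant record let one query pull quantitative scores and qualitative codes together:

```python
import sqlite3

# Hypothetical schema: one participant table, with related tables for
# quantitative survey scores and qualitative interview codes.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE participants (id INTEGER PRIMARY KEY, site TEXT);
CREATE TABLE survey_scores (participant_id INTEGER REFERENCES participants(id),
                            item TEXT, score INTEGER);
CREATE TABLE interview_codes (participant_id INTEGER REFERENCES participants(id),
                              code TEXT, excerpt TEXT);
""")
cur.execute("INSERT INTO participants VALUES (1, 'County A')")
cur.execute("INSERT INTO survey_scores VALUES (1, 'eval_confidence', 4)")
cur.execute(
    "INSERT INTO interview_codes VALUES "
    "(1, 'evaluative thinking', 'We started questioning our assumptions.')"
)

# A single join brings quantitative and qualitative records together,
# which is the kind of early integration the paper describes.
for row in cur.execute("""
    SELECT p.site, s.item, s.score, i.code, i.excerpt
    FROM participants p
    JOIN survey_scores s ON s.participant_id = p.id
    JOIN interview_codes i ON i.participant_id = p.id
"""):
    print(row)
conn.close()
```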

 

Evaluation and Research-Practice Integration: What are Our Roles and How Can We Play Them Better?

Thomas Archibald, Marissa Burgermaster

The need to effectively and efficiently integrate research and practice is a daunting problem facing most, if not all, scientific endeavors. This is especially true in social scientific inquiry. Traditionally, practitioners focus on particular contexts whereas researchers focus on the production of generalizable knowledge. Biomedicine, education, and other social domains have attempted to bridge the research-practice gap (e.g., through evidence-based practice and translational research). Often, these efforts have been criticized for their top-down nature. On the other hand, practitioner resistance to research often stymies the impact of research findings. Yet both researchers and practitioners want to focus on "what works" (especially in resource-constrained times). We posit that evaluation can play a crucial role in research-practice integration, but that it is currently unclear how. In this session we will briefly present the issue and then facilitate brainstorming sessions to generate dialogue among our peers on this topic.

 

The Development and Validation of Rubrics for Measuring Evaluation Plan, Logic Model, and Pathway Model Quality

Jennifer Urban, Marissa Burgermaster, Thomas Archibald, Monica Hargraves, Jane Buckley, Claire Hebbard, William Trochim

A notable challenge in evaluation, particularly systems evaluation, is finding concrete ways to capture and assess quality in program logic models and evaluation plans. This paper describes how evaluation quality is measured quantitatively using logic model and evaluation plan rubrics. Both rubrics are paper-and-pencil instruments that assess multiple dimensions of logic models (35 items) and evaluation plans (73 items) on a five-point scale. Although the rubrics were designed specifically for use with a systems perspective on evaluation plan quality, they can potentially be utilized to assess the quality of any logic model or evaluation plan. This paper focuses on the development and validation of the rubrics and will include a discussion of inter-rater reliability, the factor analytic structure of the rubrics, and scoring procedures. The potential use of these rubrics to assess quality in the context of systems evaluation approaches will also be discussed.
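For readers unfamiliar with the reliability step mentioned above, here is a minimal sketch with made-up scores (the rubric data and the specific statistic shown are assumptions, not drawn from the paper) of one common way to check agreement between two raters on an ordered five-point rubric item:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical scores: two raters scoring the same eight logic models
# on a single five-point rubric item.
rater_a = [5, 4, 3, 4, 2, 5, 3, 4]
rater_b = [5, 4, 4, 4, 2, 4, 3, 4]

# Quadratically weighted kappa penalizes large disagreements more than
# near-misses, which suits ordered rubric scales.
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"weighted kappa = {kappa:.2f}")
```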

 

 

 

Evaluation 2010 Presentations

Systems Theories in Evaluation Planning: Differentiating Planning Process from Evaluation Plan – Thinktank

A Systems Approach to Building and Assessing Evaluation Plan Quality – panel

Links for the presentation:

Systems Protocol
Facilitator's Guide to Evaluation Planning
Logic Model Rubric
Evaluation Plan Rubric

Evaluation and Program Quality - A Systems Perspective on the Challenges in Finding Measures for High-Quality Evaluation – multipaper

Empowerment Evaluations: Insights, Reflections, and Implications - The Need for Social Theories of Power in Empowerment Evaluation – multipaper

From Agent-Based Modeling to Cynefin: The ABC's of Systems Frameworks for Evaluation - Using Systems Thinking Concepts in Evaluation of Complex Programs

Evaluation Management Policies: Examining Requirements of Quality Evaluation - Evaluation Policy Inventory - multipaper

Mapping Stakeholder Views of Evaluation Questions and Plans - panel

Practicing Culturally Responsive Evaluation: Graduate Education Diversity Internship (GEDI) Program Intern Reflections on the Role of Competence, Context, and Cultural Perceptions - Part II - Being Culturally Responsive in the Digital World

 

 

Evaluation 2009 Presentations

Building and Evaluating a System-Based Approach to Evaluation Capacity Building - Systems approaches to evaluation capacity building are essential for developing effective evaluation systems. This session describes a multi-year NSF-supported project designed to develop a comprehensive approach to evaluation planning, implementation, and utilization that is based on systems approaches and methods. We present the idea of a systems "evaluation partnership" (EP), the social and organizational network necessary to sustain such an effort, which emphasizes building consensus, using written agreements, and delineating appropriate roles and structures to support evaluation capacity building. At the heart of the EP are: the systems evaluation "protocol," a specific, well-designed sequence of steps that any organization can follow to accomplish a high-quality evaluation; and the integrated "cyberinfrastructure" that provides a dynamic web-based system for accomplishing the work and encouraging networking. This session describes the EP, the approaches used to evaluate it, and the results to date, and sketches plans for future development. William Trochim, Jane Earle, Thomas Archibald, Monica Hargraves, Margaret Johnson, Claire Hebbard.

Alignment of Program and Evaluation Lifecycles. This poster describes four major lifecycle phases in outreach programs and their corresponding evaluation lifecycles; it then presents results of lifecycle analyses from over fifty programs to date and discusses the broad-based application of this approach. Evaluation is critical for linking educational programs to participant learning. But small programs may lack evaluation capacity and expertise, and they often have limited resources for evaluation. Program managers must determine the need for evaluation and identify whether they are capable of conducting the evaluation themselves. New programs need rapid feedback and are typically interested in process and participant satisfaction, but in later phases they will be interested in the program's association with change. Mature programs want causal evidence, and the most mature programs will look at the consistency of this evidence across various settings. These needs may conflict with funders' need for causal evidence, no matter the level of development of the program. (handout)
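As a toy illustration of the alignment idea (the phase labels and mapping below are paraphrased from this abstract and are not the official protocol terminology), a planned evaluation focus can be checked against a program's lifecycle phase:

```python
# Illustrative mapping of program lifecycle phase to the evaluation focus
# this abstract describes as appropriate for that phase (assumed labels).
EXPECTED_FOCUS = {
    "new":         "process and participant satisfaction",
    "developing":  "association with change",
    "mature":      "causal evidence",
    "very mature": "consistency of causal evidence across settings",
}

def check_alignment(program_phase, planned_focus):
    """Return whether the planned focus matches the phase, and what fits."""
    expected = EXPECTED_FOCUS[program_phase]
    return planned_focus == expected, expected

aligned, expected = check_alignment("new", "causal evidence")
if not aligned:
    print(f"Misaligned: a new program is usually better served by {expected}.")
```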

Developing Criteria for Addressing Diversity in Evaluation of Science, Technology, Engineering, and Mathematics (STEM) Programs. This project was designed to 1) establish criteria for conducting a culturally responsive evaluation of STEM programs and 2) establish criteria for assessing cultural responsiveness in STEM program planning. A secondary objective was to evaluate the relationship between points 1 and 2. We recruited staff from programs conducting evaluations throughout New York, as well as evaluators nationally who identified a concern with diversity issues in evaluation through membership in American Evaluation Association topical interest groups. We employed a standard concept mapping methodology in which participants were asked to generate statements about behaviors and attitudes toward diversity in evaluation and program planning. After participants sorted and rated the statements produced, we employed The Concept System software to produce statement clusters and created a taxonomy of the behaviors and attitudes identified across staff and evaluators. A comparison of evaluator and program staff ratings was also conducted to assess differences between groups. Wanda Casillas, William Trochim

Evaluation Planning for 4-H Science, Engineering and Technology Programs: A Portfolio From Cornell Cooperative Extension. Science, Technology, Engineering and Mathematics education has become a national priority in many arenas, including the 4-H system. The Cornell Office for Research on Evaluation (CORE) is conducting an NSF-funded research project that established 'Evaluation Partnerships' with twenty Cornell Cooperative Extension Offices to develop Evaluation Plans for 4-H Science, Engineering and Technology (SET) programs. The Evaluation Partnerships follow a Systems Evaluation Protocol developed by CORE that includes stakeholder and lifecycle analyses, and logic and pathway modeling. The selected 4-H SET programs cover a range of science topic areas and program delivery modes. This paper describes and analyzes the evaluation plans developed by this cohort, and examines the types of evaluation questions, measures, and designs identified in these plans. Of particular interest are questions of commonality of needs and transferability of solutions. The paper considers the implications of this research for evaluation theory and practice. Monica Hargraves

Federal Evaluation Policy and Performance Management - This study presents a look at the views of professional evaluators on the essential components of federal evaluation policy. In the spring of 2008, a random sample of members of the American Evaluation Association was surveyed to learn what they thought should be included in a comprehensive set of U.S. federal evaluation policies. Using the concept mapping methodology developed by Trochim, responses were grouped, rated, and analyzed. The results constitute a taxonomy of evaluation policy at the federal level, as well as a comparative analysis of views by member sub-group. Margaret Johnson, William Trochim.

Clinical and Translational Science Awards (CTSA) Evaluators' Survey: An Overview - At the 2008 CTSA evaluators meeting in Denver, several members gathered structured input from their colleagues toward the development of a basic CTSA Evaluators Survey. The goal of the survey was to investigate common trends in three critical areas: 1) the management of evaluation, 2) data sources and collection methods, and 3) analysis or evaluation activities currently being conducted. The survey was conducted in the spring of 2009, with CTSA evaluation staff intended as both the survey participants and the primary audience for findings. The survey aimed to provide a comparative overview of all current CTSA evaluators, to allow each team to solicit best practices, and to allow the various CTSA evaluation teams to orient themselves relative to their peers. As such, a review of the survey results will act as a fitting introduction to the series of case studies gathered for this multipaper session of CTSA evaluators. Cath Kane [with S Kessler (Northwestern) and K Wittkowski (Rockefeller)]

Evaluation 2008 Presentations

Defining and Implementing Evaluation Policies to Sustain Evaluation Practice in Extension Programs - The Cornell Office for Research on Evaluation (CORE) has been working with county Extension offices in New York State through an "Evaluation Partnership" (EP) that brings a systems-based approach to evaluation planning. Seven county Extension associations have been actively involved in the EP so far; they completed evaluation plans for diverse educational programs in 2007 and are implementing them in 2008. Their experience in Extension generally, and with systematic Evaluation Planning in particular, forms the background for a pilot study of Extension Evaluation Policy. Idea statements and other responses were gathered from Program Staff and Senior Administrators in these Extension Offices, as well as from statewide Extension and 4-H leadership. These data were analyzed using Concept Mapping technology to yield: a taxonomy of Evaluation Policy components; a menu of specific tools and procedures that can support evaluation practice; and insights into what can help make evaluation "work" in an Extension environment. The Concept Mapping analysis includes participant ratings of the individual idea statements on two criteria: "potential for making a difference" and "relative difficulty." The combination of these average ratings for each statement yields a "Go-Zone" of promising first steps toward an effective policy for promoting and sustaining evaluation practice (handout). Monica Hargraves
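As a small, purely illustrative sketch of the "Go-Zone" idea (the ratings below are invented, and the quadrant rule shown is one plausible reading: above-average potential combined with below-average difficulty), statements can be flagged programmatically:

```python
# Hypothetical average ratings per statement:
# (potential for making a difference, relative difficulty), each on 1-5.
statements = {
    "Adopt a common evaluation plan template": (4.5, 2.1),
    "Require RCTs for all programs":           (3.0, 4.6),
    "Schedule annual evaluation reviews":      (4.1, 2.8),
    "Hire a full-time county evaluator":       (4.4, 4.4),
}

avg_potential = sum(p for p, d in statements.values()) / len(statements)
avg_difficulty = sum(d for p, d in statements.values()) / len(statements)

# Flag statements with above-average potential and below-average difficulty
# as promising first steps.
go_zone = [s for s, (p, d) in statements.items()
           if p > avg_potential and d < avg_difficulty]
print("Go-Zone statements:", go_zone)
```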

 

Evaluation 2007 Presentations