Research Questions

From pre-publication copy of Gullickson, A. M., King, J. A., LaVelle, J. M., & Clinton, J. M. (2019). The current state of evaluator education: A situation analysis and call to action. Evaluation and Program Planning, 75 (August 2018), 20–30.

Future Research Agenda

CIPP components - Context

Sub-components - Needs

Potential Research Questions

Who needs evaluation training and in what contexts?

What specifically do they need to know and be able to do?

How do those needs relate to each other?

What existing professional development curricula and programs have been designed to meet these needs? What content do they cover?

What explicit and implicit needs exist for prescriptive and descriptive evaluation theory?

How do these needs change from context to context, and country to country?

What are the goals of evaluator education programs? To develop practitioners, researchers, capacity-builders, educators, or…?

CIPP components - Inputs

Sub-components - Competencies

Potential Research Questions

What are the implications of the transdisciplinary nature of evaluation for identifying and prescribing a subset of evaluation-specific knowledge, skills, abilities, and other characteristics[1] (KSAOs) (Brannick, Levine, & Morgeson, 2007) that all evaluators must have?

What are the commonalities across existing competency sets? What is core internationally and what is contextual?

What aspects of existing competency sets are evaluation-specific (i.e., essential), and which are contextual (i.e., discipline- or sector-specific)?

Which competencies are essential for individuals and which for evaluation teams?

What, if any, competencies are missing from or hidden within broad statements in the current sets?

[1] Knowledge: level of mastery of a technical body of material; Skill: “capacity to perform tasks requiring the use of tools”; Ability: capacity to perform the required physical and mental acts that do not require tools; Other characteristics: “interests, values, temperaments, and personality attributes” (all quotations from Brannick et al., 2007, p. 97)

CIPP components - Inputs

Sub-components - KSAOs

Potential Research Questions

What are the required KSAOs for the different groups who need evaluation training? To what extent are they present in existing competency sets?

Are there any additional capabilities or dispositions necessary for quality evaluation practice?

CIPP components - Inputs

Sub-components - Taxonomies and other frameworks

Potential Research Questions

What relevant developmental taxonomies already exist for the competencies/KSAOs (e.g., communication, interpersonal skills, and project management)?

What are the developmental stages of the remaining competencies/KSAOs? That is, which skills and knowledge are more complex?

How do these developmental stages relate within and across types of learners, competency domains, and contexts?

How might frameworks for the development of expertise (e.g., Dreyfus & Dreyfus, 2005) relate to the competencies/KSAOs and their developmental stages?

What levels of expertise in each of the competencies/KSAOs are necessary for good evaluation practice?

What levels of skill/expertise should be the goal of different types of education (professional development, undergraduate, graduate, PhD)?

Which of the competencies/KSAOs overlap with other disciplines?

CIPP components - Processes and Products

Sub-components - Insights from other disciplines

Potential Research Questions

What can we learn or borrow from other disciplines in terms of teaching strategies and assessment for the relevant competencies/KSAOs? For example, what are feasible, evidence-based practices for learning from field experience? For project management?

CIPP components - Processes and Products

Sub-components - Teaching

Potential Research Questions

What kinds of curriculum, training situations, and learning experiences will best meet the needs of the different types of learners?

What are good strategies for teaching the various competencies/KSAOs?

How might the teaching strategies and learning experiences differ by the level of the course (introductory, intermediate, advanced) and the end products/applications?

What competencies/KSAOs can/should be grouped together for teaching?

CIPP components - Processes and Products

Sub-components - Assessment

Potential Research Questions

What are good strategies for assessing the various competencies/KSAOs?

What measurement tools already exist to assess evaluator knowledge, skills, and dispositions?

What will need to be developed to address needs for baseline assessment at entry into courses, exit measures of learning progression, and tools for career planning?

What competencies/KSAOs can/should be grouped together for assessment?

CIPP components - Processes and Products

Sub-components - Training for teachers

Potential Research Questions

What can we learn from existing research in education to enable teachers of evaluation to understand their impact on student learning?

How can teachers learn and apply evidence-based practices for differentiating instruction?

What training is needed for those who will mentor or supervise[2] students in field experiences?

[2] Rather than assuming a good evaluator will also be a good coach, these individuals will need instruction and practice in quality supervision. The transition phases and intervention strategies suggested by Brown (1985) provide a starting place.

CIPP components - Overarching

Sub-components - Models, criteria

Potential Research Questions

What logic models and program theories can inform and delineate education initiatives for evaluation?

What existing criteria (e.g., the Program Evaluation Standards) define quality in evaluation practice and evaluation education? To what extent are they sufficient? What else is needed?

Do we need articulated performance standards on those criteria to enable evaluators and evaluation consumers to assess the quality of evaluations and evaluation reports? If so, what should those standards be, and who should set them?

What differentiates an excellent evaluation from a mediocre or poor one?

What differentiates excellent evaluation education from mediocre or poor education for the various types of learners and levels?

To address these questions, we propose the following list of potential tasks:

a) Identify current evaluator training curricula and pedagogies (including professional development and other non-formal options) and conduct research on their efficacy in relation to evaluation practice

b) Add to the current competency frameworks via multi-sector job and task analysis related to evaluation

c) Map the updated competencies to developmental taxonomies to understand learning progressions

d) Conduct multi-disciplinary literature studies and syntheses to explore key knowledge related to these competencies, building on what already exists


e) Review good practice in teaching critical thinking, argumentation and logic, and interpersonal skills


f) Explore indigenous ways of knowing and evaluating to expand our thinking


g) Consolidate and synthesize research on how other disciplines have established their core and quality standards, as a springboard for this work


h) Adopt and sustain a practice of iterative synthesis on key topics, learning from the education discipline; the New Zealand government process can serve as an exemplar (Timperley, Wilson, Barrar, & Fung, 2007)


i) Develop an evaluation-specific research database (or a subsection within an existing database) to consolidate the relevant literature and improve consistency of searching and synthesis


j) Launch an international effort focused on accreditation of evaluation training based on these developments


k) Conduct research to explore and document quality evaluator training (both formal and informal)


l) Implement reporting standards (e.g., CHESS; Montrosse-Moorhead & Griffith, 2017) that enable valid meta-evaluation

© Copyright 2020 by The International Society for Evaluation Education