Research Questions
From pre-publication copy of Gullickson, A. M., King, J. A., LaVelle, J. M., & Clinton, J. M. (2019). The current state of evaluator education: A situation analysis and call to action. Evaluation and Program Planning, 75 (August 2018), 20–30.
Future Research Agenda
CIPP component: Context
Sub-component: Needs
Who needs evaluation training and in what contexts?
What specifically do they need to know and be able to do?
How do those needs relate to each other?
What existing professional development curricula and programs have been designed to meet these needs? What content do they cover?
What explicit and implicit needs exist for prescriptive and descriptive evaluation theory?
How do these needs change from context to context, and country to country?
What are the goals of evaluator education programs? To develop practitioners, researchers, capacity-builders, educators, or…?
CIPP component: Inputs
Sub-component: Competencies
What are the implications of the transdisciplinary nature of evaluation for identifying and prescribing a subset of evaluation-specific knowledge, skills, abilities, and other characteristics[1] (KSAOs) (Brannick, Levine, & Morgeson, 2007) that all evaluators must have?
What are the commonalities across existing competency sets? What is core internationally and what is contextual?
What aspects of existing competency sets are evaluation-specific (i.e., essential), and which are contextual (i.e., discipline- or sector-specific)?
Which competencies are essential for individuals and which for evaluation teams?
What, if any, competencies are missing from or hidden within broad statements in the current sets?
[1] Knowledge: level of mastery of a technical body of material; Skill: “capacity to perform tasks requiring the use of tools”; Ability: capacity to perform the required physical and mental acts that do not require tools; Other characteristics: “interests, values, temperaments, and personality attributes” (all quotations from Brannick et al., 2007, p. 97)
CIPP component: Inputs
Sub-component: KSAOs
What are the required KSAOs for the different groups who need evaluation training? To what extent are they present in existing competency sets?
Are there any additional capabilities or dispositions necessary for quality evaluation practice?
CIPP component: Inputs
Sub-component: Taxonomies and other frameworks
CIPP component: Processes and Products
CIPP component: Overarching
To address these questions, we propose the following list of potential tasks:
a) Identify current evaluator training curricula and pedagogies (including professional development and other non-formal options) and conduct research on their efficacy in relation to evaluation practice
b) Add to the current competency frameworks via multi-sector job and task analysis related to evaluation
c) Map the updated competencies to developmental taxonomies to understand learning progressions
d) Conduct multi-disciplinary literature studies and syntheses to explore key knowledge related to these competencies and build on what already exists
e) Review good practice in teaching critical thinking, argumentation and logic, and interpersonal skills
f) Explore indigenous ways of knowing and evaluating to expand our thinking
g) Consolidate and synthesize research on how other disciplines have established their core and quality standards to springboard this work
h) Adopt and sustain a practice of iterative synthesis on key topics, learning from the education discipline; the New Zealand government process can serve as an exemplar (Timperley, Wilson, Barrar, & Fung, 2007)
i) Develop an evaluation-specific research database (or subsection within an existing database) to consolidate the relevant literature and improve consistency of searching and synthesis
j) Launch an international effort focused on accreditation of evaluation training based on these developments
k) Conduct research to explore and document quality evaluator training (both formal and informal)
l) Implement reporting standards (e.g., CHESS; Montrosse-Moorhead & Griffith, 2017) that enable valid meta-evaluation