Research Questions

From pre-publication copy of Gullickson, A. M., King, J. A., LaVelle, J. M., & Clinton, J. M. (2019). The current state of evaluator education: A situation analysis and call to action. Evaluation and Program Planning, 75 (August 2018), 20–30. https://doi.org/10.1016/j.evalprogplan.2019.02.012

Future Research Agenda

To address these questions, we propose the following list of potential tasks:

a) Identify current evaluator training curricula and pedagogies (including professional development and other non-formal options) and research their efficacy in relation to evaluation practice

b) Extend current competency frameworks through multi-sector job and task analyses related to evaluation


c) Map the updated competencies to developmental taxonomies to understand learning progressions


d) Conduct multi-disciplinary literature studies and syntheses to explore key knowledge related to these competencies, building on what already exists

e) Review good practice in teaching critical thinking, argumentation and logic, and interpersonal skills

f) Explore indigenous ways of knowing and evaluating to expand our thinking

g) Consolidate and synthesize research on how other disciplines have established their core and quality standards, as a springboard for this work

h) Adopt and sustain a practice of iterative synthesis on key topics, learning from the education discipline; the New Zealand government process can serve as an exemplar (Timperley, Wilson, Barrar, & Fung, 2007)

i) Develop an evaluation-specific research database (or a subsection within an existing database) to consolidate the relevant literature and improve consistency of searching and synthesis

j) Mount an international effort to accredit evaluation training based on these developments

k) Conduct research to explore and document quality evaluator training (both formal and informal)

l) Implement reporting standards (e.g., CHESS; Montrosse-Moorhead & Griffith, 2017) that enable valid meta-evaluation