Ultimately, we work this hard to make a difference. We are trying to move the needle on what people do. This is true for the trainers reading this who want to know whether time in the classroom (or on Zoom) transfers into a change in approach or behavior. It is also true for the nonprofit leaders among you who run programs toward some outcome: expanded services, a protected environment, communities enriched through the arts, and so on.
How do we know if we made a difference? And how do we evaluate simply, given all of the demands on our time and the fact that, too often, evaluation is not funded?
Fortunately, we have evaluation experts to turn to as we expand how we measure the difference we are making. Learning expert Will Thalheimer recently released the second edition of his book on learning evaluation, Performance-Focused Learner Surveys. I regularly share his brilliant Learning Transfer Evaluation Model (LTEM) because it succinctly demonstrates how we might move evaluation from perception to actual performance.
Evaluation expert Chari Smith, author of Nonprofit Program Evaluation Made Simple, provides a similar model for nonprofit programs. Her book’s tagline cuts right to the point of evaluation: Get your data. Show your impact. Improve your programs. Think about that word “show” and what it means. (I’ve been thinking about it as I roll out a new finance training, Show Me The Money, and plan for ways that people will show what they know.) To show means to make visible, no longer hidden or assumed. This model, too, moves evaluation from perception to actual performance.
Some reflections distilled from the work of these two experts:
- Evaluation as a tool to learn and improve is a culture issue that needs to be integrated into everything we do. It involves strategy, planning, and systems.
- Evaluation starts with the end in mind. What are you trying to change? What should be different because of that class or program? The answer to that will drive the ways that you can measure that change.
- Evaluation invites us to consider what questions we ask and how we ask them. Thalheimer challenges us to write “distinctive questions” (questions with descriptive answers to choose from) rather than questions with vague Likert scales. I particularly like how he invites us to consider what is acceptable in terms of the responses we see. For example, if a CPA learns a little at a finance training, that is just as acceptable as a novice learning a lot. Our goal is not for everyone to learn the same amount.
- Evaluation is a team effort. Smith tells the all-too-common story of a grant writer needing data described in a proposal that the program manager wasn’t collecting. From a training perspective, the trainer can measure learning in the moment but cannot measure how it was applied without collaborating with the manager or organization. We have to work together.
- Evaluation can be simple and done well. We have all been in those meetings where colleagues chase every possible “interesting” data point. When we focus on what we need to learn to improve performance, ours and theirs, we can pare the data down to just what we need.
How do you know if you are making a difference?
Upcoming learning opportunity
If you are looking to hone your evaluation skills, I encourage you to join Smith for Logic Models Made Simple… and Impact Models Too! on May 12. She will share simple steps to tighten up your evaluation practice. You will leave more confident about ways to build your road map to the kind of evaluation that helps you show your impact.