Is your program evaluation form letting you down? Ask yourself these five questions.
Admit it: you have a love/hate relationship with program evaluation forms. As the people who plan library programs, we understand that these forms are necessary for ensuring program integrity and value, but as participants, we dislike completing them. So how can we get the information we need while honoring the opinions (and time) of our program attendees?
Much of the current research on evaluation forms (and yes, people do research this topic) offers advice on creating tools that gather the information that matters most about your program and to your stakeholders.
Here's your five-question guide to crafting a quality evaluation form:
- Is it consistent across multiple programs? In the article "What's Wrong — and What's Right — With Rubrics," W. James Popham suggests that the two biggest problems with evaluation tools are that they are either too specific or too general. An evaluation designed for a networking event may not work equally well for a computer instruction class. However, there are common themes — event time, space and quality — that apply to both programs and represent important feedback to capture. Are there ways you can slightly reword questions to obtain the same data? For example, changing "The instructor was knowledgeable..." to "The presenter was knowledgeable" yields similar information while being more intuitive for the evaluator.
- Does it tell an accurate story? Before designing your rubric, consider what information you'll need to improve programs and report to stakeholders. With many library programs, you may want to ask your patrons to evaluate:
- the instructor or presenter
- the materials provided to participants
- the program space and length
Also, consider asking them targeted questions. The Association of College and Research Libraries (ACRL) offers the following questions for potential inclusion in evaluations:
- What was the most important thing you learned from the program today?
- Was there anything about the presentation you would change?
- Please suggest ideas for future speakers or topics.
- Does it include an incentive? We'd all love to believe attendees fill out evaluations purely to help the library improve. However, anyone who has attended a library conference can tell you that swag encourages more complete evaluations. Is there something you can give your attendees, such as a certificate of completion, a CE form or a tote bag, in exchange for a completed evaluation form? Find that incentive and watch your return rate increase!
- Is it short? Many evaluation experts recommend using a Likert scale on your forms. While this rating scale allows for a wide range of responses, don't assume that more choices mean better choices. As Michael Simkins points out in his article on creating rubrics, "What's the difference between fair and good service?" If you're not sure, chances are your attendees won't know either. Simkins recommends a rubric with four evaluative levels for the most effective feedback. Once you've completed your rubric, ask yourself: how many short-answer questions do you usually tackle on evaluations? While it's important to leave room for attendees to offer open-ended feedback, consider limiting the number of additional questions you ask.
- Does it use simple language? Use short, direct questions as often as possible. One of my favorite evaluations included an intuitive version of a Likert scale (see image at right). You may also want to ask someone who isn't attending your program to review the evaluation to make sure it is clear and jargon-free.
Take advantage of this opportunity to (quickly) pick attendees' brains. When it comes time to brainstorm new program ideas, it'll make your job easier and ensure better attendance.
Have you created an evaluation form of which you are particularly proud? Tell us about it!
Interested in learning more about evaluations? Here are the resources I referenced:
- ACRL Tips for Program Evaluation Forms
- Survey Monkey: The Likert Scale Explained
- Simkins, Michael. "Designing Great Rubrics." Technology & Learning 20.1 (Aug 1999): 23-30. (Accessed via ProQuest)
- Popham, W. James. "What's Wrong — and What's Right — With Rubrics." Educational Leadership 55.2 (Oct 1997): 72-75. (Accessed via ProQuest)