Assessing Transformative Programming

This post summarizes a talk presented at the Library Marketing and Communications Conference on Nov. 17, 2016.

As someone who started out as an instruction librarian, I tend to prefer the assessment methods of the classroom: pre/post instruction surveys, document analysis, peer-to-peer review, reflection, etc. In my role as the outreach & communications librarian this past year, I have endeavored to apply my classroom experience to how we measure the success of our programs at Loyola Marymount University (LMU). To that end, I have begun creating expected learning outcomes for many of our events.

Part of the strategic mission of LMU's William H. Hannon Library is to prioritize "formative and transformative" programming. The library hosts between 45 and 50 events per year. These programs are wide-ranging in format and appeal: a haunted library during Halloween, story time for the on-campus pre-school, therapy dogs during finals, in-house art workshops, author talks, poetry readings, read-outs, exhibition receptions, film screenings and so on. Our level of leadership varies depending on the strength of our partners and how many external organizations we are working with, but for each event we provide at least some strategic, operational or financial support. To prioritize how we utilize our time and resources, we use the rubric of "transformation" as measured by learning outcomes.

Developing Learning Outcomes

Creating learning outcomes for library programming is not all that different from developing learning outcomes in the classroom. Of course, in library instruction the learning outcomes often come before the lesson plan; with library programming, it's often the other way around, and the idea or structure of the program comes first. Still, here are a few tips:

  • Be specific: What will the participants learn? How will they feel? What behavior will be changed as a result?
  • Make it measurable: Word the outcome so it can be assessed, preferably with no more than one or two questions or indicators.
  • Focus on what the participants will learn (as opposed to what the event organizers will do): What thresholds will be crossed?

In order to create learning outcomes that measure transformation, I find it also helpful to consult Bloom's taxonomy (i.e., remembering, understanding, applying, analyzing, evaluating and creating). I ask myself, "Am I hoping to dive deeply into a specific level within the taxonomy? Or am I asking attendees to move between levels?" Once I can answer that question, I ask, "How am I going to measure this?"

Measuring Learning Outcomes

[Image: LMU Human Library]

There are as many ways to measure learning outcomes as there are research methodologies, but given the number of events we hold each year, I have found surveys the easiest to implement within our time constraints. For example, for the past two years we have hosted a Human Library. The learning outcome we developed for this event was: participants will recognize and/or demonstrate the importance of conversation as a means to address issues of discrimination, prejudice and stigma.

Within this outcome, there are two possible indicators to measure:

  1. Whether participants articulate a positive relationship between conversation and empathetic thinking. 
  2. Whether participants actually walk away with a demonstrable change in perception.

To measure these indicators, we asked two questions:

  1. The Human Library is a good way to challenge discrimination, prejudice or stigma. Do you agree? (Options: strongly agree, agree, undecided, disagree, strongly disagree)
  2. Do you feel that you learned anything about discrimination, prejudice or stigma today? If so (or if not!), what did you learn?

The first question seeks to measure whether participants recognize the importance of conversation in challenging discrimination, and the second asks them to articulate any change in perception in themselves. Not surprisingly, most participants responded to the first question positively; after all, Human Library participants are, for the most part, a self-selected population. However, the responses to the second question get at the heart of what makes programming transformative. Some of the responses from our most recent Human Library included:

  • "In my conversation with the [human books] we discussed how strangers have preconceived notions of who or what he ought to be like as a blind man. I think our conversations today helped challenge those stigmas."
  • "I often think of religion as being slightly constraining. Yet, he presented an open and all encompassing idea of what spirituality can be."
  • "I learned that everybody's story is different. The life-scripts are not the same for all people who might look the same or seem to have similar experiences."

One could go a step further and rate these responses according to a "rubric of transformation," where 1 represents no evidence of perceptual change, 2 represents little evidence of perceptual change, and so on. We are currently developing such a rubric at LMU and hope to use it for (among other things) justifying how we prioritize library programming.

Building Learning Outcomes into Program Planning

For the most part, I use learning outcomes assessment internally, among my own team and within the library. Moving forward, I would like to begin including our programming partners in developing these outcomes. I would also like to utilize assessment methods beyond the survey, including object analysis, focus groups and longitudinal surveys. Like many librarians responsible for programming, I tend to be thinking about the next event long before the current one is complete. However, when done systematically and with intention, programming assessment can yield the richest of data!