Through the National Impact of Library Public Programs Assessment (NILPPA), ALA is quantifying the impact of library programming.
Makerspaces. Beekeeping classes. All-ages coloring. Glow-in-the-dark story time. Butchering demonstrations. Reading aloud to therapy dogs. Slumber parties for adults. A Lego robotics team. What is your library doing?
Libraries across the US are offering some incredibly cool programming — and yet it’s been challenging to document the full range of what’s happening. That’s one of NILPPA’s core questions: How can we characterize and categorize public programs offered by libraries today?
We want to know what types of programs are being offered, who is offering them, and how. Down the line, this information will help us understand the impact of public programming. But first, we need a baseline.
Measuring impact, defining categories
The NILPPA research team started out by looking at existing ways of measuring impact, which is already a hot topic in the library field. Here are just a few examples:
- Project Outcome is a free toolkit, funded by the Bill & Melinda Gates Foundation, that helps public libraries understand and share the impact of their essential programs and services.
- Measures that Matter is an IMLS-COSLA collaboration that aims to improve the field’s ability to collect and report measures meaningful to telling the story of the public library.
- The University of Washington’s Impact Survey helps public libraries understand how people use their technology resources and services.
Starting with these and similar initiatives, we determined that the most useful way to categorize library programming is to consider five main dimensions:
- Library profile
- Program characteristics
- Program audience
- Program administration
- Intended outcomes
For each dimension, we then identified a primary question, typically one relating to program goals.
Library profile: What type of library is it?
Different types of libraries (public, K-12, academic, special) reach very different publics. Knowing what type of library we are looking at lets us begin to understand the range of the audience.
Program characteristics: What is the program topic?
Just as libraries contain books that cover the full spectrum of topics, library programs include a very broad topic range. Understanding which topics are most popular has implications not only for future programming but also for collections development.
Program audience: Who is the targeted audience? Is the program trying to appeal to the library’s entire audience or a subset?
Some programs are designed for a library’s full public; others are intended to serve a particular group, often one that is underserved by other institutions or systems. Comparing the target audience to the actual audience shows whether a program met that aspect of its goals. If a program is intended for the library’s full audience, we want to see that the people who show up are representative of that larger public. And if a program is meant for a specific group, we want to make sure those people are attending — and anyone else who comes is a bonus.
Program administration: How was the program developed?
There are three chief models of program development: programs are developed by the library itself, by or with a community partner, or by a national organization such as ALA. Each model offers different possibilities.
Intended outcomes: What is the most important intended outcome?
Our team agreed that outcomes fall into seven main groups. Participants can gain new knowledge or learn new skills. They can change their attitudes or their behavior. They can become more aware of all the things that libraries do. They can have fun or be exposed to something new, like art or food. Finally, they can meet and engage in dialogue with people they would not have encountered otherwise. A typical program may combine elements of all seven, but one will often be the most important in determining whether the program succeeds.
Next steps: What do you think?
The next thing we need to know is this: do library professionals find this categorization meaningful and complete? We’re developing a survey to test the validity of this system. If we find that we’re way off, we’ll change it. Once we refine the categories, we can design the best ways to capture impact. We’ve all observed how libraries bring value to their communities; being able to demonstrate this rigorously will give the field a more powerful voice with stakeholders and funders.
In the meantime, we would love to hear your thoughts:
- What’s the coolest program your library is running?
- What dimensions of programming do you think are the most important to capture? Why?
NILPPA: Phase I is made possible in part by the Institute of Museum and Library Services grant number LG-96-17-0048-17.