Evidence-based public libraries

How many public librarians are, right this very moment, struggling because they need more money but can’t find the evidence to make a stronger case to funding agencies about how critical library services are? I bet nearly every director or financial officer is. I know I am. Most of the other directors I know are in the same boat.

We need evidence. We need to demonstrate empirically what we do, why we do it, and what the outcomes are. We need it for funders. We need it for patrons/users/customers/members. We need it most of all for ourselves. Evidence that the bookmobile is a great idea and serves people we’d never get in our doors. Evidence that our adult workshops are meeting needs that cost-based programs aren’t. Evidence about people getting jobs, buying houses, learning guitar, making bike accessories*, having healthier families, writing books, being happy because they read and play with library stuff…so much evidence could be gathered to help us do our jobs better.

We need evidence for why all this stuff is a good idea, or why it doesn’t work the way we wish it would. If public library makerspaces don’t actually build more knowledge, skill, excitement, social interaction, participatory learning, or other socioeconomic benefits (and I can’t see how they couldn’t), it would be a bad idea for every library in the land to invest in them, right?

So where is the evidence on makerspaces? (I’m working on it–someone could throw me a bone, however.)

Where is the evidence for coffee in libraries? Should it be free or is it no big deal if it costs money? (I need to know, people. We’re talking about offering free specialty coffee in my library.)

Where is the evidence about using volunteers on digitization projects or even on just how patrons interact at the service desk?

Well, for the last two questions, the answer is: Here.

The wonderful Evidence Based Library and Information Practice journal’s first issue of 2012 focused on public libraries and the evidence they might want or need. It contains articles that synthesize research on summer library programs, on teens’ library needs, and on the social impacts of libraries; empirical research articles on library redesign and on customer experiences at service desks; and even an article addressed to researchers who want to use focus groups.

Every single article in this issue was of use to me in some way, as a director of a tiny rural library, and as a researcher.

The best part of the whole issue, for me, was Pam Ryan‘s call for more public library research:

Now, more than ever, with fiscal pressures and societal changes challenging the value of our public libraries, we need a strong base of evidence upon which to draw support and inform evidence based practice and advocacy efforts. The evidence base needs increased contributions about public library practice and value from both LIS faculty and practitioner-researchers to ensure balance and relevance.

Ryan then asked for support for public librarian-researchers, from positions on editorial boards and conference-organizing committees, to collaboration with academics, to actively seeking out and supporting public librarians who do research.

Yes! Please!

That’s what we hope to do here, in the limited time we have to offer as working librarians, researchers, and (in my case) a student. We could really use some help, especially from those more “in the know” about research and those more “in power” on those editorial boards, etc.

If you are willing to chip in and blog about what it takes to be a public librarian-researcher, please contact me. And if you’re a public librarian who wants to try a research project, what barriers do you face? Let us know, and we’ll do our best to find ways to overcome them.

*I wish I could link the bike accessories thing, because I read that a couple of people are 3D printing these things at a library makerspace and selling them, but I cannot find the reference again. So much for my vaunted librarian skillz.

[edit] By the way, of the 29 authors in the public library issue of EBLIP, only 12 were public librarians, and 10 of those 12 were from the Edmonton Public Library in Canada. While Edmonton Public Library obviously rocks, it’s depressing to note that 79% of the research and review articles had no public librarian authors. Of the four articles that did have public librarian authors, three were written by Edmonton librarians.


User research for beginners

Aaron Schmidt, who blogs at Walking Paper and writes “The User Experience” feature in Library Journal, wrote about how librarians can get started with user research in the January 2012 issue. The article, titled The User Interview Challenge, describes the process and some possible uses of user interviews.


Local research can have a wide-ranging impact

So you are interested in research, but only on a local level? After all, research is haaaaaard (cue Scott Pilgrim here).

You can search for library surveys online that can help you assess how you are serving your users, what they like and don’t like. Doing a survey like this can not only help you plan for the future of your library, but can also inspire research that is generalizable and useful for librarians everywhere.

For example, imagine this survey (adapted from a variety of sources; you are free to use & adapt it further) were done by every library in your consortium over the course of three weeks. You could then make solid, results-based statements about what’s going on across your consortium’s libraries, which might be interesting to other consortia as well.

Or you could find a tantalizing data point after doing a local survey and decide to follow up with an in-depth research project. For example, you could find that your community is clamoring for content creation, and decide to research content creation across the board.

Or say that you and eight other librarians–in libraries of various sizes, in various locales, alerted to an easy research opportunity via a listserv–did this same survey, and you analyzed the pooled results. You could use the resultant data to build a picture of user satisfaction that many, many librarians would find valuable. They could use your findings as a sort of yardstick to compare against their own. They could skip the surveying process and make plans based on your data. They could use your data as marketing, or as a call to action.
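To make the pooled-survey idea concrete, here is a minimal sketch of what that analysis could look like in Python. Everything specific in it is invented for illustration–the library names, the `satisfaction` field, and the 1-to-5 scale; in practice the rows would come from each library’s exported results rather than being typed in by hand.

```python
from statistics import mean

# Hypothetical pooled responses from several libraries' surveys.
# "satisfaction" is an invented 1 (very unsatisfied) to 5 (very satisfied) scale.
responses = [
    {"library": "Smithville", "satisfaction": 5},
    {"library": "Smithville", "satisfaction": 4},
    {"library": "Riverton", "satisfaction": 3},
    {"library": "Riverton", "satisfaction": 5},
    {"library": "Riverton", "satisfaction": 2},
]

def summarize(rows):
    """Pool satisfaction scores and report a few headline numbers."""
    scores = [r["satisfaction"] for r in rows]
    return {
        "n": len(scores),
        "mean": round(mean(scores), 2),
        "share_satisfied": sum(s >= 4 for s in scores) / len(scores),
    }

summary = summarize(responses)
print(summary)
```

Because each row keeps its `library` tag, the same data could later be broken out per library, so each participant gets a local picture as well as the consortium-wide one.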

I won’t really touch on the epistemological limitations of survey methods here, or on how to analyze survey results; that’s material for future posts. But here are some basic tips on the process of doing local research (whether or not you ever disseminate the results beyond your library board).

1. Don’t pass out the survey in your library.

You’ll just be reaching those who are already (to some extent) satisfied with what you offer. You want to know what people who DON’T use your library think. In my new director job, my first action (after the basic moving in, meeting the staff, etc.) will be to stand outside local businesses hawking a survey.

2. Only ask one thing at a time.

I was at a library where a survey asked many double-barreled questions, like “How satisfied are you with the hours & location of the X Library?” Well, that’s pretty useless. How does someone who hates the hours but loves the location answer it? If this seems like a no-brainer to you, you are not alone. But I’ve seen these types of questions many times, on all kinds of surveys.

Sometimes double-barreled questions are sneaky, and you may not realize you’re asking two different things. For example: “Please agree or disagree with the following statement: I’d like the library to be open later on Saturday and Sunday” (weekends may feel like one thing to you when you write the question) or “Do you think the library director should have more contact with patrons and staff?” If there’s an “AND” in the question, beware.
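You could even automate the “beware of AND” rule as a first pass over your draft questions. This is only a crude heuristic of my own, not an established method: it flags any question containing a standalone “and” or an ampersand, so it will raise some false alarms (and miss sneakier two-part questions), but it catches the obvious cases.

```python
import re

def flag_double_barreled(question):
    """Flag a draft question that may be asking two things at once.

    Purely a heuristic: it looks for a standalone "and" or an "&",
    so it can flag perfectly fine questions and miss cleverly
    worded two-part ones. A human still makes the final call.
    """
    return bool(re.search(r"\band\b|&", question, re.IGNORECASE))

drafts = [
    "How satisfied are you with the hours & location of the library?",
    "How satisfied are you with the library's hours?",
]
for q in drafts:
    if flag_double_barreled(q):
        print("REVIEW:", q)
```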

3. Make sure you will get the answer you actually need.

I once saw a survey question that asked whether a user wanted more of particular types of programs, but didn’t ask whether the user liked the quantity of programs already offered. So when a user didn’t check the “I want more storytimes” box, was it because he was satisfied with what the library already offered, or because he didn’t care about storytimes?

Questions like this are worse than useless–they can actually lead you to make assumptions that have no basis in the real feelings of your users. If you want to know whether you should offer more storytimes, ask a question that measures how many storytimes a user wants, or how satisfied they are with current offerings. Give them a chance to tell you what they actually want & like.

And leading questions should be verboten. No matter how legitimate you think the question “Don’t you think hard-working, MLIS-bearing librarians deserve to be paid at least as much as garbage collectors?” is, you’re muddying the waters with that phrasing. Instead, you could educate first, then ask:

Librarians with Master’s degrees typically earn a salary 25% lower than that of garbage collectors, who do not need a high school diploma.

Do you believe that the salaries of librarians with Master’s degrees are lower than they should be, higher than they should be, or about right?

A. Librarian salaries are lower than they should be
B. Librarian salaries are higher than they should be
C. Librarian salaries are about right
D. Don’t know/no opinion

And then hope the respondents choose option A. (PLEASE!) By the way, this is a made-up statistic. And I’m not dissing garbage collectors. They deserve every penny they make, and probably a lot more.

4. Make sure you offer paper surveys

Electronic surveys on SurveyMonkey or wherever are very handy for the busy librarian: you don’t have to do much with the data, since the computer compiles everything. But a large portion of your audience will not be willing or able to answer an online survey. People who are tech-averse, for whatever reason, deserve to be heard. So even if you do most of your surveying online, have a paper version of your survey available.
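Here is one hedged sketch of how the two streams come back together afterward: if the paper responses are typed in using the same column headers as the online tool’s export (the column names below are invented for illustration), pooling them is a few lines of Python, and tagging each response with its source lets you check later whether paper and online respondents answered differently.

```python
# Hypothetical responses: "online" from the survey tool's export,
# "paper" keyed in by hand with the same (invented) column names.
online = [
    {"q1_visits_per_month": "4", "q2_wants_later_hours": "yes"},
    {"q1_visits_per_month": "0", "q2_wants_later_hours": "no"},
]
paper = [
    {"q1_visits_per_month": "2", "q2_wants_later_hours": "yes"},
]

def combine(online_rows, paper_rows):
    """Tag each response with how it was collected, then pool them.

    Keeping the tag preserves the option of comparing the two groups
    instead of silently merging them.
    """
    combined = []
    for row in online_rows:
        combined.append({**row, "source": "online"})
    for row in paper_rows:
        combined.append({**row, "source": "paper"})
    return combined

all_rows = combine(online, paper)
print(len(all_rows), "total responses")
```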

5. Your paper and online surveys should be as similar as possible

For example, SurveyMonkey’s free account lets you have 10 questions per survey. Make sure the paper survey has the same length and format; otherwise you won’t quite be asking the same things of the different sets of patrons. This may or may not matter to you: if you’re doing “real” research, it matters; if you’re just checking the lay of the land, it may matter less.

Also, be aware that few people will click a second link to answer a second set of online questions–if, like me, you’re a cheapskate and just broke a 15-question survey into two parts to avoid paying for online survey services (I learn these things the hard way). Either finesse your survey down to the 10-question limit, like this, or ask only really unimportant questions in the second half (and what would be the point of that?).

6. Keep it short

The first survey I linked to above is really, really long; it tries to cover everything at once. It’s way better to have a shorter survey and run it a couple of times. In the case of the somewhat shorter online survey, a new library has been built and a new director is coming on board. The building decisions have been made for now, and the new director just needs to establish a baseline of satisfaction with what has come before and tentatively assess what the community would like to see offered. The director can go back and ask building-related questions later.

7. It’s better to get a few answers from a lot of people than a lot of answers from a few people

At least in the case of surveying about user satisfaction. This is a corollary of #6. By querying a wide range of people (of all sorts of demographics: economic, educational, religious, age, yadda yadda yadda…) on a couple of questions, you can comfortably believe that you have something approaching “truth” when you analyze the answers to those few questions. But some people don’t want to answer questions about their race, income, etc. In my opinion, you’re better served not asking those questions when possible, and instead finding a way to gather surveys from a range of people automatically. Be careful where you gather, though: if you collect your surveys in a WalMart entryway (assuming they’d allow that!), you may not be reaching either the people who have the resources to shop somewhere a little more posh, or those with so few resources that they can’t shop at all.

In future posts I’ll go into more depth on survey methods, the epistemology of research methods, and how to better craft & analyze a great survey. If you just can’t wait, check out the fairly comprehensive Survey Methods Workbook: