I was recently at a meeting in which one of the participants challenged the idea that LIS scholars should investigate what occurs in makerspaces “after the fact.” Her point (as I understood it) was that practitioners should have the skills to pragmatically assess outcomes themselves, instead of waiting for white-coated researchers to come and tell them what was “really” happening.
I had to let this percolate a while. Because while she was absolutely spot-on, she was also absolutely missing (part of) the point.
My answer to this critique is that we need both feet-on-the-ground practitioners and theoretically rich scholars to look at this stuff together. Librarians should do outcomes assessment. AND scholars should do research. Ideally each would help the other do their thing better for the good of the field in general. But I feel like this needs some fleshing out, so I’ll do that here:
- Some reasons why practicing librarians could kick ass as researchers
Practicing librarians (contrary to the expectations of some academics, many of whom have never worked in a library) are pretty smart. They are fully capable of the reflexivity, theoretical richness, and careful conceptualization that good research entails. They are not trained to do this, but so what? Anyone can learn anything, and learning research skills, though a complex task, is well within the range of any moderately intelligent person.
Librarians have the added advantage of understanding the complexities of a case—why something relates to something else, why so-and-so does that, and how x caused y. It can take a researcher AGES to learn this stuff, and they never get it all.
Librarians are situated where the action is. Researchers have to make up fake situations (we call them experiments) if we can’t get access to the “real” action. (Yes, there are other reasons for experiments. Whatever.) Researchers are dying to get access to the real empirical data and interactions, and librarians are marinating in the stuff. Lucky them.
Practicing librarians are already gathering all kinds of data, including survey data and all sorts of metrics.
- Why they generally DON’T kick this research-ass
They are not, as I noted, trained in research. They often ask really lousy questions, which are not carefully explicated, and don’t link practice to any sort of overarching theory (all of which could be said about a hunk of LIS research in general). They may or may not be well-read in social theory, and they likely have no idea of the scope of LIS research in general. They often don’t have access to the expensive databases containing the research articles, or don’t read research for a variety of other reasons.
They may be too close to the stuff happening. Reflexivity is possible, but challenging. They also may be too close to see that their stuff is interesting or useful for a general audience. They may have relationships with patrons that both help AND hinder research (I think this is likely a wash, with somewhat more advantages accruing to being situated and participatory already, but I have little evidence to support that opinion).
They may be looking “too” micro at stuff that could be explained with “macro” level theory. Anything from Bourdieu to Heidegger might be useful in conceptually linking what’s occurring on the ground, but they may not know that.
Practitioners are gathering data, but this data may be too inwardly-focused or individualistic to be useful, other than as a “how we done it good” explanatory case study. Their outcome measurements might be focused on the “wrong” thing, or at least rest on an unexamined set of assumptions (again, so so so true for LIS research in general).
Here’s an example: at the meeting where this issue came up, the focus was on “learning.” When one focuses on learning instead of other outcomes, one uses a particular set of measuring tools and assumptions that may not be the most ethical ones, the richest ones, whatever. Or they might be exactly what is needed, for very particular reasons. But if those reasons aren’t carefully sussed out, the research won’t be useful at all. Case in point: the “social capital” research done by a few people in LIS (not the awesome Norwegian PLACE studies) doesn’t appear to really understand social capital, uses quirky tools to measure it, and isn’t very generalizable.
Finally, in the interests of time and maximizing service, ideally practitioners would focus on practice, and scholars on research, with a lot of partnering up and cross-pollination. In this way, practitioners don’t have to reinvent the wheel, but can add their amazing situated knowledge to a project and let the academic types do the theoretical backstory work. And academics will add the depth and flavor (in a perfect world) and perhaps even some new points of view to practice. And let’s face it, librarians have enough on their underfunded plate without having to learn research from scratch on top of it.
But, for whatever reason, some practitioners appear to dislike theory, academic researchers, and the idea of academics doing research in their libraries. I’m not sure why. Probably LIS does a lousy job of teaching theory and showing how it frames every decision we make from the moment we get up in the morning. But I hope more practitioners will start seeing the benefits of partnering with scholars to get grounded, useful research done on topics like:
Library fines—I’m working on this right now. What happens when fines go away? We don’t know, because no one has done the research since 1988.
Library boards—are they helpful or do they hinder? When does which thing happen? Are there best practices we could be disseminating to use our trustees’ insights while not getting bogged down if they try to micromanage? We don’t know; there’s no research.
Library policies—what happens when you add or change a policy on just about anything? We largely don’t know.
Library programs—there are like 2 things published on these. I exaggerate very slightly—other than the barely researched storytime, we know nothing.
And so on. I’m hoping the UWM SOIS Public Library Collaboratory will help to bridge research and practice so we can learn this stuff, and start gathering useful data that practitioners can actually use in their funding, staffing, programming, and policy-making decisions. If you were interested in participating in research at your library, what would you want to study? How involved would you want to be?