LibQUAL+ Survey Results and Practical Applications

>>Amy Yeager: Good afternoon and welcome to the
Association of Research Libraries webinar on LibQUAL+ results and
practical applications. I’m Amy Yeager, public relations program officer at ARL, and I’m joined by Angela Pappalardo, ARL program coordinator for events and finance and the LibQUAL+ survey liaison, and Michael Maciel, senior data analyst at Texas A&M University Libraries. Before we begin there are a few housekeeping items to mention. All participants’ lines have been muted to cut down on background noise, but there’s a chat box in the lower left-hand corner of your screen and if you have questions
type them in there. At the end of the webinar we’ll have a question and answer
session and answer any questions that come in but feel free to type them
in as you think of them. This webinar is being recorded and we will make the
recording available to AMICAL and you’ll get further instructions from them on
how to access it once it’s available. So now I’d like to turn the presentation
over to Angela Pappalardo.>>Angela Pappalardo: Hi everyone I’m Angela Pappalardo. Thank
you for joining us. Today I’ll be giving a brief overview of the survey
components and customization as well as the steps to run a survey and then I’ll
also briefly discuss interpreting the survey results before turning it over to
Michael Maciel, who will provide examples of practical applications from Texas A&M. LibQUAL+ questions are measured in three dimensions: affect of service measures the interpersonal aspects of service, such as empathy and responsiveness (an example is ‘willingness to help users’); library as place measures the physical environment of the library’s space (for example, community space for group learning and group study); and information control measures the content of and access to information resources. It includes types of content, convenience, ease of navigation, etc. An example of this is ‘the electronic
information resources I need.’ There are 22 core questions and there are also five
optional questions, five information literacy questions, as well as three
general satisfaction questions, three library use questions and up to three
demographics questions as well as a free text comment box. In 2010 we introduced a
shortened version of the survey called LibQUAL+ Lite. This is a customization
that you can apply in stage one where you select anywhere from 0 to 100%
Lite and respondents will randomly receive either the short or long survey
according to the percent you select. The median completion time for the long
survey is about 10 minutes and the median time for the Lite survey is a
little over five minutes. Each Lite questionnaire includes 8 of the core
questions, one optional question, one information literacy question, two
general satisfaction questions, three library use questions, and all of the
demographics questions that the library chooses as well as the comments box. The
core questions use a triple Likert scale where users are asked to evaluate a
statement such as “Library space that inspires study and learning,” and to give
three ratings: the minimum, desired, and perceived. The perceived rating
represents the level of service that the respondent believes is currently
provided and the minimum and desired ratings offer context for that
perception. Minimum represents the minimum level of service the user would find acceptable and desired represents the level of service that the
user personally wants—or their ideal level of service. LibQUAL+ can be run in
multiple languages. When you register you’ll see language choices
based on your region and then in your survey dashboard you’ll be able to
preview the survey in all available languages. There are
four steps to running the survey. When you log into your institution’s account
you’ll be taken to the survey dashboard, and that will display the
information you need for your current survey’s stage. Pre-launch will be your
customization stage. Stage two will be monitoring your survey
progress. Stage three is closing the survey, and stage four is post-survey
tasks and results. When you click on the link to configure your survey you’ll be
taken to a page with a series of tabs across the top for customizing various
aspects of the questionnaire. If your library’s previously run LibQUAL+, the
choices you made in your last survey will be carried over to this year
and you can decide to keep these features as they were on your previous
survey or you can change them. The first tab you’ll see is the
customization tab where you can upload a logo, establish your Lite
percentage, enter your dates, and choose your demographic items. If you choose to
award incentive prizes, your questionnaire will include a box where
respondents can optionally enter their email address. You can select up to five
optional questions in the optional questions tab by selecting them from the
existing bank of questions or by adding your own. If you choose to submit your
own they must be in the triple-Likert format and LibQUAL+ staff will moderate
the questions before adding them to the database. This usually takes a day or two. LibQUAL+ has standard sets of position
and discipline options and you can customize these labels with your local
terminology. Your results report will break down your findings by user group
with sections for each of the categories and we will go over this in more detail
later. On the questionnaire these categories are further broken down into
subgroups which are the response options that your users can select. This is how
it will look on your configuration screen. Your subgroups must match to a
reporting value. And then here’s the view of how this item looks on the survey
questionnaire. Only the user subgroup options—not the parent options—can be
selected. On the next tab you’ll see branch library options. You can enter the
response options for the question “The library you use most often.” This question
is optional so if your institution has only one library you can leave this item
blank. As with the position question, you can use your local
terminology to map to the standard list of disciplines. Too many choices can
present challenges to users so we recommend no more than 16 disciplines
and in your results notebooks there will be charts showing the number
of respondents from each discipline. In this example as with the position
options you’ll be able to enter custom text in the left column and these must
map to a reporting value in the right column. When you finish configuring
your survey it’s time to preview the questionnaire and launch. The preview
survey URL does not collect data but it gives you an opportunity to test your
questionnaire in different settings using different platforms and web
browsers. Once you launch you can’t make any changes to your configuration. After
you open your survey you’ll receive the survey URL to distribute to your users.
If you need to know the URL in advance for creating promotional materials we
recommend opening a few days or more before you announce it to your community.
In stage 2 you can also monitor the number of responses coming in by date,
time of day, branch, discipline, and position. You can also view and download
the comments. In your results report ARL provides an analysis of how well your
respondent sample represents your overall campus population. To do this we
ask you to fill out what we call a representativeness questionnaire where
you will provide population data for your user groups and
discipline areas. The representativeness questionnaire becomes available in stage
2 and it is based on the customizations that you make in
stage 1. In this example you’ll see a representativeness chart where the blue line shows the population in each discipline area as a percentage of the overall campus population and the red line shows the number of respondents. In this example the lines track fairly well, indicating that the distribution of respondents by discipline is representative of the campus overall.
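As a rough illustration of that comparison (not part of the LibQUAL+ tooling itself), here is a minimal Python sketch; the discipline names and counts are invented for the example.

```python
# Hedged sketch: compare each discipline's share of the campus population to its
# share of survey respondents. All names and counts below are illustrative only.
population = {"Engineering": 12000, "Business": 8000, "Liberal Arts": 6000, "Science": 9000}
respondents = {"Engineering": 420, "Business": 250, "Liberal Arts": 180, "Science": 300}

pop_total = sum(population.values())
resp_total = sum(respondents.values())

for discipline, pop_count in population.items():
    pop_pct = 100 * pop_count / pop_total
    resp_pct = 100 * respondents.get(discipline, 0) / resp_total
    print(f"{discipline:12s}  population {pop_pct:5.1f}%   respondents {resp_pct:5.1f}%")
```

If the two percentage columns track each other reasonably closely, the respondent sample can be read as representative of the campus, which is what the chart in the results report conveys visually.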
At the end of your survey run you can manually close your survey from the
survey dashboard. We recommend keeping your survey open for at least three weeks, and the system will ask you to confirm that you want to close.
This is an irreversible step so make sure you’re ready to close. As soon as
you close your survey, some of your survey data is immediately available on your dashboard. There will be three CSV files: the raw data, the options key, and the response key. These can be read in either Excel or SPSS.
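As a hedged sketch only, the same files can also be loaded with a tool such as pandas; the file names below are placeholders, since your dashboard shows the actual names.

```python
# Hedged sketch: load the three post-survey CSV files for analysis.
# File names are placeholders; use the names shown on your survey dashboard.
import pandas as pd

raw = pd.read_csv("libqual_raw_data.csv")        # one row per respondent
options_key = pd.read_csv("options_key.csv")     # maps coded response options to their labels
response_key = pd.read_csv("response_key.csv")   # maps data columns to question text

print(raw.shape)    # (number of respondents, number of variables)
print(raw.head())   # quick look at the first few responses
```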
You’ll also see the comments and the incentive emails list. There are also some optional
questionnaires, a post-hoc questionnaire where you can provide information about
your survey such as sample size and number of emails sent, as well as the
evaluation questionnaire which is where you can give feedback about your
LibQUAL+ experience. Your results notebook will be available on the survey
dashboard and in the LibQUAL+ data repository approximately two weeks after
you close your survey. You’ll receive an email notification as soon as the report
is uploaded. The notebook contains sections for overall,
undergraduates, graduate students, faculty, staff and library staff and within each
of these sections you’ll see a demographic summary, core question
summary, local questions, general satisfaction questions, information
literacy outcomes, and library use summary. LibQUAL+ scores have three
interpretation frameworks. Interpreting the perceived scores against the minimally
acceptable and the desired service level is what we call
the zone of tolerance. You can benchmark against peer institutions via the data
repository and analytics as well as through norms and you can benchmark
longitudinally which is the ability to know how your library is
changing over time. For each question, the averages of the respondents’ minimum and desired scores form a zone of tolerance. In this example that’s the gray box, and it’s bounded on the bottom by the minimum mean and on the top by the desired mean. The adequacy mean, represented here by the orange bar, measures how well the library is meeting users’ minimum expectations. The adequacy mean is calculated by subtracting the minimum mean from the perceived mean, so a positive adequacy mean shows the degree to which the library is exceeding those minimum expectations and a negative adequacy mean indicates that the library is failing to meet minimum expectations. If you have a negative adequacy mean, the orange bar would display below that gray box, and these scores would be noted in the results notebook with red text. The superiority mean is calculated by subtracting the desired mean from the perceived mean; this is the gray area above the orange bar. The superiority mean is usually a negative number, and it indicates the library’s room for improvement. If a library exceeds desired expectations, the perceived score would fall above the zone of tolerance and the superiority mean would be positive. If that happens, this chart would show the orange bar extending above the top of the gray box. This is another view of the same concept. In this example, the mean of the respondents’ minimum scores is 3 and the mean of their desired scores is 8, and the zone of tolerance is the 5-point range between the two scores. The perceived mean of 6 falls within the zone of tolerance, indicating that the library is exceeding its users’ minimum expectations. The adequacy mean is 3 and the superiority mean, which is the measure of the room for improvement, is negative 2.
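To make the arithmetic concrete, here is a minimal Python sketch of the gap-score calculations just described, using the numbers from this worked example; the variable names are mine, not LibQUAL+ terminology.

```python
# Minimal sketch of the gap-score arithmetic described above, using the worked
# example: minimum mean 3, desired mean 8, perceived mean 6.
minimum_mean = 3.0    # lowest service level respondents would find acceptable
desired_mean = 8.0    # service level respondents personally want
perceived_mean = 6.0  # service level respondents believe is currently provided

adequacy_mean = perceived_mean - minimum_mean      # positive: minimum expectations are exceeded
superiority_mean = perceived_mean - desired_mean   # usually negative: room for improvement

print(f"Zone of tolerance: {minimum_mean} to {desired_mean}")
print(f"Adequacy mean:    {adequacy_mean:+.1f}")     # +3.0 in this example
print(f"Superiority mean: {superiority_mean:+.1f}")  # -2.0 in this example
```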
Radar charts are another example you’ll see in your results notebook. These give you a snapshot view of the dimensions.
Each spoke in the wheel represents one of the core questions. Most charts will
display blue and yellow which indicates that the perceived score falls
within the zone of tolerance. Green indicates that perceived is above the
desired and red indicates that it’s below the minimum. This is a close-up
view of the radar charts and zones. Yellow shows the superiority gaps (the distance
from perceived to desired), blue shows the adequacy gap (the distance from minimum
to perceived), and green and red show scores above the desired and below the
minimum. Once you close your survey you’ll have immediate access to the raw
data and the comments. The results report will be delivered within 2 weeks and you
can find your survey results in the data repository along with the comments and
raw data files as well as the group notebooks. You can compare your data in
the analytics portion of the website where you will be able to compare the
aggregate data against that of other institutions in your same survey years.
You can also generate charts and view and download the data. You can conduct
peer benchmarking via the analytics page’s Data Explorer tab. You’ll select
your institution and years, generate your stats, and choose your peer institutions. Here are two examples of the charts and
tables you can generate in the analytics section. You’ll be able to view the
charts in the data table as well as download the data. Qualitative analysis
of the comments is another way to use your results. You will have access to the
comments immediately upon closing a survey and you can download them as
either an Excel or text file. The simple way to begin tackling your
results is to determine which services need attention by ranking services with
the highest desired scores and/or taking a look at the adequacy and
superiority gap scores. The D-M score model combines these three scores into
one and allows stakeholders to easily interpret their results.
Michael will talk about this in a bit more detail in the next section.
And then the right column of this slide highlights some additional ways to look
at your results. You can look at the top five most desired services, individual
user groups through a lens of awareness, or explore one or more particular
questions by discipline and user groups. Communicating your results is a critical
component of putting the results into action. Results need to be
communicated clearly to your stakeholders and be sure to consider the
needs of different user groups; faculty needs, for example, may be considerably
different than undergraduate needs. Determining whether library services are
meeting user needs or not can be tricky. In some cases it may be necessary to
implement new marketing strategies in addition to changing or adding services. And I’ve included a couple of links here
for further reading. The article in the second bullet point contains a detailed
description of the D-M score model that I mentioned. Now I will turn it over to
Michael Maciel, senior data analyst at Texas A&M University and LibQUAL+
super user, and he’ll give some practical advice on running the survey and
interpreting the results.>>Michael Maciel: Thank you, Angela. As Angela said my name is Michael
Maciel and I work at the Texas A&M University Libraries. Today I’d like to
present to your group some recommendations on setting up, running
and analyzing the LibQUAL+ survey and data. At the end of the presentation I
will talk about some of the projects that we have completed as a result of
reviewing data for and from LibQUAL+. So one of my first recommendations
concerns the population sample. One of the questions you may be asking yourself is
just “Do we survey the entire campus or do we
survey a random sampling of it?” My recommendation is that if you’re not
running a survey annually that you invite everyone to participate in the
survey. And don’t forget that you have other populations besides your faculty
and students. You have researchers and clinical staff, university
administration, and the library staff itself. I would recommend you consider
using local questions. Sometimes you’ll have a sense of an issue that you
particularly want to address at your campus and in your library. This is
a great way to do it. One of the questions, one of the options that
you’ll be given in the survey when you set it up is what percentage of the
survey do you want to conduct in the Lite version and what percentage of the
survey you want to conduct in the full version.
Now the full version will take considerably longer to complete whereas
the Lite version won’t take as long and you’ll probably get more respondents
that way, but you won’t necessarily get as many questions answered; you won’t get the full twenty-two core questions from the Lite version that you would from the full. So you really have to decide: do you want as many responses as possible
or do you want to look at the full 22 core question perspective from your
campus and from your survey participants? Before you start your
survey here are some recommendations. One, meet with your subject specialists, the
people that go out in the field, and meet with the students and with the faculty
and let them know in advance that the survey’s coming and give them some
talking points about the survey. Word-of-mouth is one way to generate
participation. Meet with your public service personnel, the people at your
circulation desks, interlibrary loan desk, and desks like that. Again, word of mouth
and them mentioning it might improve your participation rates.
Make sure that the entire library knows it is conducting the survey because you
never know what point of contact a staff person is going to have with a
student or with a faculty member so don’t look at just your public service
personnel but look at your technical behind-the-scenes personnel as well. Also
do a follow-up: send an email to your library personnel so that they can read
that and again have some bullets to refer to when talking about the survey
and then share the survey schedule and marketing materials with everyone just
so that when they start seeing especially the marketing materials,
signage, table tents, and items like this, that they know what’s on
them and it doesn’t take them by surprise. My scheduling recommendation is that we generally do the survey in the second semester, not the fall, but the spring
semester, and we generally do it mid-semester, typically either after or before
midterms. For the length of the survey, as Angela mentioned, run it for more than
three weeks. We’ve actually had better response rates by conducting it over a
45-day period and just sending out an occasional reminder. You don’t want to
send too many reminders out but you do want to send reminders just to
increase that participation rate. Check also with your campus to make sure that your institution-wide survey is not conflicting with another survey. We conduct student assessment surveys like NSTI and SERU, and one thing that we make sure of is that the invitations to participate in those
surveys do not conflict with the LibQUAL+ invitation. When appropriate, send out different emails with different email content to select user groups. We have
one email text that we send out to undergraduates, another type to
our master’s and PhD students, yet another to our
professional degree students, like our medical doctors and our veterinary
students. We do send out a separate one for faculty. The last time we ran the survey we sent out separate content to researchers, administration, and in some cases we highlighted certain colleges (for example the College of Nursing, which has had a history of low participation rates) where we’ve addressed specific email texts to those user groups. Be sure to
keep the emails brief. If you print out your email it should not
extend beyond the page and preferably three-quarters of a page. I’d recommend
you use bullets as opposed to full sentences. When you send out that email
invitation, give the people you’re inviting a reason to participate. List the service improvements that are important to the user group, and then also emphasize that user input drives improvements: the more they participate in the survey, the more we’re
going to be able to deliver exactly what they’re looking for.
I also recommend that the invitation come from either the dean of your
library or the director of your library and in saying that I also recommend that
you set up a separate email address for the dean, otherwise when people start
responding to the dean’s email you might just blow up her or his email box and more
to the point you want to be able as a survey administrator to be able to look
at those comments and address those comments, and I don’t know that many
deans that are going to give you access to their individual email account so set
up an alias for something that you have access to. I would recommend you begin
the survey on a Tuesday or Wednesday, delivered mid-morning. For the email schedule: send an initial invitation, then a first reminder, again on a Tuesday or Wednesday, and then a final reminder on the Wednesday of the week that the survey is scheduled to end. Announce that the survey will end on
Friday but keep it open through the weekend just to catch any stragglers. What you see here on the right side of
this slide is the marketing image that we use throughout the survey period. We’ve
created table tents that we put on study desks and at public service desks.
In addition to the survey URL that ARL provides, we actually created a user-friendly URL, and ours is library.tamu.edu/survey,
so that people don’t have to keep looking up and down to type in numbers.
Ask your subject specialists to send out announcements about the survey. Use any
listservs that you have that are campus- or university-specific. Use
library and institutional electronic signage, use social media if
you have it available, and again use table tents at library study tables and at library service points. While the survey is being conducted, you will have
the ability to look at the comments as they come in and the comments will be
identified by user group: first year student, second year student, assistant
professor, associate dean, as examples, and also by college. If the responses are
disproportionate (by that I mean you’re not getting the participation numbers that you would like to see), consider
changing your email reminder text content. Then make sure that your
response rates have peaked and are declining before you send out email
reminders. Don’t send out reminders while your participation
rates are climbing. One thing that we do is that we also monitor the comments and
whenever a user mentions an individual librarian
or an individual library department by name, I will send out a
congratulations email, a kudos email, to that individual or that department. I
will also go up the chain of command copying the supervisor and the dean as well, and then we coordinate with the Dean so that when I send those out the dean then
follows up by sending a further congratulations for, you know, getting
this kind of notice. You’d be surprised about what this does to generate, you
know, your own people out there marketing for participation in the survey. A lot of
these comments that name specific librarians are actually used in
faculty evaluations, so around this time that we do conduct the survey you’ll see
the faculty members going out there and promoting this just so they have
something put in their evaluations. There’s a practical application to this.
After the survey is done some of the areas that you’ll have data to analyze
will be from the LibQUAL+ analytics for your library. You also have the ability
to compare yourself to other libraries. You’ll have the raw data itself to look
at as well as the comment text. Some of the types of analysis—and I’m going to
be going through this in a little bit more detail—are breakdowns by category, four-year trends, comments and so forth. I’ll address those in the upcoming slides.
Angela showed you a copy of what the graphic representation of the data looks
like. I’ve come up with my own way of visualizing the data through Excel. It’s basically the same as the version Angela showed you, except I use a dot as opposed to a bar within the zone of tolerance. I’ve also given you some
definitions here to use. Priorities are your top desired scores, your successes are your top perceived scores, your satisfaction measure is a ratio I call AGR, the adequacy gap ratio, and your concerns are your bottom AGR scores. On the left-hand side of this slide I’ve given you some of the formulas and some of the criteria to use to determine when something is a success or when something is an area of concern. This goes back to what Angela talked to you about; what I did here is give a kind of mathematical version of what this data can be used for and how it can be interpreted.
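Michael’s slide carries his exact formulas and cut-offs, which are not reproduced in this transcript. Purely as an illustration, one plausible reading of an adequacy gap ratio is the adequacy gap scaled by the width of the zone of tolerance; the sketch below uses that assumed definition together with invented scores and thresholds.

```python
# Illustrative sketch only: one plausible formulation of an adequacy gap ratio (AGR).
# Michael's slide defines his actual formula and success/concern criteria; the
# ratio, the example scores, and the cut-offs here are assumptions for demonstration.

def adequacy_gap_ratio(perceived, minimum, desired):
    """Adequacy gap (perceived - minimum) scaled by the width of the zone of tolerance."""
    return (perceived - minimum) / (desired - minimum)

# Invented (minimum, desired, perceived) means for three core questions.
scores = {
    "Willingness to help users": (6.5, 8.4, 7.8),
    "The electronic information resources I need": (6.4, 8.2, 6.1),
    "Library space that inspires study and learning": (6.0, 8.0, 7.0),
}

for question, (minimum, desired, perceived) in scores.items():
    agr = adequacy_gap_ratio(perceived, minimum, desired)
    if agr < 0:
        flag = "concern (below minimum)"
    elif agr >= 0.5:
        flag = "success (upper half of zone)"
    else:
        flag = "within zone, room to improve"
    print(f"{question:48s} AGR {agr:+.2f}  {flag}")
```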
Angela mentioned top-five lists. I definitely use them. For the purpose of this presentation, I used a top-three list by priorities, successes,
and concerns, so the first column will show you what is important to your user
group and then you can carry that over and see, of those priorities, which ones
your library is currently succeeding at and which ones are an area of concern that your library may want to work on. As for the core question organization: there are 22 core questions, and what I’ve done is break them down into six
different categories. That’s customer treatment and job
expertise under affect of service; under information control, it’s information
resources and information accessibility; and then under library as place, there are three categories: the library environment overall, individual and group study related questions, and then a question related to the equipment that
your library provides your users. Well here are some of the examples of
the analysis that I talked about and Angela talked about. Here’s one that
compares for the question “A library web site enabling me to locate information
on my own” and what I’ve done here is compared undergraduates to graduates to
faculty and you can see that with undergraduates where the dot is, it’s
within that zone of tolerance so they’re pretty much satisfied with the
ability to locate information on their own; because that dot is over halfway up that zone of tolerance bar, you can say this is a success. If you look at the graduate students, that dot is within the zone of
tolerance but it’s in the bottom half of that zone so even though your graduate
students are satisfied, there is some area for improvement. And in the faculty
you can see that the dot is below the zone of tolerance, which is a demonstration that faculty are not comfortable with being able to locate information on their own.
Here’s another analysis where I’ve broken down the analysis first by user
group and then by college. I believe this is for undergraduates, and again what I’ve
done is the first bar shows the responses for all undergraduates across all colleges, and then I broke it down by college, and then the
final bar is where I compared our results to our ARL members so that
not only can you look at how these scores by college compare to the institution’s overall score but also to other institutions.
Speaking of which, one of the things that will be important to y’all especially if
you’re doing this in a consortium environment is being able to benchmark, which Angela mentioned, and here you see a longitudinal trend for the question
“Print or electronic journal collections I require” for faculty and you
can see over the years that our score has gone from 2003, when we weren’t meeting faculty needs, to 2015, where we were meeting those needs, but because that dot is in the lower half of that bar there is still room for improvement. And then I provided a trend chart here for the ARL perceived scores also; it shows
that compared to some of our other ARL members our scores are slightly higher
so time for patting ourselves on the back. I did an analysis by AGR which
again is just a satisfaction value, and what I’m looking for is changes from year to year where the satisfaction either increased or decreased from the previous year, or increased or decreased over two years. What I’m particularly looking for is where it’s decreased over a two-year period. You are going to see some fluctuation from year to year, but if the decrease spans two years, that means you’re headed in the wrong direction and you should definitely evaluate what you’re doing to address, in this case, dependability in
handling user service problems. I’ve also done a comment analysis. I don’t use
ATLAS.ti; I actually have a comments codebook that I use in Excel to know how comments are related or how they’re categorized. And you can
see here that library as place is one of the most important functions for
undergraduates whereas with graduates library as place and information
accessibility share equal importance, and with faculty it’s information
accessibility that’s the most important. Here is that comments codebook I was talking about, and here’s how I break down the comments, not only by the broader areas (affect of service, library as place, information resources, information accessibility) but by subcategories within those, so if I want to drill down and find out how the comments run for a particular area, for example marketing, reference, or general treatment, I can pull out those comments using a sort function in Excel.
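Michael’s actual codebook lives in Excel; as a rough, hypothetical sketch of the same idea, comments can be tagged by keyword into the broad categories named above. The keyword lists here are invented for illustration.

```python
# Rough, hypothetical sketch of keyword-based comment coding, loosely mirroring the
# codebook categories mentioned above. The keyword lists are invented for illustration;
# the real codebook is maintained in Excel and is far more detailed.
codebook = {
    "Affect of Service": ["staff", "librarian", "helpful", "friendly", "rude"],
    "Library as Place": ["study", "space", "noise", "seating", "building"],
    "Information Resources": ["journal", "book", "database", "collection"],
    "Information Accessibility": ["website", "search", "access", "find", "login"],
}

def code_comment(text):
    """Return every codebook category whose keywords appear in a comment."""
    lowered = text.lower()
    matches = [category for category, keywords in codebook.items()
               if any(keyword in lowered for keyword in keywords)]
    return matches or ["Uncoded"]

print(code_comment("The group study rooms are always full and noisy."))
print(code_comment("I can never find the e-journals I need from the website."))
```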
By the way, just regarding comments, the general rule is that you should get comments from about 50% of your participants, so if you get a thousand people participating in the survey you should get about five hundred comments. Here’s another example of library usage; this is again for undergraduates, even
though I didn’t mark it that way. How many people visit
the library premises? 84% visit at least monthly. How many use the web resources? 72%. So you can see that while both the library
webpage and the premises are important, more undergraduates are going to the
building itself versus going to the virtual library. This is the one thing that I want to talk about regarding the impact that LibQUAL+ can have: evidence supports funding. I can’t tell you how many projects we’ve
had funded by the university just because we can refer back to the LibQUAL+ survey findings and say look this is what our users say is important and
this is how we want to address that particular issue. Just quickly, just to go
over some examples here, customer treatment—we’ve provided a standardized
method for how we greet and talk to our customers at our public service desks. For job expertise, I’ve listed some examples here that all in one way or another deal
with professional development. In many cases we moved away from the librarian
just out of the MLS school and have hired people that have had
experience. We’ve also provided a new clinical and instruction faculty track
as opposed to a tenure-track if that’s applicable to your organization.
Information resources, again I’ll let you read through this, accessibility is our big
issue, and our biggest effort these days has been on doing web usability studies
to track how people, when they open up the homepage of the library website, go about looking for information on their own. Library environment—two
slides with the library environment stuff—we recently completed over the
last ten years about a 15 million dollar renovation on all our
libraries and here’s just some examples of things that we’ve done. This is the
library environment itself, individual and group study, and equipment. And with that
I’m done so thank you very much for your time.>>Amy Yeager: Thank you Michael, that was just incredibly helpful to hear, all of your really practical
experience gained over many years of familiarity with this survey. We’d like
to open it up for questions from the audience now. There’s a chat box
in the lower left corner of your screen and if you type your question in there
we will pose it to Michael or Angela. And there is one question waiting now for
Michael: “Can you talk about how often Texas A&M runs the LibQUAL+ survey
and whether you use the same local questions every time or if you change
them?”>>Michael Maciel: We’re actually in the middle of a transition. We used to conduct the survey
annually and as a result of that we would do random samples of students and
faculty. One year we would sample all the Science and Technology faculty, the next
year we’d do the liberal arts and business faculty, but we’re moving toward a once
every three years model, and in fact we’ll be conducting the survey this coming March to begin that new cycle. Regarding the local questions: no, we don’t use the same ones; we do
change them every time we run the survey. Again, what the local questions do is
try to highlight a concern at that time. The last LibQUAL+ survey that we did
we emphasized information literacy classes and for this survey
we’re actually going to spend some more time on web usability issues.>>Amy Yeager: That’s a
good way to keep the survey current for what’s going on in your
library in a particular year. Angela, other libraries, some consortia will
run surveys every two or three years?>>Angela Pappalardo: Yes, most libraries I would say
run it every few years. We do offer discounts for running it every year or
every two years. Some institutions run every three years or every four years, so they would not be hitting the same populations if the surveys are more than three or four years apart.>>Michael Maciel: You had a question here, and Amy
I’m sorry for jumping in but yes we do use the survey for accreditation reports
and visits. In fact, at the 2018 Library Assessment Conference I did a whole
presentation on how to use LibQUAL+ for assessment reporting and visits.>>Amy Yeager: I
think that presentation is on the LibQUAL+ website in the publications section if people are interested in exploring
that further. Liza has a question, “Is it possible to manipulate the wording of
the questions taking into consideration that we work in an ESL environment?”>>Angela Pappalardo: The
wording of the questions themselves, the core questions, cannot be changed. The
translations, however, are something we could potentially work with you on if there’s a mistranslation issue, but generally the core questions can’t be changed. The optional questions are
what you can submit to be worded however you want so long as they’re in that
triple Likert format—the beginning text is “when it comes to” and then the question
text “my minimum, desired, and perceived rating is,” so it sort of has to fit
within that format to be on the survey, to make sense within the survey context.>>Amy Yeager: Evi asks if we could share a link to Michael’s presentation on using
LibQUAL+ for accreditation and when we share the recording with Alex at
AMICAL to pass on to you all we will also send along that resource and
any other resources that we think might be useful. Michael you talked and Angela
demonstrated a little bit the LibQUAL+ analytics module which is where
libraries can interact with their data to compare with peers, to compare across
user groups, discipline areas, to create custom radar charts, and download subsets
of data. Michael, could you talk a little bit about how you use analytics with your LibQUAL+ data?>>Michael Maciel: Well the university has several aspirational
peers. We use ARL institutions as one peer group, and we also use the Texas universities as another peer group, so by
having that analytics option or the subscription to analytics we can pretty
much pick who we want to compare ourselves to either individually,
institution to institution, or, and this is very germane to the fact that I’m talking to a consortium right now, you could gather all your
data both by institution and by the consortium as a total to compare how
your library is doing to the consortium or to individual
institutions. I would say one thing just as a reminder is that if you are going
to compare yourself to, if you do use analytics, and you are going to compare
yourself to another specific institution there are certain guidelines on how you
report that other institutional data. And Amy or Angela, I’ll let you explain
that but I do want to offer that cautionary note about not identifying
that institution by name. >>Amy Yeager: That’s true, yeah, we have guidelines on how to use
data that emphasize that these scores are just one measure of how
people look at library services and they’re not an absolute measure, so a difference doesn’t necessarily mean that one library’s services are better than another’s.
It’s a measure of the satisfaction and so you need to take a lot of external
factors into account when comparing scores. And then we also ask that people
anonymize libraries in comparisons.>>Michael Maciel: Just as a further note on that I believe that
Greece has an IRB human testing standard that they have to meet when
they send out surveys and one of the issues is that we do want to anonymize
not only our data but that of other institutions just to meet those institutional
review board guidelines.>>Amy Yeager: Evi has some questions about the analytics module…was
wondering if it’s a feature of LibQUAL+ or if it’s an additional service
and Angela can explain the options there.>>Angela Pappalardo: Yes, analytics is
available to anyone who runs the LibQUAL+ survey. It’s not an additional service,
however, there is an additional service for having more access to all of the
institutions who have run. So when you run the survey normally you will have access
to any of the aggregate results for institutions that have run in your same
survey year. So if you run the survey a lot, as Texas A&M does, you automatically
have access to all the years and all the institutions who have run in those years.
If you are only running in 2019 for example you will only have access to the
other institutions that have also run in 2019. For an additional fee which is
$1,000 per year we offer a subscription to the LibQUAL+ analytics which gives
you access to all of the institutions in all years. So we recommend doing that if
you are planning to do a lot of benchmarking work especially after you
run a survey, perhaps in an off year, and it’s something you can subscribe to
anytime so you don’t have to do it the same year that you’re running a survey.
But you do have to run a survey first before you can subscribe to that so it’s
not just open to anyone who hasn’t run a survey.>>Michael Maciel: I do want to second that because Texas A&M no longer does annual surveys but does them, you know,
on like a 3-year cycle. We do pay for that analytics subscription even
in the years that we’re not conducting the survey, and one reason for that is
that when we do benchmarking, when I’m comparing our university to another
university, there may be a situation where another university ran the survey in 2017 and we ran it in 2016, so by having that analytics subscription I
can compare those two institutions whereas if I didn’t have that option I
wouldn’t be able to pull that data for the other library.>>Angela Pappalardo: Exactly, thank you.>>Amy Yeager: Michael, switching gears a little bit–do
you at Texas A&M use incentive contests to promote participation in the
survey?>>Michael Maciel: Yes we do and we actually have to be very careful about that because of
new tax laws. What we’ve been offering over the past few surveys is we offer
five Amazon Fire tablets and we make sure that the Fire tablets are under a
hundred dollars, again for tax purposes. And there’s also a catch to that: the federal government here requires that even if it is an incentive under $100, the recipient has to pay tax on it, so we try to make sure that people know they’re getting this $100 Fire tablet but they’re going to wind up having to pay two dollars in tax on it, or whatever the tax rate is. But we do offer incentives. I did one year
where I did not offer an incentive and the one question I kept getting on
comments from the people that had taken the survey before was “What’s your
incentive this year?” So if you run this survey you know consistently like on a
three-year cycle or less and you do offer the incentive, it’s something that
you almost lock yourselves into for future surveys. But again look at what
your tax codes are regarding incentives before offering them but I would
recommend that you use them. Oh, one other thing with regard to that—and Amy or
Angela correct me if I’m wrong—but I believe there is a resource link on LibQUAL+ that gives a list of the incentives that have been offered by
various libraries throughout the years; is that correct?>>Angela Pappalardo: That sounds familiar;
I will look for that. I’m not sure exactly where it is and there’s been a
large range of options that I’ve seen just in the past couple of years that
I’ve been working with the LibQUAL+ survey: anything as small as a piece of
candy or a few pieces of candy for completing the survey, to $5 gift cards, to larger items like iPads. I’ll look for that and we’ll send it around with the
with the slides as well and the recording.>>Michael Maciel: Yeah, there were some really
good ideas on that, if I recall correctly.>>Amy Yeager: With regard to that, LibQUAL+
also has a repository of other examples of marketing materials on the website.>>Michael Maciel: Oh
yeah and definitely spend some time looking at that webpage with those
links. I’m really proud of the marketing design that we came up with but there’s
some other genius ideas out there and some very cute and effective ideas that
can really spur your creative thought process when coming up with your
marketing campaign.>>Amy Yeager: While we do provide some examples of what other
libraries have done, LibQUAL+ doesn’t provide marketing materials
for libraries to customize themselves although we do have a bank of images
online where you can download the survey logo.>>Michael Maciel: If I recall
correctly that website does identify the institution that was providing those
marketing ideas and it’s been my experience in previous years that
they’re more than willing to share their graphics with you.>>Amy Yeager: Evi asks, “Please explain how you differentiate
the Lite and the full version and what percentages do you recommend and why?”>>Michael Maciel: Are
you throwing that question at me?>>Amy Yeager: Who would like to …>>Angela Pappalardo: I guess so yeah Michael could you talk about
what you’ve done at Texas A&M?>>Michael Maciel: Gee, thanks. I’ll
tell you that’s a point of contention this year. In previous years we did 50 and 50: 50 full version and 50 Lite, and I actually want to …
Our participation rate generally is anywhere from 10 to 25 percent of the total population, so whereas with most surveys you’re a success if you’re getting over a 50 percent response rate, you don’t look at that with LibQUAL+; you look at your
representativeness chart to determine survey validity. But this year what
I’m trying to promote is 75 percent full and 25 percent Lite just
because I prefer to get that entire 22 core question perspective as
opposed to only the eight question perspective that the Lite does. But in
saying that there are several people here that want to see a higher
participation rate versus that full 22 so like I said it’s a bone of
contention right now, and I know that’s not an answer, but at least it lets you know where I’m leaning and why I’m leaning that way.>>Angela Pappalardo: Yeah, it’s hard to say, it really depends on what the priorities are for
the institution and like you say Michael you get a higher response rate using
LibQUAL+ Lite at 100 percent or any percent, but then you don’t have as much data for each question as you would if you had run it all as the long version. So for most institutions I would do a mix. I think lately there’ve
been a lot running 100 percent Lite so it really just depends but I think a mix
is a great way to start out if you haven’t run a survey before to see what
kind of participation rates you get.>>Amy Yeager: Evi asks if there’s a support community to help with questions or a listserv, which we do have—Angela, would you
like to talk about that?>>Angela Pappalardo: Yes we have a LibQUAL+ listserv and we can definitely
get you added to that. I don’t remember off the top of my head, but I think we have a link on the LibQUAL+ website to join, which I’ll have to send around, but we can also manually add anyone who likes. If you send an email to [email protected], that’s me, and I can help you out with that. We’ll also send
this link around when we send out the recording and the slides.>>Michael Maciel: And then Evi in
addition to that and anyone else that’s out there, my email address is on
my slides and I really am a LibQUAL+ geek so feel free to email me if you
have any questions as well.>>Amy Yeager: And we’ve just put up the slide with our emails so
please feel free to to contact any one of us on the slide here.>>Michael Maciel: I have another
email address which is just [email protected]; you don’t have to put the library
in there.>>Angela Pappalardo: Thanks, Michael.>>Amy Yeager: Well, we’ve just about come to the end of the hour and it looks like there are no further questions coming in,
so thank you all very much for joining us. Thank you Michael and Angela for all
the good information, and we will be sending the recordings to the consortium
within the next week. Thank you.
