Cuyamaca College Library
Face-to-face instruction takes place at the reference desk and during library orientations. Our perspective is that both of these are valuable instructional activities, not simply services. Librarians do not just give students the answers and send the satisfied students away; they demonstrate how to find, choose, and accurately evaluate information. Our mission is to assist students in becoming independent researchers.
Since we view reference as instruction, it was imperative for the
Library to develop SLOs and assessment tools that measure student information
competency skills, not student satisfaction.
Modifications to Library Instruction at the Reference Desk
A. The Reference Interview Modifications
In Fall 2010, we implemented the following changes to our distribution of the Reference Card:
2. Librarians need to explain the N/A option to students, saying, "If we did not discuss this item, please bubble in N/A; do not leave it empty." Librarians also need to revise the Reference Card survey so students understand what to bubble in. For example, some students bubbled in SLO #1 and then bubbled in "Strongly Agree" down the entire Likert-scale column.
3. Librarians need to be more proactive with SLO #3, evaluating sources. The librarian's challenge lies in instructing students to use their critical thinking skills in selecting sources. At this time we do not know whether students need help evaluating sources or whether they figure it out as they continue searching.
Every student has the opportunity to meet with a librarian at the reference desk. The librarian interviews the student and eventually develops a research strategy. Every reference desk interview is considered one-on-one instruction, and it is impossible for librarians to follow a one-size-fits-all strategy. Even though every librarian has his or her own instructional style, there are common components in every reference interview. The librarians have developed a "best practice" research strategy that includes three components: searching for a topic using keywords and subject terms, searching databases, and evaluating the merits of sources.
B. Assessment Tool Design Modifications: Reference card survey (RCS) design
In Fall 2010, we implemented the following changes to the assessment tool, called the Reference Card survey. We added more directional arrows to the survey form to make it easier for students to fill out.
The REFERENCE CARD SURVEY (RCS).
The RCS consists of two forms: Form A, which focuses on the student's experience during a reference desk interview, and Form B, which focuses on the librarian's experience during the same interview. The RCS was printed on one 8 x 11 inch sheet of cardstock, perforated to separate the student response from the librarian response, and the two sections were coded to facilitate rematching. Both halves of the form contained a six-point Likert scale: SA, A, N, D, SD, and N/A. Students and librarians bubbled in the option that best represented their response.
Please note that the RCS is not a user-satisfaction survey.
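The coding that links the two halves can be pictured as a simple join on a shared card code. Below is a minimal sketch in Python; the field names, the card codes, and the numeric coding of the scale are illustrative assumptions, not the Library's actual data layout.

```python
# Sketch: rematch the student half (Form A) with the librarian half (Form B)
# of each Reference Card using the shared code printed on both halves.
# Field names, codes, and scale values below are invented for illustration.

SCALE = {"SA": 5, "A": 4, "N": 3, "D": 2, "SD": 1, "N/A": None}

form_a = [{"code": "RC-001", "slo1": "SA"}, {"code": "RC-002", "slo1": "A"}]
form_b = [{"code": "RC-001", "slo1": "A"}, {"code": "RC-002", "slo1": "A"}]

def rematch(student_rows, librarian_rows):
    """Pair each student response with the librarian response sharing its code."""
    by_code = {row["code"]: row for row in librarian_rows}
    return [(s, by_code[s["code"]]) for s in student_rows if s["code"] in by_code]

for student, librarian in rematch(form_a, form_b):
    print(student["code"], SCALE[student["slo1"]], SCALE[librarian["slo1"]])
```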
C. Methodology: Distributing the RCS to students
Once the Reference Card Survey (RCS) was developed and approved by the librarians, it underwent a pilot test to establish its reliability and validity. In the Fall 2009 semester, eighty (80) students who asked what the reference librarian deemed an instructional question were asked to complete the RCS. If the student agreed to fill out the form, the librarian tore off the top half and gave it to the student, keeping the bottom portion.
The librarian and the student would each complete their half of the form immediately after the reference desk interview. We chose this distribution method because it allowed librarians and students to respond while the learning was still fresh in their minds. We also felt it important to distribute the RCS immediately after a reference interview in order to collect precise data, rather than distributing a large-scale survey not connected to a particular encounter. Large-scale surveys distributed to a broad population run the risk that the respondent is referring to something other than what the survey authors intended. A reference interview survey administered immediately, with carefully targeted questions, is far less susceptible to such pitfalls.
D. Data Analysis: Inter-rater reliability
Whenever humans are part of an assessment procedure, we have to worry about whether the results are reliable and consistent; people are notorious for their inconsistency. So how do we determine whether two observers are being consistent in their observations? We chose to calculate the percent of agreement between the raters, a measure called inter-rater reliability. Inter-rater reliability examines the agreement between two people (raters/observers), and it seemed to us one of the best ways to estimate reliability. For the ratings of the students and the librarians to be considered credible, we needed to measure the agreement between students and librarians. When we test the reliability of ratings, we compute the inter-rater reliability coefficient. It is generally accepted that an inter-rater reliability coefficient of .75 (75%) or higher suggests that the ratings are reliable; the closer to 1.0 (100%), the more reliable the ratings.
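As a concrete illustration of the calculation, percent agreement is simply the share of paired responses on which the student and the librarian marked the same bubble. The sketch below assumes the responses have already been rematched into pairs; the sample data are invented.

```python
# Percent agreement between two raters over matched response pairs.
# The pairs below are invented; real input would be the rematched
# student/librarian responses for a single SLO.

def percent_agreement(pairs):
    """Return the fraction of pairs in which both raters chose the same option."""
    matches = sum(1 for student, librarian in pairs if student == librarian)
    return matches / len(pairs)

pairs = [("SA", "SA"), ("A", "A"), ("A", "N"), ("SA", "SA")]
print(f"{percent_agreement(pairs):.0%}")  # 75% -- right at the .75 threshold
```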
Analysis of the RCS data indicates that 100% of students reported significant benefit from their interaction with the librarian.
Please note that the RCS Fall 2010-Spring 2011 PDF has an error: in columns 4, 5, and 6, labeled (Librarian), the phrase should read, "As a result of my interaction with the student today, he/she is..."
1. Kappa analysis
A statistical measure of inter-rater reliability is Cohen's Kappa, which ranges from -1.0 to 1.0: larger values mean better reliability, values near zero suggest that agreement is attributable to chance, and values less than zero signify agreement even lower than chance. Our RCS Kappa scores were .799 (SLO #1), .854 (SLO #2), and .866 (SLO #3), which placed our test in the moderate agreement category (see chart below).
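For readers who want the arithmetic, Cohen's Kappa corrects the observed agreement p_o for the agreement p_e expected by chance: kappa = (p_o - p_e) / (1 - p_e). A minimal sketch follows; the sample pairs are invented, while the scores reported above come from the full RCS data.

```python
from collections import Counter

def cohens_kappa(pairs):
    """Cohen's Kappa: observed agreement corrected for chance agreement."""
    n = len(pairs)
    p_o = sum(1 for a, b in pairs if a == b) / n             # observed agreement
    freq_a = Counter(a for a, _ in pairs)                    # rater 1 marginals
    freq_b = Counter(b for _, b in pairs)                    # rater 2 marginals
    cats = set(freq_a) | set(freq_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in cats) / n ** 2  # chance agreement
    return (p_o - p_e) / (1 - p_e)

pairs = [("SA", "SA"), ("A", "A"), ("N", "A"), ("SA", "SA"), ("D", "D")]
print(round(cohens_kappa(pairs), 3))  # 0.722 on this toy sample
```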
E. Conclusions and Recommendations: Comparing cycle 1 to cycle 2 data
1. In cycle one, our assessment tool had an "N/A" bubble that confused both students and librarians. For example, students who did not receive librarian-led instruction in "evaluating sources" (SLO #3) left the item empty rather than bubbling in N/A. In cycle two, librarians explained the meaning of N/A to students, saying, "If we did not discuss this item, please bubble in N/A; do not leave it empty."
2. In cycle one, students did not understand the Neutral bubble. The neutral response is a legitimate response and cannot be removed from the analysis. In cycle two, librarians were more confident in explaining to students when and when not to mark Neutral.
3. In cycle one, librarians were not proactive in explaining SLO #3, evaluating sources. In cycle two, librarians were much more proactive in instructing students to use their critical thinking skills when selecting sources.