
               Cuyamaca College Library

Student Learning Outcomes


 

 Findings, Methodology and Recommendations for Library Instruction @ the Reference Desk
Fall 2009 - Spring 2010

Introduction
    
     The Cuyamaca College Library's mission is to create a learning environment in which students acquire information competency skills and become independent researchers. Information competency means being information literate: a person "must be able to recognize when information is needed and have the ability to locate, evaluate and use effectively the needed information." To that end, the Cuyamaca Library offers both face-to-face and virtual instruction.

     Face-to-face instruction takes place at the reference desk and during library orientations. We view both as valuable instructional activities, not simply services. Librarians do not just hand students answers and send them away satisfied; they demonstrate how to find, choose, and accurately evaluate information. Our mission is to help students become independent researchers.

     Because we view reference as instruction, it was imperative for the Library to develop SLOs and assessment tools that measure student information competency skills, not student satisfaction. The SLO initiative can help librarians collect evidence of student learning and, in turn, become better instructors.

     I. LIBRARY INSTRUCTION ASSESSMENT TOOL #1: Reference Card Survey @ the Reference Desk
      The first tool, the Reference Card Survey, measured student learning as a result of a reference interview with a librarian. The data analysis indicated that 100% of students agreed that they learned something about library resources.

     A. The Reference Interview

     Every student has the opportunity to meet with a librarian at the reference desk. The librarian interviews the student and develops a research strategy. Every reference desk interview is one-on-one instruction, so librarians cannot follow a one-size-fits-all strategy. Although each librarian has his or her own instructional style, every reference interview shares common components. The librarians have developed a "best practice" research strategy with three components: searching for a topic using keywords and subject words, searching databases, and evaluating the merits of sources. Reference librarians continue to follow the classic reference interview process in the steps below (a rough sketch of this loop follows the list):

  1. Student asks a question.
  2. Librarian analyzes the question to determine the best research strategy.
  3. Librarian reconfirms the question and may ask more questions for clarification.
  4. Librarian identifies specific library databases and print sources that may provide the answer to the question.
  5. Librarian instructs the student on how to choose appropriate search terms.
  6. Librarian instructs the student on how to use selected databases or print sources.
  7. Librarian guides the student to select the best sources.
  8. If the student is satisfied with the list of sources, the research process ends here.
  9. If the student is not satisfied, the librarian may repeat or modify the research strategy.
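
     Steps 8 and 9 make the process iterative. As an illustration only (every name below is invented for this sketch; the interview itself is a conversation, not software), the loop could be modeled like this:

    # Illustrative model of the reference interview (steps 1-9) as an
    # iterative refine-and-search loop. All names here are invented.
    def reference_interview(question, clarify, build_strategy, run_search, satisfied):
        question = clarify(question)           # steps 2-3: analyze and reconfirm
        strategy = build_strategy(question)    # steps 4-5: pick terms and databases
        sources = run_search(strategy)         # steps 6-7: search and select sources
        while not satisfied(sources):          # step 9: refine and repeat
            strategy = build_strategy(clarify(question))
            sources = run_search(strategy)
        return sources                         # step 8: student leaves with sources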

B. Assessment Tool Design: The Reference Card Survey (RCS)

     The survey author designed an assessment tool titled the REFERENCE CARD SURVEY (RCS). The RCS consists of two forms: Form A, which focuses on the student's experience during a reference desk interview, and Form B, which focuses on the librarian's experience during the same interview. The RCS was printed on one sheet of 8x11-inch cardstock, perforated to separate the student response from the librarian response. The two sections were coded so they could be rematched later. Both halves contained a 6-point Likert scale: SA, A, N, D, SD, and N/A. Students and librarians bubbled in the response that best represented their experience. Please note that the RCS is not a user-satisfaction survey.
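
     As a sketch of how one matched pair of responses could be recorded for analysis (the field names, match code, and numeric coding below are our own illustration, not the actual form layout):

    # Hypothetical encoding of one matched RCS pair (Form A = student,
    # Form B = librarian). Field names and the match code are invented.
    LIKERT = {"SA": 5, "A": 4, "N": 3, "D": 2, "SD": 1, "NA": None}

    record = {
        "match_code": "F09-042",   # printed on both halves for rematching
        "student":   {"slo1": "SA", "slo2": "A", "slo3": "NA"},
        "librarian": {"slo1": "SA", "slo2": "A", "slo3": "NA"},
    }

    # Numeric form of the student's responses, e.g. for later tallying
    student_scores = {slo: LIKERT[r] for slo, r in record["student"].items()}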

C. Methodology: Distributing the RCS to students

     Once the Reference Card Survey (RCS) was developed and approved by the librarians, it underwent a pilot test to establish its reliability and validity. In the Fall 2009 semester, eighty (80) students who asked what the reference librarian deemed an instructional question were asked to complete the RCS. If the student agreed to fill out the form, the librarian tore off the top half and gave it to the student, keeping the bottom portion.

     The librarian and the student each completed their half of the form immediately after the reference desk interview. We chose this distribution method because it allowed librarians and students to respond while the learning was still fresh in their minds. We also felt it was important to distribute the RCS right after a reference interview in order to collect precise data, rather than distributing a large-scale survey not connected to a particular encounter. Large-scale surveys distributed to a broad population run the risk of the respondent referring to something other than what the survey authors intended. A reference interview survey administered immediately, with carefully targeted questions, is far less susceptible to such pitfalls.

D. Data Analysis: Inter-rater Reliability

     Whenever humans are part of an assessment procedure, we have to ask whether the results are reliable, that is, consistent. People are notorious for their inconsistency. So how do we determine whether two observers are being consistent in their observations? We chose to calculate the percent of agreement between the raters, called inter-rater reliability. Inter-rater reliability is a measure used to examine the agreement between two people (raters/observers), and it seemed to be one of the best ways for us to estimate reliability. For the ratings of the students and the librarians to be considered credible, we needed to measure the agreement between students and librarians. When we test the reliability of ratings we compute the inter-rater reliability coefficient. It is generally accepted that an inter-rater reliability coefficient of .75 (75%) or higher suggests that the ratings are reliable; the closer to 1.0 (100%), the more reliable the ratings.
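
     A minimal sketch of the percent-agreement calculation (the sample ratings are invented; matching rules can vary, and this version simply counts identical responses):

    # Percent agreement: the share of matched pairs where the student's
    # rating and the librarian's rating are identical. Data is invented.
    def percent_agreement(student_ratings, librarian_ratings):
        pairs = list(zip(student_ratings, librarian_ratings))
        matches = sum(1 for s, l in pairs if s == l)
        return matches / len(pairs)

    students   = ["SA", "A", "SA", "A", "N"]
    librarians = ["SA", "A", "A",  "A", "N"]
    print(percent_agreement(students, librarians))  # 0.8, i.e. 80% agreement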

     Analysis of the RCS data indicates that 100% of students reported significant benefit from working with the librarian.
     SLO #1 showed a 100% agreement rate between student and librarian; all respondents selected Strongly Agree or Agree. The predominance of "strongly agree" responses may indicate that students felt they could easily construct a successful search statement using keywords and subject words.
     SLO #2 also showed a 100% agreement rate between student and librarian. Here more respondents selected "agree" than "strongly agree", the opposite of SLO #1, which may indicate that students were somewhat less confident of their ability to differentiate between the library catalog and databases.
    
     SLO #3 showed 95% agreement. The percentage fell below 100% because one librarian response was neutral. What is significant about this SLO is that of the 80 surveys collected, only 35 students and 35 librarians reported talking about evaluating sources. That figure of 35 is very low compared with SLO #1 (80 responses) and SLO #2 (63 responses). What could explain it? The low response count could mean several things. For example, the help a student received with SLO #1 and SLO #2 may have been enough to get him or her started evaluating sources, or the student may not have needed help evaluating sources at all. Another explanation is that the librarians are not being proactive enough in teaching students how to evaluate sources.

1. Kappa analysis

Cohen's Kappa is a statistical measure of inter-rater reliability that ranges from -1.0 to 1.0, where larger values mean better reliability, values near zero suggest that agreement is attributable to chance, and values below zero signify agreement even less than chance. Our RCS Kappa scores were .459 (SLO #1), .572 (SLO #2), and .750 (SLO #3); the first two fall in the moderate range of agreement, while .750 reaches substantial agreement (see chart below).
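
A minimal sketch of the standard computation, κ = (p_o − p_e) / (1 − p_e), where p_o is the observed agreement and p_e is the agreement expected by chance (the ratings below are invented):

    # Cohen's kappa: observed agreement corrected for chance agreement.
    # The example ratings are invented; with real RCS data, pass the
    # matched student and librarian responses for one SLO.
    from collections import Counter

    def cohens_kappa(rater1, rater2):
        n = len(rater1)
        p_o = sum(1 for a, b in zip(rater1, rater2) if a == b) / n       # observed
        c1, c2 = Counter(rater1), Counter(rater2)
        p_e = sum((c1[x] / n) * (c2[x] / n) for x in set(c1) | set(c2))  # chance
        return (p_o - p_e) / (1 - p_e)

    students   = ["SA", "A", "SA", "A", "N", "A"]
    librarians = ["SA", "A", "A",  "A", "N", "SA"]
    print(round(cohens_kappa(students, librarians), 3))  # 0.455, moderate agreement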

    κ              Interpretation
    -----------    ------------------------
    < 0            No agreement
    0.00 – 0.20    Slight agreement
    0.21 – 0.40    Fair agreement
    0.41 – 0.60    Moderate agreement
    0.61 – 0.80    Substantial agreement
    0.81 – 1.00    Almost perfect agreement
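
The chart's thresholds (the scale commonly attributed to Landis and Koch) translate directly into a lookup; a small sketch:

    def interpret_kappa(k):
        """Map a kappa value to the labels in the chart above."""
        if k < 0:     return "No agreement"
        if k <= 0.20: return "Slight agreement"
        if k <= 0.40: return "Fair agreement"
        if k <= 0.60: return "Moderate agreement"
        if k <= 0.80: return "Substantial agreement"
        return "Almost perfect agreement"

    for slo, k in [("SLO #1", 0.459), ("SLO #2", 0.572), ("SLO #3", 0.750)]:
        print(slo, k, interpret_kappa(k))
    # SLO #1 and SLO #2: Moderate agreement; SLO #3: Substantial agreement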

E. Conclusion and Recommendations
    
Based on the data we gathered, what can the Cuyamaca Library do to improve student learning? The expected outcomes were not met during this pilot study, and several factors may have obscured the actual influence of library instruction on student library-skill development. In particular, the results indicate a need to improve the assessment tool itself. The Institutional Research Department will work with the Library to rewrite the two assessment tool questions, clarify the test directions, and select different questions in order to avoid ambiguity. We will refine the assessment tool and reintroduce it to students in Fall 2010 - Spring 2011.

1.      Librarians need to give clearer directions to students on how to fill in the Reference Card Survey. For example, students who did not receive librarian-led instruction in "evaluating sources" (the third bubble) left it empty instead of bubbling in N/A. It is recommended that librarians explain N/A to students: "If we did not discuss this item, please bubble in N/A; do not leave it empty."

2.      Librarians and students did not understand the Neutral bubble. The neutral response is a legitimate response and cannot be removed from the analysis. Interpreting it, however, is impossible: did the respondent feel ambivalent, or apathetic?

3.      Librarians need to revise the Reference Card Survey so students understand what to bubble in. For example, some students answered SLO #1 and then bubbled in "strongly agree" all the way down the Likert-scale column.

4.      Librarians need to be more proactive with SLO #3, evaluating sources. The librarian's challenge lies in teaching students to use their critical thinking skills when selecting sources. At this time we do not know whether students need help evaluating sources or figure it out on their own as they continue searching.


 

 
