
               Cuyamaca College Library

Student Learning Outcomes


 

 Findings, Methodology and Recommendations for Library Instruction @ the Reference Desk
Fall 2010 - Spring 2011 (Cycle #2)

Introduction
    
     The Cuyamaca College Library mission is to create a learning environment in which students can acquire information competency skills and become independent researchers. Information competency is defined as being information literate: "a person must be able to recognize when information is needed and have the ability to locate, evaluate and use effectively the needed information." To that end, the Cuyamaca Library offers both face-to-face and virtual instruction.

     Face-to-face instruction takes place at the reference desk and during library orientations. Our perspective is that both of these are valuable instructional activities, not simply services. Librarians do not just give students the answers and send the satisfied student away; they demonstrate for students how to find, choose, and accurately evaluate information. Our mission is to assist students in becoming independent researchers.

     Since we view reference as instruction, it was imperative for the Library to develop SLOs and assessment tools that measure student information competency skills, not student satisfaction. The SLO initiatives have the potential to help librarians collect evidence of student learning so that we can become better instructors.

     I. MODIFICATIONS to Library Instruction @ the Reference Desk
    
In Fall 2010, we implemented the following changes to the reference interview and to the assessment tool, called the Reference Card Survey.

          A. The Reference Interview Modifications

    In Fall 2010, we implemented these changes to our distribution of the Reference Card Survey:
1. Librarians need to give clearer directions to students on how to fill in the Reference Card Survey. For example, students who did not receive librarian-led instruction in evaluating sources (third bubble) did not bubble in N/A and instead left it empty.

2. Librarians need to explain N/A to students, saying, "If we did not discuss this item, please bubble in N/A; do not leave it empty." Librarians also need to revise the Reference Card Survey so students understand what to bubble in. For example, some students bubbled in SLO #1 and then bubbled in "Strongly agree" for every item in the Likert scale column.

3. Librarians need to be more proactive with SLO #3, evaluating sources. The librarian's challenge lies in instructing students to use their critical thinking skills in selecting sources. At this time we do not know whether students need help evaluating sources or whether they figure it out as they continue searching for sources.

Every student has the opportunity to meet with a librarian at the reference desk. The librarian interviews the student and eventually develops the research strategy. Every reference desk interview is considered one-on-one instruction, and it is impossible for librarians to follow a one-size-fits-all strategy. Even though every librarian has his or her own instructional style, there are common components in every reference interview. The librarians have developed a "best practice" research strategy which includes three components: searching for a topic using keywords and subject words, searching databases, and evaluating the merits of sources. Reference librarians continue to use the classic reference interview process listed in the steps below:

  1. Student asks a question.
  2. Librarian analyzes the question to determine the best research strategy.
  3. Librarian reconfirms the question and may ask more questions for clarification.
  4. Librarian identifies specific library databases and print sources that may provide the answer to the question.
  5. Librarian instructs the student on how to choose appropriate search terms.
  6. Librarian instructs the student on how to use selected databases or print sources.
  7. Librarian guides the student in selecting the best sources.
  8. If the student is satisfied with the list of sources, the research process ends here.
  9. If the student is not satisfied, the librarian may repeat or modify the research strategy.

B. Assessment Tool Design Modifications: Reference Card Survey (RCS) Design

     In Fall 2010, we implemented the following change to the assessment tool, the Reference Card Survey: we added more directional arrows to the survey form to make it easier for students to fill out.

     The REFERENCE CARD SURVEY (RCS). The RCS consists of two forms: Form A, which focuses on the student's experience during a reference desk interview, and Form B, which focuses on the librarian's experience during the same interview. The RCS was printed on one 8" x 11" piece of cardstock, perforated to separate the student response from the librarian response. The two sections were coded to facilitate rematching. Both halves of the form contained a 6-point Likert scale consisting of SA, A, N, D, SD, and N/A. Students and librarians would bubble in the response that best represented their view. Please note that the RCS is not a user-satisfaction survey.
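
Below is a minimal sketch, in Python, of how the paired forms could be represented and rematched after collection; the field names, codes, and the numeric mapping of the Likert labels are illustrative assumptions, not part of the actual RCS.

    # Hypothetical representation of the two halves of one perforated RCS card.
    # The shared "code" value is what allows the halves to be rematched later.
    LIKERT = {"SA": 5, "A": 4, "N": 3, "D": 2, "SD": 1, "N/A": None}  # assumed numeric mapping

    student_forms = [
        {"code": "F10-001", "slo1": "SA", "slo2": "A",  "slo3": "N/A"},
        {"code": "F10-002", "slo1": "A",  "slo2": "SA", "slo3": "A"},
    ]
    librarian_forms = [
        {"code": "F10-002", "slo1": "A",  "slo2": "SA", "slo3": "A"},
        {"code": "F10-001", "slo1": "SA", "slo2": "A",  "slo3": "N/A"},
    ]

    def rematch(students, librarians):
        """Pair each student half with the librarian half carrying the same code."""
        by_code = {form["code"]: form for form in librarians}
        return [(s, by_code[s["code"]]) for s in students if s["code"] in by_code]

    pairs = rematch(student_forms, librarian_forms)  # two matched pairs

    # Mean SLO #1 rating for the student halves, skipping N/A responses.
    slo1_scores = [LIKERT[s["slo1"]] for s, _ in pairs if LIKERT[s["slo1"]] is not None]
    mean_slo1 = sum(slo1_scores) / len(slo1_scores)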

C. Methodology: Distributing the RCS to students

     Once the Reference Card Survey (RCS) was developed and approved by the librarians, it underwent a pilot test to establish its reliability and validity. In the Fall 2009 semester, eighty (80) students who asked what the reference librarian deemed an instructional question were asked to complete the RCS. If the student agreed to fill out the form, the librarian tore off the top half and gave it to the student; the librarian kept the bottom portion of the page.

     The librarian and the student would each complete their half of the form immediately after the reference desk interview. We chose this distribution method because it allowed the librarians and students to respond while the learning was still fresh in their minds. We also felt it important to distribute the RCS right after a reference interview in order to collect precise data, rather than distributing a large-scale survey not connected to a particular encounter. Large-scale surveys distributed to a broad population run the risk of the student respondent referring to something other than what the survey authors intended. A reference interview survey administered immediately, with carefully targeted questions, is far less susceptible to such pitfalls.

D. Data Analysis: Inter-rater Reliability

     Whenever humans are part of an assessment procedure, we have to worry about whether the results are reliable and consistent. People are notorious for their inconsistency. So how do we determine whether two observers are being consistent in their observations? We chose to calculate the percent of agreement between the raters, called inter-rater reliability. Inter-rater reliability is a measure used to examine the agreement between two people (raters/observers), and it seemed to be one of the best ways for us to estimate reliability. For the ratings of the students and the librarians to be considered credible, we needed to measure the agreement between students and librarians. When we test the reliability of ratings, we compute the inter-rater reliability coefficient. It is generally accepted that an inter-rater reliability coefficient of .75 (75%) or higher suggests that the ratings are reliable; the closer to 1.0 (100%), the more reliable the ratings are.
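
To make the calculation concrete, here is a minimal sketch, in Python, of the percent-of-agreement computation described above; the paired ratings shown are invented for illustration and are not the actual RCS data.

    def percent_agreement(pairs):
        """Fraction of paired ratings where student and librarian chose the same response."""
        matches = sum(1 for student, librarian in pairs if student == librarian)
        return matches / len(pairs)

    # Example: 4 of the 5 hypothetical pairs agree, so agreement is 0.80 (80%),
    # which exceeds the commonly cited .75 threshold for reliable ratings.
    slo1_pairs = [("SA", "SA"), ("A", "A"), ("SA", "A"), ("N", "N"), ("A", "A")]
    print(percent_agreement(slo1_pairs))  # 0.8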

     Analysis of the RCS data indicates that 100% of students saw significant benefits from their interaction with the librarian. Please note that the RCS Fall 2010-Spring 2011 PDF has an error: in columns 4, 5, and 6, labeled (Librarian), the phrase should read, "As a result of my interaction with the student today, he/she is..."
     SLO #1 showed a 100% agreement rate between students (mean 4.6) and librarians (mean 4.56). A total of 85 out of 92 students selected "Strongly agree" or "Agree". The students who bubbled in "Strongly agree" may be indicating that they could easily construct a successful search statement using keywords and subject words.
     SLO #2 showed a 100% agreement rate between students (mean 4.76) and librarians (mean 4.66). A total of 66 out of 92 students selected "Strongly agree" or "Agree". The students (50 out of 66) who bubbled in "Strongly agree" may be indicating that they were more confident of their ability to differentiate between the library catalog and the databases. It is interesting to note that there is also 100% agreement for the remaining 26 students: those students bubbled in "N/A" and the librarian bubbled in "N/A" as well. This may suggest that the librarian did not think it necessary to show students both sources and chose to show the student either the library catalog or the library databases.
    
SLO #3 showed a 100% agreement rate between students (mean 4.76) and librarians (mean 4.66). What is significant about this SLO is that, of the 92 surveys collected, 66 students reported they learned how to evaluate sources. This number (66) is higher than the first cycle's result (30), which suggests librarians have been more proactive in teaching students how to evaluate sources. For example, a student may have received help with SLO #1 and SLO #2, and this may have been enough to get the student started evaluating sources; or the student may not have needed help evaluating sources.

1. Kappa analysis

A statistical measure of inter-rater reliability is Cohen's Kappa, which ranges from -1.0 to 1.0: larger values mean better reliability, values near zero suggest that agreement is attributable to chance, and values less than zero signify that agreement is even less than what could be attributed to chance. Our RCS Kappa scores were .799 (SLO #1), .854 (SLO #2), and .866 (SLO #3), placing this cycle in the substantial to almost perfect agreement range (see chart below), an improvement over the moderate agreement of cycle 1.

κ              Interpretation
< 0            No agreement
0.00 – 0.20    Slight agreement
0.21 – 0.40    Fair agreement
0.41 – 0.60    Moderate agreement (cycle 1)
0.61 – 0.80    Substantial agreement (this cycle shows improvement from cycle 1)
0.81 – 1.00    Almost perfect agreement
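
For readers who want to see how the statistic behaves, here is a minimal sketch, in Python, of Cohen's Kappa for two raters; the student and librarian ratings below are invented for illustration and are not the RCS results reported above.

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        counts_a = Counter(rater_a)
        counts_b = Counter(rater_b)
        # Chance agreement estimated from each rater's marginal distribution of responses.
        chance = sum(counts_a[label] * counts_b[label] for label in counts_a) / (n * n)
        return (observed - chance) / (1 - chance)

    students   = ["SA", "A", "SA", "N", "A", "SA"]
    librarians = ["SA", "A", "A",  "N", "A", "SA"]
    print(round(cohens_kappa(students, librarians), 3))  # 0.739, substantial agreement

Unlike simple percent agreement, Kappa discounts the agreement two raters would reach by chance alone, which is why it is the more conservative of the two measures.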

E. Conclusion and Recommendations: Comparing Cycle 1 to Cycle 2 Data
    
Based on the data we gathered, what can the Cuyamaca Library do to improve student learning?  The expected outcomes were met. 

1.      In cycle one, our assessment tool had an "N/A" bubble, and both students and librarians were confused by it. For example, students who did not receive librarian-led instruction in "evaluating sources" (SLO #3) did not bubble in N/A and instead left it empty. Librarians explained the meaning of N/A to students, saying, "If we did not discuss this item, please bubble in N/A; do not leave it empty." In this cycle, we chose to eliminate the N/A option and use the Neutral bubble to report "did not discuss."

2.      In cycle one, librarians and students did not understand the Neutral bubble. The neutral response is a legitimate response and cannot be removed from the analysis. In cycle 2, librarians were more confident in explaining to students when and when not to choose Neutral.

3.      In cycle one, librarians were not proactive in explaining SLO #3, evaluating sources. In cycle 2, librarians were much more proactive in instructing students to use their critical thinking skills in selecting sources.


 

 
