
13 Evaluating and Improving Instruction

Introduction

Chapter 9 discussed assessment, a crucial step in Backward Design that allows us to measure student learning. The terms “evaluation” and “assessment” are often used interchangeably, but while assessment focuses on learning and student progress toward identified learning outcomes, evaluation involves “determining the quality of the service or activity and the extent to which the service or activity … accomplishes stated goals and objectives” (Hernon & Schwartz, 2012, p. 79). While learning is always our main focus as instructors, we should also consider other factors that contribute to the overall success of our instruction, like learner satisfaction and perceived quality of the lesson. Students can certainly learn even if they do not enjoy the process, but their levels of satisfaction can impact their learning and, for those who voluntarily attend workshops and trainings, influence whether they will return for future sessions. This chapter explores methods for evaluation of library instruction with a focus on quality and patron satisfaction. See Activity 13.1 for a brief reflection on evaluation.

 

Activity 13.1: Reflecting on Evaluation

Nearly all of us have some experience with evaluation as a learner. For instance, students, especially in higher education, regularly fill out course evaluations, and workshop participants are often asked to complete feedback forms or surveys at the end of a session. Recall a time that you have responded to this sort of evaluation.

Questions for Reflection and Discussion:

  1. Surveys linked to instruction sessions, like course evaluations, often mix questions about learning (assessment questions) with questions about satisfaction (evaluation questions). Can you recall specific evaluative questions?
  2. If you cannot recall specific questions, can you imagine the kinds of questions that might be asked on such a survey?
  3. How might instructors use the information from evaluative questions to inform their instruction?

Planning for Evaluation

As with assessment, instructors can use data from evaluations to identify areas for improvement. Evaluation data can also help library managers determine the effectiveness of an instruction program and inform allocation of resources in support of the program, and it can demonstrate the value of the program to library stakeholders. Some studies have linked student satisfaction and engagement to self-regulation (Liaw & Huang, 2013), perceived quality of instruction (Rodriguez et al., 2008), and learning (Baños et al., 2019; Lo, 2010). Because of these potential impacts on students and the usefulness of data for improvement and demonstrating value, we should consider evaluating our instruction sessions as well as assessing for learning.

The process of evaluation is similar to that of assessment, described in Chapter 9, and can be summed up in four steps:

  1. Identify the criteria by which we will evaluate the instruction. For assessment, we use our learning outcomes. In evaluation, as described in more detail later in this chapter, we explore other criteria that impact the perceived quality or success of the session.
  2. Find or develop a tool to gather data related to those criteria.
  3. Analyze the findings.
  4. Use that information to make informed decisions about changes.

Identifying Criteria for Evaluation

Just as we need learning outcomes in order to assess student learning, we need to identify metrics against which to evaluate our instruction sessions. Importantly, evaluation in instruction generally takes the learners’ perspective, meaning that quality and success depend on the learners’ subjective experiences. As Hernon and Altman explain, “if customers say there is quality service, then there is. If they do not, then there is not. It does not matter what an organization believes about its level of service” (1996, p. 6). The researchers’ point is that when the learner’s opinion of the quality or success of instruction differs from the instructor’s, the learner’s opinion counts, because they will act on it. If learners enjoy the session, they might return and even encourage others to attend. If learners did not find the session useful, or if they perceived the instructor to be unengaged, they probably will not return and are likely to tell others about their negative experience (Dixon et al., 2010).

To create or identify evaluation measures, we need to ask ourselves what criteria or outcomes would define a successful instruction session. Our choices of what to measure should align with the mission and priorities of our institution and focus on what will be most helpful in improving our practice. Often, these measures will relate to the quality or enjoyability of the session as determined by the learners, which means we will want to ask questions related to our learners’ perceptions of and attitudes toward the session. Satisfaction is one popular measure in evaluation.

Developing Tools

Using the criteria we have identified, we can develop tools to gather relevant data. As described in more detail later, a range of possible data-gathering methods exists, including both quantitative and qualitative approaches. Quantitative methods, such as surveys, generate numerical data, while qualitative methods, such as comment cards and minute papers, collect textual data.

Analyzing Data

Once evaluation data has been gathered, we need to analyze and interpret the data to uncover its meaning. Specific methods of analysis will vary, depending on whether we have gathered quantitative or qualitative data. In general, quantitative data is analyzed using frequency counts and percentages, while qualitative data is analyzed for themes or patterns in the responses. For instance, we would calculate the percentage of people who rated themselves on a survey as highly satisfied with an instruction session, or review comment cards to see if learners agree on which aspects of the session were most engaging or least clear. It is beyond the scope of this chapter to provide in-depth explanations of data analysis, but the Suggested Readings at the end of the chapter provide more information.
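To make this concrete, the following is a minimal sketch in Python, not drawn from the chapter itself, of tallying hypothetical satisfaction ratings into frequency counts and percentages. The ratings list and the "rated 4 or 5" cutoff for "highly satisfied" are illustrative assumptions; a spreadsheet would serve equally well.

```python
from collections import Counter

# Hypothetical responses to "How satisfied were you with the session?"
# on a 5-point scale (1 = very dissatisfied, 5 = very satisfied).
ratings = [5, 4, 5, 3, 4, 5, 2, 4, 5, 5, 3, 4]
total = len(ratings)

# Frequency count and percentage for each rating level.
counts = Counter(ratings)
for level in sorted(counts):
    pct = counts[level] / total * 100
    print(f"Rating {level}: {counts[level]} responses ({pct:.0f}%)")

# Share of respondents who were "highly satisfied" (here, rated 4 or 5).
highly_satisfied = sum(1 for r in ratings if r >= 4) / total * 100
print(f"Highly satisfied: {highly_satisfied:.0f}%")
```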

Using Data

After analyzing the data, we must think about what story the data tells and how we can use what we have learned to make improvements to our instruction. For instance, if a substantial portion of our learners indicates that the pace of the session was too fast or too slow, we can adjust the pace. We might keep an activity that learners liked, and tweak or discard one that they found boring or unhelpful.

Creating Evaluation Tools

A number of methods exist for evaluating instruction. In fact, some of the assessment methods described in Chapter 9 could be readily adapted for evaluation. This section highlights several evaluation tools, with a focus on those most likely to be used in library instruction.

Surveys

Surveys are probably the most popular method of evaluation. In fact, anyone who has been through a college course in the United States is probably familiar with the end-of-semester course evaluation survey. These surveys are useful because they are relatively quick to administer and analyze, and they can incorporate a range of questions about different aspects of the session, including learners’ satisfaction, self-efficacy, and engagement.

Surveys usually consist of closed-ended questions, sometimes called forced-choice questions, that ask the respondent to select an answer among a range of choices or along a set scale. For instance, evaluation surveys often ask learners to rate their level of agreement with statements such as “the workshop met its learning outcomes” or “the instructor provided clear explanations.” Similarly, surveys can ask students to rate their level of satisfaction with the session as a whole, or with various aspects, such as the pace of the instruction, the amount of content covered, the balance of lecture to activities, and the comfort of the facilities.

We can also incorporate open-ended questions in surveys and provide space for respondents to answer in their own words. Open-ended questions can be a good way to expand on closed-ended questions. For instance, we might ask learners to rate their level of satisfaction with the session and then ask them to explain their answer by describing which aspects of the session left them more or less satisfied. Open-ended survey questions should be analyzed as qualitative responses.

While we can find some helpful examples of course or workshop evaluation questions online, we will generally create our own surveys tailored to our topics of interest and the specific content and logistics of our session. Writing good survey questions that will result in useful data is deceptively challenging. Slight variations in wording can lead to vastly different responses, and a number of factors can impact the reliability and validity of the data. Following are several strategies for writing good survey questions (Harris, 2007; Lloyd, 2018; Pew Research Center, n.d.):

  • Use clear, simple language.
  • Keep questions short.
  • In general, avoid using technical terms and jargon. If you do include these terms, provide a brief definition.
  • Avoid vague or ambiguous language. Asking learners if they agree that “the activity was good” is vague because the learner could respond to many aspects of the activity, and the word “good” could be interpreted differently by different people. A better approach is to ask about specific aspects such as, “Did you like pairing up with a classmate for the activity?” or “Were the activity instructions clear?”
  • Avoid double-barreled questions. Double-barreled questions ask two separate questions but allow for only one answer, making it difficult or impossible for respondents to answer accurately. “Was the instructor friendly and knowledgeable?” is an example of a double-barreled question because the instructor could demonstrate one quality and not the other; respondents, however, are asked to treat the two qualities together. Ask about each quality or aspect separately.
  • Avoid leading or loaded questions. Some questions suggest a “correct” answer or prompt the respondent to answer in a certain way. For example, asking if learners are happy they attended the session prompts the respondent to answer positively.
  • Offer comprehensive lists. When providing respondents with a list of choices, be sure that all possible options are included and provide an option for a write-in response if necessary.
  • Avoid overlapping or ambiguous scales. For instance, if you ask patrons how often they attend library workshops and provide the options of “often,” “frequently,” “sometimes,” and “not often,” respondents will probably have a hard time distinguishing among these choices. How many workshops count as “frequently,” and how many would be “sometimes”? Better wording might be “about once a week,” “about once a month,” “several times a year,” “about once a year,” or “less than once a year.”
  • Avoid asking for unnecessary personal information. Demographic questions such as age, gender, race, and ethnicity should be asked only if they are relevant to your analysis. For instance, a public librarian running a mixed-age workshop might be interested in whether learners of different ages rated the workshop differently.

See Activity 13.2 for a brief exercise on writing survey questions.

 

Activity 13.2: Writing Survey Questions

Following are several poorly worded survey questions. Working individually or in groups, identify the problems and rewrite the questions to conform to the best practices outlined above.


Sample Instruction Survey

Please rate your level of agreement with the following statements on a scale of 1 to 5, where 1 is Strongly Disagree and 5 is Strongly Agree:

  • The workshop was awesome.  1  2  3  4  5
  • The activities were engaging and relevant to the content.  1  2  3  4  5
  • The pedagogical approaches were appropriate to the audience.  1  2  3  4  5
  • I have a better understanding of how to search the OPAC.  1  2  3  4  5
  • The instructor did a good job.  1  2  3  4  5

 

If you had the chance, do you think that you might recommend that a friend or family member attend this same workshop in the future?

Yes            No

Please tell us your age range:

          0-10       10-20       20-30       30-40       40 or above

 

In addition to the wording of questions, we must think about the overall format and design of the survey. We should focus only on essential questions and avoid anything extraneous. Shorter surveys require less time and effort for the learner to complete and for the instructor to analyze. Sometimes survey questions are dependent on the answer to a previous question. For instance, if learners did not complete a certain activity, they will not be able to answer questions about that activity. We should make it easy for respondents to skip unnecessary questions by directing them to the next relevant question. Survey software typically offers “skip logic,” whereby we can embed commands into the design to automatically redirect respondents away from questions that do not pertain to their experience. Finally, we should organize the survey so questions on the same topic are grouped together.
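Survey platforms configure this branching through their own interfaces rather than code, but a minimal sketch, with invented questions and branching rules, can illustrate the idea. Each answer simply names the next question to display, which is essentially what skip-logic settings record:

```python
# A minimal, hypothetical sketch of skip logic: each question names the
# next question to show, depending on the answer given.
questions = {
    "q1": {
        "text": "Did you complete the hands-on search activity? (Yes/No)",
        # Answering "No" skips the activity question and jumps to q3.
        "next": {"Yes": "q2", "No": "q3"},
    },
    "q2": {
        "text": "Were the activity instructions clear? (Yes/No)",
        "next": {"Yes": "q3", "No": "q3"},
    },
    "q3": {
        "text": "How satisfied were you with the session overall? (1-5)",
        "next": None,  # end of survey
    },
}

def run_survey(questions, start="q1"):
    """Walk the survey, showing only the questions relevant to each answer."""
    answers = {}
    current = start
    while current is not None:
        question = questions[current]
        answer = input(question["text"] + " ")
        answers[current] = answer
        # Follow the branch for this answer; an unrecognized answer or a
        # terminal question ends the survey.
        current = question["next"].get(answer) if question["next"] else None
    return answers

if __name__ == "__main__":
    print(run_survey(questions))
```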

Surveys can be offered on paper or online. Online surveys are generally easier to analyze: many survey software packages perform some analysis automatically, such as generating frequency counts and percentages, and might also create helpful charts and graphs. However, if we create online surveys, we should ensure that our learners will have access to a device to complete the survey, and make sure the survey is optimized for display on different types of devices and screen sizes. Whether on paper or online, the survey should follow the design and accessibility guidelines outlined in Chapter 11.
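As a rough sketch of what that automatic charting does, the following reuses the hypothetical ratings from the earlier analysis example to draw a simple bar chart with the matplotlib library; the data and output file name are invented:

```python
import matplotlib.pyplot as plt
from collections import Counter

# The same hypothetical ratings used in the earlier analysis example.
ratings = [5, 4, 5, 3, 4, 5, 2, 4, 5, 5, 3, 4]
counts = Counter(ratings)
levels = sorted(counts)

# Plot the number of responses at each satisfaction level.
plt.bar([str(level) for level in levels], [counts[level] for level in levels])
plt.xlabel("Satisfaction rating (1 = very dissatisfied, 5 = very satisfied)")
plt.ylabel("Number of responses")
plt.title("Session satisfaction ratings (hypothetical data)")
plt.savefig("satisfaction_ratings.png")
```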

Short Text Responses

Reflective writing exercises such as minute papers and Critical Incident Questionnaires (CIQs) can function as evaluations. If questions for reflective writing exercises are directly linked to learning outcomes, then they are assessments. However, if the questions focus on satisfaction or engagement, they are evaluations. Questions for short-text responses will be more open-ended and general than survey questions, but we should still strive to make them clear, simple, and unambiguous. Also, because these questions require more effort on the part of the learner, we should limit ourselves to two or three questions. Some examples of short-text evaluation questions include the following (a sketch of a rough first pass at tallying themes in such responses appears after the list):

  • What are one or two things that you enjoyed about today’s session?
  • What is one thing that you would improve or change about the session?
  • What did you find most engaging about the session, and why?
  • Which part of the session was least engaging, and why?
  • How well did the workshop meet your expectations? Please explain.
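Open-ended answers like these are best coded by a human reader, but a rough automated pass can flag recurring themes for closer review. In this sketch, the comments and the keyword-to-theme mapping are invented for illustration, and keyword matching is only a starting point for genuine qualitative coding:

```python
# A rough, hypothetical first pass at theming open-ended comments:
# count how many responses mention keywords tied to invented themes.
comments = [
    "The pace felt rushed and I got lost during the database demo.",
    "Loved the hands-on activity; the pace was a bit fast though.",
    "Clear explanations, but the room was too warm.",
]

themes = {
    "pacing": ["pace", "rushed", "fast", "slow"],
    "activities": ["activity", "hands-on", "exercise"],
    "clarity": ["clear", "confusing", "lost"],
    "facilities": ["room", "warm", "cold", "seating"],
}

tally = {theme: 0 for theme in themes}
for comment in comments:
    lowered = comment.lower()
    for theme, keywords in themes.items():
        if any(keyword in lowered for keyword in keywords):
            tally[theme] += 1

for theme, count in sorted(tally.items(), key=lambda item: -item[1]):
    print(f"{theme}: mentioned in {count} of {len(comments)} comments")
```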

Observations and Video Recording

Chapter 14 describes how we can use video recordings and peer observations for reflective practice. When we use these methods to focus on aspects of instruction that impact the quality of the experience or the user’s satisfaction, such as our presentation skills and perceived levels of learner engagement, we are engaging in evaluation. We can use the information we gather from the observations and recordings to make decisions that will improve our practice, thereby increasing the quality of instruction and the levels of learner satisfaction.

Standards

Standards such as the Framework for Information Literacy for Higher Education (ACRL, 2016) and the National School Library Standards (American Association of School Librarians, 2018) outline specific content and skills that learners are expected to master at different developmental and educational stages. We can compare our lessons to these standards to see how well we are addressing them. Keep in mind that we cannot address all standards in a single session. However, we can see if individual lessons align with some part of the relevant standards and frameworks.

Wong (2019) describes how we can use existing standards to evaluate ourselves as instructors. For instance, using the professional competency standards relevant to our setting and position, such as the Association for Library Service to Children’s Competencies for Librarians Serving Children in Public Libraries (ALSC, 2015) or ACRL’s Roles and Strengths of Teaching Librarians (2017), we can evaluate our proficiency in each area and determine steps to improve areas that are not as strong.

Wong (2019) also notes the existence of instructional design standards that can help us evaluate the quality of our curriculum and instructional materials. For instance, the Online Learning Consortium (2016) offers a free Scorecard to guide evaluation of our curriculum in terms of overall design, content, engagement, and adherence to principles of accessibility and universal design. We could use the Scorecard ourselves, or we could invite peers to provide us with their feedback on our sessions. We can also use the best practices outlined in Chapter 11 and Chapter 16, or the guidelines from the National Center on Accessible Educational Materials (n.d.) to ensure that any handouts, videos, or other learning objects we create adhere to universal design and accessibility standards.

Program Evaluation

The bulk of this chapter focuses on evaluation of individual instruction sessions or self-reflective evaluation of ourselves as instructors. However, evaluation should also be carried out at the program level. Program evaluation allows us to improve services, provide evidence of our value to stakeholders, and inform managerial decisions such as allocation of funds and staff. Program evaluation is discussed in more depth in Chapter 20.

Conclusion

While our priority for library instruction is student learning, evaluation can provide us with useful insights into learners’ satisfaction with and perceptions of the quality of our sessions. The main takeaways from this chapter are as follows:

  • Evaluation data can tell us how satisfied learners are with our instruction sessions and provide us with an overview of their perceptions of the quality of the session. Since satisfaction has been linked to learning, self-regulation, and perceived quality of instruction, and can influence future attendance, we should make an effort to evaluate our sessions.
  • Surveys are one of the most popular evaluation tools, but they can be challenging to develop. We should follow best practices to ensure we are writing good survey questions that will result in useful data.
  • Short text responses, peer observations, and video recordings are all valuable tools for evaluation.

Suggested Readings

Applegate, R. (2013). Practical evaluation techniques for librarians. Libraries Unlimited.

Although not specific to instruction, this text offers thorough, clear, and straightforward guidance on developing and implementing evaluation methods, including surveys, interviews, use analysis, and focus groups. Advice on analyzing and interpreting results is included. The author includes plenty of examples, as well as advice on communicating results to stakeholders.

Matthews, J. R. (2007). The evaluation and measurement of library services. Libraries Unlimited.

A complete handbook for evaluation of library services, this text provides information on quantitative and qualitative tools for evaluation, as well as guidance on analyzing and interpreting results. A chapter is devoted to evaluation of library instruction.

Perlmutter, D. D. (2011, October 30). How to read a student evaluation of your teaching. The Chronicle of Higher Education, 58(11). https://www.chronicle.com/article/How-to-Read-a-Student/129553

In this advice column, Perlmutter lays out a simple approach to reading and interpreting course evaluations, including scanning for red flags, teasing out useful data, and preparing by evaluating yourself first. The author recognizes that negative comments can be demoralizing and encourages instructors to take such feedback in stride and recognize when a comment is an outlier as opposed to an indicator of a bigger issue.

Shonrock, D. D. (Ed.). (1996). Evaluating library instruction: Sample questions, forms, and strategies for practical use. American Library Association. http://hdl.handle.net/11213/9207

Despite its publication date, this slim volume remains a useful resource for evaluation of library instruction. The text offers guidance on developing session evaluation questions and advice on survey design, as well as clear and straightforward explanations of how to tabulate and analyze results. Sample surveys are included. A free downloadable version of the guide is available at the American Library Association Institutional Repository.

Winer, L., Di Genova, L., Vungoc, P., & Talsma, S. (2012). Interpreting end-of-course evaluation results. Teaching and Learning Services, McGill University.

This brief guide provides invaluable information on analyzing course evaluations. It offers a clear overview of how to interpret numerical survey results, along with a discussion of various factors that can impact the reliability of those results. Another section deals with interpreting student comments and includes a handy comment analysis worksheet. Although written for college faculty, the advice is applicable for most instructors.

References  

American Association of School Librarians. (2018). National school library standards for learners, school librarians, and school libraries. ALA Editions.

Association of College & Research Libraries. (2016). Framework for information literacy for higher education. http://www.ala.org/acrl/standards/ilframework

Association of College & Research Libraries. (2017). Roles and strengths of teaching librarians. http://www.ala.org/acrl/standards/teachinglibrarians

Association for Library Service to Children. (2015). Competencies for librarians serving children in public libraries. http://www.ala.org/alsc/edcareeers/alsccorecomps

Baños, R., Baena-Extremera, A., & Granero-Gallegos, A. (2019). The relationship between high school subjects in terms of school satisfaction and academic performance in Mexican adolescents. International Journal of Environmental Research and Public Health, 16(18), 3494. https://doi.org/10.3390/ijerph16183494

Dixon, M., Freeman, K., & Toman, N. (2010). Stop trying to delight your customers. Harvard Business Review, 88, 116-122. https://hbr.org/2010/07/stop-trying-to-delight-your-customers

Harris, C. (2007). Tip sheet on question wording. Harvard University Program on Survey Research. https://psr.iq.harvard.edu/files/psr/files/PSRQuestionnaireTipSheet_0.pdf

Hernon, P. & Altman, E. (1996). Service quality in academic libraries. Ablex Publishing.

Hernon, P. & Schwartz, C. (2012). The assessment craze. Library & Information Science Research, 34(2), 79. https://doi.org/10.1016/j.lisr.2012.01.001

Liaw, S., & Huang, H. (2013). Perceived satisfaction, perceived usefulness and interactive learning environments as predictors to self-regulation in e-learning environments. Computers & Education, 60(1), 14-24. https://doi.org/10.1016/j.compedu.2012.07.015

Lloyd, S. (2018, December 10). The 10 commandments for writing good survey questions. Qualtrics. https://www.qualtrics.com/blog/good-survey-questions/

Lo, C. C. (2010). How student satisfaction affects perceived learning. Journal of the Scholarship of Teaching and Learning, 10(1), 47-54. https://eric.ed.gov/?id=EJ882125

National Center on Accessible Educational Materials. (n.d.). Designing for accessibility with POUR. http://aem.cast.org/creating/designing-for-accessibility-pour.html

Online Learning Consortium. (2016). Quality course teaching and instructional practice scorecard. https://onlinelearningconsortium.org/consult/olc-quality-course-teaching-instructional-practice/

Pew Research Center. (n.d.). Questionnaire design. https://www.pewresearch.org/methods/u-s-survey-research/questionnaire-design/

Rodriguez, M. C., Oomes, A., & Montañez, M. (2008). Students’ perceptions of online learning quality given comfort, motivation, satisfaction, and experience. Journal of Interactive Online Learning, 7(2), 105-125. https://www.ncolr.org/jiol/issues/pdf/7.2.2.pdf

Wong, M. A. (2019). Instructional design for LIS professionals. Libraries Unlimited.

License


Instruction in Libraries and Information Centers Copyright © 2020 by Laura Saunders and Melissa A. Wong is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, except where otherwise noted.
