Presented at: 11th Annual Outcomes and Assessment Conference
QEP Perspective: When Right or Wrong is Insufficient for Assessment
This session presents the ongoing process of creating and using best-response, multiple-choice questions (MCQs) for assessment within the college's QEP. Presenters will focus on designing MCQs in which the available answer options are grounded in a course-specific assessment rubric. Unlike traditional MCQs, these questions are designed to assess students across a range of skill levels rather than simply marking answers right or wrong.
After attending the session, participants will be able to:
- Explain the advantages and disadvantages of best-response, multiple-choice questions
- Explain the concept of creating best-response, multiple-choice questions that correspond to a 5-point assessment rubric specific to critical thinking skills
- Describe ways to construct multiple-choice questions with best-response answers
- Describe ways to analyze the validity and effectiveness of the questions and responses
- Describe methods of working with faculty to collaboratively build consensus for quality and content
In an effort to respond to faculty overload in submitting data for the Quality Enhancement Plan (QEP) and to enhance data collection, the institution is currently exploring the development of multiple-choice questions with scaled answers grounded in a discipline- or course-specific assessment rubric. This process goes beyond the conventional 'best answer' multiple-choice question in that it assesses, in a scaled format, the student's ability to demonstrate specific components of critical thought within a specific discipline or course.
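To make the scaled-answer idea concrete, the structure described above can be sketched as a small data model in which each answer option is tied to a level on the 5-point rubric, so a student's choice yields a rubric score rather than a right/wrong mark. This is a hypothetical illustration (the class name, fields, and sample question are invented, not taken from the presentation):

```python
# Hypothetical sketch of a scaled, rubric-grounded MCQ: every option maps
# to a rubric level (1-5), so scoring returns a level, not correct/incorrect.
from dataclasses import dataclass


@dataclass
class ScaledMCQ:
    stem: str
    option_levels: dict  # option label -> rubric level (1-5)

    def score(self, choice: str) -> int:
        """Return the rubric level of the option the student selected."""
        return self.option_levels[choice]


# Invented example question; option "B" represents the level-5 response.
question = ScaledMCQ(
    stem="Which response best evaluates the evidence in the passage?",
    option_levels={"A": 2, "B": 5, "C": 3, "D": 1},
)
print(question.score("B"))  # prints 5: the student chose the level-5 option
```

Under this sketch, a bank of such questions produces rubric-level data directly, which is what lets the institution replace hand-scored artifacts with machine-scorable items.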
This work is currently ongoing at the institution and heavily involves the QEP Core Team (this session's presenters) and faculty participating in discipline-specific faculty learning communities. The QEP is exploring both faculty generation of multiple-choice answer options based on a rubric and the use of previously collected student artifacts to generate answer options.
Early results, at least in one course, indicate a high level of reliability between scores on a written artifact (an essay) and scores based on the multiple-choice answers. Other results, based on the distribution of student scores, indicate a need for careful refinement of the answer options so that they discriminate effectively among the various levels of students' ability to think critically.
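One common way to quantify the kind of agreement described above between essay rubric scores and MCQ-derived scores is a weighted agreement statistic such as quadratic-weighted Cohen's kappa, which suits ordinal 1-5 rubric data. The sketch below is illustrative only; the presentation does not specify which statistic was used, and the sample scores are fabricated:

```python
# Hypothetical sketch: quadratic-weighted Cohen's kappa for agreement
# between two sets of ordinal (1-5 rubric) scores. 1.0 = perfect agreement.
def quadratic_weighted_kappa(scores_a, scores_b, min_rating=1, max_rating=5):
    n_cat = max_rating - min_rating + 1
    n = len(scores_a)
    # Observed confusion matrix of the two score sets.
    observed = [[0] * n_cat for _ in range(n_cat)]
    for a, b in zip(scores_a, scores_b):
        observed[a - min_rating][b - min_rating] += 1
    # Marginal histograms, used to build the chance-agreement expectation.
    hist_a = [sum(row) for row in observed]
    hist_b = [sum(observed[i][j] for i in range(n_cat)) for j in range(n_cat)]
    num = den = 0.0
    for i in range(n_cat):
        for j in range(n_cat):
            weight = (i - j) ** 2 / (n_cat - 1) ** 2  # quadratic penalty
            num += weight * observed[i][j]
            den += weight * hist_a[i] * hist_b[j] / n
    return 1.0 - num / den


# Fabricated data: rubric scores on an essay vs. the same students'
# MCQ-derived scores.
essay_scores = [5, 4, 3, 2, 1, 4, 3, 5, 2, 3]
mcq_scores = [5, 4, 3, 2, 1, 3, 3, 5, 2, 4]
print(round(quadratic_weighted_kappa(essay_scores, mcq_scores), 3))  # 0.936
```

A low kappa, or a confusion matrix whose off-diagonal mass clusters at particular rubric levels, would flag exactly the refinement need noted above: answer options that fail to discriminate between adjacent levels of critical thinking.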