Center for Innovative Teaching & Learning (CITL)

News


Multiple Chance Testing for Equitable Grading 

Welcome to CITL's new series, 'From the Faculty...', where USF educators share their innovative approaches to teaching and learning. These faculty-authored pieces showcase the diverse and creative methods used across the university. We hope these insights spark discussion and inspire new ideas in our teaching community. 
 
A single high-stakes attempt at each assessment may not measure true student learning, which is a common concern. Imagine a large-enrollment class that assesses learning via three midterm tests and a final exam, each weighted at 25% of the semester grade. If a student scores 46%, 90%, 90%, and 90% on the four assessments, they will end the semester with a grade of C. What alternative grading schemes are available that are fair and equitable? Some advocate for the use of multiple-chance testing (MCT) as a method to ensure fair grading. MCT offers students multiple chances to retake exams and improve their initial scores. I implemented MCT in a required STEM course at the University of South Florida (USF) while making several adjustments to reduce procrastination and better manage the instructor's workload. Here are the specifics so you can adapt them to your situation. 
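To see the arithmetic behind this motivating example, here is a minimal sketch in Python; the letter-grade cutoffs assume a straight 10-point scale with no plus/minus, which is an illustrative assumption rather than the course's actual scale:

```python
# Hypothetical student from the example above: one weak score, three strong ones.
scores = [46, 90, 90, 90]                 # percent; each assessment is worth 25%
average = sum(scores) / len(scores)       # equal weights, so a simple average

# Assumed straight 10-point letter scale (no plus/minus); cutoffs are illustrative.
def letter(pct):
    if pct >= 90:
        return "A"
    if pct >= 80:
        return "B"
    if pct >= 70:
        return "C"
    if pct >= 60:
        return "D"
    return "F"

print(f"Semester average: {average:.0f}% -> {letter(average)}")   # 79% -> C
```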

The traditional grading scheme in the course comprised learning-management-system (LMS) quizzes (15%), three midterm tests (15% each), projects (10%), a concept inventory (5%), and a final exam (25%). We used MCT for the LMS quizzes and the midterm tests, which together make up 60% of the grade. In addition, the final exam, a standalone grading component, also counted as another chance test.  
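For readers who want the weights in one place, here is a short sketch of how the semester grade is assembled; the weights come from the paragraph above, while the category scores are hypothetical:

```python
# Weights from the grading scheme described above (fractions of the semester grade).
weights = {
    "LMS quizzes": 0.15,        # MCT applies
    "Midterm test 1": 0.15,     # MCT applies
    "Midterm test 2": 0.15,     # MCT applies
    "Midterm test 3": 0.15,     # MCT applies
    "Projects": 0.10,
    "Concept inventory": 0.05,
    "Final exam": 0.25,         # standalone, but also serves as a third-chance test
}
assert abs(sum(weights.values()) - 1.0) < 1e-9   # weights cover 100% of the grade

# Hypothetical category scores (percent), purely for illustration.
scores = {
    "LMS quizzes": 85, "Midterm test 1": 72, "Midterm test 2": 80,
    "Midterm test 3": 78, "Projects": 90, "Concept inventory": 70, "Final exam": 75,
}

semester_grade = sum(weights[c] * scores[c] for c in weights)
print(f"Semester grade: {semester_grade:.1f}%")
```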

The course was divided into eight standards, each corresponding to a chapter. This division clearly delineated the standards for the students.

[Photo: Dr. Autar Kaw teaching]

There are 30 LMS quizzes in the semester. Each quiz has three questions, two multiple-choice and one algorithmic, drawn from question banks I have developed for the course. Students can make as many attempts as they wish before the weekly deadline, and the LMS automatically reports the highest score. If they want to attempt a quiz again after the deadline, they can do so until the last day of class and recoup half of the missed grade; for example, if they score 6/10 before the deadline and 9/10 after the deadline, their score would be 6+(9-6)/2 = 7.5/10. If their score after the deadline is lower, their quiz grade stays unchanged (a short sketch of this update rule is given below).

The semester has three midterm tests, which check 3, 3, and 2 standards, respectively. Checking multiple standards in a midterm maintains the interleaving effect, where students must figure out which standard a question belongs to. Questions can also be given where one standard is a prerequisite for another. Each standard is graded out of 20 or 40 points depending on the length of the chapter. For example, Standard 1 is a two-week chapter and is graded out of 40, while Standard 2 is a one-week chapter and is graded out of 20. The score for each standard is reported on the graded test. Triple feedback is given to the student on each question asked: the wrong answer is pointed out, the path to the correct answer is shown, and, more importantly, references are given to examples and problems the student can attempt to review the material. Students are encouraged to come to office hours for face-to-face or online help.  
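Here is a minimal sketch of the post-deadline quiz update mentioned above; the function name is illustrative and is not the LMS's own logic:

```python
def updated_quiz_score(before: float, after: float) -> float:
    """Half-recoup rule for a post-deadline quiz attempt; a lower attempt never hurts."""
    if after <= before:
        return before                      # grade stays unchanged
    return before + (after - before) / 2   # recoup half of the missed points

# Example from the text: 6/10 before the deadline, 9/10 after the deadline -> 7.5/10.
print(updated_quiz_score(6, 9))            # 7.5
```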

A second-chance test is given two to three weeks after each midterm test. Students can retake any or all of the standards covered by the midterm test they just took. For example, midterm test one covers three standards, so the retest is offered as three separate 25-minute tests (e.g., Standard 1 from 11:00 to 11:25 AM, a 5-minute break, Standard 2 from 11:30 to 11:55 AM, a 5-minute break, and Standard 3 from 12:00 noon to 12:25 PM). A late policy is in place: once the first student leaves a standard's retest, no student arriving after that point may take that retest. This policy was adopted to maintain the academic integrity of the retest, but we never needed to invoke it. We also post the retests on the LMS so that students do not show up just to get a copy of the retest.  

Students can recoup only half of the missed points; for example, if they scored 24/40 on the midterm test for Standard 1 and 34/40 on the retest, their score would be 24+(34-24)/2 = 29/40. If their retest score is lower, they are not penalized, and their grade stays unchanged. If a retest for a standard is taken, the updated score is also capped at 90%. This policy was adopted to discourage high-performing students from taking the retest just to gain a few more points, as their time would be better spent learning new course topics. Although it was not my intention, this policy also helped reduce grading effort: only 60% of the possible retests were taken in the course.  
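The retest follows the same half-recoup rule with the added 90% cap; here is a short sketch of the stated rules (a hypothetical function, not the course's actual spreadsheet):

```python
def updated_test_score(before: float, after: float, max_points: float) -> float:
    """Half-recoup rule for a standard's retest, capped at 90% of the maximum."""
    if after <= before:
        return before                            # no penalty for a lower retest score
    updated = before + (after - before) / 2      # recoup half of the missed points
    return min(updated, 0.9 * max_points)        # cap the improved score at 90% of maximum

# Example from the text: 24/40 on the midterm standard, 34/40 on the retest -> 29/40.
print(updated_test_score(24, 34, 40))            # 29.0
```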

The final exam is a standalone category in the grade but also serves as a proxy for a third-chance test on all eight standards. Questions from the final exam are allocated to each standard and used as third-chance test scores. The score-update policy is the same as for the second-chance tests. Some would argue that I should use the final exam session to test only the standards that students wished to retake, but the comprehensive nature of the final exam must not be ignored.

Since we do not have a simple way to report updated grades to the students on the LMS, we made a student-friendly Excel spreadsheet where students could enter their grades for all the quizzes and tests they had taken. The spreadsheet calculated the grade both without and with the retests. The grade without the retests matched the overall grade reported on the LMS, so students who did not want to use the Excel spreadsheet still knew their minimum grade at any time in the semester. To calculate the final grade, one must pull the grades from the LMS and apply simple spreadsheet functions; this process can then be automated for later semesters.  
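The spreadsheet itself is not reproduced here, but its logic might be sketched roughly as follows, using the half-recoup rules above; the data structures, scores, and function names are all illustrative, scores are treated as percentages, and the final-exam third-chance step is omitted for brevity:

```python
def half_recoup(before, after, cap=None):
    """Half-recoup rule; an optional cap (e.g., 90) applies to midterm retests."""
    if after <= before:
        return before
    updated = before + (after - before) / 2
    return min(updated, cap) if cap is not None else updated

# (score before deadline/retest, score after) pairs; None means no second attempt.
quizzes   = [(60, 90), (80, None), (100, None)]
standards = [(60, 85), (75, None), (90, 95)]

def component_average(pairs, cap=None, use_retests=True):
    updated = [half_recoup(b, a, cap) if (use_retests and a is not None) else b
               for b, a in pairs]
    return sum(updated) / len(updated)

weights = {"quizzes": 0.15, "tests": 0.45, "projects": 0.10,
           "concept inventory": 0.05, "final exam": 0.25}
fixed = {"projects": 90, "concept inventory": 80, "final exam": 78}   # hypothetical

for use_retests in (False, True):
    grade = (weights["quizzes"] * component_average(quizzes, use_retests=use_retests)
             + weights["tests"] * component_average(standards, cap=90, use_retests=use_retests)
             + sum(weights[k] * fixed[k] for k in fixed))
    label = "with" if use_retests else "without"
    print(f"Semester grade {label} retests: {grade:.1f}%")
```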

We compared student performance and affective outcomes for the course with and without MCT. The findings, reported in a journal article (Kaw & Clark, 2024), indicated that implementing MCT resulted in a higher percentage of students achieving a high (80% or more) final exam score (15% vs. 3%), a larger proportion of 'A' grades (36% vs. 27%), and a more positive classroom environment in terms of participation, unity, and satisfaction. During focus groups, students mentioned appreciating the enhanced learning experience, the retake opportunities, and the reduced stress associated with multiple-chance testing. Some students were concerned about not knowing their current grade in the course, as they did not wish to use the spreadsheet. The referenced blog post and journal article offer additional insights into the study's findings.  

My questions to the reader are:  

  • Would you use multiple-chance testing?  
  • How would you implement it differently?  
  • How can you maximize the advantages of multiple-chance testing and minimize the drawbacks for students and instructors?  
  • Do you have a better way of reporting grades in an LMS so that the current overall grade is reflected in real time?  

If you want to know more about multiple-chance testing, don't hesitate to contact Autar Kaw at kaw@usf.edu.  

References:  
Kaw, A. (2023, November 15). Multiple Chance Testing as a Gateway to Standards-Based Grading [Blog post].

Kaw, A. & Clark, R. (2024). Effects of standards-based testing via multiple-chance testing on cognitive and affective outcomes in an engineering course. International Journal of Engineering Education, 40(2), 303–321. (available online for USF faculty when logged in). 
 

