Tonight I watched the ASM JMBE Live session recording with Heather Seitz and Andrea Rediske. The title of the session, recorded on December 10, 2021, was “Impact of COVID-19 curricular shifts on learning gains on a Concept Inventory.” Their Microbiology for Health Sciences Concept Inventory (MHSCI) is very useful and is described in JMBE.

Rediske explained that developing a concept inventory is an iterative process that, in this case, took five years! They first determined which microbiology concepts are important to faculty members by surveying faculty experts. Next, they identified student thinking on these concepts with true/false questions, and they created an open-ended survey, given to 119 students, to probe student thinking further. They then developed a forced-answer survey that “measures student thinking,” analyzed by three coding teams, validated it with interviews of fifteen students, and delivered the instrument to 620 students. Results were then assessed.

Seitz defined assessment validity as “the degree to which an assessment measures or assesses what it claims to assess,” with variations including the accuracy of an assessment, face validity (the weakest evidence), content validity (not assessed quantitatively), and criterion validity (a quantitative measurement of correlation). Assessment reliability was defined as “how well the score of an assessment represents a student’s ability” and considers the consistency and dependability of results. Cronbach’s alpha and Kuder-Richardson are ways of determining assessment reliability. The goal is to get as close as possible to both high validity and high reliability.

Seitz described normalized learning gains as the “change in the class average score divided by the maximum possible gain,” with high gains at 0.7 and above, medium gains from 0.3 to 0.7, and low gains below 0.3. Next, item difficulty was explained as “the percentage of students who answered the question correctly.” Item difficulty ranges from 0.4 and above (not as difficult) down to 0.09-0.19 (very difficult, considered a poor item).
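To make those two class-level definitions concrete, here is a minimal sketch of the calculations as I understood them from the session; the class averages and response data below are invented for illustration, not numbers from the talk.

```python
def normalized_learning_gain(pre_avg, post_avg, max_score=100.0):
    """Change in the class average score divided by the maximum possible gain."""
    return (post_avg - pre_avg) / (max_score - pre_avg)

def item_difficulty(responses):
    """Fraction of students who answered the item correctly (1 = correct, 0 = incorrect)."""
    return sum(responses) / len(responses)

# Hypothetical class: average rises from 40 to 58 on a 100-point instrument.
gain = normalized_learning_gain(40, 58)  # 18 / 60 = 0.3, the low end of "medium"

# Hypothetical item: 6 of 10 students answer correctly.
difficulty = item_difficulty([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])  # 0.6
```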
Lastly, Seitz described discrimination as “the ability of an item (assessment question) to differentiate among students based on how well they know the material being tested,” with good discrimination at 0.3 and above, fair discrimination from 0.1 to 0.3, and poor discrimination below 0.1, using Pearson’s product-moment correlation. The MHSCI had:
- Normalized learning gains: 0.2
- Reliability score: 0.34
- Item difficulty: all items P = 0.6
- Whole-test reliability (Kuder-Richardson): 0.65
- Pearson’s correlation coefficient: 0.63 (p < 0.02)
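For my own notes, the item discrimination (Pearson’s product-moment correlation between an item and the total score) and the Kuder-Richardson whole-test reliability can be sketched as below; the 4-student, 3-item score matrix is made up for illustration, and the MHSCI authors may compute these differently in practice.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def kr20(item_matrix):
    """Kuder-Richardson 20 reliability for dichotomous (0/1) items.

    item_matrix[s][i] = 1 if student s answered item i correctly.
    Uses the population variance of the total scores.
    """
    n_students = len(item_matrix)
    n_items = len(item_matrix[0])
    totals = [sum(row) for row in item_matrix]
    mean_total = sum(totals) / n_students
    var_total = sum((t - mean_total) ** 2 for t in totals) / n_students
    pq = 0.0
    for i in range(n_items):
        p = sum(row[i] for row in item_matrix) / n_students  # item difficulty
        pq += p * (1 - p)
    return (n_items / (n_items - 1)) * (1 - pq / var_total)

# Invented score matrix: 4 students x 3 items.
scores = [
    [1, 1, 1],
    [1, 0, 1],
    [0, 1, 0],
    [0, 0, 0],
]
totals = [sum(row) for row in scores]
item0 = [row[0] for row in scores]

# Discrimination for item 0: correlate item correctness with total score.
discrimination = pearson_r(item0, totals)  # ~0.89, "good" by the cutoffs above
reliability = kr20(scores)
```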
The MHSCI is being used all over the country and internationally. Rediske shared that students who use the MHSCI get reports with pre- and post-test scores. Importantly, the learning outcomes are aligned with the ASM learning outcomes.

Rediske and Seitz wanted to know what the impact of the COVID-19 pandemic was on learning gains on the MHSCI. Interestingly, the preliminary results indicated “no statistically significant change in learning gains pre and post-COVID-19.” These results were published in a JMBE article. They presented the data pre/post COVID, broken down by institution type. There are twenty-three items in the MHSCI. Seitz contextualized these results by comparing them to other studies.

During the question and discussion session, Seitz explained that they have developed a document with “best practices” for implementing the MHSCI. For example, the pre-test is intended to help the instructor, not to be used as a “graded component” of the course. One question was about using the MHSCI in other countries, and I learned that they have versions in Spanish and French! I also really liked Rediske’s response about students not getting the hands-on experience, yet the MHSCI focuses on important concepts and misconceptions in microbiology; Seitz explained that they did not focus on lab skills.

The last portion of the session was an opportunity to learn more about the career paths of the speakers. It was really neat to learn about Heather’s path! Rediske did a Ph.D. in education and STEM research!
