AI and Assessment Strategies for Online Courses

Tonight I watched the Quality Matters (QM) Research Connect Conference session on assessment, titled "Assessment in Online Courses in Higher Education: Perspectives of Instructional Designers in the Age of Artificial Intelligence." The presenters were Florence Martin and Jennifer DeLarm of NC State University, Stella Kim of UNC Charlotte, and Doris Bollinger of Texas Tech. Bollinger and Martin have studied online learning and student satisfaction for a number of years. Martin noted that assessment has taken on a new dimension in the age of artificial intelligence. The team collected data from online instructors about assessment in online courses; the second part of the project examined the perspectives of instructional designers, and the third focuses on student perspectives.

DeLarm, a graduate student, spoke about types of assessments, explaining that traditional assessments now face challenges from AI, while authentic assessments can incorporate multimedia, for example. DeLarm also said that assessment strategies should include a mix of formative and summative approaches along with feedback. The timing of feedback is crucial, DeLarm explained; for larger projects, feedback within a week is recommended. AI in assessments has benefits, DeLarm noted, including enhanced precision and efficiency, tailored feedback delivery, and automated evaluation capabilities. The challenges of AI in assessments include academic dishonesty concerns, the quality and accuracy of AI-generated content, and the need for balanced implementation.

DeLarm explained that the purpose of their study was to examine instructional designers' perceptions of the effectiveness of learner assessments in online higher education environments in the age of artificial intelligence. The team had four research questions, with Kim serving as the methodologist for the project. The process began with a literature review, followed by survey development, an expert review panel, and survey implementation. The survey was distributed through various networks and professional organizations, and data analysis included descriptive statistics, correlations, and content analysis with open coding. After data collection and filtering, 103 instructional designers remained as participants.

For research question 1, the team examined assessment types and their perceived effectiveness. Case study analysis was rated most effective, with most respondents rating it very highly, followed by electronic portfolios. Instructional designers shared their perspectives and rankings. I now want to reach out to Martin!

What do instructional designers share as strategies for online course assessments? AI-generated image.