Another Quality Matters (QM) webinar, entitled “Improving Course Design Quality Through Online Graduate Student Evaluations,” caught my attention. Drs. Amy M. Grincewicz and Cathy DuBois from Kent State University have years of experience in instructional design and online course programs. The goals of the session were to discuss quality assurance in an online degree program, identify purposes for student evaluations, and apply feedback to improve courses.
DuBois described their online MBA program, which spans five departments and fourteen faculty and was the college's first online program development effort. She also described the change in course credits from 3 to 2, faculty resistance, and a short development timeline. The college adopted an internal evaluation process focused on continuous improvement, and some faculty took QM courses. They also used module feedback surveys of one to seven questions dealing with course design, students' perceived workload, and suggestions. Grincewicz then talked about how the surveys were used and detailed the module surveys (see the sketch after the list):
- Questions focus on the design of the course
- Five multiple-choice questions:
  - Approximately how long did it take to complete the module? (I should be asking this!)
  - Four Likert-scale questions covering:
    - amount of work
    - clear instructions
    - timing of feedback
    - learner-learner interaction
- Three short-answer questions:
  - Feedback on assignments
  - Which activities were the most helpful?
  - Which activities were the least helpful?
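To make that structure concrete, here is a minimal sketch of how such an end-of-module survey could be represented in code. The webinar only gave the question topics and counts, so the exact wording and the time-range choices below are my own placeholders:

```python
# A minimal sketch of an end-of-module survey like the one described.
# The question wording and the time-range choices are my assumptions;
# the webinar only specified the topics and the question counts.
from dataclasses import dataclass, field

@dataclass
class Question:
    prompt: str
    kind: str  # "multiple_choice", "likert", or "short_answer"
    choices: list[str] = field(default_factory=list)

module_survey = [
    Question("Approximately how long did it take to complete the module?",
             "multiple_choice",
             ["<5 hours", "5-10 hours", "10-15 hours", "15-20 hours", ">20 hours"]),
    Question("The amount of work in this module was reasonable.", "likert"),
    Question("The instructions for activities were clear.", "likert"),
    Question("Feedback on my work came at the right time.", "likert"),
    Question("The module supported learner-learner interaction.", "likert"),
    Question("What feedback do you have on the assignments?", "short_answer"),
    Question("Which activities were the most helpful?", "short_answer"),
    Question("Which activities were the least helpful?", "short_answer"),
]
```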
Grincewicz emphasized that the surveys were placed at the end of every module and that data from different courses were used for course improvement. Their internal review process had high standards, requiring accessibility, aligned objectives that promote active learning and engagement, descriptive grading, and means of interacting with students. Grincewicz described a course developed by a new faculty member and the student feedback from each module. I found it useful to see the time calculation: “each module should be 22 hours of work per module since each module is 2 weeks long for a 2-credit hour course.” This information, together with the feedback data from each of the four course modules, helped identify enhancements. Students were cramming work into the weekends before the due dates, so due dates were restructured and a learning path was created. Another enhancement was the creation of narrated presentations with slides and notes. The notes were popular with students, and video closed-captioning was done with rev.com! It is good to know that others use rev.com!
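The 22-hour figure lines up with the common federal credit-hour definition of roughly 45 hours of total student work per credit per term; assuming that is what Kent State used (the webinar did not spell it out), the arithmetic works out like this:

```python
# Sanity check on the 22-hour figure, assuming the common federal
# credit-hour definition of ~45 hours of total student work per
# credit per term (an assumption; the webinar did not spell it out).
credits = 2
hours_per_credit = 45                      # 15 weeks x 3 hours/week
total_hours = credits * hours_per_credit   # 90 hours for the course

course_weeks = 8                           # four modules, 2 weeks each
module_weeks = 2
modules = course_weeks // module_weeks     # 4 modules

hours_per_module = total_hours / modules
print(f"{hours_per_module:.1f} hours of work per module")  # 22.5
```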
The second example Grincewicz described was a textbook-based course that student feedback helped improve. The enhancements included dropping the textbook, having the instructor create story presentations of real-world applications, and adding discussions. Practice cases and group case analyses were developed around the business stories the instructor created. A “coffee house” discussion allowed students to comment on the cases. I like this idea and have been doing something similar with the cases we discuss.
The third example Grincewicz described was a course that already met standards, where student feedback was still used to improve it further. Students wished the discussion posts were due later than Wednesday, so they were moved to Thursdays. Assignment guidelines were improved, and students also wanted access to guidelines that were not visible until they completed certain parts of the course. Grincewicz then presented data comparing feedback across courses in the program: differences in the number of hours spent on modules and in whether students thought the workload was too much, just about right, or not enough. Interestingly, students sometimes rated the workload as too much even when the (self-reported) time they spent was just about right! As part of program enhancement, Grincewicz showed how they created a course introduction slide that lays out the work expected for the credit hours and modules. I love this and should be more transparent, communicating workload expectations early on (e.g., in Module 0: Start Here!).
The lessons learned included the need for administrator support; improving faculty awareness of alignment, engagement, accessibility, and course expectations; and the reality that low response rates and survey fatigue happen! In the question-and-answer session, Grincewicz mentioned using a spreadsheet to calculate workload. Calculating workload (the time needed to complete assignments and modules) has been challenging for me. The experience from Kent State and the module surveys they used are very valuable. I would like to include end-of-module surveys like this and explicitly state expectations early on. Along with an Electronic Elements Disclosure question needed to move past Module 0: Start Here, I could include a question about the hours of work expected. At the same time, I want to welcome feedback without causing survey fatigue. In an 8-week course I could survey after weeks 2, 4, 6, and 8, for example. Watching this session was time well spent!
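Grincewicz did not share her spreadsheet, but a rate-based calculator along these lines is what I picture; every activity type and per-hour rate below is my own rough assumption, not something from the webinar:

```python
# A minimal sketch of a workload calculator like the spreadsheet
# mentioned in the Q&A. The activity types and the per-hour rates
# are my own rough assumptions, not the presenters' numbers.
READING_PAGES_PER_HOUR = 20     # assumption: dense academic reading
WRITING_WORDS_PER_HOUR = 250    # assumption: drafted and revised prose

def estimate_module_hours(reading_pages, writing_words,
                          video_minutes, discussion_hours):
    """Rough total hours of student work for one module."""
    hours = reading_pages / READING_PAGES_PER_HOUR
    hours += writing_words / WRITING_WORDS_PER_HOUR
    hours += video_minutes / 60
    hours += discussion_hours
    return hours

# Example: does this module fit the ~22-hour target from the webinar?
total = estimate_module_hours(reading_pages=120, writing_words=1500,
                              video_minutes=90, discussion_hours=3)
print(f"Estimated workload: {total:.1f} hours (target ~22)")
```

If the estimate comes in well under or over the target, that is a cue to rebalance the module before students report the mismatch in the end-of-module survey.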
