It was a busy day with scooters, painting, chalk, and baking. Tonight, we watched an ALT 2021 day 3 session entitled “Making Video and Multimedia for Learning Accessible and More Inclusive, at Scale,” presented by Sandra Partington, Maria Kaffa, and Sandra Guzman Rodriguez. Partington spoke about the transition to online teaching and the use of video. They separated their media into three categories: pre-recorded media with automated captions, recordings of synchronous online sessions with automated captions and transcripts only, and recordings of on-campus face-to-face sessions. Partington talked about the sheer volume of voice-over-slide presentations and automated speech recognition captions. Whenever they could, they switched on live automated speech recognition.

They started a “students as caption correctors” pilot, and in the end they took a group of students who became proficient in subject areas for captioning. Partington explained how they became a “captioning factory” and learned from the students about new settings in Kaltura. I love how students were engaged and contributed to the project. Partington also spoke about how they learned that accents were not the cause of inaccurate captions in their experience. Partington described common mistakes made by the automated captions and disclosed some (in retrospect) funny mistakes with settings that they had to address.

Their campus also used several different platforms, including Kaltura, Zoom, and Teams. The student pilot helped inform Partington’s team what to request from vendors and what to ask them to fix. They identified limitations and caps on some features that could not have been found without the students’ help. Partington will now pilot human caption correction services. Partington also asked: “Could Data help speed up and share benefits across cohorts?” An audience member mentioned that they had had similar conversations, with similar challenges, on their campus.
This problem goes beyond a campus: how can we use technology and data to help neurodiverse AND neurotypical students (all!)? I have been using Panopto auto transcription and been fortunate to also have access to a pilot with human captions. While the turnaround is a couple of days, the human captions are fantastic, even for scientific terms. The challenge is planning ahead to caption the videos in time for student use.
A second, somewhat related session was entitled “Analysing online discussion in different digital spaces,” presented by Carmen Vallis and Carlos Prieto Alvarez. Prieto Alvarez finished a Ph.D. in learning analytics, and Vallis works at the University of Sydney Business School. Vallis explained that their project wanted to take a fresh look at discussion spaces and their affordances, exploring how students used asynchronous discussions. Vallis mentioned that they are not advocating for one tool over another. They compared the Atomic Discussion Tool and Canvas Discussions with a class of 1200+ students. Atomic discussions are integrated into the Canvas Learning Management System (LMS). Discussions were used as formative activities. Vallis also talked about the ED discussion tool, which is mostly a Q&A forum intended to “reduce staff emails.” None of the three tools were required, and the project is still ongoing.

Vallis mentioned that a lot of the data analysis had to be done manually. They then did content analysis and used a visualization tool. Interestingly, Vallis stated that the Community of Inquiry framework really didn’t apply to their project, since discussion spaces were used to comprehend, critique, construct, and share, with learner actions involving interpretation, challenging ideas, comparing/refining thinking, and supporting discussion. Thus, they used a framework by Gao et al. (2012) to analyze learner dispositions, and data visualizations to explore the data. The engagement factor was about six times higher with Atomic discussions, and Vallis thinks this is because the tool is embedded on the page. Students were using discussion forums to learn and contribute but not to further the discussion itself, according to Vallis. ED was mostly used in a “transactional” way, in which someone posted a question, the instructor replied, and others “lurked” and read the responses. I find this fascinating!
In a way, that’s what we want, though maybe we want to normalize this behavior and make it more evident. Among the findings from the study, Vallis mentioned that:
- Discussion is formal.
- Students are influenced by nearby posts.
- Students find searching for information useful.
Vallis explained that future work will ask about the purpose and place of discussions, with the possibility of updating discussions and maybe connecting learners in more informal ways. I thought it was intriguing to think about presenting “engagement dashboards” to students and automating discussion analysis. They also have a cdrg.blog that I will check out! The idea of updating, or moving on to, a different framework for analyzing online discussions raises a good question: what is the purpose of the discussion forum? Is it to create community, or to share and refine opinions/knowledge?
