AI is rapidly transforming education, in ways both worrying and beneficial. On the bright side of the ledger, new research shows how AI can help instructors improve the way they interact with their students, via a tool that gives them feedback on their classroom conversations.
A new study led by Stanford, published May 8 in the peer-reviewed journal Educational Evaluation and Policy Analysis, found that an automated feedback tool improved instructors’ use of a practice known as uptake, in which teachers acknowledge, reiterate, and build on student contributions. The results also showed that the tool improved students’ homework completion rates and their overall satisfaction with the course.
For instructors looking to enhance their practice, the tool offers a low-cost complement to conventional classroom observation, one that doesn’t require an instructional coach or other expert to observe the teacher in action and compile a set of recommendations.
“We know from past research that timely and specific feedback can improve teaching, but it’s not scalable or feasible for someone to sit in a teacher’s class and give feedback every time,” said Dora Demszky, assistant professor at Stanford Graduate School of Education (GSE) and first author of the study. “We wanted to see if an automated tool could support teacher professional development in a scalable and cost-effective way, and this is the first study to show that it does.”
Promoting effective teaching practices
Recognizing that existing methods of providing personalized feedback require significant resources, Demszky and colleagues set out to create a low-cost alternative. They took advantage of recent advances in natural language processing (NLP), a branch of artificial intelligence that helps computers read and interpret human language, to develop a tool that can analyze transcripts of a class session to identify conversation patterns and provide consistent, automated feedback.
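The study doesn’t publish the tool’s implementation, but the first stage of such a pipeline, turning a class session into analyzable speaker turns, is easy to picture. The sketch below is a minimal illustration, assuming a simple “speaker: utterance” transcript format (an assumption for this example, not the tool’s actual input schema), and computes one basic conversation pattern: how the talking is divided between teacher and students.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str  # e.g. "teacher" or "student"
    text: str

def parse_transcript(raw: str) -> list[Turn]:
    """Split 'speaker: utterance' lines into speaker turns."""
    turns = []
    for line in raw.strip().splitlines():
        speaker, _, text = line.partition(":")
        if text.strip():
            turns.append(Turn(speaker.strip().lower(), text.strip()))
    return turns

def talk_share(turns: list[Turn]) -> dict[str, float]:
    """Fraction of all words spoken by each speaker role."""
    counts: dict[str, int] = {}
    for t in turns:
        counts[t.speaker] = counts.get(t.speaker, 0) + len(t.text.split())
    total = sum(counts.values()) or 1
    return {s: round(n / total, 2) for s, n in counts.items()}

transcript = """\
teacher: What does the loop on line 3 do?
student: It repeats the drawing five times.
teacher: Right, it repeats the drawing five times, so how could we repeat it ten times?
"""

print(talk_share(parse_transcript(transcript)))
# {'teacher': 0.79, 'student': 0.21}
```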
For this study, they focused on identifying teachers’ uptake of student contributions. “Uptake is key to making students feel heard, and as a practice, it has been linked to greater student achievement,” Demszky said. “But it’s also widely considered difficult for teachers to improve.”
The researchers trained the tool, called M-Powering Teachers (the M stands for machine, as in machine learning), to detect the extent to which a teacher’s response is specific to what a student said, which demonstrates that the teacher understood and built on the student’s idea. The tool can also provide feedback on teachers’ questioning practices, such as whether their questions elicit meaningful responses from students, and on the ratio of teacher to student talk time.
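The published uptake measure comes from a language model trained on annotated classroom dialogue, which can’t be reproduced in a few lines. As a rough stand-in for intuition only, the sketch below scores uptake as lexical overlap, the share of a student’s content words that reappear in the teacher’s reply; every name in it is illustrative, and a real model would also recognize paraphrase, not just repetition.

```python
import re

# Tiny stopword list just for this illustration; a real system would use
# a proper list, or none at all if the model is learned end to end.
STOPWORDS = {"the", "a", "an", "it", "is", "to", "of", "and", "so",
             "we", "i", "you", "how", "could", "right"}

def content_words(text: str) -> set[str]:
    """Lowercased words with punctuation and stopwords stripped."""
    return {w for w in re.findall(r"[a-z0-9']+", text.lower())
            if w not in STOPWORDS}

def uptake_score(student_utterance: str, teacher_reply: str) -> float:
    """Share of the student's content words that the teacher's reply
    repeats -- a crude overlap proxy for 'building on' an idea."""
    s = content_words(student_utterance)
    t = content_words(teacher_reply)
    return len(s & t) / len(s) if s else 0.0

student = "It repeats the drawing five times."
reply = "Right, it repeats the drawing five times, so how could we repeat it ten times?"
print(uptake_score(student, reply))  # 1.0 -- the reply reuses every key word
```

By the same token, a question-mark count could crudely stand in for the questioning signal; the hard measurement problem the study tackles is distinguishing questions and responses that genuinely build on student thinking from ones that merely echo it.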
The research team put the tool to work in the spring 2021 session of Stanford’s Code in Place, a free online course now in its third year. In the five-week program, based on Stanford’s popular introductory computer science course, hundreds of volunteer instructors teach basic programming to students around the world, in small groups with a 1:10 teacher-to-student ratio.
Code in Place instructors come from all kinds of backgrounds, from college students who recently took the course to professional computer programmers working in the industry. As excited as they are to introduce beginners to the world of programming, many instructors approach the opportunity with little or no prior teaching experience.
Volunteer instructors received basic training, clear lesson objectives, and session outlines to prepare for their role, and many welcomed the chance to receive automated feedback on their sessions, said study co-author Chris Piech, assistant professor of computer science at Stanford and co-founder of Code in Place.
“We make a big deal in education about the importance of timely feedback to students, but when do teachers get that kind of feedback?” he said. “Maybe the principal will come in and sit in on your class, which sounds terrifying. It’s much more comfortable to interact with feedback that isn’t from your principal, and you can get that not just after years of practice, but from day one on the job.”
Instructors received their feedback from the tool via an app within a few days after each lesson, so they could reflect on it before the next session. Presented in a colorful, easy-to-read format, the feedback used positive, non-judgmental language and included specific examples of dialogue from their class to illustrate supportive conversational patterns.
The researchers found that, on average, instructors who reviewed their feedback subsequently increased their uptake of student contributions and their use of effective questioning, with the most significant changes occurring in the third week of the course. Student learning and course satisfaction also increased among those whose instructors received feedback, compared with the control group. Code in Place does not administer an end-of-course exam, so the researchers used completion rates of optional assignments and course surveys to measure student learning and satisfaction.
Testing in other settings
Demszky’s subsequent research with one of the study’s co-authors, Jing Liu, Ph.D. ’18, applied the tool with instructors working one-on-one with high school students in an online tutoring program. The researchers, who will present their findings in July at the 2023 Learning at Scale conference, found that on average the tool improved tutors’ uptake of student contributions by 10%, reduced their talk time by 5%, and improved students’ experience with the program as well as their optimism about their academic future.
Demszky is currently conducting a study of in-person use of the tool in K-12 classrooms, and noted the challenge of generating the kind of high-quality transcripts she was able to obtain from a virtual environment. “The audio quality of the classroom isn’t great and separating voices isn’t easy,” she said. “Natural language processing can go a long way once you have transcripts, but you need good transcripts.”
She stressed that the tool was not designed for surveillance or evaluation, but to support teachers’ professional development by giving them an opportunity to reflect on their practice. She likened it to a fitness tracker, which provides information for the benefit of its users.
Furthermore, the tool was not designed to replace human feedback but to complement other professional development resources, she said.
More information:
Dorottya Demszky et al., Can Automated Feedback Improve Teachers’ Uptake of Student Ideas? Evidence From a Randomized Controlled Trial in a Large-Scale Online Course, Educational Evaluation and Policy Analysis (2023). DOI: 10.3102/01623737231169270