Enhancing a Visual Learning Analytics Platform for Synchronous Online Mathematics Instruction: A Study of Teacher Perspectives

Bulletin of the Technical Committee on Learning Technology (ISSN: 2306-0212)
Volume 25, Number 1, 81-88(2025)
Received May 31, 2025
Accepted September 21, 2025
Published online November 22, 2025
This work is licensed under a Creative Commons CC BY-NC 3.0 license.


Authors:

Chung Kwan Lo1 and Gaowei Chen2

1: Education University of Hong Kong, Hong Kong, China; 2: University of Hong Kong, Hong Kong, China

Abstract:

This study explores a learning technology designed to support teachers’ reflection and instructional improvement through video-based activities. It focuses on synchronous online mathematics instruction, which has the potential to provide students with additional and sustainable learning opportunities in the post-pandemic era. We enhanced a visual learning analytics (VLA) platform—developed based on established visualization principles—to analyze online classroom discourse data from lesson recordings. The platform enables teachers to review and reflect on their teaching practices. The study involved 15 secondary school teachers who participated in our professional development programs. Data were collected through written feedback to gain insights into the teachers’ perspectives. According to their user experiences, the teacher participants shared both the benefits and challenges of using the platform, along with suggestions for improvement. The findings indicated that they valued the technological (e.g., effective design and functionality), educational (e.g., transcription and classification support for review), and social (e.g., collaborative lesson reflection) affordances provided by the VLA platform. Their suggestions (e.g., integrating AI technologies to support analysis and facilitating idea exchange on the platform) offered valuable directions for further enhancement. This study thus serves as a demonstration of how VLA technology can be applied to analyze classroom interactions and support instructional improvement in teacher professional development.

Keywords: Continuing professional development, instructional aids, teaching, visual analytics, visualization.

I. INTRODUCTION

During the pandemic-induced lockdown, technology played a pivotal role in enabling the shift from traditional classroom settings to fully online environments, particularly in mathematics education [1]. In traditional classrooms, the presentation of knowledge by teachers and their dialogue with students are crucial for facilitating students’ mathematics learning [2–4]. As Song et al. [4] summarized, these interactions yield a range of benefits, including stimulating reasoning, co-constructing knowledge, and promoting problem-solving skills. When the shift to fully online environments occurred, video conferencing platforms, such as Zoom and Microsoft Teams, became essential tools that enabled real-time online interaction between teachers and students. In the post-pandemic era, synchronous online instruction continues to be relevant, as it provides accessibility and flexibility for sustainable, remote learning opportunities [5, 6]. However, teachers still face challenges with this mode of instruction, including maintaining student engagement and fostering effective communication [7, 8]. Recognizing the importance of online classroom interactions, it is essential to provide teachers with lesson video playback and tools that enable a comprehensive analysis of and reflection on their instructional practices, ultimately enhancing teaching effectiveness and promoting student success [9, 10].

Our existing visual learning analytics (VLA) platform (https://v2elearn.com/) offers a solution to support teachers in their efforts to improve instruction by reviewing lesson recordings. Gaudin and Chaliès [11] cautioned that teachers may overlook critical events in these recordings that are relevant to reflective practice, thereby limiting professional growth. To address this concern, the VLA platform was enhanced to visualize classroom discourse data from synchronous online lessons. Visualizing such data increases the likelihood that teachers will notice and identify areas requiring further investigation and improvement during lesson review [12]. We apply a set of design principles to enhance the effectiveness of the visualizations, including the use of multiple visualizations, connected visualizations, visualizations of data at various levels, interactive visualizations, and novel visualizations [13]. The objective of this study is to examine whether the enhanced VLA platform can support teachers’ reflection on synchronous online mathematics instruction. The study is guided by the following research questions (RQs):

  • How does the VLA platform facilitate teachers’ reflection on teaching practices?
  • What challenges do teachers face when using the VLA platform?
  • How can the VLA platform be improved for a better user experience?

II. RELATED WORK

This section reviews related work across three key areas. First, it draws on empirical research on video-supported teacher professional development to examine the benefits and challenges of using video for instructional reflection. Second, it defines VLA and outlines the visualization principles established in the literature. Finally, it explores several influential frameworks for analyzing classroom discourse.

A. Video-Supported Teacher Professional Development

To support instructional improvement, some teacher educators incorporate video-based activities into professional development programs [11]. Video captures the richness and complexity of classroom interactions, enabling teachers to observe exemplary lessons or reflect on their practices [14]. In the context of mathematics teacher education, Hollingsworth and Clarke [9] found this approach to be particularly effective. In their study, teachers and researchers collaboratively examined video-recorded lessons, focusing on specific elements selected by the teachers. They then engaged in feedback discussions centered on their observations, analyses, and the implications for future instruction. The researchers concluded that using video for observation and analysis facilitated teacher self-reflection and stimulated professional learning.

However, teachers may struggle to identify relevant events for reflection owing to the overwhelming richness of video data and the potential for cognitive overload [11]. Blomberg et al. [15] further noted that many video-based activities relied on minimally edited footage without supplementary support for viewers, which can be cognitively demanding for teachers, even for short clips. Thus, relying solely on raw lesson recordings may not effectively support teacher reflection or professional growth.

B. VLA and Visualization Principles

VLA offers a promising approach to help teachers gain insights from video data. Grounded in the science of analytical reasoning, VLA aims to assist users in understanding complex data through interactive visual interfaces [16]. In education, VLA technology combines data analysis, visual representation, and user interaction to make educational data-related tasks more effective and efficient [13]. Research syntheses in References [13] and [17] indicate that VLA has been widely applied to online dashboard data, including discussion forum contributions, survey responses, and student performance metrics. However, there have been few attempts to apply VLA technology to classroom discourse data to inform teaching practices.

Therefore, our collaborator Chen [18] and his team developed a VLA platform to support the improvement of teachers’ classroom discourse practices. The development and subsequent enhancements are based on the five design principles of VLA strategies established by Vieira et al. [13]:

  • Multiple visualizations: This principle employs two or more visualizations to present data, enabling a deeper and more comprehensive understanding of the information.
  • Connected visualizations: By linking visualizations, this principle allows users to observe relationships and patterns across different data representations.
  • Visualizations of data at various levels: This principle facilitates both broad overviews and detailed insights by presenting data at multiple levels of granularity.
  • Interactive visualizations: By allowing user interaction, this principle promotes engagement and active exploration during data analysis.
  • Novel visualization: This principle introduces innovative visualization techniques to offer fresh perspectives on the data.

C. Frameworks for Classroom Discourse Analysis

Various frameworks exist for analyzing classroom discourse data, each emphasizing different aspects of teaching practices. Some frameworks focus on teachers’ instructional activities [19], while others focus on how teachers present knowledge [20] or how teachers elicit student responses [21]. In our previous study [22], we examined several frameworks for analyzing synchronous online classroom discourse, including the Classroom Observation Protocol for Undergraduate STEM [19], Mathematics Discourse in Instruction (MDI) [20], and the Academically Productive Talk (APT) framework [21]. Our findings indicated that both the MDI and APT frameworks were particularly well-suited for this context. Notably, a significant portion of online lesson time was devoted to teacher presentations of knowledge, which aligned closely with the MDI framework. Second, Gutentag et al. [7] found that students’ experiences with APT in online lessons were positively related to their motivation and engagement. By using the MDI and APT frameworks, teachers could gain valuable insights into their instructional practices, thereby supporting reflection and improvement in synchronous online mathematics teaching.

As detailed in Reference [22], Table I summarizes the MDI and APT frameworks for classroom discourse analysis. The MDI framework includes four codes, namely examples, tasks, naming, and legitimations, all related to teachers’ presentations of knowledge. For example, when a teacher explains the expression 2a + 2b = 2(a + b) by stating, “it concerns the common factor” [20] (p. 249), this speech turn is classified as “naming” owing to the use of mathematical language. The APT framework consists of eight codes, namely say more, revoice, press for reasoning, challenge, explain others, restate, add on, and agree/disagree, all of which pertain to how teachers elicit student responses. For example, when a teacher asks a student to state his/her standpoint by asking, “Do you agree or disagree and why?” [21] (p. 180), this speech turn is classified as “agree/disagree.”

TABLE I. THE MDI [20] (PP. 10–14) AND APT [21] (P. 180) FRAMEWORKS
Framework and codes Description
The MDI framework
 Examples Using examples that illustrate similarity, contrast, and fusion in mathematical concepts.
 Tasks Requiring students to perform known operations, apply skills, select procedures, and make mathematical connections.
 Naming Using naming conventions for terms within and across episodes, including colloquial language, mathematics terms used as names, and proper mathematical language.
 Legitimations Justifying concepts based on both localized and general criteria, incorporating non-mathematical factors such as everyday knowledge, visual cues, and authority.
The APT framework
 Say more Asking the student(s) for further elaboration.
 Revoice Re-voicing the expressions of the student(s) to prompt further responses.
 Press for reasoning Asking the student(s) to explain their reasoning.
 Challenge Challenging the idea put forth by the student(s).
 Explain others Asking the student(s) to explain someone else’s reasoning.
 Restate Asking the student(s) to restate someone else’s reasoning.
 Add on Prompting other students to contribute further.
 Agree/disagree Asking the student(s) to state their standpoint.
MDI = Mathematics Discourse in Instruction; APT = Academically Productive Talk.

In addition to the MDI and APT frameworks, we defined additional codes for analyzing online classroom discourse data (see Table II). In our previous study [22], we identified “checking attention” and “fixing technical problems” as particularly relevant to online instructional environments. For example, teachers may check student attention by asking those with their cameras off, “Are you here?” Similarly, if students try to speak without unmuting their microphones, the teacher may address the issue by saying, “Could you turn on your microphone?”

TABLE II. ADDITIONAL CODES FOR ONLINE CLASSROOM DISCOURSE [22] (P. 3)
Code Description
 Checking attention Monitoring and verifying students’ attention during the lesson.
 Fixing technical problems Addressing connectivity issues, audio or video problems, and software or platform glitches.
 Guiding and administration Providing introductions, giving follow-up feedback on homework or tests, and assigning homework or specific tasks.
 Student responses Instances when students respond to the teacher.
 Affirmation Validating and acknowledging students’ responses.

 

III. METHODOLOGY

This section begins with a description of the VLA platform, followed by an outline of the procedure for preparing VLA from lesson recordings. Afterward, we provide an overview of the research context and the participants involved in the study. Finally, we detail our data collection and analysis methods.

A. Description of the VLA Platform

Fig. 1 illustrates the interface of the VLA platform, which visualizes classroom discourse data from an online lesson recording. The interface is divided into three main sections: (a) video footage, (b) speech visualization, and (c) transcripts of classroom dialogue. The application of visualization principles and their corresponding key affordances are described below.

Fig. 1. Interface of the VLA platform.

1) Multiple visualizations

The video footage, speech visualization, and transcripts of classroom dialogue are all presented within a single interface to capture the rich complexity of classroom interactions. In the speech visualization section, Fig. 2 further shows that the platform can visually categorize data by (a) types of speech turns or (b) speakers (the teacher and individual students). Hence, teacher users can understand each student’s level of participation and identify disengaged students who may need additional attention in future lessons.

2) Connected visualizations

The video footage, speech visualization, and classroom dialogue transcripts are synchronized. For example, Fig. 3 highlights an affirmation turn where the teacher acknowledges a student’s correct solution to a problem. The speech bubble is linked to both the corresponding video segment and the transcript, enabling easy navigation.

3) Visualizations of data at various levels

Teacher users can view classroom discourse data at both individual and class levels. For example, Fig. 4 illustrates the number and percentage of turns contributed by the teacher and each student.

4) Interactive visualizations

Teacher users can control the video playback using the “play,” “pause,” and “rewind” buttons in the video footage section (Fig. 1a). They can also click on any turn bubble in the speech visualization section (Fig. 1b) to navigate directly to the corresponding video segment and the related talk episode in the classroom dialogue transcript section (Fig. 1c).
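The click-to-navigate behavior described above can be sketched as a simple lookup: each coded speech turn carries the metadata needed to jump between the three panels. This is an illustrative sketch only; the field names (`start_sec`, `transcript`, `code`) are assumptions, not the platform’s actual data schema.

```python
from dataclasses import dataclass

@dataclass
class SpeechTurn:
    index: int          # position on the visualization timeline
    speaker: str        # "Teacher" or a student identifier
    code: str           # MDI/APT/other code assigned to the turn
    start_sec: float    # video timestamp where the turn begins
    transcript: str     # transcribed utterance

def navigate(turns, clicked_index):
    """Return the video seek time and transcript line for a clicked bubble."""
    turn = next(t for t in turns if t.index == clicked_index)
    return turn.start_sec, f"[{turn.speaker}] {turn.transcript}"

# Hypothetical three-turn episode for illustration.
turns = [
    SpeechTurn(0, "Teacher", "Tasks", 12.0, "Try factorising 2a + 2b."),
    SpeechTurn(1, "Student A", "Student responses", 25.5, "Is it 2(a + b)?"),
    SpeechTurn(2, "Teacher", "Affirmation", 31.0, "Yes, well done."),
]

# Clicking the bubble for turn 1 seeks the video and highlights the transcript.
seek_to, line = navigate(turns, 1)
```

Because one record backs all three panels, the video, visualization, and transcript views cannot drift out of step with one another.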

Fig. 2. Multiple visualizations in the speech visualization section. (Note: For clarity, APT codes are grouped within the “Teacher questions” category.)

Fig. 3. Navigation of a teacher’s affirmation turn.

Fig. 4. Speaker contribution statistics shown in the speech visualization section.

5) Novel visualization

In the visualization section, the x-axis represents the timeline based on speech turns, while the y-axis displays either the MDI and other codes (Fig. 2a) or individual speakers (Fig. 2b). By examining the visualization in Fig. 2a, teachers can gain insights into their use of examples, tasks, naming, and legitimation in mathematics instruction. The visualization in Fig. 2b allows them to reflect on student participation during class discussions. On the y-axis, each row contains bubbles of varying sizes representing the number of words in a speech turn: the larger the bubble, the more words are spoken.
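The bubble-timeline layout described above can be sketched as follows: x is the turn’s chronological position, y is the row for its code (or speaker), and bubble area scales with the number of words in the turn. The linear scaling rule and `base_area` parameter are assumptions for illustration, not the platform’s actual formula.

```python
def bubble_layout(turns, rows, base_area=20.0):
    """turns: list of (code, utterance) pairs in chronological order."""
    points = []
    for x, (code, utterance) in enumerate(turns):
        n_words = len(utterance.split())
        points.append({
            "x": x,                       # timeline position by speech turn
            "y": rows.index(code),        # one row per code on the y-axis
            "area": base_area * n_words,  # larger bubble = more words spoken
        })
    return points

# Rows follow the MDI codes in Table I; the utterances are invented examples.
rows = ["Examples", "Tasks", "Naming", "Legitimations"]
turns = [
    ("Naming", "It concerns the common factor."),
    ("Tasks", "Now factorise the next expression yourselves."),
]
layout = bubble_layout(turns, rows)
```

The resulting points could then be passed to any plotting library as a scatter chart with per-point marker sizes.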

B. Procedure for Preparing VLA from Lesson Recordings

Upon receiving lesson recordings from teacher users, we first transcribed the sessions, followed by the analysis and coding of each speech turn. Our classroom discourse analysis framework comprised three main categories: MDI, APT, and other codes, as described in Section II-C.

To analyze the discourse data, we employed a two-stage AI–human collaborative approach, as established in our previous study [22]. Our findings indicated that while GPT-4o was effective in classifying speech turns into the three main categories, its performance at the code level was less satisfactory. For example, consider the “naming” speech turn: “So, their relationship will involve a percentage increase or decrease, leading to the outcome. Therefore, you have to clearly recognize these terms.” Although GPT-4o correctly classified it into the MDI category, it misidentified the speech turn as “examples” rather than “naming.” Therefore, our two-stage approach involved both AI and human coders to enhance efficiency while ensuring accuracy in the coding process.

At the first stage, we utilized GPT-4o, hosted at the first author’s university, for initial coding to ensure privacy and efficiency. At this stage, we adapted the prompt design proposed by Zhang et al. [23], which encompassed seven components that equipped GPT-4o with a comprehensive understanding necessary for the analysis: (1) role-playing, (2) goal of the task, (3) background/conceptual understanding, (4) focus on analytical process, (5) transparency and traceability, (6) data format (inputs), and (7) data format (outputs). Our description within each component collectively provided GPT-4o with essential background information, task instructions, and the framework detailing the codes and their corresponding definitions (see Tables I and II). In subsequent prompts, we inputted the discourse data into GPT-4o on an episode-by-episode basis. The coding results were documented in a spreadsheet. Please see Reference [22] for more details.
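The seven-component prompt design might be assembled along the following lines. This is a minimal sketch: the wording of each component is invented for illustration, and the study’s actual prompts are detailed in Reference [22].

```python
# Hypothetical wording for the seven components adapted from Zhang et al. [23].
PROMPT_COMPONENTS = {
    "role_playing": "You are a classroom discourse analyst.",
    "goal": "Classify each speech turn using the MDI, APT, and other codes.",
    "background": "Definitions of all codes (see Tables I and II) go here.",
    "analytical_process": "Consider the surrounding turns before assigning a code.",
    "transparency": "Briefly justify each code you assign.",
    "input_format": "Input: numbered speech turns for one episode.",
    "output_format": "Output: one line per turn, in the form 'turn_number: code'.",
}

def build_system_prompt(components):
    """Concatenate the seven components into a single system prompt."""
    order = (
        "role_playing", "goal", "background", "analytical_process",
        "transparency", "input_format", "output_format",
    )
    return "\n\n".join(components[key] for key in order)

system_prompt = build_system_prompt(PROMPT_COMPONENTS)
# Episode transcripts would then be sent as follow-up user messages.
```

In use, this system prompt would be issued once, with each episode’s discourse data supplied in subsequent prompts, mirroring the episode-by-episode procedure described above.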

At the second stage, two research team members independently double-coded the data, using the coding results from GPT-4o as a reference. The AI tool’s suggestions expedited the coding process, as the human coders did not have to prepare codes from scratch. Any discrepancies between the human coders were resolved through discussion until a consensus was reached.
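The second-stage reconciliation can be sketched as flagging the turns where the two human coders disagree, so that only those turns need consensus discussion. The code labels below are illustrative, and percentage agreement is shown only as a simple summary; the study resolved discrepancies through discussion rather than a numeric threshold.

```python
def find_discrepancies(codes_a, codes_b):
    """Return indices of turns where the two coders assigned different codes."""
    return [i for i, (a, b) in enumerate(zip(codes_a, codes_b)) if a != b]

def percent_agreement(codes_a, codes_b):
    """Proportion of turns coded identically by both coders."""
    return 1.0 - len(find_discrepancies(codes_a, codes_b)) / len(codes_a)

# Hypothetical codes for four speech turns, using GPT-4o's output as a reference.
coder_a = ["Naming", "Tasks", "Say more", "Affirmation"]
coder_b = ["Naming", "Examples", "Say more", "Affirmation"]

to_discuss = find_discrepancies(coder_a, coder_b)  # turns needing consensus
agreement = percent_agreement(coder_a, coder_b)
```

Only the flagged turns (here, turn 1, where “Tasks” and “Examples” conflict) would be revisited jointly by the two coders.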

After the transcriptions of classroom dialogue were coded, the lesson recordings, along with their corresponding transcriptions and coding results, were uploaded to the VLA platform. The platform then visualized the lesson recordings, classroom discourse transcripts, and coding results using graphs, as illustrated in Fig. 1 to Fig. 4.

C. Research Context and Participants

This study was part of a larger design-based research project focused on synchronous online mathematics instruction in the post-pandemic context. By leveraging VLA technology, the project aimed to enhance teachers’ discourse practices in fully online environments, ultimately improving student achievement and engagement.

The study involved 15 secondary school teachers from Hong Kong. According to Ando et al. [24], this sample size (n ≥ 12) is sufficient to capture key themes and codes necessary for comprehensive thematic analysis. Of the participants, 10 teachers participated in our professional development programs during the second and third quarters of 2024, while the remaining 5 participated during the second quarter of 2025. Their teaching experience ranged from 5 to 24 years.

D. Data Collection

We collected data through written feedback from teacher participants to understand their views on the platform’s benefits, challenges, and potential improvements. The feedback form featured two closed-ended questions, rated on a 5-point scale from 5 (strongly agree) to 1 (strongly disagree), to capture teachers’ overall impressions of the VLA platform. The questions asked were as follows: (1) “I am satisfied with the design of the platform” and (2) “The platform helps teachers reflect on their lessons.”

In addition to the closed-ended questions, we included three open-ended questions aligned with the three RQs: (1) “What benefits do you think this platform brings to lesson observation and classroom analysis? Why?” (2) “What difficulties and issues did you encounter when using this platform?” (3) “What aspects of this platform do you think need improvement? What new elements or data presentation methods could be added?”

E. Data Analysis

Our analysis was based on the usefulness theoretical framework [25], which aligns closely with the objectives of this study. According to Kirschner et al. [25], the usefulness of an educational tool can be evaluated using two criteria: (1) usability—the ease with which the tool can be understood and operated, and (2) utility—the extent to which the tool provides the necessary functionalities to support learning tasks. Drawing on the concept of affordances from ecological psychology [26], this framework emphasizes evaluating educational tools by understanding the activities they enable. Norman [26] defined affordances as the “perceived and actual fundamental properties of a thing, primarily those fundamental properties that determine just how the thing could possibly be used” (p. 9). Kirschner et al. [25] argued that a tool’s technological affordances determine its usability, while its educational and social affordances determine its utility (Fig. 5). In this study, we defined these three affordances as follows:

  • Technological affordances refer to the design features and usability of the VLA platform.
  • Educational affordances describe the role of the VLA platform in shaping teachers’ experiences and facilitating their reflection and instructional improvement.
  • Social affordances refer to the characteristics of the VLA platform that enable social interaction and foster a sense of community among teachers.

Fig. 5. Graphical representation of the usefulness theoretical framework.

Through a literature review, Mohseni et al. [27] identified several affordances of VLA for educational interventions. In terms of technological affordances, VLA technologies can operate as stand-alone solutions or be integrated into learning management systems, such as Moodle. These technologies transform educational data into dashboards with basic visualizations that generate alerts, prompting teachers to take pedagogical actions [28]. In terms of educational affordances, the use of VLA enhances teachers’ ability to monitor student progress, leading to meaningful feedback and improved student performance [29–31]. In terms of social affordances, the studies identified by Mohseni et al. [27] primarily focused on the student perspective. For example, Hu et al. [32] developed a VLA tool designed to support students in their learning processes and facilitate effective peer talk techniques during collaborative problem-solving discussions. These findings underscore the applicability of the usefulness theoretical framework [25] for analyzing our VLA platform.

We analyzed the data from our teacher participants using the usefulness theoretical framework [25]. Emerging themes were organized into three primary categories: technological, educational, and social affordances (RQ1). Issues (RQ2) and suggestions for improvement (RQ3) were also categorized. To minimize subjectivity and ensure that all categories were considered equally, we conducted the thematic analysis with an open mindset. Specifically, we did not prioritize or weight any of the affordances from the usefulness theoretical framework during the coding process. Similar codes were grouped into sub-themes, and frequency analysis was conducted to identify the most prevalent themes. Representative quotes from teacher feedback were selected to illustrate key points. Some responses originally in Chinese were translated into English for reporting purposes.
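The frequency-analysis step described above can be sketched as counting how often each sub-theme appears across the coded feedback excerpts. The theme labels follow Table IV, but the excerpt counts below are invented for illustration.

```python
from collections import Counter

# One (aspect, theme) pair per coded feedback excerpt (hypothetical data).
coded_feedback = [
    ("Educational", "Transcription and classification support for review"),
    ("Educational", "Visualization of classroom discourse data"),
    ("Educational", "Transcription and classification support for review"),
    ("Technological", "Effective design and functionality"),
    ("Social", "Collaborative lesson reflection"),
]

# Tally theme frequencies to identify the most prevalent themes.
theme_counts = Counter(theme for _, theme in coded_feedback)
most_prevalent, f = theme_counts.most_common(1)[0]
```

Tables IV and V in the findings report exactly this kind of frequency (f) for each theme.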

The first author performed the data coding, which was then reviewed by the second author to ensure reliability. In cases of disagreement, both authors collaboratively re-examined the data to reach a consensus.

IV. FINDINGS AND IMPLICATIONS

As shown in Table III, the teacher participants were generally satisfied with the design of the VLA platform, with a mean score of 4.29 (SD = 0.61) on a 5-point scale for the first survey question. Furthermore, they indicated that the platform was helpful in supporting their reflective practices, as reflected by a mean score of 4.43 (SD = 0.51) for the second survey question.

 

TABLE III. RESULTS OF SURVEY QUESTIONS (N = 14*)
Survey question M (SD)
1. I am satisfied with the design of the platform. 4.29 (0.61)
2. The platform helps teachers reflect on their lessons. 4.43 (0.51)

*One teacher participant did not respond to the closed-ended questions.

A. RQ1: How Does the VLA Platform Facilitate Teachers’ Reflection on Teaching Practices?

Table IV summarizes the key affordances of the VLA platform that support teachers’ reflection on their teaching practices across the technological, educational, and social aspects of the VLA platform. Regarding technological affordances, three teacher participants praised the platform’s design and functionality. For example, Teacher 2 noted, “The placement and [font] size of the transcriptions are appropriate and easy to read. It has accurate time markers. So, even when I miss keywords during the observation, I can check the transcriptions in real-time, enhancing the efficiency of lesson observation.” Teacher 7 commented, “It has features for saving video segments as images and exporting transcriptions for downloading and future reference.” In addition to its effective design and functionality, three teacher participants appreciated the platform’s accessibility. Teacher 7 explained, “It is not limited by time or location; as long as there is an internet connection, I can review or focus on specific episodes [in the lesson recordings].” This 24/7 accessibility mirrors the technological affordance of learning management systems [1, 27]. By ensuring continual access, our VLA platform supports the ongoing professional development of teachers with varying schedules.

 

TABLE IV. AFFORDANCES OF THE VLA PLATFORM
Aspect Affordances f
Technological Effective design and functionality 3
Accessible anytime, anywhere 3
Educational Transcription and classification support for review 8
Visualization of classroom discourse data 6
Monitoring student performance 4
Social Collaborative lesson reflection 1

Regarding the educational affordances, eight teacher participants highlighted that both the transcriptions of classroom dialogue and the classification of speech turns supported their review and reflection. As Teacher 6 explained, “The platform can automatically generate transcriptions, allowing teachers to reflect on whether their word choices during class are appropriate and whether their explanations of concepts are clear, thereby improving future lessons.” We used the MDI framework [20] to analyze teachers’ presentations of knowledge. Teacher 13 noted, “The Category Filter enables teachers to better understand the types of instructions they use, prompting them to reflect on the quality of their lessons.” In addition, the teacher participants appreciated the visualization of classroom discourse data from lesson recordings (n = 6) and how these visualizations helped them monitor student performance (n = 4). As Teacher 4 explained, “Teachers can observe data trends and analyze graphs to evaluate the effectiveness of their teaching practices and understand changes in student performance over time. According to these insights, [they can] make improvements and adjustments.” This capability to monitor student performance aligns with the educational affordance of VLA identified in the literature [27]. By tracking student engagement and performance, teachers can make informed and timely interventions [29].

Finally, we identified a social affordance: collaborative lesson reflection. Specifically, “Peers can repeatedly watch lesson recordings to identify the strengths and weaknesses of the observed teacher, enabling them to reflect on and improve their own teaching practices” (Teacher 12). This social affordance extends beyond the VLA-supported student collaboration highlighted in the literature [27]; it also fosters an environment where teachers can engage in reflective practices together, facilitating mutual improvement and professional growth. Together, teachers can navigate the visual analytics generated from lesson recordings, including visualizations of both teacher and student speech, as well as how teachers present knowledge and elicit student responses [10].

B. RQ2: What Challenges Do Teachers Face When Using the VLA Platform?

Regarding the challenges encountered while using the VLA platform, the teacher participants generally reported minimal issues. They described the platform as “very easy to use” (Teacher 10) and noted, “I believe there are no issues with the general operation of the platform” (Teacher 1). However, a few technological and educational concerns were identified.

Regarding technological issues, four teacher participants noted that the transcripts of classroom dialogue (Fig. 1c) occasionally failed to synchronize with the video footage (Fig. 1a) during playback. Teacher 8 explained, “When I move the mouse to the visualization analysis to select a specific speech turn, the platform jumps to that speech turn immediately. However, the transcripts on the right do not synchronize, which hinders teachers’ analysis.” This feedback has prompted us to improve the platform further in terms of the technological affordance associated with synchronization, guided by the principle of connected visualizations [13].

Regarding educational issues, six teacher participants mentioned that they were initially unfamiliar with the VLA platform. Some needed time to become comfortable with its functionalities and to interpret the classroom discourse analytics. Teacher 3 noted, “During my first use of the platform, I needed some time to familiarize myself with its functions and interface, as well as to understand the meaning behind the data.” Teacher 4 emphasized, “Training and support from the school or researchers are necessary to help teachers fully understand and utilize the platform, enabling them to leverage the analytical data to improve their teaching practices.” As Mohseni et al. [27] noted, the effectiveness of VLA tools depends on users’ understanding of how to translate knowledge into actionable interventions. To fully leverage the educational affordances of the VLA platform, the feedback from these teacher participants highlights the importance of ongoing support in professional development [33].

C. RQ3: How Can the VLA Platform Be Improved for a Better User Experience?

Table V summarizes the teacher participants’ suggestions for improving the VLA platform across its technological, educational, and social aspects. First, there were two valuable suggestions aimed at enhancing the platform’s technological affordances. Two teacher participants suggested adding speed adjustment features to “allow for faster or slower video playback” (Teacher 6). Teacher 2 elaborated on this by proposing options to select playback speeds of 0.5×, 1.5×, or 2×, enabling users to tailor the pace to their individual needs. In addition, Teacher 2 recommended including a section for classroom resources, where teachers could upload supplementary materials. This feature would support teachers’ reflection on their teaching materials as they review lesson recordings. Incorporating these enhancements could significantly improve the platform’s usability by better accommodating the diverse reflective practices of teacher users.

TABLE V. TEACHERS’ SUGGESTIONS FOR IMPROVING THE VLA PLATFORM
Aspect         Suggestion                                                 f
Technological  Add speed adjustment options for video playback.           2
               Include a section for uploading supplementary resources.   1
Educational    Integrate AI technologies to support data analysis.        3
Social         Add a comment feature to facilitate idea exchange.         2
Note: f = number of teacher participants making the suggestion.

To enhance the educational affordances of the VLA platform, three teacher participants emphasized the potential of integrating AI technologies. Teacher 1 noted, “For long-term and sustainable use, effective AI-based voice recognition and dialogue analysis could significantly improve teachers’ instructional effectiveness.” Teacher 4 added that AI could play a key role in predicting student performance and offering tailored recommendations: “By incorporating advanced AI technologies for comprehensive analysis and modeling, the platform could more accurately predict students’ performance, learning difficulties, and learning styles, providing intelligent suggestions based on these insights.” These comments reflect teachers’ confidence that leveraging AI could greatly increase the platform’s utility in supporting instructional improvement. The findings align with recent research highlighting AI’s potential to assist teachers by delivering valuable, actionable feedback [34].

To enhance the platform’s social affordances, two teacher participants proposed features to promote communication and collaboration among users of the VLA platform. Teacher 2 suggested, “A comment section could be added to allow users to communicate and share their experiences.” Teacher 5 added that a commenting function would enable teachers to ask questions while reviewing lesson recordings, fostering interaction among users. Such features would support collaboration on the platform, which is widely recognized as a valuable component of professional development for teachers in STEM education [33].

V. CONCLUSION, LIMITATIONS, AND COLLABORATION OPPORTUNITIES

This study addressed the challenges of ineffective practices in synchronous online mathematics instruction and in traditional video-based activities for teacher professional development. By leveraging VLA technology, we aimed to improve teachers’ reflective practices and support their professional growth. The teacher participants reported high satisfaction with our initiative. In particular, they appreciated the technological and educational affordances provided by the VLA platform. This study demonstrated how VLA could be effectively applied to analyze online classroom discourse data to enhance teachers’ reflective practices.

Despite the positive outcomes observed in this study, it has several limitations. First, the sample size of teacher participants was small. In addition, the study was conducted within a specific educational context focused on mathematics education. These factors may limit the generalizability of the findings to other settings and subject areas. Therefore, we encourage educators and researchers from other institutions and disciplines to contact us for potential collaboration. Together, we can expand and adapt the application of VLA technology to benefit a wider community.

ACKNOWLEDGMENT

We would like to thank the teacher participants in the project for their generous help.

References

[1] C. K. Lo, K. F. Hew, S. Xu, Y. Song, G. Chen, and M. S. Y. Jong, “Recommendations based on experiences of pandemic‐led remote mathematics teaching in Pre‐K–12 contexts: A systematic review from the activity theory perspective,” Journal of Computer Assisted Learning, vol. 41, no. 3, e70005, 2025, DOI: 10.1111/jcal.70005
[2] J. Díez-Palomar, M. C. E. Chan, D. Clarke, and M. Padrós, “How does dialogical talk promote student learning during small group work? An exploratory study,” Learning, Culture and Social Interaction, vol. 30, Part A, 100540, 2021, DOI: 10.1016/j.lcsi.2021.100540
[3] C. Morgan, T. Craig, M. Schuette, and D. Wagner, “Language and communication in mathematics education: An overview of research in the field,” ZDM Mathematics Education, vol. 46, no. 6, pp. 843–853, 2014, DOI: 10.1007/s11858-014-0624-9
[4] Y. Song, S. Zhang, and B. Liu, “Investigating the dialogic patterns of mathematics lessons in different stages of education,” The Journal of Educational Research, vol. 116, no. 2, pp. 77–89, 2023, DOI: 10.1080/00220671.2023.2192686
[5] S. Fabriz, J. Mendzheritskaya, and S. Stehle, “Impact of synchronous and asynchronous settings of online teaching and learning in higher education on students’ learning experience during COVID-19,” Frontiers in Psychology, vol. 12, 733554, 2021, DOI: 10.3389/fpsyg.2021.733554
[6] I. Majewska and V. Zvobgo, “Students’ satisfaction with quality of synchronous online learning under the COVID-19 pandemic: Perceptions from liberal arts and science undergraduates,” Online Learning, vol. 27, no. 1, pp. 313–335, 2023, DOI: 10.24059/olj.v27i1.3201
[7] T. Gutentag, A. Orner, and C. S. Asterhan, “Classroom discussion practices in online remote secondary school settings during COVID-19,” Computers in Human Behavior, vol. 132, 107250, 2022, DOI: 10.1016/j.chb.2022.107250
[8] C. K. Lo and K. Y. Liu, “How to sustain quality education in a fully online environment: A qualitative study of students’ perceptions and suggestions,” Sustainability, vol. 14, no. 9, 5112, 2022, DOI: 10.3390/su14095112
[9] H. Hollingsworth and D. Clarke, “Video as a tool for focusing teacher self-reflection: Supporting and provoking teacher learning,” Journal of Mathematics Teacher Education, vol. 20, no. 5, pp. 457–475, 2017, DOI: 10.1007/s10857-017-9380-4
[10] C. K. Lo and G. Chen, “Improving experienced mathematics teachers’ classroom talk: A visual learning analytics approach to professional development,” Sustainability, vol. 13, no. 15, 8610, 2021, DOI: 10.3390/su13158610
[11] C. Gaudin and S. Chaliès, “Video viewing in teacher education and professional development: A literature review,” Educational Research Review, vol. 16, pp. 41–67, 2015, DOI: 10.1016/j.edurev.2015.06.001
[12] S. Xu, C. K. Lo, Y. Song, and G. Chen, “Visual learning analytics-supported teacher reflection on synchronous online mathematics instruction: Insights from a stimulated-recall interview,” in Innovating Education with AI – Selected Proceedings of 2024 4th Asia Education Technology Symposium, E. C. K. Cheng, Ed. Singapore: Springer, 2025, pp. 129–138, DOI: 10.1007/978-981-96-4952-5_9
[13] C. Vieira, P. Parsons, and V. Byrd, “Visual learning analytics of educational data: A systematic literature review and research agenda,” Computers & Education, vol. 122, pp. 119–135, 2018, DOI: 10.1016/j.compedu.2018.03.018
[14] L. Major and S. Watson, “Using video to support in-service teacher professional development: The state of the field, limitations and possibilities,” Technology, Pedagogy and Education, vol. 27, no. 1, pp. 49–68, 2018, DOI: 10.1080/1475939X.2017.1361469
[15] G. Blomberg, A. Renkl, M. Gamoran Sherin, H. Borko, and T. Seidel, “Five research-based heuristics for using video in pre-service teacher education,” Journal for Educational Research Online, vol. 5, no. 1, pp. 90–114, 2013, DOI: 10.25656/01:8021
[16] J. J. Thomas and K. A. Cook, “A visual analytics agenda,” IEEE Computer Graphics and Applications, vol. 26, no. 1, pp. 10–13, 2006, DOI: 10.1109/MCG.2006.5
[17] J. Moon, D. Lee, G. W. Choi, J. Seo, J. Do, and T. Lim, “Learning analytics in seamless learning environments: A systematic review,” Interactive Learning Environments, vol. 32, no. 7, pp. 3208–3225, 2024, DOI: 10.1080/10494820.2023.2170422
[18] G. Chen, “A visual learning analytics (VLA) approach to video-based teacher professional development: Impact on teachers’ beliefs, self-efficacy, and classroom talk practice,” Computers & Education, vol. 144, 103670, 2020, DOI: 10.1016/j.compedu.2019.103670
[19] M. K. Smith, E. L. Vinson, J. A. Smith, J. D. Lewin, and M. R. Stetzer, “A campus-wide study of STEM courses: New perspectives on teaching practices and perceptions,” CBE-Life Sciences Education, vol. 13, no. 4, pp. 624–635, 2014, DOI: 10.1187/cbe.14-06-0108
[20] J. Adler and E. Ronda, “A framework for describing mathematics discourse in instruction and interpreting differences in teaching,” African Journal of Research in Mathematics, Science and Technology Education, vol. 19, no. 3, pp. 237–254, 2015, DOI: 10.1080/10288457.2015.1089677
[21] L. B. Resnick, S. Michaels, and M. C. O’Connor, “How (well-structured) talk builds the mind,” in Innovations in Educational Psychology: Perspectives on Learning, Teaching, and Human Development, D. D. Preiss and R. J. Sternberg, Eds. New York, NY, USA: Springer, 2010, pp. 163–194.
[22] S. Xu, X. Huang, C. K. Lo, G. Chen, and M. S. Y. Jong, “Evaluating the performance of ChatGPT and GPT-4o in coding classroom discourse data: A study of synchronous online mathematics instruction,” Computers and Education: Artificial Intelligence, vol. 7, 100325, 2024, DOI: 10.1016/j.caeai.2024.100325
[23] H. Zhang, C. Wu, J. Xie, Y. Lyu, J. Cai, and J. M. Carroll, “Harnessing the power of AI in qualitative research: Exploring, using and redesigning ChatGPT,” Computers in Human Behavior: Artificial Humans, vol. 4, 100144, 2025, DOI: 10.1016/j.chbah.2025.100144
[24] H. Ando, R. Cousins, and C. Young, “Achieving saturation in thematic analysis: Development and refinement of a codebook,” Comprehensive Psychology, vol. 3, 4, 2014, DOI: 10.2466/03.CP.3.4
[25] P. Kirschner, J. W. Strijbos, K. Kreijns, and P. J. Beers, “Designing electronic collaborative learning environments,” Educational Technology Research and Development, vol. 52, no. 3, pp. 47–66, 2004, DOI: 10.1007/BF02504675
[26] D. Norman, The Psychology of Everyday Things. New York, NY, USA: Basic Books, 1988.
[27] Z. Mohseni, I. Masiello, R. M. Martins, and S. Nordmark, “Visual learning analytics for educational interventions in primary and secondary schools: A scoping review,” Journal of Learning Analytics, vol. 11, no. 2, pp. 91–111, 2024, DOI: 10.18608/jla.2024.8309
[28] R. Dickler, J. Gobert, and M. Sao Pedro, “Using innovative methods to explore the potential of an alerting dashboard for science inquiry,” Journal of Learning Analytics, vol. 8, no. 2, pp. 105–122, 2021, DOI: 10.18608/jla.2021.7153
[29] A. V. Y. Lee, “Determining quality and distribution of ideas in online classroom talk using learning analytics and machine learning,” Educational Technology & Society, vol. 24, no. 1, pp. 236–249, 2021, DOI: 10.30191/ETS.202101_24(1).0018
[30] I. Molenaar and C. A. N. Knoop-van Campen, “How teachers make dashboard information actionable,” IEEE Transactions on Learning Technologies, vol. 12, no. 3, pp. 347–355, 2019, DOI: 10.1109/tlt.2018.2851585
[31] Y. Rosmansyah, N. Kartikasari, and A. I. Wuryandari, “A learning analytics tool for monitoring and improving students’ learning process,” in Proceedings of the 6th International Conference on Electrical Engineering and Informatics, Langkawi, Malaysia, 2017, DOI: 10.1109/ICEEI.2017.8312462
[32] L. Hu, J. Wu, and G. Chen, “iTalk–iSee: A participatory visual learning analytical tool for productive peer talk,” International Journal of Computer-Supported Collaborative Learning, vol. 17, no. 3, pp. 397–425, 2022, DOI: 10.1007/s11412-022-09374-w
[33] C. K. Lo, “Design principles for effective teacher professional development in integrated STEM education: A systematic review,” Educational Technology & Society, vol. 24, no. 4, pp. 136–152, 2021, DOI: 10.30191/ETS.202110_24(4).0011
[34] C. K. Lo, “What is the impact of ChatGPT on education? A rapid review of the literature,” Education Sciences, vol. 13, no. 4, 410, 2023, DOI: 10.3390/educsci13040410


Authors



Chung Kwan Lo

is an Assistant Professor at The Education University of Hong Kong. His primary research interests cover technology-enhanced learning (e.g., flipped learning and AI in education) and mathematics education. He has published in various journals (e.g., Computers & Education, Smart Learning Environments, and Interactive Learning Environments). He has been recognized by Stanford University as one of the World’s Top 2% Most-cited Scientists since 2022.



Gaowei Chen

is Tin Ka Ping Foundation Outstanding Young Professor in the Faculty of Education at The University of Hong Kong. His research interests cover learning sciences, AI in education, dialogic teaching and learning, video-based learning, and learning analytics. He is a recipient of the Research Output Prize and the Faculty Outstanding Young Researcher Award at The University of Hong Kong.