arXiv:2405.15026v1 Announce Type: new
Abstract: Peer review is a popular feedback mechanism in higher education that actively engages students and provides researchers with a means to assess student engagement. However, there is little empirical support for the durability of peer review, particularly when predictive modeling is used to analyze student comments. This study uses Naïve Bayes modeling to analyze peer review data obtained from an undergraduate visual literacy course over five years. We expand on the research of Friedman and Rosen and of Beasley et al. by focusing on the Naïve Bayes model of students' remarks. Our findings highlight the utility of Naïve Bayes modeling, particularly in the analysis of student comments by part of speech, where nouns emerged as the most prominent category. Additionally, when examining students' comments against the visual peer review rubric, the lie factor emerged as the predominant factor. Comparing the Naïve Bayes model to Beasley's approach, we found that both help instructors map the directions taken in the class, but the Naïve Bayes model provides a more detailed framework for identifying core topics within the course, enhancing the forecasting of educational directions. Through the application of the Holdout Method and $k$-fold cross-validation with continuity correction, we validated the model's predictive accuracy, underscoring its effectiveness in offering deep insights into peer review mechanisms. Our findings suggest that predictive modeling of student comments offers a new way to act on students' classroom feedback about their peers' visual work. This can benefit courses by inspiring changes to course content, reinforcement of course content, modification of projects, or modification of the rubric itself.
Analyzing Peer Review Data with Naïve Bayes Modeling
In higher education, peer review is widely used as a feedback mechanism to engage students and assess their engagement. However, there has been limited empirical evidence on the long-term effectiveness of peer review, especially when analyzing student comments using data predictive modeling. In this study, we explore the application of Naïve Bayes modeling to analyze peer review data from an undergraduate visual literacy course over a five-year period.
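As background (the notation below is ours, not the paper's), a multinomial Naïve Bayes classifier assigns a comment with tokens $w_1, \dots, w_n$ to the class $\hat{c}$ that maximizes the posterior under the model's conditional-independence assumption:

\[
\hat{c} \;=\; \arg\max_{c}\; P(c)\prod_{i=1}^{n} P(w_i \mid c),
\]

where $P(c)$ is the class prior and each $P(w_i \mid c)$ is estimated from token counts in the training comments, typically with Laplace smoothing to avoid zero probabilities.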
Building upon the research of Friedman and Rosen and of Beasley et al., we use the Naïve Bayes model to analyze students' remarks. The results of our study highlight the effectiveness of Naïve Bayes modeling, particularly when analyzing student comments by part of speech. We found that nouns emerged as the most prominent category in student comments, providing valuable insight into the topics students found important or relevant.
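The paper itself does not publish its code; purely as an illustration, here is a minimal sketch of a part-of-speech-filtered Naïve Bayes pipeline, assuming NLTK for tagging and scikit-learn for the classifier. The example comments and rubric labels are hypothetical placeholders, not data from the study.

```python
# Sketch: POS-tag peer-review comments, keep noun tokens, and fit a
# multinomial Naive Bayes classifier on their counts. Placeholder data only.
import nltk
from nltk import pos_tag, word_tokenize
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Resource names vary by NLTK version (newer releases use "punkt_tab" and
# "averaged_perceptron_tagger_eng").
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

comments = [
    "The chart scale distorts the growth trend",  # hypothetical comments
    "Nice color palette and clear axis labels",
]
labels = ["lie_factor", "design"]  # hypothetical rubric categories

def noun_tokens(text):
    """Keep only tokens whose Penn Treebank tag marks a noun (NN*)."""
    return " ".join(word for word, tag in pos_tag(word_tokenize(text))
                    if tag.startswith("NN"))

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(noun_tokens(c) for c in comments)
model = MultinomialNB().fit(X, labels)
```

With a real corpus, inspecting `vectorizer.get_feature_names_out()` alongside per-class feature counts is one way to surface the noun topics that dominate a category of comments.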
Furthermore, when examining students' comments against the visual peer review rubric, we found that the lie factor, which in the visual literacy literature (following Tufte) measures how strongly a graphic exaggerates or understates the effect present in the underlying data, was the predominant rubric category. This suggests that graphical integrity was the aspect of their peers' work that students commented on most often.
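Assuming the rubric follows Tufte's standard definition (the abstract does not spell it out), the lie factor is the ratio

\[
\text{Lie Factor} \;=\; \frac{\text{size of effect shown in graphic}}{\text{size of effect in data}},
\qquad
\text{size of effect} \;=\; \frac{\lvert v_2 - v_1 \rvert}{v_1},
\]

where $v_1$ and $v_2$ are the values being compared. A lie factor near 1 indicates a faithful graphic; values well above or below 1 indicate visual exaggeration or understatement.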
Comparing the Naïve Bayes model to Beasley's approach, we found that both help instructors map the directions taken in the class. However, the Naïve Bayes model offers a more detailed framework for identifying core topics within the course and thus a sharper basis for forecasting. This enhanced forecasting capability lets instructors make more informed decisions about changing course content, reinforcing important concepts, modifying projects, or even adjusting the rubric itself.
To validate the predictive accuracy of the Naïve Bayes model, we employed the Holdout Method and k-fold cross-validation with continuity correction. Our findings confirm the model's effectiveness in offering deep insights into peer review mechanisms. By using predictive modeling to assess student comments, instructors gain a new perspective on students' classroom feedback about their peers' visual work and can respond to it more effectively.
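Again as an illustration only, both validation schemes can be sketched with scikit-learn as below, assuming a feature matrix X and label vector y produced by a pipeline like the earlier one and a dataset large enough to split; the continuity correction applied in the paper is not reproduced here.

```python
# Sketch: holdout and k-fold cross-validation of a Naive Bayes classifier.
# Assumes X (document-term matrix) and y (rubric labels) already exist.
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.naive_bayes import MultinomialNB

# Holdout: reserve a fraction of the comments as an untouched test set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)
holdout_acc = MultinomialNB().fit(X_train, y_train).score(X_test, y_test)

# k-fold: average accuracy over k rotating train/test partitions (k = 5 here).
cv_scores = cross_val_score(MultinomialNB(), X, y, cv=5)

print(f"holdout accuracy: {holdout_acc:.3f}")
print(f"5-fold mean accuracy: {cv_scores.mean():.3f}")
```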
From a multi-disciplinary perspective, this study integrates concepts from multimedia information systems, animations, and artificial, augmented, and virtual realities. By applying Naïve Bayes modeling, a machine learning technique widely used across disciplines, to visual peer review data, we demonstrate how methods from one field can inform another. This interdisciplinary approach highlights the potential for leveraging techniques from different fields to gain novel insights and enhance educational practices.
In conclusion, our study underscores the utility of Naïve Bayes modeling in analyzing peer review data, particularly for assessing student comments based on parts of speech and the visual peer review rubric. The findings provide valuable insights into student engagement and can inform improvements in course content, assignments, and assessment strategies. The multi-disciplinary nature of this study showcases the potential for cross-pollination of techniques from various fields, allowing for innovative approaches in educational research and practice.