This research seeks to answer the question: To what extent can Artificial Intelligence (AI) be used for predictive analysis in student essay scoring and real-time feedback, and what are the implications for fairness, accuracy, and educational outcomes? AI has shown transformative potential in education, particularly through automated essay scoring (AES) systems and large language models (LLMs), which analyze patterns in prior essays to forecast scores and provide immediate, iterative feedback.
To address this question, the study employs state-of-the-art (SOTA) models and rationale-based scoring approaches that align with human grading practices by generating rubric-based rationales. These methods are evaluated against traditional assessment systems to determine their effectiveness in measuring nuanced traits such as creativity, argument coherence, and originality. The research also explores a hybrid human-AI framework to mitigate concerns around bias, fairness, and privacy.
Preliminary findings indicate that while AI systems excel in evaluating surface-level features like grammar and organization, they face challenges with higher-order traits requiring subjective interpretation. The proposed hybrid approach combines AI's efficiency with human oversight, resulting in more reliable and equitable assessments. Additionally, the study highlights the critical role of diverse training datasets and advanced prompt engineering in improving model accuracy and inclusivity.
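As a rough illustration of the hybrid framework described above, the following sketch routes AI scores for surface-level traits directly, while low-confidence scores on higher-order traits are flagged for human review. The trait names, confidence values, and threshold are illustrative assumptions, not the study's actual rubric or model outputs.

```python
# Minimal sketch of a hybrid human-AI essay-scoring step (illustrative only).
# Trait lists and the 0.8 confidence threshold are assumptions for this example.

HIGHER_ORDER_TRAITS = {"creativity", "coherence"}  # subjective; may need a human

def hybrid_score(ai_scores, ai_confidence, threshold=0.8):
    """Accept AI scores outright for surface traits; flag higher-order
    traits for human review when the model's confidence is low."""
    accepted, needs_review = {}, []
    for trait, score in ai_scores.items():
        conf = ai_confidence.get(trait, 0.0)
        if trait in HIGHER_ORDER_TRAITS and conf < threshold:
            needs_review.append(trait)  # defer to a human grader
        else:
            accepted[trait] = score     # keep the AI's score
    return accepted, needs_review

scores = {"grammar": 5, "organization": 4, "creativity": 3, "coherence": 4}
confs = {"grammar": 0.95, "organization": 0.90,
         "creativity": 0.60, "coherence": 0.85}
accepted, review = hybrid_score(scores, confs)
# "creativity" is routed to human review; the other traits are accepted.
```

The design choice here mirrors the study's finding: AI handles surface features efficiently, while human oversight is reserved for the traits where model judgment is least reliable.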
This research contributes to the integration of AI in education by addressing its strengths and limitations, offering practical solutions for enhancing student outcomes, and reducing educator workloads. These findings aim to guide the development of equitable, effective AI systems for real-time essay feedback and predictive scoring.
Exploring AI’s Potential in Predictive Analysis for Student Essay Scoring and Real-Time Feedback
Category
Student Abstract Submission