Abstract:
Automated grading tools substantially reduce instructors' workload. One challenge in constructed-response assessments is the automatic recognition of answers that are equivalent to the specimen answers but are written in different words or formats. This paper proposes schemes that use natural language processing and machine learning techniques to automatically grade short-answer assessments. The schemes compare students' answers with specimen answers according to their semantics rather than their exact wording. Experiments show that the proposed schemes achieve high accuracy when grading such assessments.