Meeting Abstract
Because the clinical importance of anatomical knowledge differs by specialty, an anatomy curriculum must ensure that undergraduate medical students demonstrate competency across the full breadth of anatomical knowledge. Conventional testing methods often cannot pinpoint the specific strengths or weaknesses of either the students or the curriculum. For example, one cannot tell whether students pass exams by satisfactorily understanding all of the material or by neglecting some portion of it while studying the rest well enough to achieve a desired score (e.g., 65% in a pass/fail course). To address this uncertainty, we developed a program to quantitatively analyze student performance on electronic anatomy exams using resources readily available to many medical educators. Our program automatically identifies individual questions that may be too easy or too hard, as well as the overall concepts in which the class succeeded or struggled to demonstrate competence. For example, analysis of two years of thorax exams revealed class-wide weaknesses in medical imaging and histology. We analyze each exam separately and then aggregate the exams to identify the concepts on which students consistently performed well or poorly, as well as the concepts on which performance improved or declined. Though developed for anatomy in medical education, educators in a variety of disciplines can apply our program to their own courses to discover and correct potential weak points in both individual students' knowledge bases and the curriculum. Future directions include generating personalized student reports to facilitate targeted remediation for borderline and failing students.
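The abstract does not describe the program's internals, but the core analysis it outlines (flagging items that may be too easy or too hard and summarizing performance by concept) can be sketched with classical item difficulty. The following is a minimal illustrative sketch, not the authors' actual program; all data, thresholds, and function names here are hypothetical.

```python
# Hypothetical sketch: flag exam items that may be too easy or too hard,
# and summarize class performance by concept. Not the authors' program;
# the response matrix, concept tags, and 0.9/0.3 thresholds are made up.

# Each row is one student's item scores (1 = correct, 0 = incorrect),
# and each item is tagged with a concept label.
item_concepts = ["imaging", "histology", "gross", "imaging", "gross"]
responses = [
    [1, 0, 1, 1, 1],
    [1, 0, 1, 0, 1],
    [1, 1, 1, 0, 1],
    [1, 0, 0, 0, 1],
]

def item_difficulty(responses):
    """Proportion of students answering each item correctly (classical p-value)."""
    n_students = len(responses)
    n_items = len(responses[0])
    return [sum(row[i] for row in responses) / n_students for i in range(n_items)]

def flag_items(difficulties, too_easy=0.9, too_hard=0.3):
    """Indices of items whose difficulty falls outside the chosen thresholds."""
    easy = [i for i, p in enumerate(difficulties) if p >= too_easy]
    hard = [i for i, p in enumerate(difficulties) if p <= too_hard]
    return easy, hard

def concept_summary(difficulties, concepts):
    """Mean item difficulty per concept, to spot class-wide strengths and weaknesses."""
    by_concept = {}
    for p, c in zip(difficulties, concepts):
        by_concept.setdefault(c, []).append(p)
    return {c: sum(ps) / len(ps) for c, ps in by_concept.items()}

diffs = item_difficulty(responses)
easy, hard = flag_items(diffs)
by_concept = concept_summary(diffs, item_concepts)
```

Repeating the per-exam summaries across multiple exams and averaging the concept-level results would give the kind of aggregate view of consistent strengths and weaknesses the abstract describes.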