Manchester eScholar Services

Supported by The University of Manchester Library

In April 2016 Manchester eScholar was replaced by the University of Manchester’s new Research Information Management System, Pure. In the autumn the University’s research outputs will be available to search and browse via a new Research Portal. Until then the University’s full publication record can be accessed via a temporary portal and the old eScholar content is available to search and browse via this archive.

ONTOLOGY-BASED MULTIPLE-CHOICE QUESTION GENERATION

Alsubait, Tahani Mohammad S

[Thesis]. Manchester, UK: The University of Manchester; 2015.

Abstract

Assessment is a well-understood educational topic with a long history and a wealth of literature. Given this level of understanding, educational practitioners can distinguish, for example, between valid and invalid assessments. Yet although we can test the validity of an assessment, knowing how to systematically generate a valid assessment remains challenging and needs to be understood. In this thesis we introduce a similarity-based method to generate a specific type of question, namely multiple-choice questions (MCQs), and to control their difficulty. This form of question is widely used, especially in contexts where automatic grading is a necessity. Generating MCQs is more challenging than generating open-ended questions because their construction includes generating a set of answers, all of which must be plausible; otherwise the validity of the question is compromised. Our proposed generation method is applicable to both manual and automatic generation. We show how to implement it by utilising ontologies, for which we also develop similarity measures. Those measures are functions that compute the similarity, i.e., degree of resemblance, between two concepts based on how they are described in a given ontology. We show that it is possible to control the difficulty of an MCQ by varying the degree of similarity between its answers. The contributions of the thesis can be summarised as follows. Firstly, we provide literature reviews for the two main pillars of the thesis, namely question generation and similarity measures. Secondly, we propose a method to automatically generate MCQs from ontologies and control their difficulty. Thirdly, we introduce a new family of similarity measures. Fourthly, we provide a protocol for evaluating a set of automatically generated assessment questions, taking into account experts' reviews and students' performance.
Finally, we introduce an automatic approach that makes it possible to evaluate a large number of assessment questions by simulating a student trying to answer them.
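The abstract describes measuring the resemblance of two concepts from their descriptions in an ontology and tuning MCQ difficulty by choosing wrong answers that are more or less similar to the correct one. A minimal, hypothetical sketch of that idea (not the thesis's actual measures): here similarity is taken to be the Jaccard ratio of the concepts' subsumer (ancestor) sets in a toy is-a hierarchy, and "hard" questions pick distractors closest to the key.

```python
# Toy is-a hierarchy: each concept mapped to its set of subsumers
# (ancestors, including the concept itself). A real implementation
# would derive these sets from an ontology via a reasoner.
SUBSUMERS = {
    "Cat":   {"Cat", "Feline", "Mammal", "Animal"},
    "Lion":  {"Lion", "Feline", "Mammal", "Animal"},
    "Dog":   {"Dog", "Canine", "Mammal", "Animal"},
    "Trout": {"Trout", "Fish", "Animal"},
    "Oak":   {"Oak", "Tree", "Plant"},
}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity of the two concepts' subsumer sets."""
    sa, sb = SUBSUMERS[a], SUBSUMERS[b]
    return len(sa & sb) / len(sa | sb)

def pick_distractors(key: str, n: int = 3, hard: bool = True) -> list[str]:
    """Choose n distractors for an MCQ whose correct answer is `key`.

    Distractors highly similar to the key yield a harder question;
    dissimilar ones yield an easier question.
    """
    candidates = [c for c in SUBSUMERS if c != key]
    candidates.sort(key=lambda c: similarity(key, c), reverse=hard)
    return candidates[:n]
```

For example, with "Cat" as the key, "Lion" (sharing Feline, Mammal, Animal) scores 3/5 = 0.6 and is the most plausible distractor, while "Oak" shares no subsumers and scores 0, so it would only appear in an easy question.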

Layman's Abstract

Assessment is a well-understood educational topic with a long history and a wealth of literature. Given this level of understanding, educational practitioners can distinguish, for example, between valid and invalid assessments. Yet although we can test the validity of an assessment, knowing how to systematically generate a valid assessment remains challenging and needs to be understood. In this thesis we introduce a similarity-based method to generate a specific type of question, namely multiple-choice questions (MCQs), and to control their difficulty. This form of question is widely used, especially in contexts where automatic grading is a necessity. Generating MCQs is more challenging than generating open-ended questions because their construction includes generating a set of answers, all of which must be plausible; otherwise the validity of the question is compromised. Our proposed generation method is applicable to both manual and automatic generation. We show how to implement it by utilising ontologies, for which we also develop similarity measures. Those measures are functions that compute the similarity, i.e., degree of resemblance, between two concepts based on how they are described in a given ontology. We show that it is possible to control the difficulty of an MCQ by varying the degree of similarity between its answers. The contributions of the thesis can be summarised as follows. Firstly, we provide literature reviews for the two main pillars of the thesis, namely question generation and similarity measures. Secondly, we propose a method to automatically generate MCQs from ontologies and control their difficulty. Thirdly, we introduce a new family of similarity measures. Fourthly, we provide a protocol for evaluating a set of automatically generated assessment questions, taking into account experts' reviews and students' performance.
Finally, we introduce an automatic approach that makes it possible to evaluate a large number of assessment questions by simulating a student trying to answer them.

Bibliographic metadata

Type of resource:
Content type:
Form of thesis:
Type of submission:
Degree type:
Doctor of Philosophy
Degree programme:
PhD Computer Science
Publication date:
Location:
Manchester, UK
Total pages:
299
Abstract:
Layman's abstract:
Thesis main supervisor(s):
Thesis co-supervisor(s):
Language:
en

Institutional metadata

University researcher(s):

Record metadata

Manchester eScholar ID:
uk-ac-man-scw:267857
Created by:
Alsubait, Tahani
Created:
3rd July, 2015, 10:25:57
Last modified by:
Alsubait, Tahani
Last modified:
5th May, 2017, 12:05:58