Section 7: Assessment & Accreditation

Assessment in MOOCs

In a free course, having coursework or exercises graded by an instructor is not financially sustainable.  There are three main techniques that have been used in MOOCs to remove the grading burden from the instructor:

  • Objective tests (such as multiple choice tests) that can be machine graded.
  • Peer Assessment where learners grade each other’s work.
  • Machine essay grading.

As there are no cheap systems available for machine essay grading, and its reliability has not been fully verified at this point, this document will address only the first two techniques.

Objective testing and multiple choice quizzes

An objective test is made up of factual questions that are unambiguous and have a single agreed answer.  Because of this, there is no subjectivity in the grading of answers and it can be done by a computer.  There are a number of question types available in most computerised systems:

  • Multiple choice where the learner is presented with a number of options only one of which is correct.
  • Multiple response where more than one option may be correct or partially correct.
  • Hot spot, where the learner is asked to indicate the location of something in an image or diagram.
  • Matching pairs, where the user matches items from two lists.
  • Short answer, where the user inputs free text and the system looks for key words or phrases.

Objective tests are most applicable to the lower levels of Bloom’s Taxonomy.  It is easier to write questions that check knowledge of facts than to test comprehension or application, and such tests probably cannot be used at the upper levels at all.  For that reason they are generally used to check whether a learner is keeping up with the content of the course.
To make the best use of objective testing, it is useful to be familiar with a few terms that are used in testing:

  • Validity is the ability of the test to measure what you want it to measure (i.e. always ask relevant questions).
  • Reliability is the ability of the test to return consistent results for learners who have the same level of knowledge.
  • Discrimination is the ability of the test to return appropriately different results for learners with different levels of knowledge.

Multiple Choice Questions (MCQs) are probably the most popular and simplest question type, and as most other question types can be reformulated in this form, we will consider this type further.  It is worth being aware of the following terms:

  • The STEM is the actual question posed to the learner.
  • The possible answers presented are known as the OPTIONS or ALTERNATIVES.
  • Marks are awarded for the CORRECT or BEST ALTERNATIVE.
  • The incorrect options are known as DISTRACTORS.

Tips for creating good questions

  • Consider using a text editor or word processor to write your questions, as  this is usually much easier than using the tools in the platform itself.  However, you can only do this if the platform you are using can import in one of the many text formats that are typically used (e.g. GIFT format).  If you use a word processor, you must export the quiz in plain text format before importing into your platform.  
  • Use 3 to 5 distractors.  If you have too few options luck will have a bigger impact and if you have too many it can make the question too difficult. (Avoid using True/False questions where possible)
  • Let the platform shuffle the options for you where possible as you will tend to have a bias towards particular positions which the learners can guess quite successfully.
  • Avoid very difficult or very easy questions.  Everyone will tend to score low or high, and your test will not be able to discriminate as well as it might.
  • Avoid “trick” questions, as learners with knowledge may not be able to answer them and, again, such questions will not discriminate between learners.
  • Avoid implausible distractors.  Learners will know that these options are incorrect and it will make the question too easy.  (It can be hard to think of plausible distractors, but a good way to find them is to ask learners a short answer question and see what incorrect answers they give.)
  • Avoid negatives in the question stem, but if you do need to use a negative (e.g. which of the following is FALSE) do not use negatives in the options as double negatives can be quite confusing (e.g. London is NOT in the UK)
  • Options should look similar, particularly in length.  If an option looks different, learners tend to think that it is the correct option (e.g. because the correct answer had to be more precise).
  • Avoid combining options, as in:

    A
    B
    C
    A and B

    where the fourth option (A and B) will attract learners unfairly.  If you do combine options, use all the combinations:

    A
    B
    C
    A and B
    A and C
    B and C

  • Avoid “all of the above” and “none of the above”.
  • Avoid clues in the stem.  A good example is when a word appears in both the stem and the correct option.
  • Use your experience of mistakes students have made in the past to create questions.
  • Draw questions at random from banks of questions.  Having more than enough questions available, so that the questions differ each time the test is taken, will force learners to review their learning materials if they wish to repeat a quiz to get a better score.  It may also be a good idea to require a specific time gap before the same quiz can be taken again.
  • Create multiple versions of the same question if possible.  For mathematical questions there are tools that will do this for you.
  • Spread the questions evenly over the content.  If there are too many from a specific part of the content, the test will not be as “valid” as it should be.  It is a good idea to step through your content and compose the same number of questions for each new idea presented.
  • Use correct grammar and similar styles in all your alternatives.
  • Consider asking another expert in the area to read over your questions.
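Two of the tips above, shuffling options and drawing questions at random from a bank, can be sketched in a few lines of Python (a minimal illustration only; the question structure and function names are hypothetical, not any particular platform’s API):

```python
import random

# A question bank: each entry has a stem, the correct option and distractors.
# (Hypothetical structure and sample questions, for illustration only.)
BANK = [
    {"stem": "Which term names the question text of an MCQ?",
     "correct": "Stem",
     "distractors": ["Option", "Distractor", "Alternative"]},
    {"stem": "Which term names an incorrect option in an MCQ?",
     "correct": "Distractor",
     "distractors": ["Stem", "Key", "Best alternative"]},
]

def draw_quiz(bank, n):
    """Draw n questions at random from the bank and shuffle each
    question's options, so the correct answer has no positional bias."""
    quiz = []
    for q in random.sample(bank, n):
        options = [q["correct"]] + q["distractors"]
        random.shuffle(options)
        quiz.append({"stem": q["stem"],
                     "options": options,
                     "answer_index": options.index(q["correct"])})
    return quiz
```

Each call produces a different selection and ordering, so a learner repeating the quiz cannot rely on remembered positions.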

Peer Assessment

In peer assessment, learners submit an assignment which is then graded by several other learners on the course.  It is used to assess the higher orders of learning in Bloom’s Taxonomy that cannot be assessed by objective tests.

However, peer assessment can also improve the learning experience within your MOOC in the following ways:

  • It allows the learners to learn from others in the course.
  • It forces both the course designers and learners to think much more precisely about issues that are important in a particular topic.

To use peer assessment effectively in your course it is important that you have a well designed challenge for the learners.  This challenge must encourage learners to engage with the content you have presented in a deeper way, possibly using that content to carry out a task that they can report on.  This will both improve the learning experience and the “validity” of the assessment.
The assessment should also be “reliable” and yield consistent grades for learners of a similar standard.  To achieve this it is important to use “rubrics” to help students grade each other.  Rubrics consist of a set of criteria on which the learners will be scored, together with the score awarded against each criterion for each specified level of performance in the submission.  A platform that supports peer assessment and rubrics makes grading easy for learners: it presents the submission along with the list of criteria and checkboxes, so the grader can easily select the level of performance against each criterion and the system will generate the grade.  Research has shown that using such rubrics improves consistency of grading, both for individual graders and between graders.  Research has also shown that averaging multiple peer grades for individual submissions produces grades of similar reliability to those of individual instructors.
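The rubric-driven grade calculation described above can be sketched as follows (a minimal illustration with made-up criteria, levels and point values, not any specific platform’s scheme):

```python
# A rubric maps each criterion to the points awarded per performance level.
# (Made-up criteria and point values, for illustration only.)
RUBRIC = {
    "Use of course concepts":   {"poor": 0, "adequate": 5, "excellent": 10},
    "Clarity of the report":    {"poor": 0, "adequate": 3, "excellent": 6},
    "Evidence for conclusions": {"poor": 0, "adequate": 4, "excellent": 8},
}

def grade_submission(rubric, selections):
    """Given the performance level a grader ticked for each criterion,
    return the total score generated by the rubric."""
    return sum(rubric[criterion][level]
               for criterion, level in selections.items())
```

Because every grader works from the same criteria and fixed point values, two graders who tick the same boxes always produce the same grade, which is what drives the consistency the research reports.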

It should be noted that peer assessment using rubrics has a number of other benefits as well as removing the need for instructor grading:

  • Publishing the rubric in advance helps learners carry out assignments.
  • The use of rubrics improves the design of assessments.

Implementing peer assessment in a course is carried out in four stages:
1. The assignment is designed and set up on the platform.
2. Learners submit between specified start and end dates.
3. During a set grading period, learners view several other submissions and rate them using the rubric.
4. The system automatically calculates and awards grades for all submissions.

It should be noted that in some systems the automated grading can be more sophisticated than simply averaging the grades generated from the peer assessments.  It may ignore individual grades that are too far from the others, or allow self-grading and award penalties for inaccurate grading, thus encouraging learners to be as accurate as possible.
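The kind of aggregation just described, averaging peer grades while ignoring ones that fall too far from the rest, can be sketched as follows (an illustrative rule with an arbitrary threshold, not any specific platform’s algorithm):

```python
def aggregate_peer_grades(grades, max_deviation=15):
    """Average a list of peer grades, ignoring any grade that lies more
    than max_deviation points from the (upper) median of all grades."""
    ordered = sorted(grades)
    median = ordered[len(ordered) // 2]  # upper median for even-length lists
    kept = [g for g in grades if abs(g - median) <= max_deviation]
    return sum(kept) / len(kept)
```

For example, a set of grades like 70, 75, 72 and 20 would discard the outlying 20 before averaging, protecting the submitter from a single careless or malicious grader.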
Peer grading can have several challenges and care should be taken to address these.  The main challenges are accuracy, scheduling and missing grades, and the following steps may be useful:

  • Using multiple graders, possibly 3 to 5, increases accuracy and reduces the probability of any particular assignment not being graded.
  • Well designed rubrics improve accuracy.
  • Anonymity reduces the chances of cheating or bias in grading.
  • Random allocation reduces the chances of graders recognising submissions from others that they know.
  • Self grading improves accuracy and reduces cheating.
  • Scoring learners on their grading accuracy improves accuracy and reduces cheating.
  • Having an appeal system addresses the issue of an individual receiving multiple inaccurate grades but this should include the possibility of a grade being reduced to discourage abuse.
  • All learners need to strictly adhere to the times for submission and grading and this may require severe penalties (including zero grades) for non-compliance.  Only those who submit will be asked to grade others. Those who have not submitted are unlikely to be motivated to do so. Failure to review and award grades will result in a penalty or non-grade, thus increasing the motivation to grade.
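The idea of scoring learners on their grading accuracy, mentioned in the list above, could be implemented by comparing each grader’s score with the final aggregated grade and penalising large deviations (an illustrative scheme with arbitrary parameters, not any platform’s actual rule):

```python
def grading_accuracy_score(given, final, max_score=10, tolerance=5):
    """Award full marks for grading within `tolerance` points of the
    final grade; deduct one point per extra point of deviation,
    never going below zero."""
    deviation = abs(given - final)
    return max(0, max_score - max(0, deviation - tolerance))
```

A grader whose score lands near the consensus keeps full marks, while one far from it is penalised, which gives learners a direct incentive to grade carefully.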

Accreditation

With regard to recognizing courses that do not form part of an accredited programme, we see two main developments. On one hand, we see how MOOCs are being used for degree programmes, which means they need to be recognized by the institution that issues the degree. On the other hand, we see that alternative credit is being developed for MOOCs. Below you can see an inventory of the different initiatives that have arisen to recognize MOOCs.  

Credit for MOOCs offered by other schools: the Alternative Credit Project

The American Council on Education initiated a project to recognize MOOC credit in degree programmes at a selection of US-based universities (American Council on Education). The Council evaluated a selection of MOOCs from different providers against a set of quality criteria, and MOOCs that met the criteria were included in the project. A number of US universities now issue college credit for these MOOCs, which include ID verification but do not appear to include proctored exams. On the website below, students can search courses by topic, the school offering the course and the schools accepting the credit.

A European quality label for MOOCs

In Europe, the OpenUpEd quality label was developed for MOOCs (OpenUpEd). Its associated institutional benchmarking is primarily meant to be applied as an improvement tool. It compares institutional performances with current best practices and leads to measures to raise the quality of its MOOCs and their operation. This process is designed to complement both an institutional course approval process, and ongoing evaluation and monitoring of courses in presentation. This quality label does not lead to credits.

A full first year based on MOOCs: Arizona State University

Arizona State University partnered with edX in the Global Freshman Academy, with the aim of offering the first year of college entirely through MOOCs (Global Freshman Academy). Students register in the ID-verified track of the MOOC for US$ 49, take a proctored exam and, if they pass, have a year to convert the result to college credit for an additional fee. The openness of MOOCs remains: the courses are accessible for free and without formal entry requirements such as transcripts or GPA.

A BSc degree based on MOOCs

The French start-up OpenClassRooms (Openclassrooms) is launching the first State-recognized bachelor degree in France that relies exclusively on MOOCs. Students need to register in the paid Premium Plus track, which offers individual interaction with a mentor, and they need to have their project work evaluated by a jury.

MIT Micromasters

MIT was the first to offer a series of MOOCs that can lead to a MicroMasters credential, issued by MITx (MIT Micromasters). The credential requires exceptional results in the online courses and in an additional, proctored, exam. Learners who have earned the MicroMasters credential can apply to be admitted to the campus programme at MIT, where they can earn the full master’s degree by taking additional coursework and a thesis project.  The programme features “inverted admissions”, which means students can take the online part of the coursework without having to apply for admission. Admitted students will be able to use their MicroMasters credential as course credit. Currently 14 institutions offer 20 MicroMasters programmes on edX (https://www.edx.org/micromasters).

The next step: What will it take to recognize MOOCs in formal, accredited programmes?

Six universities (Delft University of Technology; Ecole polytechnique fédérale de Lausanne; the Australian National University; the University of Queensland; the University of British Columbia; and Boston University) have been working towards developing an international credit transfer system for MOOCs. The consortium developed a system of reliable testing for MOOCs and a coding system to measure the level and weight of each course, as well as to examine the entry requirements for each module.

To reach this, agreement is needed on
1. Finances: exchange of students and credits will be much easier without the exchange of funds;
2. Coding: to enable students to include the courses in their programmes as building blocks, the level, expected prior knowledge and study load need to be clear; and
3. A system of quality and trust: when the institutions agree on the quality of the courses, they rely on trust instead of individual quality checks. The system has just been implemented, and students can now take the MOOCs from other institutions for credit.
This will lead to greater flexibility and variety in courses, as students can cherry-pick their courses from a broad range of institutions. Student exchange is not new: students have long received credit for courses taken at other universities. But with MOOCs this number can increase greatly, creating greater flexibility for a much larger number of students.