"

15 Can online quizzes help law students learn? (Is this a multiple-choice question?)

Jennifer Campion

Abstract

The move to online learning, prompted by the coronavirus pandemic, has invited educators to examine their approaches to teaching and to consider new ways of supporting students, making use of a variety of online learning tools. This chapter details the use of online multiple-choice quizzes (MCQs) as a tool for benchmarking learning and revising content in LEGAL307 Land Law, a paper taught at Te Piringa–Faculty of Law at the University of Waikato. It discusses the way MCQs are used in LEGAL307, shares student feedback, and considers the impact of these quizzes on student outcomes, as well as their more general utility and limitations as a support for learning, particularly for law students, who must develop analytical skills alongside a substantive understanding of the key principles and cases that have shaped the development of land law in Aotearoa New Zealand. The chapter considers the utility of these online tests for modelling these analytical skills as well as for testing content, and discusses the way the quizzes support flexible, interactive learning approaches while allowing learning to transcend the boundaries of the lecture room.

Keywords

Multiple-choice quizzes (MCQs), online learning, assessment design, law assessment, land law


Introduction

Let’s start with a question: how useful can a multiple-choice quiz (MCQ) be to the study of law?

  A. Very
  B. Somewhat
  C. Not at all

When I began teaching land law, I assumed the answer would be (C). MCQs are often useful for testing knowledge of content, especially fact-based content like ‘what colour will sodium or copper burn when you put them in the Bunsen burner?’, but law is a skills-based programme, and I expected that assessing a student’s ability to undertake statutory interpretation or case analysis would be beyond the scope of MCQs. Because law assessments test a student’s ability to apply the law (and not to simply recall it), how useful could MCQs really be for teaching and assessing our law students?

In fact, my experience of incorporating MCQs in teaching and assessment has changed my mind, and I now consider the answer to the question to be (A): MCQs can be very useful to the study of law. This chapter discusses the way I have been using MCQs in LEGAL307 Land Law, initially for assessment purposes, but increasingly as a learning tool that supports an assessment, with the assessment function of the MCQ having become secondary to the learning support function[1]. Drawing on teaching and learning scholarship that has considered the use of MCQs, and on my experiences of incorporating these quizzes into LEGAL307, the chapter will explore the theme of ‘Education without Boundaries’ through the lens of assessment design in the online learning environment. The chapter begins by summarising the teaching and learning literature on MCQs, before detailing how MCQs are used in LEGAL307 via the online learning management system Moodle. The chapter offers learnings from the LEGAL307 experience and considers the impact of these quizzes on student outcomes in LEGAL307, as well as the more general utility and limitations of the quizzes to support learning, particularly for law students who must develop analytical skills as well as subject matter knowledge. The chapter discusses the effectiveness of the MCQs for modelling these analytical skills as well as for testing content. The chapter also considers some of the implications and impacts of generative artificial intelligence (genAI) for MCQs and the safeguards that may be employed to ensure assessment security.

Using multiple-choice quizzes in assessment design: The scholarship

MCQs are a commonly used method of assessment which can offer educators an efficient means of evaluating student understanding and knowledge (Oc & Hassen, 2024). At its most basic, an MCQ consists of a series of questions that comprise a ‘stem’ (i.e. the context/problem scenario and the question the test-taker is required to answer) and a set of potential responses, of which only one is correct (Butler, 2018). MCQ tests are associated with assessing lower-order cognition, such as the recall of discrete facts (UNSW, 2024). Because of this, assessors have questioned the usefulness of MCQs in higher education (UNSW, 2024; Liu et al., 2023). There is a risk that MCQs could rely too heavily on simple, structured problems that assess only factual knowledge (Scouller & Prosser, 1994). That said, it is possible to design MCQ tests to assess higher-order cognition – including creative thinking and problem-solving skills – but questions must be carefully crafted to ensure the MCQs are a valid and reliable method of assessment (Liu et al., 2023). Indeed, when MCQs are designed effectively, this kind of assessment can require a greater level of analytical thinking from students, while giving them an opportunity to demonstrate their integration of knowledge, problem-solving skills, and application of knowledge (Riggs et al., 2020; Stevens et al., 2023)[2].

MCQs have become a dominant method of assessment in online learning (Timmis et al., 2016); suggested reasons for this include MCQs being relatively easy to score, offering objectivity in grading, and allowing more content to be covered by reducing the time it takes test-takers to respond to questions (Butler, 2018). The approach can support the assessment of a range of skills and provide feedback very soon after the assessment has been completed, which is useful for students (Winstone & Boud, 2022). Students may prefer MCQs to other assessment formats because of a perception that this type of assessment is ‘easier’ than others (Zeidner, 1987; Kaipa, 2020; Jopp et al., 2023). Nevertheless, MCQs have been found to improve both student learning and student perceptions of the quality of their learning experience (Velan et al., 2008), although MCQs could also increase test-related anxiety (Jerrim, 2023). Additionally, electronic assessment using MCQs may help to ensure the objectivity and reliability of student results (Oc & Hassen, 2024). Even so, concerns have been raised about the limitations of MCQs as assessment tools, especially in online learning (Cagliesi et al., 2023; Summers et al., 2023). The increasing use of genAI by students in their assessments also has implications for MCQs: academics have been questioning the use of MCQ assessments in their online courses because of the potential for compromised academic integrity (Reddy et al., 2022). On the one hand, MCQs may offer a more genuine test of a student’s abilities than a written assessment (which the student may use genAI support to complete); on the other, genAI may be able to assist students to complete MCQ assessments (Newton & Xiromeriti, 2024).

Butler (2018) has investigated the usefulness of MCQ assessments to student learning; he points out that because testing is a means of assessment, students need to learn in order to prepare for and sit that assessment. This preparatory work can support student learning because the need to recall information in response to a test question strengthens memory, which can lead to better retention of that information over time (Butler, 2018). Butler notes that practice testing in preparation for sitting an assessment has also been shown to enhance memory and comprehension (Butler, 2018, citing Dunlosky et al., 2013). The representation of the information in the student’s memory can also change when it is applied in assessment contexts, which may produce deeper understanding (Butler, 2018, citing Yang et al., 2019). Laboratory studies have shown that taking MCQ tests can be beneficial for learning (Butler, 2018, citing Marsh et al., 2007).

Nevertheless, it is important to consider whether best practices for assessment purposes align with best practices for learning. It is by no means guaranteed that they will: “the primary goal of assessment is to measure the extent to which students have acquired the skills and knowledge that form the learning objectives of an educational experience (e.g. an activity, session, or course)” (Butler, 2018, p. 324). Effective assessment “requires stable and consistent results […] and accurate measurement of the intended skills, knowledge, or both” (Butler, 2018, p. 324, citing Green, 1981). By contrast, the primary goal of using tests for learning is “to produce knowledge and skills that are durable, so that they will be retained over long periods of time, and generalizable, so that they can be flexibly used in different contexts” (Butler, 2018, p. 324, emphasis in original). For MCQs to assess students effectively, while ensuring that they develop knowledge and skills that benefit them beyond the exercise of undertaking an assessment, MCQs should be designed carefully, taking into account the overall learning goals of the course (see, for example, the discussion in Jovanovska, 2018).

Case and Donahue (2008) have suggested that MCQs may be a useful method of assessing law students, although they caution that “multiple-choice tests do not assess writing skills, nor do they assess in-depth knowledge of a particular topic” (p. 373). Law is a discipline that has resisted the use of MCQs to assess undergraduate law students (Huang, 2017; Whittaker & Olcay, 2021). Whittaker and Olcay (2021) note that this resistance is based on the perception that MCQs cannot assess a student’s writing skills (citing Huang, 2017), or ability to construct and evaluate legal arguments (citing Case & Donahue, 2008), and that they are only capable of assessing surface-level knowledge of the law (citing Huang, 2017). Admittedly, there is an inherent limitation on student responses within MCQs because students must choose from the options provided and are unable to write down and submit their own arguments to answer the question (Sergienko, 2001). This risks MCQs being used primarily to assess ‘memory’ and ‘understanding’ cognitive processes and factual knowledge (Whittaker & Olcay, 2021). However, it is possible to design questions that require students to draw on their evaluative skills: for example, questions in which they have to determine the respective strengths and weaknesses of an answer in relation to the question (Huang, 2017). Additionally, MCQs allow a breadth of material to be assessed, in a way that would be beyond the scope of traditional essay or legal opinion assessments, which test subject-matter knowledge and analytical and evaluative skills in relation to a specific topic (Sergienko, 2001). This has value for law students because law graduates are expected to be knowledgeable on all the material in the compulsory law curriculum (Quality Assurance Agency for Higher Education, 2019, as cited in Whittaker & Olcay, 2021). It is noteworthy that MCQs have been used at other stages of a student’s legal education: for example, in England, the Solicitors Regulation Authority (SRA) has implemented an MCQ assessment for graduates to be recognised as practising solicitors (Whittaker & Olcay, 2021, discussing Solicitors Regulation Authority, 2021).

Using multiple-choice quizzes in assessment design: The LEGAL307 experience

I began to use MCQs as an assessment in LEGAL307 Land Law in 2020[3]. Although MCQs had been used in the course when I was a student, in more recent years the assessments had been either essays or legal opinions. In that first year (2020), there was one online, Moodle-delivered MCQ assessment, which was used to test content knowledge. For example, a question might ask: ‘which of the following are the requirements for a valid legal lease?’, with students having to select the correct answers from a list. The questions were carefully crafted to ensure that they would fairly test student knowledge over a range of topics covered in class by highlighting key principles. I intended that the MCQ would serve as both an assessment that showcased students’ knowledge and a chance for them to clarify any uncertainties over those key principles prior to sitting their final exam. The self-marking nature of the MCQ meant that this assessment could be scheduled for the final week of lectures and students could receive feedback immediately after the assessment closed, which ensured that they had the benefit of that feedback when preparing to sit their final exam. Despite these benefits, the MCQ assessment was, effectively, just a series of discrete, unconnected questions that tested student knowledge of the core principles and cases they had learned about in class. It did not connect with their other assessment tasks or contextualise their knowledge beyond the information provided in the questions. Student results varied, but the variance was in line with the variance in their other assessments for this paper. Although I had to be a little careful in the crafting of the questions, the MCQ worked well and achieved its purpose. Even so, it did not serve a role beyond its purpose as an assessment: while it tested the students, it did not really teach them any skills that had an application beyond the MCQ – it was a test of their knowledge of the paper content that largely served as assessment for assessment’s sake.
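
For readers unfamiliar with authoring quizzes in Moodle, a question of this kind can be written in Moodle’s GIFT import format. The sketch below is illustrative only: it is not drawn from the LEGAL307 question bank, and the options and feedback are simplified placeholders rather than a statement of the law.

// Illustrative only: a simplified 'select all that apply' question in
// Moodle's GIFT import format. Options and feedback are placeholders.
::Lease essentials::Which of the following are requirements for a valid legal lease? {
   ~%50%Exclusive possession of the premises
   #Correct - exclusive possession is an essential characteristic of a lease.
   ~%50%A certain (or ascertainable) term
   #Correct - the duration of the term must be certain.
   ~%-50%Payment of a lump-sum premium
   #Incorrect - a premium is not a requirement for a valid lease.
   ~%-50%The lessor must reside on the premises
   #Incorrect - there is no such requirement.
   ####Review the notes on the essential characteristics of a lease.
}

Because feedback can be attached to each option, students see not only whether they were right but why, as soon as the quiz closes.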

In 2021, I decided that the MCQ needed to be more relevant to the other assessments that students undertake in LEGAL307. In particular, I wanted the MCQ to better align with the final exam, so that it could help students to prepare for that exam[4]. The MCQ is usually scheduled to take place in the week the final topic is taught; students sit the test and receive feedback prior to the revision classes, so that they have an opportunity to discuss the MCQ feedback in the revision sessions. This timing also ensures that students sit the test a week or two before study week, so having an MCQ assessment is very helpful from the perspective of ensuring timely return of grades and feedback to students.

At that time, LEGAL307 had three hours of weekly teaching timetabled, which consisted of one two-hour lecture on a Monday and a one-hour lecture on a Wednesday[5]. I chose to use the time in this way: the two-hour lecture was delivered as a seminar on the week’s topic, and the class would then practise an exam-style problem question on the week’s topic in the one-hour lecture on Wednesday. This meant that the Wednesday classes gave students a recap of the key points from the Monday lecture as well as an opportunity to workshop a problem question from a past exam on the topic, so that they could connect their learning with their assessments for the course. In these ‘Wednesday workshop’ lectures, students were asked questions about the problem question, which were designed to assist them to workshop it and to get them thinking more generally about how to analyse a question to know what it is testing. So, I would ask the students questions like: ‘how do you know this is a leases question?’; ‘what are the clues in the question that tell you what case to use?’; and ‘how does the information in the question help you to answer the question?’ And then we would work through answering the question together. We would also discuss what sort of things needed to be included in their answers, which gave them greater insight into how exams are marked.

I began to wonder whether something similar to the ‘Wednesday workshop’ lectures could be achieved via an online MCQ. I decided to explore this in 2021, with the goal of creating an assessment that would also serve as revision for the final exam. The MCQ was redesigned so that, instead of comprising a series of discrete questions, it was based on a problem question scenario – the same type of problem question that students had been analysing in their Wednesday lectures and the same type of problem question that they would be given in their final exam. An old exam question was adapted for the assessment, with students being given a series of questions about the exam question that tested all the topics they had studied in B Trimester. The MCQ had to do more than just test substantive content knowledge: it also needed to help students to prepare for their final exam. So, the MCQ included questions like ‘how do you know what topics this problem question is testing?’ to demonstrate to students how the details in the question give them hints about what aspects of land law the question is testing, and how these hints give students a clue about what they are expected to cover in their answers. For example, in the leases part of LEGAL307, students study a case called Nordern v Blueport Enterprises [1996] 3 NZLR 450. This is a case that all land law students remember, because it involves cancellation of a lease by a lessee due to another lessee operating a brothel from the premises. The facts are memorable, and the case provides the relevant law concerning the obligations of the lessor to not derogate from the grant of lease it has given and to guarantee to the lessee quiet enjoyment of their leasehold property. When setting a question on the lessor’s obligations, we will often use a problem question scenario that is similar to, but different from, the Nordern case, with the intention that students will apply the principles from Nordern to a similar fact scenario to determine if the law should be applied in the same way or differently, and whether the same outcome should be reached. But, even with clue words like ‘exotic dancers’ or ‘brothel’ in an exam question, many students did not seem to realise that Nordern was the relevant legal authority to use. So, the MCQ included questions that were designed to help students learn how to analyse their problem questions, as well as questions that just tested their understanding of the law. This was also reinforced in a general exam revision workshop that I regularly present to law students.

In the fortnight before the assessment date, I received a lot of questions from concerned students; it became clear that many were quite anxious about the MCQ assessment. In response, I decided to create another MCQ: an optional practice quiz. It took exactly the same format as the actual assessment: another MCQ based around an exam-style question, asking very similar questions to those in the actual assessment. The practice quiz was offered as much to help students to manage their anxiety as anything else; it was made available to them in the week before they sat the actual assessment. About half of the students in the class attempted it, and some did so multiple times. The students who attempted the practice quiz did much better in the actual test than they had in the practice quiz – so it was clear that they learned from the practice! All the students who submitted course evaluations gave very positive feedback about the assessment, as did student representatives at our staff–student feedback sessions.

Given the positive results and feedback about the practice MCQ, in 2022 I decided to build on this approach. At that time, I had recently begun using the Duolingo language app, and I had noticed that the regular practice and repetition of learning that Duolingo offers was helping me to better understand and retain the vocabulary and grammar lessons I was learning. I wondered if something similar could be achieved in LEGAL307. So, in 2022, I created an optional practice MCQ for each week and made it available to students, with students being given a past exam question on the week’s topic and a series of questions relating to that exam question. As the weeks went on, the questions became more complicated, testing both the week’s topic and earlier topics, to allow students to see the links between the topics more clearly than there was scope to do in class. This approach had several benefits: the MCQ served as a summary of the week’s learning by highlighting the key points; it gave students an opportunity to revise previous weeks’ learning (including a chance to revisit previous content and quiz questions and to have another attempt at questions they may have gotten wrong in previous quizzes); and it gave students a chance to practise a quiz in the same format as their MCQ assessment – so, effectively, they would be practising for the MCQ assessment for eight weeks before taking the test. The primary purposes of this approach were to: (1) support students dealing with assessment anxiety by giving them the opportunity to regularly practise the assessment and become more familiar with the assessment content and format; and (2) draw students’ attention to the key learnings for that week’s topic. The secondary purpose was that, as the weeks went on, students would revise content from previous weeks (so they would also be getting started early on revision). That approach allowed students to see the same kinds of questions coming up from week to week and to become familiar both with the kinds of land law issues they would be tested on in the MCQ assessment and with the way the problem questions that the MCQ interrogates test these things. Through that repetition, students were given an opportunity to build their confidence in their knowledge and ability. It allowed them to benchmark their learning on a weekly basis, identify areas for further revision, and ask questions if they did not understand a question or the answer feedback – well in advance of sitting an assessment on the topic.

Of course, creating MCQs takes a considerable amount of time (and workload constraints are also worth considering when exploring ‘education without boundaries’). It is not always easy or quick to devise questions that genuinely and fairly challenge students while conforming to the limits of the MCQ format. Each quiz took several hours to prepare and review. The flipside of that up-front work is that the quizzes can be ‘rolled over’ from year to year and only need to be reviewed and, if necessary, adjusted, rather than re-created. There is also scope to refine the MCQs in light of student feedback; in effect, the quizzes are themselves tested annually, and I continue to update them accordingly.

Case and Donahue (2008) have developed a ‘Checklist for Writing Multiple-Choice Questions’; they encourage a focus on important concepts and application of the law (rather than recall), as well as clear, concise, and specific questions and straightforward, plausible options. Use of absolutes (‘always’ and ‘never’) is discouraged, as is vagueness (‘usually’ and ‘frequently’) and the use of conditional options. To these helpful recommendations, I would add that practice MCQs should allow students to attempt questions on the same issue more than once: because the weekly practice quizzes tested the same issues in different ways, this reinforced the significance of the most important concepts, while giving students another opportunity to attempt to answer questions on those concepts – and to learn from previous mistakes.

How effective were the MCQs?

As a general observation, we are good at teaching ‘the what’ in law, but we do not always model ‘the how’ so well. We give students information about the law and its application, with reference to cases that demonstrate how courts have applied the law in the past, but we do not always demonstrate to students what we expect them to do with that information (that is, how we want them to apply the law to facts in assessment situations). One unfortunate effect of this is that students may think they are understanding the content well enough when they are being taught it, but they may not do so well on their assessments, and they may not understand why (and they may even feel ‘ambushed’ or ‘tricked’ by the assessment). That is something I very much do not want to do to students: in my opinion, it does not support the students’ learning experience, nor does it set them up well for the transition from university study into legal practice. The practice MCQs were designed to support this philosophy, and they worked a little better than the Wednesday workshop classes because they required each student to attempt the questions themselves[6]. Additionally, the quizzes reached a wider audience: following the COVID-19 pandemic and the move to flexible, online delivery of courses, there was a reduction in the number of students attending classes. Whilst students could attend lectures in person or online via Zoom, many chose to instead watch lecture recordings. This meant that those students were not able to participate in the Wednesday workshop lectures, but the quizzes still allowed them to have the experience of analysing a past exam question.

Student engagement was not initially as high as hoped: in a class of about 230 students, 60 students sat the first weekly practice MCQ, and 30–40 students regularly completed the weekly quizzes from then on. This was a little disappointing, given the work that had gone into preparing these quizzes, but at the same time, the quizzes had been advertised to the class as entirely optional extra content (with lectures and course readings remaining the core content and focus of their studies). Despite this, those who did attempt the weekly practice quizzes did significantly better in the MCQ assessment (Online Test 2), so there appeared to be merit to this approach.

Of course, the question remains whether the students who attempted the practice MCQs were those who would have been likely to perform well in the assessment anyway, without the support of the practice quizzes. Certainly, the students who attempted the practice quizzes were the more engaged students. However, when I compared all student results for the MCQ assessment with all student results for other assessments in the paper, student results were, overall, better for the MCQ assessment. Additionally, the students who attempted the practice quizzes generally did better at this assessment than at their other assessments. Moreover, when I looked at student results across the practice MCQs, students were obtaining higher scores as the course progressed, which suggests that the practice quizzes made a beneficial difference to their results.
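
The comparisons described here were made informally through the gradebook, but the same question could be put to anonymised data. The following is a minimal sketch, assuming a hypothetical CSV export with columns attempted_practice (0 or 1) and test_score (a percentage); as the self-selection point above makes clear, any difference found this way shows association, not causation.

# A minimal sketch of the cohort comparison described above, assuming an
# anonymised gradebook export with hypothetical column names.
import pandas as pd
from scipy import stats

grades = pd.read_csv("legal307_grades.csv")  # hypothetical file name

practised = grades.loc[grades["attempted_practice"] == 1, "test_score"]
abstained = grades.loc[grades["attempted_practice"] == 0, "test_score"]

# Welch's t-test compares the group means without assuming equal variances.
t_stat, p_value = stats.ttest_ind(practised, abstained, equal_var=False)

print(f"attempted practice: mean {practised.mean():.1f} (n={len(practised)})")
print(f"did not attempt:    mean {abstained.mean():.1f} (n={len(abstained)})")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")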

The weekly quizzes were again used in 2023 and 2024 – and received much more student engagement: in both years, over 100 students (of a class of 220–240) regularly attempted the quizzes. And, overall, the results for the MCQ assessment were very good. Not only that: student results also improved markedly for the two problem questions I set and marked for the final exam. Student feedback was very positive, and several students commented that the question feedback they received after completing the weekly practice MCQs effectively gave them their topic summary notes!

Key to the practice MCQs was the feedback: this had to be comprehensive, so that students could understand why they had got an answer wrong. Figures 15.1–15.3 show an example scenario, question, and feedback.

Barney is married to Betty. Betty has children from a previous relationship, as does Barney. Barney and Betty were considering buying an apartment, in unequal shares. Betty would contribute 60% of the price and Barney would contribute 40%. They both wanted to ensure that their ownership reflected those different shares, and they each wanted to be able to pass their respective shares on to their children when they die. This was particularly concerning for Betty, who was terminally ill.

This apartment was purchased and is a unit title under the Unit Titles Act 2010. Barney wanted Betty’s last years to be comfortable, so he decided to make some alterations to the apartment. He planned to install a stair lift chair to make it easier for Betty to get around the apartment. He also planned to adjust the windows to widen them and give Betty a better view. Barney was not sure if he needed to seek the Body Corporate’s permission to do this work. He decided to just go ahead, but unfortunately, the alterations to the windows were not done well and water leaked into the apartment, causing substantial damage to the apartment below. The other apartment owner was not happy and complained to the body corporate.

Advise on all issues.

Figure 15.1. Example MCQ scenario


What is this question testing? Tick all that apply:

☐ Caveats

☐ Concurrent Interests in Land

☐ Cross Leases

☐ Equitable Priorities

☐ Leases

☐ Mortgages

☐ The Repair Obligation of the Body Corporate under the Unit Titles Act 2010

☐ The Repair Obligation of Unit Owners under the Unit Titles Act 2010

Figure 15.2. Example MCQ question


This question is part of a 2019 exam question. The main part of the question deals with mortgages. However, two other topics were tested in this question: ‘Concurrent Interests in Land’ and ‘Unit Titles’, which are the two topic areas you should have identified as being tested in this excerpt from the question.

As always, think about the issues as well as the topic area and try to frame your issue as a question:

For example:

‘How can Betty and Barney ensure that their ownership of the apartment reflects their different percentage contributions to the purchase of the apartment?’;
‘Should Barney have sought the Body Corporate's permission before undertaking the alterations to the apartment?’; and
‘Who is responsible for undertaking repairs to the apartment following the damage that occurred as a result of Barney's alterations to the apartment?’

Let's take each issue in turn:

Issue 1: Concurrent Interests in land:

I = How can Betty and Barney ensure that their ownership of the apartment reflects their different percentage contributions to the purchase of the apartment?

R = Joint Tenants own a proportionate undivided share of the whole property, whereas Tenants in Common will own the property jointly but in defined shares.

A = Betty and Barney want their ownership to reflect their different ownership shares; therefore a Tenancy in Common is the best arrangement for them.

C = Betty and Barney need to own their apartment as Tenants in Common.

[…]

Figure 15.3. Example MCQ feedback (abridged)


The feedback also gave me the opportunity to expand upon the explanations of concepts covered in class, by reiterating or adding to these or by trying to explain the concepts in a different way. There was also scope to connect the Land Law concepts explored in the practice quizzes to the students’ learnings in other law papers (such as Contracts and Torts), and to give more general advice about answering exam questions. For example, for the ‘what is this question testing?’ question, the feedback would be a little more discursive and suggest how students might analyse the problem question in an exam to produce an answer. The practice quizzes also gave scope to explore, in the answer feedback, the reasoning employed in certain cases in more detail than there was time to cover in class. I also used the feedback to direct students to key sections of the course textbook or cases, as well as to identify additional resources that students could follow up on. Essentially, the quizzes and feedback became a way of directing students to the most significant or relevant aspects of the week’s learning; a way of allowing them to self-check that they had understood the concepts; and a way of helping them to regularly practise exam-style questions and receive feedback on these, in a way, and to an extent, that there was not scope to do in class.
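
In Moodle, feedback of this kind can sit at two levels: attached to individual options, and as ‘general feedback’ shown to every student once the quiz closes. A hedged sketch, again in GIFT format and again illustrative rather than drawn from the actual question bank:

// Illustrative only: '#' attaches feedback to the preceding option, while
// '####' holds general feedback shown to all students after submission.
::Topic spotting::Which topics is this scenario excerpt testing? {
   ~%50%Concurrent Interests in Land
   #Yes - the unequal contributions and succession wishes point to this topic.
   ~%50%Unit Titles
   #Yes - the alterations and repair obligations arise under the Unit Titles Act 2010.
   ~%-100%Mortgages
   #No - nothing in this excerpt raises a mortgage issue.
   ####Frame each issue as a question, then work through it using the issue/rule/application/conclusion structure modelled in Figure 15.3.
}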

The student feedback has been very positive. Feedback in the course evaluations has consistently singled out the MCQs as something students recognise as being helpful to their learning. In particular, students praised the quizzes for assisting them to identify the key learnings from each week’s lecture and to test their understanding through the application of these principles to exam-style problem questions. Students also noted that the extensive feedback they receive when they complete the quizzes has been very helpful.

However, not all feedback was positive: in particular, some students did not like the limited window in which the quizzes were available and wanted it extended. The assessed MCQ is a 90-minute random-question multiple-choice test which – at the time – was scheduled to be open from 12 noon on a Saturday to 12 noon the following Monday. The test could be started anytime in that 48-hour window. Given that was the format of the assessment, and given that my intention was for the students to be practising under assessment-style conditions in the weeks leading up to the assessment, the weekly practice MCQs were timed to be available each week over the weekend after the week’s learning. This also ensured that students who wanted to sit the practice quizzes would be engaging with the lectures each week. And this was deliberate: during exam revision sessions at the end of the year, it became apparent that, each year, there are a number of students who have not kept up with the lectures on a weekly basis and who are trying to learn the content for the first time at that point in the year in order to prepare for an exam which is only a fortnight away! However, the LEGAL307 course content is both too broad and too intricate for students to be able to learn it in a few weeks before the exam; the time limits around the weekly practice quizzes were designed to mimic the assessment but also to get students to engage with the content each week, in the hope that this would encourage them not to leave their learning until the last minute! At the same time, students who missed a quiz would have an opportunity to be tested on the same content in future, because the quizzes are iterative and similar questions repeat across the quizzes, so missing any one week would not prevent students from having quiz practice of that week’s topic.

In 2024, the time window was increased in response to student feedback, with each practice quiz remaining open for a week after the weekly lectures to which they related. This approach worked better because it gave students more time to review the lecture content prior to sitting the weekly quiz, although some students still suggested in the course evaluations that the quizzes should remain open for the duration of the paper.
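
For anyone administering a similar rhythm of weekly quizzes, the availability windows can be computed rather than worked out by hand each week. A minimal sketch with placeholder dates; the computed values would then be entered into each quiz’s open and close settings in Moodle.

# A sketch of the weekly availability windows described above, using
# placeholder dates. Each practice quiz opens after the week's lectures
# and (per the 2024 format) stays open for one week.
from datetime import datetime, timedelta

first_open = datetime(2024, 7, 17, 12, 0)  # hypothetical: Wednesday of week 1, 12 noon

for week in range(1, 9):  # eight weekly practice quizzes
    opens = first_open + timedelta(weeks=week - 1)
    closes = opens + timedelta(weeks=1)  # one-week window
    print(f"Week {week}: opens {opens:%a %d %b %H:%M}, closes {closes:%a %d %b %H:%M}")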

Education without boundaries

Thinking about the theme of ‘education without boundaries’, the time limits around the quizzes were a clear boundary – and one not all students appreciated! Although there was a deliberate and (in my opinion) valid reason for designing the MCQs in this way, it is worth thinking about the merits of this approach, in light of the student feedback and this book’s theme. On the one hand, the practice quizzes were an exercise that was available to all students, regardless of whether they attended the weekly lectures, so that students who attended classes remotely, and those who work or have other commitments during the scheduled lecture times, could participate; but, on the other hand, students needed to complete the quizzes within a limited time window[7]. As we reflect on the future of flexible learning, it is timely to consider which boundaries may be helpful for education – and which ones can be broken down.

Of course, all learning involves boundaries: students learn within disciplinary boundaries (even if they subsequently transcend these through interdisciplinary work); academic programmes deliver learning within the boundaries of particular papers (even as one paper may refer to and build on another). While barriers to learning can and should be broken down, it may not be practical or desirable to remove the boundaries that structure and scaffold learning, especially if removing these would not support student learning. Exploring alternative formats for content delivery and assessment has proven helpful in LEGAL307; disregarding time boundaries may be less so.

Arguably, the LEGAL307 MCQ experience demonstrates that there may not be the same scope for building flexibility into assessment activities as there is for building flexibility into learning activities, depending upon the nature of the assessment. While the MCQs allow learning to take place beyond the boundaries of the classroom, a test is usually a time-bound activity, which students need to sit contemporaneously. However, the LEGAL307 MCQ experience also demonstrates that creating opportunities for additional practice and preparation in advance of an assessment can relax the boundaries between learning and assessment and produce helpful outcomes for learners.

MCQs and genAI tools

Advancements in genAI technology mean that educators must now consider the role genAI may play in assessments (Swiecki et al., 2022). This includes considerations of how genAI may contribute to assessment design, but also how students may be using genAI to assist them to complete their assessments (Haque et al., 2022). MCQs are not immune to this risk: Coles (2024) notes the possibility that students may be using genAI tools to assist them in completing MCQ assessments. GenAI models can process and analyse questions, providing probable answers based on the information available (Coles, 2024). Obviously, a student relying on genAI to answer quiz questions undermines the purpose of the assessment, which is to measure the student’s knowledge and understanding of the subject matter. That said, there is also evidence that reliance on genAI when answering MCQs may be more harmful than helpful: one study revealed that ChatGPT answered ‘easy’ questions incorrectly (Coffey, 2024). Potential safeguards include use of technology to prevent or detect use of genAI in assessments (for example, using proctoring software that monitors student activities during the MCQ to detect any suspicious behaviour; using plagiarism detection tools to identify the potential use of genAI in answering MCQs) (Ibrahim, 2023); designing assessments with regard to the way students may attempt to use genAI in the assessment (for example, by using randomised questions, or basing questions on a scenario that students must analyse, which may make it harder for genAI to predict and provide accurate answers [University of Lincoln, n.d.]); and reinforcing academic integrity policies and educating students about the importance of submitting their own work (Rane et al., 2024).

While these strategies may all contribute valuably to ensuring the integrity of assessments, perhaps educators should also reflect on why students are attracted to using genAI in the first place. As the education sector considers the appropriate place of genAI tools in learning, it is important to remember that there was a time when using a calculator was frowned upon (Swiecki et al., 2022). I have observed that, when students do not feel confident that they know what they are doing, they may seek support from genAI, much as, before genAI tools were available, they might have wanted to copy the work of another student or sought to delay or avoid the assessment through extensions or special consideration applications. However, when a student understands what they need to do to complete the assessment, they may not choose to rely on genAI in the same way. Drawing an analogy between use of genAI and student behaviour around special consideration and extension applications in LEGAL307, I have observed that the number of students applying for special consideration for the MCQ assessment has declined significantly following the introduction of the practice quizzes: in a class of around 250 students, only 11 applications for special consideration or extensions were made in 2024[8]. Providing students with multiple opportunities to practise the MCQ assessment before sitting it has reduced much of the anxiety around the assessment.

Although the use of genAI in assessments is a concern that needs to be addressed through a range of safeguards (such as those noted above), ensuring students are well prepared and confident when attempting an assessment may mean that they are less inclined to rely on genAI tools. Whilst assessment integrity must be safeguarded for an assessment to be a true test of student learning, an assessment is not an end in and of itself but a mechanism for assessing whether learning outcomes have been achieved. Using MCQs as a learning tool in LEGAL307 that prepares students for an assessment has achieved my goal of teaching them how to use the information and skills they have been learning in class, which appears to have built their confidence in their ability to complete the assessments without the need to rely on genAI or other tools to supplement their performance. Of course, that is an observation that should be tested through further research.

Conclusion

This chapter started with a question: ‘how useful can a multiple-choice quiz be to the study of law?’ Despite my initial scepticism, I have found that MCQs have the potential to be very helpful to students if designed carefully, so that they are relevant to assessments (for example, by crafting a quiz around an exam question). This is because MCQs can enhance and complement the lectures and course materials by highlighting the key learnings for students, and the use of practice MCQs over a number of weeks allows those key learnings to be repeated and reinforced. Although MCQs cannot test writing skills or the quality of student arguments, they can be used to help students learn how to analyse exam-style problem questions, and in that way better prepare them to sit the exam. My experiences align with the literature: MCQs can be helpful for testing recall (UNSW, 2024), although they can also be used to test reasoning and analysis (Liu et al., 2023; Riggs et al., 2020; Stevens et al., 2023); simple, structured problems that assess only factual knowledge have limited value, while MCQs that assess higher-order cognition have greater value (Case & Donahue, 2008). Moreover, MCQs can support student learning as well as assessment (Butler, 2018) and can be used to teach students about assessment design, so that the MCQ does not just assess student understanding: it also helps students to understand how and why a problem question assesses them in the way it does. This demystifies assessment and benefits students beyond the particular course in which the MCQs are used.

Certainly, the MCQs have allowed LEGAL307 students to continue their land law learning beyond the boundaries of the classroom, in a way that has improved their understanding as well as their grades. I have made the MCQs a regular feature in LEGAL307 and have also begun using them in another paper I teach, LEGAL468 Intellectual Property Law. Additionally, some of my colleagues are also exploring the use of MCQs in light of the success I have had with them in LEGAL307 and the positive student feedback the MCQs have received. Despite my initial reservations, I now recommend MCQs as both an assessment and a way to support student learning through weekly practice that highlights key learnings and prepares students to answer exam-style problem questions on the topics covered in the practice MCQs.


References

Alldridge, P. (1997). Multiple choice examining in law. The Law Teacher, 31, 167. https://doi.org/10.1080/03069400.1997.9992973

Butler, A. C. (2018). Multiple-choice testing in education: Are the best practices for assessment also good for learning? Journal of Applied Research in Memory and Cognition, 7(3), 323–331. https://doi.org/10.1016/j.jarmac.2018.07.002

Cagliesi, M. G., Hawkes, D., & Smith, S. (2023). Narrowing awarding gaps: The contributory role of policy and assessment type. Studies in Higher Education, 48(11), 1665–1677. https://doi.org/10.1080/03075079.2023.2209597

Case, S. M., & Donahue, B. E. (2008). Developing high-quality multiple-choice questions for assessment in legal education. Journal of Legal Education, 58, 372.

Coffey, L. (2024, August 30). Can AI be used to cheat on multiple-choice exams? Inside Higher Ed. https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2024/08/30/professor-finds-way-see-if-students-used-ai

Coles, G. (2024, April 4). ChatGPT can answer multiple-choice questions, here’s how. PC Guide. https://www.pcguide.com/ai/can-chatgpt-answer-multiple-choice-questions/

Driessen, E., Van Der Vleuten, C., & Van Berkel, H. (1999). Beyond the multiple-choice v. essay questions controversy: Combining the best of both worlds. The Law Teacher, 33, 159. https://doi.org/10.1080/03069400.1999.9993027

Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14(1), 4–58. https://doi.org/10.1177/1529100612453266

Haque, M. U., Dharmadasa, I., Sworna, Z. T., Rajapakse, R. N., & Ahmad, H. (2022). “I think this is the most disruptive technology”: Exploring sentiments of ChatGPT early adopters using Twitter data. arXiv. https://doi.org/10.48550/arXiv.2212.05856

Huang, V. (2017). An Australian study comparing the use of multiple-choice questionnaires with assignments as interim, summative law school assessment. Assessment and Evaluation in Higher Education, 42(5), 580–595. https://doi.org/10.1080/02602938.2016.1170761

Ibrahim, K. (2023). Using AI-based detectors to control AI-assisted plagiarism in ESL writing: “The Terminator versus the machines.” Language Testing in Asia, 13(46). https://doi.org/10.1186/s40468-023-00260-2

Jerrim, J. (2023). Test anxiety: Is it associated with performance in high-stakes examinations? Oxford Review of Education, 49(3), 321–341. https://doi.org/10.1080/03054985.2022.2079616

Jopp, R., Pallant, J. L., & Russell, H. (2023). Choose your own adventure: Understanding why students prefer certain types of assessment. Journal of University Teaching & Learning Practice, 20. https://doi.org/10.53761/1.20.7.11

Jovanovska, J. (2018). Designing effective multiple-choice questions for assessing learning outcomes. Infotheca, 18(1), 25–42. https://doi.org/10.18485/infotheca.2018.18.1.2

Kaipa, R. M. (2020). Multiple-choice questions and essay questions in curriculum. Journal of Applied Research in Higher Education, 13(1), 16–32. https://doi.org/10.1108/JARHE-01-2020-0011

Liu, Q., Wald, N., Daskon, C., & Harland, T. (2023). Multiple-choice questions (MCQs) for higher-order cognition: Perspectives of university teachers. Innovations in Education and Teaching International, 61(4), 802–814. https://doi.org/10.1080/14703297.2023.2222715

Marsh, E. J., Roediger III, H. L., Bjork, R. A., & Bjork, E. L. (2007). The memorial consequences of multiple-choice testing. Psychonomic Bulletin & Review, 14(2), 194–199. https://doi.org/10.3758/BF03194051

Newton, P., & Xiromeriti, M. (2024). ChatGPT performance on multiple-choice question examinations in higher education: A pragmatic scoping review. Assessment & Evaluation in Higher Education, 49(6), 781–798. https://doi.org/10.1080/02602938.2023.2299059

Nordern v Blueport Enterprises [1996] 3 NZLR 450.

Oc, Y., & Hassen, H. (2024). Comparing the effectiveness of multiple-answer and single-answer multiple-choice questions in assessing student learning. Marketing Education Review, 1(1), 1–14. https://doi.org/10.1080/10528008.2024.2417106

Rane, N., Shirke, S., Choudhary, S., & Rane, J. (2024). Education strategies for promoting academic integrity in the era of artificial intelligence and ChatGPT: Ethical considerations, challenges, policies, and future directions. Journal of ELT Studies, 1(1), 36–59. https://doi.org/10.48185/jes.v1i1.1314

Reddy, L., Letswalo, M. L., Sefage, A. P., Kheswa, B. V., Balakrishna, A., Changundega, J. M., Mvelase, M. J., Kheswa, K. A., Majola, S. N. T., Mathe, T., Seakamela, T., & Nemakhavhani, T. E. (2022). Integrity vs. quality of assessments: Are they compromised on the online platform? Pedagogical Research, 7(2), em0121. https://doi.org/10.29333/pr/11840

Riggs, C. D., Kang, S., Rennie, O., & Brickman, P. (2020). Positive impact of multiple-choice question authoring and regular quiz participation on student learning. CBE - Life Sciences Education, 19(2), Article 16. https://doi.org/10.1187/cbe.19-09-0189

Scouller, K. M., & Prosser, M. (1994). Students’ experiences in studying for multiple-choice question examinations. Studies in Higher Education, 19(3), 267–279. https://doi.org/10.1080/03075079412331381870

Sergienko, G. (2001). New modes of assessment. San Diego Law Review, 38(2), 463–506. https://digital.sandiego.edu/sdlr/vol38/iss2/3

Stevens, S. P., Palocsay, S. W., & Novoa, L. J. (2023). Practical guidance for writing multiple-choice test questions in introductory analytics courses. INFORMS Transactions on Education, 24(1), 51–69. https://doi.org/10.1287/ited.2022.0274

Struyven, K., Dochy, F., & Janssens, S. (2005). Students’ perceptions about evaluation and assessment in higher education: A review. Assessment & Evaluation in Higher Education, 30(4). https://doi.org/10.1080/02602930500099102

Summers, R. J., Burgess, A. P., Higson, H. E., & Moores, E. (2023). How you teach and who you teach both matter: Lessons from learning analytics data. Studies in Higher Education, 49(3), 576–591. https://doi.org/10.1080/03075079.2023.2245424

Swiecki, Z., Khosravi, H., Chen, G., Martinez-Maldonado, R., Lodge, J. M., Milligan, S., Selwyn, N., & Gašević, D. (2022). Assessment in the age of artificial intelligence. Computers and Education: Artificial Intelligence, 3, Article 100075. https://doi.org/10.1016/j.caeai.2022.100075

Timmis, S., Broadfoot, P., Sutherland, R., & Oldfield, A. (2016). Rethinking assessment in a digital age: Opportunities, challenges, and risks. British Educational Research Journal, 42(3), 454–476. https://doi.org/10.1002/berj.3215

University of Lincoln. (n.d.). Question-based testing and A.I. Digital Education at Lincoln. https://digitaleducation.lincoln.ac.uk/online-assessment/a-i-and-assessment/question-based-testing/

UNSW Sydney. (2024, December 16). Assessing by multiple choice questions. UNSW Teaching. https://www.teaching.unsw.edu.au/assessing-multiple-choice-questions

Velan, G. M., Jones, P., McNeil, H. P., & Kumar, R. K. (2008). Integrated online formative assessments in the biomedical sciences for medical students: Benefits for learning. BMC Medical Education, 8, 52. https://doi.org/10.1186/1472-6920-8-52

Whittaker, S., & Olcay, T. (2021). Multiple-choice questionnaire assessments: Do they have a role in assessing law students? The Law Teacher, 56(3), 335–353. https://doi.org/10.1080/03069400.2021.1979762

Winstone, N. E., & Boud, D. (2022). The need to disentangle assessment and feedback in higher education. Studies in Higher Education, 47(3), 656–667. https://doi.org/10.1080/03075079.2020.1779687

Yang, B. W., Razo, J., & Persky, A. M. (2019). Using testing as a learning tool. American Journal of Pharmaceutical Education, 83(9), 7324. https://doi.org/10.5688/ajpe7324

Zeidner, M. (1987). Essay versus multiple-choice type classroom exams: The student's perspective. The Journal of Educational Research, 80(6), 352–358. https://doi.org/10.1080/00220671.1987.10885782



  1. In 2022 and 2024, I received Divisional Teaching Awards, largely due to the success of my work employing MCQs in my law teaching. Nominations for these awards were all from students who cited the use of MCQs as the basis for nominating me.
  2. Case and Donahue (2008) work through examples of MCQ questions, explaining how MCQs can be effectively designed. They emphasise the importance of the ‘stem’ (or problem scenario that precedes and contextualises the question), noting that “Questions that contain no vignettes, merely asking examinees ‘What’s the rule?’ or ‘What’s true?’, are testing surface learning – the examinees’ recall of isolated facts (‘recall questions’). Recall questions can often be answered by turning to a single paragraph in a textbook. They reward examinees who have simply memorized the material, but who might not be able to apply or interpret it” (p. 377). The stem/problem scenario should be drafted with a particular issue in mind and be easy to understand: “Multiple-choice stems should set up simple, understandable problems; the challenge for examinees should be in determining the correct answer, not in trying to understand the scenario […] The difficulty of a multiple-choice question is best determined by the sophistication and plausibility of the options, not by the complexity of the stem” (p. 379).
  3. I would like to especially acknowledge my colleague at the University of Waikato, Centre for Tertiary Teaching and Learning, Clementine Annabell, whose support and guidance have been invaluable as I began exploring, and then evolving, the use of MCQs in my teaching.
  4. LEGAL307 is a full-year paper (covering A and B Trimesters from February to October). Students complete four assessments over the course of the paper: one at the end of A Trimester, which tests their A Trimester learning; a Māori Land Law-focused written assignment early in B Trimester; the online MCQ at the end of B Trimester, which tests their B Trimester learning; and an exam, worth 50% of their final grade, at the conclusion of the course. Course details are set out in the paper outline, published annually.
  5. In 2023, this teaching allocation was reduced to one two-hour lecture on a Monday, so the MCQs allowed me to preserve a form of problem question practice, despite no longer having scheduled lecture time for this.
  6. That’s because the Wednesday workshops tended to involve me workshopping problem questions with the class, with only a few brave students willing to speak up (so there was still a risk that a student could think they understood how to answer the problem question while being guided through it, which might provide a false sense of security). In contrast, the practice MCQs test the student’s own knowledge, not anyone else’s.
  7. This 90-minute window was extended for students with accessibility needs. Most students completed the quizzes in 30–60 minutes.
  8. This can be compared to 25 and 37 applications for the other two LEGAL307 internal assessments, for which we do not offer practice assessments. We do, however, provide extensive assessment guidance, which is reflected in the relatively low number of applications compared to some other law papers.

About the author