Center for Teaching

Writing good multiple choice test questions.

Brame, C. (2013) Writing good multiple choice test questions. Retrieved [todaysdate] from https://cft.vanderbilt.edu/guides-sub-pages/writing-good-multiple-choice-test-questions/.

  • Constructing an Effective Stem
  • Constructing Effective Alternatives
  • Additional Guidelines for Multiple Choice Questions
  • Considerations for Writing Multiple Choice Items that Test Higher-order Thinking
  • Additional Resources

Multiple choice test questions, also known as items, can be an effective and efficient way to assess learning outcomes. Multiple choice test items have several potential advantages:


Reliability: Reliability is defined as the degree to which a test consistently measures a learning outcome. Multiple choice test items are less susceptible to guessing than true/false questions, making them a more reliable means of assessment. The reliability is enhanced when the number of MC items focused on a single learning objective is increased. In addition, the objective scoring associated with multiple choice test items frees them from problems with scorer inconsistency that can plague scoring of essay questions.

Validity: Validity is the degree to which a test measures the learning outcomes it purports to measure. Because students can typically answer a multiple choice item much more quickly than an essay question, tests based on multiple choice items can typically focus on a relatively broad representation of course material, thus increasing the validity of the assessment.

The key to taking advantage of these strengths, however, is construction of good multiple choice items.

A multiple choice item consists of a problem, known as the stem, and a list of suggested solutions, known as alternatives. The alternatives consist of one correct or best alternative, which is the answer, and incorrect or inferior alternatives, known as distractors.

Constructing an Effective Stem

1. The stem should be meaningful by itself and should present a definite problem. A stem that presents a definite problem allows a focus on the learning outcome. A stem that does not present a clear problem, however, may test students’ ability to draw inferences from vague descriptions rather than serving as a more direct test of students’ achievement of the learning outcome.


2. The stem should not contain irrelevant material, which can decrease the reliability and the validity of the test scores (Haladyna and Downing 1989).


3. The stem should be negatively stated only when significant learning outcomes require it. Students often have difficulty understanding items with negative phrasing (Rodriguez 1997). If a significant learning outcome requires negative phrasing, such as identification of dangerous laboratory or clinical practices, the negative element should be emphasized with italics or capitalization.


4. The stem should be a question or a partial sentence. A question stem is preferable because it allows the student to focus on answering the question rather than holding the partial sentence in working memory and sequentially completing it with each alternative (Statman 1988). The cognitive load is increased when the stem is constructed with an initial or interior blank, so this construction should be avoided.

Constructing Effective Alternatives

1. All alternatives should be plausible. The function of the incorrect alternatives is to serve as distractors, which should be selected by students who did not achieve the learning outcome but ignored by students who did achieve the learning outcome. Alternatives that are implausible don’t serve as functional distractors and thus should not be used. Common student errors provide the best source of distractors.


2. Alternatives should be stated clearly and concisely. Items that are excessively wordy assess students’ reading ability rather than their attainment of the learning objective.


3. Alternatives should be mutually exclusive. Alternatives with overlapping content may be considered “trick” items by test-takers, and excessive use of them can erode trust in and respect for the testing process.


4. Alternatives should be homogeneous in content. Alternatives that are heterogeneous in content can provide cues to students about the correct answer.


5. Alternatives should be free from clues about which response is correct. Sophisticated test-takers are alert to inadvertent clues to the correct answer, such as differences in grammar, length, formatting, and language choice in the alternatives. It’s therefore important that alternatives

  • have grammar consistent with the stem.
  • are parallel in form.
  • are similar in length.
  • use similar language (e.g., all unlike textbook language or all like textbook language).

6. The alternatives “all of the above” and “none of the above” should not be used. When “all of the above” is used as an answer, test-takers who can identify more than one alternative as correct can select the correct answer even if unsure about other alternative(s). When “none of the above” is used as an alternative, test-takers who can eliminate a single option can thereby eliminate a second option. In either case, students can use partial knowledge to arrive at a correct answer.

7. The alternatives should be presented in a logical order (e.g., alphabetical or numerical) to avoid a bias toward certain positions.


8. The number of alternatives can vary among items as long as all alternatives are plausible. Plausible alternatives serve as functional distractors, which are those chosen by students who have not achieved the objective but ignored by students who have achieved the objective. There is little difference in difficulty, discrimination, and test score reliability among items containing two, three, and four distractors.

Additional Guidelines

1. Avoid complex multiple choice items, in which some or all of the alternatives consist of different combinations of options. As with “all of the above” answers, a sophisticated test-taker can use partial knowledge to achieve a correct answer.


2. Keep the specific content of items independent of one another. Savvy test-takers can use information in one question to answer another question, reducing the validity of the test.

Considerations for Writing Multiple Choice Items that Test Higher-order Thinking

When writing multiple choice items to test higher-order thinking, design questions that focus on higher levels of cognition as defined by Bloom’s taxonomy. A stem that presents a problem that requires application of course principles, analysis of a problem, or evaluation of alternatives is focused on higher-order thinking and thus tests students’ ability to do such thinking. In constructing multiple choice items to test higher-order thinking, it can also be helpful to design problems that require multilogical thinking, where multilogical thinking is defined as “thinking that requires knowledge of more than one fact to logically and systematically apply concepts to a …problem” (Morrison and Free, 2001, page 20). Finally, designing alternatives that require a high level of discrimination can also contribute to multiple choice items that test higher-order thinking.

Additional Resources

  • Burton, Steven J., Sudweeks, Richard R., Merrill, Paul F., and Wood, Bud. How to Prepare Better Multiple Choice Test Items: Guidelines for University Faculty, 1991.
  • Cheung, Derek and Bucat, Robert. How can we construct good multiple-choice items? Presented at the Science and Technology Education Conference, Hong Kong, June 20-21, 2002.
  • Haladyna, Thomas M. Developing and Validating Multiple-Choice Test Items, 2nd edition. Lawrence Erlbaum Associates, 1999.
  • Haladyna, Thomas M. and Downing, S. M. Validity of a taxonomy of multiple-choice item-writing rules. Applied Measurement in Education, 2(1), 51-78, 1989.
  • Morrison, Susan and Free, Kathleen. Writing multiple-choice test items that promote and measure critical thinking. Journal of Nursing Education, 40: 17-24, 2001.


Multiple-Choice Tests: Revisiting the Pros and Cons

  • February 21, 2018
  • Maryellen Weimer, PhD

What multiple-choice testing has going for it.

  • Scoring is quick and easy, especially if a machine is involved.
  • Easy creation of multiple versions, again with machine assistance. Plus, there’s the potential to grow the collection of questions every time the course is taught.
  • Simple statistics (now regularly calculated by computer or via LMS) allow item analysis to reveal how well a question discriminates between those who know the material and those who don’t.
  • Can be graded objectively without rater bias.
  • Allow for inclusion of a broad range of topics on a single exam, thereby effectively testing the breadth of a student’s knowledge.
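The item analysis mentioned above can be sketched in a few lines. This is a minimal illustration (with hypothetical data, not any particular LMS's implementation) of the classic upper-lower discrimination index: compare how often the top and bottom scorers on the exam answered a given item correctly.

```python
def discrimination_index(item_correct, total_scores, fraction=0.27):
    """item_correct: 0/1 per student for one item; total_scores: exam totals."""
    n = len(total_scores)
    k = max(1, round(n * fraction))
    # Rank students by total exam score, then take the bottom and top groups.
    order = sorted(range(n), key=lambda i: total_scores[i])
    low, high = order[:k], order[-k:]
    p_high = sum(item_correct[i] for i in high) / k
    p_low = sum(item_correct[i] for i in low) / k
    return p_high - p_low  # near zero or negative flags a weak item

# Hypothetical results for one item across ten students:
correct = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
totals = [80, 45, 90, 70, 50, 85, 40, 75, 95, 55]
print(discrimination_index(correct, totals))  # → 1.0
```

A value near 1.0 means the item cleanly separates strong from weak students; values near zero or below suggest the item is broken or miskeyed.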

Potential benefits of multiple-choice test questions when done right.

  • On too many multiple-choice tests, the questions do nothing more than assess whether students have memorized certain facts and details. But well-written questions can move students to higher-order thinking, such as application, integration, and evaluation. SAT questions illustrate how thought-provoking a multiple-choice question can be. Ways to address: Recognize the amount of time it takes to write a good question. Preserve and reuse good questions. Consider using only three answer options per item. Research says you can; check the reference below.
  • Questions can be clearly written and if they are, it’s a straight shot to what the student knows. But the clarity of multiple-choice questions is easily and regularly compromised—with negatives or too much material in the stem, for example. Ways to address: Do an item analysis and find out if a question is being missed by those with high exam scores. If so, there’s probably something wrong with the question and it should be tossed.

What’s problematic about multiple-choice testing.

  • A careful reading of some questions can reveal the right answer, and test-savvy students will use this to their advantage. It might be the grammatical structure that only fits one answer option or the longer length of the correct response. What happens here is that the questions end up testing literary skill rather than content knowledge. Ways to address: Give the test to someone not taking the course and see how many questions they get correct. Ask if something tipped them off to the right answer.
  • With lucky guesses students get credit for correct answers. It looks like they know something they don’t know. Ways to address: 1) Avoid throw-away answer options—those that are obviously incorrect. If the student doesn’t know the answer but can rule out one or two of the options, they’ve significantly upped the chances of getting it right. 2) Some teachers use a formula that gives points for the correct answer and takes a lesser amount of points off for answers missed. This approach, not terribly popular with students, decreases guessing by forcing students to leave questions blank when they don’t know. 3) Others have students rate the level of confidence they have in their answer, which becomes part of the score. Correct answers with high confidence ratings score the highest. Correct answers with low confidence ratings get a lower score.
  • Wrong answer options expose students to misinformation, which can influence subsequent thinking about the content. This is especially true if students carefully consider the options and select an incorrect one after having persuaded themselves that it’s right. Ways to address: Spend time during the debrief on incorrect answer options regularly selected. This is a time when students need to be doing the leg work, not the teacher. Have them talk with each other, check notes, look things up in the text, and then explain why the option is incorrect. Make five bonus points available during the debrief. Those points are earned for everyone in the class by students who explain why certain answer options are wrong. More points are awarded when the explanation is offered by someone who selected that incorrect option.
  • Asked for their test preference, most students pick multiple-choice tests. They like them because they think they’re easier. And they are. With a multiple-choice question, the answer is selected, not generated. Students also think they’re easier because they’re used to multiple-choice questions that test recall, ask for definitions, or have answers that can be memorized without being all that well understood. Ways to address: Write questions that make students think.
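The penalty-scoring idea in the second point above can be made concrete. This is an illustrative sketch of classic formula scoring, not any particular grading system's rules: deduct 1/(k-1) point per wrong answer on a k-option item, so blind guessing nets zero on average and blanks cost nothing.

```python
def formula_score(n_correct, n_wrong, n_options=4):
    # A penalty of 1/(k-1) per wrong answer makes random guessing
    # among k options worth zero points on average; blanks cost nothing.
    return n_correct - n_wrong / (n_options - 1)

# 20 correct, 6 wrong, 4 blank on a 4-option test:
print(formula_score(20, 6))  # → 18.0

# Guessing blindly on four 4-option items yields, on average,
# one right and three wrong, which nets out to zero:
print(formula_score(1, 3))  # → 0.0
```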

If you regularly use multiple-choice tests, you ought to have a good working knowledge of the research associated with them. That can be acquired with one well-organized and easily understood “Teacher-Ready Research Review.”

Xu, X., Kauer, S., and Tupy, S. (2016). Multiple-choice questions: Tips for optimizing assessment in-seat and online. Scholarship of Teaching and Learning in Psychology, 2(2), 147-158.

An article highlighting the research covered in Xu et al. appeared in the November 2016 issue of The Teaching Professor.

For more on multiple-choice tests, read:

  • Seven Mistakes to Avoid When Writing Multiple-Choice Questions
  • 30 Tips for Writing Good Multiple-Choice Questions
  • Advantages and Disadvantages of Different Types of Test Questions
  • Tips for Writing Good Multiple-Choice Questions


Multiple Choice Questions: Benefits, Debates, and Best Practices


Who knew that a question type could be so shrouded in controversy? The multiple choice question (MCQ) may be a “Who Wants to Be a Millionaire” favorite, but it’s also the most widely debated question type when it comes to efficacy and outcomes reporting. Why all the buzz? The multiple choice question is forever associated with standardized tests, Scantron sheets, #2 pencils, and all of the above. But like any question type, there are benefits and downfalls, there’s a time and place, and there are a slew of best practices. Let’s weigh the pros and cons and figure out when to best use this traditional testing favorite.

The Multiple-Benefit Question Type

Like any question type, the format alone is useless without proper usage, wording, and subject pairing to make it effective. The following benefits make multiple choice an attractive option for fact-based content.

  • Easy on the Grader: Think about the instructor with no TA and 500 students in their 101 course. Essays and short answer questions, while effective, will inevitably delay grading. Auto-graded multiple-choice questions allow instructors to test their students quickly and efficiently, without hiring additional graders.
  • Time and Scope: There’s a reason why MCQs are a default for most standardized testing. By nature, MCQs allow for fast testing across a vast expanse of content. According to Vanderbilt University, “because students can typically answer a multiple choice item much more quickly than an essay question, tests based on multiple choice items can typically focus on a relatively broad representation of course material, thus increasing the validity of the assessment.”


  • Flexibility: Perhaps it isn’t the nature of the question but what we are asking that makes this question type seem so rigid. There are options to expand to different Bloom’s Taxonomy levels in an MCQ. While many default to questions that test understanding and remembering of facts, a well-worded question can test application and analysis.
  • Single/Multiple Answers: A single answer allows for simple weeding out of incorrect answers. However, with multiple correct answers (and this doesn’t mean “D. All of the above”) present, you can eliminate the process of elimination.
  • Measurable and Reliable: With the focus on efficacy measurement in schools increasing, being able to have large amounts of objective testing data that show students’ grasp and retention of content is pivotal for an institution.

The Multiple Layers of Controversy

All benefits aside, MCQs are widely debated for their efficacy and often considered a poor question type to gauge a student’s level of critical thinking, making them far better suited for lower-level Bloom's questioning. The following are some of the pitfalls mentioned by leading MCQ opponents.

  • Development Time: For the question author, a well-crafted MCQ isn’t just about writing the best correct answer; it’s about creating convincing false answers, or distractors. This takes more time than a simple fill-in-the-blank or essay question. Too many sloppy questions give away the correct answer or include a freebie distractor that is obviously wrong.
  • Working Backwards from Wrong: According to the National Center for Fair and Open Testing, “Multiple choice items can be easier than open-ended questions asking the same thing. This is because it is harder to recall an answer than to recognize it. Test-wise students know that it is sometimes easier to work backwards from the answer options, looking for the one that best fits. It also is possible to choose the ‘right’ answer for the wrong reason.”
  • Beating the Odds: You may have heard these question types called “multiple guess questions.” Of course, guessing is possible on any question type, but MCQs give even the most clueless learner a 25% chance on a four-option item. If they can remove even one distractor, their odds immediately increase to 33%. The option for guessing is present in plenty of question types… but here, the right answer is literally on the page. May the odds be ever NOT in their favor.
  • Diversify Your Question Types: With the majority of standardized tests heavily reliant on multiple choice, deliberate choices must be made about when to use MCQs, and they should be interspersed with other question types that assess students on their abilities to create, evaluate, and formulate their own responses to situational questions.
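The guessing odds in the list above are simple to verify. Here is a small sketch, assuming a four-option item, showing the exact fractions behind the rounded 25% and 33% figures:

```python
from fractions import Fraction

def guess_probability(n_options, n_eliminated=0):
    """Chance of a blind guess succeeding after ruling out some distractors."""
    return Fraction(1, n_options - n_eliminated)

print(guess_probability(4))     # → 1/4 (25%)
print(guess_probability(4, 1))  # → 1/3 (roughly 33%)
```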

Your Lifelines: Best Practices for MCQ Authoring

Pros and cons aside, the MCQ is still a formidable option for testing. When used in moderation, with a diverse cast of other question types, and well-crafted for optimal learning, MCQs can remain steadfast against the tide of push-back. To better assist you, the content developer, with your multiple-choice assessment authoring, keep the following in mind:

  • Move Beyond the “Above”: “All of the above” and “None of the above” have a negative effect on your testing. While one allows students to gain credit when they recognize at least two correct choices, the other rewards them for not formalizing what the correct answer is at all. Too often, questions are authored with the traditional “above” distractor. However, with digital randomization features, what is above may actually be below.
  • The Power of Distractors: A well-developed MCQ not only tests students on correct answers, it also puts commonly chosen incorrect answers to work as distractors. Choose your distractor options carefully. Be consistent in your options, and make each one plausible and relevant to the subject matter being tested. A key term from two chapters ago is a dead giveaway. As with matching questions, keep your options homogeneous and ordered in an objective format, such as numerical or alphabetical.
  • Randomize: Multiple-choice questions have grown a lot smarter. Choose an assessment building tool, such as GT’s MyEcontentFactory, that allows for randomization of distractors. Not only does randomization act as a built-in cheating prevention tool, it also keeps your choice organization objective. No learner will be cracking that code.
  • Move Beyond Text: We often think in text, but, according to eLearningIndustry.com, a great way to test higher levels of analysis is to add a chart, graph, or image to a multiple-choice question.
  • Avoid Common Pitfalls: Steer clear of the following in your stems and alternatives:
      • absolutes like always and never
      • long-winded distractors
      • multiple-multiple choices, such as “C. Choices A and B”
      • incomplete questions that just seem to ________
      • negatives, such as “all of the following are... except” and “which of the following are NOT...”
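The randomization advice above can be sketched simply. The dict-based item format here is hypothetical, not a particular authoring tool's API; the point is that tracking the answer by value keeps the key correct wherever the option lands after shuffling.

```python
import random

def shuffle_options(item, rng=random):
    """Return a per-student copy of the item with its options shuffled."""
    options = item["options"][:]  # copy so the master item is untouched
    rng.shuffle(options)
    return {
        "stem": item["stem"],
        "options": options,
        # Look the answer up by value (options must be distinct).
        "answer_index": options.index(item["answer"]),
    }

item = {
    "stem": "2 + 2 = ?",
    "options": ["3", "4", "5", "6"],
    "answer": "4",
}
shuffled = shuffle_options(item)
assert shuffled["options"][shuffled["answer_index"]] == "4"
```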

For more best practices and examples of great MCQs at various levels of Bloom’s Taxonomy, be sure to visit the following links:

  • University of Texas at Austin
  • Vanderbilt University
  • Brigham Young University

Crafting great assessments isn't always easy, but beginning with the right authoring tool from the start can help the content developer, instructors who rely on assessments to accurately measure outcomes, and the students themselves. 

Did you know that you can make multiple choice questions in GT's MyEcontentFactory? This is just one of many assessment types that our assessment tool facilitates. Try it today - contact us for a demo.

Which side of the MCQ aisle are you? Let us know in the comments section below!


InnerDrive


How to assess students: multiple-choice questions vs essays


Written by the InnerDrive team | Edited by Bradley Busch

As a teacher, you want your students to become analytical and critical thinkers. You want them to be able to apply their learning to new situations and transfer their learning experience onto solving real-life problems. There are many ways to aid this development, including assessment methods such as MCQs and essays that foster these abilities and give students the opportunity to demonstrate the higher-order skills they have developed.

Approaches to learning

Students can take one of two approaches to their studies:

  • A surface approach that focuses on recall and reproduction.
  • A deep approach that focuses on meaning, understanding and long-term learning.

Of course, the more students are able to recall, the more material they have to apply meaning and understanding to. Students often change their study patterns according to the type of assessment they are given. They can be tempted to concentrate on memorization if their motivation is to do well on a specific multiple-choice test for a better grade, choosing the surface approach. On the other hand, a student who wishes to apply a deep approach may integrate the theory and practice of a subject, as they intend to foster a strong understanding of the material.

In some instances, surface learning is particularly useful. Learning is a lengthy process and, before students can develop a deeper understanding of a subject, it is important for them to know the relative basic facts. For example, when learning a foreign language, knowing basic vocabulary, such as numbers and colours, is necessary before you can move on to constructing sentences.

However, deeper learning is often considered superior because it helps students build their critical thinking skills, a key aspect of metacognition. When you encourage your students to take a deep approach to learning, you are simultaneously encouraging them to develop metacognitive skills that will enhance their academic performance.

So, how can teachers encourage students to use either type of learning? As mentioned above, motivation is an important component. In a school setting, this often involves assessments – by changing the nature of the assessments, you can in turn encourage your students to change their approach to them. In this case, there are two types we recommend: multiple-choice questionnaires and essays.

Let’s compare multiple-choice questions and essays

Multiple-choice questions.

Students often perceive MCQ tests as an evaluation of their ability to reproduce knowledge they have learnt by heart. This makes it a perfect approach for when teachers want to assess recall of knowledge and basic facts.

Evidence suggests that students are more likely to apply surface learning approaches when preparing for MCQ tests, possibly because they believe that this requires lower levels of intellectual processing. In this study, students who applied deep learning strategies when completing the MCQ test even received lower grades. This suggests that a deep approach can actually confuse students and waste their precious revision time if they are being assessed through a multiple-choice questionnaire.

Essays.

Essays are a popular way for teachers to assess their students’ learning. Because of their length and the thoroughness they require, essays give students the opportunity to engage deeply with the material and, as a result, develop and demonstrate higher levels of thinking. Unlike MCQ tests, there isn’t a clear-cut right or wrong answer, requiring students to take a deeper approach to learning to produce a quality end product.

Research shows that students are more likely to use deep learning approaches when preparing assignment essays, which are usually perceived as an assessment of higher cognitive processing. In fact, students in this study who employed surface strategies to complete the essay had poorer performance, indicating that writing a good essay requires deeper understanding and the ability to create meaningful links between information.

Which one should you use?

As mentioned above, different requirements call for different types of assessment, and each one has its advantages. MCQ tests are a great way to evaluate students’ basic understanding of a subject’s foundations.

However, essay assessments are considered a more reliable method of assessment. The structure of the questions and marking criteria allows students to really demonstrate their skills. An MCQ test simply yields a score out of 100, with each question having a strictly right or wrong answer, whereas an essay gives students the opportunity to explore the material they have learnt.

Although some may argue that the subjective nature of essay writing can put some students at a disadvantage, it does provide a much clearer view of their progress. Essay assessments also allow for feedback that can be useful to all students, no matter their academic level. Research has shown that feedback is crucial to improving the learning experience for students, further contributing to the benefits of essay assessments over MCQ tests.

Final thought

The elephant in the room is time. Multiple choice tests can be very quick to mark, which saves time (arguably the most precious teaching resource there is). Therefore, no one size fits all. It really is a case of matching the assessment to the given situation.

About the editor

Bradley Busch


Bradley Busch is a Chartered Psychologist and a leading expert on illuminating Cognitive Science research in education. As Director at InnerDrive, his work focuses on translating complex psychological research in a way that is accessible and helpful. He has delivered thousands of workshops for educators and students, helping improve how they think, learn and perform. Bradley is also a prolific writer: he co-authored four books including Teaching & Learning Illuminated and The Science of Learning, as well as regularly featuring in publications such as The Guardian and The Telegraph.


University of Southern Queensland


Types of Exams

Anita Frederiks; Kate Derrington; and Cristy Bartlett


Introduction

There are many different types of exams and exam questions that you may need to prepare for at university. Each type of exam calls for different considerations and preparation, in addition to knowledge of the course material. This chapter extends the discussion from the previous chapter and examines different types of exams, including multiple choice, essay, and maths exams, and some strategies specific to those exam types. The aim of this chapter is to provide you with an overview of the different types of exams and the specific strategies for preparing for and undertaking them.

The COVID-19 pandemic has led to a number of activities previously undertaken on campus becoming online activities. This includes exams, so we have provided advice for both on-campus (or in person) exams as well as alternative and online exams. We recommend that you read the chapter Preparing for Exams before reading this chapter about the specific types of exams that you will be undertaking.

Types of exams

During your university studies you may have exams that require you to attend in person, either on campus or at a study centre, or you may have online exams. Regardless of whether you take the exam in person or online, your exams may have different requirements and it is important that you know what those requirements are. We have provided an overview of closed, restricted, and open exams below, but always check the specific requirements for your exams.

Closed exams

These exams allow you to bring only your writing and drawing instruments. Formula sheets (in the case of maths and statistics exams) may or may not be provided.

Restricted exams

These exams allow you to bring in only specific things such as a single page of notes, or in the case of maths exams, a calculator or a formula sheet. You may be required to hand in your notes or formula sheet with your exam paper.

Open book exams

These exams allow you to have access to any printed or written material and a calculator (if required) during the exam. If you are completing your exam online, you may also be able to access online resources. The emphasis in open book exams is on conceptual understanding and application of knowledge rather than just the ability to recall facts.

Myth: You may think open book exams will be easier than closed exams because you can have all your study materials with you.

Reality: Open book exams require preparation, a good understanding of your content and an effective system of organising your notes so you can find the relevant information quickly during your exam. Open book exams generally require more detailed responses. You are required to demonstrate your knowledge and understanding of a subject as well as your ability to find and apply information applicable to the topic.  Questions in open book exams often require complex answers and you are expected to use reason and evidence to support your responses. The more organised you are, the more time you have to focus on answering your questions and less time on searching for information in your notes and books. Consider these tips in the table below when preparing for an open book exam.

Tips for preparing your materials for open or unrestricted exams

  • Organise your notes logically with headings and page numbers
  • Use different colours to highlight and separate different topics
  • Be familiar with the layout of any books you will be using during the exam. Use sticky notes to mark important information for quick reference during the exam.
  • Use your learning objectives from each week or for each new module of content, to help determine what is important (and likely to be on the exam).
  • Create an alphabetical index for all the important topics likely to be on the exam. Include the page numbers, in your notes or textbooks, of where to find the relevant information on these topics.
  • If you have a large quantity of other documents, for example if you are a law student, consider binding legislation and cases or place them in a folder. Use sticky notes to indicate the most relevant sections.
  • Write a summary page which includes, where relevant, important definitions, formulas, rules, graphs and diagrams with examples if required.
  • Know how to use your calculator efficiently and effectively (if required).

Take home exams

These are a special type of open book exam where you are provided with the exam paper and are able to complete it away from an exam centre over a set period of time.  You are able to use whatever books, journals, websites you have available and as a result, take-home exams usually require more exploration and in-depth responses than other types of exams.

It is just as important to be organised with take home exams. Although there is usually a longer period available for completing these types of exams, the risk is that you can spend too long researching and not enough time planning and writing your exam. It is also important to allow enough time for submitting your completed exam.

  Tips for completing take home exams

  • Arrange for a quiet and organised space to do the exam
  • Tell your family or house mates that you will be doing a take-home exam and that you would appreciate their cooperation
  • Make sure you know the correct date of submission for the exam paper
  • Know the exam format, question types and content that will be covered
  • As with open book exams, read your textbook and work through any chapter questions
  • Do preliminary research and bookmark useful websites or download relevant journal articles
  • Take notes and/or mark sections of your textbook with sticky notes
  • Organise and classify your notes in a logical order so once you know the exam topic you will be able to find what you need to answer it easily

When answering open book and take-home exams remember these three steps below.

Three steps of analysing a question

Multiple choice

Multiple choice questions are often used in online assignments, quizzes, and exams. It is tempting to think that these types of questions are easier than short answer or essay questions because the answer is right in front of you. However, like other types of assessment, multiple choice questions require you to understand and apply the content from your study materials or lectures. This requires preparation and thorough content knowledge to be able to retrieve the correct answer quickly. The following sections discuss strategies on effectively preparing for, and answering, multiple choice questions, the typical format of multiple choice questions, and some common myths about these types of questions.

Preparing for multiple choice questions

  •  Prepare as you do for other types of exams (see the Preparing for Exams chapter for study strategies).
  • Find past or practice exam papers (where available), and practise doing multiple choice questions.
  • Create your own multiple choice questions to assess the content; this prompts you to think about the material more deeply and is a good way to practise answering multiple choice questions.
  • If there are quizzes in your course, complete these (you may be able to have multiple attempts to help build your skills).
  • Calculate the time allowed for answering the multiple-choice section of the exam. Ideally do this before you get to the exam if you know the details.

Strategies for use during the exam

  • Consider the time allocated per question to guide how you use your time in the exam.  Don’t spend all of your time on one question, leaving the rest unanswered.  Figure 23.3 provides some strategies for managing questions during the exam.
  • Carefully mark your response to the questions and ensure that your answer matches the question number on the answer sheet.
  • Review your answers if you have time once you have answered all questions on the exam.

Figure 23.3: Three tips for multiple choice exams

Format of multiple choice questions

The most frequently used format of a multiple choice question has two components: the question or statement (which may include additional detail) and the possible answers.

Table 23.1 Multiple choice questions

The question or statement: Analyse this very carefully, as the key words give you the information needed to determine the correct answer.
Take care with small words that are qualifiers (e.g., ‘not’, ‘only’, ‘today’), as they place limitations on the situation or problem (e.g., which answer is not a type of cat).
The possible answers: There may be as few as three, but generally there are four or five possible answers, made up of the following types:
• one or more incorrect answers;
• one or more correct answers, one of which is a more accurate or fuller answer than the other/s.

The example below is of a simple form of multiple choice question.

An example of a simple form of a multiple choice question

Multiple choice myths


These are some of the common myths about multiple choice questions that are NOT accurate:

  • You don’t need to study for multiple choice tests
  • Multiple choice questions are easy to get right
  • Getting these questions correct is just good luck
  • Multiple choice questions take very little time to read and answer
  • Multiple choice questions cannot cover complex concepts or ideas
  • C is most likely correct
  • Answers will always follow a pattern, e.g., badcbadcbadc
  • You get more questions correct if you alternate your answers

None of the statements above are true! Multiple choice questions may appear short, with the answer provided, but this does not mean that you will be able to complete them quickly. Some questions require thought and further calculations before you can determine the answer.

Short answer exams

Short answer, or extended response, exams focus on knowledge and understanding of terms and concepts along with the relationships between them. Depending on your study area, short answer responses could require you to write a sentence or a short paragraph, or to solve a mathematical problem. Check the expectations with your lecturer or tutor prior to your exam. Try the preparation strategies suggested in the section below.

Preparation strategies for short answer responses

  • Concentrate on key terms and concepts
  • It is not advised to prepare and learn specific answers as you may not get that exact question on exam day; instead know how to apply your content.
  • Learn similarities and differences between similar terms and concepts, e.g. stalagmite and stalactite.
  • Learn some relevant examples or supporting evidence you can apply to demonstrate your application and understanding.

There are also some common mistakes to avoid when completing your short answer exam as seen below.

Common mistakes in short answer responses

  • Misinterpreting the question
  • Not answering the question sufficiently
  • Not providing an example
  • Response not structured or focused
  • Wasting time on questions worth fewer marks
  • Leaving questions unanswered
  • Not showing working (if calculations were required)

Use these three tips in Figure 23.6 when completing your short answer responses.

Use the keywords in the question (e.g. define, explain, analyse...) to know how to appropriately answer the question. Read and answer all parts of the question. You may be required to do more than one thing, e.g. “Define and give an example of...”.

Essay exams

As with other types of exams, you should adjust your preparation to suit the style of questions you will be asked. Essay exam questions require a response with multiple paragraphs and should be logical and well-structured.

It is preferable not to prepare and learn an essay in anticipation of the question you may get on the exam. Instead, it is better to learn the information that you would need to include in an essay and be able to apply this to the specific question on exam day. Although you may have an idea of the content that will be examined, usually you will not know the exact question. If your exam is handwritten, ensure that your writing is legible. You won’t get any marks if your writing cannot be read by your marker. You may wish to practise your handwriting, so you are less fatigued in the exam.

Follow these three tips in Figure 23.7 below for completing an essay exam.

Figure 23.7: Three tips for essay exams

Case study exams

Case study questions in exams are often quite complex and include multiple details. This is deliberate to allow you to demonstrate your problem solving and critical thinking abilities. Case study exams require you to apply your knowledge to a real-life situation. The exam question may include information in various formats including a scenario, client brief, case history, patient information, a graph, or table. You may be required to answer a series of questions or interpret or conduct an analysis. Follow the tips below in Figure 23.8 for completing a case study response.

Figure 23.8: Three tips for case study exams

Maths exams

This section covers strategies for preparing for, and completing, maths-based exams. When preparing for a maths exam, an important consideration is the type of exam you will be sitting and what you can, and cannot, bring in with you (for in-person exams). Maths exams may be open, restricted or closed. More information about each of these is included in Table 23.2 below. The information about the type of exam for your course can be found in the examination information provided by your university.

Table 23.2 Types of maths exams

Open exams. Materials allowed: any printed or written material and a calculator.
• Avoid bringing in too much information, as you may not be able to find the information you need quickly enough.
• Organise any notes or books you bring to the exam; use tabs to identify different sections.
• Summarise and highlight key points in different colours so they are easy to find.
• If you have an online textbook/studybook, consider whether there are sections you may need to print out.
Restricted exams. Materials allowed: only specific items, normally a calculator and sometimes a formula sheet.
• Practise using the formula sheet while studying to familiarise yourself with it, so you can quickly find everything you need.
Closed exams. Materials allowed: writing and drawing instruments only.
• Know what will and will not be assessed in the exam.
• You may be provided with a formula sheet; if so, know what will be included and practise using it.

Once you have considered the type of exam you will be taking and know what materials you will be able to use, you need to focus on preparing for the exam. Preparation for your maths exams should be happening throughout the semester.

Maths exam preparation tips

  • Review the information about spaced practice in the previous chapter Preparing for Exams to maximise your exam preparation
  • It is best NOT to start studying the night before the exam. Cramming doesn’t work as well as spending regular time studying throughout the course. See the additional information on cramming in the previous chapter, Preparing for Exams.
  • Review your notes and make a concise list of important concepts and formulae
  • Make sure you know these formulae and more importantly, how to use them
  • Work through your tutorial problems again (without looking at the solutions). Do not just read over them. Working through problems will help you to remember how to do them.
  • Work through any practice or past exams which have been provided to you. You can also make your own practice exam by finding problems from your course materials. See the Practice Testing section in the previous Preparing for Exams chapter for more information.
  • When working through practice exams, give yourself a time limit. Don’t use your notes or books, treat it like the real exam.
  • Finally, it is essential to get a good night’s sleep before the exam so you are well rested and can concentrate when you take the exam.

Multiple choice questions in maths exams

Multiple choice questions in maths exams normally test your knowledge of concepts and may require you to complete calculations. For more information about answering multiple choice questions, please see the multiple choice exam section in this chapter.

Short answer questions in maths exams


These types of questions in a maths exam require you to write a short answer response to the question and provide any mathematical working. Things to remember for these question types include:

  • What is the question asking you to do?
  • What information are you given?
  • Is there anything else you need to do (in multi-step questions) to get the answer?
  • Highlight or underline the key words. If possible, draw a picture; this helps you to visualise the problem (and there may be marks associated with diagrams).
  • Show all working! Markers cannot give you marks if they cannot follow your working.
  • Check your work.
  • Ensure that your work is clear and easy to read.

Exam day tips

Before you start your maths exam, you should take some time to peruse (read through) the exam.  Regardless of whether your exam has a dedicated perusal time, we recommend that you spend time at the beginning of the exam to read through the whole exam. Below are some strategies for perusing and completing maths based exams.

When you commence your exam:

  • Read the exam instructions carefully, if you have any queries, clarify with your exam supervisor
  • During the perusal time, write down anything you are worried about forgetting during the exam
  • Read each question carefully, look for key words, make notes and write formulae
  • Prioritise questions. Do the questions you are most comfortable with first and spend more time on the questions worth more marks. This will help you to maximise your marks.

Once you have read through your options and made a plan on how to best approach your exam, it is time to focus on completing your maths exam. During your exam:

  • Label each question clearly—this will allow the marker to find each question (and part), as normally you can answer questions in any order you want! (If you are required to answer the questions in a particular order it will be included as part of your exam instructions.)
  • If you get stuck, write down anything you know about that type of question – it could earn you marks
  • The process is important—show that you understand the process by writing your working or the process, even if the numbers don’t work out
  • If you get really stuck on a question, don’t spend too long on it.  Complete the other questions, something might come to you when you are working on a different question.
  • Where possible, draw pictures even if you can’t find the words to explain
  • Avoid using whiteout to correct mistakes, use a single line to cross out incorrect working
  • Don’t forget to use the correct units of measurement
  • If time permits, check your working and review your work once you have answered all the questions

This chapter provided an overview of different types of exams and some specific preparation strategies.  Practising for the specific type of exam you will be completing has a number of benefits, including helping you to become comfortable (or at least familiar) with the type of exam and allowing you to focus on answering the questions themselves.  It also allows you to adapt your exam preparation to best prepare you for the exam.

  • Know your exam type and practise answering those types of questions.
  • Ensure you know the requirements for your specific type of exam (e.g., closed, restricted, open book) and what materials you can use in the exam.
  • Multiple choice exams – read the response options carefully.
  • Short answer exams– double check that you have answered all parts of the question.
  • Essay exams – practise writing essay responses under timed exam conditions.
  • Case study exams – ensure that you refer to the case in your response.
  • Maths exams – include your working for maths and statistics exams.
  • For handwritten exams, write legibly so your marker can read your work.

Academic Success Copyright © 2021 by Anita Frederiks; Kate Derrington; and Cristy Bartlett is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Multiple-choice questions: pros and cons


Multiple-choice questions should contain a question (known as the stem), the correct answer (the key) and distractors (other plausible options). Multiple-choice questions can be used at different points in the learning process, to check for understanding or as a low stakes retrieval task. There are a range of benefits linked to using this quizzing technique in the classroom. However, multiple-choice questioning has limitations and is not a perfect classroom strategy; no classroom strategy is. Below are some pros and cons to consider when planning, designing and using multiple-choice questions (MCQs).

Pros of multiple-choice questions:

  • MCQs are a flexible questioning technique: they can be used at various points in a lesson and throughout the learning process. MCQs can be used for both formative and summative assessment, inside or outside of the classroom. They are also versatile in terms of the content and type of questions asked, which can range from factual recall to higher-order thinking (if the questions are carefully crafted).
  • MCQs can provide retrieval support for younger students and students with learning difficulties making retrieval practice more accessible and the challenge desirable. They can be differentiated through scaffolded question design. Initial retrieval success is important and having the correct answer visible increases the likelihood of success and that can lead to increased confidence and motivation.
  • MCQs for quizzing can be flexible in terms of time spent in a lesson. MCQs can be delivered relatively quickly, not dominating lesson time also meaning more time can be used for meaningful feedback and discussion. As students can answer MCQs fairly quickly, in comparison to free recall or extended answers, this means more questions can be asked to test a significant amount of knowledge and content. A concern with checking for understanding and retrieval practice, can be finding the time to do so in addition to teaching a demanding and content heavy curriculum. Checking for understanding and retrieval practice are essential and cannot be abandoned but MCQs can assist in terms of timing within a lesson.
  • MCQs can support responsive teaching in the classroom. Carefully designed MCQs can address potential misconceptions that may have developed in previous lessons, this is very useful for the teacher to be aware of and respond to.
  • MCQs are graded and scored objectively – answers are either right or wrong, no need for moderation or review. MCQs can be workload friendly in terms of feedback and marking. There are a variety of digital tools that can provide instant feedback to students. Alternatively, students can self or peer assess MCQs, monitored by the teacher.
  • Another workload benefit of MCQs is that a carefully constructed quiz can be repeated and used again, to assist with regular and spaced retrieval practice.
  • MCQs can be used with students across different ages and different subjects. MCQs can and ideally should be used across year groups/departments to promote consistency of the content being quizzed. The questions can be the same but the teacher can have flexibility of delivery of the MCQ for example one teacher may use a digital tool to ask questions but their colleague may prefer to embed questions into their presentations with students using mini white boards to respond. The questions used for MCQ quizzes can be designed so they can be used for short answer questions, simply removing the distractors and correct answer to increase the level of challenge.

Cons of multiple-choice questions:

  • If MCQs are not designed well, they won’t require effortful or meaningful retrieval; instead, they are more likely to involve low-level recognition or elimination. Distractors must be plausible, and thinking of plausible distractors can be a challenge for teachers. Two plausible distractors plus the correct option is sufficient. Writing carefully designed questions and plausible options can also be time consuming. A good way to address this is to view other teachers’ quizzes and use or adapt their questions, or, better still, to work together within a department or phase to design MCQ quizzes.
  • MCQs can be used for both summative and formative assessments but if MCQs are used for end of unit tests or any form of high stakes assessment it can be difficult for them to be viewed as a low stakes retrieval task by learners. Some students will make the distinction but it is important that the teacher communicates with their class the purpose of the MCQ quiz.
  • A reason some educators are opposed or reluctant to use MCQs is the potential for guesswork. It can be difficult for teachers to know whether students recalled the correct information or simply guessed (although MCQs are likely to be more reliable than simple true/false questions), but there are ways to tackle this through elaboration and further questioning.
  • Some online quizzing tools use timers and award points depending on the speed of students’ answers. This encourages students to rush, skip careful reading, and make errors. Students with learning difficulties or with English as an additional language may need longer to read and process the question and to select or recall the information; a timer can cause pressure and/or panic.
  • Students don’t always check their answers and reflect on their progress, preferring to view scores rather than identify and address the gaps in their knowledge, yet this is a vital element of the learning process for moving learners forward. If a student has scored 15/20 on an MCQ quiz, they should be encouraged to check which answers were correct and incorrect so they can learn from their mistakes and avoid repeating them.
  • There is no flexibility in terms of credit with MCQs: answers are either correct or incorrect. Even if a student has some knowledge linked to the question, it will not be awarded or recognised, which can be frustrating for the student.
  • MCQs as a strategy to promote retrieval practice have limitations. There must be opportunities for students to engage in free recall and elaboration. Teachers should not rely solely on MCQs for retrieval practice; other strategies can and should be used in addition.

There are both pros and cons, but it is clear there is a place for multiple-choice questions in the classroom. They can enhance learning by checking for understanding, identifying misconceptions, and supporting regular retrieval practice. MCQs can also be used to promote consistency across a curriculum and to support teacher workload.

You can read more about multiple-choice questions in a previous blog post.

For more resources on questioning, check out our podcast with Michael Chiles on ‘questioning in the classroom’. All of our resources are available in our free Resource Library .




6 Test-Taking Strategies for Multiple Choice Exams [With Examples and Data]

  • Published on May 5, 2017


Ever got stuck on multiple-choice questions in a test where you were not certain about the answer? Or, you had to rush through the last few questions due to paucity of time.

When faced with such situations, the best you can do is eliminate as many options as possible and make an educated guess.

Educated guess!

Yes, that’s one of the obvious multiple-choice test-taking strategies top students follow.

But, are you really making educated guesses?

Or, are you just randomly picking one of the remaining (after eliminating the options you can) options?

Here are the hacks, or you may call them test-taking strategies, for multiple-choice tests – some of them used by me over the years – you can use to smart-guess. These strategies are supported by data, too, drawn from Rock Breaks Scissors: A Practical Guide to Outguessing and Outwitting Almost Everybody by William Poundstone. (He crunched statistics on a sample of 100 tests – 34 from schools and colleges and 66 from other sources – comprising 2,456 multiple-choice questions. These tests included middle school, high school, college, and professional school exams, driver’s practice tests, the US Naturalization Self Test, newspaper quizzes, and so on.)

Without further ado, here are six hacks you can use when taking multiple choice tests:

1. How to guess answers to true-false questions?

In true-false tests, true (T) answers are more common than false (F): according to Poundstone’s analysis, on average, 56% of answers are T and 44% F.

It’s not hard to see why. True statements come to our mind naturally, and hence with less effort, but we need to make up false statements, which requires more effort. No wonder more T answers creep into question papers, as test makers unwittingly take the path of least resistance.

Another pattern in true-false tests: two same responses (TT or FF) in a row are less likely than two dissimilar responses (TF or FT).

To give an example from Poundstone’s book, the answer key to 20 questions from a college textbook (Plummer, McGeary, and Carlson’s Physical Geology, ninth edition) is:

F T T F T F F T T F T T F T T T F T T F

At the first glance, the answers seem to be randomly distributed.

But they aren’t.

In this sequence, two successive responses are the same seven times out of nineteen (the twentieth answer has no successor). That is, the chance that the next answer will be different from the present one is 63% (12/19), which is higher than the expected 50% if the key were completely random.

Check it out for any test. You’ll find it to be true on most occasions.
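To make the counting concrete, here is a short Python sketch (my own illustration, not from Poundstone) that computes the share of T answers and the alternation rate for the answer key quoted above:

```python
# The 20-answer key quoted above, as a string of T/F characters.
key = "FTTFTFFTTFTTFTTTFTTF"

# Share of true answers in the key.
true_share = key.count("T") / len(key)

# Adjacent pairs of answers (19 of them), and how often they differ.
pairs = list(zip(key, key[1:]))
different = sum(1 for a, b in pairs if a != b)

print(f"T share: {true_share:.0%}")              # T share: 60%
print(f"Alternation: {different}/{len(pairs)}")  # Alternation: 12/19
```

Running this confirms the figures in the text: 12 of the 19 adjacent pairs differ, an alternation rate of about 63%.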

Let’s understand how to apply these two hacks through a hypothetical test with ten true-false questions.


Step 1: As always, first mark the answers you know. Let’s say you know the answers to questions 3, 5, 6, 8, and 10, and you mark these on your answer sheet.


Step 2: Now, come to the questions where you’re clueless. Of these, first pick those where both neighboring responses are the same (both T or both F), and choose the opposite as the answer. Here, question 7 has both its neighboring answers T, so pick F as the answer for 7.


Step 3: When the preceding and succeeding answers are different, pick T as your response, because T is likelier than F. So, we pick T for both 4 and 9.


Step 4: You’re now left with the first two questions. Here, TF will be the best guess, as it’ll form a non-repeating pattern.

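The steps above can be sketched as a simple function. This is my own formalisation of the heuristic (the function name and the "?" convention for unknowns are assumptions, not from the source), and it covers Steps 1 to 3; runs of unknowns at the edges, as in Step 4, would also need the non-repeating-pattern rule:

```python
def guess_true_false(answers):
    """Fill each unknown '?' in a T/F answer string.

    If both neighbors agree, pick the opposite (Step 2);
    otherwise default to T, the statistically likelier answer (Step 3).
    """
    filled = list(answers)
    for i, a in enumerate(filled):
        if a != "?":
            continue  # Step 1: keep the answers you already know.
        prev = filled[i - 1] if i > 0 else None
        nxt = filled[i + 1] if i + 1 < len(filled) else None
        if prev is not None and prev == nxt:
            filled[i] = "F" if prev == "T" else "T"  # Step 2
        else:
            filled[i] = "T"                          # Step 3
    return "".join(filled)

print(guess_true_false("T?T"))  # TFT: both neighbors T, so guess F
print(guess_true_false("T?F"))  # TTF: neighbors differ, so guess T
```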

2. How to guess answers to multiple-choice questions?

Through his data, Poundstone found following probabilities in case of multiple-choice questions:

  • On tests with three choices (say, A, B, and C), all the options were equally likely to be correct.
  • On tests with four choices (say, A, B, C, and D), B was slightly more likely to be correct (28%). Remember, the expected likelihood of each option being correct is 25%.
  • And on tests with five choices (say, A, B, C, D, and E), E was the most commonly correct answer (23%). C was the least (17%). In this case, the expected likelihood of each option being correct is 20%.

As the number of options increases, the bias toward a particular answer increases. To quote Poundstone, “This is in line with experimental findings that the quality of randomizing decreases as the number of options increases.”

So, B and E are better guesses on 4-option and 5-option multiple-choice tests, respectively, than the middle answer (a common guess hack) or a random pick.

And, as with true-false questions, Poundstone’s analysis showed that correct answers in multiple-choice tests are less likely than chance to repeat the previous answer.

For three-choice tests, he found that the correct choice repeated the previous answer only 25% of the time, against an expected 33%. For four-choice, 19% (against an expected 25%). And for five-choice, 18% (against an expected 20%).
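To get a feel for how much these biases are worth, here is a quick back-of-the-envelope calculation (my own, using the percentages quoted above) of the edge in percentage points:

```python
def edge(observed, expected):
    """Edge of the observed rate over chance, in percentage points."""
    return round((observed - expected) * 100, 1)

# Letter bias: best letter's observed rate vs the uniform rate.
print(edge(0.28, 0.25))   # 'B' on four-choice tests: +3.0 points
print(edge(0.23, 0.20))   # 'E' on five-choice tests: +3.0 points

# Repeat avoidance: observed repeat rate vs the chance repeat rate
# (negative values mean repeats are rarer than chance).
print(edge(0.25, 1 / 3))  # three-choice: -8.3 points
print(edge(0.19, 0.25))   # four-choice: -6.0 points
print(edge(0.18, 0.20))   # five-choice: -2.0 points
```

Note that the repeat-avoidance bias is actually larger than the letter bias, which is why the voting examples below lean on it so heavily.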

Wondering how to answer multiple-choice questions using these two hacks?

Here is an example.

Consider the following three questions (#28-30) in a test where you know the answers to questions 28 and 30: B and D, respectively. The only thing you know about #29 is that option C can’t be the answer.

How do you go about answering #29?

First, rule out any choice that you know for sure is wrong. Here, it’s C. Of the remaining three options A, B, and D, you give one vote to B, because B is most likely to be correct (albeit by a small margin) in a four-choice test.

Also, because answers are less likely to repeat, give one vote to A, the only remaining option that matches neither neighboring answer; B and D get nothing from this rule.

[Image: vote tally: one vote each for A and B, none for D]

Now, you’ve one vote each for A and B, and because they get equal votes, pick any of the two as the answer to 29.

Let’s take another example.

Here, the answers to both 28 and 30 are A, and you have to guess on 29 (with option C again ruled out).

Repeating the process we just followed in the previous example, B gets two votes and D, one.

[Image: vote tally: two votes for B, one for D]

So, the guess here is B, the one with the higher vote.
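The voting procedure from the two examples can be written down as a small function. This is my own sketch of the method, with the vote weights kept as simple as the examples describe (one vote per rule):

```python
def guess_mc(options, likeliest, neighbors, ruled_out=()):
    """Vote-based guess for one multiple-choice question.

    options   -- all letters, e.g. 'ABCD'
    likeliest -- the statistically favored letter ('B' on 4-option tests)
    neighbors -- answers to the preceding and following questions
    ruled_out -- letters you know are wrong
    Returns the letter(s) with the most votes; a tie means "pick any".
    """
    votes = {}
    for opt in options:
        if opt in ruled_out:
            continue
        score = 0
        if opt == likeliest:
            score += 1          # rule 1: favored letter
        if opt not in neighbors:
            score += 1          # rule 2: answers rarely repeat
        votes[opt] = score
    best = max(votes.values())
    return sorted(opt for opt, s in votes.items() if s == best)

# First example: neighbors answered B and D, C ruled out.
print(guess_mc('ABCD', 'B', {'B', 'D'}, ruled_out={'C'}))  # ['A', 'B']
# Second example: both neighbors answered A, C ruled out.
print(guess_mc('ABCD', 'B', {'A'}, ruled_out={'C'}))       # ['B']
```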

3. Outlier options are less likely to be correct

Contemporary authors are much more at liberty to be candid than were authors of previous centuries, but modern writers nevertheless often find themselves ——– portions of their work.

A. Emancipating

B. Censoring

C. Refuting

D. Censuring

E. Ameliorating

F. Expurgating

Here, options B, C, D, and F are similar in intent (finding fault, criticizing, disapproving), whereas A and E are different in meaning, and hence outliers. Correct answers: B and F.

If 3r = 18, what is the value of 6r + 3?

This is a simple question, but if you’re running short on time and have no option but to guess, you can eliminate the first answer using the outlier hack.

The correct answer here is D.

2x – 3y = -14 and 3x – 2y = -6

If (x, y) is the solution to the system of equations above, what is the value of x – y?

Using the outlier hack, you may narrow your consideration down to B and C, or drop A and D.

The correct answer is C.

In the first example (contemporary authors …), if A or E were the correct answer, why would the test maker take so much pain to create four similar wrong answers? It’s harder to create close-looking, close-meaning responses than disjointed ones. Moreover, an outlier answer would make things easier for test-takers: if I knew that the meaning of A or E fit the context of the question better, I could immediately rule out the four options whose meaning doesn’t.
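For quantitative questions, the outlier hack can even be automated. The answer values below are hypothetical (the original answer choices aren’t reproduced here); the sketch just flags the option whose value sits farthest from the rest:

```python
def flag_outlier(choices):
    """Return the option whose value lies farthest from the mean of
    the other options -- a rough numeric 'outlier' filter. Sketch only."""
    def distance(opt):
        others = [v for o, v in choices.items() if o != opt]
        return abs(choices[opt] - sum(others) / len(others))
    return max(choices, key=distance)

# Hypothetical choices for "If 3r = 18, what is 6r + 3?" (answer: 39).
print(flag_outlier({'A': 6, 'B': 35, 'C': 39, 'D': 41}))  # 'A'
```

Eliminating the flagged option is exactly the kind of narrowing described above; it doesn’t pick the answer, it just shrinks the guessing pool.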

4. Universal qualifiers are more likely to be correct

Contrary to the popular guess-practice of avoiding answers with universal qualifiers such as always, none, never, and all, they’re in fact the best guesses, as per Poundstone’s analysis. He found that none / all answers in his sample were correct on a whopping 52% of occasions.

Well, it’s contrary to what even I believed. And, therefore, I checked it myself on three tests and found it to be largely correct – the lowest correct response rate being 37% and the highest, 50%.


5. Grammatical clues can throw up the answer

Look for grammatical clues or ways in which a response, when combined with the stem, makes for a better sentence.

A word used to describe a noun is called an:

A. Adjective

B. Conjunction

In this example, only ‘Adjective’ starts with a vowel, and hence it is the only option that forms a grammatically correct sentence when combined with the stem.

Which option would do the most to promote the application of nuclear discoveries to medicine?

A. Trained radioactive therapy specialists.

B. Developing standardized techniques for treatment of patients.

C. Do not place restrictions on the use of radioactive substances.

D. If the average doctor is trained to apply radioactive treatments.

Here, option B, which fits grammatically with the stem, also happens to be the correct answer.
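A crude version of the article clue can even be checked mechanically. This sketch is my own illustration, and it only looks at vowel letters, so words like ‘hour’ or ‘university’ would fool it:

```python
def fits_article(article, option):
    """True if the option agrees with the stem's trailing article:
    'an' expects a vowel sound, 'a' expects a consonant sound.
    Crude first-letter check only -- sound-based exceptions ignored."""
    starts_with_vowel = option[0].lower() in 'aeiou'
    return starts_with_vowel if article == 'an' else not starts_with_vowel

print(fits_article('an', 'Adjective'))    # True
print(fits_article('an', 'Conjunction'))  # False
```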

6. Longest response is more likely to be correct

The longest response (of course, in non-quant answers) has a greater chance of being the correct one, because test makers tend to load the correct response with qualifying language to make it unambiguously correct.

Which of the following is the best indication of high morale in a supervisor’s unit?

A. Employees are rarely required to work overtime.

B. Employees are willing to give first priority to attaining group objectives, subordinating any personal desires they may have.

C. The supervisor enjoys staying late to plan the next day.

D. The unit gives expensive presents to each other.

Here, too, the longest answer is the correct answer.

To turn right, you should be in:

A. The left lane.

B. The center lane.

C. The lane that’s closest to the direction you want to go.

D. Any one of the lanes.

The correct answer (C), here, is also the longest.
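The longest-response hack is trivially mechanical. A minimal sketch, using the driving question above:

```python
def longest_option(options):
    """Return the letter whose answer text is longest -- the
    'longest response' hack for non-quantitative questions."""
    return max(options, key=lambda letter: len(options[letter]))

turn_right = {
    'A': "The left lane.",
    'B': "The center lane.",
    'C': "The lane that's closest to the direction you want to go.",
    'D': "Any one of the lanes.",
}
print(longest_option(turn_right))  # 'C'
```

This works for the reason stated above: correct responses tend to carry extra qualifying language, and qualifying language adds length.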

Hacks aren’t a replacement for knowledge

No doubt, these hacks are better than wild guessing. But they can’t replace certain knowledge on a topic.

Moreover, their effectiveness increases dramatically when combined with partial knowledge, for example, certainty that eliminates a few options or that gets the neighboring questions right.

Smart-guessing is always better than random-guessing, and it can get you those extra, defining marks. Some of the guess-hacks you can use are:

  • In true-false questions, first, T is more likely than F and, second, a TT or FF sequence is less likely than a TF or FT
  • In multiple-choice questions, first, B and E are the most likely answers in 4- and 5-option questions, respectively, and, second, the same answer is less likely to be repeated in the next question
  • Outlier answers are less likely to be the correct answers
  • Options with universal qualifiers such as always, none, never, and all are more likely to be correct than other options
  • Grammatical clues such as the correct article, subject-verb agreement, or a better overall sentence can sometimes lead you to the correct answer
  • The longest response is more likely to be the correct answer

If you’re inquisitive type, you can test these rules on the sample or real tests that you plan to take and you never know you may unearth a new hack.

Source: Chapter 3 of Rock Breaks Scissors by William Poundstone, and the Brigham Young University Testing Center


Anil is the person behind this website. He writes on most aspects of English language skills.

I’ve gone through several posts on the topic, but this is the first which provides evidence in support, talks about probabilities. I think this is wonderful. Second, now I can do experiments (and find probabilities) similar to what Poundstone did. That will be even more specific to the test I’m planning to take. Thanks for such a wonderful post.

Thanks, Judy. You can indeed increase your chances of getting an answer right by being smarter (with the same level of preparation).

Ideally, multiple-choice exams would be random, without patterns of right or wrong answers. However, all tests are written by humans, and human nature makes it impossible for any test to be truly random. Thanks for sharing the great information. Good Luck!

This is not proper statistical analysis of the likelihood of answers to tests. It is analysis, to a degree, of the creators of tests. Now that software tools are available that populate responses to multiple-choice, T/F, and short-answer tests automatically, the biases of test makers are becoming increasingly irrelevant.




Difference Between An Essay vs Multiple Choice Tests

  • Author Sandra W.


A Comparison of a Multiple Choice and an Essay Test

Below is a guideline prepared by iwriteessays.com comparing essay exams and multiple-choice tests.

1. Preparation

Preparing for a multiple-choice test is a relatively easy task that requires the writer to identify important information when he/she sees it.

An essay exam requires the writer to gather enough knowledge of the subject matter that he/she can answer any prompt with a detailed explanation of ideas.

It is very easy to complete a multiple-choice test in a short time, whether you know the answers or not.

You should not, however, underestimate the intensity of an essay exam. The writer should make sure he organizes his thoughts in order. In addition, you should mind your handwriting if you want your teacher to read and understand your essay; it is useless to write an essay that is not readable.

If your multiple-choice exam is in the form of a fill-in-the-bubble sheet, it is not advisable to use pencils that smudge easily. Smudging is disadvantageous because it confuses the electronic grading machine.

You are lucky if your teacher allows you to use a pen in the essay exam: pens produce a clean paper that is appealing to the eye. If the teacher does not permit the use of a pen, be careful not to mess up your essay paper with smudging.

An essay exam gives you the chance to present your ideas creatively, using language and well-constructed sentences that express the meaning of your thesis.

With a multiple-choice test, you are limited in expressing your ideas creatively, unless you sacrifice your score to decorate patterns on your answer sheet.

4. Hard questions

For a multiple-choice test, you can guess answers if you do not have an idea of what the right answer might be.

On the other hand, for an essay exam, you can construct a sensible and convincing answer even if you have only a vague idea of the main topic.

5. Giving Up

It is practically hard to give up on a multiple-choice test, since you can randomly assign a choice to every question, and chances are minimal that you will get below-average marks.

Giving up on an essay exam is a hard alternative for any student, who faces a tough dilemma between writing repetitive phrases and handing in a blank paper.


/r/ScienceTeachers is a place for science educators to collaborate on and contribute tips, ideas, labs, and curricula. We seek to encourage the sharing of interesting studies, experiments, videos and articles that will interest students of all ages and promote science and critical thinking in their lives.

[High School Biology] Essay Questions vs Multiple Choice

First year teaching biology. I hate multiple choice exams. I know they are given because they are easy to grade. I teach math, so I never use multiple choice, ever.

Are MC exams necessary to test students' understanding of biology?

I would rather use AP-Bio-level essay exams (adjusted to a standard course) to test students' understanding of biology. For example, I would rather make an exam where a student answers 3-5 short essay-style questions.

Is this feasible in a small classroom environment?

How do I train students to learn to write in science? What are good examples of writing for science at the high school level?



I'm The Girl Who Prefers Essay Exams Over Multiple Choice, Don't @ Me

Yes, you read that correctly.

I would much rather take a final where I am handed a blank sheet of paper and a prompt and asked to write as much as I know about the topic in an hour. Ever since I came to college, the word "Scantron" has given me chills. Yeah, that white and blue sheet covered with bubbles that you have to fill in perfectly for it to be graded correctly. Those sheets terrified me, and I knew it was something I would have to get over, even though I thought bubble sheets would never be seen again after the SAT and ACT.

Growing up, my dad would ask me to watch a movie or read a book and then write about what I gained from it. I never enjoyed doing that more so because it meant that I had to follow the book or movie the ENTIRE time and I felt like I was being graded. Looking back at it now, I appreciate my dad doing this because I grew to become a stronger writer and I now pay closer attention to details when it comes to assignments similar to those.

Being passionate about writing and a journalism major has made me appreciate writing significantly more as compared to some of my friends who freak out when it comes to essay prompts. Sure, I could see why students prefer multiple choice since they can use a process of elimination, but wouldn't you rather write out everything you know about the book or topic you learned about in class over a month ago?

I was fortunate enough to have only one final exam this semester, but the worst part was that it was multiple choice. I dreaded the idea that all the questions and answers would sound the same and I would just bubble in random letters. Thankfully, it did not come to that point. The other teacher for this course happens to make her tests short answer, and as great as that sounds to me, she is known not to have much sympathy if you end up writing something that is only partially correct. I was never considered a great test taker, but that doesn't mean I can't apply myself to whatever is thrown in front of me.

Being a journalism major has brought me to enjoy editing friends' papers as well as helping them understand how an essay should be formatted correctly. Yes, essay exams are more time consuming, and that scares a lot of people away because they want to be done as fast as possible, but the more you write and the more knowledge you have of the course, the more information you can provide to the professor who is grading it.

Now, yeah, finals may be over, but I still have another semester to go, and I hope that my tests are formatted as essays rather than bubbling a sheet for an hour of my time. Looking back, I would not change a thing about the way I tackled my finals and other exams throughout my life. I just know now that I prefer to write out my thoughts on a specific topic rather than choose from a limited set of pre-written answers.

Subscribe to our Newsletter

25 beatles lyrics: your go-to guide for every situation, the best lines from the fab four.

For as long as I can remember, I have been listening to The Beatles. Every year, my mom would appropriately blast “Birthday” on anyone’s birthday. I knew all of the words to “Back In The U.S.S.R” by the time I was 5 (Even though I had no idea what or where the U.S.S.R was). I grew up with John, Paul, George, and Ringo instead Justin, JC, Joey, Chris and Lance (I had to google N*SYNC to remember their names). The highlight of my short life was Paul McCartney in concert twice. I’m not someone to “fangirl” but those days I fangirled hard. The music of The Beatles has gotten me through everything. Their songs have brought me more joy, peace, and comfort. I can listen to them in any situation and find what I need. Here are the best lyrics from The Beatles for every and any occasion.

And in the end, the love you take is equal to the love you make

The End- Abbey Road, 1969

The sun is up, the sky is blue, it's beautiful and so are you

Dear Prudence- The White Album, 1968

Love is old, love is new, love is all, love is you

Because- Abbey Road, 1969

There's nowhere you can be that isn't where you're meant to be

All You Need Is Love, 1967

Life is very short, and there's no time for fussing and fighting, my friend

We Can Work It Out- Rubber Soul, 1965

He say, "I know you, you know me", One thing I can tell you is you got to be free

Come Together- Abbey Road, 1969

Oh please, say to me, You'll let me be your man. And please say to me, You'll let me hold your hand

I Wanna Hold Your Hand- Meet The Beatles!, 1964

It was twenty years ago today, Sgt. Pepper taught the band to play. They've been going in and out of style, but they're guaranteed to raise a smile

Sgt. Pepper's Lonely Hearts Club Band-1967

Living is easy with eyes closed, misunderstanding all you see

Strawberry Fields Forever- Magical Mystery Tour, 1967

Can you hear me? When it rains and shine, it's just a state of mind

Rain- Paperback Writer "B" side, 1966

Little darling, it's been long cold lonely winter. Little darling, it feels like years since it' s been here. Here comes the sun, Here comes the sun, and I say it's alright

Here Comes The Sun- Abbey Road, 1969

We danced through the night and we held each other tight, and before too long I fell in love with her. Now, I'll never dance with another when I saw her standing there

Saw Her Standing There- Please Please Me, 1963

I love you, I love you, I love you, that's all I want to say

Michelle- Rubber Soul, 1965

You say you want a revolution. Well you know, we all want to change the world

Revolution- The Beatles, 1968

All the lonely people, where do they all come from. All the lonely people, where do they all belong

Eleanor Rigby- Revolver, 1966

Oh, I get by with a little help from my friends

With A Little Help From My Friends- Sgt. Pepper's Lonely Hearts Club Band, 1967

Hey Jude, don't make it bad. Take a sad song and make it better

Hey Jude, 1968

Yesterday, all my troubles seemed so far away. Now it looks as though they're here to stay. Oh, I believe in yesterday

Yesterday- Help!, 1965

And when the brokenhearted people, living in the world agree, there will be an answer, let it be.

Let It Be- Let It Be, 1970

And anytime you feel the pain, Hey Jude, refrain. Don't carry the world upon your shoulders

I'll give you all i got to give if you say you'll love me too. i may not have a lot to give but what i got i'll give to you. i don't care too much for money. money can't buy me love.

Can't Buy Me Love- A Hard Day's Night, 1964

All you need is love, love is all you need

All You Need Is Love- Magical Mystery Tour, 1967

Whisper words of wisdom, let it be

Blackbird singing in the dead of night, take these broken wings and learn to fly. all your life, you were only waiting for this moment to arise.

Blackbird- The White Album, 1968

Though I know I'll never lose affection, for people and things that went before. I know I'll often stop and think about them. In my life, I love you more

In My Life- Rubber Soul, 1965

While these are my 25 favorites, there are quite literally 1000s that could have been included. The Beatles' body of work is massive and there is something for everyone. If you have been living under a rock and haven't discovered the Fab Four, you have to get musically educated. Stream them on Spotify, find them on iTunes or even buy a CD or record (Yes, those still exist!). I would suggest starting with 1, which is a collection of most of their #1 songs, or the 1968 White Album. Give them chance and you'll never look back.

14 Invisible Activities: Unleash Your Inner Ghost!

Obviously the best superpower..

The best superpower ever? Being invisible of course. Imagine just being able to go from seen to unseen on a dime. Who wouldn't want to have the opportunity to be invisible? Superman and Batman have nothing on being invisible with their superhero abilities. Here are some things that you could do while being invisible, because being invisible can benefit your social life too.

1. "Haunt" your friends.

Follow them into their house and cause a ruckus.

2. Sneak into movie theaters.

Going to the cinema alone is good for your mental health , says science

Considering that the monthly cost of subscribing to a media-streaming service like Netflix is oft...

Free movies...what else to I have to say?

3. Sneak into the pantry and grab a snack without judgment.

Late night snacks all you want? Duh.

4. Reenact "Hollow Man" and play Kevin Bacon.

America's favorite son? And feel what it's like to be in a MTV Movie Award nominated film? Sign me up.

5. Wear a mask and pretend to be a floating head.

Just another way to spook your friends in case you wanted to.

6. Hold objects so they'll "float."

"Oh no! A floating jar of peanut butter."

7. Win every game of hide-and-seek.

Just stand out in the open and you'll win.

8. Eat some food as people will watch it disappear.

Even everyday activities can be funny.

9. Go around pantsing your friends.

Even pranks can be done; not everything can be good.

10. Not have perfect attendance.

You'll say here, but they won't see you...

11. Avoid anyone you don't want to see.

Whether it's an ex or someone you hate, just use your invisibility to slip out of the situation.

12. Avoid responsibilities.

Chores? Invisible. People asking about social life? Invisible. Family being rude? Boom, invisible.

13. Be an expert on ding-dong-ditch.

Never get caught and have the adrenaline rush? I'm down.

14. Brag about being invisible.

Be the envy of the town.

But don't, I repeat, don't go in a locker room. Don't be a pervert with your power. No one likes a Peeping Tom.

Good luck, folks.

19 Lessons I'll Never Forget from Growing Up In a Small Town

There have been many lessons learned..

Small towns certainly have their pros and cons. Many people who grow up in small towns find themselves counting the days until they get to escape their roots and plant new ones in bigger, "better" places. And that's fine. I'd be lying if I said I hadn't thought those same thoughts before too. We all have, but they say it's important to remember where you came from. When I think about where I come from, I can't help having an overwhelming feeling of gratitude for my roots. Being from a small town has taught me so many important lessons that I will carry with me for the rest of my life.

1. The importance of traditions.

Sometimes traditions seem like a silly thing, but the fact of it is that it's part of who you are. You grew up this way and, more than likely, so did your parents. It is something that is part of your family history and that is more important than anything.

2. How to be thankful for family and friends.

No matter how many times they get on your nerves or make you mad, they are the ones who will always be there and you should never take that for granted.

3. How to give back.

When tragedy strikes in a small town, everyone feels obligated to help out because, whether directly or indirectly, it affects you too. It is easy in a bigger city to be able to disconnect from certain problems. But in a small town those problems affect everyone.

4. What the word "community" really means.

Along the same lines as #3, everyone is always ready and willing to lend a helping hand when you need one in a small town and to me that is the true meaning of community. It's working together to build a better atmosphere, being there to raise each other up, build each other up, and pick each other up when someone is in need. A small town community is full of endless support whether it be after a tragedy or at a hometown sports game. Everyone shows up to show their support.

5. That it isn't about the destination, but the journey.

People say this to others all the time, but it takes on a whole new meaning in a small town. It is true that life is about the journey, but when you're from a small town, you know it's about the journey because the journey probably takes longer than you spend at the destination. Everything is so far away that it is totally normal to spend a couple hours in the car on your way to some form of entertainment. And most of the time, you're gonna have as many, if not more, memories and laughs on the journey than at the destination.

6. The consequences of making bad choices.

Word travels fast in a small town, so don't think you're gonna get away with anything. In fact, your parents probably know what you did before you even have a chance to get home and tell them. And forget about being scared of what your teacher, principle, or other authority figure is going to do, you're more afraid of what your parents are gonna do when you get home.

7. To trust people, until you have a reason not to.

Everyone deserves a chance. Most people don't have ill-intentions and you can't live your life guarding against every one else just because a few people in your life have betrayed your trust.

8. To be welcoming and accepting of everyone.

While small towns are not always extremely diverse, they do contain people with a lot of different stories, struggle, and backgrounds. In a small town, it is pretty hard to exclude anyone because of who they are or what they come from because there aren't many people to choose from. A small town teaches you that just because someone isn't the same as you, doesn't mean you can't be great friends.

9. How to be my own, individual person.

In a small town, you learn that it's okay to be who you are and do your own thing. You learn that confidence isn't how beautiful you are or how much money you have, it's who you are on the inside.

10. How to work for what I want.

Nothing comes easy in life. They always say "gardens don't grow overnight" and if you're from a small town you know this both figuratively and literally. You certainly know gardens don't grow overnight because you've worked in a garden or two. But you also know that to get to the place you want to be in life it takes work and effort. It doesn't just happen because you want it to.

11. How to be great at giving directions.

If you're from a small town, you know that you will probably only meet a handful of people in your life who ACTUALLY know where your town is. And forget about the people who accidentally enter into your town because of google maps. You've gotten really good at giving them directions right back to the interstate.

12. How to be humble .

My small town has definitely taught me how to be humble. It isn't always about you, and anyone who grows up in a small town knows that. Everyone gets their moment in the spotlight, and since there's so few of us, we're probably best friends with everyone so we are as excited when they get their moment of fame as we are when we get ours.

13. To be well-rounded.

Going to a small town high school definitely made me well-rounded. There isn't enough kids in the school to fill up all the clubs and sports teams individually so be ready to be a part of them all.

14. How to be great at conflict resolution.

In a small town, good luck holding a grudge. In a bigger city you can just avoid a person you don't like or who you've had problems with. But not in a small town. You better resolve the issue fast because you're bound to see them at least 5 times a week.

15. The beauty of getting outside and exploring.

One of my favorite things about growing up in a rural area was being able to go outside and go exploring and not have to worry about being in danger. There is nothing more exciting then finding a new place somewhere in town or in the woods and just spending time there enjoying the natural beauty around you.

16. To be prepared for anything.

You never know what may happen. If you get a flat tire, you better know how to change it yourself because you never know if you will be able to get ahold of someone else to come fix it. Mechanics might be too busy , or more than likely you won't even have enough cell service to call one.

17. That you don't always have to do it alone.

It's okay to ask for help. One thing I realized when I moved away from my town for college, was how much my town has taught me that I could ask for help is I needed it. I got into a couple situations outside of my town where I couldn't find anyone to help me and found myself thinking, if I was in my town there would be tons of people ready to help me. And even though I couldn't find anyone to help, you better believe I wasn't afraid to ask.

18. How to be creative.

When you're at least an hour away from normal forms of entertainment such as movie theaters and malls, you learn to get real creative in entertaining yourself. Whether it be a night looking at the stars in the bed of a pickup truck or having a movie marathon in a blanket fort at home, you know how to make your own good time.

19. To brush off gossip.

It's all about knowing the person you are and not letting others influence your opinion of yourself. In small towns, there is plenty of gossip. But as long as you know who you really are, it will always blow over.

Grateful Beyond Words: A Letter to My Inspiration

I have never been so thankful to know you..

I can't say "thank you" enough to express how grateful I am for you coming into my life. You have made such a huge impact on my life. I would not be the person I am today without you and I know that you will keep inspiring me to become an even better version of myself.

You have taught me that you don't always have to strong. You are allowed to break down as long as you pick yourself back up and keep moving forward. When life had you at your worst moments, you allowed your friends to be there for you and to help you. You let them in and they helped pick you up. Even in your darkest hour you showed so much strength. I know that you don't believe in yourself as much as you should but you are unbelievably strong and capable of anything you set your mind to.

Your passion to make a difference in the world is unbelievable. You put your heart and soul into your endeavors and surpass any personal goal you could have set. Watching you do what you love and watching you make a difference in the lives of others is an incredible experience. The way your face lights up when you finally realize what you have accomplished is breathtaking and I hope that one day I can have just as much passion you have.

SEE MORE: A Letter To My Best Friend On Her Birthday

The love you have for your family is outstanding. Watching you interact with loved ones just makes me smile . You are so comfortable and you are yourself. I see the way you smile when you are around family and I wish I could see you smile like this everyday. You love with all your heart and this quality is something I wished I possessed.

You inspire me to be the best version of myself. I look up to you. I feel that more people should strive to have the strength and passion that you exemplify in everyday life.You may be stubborn at points but when you really need help you let others in, which shows strength in itself. I have never been more proud to know someone and to call someone my role model. You have taught me so many things and I want to thank you. Thank you for inspiring me in life. Thank you for making me want to be a better person.

Waitlisted for a College Class? Here's What to Do!

Dealing with the inevitable realities of college life.

Course registration at college can be a big hassle and is almost never talked about. Classes you want to take fill up before you get a chance to register. You might change your mind about a class you want to take and must struggle to find another class that fits in the same time period. You also have to make sure no classes overlap in time. Like I said, it's a big hassle.

This semester, I was waitlisted for two classes. Most people in this situation, especially first years, freak out because they don't know what to do. Here is what you should do when this happens.

Don't freak out

This is a rule you should continue to follow no matter what you do in life, but is especially helpful in this situation.

Email the professor

Around this time, professors are getting flooded with requests from students wanting to get into full classes. This doesn't mean your email will be a burden; it means they are expecting interested students to email them. Send a short, concise message telling them that you are interested in the class and ask whether there is any chance for you to get in.

Attend the first class

Often, the advice professors will give you when they reply to your email is to attend the first class. The first class isn't the most important in terms of what will be taught. However, attending it shows that you are serious about taking the course and aren't going to give up on it.

Keep attending class

Every student is in the same position as you are. They registered for more classes than they want to take and are "shopping." For the first couple of weeks, you can drop or add classes as you please, which means that classes that were once full will have spaces. If you keep attending class and keep up with assignments, odds are that you will have priority. Professors give preference to people who need the class for a major and then from higher to lower class year (senior to freshman).

Have a backup plan

For two weeks, or until I find out whether I get into my waitlisted class, I will be attending more than the usual number of classes. That way, if I don't get into my waitlisted class, I won't have a credit shortage; I can simply fall back on my backup class. Chances are that enough people will drop the class, especially if it is very difficult like computer science, and you will have a chance. In popular classes like art and psychology, odds are you probably won't get in, so prepare for that.

Remember that everything works out in the end

Life is full of surprises. So what if you didn't get into the class you wanted? Your life obviously has something else in store for you. It's your job to make sure you make the best out of what you have.



Full-Length SAT Suite Practice Tests

Find full-length practice tests on Bluebook™ as well as downloadable paper (nonadaptive) practice tests to help you prepare for the SAT, PSAT/NMSQT, PSAT 10, and PSAT 8/9.


Abortion Debate Shifts as Election Nears: ‘Now It’s About Pregnancy’

Two years after Roe was struck down, the conversation has focused on the complications that can come with pregnancy and fertility, helping to drive more support for abortion rights.


A crowd of people holding signs that support abortion rights in front of the Supreme Court building.

By Kate Zernike

In the decades that Roe v. Wade was the law of the land, abortion rights groups tried to shore up support for it by declaring “Abortion Is Health Care.”

Only now, two years after the Supreme Court eliminated the constitutional right to abortion, and just six months before the presidential election, has the slogan taken on the force of reality.

The public conversation about abortion has grown into one about the complexities of pregnancy and reproduction, as the consequences of bans have played out in the news. The question is no longer just whether you can get an abortion, but also, Can you get one if pregnancy complications put you in septic shock? Can you find an obstetrician when so many are leaving states with bans? If you miscarry, will the hospital send you home to bleed? Can you and your partner do in vitro fertilization?

That shift helps explain why a record percentage of Americans are now declaring themselves single-issue voters on abortion rights — especially among Black voters, Democrats, women and those ages 18 to 29 . Republican women are increasingly saying their party’s opposition to abortion is too extreme, and Democrats are running on the issue after years of running away from it.

“When the Dobbs case came down, I told my friends — somewhat but not entirely in jest — that America was about to be exposed to a lengthy seminar on obstetrics,” said Elaine Kamarck, a fellow at the Brookings Institution, referring to the Supreme Court decision that overturned Roe v. Wade.

Abortion opponents say that stories about women facing medical complications are overblown and that women who truly need abortions for medical reasons have been able to get them under exceptions to the bans.


  • Open access
  • Published: 26 June 2024

Comparative accuracy of ChatGPT-4, Microsoft Copilot and Google Gemini in the Italian entrance test for healthcare sciences degrees: a cross-sectional study

  • Giacomo Rossettini   ORCID: orcid.org/0000-0002-1623-7681 1 , 2 ,
  • Lia Rodeghiero 3 ,
  • Federica Corradi 4 ,
  • Chad Cook   ORCID: orcid.org/0000-0001-8622-8361 5 , 6 , 7 ,
  • Paolo Pillastrini   ORCID: orcid.org/0000-0002-8396-2250 8 , 9 ,
  • Andrea Turolla   ORCID: orcid.org/0000-0002-1609-8060 8 , 9 ,
  • Greta Castellini   ORCID: orcid.org/0000-0002-3345-8187 10 ,
  • Stefania Chiappinotto   ORCID: orcid.org/0000-0003-4829-1831 11 ,
  • Silvia Gianola   ORCID: orcid.org/0000-0003-3770-0011 10   na1 &
  • Alvisa Palese   ORCID: orcid.org/0000-0002-3508-844X 11   na1  

BMC Medical Education volume  24 , Article number:  694 ( 2024 ) Cite this article


Artificial intelligence (AI) chatbots are emerging educational tools for students in healthcare science. However, assessing their accuracy is essential prior to adoption in educational settings. This study aimed to assess the accuracy of three AI chatbots (ChatGPT-4, Microsoft Copilot and Google Gemini) in predicting the correct answers on the Italian entrance standardized examination test for healthcare science degrees (CINECA test). Secondarily, we assessed the narrative coherence of the AI chatbots' responses (i.e., text output) based on three qualitative metrics: the logical rationale behind the chosen answer, the presence of information internal to the question, and the presence of information external to the question.

An observational cross-sectional study was conducted in September 2023. The accuracy of the three chatbots was evaluated on the CINECA test, where questions are formatted using a multiple-choice structure with a single best answer. The outcome is binary (correct or incorrect). A chi-squared test and a post hoc analysis with Bonferroni correction assessed differences in accuracy among the chatbots. A p-value of < 0.05 was considered statistically significant. A sensitivity analysis was performed, excluding answers that were not applicable (e.g., images). Narrative coherence was analyzed by absolute and relative frequencies of correct answers and errors.

Overall, of the 820 CINECA multiple-choice questions inputted into all chatbots, 12 questions were not imported into ChatGPT-4 (n = 808) and Google Gemini (n = 808) due to technical limitations. We found statistically significant differences in the ChatGPT-4 vs Google Gemini and Microsoft Copilot vs Google Gemini comparisons (p-value < 0.001). The narrative coherence of the AI chatbots revealed "Logical reasoning" as the prevalent category for correct answers (n = 622, 81.5%) and "Logical error" as the prevalent category for incorrect answers (n = 40, 88.9%).

Conclusions

Our main findings reveal that: (A) AI chatbots performed well; (B) ChatGPT-4 and Microsoft Copilot performed better than Google Gemini; and (C) their narrative coherence is primarily logical. Although AI chatbots showed promising accuracy in predicting the correct answer in the Italian entrance university standardized examination test, we encourage candidates to cautiously incorporate this new technology as a supplement to their learning rather than as a primary resource.

Trial registration

Not required.


Being enrolled in a healthcare science degree in Italy requires a university examination, which is a highly competitive and selective process that demands intensive preparation worldwide [ 1 ]. Conventional preparation methods involve attending classes, studying textbooks, and completing practical exercises [ 2 ]. However, with the emergence of artificial intelligence (AI), digital tools like AI chatbots to assist in exam preparation are becoming more prevalent, presenting novel opportunities for candidates [ 2 ].

AI chatbots such as ChatGPT, Microsoft Bing, and Google Bard are advanced language models that can produce responses similar to humans through a user-friendly interface [ 3 ]. These chatbots are trained using vast amounts of data and deep learning algorithms, which enable them to generate coherent responses and predict text by identifying the relationships between words [ 3 ]. Since their introduction, AI chatbots have gained considerable attention and sparked discussions in medical and health science education and clinical practice [ 4 , 5 , 6 , 7 ]. AI chatbots can provide simulations with digital patients, personalized feedback, and help eliminate language barriers; they also present biases, ethical and legal concerns, and content quality issues [ 8 , 9 ]. As such, the scientific community recommends evaluating the AI chatbot’s accuracy of predicting the correct answer (e.g., passing examination tests) to inform students and academics of their value [ 10 , 11 ].

Several studies have assessed the accuracy of AI chatbots to pass medical education tests and exams. A recent meta-analysis found that ChatGPT-3.5 correctly answered most multiple-choice questions across various medical educational fields [ 12 ]. Further research has shown that newer versions of AI chatbots, such as ChatGPT-4, have surpassed their predecessors in passing Specialty Certificate Examinations in dermatology [ 13 , 14 ], neurology [ 15 ], ophthalmology [ 16 ], rheumatology [ 17 ], general medicine [ 18 , 19 , 20 , 21 ], and nursing [ 22 ]. Others have reported mixed results when comparing the accuracy of multiple AI chatbots (e.g., ChatGPT-4 vs Microsoft Bing, ChatGPT-4 vs Google Bard) in several medical examinations tests [ 23 , 24 , 25 , 26 , 27 , 28 , 29 ]. Recently, two studies observed the superiority of ChatGPT-3.5 over Microsoft Copilot and Google Bard in hematology [ 30 ] and physiology [ 31 ] case solving. Recent work has also observed that ChatGPT-4 outperformed other AI Chatbots in clinical dentistry-related questions [ 32 ], whereas another revealed that ChatGPT-4 and Microsoft Bing outperformed Google Bard and Claude in the Peruvian National Medical Licensing Examination [ 33 ].

These findings suggest a potential hierarchy in the accuracy of AI chatbots, although continued study in medical education is certainly warranted [ 3 ]. Further, current studies are limited by predominantly investigating: (A) a single AI chatbot rather than multiple ones; (B) examination tests for students and professionals already in training rather than newcomers to the university; and (C) examination tests for medical specialities rather than for healthcare science (e.g., rehabilitation and nursing). Only two studies [ 34 , 35 ] have attempted to address these limitations, identifying ChatGPT-3.5 as a promising, supplementary tool for passing several standardised admission tests at universities in the UK [ 34 ] and in France [ 35 ]. To our knowledge, no study has been performed on admission tests for a healthcare science degree program. Healthcare science is a profession that includes over 40 areas of applied science supporting the diagnosis, rehabilitation and treatment of several clinical conditions [ 36 ]. Moreover, the only studies conducted in Italy concerned ChatGPT's accuracy in passing the Italian Residency Admission National Exam for medical graduates [ 37 , 38 ], offering opportunities for further research in this setting.

Accordingly, to overcome existing knowledge gaps, this study aimed to assess the comparative accuracy of three updated AI chatbots (ChatGPT-4, Microsoft Copilot and Google Gemini) in predicting the correct answers on the Italian entrance university standardized examination test of healthcare science. The secondary aim was to assess the narrative coherence of the text responses offered by the AI chatbots. Narrative coherence was defined as the internal consistency and sensibility of the internal or external explanation provided by the chatbot.

Study design and ethics

We conducted an observational cross-sectional study following the Strengthening of Reporting of Observational Studies in Epidemiology (STROBE) high-quality reporting standards [ 39 ]. Because no human subjects were included, ethical approval was not required [ 40 ].

This study was developed by an Italian multidisciplinary group of healthcare science educators. The group included professors, lecturers, and educators actively involved in university education in different healthcare disciplines (e.g., rehabilitation, physiotherapy, speech therapy, nursing).

In Italy, university access to the healthcare professions is regulated by law according to short- and long-term workforce needs [ 41 ]. Consequently, the placements available for each degree are established in advance; to be enrolled in an academic year, candidates must take a standardized examination test occurring on the same day for all universities. This process, in most Italian universities, is managed annually by CINECA (Consorzio Interuniversitario per il Calcolo Automatico dell'Italia Nord Orientale), a governmental organization composed of 70 Italian universities, 45 national public research centers, the Italian Ministry of University and Research, and the Italian Ministry of Education [ 42 ]. CINECA prepares the standardized test common to all healthcare disciplines (e.g., nursing and midwifery, rehabilitation, diagnostics and technical, and prevention) for entrance to university [ 43 ]. The test assesses basic knowledge useful as a prerequisite for future education [ 44 ], in line with the knowledge expected of candidates at the end of secondary school, including those from high schools and technical and professional institutes [ 45 ].

For this study, we adopted the official CINECA Tests from the past 13 years (2011–2023), obtained from freely available public repositories [ 46 , 47 ]. The CINECA Test provided 60–80 independent questions per year, for a total of 820 multiple-choice questions considered for the analysis. Every question presents five options, with only one being the correct answer and the remaining four being incorrect [ 44 ]. According to the law, over the years, the CINECA test consisted of multiple-choice questions covering four areas: (1) logical reasoning and general culture, (2) biology, (3) chemistry, and (4) physics and mathematics. The accuracy of each AI chatbot was evaluated as the proportion of correct answers among all possible responses for each area and for the total test. In Additional file 1, we report all the standardized examination tests used, in the Italian language, and an example of a question stem that was exactly replicated.
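The accuracy definition above (proportion of correct answers per area and for the total test) amounts to a grouped tally. A minimal sketch, assuming graded answers arrive as (area, is_correct) pairs — a hypothetical data layout, not the authors' actual pipeline:

```python
from collections import defaultdict

def accuracy_by_area(records):
    """Per-area and overall accuracy from (area, is_correct) pairs.

    Mirrors the paper's definition: the proportion of correct answers
    within each subject area, plus one figure for the whole test.
    """
    totals = defaultdict(lambda: [0, 0])  # area -> [correct, answered]
    for area, is_correct in records:
        totals[area][0] += int(is_correct)
        totals[area][1] += 1
    summary = {area: c / n for area, (c, n) in totals.items()}
    correct = sum(c for c, _ in totals.values())
    answered = sum(n for _, n in totals.values())
    summary["overall"] = correct / answered
    return summary
```

Feeding in one such record per graded question reproduces the per-area breakdown used in the chi-squared comparisons below.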

Variable and measurements

We assessed the accuracy of three AI chatbots in providing accurate responses for the Italian entrance university standardized examination test for healthcare disciplines. We utilized the latest versions of ChatGPT-4 (OpenAI Incorporated, Mission District, San Francisco, United States) [ 48 ], Microsoft Copilot (Microsoft Corporation, WA, US) [ 49 ] and Google Gemini (Alphabet Inc., CA, US) [ 50 ] that were updated in September 2023. We considered the following variables: (A) the accuracy of predicting the correct answer of the three AI chatbots in the CINECA Test and (B) the narrative coherence and errors of the three AI chatbots responses.

The accuracy of three AI chatbots was assessed by comparing their responses to the correct answers from the CINECA Test. AI Chatbots’ answers were entered into an Excel sheet and categorized as correct or incorrect. Ambiguous or multiple responses were marked as incorrect [ 51 ]. Since none of the three chatbots has integrated multimodal input at this point, questions containing imaging data were evaluated based solely on the text portion of the question stem. However, technical limitations can be present, and a sensitivity analysis was performed, excluding answers that were not applicable (e.g., images).
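The scoring rule above — a response counts as correct only if it is a single answer matching the key, while ambiguous or multiple responses are marked incorrect — can be sketched in a few lines. This is an illustrative sketch, not the authors' actual Excel workflow; the dictionary layout and question IDs are assumptions:

```python
def score_responses(chatbot_answers, answer_key):
    """Score one chatbot's answers against the key.

    Mirrors the study's rule: a response is correct only if it is a
    single option matching the key; ambiguous or multiple responses
    (modelled here as sets, or missing answers) are incorrect.
    """
    results = []
    for question_id, correct_option in answer_key.items():
        given = chatbot_answers.get(question_id)
        is_single = isinstance(given, str)
        results.append((question_id, is_single and given == correct_option))
    return results

def accuracy(results):
    """Proportion of correct answers among all scored questions."""
    return sum(ok for _, ok in results) / len(results) if results else 0.0
```

For example, a chatbot that answers two of three questions with the keyed option and gives two options for the third scores 2/3.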

The narrative coherence and errors [ 52 ] of AI chatbot answers for each question were assessed using a standardized system for categorization [ 53 ]. Correct answers were classified as [ 53 ]: (A) “Logical reasoning”, if they clearly demonstrated the logic presented in the response; (B) “Internal information”, if they included information from the question itself; and (C) “External information”, if they referenced information external to the question.

Conversely, incorrect answers were categorized as [ 53 ]: (A) "Logical error", when the chatbot correctly identified the relevant information but failed to convert it into an appropriate answer; (B) "Information error", when the chatbot failed to recognize a key piece of information, whether present in the question stem or available externally; and (C) "Statistical error", for arithmetic mistakes. An example of categorisation is displayed in Additional file 2. Two authors (L.R., F.C.) independently analyzed the narrative coherence, with a third (G.R.) resolving uncertainties. Inter-rater agreement was measured using Cohen's Kappa, interpreted according to the scale offered by Landis and Koch: < 0.00 "poor", 0–0.20 "slight", 0.21–0.40 "fair", 0.41–0.60 "moderate", 0.61–0.80 "substantial", 0.81–1.00 "almost perfect" [ 54 ].
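Cohen's Kappa, used here to quantify inter-rater agreement, corrects the observed agreement for the agreement expected by chance from each rater's label frequencies. A minimal pure-Python sketch (the category labels in the example are hypothetical; the study used standard statistical software):

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's Kappa for two raters labelling the same items.

    kappa = (p_observed - p_expected) / (1 - p_expected), where
    p_expected is the chance agreement implied by the two raters'
    marginal label frequencies.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    if p_expected == 1.0:
        return 1.0  # degenerate case: both raters always use one label
    return (p_observed - p_expected) / (1 - p_expected)
```

On the Landis and Koch scale quoted above, values of 0.84–0.88 fall in the "almost perfect" band.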

We used each multiple-choice question of the CINECA Test, formatted for proper structure and readability. Because prompt engineering significantly affects generative output, we standardized the input formats of the questions following the Prompt-Engineering-Guide [ 55 , 56 ]. First, we manually entered each question in a Word file, left one line of space and then inserted the five answer options one below the other on different lines. If the questions presented text-based answers, they were directly inputted into the 3 AI chatbots. If the questions were presented as images containing tables or mathematical formulae, they were faithfully rewritten for AI chatbot processing [ 57 ]. If the answers had images with graphs or drawings, they were imported only into Microsoft Copilot because ChatGPT-4 and Google Gemini only accept textual input in their current form and could not process and interpret the meaning of complex images, as present in the CINECA Test, at the time of our study [ 58 ].

On the 26th of September 2023, the research group copied and pasted each question into each of the 3 AI chatbots in the same order in which it was presented in the CINECA Test [ 59 ], without translating it from the original Italian because the AIs are language-enabled [ 60 ]. To avoid learning bias, i.e., to prevent the AI chatbots from learning from or being influenced by conversations that existed before the start of the study, we: (A) created and used a new account [ 2 , 51 ], (B) asked each question only once [ 61 , 62 ], (C) did not provide positive or negative feedback on the answers given [ 60 ], and (D) deleted conversations with the AI chatbots before entering each new question into a new chat (with no previous conversations). We present an example of a question and answer in Additional file 3.

Statistical analyses

Categorical variables are presented as absolute frequencies with percentages, and continuous variables as means with confidence intervals (CI, 95%) or medians with interquartile ranges (IQR). The answers were collected as binomial outcomes for each AI chatbot with respect to the reference (CINECA Tests). A chi-square test was used to ascertain whether the percentage of correct answers on the CINECA Test differed among the three AI chatbots according to the different taxonomic subcategories (logical reasoning and general culture, biology, chemistry, and physics and mathematics). A sensitivity analysis was performed, excluding answers that were not applicable (e.g., answers consisting of images with graphs or drawings). A p-value of < 0.05 was considered significant. Since we compared three chatbots, a Bonferroni familywise adjustment for multiple comparisons was applied. Regarding narrative coherence and errors, we calculated the overall correct answers as the relative proportion of correct answers among the overall test answers of each AI chatbot. A descriptive analysis of the reasons for the logical argumentation of correct answers and the categorization of error types is reported as percentages in tables. Statistical analyses were performed with STATA/MP 16.1 software.
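The omnibus chi-square test followed by Bonferroni-adjusted pairwise comparisons can be sketched as follows. This is an illustration, not the authors' STATA code: the correct/incorrect counts are taken from the Results narrative (totals behind Tables 2 and 3) rather than from the paper's Table 1, and the closed-form p-values used here are exact only for 1 or 2 degrees of freedom:

```python
import math

def chi2_stat(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

def chi2_pvalue(stat, df):
    """Chi-square survival function; closed forms exist for df = 1 and 2."""
    if df == 1:
        return math.erfc(math.sqrt(stat / 2))
    if df == 2:
        return math.exp(-stat / 2)
    raise ValueError("only df = 1 or 2 are handled in this sketch")

# Illustrative (correct, incorrect) counts per chatbot, taken from the
# Results narrative, not from the paper's Table 1.
counts = {
    "ChatGPT-4": (763, 45),
    "Microsoft Copilot": (737, 83),
    "Google Gemini": (574, 234),
}

# Omnibus 3 x 2 test (df = 2), then pairwise 2 x 2 tests (df = 1)
# judged against the Bonferroni-adjusted threshold 0.05 / 3.
omnibus_p = chi2_pvalue(chi2_stat(list(counts.values())), df=2)
names = list(counts)
pairwise = {}
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        table = [counts[names[i]], counts[names[j]]]
        pairwise[(names[i], names[j])] = chi2_pvalue(chi2_stat(table), df=1)
```

With these counts, both comparisons involving Google Gemini fall far below the adjusted threshold, consistent with the reported p < 0.001.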

AI chatbots’ multiple-choice questions

From our original sample, we inputted all the multiple-choice questions in Microsoft Copilot ( n  = 820). Twelve multiple-choice questions were not imported in ChatGPT-4 ( n  = 808) and Google Gemini ( n  = 808) since they were images with graphs or drawings. The flowchart of the study is shown in Fig.  1 .

Figure 1. The study flow chart.

AI chatbots’ accuracy

Overall, we found a statistically significant difference in accuracy among the answers of the three chatbots ( p  < 0.001). The results of the Bonferroni-adjusted pairwise comparisons are presented in Table  1 . We found a statistically significant difference in the ChatGPT-4 vs Google Gemini ( p  < 0.001) and Microsoft Copilot vs Google Gemini ( p  < 0.001) comparisons, indicating better accuracy for ChatGPT-4 and Microsoft Copilot than for Google Gemini (Table  1 ). A sensitivity analysis excluding answers that were not applicable (e.g., answers consisting of images with graphs or drawings) showed similar results, reported in Additional file 4.

AI chatbots’ narrative coherence: correct answers and errors

The inter-rater agreement regarding the AI chatbots' narrative coherence was "almost perfect", with kappa ranging from 0.84 to 0.88 for internal and logical answers (Additional file 5). The narrative coherence of the AI chatbots is reported in Tables 2 and 3 . We excluded from these analyses all not-applicable answers (ChatGPT-4: n  = 12, Microsoft Copilot: n  = 0, Google Gemini: n  = 12).

Regarding the categories of correct answers (Table  2 ), in ChatGPT-4 (tot = 763), the most frequent feature was "Logical reasoning" ( n  = 622, 81.5%), followed by "Internal information" ( n  = 141, 18.5%). In Microsoft Copilot (tot = 737), the most frequent feature was "Logical reasoning" ( n  = 405, 55%), followed by "External information" ( n  = 195, 26.4%) and "Internal information" ( n  = 137, 18.6%). In Google Gemini (tot = 574), the most frequent feature was "Logical reasoning" ( n  = 567, 98.8%), followed by a few cases of "Internal information" ( n  = 7, 1.2%).

With respect to the categories of errors (Table  3 ), in ChatGPT-4 (tot = 45), the most frequent reason was "Logical error" ( n  = 40, 88.9%), followed by a few cases of "Information error" ( n  = 4, 8.9%) and "Statistical error" ( n  = 1, 2.2%). In Microsoft Copilot (tot = 83), the most frequent reason was "Logical error" ( n  = 66, 79.1%), followed by a few cases of "Information error" ( n  = 9, 11.1%) and "Statistical error" ( n  = 8, 9.8%). In Google Gemini (tot = 234), the most frequent reason was "Logical error" ( n  = 233, 99.6%), followed by a few cases of "Information error" ( n  = 1, 0.4%).

Main findings

The main findings reveal that: (A) AI chatbots reported an overall high accuracy in predicting the correct answer; (B) ChatGPT-4 and Microsoft Copilot performed better than Google Gemini; and (C) considering the narrative coherence of AI chatbots, the most prevalent modality to present correct and incorrect answers were “Logical” (“Logical reasoning” and “Logical error”, respectively).

Comparing our study with the existing literature poses a challenge due to the limited number of studies that have examined the accuracy of multiple AI chatbots [ 30 , 31 , 32 , 33 ]. Our research shows that AI chatbots can accurately answer questions from the CINECA Test, regardless of the topic (logical reasoning and general culture, biology, chemistry, physics and mathematics). This differs from the fluctuating accuracy found in other studies [ 34 , 35 ]. Our findings support Torres-Zegarra et al.'s observations that the previous version of ChatGPT-4 and Microsoft Bing were superior to Google Bard [ 33 ], while other research groups did not confirm this [ 30 , 31 , 32 ]. This discrepancy may be due to differences in the tests used (e.g., medical specialties vs university entrance), the types of questions targeted at different stakeholders (e.g., professionals vs students), and the versions of the AI chatbots used (e.g., ChatGPT-3.5 vs 4).

The accuracy ranking of AI chatbots in our study might be due to differences in their neural network architecture. ChatGPT-4 and Microsoft Copilot AI use the GPT (Generative Pre-trained Transformer) architecture, while Google Gemini adopts LaMDA (Language Model for Dialogue Application) and later PaLM 2 (Pathways Language Model) in combination with web search [ 32 ]. The differences in the quality, variety, and quantity of data used for training, the optimization strategies adopted (e.g., fine-tuning), and the techniques applied to create the model could also account for the accuracy differences between AI chatbots [ 63 ]. Therefore, the variations mentioned above could lead to different responses to the same questions, affecting their overall accuracy.

In our study, the narrative coherence shows that AI chatbots mainly offer a broader perspective on the discussed topic using logical processes rather than just providing a simple answer [ 53 ]. This can be explained by the computational abilities of AI chatbots and their capacity to understand and analyze text by recognizing word connections and predicting future words in a sentence [ 63 ]. However, it is important to note that our findings are preliminary, and more research is needed to investigate how narrative coherence changes with advancements in AI chatbot technology and updates.

Implications and future perspective

Our study identifies two contrasting implications of using AI chatbots in education: the positive implication regards AI chatbots as a valuable resource, while the negative implication perceives them as a potential threat. First, our study sheds light on the potential role of AI chatbots as supportive tools to assist candidates in preparing for the Italian entrance university standardized examination test of healthcare science. They can complement traditional learning methods such as textbooks or in-person courses [ 10 ]. AI chatbots can facilitate self-directed learning, provide explanations and insights on the topics studied, select and filter materials, and can be personalized to meet the needs of individual students [ 10 ]. In addition to the knowledge components, these instruments contribute to developing competencies, as defined by the World Health Organization [ 64 ]. Virtual simulation scenarios could facilitate the development of targeted skills and attitudes, giving students a virtual interlocutor with a dynamic and human-like approach driven by AI. However, we should highlight that they cannot replace the value of reflection and discussion with peers and teachers, which are crucial for developing the meta-competencies of today's students and tomorrow's healthcare professionals [ 10 ]. Conversely, candidates must be prevented from simply using these tools to answer questions during exams. Encouraging honesty by prohibiting the placement and use of devices (e.g., mobile phones, tablets) in classrooms is important. Candidates must be encouraged to respond with their own preparation and knowledge, given that they are mostly applying for professions where honesty and ethical principles are imperative.

Strengths and limitations

As a strength, we evaluated the comparative accuracy of three AI chatbots in the Italian health sciences university admissions test over the past 13 years on a large sample of questions, considering the narrative consistency of their responses. This enriches the international debate on this topic and provides valuable insights into the strengths and limitations of AI chatbots in the context of university education [ 2 , 3 , 8 , 9 , 11 ].

However, limitations exist and offer opportunities for future study. Firstly, we only used the CINECA Test, while other universities in Italy adopted different tests (e.g., CASPUR and SELECTA). Secondly, we studied three AI chatbots without considering others on the market (e.g., Claude, Perplexity) [ 31 ]. Thirdly, we adopted both paid (ChatGPT-4) and free (Microsoft Copilot and Google Gemini) versions of AI chatbots. Although this choice may be a limitation, we aimed to use the most up-to-date versions of the AI chatbots available when the study was performed. Fourthly, although we inputted all queries into the AI chatbots, we processed only some of them, as only Microsoft Copilot was able to analyse complex images, as reported in the CINECA Tests, at the time of our study [ 65 , 66 , 67 ]. Fifthly, we inputted the test questions only once to simulate the test execution conditions in real educational contexts [ 32 ], although previous studies have prompted the test questions multiple times in AI chatbots to obtain better results [ 68 ]. However, an AI language model operates differently from regular, deterministic software. These models are probabilistic in nature, forming responses by estimating the probability of the next word according to statistical patterns in their training data [ 69 ]. Consequently, posing the same question twice may not always yield identical answers. Sixthly, we did not calculate the response time of the AI chatbots, since this variable is affected by the speed of the internet connection and data traffic [ 51 ]. Seventhly, we assessed the accuracy of AI chatbots in a single country by prompting questions in Italian, which may limit the generalizability of our findings to other contexts and languages [ 70 , 71 ]. Finally, we did not compare the responses of the AI chatbots with those of human students, since there is no national ranking for admission in Italy and each university draws up its own ranking.

AI chatbots have shown promising accuracy in quickly predicting correct answers and in producing grammatically correct, conversationally coherent writing for the Italian standardized entrance examination for healthcare science degrees. The study provides data on the overall performance of different AI chatbots on the standardized examinations administered over the last 13 years to all candidates seeking admission to a healthcare science degree in Italy. These findings should therefore be read as a research exercise and may inform the current debate on the use of AI chatbots in academic contexts. Further research is needed to explore the potential of AI chatbots in other educational settings and to address their limitations as an innovative tool for education and test preparation.

Availability of data and materials

The datasets generated and/or analysed during the current study are available in the Open Science Framework (OSF) repository, https://osf.io/ue5wf/ .

Abbreviations

  • AI: Artificial intelligence

  • CI: Confidence interval

  • CINECA: Consorzio Interuniversitario per il Calcolo Automatico dell'Italia Nord Orientale

  • GPT: Generative pre-trained transformer

  • IQR: Interquartile range

  • LaMDA: Language model for dialogue application

  • PaLM: Pathways language model

  • STROBE: Strengthening the Reporting of Observational Studies in Epidemiology

Redazione. Test d’ammissione professioni sanitarie, il 14 settembre 2023. Sanità Informazione. 2023. https://www.sanitainformazione.it/professioni-sanitarie/1settembre-test-dammissione-alle-professioni-sanitarie-fissato-per-il-14-settembre-2023-alle-ore-13-in-tutta-italia/ . Accessed 6 May 2024.

Kung TH, Cheatham M, Medenilla A, Sillos C, De Leon L, Elepaño C, et al. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digit Health. 2023;2:e0000198.

Rossettini G, Cook C, Palese A, Pillastrini P, Turolla A. Pros and cons of using artificial intelligence Chatbots for musculoskeletal rehabilitation management. J Orthop Sports Phys Ther. 2023;53:1–17.

Fütterer T, Fischer C, Alekseeva A, Chen X, Tate T, Warschauer M, et al. ChatGPT in education: global reactions to AI innovations. Sci Rep. 2023;13:15310.

Mohammadi S, SeyedAlinaghi S, Heydari M, Pashaei Z, Mirzapour P, Karimi A, et al. Artificial intelligence in COVID-19 Management: a systematic review. J Comput Sci. 2023;19:554–68.

Mehraeen E, Mehrtak M, SeyedAlinaghi S, Nazeri Z, Afsahi AM, Behnezhad F, et al. Technology in the Era of COVID-19: a systematic review of current evidence. Infect Disord Drug Targets. 2022;22:e240322202551.

SeyedAlinaghi S, Abbaspour F, Mehraeen E. The Challenges of ChatGPT in Healthcare Scientific Writing. Shiraz E-Med J. 2024;25(2):e141861. https://doi.org/10.5812/semj-141861 .

Karabacak M, Ozkara BB, Margetis K, Wintermark M, Bisdas S. The advent of generative language models in medical education. JMIR Med Educ. 2023;9:e48163.

Mohammad B, Supti T, Alzubaidi M, Shah H, Alam T, Shah Z, et al. The pros and cons of using ChatGPT in medical education: a scoping review. Stud Health Technol Inform. 2023;305:644–7.

Abd-Alrazaq A, AlSaad R, Alhuwail D, Ahmed A, Healy PM, Latifi S, et al. Large language models in medical education: opportunities, challenges, and future directions. JMIR Med Educ. 2023;9:e48291.

Azer SA, Guerrero APS. The challenges imposed by artificial intelligence: are we ready in medical education? BMC Med Educ. 2023;23:680.

Levin G, Horesh N, Brezinov Y, Meyer R. Performance of ChatGPT in medical examinations: a systematic review and a meta-analysis. BJOG Int J Obstet Gynaecol. 2023. https://doi.org/10.1111/1471-0528.17641 .

Passby L, Jenko N, Wernham A. Performance of ChatGPT on Specialty Certificate Examination in Dermatology multiple-choice questions. Clin Exp Dermatol. 2023:llad197. https://doi.org/10.1093/ced/llad197 .

Lewandowski M, Łukowicz P, Świetlik D, Barańska-Rybak W. ChatGPT-3.5 and ChatGPT-4 dermatological knowledge level based on the Specialty Certificate Examination in Dermatology. Clin Exp Dermatol. 2023:llad255. https://doi.org/10.1093/ced/llad255 .

Giannos P. Evaluating the limits of AI in medical specialisation: ChatGPT’s performance on the UK Neurology Specialty Certificate Examination. BMJ Neurol Open. 2023;5:e000451.

Teebagy S, Colwell L, Wood E, Yaghy A, Faustina M. Improved performance of ChatGPT-4 on the OKAP examination: a comparative study with ChatGPT-3.5. J Acad Ophthalmol. 2023;15:e184–7.

Madrid-García A, Rosales-Rosado Z, Freites-Nuñez D, Pérez-Sancristóbal I, Pato-Cour E, Plasencia-Rodríguez C, et al. Harnessing ChatGPT and GPT-4 for evaluating the rheumatology questions of the Spanish access exam to specialized medical training. Sci Rep. 2023;13:22129.

Haze T, Kawano R, Takase H, Suzuki S, Hirawa N, Tamura K. Influence on the accuracy in ChatGPT: Differences in the amount of information per medical field. Int J Med Inf. 2023;180:105283.

Yanagita Y, Yokokawa D, Uchida S, Tawara J, Ikusaka M. Accuracy of ChatGPT on medical questions in the national medical licensing examination in Japan: evaluation study. JMIR Form Res. 2023;7:e48023.

Rosoł M, Gąsior JS, Łaba J, Korzeniewski K, Młyńczak M. Evaluation of the performance of GPT-3.5 and GPT-4 on the polish medical final examination. Sci Rep. 2023;13:20512.

Brin D, Sorin V, Vaid A, Soroush A, Glicksberg BS, Charney AW, et al. Comparing ChatGPT and GPT-4 performance in USMLE soft skill assessments. Sci Rep. 2023;13:16492.

Kaneda Y, Takahashi R, Kaneda U, Akashima S, Okita H, Misaki S, et al. Assessing the performance of GPT-3.5 and GPT-4 on the 2023 Japanese nursing examination. Cureus. 2023;15:e42924.

Kleinig O, Gao C, Bacchi S. This too shall pass: the performance of ChatGPT-3.5, ChatGPT-4 and new bing in an Australian medical licensing examination. Med J Aust. 2023;219:237.

Roos J, Kasapovic A, Jansen T, Kaczmarczyk R. Artificial intelligence in medical education: comparative analysis of ChatGPT, Bing, and medical students in Germany. JMIR Med Educ. 2023;9:e46482.

Ali R, Tang OY, Connolly ID, Fridley JS, Shin JH, Zadnik Sullivan PL, et al. Performance of ChatGPT, GPT-4, and google bard on a neurosurgery oral boards preparation question bank. Neurosurgery. 2023. https://doi.org/10.1227/neu.0000000000002551 .

Patil NS, Huang RS, van der Pol CB, Larocque N. Comparative Performance of ChatGPT and Bard in a Text-Based Radiology Knowledge Assessment. Can Assoc Radiol J. 2024;75(2):344–50. https://doi.org/10.1177/08465371231193716 .

Toyama Y, Harigai A, Abe M, Nagano M, Kawabata M, Seki Y, et al. Performance evaluation of ChatGPT, GPT-4, and bard on the official board examination of the Japan radiology society. Jpn J Radiol. 2023. https://doi.org/10.1007/s11604-023-01491-2 .

Fowler T, Pullen S, Birkett L. Performance of ChatGPT and Bard on the official part 1 FRCOphth practice questions. Br J Ophthalmol. 2023:bjo-2023-324091. https://doi.org/10.1136/bjo-2023-324091 . Online ahead of print.

Meo SA, Al-Khlaiwi T, AbuKhalaf AA, Meo AS, Klonoff DC. The Scientific Knowledge of Bard and ChatGPT in Endocrinology, Diabetes, and Diabetes Technology: Multiple-Choice Questions Examination-Based Performance. J Diabetes Sci Technol. 2023:19322968231203987. https://doi.org/10.1177/19322968231203987 . Online ahead of print.

Kumari A, Kumari A, Singh A, Singh SK, Juhi A, Dhanvijay AKD, et al. Large language models in hematology case solving: a comparative study of ChatGPT-3.5, google bard, and microsoft bing. Cureus. 2023;15:e43861.

Dhanvijay AKD, Pinjar MJ, Dhokane N, Sorte SR, Kumari A, Mondal H. Performance of large language models (ChatGPT, Bing Search, and Google Bard) in solving case vignettes in physiology. Cureus. 2023;15:e42972.

Giannakopoulos K, Kavadella A, Aaqel Salim A, Stamatopoulos V, Kaklamanos EG. Evaluation of generative artificial intelligence large language models ChatGPT, google bard, and microsoft bing chat in supporting evidence-based dentistry: a comparative mixed-methods study. J Med Internet Res. 2023. https://doi.org/10.2196/51580 .

Torres-Zegarra BC, Rios-Garcia W, Ñaña-Cordova AM, Arteaga-Cisneros KF, Chalco XCB, Ordoñez MAB, et al. Performance of ChatGPT, Bard, Claude, and Bing on the Peruvian National licensing medical examination: a cross-sectional study. J Educ Eval Health Prof. 2023;20.

Giannos P, Delardas O. Performance of ChatGPT on UK standardized admission tests: insights from the BMAT, TMUA, LNAT, and TSA examinations. JMIR Med Educ. 2023;9:e47737.

Guigue P-A, Meyer R, Thivolle-Lioux G, Brezinov Y, Levin G. Performance of ChatGPT in French language Parcours d'Accès Spécifique Santé test and in OBGYN. Int J Gynaecol Obstet. 2023. https://doi.org/10.1002/ijgo.15083 .

Healthcare Science. NSHCS. https://nshcs.hee.nhs.uk/healthcare-science/ . Accessed 6 May 2024.

Alessandri Bonetti M, Giorgino R, Gallo Afflitto G, De Lorenzi F, Egro FM. How does ChatGPT perform on the Italian residency admission national exam compared to 15,869 medical graduates? Ann Biomed Eng. 2023. https://doi.org/10.1007/s10439-023-03318-7 .

Scaioli G, Moro GL, Conrado F, Rosset L, Bert F, Siliquini R. Exploring the potential of ChatGPT for clinical reasoning and decision-making: a cross-sectional study on the Italian medical residency exam. Ann Ist Super Sanità. 2023;59:267–70.

von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP, et al. The strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies. PLoS Med. 2007;4:e296.

Nowell J. Guide to ethical approval. BMJ. 2009;338:b450.

Accesso programmato a livello nazionale. Mi - Ministero dell’istruzione. https://www.miur.gov.it/accesso-programmato-a-livello-nazionale . Accessed 6 May 2024.

Il Consorzio. Cineca. http://www.cineca.it/chi-siamo/il-consorzio . Accessed 6 May 2024.

Ministero della Salute. Professioni sanitarie. https://www.salute.gov.it/portale/professioniSanitarie/dettaglioContenutiProfessioniSanitarie.jsp?lingua=italiano&id=808&area=professioni-sanitarie&menu=vuoto&tab=1 . Accessed 6 May 2024.

Test d’ingresso ai corsi ad accesso programmato e alle scuole di specializzazione. Cineca. http://www.cineca.it/sistemi-informativi-miur/studenti-carriere-offerta-formativa-e-altri-servizi/test-dingresso-ai . Accessed 6 May 2024.

Scuola secondaria di secondo grado. Mi - Ministero dell’istruzione. https://www.miur.gov.it/scuola-secondaria-di-secondo-grado . Accessed 6 May 2024.

Test ammissione professioni sanitarie anni precedenti. TaxiTest. https://taxitest.it/test-ingresso-professioni-sanitarie-anni-passati/ . Accessed 6 May 2024.

Soluzioni dei Test d’Ingresso per Professioni Sanitarie 2023. https://www.studentville.it/app/uploads/2023/09/soluzioni-test-cineca-professioni-sanitarie-2023.pdf . Accessed 6 May 2024.

ChatGPT. https://chat.openai.com . Accessed 6 May 2024.

Microsoft Copilot: il tuo AI Companion quotidiano. https://ceto.westus2.binguxlivesite.net/ . Accessed 6 May 2024.

Gemini: chatta per espandere le tue idee. Gemini. https://gemini.google.com . Accessed 6 May 2024.

Mihalache A, Popovic MM, Muni RH. Performance of an artificial intelligence chatbot in ophthalmic knowledge assessment. JAMA Ophthalmol. 2023;141:589–97.

Trabasso T. The Development of Coherence in Narratives by Understanding Intentional Action. In: Stelmach GE, Vroon PA, editors. Advances in Psychology. Vol. 79. North-Holland; 1991. p. 297–314. ISSN 0166-4115, ISBN 9780444884848. https://doi.org/10.1016/S0166-4115(08)61559-9 .

Gilson A, Safranek CW, Huang T, Socrates V, Chi L, Taylor RA, et al. How does ChatGPT perform on the United States medical licensing examination? the implications of large language models for medical education and knowledge assessment. JMIR Med Educ. 2023;9:e45312.

Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159–74.

Saravia E. Prompt Engineering Guide. https://github.com/dair-ai/Prompt-Engineering-Guide . 2022. Accessed 6 May 2024.

Giray L. Prompt engineering with ChatGPT: a guide for academic writers. Ann Biomed Eng. 2023. https://doi.org/10.1007/s10439-023-03272-4 .

Massey PA, Montgomery C, Zhang AS. Comparison of ChatGPT-3.5, ChatGPT-4, and orthopaedic resident performance on orthopaedic assessment examinations. J Am Acad Orthop Surg. 2023;31:1173.

Guerra GA, Hofmann H, Sobhani S, Hofmann G, Gomez D, Soroudi D, et al. GPT-4 artificial intelligence model outperforms ChatGPT, medical students, and neurosurgery residents on neurosurgery written board-like questions. World Neurosurg. 2023;S1878–8750(23):01144.

Cuthbert R, Simpson AI. Artificial intelligence in orthopaedics: can chat generative pre-trained transformer (ChatGPT) pass Section 1. Postgrad Med J. 2023;99:1110–4.

Friederichs H, Friederichs WJ, März M. ChatGPT in medical school: how successful is AI in progress testing? Med Educ Online. 2023;28.

Weng T-L, Wang Y-M, Chang S, Chen T-J, Hwang S-J. ChatGPT failed Taiwan’s family medicine board exam. J Chin Med Assoc JCMA. 2023;86:762–6.

Bhayana R, Krishna S, Bleakney RR. Performance of ChatGPT on a radiology board-style examination: insights into current strengths and limitations. Radiology. 2023;307:e230582.

Thirunavukarasu AJ, Ting DSJ, Elangovan K, Gutierrez L, Tan TF, Ting DSW. Large language models in medicine. Nat Med. 2023;29:1930–40.

World Health Organization. Global competency framework for universal health coverage. https://www.who.int/publications-detail-redirect/9789240034686 . Accessed 6 May 2024.

ChatGPT — Release Notes | OpenAI Help Center. https://help.openai.com/en/articles/6825453-chatgpt-release-notes . Accessed 6 May 2024.

Microsoft. Visual Search API | Microsoft Bing. https://www.microsoft.com/en-us/bing/apis/bing-visual-search-api . Accessed 6 May 2024.

What’s ahead for Bard: More global, more visual, more integrated. Google. 2023. https://blog.google/technology/ai/google-bard-updates-io-2023/ . Accessed 6 May 2024.

Zhu L, Mou W, Yang T, Chen R. ChatGPT can pass the AHA exams: Open-ended questions outperform multiple-choice format. Resuscitation. 2023;188:109783.

Ghahramani Z. Probabilistic machine learning and artificial intelligence. Nature. 2015;521:452–9. https://www.nature.com/articles/nature14541 . Accessed 6 May 2024.

Ebrahimian M, Behnam B, Ghayebi N, Sobhrakhshankhah E. ChatGPT in Iranian medical licensing examination: evaluating the diagnostic accuracy and decision-making capabilities of an AI-based model. BMJ Health Care Inform. 2023;30:e100815.

Seghier ML. ChatGPT: not all languages are equal. Nature. 2023;615:216.

Acknowledgements

The authors thank the Sanitätsbetrieb der Autonomen Provinz Bozen/Azienda Sanitaria della Provincia Autonoma di Bolzano for covering the open access publication costs.

The authors declare that they received funding from the Department of Innovation, Research, University and Museums of the Autonomous Province of Bozen/Bolzano to cover the open access publication costs of this study.

Author information

Silvia Gianola and Alvisa Palese contributed equally to this work.

Authors and Affiliations

School of Physiotherapy, University of Verona, Verona, Italy

Giacomo Rossettini

Department of Physiotherapy, Faculty of Sport Sciences, Universidad Europea de Madrid, Villaviciosa de Odón, 28670, Spain

Department of Rehabilitation, Hospital of Merano (SABES-ASDAA), Teaching Hospital of Paracelsus Medical University (PMU), Merano-Meran, Italy

Lia Rodeghiero

School of Speech Therapy, University of Verona, Verona, Italy

Federica Corradi

Department of Orthopaedics, Duke University, Durham, NC, USA

Duke Clinical Research Institute, Duke University, Durham, NC, USA

Department of Population Health Sciences, Duke University, Durham, NC, USA

Department of Biomedical and Neuromotor Sciences (DIBINEM), Alma Mater University of Bologna, Bologna, Italy

Paolo Pillastrini & Andrea Turolla

Unit of Occupational Medicine, IRCCS Azienda Ospedaliero-Universitaria Di Bologna, Bologna, Italy

Unit of Clinical Epidemiology, IRCCS Istituto Ortopedico Galeazzi, Milan, Italy

Greta Castellini & Silvia Gianola

Department of Medical Sciences, University of Udine, Udine, Italy

Stefania Chiappinotto & Alvisa Palese

Contributions

GR, SG, and AP conceived and designed the research and wrote the first draft. LR and FC managed the acquisition of data. SG, GC, SC, CC, PP, and AT managed the analysis and interpretation of data. All authors read, revised, and approved the final version of the manuscript.

Authors' information

A multidisciplinary group of healthcare science educators promoted and developed this study in Italy. The group consisted of professors, lecturers, and tutors actively involved in university education in different healthcare science disciplines (e.g., rehabilitation, physiotherapy, speech therapy, nursing).

Corresponding authors

Correspondence to Giacomo Rossettini , Lia Rodeghiero , Stefania Chiappinotto , Silvia Gianola or Alvisa Palese .

Ethics declarations

Ethics approval and consent to participate

Not applicable; no humans or patients were involved in the study.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

  • Supplementary material 1.
  • Supplementary material 2.
  • Supplementary material 3.
  • Supplementary material 4.
  • Supplementary material 5.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article

Rossettini, G., Rodeghiero, L., Corradi, F. et al. Comparative accuracy of ChatGPT-4, Microsoft Copilot and Google Gemini in the Italian entrance test for healthcare sciences degrees: a cross-sectional study. BMC Med Educ 24 , 694 (2024). https://doi.org/10.1186/s12909-024-05630-9

Download citation

Received : 24 January 2024

Accepted : 04 June 2024

Published : 26 June 2024

DOI : https://doi.org/10.1186/s12909-024-05630-9


  • Health occupations
  • Physical therapy modalities
  • Speech therapy

BMC Medical Education

ISSN: 1472-6920
