
75 Open-Ended Questions Examples


Chris Drew (PhD)

Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education.



Open-ended questions are inquiries that cannot be answered with a simple “yes” or “no” and require elaboration.

These questions encourage respondents to provide more detailed answers, express opinions, and share experiences.

They can be useful in multiple contexts:

  • In conversation, they elicit more information about someone and can help break the ice or deepen your relationship with them.
  • In education, open-ended questions are used as prompts to encourage people to express themselves, demonstrate their knowledge, or think more deeply about a topic.
  • In research, they are used to gather detailed responses from research participants who, if not asked open-ended questions, may not give detailed or in-depth answers.

An example of an open-ended question is:

“What did you enjoy most about your recent vacation?”

Open-Ended Questions Examples

Examples of Open-Ended Questions for Students

  • What did you find most interesting or surprising about today’s lesson?
  • How would you explain this concept to someone who has never encountered it before?
  • Can you think of a real-life example of what we are talking about today?
  • When doing the task, what did you find most challenging and why?
  • How does this topic connect to the topic we were discussing in last week’s lesson?
  • When you walk out of this lesson today, what is the most important insight you’ll take with you?
  • When you were solving this problem, what strategies did you draw upon? Can you show them to me?
  • If you could change one thing about how you did today’s task, what would it be and why?
  • How do you feel about the progress you have made in the unit so far, and what areas do you think you need to work on?
  • What questions do you still have about this topic that we can address in our next lesson?
  • How do you think this subject will be relevant to your life outside of the classroom, such as on the weekends or even in the workplace once you leave school?
  • We tried just one way to solve this problem. Can you think of any alternative approaches we could have taken to reach the same results?
  • What resources or strategies do you think were most useful when solving this problem?
  • What were the challenges you faced when completing this group work task and how would you work to resolve them next time?
  • What are some of the possible weaknesses of the theory we’ve been exploring today?
  • How has your understanding of this topic evolved throughout the course of this unit?
  • What are some real-world applications of what we’ve learned today?
  • If you were to design an experiment to test this hypothesis, what would be your approach?
  • Can you think of any counterarguments or alternative perspectives on this issue?
  • How would you rate your level of engagement with this topic, and what factors have influenced your level of interest?

Examples of Open-Ended Questions for Getting to Know People

  • So, can you tell me about the first time you met our mutual friend who introduced us?
  • How did you get interested in your favorite hobby?
  • How have your tastes in music changed over time?
  • Can you describe a memorable moment from your childhood?
  • Are there any books, movies, or TV shows that you’ve enjoyed recently that you could recommend? Why would you recommend them to me?
  • How do you usually spend your weekends or leisure time?
  • Can you tell me about a restaurant experience you had that you really enjoyed and why it was so memorable?
  • What’s your fondest memory of your childhood pet?
  • What first got you interested in your chosen career?
  • If you could learn a new skill or take up a new hobby, what would it be and why?
  • What’s the best piece of advice you’ve ever received from a parent or mentor?
  • If you were to pass on one piece of advice to your younger self, what would it be?
  • Tell me about something fun you did in the area recently that I could do this weekend on a budget of $100.
  • If you take a second to think, could you tell me your short-term, medium-term, and long-term personal goals?
  • If you could travel anywhere in the world, where would you go and why?

Examples of Open-Ended Questions for Interviews

  • Can you tell me about yourself and your background, and how you came to be in your current position/field?
  • How do you approach problem-solving, and what methods have you found to be most effective?
  • Can you describe a particularly challenging situation you faced, and how you were able to navigate it?
  • What do you consider to be your greatest strengths, and how have these played a role in your career or personal life?
  • Can you describe a moment of personal growth or transformation, and what led to this change?
  • What are some of your passions and interests outside of work, and how do these inform or influence your professional life?
  • Can you tell me about a time when you faced criticism or negative feedback, and how you were able to respond to it?
  • What do you think are some of the most important qualities for success in your field, and how have you worked to develop these qualities in yourself?
  • Can you describe a moment of failure or setback, and what you learned from this experience?
  • Looking to the future, what are some of your goals or aspirations, and how do you plan to work towards achieving them?

Examples of Open-Ended Questions for Customer Research

  • What factors influenced your decision to purchase this product or service?
  • How would you describe your overall experience with our customer support team?
  • What improvements or changes would you suggest to enhance the user experience of our website or app?
  • Can you provide an example of a time when our product or service exceeded your expectations?
  • What challenges or obstacles did you encounter while using our product or service, and how did you overcome them?
  • How has using our product or service impacted your daily life or work?
  • What features do you find most valuable in our product or service, and why?
  • Can you describe your decision-making process when choosing between competing products or services in the market?
  • What additional products or services would you be interested in seeing from our company?
  • How do you perceive our brand in comparison to our competitors, and what factors contribute to this perception?
  • What sources of information or communication channels did you rely on when researching our product or service?
  • How likely are you to recommend our product or service to others, and why?
  • Can you describe any barriers or concerns that might prevent potential customers from using our product or service?
  • What aspects of our marketing or advertising caught your attention or influenced your decision to engage with our company?
  • How do you envision our product or service evolving or expanding in the future to better meet your needs?

Examples of Open-Ended Questions for Preschoolers

  • Can you tell me about the picture you drew today?
  • What is your favorite thing to do at school, and why do you like it?
  • How do you feel when you play with your friends at school?
  • What do you think would happen if animals could talk like people?
  • Can you describe the story we read today? What was your favorite part?
  • If you could be any animal, which one would you choose to be and why?
  • What would you like to learn more about, and why does it interest you?
  • How do you help your friends when they’re feeling sad or upset?
  • Can you tell me about a time when you solved a problem all by yourself?
  • What is your favorite game to play, and how do you play it?
  • If you could create your own superhero, what powers would they have and why?
  • Can you describe a time when you were really brave? What happened?
  • What do you think it would be like to live on another planet?
  • If you could invent a new toy, what would it look like and what would it do?
  • Can you tell me about a dream you had recently? What happened in the dream?

Open-Ended vs Closed-Ended Questions

  • Definition: Open-ended questions require elaboration and full-sentence responses; they cannot be answered with “yes” or “no”. Closed-ended questions can be answered with “yes,” “no,” or a very brief response, without elaboration.
  • Purpose: Open-ended questions encourage deeper explanation, expression, and analysis from the respondent. Closed-ended questions gather specific information, get an explicit response, or confirm details.
  • Example: Open-ended: “Can you explain what happened to you when you went on vacation?” Closed-ended: “Did you enjoy your vacation?”
  • Benefit: Open-ended questions promote deep thinking, because asking for a detailed response requires students to process and formulate complete thoughts. Closed-ended questions are great for gathering fast input, for example on Likert scales during research or, during teacher-centered instruction, to quickly check that students are following you.
  • Limitations: Open-ended questions often require one-to-one discussion, so they are impractical in large groups, and they demand a skilled conversationalist who can devise questions that elicit detailed responses. Closed-ended questions tend not to elicit detailed insights, so they cannot capture the full picture or a nuanced understanding of people’s thoughts and opinions.
  • Ideal use: Open-ended questions suit education (getting people to think deeply about a topic), conversation (getting people to share more about themselves and starting an interesting conversation), and research (gathering in-depth data from interviews and qualitative studies that can lead to rich insights). Closed-ended questions suit education (gathering formative feedback during teacher-centered instruction), conversation (getting clarifying information quickly), and research (conducting large-scale surveys, polls, and quantitative studies that generate population-level insights).

Benefits of Open-Ended Questions

Above all, open-ended questions require people to actively think. This engages them in higher-order thinking skills (rather than simply providing restricted answers) and forces them to expound on their thoughts.

The best thing about these questions is that they benefit both the questioner and the answerer:

  • Questioner: For the person asking the question, they benefit from hearing a full insight that can deepen their knowledge about their interlocutor.
  • Answerer: For the person answering the question, they benefit because the very process of answering the question helps them to sort their thoughts and clarify their insights.

To elaborate, below are three of the top benefits.

1. Encouraging critical thinking

When we have to give full answers, our minds have to analyze, evaluate, and synthesize information. We can’t get away with a simple yes or no.

This is why educators embrace open-ended questioning, and preferably questions that promote higher-order thinking.

Expounding on our thoughts enables us to do things like:

  • Thinking more deeply about a subject
  • Considering different perspectives
  • Identifying logical fallacies in our own conceptions
  • Developing coherent and reasoned responses
  • Reflecting on our previous actions
  • Clarifying our thoughts.

2. Facilitating self-expression

Open-ended questions allow us to express ourselves. Imagine going through life only able to answer “yes” or “no” to questions. We’d struggle to get our personalities across!

Only with fully expressed sentences and monologues can we share our complete thoughts, feelings, and experiences. Open-ended questions let us elaborate on nuances, express our hesitations, and explain caveats.

After explaining our thoughts, we often feel that we have truly been heard and have had the chance to express our full, authentic views.

3. Building stronger relationships

Open-ended questioning creates good relationships. You need to ask open-ended questions if you want to have good conversations, get to know someone, and make friends.

These sorts of questions promote open communication, speed up the getting-to-know-you phase, and allow people to share more about themselves with each other.

This will make you more comfortable with each other and give the person you’re trying to get to know a sense that you’re interested in them and actively listen to what they have to say. When people feel heard and understood, they are more likely to trust and connect with others.

Tip: Avoid Loaded Questions

One mistake people make during unstructured and semi-structured interviews is to ask open-ended questions that have bias embedded in them.

For an example of a loaded question, imagine you asked: “why did the shoplifter claim he didn’t take the television without paying?”

Here, you’ve embedded a premise that you’re asking the person to accept (that the man was a shoplifter).

A more neutral wording might be “why did the man claim he didn’t take the television without paying?”

The second question doesn’t require the person to accept the notion that the man actually did the shoplifting.

This might be very important, for example, in cross-examining witnesses in a police station!

When asking questions, use questions that encourage people to provide full-sentence responses, at a minimum. Use questions like “how” and “why” rather than questions that can be answered with a brief point. This will allow people the opportunity to provide more detailed responses that give them a chance to demonstrate their full understanding and nuanced thoughts about the topic. This helps students think more deeply and people in everyday conversation to feel like you’re actually interested in what they have to say.


Open-Ended Questions: 28 Examples of How to Ask Properly

Roland Vojkovský

The power of open-ended questions lies in the insights they unlock.

Mastering open-ended questions is key, as they unlock more than just brief replies. They invite deeper thoughts, opening doors to honest conversations. Openness and support are crucial skills for team leaders who want to cultivate a similar culture among their employees and customers. Unlike yes-or-no questions, open-ended ones pave the way for people to express themselves fully.

They are not just about getting answers, but about understanding perspectives, making them a valuable tool in the workplace, schools, and beyond. Through these questions, we dig deeper, encouraging a culture where thoughts are shared openly and ideas flourish.

What is an open-ended question?

Open-ended questions kick off with words like “Why?”, “How?”, and “What?”. Unlike the yes-or-no kind, they invite a fuller response. It’s not about getting quick answers, but about making the respondent think more deeply about their answers.

These questions ask people to pause, reflect, and delve into their thoughts before responding. It’s more than just getting an answer—it’s about understanding deeper feelings or ideas. In a way, open-ended questions are bridges to meaningful conversations, leading to a richer exchange of ideas and insights.

Comparison: Open-ended vs closed-ended questions

Open-ended and closed-ended questions serve as the two sides of the inquiry coin, each with its unique advantages.

Open-ended questions:

  • Kickstart with “How”, “Why”, and “What”
  • No set answers, sparking more thought
  • Encourage detailed responses, explaining the ‘why’ or ‘how’

Closed-ended questions:

  • Often have a “Yes” or “No” response
  • Feature predetermined answers (e.g., Options A, B, C)
  • Aim for specific, clear-cut responses, making them quick to answer

Together, they balance a conversation. Open-ended questions open up discussions, while closed-ended questions keep them on track.

Benefits of asking open-ended questions

  • Deeper understanding: They dig deeper, unveiling more than just surface-level information.
  • Enhanced communication: Open-ended questions foster a two-way dialogue, making conversations more engaging.
  • Building trust: When people feel heard, it builds trust and a strong rapport.
  • Encouraging critical thinking: These questions nudge towards reflection, enhancing critical thinking skills.
  • Uncovering insights: They can bring out hidden insights that might stay buried otherwise.
  • Problem-solving: By identifying core issues, they pave the way for effective problem-solving.
  • Personal growth: Promoting self-reflection, open-ended questions contribute to personal growth and awareness.

As you can see, open-ended questions pave the way for in-depth responses. Unlike a simple ‘yes’ or ‘no’, they encourage individuals to share more. This leads to richer engagements, giving a peek into others’ perspectives. It’s more than just collecting data; it’s about understanding the context behind it. Through open-ended questions, discussions become more engaging and informative. It’s a step towards fostering a culture of open communication and meaningful interactions.

28 examples of open-ended questions

Questions for team meetings:

  • What steps could enhance our meeting’s effectiveness?
  • How does our meeting structure support or hinder our goals?
  • What topics should be prioritized in our next meeting?
  • How can we make our meetings more engaging and productive?
  • What was the most impactful part of today’s meeting?
  • If you could change one thing about our meetings, what would it be?
  • How do our meetings compare to those in other departments?

For company surveys:

  • What aspects of our culture contribute to your job satisfaction?
  • How could we modify our workspace to boost productivity?
  • What are your thoughts on our current communication channels?
  • How would a flexible work schedule impact your work-life balance?
  • What training or resources would further your career development here?
  • How do our company values align with your personal values?
  • What suggestions do you have for improving team collaboration?

Ideas for brainstorming sessions:

  • What alternative solutions could address this challenge?
  • How might we streamline our brainstorming process?
  • What barriers are hindering creative thinking in our sessions?
  • How do you feel about the diversity of ideas presented?
  • What methods could we employ to encourage more innovative thinking?
  • How can we better document and follow up on ideas generated?
  • What factors should be considered when evaluating potential solutions?

For classroom discussions:

  • What teaching methods engage you the most?
  • If you could redesign our classroom, what changes would you make?
  • How does peer interaction enhance your learning experience?
  • What topics or subjects would you like to explore in more depth?
  • How could technology be integrated to enhance learning?
  • What challenges do you face in achieving your academic goals?
  • How could the school support you better in overcoming academic hurdles?

How to craft effective open-ended questions

Crafting effective open-ended questions is an art. It begins with choosing the right starters like “How”, “What”, and “Why”.

  • Example: How did you come up with this idea?
  • Example: What were the main challenges faced?
  • Example: Why do you think this approach works best?

Using these starters makes it easier to receive thoughtful answers that lead to deeper thinking and understanding.

Beyond starters, here are more tips:

  • Be clear: Ensure clarity to avoid confusion.
  • Avoid leading: Don’t direct towards a specific answer.
  • Keep it simple: Steer clear of complex language.
  • Encourage thought: Frame questions to prompt reflection.
  • Be open: Prepare for unexpected answers.
  • Practice active listening: Show genuine interest.
  • Follow up: Delve deeper with additional questions.

Characteristics of good open-ended questions:

  • Interest: Be genuinely interested in the responses.
  • Clarity: Keep your question clear and straightforward.
  • Neutral tone: Avoid leading or biased words.
  • Emotive verbs: Use verbs that evoke thoughts or emotions, like ‘think’, ‘feel’, or ‘believe’.
  • Non-accusatory: Frame your question to avoid sounding accusatory, which can hinder honest responses.

For instance, instead of asking “Why did you choose this method?”, try “What led you to choose this method?”. It feels less accusatory and more open to insightful responses.

When to use open-ended questions

Open-ended questions are invaluable tools for diving into meaningful conversations, whether in live discussions or self-paced surveys. Acting like keys, they unlock the reasoning behind people’s thoughts and feelings. For example, incorporating open-ended questions into your Net Promoter Score (NPS) surveys can offer insights into why customers assigned a specific score.
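To make that NPS pairing concrete, here is a minimal sketch in Python; it is not any survey tool’s API, and the response data and the bucket helper are invented for illustration. It pairs the closed-ended 0–10 score with an open-ended “why” follow-up and computes the score using the standard NPS buckets (9–10 promoters, 7–8 passives, 0–6 detractors).

```python
from collections import Counter

# Invented example responses: each pairs a closed-ended NPS score (0-10)
# with an open-ended follow-up ("What's the main reason for your score?").
responses = [
    {"score": 10, "why": "Support solved my issue in a single email."},
    {"score": 6, "why": "Useful product, but the app crashes on login."},
    {"score": 9, "why": "Fast shipping and painless returns."},
]

def bucket(score: int) -> str:
    """Standard NPS buckets: 9-10 promoter, 7-8 passive, 0-6 detractor."""
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

counts = Counter(bucket(r["score"]) for r in responses)
nps = 100 * (counts["promoter"] - counts["detractor"]) / len(responses)
print(f"NPS: {nps:.0f}")  # the closed-ended number (33 for this sample)

for r in responses:  # the open-ended "why" behind the number
    print(f"{bucket(r['score'])}: {r['why']}")
```

The closed-ended score gives you a number you can chart over time; the open-ended answers tell you what is driving it.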

These questions are particularly effective for sparking deeper thinking and discussions. Imagine you’re in a team meeting and you ask, “What can we do to better deliver our projects?” The room is likely to fill with useful suggestions. Similarly, in customer service emails, posing a question like “How can we improve your experience?” can provide insights that go beyond the scope of pre-crafted templates.


In educational settings, questions like “How can we make learning this easier for you?” can encourage thoughtful answers. This not only enhances the learning environment but also fosters a culture of open communication. By asking such questions, you’re doing more than just seeking answers; you’re inviting deeper thought and engagement.

The real magic of open-ended questions lies in their ability to transform basic interactions into opportunities for greater understanding and learning. Whether you’re conducting a survey, such as an Employee Net Promoter Score, or simply having a team discussion, these questions add context and depth. They turn simple exchanges into meaningful conversations, helping you reach the ultimate goal—whether you’re talking to team members or customers.

Bonus: 8 of our favorite open-ended questions for customer feedback

Embarking on the open-ended questions journey? While Nicereply specializes in collecting easy-to-digest feedback through stars, smiley faces, or thumbs up/down, we see the value in the detailed insights open-ended questions can provide. Here’s a list of our favorite open-ended questions to enhance your customer satisfaction insights:

  • How could we improve your experience with our customer service?
  • What did you appreciate most about your interaction with our team?
  • Were there any aspects of our service that fell short of your expectations?
  • What additional services or features would you like us to offer?
  • How would you describe your overall satisfaction with our service?
  • What suggestions do you have for our support team to serve you better?
  • What were the key factors that influenced your satisfaction with our service?
  • How does our customer service compare to others you have experienced?

Though Nicereply’s focus is on clear-cut feedback, engaging with open-ended questions on a separate note can offer a richer understanding of your customer’s experience.

1: How could we improve your experience with our customer service?

Asking for feedback shows you’re keen on making your service better. It helps you understand what customers think, find out what’s missing, and aim for the best. This question really shows that a company cares about improving.

2: What did you appreciate most about your interaction with our team?

Finding out what customers like helps grow those good parts. It’s a way to cheer on what’s going well and make sure these good habits keep going strong.

3: Were there any aspects of our service that fell short of your expectations?

Knowing what let customers down is the first step to fixing it. This question can bring out hidden issues, making it easier to sort them out. It also shows customers that their happiness is important and their worries are heard, which can really boost the bond between the customer and the company, a crucial factor in building customer loyalty.

4: What additional services or features would you like us to offer?

Uncovering customer desires helps in tailoring services to meet their needs. It’s a proactive step toward innovation based on customer-driven insights.

5: How would you describe your overall satisfaction with our service?

This question opens up a space for many different reactions and stories. It captures a general feeling that can be explored more for deeper understanding.

6: What suggestions do you have for our support team to serve you better?

This question invites customers to share ideas on improving our service. It’s a positive way to get useful feedback. It also shows a commitment to getting better and valuing what customers have to say, which can build trust and good relations.

7: What were the key factors that influenced your satisfaction with our service?

Looking into the details of satisfaction helps to understand what makes good service for customers. It’s a logical way to break down customer satisfaction.

8: How does our customer service compare to others you have experienced?

A comparative question provides a reality check and a broader industry perspective. It’s a way to understand your competitive standing from a customer-centric viewpoint.

It also may provide insights into areas where competitors excel, offering a benchmark for improvement, or areas where your service shines, which can be leveraged in marketing and brand positioning.

Conclusion: Open-ended questions in a nutshell

Open-ended questions are conversation starters, allowing for a richer exchange of ideas. They help individuals express themselves more fully, paving the way for a deeper understanding.

In business, particularly in customer support, these questions are crucial. They help unearth the customer’s perspective, providing key insights for improving service. For support professionals, every open-ended question is an opportunity to better understand customer needs and enhance the dialogue. Through these questions, a culture of open communication and continuous learning is fostered, which is essential for delivering exceptional customer service.


Roland is the go-to guy for content marketing at Nicereply. With over a decade of experience in the field, he took the reins of the SEO department in April 2023. His mission? To spread the word about customer experience far and wide. Outside of the digital world, Roland enjoys quality time with his wife and two daughters. And if he's in the mood, you might catch him lifting weights at the gym—but don't hold your breath!



What Is an Open-Ended Question? Answering It Through an Essay



What is an Open-Ended Question?

Open-ended questions are those that do not define the scope you should take (i.e., how many and what kinds of experiences to discuss). Like personal statements for other types of applications, open-ended essays have more room for creativity, as you must make the decision on issues such as how expansive or narrow your topic should be. For business schools, the most common question of this type asks about your personal background, but many questions that look straightforward are actually relatively open-ended.

For example, a question that asks you to describe your leadership style is more open than a question that asks you to describe a single leadership experience. This question defines the kind of experience you should discuss, but not the number. Therefore you still face decisions on how many examples to use and how to integrate them. On the other hand, a question that asks you to discuss your most important activity limits you to one example, but leaves open from which realm you will choose that example. Therefore you still face decisions on what theme you will use to drive your discussion. In both cases, you should use the guidelines discussed in this lesson to structure your essay.

The key aims of this lesson are the same as for the previous one: you will learn how to identify and develop an overarching theme and to organize your content in the most effective structure. Thus, you will learn how to answer open-ended questions to write a perfect grad school essay. There will also be some overlap in subsections to provide a step-by-step guideline.

As we explained in the last lesson, the overarching theme you decide on will inform the manner in which you organize the rest of your content. But in contrast to the type of essay discussed in the previous lesson, you don’t have a series of questions to guide your thought process for these open-ended types. Instead, you must analyze your main ideas and examples and identify the underlying theme that ties them together.

There are two extremes that you should avoid, as demonstrated by the following examples:

TOO BROAD: “A variety of experiences have shaped me into the person I am today.”
TOO NARROW: “My character is defined by hard work.”

It is better to err on the side of specificity, but to avoid the problem of sounding too narrow and over-simplistic, you should add layers to create a more sophisticated theme. For example: “While perseverance helped me to survive academically during my first years in the U.S., I discovered a more profound love of learning when I chose my major in college.”

The same two methods of articulating your theme apply here as they did to the complex essays. We will go through them again with different examples.


The Upfront Approach

The idea here is to articulate your theme in the introduction, suggesting the focus of your argument as you would in a thesis statement. This applicant faces one of the most typical open-ended questions: “What matters most to you and why?” Many people will choose a concrete topic, such as family or religion. In those cases, it’s still essential to have a theme in addition to the topic, so the essay doesn’t amount to a disordered listing of facts. The approach that this applicant uses is unique in that the topic is itself a theme: “a lifelong pursuit to improve myself as a human being.” To add further depth to this theme, he explains how he will approach the topic from three angles: professional, spiritual, and personal.

Not all essays need to be as clearly outlined as this one is. Nevertheless, this essay demonstrates the effectiveness of asserting a clear theme that offers direction for the rest of the discussion.

The Gradual Approach

Because you are writing personal essays, you might prefer to allow the argument to unfold more naturally as a story. Each paragraph will build upon previous points as an underlying theme gradually emerges. The conclusion then ties these individual themes together and includes some kind of encapsulation of the material that preceded it. This applicant writes a summary of his personal and family background. He begins by making each point on its own terms, without trying to force an all-encompassing interpretation on his life.

Gradually, however, ideas begin to recur about obstacles, sacrifice, and the united resolve that his family showed. He puts these pieces together in the final paragraph: “My family created a loving home in which I was able to develop the self-confidence that I need in order to overcome many of the challenges that I face in my career. In addition, growing up in a family of very modest means, and being conscious of my parents’ sacrifices, has given me a powerful sense of drive.”

Organization

Answering open-ended questions will naturally give you more freedom in adopting an arrangement for your ideas. While one strategy comes from the previous lesson, the other two are new.

Hierarchy of Evidence

This approach will be less common for open-ended questions because the majority of them ask about personal background, and in those cases you’re not looking to emphasize accomplishments by bringing them to the forefront. Nevertheless, if there’s something in your personal background that would make you stand out, you should not hesitate to open with that rather than stick to more conventional orderings.

Showing Progress

We do not have a section advising chronological order, because despite its convenience, you should not choose such an approach for its own sake. A chronological essay often reads like a dull list, undiscriminating in its details. On the other hand, the Showing Progress approach often results in a chronological order for independent reasons.

The guiding principle here is to structure your evidence in a way that demonstrates your growth, from a general initial curiosity to a current definite passion, or from an early aptitude to a refined set of skills. It differs from the Hierarchy of Evidence approach because your strongest point might come at the end, but its strength lies precisely in the sense of culmination that it creates.

This applicant faces a variation of the failure question. Instead of being asked to discuss one failure, he has to reflect on the quotation, “Mistakes are the portals of discovery.” (Note: here the theme is given to you, but the scope is not defined. Therefore the example is still useful, as the writer has to choose how to organize his evidence.) After discussing his initial mistake, he describes subsequent actions with clear comparisons to the original experience that demonstrate the progress he has made. Moreover, his choice to discuss two separate mistakes creates a second level of progress, as the lessons he learns after the second mistake are clearly more advanced and mature.

Juxtaposing Themes

If two experiences are closely related but occurred years apart, it makes more sense to develop them as one set of ideas than to interrupt them with unrelated points. This essay, quoted above under the Gradual Approach subsection, moves through the applicant’s personal background point by point, instead of attempting to tell a chronological story. He devotes separate paragraphs to different family members and discusses his experience with the religious conflicts in Ireland in its own segment. Thus each idea is developed in full without being interrupted by points that would fit in only because of chronology.

Your decision between these latter two approaches comes down to the nature of your content—most importantly, the number of ideas you’re juggling. Moreover, showing progress is more significant in an essay about self-development than one about more external factors. Finally, note that you can combine the two approaches by showing progress within self-contained thematic units.



How to Write Open-Ended Questions


Open-ended questions cannot be answered with a simple “yes” or “no.” Instead, they have multiple potential right answers, and require thought, reflection, and explanation from the person responding. [1] That being said, open-ended questions require as much effort to write as they do to answer. Whether you’re getting ready for an academic discussion, preparing to interview someone, or developing a survey for sales or market research, keep in mind that your questions should ideally spark reflection, discussion, and new ideas from your respondents.

Determining a Specific Purpose

Step 1: Prepare open-ended questions based on reading for class discussions.

  • Take notes on potential questions as you read. While you read the source material for your class discussion, write down broad, big-picture questions about what you’re reading. If you have identified or been given a purpose for reading, use it to guide the questions that you might ask. Later, you can use these notes to help write more polished, final open-ended questions.
  • If you have trouble coming up with specific questions while reading, underline or circle portions of the text that seem important, confusing, or connected to your purpose for reading. You can return to these later as starting points for your written open-ended questions.

Step 2: Add open-ended questions to market research surveys to gain new insights.

  • For example, instead of asking “Were you satisfied with your experience?”, you could try something like: “What about your experience did you find most satisfying, and what about it did you find frustrating or difficult?” Instead of simply giving a “yes” or “no” answer, your respondents will give you specific information, and possibly new ideas for improving your product or service. [3]
  • However, if you’re looking for simpler, more quantitative data, it might be easier to rely on multiple-choice, yes-no, or true-false questions, all of which are closed-ended. For example, if you’re trying to find out which gelato flavor was the most popular at your shop this month, it would be easier to ask a closed-ended question about which flavor the respondent purchased most frequently, and then list all available flavors as potential answers.

Step 3: Use open-ended interview questions to thoroughly screen a potential job candidate.

  • Examples of effective open-ended questions to ask in an employment interview include: “In a previous job, have you ever made a mistake that you had to discuss with your employer? How did you handle the situation?” or “When you’re very busy, how do you deal with stress?”

Step 4: Prepare open-ended questions for journalistic interviews to ensure thorough responses.

  • This strategy can be especially useful when interviewing candidates for public office, who are often more concerned with pushing their own platform than with giving thorough, honest answers. Closed-ended questions allow interviewees like these to halt the conversation with a “Yes, but…” or “No, but…” response, and then redirect it towards their own agenda.

Structuring Effective Questions

Step 1: Begin your question with “how,” “why,” or “what.”

  • This isn’t a hard-and-fast rule – you can write a closed-ended question with any leading word. For example, “What color shirt was she wearing?” is decidedly a closed-ended question.

Step 2: Create questions that analyze, compare, clarify, or explore cause and effect.

  • Analytical or meaning-driven questions might ask why a character in a literary text is behaving a certain way, what the importance of a particular concept is, or what the meaning of a scene or image might be. In a class discussion about a novel, you might ask: “What is the significance of the fact that Mary held back tears as she finished her donut towards the end of Chapter 2?”
  • Comparison questions might ask about similarities or differences between character perspectives, or ask the respondent to compare and contrast two different methods or ideas. For example, in a marketing survey, you could ask, “Which model of can opener – the Ergo-Twist or the Ergo-Twist II – was easier to use, and why?"
  • Clarifying questions might ask what the meaning of a complicated idea or an unclear term might be. For instance, if you’re interviewing someone who keeps bringing up “the war on Christmas,” you might ask them, “What exactly do you mean by that statement? Who is attacking Christmas, and how?”
  • Cause-and-effect questions might ask why a character is displaying an emotion in a particular situation, or what connections might exist between two different ideas. An example of a cause-and-effect question that you might ask in an interview could be: “What aspects of your experience in college sports might influence your approach to this job?”

Step 3: Avoid questions that are vague, leading, or answerable in one word.

  • An example of an excessively vague question might be “What about Jeff’s strange behavior?” (Well, what about it?)
  • A leading question hints at the expected answer, thus making it difficult for students who have different ideas to speak up. An example might be: “Why is the ocean a symbol of human insignificance and existential despair?”
  • An example of a yes or no question would be: “Does the grandfather disapprove of his granddaughter’s desire to become a cowgirl?”

Step 4: Avoid questions with limited possible answers.

  • This could mean offering survey respondents a text box to type or write their answers in, rather than bubbles to fill in.
  • In a conversational setting, like a journalistic interview, this means avoiding giving your subject potential answers when you pose the question. For example, instead of asking, “Would you prioritize an aggressive overhaul of public transportation or the increased use of alternative fuels?” ask a question like: “What strategies would you prioritize to make our city more energy-efficient?”

Step 5: Follow up closed-ended questions with open-ended questions.

  • For example, if you ask a multiple-choice question like “How often do you visit your local public library? A) Often, B) Sometimes, or C) Never,” you could follow it up with questions like: “If you chose A, what aspects of our library keep you coming back?” or “If you chose C, what prevents or dissuades you from visiting the library?”
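As a loose illustration of this step, here is a short Python sketch that routes a closed-ended answer to its open-ended follow-up; the question text comes from the library example above, while the mapping structure itself (and the added follow-up for choice B) is purely hypothetical.

```python
# Closed-ended question with fixed choices, as in the library example above.
closed_question = "How often do you visit your local public library?"
choices = {"A": "Often", "B": "Sometimes", "C": "Never"}

# Each choice routes to an open-ended follow-up asking for the "why".
follow_ups = {
    "A": "What aspects of our library keep you coming back?",
    "B": "What would encourage you to visit more often?",  # hypothetical addition
    "C": "What prevents or dissuades you from visiting the library?",
}

def follow_up(answer: str) -> str:
    """Return the open-ended follow-up for a closed-ended answer key."""
    return follow_ups[answer.strip().upper()]

print(follow_up("c"))
# -> What prevents or dissuades you from visiting the library?
```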

Step 6: Check over your questions to make sure that they’re open-ended.


References

  • [1] https://examples.yourdictionary.com/examples-of-open-ended-and-closed-ended-questions.html
  • [2] https://www.nngroup.com/articles/open-ended-questions/
  • [3] https://www.indeed.com/career-advice/career-development/open-ended-questions-examples
  • [4] https://www.indeed.com/career-advice/interviewing/tough-open-ended-questions
  • [5] https://www.poynter.org/reporting-editing/2004/the-way-we-ask/
  • [6] https://hbr.org/2018/05/the-surprising-power-of-questions
  • [7] https://www.artofmanliness.com/people/social-skills/social-briefing-8-better-conversations-asking-open-ended-questions/



What Are Open-Ended, Close-Ended Questions? Definition, Examples


Open-ended question definition: Open-ended questions are questions that have unlimited response options.

Close-ended question definition: Close-ended questions are questions that have limited response options.

What is an Open-ended Question?

Open-ended questions are questions that allow for various response options. Open-ended questions do not expect a particular answer. Rather, they allow the individual providing the response to answer however they choose.

Examples of Open-ended Questions


  • What was your childhood like?
  • How did you decide to enter this profession?
  • When would you like to visit the museum?

Open-ended questions are common in job interviews.

What is a Close-ended Question?


Close-ended questions limit the range of possible responses and require specific answers.

Typically, close-ended questions lend themselves to “yes” or “no” responses. Furthermore, close-ended questions are usually specific in nature.

Examples of Close-ended Questions

  • Did you attend the conference?
  • Will you eat dinner with us?
  • Do you like vanilla ice cream?
  • When were you born?

As you can see, the answers to these questions will be much less involved than those to open-ended questions.

Open-Ended Questions vs. Close-Ended Questions

Open-ended questions and close-ended questions are different in that they elicit very different responses.

The following questions illustrate close- and open-ended questions side-by-side. The questions are similar in subject matter, but the responses will vary depending on the question style.

Open-ended vs. Close-ended Questions:

  • What is your favorite ice cream flavor? / Do you like chocolate ice cream?
  • How are you feeling? / Are you feeling well?
  • What are your plans this evening? / Do you have dinner plans?
  • What homework do you have to complete? / Do you have math homework?
  • Where is your shirt? / Is your shirt in the closet?
  • Where should I buy a new blouse? / Should I buy a blouse at the mall?
  • When is your birthday? / Is your birthday in May?
  • What books did you read this summer? / Did you read a book from the suggested list?
  • Where is your next vacation? / Do you think you will go to Europe soon?
  • How did you meet your husband? / Are you married?

As you can see from these examples, each question type brings out a different kind of response. Close-ended questions are more specific, while open-ended ones are much more “open.”

How Is Each Question Used?


Close-ended questions are best used when you want a short, direct answer to a very specific question. They are less personal in nature and are best used when the person asking wants a quick answer.

Are the following questions open- or close-ended questions?

  • Will you attend the dance tonight?
  • How will you evade the storm?
  • Did you bring the camera?
  • Why can’t I join you?
  • Would you like a new dress?

Answers: close-ended, open-ended, close-ended, open-ended, close-ended.

Summary: What Are Open-Ended, Close-Ended Questions?

Define open-ended question: an open-ended question is a question that does not expect a specific, narrow answer.

Define closed-ended question: a close-ended question is a question that expects a specific answer and does not give leeway outside of that answer.

In summary,

  • Open-ended questions are broad and do not expect a specific answer.
  • Close-ended questions are limiting and expect a specific answer.


How to Ask Open-Ended Questions (with 100+ examples!)


How do you make friends? How do you win clients? How do you build relationships? The secret lies in knowing how to turn a conversation with someone into a connection.

Often, that happens when people share not just facts, but also opinions, thoughts, feelings, experiences, and stories. But how do you start a conversation so people give you such meaningful responses?

The key is to ask open-ended questions, rather than closed ones. Closed questions usually produce a short, yes or no response; they tend to limit the conversation. On the other hand, open questions produce a longer, fuller response; they expand the conversation.

The most important benefit of open-ended questions is that they let you find out more than you expected. People may share fears and problems, hopes and solutions, ideas and possibilities. By understanding both types of questions, you can use the right one at the right time, depending on the situation and your goals.

This resource will show you the difference between closed and open questions, when and how to use each, and then give you over 100 sample closed questions and over 100 sample open-ended questions!

Both closed and open questions serve a purpose. Sometimes, one starts with a simple, closed question, and then moves on to a more open one. In fact, if you don’t switch to open questions, the dialogue will feel like an interrogation, rather than a conversation!


FEATURES

  • Closed questions encourage a yes/no answer; open questions encourage a full answer.
  • Closed questions limit conversation; open questions develop it.
  • Closed questions make people think less, and superficially; open questions make people think more, and deeply.
  • Closed questions evoke short, factual answers; open questions evoke longer, meaningful answers.
  • Closed questions produce facts and basic information; open questions encourage thoughts, opinions, feelings, and stories.
  • Closed questions produce few or no surprises; open questions may produce surprises.
  • Closed questions give control to the questioner; open questions give control to the speaker.
  • Closed questions are like multiple-choice questions on a test; open questions are like short-answer questions on a test.

USES

  • Closed: “It’s a beautiful day, isn’t it?” Open: “Do you like this kind of weather?”
  • Closed: “Do you speak English?” Open: “How did you learn to speak English so well?”
  • Closed: “Is this laptop available in blue?” Open: “Are you happy with your new laptop?”
  • Closed: “Wouldn’t you love to drive a car like this?” / “Are you happy with your cellphone company?” Open: “How can we achieve peace?” / “How can we prevent a war?”
  • Closed: “If I agree to that price, will you sign now?” / “Would you prefer to pay on the 1st or the 15th?” Open: “What features would you like to see in a cellphone plan?”
  • Closed: “Would you buy this car?” / “How much would you pay?” Open: “What would make you buy this car?”

Question Words

  • Closed questions typically begin with: Do/Did, Are/Was/Were/Will, Who, When, Where.
  • Open questions typically begin with: What, How, Why, Describe…, Tell me about….


Learning to ask open-ended questions can lead to some of the most interesting and meaningful conversations – and connections – in your life!


The Ultimate Guide to Open-Ended Questions vs. Closed-Ended Questions

  • Written By Lena Katz
  • Updated: November 15, 2023

For stronger connections, better insights, and more business, experts recommend one conversational tool above all in the demo or discovery phase: open-ended questions. Profile writers use them all the time to elicit thoughts and anecdotes from their subjects.

Smart marketers also use them to maximize authentic engagement with new business leads and current clients. However, there’s a method and skill required to ask open-ended questions… and part of it is realizing and leveraging the other, equally important benefits of asking closed-ended questions.

In this article, we’ll go over the best habits to get into for asking open-ended questions, when to use closed-ended questions instead, scenarios when you might need to use both, the different ways they impact data collection, and some examples of open versus closed questions as used in marketing, sales, and content interviews.

But first, a little teaser of examples for each approach…

Examples of open-ended questions:

  • Where would you like your business to grow from here?
  • What would success look like to you?
  • What campaigns are out there right now that caught your eye, and for what reasons?
  • What are a couple of day-to-day practices of yours that people can implement for greater success/fulfillment in their own lives?
  • Can you give me a few dates for a follow-up call?

Examples of closed-ended questions:

  • Are you satisfied with your current sales numbers?
  • What is your #1 goal?
  • Did you like your competitor’s latest campaign/commercial?
  • Where can someone go to learn more about what you do?
  • When would you like to set a follow-up?

What is an open-ended question?

An open-ended question is one that can only be answered by a unique thought or statement in someone’s own words — it cannot be answered in one word, or by yes/no, or by multiple choice. Open-ended questions encourage people to come up with a more thoughtful and filled-out answer incorporating more of their own information and point of view.

People who want to keep an exchange of information and flow of thoughts going with whomever they’re interviewing will generally stick with open-ended questions. These questions encourage interviewees to explore their “why” and to give context to their decisions.

They illuminate the reasoning behind decisions and opinions. In interviews, they help the writer/producer get to know and understand a subject… and then pass that insight along to readers.

Why/when are open-ended questions recommended/important?

They can be used at any time when it’s more important to the interviewer to elicit thoughts and opinions and insights than to get definitive answers.

Situations may include:

  • Informational interviews with business prospects
  • Discovery sessions with potential or new clients
  • Feedback sessions with existing clients
  • Testimonial interviews
  • Interviews for profiles
  • Market research — when you’re trying to gauge people’s perception of a brand
  • Market research — customer insight interviews
  • Customer satisfaction surveys — to solicit people’s opinions

Do’s for crafting open-ended questions:

  • Do start off with “Why…” or “What…”. If you fear that even with that opening your question will draw a succinct answer, build in a request for the interviewee to share their thoughts rather than get straight to the point.
  • Do ask people to explain something.
  • Do ask people for their thoughts on something.
  • Do ask for an example.
  • Do remember, an open-ended question can also be phrased as a statement: “Tell me about a moment when…”
  • Do follow a closed-ended question with an open-ended question — to get exact data, and then an explanation of the data provided.

Don’ts for crafting open-ended questions:

  • Don’t make them so broad that people get confused.
  • Don’t encourage lengthy answers to every question (especially if this is a survey situation).
  • Don’t overuse them and forget to get quantitative data.
  • Don’t make them two-part questions where each part requires its own separate train of thought.
  • Don’t prompt an answer or make any suggestions that could push an answer in a certain direction.


What is a closed-ended question?

We’ve briefly touched upon closed-ended questions just to compare with open-ended ones. Now, let’s define exactly what they are and in what scenarios it’s better to use them.

Closed-ended questions require one specific answer — either a yes/no or a choice between a few options. Sometimes they’re in pursuit of a fact, and sometimes a decision. These types of questions are used to collect quantitative data, which can be mapped out on charts or graphs.

The answers are also used to come up with numerical ratings of how a company is performing or meeting customer expectations. When used by salespeople, closed-ended questions can also be a tactic to assess how cold or warm a lead is, and to move the sales process along.
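To make this concrete, here is a minimal sketch in Python (all responses invented for illustration) of how closed-ended answers reduce to the kind of numbers that feed charts and ratings:

```python
# Invented closed-ended responses: a yes/no question and a 1-5 rating.
yes_no = ["yes", "no", "yes", "yes", "no"]
ratings = [4, 5, 3, 4, 2]

# Closed answers are standardized, so they reduce directly to numbers.
share_yes = yes_no.count("yes") / len(yes_no)
average_rating = sum(ratings) / len(ratings)

print(f"{share_yes:.0%} answered yes")          # 60% answered yes
print(f"average rating: {average_rating:.1f}")  # average rating: 3.6
```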

For interviewers such as writers, closed-ended questions are often used to establish background facts about a topic or person. They can also be used for winding up an exploratory Q+A session with some definitive conclusions.

You see this on reality TV interviews often. One person shares her drama with another cast member, explores the person’s possible motivations, speculates on her intentions, and then the interviewer asks: “Do you trust that person?” “No.” “Do you still think of her as a friend?” “No.”

It puts a bow on the conversation and lets viewers know where the storyline is headed.

Why/when are closed-ended questions important to use?

  • When you want to get fast facts or basic biographical details
  • When you need answers to be exact
  • When you are collecting quantitative data
  • When the answer will determine whether or not it makes sense to continue pursuing a lead (especially related to budget and timeline)
  • When you are setting goals and KPIs that you’ll be expected to deliver against
  • When you’re fact-checking
  • When your legal department is going to want to put information into a contract

Do’s for crafting closed-ended questions:

  • Do begin the question with Have, Will, or Do/Did.
  • Do switch up the question structure between yes/no, multiple-choice, rating scale multiple-choice, and fact-based answers.
  • Do create the questions according to what data you need to get from a study, survey, or questionnaire.
  • Do follow (or lead) a closed-ended question with an open-ended question to get both quantitative and qualitative information.

Don’ts for crafting closed-ended questions:

  • Don’t provide a selection of multiple choice answers that’s too limited to cover the full range of possibilities.
  • Don’t assume that everyone will be able to give a yes/no answer based on their experience of something.
  • Don’t attempt to craft complex or two-part questions as you might with an open-ended question.
  • Don’t use this format to explore emotions or feelings.
  • Don’t create a survey or study that is only closed-ended questions; at minimum have an open-ended question at the end of each section that allows people to explain their answers or give context to them.

Open-ended vs. closed-ended questions

Let’s have a look at the different purposes they serve, how they complement each other, what kind of data they garner, and how each can be used in our three scenarios (a sales call, a marketing exercise, a writers’ interview).

  • An open-ended question opens up a topic for exploration and discussion while a closed-ended question leads to a closed-off conversational path. After “Yes” or “No” or the specific one-word answer to the question, the thread is done.
  • Open-ended questions lead to qualitative answers while closed-ended questions lead to quantitative answers.
  • Open-ended questions ask people for their why, while closed-ended questions ask people for their decision.

In shopper behavior analysis:

  • Open-ended questions spend time peeling back the layers of why someone feels some way about a product.
  • Closed-ended questions take a person through their buying habits: how often they buy a product, which brand they typically buy, whether they’ve heard of your brand, whether they buy it.

In sales meetings:

  • Open-ended questions help you understand your potential customer better.
  • Closed-ended questions help you realistically decide whether there’s business to close.

In marketing research:

  • Open-ended questions are good for getting customer insights.
  • Closed-ended questions are good for establishing who is a loyal customer and who has little brand awareness or loyalty.

In writing profiles or bios:

  • Open-ended questions are good for establishing a connection, getting lots of nuanced details, and pulling back the curtain on a person’s life.
  • Closed-ended questions are good for establishing their credentials, hitting biographical details, and fact-checking anecdotes you discovered during preliminary research.

Sample open-ended questions vs. closed-ended questions

10 open-ended vs. closed question set examples for sales professionals

When you’re in sales, open-ended questions are good for understanding more about your customer and opening up a real dialogue. Closed-ended questions are good for getting prospects to let you know whether they have any intentions of signing a contract any time soon.

Sales example 1:

CLOSED: Were you happy with your former [agency/SaaS provider/other competing product or vendor]?
OPEN: What was it about your former [competing product/vendor] that has you looking for a new vendor?

Sales example 2:

CLOSED: Are you satisfied with your current sales numbers?
OPEN: Where would you like your business to grow from here?

Sales example 3:

CLOSED: Have you ever executed the kind of project/campaign we specialize in before, either on your own or with a different partner?
OPEN: Tell me about a case study or existing campaign/project in the market, in this category, that you really like. It can be one of your own or another company’s.

Sales example 4:

CLOSED: (after a product demo) Do you have any questions?
OPEN: We went through a lot of information just now. What part stood out to you the most, either because you loved it or because you’d like a little more time to understand it?

Sales example 5:

CLOSED: (after going through prices) Does this fall more or less into the budget range you have in mind?
OPEN: Could you tell me how you’d want to customize a scope of work, or what services would be important to you? That way I can come up with a price quote.

Sales example 6:

CLOSED: What’s the main goal you’re hoping I can help with?
OPEN: What are your immediate and also your big-picture goals?

Sales example 7:

CLOSED: Are you interested in buying/subscribing to/getting a membership to the product I’ve shown you today?
OPEN: Now that we’ve previewed our product/service together, what are you thinking your next step will be?

Sales example 8:

CLOSED: When would you like to set a follow-up?
OPEN: Can you give me a few dates for a follow-up call?

Sales example 9:

CLOSED: Do you feel like you got all the information you needed?
OPEN: Before we wrap, can you tell me what you’d like to look over again — either here or in an email follow-up?

Sales example 10:

CLOSED: On a scale of 1-10, how would you rate our team’s service up to this point?
OPEN: Please share anything specific that stood out to you about the service you’ve received from our team so far.

10 open-ended vs. closed question set examples for marketers

Marketers are constantly interacting with customers, stakeholders, current clients and leads — their lives are an interesting mix of collecting data and fostering connection.

Just look at a social media manager’s day-to-day: Half may be spent analyzing paid campaign results and crunching numbers. The other half may be spent following up on an angry customer’s Facebook tirade or getting people’s permission to use content for UGC.

Today’s marketer needs to be able to flip from analyzing facts to feelings, balance trends with the tried-and-true, switch from closed-ended to open-ended questions in an instant, and then explain their findings to the non-marketers they work with or hope to work with soon.

Marketing example 1:

CLOSED: Are you satisfied with the quantity and quality of new business leads you’re currently getting?
OPEN: What are your thoughts on the new business/lead-gen process at your company as it is now?

Marketing example 2:

CLOSED: What is your #1 goal?
OPEN: What would success look like to you?

Marketing example 3:

CLOSED: Have you considered putting your budget toward X channel or tactic?
OPEN: What channels and tactics do you feel are important to include in your next marketing plan?

Marketing example 4:

CLOSED: Did you like your competitor’s latest campaign/commercial?
OPEN: What campaigns are out there right now that caught your eye, and for what reasons?

Marketing example 5:

CLOSED: Which of the four logos shown here is best in your opinion?
OPEN: Why did that one stand out to you?

Marketing example 6:

CLOSED: On a scale of 1 to 10, how satisfied were you with the information provided on our website?
OPEN: What areas/sections do you think we can improve, and how?

Marketing example 7:

CLOSED: Did you like the first version of the video I just sent over?
OPEN: If you had a chance to watch the video I sent, what’s your feedback?

Marketing example 8:

CLOSED: What’s your budget for this activation/campaign/partnership?
OPEN: There are a few ways we’ve discussed that a partnership could play out. How flexible is your budget if I were to send three different options?

Marketing example 9:

CLOSED: Are you mainly looking at reach, engagement, or conversion as the key metric to gauge success in this campaign?
OPEN: Let’s discuss what KPIs will be used to determine success in this campaign.

Marketing example 10:

CLOSED: Can we move forward with X project at $X budget for the dates presented?
OPEN: We are ready to answer any final questions you might have before moving forward with this project.

Using open-ended vs. closed questions in interviews

10 open-ended vs. closed question set examples for interviewers:

One common pitfall you really need to be cautious of with experts and executives is the false open-ended question. This is a question phrased so that it could lead to a personal anecdote or insight, but could also be answered with a simple “No.”

While experts and execs usually like to talk about their work, they will sometimes answer with a simple “No” because they haven’t thought about the question before and don’t really have an opinion.

All the open-ended sample questions here are crafted to avoid the possibility of a “No.”

Interview example 1:

CLOSED: What’s your job title?
OPEN: How would you describe your professional specialty/expertise/niche?

Interview example 2:

CLOSED: What’s your focus right now?
OPEN: Tell me one of your key focuses right now and why you’re interested in it.

Interview example 3:

CLOSED: Do you like X trend?
OPEN: Name three of your favorite trends in our industry right now and why you like them.

Interview example 4:

CLOSED: What would you consider your key accomplishment in your field to be?
OPEN: Please walk us through the accomplishment that gave you the most satisfaction in your career.

Interview example 5:

CLOSED: What degrees, awards or certifications do you have?
OPEN: Of the degrees and awards you’ve received, which would you say are the most meaningful, and why?

Interview example 6:

CLOSED: Was it difficult to transition from [#1 well-documented career] to [#2]?
OPEN: You successfully transitioned from [#1 well-documented career] to [#2]. Explain to us how that happened.

Interview example 7:

CLOSED: Can you tell us who will be in your next project/speaking at your next event?
OPEN: How do you choose collaborators or speakers for your projects/events?

Interview example 8:

CLOSED: Where can someone go to learn more about what you do?
OPEN: What are a couple of day-to-day practices of yours that people can implement for greater success/fulfillment in their own lives?

Interview example 9:

CLOSED: What’s new/next for you?
OPEN: What upcoming project or venture are you most excited about, and why?

Interview example 10:

CLOSED: What social channels can we find you on?
OPEN: If we all go follow you on Instagram or Twitter, what kind of content are we going to see?

Open- and closed-ended questions are equally valuable.

While open-ended questions are a buzzword among salespeople and business coaches right now, we think the proper mix of open- and closed-ended questions is essential to any discovery process.

If you understand the difference between them, know how and for what purpose to use each, and can rework a closed-ended question into an open-ended question on the fly when needed, then you’re halfway to being a great interviewer.

Whether in sales or medical research or journalism, questions are a means to create connections and explore stories. They’re also a way to get useful data. One leads to the “why,” and the other leads to the “yes.”

The real question is: What’s next?

Now that you’re an expert on open and closed-ended questions, you’ll be a master at creating authentic engagement with your brand. But if you need some help, ClearVoice has got you covered. Our managed content creation and expert teams can help you produce content that can maximize your brand’s growth and impact. Connect with us here to see how.


How to ask open-ended questions? Crucial tips and examples


Mehal Rashid

Ever feel like you’re pulling teeth in conversations with clients or colleagues? You ask a question, hoping to spark a lively discussion, but all you get is a one-word answer and an awkward silence. The struggle is real.

But what if I told you that you could turn those unproductive chats into engaging exchanges that yield valuable insights with open-ended questions?

This article explains what open-ended questions are, why you need to ask them, and how to ask them, and offers some examples to get you started.

Why use open-ended questions?

Open-ended questions are questions that can’t be answered with a simple “yes” or “no.”

Instead, they encourage a more thoughtful and detailed response. Using open-ended questions in a survey can really jazz up your data collection game.

These questions give your respondents the chance to express themselves freely. The freedom encourages them to share insights, ideas, and even stories that you might not have thought to ask about.

These types of questions are super valuable when you’re exploring new topics or trying to understand complex issues. They allow for flexibility and depth, helping you uncover hidden gems of insight that you might have missed with closed-ended questions.

And let’s not forget the human touch. When people see an open-ended question, it’s like you’re saying, “Hey, I really want to hear what you have to say.”

It shows that you value their input and are genuinely interested in their perspective, leading them to provide answers with authenticity and not just out of compliance.

How to ask open-ended questions: The secret sauce

So, how do you ask open-ended questions effectively?

Let’s dive deeper into some expert tips that describe how to ask open-ended questions.

1. Build the momentum with “How” or “What”:

Begin with “How” or “What” questions to encourage respondents to provide detailed, thoughtful responses.

These interrogatives prompt individuals to share their experiences, opinions, or feelings in their own words, rather than limiting them to a simple “yes” or “no.”

For instance, instead of asking, “Did you enjoy the event?” you could inquire, “What aspects of the event did you find most enjoyable?”

This approach invites participants to express themselves freely and provides richer qualitative data.

2. Be specific, but not leading:

While it’s essential to provide some direction with your questions, be careful not to lead respondents toward a particular answer.

Being specific in your questioning helps guide the conversation, but avoid phrasing that may influence the participant’s response.

For instance, instead of asking, “Do you think the service was excellent?” opt for a neutral approach like, “How would you describe your experience with the service?”

This will allow individuals to share their perspectives without feeling pressured to conform to a predefined opinion.

3. Encourage elaboration:

To extract valuable insights, let your respondents answer open-ended questions and then follow up on their previous answers.

When someone provides a response, probe further to encourage them to expand on their thoughts or experiences.

Aim for follow-ups that foster a thorough exploration of the topic and bring hidden layers of information forward.

For example, if a survey participant mentions enjoying a restaurant, you could ask what specific aspects they appreciated or if they have any memorable anecdotes to share.

Let’s move on to some examples and tips for open-ended questions.

Open-ended questions examples

Here are some real-world examples so you can get a better idea of how to ask open-ended questions.

Team meetings

Open-ended questions encourage participation, foster discussion, and promote critical thinking among team members.

Here are some open-ended questions to ask your team.

  • What progress have we made since our last meeting?
  • How can we improve our processes to enhance efficiency and productivity?
  • What are your thoughts on the current project timeline, and do you foresee any potential roadblocks?
  • What additional resources or support do you need to achieve your objectives?
  • In what ways can we better collaborate as a team to achieve our goals?

Customer feedback

Open-ended questions in customer feedback surveys solicit detailed insights and suggestions from customers.

They help businesses understand their customers’ needs and preferences to improve products or services.

  • What do you like most about our product/service, and why?
  • Can you share an experience where our product/service fell short of your expectations?
  • What improvements would you like to see in future versions of our product/service?
  • How likely are you to recommend our product/service to others, and what factors influenced your rating?

Employee engagement surveys

Open-ended questions in employee engagement surveys allow employees to express their opinions, concerns, and suggestions.

This way, organizations can identify areas for improvement and foster a positive work environment.

  • What aspects of your job do you find most rewarding, and why?
  • How would you describe the company culture, and what changes, if any, would you like to see?
  • What barriers, if any, do you encounter that hinder your productivity or job satisfaction?
  • What initiatives would you like to see implemented to support employee well-being and professional development?

Problem-solving

Open-ended questions in problem-solving scenarios encourage critical thinking, creativity, and collaboration that produce innovative solutions.

  • What do you think is the root cause of the problem we’re facing?
  • What are some potential solutions we haven’t considered yet?
  • How have you resolved similar challenges in the past, if any?
  • What resources or expertise do we need to effectively address this problem?

Research

Research involves gathering information, analyzing data, and drawing conclusions to inform decision-making or address specific questions. Open-ended questions in research contexts facilitate exploration and deeper understanding of complex topics.

  • What are the key objectives or questions we hope to answer through this research?
  • What existing knowledge or literature can we draw upon to inform our research?
  • What methodologies or approaches do you think would be most effective in collecting relevant data?

5 Benefits of open-ended questions

Here are some advantages of open-ended questions:

1. Encourage deep reflection

Open-ended questions prompt respondents to think critically and reflect on their thoughts, feelings, and experiences.

By inviting individuals to express themselves in their own words, these questions encourage deeper introspection and produce better-quality responses.

2. Capture diverse perspectives

One of the perks of open-ended questions is their ability to capture a wide range of perspectives. Rather than limiting respondents to predefined options, open-ended questions allow individuals to share their unique viewpoints, insights, and experiences.

The diversity of perspectives enriches your data and provides a more comprehensive understanding of the topic at hand.

3. Collect unanticipated insights

Open-ended questions have a knack for revealing unexpected insights and revelations.

By giving respondents the freedom to express themselves without constraints, these questions can reveal new ideas, opinions, and experiences that may not have been considered otherwise.

This element of surprise leads to valuable discoveries and deeper understanding.

4. Foster engagement and participation

When respondents come across open-ended questions, it signals that their input is valued and their voices are heard.

This sense of validation and appreciation can increase engagement and participation in the surveys you create, as individuals feel motivated to share their thoughts and contribute to the conversation.

Plus, these questions can make surveys feel less like a chore and more like a meaningful exchange that leads to higher response rates and richer data.

5. Adapt to dynamic conversations

Open-ended questions are flexible and adaptable, making them suitable for a variety of situations and contexts.

Whether you’re exploring new topics, delving into complex issues, or seeking detailed feedback, open-ended questions can accommodate the dynamic nature of conversation.

This versatility allows you to tailor your questions to suit the specific needs and objectives of your survey, all while ensuring that you gather the most relevant and valuable information possible.

Open-ended vs. closed-ended questions

Closed-ended questions give respondents a list of options to choose from, making it easy to gather specific, standardized data.

For example, “What’s your favorite color: Red, Blue, or Green?” It’s straightforward, no-nonsense, and great for collecting quantitative data fast.

Such questions are perfect for things like demographics, preferences, or simple opinions.

On the flip side, open-ended questions invite respondents to share their thoughts and feelings in their own words.

For instance, “How would you describe your ideal vacation?” It’s an open-ended question that allows for deeper insights and qualitative data.

The main difference between open and closed questions lies in how they are used.

Let’s say you’re designing a survey about customer satisfaction for a restaurant. You might use closed-ended questions to ask about things like food quality or service speed.

But if you want to know why someone keeps coming back or what they’d love to see on the menu, open-ended questions are your best bet.

A combination of open questions and closed questions helps you understand the complete picture of the topic at hand.
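Picking up the restaurant example, here is a minimal sketch in Python (question text and responses are invented for illustration) of how a mixed survey might be represented: closed answers aggregate into counts for charting, while open answers are kept as raw text to be read for the “why.”

```python
from collections import Counter

# A minimal mixed survey: closed questions yield countable options,
# open questions yield free text for qualitative review.
survey = [
    {"id": "q1", "type": "closed",
     "text": "How would you rate our service speed?"},
    {"id": "q2", "type": "open",
     "text": "What would you love to see on the menu?"},
]

# Invented responses, keyed by question id.
responses = [
    {"q1": "Good", "q2": "More seasonal vegetarian dishes."},
    {"q1": "Excellent", "q2": "A weekend brunch option."},
    {"q1": "Good", "q2": "Smaller portions at a lower price."},
]

for q in survey:
    answers = [r[q["id"]] for r in responses if q["id"] in r]
    if q["type"] == "closed":
        # Closed answers are standardized, so they tally directly.
        print(q["text"], dict(Counter(answers)))
    else:
        # Open answers stay as raw text: read them for the "why".
        print(q["text"])
        for a in answers:
            print("  -", a)
```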

Create engaging open-ended questions surveys in minutes!

Learning how to ask open-ended questions is a game-changer for businesses, but it takes practice and following the right approach.

With the tips and examples provided in this article, you’re well on your way to becoming a master interrogator – in the best sense of the word, of course!

Now, if you’re all set to create this type of question, consider using Formaloo to make the process hassle-free. Formaloo is a user-friendly online form builder and survey maker that can help you design engaging open-ended surveys in minutes.

Sign up for free to try out Formaloo’s super easy interface and cutting-edge features.



Preparing Students in Writing Responses to Open-Ended Questions


The new 2015–2016 assessments written by the Smarter Balanced Assessment Consortium and the Partnership for Assessment of Readiness for College and Careers both heavily feature questions that require students to provide evidence for their replies. This is a dramatic departure from simple multiple-choice questions, where students can guess the best response if they are unsure of the answer. What can teachers do to prepare students for this more rigorous form of testing? How can teachers help students pinpoint the heart of open-ended questions to give the best response?

Suppose you are a student taking one of the new assessments that have been developed to measure attainment of the Common Core State Standards for the English Language Arts (CCSS/ELA). After reading a text about a baseball-loving girl and her grandmother, you look at the questions you are to answer. Here is what you see:

What does Naomi learn about Grandma Ruth? Use details from the text to support your answer. (Grandma Ruth, Smarter Balanced test sample)

This task is an example of the Smarter Balanced Assessment Consortium’s (SBAC) tasks for grades 3–5. It is illustrative of a task format thousands of students will encounter when they take that assessment in the fall of 2014. It is also similar to a format found on the end-of-year assessment tasks used by the Partnership for Assessment of Readiness for College and Careers (PARCC).

These tasks, open-ended questions as well as research simulations (often described as performance assessments), require students to construct their own responses rather than select them from a set of given possibilities. And, if you are a typical student, this assessment may be the first time that you have been required to respond to a task by doing more than filling in a bubble. Needless to say, responding successfully to such a task might prove daunting.

A great deal has been written—and continues to be written almost daily—about implementing classroom instruction that promotes the skills and knowledge called for in the CCSS/ELA. Less focus has been given, however, to addressing the additional skills and different kinds of knowledge that are called for to complete some of the tasks found on the new CCSS/ELA-related assessments.

The skills and knowledge that underlie understanding the expectations of and writing responses to higher-level questions are not simply test-taking abilities. Rather they are skills and dispositions that apply to both demonstrating achievement on the assessments and, more importantly, to effective information processing in the 21st century.

Focusing on open-ended tasks (future issues of Text Matters will address other types of tasks, such as research simulations), this issue of Text Matters identifies the skills and knowledge that students will need if they are to achieve success on the new CCSS/ELA-related assessments and offers ideas for ways that teachers can develop these skills and understandings. The three main goals of this article are:

  • to describe how students need to approach the close reading of the questions, or tasks on the assessments;
  • to identify the kinds of skills and knowledge students need in writing clear, comprehensible responses; and
  • to examine issues related to fluency in writing and stamina that arise as students work with extended texts.

Applying Close Reading to Open-Ended Assessment Tasks

As of early 2014, most American students are not accustomed to writing extended responses for assessment questions. Rarely do state assessments (and even more rarely commercial publishers’ norm-referenced tests) require students to write even a phrase or a sentence or two in response to questions, much less an entire paragraph. They might write answers to some questions in core reading materials. However, these answers are seldom extended and the questions do not yet reflect the “close reading” intent of the CCSS/ELA, which involves inspecting a text closely for evidence that supports responses. In many classrooms, students do little writing in response to their reading. Most often, they construct responses, usually orally, for an immediate audience (e.g., a small group or the entire class in a classroom setting). If a response misses the point of a text, the student gets immediate clarification and correction from the teacher or a peer.

On the CCSS/ELA-related assessment tasks, however, students will be reading texts on their own and writing responses for an audience that is remote. And, whereas students in a classroom setting get a second chance for a correct response as the teacher repeats a question in a class discussion or asks for greater clarification on a written report or essay, there is no such fallback for students who miss the intent of an open-ended test task. The lack of immediate feedback and guidance creates a major impediment to the ability of students to write responses that demonstrate what they comprehend from the text and provide support from the text for these responses.

In addition, when one examines student responses to open-ended tasks, it becomes apparent that many students also do not read the questions carefully, and their responses are off target or not sufficient.

Consider this example of a constructed-response question and what specifically it requires students to do:

What could you conclude about the author’s bias? Provide two pieces of evidence from the text that support your conclusion.

Mistakes students are likely to make in answering this question have nothing to do with their comprehension of the stimulus text. Often these mistakes reflect lack of attention to the specifics of the task and lack of completeness in responding. Common mistakes that students make in their responses include the following:

  • They provide only one piece of evidence from the text.
  • They provide their own ideas, but no evidence from the text.
  • They provide adequate evidence but no clearly stated conclusion.
  • They fail to pay attention to the verbs in the questions.
  • They do not make a clear connection between their conclusion and the evidence.
  • They respond in an incomplete manner that is often difficult to understand.

The first three problems indicate that close reading is a skill applicable not only to how students must read the stimulus text, but also to how they must read a question and think about what it requires them to do.

Another mistake students often make as they read assessment tasks is the failure to pay attention to the verbs in the questions. For these open-ended tasks, the scoring guides are closely aligned with the verbs, and teachers must make sure students understand that there are differences among explain, describe, list, summarize, and identify. For example, a student response that describes a situation will not receive a full score if the assessment task asks the student to explain it. Some lessons on these verb differences and on how to respond to questions that contain each can help students in their careful reading of tasks and successful construction of responses. Students also need practice responding to questions with different verbs and discussing how their responses reflect the verbs’ intent. This attention to understanding the verbs of questions is useful for almost all students, but it is critical for English language learners. The final two mistakes made by students in their responses, as noted above, are largely conceptual shortcomings that will be discussed in the following section.

One important note: It is crucial that teachers directly teach close reading of tasks. Students might be able to perform an assessment task but fail to demonstrate their ability because they misread the task. Moreover, attention to the specific requirements of tasks is not only a skill but a critical disposition for success at school, at work, and even in personal pursuits such as sports and hobbies. Helping students recognize the importance of attention to task requirements in all aspects of their lives promotes the development of this disposition. Teachers can use games such as Simon Says to develop this ability with very young students. Keeping classroom discussions on topic or work groups on task can promote this disposition as students move across grades.

The new CCSS/ELA-related assessments contain a variety of open-ended tasks, in addition to the ones that have already been described. Table 1 lists ways in which students may fall short in their responses to particular kinds of tasks.

Table 1. Examples of Open-Ended Tasks and Mistakes Students Make with Them

Example tasks:

  • Give three reasons, based on details in the text, that Wolfgang thought he was doing the right thing.
  • What is the main point the author is making in this article? Provide three details that make that point.
  • Tell which character you believe was the bravest and give evidence from the story that shows that the character was brave.

Students’ faulty responses reflect a lack of experience with the types of tasks—tasks that require students to read closely and attend to the evidence in the text. To become competent at these tasks requires experience with such tasks and deliberate instruction of strategies and close reading of tasks. Three actions on the part of teachers will support students in developing the competence that will keep them on the road to college and career readiness:

  • Provide students with opportunities to respond to open-ended questions with connected discourse.
  • Read responses to open-ended questions as a class and discuss whether the responses actually describe, explain, support, etc. or are off task. This demonstrates the importance of close reading of questions and lays the foundation for students’ self-checking their own responses.
  • Help students to develop the habit of checking answers, similar to checking an answer in math. Is this the type of answer the question requires? Does it make sense? Are all the required pieces here?

Writing Complete, Comprehensible Responses

Look again at the typical mistakes students make, such as those listed in Table 1. The related mistakes that students are likely to make in responding to the tasks reflect two major problems:

  • Students write in an incomplete, difficult-to-understand manner, as if they were speaking to someone familiar rather than writing for a stranger or remote reader.
  • Students do not make clear connections between their conclusions and the text evidence.

Teachers can help students avoid these problems by helping them to understand who their readers will be and by demonstrating for them how to frame the responses in ways that make explicit connections between their ideas and information from the text.

Writing for Remote Readers

Writing for “remote readers” is a new experience for young students who are accustomed to sharing their writing with teachers and peers who can give feedback about clarity on the spot. Teachers need to help their students understand that as they write responses on a large-scale assessment, they are writing for readers who are unfamiliar with them personally and who will not be available to ask for clarifications or to point out shortcomings of their writing. Indeed, students need to know that their responses might even be “read” and scored by a computer.

In addition, students, especially younger students, are not aware of the importance of providing clear indications of their thinking in their writing. During class discussions of text-related questions, students can ask for clarifications and have incomplete or vague responses corrected. When writing answers for a stranger to read, clarity is essential. Showing students some unclear responses to questions and discussing how to fix them is one step in developing both their awareness of the need for clarity and their skill in providing it. Having them work in groups to improve the clarity of their own responses and those of peers is another approach that can help focus student attention on how to apply this skill.

Finally, prompting students to self-monitor by asking questions is an especially effective way to help them keep in mind the need for clarity as they write. A guiding checklist can provide them with hints such as the following:

  • Can someone who is not sitting next to me understand my response without asking for clarification?
  • Would the evidence from the text that I’ve chosen to support my response convince me?
  • Is my response thorough and complete? Can I add details from the text to make it stronger?
  • Does my response answer the question?

Making Explicit Connections

Making connections between ideas in writing is a key aspect of clarity. As in the examples above, most CCSS/ELA-related assessment tasks ask students to give evidence from the text to support their responses. Examination of student work shows that those who are unfamiliar with this kind of test question commonly provide just a conclusion and list two details from the text. They seldom offer any information as to how these details support their conclusion.

Direct instruction and practice with both written and oral responses can develop students’ skill in making connections explicit. The following are some practices and activities that teachers can use both to help students develop a model for thorough, complete answers and to learn about the aspects of their writing that trigger confusion in readers:

  • Provide opportunities for students to share feedback with each other on the quality of their responses. (This is a handy habit to develop for both college and career readiness.)
  • Encourage students to use applications such as Box or Dropbox set up for the classroom to provide responses to each other’s written responses, compositions, and thoughts about class work. Students should reflect on the clarity of their own writing as well as provide peers with feedback.
  • Constantly provide opportunities for students to self-monitor their oral and written responses.
  • Conduct a bull’s eye activity to guide student discussions about the quality of sample responses to questions (Kapinus, 2002). Using a target chart such as the one in Figure 1, teachers can explain that just as the target has different rings of difficulty, responses have different levels of completeness.

Figure 1. Bull’s Eye Chart

Figure 1. Modified Bloom’s taxonomy [11].

Figure 2. Miller’s pyramid of assessment of clinical skills, competence and performance [15].

Assessment is central to the educational process, and has benefits beyond that of measuring knowledge and competence alone, principally in directing and stimulating learning, and in providing feedback to teachers and learners [17]. Recent research supports a critical role for assessment in consolidating learning, and in strengthening and facilitating memorisation and recall. There is accumulating evidence that the process of stimulating recall through testing enhances learning and retention of learned material. This has been termed the testing effect, and several hypotheses have been put forward to explain it, including increased cognitive effort, conceptual and semantic processing, and increased attention to the properties distinguishing the learnt item from similar items, which strengthens the relationship between the cue which triggers the memory and the memory item itself [18],[19]. It appears to be principally the act of retrieving information from memory which strengthens knowledge and knowledge retention [20],[21], irrespective of whether retrieval is covert or overt [22]. Importantly, high-level questions appear to stimulate deeper conceptual learning and better learning retention than those pitched at a lower level [23]. A number of strategies have been proposed to exploit this in educational practice, including those recently summarised for use in medical education [24]. This is in a sense related to the “generation effect”, where it has been shown that spontaneously generating information as opposed to learning it passively improves subsequent recall [18],[19].

Assessment in educational practice

It is accepted that standards of assessment are inherently variable. There is therefore an obligation, in summative assessment, to ensure that assessment meets certain minimum criteria [25]. Achieving this in the individual instance is challenging, given the wide range of skills and knowledge to be assessed, the marked variation in assessment knowledge among those who must assess, and the highly variable environments in which assessment takes place. There is now an extensive literature on assessment, in terms of research, guidelines and recommendations [26],[27]. Importantly, modern approaches recognise that no single form of assessment is suitable for every purpose, and stress the need for programmatic assessment, which explicitly recognises that assessment is best served by a careful combination of a range of instruments matched to a particular purpose at each stage of the learning cycle, such as for formative, diagnostic or summative purposes [25],[26],[28].

Written assessment

Despite the proliferation of assessment methodologies which attempt to test the competence of medical students directly, such as OSCE, OSPE, case-based assessment, mini-CEX and workplace-based assessment, written assessments remain in widespread use. Much of the knowledge base required by the clinician is not necessarily testable in the performance format. Additionally, in comparison with most practical assessment formats, written tests are easier to organize and deliver, requiring little more than pen and paper or a computer, a venue, question setters and markers who need not be physically present.

In general, all forms of written assessment may be placed into one of two categories. Constructed-response or open-ended questions include a variety of written formats in which the student is required to generate an answer spontaneously in response to a question. The prototypical example is the essay. There are many variants, including short answer questions (SAQ), mini-essay questions, single-word and single-sentence questions and the modified essay question (MEQ). The selected-response or closed-ended format is typified by the multiple-choice question (MCQ) assessment, where candidates select the most appropriate answer from a list of options rather than generating an answer spontaneously. Many variants of the multiple-choice format have been used: current best practice recommends the use of one-best-answer (of three, four or five possible answers) and extended matching item (EMI) formats [29]. In this debate I shall use the term open-ended when referring to the constructed-response format, and multiple-choice as a synonym for the selected-response format.

All high-stakes assessments should meet an adequate standard in terms of quality and fairness, as measured by a number of parameters, summarised recently in a consensus statement [30]. Principal among these are the classic psychometric parameters of reproducibility (reliability or consistency; that a result would not essentially change with retesting under similar conditions) and validity or coherence, which I describe in detail below. Other important measures by which assessments should be judged are equivalence (assessments administered at different institutions or during different testing cycles produce comparable outcomes), feasibility (particularly in terms of efficiency and cost effectiveness), educational effect (the student who takes the assessment is thereby motivated to undertake appropriate learning), catalytic effect (the assessment provides outcomes that, when fed back into the educational programme, result in better teaching and learning) and acceptability to both teachers and learners.

It is generally accepted that the multiple-choice format, in contrast to the open-ended format, has high reliability and is efficient, a consequence primarily of wide sampling, and to a lesser extent, of its objectivity. In support of the open-ended format, it has been widely held that this format is superior at testing higher cognitive levels of knowledge and has greater validity. This belief is intuitively appealing and appears to represent the viewpoint of many of those involved in medical assessment, including those with extensive knowledge and experience in medical education. In an attempt to gain the best of both formats, there has been a shift from the prototypical essay towards newer formats comprising a larger number of short, structured questions, a development intended to retain the perceived benefit of the open-ended question with the superior reliability of the MCQ.

Thus the two formats are generally seen to be in tension, MCQ being significantly more reliable, the open-ended format having greater validity. In this debate I will compare the performance of the open-ended format with MCQ in summative assessment, particularly in final exit examinations. I draw attention to the large body of evidence which supports the view that, in summative assessment, the multiple-choice format is intrinsically able to provide all the value of the open-ended format and does so more reliably and cost effectively, thus throwing into question the justification for the inclusion of the open-ended format in summative assessment. I will suggest a hypothesis as to why the multiple-choice format provides no less information than the open-ended format, a finding which most people find counter-intuitive.

A critical concept is that assessment is not only of learning, but also for learning [27],[31]. In the first case, the purpose of assessment is to determine whether that which is required to be learnt has in fact been learnt. In the second case, it is acknowledged that assessment may in itself be a powerful driver for learning at the cognitive level. This is supported by a body of evidence indicating the powerful effect of assessment on strengthening memorisation and recall [20],[22],[23]. In this debate I concentrate primarily on summative assessment in its role as assessment of learning; one must, however, remain aware that those methods of assessment best suited to such summative assessment may not be identical to those best suited to assessment for learning; indeed, it would be surprising if they were.

For the first part of the 20th century, written assessment in medicine consisted largely of essay-writing [30]. Multiple-choice assessment was developed for psychological testing by Robert Yerkes immediately before the First World War and then rapidly expanded for the testing of army recruits. Yerkes was interested in assessing learning capacity—not necessarily human—and applied it to crows [32] and pigs [33] as well as psychiatric patients and mentally challenged subjects, a group among whom it was widely used for a number of years thereafter [34],[35]. Application to educational assessment has been credited to Frederick J. Kelly in 1914, who was drawn to it by its efficiency and objectivity [36].

Throughout its history, the multiple-choice format has had many detractors. Their principal arguments are that closed-ended questions do not stimulate or test complex constructive cognitive processes, and that if the ability to construct rather than choose a correct answer is not actively assessed, there is a potential that it will be neither taught nor learnt [37]-[41].

As Rotfield has stated: "Students proudly show off their high grades, from multiple-choice exams, as if their future careers will depend on knowing which choice to make instead of discerning which choices exist" [42]. Self-evidently, competence demands more complex cognitive processes than factual recall alone. The ability to invoke these higher levels of cognition is clearly a skill which should be explicitly assessed. Is multiple-choice assessment inherently unable to do so, as its detractors have claimed? The belief that open-ended questions test high-order cognitive skills whereas multiple-choice questions do not, and that open-ended questions therefore evoke and test a reasoning process more representative of real-life problem-solving than multiple-choice, is a serious concern which I address in this review. We begin, however, with a comparison of the two formats in terms of reproducibility and feasibility.

Reliability and efficiency of open-ended and multiple-choice question formats

Wider sampling greatly increases reproducibility, compensating as it does for unevenness in a candidate’s knowledge, varying quality of questions and even the personality of examiners [43],[44]. That the reproducibility of the multiple-choice format is much higher than that of the open-ended format is borne out in numerous studies comparing the two formats [45]-[47]. Recognition of these shortcomings has led to the design of open-ended formats specifically intended to increase reproducibility and objectivity, while maintaining the supposed advantages of this format in terms of validity. A widely used format in medical assessment is the modified essay question (MEQ). The format consists of a clinical scenario followed by a series of sequential questions requiring short answers. It was expressly designed to bridge a perceived gap between multiple-choice and SAQ, as it was believed that it would prove better at testing high-order cognitive skills than multiple-choice while allowing for more standardised marking than the standard open-ended question [45].

Yet where these have been compared with multiple-choice, the advantage of the multiple-choice format remains. A large number of questions and multiple markers are required in order to provide acceptable reliability for MEQs and essay questions [ 45 ]. Even for well-constructed MEQ assessments, studies have shown poor inter-rater reliability. Thus in an MEQ paper in a final undergraduate medical exit examination marked in parallel by several assessors, statistically significant differences between the scores of the different examiners were shown in 50% of the questions, as well as significant differences in the median scores for the examination as a whole [ 47 ]. Nor were these differences trivial; a substantial difference in outcome in terms of likelihood of failure was demonstrated. This is cause for concern. Schuwirth et al. have stressed the necessity of interpreting reliability in terms of outcome, particularly pass/fail misclassification, and not merely in terms of numeric scores such as Cronbach’s alpha [ 27 ]. In this and other such studies the open-ended questions were of the highest quality practically achievable: typically MEQs carefully prepared by skilled question writers working in teams, reviewed for appropriateness and scored using an analytic scoring scheme designed to minimise inter-rater variability. These conditions do not hold for the standard essay-question or SAQ paper, where reliability will be much lower and the contrast with multiple-choice correspondingly greater [ 47 ]. Open-ended items scored on a continuum, such as 0-100%, have much lower inter-rater reliability than those scored against a rigid marking schedule. The discrepancy in reliability between the "graded essay" marked on a continuum and multiple-choice is therefore much larger than it is for more objectively scored open-ended formats.
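
Since Cronbach's alpha recurs throughout this literature as the standard numeric index of internal-consistency reliability, a minimal sketch of its computation may be useful. The score matrix below is invented purely for illustration; it is not data from any study cited here.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (candidates x items) matrix of item scores."""
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of candidates' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented data: 6 candidates x 4 short-answer items, each marked out of 5.
scores = np.array([
    [3, 4, 2, 5],
    [2, 2, 1, 3],
    [4, 5, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 2, 4],
    [5, 4, 4, 5],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```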

In contrast to the open-ended question format, the multiple-choice is objective and allows multiple sampling of a subject. The result is high reproducibility. Furthermore it substantially reduces the potential for a perception of examiner bias, and thus the opportunity for legal challenge by the unsuccessful candidate [ 48 ]. The multiple-choice format is also efficient. Lukhele et al. studied a number of national university-entrance examinations which included both multiple-choice items and essay questions [ 49 ]. They found that 4-8 multiple-choice items provided the same amount of information as a single essay, and that the essay’s efficiency in providing information about the candidate’s ability per minute of testing was less than 10% of that of an average multiple-choice item. For a middle-level examinee, approximately 20 times more examination time was required for an essay to obtain the same information as could be obtained from a multiple-choice assessment. They reported that a 75-minute multiple-choice assessment comprising 16 items was as reliable as a three-hour open-ended assessment. Though the size of the efficiency gain from using multiple-choice in preference to essay questions varies according to subject, the existence of the gain is an invariable finding [ 49 ].
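
The mechanics behind this efficiency gap can be illustrated with the Spearman-Brown prophecy formula, which predicts how reliability grows as a test is lengthened. The per-unit reliabilities below are invented for illustration and are not taken from Lukhele et al.; the point is only that many brief, independent items accumulate reliability per minute far faster than a few long ones.

```python
# Spearman-Brown prophecy formula: reliability of a test lengthened
# by a factor n, given the reliability r of the original test.
def spearman_brown(r: float, n: float) -> float:
    return n * r / (1 + (n - 1) * r)

# Assumed illustrative figures: one 30-minute essay with reliability 0.35,
# versus one 1-minute MCQ item with reliability 0.05.
essay_r = 0.35
item_r = 0.05

# Reliability of 60 minutes of testing in each format:
print(spearman_brown(essay_r, 2))    # two essays   -> ~0.52
print(spearman_brown(item_r, 60))    # 60 MCQ items -> ~0.76
```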

Though the initial development of a multiple-choice assessment is labour-intensive, this decreases with increasing experience on the part of item-writers, and decreases further once a question bank has been developed from which questions can be drawn for re-use. The lower efficiency of the open-ended question is not restricted to examination time but extends to the requirement for grading by examiners. Typically an open-ended test requires from 4 to 40 times as long to administer as a multiple-choice test of equivalent reliability [ 50 ]. In one study, the cost of marking the open-ended items was 300 times that of the multiple-choice items [ 49 ]; the relative cost of scoring the papers may exceed a factor of 1000 for a large examination [ 50 ].

The multiple-choice format thus has a clear advantage over open-ended formats in terms of reproducibility, efficiency and cost-effectiveness. Why then are open-ended questions still widely used? Principally this is because of a belief that essay-type questions, SAQ and their variants test higher-order cognitive thinking in a manner that MCQ cannot, and consequently have higher validity. It has been repeatedly stated that the MCQ format is limited in its ability to test deep learning, and is suitable for assessing facts only, whereas open-ended questions assess dynamic cognitive processes such as the strength of interconnected rules, the use of mental models, and the mental representations which follow [ 37 ]-[ 39 ]; in short, that open-ended questions permit the assessment of logical and reasoning skills in a manner that multiple-choice does not [ 40 ],[ 41 ]. Is there evidence to support these assertions?

The ability to test higher-order cognitive skills

The revised Bloom's taxonomy of learning [ 9 ]-[ 12 ] is helpful in evaluating the level of cognition drawn upon by an assessment (Figure 1). By convention, assessment questions targeting the first two levels are regarded as low-level questions, the third level as intermediate, and the fourth to sixth levels as high-level.

Those who understand the principles underlying the setting of high-quality multiple-choice items have no difficulty in accepting that multiple-choice is capable of assessing high-order cognition [ 10 ],[ 13 ],[ 14 ]. The shift from true-false questions (which, in order to avoid ambiguity, frequently test factual information only) to the one-best-answer and EMI formats has facilitated this [ 29 ]. Indeed, there exist well-validated instruments specifically designed to assess critical thinking skills and to measure their development with progress through college-level educational programs which are entirely multiple-choice based, such as the California Critical Thinking Skills Test [ 51 ],[ 52 ]. Schuwirth and Van der Vleuten [ 48 ] make a distinction between context-rich and context-free questions. In clinical assessment, a context-rich question is typically presented as a case vignette. Information within the vignette is presented to candidates in its original raw format, and they must then analyse, interpret and evaluate this information in order to provide the answer. The stimulus reflects the question which the candidate must answer and is therefore relevant to the content of the question. A final-year question in Internal Medicine is shown in the following example. Such a question requires analysis (What is the underlying problem?), application (How do I apply what I know to the treatment of this patient?) and evaluation (Which of several possible treatments is the most appropriate?), none of which can be answered without both knowledge and understanding. Thus five of Bloom’s six levels have been tested.

Example of a context-rich multiple-choice item in internal medicine

A 24-year-old woman is admitted to a local hospital with a short history of epistaxis. On examination she is found to have a temperature of 36.9°C. She is wasted, has significant generalised lymphadenopathy and mild oral candidiasis but no dysphagia. A diffuse skin rash is noticed, characterised by numerous small purple punctate lesions. A full blood count shows a haemoglobin value of 110 g/L, a white cell count of 3.8×10⁹ per litre and a platelet count of 8.3×10⁹ per litre. Which therapeutic intervention is most urgently indicated in this patient?

Antiretroviral therapy

Fluconazole

Platelet concentrate infusion

None of the options offered are obviously unreasonable or easily excluded by the candidate who attempts to shortcut the cognitive processes required in answering it by searching for clues in the options themselves. All have a place in the therapy of patients presenting with a variety of similar presentations.

Answering this item requires:

Analysis. In order to answer this item successfully, the candidate will have to recognise (1) that this patient is highly likely to be HIV-positive (given the lymphadenopathy, evidence of oral candidiasis and the high local prevalence of HIV), (2) that the presentation is suggestive of immune thrombocytopenic purpura (given the epistaxis, skin manifestations and very low platelet count), and (3) that other commonly-seen concomitant features such as severe bacterial infection and extensive oesophageal candidiasis are excluded by a number of negative findings.

Evaluation. Further, in order to answer this item successfully, the candidate will have to (1) consider the differential diagnosis for the principal components of the clinical vignette and, by a process of evaluation, decide which are the most likely; (2) decide which of the diagnoses requires treatment most urgently; and (3) decide which form of therapy will be most appropriate.

Knowledge, understanding and application. It is utterly impossible to “recognise” the correct answer to this item without having worked through this process of analysis and evaluation, and the knowledge required to answer it must clearly be informed by deep learning, understanding and application. Hence five of the six levels of Bloom’s taxonomy have been tested. Furthermore it would appear an eminently reasonable proposition that the candidate who correctly answers this question will indeed be able to manage such a patient in practice, hence implying structural validity.

Though guessing has a 20% chance of providing the correct answer, this is eliminated as a factor by assessing performance across multiple such items and by applying negative marking to incorrect answers.
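
The arithmetic of negative marking is easily made explicit. The following is a minimal sketch, assuming a five-option item (as the 20% figure implies) and the common convention of deducting 1/(k-1) marks for a wrong answer:

```python
# Expected mark from blind guessing on a k-option item when a wrong
# answer is penalised by `penalty` marks (standard negative marking).
def expected_guess_score(k: int, penalty: float) -> float:
    p_correct = 1 / k
    return p_correct * 1.0 + (1 - p_correct) * (-penalty)

k = 5  # five options, consistent with the 20% guessing figure above
print(expected_guess_score(k, penalty=1 / (k - 1)))
# 0.0: with this penalty, blind guessing gains the candidate nothing on average
```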

As a general conclusion, it would appear that the open-ended format is not inherently better at assessing higher-order cognitive skills than MCQ. The fundamental determinant is the way in which the question is phrased in order to stimulate higher-order thinking; if phrased inappropriately, the open-ended format will not perform any better than MCQ. A crucial corollary is that in comparing formats, it is essential to ensure that MCQ items crafted to elicit high-order thinking (particularly those which are context-rich) are compared with open-ended questions crafted to the same level; it is inappropriate to compare high-order items in one format with low-order items in the other. Several studies have investigated the effect of the stimulus on thought processes in answering questions and have shown that the stimulus format is more important than the response format. Scores on questions in open-ended format and multiple-choice format correlate highly (approaching 100%) for context-rich questions testing the same material. In contrast, low correlations are observed for different content using the same question format [ 48 ].

In response to the low objectivity and reliability of the classic essay-type questions, modified open-ended formats have evolved which typically combine short answers, carefully crafted questions and rigid marking templates. Yet this increase in reliability appears to come at a significant cost to the presumed advantage of the open-ended format over the multiple-choice format in testing higher orders of cognition. Feletti and Smith have shown that as the number of items in the open-ended examination increases, questions probing high-order cognitive skills tend to be replaced by questions requiring factual recall alone [ 46 ]. Hence as accuracy and reliability increase, any difference between such an assessment and a multiple-choice assessment in terms of other indicators tends to disappear; ultimately they converge on an essentially identical assessment [ 47 ],[ 49 ].

Palmer and Devitt [ 45 ] analysed a large number of multiple-choice and MEQ questions used for summative assessment in a clinical undergraduate exam. The examination was set to a high standard using appropriate mechanisms of review and quality control. Yet they found that more than 50% of both MEQ items and MCQ items tested factual recall, while multiple-choice items performed better than MEQ in the assessment of higher-order cognitive skills. They reported that “the modified essay question failed in its role of consistently assessing higher cognitive skills whereas the multiple-choice frequently tested more than mere recall of knowledge”.

In a subsequent study of a rigorously prepared and controlled set of exit examinations, they reported that the proportion of questions testing higher-level cognitive skills was lower in the MEQ paper than in the MCQ paper. More than 50% of the multiple-choice items assessed higher-level cognition, as opposed to just 25% of the MEQ items. The problem was compounded by a higher frequency of item-writing flaws in the MEQ paper, and flaws were found in the marking scheme in 60% of the MEQs. The authors conclude that “The MEQ paper failed to achieve its primary purpose of assessing higher cognitive skills” [ 47 ].

We therefore appear to be dealing with a general rule: the more highly open-ended questions are structured with the intention of increasing reliability, the more closely they converge on an equivalent multiple-choice question in terms of performance, thus negating any potential advantage of the open-ended format over the closed-ended [ 53 ]; indeed, they appear frequently to underperform MCQ items in the very area in which they are believed to hold the advantage. Thus the shift to these newer forms of assessment may actually have had a perverse effect in diminishing the potential for open-ended assessment to evaluate complex cognitive processes. This does not imply that open-ended items such as SAQ, MEQ and key-feature assessments, particularly those designed to assess clinical reasoning, are inherently inferior to MCQ; rather it is a warning that there is a very real risk in practice of “dumbing down” such questions in an attempt to improve reliability, and empiric observations suggest that this is a consequence frequently encountered even in carefully crafted assessments.

Combining multiple-choice and open-ended tests in the same assessment, in the belief that one is improving the strength of the assessment, leads to an overall less reliable assessment than is constituted by the multiple-choice section on its own [ 49 ], thus causing harm rather than adding benefit [ 50 ].

The second argument, frequently advanced in support of the open-ended format, is that it has greater validity; that spontaneously recalling and reproducing knowledge is a better predictor of the student’s eventual ability to handle complex problems in real life than is the ability to select an answer from a list [ 54 ]. Indeed, this argument is intuitively highly appealing. The case for the retention of open-ended questions in medical undergraduate and postgraduate assessment largely rests on validity, with the assumption that asking the candidate to describe how they would diagnose, investigate and treat a patient predicts future clinical competence more accurately than does the ability to select the right response from a number of options [ 55 ],[ 56 ]. The question of validity is central. If the open-ended format is genuinely of higher validity than the multiple-choice format, then there is a strong case for retaining essay-type questions, SAQ and MEQ in the assessment protocol. If this contention cannot be supported, then the justification for retaining open-ended items in summative assessment may be questioned.

Is the contention true? Essentially, this may be explored at two levels. The first is to correlate outcomes between the two formats. The second is to perform appropriate statistical analysis to determine whether these formats are indeed testing different dimensions or “factors”.

Validity is an indicator of how closely the assessment actually measures the quality it purportedly sets out to test. It is self-evident that proficiency in many domains, including clinical practice, requires not only the ability to recall factual knowledge, but also the ability to generate and test hypotheses, integrate knowledge and apply it appropriately as required.

Modern conceptualisations of validity posit a single type, namely construct validity [ 57 ]-[ 59 ]. This is based on the premise that ultimately all validity rests on the fidelity with which a particular assessment reflects the underlying construct, “intangible collections of abstract concepts and principles which are inferred from behaviour and explained by educational or psychological theory” [ 60 ]. Construct validity is then defined as a process of investigation in which the constructs are carefully delineated, and evidence at multiple levels is sought which supports a valid association between scores on that assessment and the candidate's proficiency in terms of that construct. For example, five types of evidence have been proposed which may provide support for such an association [ 60 ],[ 61 ], namely content, the response process, internal structure, relationship to other variables and consequences. In this discussion I highlight the last two: convergent correlations between the two forms of assessment, and the impact of test scores on later performance, particularly performance requiring problem-solving under conditions encountered in the work situation. This “is particularly important to those employers more interested in hiring competent workers than good test takers” [ 62 ].

Direct comparisons of the open-ended and multiple-choice formats

Correlation.

Numerous studies have assessed the correlation of scores between the two formats. If scores are highly correlated, the two formats are essentially measuring the same thing, in which case, in terms of validity, there is no advantage of one over the other. With few exceptions, studies indicate that scores on the two forms of assessment are highly correlated. Norman et al. compared the two formats prospectively and showed a strong correlation between the two sets of scores [ 63 ]. A similar result was found by Palmer et al., who suggested that the two types of examination were essentially testing similar characteristics [ 47 ]. Similarly, Norcini et al. found that written patient management problems and multiple-choice items appeared to be measuring essentially the same aspects of clinical competence, though the multiple-choice items did so more efficiently and with greater reliability [ 17 ]. Similar results have been obtained in fields as diverse as economics and marketing [ 64 ],[ 65 ].

In general, correlations between the two formats are higher when the questions in each format are specifically designed to be similar (stem-equivalent), and lower where the items in the two formats differ. However, the difference is not great: in a meta-analysis, Rodriguez found a correlation of 0.92 across 21 studies for stem-equivalent items and of 0.85 across 35 studies for non-stem-equivalent items. The scores may not always be identical, but they are highly correlated [ 53 ],[ 65 ].
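
Observed correlations of this kind are themselves attenuated by the unreliability of each measure; the classical correction for attenuation estimates the correlation between the underlying true scores. The sketch below is illustrative only, and the reliabilities assumed in it are invented rather than taken from Rodriguez.

```python
import math

# Classical correction for attenuation: the estimated correlation between
# two true scores, given the observed correlation and each measure's
# reliability.
def disattenuate(r_obs: float, rel_x: float, rel_y: float) -> float:
    return r_obs / math.sqrt(rel_x * rel_y)

# Assumed illustrative values: observed MCQ/open-ended correlation of 0.85,
# with reliabilities of 0.90 (MCQ) and 0.65 (open-ended).
print(round(disattenuate(0.85, 0.90, 0.65), 2))
# ~1.11: estimates above 1.0 arise from sampling error, and imply that the
# underlying true-score correlation is effectively unity.
```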

Factor analysis: do the formats measure more than one construct?

Identification of the actual constructs measured in an assessment has proved challenging, given the lack of congruence between the simple cognitive assumptions on which testing is often based and the very complex cognitive nature of the constructs underlying understanding [ 66 ]. A number of studies have used confirmatory factor analysis and principal component analysis to determine whether the constructs tested by the two formats lie along a single dimension or along two or more divergent dimensions. Bennett et al. compared a one-factor model with a two-factor model to examine the relationship of the open-ended and closed-ended formats and found that in general the single factor provided a better fit, suggesting that the two formats are essentially testing the same thing [ 67 ]. Similarly, Bridgeman and Rock found, using a principal components model, that both formats appeared to load on the same factor, implying that the open-ended format was not providing information on a different dimension [ 68 ]. Thissen and Wainer found that both formats could largely be ascribed to a single shared factor, though they did identify some specific factors to which only the open-ended items contributed [ 69 ]. Though Lissitz et al. [ 70 ] quote a study by JJ Manhart which found a two-factor model generally more appropriate than a one-factor model, this study has not been published and the significance of the divergence cannot be assessed.

In a study of high school assessments using confirmatory factor analysis, Lissitz et al. showed a correlation of 0.94 between the two formats in the domains of algebra and biology; a two-factor model provided a very slight increment over a one-factor model in terms of fit. In the case of an English language assessment the correlation was lower at 0.74, and a two-factor model provided a better fit. In a test of US government, intermediate results were found, with a correlation of 0.83 and a slight superiority of the two-factor model. This suggests that the addition of open-ended items in biology and algebra provided little further information beyond the multiple-choice items, whereas in other domains—English and government—the two formats are to some degree measuring different constructs [ 70 ]. Indeed, the literature in general suggests that differences in format appear to be of little significance in the precise sciences such as biology and mathematics, but may have some relevance in fields such as history and languages, as suggested by Traub and Fisher [ 71 ]. In summary, there is little evidence to support the belief that the open-ended format is testing dimensions which the multiple-choice format cannot [ 53 ],[ 70 ],[ 72 ].
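
The logic of these one-factor versus two-factor comparisons can be illustrated with a toy simulation. This is a sketch of the method under invented data, not a reanalysis of any of the studies cited: if scores in both formats are driven by a single latent proficiency, adding a second factor buys almost nothing in model fit.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 1000 candidates whose scores on 10 "MCQ" and 10 "open-ended"
# items are all driven by a single latent proficiency, plus noise.
n = 1000
proficiency = rng.normal(size=(n, 1))
loadings = rng.uniform(0.5, 0.9, size=(1, 20))
scores = proficiency @ loadings + rng.normal(scale=0.6, size=(n, 20))

for k in (1, 2):
    fa = FactorAnalysis(n_components=k).fit(scores)
    print(f"{k}-factor model: mean log-likelihood = {fa.score(scores):.3f}")
# When one construct underlies both formats, the 2-factor model improves
# on the 1-factor model only marginally, mirroring the findings above.
```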

Construct validity was specifically assessed by Hee-Sun et al. [ 73 ], who attempted to measure the depth of understanding among school-level science students as revealed by multiple-choice items and short written explanatory answers respectively. They reported that students who showed higher degrees of knowledge integration were more likely to score highly on multiple-choice, though the reverse did not hold true. They suggested that the multiple-choice items were less effective in distinguishing adjacent grades of understanding than in distinguishing high performance from low performance, a finding similar to that of Wilson and Wang [ 74 ] and Ercikan et al. [ 75 ]. Unfortunately the generalisability of these results is limited, since the multiple-choice items were poorly standardised, both in format and in difficulty, and the circumstances under which the testing was conducted were essentially uncontrolled.

Lukhele et al. performed a rigorous analysis of high-quality university placement exams taken by thousands of candidates [ 49 ]. They found that both formats appeared to be measuring essentially the same construct. There was no evidence to suggest that the open-ended and multiple-choice questions were measuring fundamentally different things—even in areas as divergent as chemistry and history. Factorial analysis suggested that there were two variant dimensions reflected in the scores of the multiple-choice and open-ended sections, one slightly more related to multiple-choice and the other to the open-ended format. However, these were so highly correlated that whatever factor is specifically measured by the open-ended format, multiple-choice would measure it almost as well. Thus for all practical purposes, in such summative assessments, multiple-choice assessments can satisfactorily replace open-ended assessments.

An important principle is that the variance introduced by measuring “the wrong thing” in the multiple-choice format is small in comparison with the error variance associated with the open-ended format, given the latter's low reliability. This effectively cancels out any slight advantage in validity [ 49 ] (Figure 3). Indeed, Wainer and Thissen state that “measuring something that is not quite right accurately may yield far better measurement than measuring the right thing poorly” [ 50 ].

Figure 3

Stylised depiction of the presumed contrasting ability of the open-ended and multiple-choice formats to assess recognition and recall as opposed to higher forms of cognitive learning. Ideally, multiple-choice and open-ended questions would measure two different abilities (such as recall/recognition versus reasoning/application); this may be shown as two divergent axes (left). The error variance associated with each type of question is indicated by the shaded blocks, and is much greater for the open-ended question, given its inherently lower reliability. In practice, it appears that the two axes are closely aligned, implying that the two types of question are measuring essentially the same thing (right). What little additional information the open-ended question might give (shown by a slight divergence in axis) is offset by its wide error variance, which in effect overlaps the information given by the multiple-choice question, thus significantly reducing the value of any additional information it provides.
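
Wainer and Thissen's aphorism is, in effect, a statement about mean squared error, which decomposes into squared bias plus variance. A small simulation makes the trade-off concrete; the bias and spread figures below are invented for illustration and correspond only loosely to the stylised situation in Figure 3.

```python
import numpy as np

rng = np.random.default_rng(1)
true_ability = 0.0

# "Measuring the right thing poorly": an unbiased but noisy measure
# (the open-ended paper, with its low reliability).
open_ended = rng.normal(loc=true_ability, scale=1.0, size=100_000)

# "Measuring not quite the right thing accurately": a slightly biased
# but precise measure (the multiple-choice paper). Assumed values.
mcq = rng.normal(loc=true_ability + 0.2, scale=0.3, size=100_000)

for name, x in [("open-ended", open_ended), ("mcq", mcq)]:
    mse = np.mean((x - true_ability) ** 2)  # MSE = bias^2 + variance
    print(f"{name}: MSE = {mse:.2f}")
# open-ended: ~1.00; mcq: ~0.13 despite its bias
```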

In summary, where studies have suggested that the open-ended format is measuring something that multiple-choice does not (particularly in older studies), the effect has tended to be minimal, or possibly explicable on methodological grounds, or indefinable in terms of what is actually being measured. In contrast, methodologically sound studies converge on the conclusion that the difference in validity between the two formats is trivial. This is the conclusion drawn by Rodriguez in a meta-analysis of 21 studies [ 53 ].

Demonstrating an essential similarity for the two formats under the conditions of summative assessment does not necessarily mean that they provide identical information. It is possible, and indeed likely, that open-ended questions may make intermediate steps in thinking and understanding visible, thus serving a useful role in diagnostic as opposed to summative assessment [ 73 ],[ 75 ],[ 76 ]. Such considerations are particularly useful in using assessment to guide learning rather than merely as a judgment of competence [ 77 ]. In summative assessment at a stage prior to final exit from a programme, and particularly in formative assessment, the notion of assessment for learning becomes important, and considerations such as the generation effect and the potentiation of memory recall by testing cannot be ignored. Interestingly, a recent publication suggests that multiple-choice format testing is as effective as SAQ-format testing in potentiating memorisation and recall [ 23 ], supporting the contention that well-crafted MCQ and open-ended questions essentially stimulate the same cognitive processes in the learner.

Some authors have raised the concern that students may constitutionally perform differentially on the two forms of assessment, and might be disadvantaged by a multiple-choice assessment should their strengths lie in the open-ended format. Studies in this area have been reassuring. Bridgeman and Morgan found that discrepant results were not predictive of poor academic performance as assessed by other parameters [ 78 ]. Ercikan et al . reported that discrepancies in the outcome between open-ended and multiple-choice tests were largely due to the low reliability of the open-ended component and inappropriate testing strategies [ 75 ]. A study which correlated the two formats with each other and with other measures of student aptitude showed a high degree of correlation and was unable to identify students who clearly had a propensity to perform consistently better on one format than the other [ 79 ]. Thus the belief that some students are constitutionally more suited to open-ended questions than to multiple-choice would appear to be unfounded.

An important question is whether the format of assessment affects the type of learning students use in preparation for it. As early as 1971, Hakstian suggested that anticipation of a specific form of examination did not result in any change in the amount or type of preparation, or any difference in performance in subsequent testing [ 80 ]. He concluded as follows: “The use of various types of tests to foster various kinds of study and learning, although widely advocated, would seem to be a practice based on intuitive appeal, but not convincingly supported by empirical research. In particular, the contention that the superiority of the essay examination is its ability to promote more desirable study methods and higher performance on tasks requiring organisation, and deeper comprehension analysis of information should be re-evaluated in light of the evidence in the present study of no differences between groups in terms of study methods, the essay examination, or items from the higher levels of the cognitive domain”. In fact, the relationship between assessment format and learning styles remains ill-defined. Though some studies have suggested that students tended to make more use of surface learning strategies in preparation for MCQ and deeper learning strategies in preparation for open-ended questions [ 81 ],[ 82 ], other studies have failed to show such an association [ 80 ],[ 83 ]. Some studies have even failed to show that deep learning approaches correlated with better performance in applied MCQs and a written course project, both of which required high-level cognitive performance [ 84 ],[ 85 ], though a significant finding was that a surface learning strategy appeared deleterious for both factual and applied MCQ scores [ 85 ].

Indeed, a review of the literature on learning strategies suggests that the notion that one or other assessment format consistently calls forth a particular learning strategy is simplistic, and much of the evidence for this may have been misinterpreted [ 86 ]. The student’s choice of learning style appears to be dependent on multiple interacting and, to some extent, confounding factors, most importantly the student’s innate learning motivation and preferred learning strategy. This is, however, subject to modification by other factors, particularly the student’s own perception of whether the assessment is directed at assessment of factual knowledge or of understanding, a perception which may frequently not coincide with the intentions of the examiner [ 87 ]. Individual differences in learning strategy probably outweigh any other consideration, including the assessment format, though this is not constant, and students will adapt their preferred learning strategy according to their perception of the requirements of a particular assessment [ 88 ]. A further study has suggested that the approach to learning the student brings into the course is the strongest predictor of the learning style they will employ subsequently and that, irrespective of the instructor’s best efforts, the only factor significantly correlated with a change in learning style is a change in the student’s perception of the cognitive demands of the assessment. Thus students are frequently strategic in their choice of learning strategy, but the strategies may be misplaced [ 87 ]. The student’s academic ability may be relevant; one study has shown that more academically able science students correctly identified the MCQ as requiring deep knowledge and adopted an appropriate learning strategy, whereas less able students viewed the assessment as principally a test of recall and used a counter-productive surface-learning strategy.

Hadwin et al. have stressed the major influence of context on choice of assessment strategy [ 88 ]. There is, for example, evidence that students will modify their strategy according to whether the assessment is perceived as a final examination or as an interim assessment, irrespective of format [ 81 ]. So-called construct-irrelevant factors such as female gender and increasing maturity tend to correlate with selection of a deep learning strategy [ 85 ] independent of assessment format, while the association of anxiety and other emotional factors with a particular assessment will impair performance and thus operate as a confounding factor [ 89 ],[ 90 ]. In discussing their results, Smith and Miller stated that “Neither the hypothesis that multiple-choice examination will promote student use of surface strategy nor the hypothesis that essay examination will promote student use of deep strategy were supported” [ 91 ]. As a general conclusion, it would appear valid to say that current evidence is insufficient to suggest that the open-ended format should be preferred over MCQ, or vice versa, on the grounds that it promotes more effective learning strategies.

It is also important to be aware that open-ended assessments may bring confounding factors into play, for example testing language mastery or skills rather than the intended knowledge domain itself [ 70 ], and hand-written answers also penalise students with poor writing skills, low writing speeds and poor handwriting [ 65 ].

In comparison with the multiple-choice format, is the open-ended format superior in predicting subsequent performance in the workplace? This has been assessed, and the answer, surprisingly, is that it may be less predictive. Rabinowitz and Hojat [ 92 ] correlated a single MEQ assessment and five multiple-choice assessments, written at the conclusion of a series of six clerkships, with performance after graduation. Results in multiple-choice assessment consistently demonstrated the highest correlations with subsequent national examination scores and with objective assessments of performance in the workplace. The MEQ questions showed the lowest correlation. Wilkinson and Frampton directly compared an assessment based on long and short essay-type questions with a subsequent assessment protocol containing short essay questions and two multiple-choice papers [ 56 ], correlating these with performance in the subsequent internship year using robust rating methodologies. They found no significant correlation between the scores of the open-ended question protocol and assessments of performance in the workplace after graduation. In contrast, they found that the combination of the SAQ paper and two multiple-choice papers showed a highly significant correlation with subsequent performance. This study showed that the predominant use of multiple-choice in the assessment resulted in a significant improvement in the structural validity of the assessment in comparison with essay-type questions alone. It was unable to answer the question of whether the open-ended questions are necessary at all, since the multiple-choice component was not compared with the performance rating independently of the essay questions. These authors conclude that the change from the open-ended format to the multiple-choice format increased both validity and reliability.

Recommendations from the literature

Wainer and Thissen stated that: “We have found no evidence of any comparison of the efficacy of the two formats (when a particular trait was specified and skilled item writers then constructed items to measure it) in which the multiple-choice item format was not superior” [ 50 ]. Lukhele et al. concluded: “Thus, while we are sympathetic to… the arguments… regarding the advantages of open-ended format, we have yet to see convincing psychometric evidence supporting them. We are awash in evidence of their drawbacks”, and further, “… We are forced to conclude that open-ended items provide this information in more time at greater cost than the multiple-choice items. This conclusion is surely discouraging to those who feel that open-ended items are more authentic and, hence, in some sense, more useful than multiple-choice items. It should be” [ 49 ].

Palmer et al. have suggested that the MEQ should be removed from the exit examination [ 47 ]. Given that MEQs are difficult to write to a high standard in such a way that they test high-order cognitive skills, and given the time required and the subjectivity in marking, their use does not represent an efficient use of resources. Indeed, they state “… MEQs often do little more than test the candidate's ability to recall a list of facts and frustrate the examiner with a large pile of papers to be hand-marked”. They conclude that there is no good measurement reason for including open-ended items in high-stakes assessment, given that the MEQ performed poorly in terms of testing high-order thinking in comparison with the multiple-choice despite considerable effort to produce quality questions.

Schuwirth and Van der Vleuten, too, have suggested that there is no justification for the use of SAQ in assessment, since the stimulus of most SAQs can equally be applied with multiple-choice. They recommend that SAQ should not be used in any situation except where the spontaneous generation of the answer is absolutely essential. Furthermore, they believe that there is little place for context-free questions in medical assessment, as the context-rich stimulus approximates clinical practice more closely [ 48 ].

Why does the open-ended format persist in medical assessment?

Hence the evidence suggests that in written summative assessment the multiple-choice format is no less able to test high-order thinking than open-ended questions, may have higher validity, and is superior in reliability and cost-effectiveness. Remarkably, this evidence extends as far back as 1926 [ 53 ],[ 93 ], and the reasons underlying the persistence of the open-ended format in assessment are of some interest. I suggest a number of factors. First, studies bear out the common-sense expectation that questions designed to test factual knowledge only—irrespective of whether these are presented in open-ended or multiple-choice format—do not test the same level of reasoning as more complex questions [ 94 ]. Indeed, a recurring finding in the literature is that the so-called deficiencies of the multiple-choice format lie more with the quality of the individual question item (and by inference, with the question-setter) than with the format per se. This leads to a self-fulfilling prophecy: examiners who do not appreciate the versatility of the multiple-choice format set questions which test only low-order thinking and, not surprisingly, achieve results which confirm their bias. Palmer et al. state that criticism of multiple-choice as being incapable of testing high-order thinking is in fact criticism of poorly written questions, and that the same criticism can be directed at open-ended assessments [ 45 ]. There is indeed evidence that stem-equivalent items tend to behave similarly, irrespective of whether the item is phrased as an open-ended question or in MCQ format. It is therefore essential that in making comparisons, the items compared are specifically crafted to assess the same order of cognition. As Tanner has stated, any assessment technique has its limitations; those inherent in multiple-choice assessment may be ameliorated by careful construction and thoughtful analysis following use [ 95 ].

Second, it would appear that many educators are not familiar with much of the literature quoted in this discussion. The most persuasive material is found in the broader educational literature, and though there are brief references in the medical education literature to some of the studies to which I have referred [ 47 ],[ 48 ], as well as a few original studies performed in the medical assessment context [ 17 ],[ 45 ],[ 47 ],[ 63 ], the issue does not appear to have enjoyed prominence in debate and has had limited impact on actual assessment practice. In their consensus statement and recommendations on research and assessment, Schuwirth et al. stress the need for reference beyond the existing medical education literature to relevant scientific disciplines, including cognitive psychology [ 27 ]. In the teaching context, it is remarkable how the proposition that the open-ended format is more appropriate in testing the knowledge and skills ultimately required for the workplace has been repeatedly and uncritically restated in the literature in the absence of compelling evidence to support it.

Third is the counter-intuitiveness of this finding. Indeed, the proposition that the open-ended format is more challenging than MCQ is intuitively appealing. Furthermore, there is the “generation effect”: experimental work has shown that spontaneous generation of information, as opposed to reading, enhances recall [ 18 ],[ 19 ]. Although this applies to learning rather than to assessment, many teachers implicitly attribute a similar but reversed process to the act of recall, believing that spontaneous recall is more valid than cued recall. However, face validity is an unreliable proxy for true validity, and the outcome in practice may contradict what seems intuitively correct [ 48 ]. As the literature on learning increases, it has become apparent that evidence-based practice frequently fails to coincide with the intuitive appeal of a particular learning methodology. Examples include the observations that interleaved practice is more effective than blocked practice and that distributed practice is more effective than massed practice in promoting acquisition of skills and knowledge [ 21 ]. There is a need for assessment to be evidence-based; to an extent, assessment would appear to lag behind learning and teaching methodology in this respect. Rohrer and Pashler have suggested that learning strategies shown to be more effective than their traditional counterparts, such as learning through testing, distributed practice and interleaved practice, remain underutilised because of “the widespread (but erroneous) feeling that these strategies are less effective than their alternatives” [ 21 ].

Fourth, and perhaps most defensible, is the concern that much as yet remains unknown about the nature of assessment, particularly seen from the viewpoint of assessment for learning, and given very interesting new insights into the cognitive basis of memorisation, recall and reasoning, a field which is as yet largely unexplored and may be expected to have a significant impact on the choice of assessment format. For diagnostic purposes the open-ended format may hold value, since it is better able to expose the student's intermediate thinking processes and therefore allow precise identification of learning difficulties [ 72 ]. Newer observations such as the generation effect [ 18 ],[ 19 ], the testing effect [ 20 ],[ 23 ], the pre-assessment effect, where the act of preparation for an assessment is itself a powerful driver of learning [ 96 ], and the post-assessment effect, such as the effect of feedback [ 96 ], are clearly important; were it to be shown that a particular format of assessment, such as the open-ended question, was superior in driving learning, then this would be important information which might well determine the choice of assessment. At this point, however, no such reliable information exists. Preliminary work suggests that MCQ items are as effective as open-ended items in promoting the testing effect [ 23 ]. None of these considerations is as yet sufficiently well supported by experimental evidence to argue definitively for the inclusion of open-ended questions on the basis of their effect on learning, though the possibility clearly remains. Furthermore, this debate has concentrated on high-stakes, summative exit assessments, where the learning effects of assessment are presumably less important than they are at other stages of learning. Certainly, open-ended assessment remains appropriate for those domains not well suited to multiple-choice assessment, such as data gathering, clinical judgement and professional attitudes [ 92 ], and may have value for a particular question which cannot be presented in any other format [ 48 ]. Though the evidence is less compelling, open-ended items may also be superior in distinguishing between the performances of candidates occupying the two extremes of performance [ 75 ].

Cognitive basis for the observation

The need for assessment research to move beyond empiric observations to studies based on a sound theoretical framework has recently been stressed [ 27 ],[ 96 ]. There is as yet little written on the reasons for the counter-intuitive finding that MCQ is as valid as open-ended assessment in predicting clinical performance. I suggest that the observation is highly compatible with cognitive-constructivist and situated learning theory, and in particular the theory of conceptual change [ 97 ]. Fundamental to this theory is the concept of mental models. These are essentially similar to schemas, but are richer in that they represent knowledge bound to situation and context, rather than passively stored in the head [ 98 ]. Mental models may therefore be thought of as cognitive artifacts constructed by an individual based on his or her preconceptions, cognitive skills, linguistic comprehension and perception of the problem, which evolve as they are modified through experience and instruction [ 99 ]. Conceptual change is postulated to represent the mechanism underlying meaningful learning, and is a process of progressively constructing and organising a learner’s personal mental models [ 100 ],[ 101 ]. It is suggested that an effective mental model will integrate six different aspects: knowledge appropriately structured for a particular domain (structural knowledge), pathways for solving problems related to the domain (procedural knowledge), mental images of the system, associations (metaphors), the ability to know when to activate mental models (executive knowledge), and assumptions about the problem (beliefs) [ 102 ]. Increasing proficiency in any domain is therefore associated not just with an enlarging store of knowledge and experience, but also with increasing complexity in the way knowledge is organised, stored and accessed [ 103 ], particularly as complex mental models which may be applied to problem-solving [ 104 ]. A counterpart in the domain of medical expertise is the hierarchy of constructs proposed by Schmidt et al.: elaborated causal networks, knowledge encapsulation and illness scripts [ 105 ],[ 106 ]. Conceptual change theory has a clear relationship to our current understanding of expertise, which is postulated to emerge where knowledge and concepts are linked as mental representations into propositional networks which allow rapid processing of information and the omission of intermediate steps in reasoning [ 107 ],[ 108 ]; typically the expert’s knowledge is grouped into discrete packets or chunks, and manipulation of these equates to the manipulation of a large amount of information simultaneously, without conscious attention to any individual component [ 104 ]. In comparison with non-experts, the representations of experts are richer, more organised and more abstract, and are based on deep knowledge; experts also recognise the conditions under which use of particular knowledge is appropriate [ 109 ]. As Norman has stated, “expert problem-solving in medicine is dependent on (1) prior experiences which can be used in routine solution of problems by pattern recognition processes and (2) elaborated conceptual knowledge applicable to the occasional problematic situation” [ 110 ]. The processes of building expertise and of constructing mental models are essentially parallel [ 99 ].

Therefore any form of assessment intended to measure proficiency must successfully sample the candidate’s organisation of, and access to, knowledge, and not content knowledge alone [ 99 ],[ 111 ]. I have reviewed the empirical evidence that the multiple-choice format is indeed predictive of proficiency, an important indication of its validity. This is explicable in terms of mental models. An alternative view of a mental model is as an internal representation of a system which the learner brings to bear in a problem-solving situation [ 103 ],[ 104 ],[ 112 ]. The context-rich written assessment [ 48 ] is essentially an exercise in complex problem-solving, and fits the definition of problem-solving as “cognitive processing aimed at accomplishing certain goals when the solution is unknown” [ 103 ],[ 113 ].

Zhang has introduced the concept of a “distributed cognitive task”: a task requiring that information distributed across both the internal mind and the external environment be processed [ 114 ]. If we extend Zhang’s concept of external representation to include a hypothetical patient, the subject of the clinical vignette, who represents the class of all such patients, then answering the context-rich multiple-choice item may be seen as a distributed cognitive task. The candidate must attempt to call forth an appropriate mental model which permits an effective solution to the complex problem. In a sequence of events which parallels that described by Zhang, the candidate must internalise the information provided in the vignette and form an accurate internal representation (an equivalent concept is that of the problem space, a mental representation of the problem requiring solution [ 115 ]); this in turn activates and interacts with the relevant mental models and is followed by externalisation: the return of the product of the interaction of internal representation and mental model to the external environment, and the selection of a solution. In effect a relationship has been defined between environmental information, activation of higher-level cognition and externalisation of internal representations [ 114 ].

Assessment items which require complex problem-solving call on mental models appropriate to that particular context, and the item can only be answered confidently and correctly if the mental model is present at the level of proficiency. There is therefore no such thing as the student with generic expertise “in answering multiple-choice questions”, which explains the findings of Hakstian [ 80 ], Bridgeman and Morgan [ 78 ], Ercikan et al. [ 75 ] and Bleske-Rechek et al. [ 79 ], none of whom found convincing evidence for the existence of a class of student with a particular skill in answering multiple-choice questions.

Recent observations that retrieval of knowledge improves retention and may be enhanced in the learning process by frequent testing [ 20 ],[ 21 ], and in particular a recent publication summarising four studies performed in an authentic learning environment which demonstrates that testing using the MCQ format is as effective as SAQ testing [ 23 ], support the hypothesis that the MCQ format engages high-order cognitive processes in both learning and retrieval of memory. This is further supported by the finding that high-level test questions stimulate deeper conceptual learning and better learning retention than do low-level test questions [ 23 ].

In summary, the multiple-choice item is testing the integrity and appropriateness of the candidate’s mental models, and in doing so is in fact assessing proficiency. If the item is designed to test factual recall only, then it will fail for this purpose, since it is the solution of a complex problem which tests the strength of the mental model and the cognitive processes which interact with it. Yet even a low-quality assessment based on factual recollection will correlate significantly with proficiency. Firstly, all mental models are based on a foundation of structural knowledge; the subject with sound mental models must therefore possess a good knowledge base. Secondly, possessing effective and appropriate mental models facilitates the retention and recall of knowledge [ 103 ]. Not surprisingly, therefore, even on a fact-based assessment, good students will correctly recall the information and excel, while students with deficient mental models are less likely to be able to recall the information when needed. This is supported by the work of Jensen et al. [ 116 ], who found that high-order questions stimulated deep conceptual understanding and retention, and correlated with higher performance on both subsequent high-order and low-order assessment items. Indeed, recognition and recall are highly correlated [ 50 ]. There is evidence that the cognitive processes evoked by the multiple-choice format are not influenced by cueing [ 117 ], though the reasons for the frequent observation that MCQ scores are higher than those for equivalent open-ended assessments raise concern that cueing may yet have a role [ 118 ]. However, where the stem and options have been well designed, particularly such that the distractors all appear attractive to the candidate without the requisite knowledge, cueing should not be an issue [ 29 ],[ 48 ], and the common argument that it is easier to recognise an answer than to generate it spontaneously would appear not to hold true.

Problem-solving skills are poorly generalisable [ 41 ]. This is explicable in that mental models are essentially domain-specific, representing a particular set of knowledge and circumstances, but the actual process of developing them is highly dependent on domain-general processes including metacognition, self-regulation and cognitive flexibility [ 99 ].

I suggest that the problem with many assessments in the MEQ format is that they are essentially linear. By requiring the candidate to think one step at a time, the assessment effectively misses the crux of the problem-solving process, which is to look at and respond to a complex problem in its entirety, not stepwise. The context-rich, vignette-based multiple-choice item by contrast presents a complex problem which must be assessed holistically; it thus requires a form of cognitive processing which mirrors that associated with actual proficiency. Hybrid formats such as key-feature assessments in effect also break the clinical reasoning process down into a series of discrete steps; whether this is regarded as a drawback will depend on the relative importance ascribed to decision-making at critical points in the decision tree versus global assessment of a problem viewed holistically. This is a critical area for future research in clinical reasoning.

Educators who mistrust the multiple-choice format have tended to concentrate on the final, and cognitively the least important, step in this whole process: the selection of a particular option as the answer, while ignoring the complex cognitive processes which precede the selection. Indeed, in a good assessment the candidate is not “selecting” an answer at all. They recognise the external representation of a problem, subject the internalised representation to high-level cognitive processing, and then externalise the product as a solution [ 119 ], which (almost incidentally) should coincide with one of the options given.

The multiple-choice format is by no means unlimited in its capacity to test higher-order thinking. The literature on problem-solving stresses the importance of ill-structured complex problems, characterised by unknown elements, no clear path to the solution and indeed a potential for there to be many solutions or even no solution at all [ 99 ]. The standard multiple-choice item by definition can have only one solution; thus, though it may be context-rich, it is limited in its complexity. It is difficult, however, to imagine how a practically achievable open-ended written assessment might perform better. In order to accommodate complexity, the question would essentially have to be unstructured, thereby eliminating all the structured short-answer progeny of the essay format, such as the MEQ. In order to permit candidates to demonstrate freely the application of all their mental resources to a problem more complex than that permitted by a multiple-choice vignette, one would in all probability have to afford them the opportunity to develop an extensive, unstructured and essentially free-ranging essay-length response; marking would be inherently subjective, and we are again faced with the problems of narrow sampling, subjectivity and low reliability.

In effect the choice would then lie between an assessment comprising one or two unstructured essay-length answers with low objectivity and reliability, and a large number of highly reliable multiple-choice items which will effectively test high-order problem-solving but will stop short of a fully complex situation. Perhaps this is a restatement of the assertion that “measuring something that is not quite right accurately may yield far better measurement than measuring the right thing poorly” [ 50 ], the situation depicted in Figure 3.

Another way of understanding the validity of the multiple-choice format is by comparing the responses of candidates at different phases of the learning process with the stages of increasing proficiency posited by Dreyfus et al. [ 16 ] (Table 1). Here the first column comprises the stages of learning; in this context, we shall regard stage of learning as synonymous with level of proficiency or expertise, which is a measure of the effectiveness of problem-solving skill. The second column contains descriptors for each stage, chosen for their relevance to the complex problem-solving posed by a well-constructed, context-rich multiple-choice item. The third column contains a description of the likely performance on that item of a candidate at that stage of proficiency. The relationship between proficiency and performance on a complex multiple-choice item is in fact remarkably direct. The candidate who has reached the stage of proficiency or expertise will be more likely to select the correct response than candidates at a lower level, and the more widely such proficiency is spread across the domain, the higher the aggregate score in the assessment. Though the score for a standard multiple-choice item is binary (all or nothing), the assessment as a whole is not. Whereas candidates in the top categories are likely to arrive at a correct solution most of the time, and students in the lowest category hardly ever, the middle-order candidates with less secure mental models will answer with less confidence but will, in a proportion of items reflecting their proficiency, arrive at the correct solution, their mental models proving sufficiently adequate for the purpose. Over a large number of items, such a multiple-choice assessment will therefore provide a highly accurate indication of the candidate's level of proficiency. To avoid confounding, however, it is absolutely essential that the options are set such that cueing is eliminated.
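
This point about aggregation over binary items can be made concrete with a small simulation. The per-item success probabilities assigned to each stage below are invented for illustration, not drawn from Dreyfus et al. or Table 1; the point is that aggregate scores over many all-or-nothing items separate the proficiency bands cleanly.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed per-item probabilities of a correct response at each stage of
# proficiency (illustrative values only).
stages = {"novice": 0.25, "competent": 0.55, "proficient": 0.75, "expert": 0.90}
n_items = 120  # a broadly sampling MCQ paper

for stage, p in stages.items():
    # Simulate 10,000 candidates at this stage; report their score fraction.
    scores = rng.binomial(n_items, p, size=10_000) / n_items
    print(f"{stage:10s} mean={scores.mean():.2f} sd={scores.std():.3f}")
# Each item is scored all-or-nothing, yet over 120 items the bands barely
# overlap (sd ~ 0.03-0.04 around well-separated means).
```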

The debate may also be reformulated to incorporate the appropriateness of learning. Deep learning is characterised by an understanding of the meaning underlying knowledge, reflection on the interrelationships of items of information, understanding of the application of knowledge to everyday experience, integration of information with prior learning, the ability to differentiate between principle and example, and the organisation of knowledge into a coherent, synthetic structure [99],[100], essentially an alternative formulation of the mental model. One can thus argue that the candidate who possesses deep knowledge has, by the very fact of that possession, demonstrated the sort of comprehensive and intuitive understanding of the subject, in short, the appropriate mental models as described by Jonassen and Strobel [97],[101], that allows the information to be used for problem-solving. Correspondingly, the weak student lacks deep knowledge, and this will be exposed by a well-constructed multiple-choice assessment, provided that the items are written in a manner which explores the higher cognitive levels of learning.

Therefore, if candidates demonstrate evidence of extensive, deeply-learned knowledge and the ability to solve complex problems, be it through the medium of multiple-choice assessment or any other form of assessment, then it is safe to assume that they will be able to apply this knowledge in practice. This accounts for the strong correlation noted between multiple-choice performance, performance in open-ended assessments, and tests of subsequent performance in an authentic environment.

The argument that multiple-choice questions do not test higher-order cognitive skills, and consequently lack validity, is not supported by the evidence. Some studies may have been confounded by the unfair comparison of high-order items in one format with low-order items in another. This cannot be discounted as partly responsible for the discrepancies noted in some of the work I have referenced, such as that of Hee-Sun et al. [73]. Yet where the cognitive order of the items has been carefully matched, a number of careful studies suggest that, particularly in science and medicine, the two modalities assess constructs which, though probably not identical, overlap to the extent that using both forms of assessment is redundant. Given the advantages of the multiple-choice format in reliability, efficiency and cost-effectiveness, the suggestion that open-ended items may be replaced entirely with multiple-choice items in summative assessment is one which deserves careful consideration. This counter-intuitive finding highlights our lack of understanding of the cognitive processes underlying both clinical competence and its assessment, and suggests that much further work remains to be done. Despite the MCQ format's long pedigree, it is clear that we understand little about the cognitive architecture invoked by this form of assessment. The need for a greater role for theoretical models in assessment research has been stressed [27],[96]. As illustrated in this debate, medical teaching and assessment must be based on a solid theoretical framework, underpinned by reliable evidence. Hard evidence combined with a plausible theoretical model, one which attempts to explain the observations on the basis of cognition, will provide the strongest basis for the identification of effective learning and assessment methodologies.

That the multiple-choice format demonstrates high validity is due in part to the observation that well-constructed, context-rich multiple-choice questions are fully capable of assessing higher orders of cognition, and that they call forth cognitive problem-solving processes which exactly mirror those required in practice. On a theoretical basis it is even conceivable that the multiple-choice format will show superior performance in assessing proficiency compared with some versions of the open-ended format; there is indeed empirical evidence to support this in practice [56],[92]. Paradoxically, the open-ended format may demonstrate lower validity than well-written multiple-choice items: attempts to improve reliability and objectivity by writing highly focused questions marked against standardised, prescriptive marking templates frequently “trivialize” the question, yielding some increase in reproducibility at the expense of a significant loss of validity [120]. Indeed, I have argued that, based on an understanding of human cognition and problem-solving proficiency, context-rich multiple-choice assessments may be superior in assessing the very characteristics which the proponents of the open-ended format claim as a strength of that format.

Though current evidence supports the notion that open-ended items may well be redundant in summative assessment, this conclusion should not be uncritically extrapolated to situations where assessment for learning is important, such as formative assessment and summative assessment at the early and intermediate stages of the medical programme, given that conclusive evidence on the learning effects of the two formats is still awaited.

Author’s contribution

The author was solely responsible for reviewing the literature and writing the article.

Author’s information

RJH is currently Dean and Head of the School of Clinical Medicine at the University of KwaZulu-Natal, Durban, South Africa. He studied at the University of Cape Town, specialising in Internal Medicine and subsequently hepatology, before moving to Durban as Professor of Medicine. He has a longstanding interest in medical education, and specifically in the cognitive aspects of clinical reasoning, an area in which he is currently supervising a number of research initiatives.

Abbreviations

MEQ: Modified essay question

MCQ: Multiple-choice question

SAQ: Short answer question

OSCE: Objective structured clinical examination

References

Siemens G: Connectivism: Learning as Network-Creation. [http://www.elearnspace.org/Articles/networks.htm]

Siemens G: Connectivism: A learning theory for the digital age. Int J Instr Technol Distance Learn. 2005, 2: 3-10.

Perkins DN, Salomon G: Learning transfer. International Encyclopaedia of adult education and training. Edited by: Tuijnman AC. 1996, Pergamon Press, Tarrytown, NY, 422-427. 2

Haskell EH: Transfer of learning: Cognition, Instruction, and Reasoning. 2001, Academic Press, New York

Spelke E: Initial Knowledge: Six Suggestions. Cognition on cognition. Edited by: Mehler J, Franck S. 1995, The MIT Press, Cambridge, MA US, 433-447.

Barnett SM, Ceci SJ: When and where do we apply what we learn? A taxonomy for far transfer. Psychol Bull. 2002, 128: 612-637.

Brown AL: Analogical Learning and Transfer: What Develops?. Similarity and Analogical Reasoning. Edited by: Vosniadou S, Ortony A. 1989, Cambridge University Press, New York, 369-412.

Gick ML, Holyoak KJ: Schema Induction and Analogical Transfer. 2004, Psychology Press, New York, NY US

Bloom BS: The Cognitive Domain. Taxonomy of Educational Objectives, Handbook I. 1956, David McKay Co Inc, New York

Anderson LW, Krathwohl DR, Airasian PW, Cruikshank KA, Mayer RE, Pintrich PR, Raths J, Wittrock MC: A Taxonomy for Learning, Teaching, and Assessing: a revision of Bloom's Taxonomy of Educational Objectives. 2001, Longman, New York

Anderson LW, Sosniak LA: Bloom's Taxonomy: A Forty-year Retrospective. Ninety-third yearbook of the National Society for the Study of Education: Part II. Edited by: Anderson LW, Sosniak LA. 1994, University of Chicago Press, Chicago IL

Conklin J: A taxonomy for learning, teaching, and assessing: a revision of Bloom's taxonomy of educational objectives. Educ Horiz. 2005, 83: 154-159.

Haladyna TM, Downing SM: A taxonomy of multiple-choice item-writing rules. Appl Meas Educ. 1989, 2: 37-51.

Haladyna TM: Developing and Validating Multiple-choice Test Items. 1999, L. Erlbaum Associates, Mahwah NJ

Miller GE: The assessment of clinical skills/competence/performance. Acad Med. 1990, 65: S63-S67.

Dreyfus HL, Dreyfus SE, Athanasiou T: Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. 1986, Free Press, New York

Norcini JJ, Swanson DB, Grosso LJ, Webster GD: Reliability, validity and efficiency of multiple choice question and patient management problem item formats in assessment of clinical competence. Med Educ. 1985, 19: 238-247.

Taconnat L, Froger C, Sacher M, Isingrini M: Generation and associative encoding in young and old adults: The effect of the strength of association between cues and targets on a cued recall task. Exp Psychol. 2008, 55: 23-30.

Baddeley AD, Eysenck MW, Anderson M: Memory. 2010, Psychology Press, New York

Karpicke J, Grimaldi P: Retrieval-based learning: a perspective for enhancing meaningful learning. Educ Psychol Rev. 2012, 24: 401-418.

Rohrer D, Pashler H: Recent research on human learning challenges conventional instructional strategies. Educ Res. 2010, 39: 406-412.

Smith MA, Roediger HL, Karpicke JD: Covert retrieval practice benefits retention as much as overt retrieval practice. J Exp Psychol Learn Mem Cogn. 2013, 39: 1712-1725.

McDermott KB, Agarwal PK, D’Antonio L, Roediger HL, McDaniel MA: Both multiple-choice and short-answer quizzes enhance later exam performance in middle and high school classes. J Exp Psychol Appl. 2014, 20: 3-21.

Cutting MF, Saks NS: Twelve tips for utilizing principles of learning to support medical education. Med Teach. 2012, 34: 20-24.

Schuwirth LWT, Van der Vleuten CPM: General overview of the theories used in assessment: AMEE Guide No. 57. Med Teach. 2011, 33: 783-797.

Van der Vleuten CP, Schuwirth LW: Assessing professional competence: from methods to programmes. Med Educ. 2005, 39: 309-317.

Schuwirth L, Colliver J, Gruppen L, Kreiter C, Mennin S, Onishi H, Pangaro L, Ringsted C, Swanson D, Van der Vleuten C, Wagner-Menghin M: Research in assessment: Consensus statement and recommendations from the Ottawa 2010 Conference. Med Teach. 2011, 33: 224-233.

Schuwirth LWT, Van der Vleuten CPM: Programmatic assessment and Kane's validity perspective. Med Educ. 2012, 46: 38-48.

Case SM, Swanson DB: Constructing Written Test Questions for the Basic and Clinical Sciences. 2002, National Board of Medical Examiners, Philadelphia, 3

Norcini J, Anderson B, Bollela V, Burch V, Costa MJ, Duvivier R, Galbraith R, Hays R, Kent A, Perrott V, Roberts T: Criteria for good assessment: Consensus statement and recommendations from the Ottawa 2010 Conference. Med Teach. 2011, 33: 206-214.

Shepard LA: The role of assessment in a learning culture. Educ Res. 2000, 29: 4-14.

Coburn CA, Yerkes RM: A study of the behavior of the crow corvus americanus Aud. By the multiple choice method. J Anim Behav. 1915, 5: 75-114.

Yerkes RM, Coburn CA: A study of the behavior of the pig Sus Scrofa by the multiple choice method. J Anim Behav. 1915, 5: 185-225.

Brown W, Whittell F: Yerkes' multiple choice method with human adults. J Comp Psychol. 1923, 3: 305-318.

Yerkes RM: A New method of studying the ideational behavior of mentally defective and deranged as compared with normal individuals. J Comp Psychol. 1921, 1: 369-394.

Davidson CN: Now You See It: How the Brain Science of Attention Will Transform the Way We Live, Work, and Learn. 2011, Viking Press, New York

Frederiksen JR, Collins A: A Systems Approach to Educational Testing. Technical Report No. 2. 1990, Center for Technology in Education, New York

Guthrie JT: Testing higher level skills. J Read. 1984, 28: 188-190.

Nickerson RS: New directions in educational assessment. Educ Res. 1989, 18: 3-7.

Stratford P, Pierce-Fenn H: Modified essay question. Phys Ther. 1985, 65: 1075-1079.

Wass V, Van der Vleuten C, Shatzer J, Jones R: Assessment of clinical competence. Lancet. 2001, 357: 945.

Rotfield H: Are we teachers or job trainers?. Acad Mark Sci Q. 1998, 2: 2.

Crocker L, Algina J: Introduction to Classical & Modern Test Theory. 1986, Holt, Rinehart and Winston, Inc., Fort Worth, TX

Angoff W: Test reliability and effective test length. Psychometrika. 1953, 18: 1-14.

Palmer EJ, Devitt PG: Assessment of higher order cognitive skills in undergraduate education: modified essay or multiple choice questions? Research paper. BMC Med Educ. 2007, 7: 49-49.

Feletti GI, Smith EK: Modified essay questions: Are they worth the effort?. Med Educ. 1986, 20: 126-132.

Palmer EJ, Duggan P, Devitt PG, Russell R: The modified essay question: its exit from the exit examination?. Med Teach. 2010, 32: e300-e307.

Schuwirth LW, Van der Vleuten CPM: Different written assessment methods: what can be said about their strengths and weaknesses?. Med Educ. 2004, 38: 974-979.

Lukhele R, Thissen D, Wainer H: On the relative value of multiple-choice, constructed response, and examinee-selected items on two achievement tests. J Educ Meas. 1994, 31: 234-250.

Wainer H, Thissen D: Combining multiple-choice and constructed-response test scores: toward a Marxist theory of test construction. Appl Meas Educ. 1993, 6: 103-118.

Facione PA: The California Critical Thinking Skills Test--College Level. Technical Report #1. Experimental Validation and Content Validity. 1990, California Academic Press, Millbrae CA

Facione PA, Facione NC, Blohm SW, Giancarlo CAF: The California Critical Thinking Skills Test [Revised]. 2007, California Academic Press, Millbrae CA

Rodriguez MC: Construct equivalence of multiple-choice and constructed-response items: A random effects synthesis of correlations. J Educ Meas. 2003, 40: 163-184.

Falk B, Ancess J, Darling-Hammond L: Authentic Assessment in Action: Studies of Schools and Students at Work. 1995, Teachers College Press, United States of America

Rethans JJ, Norcini JJ, Baron-Maldonado M, Blackmore D, Jolly BC, LaDuca T, Lew S, Page GG, Southgate LH: The relationship between competence and performance: implications for assessing practice performance. Med Educ. 2002, 36: 901-909.

Wilkinson TJ, Frampton CM: Comprehensive undergraduate medical assessments improve prediction of clinical performance. Med Educ. 2004, 38: 1111-1116.

Baker EL: Standards for Educational and Psychological Testing. 2012, Sage Publications, Inc

Eignor DR: The Standards for Educational and Psychological Testing. APA Handbook of Testing and Assessment in Psychology, Vol 1: Test Theory and Testing and Assessment in Industrial and Organizational Psychology. Edited by: Geisinger KF, Bracken BA, Carlson JF, Hansen J-IC, Kuncel NR, Reise SP, Rodriguez MC. 2013, American Psychological Association, Washington, DC, US, 245-250.

Eignor DR: Standards for the development and use of tests: The Standards for Educational and Psychological Testing. Eur J Psychol Assess. 2001, 17: 157-163.

Downing SM: Validity: on the meaningful interpretation of assessment data. Med Educ. 2003, 37: 830.

Messick S: The interplay of evidence and consequences in the validation of performance assessments. Educ Res. 1994, 23: 13-23.

Kuechler WL, Simkin MG: Why is performance on multiple-choice tests and constructed-response tests Not more closely related? theory and an empirical test. Decis Sci J Innov Educ. 2010, 8: 55-73.

Norman GR, Smith EK, Powles AC, Rooney PJ: Factors underlying performance on written tests of knowledge. Med Educ. 1987, 21: 297-304.

Bacon DR: Assessing learning outcomes: a comparison of multiple-choice and short-answer questions in a marketing context. J Mark Educ. 2003, 25: 31-36.

Kastner M, Stangla B: Multiple choice and constructed response tests: Do test format and scoring matter?. Procedia - Social and Behav Sci. 2011, 12: 263-273.

Nichols P, Sugrue B: The lack of fidelity between cognitively complex constructs and conventional test development practice. Educ Measurement: Issues Pract. 1999, 18: 18-29.

Bennett RE, Rock DA, Wang M: Equivalence of free-response and multiple-choice items. J Educ Meas. 1991, 28: 77-92.

Bridgeman B, Rock DA: Relationships among multiple-choice and open-ended analytical questions. J Educ Meas. 1993, 30: 313-329.

Thissen D, Wainer H: Are tests comprising both multiple-choice and free-response items necessarily less unidimensional?. J Educ Meas. 1994, 31: 113.

Lissitz RW, Xiaodong H, Slater SC: The contribution of constructed response items to large scale assessment: measuring and understanding their impact. J Appl Testing Technol. 2012, 13: 1-52.

Traub RE, Fisher CW: On the equivalence of constructed- response and multiple-choice tests. Appl Psychol Meas. 1977, 1: 355-369.

Martinez ME: Cognition and the question of test item format. Educ Psychol. 1999, 34: 207-218.

Hee-Sun L, Liu OL, Linn MC: Validating measurement of knowledge integration in science using multiple-choice and explanation items. Appl Meas Educ. 2011, 24: 115-136.

Wilson M, Wang W-C: Complex composites: Issues that arise in combining different modes of assessment. Appl Psychol Meas. 1995, 19: 51-71.

Ercikan K, Schwartz RD, Julian MW, Burket GR, Weber MM, Link V: Calibration and scoring of tests with multiple-choice and constructed-response item types. J Educ Meas. 1998, 35: 137-154.

Epstein ML, Lazarus AD, Calvano TB, Matthews KA, Hendel RA, Epstein BB, Brosvic GM: Immediate feedback assessment technique promotes learning and corrects inaccurate first responses. Psychological Record. 2002, 52: 187-201.

Schuwirth LWT, Van der Vleuten CPM: Programmatic assessment: From assessment of learning to assessment for learning. Med Teach. 2011, 33: 478-485.

Bridgeman B, Morgan R: Success in college for students with discrepancies between performance on multiple-choice and essay tests. J Educ Psychol. 1996, 88: 333-340.

Bleske-Rechek A, Zeug N, Webb RM: Discrepant performance on multiple-choice and short answer assessments and the relation of performance to general scholastic aptitude. Assessment Eval Higher Educ. 2007, 32: 89-105.

Hakstian AR: The Effects of Type of Examination Anticipated on Test Preparation and Performance. J Educ Res. 1971, 64: 319.

Scouller K: The influence of assessment method on Students' learning approaches: multiple choice question examination versus assignment essay. High Educ. 1998, 35: 453-472.

Thomas PR, Bain JD: Contextual dependence of learning approaches: The effects of assessments. Human Learning: J Pract Res Appl. 1984, 3: 227-240.

Watkins D: Factors influencing the study methods of Australian tertiary students. High Educ. 1982, 11: 369-380.

Minbashian A, Huon GF, Bird KD: Approaches to studying and academic performance in short-essay exams. High Educ. 2004, 47: 161-176.

Yonker JE: The relationship of deep and surface study approaches on factual and applied test-bank multiple-choice question performance. Assess Eval Higher Educ. 2011, 36: 673-686.

Joughin G: The hidden curriculum revisited: a critical review of research into the influence of summative assessment on learning. Assess Eval Higher Educ. 2010, 35: 335-345.

Scouller KM, Prosser M: Students' experiences in studying for multiple choice question examinations. Stud High Educ. 1994, 19: 267.

Hadwin AF, Winne PH, Stockley DB, Nesbit JC, Woszczyna C: Context moderates students' self-reports about how they study. J Educ Psychol. 2001, 93: 477-487.

Birenbaum M: Assessment and instruction preferences and their relationship with test anxiety and learning strategies. High Educ. 2007, 53: 749-768.

Birenbaum M: Assessment preferences and their relationship to learning strategies and orientations. High Educ. 1997, 33: 71-84.

Smith SN, Miller RJ: Learning approaches: examination type, discipline of study, and gender. Educ Psychol. 2005, 25: 43-53.

Rabinowitz HK, Hojat M: A comparison of the modified essay question and multiple choice question formats: their relationship to clinical performance. Fam Med. 1989, 21: 364-367.

Paterson DG: Do new and old type examinations measure different mental functions?. School Soc. 1926, 24: 246-248.

Schuwirth LW, Verheggen MM, Van der Vleuten CPM, Boshuizen HP, Dinant GJ: Do short cases elicit different thinking processes than factual knowledge questions do?. Med Educ. 2001, 35: 348-356.

Tanner DE: Multiple-choice items: Pariah, panacea or neither of the above?. Am Second Educ. 2003, 31: 27.

Cilliers FJ, Schuwirth LW, van der Vleuten CP: Modelling the pre-assessment learning effects of assessment: evidence in the validity chain. Med Educ. 2012, 46: 1087-1098.

Jonassen DH, Strobel J: Modeling for Meaningful Learning. Engaged Learning with Emerging Technologies. Edited by: Hung D. 2006, Springer, Amsterdam, 1-27.

Derry SJ: Cognitive schema theory in the constructivist debate. Educ Psychol. 1996, 31: 163-174.

Kim MK: Theoretically grounded guidelines for assessing learning progress: cognitive changes in Ill-structured complex problem-solving contexts. Educ Technol Res Dev. 2012, 60: 601-622.

Mayer RE: Models for Understanding. Rev Educ Res. 1989, 59: 43-64.

Jonassen D, Strobel J, Gottdenker J: Model building for conceptual change. Interact Learn Environ. 2005, 13: 15-37.

Jonassen DH: Tools for representing problems and the knowledge required to solve them. Edited by: Tergan S-O, Keller T. 2005, Springer, Berlin, Heidelberg, 82-94

Bogard T, Liu M, Chiang Y-H: Thresholds of knowledge development in complex problem solving: a multiple-case study of advanced Learners' cognitive processes. Educ Technol Res Dev. 2013, 61: 465-503.

Van Gog T, Ericsson KA, Rikers RMJP: Instructional design for advanced learners: establishing connections between the theoretical frameworks of cognitive load and deliberate practice. Educ Technol Res Dev. 2005, 53: 73-81.

Schmidt HG, Norman GR, Boshuizen HP: A cognitive perspective on medical expertise: theory and implication. Acad Med. 1990, 65: 611-621.

Schmidt HG, Rikers RMJP: How expertise develops in medicine: knowledge encapsulation and illness script formation. Med Educ. 2007, 41: 1133-1139.

Norman G, Young M, Brooks L: Non-analytical models of clinical reasoning: the role of experience. Med Educ. 2007, 41: 1140-1145.

Ericsson KA, Prietula MJ, Cokely ET: The Making of an Expert. Harv Bus Rev. 2007, 85: 114-121.

Hoffman RR: How Can Expertise be Defined? Implications of Research From Cognitive Psychology. Exploring Expertise. Edited by: Williams R, Faulkner W, Fleck J. 1996, University of Edinburgh Press, Edinburgh, 81-100.

Norman GR: Problem-solving skills, solving problems and problem-based learning. Med Educ. 1988, 22: 279-286.

Ifenthaler D, Seel NM: Model-based reasoning. Comput Educ. 2013, 64: 131-142.

Jonassen D: Using cognitive tools to represent problems. J Res Technol Educ. 2003, 35: 362-381.

Mayer RE, Wittrock MC: Problem-Solving Transfer. Handbook of Educational Psychology. Edited by: Berliner DC, Calfee RC. 1996, Macmillan Library Reference USA, New York, NY, 47-62.

Zhang J, Norman DA: Representations in distributed cognitive tasks. Cogn Sci. 1994, 18: 87-122.

Simon HA: Information-Processing Theory of Human Problem Solving. Handbook of Learning & Cognitive Processes: V Human Information. Edited by: Estes WK. 1978, Lawrence Erlbaum, Oxford England, 271-295.

Jensen JL, Woodard SM, Kummer TA, McDaniel MA: Teaching to the test…or testing to teach: exams requiring higher order thinking skills encourage greater conceptual understanding. Educ Psychol Rev. 2014, 26: 307-329.

Cohen-Schotanus J, Van der Vleuten CPM: A standard setting method with the best performing students as point of reference: practical and affordable. Med Teach. 2010, 32: 154-160.

Desjardins I, Touchie C, Pugh D, Wood TJ, Humphrey-Murto S: The impact of cueing on written examinations of clinical decision making: a case study. Med Educ. 2014, 48: 255-261.

Pretz JE, Naples AJ, Sternberg RJ: Recognizing, Defining, and Representing Problems. The Psychology of Problem Solving. Edited by: Davidson JE, Sternberg RJ. 2003, Cambridge University Press, New York, NY US, 3-30.

Schuwirth LWT, Van der Vleuten CPM: ABC of learning and teaching in medicine: written assessment. BMJ. 2003, 326: 643-645.

Acknowledgements

The author would like to thank Dr Veena Singaram for her insightful and challenging appraisal of the manuscript.

Author information

Authors and affiliations

Clinical and Professional Practice Research Group, School of Clinical Medicine, University of KwaZulu-Natal, Durban, 4013, South Africa

Richard J Hift

Corresponding author

Correspondence to Richard J Hift .

Additional information

Competing interests.

The author declares that he has no competing interests.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/4.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article.

Hift, R.J. Should essays and other “open-ended”-type questions retain a place in written summative assessment in clinical medicine?. BMC Med Educ 14, 249 (2014). https://doi.org/10.1186/s12909-014-0249-2

Received : 08 May 2014

Accepted : 07 November 2014

Published : 28 November 2014

DOI : https://doi.org/10.1186/s12909-014-0249-2

Keywords

  • Conceptual change
  • Mental models
  • Multiple choice

  • Open Ended Questions: Definition + [30 Questionnaire Examples]

busayo.longe

Open-ended questions are the questions asked that do not give the option of a yes/no answer, instead, they require full sentences. They usually signify the beginning of a dialogue.

If you sincerely want to connect on deeper levels and encourage other people to talk about themselves, you should ideally use open-ended questions to stimulate your conversation and get the ball rolling.

What is an Open-Ended Question?

Open-ended questions are those which require more thought and more than a simple one-word answer. An open-ended question is designed to encourage a full, meaningful, and deliberate answer using the subject’s own knowledge and/or feelings.

It is the opposite of a closed-ended question, which encourages a short or single-word answer.

Read More: Close Ended Questions: Definition + [Questionnaire Examples]

Uses of Open-Ended Questions

Open-ended questions are used in interviews, with the caveat that they have no right or wrong answers. An interview question, for example, might ask an interviewee about their past work experience. Open-ended questions demand that the applicant offer more detail and demonstrate their ability to communicate effectively.

Open-ended questions can be used in examinations that require the student to construct a response. Unlike multiple-choice (close-ended) tests, which allow little if any room for error, open-ended questions give students room to convince the examiner. A test usually contains only a few open-ended questions, compared with the 50 to 100 items of a multiple-choice (close-ended) assessment.

For a survey, open-ended questions are ideal for a number of reasons. First, they allow an unlimited range of possible answers. They also collect more detail, and the person administering the questions might even learn something they didn't expect. For complex issues, open-ended questions ensure you get adequate answers. Lastly, open-ended survey questions encourage creative answers and self-expression and help you understand how your respondents think.

In business, open-ended questions are essential for sales success. They allow reps to get inside the heads of prospects and better understand their pain points. The right open-ended questions help ensure that reps are building rapport, uncovering pain points, establishing needs, and clearly articulating the value of their offering.

How to Ask or Craft Open-Ended Questions

Since open-ended questions are designed to prompt long, detailed answers, here are a few tips that can help you ask open-ended questions better.

  • Start your questions with How, Why, What, etc.
  • Ask questions that probe for the reasons behind events, and try to clarify or investigate issues.
  • Avoid questions that can be answered with a yes/no. For example, ‘Do you think you can lift 50kg in one go?’ will not prompt the respondent to offer more information. Instead, ask ‘Why do you think you can (or cannot) lift 50kg in one go?’
  • Follow up close-ended questions with open-ended ones.

Examples of Open-Ended Questions

  • When do you need to get this issue resolved?
  • What do you see as the next action steps for the firm?
  • What is your timeline for purchasing this product?
  • What other data points should we know before moving forward?
  • How did you get involved in the business?
  • What kind of challenges are you facing?
  • What’s the most important priority to you with this? Why?
  • What other issues are important to you?
  • What would you like to see improved?
  • How do you measure their weight?
  • What budget has been established for this product?
  • What are your thoughts?
  • Who else is involved in this decision?
  • What could make this no longer a priority?
  • What’s changed since we last talked?
  • What concerns do you have?
  • What time did that happen?
  • When do these issues arise?
  • What is this problem costing you?
  • What would you change about your current solution?
  • Have you given up trying to fix the problem?
  • Who else needs to be involved in this purchase decision?
  • What’s your budget?
  • How do you think changing this area could improve your day-to-day work?
  • What would you want to achieve in the next year by making this change?
  • If time and money were no object and you had full authority to do whatever you want, what would you change about your current system?
  • What has your past purchase experience been with Apple watches?
  • When was the last time you evaluated something like this?
  • Why or why not would you say you were satisfied with your past experiences with this vendor?
  • How would you describe the level of service with your current provider?

Advantages of Open-Ended Questions over Close-Ended Questions  

  • Freedom of expression

Open-ended questions allow you to better understand the respondent’s true feelings and attitudes about the survey subject. Close-ended questions, due to their limitations, do not give respondents the choice to truly give their opinions.

  • Qualitative information

Open-ended questions allow respondents taking your survey to include more information, giving you, the researcher, more useful, contextual feedback. Close-ended questions provide little of this; the answers are short, concise, and very direct.

  • Additional Information

Open-ended questions in surveys solicit additional information to be contributed by respondents. They are sometimes also called infinite-response questions or unsaturated-type questions. Generally, close-ended questions require respondents to answer in just one or two words.

  • Reduce Errors

Open-ended questions cut down on two types of response error: respondents cannot misread or misremember a list of answer options, because they respond freely in their own words, and they cannot skip reading the questions and simply “fill in” the survey with the same answer throughout.

  • Demographic Information

Since open-ended questions allow respondents to volunteer extra information, such as demographic detail, surveys that use them lend themselves more readily to secondary analysis by other researchers than surveys that provide no contextual information about the survey population.

When to Choose Open-Ended Questions Over Close-Ended Questions

If you want questions that allow someone to give a free-form answer, open-ended questions are the choice. Close-ended questions are often good for surveys, because response rates are higher when users don't have to type so much, but they don't accomplish this.

A key benefit of open-ended questions is that they allow you to find more than you anticipate. People are more likely to share motivations that you didn’t expect and mention behaviors and concerns that you knew nothing about. When you ask people to explain things to you, they often reveal surprising mental models, problem-solving strategies, hopes, fears, and much more. Closed-ended questions stop the conversation and eliminate any surprises.

Guide To Interpreting Open-Ended Questionnaire Data

Although open-ended questions provide the richest feedback, they are much harder to analyze. This is because, unlike close-ended questions, which yield quantitative data, open-ended questions yield qualitative data.

There are a number of things to note when interpreting open-ended questionnaire data; here are a few guidelines to help you on your way, followed by a small coding sketch after the list.

  • Spend time perusing your responses – As you get to understand your data, highlight all the interesting answers you think will be relevant.
  • Categorize your answers – Ensure at least one category is assigned to each answer. It is possible for one answer to fit more than one category.
  • Sort the categories – Break existing categories down into sub-categories so that you maintain a fine-grained understanding of the answers. For example, a category like Service can be subdivided into customer satisfaction and referrals.
  • Review your responses – Within the categories and sub-categories, review those with the most responses and identify the recurring themes.
  • Prepare your conclusions – At this point, you're beginning to see a pattern. You can now make comparisons between those open-ended questions that have similarities with the quantitative answers from closed-ended questions.
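
As a concrete illustration of the categorise-and-count steps above, here is a minimal Python sketch. The keyword-to-category codebook is an invented example; in practice the codebook is built by a human reader working through the responses first.

```python
from collections import Counter

# Invented codebook for illustration; a real one is derived from reading
# the responses first, as described in the steps above.
CODEBOOK = {
    "service": ["support", "helpful", "rude", "staff"],
    "referrals": ["recommend", "friend", "told"],
    "pricing": ["price", "expensive", "cheap"],
}

def categorize(answer: str) -> list[str]:
    """Assign every matching category; one answer may fit several."""
    text = answer.lower()
    matches = [cat for cat, words in CODEBOOK.items()
               if any(word in text for word in words)]
    return matches or ["uncategorized"]

responses = [
    "The staff were really helpful",
    "Too expensive, but I'd still recommend it to a friend",
    "Delivery was slow",
]

theme_counts = Counter(cat for r in responses for cat in categorize(r))
print(theme_counts.most_common())
# [('service', 1), ('referrals', 1), ('pricing', 1), ('uncategorized', 1)]
```

Recurring themes, meaning the categories with the highest counts across many responses, are the ones worth drawing conclusions from.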

Why Formplus is the Best Data Collection Tool for Open-Ended Surveys

  • Short & Long Text Field

With the Short text field, best used for receiving short/single-line text-based answers such as names, location, and statements, you can set a minimum and maximum length of characters your users can input. 

The Long text field, as provided by Formplus, is ideal for long answers such as addresses, comments, additional ideas, suggestions, messages, and short essay answers. 

  • File Management

You can use Formplus to send form responses to Google Sheets instantly. The Google Sheets integration makes it easy to collaborate on documents and keep your team members up-to-date.

  • Export Data in PDF/CSV

Formplus lets you store tabular data, such as a spreadsheet or database. Also, in the Formplus Responses settings, you can customize the email notification message, including the user's response in the notification email sent. You can also receive submissions as a PDF/Doc attachment in your emails, as well as enable the option to display images in your attachments.

  • Data Storage

Formplus has unlimited file-upload storage: you can submit files, photos, or videos via your online forms without any restriction on the size or number of files that can be uploaded. You can choose to store your received data in your cloud storage of choice. There is also a native Google Sheets integration, which updates survey responses into spreadsheets automatically created for each form.

  • Customization

Formplus’ easy-to-use form builder allows you to create powerful forms within minutes. Simply click or drag and drop your desired form fields into the builder. You can build any type of online form ranging from Contact Forms to Inventory Forms, Formplus has the tools to help you collect data seamlessly.

  • Visual Analytics

You can also monitor your survey performance and identify your traffic source with Formplus Analytics. With online form builder analytics, you can determine the number of times the open-ended survey was filled and the rate at which respondents abandoned it. You can also find out the location of respondents, as well as the type of device each respondent used to complete the survey.

How to Conduct an Open-Ended Survey with Formplus Data Collection Tool 

  • Register or sign up on the Formplus builder

Start creating your Open-Ended Survey by signing up with either your Google, Facebook, or Email account. There’s a free plan with limited features you can use to get started.

Sign up to design your Open-Ended Survey with Formplus.

Input your Open-Ended Survey title and use the form builder choice options to start creating your Survey.

Beautify your Open-Ended Survey with Formplus Customisation features.

  • Set the form width and layout
  • Change the form background type and color to suit your brand
  • Add your brand’s logo and image to the forms
  • Change font color and sizes
  • Edit submission button to match form color
  • If you already have custom CSS to beautify your open-ended survey, just copy and paste it into the CSS option.

Edit your Open-ended survey settings for your specific needs

  • Formplus builder gives you the liberty to choose your storage options (Formplus Storage, Dropbox, OneDrive, and Google Drive).
  • You can also limit the number of responses, enable Captcha to prevent spamming, and collect information about your respondent’s location.

Set an introductory message to respondents before they begin the survey

  • Toggle the “start button” to post the final submission message or redirect respondents to another page when they submit their survey responses. 
  •  Initiate an autoresponder message for all your survey respondents. 

Share links to your Open-ended survey page with respondents

View Responses to the Open-ended Survey

Toggle the presentation of your response summary between the available options: single entries, a table, or cards. In addition, you can make graphs from received responses and translate these into charts and key metrics.

Let Formplus Analytics interpret your data from your Open-ended survey

You can also monitor your form performance and identify your traffic source and location with Formplus Analytics.

With online form builder analytics, you can determine the following (a short sketch of how such metrics might be computed appears after this list):

  • The number of times the open-ended survey was filled
  • The number of respondents reached
  • Abandonment Rate: The rate at which respondents exit the open-ended survey without submitting it.
  • The percentage of respondents who completed the online form
  • Average time spent per visit
  • Location of respondents.
  • The type of device used by the respondents to complete the open-ended survey
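
To make these metrics concrete, here is a minimal sketch of how they might be computed from raw visit records. The record format and field names are invented for illustration; Formplus reports these figures for you.

```python
# Invented visit records for illustration: one per visit to the survey page;
# "submitted" marks whether the respondent finished and submitted the form.
visits = [
    {"device": "mobile", "seconds": 95, "submitted": True},
    {"device": "desktop", "seconds": 40, "submitted": False},
    {"device": "mobile", "seconds": 210, "submitted": True},
    {"device": "desktop", "seconds": 12, "submitted": False},
]

total = len(visits)
completed = sum(v["submitted"] for v in visits)

completion_rate = completed / total                   # share who submitted
abandonment_rate = 1 - completion_rate                # exited without submitting
avg_time = sum(v["seconds"] for v in visits) / total  # average time per visit

print(f"completion {completion_rate:.0%}, abandonment {abandonment_rate:.0%}, "
      f"avg time {avg_time:.0f}s")
# completion 50%, abandonment 50%, avg time 89s
```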

In many circumstances, open-ended questions produce a much more effective result in communication. When you are after a complete and meaningful answer, you need to employ open-ended questions. The best part of open-ended questions is that they prompt respondents to provide answers in their own words.

For a survey, open-ended questions provide a researcher with qualitative data from which to draw inferences. On the whole, open-ended questions lead respondents to include more information, including their feelings, attitudes, and understanding of the subject matter.

How to Answer Open-Ended Essay Questions

Editorial Team

It's test time, and this one isn't multiple choice. Your teacher gives you a sheet of paper with a question on it. The only problem is, you can't immediately see a definite answer. It's time to pull it together and, at the very least, sound like you know what you're talking about.

Explore this article

  • Read the question carefully
  • Mull the question over before attempting to answer
  • Develop an opinion
  • Jot down an outline
  • Write your essay
  • Read your essay over

1 Read the question carefully

Read the question carefully. Make sure you understand what is being asked of you. Think of the different meanings of the specific words within the question.

2 Mull the question over before attempting to answer

Mull the question over before attempting to answer. Absorb it. Think about what you know or have learned about the topic. Taking this time can have a calming effect, which will help you to write a more cogent response.

3 Develop an opinion

Develop an opinion, if you haven't already. Your argument will be more convincing if you believe what you are writing. This decision will be your thesis. You don't have to take an extreme stance. If you are ambivalent about the topic, be prepared to address this in your essay. Being able to cite arguments for and against either side will make you appear to have a better understanding of the material.

4 Jot down an outline

Jot down an outline. A disorganized essay, despite its content, will not get your point across. Make sure to address any possible objections to your thesis early in the essay, and save your strongest arguments for the end.

5 Write your essay

Write your essay, referring to your outline. Hopefully, after having taken time to develop a thesis, mull over the topic, and sketch an outline, your thoughts will flow from you into clear writing.

6 Read your essay over

Read your essay over, if you have time. Focus on high-order content, such as ideas and themes. Make sure you have thoroughly answered the question asked of you. If you are not under strict time constraints, take the time to check for proper grammar and spelling.

  • Do not plagiarize. Your teacher will likely know that you aren't using your own words. There are severe consequences for academic dishonesty.

About the Author

This article was written by the CareerTrend team, copy edited and fact checked through a multi-point auditing system, in efforts to ensure our readers only receive the best information. To submit your questions or ideas, or to simply learn more about CareerTrend, contact us [here](http://careertrend.com/about-us).

Nielsen Norman Group

Open-Ended vs. Closed Questions in User Research

January 26, 2024

When conducting user research, asking questions helps you uncover insights. However, how you ask questions impacts what and how much you can discover .

In This Article:

  • Open-Ended vs. Closed Questions
  • Why Asking Open-Ended Questions Is Important
  • How to Ask Open-Ended Questions

There are two types of questions we can use in research studies: open-ended and closed.

  Open-ended questions allow participants to give a free-form text answer. Closed questions (or closed-ended questions) restrict participants to one of a limited set of possible answers.

Open-ended questions encourage exploration of a topic; a participant can choose what to share and in how much detail. Participants are encouraged to give a reasoned response rather than a one-word answer or a short phrase.

Examples of open-ended questions include:

  • Walk me through a typical day.
  • Tell me about the last time you used the website.
  • What are you thinking?
  • How did you feel about using the website to do this task?

Note that the first two open-ended questions are commands but act as questions. These are common questions asked in user interviews to get participants to share stories. Questions 3 and 4 are common questions that a usability-test facilitator may ask during and after a user attempts a task, respectively.

Closed questions have a short and limited response. Examples of closed questions include:

  • What’s your job title?
  • Have you used the website before?
  • Approximately, how many times have you used the website?
  • When was the last time you used the website?

Strictly speaking, questions 3 and 4 would only be considered “closed” if they were accompanied by answer options, such as (a) never, (b) once, (c) two times or more. This is because the number of times and days could be infinite. That being said, in UX, we treat questions like these as closed questions.

In a dialog between a facilitator and a user, closed questions provide a short, clarifying response, while open-ended questions result in the user describing an experience.

Using Closed Questions in Surveys

Closed questions are heavily utilized in surveys because the responses can be analyzed statistically (and surveys are usually a quantitative exercise). When used in surveys, they often take the form of multiple-choice questions or rating-scale items , rather than open-text questions. This way, the respondent has the answer options provided, and researchers can easily quantify how popular certain responses are. That being said, some closed questions could be answered through an open-text field to provide a better experience for the respondent. Consider the following closed questions:

  • In which industry do you work?
  • What is your gender?

Both questions could be presented as multiple-choice questions in a survey. However, the respondent might find it more comfortable to share their industry and gender in a free-text field if they feel the survey does not provide an option that directly aligns with their situation or if there are too many options to review.

Another reason closed questions are used in surveys is that they are much easier to answer than open-ended ones. A survey with many open-ended questions will usually have a lower completion rate than one with more closed questions.

Using Closed Questions in Interviews and Usability Tests

Closed questions are used occasionally in interviews and usability tests to get clarification and extra details. They are often used when asking followup questions. For example, a facilitator might ask:

  • Has this happened to you before?
  • When was the last time this happened?
  • Was this a different time than the time you mentioned previously?

Closed questions help facilitators gather important details. However, they should be used sparingly in qualitative research as they can limit what you can learn.

The greatest benefit of open-ended questions is that they allow you to find more than you anticipate. You don’t know what you don’t know. People may share motivations you didn’t expect and mention behaviors and concerns you knew nothing about. When you ask people to explain things, they often reveal surprising mental models, problem-solving strategies, hopes, and fears.

On the other hand, closed questions stop the conversation. If an interviewer or usability-test facilitator were to ask only closed questions, the conversation would be stilted and surface-level. The facilitator might not learn important things they didn’t think to ask because closed questions eliminate surprises: what you expect is what you get.

Closed Questions Can Sometimes Be Leading

When you ask closed questions, you may accidentally reveal what you’re interested in and prime participants to volunteer only specific information. This is why researchers use the funnel technique , where the session or followup questions begin with broad, open-ended questions before introducing specific, closed questions.

Not all closed questions are leading. That being said, it’s easy for a closed question to become leading if it suggests an answer.

Reworking a question so that it's not leading often involves making it open-ended.

One way to spot a leading, closed question is to look at how the question begins. Leading closed questions often start with the words “did,” “was,” or “is.” Open-ended questions often begin with “how” or “what.”
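
That opening-word heuristic is simple enough to run as a first-pass check over an interview guide. Here is a minimal sketch; the word lists simply mirror the heuristic above and are not exhaustive.

```python
# First-pass check based on the opening-word heuristic described above.
CLOSED_OPENERS = {"did", "was", "is", "do", "are", "have", "were"}
OPEN_OPENERS = {"how", "what", "why", "tell", "walk", "describe"}

def classify(question: str) -> str:
    first_word = question.strip().lower().split()[0].rstrip(",")
    if first_word in OPEN_OPENERS:
        return "open-ended"
    if first_word in CLOSED_OPENERS:
        return "closed (check that it isn't leading)"
    return "review manually"

interview_guide = [
    "Did you find the checkout easy?",
    "How did you feel about the checkout?",
    "Walk me through your last purchase.",
]
for q in interview_guide:
    print(f"{classify(q):38} {q}")
```

A flagged question isn't necessarily wrong; the check just tells you which items deserve a second look before the session.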

New interviewers and usability-test facilitators often struggle to ask enough open-ended questions. A new interviewer might be tempted to ask many factual, closed questions in quick succession, such as the following:

  • Do you have children?
  • Do you work?
  • How old are you?
  • Do you ever [insert behavior]?

However, these questions could be answered in response to a broad, open-ended question like Tell me a bit about yourself .

When constructing an interview guide for a user interview, try to think of a broad, open-ended version of a closed question that might get the participant talking about the question you want answered, like in the example above.

When asking questions in a usability test, try to favor questions that begin with “how” or “what” over “do” or “did.”

Another tip to help you ask open-ended questions is to use one of the following question stems:

  • Walk me through [how/what]...
  • Tell me a bit about…
  • Tell me about a time where…

Finally, you can ask open-ended questions when probing. Probing questions are open-ended and are used in response to what a participant shares. They are designed to solicit more information. You can use the following probing questions in interviews and usability tests.

  • Tell me more about that.
  • What do you mean by that?
  • Can you expand on that?
  • What do you think about that?
  • Why do you think that?

Ask open-ended questions in conversations with users to discover unanticipated answers and important insights. Use closed questions to gather additional small details, gain clarification, or when you want to analyze responses quantitatively.

December 18, 2019

Open-Ended Question: What it is, How to Use it (+Examples)

Daniel Ndukwu

Customer research is a large discipline with multiple methods to get the right information from your audience or customer base. 

Surveys are among the most effective ways to get deep insights from your most engaged users. They help you understand how users feel about specific topics and give you perspectives, through open-ended questions, that you might otherwise have missed.

These insights, also known as the voice of the customer, can expand your marketing, improve your products, and counter objections. In a world of choice, this is becoming ever more important.

In this article, you’ll learn what open-ended questions are, their advantages, how to use them, and solid examples to make them easier to implement.

Table of Contents

What is an Open-Ended Question?

Open-ended questions are a type of unstructured survey question that gives the respondent room to reply in an open text format, providing the opportunity for more detailed answers. The only limitation usually imposed is a character limit, so open-ended questions can be divided into long-answer and short-answer questions.

Put another way, a respondent can draw on their knowledge, feelings, and understanding of the question and topic to give more insightful answers. They’re not limited by preset question options. 

An example of an open-ended question could be “How do you feel about your new job?”

Research from Vision Critical found that 87% of consumers want to have a say in a company’s products and services. Open-ended questions give them the opportunity to share information in a way that close ended questions don’t. 

Open-Ended Vs Close Ended Questions

As shared in the last section, open-ended questions are free-form and allow respondents to use an open text format to give replies. They’re able to say whatever they want in response to your questions. 

Close ended questions, on the other hand, are structured and offer a preset group of answers a respondent can choose from. Though they can still help you, you're not able to use the voice of the customer to inform your decisions.

Each one has merits and demerits. For example, an open-ended question allows you to probe much deeper, while a closed-ended question gives you concise information that can be quantified. It’s much easier to quantify yes-or-no answers than a paragraph of text.

A relatable example comes from the standardized tests most of us took in school. They usually had two parts – the multiple-choice questions, which are equivalent to closed-ended questions, and the essay questions, which are equivalent to open-ended questions.

A closed-ended question: Yes or no – was George Washington the first president of the United States?

An open-ended question: From the perspective of the British, what was the cause of the Revolutionary War?

As you can see from the examples, the open-ended survey questions will give you a look into the thought process of your customers. 

Advantages of open-ended questions

Some advantages are obvious while others aren’t, but they tend to be more important than the disadvantages because the responses you get have so many direct, tangible uses for your business. A few of the advantages include:

Thoughtful responses

With a series of multiple-choice questions, respondents can quickly scan and choose an answer. That answer may or may not be indicative of how they feel. Open-ended questions force your respondents to slow down long enough to consider the question and give a thoughtful answer. 

Even if they give a short answer that doesn’t shed much light on the situation, it’s still helpful: it tells you that this respondent’s answers should carry less weight.


Customers can share their feelings 

The internet is crowded. The lifespan of a Tweet is about 30 minutes and that of a Facebook post is roughly 1 hour. Anything you say is lost in a short amount of time. 

With surveys and open-ended questions, you give your customers an opportunity to voice their opinions in a way that can create change in your organization. If they like what you’re doing and care about your products, then they’ll take the time to give you useful feedback.

Identifying weak spots 

Closed-ended questions are notorious for only giving you half the answer you need. For example, if you ask a customer “How was your experience with us today?” and they answer “disappointing,” there’s no room to ask them why. This can leave you wondering whether you have a real problem.

An open-ended question gives them the opportunity to tell you it was disappointing and lay out the reasons why. With that information, you can determine if it was an isolated incident or something that demands immediate attention. 

More Detailed Information

Open-ended questions were built to deliver qualitative information and, as we discuss in our free course, the more detailed the information you get from a respondent, the more engaged they are.

That information is indicative of your hottest buyer segment, and the details respondents reveal will help you create better messages, identify your ideal target market, and otherwise make the right decisions in your business.

The key to using the extra information these types of survey questions can give you is to look for patterns in the data. If one person says something, you may or may not be on the verge of a breakthrough. If five or ten people say something similar, then there’s promise.
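To make that concrete, here is a minimal Python sketch (not from the original article) of one way to surface recurring patterns: count how often words recur across responses. The sample responses and stopword list are hypothetical.

```python
from collections import Counter
import re

# Hypothetical open-ended survey responses.
responses = [
    "Shipping was slow, but support was great",
    "Great support team, slow delivery though",
    "Checkout kept failing on mobile",
    "Slow shipping again; otherwise fine",
]

# A tiny stopword list; a real analysis would use a fuller one.
STOPWORDS = {"was", "but", "the", "on", "though", "kept", "again", "otherwise", "and", "a"}

def keyword_counts(texts):
    """Count how often each non-stopword appears across all responses."""
    words = []
    for text in texts:
        words += [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    return Counter(words)

# Words mentioned by several respondents hint at a pattern worth investigating.
for word, count in keyword_counts(responses).most_common(5):
    print(f"{word}: {count}")  # e.g. slow: 3, shipping: 2, support: 2, ...
```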

When you should use open-ended questions

These questions lend themselves well to qualitative research. That means they should be used when quality is more important than the quantity of data. 

In other words, they’re used when you want the answers to reveal deep insights into the mind of your target audience. For example, you’d use them in the following situations:

• When a detailed response is needed so you can use the voice of the customer or detect patterns in the types of responses you get.
• When you want your prospects to think critically about the question and the possible response they’ll give.
• When varied answers will help you develop a better understanding of the topic or field (like when you want to enter a new market).
• When you need to ask complicated questions and your respondents will benefit from being able to work through their thought process.

When to avoid open-ended questions

Unfortunately, you can’t always use open-ended questions. Sometimes,  a quick answer is ideal. For example, you want to know if someone has heard of your brand before. There’s no need to wax poetic about the possible reasons why they’ve not heard of it. Yes or no will do.

There are multiple situations in which free-form questions would do more harm than good. A few of them include: 

• When you have a longer survey and are short on resources to analyze the patterns in text answers
• When you want to make a quick and automated analysis of the data
• When you only have basic questions that don’t need much expansion
• When you have a structured survey that derives its usefulness from respondents choosing one of the available answers (like an NPS-style survey or a Likert-scale survey)

The correct way to ask open-ended questions

There is a right way and a wrong way to ask questions – especially open-ended questions. Like all surveys, you want to collect unbiased data so you can make decisions that move the needle in the right direction. The wording of your questions can have a big impact on how they’re perceived by your respondents.

Don’t lead respondents

Surveys are not the time to convince someone of your view or to purposely elicit a positive response. Avoid wording that would predispose someone to answer positively or negatively. 

For example, a question like “we’re considered a market leader and have over 10,000 customers, what do you think about our company?” is biased. It predisposes the respondent to give you positive feedback.  

Consider talking to a team member or an impartial third party and showing them your questions to ensure they’re not biased. Put yourself in the shoes of the respondent and ask yourself if the question makes you feel positively or negatively towards the person asking. 

Use closed-ended and open-ended questions together

This method is a staple of consumer research. The most effective surveys ask a closed-ended question and, depending on how the respondent answers, an open-ended question is used as a follow-up. It helps focus the respondent and bring out insights that would otherwise be missed by a closed-ended question.

Another benefit of using these two questions together stems from getting qualitative and quantitative answers. You’re able to say X people were dissatisfied with the product and X people were satisfied. For the ones who were dissatisfied, these are the reasons and places where we can improve.

Be aware of the difference between question types

At times, it can be difficult to determine whether a closed or open question will be better for your needs. There’s a quick way to determine the best type: if you want the reasons behind an answer, use an open-ended question; if you want the raw answer without explanations, use a closed-ended question.

Of course, this should be determined on a case-by-case basis. When in doubt, it may be better to change the question or exclude it altogether. It’s more important to get clean data.

Focus on feelings before facts

Phrase your questions so they’ll help you understand the reasons and emotions behind an answer. Instead of “How would you describe your support experience today?”, ask “How do you feel about your support experience today?”

The difference is subtle, but it can help you understand the emotions associated with an experience or product. If the sentiment is negative, you can take steps to change that; if it is positive, you can focus on doubling down on what’s working.

11 Open-Ended Question Examples 

1. How does X make you feel?

This question leans towards an emotional response instead of a purely objective one. It’s helpful when finding marketing copy that incorporates the voice of the customer.

2. What do you consider fun?

This question is useful because it helps reveal psychographic information and can also help you uncover different ways to position your products. For example, you might position your product as the perfect widget for bike enthusiasts.

3. What brought you to our website today?

This works on two separate levels. You can find out which advertising channels are working and the reason why people are seeking you out. This will help refine your messaging. 

4. What are your thoughts about ‘Product X’?

The question above reveals unbiased information about how your products are perceived. You’re asking the customer to say what they think is good (or bad) about your products.

5. What can we do better?

This question is direct and assumes that there’s room for improvement in your products and services. Use with caution because it may force your respondent to find problems where none exist.

6. What aspects of our website do you like?

This, again, is a direct question that may force users to mention things they don’t truly like. Use with caution. 

7. How do you prefer to shop (or workout, or travel, etc.)?

Questions like these help you derive insights that make your products fit seamlessly into the lives of your target market. If your people like to work out at home, you can create products that cater to that preference. If they like to travel by road, you can create relevant products. 

8. What do you like about x?

The X here can be general or specific, depending on what you choose to focus on. For example, “What do you like about our customer service?” or “What do you like about our company?” One of the questions invites a focused answer and the other a broad one.

9. What do you dislike about x?

The opposite perspective of the previous open-ended question.

10. How can we create a better experience for you? 

These questions focus on the direct improvement of a product or service. For example, what can we do to make your support experience better?

11. How can we make it easier for you to purchase today?

This question may not be ideal for a standard questionnaire because it works best in real-time. Instead, you can use it in your live chat or chatbots to engage people at the point of purchase.

Conclusion 

There are multiple ways to go about customer research. One of the most powerful and inexpensive is surveys. 

They can give you deep insights from a large number of people in a relatively short amount of time. This article has gone through everything you need to know to make effective open-ended questions to improve your business and grow your audience. 

Let me know what you think in the comments and don’t forget to share it. 

Open-ended question FAQ

What is an open-ended question?

Open-ended questions are a type of unstructured survey question that gives the respondent room to reply in an open text format, providing the opportunity to give more detailed answers.

What are the advantages of open-ended questions?

  • More thoughtful responses
  • Respondents can give more detailed answers which reveal more insights
  • Give respondents an opportunity to speak their minds
  • Identify weak spots in your organization

What to consider before using open-ended questions?

• The way the question is worded
• How to follow up with responses for maximum value
• The right time to use open-ended and closed-ended questions

Are close ended or open-ended questions better?

This depends on the situation and your goals. Open-ended questions give more insights, but closed-ended questions can help with quantification of responses.



Open-ended questions vs. close-ended questions: examples and how to survey users

Unless you’re a mind reader, the only way to find out what your users are thinking is to ask them. That's what surveys are for. 

But the way you ask a question often determines the kind of answer you get—and one of the first decisions you have to make is: are you going to ask an open-ended or a closed-ended question?


Understanding the difference between open-ended and close-ended questions helps you ask better, more targeted questions, so you can get actionable answers. The question examples we cover in this article look at open- and closed-ended questions in the context of a website survey, but the principle applies across any type of survey you may want to run. 


Open-ended vs. close-ended questions: what’s the difference?

Open-ended questions are questions that cannot be answered with a simple ‘yes’ or ‘no’, and instead require the respondent to elaborate on their points.

Open-ended questions help you see things from a customer’s perspective as you get feedback in their own words instead of stock answers. You can analyze open-ended questions using spreadsheets , view qualitative research and data analysis trends, and even spot elements that stand out with word cloud visualizations.
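As a small illustration of the word-cloud idea, here is a hedged Python sketch; it assumes the third-party wordcloud package is available (pip install wordcloud), and the response text is hypothetical.

```python
# Assumes the third-party "wordcloud" package is installed: pip install wordcloud
from wordcloud import WordCloud

# Hypothetical open-ended responses joined into one blob of text.
text = " ".join([
    "love the dashboard",
    "dashboard is slow to load",
    "exporting reports from the dashboard is confusing",
])

# Frequent terms ("dashboard") render larger than rare ones,
# making stand-out elements visible at a glance.
cloud = WordCloud(width=800, height=400, background_color="white").generate(text)
cloud.to_file("responses_cloud.png")
```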

Closed-ended questions are questions that can only be answered by selecting from a limited number of options, usually multiple-choice questions with a single-word answer (‘yes’ or ‘no’) or a rating scale (e.g. from strongly agree to strongly disagree).

Closed-ended questions give limited insight, but can easily be analyzed for quantitative data . For example, one of the most popular closed questions in market research is the Net Promoter Score® (NPS) survey, which asks people “How likely are you to recommend this product/service on a scale from 0 to 10?” and uses numerical answers to calculate overall score trends. Check out our NPS survey template to see this closed-ended question in action.
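The NPS calculation itself is simple: subtract the percentage of detractors (scores 0–6) from the percentage of promoters (scores 9–10). A minimal Python sketch, with hypothetical scores:

```python
def nps(scores):
    """Net Promoter Score: % of promoters (9-10) minus % of detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical answers to "How likely are you to recommend us (0-10)?"
print(nps([10, 9, 9, 8, 7, 6, 3, 10]))  # 4 promoters, 2 detractors of 8 -> 25.0
```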


Let’s take a look at the examples of open-ended questions vs. closed-ended questions above.

All the closed questions in the left column can be responded to with a one-word answer that gives you the general sentiment of each user and a few useful data points about their satisfaction, which help you look at trends and percentages. For example, did the proportion of people who declared themselves happy with your website change in the last three, six, or 12 months?

The open-ended questions in the right column let customers provide detailed responses with additional information so you understand the context behind a problem or learn more about your unique selling points . If you’re after qualitative data like this, the easy way to convert closed-ended into open-ended questions is to consider the range of possible responses and re-word your questions to allow for a free-form answer.

💡 Pro tip: when surveying people on your website with Hotjar Surveys, our Survey Logic feature lets you ask follow-up questions that help you find out the what and the why behind your users’ actions.

For more inspiration, here are 20+ real examples of open- and closed-ended questions you can ask on your website, along with a bunch of free pre-built survey templates and 50+ more survey questions to help you craft a better questionnaire for your users. 

Or, take advantage of Hotjar’s AI for Surveys , which generates insightful survey questions based on your research goal in seconds and prepares an automated summary report with key takeaways and suggested next steps once results are in.


How to ask survey questions for maximum responses

It’s often easy to lead your customers to the answer you want, so make sure you’re following these guidelines:

1. Embrace negative feedback

Some customers may find it hard to leave negative feedback if your questions are worded poorly.

For example, “We hope there wasn’t anything bad about your experience with us, but if so, please let us know” is better phrased neutrally as “Let us know if there was anything you’d like us to do differently.” It might sting a little to hear negative comments, but it’s your biggest opportunity to really empathize with customers and fuel your UX improvements moving forward.

2. Don’t lead your customers

“You bought 300 apples over the past year. What's your favorite fruit?” is an example of a leading question. You just planted the idea of an apple in your customers' minds. Valuable survey questions are open and objective—let people answer them in their own words, from their own perspective, and you’ll get more meaningful answers.

3. Avoid asking ‘and why?’

Tacking “and why?” on at the end of a question will only give you simple answers. And, no, adding “and why?” will not turn closed-ended questions into open-ended ones!

Asking “What did you purchase today, and why?” will give you an answer like “3 pairs of socks for a gift” (and that’s if you’re lucky), whereas wording the question as “Why did you choose to make a purchase today?” allows for an open answer like, “I saw your special offer and bought socks for my niece.”

4. Keep your survey simple

Not many folks love filling in a survey that’s 50 questions long and takes an hour to complete. For the most effective data collection (and decent response rates), you need to keep the respondents’ attention span in mind. Here’s how:

Keep question length short: good questions are one sentence long and worded as concisely as possible.

Limit the number of questions: take your list of planned questions and be ruthless when narrowing them down. Keep the questions you know will lead to direct insight and ditch the rest.

Show survey progress: a simple progress bar, or an indication of how many questions are left, motivates users to finish your survey.

5 of our favorite open-ended questions to ask customers

Now that you know how to ask good open-ended questions, it’s time to start putting the knowledge into practice.

To survey your website users, use Hotjar's feedback tools to run on-page surveys, collect answers, and visualize results. You can create surveys that run on your entire site, or choose to display them on specific pages (URLs).

Different types of Hotjar surveys

As for what to ask—if you're just getting started, the five open-ended questions below are ideal for any website, whether ecommerce or software-as-a-service:

1. How can we make this page better?

If you missed the expectations set by a customer, you may have over-promised or under-delivered. Ask users where you missed the mark today, and you’ll know how to properly set, and meet, expectations in the future. An open platform for your customers to tell you their pain points is far more valuable for increasing customer satisfaction than guessing what improvements you should make. Issues could range from technical bugs to lack of product range.

2. Where exactly did you first hear about us?

An open “How did you find out about us?” question leaves users to answer freely, without leading them to a stock response, and gives you valuable information that might be harder to track with traditional analytics tools.

We have a traffic attribution survey template ready and waiting for you to get started.

3. What is stopping you from [action] today?

A “What is stopping you?” question can be shown on exit pages; the open-form answers will help you identify the barriers to conversion that stop people from taking action.

Questions like this can also be triggered in a post-purchase survey on a thank you or order confirmation page. This type of survey only focuses on confirmed customers: after asking what almost stopped them, you can address any potential obstacles they highlight and fix them for the rest of your site visitors.

4. What are your main concerns or questions about [product/service]?

Finding out the concerns and objections of potential customers on your website helps you address them in future versions of the page they’re on and the products they’ll use. It sounds simple, but you’ll be surprised by how candid and helpful your users will be when answering this one.

Do you want to gather feedback on your product specifically? Learn what to improve and understand what users really think with our product feedback survey template and this expert advice on which product questions to ask when your product isn't selling.

5. What persuaded you to [take action] today?

Learning what made a customer click ‘buy now’ or ‘sign up’ helps you identify your levers. Maybe it’s low prices, fast shipping, or excellent customer service—whatever the reason, finding out what draws customers in and convinces them to stay helps you emphasize these benefits to other users and, ultimately, increase conversions.

Ask the right questions at the right time to get the insights you need

Whether you’re part of a marketing, product, sales, or user research team, asking the right questions through customer interviews or on-site surveys helps you collect feedback to create better user experiences and increase conversions and sales.

The type of question you choose depends on what you’re trying to achieve:

Ask a closed-ended question when you want answers that can be plotted on a graph and used to show trends and percentages. For example, answers to the closed-ended question “Do you trust the information on [website]?” help you understand the proportion of people who find your website trustworthy versus those who do not.

Ask an open-ended question when you want in-depth answers to better understand your customers and their needs , get more context behind their actions, and investigate the reasons behind their satisfaction or dissatisfaction with your product. For example, the open-ended question “If you could change anything on this page, what would it be?” allows your customers to express, in their own words, what they think you should be working on next.

Not only is the kind of question you ask important—the moment you ask it is equally relevant. Hotjar Surveys, our online survey tool, has a user-friendly survey builder that lets you effortlessly craft a survey and embed it anywhere on your web page to ask the right questions at the right time and place.


Open Ended Question Types, Examples and Samples

Created by TestInvite / May, 2024

In open-ended questions, the candidate is asked not to choose an answer, but to create it. How the candidate creates the answer is determined by the “answering type” of that open-ended question. Open-ended questions answered by writing text or numbers can be evaluated automatically by the assessment system, using automatic evaluation rules defined in advance; the answers can also be evaluated by tutors. If preferred, comments can be added to the answers and shared with candidates.

Types of Open Ended Questions

Open-ended question types, each of which provides different tools for candidates to construct their answer, can be categorized according to their answering methods. Candidates can compose their answers by typing, speaking, writing code, uploading files, taking photos or screenshots, and filling in tables, depending on the question type you choose.

Open ended questions answered by writing text

Short answer type questions

Candidates are given a one-line text field where they can write their answers.

Candidates answer questions by writing text in the box

Essay type questions

Candidates are given a multiple-lines text field where they can write their answer.

Multiple-line text field offered to the candidate for questions that usually require long answers (article writing, etc.) and a sample article writing question.

Customizing the text field

The given text field can be made into a more easily understandable and useful format for the candidate by using many methods.

  • A heading can be added to the text field.
  • A temporary content can be added to the text field.
  • An auxiliary content can be added to the text field.
• Short symbols or phrases can be placed at the beginning or end (prefix, suffix) of the text field.
  • A character counter can be added that immediately displays the number of characters in the given answer.
  • The width of the text field can be specified.
  • If the text field is multi-line, the number of lines of the space to be given can be determined.
  • The candidate can write from right-to-left or left-to-right.

Adjustment screen where the style and content of the text field to be given to candidates are customized by the admin.

Providing a virtual keyboard enabling to answer in different languages

If you want candidates to give an answer in a certain language, but you think that they may not have a keyboard compatible with that language, a virtual keyboard can be added to the candidates' screen in the relevant language. In this way, candidates can type their answers using the keys on the virtual keyboard provided.

Language selection from the admin screen for the virtual keyboard to be added to the open-ended question.

A range of virtual keyboard languages is supported by the examination system.

Providing auxiliary keypads

If you want candidates to answer using their own keyboard, but you want to provide some characters that may be missing from it, you can activate the auxiliary keypads.

Open ended question sample with an auxiliary keypad with special characters.

Providing auxiliary numerical keypads

If you want the candidate to type numbers as the answer, you can activate a virtual keypad consisting of numbers, point or comma.

Open ended question sample with a numerical keypad

Limiting answers given by writing

By defining a regular expression (regex), you can enforce that candidates’ responses conform to a rule. If a candidate writes text that does not comply with the rule, you can determine how they will be alerted.

Open-ended question example with an answer that does not match the specified regular expression and showing an automatic alert.
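As a rough sketch of how such a regex rule might behave (this is an illustration, not TestInvite's actual implementation; the date format and alert text are hypothetical):

```python
import re

# Hypothetical rule: the answer must be a date in DD/MM/YYYY format.
ANSWER_RULE = re.compile(r"\d{2}/\d{2}/\d{4}")
ALERT = "Please enter the date as DD/MM/YYYY."

def check_answer(answer: str):
    """Return None if the answer complies with the rule, otherwise the alert text."""
    return None if ANSWER_RULE.fullmatch(answer.strip()) else ALERT

print(check_answer("25/12/2023"))   # None -> answer accepted
print(check_answer("Dec 25 2023"))  # alert shown to the candidate
```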

Automatic evaluation of the answer

A written answer to a question can be evaluated automatically by the system, by assessing it against a rule defined by the admin.

  • Equality rules: Evaluation can be made automatically, depending on whether the text written by the candidate is equal or unequal to the text defined beforehand.

Determination of the text to be used in assessing, and the grade to be given in case of equality

• Equality rules for any or none: The answer given by the candidate is compared with several texts defined beforehand. If the answer is fully equal to one of the texts, or if it is not equal to any of them, the evaluation can be made automatically.

Determination of the comparison texts to be used for grading automatically, and the grade to be given in case the answer is equal to any of the texts.

  • Rules defined by regular expression: The answer given by the candidate is compared with a predefined regular expression (Regular expression-regex). Automatic evaluation can be made if it complies with the rule or not.

Determination of the comparison rule using regex, and the grade to be given in case of a match

Defining a special evaluation function to assess automatically the answer

You can define a special function that will automatically evaluate the candidate's answer. In this case, the answer given by the candidate will be automatically evaluated by the function.

Selection of the special functions defined by admins to evaluate answers
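To show what such rules amount to in code, here is a minimal Python sketch (an illustration, not TestInvite's actual evaluation engine) of the three rule types described above; the answers, accepted texts, and point values are hypothetical:

```python
import re

def grade_equality(answer, expected, points):
    """Equality rule: full credit only if the answer equals the expected text."""
    return points if answer.strip().lower() == expected.lower() else 0

def grade_any_of(answer, accepted, points):
    """Any-of rule: full credit if the answer equals any of the accepted texts."""
    return points if answer.strip().lower() in {a.lower() for a in accepted} else 0

def grade_regex(answer, pattern, points):
    """Regex rule: full credit if the answer matches the pattern."""
    return points if re.fullmatch(pattern, answer.strip()) else 0

# Hypothetical examples of each rule type.
print(grade_equality("Ankara", "ankara", 5))           # 5
print(grade_any_of("colour", ["color", "colour"], 5))  # 5
print(grade_regex("A-1234", r"[A-Z]-\d{4}", 5))        # 5
```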

Open ended questions answered by writing numbers

If you want the question to be answered by writing a number, an open-ended question answered by writing a number can be chosen. In that case, the candidate can only write a single number in the answer space.

An open-ended question example answered by writing a number.

Customizing the number writing space

The text field of the answer can be made into a more easily understandable and useful format for the candidate by employing several methods.

  • A temporary content can be defined in the text field.

Adjustment screen where the style and content of the answer field to be given to the candidate are customized by the admin.

Determining numerical entry method and number format

  • Numerical Entry: The candidate can enter a number in accordance with your preferences for using periods and commas that you determine.

Specifying the type of answer that the candidate is asked to write as Numerical Entry.

  • Numerical Entry with Scientific Notation: The candidate can enter a number in accordance with the rules of writing numbers in scientific notation.

An open ended question example that is expected to be answered with scientific notation.

  • Numerical and Scientific Entry: The candidate can enter an answer using either numerical or scientific writing.

Option to confirm both types of notation for the candidate's requested answer

Specifying thousand and precision separators to use while entering numbers

When the candidate writes a number to answer the question, it can be determined which punctuation marks to be used as thousand and precision separator.

Thousand and precision separator selections screen

Specifying the maximum precision digit for the numerical entries

The maximum precision of the numbers can be specified by the admin. In this way, the number that the candidate writes can have only as many digits after the comma (or point) as the admin determines.

The screen to set the maximum precision of the number expected to be written as the answer.
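As a rough sketch of how separator and precision settings might be applied when parsing a candidate's entry (a hypothetical configuration, not the system's actual code):

```python
# Hypothetical admin configuration: "." as the thousand separator,
# "," as the precision (decimal) separator, at most 2 decimal digits.
THOUSAND_SEP = "."
PRECISION_SEP = ","
MAX_PRECISION = 2

def parse_entry(raw: str) -> float:
    """Parse a numeric entry; reject it if it exceeds the allowed precision."""
    normalized = raw.replace(THOUSAND_SEP, "").replace(PRECISION_SEP, ".")
    fraction = normalized.partition(".")[2]
    if len(fraction) > MAX_PRECISION:
        raise ValueError(f"At most {MAX_PRECISION} digits are allowed after the separator.")
    return float(normalized)

print(parse_entry("1.234,56"))  # 1234.56
```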

Limitation of the interval of numbers that can be written as answer

By defining a regular expression (regex), you can enforce that candidates’ answers conform to a rule. If a text that does not comply with the rule is written, you can determine how to alert candidates.

Setting up as a rule with Regex the answer format requested in the question and adding an alert message

Automatic evaluation of the answer given as a number

The answer can be evaluated automatically by comparing it against a rule defined by the admin.

  • Equality rules: Evaluation can be made automatically, depending on whether the answer written by the candidate is equal or not equal to the number defined beforehand.

Defining equality rule and grading method for the evaluation of the answer

• Equality rules for any or none: The answer given by the candidate is compared with several numbers defined beforehand by the admin. The evaluation can be made automatically, depending on whether the answer fully matches one of the rules or not.

Defining several values with which answer should match and the grading method according to the match rate

• Rule of being in a number interval: Evaluation can be made automatically by determining whether the number written by the candidate is within a number interval defined by the admin (see the sketch after this list).

Determination of the number interval where the answer given by the candidate should be and the grading method when this condition is met.

  • Rules defined by regular expression: The number written by the candidate is compared with a predefined regular expression (Regular expression - regex). An automatic evaluation can be made depending on whether the answer complies with the rule or not.
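Here is the sketch promised above for the number-interval rule (hypothetical values; not the system's actual code):

```python
def grade_interval(answer, low, high, points):
    """Interval rule: full credit if the candidate's number falls within [low, high]."""
    return points if low <= answer <= high else 0

# Hypothetical question graded out of 5: accept any estimate of pi
# between 3.14 and 3.15 inclusive.
print(grade_interval(3.1416, 3.14, 3.15, 5))  # 5
print(grade_interval(3.2, 3.14, 3.15, 5))     # 0
```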

Defining a custom evaluation function to automatically evaluate typed number

You can write a function that will automatically evaluate the candidate's answer. In this case, the answer given by the candidate is evaluated by that function.

Open ended questions answered by filling a table

You can create a table, specify the number of rows and columns in the table, and open spaces that the candidate can answer by typing in the desired cells of the table. You can define a single-line text space or a multi-line text space for each cell in the table.

Screen for editing the number of rows and columns of the table while creating a question with a table

Adding fixed contents to cells in a table

You can add fixed contents to cells in the table, in order to render it more understandable.

Adding fixed content to cells in the table to provide information to the candidate.

Displaying Column and Row Headings

You can add a heading to each row and column in the table. In this way, you can make your table more clear.

Adding a heading to one of the columns in the table to be shown to the candidate

Open ended questions answered by creating content

Candidates give an answer to the question by creating content, using the rich content editor.

The following permissions can be granted in the answer creation process:

  • Adding image: Candidate adds an image file to the content they prepared.
  • Adding audio: Candidate adds an audio file to the content they prepared.
  • Adding video: Candidate adds a video file to the content they prepared.
  • Adding file: Candidate adds a file to the content they prepared.

You can provide the candidate with a virtual keyboard in the question creation process.

An open-ended question sample in which the candidate is asked to answer by creating content

Open ended questions answered by recording audio (speaking questions)

Candidates answer the question by recording their voice. For this, they are provided with a voice recording application allowing them to record their voice while speaking.

• A time limit can be set for the recording.
• The number of recording attempts can be limited by the admin.

The screen showing the maximum duration of the audio recording that candidates can make and how many recording attempts they are allowed.

Open ended questions answered by recording a video (interview questions)

Candidates answer the question by recording their voice and image together via the webcam on their device. For this, they are provided with a video recording application where they can record with their camera.

The screen showing the maximum duration of the video recording that candidates can make and how many recording attempts they are allowed.

Open ended questions answered by writing code

Candidates answer the question by writing code in a programming language determined by the admin. A code editor compatible with the chosen programming language is provided so that candidates can write their answers.

You can show initial content in the provided code editor. To do so, save the question after adding the code you want to the editor.

Open ended question sample answered by using the code editor.

Determining the programming language or script of the code editor

You can choose among more than 50 language options such as TypeScript, JavaScript, C#, Java, SQL, Python, Ruby, PHP, C, C++ for the code editor.

Determination of the programming language that is going to be used by candidates, in the question creation process

Open ended questions answered by uploading a file

You can make candidates answer the question by uploading a file. Candidates upload the file they prepared as an answer by using the file upload tool.

An open ended question example that candidates answer by uploading a file from their device

Open ended questions answered by taking a photo or uploading a screenshot

You can make candidates answer the question by uploading a screenshot on their computer, or by taking a photo via a camera connected to their devices.

An open-ended question sample that candidates answer by taking a photo with the camera of the device they use

Evaluation of the Answers by the Tutors

Answers given to open-ended questions can be evaluated by tutors after the exams are completed. When an exam consisting of open-ended questions is completed, the system indicates how many questions need to be evaluated in the relevant exam.

Each exam in the results list shows the number of questions pending evaluation.

After the answers are evaluated, the exam report is automatically updated.

There are two methods for evaluating the answers to open-ended questions. These methods can be set in the test settings to apply to all questions, or chosen one by one for each question in the result reports.

The evaluation method for open-ended questions, either nominal or percentage-based, can be selected from the settings for that test.

• The answer can be evaluated out of 100. In this case, after the evaluation, the exam results are updated by multiplying the score of the question by the evaluation rate of the answer given to the question. For instance, if the question is worth 5 points in the exam and the answer is evaluated as 40% correct (successful), the candidate will receive 2 points (5 x 40%) from this question (see the sketch below).

Grading a question with percentage in the result reports

• A fixed grade can be determined for the answer. For instance, if the grade determined for the question is five, then the answer given by the candidate can be graded up to a maximum of 5.

Grading a question with nominal points in the result reports
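The sketch below reproduces the two grading methods in Python, using the article's own 5-point, 40% example (a minimal illustration, not the system's actual code):

```python
def percentage_grade(question_points, percent_correct):
    """Percentage method: final score = question points x evaluation rate."""
    return question_points * percent_correct / 100

def nominal_grade(question_points, awarded):
    """Nominal method: the tutor awards a fixed grade, capped at the question's points."""
    return min(awarded, question_points)

print(percentage_grade(5, 40))  # 2.0 -- the 5 x 40% example from the text
print(nominal_grade(5, 4))      # 4
```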

Giving feedback for the answers

A commentary can also be included in the evaluation report for open-ended questions. The comment may include an explanation of the evaluation, as well as a list of the correct and incorrect elements of the answer, clarifying which parts of the answer are incorrect and why, and what the correct answer should be.

The content creation screen that candidates can display after answering the question.

Delivering evaluations and comments to candidates

The results of an exam created with open-ended questions can be shared with the candidates after the evaluation process. Candidates can see their answer to each question, the evaluation score they received, and, if it is chosen to be shown, the evaluator's comment.


How to Write Open-Ended Questions: Your Guide to More Thoughtful Feedback

open ended survey questions

Vice President, Innovation, Profiles Division

What Are Open-Ended Questions?

Open-ended questions are questions that encourage survey respondents to provide answers with depth and nuance rather than a simple “yes” or “no”. There are various goals to keep in mind when writing open-ended questions. In most cases, however, goals are not about the volume of feedback but about its quality. Whether you are trying to get respondents to be analytical, creative, or spontaneous, the biggest challenge you face is encouraging people to think, and to think in the right sort of ways.

Most respondents spend 15 seconds answering the average open-ended question and provide an average of five words. Five words might be enough if it constitutes meaningful feedback, but often it’s humdrum verbiage, which is a bit of a nightmare to analyse.

If you were to ask people to watch an ad and write down what they thought of it, the most common response would be "It was OK." The second- and third-most common remarks would be "I liked it" and "I didn't like it," and the fourth would be "I don't know." These responses are clearly not of great value, so you might as well have asked a yes/no question: "Did you like it or not?"

The Purpose and Value of Open-Ended Questions

Open-ended questions are crucial in information gathering. They elicit responses that might not surface with multiple choice, scale-based, or short answer questions. These questions allow for a range of answers, giving us a broader perspective.

By encouraging respondents to narrate their experiences, we gain insight into individual viewpoints. The variations, commonalities, themes, and unexpected insights that surface help create a well-rounded understanding of the subject matter.

In a data-driven world, recognizing the role of open-ended questions is vital. They infuse life into hard facts, offering context, depth, and, ultimately, a richer meaning to our research.

9 Best Practices for Writing Open-Ended Questions

Here are 9 of our best practices for eliciting more thoughtful feedback with open-ended questions. Find the complete guide, with all 23 tips, here.

• Bring respondents into the problem
• Challenge respondents
• Use word counters
• Deal with spelling and literacy concerns
• Tackle boredom thresholds
• Ask a seed question
• Set rules
• Make it personal
• Realise the power of the word "imagine"

1. Bring respondents into the problem

Bring respondents into the problem you are trying to solve if possible. Instead of asking "Why do you like this brand?" you might say, "This brand is a lot more popular amongst some people than others and we are trying to understand why."

2. Challenge respondents

Using language that challenges respondents to do something can be an extremely powerful weapon to encourage feedback. Instead of asking "What brands come to mind?" you might re-phrase that to say, “How many brands can you guess?" Or, instead of saying "What words come to mind when you think of this brand," you might say, "can you think of the most popular words people associate with this brand?"

Kantar has discovered that this type of approach can sometimes double the level of feedback as well.

3. Use word counters

Asking people to write not more than “X” number of words on a topic and using a word counter so they can see how much they have written is a great way to extend the volume of open-ended feedback. The “X” limit might be 3 words or 300, depending on the circumstances.

4. Deal with spelling and literacy concerns

One of the reasons people can be reluctant to give open-ended feedback in surveys is because they are worried about their spelling and how well they can write. There are a couple of ways of dealing with this. First, make it clear that you are not worried about it. Secondly, we have found that showing a range of informal feedback, with not necessarily perfect spelling, encourages respondents to be less concerned about their personal style of delivering feedback.

5. Tackle boredom thresholds

People put far less effort into answering open-ended questions at the end of a survey. There are often trigger points you will notice wherein people tend to give up, and their desire to get to the end of a survey cuts in so strongly that their responses to open-ended questions dry up.

6. Ask a seed question

One of the best tricks to get people to put more thought into answering an open-ended question is to first ask a question that seeds a thinking process. For example, asking “Do you think your bank is perfect?” seeds a follow-up question about what the bank could do to improve its service.

7. Set rules

Adding a rule to a question can make an open-ended question a whole lot more interesting and fun for respondents to answer, thus triggering more thoughtful feedback. The easiest way to do this is by placing word limits - e.g., “review this film in 3 words.”

8. Make it personal

Always remember you are one human being talking to another, and most people's favourite topic is themselves. Try, wherever possible, to frame the question in a personal context. Instead of asking “Why do you like this brand?” you might ask, "If a friend asked you what you thought of this brand, what would you say?"

9. Realise the power of the word "imagine"

The word “imagine” is a magical word to use as a means of getting people to think. Kantar Profiles did an experiment in which we asked one group of people to make a list of their favourite shops. We asked another group, “Imagine you could design your perfect shopping centre. What shops would you have in it?" This increased the feedback 4-fold.

What can you achieve if you get it all right?

In experiments where we redesign the way we ask an open-ended question, the biggest uplift in feedback we have ever achieved has been a 6-fold increase, from 17 words to 103 words.

This used a combination of techniques listed above. With the right thought and approach, you can easily achieve double or treble the volume of feedback by re-engineering the wording of a question.

But note that, as we said at the start, it is often not about the volume of words so much as the clarity of thinking those words reflect. We find that increases in the volume of feedback are also linked to more thoughtful feedback.

What do you do with all that open-ended data?

All this raises the question of what to do with all this open-ended feedback, which can be overwhelming. Some clever ways of automatically processing open-ended feedback have emerged, using social media text-analytics software and natural language processing techniques. You can also try Kantar Profiles’ Taxonomy Builder, a free Excel tool, here.

Access all 23 tips

To read all 23 tips for collecting more thoughtful responses from open-ended questions in online surveys, use the form below to download the complete guide. For more on this topic, watch the on-demand webinar with Jon Puleston .


Open and Closed Questions: Circumstances Reconciliation Essay


The essay covers: an example of an open-ended question; circumstances that require open-ended questions; a follow-up question that would add value to a counseling session; an example of a closed-ended question; circumstances that require closed-ended questions; and adding value to a counseling session through a follow-up question.

It should be noted that open-ended questions should be asked after considerable deliberation since they tend to reveal excessive information about the respondent. In addition, the question posed should bear relevance to the topic being pursued. In a bid to establish a rapport with a client, I would ask him or her to brief me about the goings-on in his or her life.

Open-ended questions are beneficial to counselors whenever clients are required to provide an elaborate response to a question they have been asked. This is because they provide complicated and in-depth answers to queries that are raised during interactive sessions. Most importantly, this variety of questions comes in handy whenever there is a need to create an insight into a predicament by probing deeper into client concerns.

Consequently, such questions are best used during therapy and other reflective sessions, whenever persuasion is a viable option during the process. In addition, clients will be attended to properly only if the service providers understand the issues affecting them in their daily lives. It is commonplace that asking open-ended questions is the best way to establish these truths (Schultz, 2010).

It is noteworthy that a person’s life entails diverse and varied aspects that they have to deal with daily. As a result, establishing the root of their afflictions would require further probing into the same matter. Assuming the client had issues at his or her place of work, the ideal follow-up question would ask him or her how they feel whenever they discharge their duties at the workplace. This will limit the scope of their answer while equally allowing them a free hand, within the confines of the question, to talk about the pressing matters.

These questions are used whenever the questioner intends to narrow down a protracted conversation to reach a verdict or finale. They may be asked during the initial stages of research sessions to ascertain the credibility and astuteness of respondents before the commencement of fact-finding. If a survey is being carried out, an ideal question would entail establishing whether the client required further clarification on the topic. This would influence the action taken by the researcher: whether to proceed with the session or take some time to clarify matters for the respondent.

It has been established that closed-ended questions are often used whenever a ‘yes’ or ’no’ response is required. This is because the nature of the question is restrictive, hence prohibiting the respondent from providing further information. It should be noted that most respondents opt to abstain from issuing additional information whenever they are asked such a question (James, 2008).

In addition, these questions often appear to lead the respondent, since one of the possible answers is always mentioned in the query (Bradburn, Sudman & Wansik, 2004). This may affect the integrity of the response issued, subsequently interfering with the entire process. This serves to highlight the importance of these questions when verifying facts that have been reported with regards to a topic of concern.

Assuming the client required clarification about the research topic and the interviewer had responded accordingly, establishing whether the respondent could field the questions is mandatory. In a counseling session, a proper follow-up question could seek to establish whether the respondent was in a position to proceed with the session or not. This will enable the counselor to determine a suitable course of action to pursue. If the client is at ease, the session may proceed as intended. If the client is still uncomfortable, the counselor may find suitable methods of helping them relax before proceeding with the session.

Bradburn, N., Sudman, S., & Wansik, B. (2004). Asking questions: The definitive guide to questionnaire design – for market research, political polls, and social and health questionnaires. California, CA: John Wiley and Sons.

James, R. (2008). Crisis intervention strategies . New Jersey, NJ: Cengage Learning.

Schultz, M. (2010). Open Ended Questions for Your Prospects and Customers . Sales marks. Web.



Reinventing search with a new AI-powered Microsoft Bing and Edge, your copilot for the web

Feb 7, 2023 | Yusuf Mehdi - Corporate Vice President & Consumer Chief Marketing Officer


[Image Descriptor: Screenshot of the new Bing]

To empower people to unlock the joy of discovery, feel the wonder of creation and better harness the world’s knowledge, today we’re improving how the world benefits from the web by reinventing the tools billions of people use every day, the search engine and the browser.

Today, we’re launching an all new, AI-powered Bing search engine and Edge browser, available in preview now at Bing.com , to deliver better search, more complete answers, a new chat experience and the ability to generate content. We think of these tools as an AI copilot for the web.

“AI will fundamentally change every software category, starting with the largest category of all – search,” said Satya Nadella, Chairman and CEO, Microsoft. “Today, we’re launching Bing and Edge powered by AI copilot and chat, to help people get more from search and the web.”

There are 10 billion search queries a day, but we estimate half of them go unanswered. That’s because people are using search to do things it wasn’t originally designed to do. It’s great for finding a website, but for more complex questions or tasks too often it falls short.

The new Bing and Edge – Your copilot for the web

We have brought together search, browsing and chat into one unified experience you can invoke from anywhere on the web, delivering:

  • Better search. The new Bing gives you an improved version of the familiar search experience, providing more relevant results for simple things like sports scores, stock prices and weather, along with a new sidebar that shows more comprehensive answers if you want them.
  • Complete answers. Bing reviews results from across the web to find and summarize the answer you’re looking for. For example, you can get detailed instructions for how to substitute eggs for another ingredient in a cake you are baking right in that moment, without scrolling through multiple results.
  • A new chat experience. For more complex searches – such as for planning a detailed trip itinerary or researching what TV to buy – the new Bing offers new, interactive chat. The chat experience empowers you to refine your search until you get the complete answer you are looking for by asking for more details, clarity and ideas – with links available so you can immediately act on your decisions.
  • A creative spark. There are times when you need more than an answer – you need inspiration. The new Bing can generate the content to help you. It can help you write an email, create a 5-day itinerary for a dream vacation to Hawaii, with links to book your travel and accommodations, prep for a job interview or create a quiz for trivia night. The new Bing also cites all its sources, so you’re able to see links to the web content it references.
  • New Microsoft Edge experience. We’ve updated the Edge browser with new AI capabilities and a new look, and we’ve added two new functionalities: Chat and compose. With the Edge Sidebar, you can ask for a summary of a lengthy financial report to get the key takeaways – and then use the chat function to ask for a comparison to a competing company’s financials and automatically put it in a table. You can also ask Edge to help you compose content, such as a LinkedIn post, by giving it a few prompts to get you started. After that, you can ask it to help you update the tone, format and length of the post. Edge can understand the web page you’re on and adapts accordingly.

[Image Descriptor: Screenshot of Edge chat composing a LinkedIn post]

Example prompts shown in the screenshots: “My anniversary is coming up in September, help me plan a trip somewhere fun in Europe, leaving from London.” and “Will the Ikea Klippan loveseat fit into my 2019 Honda Odyssey?”

[Image Descriptor: Screenshot of the new Bing chat]

Reinventing search with AI

The new Bing experience is a culmination of four technical breakthroughs:

  • Next-generation OpenAI model . We’re excited to announce the new Bing is running on a new, next-generation OpenAI large language model that is more powerful than ChatGPT and customized specifically for search. It takes key learnings and advancements from ChatGPT and GPT-3.5 – and it is even faster, more accurate and more capable.
  • Microsoft Prometheus model . We have developed a proprietary way of working with the OpenAI model that allows us to best leverage its power. We call this collection of capabilities and techniques the Prometheus model. This combination gives you more relevant, timely and targeted results, with improved safety.
  • Applying AI to core search algorithm . We’ve also applied the AI model to our core Bing search ranking engine, which led to the largest jump in relevance in two decades. With this AI model, even basic search queries are more accurate and more relevant.
  • New user experience . We’re reimagining how you interact with search, browser and chat by pulling them into a unified experience. This will unlock a completely new way to interact with the web.

These groundbreaking new search experiences are possible because Microsoft has committed to building Azure into an AI supercomputer for the world, and OpenAI has used this infrastructure to train the breakthrough models that are now being optimized for Bing.
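
Microsoft does not publish the internals of Prometheus, so the Python sketch below shows only the general retrieval-grounding pattern the bullets above gesture at: fetch fresh search results, pass them to a language model as numbered sources, and ask for an answer that cites them. Everything here (`search_web`, `call_llm`, the example URLs) is a hypothetical stand-in, not Bing's actual API.

```python
# A hedged sketch of a retrieval-grounded answer, under the assumptions
# named above; `search_web` and `call_llm` are hypothetical stand-ins.
from typing import List, Tuple

def search_web(query: str) -> List[Tuple[str, str]]:
    """Hypothetical search backend returning (url, snippet) pairs."""
    return [
        ("https://example.com/egg-substitutes",
         "Applesauce or mashed banana can replace eggs in most cakes."),
        ("https://example.com/baking-science",
         "Eggs provide structure and moisture; substitutes trade these off."),
    ]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any chat-completion endpoint."""
    raise NotImplementedError("wire up a model provider here")

def answer_with_citations(question: str) -> str:
    # Number the retrieved sources so the model can cite them as [1], [2], ...
    results = search_web(question)
    sources = "\n".join(
        f"[{i + 1}] {url}: {snippet}"
        for i, (url, snippet) in enumerate(results)
    )
    prompt = (
        "Answer the question using only the numbered sources below, "
        "and cite them like [1].\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

# Example (would raise until call_llm is wired to a real model):
# answer_with_citations("How can I substitute eggs in a cake I'm baking?")
```

Grounding the prompt in retrieved snippets is what enables the cited, up-to-date answers described above; a chat experience then repeats this loop with the conversation history appended.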

Microsoft and OpenAI – innovating together, responsibly

Together with OpenAI, we’ve also been intentional in implementing safeguards to defend against harmful content. Our teams are working to address issues such as misinformation and disinformation, content blocking, data safety and preventing the promotion of harmful or discriminatory content in line with our AI principles .

The work we are doing with OpenAI builds on our company’s yearslong effort to ensure that our AI systems are responsible by design. We will continue to apply the full strength of our responsible AI ecosystem – including researchers, engineers and policy experts – to develop new approaches to mitigate risk.

Live today in limited preview, expanding to millions soon

The new Bing is available today in a limited preview on desktop, and everyone can visit Bing.com today to try sample queries and sign up for the waitlist. We’re going to scale the preview to millions in the coming weeks. A mobile experience will also be in preview soon.

We’re excited to put the new Bing and Edge into the real world to get the critical feedback required to improve our models as we scale.

Related links:

Amy Hood, Microsoft executive vice president and chief financial officer, will host a conference call with investors at 2:30 p.m. PT.

Brad Smith, Microsoft vice chair and president: Meeting the moment: advancing the future through responsible AI

Learn more about advertising on the new Bing

More information about the announcement

Tags: AI , Bing , Microsoft Edge

Real Teenagers, Fake Nudes: The Rise of Deepfakes in American Schools

Students are using artificial intelligence to create sexually explicit images of their classmates.


Hosted by Sabrina Tavernise

Featuring Natasha Singer

Produced by Sydney Harper and Shannon M. Lin

Edited by Marc Georges

Original music by Marion Lozano ,  Elisheba Ittoop and Dan Powell

Engineered by Chris Wood


Warning: this episode contains strong language, descriptions of explicit content and sexual harassment

A disturbing new problem is sweeping American schools: Students are using artificial intelligence to create sexually explicit images of their classmates and then share them without the person depicted even knowing.

Natasha Singer, who covers technology, business and society for The Times, discusses the rise of deepfake nudes and one girl’s fight to stop them.

On today’s episode

Natasha Singer , a reporter covering technology, business and society for The New York Times.

A girl and her mother stand next to each other wearing black clothing. They are looking into the distance and their hair is blowing in the wind.

Background reading

Using artificial intelligence, middle and high school students have fabricated explicit images of female classmates and shared the doctored pictures.

Spurred by teenage girls, states have moved to ban deepfake nudes .



Natasha Singer writes about technology, business and society. She is currently reporting on the far-reaching ways that tech companies and their tools are reshaping public schools, higher education and job opportunities.


COMMENTS

  1. 75 Open-Ended Questions Examples (2024)

    2. Facilitating self-expression. Open-ended questions allow us to express ourselves. Imagine only living life being able to say "yes" or "no" to questions. We'd struggle to get across our own personalities! Only with fully-expressed sentences and monologues can we share our full thoughts, feelings, and experiences.

  2. Open-Ended Questions: 28 Examples of How to Ask Properly

    Comparison: Open-ended vs closed-ended questions. Open-ended and closed-ended questions serve as the two sides of the inquiry coin, each with its unique advantages. Open-ended questions: Kickstart with "How", "Why", and "What". No set answers, sparking more thought. Encourage detailed responses, explaining the 'why' or 'how'.

  3. What Is An Open Ended Question? Answering It Through Essay

    Open-ended questions are those that do not define the scope you should take (i.e., how many and what kinds of experiences to discuss). Like personal statements for other types of applications, open-ended essays have more room for creativity, as you must make the decision on issues such as how expansive or narrow your topic should be.

  4. How to Write Open‐Ended Questions: 10 Steps (with Pictures)

    Begin your question with "how," "why," or "what." As you begin writing your questions, start them with words that could prompt multiple possible answers. Questions that open with more specific words, such as "which" or "when," often have a single correct answer.

  5. PDF Developing Effective Open-Ended Questions and Arguable, Research-Based

    Developing Effective Open-Ended Questions and Arguable, Research-Based Claims for Academic Essays. Asking Open-Ended, Arguable Questions. In academic papers, the thesis is typically an answer to a question about a significant issue that has more than one possible answer and requires research to provide evidence.

  6. What Are Open-Ended, Close-Ended Questions ...

    Define closed-ended question: a close-ended question is a question that expects a specific answer and does not give leeway outside of that answer. In summary, open-ended questions are broad and do not expect a specific answer. Close-ended questions are limiting and expect a specific answer.

  7. How to Ask Open-Ended Questions (with 100+ examples!)

    The key is to ask open-ended questions, rather than closed ones. Closed questions usually produce a short, yes or no response; they tend to limit the conversation. On the other hand, open questions produce a longer, fuller response; they expand the conversation. The most important benefit of open-ended questions is that they let you find out ...

  8. Open-Ended Questions vs. Closed: 30 Examples & Comparisons

    An open-ended question opens up a topic for exploration and discussion while a closed-ended question leads to a closed-off conversational path. After "Yes" or "No" or the specific one-word answer to the question, the thread is done. Open-ended questions lead to qualitative answers while closed-ended questions lead to quantitative answers.

  9. How to ask open-ended questions? Crucial tips and examples

    1. Encourage deep reflection. Open-ended questions prompt respondents to think critically and reflect on their thoughts, feelings, and experiences. By inviting individuals to express themselves in their own words, these questions encourage deeper introspection and provide better response quality. 2.

  10. Preparing Students in Writing Responses to Open-Ended Questions

    The three main goals of this article are: to describe how students need to approach the close reading of the questions, or tasks on the assessments; to identify the kinds of skills and knowledge students need in writing clear, comprehensible responses; and to examine issues related to fluency in writing and stamina that arise as students work ...

  11. 100 Open-Ended Questions and What They Are

    To put it as simply as possible, open-ended questions are questions that require more than a short, fixed response. Open-ended questions try to avoid answers like "Yes," "No," "The Battle of Midway," or "Onions." Open-ended questions attempt to make the person who is answering the question give a more detailed and elaborate ...

  12. Should essays and other "open-ended"-type questions retain a place in

    Written assessments fall into two classes: constructed-response or open-ended questions, such as the essay and a number of variants of the short-answer question, and selected-response or closed-ended questions, typically in the form of multiple-choice. It is widely believed that constructed response written questions test higher order cognitive processes in a manner that multiple-choice ...

  13. Open Ended Questions: Definition + [30 Questionnaire Examples]

    What is Open Ended Question. Open-ended questions are those which require more thought and more than a simple one-word answer. An open-ended question is designed to encourage a full, meaningful, and deliberate answer using the subject's own knowledge and/or feelings. It is the opposite of a closed-ended question, which encourages a short or ...

  14. How to Answer Open-Ended Essay Questions

    It's test time, and this one isn't multiple choice. Your teacher gives you a sheet of paper with a question on it. The only problem is, you can't immediately see a definite answer.

  15. Open-Ended vs. Closed Questions in User Research

    Open-Ended vs. Closed Questions. There are two types of questions we can use in research studies: open-ended and closed. Open-ended questions allow participants to give a free-form text answer. Closed questions (or closed-ended questions) restrict participants to one of a limited set of possible answers. Open-ended questions encourage exploration of a topic; a participant can choose what to ...

  16. 18 Tough Open-Ended Questions (And How To Answer Them)

    Interviewers ask open-ended interview questions during the hiring process to learn more about a candidate's experience and relevant abilities. The ability to answer open-ended interview questions in a detailed and thoughtful manner can show your problem-solving and critical-thinking skills.

  17. Examples of Open-Ended vs. Closed-Ended Questions

    Open-ended questions can be a little hard to spot sometimes. How can you know if a question is open-ended or closed-ended? Browse these examples to find out. ... while open-ended questions are more like subjective short responses and essay questions. Now that you know the difference between these question types, ...

  18. Open-Ended Question: What it is, How to Use it (+Examples)

    For example, an open-ended question allows you to probe much deeper but a close ended question allows you to get concise information that can be quantified. It's much easier to quantify yes or nos than a paragraph of text. A relatable example comes from the standardized tests most of us took in school.

  19. Open-Ended Questions [vs Close-Ended] + Examples

    Closed-ended questions are questions that can only be answered by selecting from a limited number of options, usually multiple-choice questions with a single-word answer ('yes' or 'no') or a rating scale (e.g. from strongly agree to strongly disagree). Closed-ended questions give limited insight, but can easily be analyzed for ...

  20. Open Ended Questions

    Open-ended questions that the candidate can answer by writing, speaking, filling in tables, taking pictures, writing code, uploading files and using similar methods are essential components in many exams, tests, and assessment processes. In open-ended questions, the candidate is asked not to choose an answer, but to create it.

  21. How to Write Open-Ended Questions

    In a data-driven world, recognizing the role of open-ended questions is vital. They infuse life into hard facts, offering context, depth, and, ultimately, a richer meaning to our research. 9 Best Practices for Writing Open-Ended Questions: Here are 9 of our best practices for eliciting more thoughtful feedback with open-ended questions. Find the ...

  22. Open-Ended Questions: Case Study

    Such a request demonstrates that the practitioner is actually interested in the answer and helps to escape cliché questions, which can receive closed-ended answers. Another example of rephrasing open-ended questions was Question 4 provided above. Children often come back with "Fine!" after being asked how school was today.

  23. Open and Closed Questions: Circumstances Reconciliation Essay

    An example of an open-ended question. It should be noted that open-ended questions should be asked after considerable deliberation since they tend to reveal excessive information about the respondent. In addition, the question posed should bear relevance to the topic being pursued. In a bid to establish a rapport with a client, I would ask him ...

  24. Reinventing search with a new AI-powered Microsoft Bing and Edge, your

    There are 10 billion search queries a day, but we estimate half of them go unanswered. That's because people are using search to do things it wasn't originally designed to do. It's great for finding a website, but for more complex questions or tasks too often it falls short. The new Bing and Edge - Your copilot for the web

  25. PDF Supreme Court of The United States

    ...ing results, extracurricular involvement, essay quality, personal factors, and student background. Id., at 600. Readers are responsible for providing numerical ratings for the academic, extracurricular, personal, and essay categories. Ibid. During the years at issue in this litigation, underrepresented minority students were "more likely to ...

  26. Abortion United Evangelicals and Republicans. Now That Alliance Is

    The Southern Baptist Convention, long a bellwether for American evangelicalism, voted to oppose the use of in vitro fertilization.

  27. Real Teenagers, Fake Nudes: The Rise of Deepfakes in American Schools

    Warning: this episode contains strong language, descriptions of explicit content and sexual harassment. A disturbing new problem is sweeping American schools: Students are using artificial ...