14 Examples of Formative Assessment [+FAQs]


Traditional student assessment typically comes in the form of a test, pop quiz, or more thorough final exam. But as many teachers will tell you, these rarely tell the whole story or accurately determine just how well a student has learned a concept or lesson.

That’s why many teachers are utilizing formative assessments. While formative assessment is not a new tool, it is becoming increasingly popular among K-12 educators across all grade levels and subject areas.

Curious? Read on to learn more about types of formative assessment and where you can access additional resources to help you incorporate this new evaluation style into your classroom.

What is Formative Assessment?

Online education glossary EdGlossary defines formative assessment as “a wide variety of methods that teachers use to conduct in-process evaluations of student comprehension, learning needs, and academic progress during a lesson, unit, or course.” They continue, “formative assessments help teachers identify concepts that students are struggling to understand, skills they are having difficulty acquiring, or learning standards they have not yet achieved so that adjustments can be made to lessons, instructional techniques, and academic support.”

The primary reason educators utilize formative assessment, and its primary goal, is to measure a student’s understanding while instruction is happening. Formative assessments allow teachers to collect lots of information about a student’s comprehension while they’re learning, which in turn allows them to make adjustments and improvements in the moment. And, the results speak for themselves — formative assessment has been proven to be highly effective in raising the level of student attainment, increasing equity of student outcomes, and improving students’ ability to learn, according to a study from the Organization for Economic Co-operation and Development (OECD). 

On the flip side of the assessment coin are summative assessments, which are what we typically use to evaluate student learning. Summative assessments are used after a specific instructional period, such as at the end of a unit, course, semester, or even school year. As learning and formative assessment expert Paul Black puts it, “when the cook tastes the soup, that’s formative assessment. When a customer tastes the soup, that’s summative assessment.”


14 Examples of Formative Assessment Tools & Strategies

There are many types of formative assessment tools and strategies available to teachers, and it’s even possible to come up with your own. However, here are some of the most popular and useful formative assessments being used today.

  • Round Robin Charts

Students break out into small groups and are given a blank chart and writing utensils. In these groups, everyone answers an open-ended question about the current lesson. Beyond the question, students can also add any relevant knowledge they have about the topic to their chart. These charts then rotate from group to group, with each group adding their input. Once everyone has written on every chart, the class regroups and discusses the responses. 

  • Strategic Questioning

This formative assessment style is quite flexible and can be used in many different settings. You can ask individuals, groups, or the whole class high-level, open-ended questions that start with “why” or “how.” These questions have a two-fold purpose — to gauge how well students are grasping the lesson at hand and to spark a discussion about the topic. 

  • Three-Way Summaries

These written summaries of a lesson or subject ask students to complete three separate write-ups of varying lengths: short (10-15 words), medium (30-50 words), and long (75-100 words). These different lengths test students’ ability to condense everything they’ve learned into a concise statement, or elaborate with more detail. This will demonstrate to you, the teacher, just how much they have learned, and it will also identify any learning gaps.

  • Think-Pair-Share

Think-pair-share asks students to write down their answers to a question posed by the teacher. When they’re done, they break off into pairs to share and discuss their answers. You can then move around the room, dropping in on discussions and getting an idea of how well students understand the material.


  • 3-2-1 Countdown

This formative assessment tool can be written or oral and asks students to respond to three very simple prompts: Name three things you didn’t know before, name two things that surprised you about this topic, and name one thing you want to start doing with what you’ve learned. The exact questions are flexible and can be tailored to whatever unit or lesson you are teaching.

  • Classroom Polls

This is a great participation tool to use mid-lesson. At any point, pose a poll question to students and ask them to respond by raising their hand. If you have the capability, you can also use online polling platforms and let students submit their answers on their Chromebooks, tablets, or other devices.

  • Exit/Admission Tickets

Exit and admission tickets are quick written exercises that assess a student’s comprehension of a single day’s lesson. As the name suggests, exit tickets are short written summaries of what students learned in class that day, while admission tickets can be performed as short homework assignments that are handed in as students arrive to class.

  • One-Minute Papers

This quick, formative assessment tool is most useful at the end of the day to get a complete picture of the class’s learning that day. Put one minute on the clock and pose a question to students about the primary subject for the day. Typical questions might be:

  • What was the main point?
  • What questions do you still have?
  • What was the most surprising thing you learned?
  • What was the most confusing aspect and why?

  • Creative Extension Projects

These types of assessments are likely already part of your evaluation strategy and include projects like posters and collages, skit performances, dioramas, keynote presentations, and more. Formative assessments like these allow students to use more creative parts of their skillset to demonstrate their understanding and comprehension and can be an opportunity for individual or group work.

  • Dipsticks

Dipsticks — named after the quick and easy tool we use to check our car’s oil levels — refer to a number of fast, formative assessment tools. These are most effective immediately after giving students feedback and allowing them to practice said skills. Many of the assessments on this list fall into the dipstick category, but additional options include writing a letter explaining the concepts covered or drawing a sketch to visually represent the topic.

  • Quiz-Like Games and Polls

A majority of students enjoy games of some kind, and incorporating games that test a student’s recall and subject aptitude is a great way to make formative assessment more fun. These could be Jeopardy-like games that you can tailor around a specific topic, or even an online platform that leverages your own lessons. But no matter what game you choose, these are often a big hit with students.

  • Interview-Based Assessments

Interview-based assessments are a great way to get first-hand insight into student comprehension of a subject. You can break out into one-on-one sessions with students, or allow them to conduct interviews in small groups. These should be quick, casual conversations that go over the biggest takeaways from your lesson. If you want to provide structure to student conversations, let them try the TAG feedback method — tell your peer something they did well, ask a thoughtful question, and give a positive suggestion.

  • Self Assessment

Give students the rubric you use and allow them to perform a self-assessment of their knowledge or understanding of a topic. Not only will this allow them to reflect on their own work, but it will also very clearly demonstrate the gaps they need filled in. Self-assessments should also allow students to highlight where they feel their strengths are so the feedback isn’t entirely negative.

  • Participation Cards

Participation cards are a great tool you can use on the fly in the middle of a lesson to get a quick read on the entire class’s level of understanding. Give each student three participation cards — “I agree,” “I disagree,” and “I don’t know how to respond” — and pose questions that they can then respond to with those cards. This will give you a quick gauge of what concepts need more coverage.


List of Formative Assessment Resources

There are many, many online formative assessment resources available to teachers. Here are just a few of the most widely-used and highly recommended formative assessment sites available.

  • Arizona State Dept of Education

FAQs About Formative Assessment

The following frequently asked questions were sourced from the Association for Supervision and Curriculum Development (ASCD), a leading education professional organization of more than 100,000 superintendents, principals, teachers, and advocates.  

Is formative assessment something new?

No and yes. The concept of measuring a student’s comprehension during lessons has existed for centuries. However, the concept of formative assessment as we understand it didn’t appear until approximately 40 years ago, and has progressively expanded into what it is today.

What makes something a formative assessment?

ASCD characterizes formative assessment as “a way for teachers and students to gather evidence of learning, engage students in assessment, and use data to improve teaching and learning.” Their definition continues, “when you use an assessment instrument — a test, a quiz, an essay, or any other kind of classroom activity — analytically and diagnostically to measure the process of learning and then, in turn, to inform yourself or your students of progress and guide further learning, you are engaging in formative assessment. If you were to use the same instrument for the sole purpose of gathering data to report to a district or state or to determine a final grade, you would be engaging in summative assessment.”

Does formative assessment work in all content areas?

Absolutely, and it works across all grade levels. Nearly any content area — language arts, math, science, humanities, and even the arts or physical education — can utilize formative assessment in a positive way.

How can formative assessment support the curriculum?

Formative assessment supports curricula by providing real-time feedback on students’ knowledge levels and comprehension of the subject at hand. When teachers regularly utilize formative assessment tools, they can find gaps in student learning and customize lessons to fill those gaps. After the term is over, teachers can use this feedback to reshape their curricula.

How can formative assessment be used to establish instructional priorities?

Because formative assessment supports curriculum development and updates, it thereby influences instructional priorities. Through student feedback and formative assessment, teachers are able to gather data about which instructional methods are most (and least) successful. This “data-driven” instruction should yield more positive learning outcomes for students.

Can formative assessment close achievement gaps?

Formative assessment is ideal because it identifies gaps in student knowledge while they’re learning. This allows teachers to make adjustments to close these gaps and help students more successfully master a new skill or topic.

How can I help my students understand formative assessment?

Formative assessment should be framed as a supportive learning tool; it’s a very different tactic than summative assessment strategies. To help students understand this new evaluation style, make sure you utilize it from the first day in the classroom. Introduce a small number of strategies and use them repeatedly so students become familiar with them. Eventually, these formative assessments will become second nature to teachers and students.

Before you tackle formative assessment, or any new teaching strategy for that matter, consider taking a continuing education course. At the University of San Diego School of Professional and Continuing Education, we offer over 500 courses for educators that can be completed entirely online, and many at your own pace. So no matter what your interests are, you can surely find a course — or even a certificate — that suits your needs.


Comparing Assessment Types in Nursing Education

  • Last Updated: March 4, 2024


There are several different types of assessments that nursing instructors may use to measure their students’ progress. Among the most common are summative, formative, and benchmark assessments. We’ll explore the strengths and weaknesses of each, as well as their unique benefits for both instructors and students. We’ll also discuss how partnering with UWorld Nursing can elevate the effectiveness of your assessments and better prepare your students for NCLEX success.

Formative Assessment and Summative Assessment

Formative Assessments in Nursing

Formative assessments are used to identify a student’s strengths and weaknesses, allowing for immediate remediation. Unlike other assessment types, these are generally low-stakes and used to aid in the learning process rather than determine a grade.

Pros and Cons of Formative Assessments

Formative assessments are a great diagnostic tool rather than an evaluative one. They can be a tremendous help to both instructors and students in terms of identifying and addressing knowledge gaps. In nursing education, high-quality formative assessments are designed to pinpoint specific areas where individuals, groups, or the entire class need improvement. This targeted feedback helps students strengthen their clinical judgment and understanding, ultimately leading to better patient care.

UWorld Content for Formative Assessments

UWorld University partners can create unlimited formative assessments using classic and NGN items from our QBank. To create a formative assessment, simply select relevant questions based on subject, system, or client needs categories. Just like in our student QBanks, these questions come with detailed explanations to help with remediation. Instructors can also review student performance at an individual and class-wide level.


Familiarizing Students with Adaptive Testing

Students can take unlimited computerized adaptive test (CAT) practice tests in the self-study area of the Learning Platform. While not true formative assessments, they serve as an excellent preparation tool because they mirror the adaptive conditions of the real NCLEX. Upon completing a UWorld CAT, students receive their overall score, their level of preparedness, and the difficulty factor of the questions they answered. Instructors can also view the results to help with remediation.

Summative Assessments in Nursing

Summative assessments evaluate a student’s knowledge of a subject at the end of an instructional period based on a uniform standard (e.g., a midterm or final exam).

Pros and Cons of Summative Assessments

Summative assessments are useful for determining a student’s overall understanding of a topic. In nursing, their high-stakes nature can be used as an opportunity to teach productive study strategies and habits in anticipation of the NCLEX. However, summative assessments typically don’t give students an opportunity to learn from their mistakes. They can be used to assess the effectiveness of a course or program but are not intended to be diagnostic for students.

UWorld Content for Summative Assessments

Instructors can assign up to six summative assessments with NCLEX-style questions through our Learning Platform for Nursing. Each assessment contains 100 unique NGN and classic questions that cannot be found in our student or faculty QBanks. Upon completion, students can review their answers and read detailed rationales for each question, while instructors can view thorough performance reports.

Maximizing Your UWorld Nursing Assessments

When used correctly, UWorld Nursing assessments are a remarkably powerful remediation and NCLEX preparation tool. Your students will get firsthand experience with NGN-style questions and time constraints, as well as exposure to the most relevant NCLEX topics. Instructors can then identify which students are on track or falling behind. Here’s how:

  • Assign at least three assessments each semester (or one about every two months)
  • Have your students review the answer explanations upon completing each assessment
  • View in-depth performance reports to track your students’ progress and identify at-risk students
  • Based on these results, you can create unique assignments for individuals or groups to turn their weaknesses into strengths

The analytics instructors receive are twofold. First is a breakdown of student performance across subjects, systems, and topic areas. Second is an accurate prediction of each student’s chance of passing the NCLEX (low, borderline, high, or very high) based on statistically validated scoring.

What about Benchmark Testing in Nursing?

Benchmark assessments measure a student’s progress toward an educational goal over time. Administered at predefined intervals, such as at the beginning and end of a nursing program, they are generally used to compare students’ understanding against a uniform standard (determined by accrediting bodies).

Pros and Cons of Benchmark Assessments

The greatest benefit of using benchmark assessments is the ability for educators and administrators to compare their students’ performance against national standards. A weakness is that benchmark testing requires additional safeguards to ensure results are fair and accurate (e.g., providing a secure or proctored testing environment).

Legislative Changes to Benchmark Testing in Nursing Education

Benchmark testing is a hot topic in nursing education, with compelling arguments for and against the practice. It’s important to note that your program may be impacted by legislation limiting the use of benchmark tests created by private entities. Texas is a recent example:

In Fall 2023, the Texas Legislature passed a bill with stricter regulations on standardized examinations used by nursing schools and educational programs. While there are a number of ways standardized benchmark tests can still be implemented, the goal is to prohibit their use as a graduation requirement and minimize their impact on students’ grades.

Is There a Best Assessment Type for Nursing Students?

Summative, formative, and benchmark assessments all have their place in nursing education. Because summative and benchmark assessments are evaluative in nature, they can help determine if students are on track with educational objectives; however, formative assessments are better at identifying at-risk students earlier and increasing student engagement in the learning process.

Regardless of your course structure, the UWorld Learning Platform for Nursing can be used to elevate student performance through NCLEX-style questions, detailed reports, and built-in remediation methods. Our resources are flexible and align with the new AACN Essentials, enabling easy integration with any nursing curriculum.


Formative Assessment Strategies for Healthcare Educators


Formative assessments are those lower-stakes assessments that are delivered during instruction in some way, or ‘along the way’ so to speak. As an educator, it was always a challenge to identify if or what my students were understanding, what skills they had acquired, and if or how I should adjust my teaching strategy to help improve their learning. I’m guessing I am not alone in this. In medical education, the pace is so fast that many instructors feel they do not have the time to spare on giving assessments ‘along the way’, preferring instead to focus on teaching everything students need for the higher-stakes exams. With medical education being incredibly intense and fast-paced, this is completely understandable. However, there must be a reason so much research supports the effectiveness of administering formative assessments along the way.

One reason formative assessments prove so useful is that they provide meaningful feedback: feedback that can be used by both the instructor and the students.

Results from formative assessments should relate directly to the learning objectives established by the instructor, and because of this, the results provide trusted feedback for both the instructor and the student. This is incredibly important. For instructors, it allows them to make immediate adjustments to their teaching strategy; for students, it helps them develop a more reliable self-awareness of their own learning. These two things alone are very useful, but when combined, they can result in improved student outcomes.

Here are 5 teaching strategies for delivering formative assessments that provide useful feedback opportunities.  

1. Pre-Assessment:

Provides an assessment of students’ prior knowledge, helps identify misconceptions, and allows instructors to adjust their approach or target certain areas

  • When instructors have feedback from student assessments prior to class, it is easier to tailor the lesson to student needs.
  • Posing questions prior to class can help students focus on what the instructor thinks is important.
  • By assessing students before class, it helps ensure students are more prepared for what learning will take place in class.
  • Pre-assessments can provide more ‘in-class’ time flexibility: knowing ahead of time which knowledge gaps students may have allows the instructor to use class time more effectively, with fewer ‘surprises’.


2. Frequent class assessments:

Provides students with feedback for learning during class and focuses their attention on important topics, which helps increase learning gains


  • Adding more formative assessments during class increases student retention.
  • Frequent formative assessments help students stay focused by giving them natural ‘breaks’ from either a lecture or the activity.
  • Multiple formative assessments can provide students with a “road-map” to what the instructor feels is important (i.e. what will appear on summative assessments).
  • By using frequent assessments, the instructor can naturally help students with topic or content transitions during a lecture or activity.
  • The data/feedback from the assessments can help instructors better understand which instructional methods are most effective: in other words, what works and what doesn’t.

3. Guided Study assessments (group or tutorial):

Provides students with opportunities to acquire the information needed to complete the assessment, for example through research or group work, and increases students’ self-awareness of their own knowledge gaps


  • Assessments where students are expected to engage in research allow them to develop and use higher-level thinking skills.
  • Guided assessments engage students in active learning, either independently or through collaboration with a group.
  • Small group assessments encourage students to articulate their thinking and reasoning, and help them develop self-awareness about what they do and do not yet understand.
  • Tutorial assessments can provide the instructor with real-time feedback on student misconceptions and overall understanding, allowing them to make important decisions about how to teach particular topics.

4. Take-Home assessments:

Allows students to preview the instructor’s assessment style, is low-stakes and self-paced so students can engage with the material, and provides the instructor with formative feedback

  • Assessments that students can engage in outside of class give them a ‘preview’ of the information that they will likely need to retrieve again on a summative exam.
  • When students take an assessment at home, the instructor can receive feedback with enough time to adjust classroom instruction to address knowledge gaps or misconceptions.
  • Take-home assessments can help students develop self-awareness of their own misunderstandings or knowledge gaps.


5. “Bedside” observation:

Informs students in clinical settings of their level of competence and learning, and may improve motivation and participation in clinical activities.

  • Real-time formative assessments can provide students with critical feedback related to the skills that are necessary for practicing medicine.
  • On-the-fly assessments can help clinical instructors learn more about student understanding, as well as any changes they may need to make in their instruction.
  • Formative assessments in a clinical setting can equip clinical instructors with a valuable tool to help them make informed decisions around their teaching and student learning.
  • Bedside assessments provide a standardized way of formatively assessing students in a very unpredictable learning environment.

The challenge for many instructors is often in the “how” of delivering formative assessments. Thankfully, improving teaching and learning through the use of formative assessments (and feedback) can be greatly enhanced with educational technology. DaVinci Education’s Leo platform provides multiple ways in which you can deliver formative assessments. With Leo’s exam feature you can:

  • Assign pre-class, in-class or take-home quizzes
  • Deliver IRATs used during TBL exercises to assess student individual readiness
  • Deliver GRATs used during TBL exercises by using Leo’s digital scratch-off tool to encourage collaboration and assess group readiness
  • Monitor student performance in real-time using Leo’s Monitor Exam feature
  • Customize student feedback options during or following an assessment



  • Research article
  • Open access
  • Published: 08 June 2021

Comparing formative and summative simulation-based assessment in undergraduate nursing students: nursing competency acquisition and clinical simulation satisfaction

Oscar Arrogante, Gracia María González-Romero, Eva María López-Torre, Laura Carrión-García & Alberto Polo

BMC Nursing volume 20, Article number: 92 (2021)


Formative and summative evaluation are widely employed in simulated-based assessment. The aims of our study were to evaluate the acquisition of nursing competencies through clinical simulation in undergraduate nursing students and to compare their satisfaction with this methodology using these two evaluation strategies.

Two hundred eighteen undergraduate nursing students participated in a cross-sectional study using a mixed-methods design. MAES© (self-learning methodology in simulated environments) sessions were developed to assess students by formative evaluation. Objective Structured Clinical Examination sessions were conducted to assess students by summative evaluation. Simulated scenarios recreated clinical cases of critical patients. Students’ performance in all simulated scenarios was assessed using checklists. A validated questionnaire was used to evaluate satisfaction with clinical simulation. Quantitative data were analysed using the IBM SPSS Statistics version 24.0 software, whereas qualitative data were analysed using the ATLAS-ti version 8.0 software.

Most nursing students showed adequate clinical competence. Satisfaction with clinical simulation was higher when students were assessed using formative evaluation. The main students’ complaints with summative evaluation were related to reduced time for performing simulated scenarios and increased anxiety during their clinical performance.

The best solution to reduce students’ complaints with summative evaluation is to orient them to the simulated environment. It should be recommended to combine both evaluation strategies in simulated-based assessment, providing students feedback in summative evaluation, as well as evaluating their achievement of learning outcomes in formative evaluation.


Clinical simulation methodology has increased exponentially over the last few years and has gained acceptance in nursing education. Simulation-based education (SBE) is considered an effective educational methodology for nursing students to achieve the competencies needed for their professional future [ 1 – 5 ]. In addition, simulation-based educational programs have demonstrated to be more useful than traditional teaching methodologies [ 4 , 6 ]. As a result, most nursing faculties are integrating this methodology into their study plans [ 7 ]. SBE has the potential to shorten the learning curve for students, increase the fusion between theoretical knowledge and clinical practice, establish deficient areas in students, develop communication and technical skills acquisition, improve patient safety, standardise the curriculum and teaching contents, and offer observations of real-time clinical decision making [ 5 , 6 , 8 , 9 ].

SBE offers an excellent opportunity to perform not only observed competency-based teaching, but also the assessment of these competencies. Simulated-based assessment (SBA) is aimed at evaluating various professional skills, including knowledge, technical and clinical skills, communication, and decision-making; as well as higher-order competencies such as patient safety and teamwork skills [ 1 – 4 , 10 ]. Compared with other traditional assessment methods (i.e. written or oral test), SBA offers the opportunity to evaluate the actual performance in an environment similar to the ‘real’ clinical practice, assess multidimensional professional competencies, and present standard clinical scenarios to all students [ 1 – 4 , 10 ].

The main SBA strategies are formative and summative evaluation. Formative evaluation is conducted to establish students’ progression during the course [ 11 ]. This evaluation strategy is helpful to educators in improving students’ deficient areas and testing their knowledge [ 12 ]. Employing this evaluation strategy, educators give students feedback about their performance. Subsequently, students self-reflect to evaluate their learning and determine their deficient areas. In this sense, formative evaluation includes an ideal phase to achieve the purposes of this strategy: the debriefing [ 13 ]. The International Nursing Association for Clinical Simulation and Learning (INACSL) defines debriefing as a reflective process immediately following the simulation-based experience where ‘participants explore their emotions and question, reflect, and provide feedback to one another’. Its aim is ‘to move toward assimilation and accommodation to transfer learning to future situations’ [ 14 ]. Therefore, debriefing is a basic component for learning to be effective after the simulation [ 15 , 16 ]. Furthermore, MAES© (according to its Spanish initials of self-learning methodology in simulated environments) is a clinical simulation methodology created to perform formative evaluations [ 17 ]. MAES© specifically allows evaluating the nursing competencies acquired by several nursing students at the same time. MAES© is structured through the union of other active learning methodologies such as self-directed learning, problem-based learning, peer education and simulation-based learning. Specifically, students acquire and develop competencies through self-directed learning, as they voluntarily choose competencies to learn. Furthermore, this methodology encourages students to be the protagonists of their learning process, since they can choose the case they want to study, design the clinical simulation scenario and, finally, actively participate during the debriefing phase [ 17 ]. This methodology meets all the requirements defined by the INACSL Standards of Best Practice [ 18 ]. Compared to traditional simulation-based learning (where simulated clinical scenarios are designed by the teaching team and led by facilitators), the MAES© methodology (where simulated clinical scenarios are designed and led by students) provides nursing students with a better learning process and clinical performance [ 19 ]. Currently, the MAES© methodology is used in clinical simulation sessions with nursing students in some universities, not only in Spain but also in Norway, Portugal and Brazil [ 20 ].

In contrast, summative evaluation is used to establish the learning outcomes achieved by students at the end of the course [ 11 ]. This evaluation strategy is helpful to educators in evaluating students’ learning, the competencies acquired by them and their academic achievement [ 12 ]. This assessment is essential in the education process to determine readiness and competence for certification and accreditation [ 10 , 21 ]. Accordingly, Objective Structured Clinical Examination (OSCE) is commonly conducted in SBA as a summative evaluation to evaluate students’ clinical competence [ 22 ]. Consequently, OSCE has been used by educational institutions as a valid and reliable method of assessment. OSCE most commonly consists of a ‘round-robin’ of multiple short testing stations, in each of which students must demonstrate defined clinical competencies, while educators evaluate their performance according to predetermined criteria using a standardized marking scheme, such as checklists. Students must rotate through these stations where educators assess students’ performance in clinical examination, technical skills, clinical judgment and decision-making skill during the nursing process [ 22 , 23 ]. This strategy of summative evaluation incorporates actors performing as simulated patients. Therefore, OSCE allows assessing students’ clinical competence in a real-life simulated clinical environment. After simulated scenarios, this evaluation strategy provides educators with an opportunity to give students constructive feedback according to their achieved results in the checklist [ 10 , 21 – 23 ].

Although both evaluation strategies are widely employed in SBA, there is scarce evidence about possible differences in satisfaction with clinical simulation when nursing students are assessed using formative and summative evaluation. Considering the high satisfaction with formative evaluation perceived by our students during the implementation of the MAES© methodology, we wondered whether this satisfaction would be similar using the same simulated clinical scenarios with a summative evaluation. Additionally, we wondered about the reasons why this satisfaction might differ between the two strategies of SBA. Therefore, the aims of our study were to evaluate the acquisition of nursing competencies through clinical simulation methodology in undergraduate nursing students and to compare their satisfaction with this methodology using two strategies of SBA: formative and summative evaluation. In this sense, our research hypothesis is that both strategies of SBA are effective for acquiring nursing competencies, but that student satisfaction with formative evaluation is higher than with summative evaluation.

Study design and setting

A descriptive cross-sectional study using a mixed-method and analysing both quantitative and qualitative data. The study was conducted from September 2018 to May 2019 in a University Centre of Health Sciences in Madrid (Spain). This centre offers Physiotherapy and Nursing Degrees.

Participants

The study included 3rd-year undergraduate students (106 students participated in MAES© sessions within the subject ‘Nursing care for critical patients’) and 4th-year undergraduate students (112 students participated in OSCE sessions within the subject ‘Supervised clinical placements – Advanced level’) in the Nursing Degree. It should be noted that 4th-year undergraduate students had completed all their clinical placements and had to pass the OSCE sessions to achieve their certification.

Clinical simulation sessions

To assess the clinical performance of 3rd-year undergraduate students using formative evaluation, MAES© sessions were conducted. This methodology consists of 6 elements in a minimum of two sessions [ 17 ]: Team selection and creation of group identity (students are grouped into teams and they create their own identity), voluntary choice of subject of study (each team will freely choose a topic that will serve as inspiration for the design of a simulation scenario), establishment of baseline and programming skills to be acquired through brainstorming (the students, by teams, decide what they know about the subject and then what they want to learn from it, as well as the clinical and non-technical skills they would like to acquire with the case they have chosen), design of a clinical simulation scenario in which the students practice the skills to be acquired (each team commits to designing a scenario in the simulation room), execution of the simulated clinical experience (another team, different from the one that has designed the case, will enter the high-fidelity simulation room and will have a simulation experience), and finally debriefing and presentation of the acquired skills (in addition to analysing the performance of the participants in the scenario, the students explain what they learned during the design of the case and look for evidence of the learning objectives).

Alternatively, OSCE sessions were developed to assess the clinical performance of 4th-year undergraduate students using summative evaluation. Both MAES© and OSCE sessions recreated critically ill patients with diagnoses of exacerbation of Chronic Obstructive Pulmonary Disease (COPD), acute coronary syndrome, haemorrhage in a postsurgical patient, and severe traumatic brain injury.

It should be noted that the implementation of all MAES© and OSCEs sessions followed the Standards of Best Practice recommended by the INACSL [ 14 , 24 – 26 ]. In this way, all the stages included in a high-fidelity session were accomplished: pre-briefing, briefing, simulated scenario, and debriefing. Specifically, a session with all nursing students was carried out 1 week before the performance of OSCE stations to establish a safe psychological learning environment and familiarize students with this summative evaluation. In this pre-briefing phase, we implemented several activities based on practices recommended by the INACSL Standards Committee [ 24 , 25 ] and Rudolph, Raemer, and Simon [ 27 ] for establishing a psychologically safe context. Although traditional OSCEs do not usually include the debriefing phase, we decided to include this phase in all OSCEs carried out in our university centre, since we consider this phase is quite relevant to nursing students’ learning process and their imminent professional career.

The critically ill patient’s role was performed by an advanced simulator mannequin (NursingAnne® by Laerdal Medical AS) in all simulated scenarios. A confederate (a health professional who acts in a simulated scenario) performed the role of a registered nurse or a physician who could help students as required. Occasionally, this confederate could perform the role of a relative of the critically ill patient. Nursing students formed work teams of 2–3 students in all MAES© and OSCE sessions. Specifically, each work team formed in MAES© sessions received a brief description of the simulated scenario 2 months in advance, and students had to propose 3 NIC (Nursing Interventions Classification) interventions [ 28 ], and 5 related nursing activities with each of them, to resolve the critical situation. In contrast, the critical situation was presented to each work team formed in OSCE sessions 2 min before entering the simulated scenario. During all simulated experiences, professors monitored and controlled the simulation with a sophisticated computer program in a dedicated control room. All simulated scenarios lasted 10 min.

After each clinical simulated scenario concluded, a debriefing was carried out to give students feedback about their performance. Debriefings in MAES© sessions were conducted according to the Gather, Analyse, and Summarise (GAS) method, a structured debriefing model developed by Phrampus and O’Donnell [ 29 ]. According to this method, the debriefing questions used were: What went well during your performance?; What did not go so well during your performance?; How can you do better next time? Additionally, MAES© includes an expository phase in debriefings, where the students who performed the simulated scenario establish the contributions of scientific evidence about its resolution [ 17 ]. Each debriefing lasted 20 min in MAES© sessions. In contrast, debriefings in OSCE sessions lasted 10 min and were carried out according to the Plus-Delta debriefing tool [ 30 ], a technique recommended when time is limited. Consequently, the debriefing questions were reduced to two: What went well during your performance?; What did not go so well during your performance? Within these debriefings, professors communicated to students the total score obtained in the appropriate checklist. After all debriefings, students completed the questionnaires to evaluate their satisfaction with clinical simulation. In OSCE sessions, students had to report their satisfaction only with the scenario performed, which formed part of a series of clinical stations.

In summary, Table  1 shows the required elements for formative and summative evaluation according to the Standards of Best Practice for participant evaluation recommended by the INACSL [ 18 ]. It should be noted that our MAES© and OSCE sessions accomplished these required elements.

Instruments

Clinical performance.

Professors assessed students’ clinical performance using checklists (‘Yes’/‘No’). In MAES© sessions, checklists were based on the 5 most important nursing activities included in the NIC [ 28 ] selected by nursing students. Table 2 shows the checklist of the most important NIC interventions and their related nursing activities selected by nursing students in the Exacerbation of Chronic Obstructive Pulmonary Disease (COPD) simulated scenario. In contrast, checklists for evaluating OSCE sessions were based on nursing activities selected by consensus among professors, registered nurses, and clinical placement mentors. Nursing activities were divided into 5 categories: nursing assessment, clinical judgment/decision-making, clinical management/nursing care, communication/interpersonal relationships, and teamwork. Table 3 shows the checklist of nursing activities that nursing students had to perform in the COPD simulated scenario. During the execution of all simulated scenarios, professors checked whether or not the participants performed the selected nursing activities.

Clinical simulation satisfaction

To determine satisfaction with clinical simulation perceived by nursing students, the Satisfaction Scale Questionnaire with High-Fidelity Clinical Simulation [ 31 ] was used after each clinical simulation session. This questionnaire consists of 33 items with a 5-point Likert scale ranging from ‘strongly disagree’ to ‘totally agree’. These items are divided into 8 scales: simulation utility, characteristics of cases and applications, communication, self-reflection on performance, increased self-confidence, relation between theory and practice, facilities and equipment, and negative aspects of simulation. Cronbach’s α values for each scale ranged from .914 to .918, and the total scale presents satisfactory internal consistency (Cronbach’s α value = .920). This questionnaire includes a final question about any opinion or suggestion that participating students wish to express after the simulation experience.
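(Illustrative note: the internal consistency statistic reported above can be reproduced for any respondents-by-items matrix of Likert scores. The short Python sketch below applies the standard Cronbach’s α formula; the example responses are purely hypothetical and are not the study data.)

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses (6 respondents, 4 items)
responses = np.array([
    [4, 5, 4, 5],
    [3, 4, 4, 4],
    [5, 5, 5, 4],
    [2, 3, 3, 2],
    [4, 4, 5, 5],
    [3, 3, 4, 3],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")
```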

Data analysis

Quantitative data were analysed using the IBM SPSS Statistics version 24.0 software for Windows (IBM Corp., Armonk, NY, USA). Descriptive statistics were calculated to interpret the results obtained for demographic data, clinical performance, and satisfaction with clinical simulation. Differences between the two groups in the dependent variables were analysed using independent t-tests, and Cohen’s d was calculated to analyse the effect size for each t-test. Statistical tests were two-sided, and statistical significance was set at 0.05. Subsequently, all students’ opinions and comments were analysed using the ATLAS-ti version 8.0 software (Scientific Software Development GmbH, Berlin, Germany). All the information contained in these qualitative data was stored, managed, classified and organized through this software. All the reiterated words, sentences or ideas were grouped into themes using a thematic analysis [ 32 ]. It should be noted that the students’ opinions and comments were preceded by the letter ‘S’ (student) and numerically labelled.
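(Illustrative note: the between-group comparison described above can be sketched outside SPSS as well. The following Python snippet runs a two-sided independent t-test and computes Cohen’s d with a pooled standard deviation for two hypothetical sets of satisfaction scores, not the study data.)

```python
import numpy as np
from scipy import stats

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d using the pooled standard deviation of two independent samples."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Hypothetical satisfaction scores (1-5 Likert means) for the two evaluation strategies
formative = np.array([4.6, 4.8, 4.5, 4.7, 4.9, 4.4, 4.6, 4.8])
summative = np.array([4.1, 4.3, 4.0, 4.2, 4.4, 3.9, 4.1, 4.2])

t, p = stats.ttest_ind(formative, summative)   # two-sided independent t-test
print(f"t = {t:.2f}, p = {p:.4f}, Cohen's d = {cohens_d(formative, summative):.2f}")
```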

A total of 218 nursing students participated in the study (106 students were trained through MAES© sessions, whereas 112 students were assessed through OSCE sessions). The age of students ranged from 20 to 43 years (mean = 23.28; SD = 4.376). Most students were women (n = 184; 84.4%).

In the formative evaluation, professors verified that 93.2% of students adequately selected both the NIC interventions and their related nursing activities for the resolution of the clinical simulated scenario. Subsequently, these professors verified that 85.6% of the students who participated in each simulated scenario performed the nursing activities they had previously selected. In the summative evaluation, students obtained total scores ranging from 65 to 95 points (mean = 7.43; SD = .408).

Descriptive data for each scale of the satisfaction with clinical simulation questionnaire, t-tests, and effect sizes (d) of differences between the two evaluation strategies are shown in Table 4. Statistically significant differences were found between the two evaluation strategies for all scales of the satisfaction with clinical simulation questionnaire. Students’ satisfaction with clinical simulation was higher for all scales of the questionnaire when they were assessed using formative evaluation, including the ‘negative aspects of simulation’ scale, where the students perceived fewer negative aspects. The effect size of these differences was large (including the total score of the questionnaire) (Cohen’s d values > .8), except for the ‘facilities and equipment’ scale, whose effect size was medium (Cohen’s d value > .5) [ 33 ].

Table 5 specifically shows descriptive data, t-tests, and effect sizes (d) of differences between both evaluation strategies for each item of the clinical simulation satisfaction questionnaire. Statistically significant differences were found between the two evaluation strategies for all items of the questionnaire, except for the items ‘I have improved communication with the family’, ‘I have improved communication with the patient’, and ‘I lost calm during any of the cases’. Students’ satisfaction with clinical simulation was higher in formative evaluation sessions for most items, except for the item ‘simulation has made me more aware/worried about clinical practice’, where students reported being more aware and worried in summative evaluation sessions. Most effect sizes of these differences were small or medium (Cohen’s d values ranged from .238 to .709) [ 33 ]. The largest effect sizes were obtained for the items ‘timing for each simulation case has been adequate’ (d = 1.107), ‘overall satisfaction of sessions’ (d = .953), and ‘simulation has made me more aware/worried about clinical practice’ (d = -.947). In contrast, the smallest effect sizes were obtained for the items ‘simulation allows us to plan the patient care effectively’ (d = .238) and ‘the degree of cases difficulty was appropriate to my knowledge’ (d = .257).

In addition, participating students provided 74 opinions or suggestions expressed through short comments. Most students’ comments were related to 3 main themes after the thematic analysis: utility of clinical simulation methodology (S45: ‘it has been a useful activity and it helped us to recognize our mistakes and fixing knowledge’, S94: ‘to link theory to practice is essential’), to spend more time on this methodology (S113: ‘I would ask for more practices of this type‘, S178: ‘I feel very happy, but it should be done more frequently’), and its integration into other subjects (S21: ‘I consider this activity should be implemented in more subjects’, S64: ‘I wish there were more simulations in more subjects’). Finally, students’ comments about summative evaluation sessions included 2 other main themes related to: limited time of the simulation experience (S134: ‘time is short’, S197: ‘there is no time to perform activities and assess properly’) and students’ anxiety (S123: ‘I was very nervous because people were evaluating me around’, S187: ‘I was more nervous than in a real situation’).

The most significant results obtained in our study are the nursing competency acquisition through clinical simulation by nursing students and the different level of their satisfaction with this methodology depending on the evaluation strategy employed.

Firstly, professors in this study verified that most students acquired the nursing competencies needed to resolve each clinical situation, performing the majority of the nursing activities required for the resolution of each MAES© session and OSCE station. This result confirms the findings of other studies that have demonstrated nursing competency acquisition by nursing students through clinical simulation [ 34 , 35 ], and specifically nursing competencies related to critical patient management [ 9 , 36 ].

Secondly, students’ satisfaction assessed using both evaluation strategies could be considered high in most items of the questionnaire, regarding their mean scores (quite close to the maximum score in the response scale of the satisfaction questionnaire). The high level of satisfaction expressed by nursing students with clinical simulation obtained in this study is also congruent with empirical evidence, which confirms that this methodology is a useful tool for their learning process [ 6 , 31 , 37 – 40 ].

However, satisfaction with clinical simulation was higher when students were assessed using formative evaluation. The main students’ complaints with summative evaluation were related to the reduced time for performing simulated scenarios and increased anxiety during their clinical performance. Reduced time is a frequent complaint of students in OSCEs [ 23 , 41 ] and clinical simulation methodology [ 5 , 6 , 10 ]. Professors, registered nurses, and clinical placement mentors tested all simulated scenarios and their checklists in this study, and they verified that the time was sufficient for their resolution. Another criticism of summative evaluation is increased anxiety. However, several studies have demonstrated that students’ anxiety increases during clinical simulation [ 42 , 43 ], and it is considered the greatest disadvantage of clinical simulation [ 1 – 10 ]. In this sense, anxiety may negatively influence students’ learning process [ 42 , 43 ]. Although the current simulation methodology can mimic the real medical environment to a great degree, it might still be questionable whether students’ performance in the testing environment really represents their true ability. Test anxiety might increase in an unfamiliar testing environment; difficulty handling unfamiliar technology (e.g., a monitor, defibrillator, or other devices that may be different from the ones used in the examinee’s specific clinical environment) or even the need to ‘act as if’ in an artificial scenario (e.g., talking to a simulator, or examining a ‘patient’ knowing he/she is an actor or a mannequin) might all compromise examinees’ performance. The best solution to reduce these complaints is the orientation of students to the simulated environment [ 10 , 21 – 23 ].

Nevertheless, it should be noted that the diversity in the satisfaction scores obtained in our study could be explained not by the choice of the assessment strategy, but precisely by the different purposes of formative and summative assessment. In this sense, there is a component of anxiety that is intrinsic to summative assessment, which must certify the acquisition of competencies [ 10 – 12 , 21 ]. In contrast, this aspect is not present in formative assessment, which is intended to help the student understand the distance to reach the expected level of competence, without penalty effects [ 10 – 12 ].

Both SBA strategies allow educators to evaluate students’ knowledge and its application in a clinical setting. However, formative evaluation is identified as ‘assessment for learning’ and summative evaluation as ‘assessment of learning’ [ 44 ]. Using formative evaluation, educators’ responsibility is to ensure not only what students are learning in the classroom, but also the outcomes of their learning process [ 45 ]. In this sense, formative assessment by itself is not enough to determine educational outcomes [ 46 ]. Consequently, a checklist for evaluating students’ clinical performance was included in MAES© sessions. Alternatively, educators cannot make any corrections in students’ performance using summative evaluation [ 45 ]. Gavriel [ 44 ] suggests providing students feedback in this SBA strategy. Therefore, a debriefing phase was included after each OSCE session in our study. The significance of debriefing recognised by nursing students in our study is also congruent with most of the evidence found [ 13 , 15 , 16 , 47 ]. Nursing students appreciate feedback about their performance during the simulation experience and, consequently, debriefing is considered by them the most rewarding phase in clinical simulation [ 5 , 6 , 48 ]. In addition, nursing students in our study expressed that they could learn from their mistakes in debriefing. Learning from error is one of the greatest advantages of clinical simulation shown in several studies [ 5 , 6 , 49 ], and mistakes should be considered learning opportunities rather than sources of embarrassment or punitive consequences [ 50 ].

Furthermore, the nursing students who participated in our study considered the practical utility of clinical simulation another advantage of this teaching methodology, a result congruent with previous studies [ 5 , 6 ]. Specifically, our students indicated that this methodology is useful for bridging the gap between theory and practice [ 51 , 52 ]. In this sense, clinical simulation has been shown to reduce this gap, shortening the distance between the classroom and clinical practice [ 5 , 6 , 51 , 52 ]. Because this teaching methodology relates theory to practice, it helps nursing students prepare for their clinical placements and future careers. According to Benner’s model of skill acquisition in nursing [ 53 ], nursing students become competent nurses through this learning process, acquiring a degree of safety and clinical experience before starting their professional careers [ 54 ]. Although our research indicates that clinical simulation is a useful methodology for acquiring and learning competencies mainly related to the adequate management and nursing care of critically ill patients, this acquisition and learning process could be extended to most nursing care settings and their required nursing competencies.

Limitations and future research

Although the checklists employed in OSCE have been criticized for their subjective construction [ 10 , 21 – 23 ], in this study they were constructed through the expert consensus of nursing professors, registered nurses and clinical placement mentors. In addition, the self-reported questionnaire used to evaluate clinical simulation satisfaction has demonstrated strong validity. All simulated scenarios were similar in MAES© and OSCE sessions (same clinical situations, patients, actors and number of participating students), although the debriefing method employed after them differed. This difference was due to the reduced time available in OSCE sessions. Furthermore, it should be pointed out that the two groups of students involved in our study were from different course years and were exposed to different SBA strategies. Future studies should therefore compare nursing students’ satisfaction with both SBA strategies in the same group of students and using the same debriefing method. Finally, future research should combine formative and summative evaluation for assessing the clinical performance of undergraduate nursing students in simulated scenarios.

Students need feedback about their clinical performance when they are assessed using summative evaluation, and their achievement of learning outcomes needs to be verified when they are assessed using formative evaluation. Consequently, combining both evaluation strategies in SBA is recommended. Although students expressed high satisfaction with the clinical simulation methodology, they perceived reduced time and increased anxiety when assessed by summative evaluation. The best solution to these complaints is to orient students to the simulated environment.

Availability of data and materials

The datasets analysed during the current study are available from the corresponding author on reasonable request.

Martins J, Baptista R, Coutinho V, Fernandes M, Fernandes A. Simulation in nursing and midwifery education. Copenhagen: World Health Organization Regional Office for Europe; 2018.


Cant RP, Cooper SJ. Simulation-based learning in nurse education: systematic review. J Adv Nurs. 2010;66:3–15.


Chernikova O, Heitzmann N, Stadler M, Holzberger D, Seidel T, Fischer F. Simulation-based learning in higher education: a meta-analysis. Rev Educ Res. 2020;90:499–541.


Kim J, Park JH, Shin S. Effectiveness of simulation-based nursing education depending on fidelity: a meta-analysis. BMC Med Educ. 2016;16:152.


Ricketts B. The role of simulation for learning within pre-registration nursing education—a literature review. Nurse Educ Today. 2011;31:650–4.


Shin S, Park JH, Kim JH. Effectiveness of patient simulation in nursing education: meta-analysis. Nurse Educ Today. 2015;35:176–82.

Bagnasco A, Pagnucci N, Tolotti A, Rosa F, Torre G, Sasso L. The role of simulation in developing communication and gestural skills in medical students. BMC Med Educ. 2014;14:106.

Oh PJ, Jeon KD, Koh MS. The effects of simulation-based learning using standardized patients in nursing students: a meta-analysis. Nurse Educ Today. 2015;35:e6–e15.

Stayt LC, Merriman C, Ricketts B, Morton S, Simpson T. Recognizing and managing a deteriorating patient: a randomized controlled trial investigating the effectiveness of clinical simulation in improving clinical performance in undergraduate nursing students. J Adv Nurs. 2015;71:2563–74.

Ryall T, Judd BK, Gordon CJ. Simulation-based assessments in health professional education: a systematic review. J Multidiscip Healthc. 2016;9:69–82.


Billings DM, Halstead JA. Teaching in nursing: a guide for faculty. 4th ed. St. Louis: Elsevier; 2012.

Nichols PD, Meyers JL, Burling KS. A framework for evaluating and planning assessments intended to improve student achievement. Educ Meas Issues Pract. 2009;28:14–23.

Cant RP, Cooper SJ. The benefits of debriefing as formative feedback in nurse education. Aust J Adv Nurs. 2011;29:37–47.

INACSL Standards Committee. INACSL Standards of Best Practice: Simulation SM Simulation Glossary. Clin Simul Nurs. 2016;12:S39–47.

Dufrene C, Young A. Successful debriefing-best methods to achieve positive learning outcomes: a literature review. Nurse Educ Today. 2014;34:372–6.

Levett-Jones T, Lapkin S. A systematic review of the effectiveness of simulation debriefing in health professional education. Nurse Educ Today. 2014;34:e58–63.

Díaz JL, Leal C, García JA, Hernández E, Adánez MG, Sáez A. Self-learning methodology in simulated environments (MAES©): elements and characteristics. Clin Simul Nurs. 2016;12:268–74.

INACSL Standards Committee. INACSL Standards of Best Practice: Simulation SM : Participant Evaluation. Clin Simul Nurs. 2016;12:S26–9.

Díaz Agea JL, Megías Nicolás A, García Méndez JA, Adánez Martínez MG, Leal CC. Improving simulation performance through self-learning methodology in simulated environments (MAES©). Nurse Educ Today. 2019;76:62–7.

Díaz Agea JL, Ramos-Morcillo AJ, Amo Setien FJ, Ruzafa-Martínez M, Hueso-Montoro C, Leal-Costa C. Perceptions about the self-learning methodology in simulated environments in nursing students: a mixed study. Int J Environ Res Public Health. 2019;16:4646.


Oermann MH, Kardong-Edgren S, Rizzolo MA. Summative simulated-based assessment in nursing programs. J Nurs Educ. 2016;55:323–8.

Harden RM, Gleeson FA. Assessment of clinical competence using an objective structured clinical examination (OSCE). Med Educ. 1979;13:41–54.


Mitchell ML, Henderson A, Groves M, Dalton M, Nulty D. The objective structured clinical examination (OSCE): optimising its value in the undergraduate nursing curriculum. Nurse Educ Today. 2009;29:394–404.

INACSL Standards Committee. INACSL Standards of Best Practice: Simulation SM Simulation Design. Clin Simul Nurs. 2016;12:S5–S12.

INACSL Standards Committee. INACSL Standards of Best Practice: Simulation SM Facilitation. Clin Simul Nurs. 2016;12:S16–20.

INACSL Standards Committee. INACSL Standards of Best Practice: Simulation SM Debriefing. Clin Simul Nurs. 2016;12:S21–5.

Rudolph JW, Raemer D, Simon R. Establishing a safe container for learning in simulation: the role of the presimulation briefing. Simul Healthc. 2014;9:339–49.

Butcher HK, Bulechek GM, Dochterman JMM, Wagner C. Nursing Interventions Classification (NIC). 7th ed. St. Louis: Elsevier; 2018.

Phrampus PE, O’Donnell JM. Debriefing using a structured and supported approach. In: AI AIL, De Maria JS, Schwartz AD, Sim AJ, editors. The comprehensive textbook of healthcare simulation. New York: Springer; 2013. p. 73–84.


Decker S, Fey M, Sideras S, Caballero S, Rockstraw L, Boese T, et al. Standards of best practice: simulation standard VI: the debriefing process. Clin Simul Nurs. 2013;9:S26–9.

Alconero-Camarero AR, Gualdrón-Romero A, Sarabia-Cobo CM, Martínez-Arce A. Clinical simulation as a learning tool in undergraduate nursing: validation of a questionnaire. Nurse Educ Today. 2016;39:128–34.

Mayan M. Essentials of qualitative inquiry. Walnut Creek: Left Coast Press, Inc.; 2009.

Cohen L, Manion L, Morrison K. Research methods in education. 7th ed. London: Routledge; 2011.

Lapkin S, Levett-Jones T, Bellchambers H, Fernandez R. Effectiveness of patient simulation manikins in teaching clinical reasoning skills to undergraduate nursing students: a systematic review. Clin Simul Nurs. 2010;6:207–22.

McGaghie WC, Issenberg SB, Petrusa ER, Scalese RJ. Revisiting “a critical review of simulation-based medical education research: 2003-2009”. Med Educ. 2016;50:986–91.

Abelsson A, Bisholt B. Nurse students learning acute care by simulation - focus on observation and debriefing. Nurse Educ Pract. 2017;24:6–13.

Bland AJ, Topping A, Wood BA. Concept analysis of simulation as a learning strategy in the education of undergraduate nursing students. Nurse Educ Today. 2011;31:664–70.

Franklin AE, Burns P, Lee CS. Psychometric testing on the NLN student satisfaction and self-confidence in learning, design scale simulation, and educational practices questionnaire using a sample of pre-licensure novice nurses. Nurse Educ Today. 2014;34:1298–304.

Levett-Jones T, McCoy M, Lapkin S, Noble D, Hoffman K, Dempsey J, et al. The development and psychometric testing of the satisfaction with simulation experience scale. Nurse Educ Today. 2011;31:705–10.

Zapko KA, Ferranto MLG, Blasiman R, Shelestak D. Evaluating best educational practices, student satisfaction, and self-confidence in simulation: a descriptive study. Nurse Educ Today. 2018;60:28–34.

Kelly MA, Mitchell ML, Henderson A, Jeffrey CA, Groves M, Nulty DD, et al. OSCE best practice guidelines-applicability for nursing simulations. Adv Simul. 2016;1:10.

Cantrell ML, Meyer SL, Mosack V. Effects of simulation on nursing student stress: an integrative review. J Nurs Educ. 2017;56:139–44.

Nielsen B, Harder N. Causes of student anxiety during simulation: what the literature says. Clin Simul Nurs. 2013;9:e507–12.

Gavriel J. Assessment for learning: a wider (classroom-researched) perspective is important for formative assessment and self-directed learning in general practice. Educ Prim Care. 2013;24:93–6.

Taras M. Summative and formative assessment. Act Learn High Educ. 2008;9:172–82.

Wunder LL, Glymph DC, Newman J, Gonzalez V, Gonzalez JE, Groom JA. Objective structured clinical examination as an educational initiative for summative simulation competency evaluation of first-year student registered nurse anesthetists’ clinical skills. AANA J. 2014;82:419–25.

Neill MA, Wotton K. High-fidelity simulation debriefing in nursing education: a literature review. Clin Simul Nurs. 2011;7:e161–8.

Norman J. Systematic review of the literature on simulation in nursing education. ABNF J. 2012;23:24–8.

King A, Holder MGJr, Ahmed RA. Error as allies: error management training in health professions education. BMJ Qual Saf. 2013;22:516–9.

Higgins M, Ishimaru A, Holcombe R, Fowler A. Examining organizational learning in schools: the role of psychological safety, experimentation, and leadership that reinforces learning. J Educ Change. 2012;13:67–94.

Hope A, Garside J, Prescott S. Rethinking theory and practice: Pre-registration student nurses experiences of simulation teaching and learning in the acquisition of clinical skills in preparation for practice. Nurse Educ Today. 2011;31:711–7.

Lisko SA, O’Dell V. Integration of theory and practice: experiential learning theory and nursing education. Nurs Educ Perspect. 2010;31:106–8.

Benner P. From novice to expert: excellence and power in clinical nursing practice. Menlo Park: Addison-Wesley Publishing; 1984.


Nickless LJ. The use of simulation to address the acute care skills deficit in pre-registration nursing students: a clinical skill perspective. Nurse Educ Pract. 2011;11:199–205.


Acknowledgements

The authors appreciate the collaboration of nursing students who participated in the study.

STROBE statement

All methods were carried out in accordance with the 22-item checklist of the consolidated criteria for reporting cross-sectional studies (STROBE).

The authors have no sources of funding to declare.

Author information

Authors and affiliations

Fundación San Juan de Dios, Centro de Ciencias de la Salud San Rafael, Universidad de Nebrija, Paseo de La Habana, 70, 28036, Madrid, Spain

Oscar Arrogante, Gracia María González-Romero, Eva María López-Torre, Laura Carrión-García & Alberto Polo


Contributions

OA: Conceptualization, Data Collection, Formal Analysis, Writing – Original Draft, Writing - Review & Editing, Supervision; GMGR: Conceptualization, Data Collection, Writing - Review & Editing; EMLT: Conceptualization, Writing - Review & Editing; LCG: Conceptualization, Data Collection, Writing - Review & Editing; AP: Conceptualization, Data Collection, Formal Analysis, Writing - Review & Editing, Supervision. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Oscar Arrogante.

Ethics declarations

Ethics approval and consent to participate

The research committee of the Centro Universitario de Ciencias de la Salud San Rafael-Nebrija approved the study (P_2018_012). In accordance with ethical standards, all participants provided written informed consent and received written information about the study and its goals. Additionally, written informed consent for audio-video recording was obtained from all participants.

Consent for publication

Not applicable.

Competing interests

The authors declare they have no competing interests.



About this article

Cite this article

Arrogante, O., González-Romero, G.M., López-Torre, E.M. et al. Comparing formative and summative simulation-based assessment in undergraduate nursing students: nursing competency acquisition and clinical simulation satisfaction. BMC Nurs 20, 92 (2021). https://doi.org/10.1186/s12912-021-00614-2

Received: 09 February 2021

Accepted: 17 May 2021

Published: 08 June 2021

DOI: https://doi.org/10.1186/s12912-021-00614-2


Keywords

  • Clinical competence
  • High-fidelity simulation training
  • Nursing students


Nursing Course and Curriculum Development – University of Brighton


Formative assessment

Formative assessment, i.e. assessment for learning

Formative assessment enables students to try out new assessment types, to express their learning, to get feedback on strengths and areas to develop and to understand the quality of their work whilst studying.

The module specification must state (in the ‘Teaching and learning activities’ section):

  • The form of the formative assessment task, e.g. online test, essay plan, seminar presentation, peer review of work
  • When it will take place, i.e. during a taught session or independent study
  • How students will get feedback, e.g. automatic marking of an online test, oral feedback, or written feedback; from a peer or an academic

Examples of formative assessments:

  • Set up a discussion board to develop learning networks
  • Develop a reading log
  • Review an article
  • Group presentation
  • Mock examination
  • Self-assessment – students generate criteria appropriate for assessing their own work
  • Annotated bibliography

Annotate exemplars of summative assessments to identify where the student was meeting the learning outcomes or could meet them better. This can apply to all forms of assessment: essays, posters, OSCEs and presentations (if videoed).


Nursing Education Network


Formative & Summative Assessment

Introduce and provide an overview of formative & summative assessment:

  • Describe key concepts related to formative and summative assessment
  • Formative and summative assessment in healthcare
  • Making learning visible
  • Key learning resources

Formative & Summative Assessment Presentation [Download]


Formative Assessment and Its Impact on Student Success

Hill, Rebecca DNP, CNE; Wong, John PhD; Thal, Rebecca MSN, FNP

By Rebecca Hill, DNP, CNE, Assistant Professor, John Wong, PhD, Assistant Professor, and Rebecca Thal, MSN, FNP, Graduate Student, School of Nursing, MGH Institute of Health Professions, Boston, MA ( [email protected] ).


Assessment and Evaluation in Nursing Education: A Simulation Perspective

  • First Online: 29 February 2024


  • Loretta Garvey
  • Debra Kiegaldie

Part of the book series: Comprehensive Healthcare Simulation (CHS)


Assessment and evaluation are used extensively in nursing education. In many instances, these terms are often used interchangeably, which can create confusion, yet key differences are associated with each.

Assessment in undergraduate nursing education is designed to ascertain whether students have achieved their potential and have acquired the knowledge, skills, and abilities set out within their course. Assessment aims to understand and improve student learning and must be at the forefront of curriculum planning to ensure assessments are well aligned with learning outcomes. In the past, the focus of assessment has often been on a single assessment. However, it is now understood that we must examine the whole system or program of assessment within a course of study to ensure integration and recognition of all assessment elements to holistically achieve overall course aims and objectives. Simulation is emerging as a safe and effective assessment tool that is increasingly used in undergraduate nursing.

Evaluation, however, is more summative in that it evaluates student attainment of course outcomes and their views on the learning process to achieve those outcomes. Program evaluation takes assessment of learning a step further in that it is a systematic method to assess the design, implementation, improvement, or outcomes of a program. According to Frye and Hemmer, student assessments (measurements) can be important to the evaluation process, but evaluation measurements come from various sources (Frye and Hemmer. Med Teach 34:e288-e99, 2012). Essentially, program evaluation is concerned with the utility of its process and results (Alkin and King. Am J Eval 37:568–79, 2016). The evaluation of simulation as a distinct program of learning is an important consideration when designing and implementing simulation into undergraduate nursing. This chapter will examine assessment and program evaluation from the simulation perspective in undergraduate nursing to explain the important principles, components, best practice approaches, and practical applications that must be considered.


Masters GN. Reforming Education Assessment: Imperatives, principles, and challenges. Camberwell: ACER Press; 2013.


MacLellan E. Assessment for Learning: the differing perceptions of tutors and students. Assess Eval High Educ. 2001;26(4):307–18.


Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990;65(9):S63–7.


Alinier G. Nursing students’ and lecturers’ perspectives of objective structured clinical examination incorporating simulation. Nurse Educ Today. 2003;23(6):419–26.


Norcini J, Anderson MB, Bollela V, Burch V, Costa MJ, Duvivier R, et al. 2018 Consensus framework for good assessment. Med Teach. 2018;40(11):1102–9.

Biggs J. Constructive alignment in university teaching: HERDSA. Rev High Educ. 2014;1:5–22.

Hamdy H. Blueprinting for the assessment of health care professionals. Clin Teach. 2006;3(3):175–9.

Welch S. Program evaluation: a concept analysis. Teach Learn Nurs. 2021;16(1):81–4.

Frye AW, Hemmer PA. Program evaluation models and related theories: AMEE Guide No. 67. Med Teach. 2012;34(5):e288–e99.

Johnston S, Coyer FM, Nash R. Kirkpatrick's evaluation of simulation and debriefing in health care education: a systematic review. J Nurs Educ. 2018;57(7):393–8.

ACGM. Glossary of Terms: Accreditation Council for Graduate Medical Education 2020. https://www.acgme.org/globalassets/pdfs/ab_acgmeglossary.pdf .

Shadish WR, Luellen JK. History of evaluation. In: Mathison S, editor. Encyclopedia of evaluation. Sage; 2005. p. 183–6.

Lewallen LP. Practical strategies for nursing education program evaluation. J Prof Nurs. 2015;31(2):133–40.

Kirkpatrick DL. Evaluation of training. In: Craig RL, Bittel LR, editors. New York: McGraw Hill; 1967.

Cahapay M. Kirkpatrick model: its limitations as used in higher education evaluation. Int J Assess Tools Educ. 2021;8(1):135–44.

Yardley S, Dornan T. Kirkpatrick's levels and education 'evidence'. Med Educ. 2012;46(1):97–106.

Kirkpatrick J, Kirkpatrick W. An introduction to the new world Kirkpatrick model. Kirkpatrick Partners; 2021.

Bhatia M, Stewart AE, Wallace A, Kumar A, Malhotra A. Evaluation of an in-situ neonatal resuscitation simulation program using the new world Kirkpatrick model. Clin Simul Nurs. 2021;50:27–37.

Lippe M, Carter P. Using the CIPP model to assess nursing education program quality and merit. Teach Learn Nurs. 2018;13(1):9–13.

Kardong-Edgren S, Adamson KA, Fitzgerald C. A review of currently published evaluation instruments for human patient simulation. Clin Simul Nurs. 2010;6(1):e25–35.

Solutions S. Reliability and Validity; 2022

Rauta S, Salanterä S, Vahlberg T, Junttila K. The criterion validity, reliability, and feasibility of an instrument for assessing the nursing intensity in perioperative settings. Nurs Res Pract. 2017;2017:1048052.


Jeffries PR, Rizzolo MA. Designing and implementing models for the innovative use of simulation to teach nursing care of ill adults and children: a national, multi-site, multi-method study (summary report). Sci Res. 2006;

Unver V, Basak T, Watts P, Gaioso V, Moss J, Tastan S, et al. The reliability and validity of three questionnaires: The Student Satisfaction and Self-Confidence in Learning Scale, Simulation Design Scale, and Educational Practices Questionnaire. Contemp Nurse. 2017;53(1):60–74.

Franklin AE, Burns P, Lee CS. Psychometric testing on the NLN Student Satisfaction and Self-Confidence in Learning, Simulation Design Scale, and Educational Practices Questionnaire using a sample of pre-licensure novice nurses. Nurse Educ Today. 2014;34(10):1298–304.

Guise J-M, Deering SH, Kanki BG, Osterweil P, Li H, Mori M, et al. Validation of a tool to measure and promote clinical teamwork. Simul Healthc. 2008;3(4)

Millward LJ, Jeffries N. The team survey: a tool for health care team development. J Adv Nurs. 2001;35(2):276–87.


Author information

Authors and affiliations

Federation University Australia, University Dr, Mount Helen, VIC, Australia

Loretta Garvey

Holmesglen Institute, Healthscope Hospitals, Monash University, Mount Helen, VIC, Australia

Debra Kiegaldie


Corresponding author

Correspondence to Loretta Garvey .

Editor information

Editors and affiliations

Emergency Medicine, Icahn School of Medicine at Mount Sinai, Director of Emergency Medicine Simulation, Mount Sinai Hospital, New York, NY, USA

Jared M. Kutzin

School of Nursing, University of California San Francisco, San Francisco, CA, USA

Perinatal Patient Safety, Kaiser Permanente, Pleasanton, CA, USA

Connie M. Lopez

Eastern Health Clinical School, Faculty of Medicine, Nursing & Health Sciences, Monash University, Melbourne, VIC, Australia


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Garvey, L., Kiegaldie, D. (2023). Assessment and Evaluation in Nursing Education: A Simulation Perspective. In: Kutzin, J.M., Waxman, K., Lopez, C.M., Kiegaldie, D. (eds) Comprehensive Healthcare Simulation: Nursing. Comprehensive Healthcare Simulation. Springer, Cham. https://doi.org/10.1007/978-3-031-31090-4_14

DOI: https://doi.org/10.1007/978-3-031-31090-4_14

Published: 29 February 2024

Publisher Name: Springer, Cham

Print ISBN: 978-3-031-31089-8

Online ISBN: 978-3-031-31090-4



Exploring the formal assessment discussions in clinical nursing education: An observational study

Ingunn Aase

1 SHARE- Centre for Resilience in Healthcare, Faculty of Health Sciences, University of Stavanger, Kjell Arholms gate 41, N-4036 Stavanger, Norway

Kristin Akerjordet

2 School of Psychology, Faculty of the Arts, Social Sciences & Humanities, University of Wollongong, Wollongong, NSW Australia

Patrick Crookes

3 School of Nursing, Midwifery and Public health, University of Canberra, Canberra, Australia

Christina T. Frøiland

Kristin A. Laugaland

Associated data

Original de-identified data from the study will be stored at the Norwegian Centre for Research Data after completion of the project and can be requested from the corresponding author upon reasonable request.

Introduction

According to EU standards, 50% of the bachelor education program in nursing should take place in clinical learning environments. Consequently, this calls for high-quality supervision, where appropriate assessment strategies are vital to optimize students’ learning, growth, and professional development. Despite this, little is known about the formal assessment discussions taking place in clinical nursing education.

The aim of this study was to explore the characteristics of the formal assessment discussions taking place during first-year students’ clinical education in nursing homes.

An exploratory qualitative study was performed. The data consist of passive participant observations of 24 assessment discussions (12 mid-term and 12 final assessments) with first-year nursing students (n = 12), their assigned registered nurse mentors (n = 12) and nurse educators (n = 5). The study was conducted in three public nursing homes in a single Norwegian municipality. Data were subjected to thematic analysis. The findings were reported using the Standards for Reporting Qualitative Research (SRQR).

Three themes were identified regarding the characteristics of the formal assessment discussions: (1) adverse variability in structuring, weighting of theoretical content and pedagogical approach; (2) limited three-part dialogue constrains feedback and reflection; and (3) restricted grounds for assessment leave the nurse educators with a dominant role.

These characteristics signal key areas of attention for improving formal assessment discussions to capitalize on unexploited learning opportunities.

This study focuses on formal assessment practice of nursing students in clinical education in nursing homes. Enabling nursing students to acquire professional competence through clinical education in a variety of healthcare settings is a cornerstone of contemporary nurse education programs [ 1 ]. According to EU standards, 50% of the bachelor education program should take place in clinical learning environments. Consequently, this calls for high-quality clinical supervision that includes appropriate assessment strategies, which are critical to optimize students’ learning, professional development, and personal growth [ 2 ].

Formal assessment of nursing students in clinical education serves two purposes: 1) to facilitate learning by enabling students to judge their own achievements more accurately and encourage their continuous learning process; and 2) to provide certification of achievements [ 3 ]. Accordingly, there are two approaches to assessment: formative assessment and summative assessment. Formative assessment is focused on the learning needs of each student, identifying areas in need of development, and providing feedback. Feedback is the central component of effective formative assessment. A summative assessment is a summary of a student’s achievements and judgement as to whether he/she has met the required learning outcomes [ 4 , 5 ]. Student clinical placements are often assessed on a pass-fail basis, not by a letter grade [ 6 ].

The predominant clinical education model applied in nursing homes involves students being mentored and assessed by a registered nurse (RN) and followed up by a nurse teacher [ 7 ]. The formal assessment during clinical education involves a partnership model in which nursing students, their assigned RN mentors and nurse educators cooperate and share responsibility for facilitating and confirming the students’ achievement of expected learning outcomes [ 8 ]. However, substantial variations in assessment practices have been reported internationally and nationally, suggesting that the assessment practices of nursing students in clinical education lack consistency [ 2 , 3 , 9 , 10 ]. Consequently, a variety of tools for assessing students’ clinical competence exist, and these tools depend on different definitions of clinical competence and of the components to be assessed, such as knowledge, technical care skills, attitudes, behaviours, clinical judgment, and critical thinking [ 11 – 13 ]. Several international researchers have argued that reliable and comparable assessment of nursing students’ knowledge and skills would benefit greatly from the development and consistent use of national competency assessment tools [ 2 , 8 , 14 , 15 ]. In their discussion paper, Gjevjon et al. [ 16 ] highlighted the importance of assessing students’ progression in clinical skills and ensuring that their nursing competence is in line with official requirements and professional expectations. This implies that student assessments cannot be limited to a single clinical placement period.

Stakeholders have reported challenges with the assessment of students’ competence in clinical nursing education [ 2 , 17 – 19 ]. RN mentors report that they have trouble providing feedback and assessing student competence because of the absence of clear guidelines and assessment criteria [ 17 , 20 ]. RN mentors also experience having a passive and peripheral role during formal assessment discussions [ 18 , 21 ]. Conversely, nursing students report feeling insecure due to power disparities in the assessment discussions; students perceive their RN mentors as much more powerful than they are [ 22 ]. Moreover, students report that the personal chemistry between them and their assigned RN mentor can lead to differential treatment [ 22 ]. In comparison, nurse educators report that it is challenging to make students’ competence and learning processes visible and to ensure fair and equitable assessment of students in clinical placement/education (e.g., [ 23 ]). Difficulties in understanding the concepts used to describe student learning outcomes, the language used in the assessment tools, limited mentor competence in assessment, and restricted academic-clinical collaboration have also been reported in the literature (e.g., [ 2 , 24 – 26 ]). A systematic review of the assessment of student competence found that the use of a valid and reliable assessment tool with clear criteria, together with continued education and support for mentors, is critical to the quality of learning opportunities [ 3 ].

Formative assessment, with feedback as a key aspect, is arguably one of the most important factors for students’ learning, personal growth, and professional development in discussions of clinical assessment [ 24 , 27 ]. A multilevel descriptive study [ 20 ] concluded that students often do not receive sufficient constructive feedback during formal assessment discussions. This is of major concern, since clinical learning is considered a signature pedagogy in preparing nursing students for real-world practice (e.g., [ 2 , 28 ]). For workplace learning to be optimized, nursing students need to explore the complexity of the patient experience and be able to discuss and evaluate patient care with others, both inside the clinical environment and in an academic setting (e.g., [ 29 , 30 ]). The focus of assessment is to aid nursing students’ continuous learning process, which requires constructive feedback and opportunities for reflection between nursing student, RN mentor, and nurse educator [ 3 ]. Formal assessment offers a potential opportunity to optimize nursing students’ learning outcomes by extending and transforming their professional knowledge dialectically. The explicit focus of assessment is therefore very important, as students tend to concentrate on achieving the required competencies which they know will be assessed [ 2 ].

Internationally, emerging evidence shows that summative assessment of nursing students’ competence is a matter of concern across countries and educational institutions, as previously stressed, due to a) a lack of consistency in the use of methods and tools, b) its openness to subjective bias, and c) the fact that the quality of assessment varies greatly [ 2 , 3 ]. According to Helminen et al. [ 2 ], there are few studies of summative assessment, and as far as we know, no studies have explored the characteristics of these formal assessment discussions by observing what really goes on in the three-part dialogue between nursing students, RN mentors and nurse educators. There is therefore a need to further increase our knowledge and understanding of the characteristics of assessment discussions and of how these discussions can enhance students’ learning (e.g., [ 2 , 20 , 21 ]). To fill this knowledge gap, the aim of this study was to explore, using observation, the characteristics of the formal assessment discussions that take place during first-year students’ clinical education in a nursing home. This is considered a novel methodological approach to exploring this field in nursing education.

The study applied a qualitative, exploratory design using passive observation [ 31 ] to explore the characteristics of the mid-term and final assessment discussions. Such observations allow the researcher to be present and identifiable, but the researcher does not participate or interact with the people being observed [ 31 ]. Observational research is, as previously stressed, a novel way to study assessment discussions, as most studies have retrospectively applied interview methods as the source of data about assessment discussions [ 3 , 20 ]. Observational research offers a rich, in-depth approach which, in contrast to interviews, allows the researcher to identify context-specific issues of importance, to learn what is taken for granted in a situation, and to discover what is happening by watching and listening in order to arrive at new knowledge [ 31 ]. The observations of the formal assessment discussions were distributed among three of the researchers (IA, CF, KL), who individually carried out passive observations using a structured guide to ensure rigor [ 32 ]. The Standards for Reporting Qualitative Research (SRQR) were used.

The context of the observed assessment practices

In this study, nursing home placements are part of eight weeks of mandatory clinical education during nursing students’ first academic year. A preceptorship model was applied, in which students are mentored by an RN and followed up by a nurse educator [ 7 ]. In Norway, mentorship is an integral part of an RN’s work. This implies that the mentor neither receives financial compensation nor is required to have formal training in mentorship (e.g., at master level). The RN mentors are employed by the nursing homes, and the nurse educators included are employed by the university. The nurse educators are responsible for coordinating the students’ learning, organizing the assessment discussions, and assessing the nursing students in clinical placement.

The clinical education system for the students in this study comprises two formal assessment discussions: the mid-term discussion, with a summative assessment and a formative assessment, and the final assessment discussion, where the whole period is encapsulated in a summative assessment [ 20 ]. The mid-term and final summative assessments take the form of a three-part dialogue among the nursing student, the RN mentor, and the nurse educator at the placement site. Prior to the assessment discussions, the nursing student must write an evaluation of his or her learning. This written self-assessment must be sent to the nurse educator and RN mentor two days before the assessment discussion. There is no requirement for written preparation or documentation from the RN mentors. The university assigns the student a pass or fail grade based on six competence areas (i.e., professional ethics and legal, cooperation, patient-centered nursing, pedagogical, management, and learning competence) with the accompanying learning outcomes. All six competence areas and their accompanying learning outcomes, with a particular focus on fundamentals of care, were well known by the researchers (IA, CF, KL), serving as important pre-understanding for data collection and analysis. Beyond this, all the researchers had experience as nurse educators in the nursing home context, and one of the researchers (CF) holds a Master of Science degree in gerontology.

Setting and sample

The study was conducted in three public nursing homes within the same municipality in Western Norway as part of a larger research project: “Aiming for quality in nursing home care: Rethinking clinical supervision and assessment of nursing students in clinical studies” [ 19 ]. The nursing homes varied in patient numbers and staffing but were highly motivated to participate in the research project, which was anchored in the top management team. Recruitment was based on a purposive, criterion-based sampling strategy [ 33 ] targeting the nursing students, RN mentors and nurse educators involved in assessment discussions. To ensure that the sample possessed relevant knowledge and expertise, RNs with mentorship experience from nursing homes were included, ensuring diversity related to gender, age, and ethnicity. The nursing students and the nurse educators were recruited from the same university, representing a distribution in age, gender, healthcare experience, and academic experience (see Table 1).

Table 1. Characteristics of participants

| Participant group | Age | Previous experience | Gender | Not Norwegian as mother language |
|---|---|---|---|---|
| Nursing students (n = 12) | 19-29 years | 0-2 years as healthcare assistants | 12 females | 1 of 12 |
| RN mentors (n = 12) | 25-53 years | 3-23 years of experience as an RN | 11 females, 1 male | 4 of 12 |
| Nurse educators (n = 5) | 38-66 years | 3-33 years as educators | All females | 1 of 5 |

Prior to data collection, approval was obtained from the university and from the nurse managers at the nursing homes enrolled in the study. In addition, an information meeting was held by the first and last authors of this study with the eligible nursing students during their pre-placement orientation week on campus, with the RN mentors at the selected nursing home sites, and with the nurse educators responsible for overseeing nursing students on placement.

Invitations to participate in the study were then sent to eligible participants at the three public nursing homes. Nursing students were recruited first, before their assigned RN mentors and nurse educators were emailed an invitation to participate, to ensure the quality of the study sample. Two co-researchers working in two of the three enrolled nursing homes assisted in the recruitment of the RN mentors. A total of 45 nursing students were allocated to clinical placements at the three included public nursing homes. Of the 45 potential nursing students invited to participate, 12 consented. Their assigned RN mentors (n = 12) and nurse educators (n = 5) also agreed to participate. A summary of participant group characteristics is displayed in Table 1.

Four nursing students and four RN mentors were enrolled from each nursing home. As nurse educators are responsible for overseeing several students during placements, fewer nurse educators than students and RN mentors agreed to participate. All but one of the participants, an RN mentor, were women. Of the 29 participants, six (one nursing student, one nurse educator and four RN mentors) did not have Norwegian as their mother language. None of the RNs had formal training in supervision, and their experience with mentoring students ranged from 1 to 7 years. Seven of the 12 nursing students had healthcare experience prior to their placement period. Three of the five nurse educators held a PhD, and the other two were lecturers with a master’s degree. Two of the nurse educators were overseeing students on nursing home placement for the first or second time, while the other three had several years of experience with nursing student placements in nursing homes. None of the nurse educators had expertise in gerontological nursing.

Data collection

To allow first-hand experience of the formal assessment discussions, passive observation was used as the main source of data collection. The observations were carried out separately by three researchers (IA, CF, KL), all of whom are experienced qualitative researchers with a background in nursing, nursing education and nursing research. The researchers were all familiar faces to the students from lectures at the university and, as previously emphasized, from their work as nurse educators in the public nursing home settings. At the beginning of each observation, time was taken to create a bond of trust between the participants to reduce contextual stress. The observations were based on a structured observation guide (see Attachment 1). The observation guide contained relatively broad predefined categories: structure, content, duration, interaction, dialogue, and feedback. These predefined categories were used to guide and support the recording and notetaking process, while allowing space for spontaneous aspects of the formal assessment discussions [ 31 ]. The guide was based on the aim of the study and informed by the literature.

During the observations, each researcher sat on a chair in a corner of the room (two to three meters away) to observe the interaction and listen unobtrusively to the assessment discussions, in order to reduce the students’ experience of stress. Observational notes were taken discreetly according to the structured guide and combined descriptions with personal impressions [ 34 ]. Summaries, including reflective notes, were written in electronic format directly after the observations. The observations were conducted alongside the clinical placements in February and March 2019, with an average duration of about 60 minutes. The choice of passive observation using three researchers was made for both pragmatic and scientific reasons: a) to ensure that data were collected on time when assessment meetings coincided, and b) to verify the observational notes through triangulation during analysis and interpretation [ 33 ].
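Purely as an illustration of how notes following such a guide might be kept in a consistent form (this is not part of the study’s method), the sketch below defines a small record type whose fields mirror the guide’s predefined categories; all field names and example content are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ObservationNote:
    """Hypothetical record mirroring the structured observation guide's
    predefined categories (structure, content, duration, interaction,
    dialogue, feedback), plus free-text reflective notes."""
    session_id: str                 # e.g. "midterm-01" or "final-03" (invented labels)
    duration_minutes: int           # observed length of the discussion
    structure: str                  # how the discussion was organised
    content: str                    # topics and competence areas covered
    interaction: str                # who spoke, turn-taking, body language
    dialogue: str                   # quality of the three-part dialogue
    feedback: str                   # feedback given to the student
    reflections: List[str] = field(default_factory=list)  # researcher's reflective notes

# Example usage with invented content only:
note = ObservationNote(
    session_id="midterm-01",
    duration_minutes=60,
    structure="Followed the assessment form competence area by competence area",
    content="Five of six competence areas discussed; little gerontological content",
    interaction="RN mentor mostly answered yes/no when addressed",
    dialogue="Nurse educator led; limited three-part exchange",
    feedback="Mainly confirmatory; few open questions inviting self-evaluation",
    reflections=["Student appeared nervous at the start"],
)
print(note.session_id, note.duration_minutes)
```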

Data analysis

Braun and Clarke’s [ 35 ] approach to thematic analysis was used to analyze the observational notes and summary transcripts. The thematic analysis was guided by the aim of the study and followed the six steps described by Braun and Clarke [ 35 ]: (1) becoming familiar with the data; (2) generating initial codes; (3) searching for themes; (4) reviewing themes; (5) defining and naming themes; and (6) producing the report. The 93 pages of observational notes were read independently by three of the researchers (IA, CF, KL) to obtain an overall impression of the dataset (see Fig. 1: Analysis process).

Fig. 1. Analysis process

All the observational notes, from both mid-term and final assessment discussions, were then compared and analyzed as a single dataset, with attention to similarities and differences, by generating initial codes and searching for themes. The reason for merging the datasets was that the mid-term and final assessment discussions overlapped in terms of initial codes and emerging themes, contributing to a deeper understanding of the results. The researchers IA, CF and KL met several times to discuss the coding process and to finalize the preliminary themes. All authors revised, defined, and named the three key themes, reaching consensus. This means that all authors contributed to analytic integrity through rich discussions throughout the process of analysis, clarification, validation, and dissemination of results [ 36 ]. The study also provides a description of the context of the formal assessment discussions, the participants enrolled, the data collection and the analysis process, allowing readers to evaluate the authenticity and transferability of the research evidence [ 36 ].
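As a purely illustrative sketch of the bookkeeping behind steps 2 and 3 of this approach (generating initial codes, then searching for themes), the snippet below tallies how often invented codes support hypothetical candidate themes across sessions; the codes and theme names are made up for illustration and are not the study’s actual codebook.

```python
from collections import Counter

# Hypothetical initial codes extracted from observational notes (step 2).
coded_segments = [
    ("midterm-01", "closed-ended questioning"),
    ("midterm-01", "mentor passive"),
    ("midterm-02", "theory questions asked"),
    ("final-01", "mentor passive"),
    ("final-02", "student anxiety"),
    ("final-02", "closed-ended questioning"),
]

# Hypothetical mapping of codes to candidate themes (step 3).
candidate_themes = {
    "Variability in pedagogical approach": {"closed-ended questioning", "theory questions asked"},
    "Limited three-part dialogue": {"mentor passive", "student anxiety"},
}

# Count how many coded segments support each candidate theme.
theme_counts = Counter()
for session, code in coded_segments:
    for theme, codes in candidate_themes.items():
        if code in codes:
            theme_counts[theme] += 1

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} coded segments")
```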

Ethical considerations

The study was submitted to the Regional Committees for Medical and Health Research Ethics in Norway, which found that the study was not regulated by the Health Research Act, since no health or patient data were registered. The study adhered to the general ethical principles laid down by the National Committee for Medical and Health Research Ethics in Norway. In addition, the principles of confidentiality, voluntary participation and informed consent were applied, following the World Medical Association’s Declaration of Helsinki. All participants gave their written informed consent and were informed about the right to withdraw from the study at any point. The nursing students were made aware that participation or non-participation would not affect other aspects of their clinical placement/education period or give them any advantages or disadvantages on their educational path. The study was approved by the Norwegian Centre for Research Data in two phases (Phase 1: NSD, ID 61309; Phase 2: NSD, ID 489776).

All methods were performed in accordance with relevant guidelines and regulations.

The analysis identified three themes that describe the characteristics of the formal assessment discussions taking place during first-year nursing students’ clinical education in nursing homes: (1) Adverse variability in structuring, weighting of theoretical content, and pedagogical approach, (2) Limited three-part dialogue constrains feedback and reflection, and (3) Restricted grounds for assessment leave the nurse educators with a dominant role. These themes are now presented.

Theme 1: Adverse variability in structuring, weighting of theoretical content and pedagogical approach

This theme illuminates adverse variability in the nurse educators’ structuring of the discussion, their weighting of theoretical content (e.g., bridging of theory and practice), and the pedagogical approach applied across the assessment discussions. Some nurse educators went through each competence area and the accompanying learning outcomes, strictly following the sequence adopted in the assessment form. Others adopted a more flexible sequence of progression guided by the discussion. The latter approach led to frequent shifts in focus between competence areas; the observation notes described that students and RN mentors found it difficult to follow the discussion and the assessment process. For example, the observational notes illuminated that a student commented that she felt insecure because the nurse educator appeared to write notes pertaining to one competence area while talking about another one.

The data exposed variations in the nurse educators’ emphasis on and weighting of bridging theory and practice. While some nurse educators asked theoretical questions to gauge the students’ knowledge and to help them link theory and practice, other nurse educators did not. An absence of theoretical questions was observed in several assessment discussions. Additionally, the nurse educators’ knowledge of and referral to the course curriculum varied. Some nurse educators seemed to be familiar with the course curriculum and mentioned the theoretical subjects taught to the students pre-placement; other nurse educators refrained from discussing theoretical issues. The weighting of geriatric nursing was limited in the assessment discussions.

The nurse educators varied in their pedagogical approach across the assessment discussions. Some nurse educators asked open questions inviting the students to self-evaluate and reflect on their own performance and development before approaching and inviting the RN mentor to provide input. Other nurse educators adopted a more confirmative approach, asking closed-ended questions, and reading aloud from the student’s written self-assessment before asking the RN mentors to just confirm the nurse educators’ impressions and evaluations with “yes” or “no” answers.

Theme 2: Limited three-part dialogue constrains feedback and reflection

The second theme illuminates the limited participation of all three parties in the formal assessment discussions, which was observed to constrain feedback and reflection. Several possible impediments to the dialogue were identified: language barriers, interruptions, the preparedness of students and RN mentors for the discussions, the justification for assessing the students, and the students’ reported level of stress. There were variations in the way both nurse educators and RN mentors conveyed their feedback to the students. Several of the RN mentors were observed to have assumed a passive, peripheral role, sitting on the sidelines and saying very little. When addressed by the nurse educator, the RN mentors tended to respond with yes/no answers.

Language barriers related to understanding of the learning outcomes to be assessed appeared to hamper the dialogue. On several occasions the RN mentors who did not have Norwegian as their mother language, expressed that it was difficult to fully understand the language used in the assessment form, so they did not know what was required during the assessment. We therefore observed that the nurse educators often took time to explain and “translate” the concepts used to describe the student learning outcomes to ensure mutual understanding.

Interruptions during the assessment discussions interfered with the dialogue. In several of the assessment discussions the RN mentors took phone calls that required them to leave the assessment discussions, sometimes for extended periods of time. This meant that the nurse educator later had to update the RN mentor on what had been covered in her/his absence.

Preparedness of students and RN mentors for the assessment discussion varied. Some students brought their self-assessment document with them to the meeting, others came empty-handed. Some but not all RN mentors brought a hard copy of the student’s self-assessment document. Some RN mentors admitted that they had been too busy to read the self-assessment before the meeting.

Based on interpretation of body language, several students appeared nervous during the assessment discussions. The observational notes showed that some students later confirmed this self-perceived stress by expressing during the assessment discussions that they had had dreams or nightmares about them. The observational notes also illuminated that some students expressed that they did not know what would be brought up in these discussions and that they were afraid of failing their clinical placement. The three-part dialogue largely focused on whether the student’s competence achievements merited passing or failing the clinical placement, with less attention paid to giving the student opportunities to reflect in order to enhance learning.

Theme 3: Restricted grounds for assessment leave the nurse educators with a dominating role

Limited dialogue and engagement from students and RN mentors often left the nurse educators with a restricted basis for assessment and thus gave them a dominant role in the assessment discussions. RN mentors seemed to have insufficient information to assess their students’ performance, learning and development, stressing that they had not spent significant amounts of time observing the students during the placement period. On several occasions RN mentors explained that, due to sick leave, administrative tasks, and alternating shifts, they had spent only a handful of workdays with their students. The observational notes revealed that some RN mentors had to base their formal assessments of student performance mainly on these assessment discussions. Several RN mentors expressed frustration with limited nurse coverage, which could impede their mentorship and assessment practices and the amount of student follow-up during placement.

Because of the limited input from the RN mentors, the researchers observed that the nurse educators had to base their evaluations on the students’ written self-assessments. Some nurse educators gave weight to the students’ capacity for self-evaluation. The quality, amount and content of the written self-assessment were observed to be influential in determining the students’ strengths and weaknesses and whether the students passed or failed their competence areas. Our observational notes showed that some students struggled to pass because their written self-assessments were not comprehensive enough. The formal assessment document contains fields for ‘strengths and areas of growth/improvement’, and there were variations in how much was written beyond ticking the box for approved or not approved and pass or fail. This implies that some assessment discussions were marked by ticking a box rather than engaging in a reflective dialogue.

The findings of this exploratory observational study suggest that the formal assessment discussions for first-year nursing students are characterized by a lack of conformity, referred to as adverse variability, regarding structure, theoretical content and pedagogical approach. The limited three-part dialogue appeared to constrain the feedback and critical reflection needed to enhance students’ clinical learning opportunities, leaving the nurse educators with a dominant role and a restricted basis for assessment. Increased awareness is therefore required to improve formal assessment discussions and capitalize on unexploited learning opportunities.

Unexploited learning opportunities

Formal assessment discussions are expected to optimize nursing students’ clinical learning outcomes by extending and transforming their professional knowledge. Our findings illuminate adverse variability in the nurse educators’ pedagogical approach and in the weighting of theoretical content during formal assessment, which may reduce the students’ ability to learn. This is of major concern, since the assessment discussion should be clear and systematic, encouraging the student’s continuous reflection and learning. Both nursing students and nurse mentors seem to need more knowledge of what a formal assessment discussion consists of, to reduce unpredictability (e.g., by familiarizing themselves with the expected learning outcomes, the assessment criteria, and the context) and to optimize clinical learning. When knowledge of, or reference to, the theoretical teaching delivered prior to clinical placement is lacking, nurse educators may fail to provide consistency for first-year students during formal assessment and to support them in bridging theory and practice, which is essential to their learning and professional development. These findings resonate with an integrative review indicating that orientation programs, mentor support, clear role expectations, and ongoing feedback on performance are essential for academic organizations to retain excellent nursing faculty [37]. This highlights the importance of nurse educators’ awareness and active involvement in assessment discussions [38, 39] to provide academic support and guidance for students’ theory-based assignments [40]. A critical question is whether the nurse mentors’ competence and active role have been acknowledged sufficiently in the formal assessment discussions, particularly since our research revealed that gerontological questions were hardly reflected upon to support students’ clinical learning in nursing homes.

Critical reflection in the assessment discussions was also limited, since some nurse educators in this study rarely asked the students for reflections. This is of major concern, since critical reflection is a pedagogical way to bridge theory with clinical experience and to tap into unexploited learning opportunities by strengthening the students’ reflection skills and knowledge in the assessment discussions [41]. Critical reflection in clinical settings and education is known to assist students in acquiring necessary skills and competencies [42].

Encouraging students to reflect on clinical learning experiences in context, and having clearer guidelines for formal assessment discussions in nursing education programs, may be both necessary and important for exploring unexploited learning opportunities. Our findings imply that education programs may increase students’ learning opportunities by decreasing the variability in structure, the weighting of theoretical content and the pedagogical approach applied in clinical education. Increased awareness of assessment, through a common understanding of how the assessment should be managed and what the assessment criteria are, is therefore considered important (e.g., [17]). Research is, however, needed to explore the relationship between nurse educators’ clinical expertise, competence and pedagogical skill set and students’ learning outcomes [26]. Our findings suggest that measures for preparing nurse educators with better theoretical knowledge and pedagogical approaches require further development, for example through online educational support. Further research should therefore explore and extend our understanding of the need for improvement in the structure, theoretical content, and pedagogical approach of formal assessment discussions.

Hampered three-part dialogue

The study findings illuminate that a hampered three-part dialogue makes it hard to offer feedback and engage in reflection, which reduced the learning potential of formal assessment in clinical education. Many factors affect the degree of interaction in the three-part dialogue. For example, our findings imply that RN mentors gave little feedback to the students in the assessment discussions because they had not spent a significant amount of time with them. A consequence of inadequate feedback in the assessment discussions may be limited clinical learning potential for the nursing students. Formative assessment is arguably one of the most important and influential factors for students’ learning, personal growth, and professional development in their clinical assessment discussions [27]. According to our findings, formative process assessment is not always used properly, even though it is highly recommended in the literature (e.g., [3, 4]). Our study shows that an enhanced focus on formative assessment in the nursing education program, and further research using different methodologies, is needed to extend our knowledge of formative process assessment in relation to clinical learning. Overall, our findings indicate that there is room for improvement in the way RN mentors participate in the assessment discussions. RN mentors and nurse educators need to increase their knowledge of how to give constructive and substantive feedback and how to encourage students to reflect critically on their learning through the formal assessment discussions. The findings also suggest that linguistic challenges associated with an internationally diverse RN mentor workforce may constrain assessments of student competence. These findings are consistent with other studies which concluded that understanding the language and meaning of the concepts used in the assessment document was difficult and might have resulted in a peripheral and passive role for the RN mentors [18, 21, 26]. These linguistic challenges and the marginalization of RN mentors may be further causes of the limited dialogue, requiring better pedagogical preparation for nursing students’ clinical placements in nursing homes.

The results from our study also illuminated that the students appeared nervous before and during the assessment discussions. This anxiety could make the discussions difficult and stressful for them. Other studies have noted the need for more predictability to reduce students’ stress so that they feel more secure [22, 43]. Similar findings are described in the study of nursing students’ experiences with clinical placement in nursing homes, in which Laugaland et al. [19] highlighted the vulnerability of being a first-year student. Students need to be informed about the purpose of the formal assessment discussions so that they can prepare for them and, consequently, feel less anxious. This may give them a better basis for, and openness to, learning in the three-part dialogue. An enhanced focus on students’ stress in the assessment discussions, and further research on the consequences of stress for learning in these discussions, is therefore required.

The hampered three-part dialogue indicated in our results often left the nurse educators with a restricted basis for assessing the nursing students, as well as a dominant role in the discussions. The cooperation between the nurse educators and the RN mentors was also limited, owing both to interruptions in the assessment discussions and to little input from the RN mentors, which is neither ideal nor desirable if the formal assessment discussions are to support students’ clinical learning; this can also be considered an unexploited learning opportunity. Wu et al. [10] describe clinical assessment as a robust activity that requires collaboration between clinical partners and academia to enhance students’ clinical experiences. Our research indicates a need for increased collaboration between educational programs and clinical placements in nursing homes. One way to increase the collaboration and give the RN mentor a more active role may be to give RN mentors dedicated time and compensation for mentoring students, including access to academic courses in clinical mentoring. Another way may be for the leaders in the nursing homes to let the RN mentors prioritize the supervision of students during the clinical placement period. Future research should explore and extend measures to strengthen this collaboration without giving the nurse educators too dominant a say in assessment discussions.

Methodology considerations

A strength of this study is its use of passive observation to explore and describe the formal assessment discussions, a novel approach that expands and deepens knowledge in this research field.

A methodological consideration is that, although the researchers aimed to be passive observers of an interaction, an observer’s presence is itself a significant part of that interaction. Participants might be uncomfortable and stressed by a stranger silently sitting in the corner and taking notes [31]. On the other hand, the researchers were familiar faces to the nursing students, and some of them acknowledged being comforted by the researchers’ presence and feeling more secure in that context. A critical question, however, is whether the use of video recordings would have provided richer data. Video recordings capture the interactions in the assessment discussions as they occur naturally, with few disturbances, and allow for repeated viewing and detailed analysis [44]. The frequencies of questions and answers in the three-part dialogue might also have been counted, for example to confirm the observed dominating role of the nurse educator. This was discussed but judged not to be practical or possible in this context.

Video recording with no researcher present might also reduce the participants’ stress, but might, on the other hand, be perceived as more threatening.

The way the data were collected and analyzed by three researchers should also be considered. The data were collected separately by the three researchers to ensure timely collection without interruption. A limitation may be that all three researchers had similar educational backgrounds and experience of formal assessment discussions in nursing homes, and their preconceptions might have influenced the results. However, the researchers were not involved in the nursing students’ clinical placement period. To control for researcher bias, the data analysis applied triangulation: two of the authors who were not actively involved in the observations reflected upon the results, providing a basis for checking interpretations and strengthening their trustworthiness [45].

Conclusions and implications

Adverse variability in the structuring and weighting of theoretical content and pedagogical approach, a hampered three-part dialogue and a limited basis for assessment all lead to unexploited learning opportunities. These characteristics signal key areas of attention for optimizing the learning potential of formal assessment discussions.

Higher education is in a unique position to lay the groundwork for improving formal assessment discussions. Nursing education programs may increase students’ learning opportunities by using a structured guide and by decreasing the variability in the weighting of theoretical content and the pedagogical approach applied in clinical placements in nursing homes. Nursing education programs should therefore find ways to increase the collaboration between nurse educators, nursing students and RN mentors to improve feedback, critical reflection, and clinical learning, thereby limiting the nurse educators’ dominating role in the formal assessment discussions in nursing homes.

Acknowledgements

We express our sincere appreciation to all participants who made this study possible. We wish to thank them for their willingness to be observed.

Abbreviation

RN: Registered nurse

Authors’ contributions

All authors were responsible for the design, analyses, and interpretation of data. IA prepared the drafts, revised them and completed the submitted version of the manuscript. IA and KL conducted the data collection, and IA, KL and CF performed the first analysis. KL, KA, CF and PC contributed comments and ideas throughout the analyses and interpretation of the data, and helped revise the manuscript. All authors gave final approval of the submitted version.

Funding

This work was supported by The Research Council of Norway (RCN), grant number 273558. The funder had no role in the design of the project, data collection, analysis, or interpretation of data, or in writing and publication of the manuscript.

This study is part of a larger project: Aiming for quality in nursing home: rethinking clinical supervision and assessment of nursing students in clinical practice (QUALinCLINstud) [19].

Availability of data and materials

Declarations

Not applicable.

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Academic staff perspectives of formative assessment in nurse education

Affiliation: Thames Valley University, Faculty of Health and Human Sciences, Paragon House, Boston Manor Road, Brentford, Middx TW8 9GA, UK.

PMID: 19818688. DOI: 10.1016/j.nepr.2009.08.007

High quality formative assessment has been linked to positive benefits on learning while good feedback can make a considerable difference to the quality of learning. It is proposed that formative assessment and feedback is intricately linked to enhancement of learning and has to be interactive. Underlying this proposition is the recognition of the importance of staff perspectives of formative assessment and their influence on assessment practice. However, there appears to be a paucity of literature exploring this area relevant to nurse education. The aim of the research was to explore the perspectives of twenty teachers of nurse education on formative assessment and feedback of theoretical assessment. A qualitative approach using semi-structured interviews was adopted. The interview data were analysed and the following themes identified: purposes of formative assessment, involvement of peers in the assessment process, ambivalence of timing of assessment, types of formative assessment and quality of good feedback. The findings offer suggestions which may be of value to teachers facilitating formative assessment. The conclusion is that teachers require changes to the practice of formative assessment and feedback by believing that learning is central to the purposes of formative assessment and regarding students as partners in this process.

Copyright 2009 Elsevier Ltd. All rights reserved.


