Model 1 includes all five predictor variables. Model 2 excludes the two that showed no significant effects in Model 1. Numbers in brackets are adjusted R²s.
As in Study 1, we again observe that the providers and recipients of feedback formed very different impressions about past performance. A new and important finding in this study is that feedback conversations did not merely fail to diminish provider-recipient disagreements about what led to strong and weak performance; they actually turned minor disagreements into major ones. Recipients made more self-enhancing and self-protective attributions following the performance discussion, believing more strongly than before that their successes were caused by internal factors (their ability, personality, effort, and attention) and their failures were caused by external factors (job responsibilities, employer expectations, resources provided, and bad luck). There were also modest disagreements regarding the quality and importance of different aspects of the recipient’s job performance, but these did not worsen following discussion. The most important source of disagreement between providers and recipients, then, especially following the feedback conversation, was not about what happened but about why it happened.
What led recipients of performance feedback to accept it as legitimate and helpful? The best predictor of feedback effectiveness was the extent to which the discussion was perceived as future focused. Unsurprisingly, feedback was also easier to accept when it was more favorable. As predicted, recipients were more likely to accept feedback when they and the feedback providers agreed more about what caused the past events. Greater attribution agreement, however, did not increase recipients’ intention to change. These findings suggest that reaching agreement on the causes of past performance is neither likely to happen (because feedback discussions widen causal attribution disagreement) nor necessary for fostering change. What does matter is the extent to which the feedback conversation focuses on generating new ideas for future success. We further explore the relations among all these variables following the reporting of Study 3.
Performance feedback serves goals other than improving performance. For example, performance reviews often serve as an opportunity for the feedback provider to justify promotion and compensation decisions. For the recipient, the conversation may provide an opportunity for image management and the chance to influence employment decisions. People may fail to distinguish between evaluation and improvement goals when providing and receiving feedback. In Study 2, the instructions were intended to be explicit in directing participants to the developmental goal of performance improvement, rather than accountability or rewards. Nevertheless, the providers’ wish to justify their evaluations and the recipients’ wish to influence them might have contributed to the differences we observed in attributions and in judgments about the feedback’s legitimacy. To address this concern, we added a page of detailed company guidelines that emphasized the primacy of the performance-improvement goal over the goals of expressing, justifying, or influencing evaluations. There were two versions of these guidelines, which did not differ in their effects.
Participants were 162 executives and MBA students enrolled in advanced Human Resources classes in Australia. They were an international mix of businesspeople: 74% said they grew up in Australia or New Zealand, 10% in Europe, 22% in Asia, and 7% elsewhere. (Totals sum to more than 100% because some participants indicated more than one region.) Participants averaged 39 years of age, ranging from 27 to 60. Women comprised 37% of the participants.
Participants read the same scenario and instructions as in Study 2, with an added page of guidelines for giving developmental feedback ( S8 Text ). They then completed the same post-discussion questionnaires used for the pre-post group of Study 2, minus the ratings of performance quality and importance for various aspects of the job, which showed no effects in Study 2. (The full text of the questionnaires is provided in S9 and S10 Texts). Taken together, these modifications kept the procedure to about the same length as in Study 2. This study was approved by the Institutional Review Board at the University of Melbourne. Written consent was obtained.
As in Study 2, we calculated the sum of the percentages of attributions assigned to internal causes (ability and personality + effort and attention), applying an arcsine transformation. As before, we analyzed the internal attributions measure with a mixed-model ANOVA treating each dyad as a unit. There were two within-dyads variables, role (provider or recipient) and outcomes (successes or failures), and one between-dyads variable (guideline version). There were no effects involving guideline version (all F < 1). The main effects of role (F(1, 79) = 50.12, p < .001, η² = .39) and outcomes (F(1, 79) = 113.8, p < .001, η² = .59) and the interaction between them (F(1, 79) = 86.34, p < .001, η² = .52) are displayed in Fig 3, along with the parallel post-feedback results from the previous two studies. As in Study 2, the two parties’ post-discussion attributions were well apart on both successes and, especially, failures (t(80) = 3.3 and 9.4, respectively, both p ≤ .001). Again, the correlations between the provider’s and the recipient’s post-conversation performance attributions across dyads were not significant for either successes (r(79) = -.04, p > .69) or failures (r(79) = -.13, p > .23), suggesting that conversation does not lead the dyad to a common understanding of what led to good or poor performance.
Results are shown by role (provider vs. recipient of feedback) and valence/outcomes (positive feedback for successes vs. negative feedback for failures), following feedback conversation. Error bars show standard errors.
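The internal-attributions measure analyzed above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name and example percentages are ours.

```python
import numpy as np

def internal_attribution_score(pct_ability_personality, pct_effort_attention):
    """Sum the percentage attributions assigned to internal causes and apply
    the arcsine(square-root) transform, a standard variance-stabilizing
    transformation for proportion data prior to ANOVA."""
    p = (np.asarray(pct_ability_personality, float)
         + np.asarray(pct_effort_attention, float)) / 100.0  # to a proportion
    return np.arcsin(np.sqrt(p))
```

For example, a recipient assigning 40% of an outcome to ability/personality and 35% to effort/attention gets a transformed score of arcsin(√0.75) ≈ 1.047 radians.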
We conducted regression analyses of the recipient’s feedback acceptance and intention to change as in Study 2. The regression models included three predictors: future focus, attribution disagreement, and feedback favorability. Results, shown in Table 4 , replicated our Study 2 finding that future focus is the best predictor of both feedback acceptance and intention to change. As before, attribution disagreement predicted lower acceptance, but in this study it also predicted less intention to change. We again found that feedback favorability ratings were associated with greater acceptance, but this time, not with intention to change. Recipients and providers were again significantly correlated in their judgments of how future focused the conversation was ( r (79) = .299, p = .007).
| | Feedback Acceptance [.373] | | | Intention to Change [.323] | | |
|---|---|---|---|---|---|---|
| | β | t(77) | p | β | t(77) | p |
| Future focus | .411 | 4.432 | < .001 | .549 | 5.697 | .001 |
| Attribution disagreement | -.193 | -2.131 | .036 | -.198 | -2.105 | .039 |
| Favorability | .284 | 3.017 | .003 | -.050 | -.516 | .607 |

Numbers in brackets are adjusted R²s.
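The coefficients in Table 4 are standardized betas. A minimal numpy-only sketch of how such coefficients can be computed (one common approach, assumed rather than taken from the authors' analysis scripts):

```python
import numpy as np

def standardized_betas(X, y):
    """OLS regression after z-scoring each predictor and the outcome;
    the fitted slopes are then standardized coefficients (betas)."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yz = (y - y.mean()) / y.std(ddof=1)
    A = np.column_stack([np.ones(len(yz)), Xz])      # intercept + predictors
    coef, *_ = np.linalg.lstsq(A, yz, rcond=None)
    return coef[1:]                                  # drop the intercept (~0 by construction)
```

Because both sides are z-scored, the betas are invariant to the original measurement scales, which makes predictors such as future focus and favorability directly comparable.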
Future focus, as perceived by the recipients of feedback, was once again the strongest predictor of their acceptance of the feedback and the strongest predictor of their intention to change. Conversely, attribution disagreement between the provider and recipient of feedback was associated with lower feedback acceptance and weaker intention to change. As in Studies 1 and 2, recipients made more internal attributions for successes than providers did and, especially, more external attributions for failures. The added guidelines in this study emphasizing performance-improvement goals over evaluative ones did not alleviate provider-recipient attribution differences. Indeed, those differences were considerably larger in this study than in the previous one and were more similar to those seen in Study 1 (see Fig 3 ).
The strongest predictor of feedback effectiveness is the recipient’s perception that the feedback conversation focused on plans for the future rather than analysis of the past. We seek here to elucidate the relationship between future focus and feedback effectiveness by looking at the interrelations among the three predictors of effectiveness we studied: future focus, attribution disagreement, and feedback favorability.
The analyses that follow include data from all participants who were asked for ratings of future focus, namely those in Study 3 and in the pre-post group of Study 2. We included study as a variable in our analyses; no effects involving the study variable were significant. Nonetheless, because the two studies drew from different samples and used slightly different methods, inferential statistics could be impacted by intraclass correlation within each study. Therefore, we also tested for study-specific differences in parameter estimates using hierarchical linear modeling [ 58 , 59 ]. No significant differences between studies emerged, confirming the appropriateness of combining the data. (The HLM results are provided in S2 Analyses .)
The association between future focus and feedback effectiveness could be mediated by the effects of attribution disagreement and/or feedback favorability. Specifically, it could be that perceiving the conversation as more future focused is associated with closer agreement on attributions or with perceiving the feedback as more favorable, and one or both of those latter two effects leads to improved feedback effectiveness. Tests of mediation, following the methods of Kenny and colleagues [ 60 ], suggest otherwise (see Fig 4 ). These analyses partition the total associations of future focus with feedback acceptance and with intention to change into direct effects and indirect effects. Indirect effects via reduced attribution disagreement were 6.2% of the relation of future focus to feedback acceptance and 2.2% to intention to change. Indirect effects via improved perceptions of feedback favorability were 20.8% of the relation of future focus to feedback acceptance and 4.5% to intention to change. Thus, there is little to suggest that closer agreement on attributions or improved perceptions of feedback favorability account for the benefits of future focus on feedback effectiveness.
The two feedback effectiveness measures are feedback acceptance and intention to change. Following Kenny (2018), standardized regression coefficients are shown for the relations between future focus and two hypothesized mediators, attribution disagreement and feedback favorability ( a ), the mediators and the feedback effectiveness measures controlling for future focus ( b ), future focus and the effectiveness measures ( c ), and future focus and the effectiveness measures controlling for the mediator ( c′ ). The total effect ( c ) equals the direct effect ( c′ ) plus the indirect effect ( a · b ). Data are from Studies 2 and 3. a p = .072; * p = .028; ** p < .001.
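The partition of total effects into direct and indirect components follows the standard single-mediator a/b/c/c′ logic described in the caption above. A hedged numpy sketch (variable names ours; the authors used the methods of Kenny and colleagues):

```python
import numpy as np

def _slopes(y, X):
    """OLS slopes of y on the columns of X (intercept fitted, then dropped)."""
    A = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(A, y, rcond=None)[0][1:]

def mediation_decomposition(x, m, y):
    """Single-mediator decomposition: total effect c = direct c' + indirect a*b."""
    c = _slopes(y, x[:, None])[0]                     # x -> y (total effect)
    a = _slopes(m, x[:, None])[0]                     # x -> mediator
    b, c_prime = _slopes(y, np.column_stack([m, x]))  # mediator and x -> y
    indirect = a * b
    return {"c": c, "c_prime": c_prime, "indirect": indirect,
            "pct_mediated": 100.0 * indirect / c}
```

For OLS this decomposition is exact: the total effect always equals the direct effect plus the indirect effect, so the percentage mediated (as reported in the text, e.g., 6.2% via attribution disagreement) is simply 100·(a·b)/c.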
Future focus might have synergistic or moderating effects. In particular, we hypothesized that perceiving the conversation as more future focused may moderate the negative impact of attribution disagreement on feedback effectiveness. Alternatively, future focus may be especially beneficial when agreement about attributions is good, or when attribution differences are neither so big that they cannot be put aside, nor so small that the parties see eye to eye even when they focus on the past. Similarly, future focus may be especially beneficial when feedback is most unfavorable to the recipient, or when it’s most favorable, or when it is neither so negative that the recipients can’t move past it, nor so positive that the recipients accept it even when the conversation focuses on the past.
We conducted regression analyses with feedback acceptance and intention to change as dependent variables and future focus, feedback favorability, attribution disagreement, and their first-order interactions as predictors. Because some plausible interactions are nonlinear, we defined low, intermediate, and high values for each of the three predictor variables, dividing the 198 participants as evenly as possible for each. We then partitioned each predictor into linear and quadratic components with one degree of freedom each. With linear and quadratic components of three predictors plus a binary variable for Study 2 vs. Study 3, there were seven potential linear effects and 18 possible two-way interactions. We used a stepwise procedure to select which interactions to include in our regressions, using an inclusion parameter of p < .15. Results are shown in Table 5 .
| | Feedback acceptance | | | Intention to change | | |
|---|---|---|---|---|---|---|
| | β | t | p | β | t | p |
| Future focus—Linear | 0.487 | 5.09 | < .001 | 0.639 | 11.51 | < .001 |
| Future focus—Quadratic | 0.024 | 0.40 | .687 | -0.068 | -1.27 | .206 |
| Feedback favorability—Linear | 0.268 | 4.36 | < .001 | 0.096 | 1.74 | .083 |
| Feedback favorability—Quadratic | -0.067 | -1.12 | .265 | -0.029 | -0.55 | .584 |
| Attribution disagreement—Linear | -0.226 | -3.57 | .001 | -0.148 | -2.60 | .010 |
| Attribution disagreement—Quadratic | -0.094 | -1.62 | .108 | -0.088 | -1.69 | .093 |
| Study 2 vs. 3 | 0.073 | 1.13 | .259 | -0.078 | -1.34 | .182 |
| Future focus—Linear x Feedback favorability—Linear | -0.119 | -1.91 | .057 | -0.116 | -2.09 | .038 |
| Future focus—Linear x Attribution disagreement—Linear | | | | -0.095 | -1.83 | .070 |
| Future focus—Linear x Study | -0.136 | -1.46 | .145 | | | |
| Feedback favorability—Quadratic x Attribution disagreement—Quadratic | 0.084 | 1.60 | .112 | | | |
Models include all main effects and those first-order interactions that met an entry criterion of p < .15, plus data source (Study 2 vs. Study 3). Statistically significant values are underlined.
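The partition of each predictor into low/intermediate/high groups with one-df linear and quadratic components can be sketched like this. The specific contrast weights are our illustrative choice (a standard orthogonal coding); the authors do not report theirs.

```python
import numpy as np

def tertile_contrasts(x):
    """Rank-split a continuous predictor into low/intermediate/high groups
    of near-equal size, then code the three-level factor with orthogonal
    linear (-1, 0, 1) and quadratic (1, -2, 1) contrasts, one df each."""
    x = np.asarray(x, float)
    ranks = np.argsort(np.argsort(x))             # 0 .. n-1, ties broken by order
    tertile = np.minimum(ranks * 3 // len(x), 2)  # group labels 0, 1, 2
    linear = tertile - 1.0                        # -1, 0, 1
    quadratic = np.where(tertile == 1, -2.0, 1.0) # 1, -2, 1
    return tertile, linear, quadratic
```

With balanced groups, the linear and quadratic codes are orthogonal, so each component captures a distinct one-df trend that can enter the regression separately (and in interactions).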
Future focus interacted with feedback favorability—marginally for feedback acceptance and significantly for intention to change. As shown in Fig 5, recipients who gave low or intermediate ratings for future focus accepted the feedback less when it was most negative (t(128) = 5.21, p < .001) and similarly, reported less inclination to change (t(128) = 3.23, p = .002). In contrast, the recipients who rated the feedback discussion as most future focused accepted their feedback and indicated high intention to change at all levels of feedback favorability. These patterns suggest that perceiving future focus moderates the deleterious effect of negative feedback on feedback effectiveness.
Results for each measure of feedback effectiveness are shown by three levels of perceived future focus and three levels of perceived feedback favorability. Error bars show standard errors. Data are from Studies 2 and 3.
On the other hand, we find no evidence that future focus moderates the negative effect of attribution disagreement on feedback effectiveness. Future focus did interact marginally with attribution disagreement for intention to change. However, the benefits of perceiving high vs. low future focus may, in fact, be stronger when there is closer agreement about attributions: The increase in intention to change between low and high future focus groups was 2.30 with high disagreement, 2.37 with intermediate disagreement, and 3.24 in dyads with low disagreement, on a scale from 1 to 7.
Regression-tree analyses can provide additional insights into the non-linear relations among variables [ 61 ], with a better visualization of the best and worst conditions to facilitate feedback acceptance and intention to change. These analyses use the predictors (here, future focus, attribution disagreement, and feedback favorability) to divide participants into subgroups empirically, maximizing the extent to which values on the dependent measure are homogeneous within subgroups and different between them. We generated regression trees for each of our two effectiveness measures, feedback acceptance and intention to change. Fig 6 shows the results, including all subgroups (nodes) with N = 10 or more.
The trees depict the effects of future focus, attribution disagreement, and feedback favorability on our two measures of feedback effectiveness. The width of branches is proportional to the number of participants in that branch. Node 0 is the full sample of 198. Values on the X axis are standardized values for each dependent measure. Data are from Studies 2 and 3.
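At the core of a regression tree is a greedy search for the split that makes the outcome most homogeneous within the resulting subgroups. A minimal single-split sketch follows; the published trees were presumably grown with standard software, and this code only illustrates the criterion, including the minimum node size of 10 used when reporting nodes.

```python
import numpy as np

def best_split(X, y, min_leaf=10):
    """Find the (feature, threshold) binary split that minimizes the summed
    within-node squared error, honoring a minimum node size. Applied
    recursively to each side, this grows a CART-style regression tree."""
    n, d = X.shape
    best = None
    for j in range(d):
        for t in np.unique(X[:, j])[:-1]:          # candidate thresholds
            left = X[:, j] <= t
            nl = int(left.sum())
            if nl < min_leaf or n - nl < min_leaf:  # enforce minimum node size
                continue
            sse = (((y[left] - y[left].mean()) ** 2).sum()
                   + ((y[~left] - y[~left].mean()) ** 2).sum())
            if best is None or sse < best[0]:
                best = (sse, j, float(t))
    return best  # (sse, feature index, threshold), or None if no legal split
```

Because splits are chosen empirically, the tree orders the predictors by how much homogeneity they buy, which is why future focus appearing at the top of both trees indicates it is the most important variable.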
Both trees show that future focus is the most important variable, dividing into lower and higher branches at Nodes 1 and 2, and further distinguishing highest-future groups at Nodes A8 and B6. These representations also reinforce the conclusion that perceived future focus does not operate mainly via an association with more positive feedback or with better agreement on attributions. However, attribution disagreement does play a role, with more agreement leading to better acceptance of feedback and greater intention to change, as long as future focus is at least moderately high (Nodes A3 vs. A4 and B7 vs. B8). (The lack of effect at Node B6 is likely a ceiling effect.) Unfavorable feedback makes matters worse under adverse conditions: when future focus is low (Nodes B3 vs. B4) or when future focus is moderate but attribution disagreement is large (nodes A5 vs. A6).
Our research was motivated by a need to understand why performance feedback conversations do not benefit performance to the extent intended and what might be done to improve that situation. We investigated how providers and recipients of workplace feedback differ in their judgements about the causes of performance and the credibility of feedback, and how feedback discussions impact provider-recipient (dis)agreement and feedback effectiveness. We were particularly interested in how interpretations of past performance, feedback acceptance, and intention to change are affected by the recipient’s perception of temporal focus, that is, the extent to which the feedback discussion focuses on past versus future behavior.
Management theorists typically advocate evaluating performance relative to established goals and standards, diagnosing the causes of substandard performance, and providing feedback so that people can learn from the past [ 19 ]. They also posit that feedback recipients must recognize there is a problem, accept the feedback as accurate, and find the feedback providers fair and credible in order for performance feedback to motivate improvement [ 7 , 14 , 35 ]. Unfortunately, we know that performance feedback often does not motivate improvement [ 4 ]. Our research contributes in several ways to understanding why that is and how feedback conversations might be made more effective.
Decades of attribution theory and research have elucidated the biases thought to produce discrepant explanations for performance between the providers and recipients of feedback. We show that for negative feedback, these discrepancies are prevalent in the workplace. We also show that larger attribution discrepancies are associated with greater rejection of feedback and, in our performance review simulations, with weaker intention to change. These findings support recent research and theory linking performance feedback, work-related decision making, and attribution theory: Instead of changing behavior in response to mixed or negative feedback, people make self-enhancing and self-protecting attributions and judgements they can use to justify not changing [ 8 , 14 , 62 ].
Our research suggests that the common practice of discussing the employees’ past performance, with an emphasis on how and why outcomes occurred and what that implies about the employees’ strengths and weaknesses, can be counterproductive. Although the parties to a feedback discussion may agree reasonably well about which goals and standards were met or unmet, they are unlikely to converge on an understanding of the causes of unmet goals and standards, even with engaged give and take. Instead, the feedback conversation creates or exacerbates disagreement about the causes of performance outcomes, leading feedback recipients to take more credit for their successes and less responsibility for their failures. This suggests that feedback conversations that attempt to diagnose past performance act as another form of self-threat that increases the self-serving bias [ 33 ]. Surely this runs counter to what the feedback provider intended.
At the same time, we find that self-serving attributions need not stand in the way of feedback acceptance and motivation to improve. A key discovery in our research is that the more recipients feel the feedback focuses on next steps and future actions, the more they accept the feedback and the more they intend to act on it. In fact, when feedback is perceived to be highly future focused, feedback recipients respond as well to predominantly negative feedback as to predominantly positive feedback. Future focus does not nullify self-serving attributions and their detrimental effects [see also 63 ], but it does enable productive feedback discussions despite them.
We used two complementary research methods. Study 1 used a more naturalistic and thus more ecologically valid method, collecting retrospective self-reports from hundreds of managers about actual feedback interactions in a wide variety of work situations [see 64 ]. Studies 2 and 3 used a role-play method that allowed us to give all participants identical workplace performance information, a good portion of which was undisputed and quantitative. With that design, response differences between the providers and recipients of feedback are due entirely to role, unconfounded by differences in knowledge and experience.
What role plays cannot establish is the magnitude of effects in organizational settings. Attribution misalignment and resistance to feedback might easily be much stronger in real workplace performance reviews where it would be rare for the parties to arrive with identical, largely unambiguous information. Moreover, managers’ investment in the monetary and career outcomes of performance reviews might lead feedback recipients to feel more threatened than in a role play and thus to disagree even more with unfavorable feedback. On the other hand, the desire to maintain employment and/or to maintain good relationships with supervisors might motivate managers to re-assess their past achievements, to change their private attributions, and to be more accepting of unfavorable feedback. Data from our role-play studies may not speak to the magnitude of resistance to feedback in work settings (although our survey results suggest it’s substantial), but they do show that feedback acceptance is increased when the participants perceive their feedback to be focused on the future.
There are few research topics more important to the study of organizations than performance management. Feedback conversations are a cornerstone of most individual and team performance management, yet there is still much we do not know about what should be said, how, and why. Based on research into the motivational advantages of prospective thinking, we hypothesized that feedback discussions perceived as future focused are the most effective kind for generating acceptance of feedback and fostering positive behavior change. Our findings support that hypothesis. The present research contributes to the literature on prospection by highlighting the role of interpersonal interactions in facilitating prefactual thinking and any associated advantages for goal pursuit [ 39 , 43 – 45 , 63 , 65 ]. In this section we suggest three lines of future research: (a) field studies and interventions; (b) research into the potential role of self-beliefs; and (c) exploration of the conversational dynamics associated with feedback perceived as past vs. future focused.
Testing feedback interventions in the workplace and other field settings is an important future step toward corroborating, elaborating, or correcting our findings. It will be necessary to develop effective means to foster a more future-focused style of feedback. Then, randomized controlled trials that contrast future-focused with diagnostic feedback can demonstrate the benefits that may accrue from focusing feedback more on future behavior and less on past behavior. Participant evaluations of the feedback discussions can be supplemented by those of neutral observers. Such evaluations are directly relevant to organizational goals, including employee motivation, positive supervisor-supervisee relations, and effective problem solving. Assessing subsequent behavior change and job performance is both important and complicated for evaluating feedback effectiveness: Seeing intentions through to fruition depends on many factors, including individual differences in self-regulation [ 66 , 67 ] and factors beyond people’s control, such as competing commitments, limited resources, and changing priorities [ 68 – 71 ]. Nevertheless, the ultimate proof of future-focused feedback will lie in performance improvement itself.
If future focus enhances feedback effectiveness, it may do so via self-beliefs. Growth mindset and self-efficacy, for example, are self-beliefs that influence how people think about and act on the future. Discussions that focus on what people can do in the future to improve performance may encourage people to view their own behavior as malleable and to view better results as achievable. If future focus helps people access this growth mindset, it should orient them toward mastering challenges and improving the self for the future: Whereas people exercise defensive self-esteem repair when in a fixed mindset, they prefer self-improvement when accessing a growth mindset [ 72 , 73 ]. Similarly, feedback conversations that focus on ways the feedback recipient can attain goals in the future may enhance people’s confidence in their ability to execute the appropriate strategies and necessary behaviors to succeed. Such self-efficacy expectancies have been shown to influence the goals people select, the effort and resources they devote, their persistence in the face of obstacles, and the motivation to get started [ 74 , 75 ]. Thus, research is needed to assess whether future focus alters people’s self-beliefs (or vice versa; see below) and if these, in turn, impact people’s acceptance of feedback and intention to change.
We found sizeable variation in the extent to which dyads reported focusing on the future. Pre-existing individual differences in self-beliefs may contribute to that variation. Recent research, for example, finds that professors with more growth mindsets have students who perform better and report being more motivated to do their best work [ 76 ]. In the case of a feedback conversation, we suspect that either party can initiate thinking prospectively, but both must participate in it to sustain the benefits.
Unlike most studies of people’s reactions to mixed or negative feedback, our studies use face-to-face, real-time interaction, that is to say, two people in conversation. Might conversational dynamics associated with future-focused feedback contribute to its being better accepted and more motivating than feedback focused on the past? Do managers who focus more on the future listen to other people’s ideas and perspectives in ways that are perceived as more empathic and nonjudgmental? Do these more prospective discussions elicit greater cooperative problem solving? Research on conversation in the workplace is in its early stages [ 77 ], but some studies support the idea that high quality listening and partner responsiveness might reduce defensiveness, increase self-awareness, or produce greater willingness to consider new perspectives and ideas [ 78 , 79 ].
Our studies provide the first empirical evidence that managers can make feedback more effective by focusing it on the future. Future-focused feedback, as we define it, is characterized by prospective thinking and by collaboration in generating ideas, planning, and problem-solving. We assessed the degree of future focus by asking participants to rate the extent to which the feedback discussion focused on future behavior, the two parties spent time generating new ideas for next steps, and the conversation centered on how to make the recipient successful. This differs greatly from feedback research that distinguishes past vs. future orientation “using minimal rewording of each critique comment” (e.g., you didn’t always demonstrate awareness of… vs. you should aim to demonstrate more awareness of…) [ 80 p. 1866].
Because future-focused feedback is feedback, it also differs from both advice giving and “feedforward” (although it might be advantageous to incorporate these): It differs from Kluger and Nir’s feedforward interview, which queries how the conditions that enabled a person’s positive work experiences might be replicated in the future [ 81 ], and from Goldsmith’s feedforward exercise, which involves requesting and receiving suggestions for the future, without discussion or feedback [ 82 ].
The scenario at the very start of this article asks, “What can Chris say to get through to Taylor?” A future-focused answer might include the following: Chris first clarifies that the purpose of the feedback is to improve Taylor’s future performance, with the goal of furthering Taylor’s career. Chris applauds Taylor’s successes and is forthright and specific about Taylor’s shortcomings, while avoiding discussion of causes and explanations. Chris signals belief that Taylor has the motivation and competence to improve [ 83 ]. Chris then initiates a discussion in which they work together to develop ideas for how Taylor can achieve better outcomes in the future. (For a more detailed illustration of a future-focused conversation, see S11 Text .)
Our research supports the intriguing possibility that the future of feedback could be more effective and less aversive than its past. Performance management need not be tied to unearthing the determinants of past performance and holding people to account for past failures. Rather, performance may be managed most successfully by collaborating with the feedback recipient to generate next steps, to develop opportunities for interesting and worthwhile endeavors, and to enlarge the vision of what the recipient could accomplish. Most organizations and most managers want their workers to perform well. Most workers wish to succeed at their jobs. Everyone benefits when feedback discussions develop new ideas and solutions and when the recipients of feedback are motivated to make changes based on those. A future-focused approach to feedback holds great promise for motivating future performance improvement.
Acknowledgments
For helpful comments on earlier drafts of this paper, we are grateful to Pino Audia, Angelo Denisi, Nick Epley, Ayelet Fishbach, Brian Gibbs, Reid Hastie, Chris Hsee, Remus Ilies, David Nussbaum, Jay Russo, Paul Schoemaker, William Swann, and Kathleen Vohs.
This research received funding from the Melbourne Business School while the first three authors were either visiting (JG, JK) or permanent (IOW) faculty there. While working on this research, the first two authors (JG, JK) also worked as owners and employees of management consulting firm Humanly Possible. Humanly Possible provided support in the form of salaries and profit-sharing compensation for authors JG and JK, but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific roles of these authors are articulated in the “author contributions” section.
PONE-D-20-05644
The future of feedback: Motivating performance improvement
Dear Dr Klayman,
Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.
We would appreciate receiving your revised manuscript by May 22, 2020, at 11:59 PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.
To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols
Please include the following items when submitting your revised manuscript:
Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.
We look forward to receiving your revised manuscript.
Kind regards,
Paola Iannello
Academic Editor
Journal requirements:
When submitting your revision, we need you to address these additional requirements:
1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at http://www.plosone.org/attachments/PLOSOne_formatting_sample_main_body.pdf and http://www.plosone.org/attachments/PLOSOne_formatting_sample_title_authors_affiliations.pdf
2. Please modify the title to ensure that it meets PLOS' guidelines (https://journals.plos.org/plosone/s/submission-guidelines#loc-title). In particular, the title should be "specific, descriptive, concise, and comprehensible to readers outside the field," and in this case it is not informative and specific about your study's scope and methodology.
3. Thank you for stating the following in the Competing Interests section:
"The authors have declared that no competing interests exist."
We note that one or more of the authors are employed by a commercial company: Humanly Possible, Inc.
1. Please provide an amended Funding Statement declaring this commercial affiliation, as well as a statement regarding the Role of Funders in your study. If the funding organization did not play a role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript and only provided financial support in the form of authors' salaries and/or research materials, please review your statements relating to the author contributions, and ensure you have specifically and accurately indicated the role(s) that these authors had in your study. You can update author roles in the Author Contributions section of the online submission form.
Please also include the following statement within your amended Funding Statement.
“The funder provided support in the form of salaries for authors [insert relevant initials], but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific roles of these authors are articulated in the ‘author contributions’ section.”
If your commercial affiliation did play a role in your study, please state and explain this role within your updated Funding Statement.
2. Please also provide an updated Competing Interests Statement declaring this commercial affiliation along with any other relevant declarations relating to employment, consultancy, patents, products in development, or marketed products, etc.
Within your Competing Interests Statement, please confirm that this commercial affiliation does not alter your adherence to all PLOS ONE policies on sharing data and materials by including the following statement: "This does not alter our adherence to PLOS ONE policies on sharing data and materials." (as detailed online in our guide for authors: http://journals.plos.org/plosone/s/competing-interests). If this adherence statement is not accurate and there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared.
Please include both an updated Funding Statement and Competing Interests Statement in your cover letter. We will change the online submission form on your behalf.
Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency. PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal. Competing interests can arise in relationship to an organization or another person. Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests
Reviewers' comments:
Reviewer's Responses to Questions
Comments to the Author
1. Is the manuscript technically sound, and do the data support the conclusions?
The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer #1: Yes
Reviewer #2: Yes
2. Has the statistical analysis been performed appropriately and rigorously?
3. Have the authors made all data underlying the findings in their manuscript fully available?
The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.
4. Is the manuscript presented in an intelligible fashion and written in standard English?
PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.
5. Review Comments to the Author
Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)
Reviewer #1: 1. I enjoyed reading this manuscript, but it appears to be unnecessarily long in parts, and its readability would benefit from a more concise style. I would recommend condensing some parts; for example, the methods section for Study 2 was overly long and lacked clarity in parts. The description of the second questionnaire was a little confusing in terms of the consistency in how items were measured, and the hypothesis was not clear.
2. In the ethics statement for Study 1 (line 184), please explain the rationale behind the waiver of consent.
3. Procedure (line 187) please give details of the survey platform used.
4. Results: Please include the number of participants in each group.
5. Please comment on what normality checks were performed to assess the distribution of the data.
6. Line 470, correlations are discussed but I can’t see a table to support these.
7. The discussion did not address the results in relation to previous literature and lacked a theoretical explanation of the findings (see, for example, 'Korn CW, Rosenblau G, Rodriguez Buritica JM, Heekeren HR (2016) Performance Feedback Processing Is Positively Biased As Predicted by Attribution Theory. PLoS ONE 11(2)' for a discussion of attributional style and self-serving bias). I recommend some rewriting of the discussion with more reference to theory.
8. Some acknowledgement of the effect of individual differences in self-regulation would be useful to include as this may influence how feedback is received in terms of attributions. See for example, ‘Donovan, JJ, Lorenzet, SJ, Dwight, SA, Schneider, D. The impact of goal progress and individual differences on self‐regulation in training. J Appl Soc Psychol. 2018; 48: 661– 674’.
9. The suggestions for improvement at the end of the study would be better condensed into a brief suggestion of methods.
Reviewer #2: The paper reports interesting and comprehensive work on a relevant issue in organizational psychology. Both the theoretical frame and the applied methodology are original and thorough, though the use of role-play raises some doubts about the robustness of the results (some concerns are raised by the authors themselves, lines 752-760). This is, in my opinion, the main limitation of Studies 2 and 3. I would suggest that the authors insert wider reasoning about the choice of using this method to collect their data and the pros and cons.
In the "General Discussion" section, the authors state that "We investigated the sources of agreement and disagreement between feedback provider and recipient" (lines 712-713). I strongly suggest that this sentence be modified, since it does not correctly describe either the aim or the results of Study 1.
6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.
If you choose “no”, your identity will remain anonymous but your review may still be made public.
Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #1: No
Reviewer #2: Yes: Federica Biassoni
[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]
While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.
12 May 2020
Please see uploaded document Response to Reviewers. Text copied here.
Response to Reviewers
We wish to thank the reviewers for their very helpful and constructive comments. We especially appreciate the clarity and specificity with which they framed their suggestions. Below we respond to each reviewer recommendation.
Reviewer #1:
1. I enjoyed reading this manuscript, but it appears to be unnecessarily long in parts, and its readability would benefit from a more concise style. I would recommend condensing some parts; for example, the methods section for Study 2 was overly long and lacked clarity in parts. The description of the second questionnaire was a little confusing in terms of the consistency in how items were measured, and the hypothesis was not clear.
We revised the methods section for Study 2 (former lines 274-279; 285-414, revision lines 276-281; 299-402). The new version is a full page shorter and, in line with the reviewer’s suggestion, we believe this more concise version is now more readable. It includes a revised description of the post-discussion questionnaires (former 346-367; revision 350-361), clarifying the sequence and types of questions provided to each group. It also includes revisions, mainly in the Design section (former 387-414; revision lines 377-402) to clarify how the various measures related to our hypotheses.
Study 1 was approved by the Institutional Review Board at the University of Chicago, which waived the requirement for written consent as was its customary policy for studies judged to be minimal risk, involving only individual, anonymized survey responses. Their decision cited US Code 45 CFR 46.101(b). Citing the code in our manuscript seemed overly legalistic, but we have added the rest of the rationale to the ethics statement (former lines 184-185; revision 184-186).
We now identify the platform as Cogix ViewsFlash (revision line 188).
We have added the requested information for Study 1 (revision lines 214-215). Following up on the suggestion, we also made it easier to locate the corresponding information for Study 2 (revision lines 316-317).
The general consensus is that the analyses we use, i.e., ANOVA and linear regression, are generally quite robust to moderate violations of normality with Ns on the order of ours (e.g., Blanca, Alarcón, Arnau, Bono, & Bendayan, Psicothema, 2017; Schmidt & Finan, Journal of Clinical Epidemiology, 2018; Ali & Sharma, Journal of Econometrics, 1996; Schmider, Ziegler, Danay, Beyer, & Bühner, Methodology, 2010). Nevertheless, we used an arcsine transformation on the variables a priori most likely to suffer from systematic deviations, namely the attribution proportions. Most authors recommend checking for major deviations from normality by plotting model-predicted values against residuals and against the normal distribution (using P-P or Q-Q plots). We did that for our analyses (graphs attached) and found no troublesome deviations, with the possible exception of one variable of minor importance to our main results or theory, namely performance quality ratings for successes in Study 2. We note in the paper that that variable may suffer from ceiling effects (former 468-469, revision 456-457). We did not add a discussion of normality to the paper because of the increased length and complexity that would involve, and because it is seldom an issue of concern with data and analyses like ours. However, we could include the graphs we have attached here as supplemental material if you tell us you would like us to do so.
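The transformation and normality check described here can be sketched in a few lines. This is illustrative only: the simulated proportions below are hypothetical, not the study's data, and `scipy.stats.probplot` is used to compute the Q-Q comparison numerically rather than as a plot.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical attribution proportions in [0, 1] (simulated, not the study's data)
p = rng.beta(2, 5, size=200)

# Variance-stabilizing arcsine (angular) transformation for proportions
p_t = np.arcsin(np.sqrt(p))

# Q-Q comparison against the normal distribution without plotting:
# with fit=True (the default), probplot returns the ordered quantile pairs
# plus the least-squares line and its correlation coefficient r
(osm, osr), (slope, intercept, r) = stats.probplot(p_t, dist="norm")

# r close to 1 indicates no troublesome deviation from normality
print(f"Q-Q correlation: {r:.3f}")
```

A residuals-versus-predicted check for a regression model could be done analogously by applying `probplot` to the model residuals.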
Thank you for alerting us to this inadvertent omission. We now include complete correlation tables for all the variables analyzed in each Study in the supplemental materials: S2 Table for Study 1 (revision lines 224-225) and S11 Tables for Studies 2 and 3 separately and combined (revision lines 458-459), with provider-recipient correlations identified by color shading. (S2 was formerly the dataset for Study 1, but now data from all three studies are contained in S17.)
To better address our results in relation to previous attribution literature and theory, we have revised former lines 723-740 in the General Discussion. Now we more clearly discuss our findings in relation to self-serving bias, self-threat, and both historical and more recent formulations of attribution theory, including the helpful reference the reviewer provided (revision lines 708-735). We have also added a brief discussion of how our results relate to previous literature on future thinking (revision lines 760-762). We attempted to minimize redundancy with the Introduction section. The new material includes several new references.
We added mention in the General Discussion of individual differences in self-regulation, citing two references, including the one helpfully provided by Reviewer #1 (revision line 776). Additionally, we reworded former lines 798-799 (revision lines 793-794) to make it clearer that we are acknowledging individual differences there as well.
We condensed former lines 828-846 from 19 lines to 8 lines (revision lines 823-830), referring the interested reader to new Supporting Information S16 Text for the expanded version. We trust this solution meets the recommendation for a brief suggestion of methods, while also satisfying the interests of those seeking more detail.
Reviewer #2:
1. The paper reports interesting and comprehensive work on a relevant issue in organizational psychology. Both the theoretical frame and the applied methodology are original and thorough, though the use of role-play raises some doubts about the robustness of the results (some concerns are raised by the authors themselves, lines 752-760). This is, in my opinion, the main limitation of Studies 2 and 3. I would suggest that the authors insert wider reasoning about the choice of using this method to collect their data and the pros and cons.
We now include a wider reasoning about our choice to use a role-play method and the pros and cons. The new version comprises revision lines 282-298. (We also revised the subsequent paragraph for increased clarity, given the insertion of the new paragraph about the role-play method.)
2. In the "General Discussion" section, the authors state that "We investigated the sources of agreement and disagreement between feedback provider and recipient" (lines 712-713). I strongly suggest that this sentence be modified, since it does not correctly describe either the aim or the results of Study 1.
Thank you for your careful reading. We have re-written that sentence to more accurately capture the results of Study 1 as well as the other two studies (revised lines 697-700).
[Figures attached--please see uploaded document Response to Reviewers.]
Submitted filename: Response to Reviewers.docx
27 May 2020
The future of feedback: Survey and role-play investigations into causal attributions, feedback acceptance, motivation to improve, and the potential benefits of future focus for increasing feedback effectiveness in the workplace
PONE-D-20-05644R1
Dear Dr. Klayman,
We are pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it complies with all outstanding technical requirements.
Within one week, you will receive an e-mail containing information on the amendments required prior to publication. When all required modifications have been addressed, you will receive a formal acceptance letter and your manuscript will proceed to our production department and be scheduled for publication.
Shortly after the formal acceptance letter is sent, an invoice for payment will follow. To ensure an efficient production and billing process, please log into Editorial Manager at https://www.editorialmanager.com/pone/, click the "Update My Information" link at the top of the page, and update your user information. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.
If your institution or institutions have a press office, please notify them about your upcoming paper to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, you must inform our press team as soon as possible and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.
With kind regards,
Additional Editor Comments (optional):
1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.
Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed
2. Is the manuscript technically sound, and do the data support the conclusions?
3. Has the statistical analysis been performed appropriately and rigorously?
4. Have the authors made all data underlying the findings in their manuscript fully available?
5. Is the manuscript presented in an intelligible fashion and written in standard English?
6. Review Comments to the Author
Reviewer #1: (No Response)
Reviewer #2: (No Response)
7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.
The future of feedback: Motivating performance improvement through future-focused feedback
Dear Dr. Klayman:
I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.
If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.
If we can help with anything else, please email us at plosone@plos.org.
Thank you for submitting your work to PLOS ONE and supporting open access.
PLOS ONE Editorial Office Staff
on behalf of
Dr. Paola Iannello
Objective: To review evaluation literature concerning people, organizational, and social issues and provide recommendations for future research.
Method: Analyze this research and make recommendations.
Results and conclusions: Evaluation research is key in identifying how people, organizational, and social issues - all crucial to system design, development, implementation, and use - interplay with informatics projects. Building on a long history of contributions and using a variety of methods, researchers continue developing evaluation theories and methods while producing significant interesting studies. We recommend that future research: 1) Address concerns of the many individuals involved in or affected by informatics applications. 2) Conduct studies in different type and size sites, and with different scopes of systems and different groups of users. Do multi-site or multi-system comparative studies. 3) Incorporate evaluation into all phases of a project. 4) Study failures, partial successes, and changes in project definition or outcome. 5) Employ evaluation approaches that take account of the shifting nature of health care and project environments, and do formative evaluations. 6) Incorporate people, social, organizational, cultural, and concomitant ethical issues into the mainstream of medical informatics. 7) Diversify research approaches and continue to develop new approaches. 8) Conduct investigations at different levels of analysis. 9) Integrate findings from different applications and contextual settings, different areas of health care, studies in other disciplines, and also work that is not published in traditional research outlets. 10) Develop and test theory to inform both further evaluation research and informatics practice.
Before January comes to a close, I thought I would make a few predictions. Ten to be exact. That’s what blogs do in the new year, after all.
Rather than make predictions about what will happen this year—in which case I would surely be caught out—I make predictions about what will happen over the next ten years. It’s safer that way, and more fun as I can set my imagination free.
My predictions are not based on my ideal future. I believe that some of my predictions, if they came to pass, would present serious challenges to the field (and to me). Rather, I take trends that I have noticed and push them out to their logical—perhaps extreme—conclusions.
In the next ten years…
(1) Most evaluations will be internal.
The growth of internal evaluation, especially in corporations adopting environmental and social missions, will continue. Eventually, internal evaluation will overshadow external evaluation. The job responsibilities of internal evaluators will expand and routinely include organizational development, strategic planning, and program design. Advances in online data collection and real-time reporting will increase the transparency of internal evaluation, reducing the utility of external consultants.
(2) Evaluation reports will become obsolete.
After-the-fact reports will disappear entirely. Results will be generated and shared automatically—in real time—with links to the raw data and documentation explaining methods, samples, and other technical matters. A new class of predictive reports, preports, will emerge. Preports will suggest specific adjustments to program operations that anticipate demographic shifts, economic shocks, and social trends.
(3) Evaluations will abandon data collection in favor of data mining.
Tremendous amounts of data are being collected in our day-to-day lives and stored digitally. It will become routine for evaluators to access and integrate these data. Standards will be established specifying the type, format, security, and quality of “core data” that are routinely collected from existing sources. As in medicine, core data will represent most of the outcome and process measures that are used in evaluations.
(4) A national registry of evaluations will be created.
Evaluators will begin to record their studies in a central, open-access registry as a requirement of funding. The registry will document research questions, methods, contextual factors, and intended purposes prior to the start of an evaluation. Results will be entered or linked at the end of the evaluation. The stated purpose of the database will be to improve evaluation synthesis, meta-analysis, meta-evaluation, policy planning, and local program design. It will be the subject of prolonged debate.
(5) Evaluations will be conducted in more open ways.
Evaluations will no longer be conducted in silos. Evaluations will be public activities that are discussed and debated before, during, and after they are conducted. Social media, wikis, and websites will be re-imagined as virtual evaluation research centers in which like-minded stakeholders collaborate informally across organizations, geographies, and socioeconomic strata.
(6) The RFP will RIP.
The purpose of an RFP is to help someone choose the best service at the lowest price. RFPs will no longer serve this purpose well because most evaluations will be internal (see 1 above), information about how evaluators conduct their work will be widely available (see 5 above), and relevant data will be immediately accessible (see 3 above). Internal evaluators will simply drop their data—quantitative and qualitative—into competing analysis and reporting apps, and then choose the ones that best meet their needs.
(7) Evaluation theories (plural) will disappear.
Over the past 20 years, there has been a proliferation of theories intended to guide evaluation practice. Over the next ten years, there will be a convergence of theories until one comprehensive, contingent, context-sensitive theory emerges. All evaluators—quantitative and qualitative; process-oriented and outcome-oriented; empowerment and traditional—will be able to use the theory in ways that guide and improve their practice.
(8) The demand for evaluators will continue to grow.
The demand for evaluators has been growing steadily over the past 20 to 30 years. Over the next ten years, the demand will not level off due to the growth of internal evaluation (see 1 above) and the availability of data (see 3 above).
(9) The number of training programs in evaluation will increase.
There is a shortage of evaluation training programs in colleges and universities. The shortage is driven largely by how colleges and universities are organized around disciplines. Evaluation is typically found as a specialty within many disciplines in the same institution. That disciplinary structure will soften and the number of evaluation-specific centers and training programs in academia will grow.
(10) The term evaluation will go out of favor.
The term evaluation sets the process of understanding a program apart from the process of managing a program. Good evaluators have always worked to improve understanding and management. When they do, they have sometimes been criticized for doing more than determining the merit of a program. To more accurately describe what good evaluators do, evaluation will become known by a new name, such as social impact management .
…all we have to do now is wait ten years and see if I am right.
41 Comments
Salaam John,
I can not censor my comments or other’s comments! I like to say these predictions are Very nice and reasonable thinking.
Moein — I am always happy to hear from you. Thanks for the comment.
A question for you:
What predictions would you make for evaluation in your country?
1, 9 & 10.
Moein–If the name *evaluation* goes out of favor, what will replace it?
Currently in Iran, evaluation is not in favor! And I think in the future evaluation may become part of the management process. In sum, I think the next generations of evaluation will evolve within a capacity-building discourse.
Generally, I’d bet on your predictions in descending order. As an evaluator who moved from external to internal evaluation about 7 years ago, I think #1 is a pretty sure bet. I’ve seen my own responsibilities shift dramatically in the past years from evaluation to performance management systems and quality improvement. Likewise, the importance of continuous improvements based on ongoing evaluation findings has long been the earmark of the “best” evaluation partnerships. Regarding #3, I work in public health and we have long relied on ongoing data collection systems–BRFSS, Healthy Youth Survey, disease surveillance systems, immunization records, vital statistics, etc., etc.
I would bet the same way. Without intending to, it seems that I more or less put the predictions in descending order of what I believe is likely.
Another question is whether society would be better off if any of the predictions came true.
For example, I agree that public health and medicine have been at the front of common data definition/collection efforts for some time. That has helped policymakers coordinate public health efforts, researchers interpret findings, and healthcare professionals design programs. It may also be limiting our imagination of what is possible or desirable, and it may privilege those sectors of society that provide more and better data.
I believe the predictions capture where the field is going. I wonder if we will be ready when we get there.
John, that definitely is the crucial question. I remember reviewing AEA’s guidance to the feds regarding internalizing evaluation at the national level. I was a bit alarmed at the thought of evaluation being enlisted to work within a system that is driven by political tides as much as rational processes. Also, as funding for the Behavioral Risk Factor Surveillance System has decreased, the cost per completed survey interview has increased dramatically. This results in a smaller sample, and at the local level we were already struggling to have enough data to say anything about our American Indian and Latino populations. We will need loud voices and commitment to assure that there’s enough data to mine, especially when we want to look at equity issues. Is it time for the canary to sound the alarm?
Your metaphor may be a bit too apt as canaries in mines don’t sound an alarm so much as drop dead, which sets the miners into immediate action. As you point out, we don’t want to wait for some group to be negatively impacted by data policies before we take action. So the big question is this — How do we focus attention on a policy that at the moment is not hurting anyone but at some point in the future will? I wish I knew.
I see advocates and their organizations, such as Angela Glover Blackwell and her staff at PolicyLink at the national level and Rosalinda Guillen and her staff at Comunidad y Comunidad at our county level, raising these concerns and artfully moving the equity movement forward. And where data mining is not possible, they are advocating for data collection systems that include those on whose behalf they advocate.
I tried sharing some of your predictions with a couple of university professor types. Oops. All I got in response was a rather superior sounding comment about, “Oh, I don’t know… external evaluators will still be necessary because of a… oh what is that… a little thing called being ‘objective.'” Sigh. I chose to leave the vicinity rather than try to get into a debate about it. Long story short, I find your predictions thought provoking. And I have the patience to see how well your crystal ball blog entry holds up over the next decade! Thanks as always for your fine thinking.
Not surprising. But keep in mind I wasn’t predicting the end of external evaluation. Just that it will be less important.
Most other fields depend on internally generated information. For example, independent financial audits of corporations only check a small fraction of accounts. Why should social betterment programs require greater scrutiny?
Objectivity is important. Honesty more so. Transparency promotes honesty, imperfectly, but possibly enough that honest insiders may eventually be valued over objective outsiders.
I appreciate the notion of accountability by and to the team as much as accountability to funders. That’s why your predictions resonated so much. It also reminds me a bit of the old phrase, the Wilford Brimley Law: “’cuz it’s the right thing to do.” Which connects with the importance of doing the right things and not just doing things right. I look forward to sharing this list with others.
Pingback: Susan Kistler on The Future of Evaluation: 5 Predictions (building on 10 others!) · AEA365
Pingback: The Future of Evaluation: Part 3 (Two more predictions) « EvaluationBaron
Imagine that these 10 had already come true. What would be your predictions for the next 10 years?
You asked for it, you got it. Look for my 20 year predictions in a new EvalBlog entry in about one week.
Your predictions are very interesting indeed, and I think many of these are already a reality in the health and social development sectors, especially in resource-poor settings. No. 1 is becoming the norm in South Africa, where I work. Internal evaluators (M&E professionals) are leading efforts to strengthen program design and organizational strategic planning processes using internal evaluation findings as well as data mining (No. 3). External evaluations commissioned by donors also draw considerably on existing program data, thereby increasing the importance of M&E managers’ role in ensuring the quality and use of the routine program data gathered by the organization. Your 8th and 9th predictions are a reality in our context as well. There are very few training opportunities in line with the growing demand. We hope this will change gradually as universities adapt to address these needs.
John, I think these are reasonable predictions, with one exception. Although data mining could certainly grow in importance in evaluation, I don’t see data collection disappearing. The problem I have always experienced with data that are not collected with specific research/evaluation questions in mind is that, most often, they don’t answer the questions very well! In addition, I wonder about the design implications. Where in data mining are the potential counterfactuals?
Miles, This is a response I gave to roughly the same question on an AEA LinkedIn Group discussion. I think you can link to it here ( http://tinyurl.com/7rplt3s ).
I have similar concerns about data mining. However, electronic data are becoming more widely available and more comprehensive in scope. Evaluators are rightly making greater efforts to take advantage of this growing pool of data. For better or for worse, I believe their efforts will grow until data mining overshadows the customized, research-like data collection efforts that we currently favor in evaluation.
Data mining can be rigorous in the way that experimentalists use the word. Data mining techniques can be used to conduct sophisticated interrupted time series analyses, which are widely accepted quasi-experimental alternatives to classic randomized control trials.
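For readers who want a concrete picture, here is a minimal sketch of the segmented-regression form of interrupted time series analysis mentioned above. All data are simulated, and every number is an illustrative assumption rather than a result from any real evaluation:

```python
import numpy as np

# Simulate 48 monthly outcome values with an intervention at month 24
rng = np.random.default_rng(0)
n, cut = 48, 24
t = np.arange(n, dtype=float)
post = (t >= cut).astype(float)               # 1 after the intervention
time_since = np.where(t >= cut, t - cut, 0.0)

# Assumed true process: baseline 50, upward trend of 0.3/month,
# and a level drop of 8 units when the intervention begins
y = 50 + 0.3 * t - 8 * post + rng.normal(0, 1.5, n)

# Segmented regression: intercept, pre-trend, level change, slope change
X = np.column_stack([np.ones(n), t, post, time_since])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print({"pre_trend": round(beta[1], 2), "level_change": round(beta[2], 2)})
```

The fitted level-change coefficient should land near the simulated drop of 8 units; in a real evaluation, the same model would be fit to routinely collected administrative data rather than simulated values.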
Data mining techniques can also be used to provide rich descriptions of humans and their behavior. In contrast to datasets from most randomized control trials, evaluators can find available electronic datasets that are many orders of magnitude larger, allowing for more nuanced understandings of subgroups, contingencies, and contexts.
As you point out, one danger is actively believing, or just tacitly assuming, that the natural circumstances that give rise to available data generally provide a sound basis for causal inferences. This is something to worry about.
But the danger may (and I emphasize *may*) seem larger than it is.
Traditionally researchers develop a causal hypothesis from theory and/or data about a program, create a special set of (experimental) circumstances under which the hypothesis is tested, and if the results are favorable suggest that others in similar (non-experimental) circumstances use the program.
We now have the capacity to develop a causal hypothesis exclusively from data collected in the course of some online activity, modify the online activity quickly in accordance with the hypothesis, see what happens, then revert to the prior online activity, and again see what happens. This scenario looks a lot like N-of-1 studies used to good effect in medicine.
Studies such as this depend on the ease and speed of manipulating the design of a program. As programs incorporate more online activities, ease and speed will likely increase.
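The modify-then-revert cycle described above can be sketched in a few lines. This is a hypothetical illustration with simulated data, not a recipe from any particular platform:

```python
import numpy as np

# Alternate an online activity between its original ("off") and
# modified ("on") versions, observing an outcome metric in each phase
rng = np.random.default_rng(1)
phases = ["off", "on", "off", "on"]      # revert after each change
true_effect = {"off": 0.0, "on": 2.0}    # assumed lift of the modification

samples = {"off": [], "on": []}
for p in phases:
    # 50 observations per phase around a baseline of 10
    samples[p].extend(10 + true_effect[p] + rng.normal(0, 1, 50))

# The on/off contrast estimates the effect of the modification
lift = np.mean(samples["on"]) - np.mean(samples["off"])
print(round(lift, 2))
```

Because each phase is reverted before the next begins, the design resembles the ABAB structure of the N-of-1 studies used in medicine; the estimated lift should come out near the assumed effect of 2.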
Who knows what will happen in the future. As with all of my predictions, I remain a hopeful skeptic. But I absolutely have hope.
John, thanks for the predictions. Can you talk a little more about the trends you’re seeing that suggest greater shifts toward internal evaluation? It’s happening in my organization, and the reasons include greater opportunities for internal learning, more frequent feedback, and sustainability. Would love to hear your thoughts and some of the background/details. Any links you can share would also be helpful. Cheers.
I discuss this a bit and some other changes I am seeing in evaluation practice in a paper that will appear in Evaluation and Program Planning sometime soon.
In short, the variety of players in the “social benefit sector” is growing. There are many more corporations, microfoundations, megafoundations, and social entrepreneurs focusing on (or at least talking about) social and environmental impacts than there were 10 or even 5 years ago. My sense is that these new players tend to include internal evaluators early in their development.
Interestingly, internal evaluators in these new organizations frequently do not have explicit evaluation training (coming instead from law, design, tech, communication, and business) and may not even call themselves evaluators (using instead titles like Chief Impact Officer or Knowledge and Learning Associate).
They often come to internal evaluation early in their careers, something I find a cause for celebration (What’s not to like about new ideas, current training, and optimism?) and a cause for worry (Will they stay in a field in which measurable progress has historically been slow? How much of a dent should we expect newcomers to make in problems that are as ancient as humankind?).
From what I see, traditional players–nonprofit organizations in particular–are hiring more internal evaluators for two reasons. First, there is a strategic advantage to evaluation (something that I strive to provide to clients). Get it right, and your programs become more effective. With evidence of that, it becomes easier to find funding and do more good for more people.
Second, there is a tactical advantage to communicating publicly that you take evaluation seriously (even if you don’t). In cynical moments, I feel as though the second reason dominates the first. But then I talk to some internal evaluators and my cynicism fades. I have found internal evaluators to be a good bunch, and I think having more of them will benefit everyone.
Thanks for the preview! I will look for your paper when it’s published. Agree that it seems like a good thing to have folks coming to internal evaluation with different backgrounds, ideas, and experiences. Kuhn has a good line about new insights and changes often coming from people who are new to an area or field, partly because they’re not wedded to prior practices.
Am curious whether your take is that foundations are also increasingly on board with a shift to internal evaluation. I know some are becoming more accepting of different kinds of evidence, but is it a trend? I still get questions like: can internal evaluators be objective, and can their data and/or conclusions be trusted?
I very much like your predictions, and I would suggest that you come present your ideas at the next conference of the European Evaluation Society, which will take place in Helsinki from 3-5 October 2012: “Evaluation in the Networked Society: new concepts, new challenges, new solutions”. They will trigger a lot of interesting discussions among participants, many of whom are reflecting on the future of evaluation. Please visit our website http://www.europeanevaluation.org . Kind regards, Claudine Voyadzis
Thank you for your kind words. Helsinki…interesting. I have wanted to attend the EES Conference for some time. I will give it serious thought.
Nice job John. Thoughtful and considered as usual. I agree with most of the discussion above and your predictions are very much on target. (I like the future work picture as well.)
As you know, I am a major advocate of internal evaluation through empowerment evaluation (and I served as an internal auditor as well). However, as the profession shifts in that direction additional quality controls should be contemplated to avoid organizational conflicts of interest. In other words, the evaluation team should report high enough in management that it avoids reporting to the group they are evaluating (if they are an independent unit). If they adopt more of an empowerment evaluation mode then it is the group evaluating themselves, reducing much of the traditional organizational conflict of interest problems.
I think reports will remain but take on new forms, ranging from brief videos (like the QuickTime ones I make for my clients) to taped videoconference exchanges (like I do on ooVoo or Skype). There may also be a need for audit-trail summary documents (more conventional reports) for some time (decades), since socialization runs deep for everyone and expectations, however archaic, remain long after they are useful.
My guess is the term evaluation will remain (if it continues to evolve and accept more responsibilities and meet rising expectations).
You take care and thanks as always for providing thought provoking (and I think accurate) predictions about our profession.
– David Dr. David Fetterman Fetterman & Associates http://www.davidfetterman.com
Empowerment Evaluation offers an interesting lens for considering internal evaluation. Skeptics of internal evaluation, I believe, fear that organizations use it to conduct Empower-Me-To-Control-My-Message Evaluation. I like your suggestion that Empowerment Evaluation might reduce skepticism by resolving some conflicts of interest. I need to think about it some more. Interesting.
Reports, however, are doomed. The real value of a report is determined by the amount of usable information it contains. The same information presented in reports is now being presented faster and more comprehensively in other ways online. In ten years, the amount of information we will be able to access — without the filter and delay of a formal report — will astound us. However, I am skeptical that more and faster information will improve our collective efforts to benefit others. That is another story.
I hope the term evaluation will remain, but it is already falling out of favor. I see two important reasons for this: (1) amazingly, most people have never heard of evaluation, and (2) the new players in the social benefit space want to establish that they are approaching their work differently so they are choosing new words to describe their efforts.
John, just wanted to make sure that you didn’t miss this response from colleagues in Slovenia: “Six predictions about the future of evaluation” http://www.sdeval.si/Objave/Six-predictions-about-the-future-of-evaluation.html
Thanks Susan. Saw it — glad that others are joining in the fun.
Thank you, John, for your 10 predictions. My attention was drawn to them by our Slovenian Evaluation Association colleague, Bojan, who sent me a copy of SEA’s 6-point predictions.
If the 10- or 6-point predictions are to become reality in the next 10 years (and they should, if world leaders are serious about finding, implementing, monitoring, evaluating, and assessing sound professional solutions to the real and complex world problems of food, fuel, finance, trade, terrorism, and climate change, from village to global levels, on the part of international institutions, developed countries, and developing countries alike), then a long-standing constraint must be professionally tackled by all concerned evaluation and non-evaluation professionals, as well as policy and decision makers, from village to global levels. Dr. Hellmut Eggers, who created Project Cycle Management (PCM) in 1987, observed that constraint: “there is no accumulation of Development Evaluation (Cooperation) Learning in the past 25 years of operating PCM,” even though PCM remains the most widely used (in the breach) evaluation approach in our world today.
Thus, if in the next 25 years PCM is operated correctly through 3PCM (Policy, Program, Project Cycle Management), ensuring that “Development Evaluation (Cooperation) Lesson Learning” is no longer followed, at equal speed, by “Development Evaluation (Cooperation) Lesson Forgetting” within international institutions and the governments of developed and developing countries, then the probability is high that the 6- or 10-point predictions can become reality in 5 years or less, making the dream of a world without poverty achievable by 2030. That is, our world will be a much better place by 2037, for citizens of both rich and poor countries.
The point we are making is that there is currently no bridge between “Learning” and “Doing.” We would like to propose a way out of this dilemma, allowing the accumulation of “Development Evaluation (Cooperation) Lesson Learning” and the operational application of that accumulated learning in work toward implementing the ideas set out in the World Bank Public Sector Management (WBPSM) and World Bank Governance and Anti-Corruption (WBGAC) documents, in ways that help achieve increasing convergence between the vision, intention, and reality of international institutions and the governments of developed and developing countries.
Before elaborating our proposal, we would like to know that a genuinely interested international institution or government entity, active in national or international development cooperation, will give it serious consideration. Please let us know that your institution or entity will do so, and we will send our ideas, set out in a Bridge Building paper and a Standard Assessment Framework paper, which it will be free to accept or reject as it sees fit. We just want to make sure, before setting to work, that the institution or entity will look at them. Should your institution or entity be interested in receiving the papers, please send an email to [email protected]
Lanre Rotimi Global Center for Learning in Evaluation and Results International Society for Poverty Elimination / Economic Alliance Group Secretariat to 3PCM Community of Practice Abuja Nigeria; Kent UK
Pingback: IDO’nun Gelecegi ile İlgili 10 Tahmin « degerturk
Pingback: About The Future of Evaluation | The Future of Evaluation
It looks like I am coming to the party late, but I am interested that no one has commented on your prediction #4 (a national registry of evaluations). Having created a poster on this topic for the 2004 AEA conference as a grad student from the University of Minnesota, I am interested in finding out what, if anything, is happening on this front and who might be interested in pursuing the topic and perhaps presenting at AEA in 2013. Best wishes, Randi Nelson, Partners in Evaluation, Minneapolis, MN
Never too late to join in. I am not aware of any efforts currently underway to establish a registry, but it is possible that someone is working on it. I think it’s an important topic and one I would love to discuss further. An AEA session might be a good way to start a larger conversation.
How encouraging. I will plan on submitting a proposal for a think tank on the subject. If any of your readers have ideas on the subject I would love to hear them. Randi
Keep me in the loop. Would be happy to participate if my conf schedule allows.
Pingback: Dying or Thriving: The Future Of Evaluation | On Top Of The Box Evaluation
Dear John, very interesting. Let me add further.
Joint country-led evaluation: future evaluations will very much be joint, country-led evaluations, in which ownership of the findings is shared with the country hosting the program.
Reporting: most will request reporting with evidence in the form of pictures and success stories, and some will move to online reporting as well.
Isha Miranda, M&E Expert, Project Management Consultant, Trainer & Facilitator
Isha, These are trends that I agree are likely to continue. An interesting question is what may be driving them. The first, I would suggest, is a reaction to feeling that evaluations–and the values they promote–are being imposed by those far from the local contexts in which programs are implemented. It sets out to address the imbalance between values and power. The second, it seems to me, is a reaction to methods that focus narrowly on discrete indicators rather than holistic assessments. It sets out to address the imbalance between what stakeholders see and what evaluators measure. Greater local autonomy and fuller understanding are where evaluation is going. Getting there, however, may be a bit of a bumpy ride because we are still learning how to accomplish these ends while also promoting others that we believe are important–program improvement, credible evidence, program development, justice, management, fiduciary responsibility–and may not fit neatly into any single approach.
Those are good points, but I see some others that should also be significant. When it comes to evaluating efforts to address social issues like chronic disease reduction, improving graduation rates, or environmental problems, the concept of Collective Impact will lead to a big shift away from evaluating the “isolated impact” of a program in favor of how a coalition is working together to address complex issues. This should lead to a shift from logic models to collaborative strategy maps, which are much better at creating alignment and teamwork. The concepts of Developmental Evaluation should also gain traction as people (especially those doing internal evaluation) realize that learning and improving along the journey is the primary reason for doing evaluation. Strategy Management, which is forward looking and taps into the collective thinking of people, should become as important as the data mining and predictive work.
I think you are getting at a question that policy, programs, and evaluation have faced since they began: does an intentionally coordinated “bundle” of interventions create greater impact than many individually pursued interventions? Some argue for the former because of the complexity underlying social problems. A good example would be a TB program in which public health organizations, homeless shelters, law enforcement agencies, and hospitals work together as partners to control the spread of an increasingly drug-resistant disease. Our ability to understand the complexity underlying social problems, however, has its limits. And our ability to coordinate activities across multiple organizations also has limits. So it is possible that in practice collective impact–which at a conceptual level makes a great deal of sense–may not live up to its promise. On the other hand, neither may individually pursued interventions. There is a growing belief that markets for funding and/or customers of double-bottom-line organizations (those with both financial and social missions) might impose a structure or discipline that increases the collective impact of organizations. It is an interesting alternative to the coordinated approach that, alas, has just as little evidence. I agree that in the next few years there will be more coordinated/collective/complexity-driven approaches to evaluation. Whether that turns out to be an improvement over what is currently being done is much harder to predict.
Employees who use AI as a core part of their jobs report feeling more isolated, drinking more, and sleeping less than employees who don’t.
The promise of AI is alluring — optimized productivity, lightning-fast data analysis, and freedom from mundane tasks — and companies and workers alike are fascinated (and more than a little dumbfounded) by how these tools allow them to do more and better work faster than ever before. Yet in their fervor to keep pace with competitors and reap the efficiency gains associated with deploying AI, many organizations have lost sight of their most important asset: the humans whose jobs are being fragmented into tasks that are increasingly automated. Across four studies, employees who use AI as a core part of their jobs reported feeling lonelier, drinking more, and suffering from insomnia more than employees who don’t.
Imagine this: Jia, a marketing analyst, arrives at work, logs into her computer, and is greeted by an AI assistant that has already sorted through her emails, prioritized her tasks for the day, and generated first drafts of reports that used to take hours to write. Jia (like everyone who has spent time working with these tools) marvels at how much time she can save by using AI. Inspired by the efficiency-enhancing effects of AI, Jia feels that she can be much more productive than before. As a result, she becomes focused on completing as many tasks as possible in conjunction with her AI assistant.
Sustainability in Educational Research: Mapping the Field with a Bibliometric Analysis
Dönmez, İ. Sustainability in Educational Research: Mapping the Field with a Bibliometric Analysis. Sustainability 2024 , 16 , 5541. https://doi.org/10.3390/su16135541
During the past decades, microplastics (MPs) have become an emerging concern due to their persistence and potential environmental threat. MP pollution has become so drastic that MPs have been found in the human food chain, breast milk, the polar regions, and even the Himalayan basin and lakes. Inflammation, pulmonary hypertension, vascular occlusions, increased coagulability and blood cell cytotoxicity, disruption of immune function, neurotoxicity, and neurodegenerative diseases can all be brought on by severe microplastic exposure. Although many MP studies have examined single environmental compartments, MPs across multiple environmental compartments have yet to be fully explored. This review aims to summarize the multi-environmental media, detection tools, and global management scenarios of MPs. The study revealed that MPs could significantly alter C flow through the soil-plant system, the structure and metabolic status of the microbial community, soil pH, the biomass of plant shoots and roots, chlorophyll, leaf C and N contents, and root N contents. This review reveals that MPs may negatively affect many C-dependent soil functions. Different methods have been developed to detect MPs from these various environmental sources, including microscopic observation, density separation, and Raman and FT-IR analysis. Several articles have focused on MPs in individual environmental sources with a developed evaluation technique. This review reveals the extensive impacts of MPs on soil-plant systems, microbial communities, and soil functions, especially on water, suggesting possible disturbances to vital ecological processes. Furthermore, the broad range of detection methods explored emphasizes the significance of reliable analytical techniques in precisely evaluating levels of MP contamination in various environmental media.
This paper critically discusses MPs’ sources, occurrences, and global management scenarios in all possible environmental media, along with their ecological health impacts. Future research opportunities and required sustainable strategies have also been suggested from Bangladeshi and international perspectives, based on the challenges posed by MP pollution.
Where: FDA White Oak Campus Great Room and virtually via Zoom Webinar
Background:
At the FDA Omics Days 2024, the FDA Omics Working Group will be hosting speakers from industry, academia, and government to discuss topics important to the FDA. This event will include sessions on contemporary topics in the fields of precision medicine, multi-omics, and One Health.
Upon registration, further information will be sent to you that includes the link for the event. The meeting will be recorded and available after the event concludes.
Thursday, September 12, 2024
Twitter: #FDAOmicsDay2023
BMC Cancer volume 24 , Article number: 777 ( 2024 ) Cite this article
Evaluation publications typically summarize the results of studies to demonstrate the effectiveness of an intervention, but little is shared concerning any changes implemented during the study. We present a process evaluation protocol for a study of a home-based gait, balance, and resistance exercise intervention to ameliorate persistent taxane-induced neuropathy, organized according to 7 key elements of process evaluation.
The process evaluation is conducted parallel to the longitudinal, randomized control clinical trial examining the effects of the home-based gait, balance, and resistance exercise program for women with persistent peripheral neuropathy following treatment with taxanes for breast cancer (IRB approval: Pro00040035). The flowcharts clarify how the intervention should be implemented in comparable settings, fidelity procedures help to ensure the participants are comfortable and identify their individual needs, and the process evaluation allows for the individual attention tailoring and focus of the research to avoid protocol deviation.
The publication of the evaluation protocol plan adds transparency to the findings of clinical trials and favors process replication in future studies. The process evaluation enables the team to systematically register information and procedures applied during recruitment and factors that impact the implementation of the intervention, thereby allowing proactive approaches to prevent deviations from the protocol. When tracking an intervention continuously, positive or negative intervention effects are revealed early on in the study, giving valuable insight into inconsistent results. Furthermore, a process evaluation adds a participant-centered element to the research protocols, which allows a patient-centered approach to be applied to data collection.
ClinicalTrials.gov NCT04621721, November 9, 2020, registered prospectively. Protocol version: April 27, 2020, v2.
Breast cancer chemotherapy regimens vary, but many include a taxane preparation [ 1 ]. Taxane-induced peripheral neuropathy is an important consequence of breast cancer therapy, leading to functional impairment and compromised quality of life. Chemotherapy-induced peripheral neuropathy (CIPN) occurs in up to 80–97% of patients, with onset from week 1-101 and symptoms persisting until around 57 months [ 2 , 3 ].
The “Home-based Physical Activity Intervention for Taxane-Induced CIPN” (B-HAPI) study is a two-group, 16-week randomized clinical trial designed to address persistent taxane-induced peripheral neuropathy in women treated for invasive breast cancer. Only a limited number of original randomized controlled trials have been conducted on this topic [ 4 ], particularly ones proposing an exercise intervention specifically targeted at persistent taxane-induced peripheral neuropathy using validated measures of gait and balance assessment.
Process evaluation is a systematic method for collecting, analyzing, and using data to examine the effectiveness of programs. Most evaluation publications report the results of studies to demonstrate the efficacy of an intervention. However, little is shared about protocol or other changes implemented during the research process that may influence the study outcomes. Often the mechanism of intervention delivery is overlooked as a critical aspect of evaluation, but instead should be treated as an important component of the overall intervention strategy, including the planning phase [ 5 ].
Implementing and obtaining process evaluation data helps to identify factors responsible for maintaining study integrity that may be implicated in determining the effectiveness of the intervention, the success or failure of an intervention, and for whom and under what circumstances the intervention is effective [ 6 , 7 ].
In this paper, we present the process evaluation protocol of a study of a home-based gait, balance, and resistance exercise intervention to ameliorate persistent taxane-induced neuropathy, according to 7 key elements of process evaluation [ 6 , 7 , 8 ]. The key process evaluation components that will determine intervention effectiveness are fidelity (quality), dose delivered (completeness), dose received (exposure and satisfaction), reach (participation rate), recruitment, and context.
The process evaluation is conducted in parallel with the longitudinal, randomized controlled clinical trial (B-HAPI study), whose objective is to examine the effects of the home-based gait, balance, and resistance exercise program for women with persistent peripheral neuropathy following treatment with taxanes for breast cancer. The current process evaluation aims to: (1) monitor and assess the implementation of the home-based gait, balance, and resistance exercise program and (2) generate findings that aid in the interpretation and explanation of the program effects obtained in the parallel controlled trial. This model provides a conceptual framework for understanding the factors that affect the success or failure of a complex intervention. Data collection is structured using a triangulation design model [ 9 ]. The protocol underwent scientific peer review as part of the grant application.
Process evaluation data are collected throughout the study, including factors related to the successful completion of monthly questionnaires using Research Electronic Data Capture (REDCap), an electronic data capture tool hosted by the University of South Florida. This data capture system maintains the standardized contact frequency of participants with the research team via telephone or videoconference and records health issues that can influence study-related processes. Results of the process evaluation are used to inform the intervention implementation and to perform midcourse corrections when fidelity of implementation is threatened (formative purposes). However, most process data will only be available following completion of the study intervention (summative purposes). Process data collection is ongoing, and the data will be analyzed and interpreted prior to analysis of study outcomes. Any hypothesis generated in the process evaluation derives from adjustments in the implementation of the process only, and does not apply to the original study hypothesis or results. These changes lead to new insights and hypotheses that can subsequently be statistically tested [ 5 , 10 ].
A two-group longitudinal randomized controlled trial (RCT) was designed to address persistent chemotherapy-induced peripheral neuropathy (CIPN) in women treated for invasive breast cancer with taxane-based chemotherapy. The B-HAPI study has so far screened 1,889 people, including 94 who are at least 6 months post-treatment and suffer from CIPN with a visual analog scale pain rating of ≥ 3. Figure 1 shows the CONSORT flow diagram of the study.
B-HAPI study CONSORT Flow Diagram. Displays the recruitment flow diagram for screening, randomized allocation per group, and follow up based on the Consolidated Standards of Reporting Trials (CONSORT).
The study has the goal of recruiting 312 women in total, 156 in the intervention group and 156 in the attention control group. The power analyses determining the group sizes are described in the Statistical analysis section. Breast cancer survivors are recruited from the regional community through breast cancer support groups, local institutions, social media campaigns, and recruitment flyers, with the assistance of a local advertisement agency. Participants were randomized to either the intervention, consisting of a home-based exercise program, or an educational attention control group. Randomization to study group was achieved using the REDCap randomization tool, customized by the study statistician and REDCap specialist and hosted at the University of South Florida [ 11 , 12 ]. The protocol dictated that participants in both groups complete a total of five (5) appointments over the course of a 16-week period. Two in-person study appointments occurred, once at the beginning and once at the end of the four (4) months. In between the two in-person appointments, participants in both groups had monthly phone calls scheduled at the 4-, 8-, 12-, and 15/16-week marks. The study has finished recruiting and is in its final phase, with follow-up data collection ongoing.
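The allocation step can be illustrated with a generic permuted-block scheme (a sketch of what a 1:1 randomization tool does; the study itself used REDCap's built-in randomization module, and the block size, arm labels, and seed below are illustrative assumptions, not study parameters):

```python
import random

def permuted_block_allocation(n_participants, block_size=4,
                              arms=("intervention", "attention control"),
                              seed=2020):
    """Generate a 1:1 allocation sequence using permuted blocks.

    Permuted blocks keep the two arms balanced throughout recruitment,
    which matters for a trial that enrolls participants over several years.
    """
    assert block_size % len(arms) == 0, "block size must be a multiple of the number of arms"
    rng = random.Random(seed)  # fixed seed only so this sketch is reproducible
    allocation = []
    while len(allocation) < n_participants:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)  # each block is a random permutation of the arms
        allocation.extend(block)
    return allocation[:n_participants]

# For the planned sample of 312, blocks of 4 yield exactly 156 per arm.
groups = permuted_block_allocation(312)
```

Because 312 is a multiple of the block size, the sequence is exactly balanced; in practice a tool like REDCap also supports stratification, which this sketch omits.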
Following initial eligibility screening, written informed consent and baseline data collection are conducted in person at the University of South Florida’s School of Physical Therapy and Rehabilitation Sciences Human Functional Performances Lab (HFPL), located on the university campus. The HFPL is a 6500 square foot research facility with a private space for consent and nerve conduction studies. It is equipped to assess performance, impairments, and functional limitations of neuromusculoskeletal conditions. Equipment in the HFPL utilized for this study includes: the BIODEX 3.0 computerized dynamometer to assess lower extremity muscle strength; the GAITRite System to assess gait; and the Neurocom Sensory Organization Test to assess balance. Nerve conduction studies are conducted in a private room in the HFPL by the collaborating study neurologist. Once baseline data are collected, group assignment (Exercise Intervention or Educational Attention Control) is revealed via REDCap. The data collector is blinded to study group assignment. Similarly, the 16-week (end of study) data collection is also performed in person with the same assessments as described above. All other data collection, at 4, 8, and 12 weeks, is done using a REDCap link sent to all study participants through which the questionnaires can be accessed. Data are collected only in the United States. The Principal Investigator and statistician are blinded to group allocation. Because this study has been evaluated as low risk by the university IRB, no unblinding guidelines were deemed necessary.
Participants randomized to the exercise intervention are instructed by the interventionist in all the exercises in the HFPL. Each participant is given a tote bag with the B-HAPI research logo containing the resistance bands and a paper exercise booklet for reference. The exercises are also recorded by the research team’s physical therapist on a YouTube channel, and the link is provided to the participant. The exercise diary is electronic and provided through a REDCap link.
Community-dwelling breast cancer survivors are recruited from the community. Female breast cancer survivors (≥ 21 years) who completed treatment for invasive breast cancer with taxane-based chemotherapy and who have a peripheral neuropathy score of ≥ 3 by VAS rating were eligible for the study. Individuals with any disease (e.g., diabetes, HIV) that results in peripheral neuropathy or muscle weakness (chronic fatigue syndrome, multiple sclerosis, spinal cord tumors or injuries, stroke); any disease that would preclude exercise (preexisting cardiopulmonary disease); symptomatic lymphedema; or high risk for pathologic fracture are excluded. The study was approved by the University of South Florida Institutional Review Board (Pro00040035) and registered at ClinicalTrials.gov (Identifier: NCT04621721). If study participants scored higher than 10 on the PHQ-9 or GAD-7 while answering the REDCap online forms, the Principal Investigator received an e-mail alert to inquire about the reason for the high scores and make a decision about referral. Referrals to neurology, mental health professionals, and physical therapy were available through an affiliation with the University of South Florida healthcare network.
The attention control group participants received an educational intervention designed to equalize exposure to the exercise intervention protocol. Participants in this group received a journal binder in which to record their clinic and research appointments. The pamphlets used for the educational attention control condition were from the American Cancer Society (ACS) and pertained to post-cancer care, with additional supplemental information related to the ACS topics. Initially, the educational materials chosen consisted of (1) Nutrition: Eating Well After Treatment [ 13 ]; (2) Body Image and Sexuality After Breast Cancer [ 14 ]; (3) Life After Cancer/Follow-up Care [ 15 ]; and (4) Emotional and Social Issues After Cancer [ 16 ]. However, before the study was to commence, the SARS-CoV-2 pandemic struck the United States of America. As a result, COVID-19 Vaccinations: Myths vs. Facts and Survivorship were added to the list of educational materials. In addition, participants were very interested in stress reduction techniques, so educational information on mindfulness-based stress reduction was also added. These topics were used as substitutes for those who chose to opt out of any of the original topics.
The topics chosen were specially selected to provide relevant, timely information the individual can use in the cancer survivorship trajectory, while avoiding those related to exercise/physical activity to prevent contamination. Each control group participant received phone calls scheduled around data collection to equalize attention. Each phone call had a specific topic for that month and a trained member of the research team discussed the topic while providing additional insights in a semi-structured interview process. These educational sessions lasted approximately 20–35 min and occurred at the 4-, 8-, 12-, and 15-week mark. The attention control group members agreed to not begin a new exercise program or change their level of exercise during the study.
The exercise intervention consists of a 16-week home-based exercise program meant to improve the participant’s gait, balance, and lower extremity muscle strength. All material related to the exercise protocol was provided to the intervention group participants. The strength training exercises use progressive resistance flat bands for performing a variety of resistive exercises for the lower extremities, such as leg curls, lunges, and calf raises. The gait and balance exercises consist of movements and postures that engage varied sensory information by having participants perform static and dynamic tasks with eyes open/closed (visual), head steady or with head turns (vestibular), and on a firm surface/on foam (somatosensory). The exercise program contains detailed, easy-to-follow demonstrations for each gait/balance and resistance training exercise, led by a physical therapist via a YouTube link. In addition, a pictorial exercise instruction booklet is provided to participants for their reference. All exercise sessions are recorded in an Exercise Diary to provide a quantitative measure of exercise, as the prescribed exercises cannot be captured by any available device. Participants are instructed to complete the exercise diary for review at every data collection encounter. The intervention length is comparable with previous studies of exercise in persons with peripheral neuropathy [ 17 , 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 , 36 ]. Intervention group participants are provided resistance training bands of varying levels for the purpose of exercise progression, and a wide, firm foam surface for the balance exercises. The intervention protocol begins with light warm-up and stretching activities, followed by 10 minutes of gait/balance training and 10 minutes of resistive (strength) training.
Telephone calls for follow-up to assist in surmounting barriers to exercise are conducted according to a standard schedule. The research team also offered video calls with participants to ensure proper exercise performance. The intervention nurse called each exercise participant one week after the baseline appointment to ensure exercise understanding and exercise diary completion. The study physical therapist also provided any needed consultations.
Following informed consent, the following data are collected: age, gender, race, marital status, income level, and employment status. Information concerning breast cancer stage and hormonal status, type of breast cancer-related surgery, number of taxane cycles received, and current medications is also obtained.
Assessments of lower extremity muscle strength [ 31 ], gait/balance [ 19 , 26 , 35 ], nerve conduction [ 20 , 36 ], neuropathy symptoms [ 18 ], the Brief Resilience Scale (BRS) [ 37 ], quality of life (QOL) [ 18 ], Generalized Anxiety Disorder (GAD-7) [ 38 , 39 ], and the Patient Health Questionnaire (PHQ-9) [ 40 , 41 ] are collected in person at baseline. At 4, 8, and 12 weeks, measures of neuropathy symptoms, anxiety, depression, resilience, and QOL are collected online via REDCap; at the end of the intervention (16 weeks), all in-person assessments are repeated as at baseline. The assessments performed and instrument validity are described per time point in Table 1, and Fig. 2 presents the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) schedule of enrolment, interventions, and assessments.
Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) schedule of enrolment, interventions, and assessments. Displays the timeline for application of the standard protocol items. *Intervention group only; **control group only. Note: reminders are sent
Individual semi-structured interviews by group assignment occur on a regular basis at baseline, 4 weeks, 8 weeks, 12 weeks, and 16 weeks with all participants. The intervention group is asked about their ability to engage in the exercise program over the past few weeks, any barriers to exercise they have experienced, and strategies to overcome these barriers.
The attention control and intervention phone calls utilize standardized scripts and take a similar length of time at the same time intervals to equalize contact with both groups and avoid attention bias. The attention control script consists of the educational topics noted above concerning barriers and strategies in the survivorship trajectory. The educational topics specifically avoid those related to exercise/physical activity to prevent contamination. Educational pamphlets on these topics are placed in the planners given to the attention control group. A review of the assigned topic is provided during the scheduled attention control phone call, the participant is engaged in a discussion of the topic, and any questions are answered.
Although the acute phase of the COVID-19 pandemic has passed, it remains important to discuss the impact of the pandemic on the study processes. The study start was delayed for 4 months due to the 2020 acute COVID-19 outbreak, which resulted in the closure of in-person university research activities. Once the study could begin recruitment, the research team took steps to mitigate COVID-19 transmission, as this occurred before vaccine approval. These steps included mask mandates for all research staff in contact with participants, the provision of clean, disposable masks for patients upon arrival, hand sanitization stations, procedures for sanitizing all surfaces and equipment before and after participant appointments, and the institution of a COVID-19 risk assessment questionnaire. Those measures continued through 2021 and 2022, until masks were no longer mandatory in our clinics (approximately mid-2022). However, aseptic techniques continued to be implemented as needed.
Program implementation as planned.
A graphical presentation of the recruitment and data collection is provided as flowcharts (Figs. 2 and 3 ). The flowcharts clarify how the intervention should be implemented in comparable settings, revealing the aspects necessary to reach optimal performance and make quick adjustments. Prior to starting recruitment, the research team assessed the fidelity of the intervention using a fidelity checklist developed by the PI. The fidelity checklist is utilized at regular weekly intervals throughout the study for training any new staff, for re-training, and for ensuring compliance with the intervention procedures.
Recruitment. Reports detailed information and transcript for recruitment and enrollment in the study
First, through social media marketing efforts, the participant reaches out to the research team to obtain additional study information and to assess interest and study eligibility. The team then explains the study objectives and requirements, as well as triaging COVID-19 symptoms/risks during the active COVID-19 infection and quarantine period to ensure participant and team safety. Upon confirming eligibility (Fig. 3 ), the participant’s baseline lab visit is scheduled for data collection (Fig. 4 ).
Baseline and follow-up flowcharts. Displays detailed information on the procedures during baseline and follow-up appointments. Both groups have the same baseline and final follow-up procedures (16 weeks), but differ in the follow-ups at 4, 8, 12, and 15 weeks
The physical therapy lab team performing data collection, the study statistician, and the primary investigator are blinded to whether the participant is allocated to the intervention or control group at baseline and follow-ups. Only the study research manager and research assistants are aware of each participant’s allocation, as they proceed with the instructions and implementation of the exercise diary and of the educational materials for the attention control group.
The participants provide data via a fidelity instrument (Tables 2 and 3, according to the designated group), and the research team members then proceed with debriefing. These procedures beyond data collection help ensure that participants are comfortable and identify their individual needs, which helps build rapport and reduce attrition.
The fidelity instrument is administered according to the designated group assignment. (Tables 2 and 3 ) This procedure allows structured data collection from participants in both the intervention and control groups concerning perception of the intervention or control conditions, with an opportunity for any comments about the session.
Team member debriefing was initially done at the end of each follow-up until the staff were comfortable with the procedures. Currently, a debriefing concerning the fidelity measure is conducted bi-weekly at the research team meeting. The meeting time ensures reflection and alignment with the study focus and procedures, providing an opportunity for feedback. During those meetings, the primary investigator receives a status update on the research study as well as details regarding additional aspects of the research, such as logistics for collecting data and returning data to the research team. Team members are ready to correct the implementation of the intervention if needed to ensure fidelity. They keep track of the discussion topics and changes for evaluation purposes. The study has not yet experienced any significant protocol deviations.
Throughout the research process shown in the flowchart (Fig. 3 ), different elements of the process evaluation components are implemented and used to collect process data. The tools to collect process data are based on the nature of the process evaluation questions (Table 4 ); this includes how to acquire valid, reliable information efficiently and with the least burden on those involved. In Table 4, the tools/procedures for collecting data, the data sources, and the process evaluation questions are indicated for each process evaluation component.
Quantitative data will be analyzed using the software package SPSS for Windows, computing descriptive statistics with means and frequencies, the attrition rate, and follow-up contacts. We will compare both groups and test the efficacy of the 16-week program of gait/balance training plus resistance exercise in increasing muscle strength, improving gait/balance and nerve conduction parameters, decreasing neuropathy symptoms, increasing quality of life and resilience, and decreasing anxiety and depression, while controlling for age, BMI, number of taxane cycles and intervals, neuropathic pain, neuropathy/pain medications, current resistance exercise participation, and falls/near falls experienced.
The qualitative data collected via the open-ended question in the fidelity checklist and the team’s notes throughout the process evaluation will be used for individual attention tailoring and for keeping the research focused to avoid protocol deviation. Content analysis of the notes about participants’ common concerns will allow major themes to emerge from the data [ 42 ]. A narrative report will summarize the description of the procedures.
Power analyses were performed through a Monte Carlo simulation approach with the software Mplus to calculate sample size [ 43 , 44 ], including recommended variance of the population parameters. Observations were spaced at 0, 4, 8, 12, and 16 weeks, with the number of weeks since baseline as the time metric, to evaluate the efficacy of the 16-week intervention. To reflect an effective randomization of participants to conditions, we modeled no mean difference between treatment and control conditions at baseline, and the difference in slopes between the treatment and control conditions during the intervention period (γ 11 ) is the focal parameter to be adequately powered. Given α = 0.05, a two-tailed hypothesis test, and the view that a power value of 0.80 will be adequate to detect a treatment effect, a minimum sample of N = 312 participants (based on recruitment of 2 or more participants per week for 3 years) is required, allowing for 20% attrition and 10% periodic non-response. Using a full-information maximum likelihood approach for an intent-to-treat analysis, a Monte Carlo simulation with 10,000 replications suggests we will be able to detect a minimum standardized effect of 0.30 with a probability of correctly rejecting a false null (power) of 0.81. If the recruitment rate is closer to 3 per week, resulting in a sample of N = 468, the minimum detectable standardized effect is 0.25. By including additional control variables (all ES’s = 0.10), the minimum-detectable effect sizes decrease to 0.27 and 0.22, respectively. Topic-relevant meta-analyses reported effect sizes for exercise intervention effects on similar outcomes ranging between ES = 0.30 and ES = 0.84 [ 45 ]. The prospective power analysis suggests that our study is well positioned to detect effect sizes even at the lower end of this reported range.
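The logic of this power analysis can be sketched with a much-simplified Monte Carlo simulation (a hedged illustration, not the Mplus latent-growth model used in the study: here each participant's improvement slope is estimated by ordinary least squares over the five visits and the arms are compared with a t-test; the unit slope variability, the noise level, and the visit coding are assumptions made only for this sketch):

```python
import numpy as np
from scipy import stats

def simulate_power(n_per_group=156, delta=0.30, n_reps=300,
                   alpha=0.05, seed=42):
    """Monte Carlo power estimate for detecting a standardized
    difference `delta` in improvement slopes between two arms."""
    rng = np.random.default_rng(seed)
    t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])  # visits at 0, 4, 8, 12, 16 weeks
    tc = t - t.mean()                         # centered time for OLS slopes

    def observed_slopes(true_slopes):
        # outcome_it = slope_i * t + measurement noise; recover each
        # participant's slope with the closed-form OLS estimator
        y = true_slopes[:, None] * t + rng.normal(0.0, 0.5, (true_slopes.size, t.size))
        return (y * tc).sum(axis=1) / (tc ** 2).sum()

    rejections = 0
    for _ in range(n_reps):
        ctl = observed_slopes(rng.normal(0.0, 1.0, n_per_group))    # control arm
        trt = observed_slopes(rng.normal(delta, 1.0, n_per_group))  # treated arm
        _, p = stats.ttest_ind(trt, ctl)
        rejections += p < alpha
    return rejections / n_reps

power = simulate_power()
```

Under these cruder assumptions the estimate lands below the paper's reported 0.81, as expected: the full-information likelihood model extracts more information from the repeated measures than a two-stage slope t-test does.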
In order to test the efficacy of the 16-week program of gait/balance training plus resistance exercise, we will use an intent-to-treat (ITT) analysis to evaluate the effect of the intervention, using the Exercise Diary, on change in outcomes at post-intervention and at follow-up, and structural equation modeling (SEM) to explore the covariates of the intervention effect. The aforementioned analyses provide a generalized mixed model that allows modeling of both time-varying covariates (e.g., pain, medications, BMI, falls) and individually varying covariates (e.g., age, taxane cycles, years since treatment completion, baseline resistance exercise); adjusts for loss of power and bias derived from attrition and periodic non-response; utilizes a non-normal link function for non-normally-distributed outcomes; and considers individual differences in baseline outcomes and improvement from the intervention by allowing initial status and change over time to be random (latent) variables. The intent-to-treat analyses are based on differential improvement in outcomes between the treatment and control conditions during the 16-week intervention efficacy period.
We will also evaluate differences in muscle strength, gait/balance, sensory (sural) and motor (peroneal) nerve conduction, peripheral neuropathy symptoms, quality of life (QOL), resilience (BRS), anxiety (GAD-7), and depression (PHQ-9) between groups (exercise intervention vs. educational attention control) while controlling for age, Body Mass Index, taxane cycles and intervals, neuropathic pain, neuropathy/pain medications, current resistance exercise participation, and falls/near falls experienced.
Additional parameters are included to evaluate the time-varying controls (pain, medication use, BMI, falls) and time-invariant controls (age, taxane interval/cycles, baseline resistance exercise). Controlling for these potential covariate effects reduces potential bias in the slope parameters central to the test of the study aims and increases statistical power.
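The planned slope-difference test can be sketched with a linear mixed model on toy long-format data (a hedged illustration using statsmodels rather than the study's Mplus SEM; the simulated effect sizes, the single `age` covariate, and all variable names are assumptions — the `week:group` interaction plays the role of the focal parameter γ11 described above):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n, weeks = 150, [0, 4, 8, 12, 16]        # toy sample, study visit schedule
group = rng.integers(0, 2, n)            # 0 = control, 1 = intervention
age = rng.normal(55, 8, n)               # illustrative covariate

rows = []
for i in range(n):
    b0 = 1.0 + rng.normal(0.0, 0.5)      # random intercept per participant
    # individual slope: +0.05 outcome units per week if treated (assumed truth)
    b1 = 0.10 + 0.05 * group[i] + rng.normal(0.0, 0.02)
    for w in weeks:
        rows.append(dict(pid=i, week=w, group=group[i], age=age[i],
                         outcome=b0 + b1 * w + rng.normal(0.0, 0.5)))
data = pd.DataFrame(rows)

# Random-intercept, random-slope model; `week:group` estimates the
# difference in improvement slopes between the arms
model = smf.mixedlm("outcome ~ week * group + age", data,
                    groups=data["pid"], re_formula="~week")
fit = model.fit()
effect = fit.params["week:group"]        # should recover roughly 0.05
```

In the actual analysis the latent-growth SEM additionally handles non-normal outcomes and missingness via full-information maximum likelihood; this sketch only shows how the focal slope-difference parameter is identified.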
A certified research associate and statistician are dedicated to the role of data management. The process evaluation is periodically analyzed through descriptive statistical analysis (quantitative data) and content analysis (qualitative data). The process evaluation analysis allows individually tailored attention while keeping the research focused to avoid protocol deviation. This study has been evaluated as low risk by the university IRB, and no stopping guidelines to terminate the trial were deemed necessary.
This paper describes the process evaluation protocol plan for the B-HAPI study: Home-based physical activity intervention for taxane-induced CIPN: A randomized controlled trial (RCT). Beyond focusing on publishing the outcomes, publishing the process flow diagram and evaluation model favors replication of a complex longitudinal clinical trial. This allows midcourse correction when fidelity of the implementation is threatened, with process data analysis and interpretation occurring before analysis of the study outcomes. Considering that most summative process data are not processed or available until after completion of the proposed intervention [ 6 ], the process evaluation is critical for the success and replication of the study.
The incorporation of process evaluation elements supports the implementation of the intervention's key components. After all, it ensures that the quantitative and qualitative data needed to understand and assure the quality of the implementation process are gathered [ 46 ].
The process evaluation allows the team to systematically register information and procedures applied during the recruitment process and factors influencing the intervention implementation, which allows a proactive approach to avoid protocol deviations. This allows a seamless documentation of midcourse correction, non-participation and drop-outs during recruitment, intervention, and follow-up.
By following the flow diagram and consciously incorporating the process evaluation key components, the team gathered valuable information. Whenever there were conflicting opinions regarding adjustments of the process, the research team revisited the study hypothesis/objective. The funding institution and IRB should be consulted regarding any potentially significant adjustment.
Regarding breast cancer chemotherapy regimens, taxanes are known to induce peripheral neuropathy toxicity leading to lower extremity muscle weakness, impaired balance, pain, numbness, and decreased vibration or touch sensation [ 47 , 48 , 49 ]. Currently, there are no evidence-based preventative or treatment strategies available [ 50 , 51 ], and a limitation of current publications is the lack of a clear theoretical framework in the development process [ 52 ]. Studies in this field may benefit from a thorough process evaluation publication to determine factors that facilitate or hinder the intervention.
Lastly, by tracking the implementation of an intervention continuously, favorable or unfavorable intervention effects can be clarified early on in the study, which leads to valuable insights into contradictory results. The use of a mixed methods approach provides a key strength to the process evaluation by providing an understanding of the processes and experiences of participants with both interventions. As a general principle, combining quantitative and qualitative methods increases validity more than utilizing either one alone [ 46 ].
In conclusion, the publication of the process evaluation plan adds transparency to the findings of clinical trials and favors process replication in future studies. The authors believe every study's intervention management follows a structured protocol, with barriers and adjustments handled as part of the study's ethics and procedures. However, adding transparency by publishing the process implemented, and not only the validity and reliability of the outcomes, is a practice that still needs to be instilled in the research community.
A process evaluation has many uses depending on the main objective, the available resources, the type of intervention, and where it will be implemented. It also adds a participant-centered component to the research, bringing the patient-centered model into data collection. While executing the process evaluation, one challenge is to consider whether interim adjustments and changes can be made to ensure that the exercise and educational interventions will be implemented with fidelity without jeopardizing the study protocol’s integrity. The team ensured fidelity through consultation with the study physical therapist co-investigators, statistician, and study neurologist prior to any significant adjustments. In addition, physical therapists who were not part of the study team were used to assess features of the exercise protocol for the intervention group and suggest any necessary adjustments.
For dissemination, the team plans to share the data through publications and presentations in several venues, including national and international professional meetings. For the patients, we communicate routinely through a monthly newsletter and will publish a final newsletter in December 2024.
A limitation is that the process evaluation is executed by the research team itself, which may introduce bias. However, acknowledging this possibility and consulting experts during the decision-making process for adjustments (a peer-review component by an independent researcher) helps to reduce this risk.
Randomized clinical trials are designed primarily to test whether interventions have a positive effect, making generalization of results difficult because the study population often differs greatly from the population treated in routine practice. Additionally, trials are usually unable to answer the questions that practitioners, decision-makers, or consumers ask. For insight into long-term outcomes and the durability of effects observed at 16 weeks, follow-up should extend beyond 16 weeks.
The data are available from the authors upon reasonable request.
Abbreviations
RCT: Randomized Clinical Trial
B-HAPI: Home-Based Physical Activity Intervention
HFPL: Human Functional Performances Lab
QOL: Quality of Life
BRS: Brief Resilience Scale
GAD-7: Generalized Anxiety Disorder 7-item scale
PHQ: Patient Health Questionnaire
COVID-19: Coronavirus Disease 2019
CONSORT: Consolidated Standards of Reporting Trials
BMI: Body Mass Index
Gradishar WJ, Anderson BO, Balassanian R, Blair SL, Burstein HJ, Cyr A, Elias AD, Farrar WB, Forero A, Giordano SH. NCCN guidelines insights: breast cancer, version 1.2017. J Natl Compr Canc Netw. 2017;15(4):433–51.
Park SB, Lin CSY, Krishnan AV, Friedlander ML, Lewis CR, Kiernan MC. Early, progressive, and sustained dysfunction of sensory axons underlies paclitaxel-induced neuropathy. Muscle Nerve. 2011;43(3):367–74.
Kerckhove N, Collin A, Condé S, Chaleteix C, Pezet D, Balayssac D. Long-term effects, pathophysiological mechanisms, and risk factors of chemotherapy-induced peripheral neuropathies: a comprehensive literature review. Front Pharmacol. 2017;8:86.
Xu R, Yu C, Zhang X, Zhang Y, Li M, Jia B, Yan S, Jiang M. The efficacy of neuromodulation interventions for chemotherapy-induced peripheral neuropathy: a systematic review and meta-analysis. J Pain Res. 2024;17:1423–39.
Moore JB, Maddock J, Singletary CR, Oniffrey TM. Evaluation of physical activity interventions: impact, outcome, and cost evaluation. Phys Activity Public Health Pract 2019.
Stijnen M, Duimel-Peeters I, Vrijhoel H, Jansen M. Process evaluation plan of a patient-centered home visitation program for potentially frail community-dwelling older people in general practice. Eur J Person Centered Healthc. 2014;2(2):179–89.
Linnan L, Steckler A. Process evaluation for public health interventions and research. 2002.
Baranowski T, Stables G. Process evaluations of the 5-a-day projects. Health Educ Behav. 2000;27(2):157–66.
Speziale HS, Streubert HJ, Carpenter DR. Qualitative research in nursing: advancing the humanistic imperative. Lippincott Williams & Wilkins; 2011.
Oakley A, Strange V, Bonell C, Allen E, Stephenson J. Process evaluation in randomised controlled trials of complex interventions. BMJ. 2006;332(7538):413–6.
Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)—A metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377–81.
Harris PA, Taylor R, Minor BL, Elliott V, Fernandez M, O’Neal L, McLeod L, Delacqua G, Delacqua F, Kirby J, et al. The REDCap consortium: building an international community of software platform partners. J Biomed Inform. 2019;95:103208.
Eating well after treatment. [ https://www.cancer.org/treatment/survivorship-during-and-after-treatment/be-healthy-after-treatment/eating-well-after-treatment-ends.html ].
Body image and sexuality after cancer. [ https://www.cancer.org/cancer/breast-cancer/living-as-a-breast-cancer-survivor/body-image-and-sexuality-after-breast-cancer.html ].
Life after cancer. [ https://www.cancer.org/treatment/survivorship-during-and-after-treatment/be-healthy-after-treatment/life-after-cancer.html ].
Emotional, mental health, and mood changes. [ https://www.cancer.org/treatment/treatments-and-side-effects/physical-side-effects/emotional-mood-changes.html ].
Balducci S, Iacobellis G, Parisi L, Di Biase N, Calandriello E, Leonetti F, Fallucca F. Exercise training can modify the natural history of diabetic peripheral neuropathy. J Diabetes Complicat. 2006;20(4):216–23.
Cella D, Peterman A, Hudgens S, Webster K, Socinski MA. Measuring the side effects of taxane therapy in oncology: the functional assessment of cancer therapy–taxane (FACT-taxane). Cancer. 2003;98(4):822–31.
Chaudhry H, Findley T, Quigley KS, Ji Z, Maney M, Sims T, Bukiet B, Foulds R. Postural stability index is a more valid measure of stability than equilibrium score. J Rehabil Res Dev. 2005;42(4).
Chen X, Stubblefield MD, Custodio CM, Hudis CA, Seidman AD, DeAngelis LM. Electrophysiological features of taxane-induced polyneuropathy in patients with breast cancer. J Clin Neurophysiol. 2013;30(2):199–203.
Cleeland C, Ryan K. Pain assessment: global use of the Brief Pain Inventory. Annals, Academy of Medicine, Singapore. 1994.
Fisher M, Langbein W, Collins E, Williams K, Corzine L. Physiological improvement with moderate exercise in type II diabetic neuropathy. Electromyogr Clin Neurophysiol. 2007;47(1):23–8.
Franco-Villoria M, Wright CM, McColl JH, Sherriff A, Pearce MS. Team GMSc: Assessment of adult body composition using bioelectrical impedance: comparison of researcher calculated to machine outputted values. BMJ open. 2016;6(1):e008922.
Graham R, Hughes R, White C. A prospective study of physiotherapist prescribed community based exercise in inflammatory peripheral neuropathy. J Neurol. 2007;254(2):228–35.
Manor B, Li L. Characteristics of functional gait among people with and without peripheral neuropathy. Gait Posture. 2009;30(2):253–6.
Nigg BM, Cole GK, Nachbauer W. Effects of arch height of the foot on angular motion of the lower extremities in running. J Biomech. 1993;26(8):909–16.
Penttinen H, Saarto T, Kellokumpu-Lehtinen P, Blomqvist C, Huovinen R, Kautiainen H, Järvenpää S, Nikander R, Idman I, Luoto R. Quality of life and physical performance and activity of breast cancer patients after adjuvant treatments. Psycho‐Oncology. 2011;20(11):1211–20.
Ruhland JL, Shields RK. The effects of a home exercise program on impairment and health-related quality of life in persons with chronic peripheral neuropathies. Phys Ther. 1997;77(10):1026–39.
Sugden JA, Sniehotta FF, Donnan PT, Boyle P, Johnston DW, McMurdo ME. The feasibility of using pedometers and brief advice to increase activity in sedentary older women–a pilot study. BMC Health Serv Res. 2008;8(1):169.
Taylor C, Coffey T, Berra K, Iaffaldano R, Casey K, Haskell W. Seven-day activity and self-report compared to a direct measure of physical activity. Am J Epidemiol. 1984;120(6):818–24.
Tiffreau V, Ledoux I, Eymard B, Thévenon A, Hogrel J-Y. Isokinetic muscle testing for weak patients suffering from neuromuscular disorders: a reliability study. Neuromuscul Disord. 2007;17(7):524–31.
Tofthagen C, Visovsky C, Berry DL. Strength and balance training for adults with peripheral neuropathy and high risk of fall: current evidence and implications for future research. Oncol Nurs Forum. 2012:E416.
van den Berg M, Winkels R, de Kruif JTC, van Laarhoven H, Visser M, de Vries J, de Vries Y, Kampman E. Weight change during chemotherapy in breast cancer patients: a meta-analysis. BMC Cancer. 2017;17(1):259.
van Schie CH. Neuropathy: mobility and quality of life. Diabetes Metab Res Rev. 2008;24(S1):S45–51.
Vaughan CL, Davis BL, O'Connor JC. Dynamics of human gait. Volume 2. Human Kinetics; 1992.
Velasco R, Bruna J, Briani C, Argyriou AA, Cavaletti G, Alberti P, Frigeni B, Cacciavillani M, Lonardi S, Cortinovis D. Early predictors of oxaliplatin-induced cumulative neuropathy in colorectal cancer patients. J Neurol Neurosurg Psychiatry. 2014;85(4):392–8.
Smith BW, Dalen J, Wiggins K, Tooley E, Christopher P, Bernard J. The brief resilience scale: assessing the ability to bounce back. Int J Behav Med. 2008;15(3):194–200.
Spitzer RL, Kroenke K, Williams JBW, Löwe B. A brief measure for assessing generalized anxiety disorder: the GAD-7. Arch Intern Med. 2006;166(10):1092–7.
Esser P, Hartung TJ, Friedrich M, Johansen C, Wittchen H-U, Faller H, Koch U, Härter M, Keller M, Schulz H, et al. The generalized anxiety disorder screener (GAD-7) and the anxiety module of the Hospital and Depression Scale (HADS-A) as screening tools for generalized anxiety disorder among cancer patients. Psycho-oncology. 2018;27(6):1509–16.
Kroenke K, Spitzer RL, Williams JBW. The PHQ-9. J Gen Intern Med. 2001;16(9):606–13.
Hinz A, Mehnert A, Kocalevent R-D, Brähler E, Forkmann T, Singer S, Schulte T. Assessment of depression severity with the PHQ-9 in cancer patients and in the general population. BMC Psychiatry. 2016;16(1):22.
Hsieh H-F, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–88.
Muthén LK, Muthén BO. Mplus user's guide. 4th ed. Los Angeles, CA: Muthén & Muthén; 1998–2007.
Muthén LK, Muthén BO. How to use a Monte Carlo study to decide on sample size and determine power. Struct Equ Model. 2002;9(4):599–620.
Conn VS, Hafdahl AR, Porock DC, McDaniel R, Nielsen PJ. A meta-analysis of exercise interventions among people treated for cancer. Support Care Cancer. 2006;14(7):699–712.
Creswell JW, Creswell JD. Research design: qualitative, quantitative, and mixed methods approaches. Sage; 2017.
Tofthagen C. Patient perceptions associated with chemotherapy-induced peripheral neuropathy. Clin J Oncol Nurs. 2010;14(3).
Cata JP, Weng H-R, Chen J-H, Dougherty PM. Altered discharges of spinal wide dynamic range neurons and down-regulation of glutamate transporter expression in rats with paclitaxel-induced hyperalgesia. Neuroscience. 2006;138(1):329–38.
Visovsky C, Collins M, Abbott L, Ashenbrenner J, Hart C. Putting evidence into practice: evidenced-based interventions for chemotherapy-induced peripheral neuropathy. Clin J Oncol Nurs. 2007;11(6):901–13.
Hershman DL, Lacchetti C, Dworkin RH, Lavoie Smith EM, Bleeker J, Cavaletti G, Chauhan C, Gavin P, Lavino A, Lustberg MB. Prevention and management of chemotherapy-induced peripheral neuropathy in survivors of adult cancers: American Society of Clinical Oncology clinical practice guideline. J Clin Oncol. 2014;32(18):1941–67.
Teran-Wodzinski P, Haladay D, Vu T, Ji M, Coury J, Adams A, Schwab L, Visovsky C. Assessing gait, balance, and muscle strength among breast cancer survivors with chemotherapy-induced peripheral neuropathy (CIPN): study protocol for a randomized controlled clinical trial. Trials. 2022;23(1):363.
Tanay MAL, Armes J, Moss-Morris R, Rafferty AM, Robert G. A systematic review of behavioural and exercise interventions for the prevention and management of chemotherapy-induced peripheral neuropathy symptoms. J Cancer Surviv. 2021.
We acknowledge the support of physical therapists Dr. Stephanie Hart Hughes and Dr. Kelly Collins at the University of South Florida’s School of Physical Therapy and Rehabilitation Sciences Human Functional Performances Lab (HFPL) for the exercise assessment measures and Dr. Tran Vu for the nerve conduction assessment. We also acknowledge the financial support to Samia Valeria Ozorio Dutra from the Brazilian Federal Agency for Support and Evaluation of Graduate Education (CAPES). We acknowledge the support from Avree Ito-Fujita for figure resolution editing.
This research was funded by the National Cancer Institute NCI1R01CA229681-01A1 (Home-Based Physical Activity Intervention for Taxane-Induced CIPN). The role of the National Cancer Institute is to monitor the study through study reports following recruitment and progress of the study related to financial expenditures, outcomes, and adverse events. The protocol had undergone previous scientific peer review as part of the grant application. Contact information for the trial sponsor: Alexis Bakos, PhD RN, National Cancer Institute, [email protected].
Dr. Dutra also worked on this research while affiliated with the University of South Florida, College of Nursing and the University of Tennessee-Knoxville, College of Nursing. Dr. Ji also worked on this research while affiliated with the University of South Florida, College of Nursing.
Nancy Atmospera-Walch School of Nursing, University of Hawaii at Manoa, Honolulu, HI, USA
Samia Valeria Ozorio Dutra
College of Nursing, University of South Florida, Tampa, FL, USA
Lauren Schwab, Jillian Coury & Constance Visovsky
Health Sciences, University of New Mexico, Albuquerque, NM, USA
SVOD was a major contributor to the conception, design of the work, the process evaluation, and writing the manuscript. JC made substantial contributions to the manuscript update and revisions. LS made substantial contributions to the manuscript update and revisions. MJ made substantial contributions to the manuscript update and revisions. CV supervised and revised the manuscript. All authors read and approved the final manuscript.
Correspondence to Samia Valeria Ozorio Dutra .
Ethics approval and consent to participate.
The study protocol was approved by the University of South Florida Institutional Review Board (Pro00040035) and registered at ClinicalTrials.gov (Identifier: NCT04621721). Written informed consent was obtained from all participants prior to enrollment. All experiments were performed in accordance with relevant guidelines and regulations.
Not applicable.
The authors declare no competing interests.
Publisher’s note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Below is the link to the electronic supplementary material.
Supplementary material 2.

Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Reprints and permissions
Cite this article.
Ozorio Dutra, S.V., Schwab, L., Coury, J. et al. Process evaluation protocol plan for a home-based physical activity intervention versus educational intervention for persistent taxane-induced peripheral neuropathy (B-HAPI study): a randomized controlled trial. BMC Cancer 24, 777 (2024). https://doi.org/10.1186/s12885-024-12444-x
Download citation
Received: 23 January 2024
Accepted: 28 May 2024
Published: 27 June 2024
DOI: https://doi.org/10.1186/s12885-024-12444-x
ISSN: 1471-2407