-Is the target population narrow or broad?
-Is the target population vulnerable?
-What are the eligibility criteria?
-What is the most appropriate recruitment strategy?
Occasionally, the intended study population needs to be modified, in order to overcome potential ethical issues and/or for the sake of convenience and feasibility of the project. Yet, the researcher must be aware that the external validity of the results may be compromised. As an illustration, a randomised clinical trial compared the ease of tracheal tube insertion between the C-MAC video laryngoscope and direct laryngoscopy in patients presenting to the emergency department with an indication for rapid sequence intubation. However, owing to ethical concerns, a substantial number of patients requiring emergency tracheal intubation, including those with major maxillofacial trauma and ongoing cardiopulmonary resuscitation, had to be excluded from the trial.[ 14 ] In fact, designing prospective studies to explore this subset of patients can be challenging, not only because of ethical considerations, but also because of the low incidence of these cases. In another study, Metterlein et al. compared glottis visualisation among five different supraglottic airway devices, using fibreoptic-guided tracheal intubation in an adult population. Although the study was aimed at exploring the ease of intubation in patients with an anticipated difficult airway (thus requiring fibreoptic tracheal intubation), the authors decided to enrol patients undergoing elective laser treatment for genital condylomas, as a strategy to hasten the recruitment process and optimise resources.[ 15 ]
Anaesthetic interventions can be classified into pharmacological (experimental treatments) and nonpharmacological. Among nonpharmacological interventions, the most common include anaesthetic techniques, monitoring instruments and airway devices. For example, it would be appropriate to examine the ease of insertion of the Supreme™ LMA compared with the ProSeal™ LMA. Notwithstanding, a common mistake is the tendency to focus on the data to be collected (the "stated" objective), rather than the question that needs to be answered (the "latent" objective).[ 1 , 4 ] In one clinical trial, the authors stated: "we compared the Supreme™ and ProSeal™ LMAs in infants by measuring their performance characteristics, including insertion features, ventilation parameters, induced changes in haemodynamics, and rates of postoperative complications".[ 10 ] Here, the research question was centred on the measurements (insertion characteristics, ventilation parameters, haemodynamic variables) rather than on the clinical problem that needed to be addressed (is the Supreme™ LMA easier to insert than the ProSeal™ LMA?).
Comparators in clinical research can also be pharmacological (e.g., gold standard or placebo) or nonpharmacological. Typically, no more than two comparator groups are included in a clinical trial. Multiple comparisons should generally be avoided, unless there is enough statistical power to address the end points of interest and the statistical analyses have been adjusted for multiple testing. For instance, in the aforementioned study by Metterlein et al.,[ 15 ] the authors compared five supraglottic airway devices by recruiting only 10--12 participants per group. Although the authors recommended two of the supraglottic devices based on the results of the study, there was no mention of statistical adjustment for multiple comparisons, and given the small sample size, larger clinical trials will undoubtedly be needed to confirm or refute these findings.[ 15 ]
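To illustrate why such an adjustment matters, a minimal sketch of the Bonferroni correction is shown below. The p-values are hypothetical, not data from the cited study; the Bonferroni method is only one of several possible adjustments.

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni adjustment: each of the m comparisons is tested
    against the stricter threshold alpha / m, so that the familywise
    error rate stays at or below alpha."""
    threshold = alpha / len(p_values)
    return [(p, p <= threshold) for p in p_values]

# Hypothetical p-values from 10 pairwise comparisons among 5 devices
p_values = [0.012, 0.048, 0.003, 0.20, 0.07, 0.001, 0.09, 0.35, 0.04, 0.15]
results = bonferroni(p_values)

# With alpha = 0.05 and 10 comparisons, only p <= 0.005 remains significant
print([p for p, sig in results if sig])  # [0.003, 0.001]
```

Note that the nominally "significant" unadjusted values (0.012, 0.048, 0.04) no longer survive once the number of comparisons is taken into account.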
A clear formulation of the primary outcome is of vital importance in clinical research, as the primary statistical analyses, including the sample size calculation (and therefore the estimation of the effect size and statistical power), will be derived from the main outcome of interest. While it is clear that using more than one primary outcome would not be appropriate, it would be equally inadequate to include multiple point measurements of the same variable as the primary outcome (e.g., visual analogue scale for pain at 1, 2, 6, and 12 h postoperatively).
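As a hedged illustration of how the sample size follows directly from the primary outcome, the standard normal-approximation formula for comparing two means can be sketched as follows. The standard deviation and minimal clinically important difference below are hypothetical, chosen only to show the mechanics.

```python
import math
from statistics import NormalDist

def sample_size_two_means(sigma, delta, alpha=0.05, power=0.80):
    """Approximate per-group sample size for comparing two means with a
    two-sided test, equal variances and equal group sizes:
        n = 2 * ((z_{1-alpha/2} + z_{1-beta}) * sigma / delta) ** 2
    sigma: expected standard deviation of the outcome
    delta: minimal clinically important difference to detect."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    n = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
    return math.ceil(n)

# Hypothetical: detect a 5-second difference in insertion time (SD = 10 s)
print(sample_size_two_means(sigma=10, delta=5))             # 63 per group
print(sample_size_two_means(sigma=10, delta=5, power=0.90)) # 85 per group
```

The sketch makes the article's point concrete: halving the clinically meaningful difference, or raising the desired power, immediately changes the required sample size, so the primary outcome must be fixed before these calculations can be made.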
Composite outcomes, in which multiple primary endpoints are combined, may make it difficult to draw conclusions from the study findings. For example, in one clinical trial, 200 children undergoing ophthalmic surgery were recruited to compare the incidence of respiratory adverse events between desflurane and sevoflurane, following removal of a flexible LMA during emergence from anaesthesia. The primary outcome was the number of respiratory events, including breath holding, coughing, secretions requiring suction, laryngospasm, bronchospasm, and mild desaturation.[ 16 ] Had the authors claimed a significant difference between these volatile anaesthetics, it would have been important to elucidate whether the difference was driven by serious adverse events, such as laryngospasm or bronchospasm, or whether the results were explained by any of the other events (e.g., secretions requiring suction). While it is true that clinical trials evaluating the occurrence of adverse events such as laryngospasm/bronchospasm,[ 16 , 17 ] or life-threatening complications following tracheal intubation (e.g., inadvertent oesophageal placement, dental damage or injury of the larynx/pharynx),[ 14 ] are almost invariably underpowered, because the incidence of such events is expected to be low, subjective outcomes such as coughing or secretions requiring suction should be avoided, as they are highly dependent on the examiner's criteria.[ 16 ]
Secondary outcomes are useful to document potential side effects (e.g., gastric insufflation after placing a supraglottic device) and to evaluate the performance (say, airway leak pressure) and safety of the intervention (for instance, the occurrence of laryngospasm/bronchospasm).[ 17 ] Nevertheless, the problem of addressing multiple secondary outcomes without adequate statistical power is common in the medical literature. A good illustration of this issue can be found in a study evaluating the performance of two supraglottic devices in 50 anaesthetised infants and neonates, in which the authors could not draw any conclusions regarding potential differences in the occurrence of complications, because the calculated sample size left the study underpowered to explore those differences.[ 17 ]
Among PICOT components, the time frame is the most likely to be omitted or inappropriate.[ 1 , 12 ] There are two key aspects of the time component that need to be clearly specified in the research question: the time of measuring the outcome variables (e.g., visual analogue scale for pain at 1, 2, 6, and 12 h postoperatively), and the duration of each measurement (when indicated). The omission of these details in the study protocol might lead to substantial differences in the methodology used. For instance, if a study is designed to compare the insertion times of three different supraglottic devices, and the researchers do not specify the exact moment of LMA insertion in the clinical trial protocol (e.g., at anaesthetic induction after reaching a BIS index < 60), placing an LMA at an insufficient depth of anaesthesia would compromise the internal validity of the results, because inserting a supraglottic device in inadequately anaesthetised patients would result in failed attempts and longer insertion times.[ 10 ]
A well-elaborated research question is not necessarily a good question. The proposed study must also be achievable from both ethical and practical perspectives, interesting and useful to clinical practice, and capable of generating new hypotheses that contribute to knowledge. Researchers have developed an effective way to convey how to build a good research question, usually recalled under the acronym FINER (feasible, interesting, novel, ethical and relevant).[ 5 , 6 , 7 ] Table 2 highlights the main characteristics of the FINER criteria.[ 7 ]
Main features of the FINER criteria (feasibility, interest, novelty, ethics, and relevance) for formulating a good research question. Adapted from Cummings et al.[ 7 ]
Component | Criteria |
---|---|
Feasible | Ensures adequacy of research design; guarantees adequate funding; recruits target population strategically; aims for an achievable sample size; prioritises measurable outcomes; optimises human and technical resources; accounts for clinicians' commitment; procures high adherence to treatment and a low rate of dropouts; opts for an appropriate and affordable time frame |
Interesting | Engages the interest of principal investigators; attracts the attention of readers; presents a different perspective of the problem |
Novel | Provides different findings; generates new hypotheses; improves methodological flaws of existing studies; resolves a gap in the existing literature |
Ethical | Complies with local ethics committees; safeguards the main principles of ethical research; guarantees safety and reversibility of side effects |
Relevant | Generates new knowledge; contributes to improving clinical practice; stimulates further research; provides an accurate answer to a specific research question |
Although it is clear that any research project should commence with a thorough interpretation of the literature, in many instances this represents both the start and the end of the research: the reader will soon realise that the answer to many questions can easily be found in the published literature.[ 5 ] When the question survives the test of a thorough literature review, the project may be considered novel (there is a gap in the knowledge, and therefore a need for new evidence on the topic) and relevant (the paper may contribute to changing clinical practice). In this context, it is important to distinguish between statistical significance and clinical relevance: in the aforementioned study by Oba et al.,[ 10 ] the difference in mean insertion times between the Supreme™ and ProSeal™ LMAs (528 vs. 486 sec, respectively), although reported as significant, had little or no clinical relevance.[ 10 ] Conversely, a statistically significant difference of 12 sec might be clinically relevant in neonates weighing <5 kg.[ 17 ] Thus, statistical tests must be interpreted in the context of a clinically meaningful effect size, which should be defined in advance by the researcher.
Among the FINER criteria, there are two potential barriers that may prevent the successful conduct of the project and publication of the manuscript: feasibility and ethical aspects. These obstacles are usually related to the target population, as discussed above. Feasibility refers not only to the budget but also to the complexity of the design, recruitment strategy, blinding, adequacy of the sample size, measurement of the outcome, time of follow-up of participants, and commitment of clinicians, among others.[ 3 , 7 ] Funding, as a component of feasibility, may also bear on the ethical principles of clinical research, because the choice of the primary study question may be markedly influenced by the specific criteria demanded by potential funders.
Discussing ethical issues with local committees is compulsory, as the rules applied may vary among countries.[ 18 ] Potential risks and benefits need to be carefully weighed, based upon the four principles of respect for autonomy, beneficence, non-maleficence, and justice.[ 19 ] Although many of these issues may be related to the target population (e.g., conducting a clinical trial in patients with ongoing cardiopulmonary resuscitation would be inappropriate, as would be anaesthetising patients undergoing elective laser treatment for condylomas to examine the performance of supraglottic airway devices),[ 14 , 15 ] ethical conflicts may also arise from the intervention (particularly those involving the occurrence of side effects or complications, and their potential for reversibility), comparison (e.g., use of placebo or sham procedures),[ 19 ] outcome (surrogate outcomes should be considered in lieu of long-term outcomes), or time frame (e.g., unnecessarily prolonged exposure to an intervention). Thus, the FINER criteria should not be conceived without a concomitant examination of the PICOT checklist; consequently, the PICOT framework and FINER criteria should not be seen as separate components, but rather as complementary ingredients of a good research question.
Undoubtedly, no research project can be conducted if it is deemed unfeasible, and most institutional review boards would not be in a position to approve a work with major ethical problems. Nonetheless, whether or not the findings are interesting is a subjective matter. Engaging the attention of readers also depends upon a number of factors, including the manner of presenting the problem, the background of the topic, the intended audience, and the reader's expectations. Furthermore, interest is usually linked to the novelty and relevance of the topic, and it is worth noting that editors and peer reviewers of high-impact medical journals are usually reluctant to accept a publication if there is no novelty inherent to the research hypothesis, or if the results lack relevance.[ 11 ] Nevertheless, a considerable number of papers have been published without any novelty or relevance in the topic addressed. This is probably reflected in a recent survey, according to which only a third of respondents declared having thoroughly read the most recent papers they had downloaded, and at least half of those manuscripts remained unread.[ 20 ] The same study reported that up to one-third of the papers examined remained uncited 5 years after publication, and only 20% of papers accounted for 80% of the citations.[ 20 ]
Formulating a good research question can be fascinating, albeit challenging, even for experienced investigators. While it is clear that clinical experience, in combination with an accurate interpretation of the literature and teamwork, is essential to develop new ideas, the formulation of a clinical problem usually requires compliance with the PICOT framework in conjunction with the FINER criteria, in order to translate a clinical dilemma into a researchable question. Working in the right environment, with adequate support from experienced researchers, will certainly make a difference in the generation of knowledge. In this way, a great deal of time will be saved in the search for the primary study question, and there will undoubtedly be a better chance of becoming a successful researcher.
Conflicts of interest.
There are no conflicts of interest.
There is an increasing familiarity with the principles of evidence-based medicine in the surgical community. As surgeons become more aware of the hierarchy of evidence, grades of recommendations and the principles of critical appraisal, they develop an increasing familiarity with research design. Surgeons and clinicians are looking more and more to the literature and clinical trials to guide their practice; as such, it is becoming a responsibility of the clinical research community to attempt to answer questions that are not only well thought out but also clinically relevant. The development of the research question, including a supportive hypothesis and objectives, is a necessary key step in producing clinically relevant results to be used in evidence-based practice. A well-defined and specific research question is more likely to help guide us in making decisions about study design and population and subsequently what data will be collected and analyzed. 1
In this article, we discuss important considerations in the development of a research question and hypothesis and in defining objectives for research. By the end of this article, the reader will be able to appreciate the significance of constructing a good research question and developing hypotheses and research objectives for the successful design of a research study. The following article is divided into 3 sections: research question, research hypothesis and research objectives.
Interest in a particular topic usually begins the research process, but it is the familiarity with the subject that helps define an appropriate research question for a study. 1 Questions then arise out of a perceived knowledge deficit within a subject area or field of study. 2 Indeed, Haynes suggests that it is important to know “where the boundary between current knowledge and ignorance lies.” 1 The challenge in developing an appropriate research question is in determining which clinical uncertainties could or should be studied and also rationalizing the need for their investigation.
Increasing one’s knowledge about the subject of interest can be accomplished in many ways. Appropriate methods include systematically searching the literature, in-depth interviews and focus groups with patients (and proxies) and interviews with experts in the field. In addition, awareness of current trends and technological advances can assist with the development of research questions. 2 It is imperative to understand what has been studied about a topic to date in order to further the knowledge that has been previously gathered on a topic. Indeed, some granting institutions (e.g., Canadian Institute for Health Research) encourage applicants to conduct a systematic review of the available evidence if a recent review does not already exist and preferably a pilot or feasibility study before applying for a grant for a full trial.
In-depth knowledge about a subject may generate a number of questions. It then becomes necessary to ask whether these questions can be answered through one study or whether more than one study is needed. 1 Additional research questions can be developed, but several basic principles should be taken into consideration. 1 All questions, primary and secondary, should be developed at the beginning and planning stages of a study. Any additional questions should never compromise the primary question because it is the primary research question that forms the basis of the hypothesis and study objectives. It must be kept in mind that within the scope of one study, the presence of a number of research questions will affect and potentially increase the complexity of both the study design and subsequent statistical analyses, not to mention the actual feasibility of answering every question. 1 A sensible strategy is to establish a single primary research question around which to focus the study plan. 3 In a study, the primary research question should be clearly stated at the end of the introduction of the grant proposal, and it usually specifies the population to be studied, the intervention to be implemented and other circumstantial factors. 4
Hulley and colleagues 2 have suggested the use of the FINER criteria in the development of a good research question ( Box 1 ). The FINER criteria highlight useful points that may increase the chances of developing a successful research project. A good research question should specify the population of interest, be of interest to the scientific community and potentially to the public, have clinical relevance and further current knowledge in the field (and of course be compliant with the standards of ethical boards and national research standards).
Box 1. FINER criteria
- Feasible
- Interesting
- Novel
- Ethical
- Relevant
Adapted with permission from Wolters Kluwer Health. 2
Whereas the FINER criteria outline the important aspects of the question in general, a useful format to use in the development of a specific research question is the PICO format — consider the population (P) of interest, the intervention (I) being studied, the comparison (C) group (or to what is the intervention being compared) and the outcome of interest (O). 3 , 5 , 6 Often timing (T) is added to PICO ( Box 2 ) — that is, “Over what time frame will the study take place?” 1 The PICOT approach helps generate a question that aids in constructing the framework of the study and subsequently in protocol development by alluding to the inclusion and exclusion criteria and identifying the groups of patients to be included. Knowing the specific population of interest, intervention (and comparator) and outcome of interest may also help the researcher identify an appropriate outcome measurement tool. 7 The more defined the population of interest, and thus the more stringent the inclusion and exclusion criteria, the greater the effect on the interpretation and subsequent applicability and generalizability of the research findings. 1 , 2 A restricted study population (and exclusion criteria) may limit bias and increase the internal validity of the study; however, this approach will limit external validity of the study and, thus, the generalizability of the findings to the practical clinical setting. Conversely, a broadly defined study population and inclusion criteria may be representative of practical clinical practice but may increase bias and reduce the internal validity of the study.
Box 2. PICOT format
- Population (patients)
- Intervention (for intervention studies only)
- Comparison group
- Outcome of interest
- Time
A poorly devised research question may affect the choice of study design, potentially lead to futile situations and, thus, hamper the chance of determining anything of clinical significance, which will then affect the potential for publication. Without devoting appropriate resources to developing the research question, the quality of the study and subsequent results may be compromised. During the initial stages of any research study, it is therefore imperative to formulate a research question that is both clinically relevant and answerable.
The primary research question should be driven by the hypothesis rather than the data. 1 , 2 That is, the research question and hypothesis should be developed before the start of the study. This sounds intuitive; however, if we take, for example, a database of information, it is potentially possible to perform multiple statistical comparisons of groups within the database to find a statistically significant association. This could then lead one to work backward from the data and develop the “question.” This is counterintuitive to the process because the question is asked specifically to then find the answer, thus collecting data along the way (i.e., in a prospective manner). Multiple statistical testing of associations from data previously collected could potentially lead to spuriously positive findings of association through chance alone. 2 Therefore, a good hypothesis must be based on a good research question at the start of a trial and, indeed, drive data collection for the study.
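The risk of spurious findings described above can be quantified with simple arithmetic: if m independent comparisons of groups with no true difference are each tested at a significance level alpha, the probability of at least one false-positive "association" is 1 − (1 − alpha)^m. A minimal sketch (the comparison counts are illustrative, not from any cited study):

```python
def familywise_error(m, alpha=0.05):
    """Probability of at least one spurious 'significant' finding when
    m independent true-null comparisons are each tested at level alpha."""
    return 1 - (1 - alpha) ** m

# The more comparisons mined from a database, the more likely a chance hit
for m in (1, 5, 20, 100):
    print(m, round(familywise_error(m), 2))
# 1 -> 0.05, 5 -> 0.23, 20 -> 0.64, 100 -> 0.99
```

With 20 unplanned comparisons, the chance of at least one spuriously "significant" result already exceeds 60%, which is why the hypothesis must precede, and drive, the data collection.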
The research or clinical hypothesis is developed from the research question and then the main elements of the study — sampling strategy, intervention (if applicable), comparison and outcome variables — are summarized in a form that establishes the basis for testing, statistical and ultimately clinical significance. 3 For example, in a research study comparing computer-assisted acetabular component insertion versus freehand acetabular component placement in patients in need of total hip arthroplasty, the experimental group would be computer-assisted insertion and the control/conventional group would be free-hand placement. The investigative team would first state a research hypothesis. This could be expressed as a single outcome (e.g., computer-assisted acetabular component placement leads to improved functional outcome) or potentially as a complex/composite outcome; that is, more than one outcome (e.g., computer-assisted acetabular component placement leads to both improved radiographic cup placement and improved functional outcome).
However, when formally testing statistical significance, the hypothesis should be stated as a “null” hypothesis. 2 The purpose of hypothesis testing is to make an inference about the population of interest on the basis of a random sample taken from that population. The null hypothesis for the preceding research hypothesis then would be that there is no difference in mean functional outcome between the computer-assisted insertion and free-hand placement techniques. After forming the null hypothesis, the researchers would form an alternate hypothesis stating the nature of the difference, if it should appear. The alternate hypothesis would be that there is a difference in mean functional outcome between these techniques. At the end of the study, the null hypothesis is then tested statistically. If the findings of the study are not statistically significant (i.e., there is no difference in functional outcome between the groups in a statistical sense), we cannot reject the null hypothesis, whereas if the findings were significant, we can reject the null hypothesis and accept the alternate hypothesis (i.e., there is a difference in mean functional outcome between the study groups), errors in testing notwithstanding. In other words, hypothesis testing confirms or refutes the statement that the observed findings did not occur by chance alone but rather occurred because there was a true difference in outcomes between these surgical procedures. The concept of statistical hypothesis testing is complex, and the details are beyond the scope of this article.
Another important concept inherent in hypothesis testing is whether the hypothesis will be 1-sided or 2-sided. A 2-sided hypothesis states that there is a difference between the experimental group and the control group, but it does not specify in advance the expected direction of the difference. For example, we asked whether there is an improvement in outcomes with computer-assisted surgery or whether the outcomes are worse with computer-assisted surgery. We presented a 2-sided test in the above example because we did not specify the direction of the difference. A 1-sided hypothesis states a specific direction (e.g., there is an improvement in outcomes with computer-assisted surgery). A 2-sided hypothesis should be used unless there is a good justification for using a 1-sided hypothesis. As Bland and Altman 8 stated, "One-sided hypothesis testing should never be used as a device to make a conventionally nonsignificant difference significant."
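The 1-sided versus 2-sided distinction can be sketched numerically. The example below uses a large-sample normal approximation to the two-sample test and entirely hypothetical functional outcome scores (not data from any cited study); a real analysis would use a proper t test.

```python
from statistics import NormalDist

def two_sample_z(mean1, mean2, sd1, sd2, n1, n2):
    """Large-sample z statistic for a difference in two group means
    (normal approximation; a sketch, not a substitute for a t test)."""
    standard_error = (sd1 ** 2 / n1 + sd2 ** 2 / n2) ** 0.5
    return (mean1 - mean2) / standard_error

# Hypothetical functional outcome scores: computer-assisted vs freehand
z = two_sample_z(mean1=78.0, mean2=72.0, sd1=12.0, sd2=12.0, n1=60, n2=60)

# 2-sided: direction not specified in advance -> both tails count
p_two_sided = 2 * (1 - NormalDist().cdf(abs(z)))
# 1-sided: improvement specified in advance -> only the upper tail counts
p_one_sided = 1 - NormalDist().cdf(z)

print(round(z, 2), round(p_two_sided, 3), round(p_one_sided, 3))
```

The 1-sided p-value is exactly half the 2-sided one here, which is precisely why Bland and Altman warn against choosing a 1-sided test after the fact to push a borderline result below the significance threshold.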
The research hypothesis should be stated at the beginning of the study to guide the objectives for research. Whereas the investigators may state the hypothesis as being 1-sided (there is an improvement with treatment), the study and investigators must adhere to the concept of clinical equipoise. According to this principle, a clinical (or surgical) trial is ethical only if the expert community is uncertain about the relative therapeutic merits of the experimental and control groups being evaluated. 9 It means there must exist an honest and professional disagreement among expert clinicians about the preferred treatment. 9
Designing a research hypothesis is supported by a good research question and will influence the type of research design for the study. Acting on the principles of appropriate hypothesis development, the study can then confidently proceed to the development of the research objective.
The primary objective should be coupled with the hypothesis of the study. Study objectives define the specific aims of the study and should be clearly stated in the introduction of the research protocol. 7 From our previous example and using the investigative hypothesis that there is a difference in functional outcomes between computer-assisted acetabular component placement and free-hand placement, the primary objective can be stated as follows: this study will compare the functional outcomes of computer-assisted acetabular component insertion versus free-hand placement in patients undergoing total hip arthroplasty. Note that the study objective is an active statement about how the study is going to answer the specific research question. Objectives can (and often do) state exactly which outcome measures are going to be used within their statements. They are important because they not only help guide the development of the protocol and design of study but also play a role in sample size calculations and determining the power of the study. 7 These concepts will be discussed in other articles in this series.
From the surgeon’s point of view, it is important for the study objectives to be focused on outcomes that are important to patients and clinically relevant. For example, the most methodologically sound randomized controlled trial comparing 2 techniques of distal radial fixation would have little or no clinical impact if the primary objective was to determine the effect of treatment A as compared to treatment B on intraoperative fluoroscopy time. However, if the objective was to determine the effect of treatment A as compared to treatment B on patient functional outcome at 1 year, this would have a much more significant impact on clinical decision-making. Second, more meaningful surgeon–patient discussions could ensue, incorporating patient values and preferences with the results from this study. 6 , 7 It is the precise objective and what the investigator is trying to measure that is of clinical relevance in the practical setting.
The following is an example from the literature about the relation between the research question, hypothesis and study objectives:
Study: Warden SJ, Metcalf BR, Kiss ZS, et al. Low-intensity pulsed ultrasound for chronic patellar tendinopathy: a randomized, double-blind, placebo-controlled trial. Rheumatology 2008;47:467–71.
Research question: How does low-intensity pulsed ultrasound (LIPUS) compare with a placebo device in managing the symptoms of skeletally mature patients with patellar tendinopathy?
Research hypothesis: Pain levels are reduced in patients who receive daily active-LIPUS (treatment) for 12 weeks compared with individuals who receive inactive-LIPUS (placebo).
Objective: To investigate the clinical efficacy of LIPUS in the management of patellar tendinopathy symptoms.
The development of the research question is the most important aspect of a research project. A research project can fail if the objectives and hypothesis are poorly focused and underdeveloped. Useful tips for surgical researchers are provided in Box 3 . Designing and developing an appropriate and relevant research question, hypothesis and objectives can be a difficult task. The critical appraisal of the research question used in a study is vital to the application of the findings to clinical practice. Focusing resources, time and dedication to these 3 very important tasks will help to guide a successful research project, influence interpretation of the results and affect future publication efforts.
Perform a systematic literature review (if one has not been done) to increase knowledge and familiarity with the topic and to assist with research development.
Learn about current trends and technological advances on the topic.
Seek careful input from experts, mentors, colleagues and collaborators, as this will aid in refining the research question and guide the research study.
Use the FINER criteria in the development of the research question.
Ensure that the research question follows PICOT format.
Develop a research hypothesis from the research question.
Develop clear and well-defined primary and secondary (if needed) objectives.
Ensure that the research question and objectives are answerable, feasible and clinically relevant.
FINER = feasible, interesting, novel, ethical, relevant; PICOT = population (patients), intervention (for intervention studies only), comparison group, outcome of interest, time.
Competing interests: No funding was received in preparation of this paper. Dr. Bhandari was funded, in part, by a Canada Research Chair, McMaster University.
National Institutes of Health ( NIH )
National Institute of Allergy and Infectious Diseases ( NIAID )
National Institute of Dental and Craniofacial Research ( NIDCR )
National Institute on Drug Abuse ( NIDA )
National Institute of Mental Health ( NIMH )
U19 Research Program – Cooperative Agreements
See Section III.3. Additional Information on Eligibility.
The purpose of this NOFO is to support the Pediatric HIV/AIDS Cohort Study (PHACS) as a transformative and agile program addressing the developmental and clinical course of persons living with HIV, and perinatally acquired HIV, with an emphasis on youth through reproductive age in the United States.
This Notice of Funding Opportunity (NOFO) requires a Plan for Enhancing Diverse Perspectives (PEDP).
November 5, 2024
Application Due Dates and Review and Award Cycles

| New | Renewal / Resubmission / Revision (as allowed) | AIDS – New/Renewal/Resubmission/Revision (as allowed) | Scientific Merit Review | Advisory Council Review | Earliest Start Date |
| --- | --- | --- | --- | --- | --- |
| Not Applicable | Not Applicable | December 11, 2024 | March 2025 | May 2025 | July 2025 |
All applications are due by 5:00 PM local time of applicant organization.
Applicants are encouraged to apply early to allow adequate time to make any corrections to errors found in the application during the submission process by the due date.
Not Applicable
It is critical that applicants follow the Multi-Project (M) Instructions in the How to Apply - Application Guide , except where instructed to do otherwise (in this NOFO or in a Notice from the NIH Guide for Grants and Contracts ). Conformance to all requirements (both in the How to Apply - Application Guide and the NOFO) is required and strictly enforced. Applicants must read and follow all application instructions in the How to Apply - Application Guide as well as any program-specific instructions noted in Section IV. When the program-specific instructions deviate from those in the How to Apply - Application Guide , follow the program-specific instructions. Applications that do not comply with these instructions may be delayed or not accepted for review.
There are several options available to submit your application through Grants.gov to NIH and Department of Health and Human Services partners. You must use one of these submission options to access the application forms for this opportunity.
Section I. Notice of Funding Opportunity Description
The purpose of this NOFO is to support the Pediatric HIV/AIDS Cohort Study (PHACS) cohorts as a transformative, streamlined, and agile program addressing the developmental and clinical course of persons living with HIV and perinatally acquired HIV in the United States. The integration of investigators with experience using streamlined scientific and administrative methods and approaches to enhance the function and scientific vision of the cohorts is encouraged.
The goals of this initiative are to support research on the developmental and clinical course of persons living with HIV and perinatally acquired HIV, including the effects of HIV and HIV treatment on fertility, pregnancy and post-partum outcomes, complications, co-morbidities, and co-infections, including gynecologic conditions and sexually transmitted infections (STIs) such as syphilis, chlamydia, gonorrhea, HPV, trichomoniasis, and CMV. The transition to adulthood of youth living with vertically acquired HIV who have been on ART for an extended period provides an important opportunity to understand many early-developing health issues, including cardiovascular, metabolic, and immune conditions. Early changes in oral health, alcohol and substance use, and behavioral, mental, and social health outcomes may also be evaluated.
The WHO has reported that in 2022 there were approximately 1.2 M pregnant women and 1.5 M children living with HIV; at the end of 2022 there were 9.9K women and 8.8K children accessing ART globally. Individuals with perinatally acquired HIV live with a chronic illness and face the developmental consequences of prolonged HIV infection, associated co-morbidities, and long-term ART, which can affect health beginning with the development of the immune system and continuing over the life course into young adulthood. Of the >1 M people in the US diagnosed with HIV at the end of 2019, 12,355 had vertically acquired HIV. Vertically acquired HIV in the US disproportionately affects certain racial and ethnic groups, with the largest numbers among Black and Hispanic populations. People with vertically acquired HIV face the developmental consequences of exposure to HIV in utero, long-term antiretroviral therapy (ART) use, and associated co-morbidities and co-infections that affect health throughout life.
There were 2.5 million new cases of syphilis, chlamydia, and gonorrhea reported by the CDC in 2022, with a 555% increase in syphilis reported. These STIs are syndemic with HIV and affect similar populations. The groups most affected by STIs and HIV include pregnant people, those who misuse substances, and those aged 13-24.
The findings from United States (US) based initiatives have great relevance internationally since millions of children living with HIV in resource-constrained settings receive treatment and survive into adolescence and adulthood, and many pregnant people with HIV have access to and use combination antiretroviral therapy to prevent transmission of HIV to their infants and preserve their own health. The increased availability of antiretrovirals for HIV treatment and prevention has allowed for an increased number of children with perinatally acquired HIV to age into adulthood globally. There is limited clinical data on the long-term impact of HIV and its treatments on this population as they enter reproductive age and have children of their own.
Building on the infrastructure, community connections, and data obtained in PHACS and similar US cohorts of individuals with perinatally acquired HIV, the opportunity to study the generational consequences of lifelong ART is critical. For example, the PHACS Adolescent Master Protocol Cohort (AMP Series) includes youth who received very early treatment and may have had nearly lifelong HIV suppression. These data may also inform HIV cure research. Collaborations will continue to be encouraged with other similar cohorts in both resource-rich and resource-constrained settings for data harmonization and sharing.
Cohorts of Interest:
Cohorts of 500 to 1,000 individuals at risk for or living with perinatally or behaviorally acquired HIV, including youth and women of reproductive age are of interest. Recruitment of pregnant and non-pregnant individuals at high risk for or living with HIV (including perinatally acquired) and their children will continue to be encouraged. New enrollments will continue to capture the evolving type and timing of antiretrovirals used as youth transition to adulthood and during pregnancy. The impact of new HIV regimens in these populations will inform the future direction of long-acting antiretrovirals, multipurpose prevention technologies, and vaccines. Activities to support the maintenance and enrichment of the foundational cohorts proposed for study will continue to follow the needed numbers of participants in proposed protocols, but at least 200 new individuals, including children, will be recruited as an addition to the active cohorts each year.
It is expected that appropriate control groups, pertinent to the cohorts being studied, are included.
Cohorts will also be used in focused Research Pilots (sub-studies) to answer new questions as the research landscape evolves. This will enable the study of priority scientific investigations more rapidly than could be accomplished by individual projects alone.
The collection of basic information in areas of interest is expected to continue through base protocols and in other supported studies and should include but not be limited to:
Examples of activities supported and encouraged under this NOFO include, but are not limited to:
Essential Features of the U19 Structure
Scientific Administrative Core (SAC) (required)
The Scientific Administrative Core (SAC) provides overall management, communication, coordination, and supervision of the Program. The SAC administers the plan provided in the application to address the short- and long-term management of the Program. The SAC will monitor progress, develop, and implement a project management plan, and define timelines. Additionally, the Scientific Administrative Core will coordinate detailed communication of efforts and progress with NICHD and participating NIH program staff.
The SAC will provide outreach and establish collaborations with other networks and studies, develop and maintain bylaws and policies, and mitigate conflicts of interest. In addition, the SAC will convene a Scientific Leadership Group and an Executive Committee, and recruit and support the activities of an External Advisory Group (EAG).
The SAC will bring the necessary expertise and resources for collaborative protocol development to ensure feasible and acceptable study design(s), with proven ability to recruit and retain these unique populations through 5-15 competitive subcontracts to clinical sites with demonstrated high-level prior performance.
The SAC will maintain discretionary funds to support the Emerging Research Pilots (ERPs) and may conduct an annual competition for an Early Career Investigator Award. The SAC will also be responsible for developing plans to mentor new and early-stage investigators to develop independent research careers.
The SAC will also be responsible for holding an annual group meeting to review accomplishments and plan the project agenda.
Data Management and Analysis Core (DMAC) (required)
The Data Management and Analysis Core (DMAC) will be responsible for providing central data storage and data management, with safeguards to protect the integrity of the data, to all projects within a U19 application, and will be responsible for ensuring the submission of data, meta-data, and related data analyses to DASH or other appropriate public databases approved by NICHD. The core will also provide analytic support and development of methods, as needed, to integrate and/or harmonize data and methods for activities across research projects. The DMAC must demonstrate that existing datasets pertinent to the research proposed are usable and accessible through DASH or other publicly accessible data systems.
The DMAC will develop and direct the overarching Project Management Plan for the Cores and Research Projects. The project management plan must include a transition plan to another responsible steward and long-term archival of the data. This is required if the current team no longer manages the data resource, or the entire resource is sunset.
The DMAC will:
The Core Lead is responsible for ensuring that shared scientific and analytic resources/facilities are available and utilized to the maximum extent possible and that procedures are developed to ensure that such resources are available to members of the research team in a timely manner. The data management and analysis core will also be responsible for ensuring compliance with data sharing policies. The DMAC is encouraged to provide data as it becomes available. To achieve the goal of data sharing from large epidemiologic studies in which data are collected over several discrete time periods or waves, it is reasonable to expect that these data would be released in waves as data become available or main findings from waves of the data are published.
It is expected and encouraged that the DMAC will lead an effort to engage the community to inform the research project, implementation, and dissemination of research findings. This may include translation of findings into resources of interest, coordination of dissemination activities with community members, partner organizations, and relevant service organizations or policymakers.
The effort may also include the support of a community advisory board and/or utilization of a community-based participatory research approach as applicable. Using plain language strategies, dissemination activities should include an effort to translate findings from projects and strategic planning into sustainable community and system-level changes.
The PHACS U19 Research Projects
Each U19 will include a maximum of 3 Research Projects along with Core(s) necessary to support the projects. Research projects should focus on the effects of antiretroviral therapy (ART) on HIV during reproductive years and/or the developmental and clinical course of persons living with perinatally transmitted HIV. The U19 research program will be facilitated by the sharing of ideas, data, and specialized resources, such as equipment, services, and clinical facilities. The Research Projects proposed must be scientifically meritorious, complement one another, be synergistic, and support the program's overall theme. Thus, the program's overall scientific merit should be greater than the sum of its parts.
Research Projects require the participation of established investigators in several disciplines or investigators with special expertise in several areas of one discipline. All Senior/Key Personnel (PDs/PIs, Project Leads, Core Leads) must contribute to, and share in, the responsibilities of fulfilling the program objectives.
Each Research Project should contribute materially and intellectually to the specific goals and objectives of the Program Project, contribute expertise and/or resources toward its aims, and emphasize collaboration across all components of the U19. Each Research Project should articulate a scientific vision that anticipates the ongoing evolution of the field and an emerging scientific agenda by briefly addressing: the current state of knowledge on the clinical course of vertically transmitted HIV in children and adolescents; the critical scientific questions in the clinical course of HIV from preconception to post-partum, including the significant scientific gaps and opportunities; and the research, tools, resources, and collaborations needed to progress toward filling those gaps to improve health outcomes in these populations.
Research Projects should be supported by the Scientific Administrative Core (SAC), Data Management and Analysis Core (DMAC) and any other optional appropriate Cores to enhance the research objectives.
The PD/PI must possess recognized scientific and administrative competence, devote a substantial commitment of effort to the program, and exercise leadership in maintaining program quality.
Optional Core (Optional)
Up to 2 optional cores may be proposed to support the research projects proposed for the U19.
Cores are optional and may be included to provide investigators with core resources and/or facilities that are essential for the activities of two or more Research Projects. Core activities must not overlap with each other or with the activities of a Research Project. The Core (optional) will be evaluated as Acceptable or Not Acceptable based on whether it is essential for the proposed research and has the capability to fulfill the proposed function.
Annual Programmatic Meetings
A one- or two-day annual meeting will be held at or near Bethesda, MD, at another NICHD-approved site, or virtually as needed. Costs associated with this meeting should be included in the budget.
External Advisory Group (EAG)
An independent External Advisory Group (EAG) of investigators who are not current collaborators of the funded programs is expected to be constituted by the PD/PI(s) of the U19 program project and the NIH. The EAG will meet at least biannually to review the progress in achieving the goals of all research projects participating in the program. The EAG will make recommendations in writing for the continuation or re-direction of any or all projects and activities. Costs associated with the EAG should be included in the budget.
NICHD Data Sharing Expectations and Requirements
The NIH Policy for Data Management and Sharing (Policy) expects researchers to maximize the sharing of scientific data and expects data to be accessible as soon as possible, and no later than the time of an associated publication or the end of the award period, whichever comes first. NIH requires all applications submitted in response to this NOFO to include a Data Management and Sharing Plan (DMS Plan). The DMS Plan is expected to address the Elements as described in Supplemental Information to the NIH Policy for Data Management and Sharing: Elements of an NIH Data Management and Sharing Plan (NOT-OD-21-014). The DMS Plan will be reviewed and approved by NIH Program Staff prior to award. Awardees will be required to comply with their approved DMS Plan and any approved updates.
For human data, NICHD encourages the use of the Data and Specimen Hub (DASH), a centralized resource for researchers to store and access de-identified data from studies funded by NICHD. Information about DASH may be obtained at https://dash.nichd.nih.gov/. For projects generating large-scale human genetic data, applicants should provide a Provisional or Institutional Certification specifying whether the individual-level data can be shared through an NIH approved repository, such as dbGaP and the Sequence Read Archive, in line with the NIH Genomic Data Sharing Policy.
If use of DASH is not feasible, NICHD expects awardees to share data through other equivalent broad-sharing data repositories. For applications that aim to analyze existing data, DMS Plans should describe where and how other researchers can access that data to enable reproducibility and reuse. Additional information on the Data Management and Sharing Policy is available on the NICHD Office of Data Science and Sharing website.
See Section VIII. Other Information for award authorities and regulations.
Plan for Enhancing Diverse Perspectives (PEDP)

The NIH recognizes that teams comprised of investigators with diverse perspectives working together and capitalizing on innovative ideas and distinct viewpoints outperform homogeneous teams. There are many benefits that flow from a scientific workforce rich with diverse perspectives, including: fostering scientific innovation, enhancing global competitiveness, contributing to robust learning environments, improving the quality of the research, advancing the likelihood that underserved populations participate in and benefit from research, and enhancing public trust.

To support the best science, the NIH encourages inclusivity in research guided by the consideration of diverse perspectives. Broadly, diverse perspectives can include but are not limited to the educational background and scientific expertise of the people who perform the research; the populations who participate as human subjects in research studies; and the places where research is done.

This NOFO requires a Plan for Enhancing Diverse Perspectives (PEDP), which will be assessed as part of the scientific and technical peer review evaluation. Assessment of applications containing a PEDP is based on the scientific and technical merit of the proposed project. Consistent with federal law, the race, ethnicity, or sex of a researcher, award participant, or trainee will not be considered during the application review process or when making funding decisions. Applications that fail to include a PEDP will be considered incomplete and will be administratively withdrawn before review.

The PEDP will be submitted as Other Project Information as an attachment (see Section IV). Applicants are strongly encouraged to read the NOFO instructions carefully and view the available PEDP guidance materials.
Cooperative Agreement: A financial assistance mechanism used when there will be substantial Federal scientific or programmatic involvement. Substantial involvement means that, after award, NIH scientific or program staff will assist, guide, coordinate, or participate in project activities. See Section VI.2 for additional information about the substantial involvement for this NOFO.
The OER Glossary and the How to Apply - Application Guide provides details on these application types. Only those application types listed here are allowed for this NOFO.
Not Allowed: Only accepting applications that do not propose clinical trials.
The issuing IC, NICHD, and partner components intend to commit an estimated total of $11M to fund 1-2 awards.
Application budgets may not exceed $5.5M in direct costs per year and must reflect the actual needs of the proposed project.
NIH grants policies as described in the NIH Grants Policy Statement will apply to the applications submitted and awards made from this NOFO.
1. Eligible Applicants

Eligible Organizations

Higher Education Institutions
- Public/State Controlled Institutions of Higher Education
- Private Institutions of Higher Education

The following types of Higher Education Institutions are always encouraged to apply for NIH support as Public or Private Institutions of Higher Education:
- Hispanic-serving Institutions
- Historically Black Colleges and Universities (HBCUs)
- Tribally Controlled Colleges and Universities (TCCUs)
- Alaska Native and Native Hawaiian Serving Institutions
- Asian American Native American Pacific Islander Serving Institutions (AANAPISIs)

Nonprofits Other Than Institutions of Higher Education
- Nonprofits with 501(c)(3) IRS Status (Other than Institutions of Higher Education)
- Nonprofits without 501(c)(3) IRS Status (Other than Institutions of Higher Education)

For-Profit Organizations
- Small Businesses
- For-Profit Organizations (Other than Small Businesses)

Local Governments
- State Governments
- County Governments
- City or Township Governments
- Special District Governments
- Indian/Native American Tribal Governments (Federally Recognized)
- Indian/Native American Tribal Governments (Other than Federally Recognized)

Federal Governments
- Eligible Agencies of the Federal Government
- U.S. Territory or Possession

Other
- Independent School Districts
- Public Housing Authorities/Indian Housing Authorities
- Native American Tribal Organizations (other than Federally recognized tribal governments)
- Faith-based or Community-based Organizations
- Regional Organizations

Foreign Organizations
Non-domestic (non-U.S.) Entities (Foreign Organizations) are not eligible to apply. Non-domestic (non-U.S.) components of U.S. Organizations are not eligible to apply. Foreign components, as defined in the NIH Grants Policy Statement, are not allowed.
Required Registrations

Applicant Organizations

Applicant organizations must complete and maintain the following registrations as described in the How to Apply - Application Guide to be eligible to apply for or receive an award. All registrations must be completed prior to the application being submitted. Registration can take 6 weeks or more, so applicants should begin the registration process as soon as possible. Failure to complete registrations in advance of a due date is not a valid reason for a late submission; please reference NIH Grants Policy Statement Section 2.3.9.2 Electronically Submitted Applications for additional information.

- System for Award Management (SAM) – Applicants must complete and maintain an active registration, which requires renewal at least annually. The renewal process may require as much time as the initial registration. SAM registration includes the assignment of a Commercial and Government Entity (CAGE) Code for domestic organizations which have not already been assigned a CAGE Code.
- NATO Commercial and Government Entity (NCAGE) Code – Foreign organizations must obtain an NCAGE code (in lieu of a CAGE code) in order to register in SAM.
- Unique Entity Identifier (UEI) – A UEI is issued as part of the SAM.gov registration process. The same UEI must be used for all registrations, as well as on the grant application.
- eRA Commons – Once the unique organization identifier is established, organizations can register with eRA Commons in tandem with completing their Grants.gov registration; all registrations must be in place by time of submission. eRA Commons requires organizations to identify at least one Signing Official (SO) and at least one Program Director/Principal Investigator (PD/PI) account in order to submit an application.
- Grants.gov – Applicants must have an active SAM registration in order to complete the Grants.gov registration.
Program Directors/Principal Investigators (PD(s)/PI(s))

All PD(s)/PI(s) must have an eRA Commons account. PD(s)/PI(s) should work with their organizational officials to either create a new account or to affiliate their existing account with the applicant organization in eRA Commons. If the PD/PI is also the organizational Signing Official, they must have two distinct eRA Commons accounts, one for each role. Obtaining an eRA Commons account can take up to 2 weeks.

Eligible Individuals (Program Director/Principal Investigator)

Any individual(s) with the skills, knowledge, and resources necessary to carry out the proposed research as the Program Director(s)/Principal Investigator(s) (PD(s)/PI(s)) is invited to work with his/her organization to develop an application for support. Individuals from diverse backgrounds, including underrepresented racial and ethnic groups, individuals with disabilities, and women are always encouraged to apply for NIH support. See Reminder: Notice of NIH's Encouragement of Applications Supporting Individuals from Underrepresented Ethnic and Racial Groups as well as Individuals with Disabilities, NOT-OD-22-019.

For institutions/organizations proposing multiple PDs/PIs, visit the Multiple Program Director/Principal Investigator Policy and submission details in the Senior/Key Person Profile (Expanded) Component of the How to Apply - Application Guide.

2. Cost Sharing
This NOFO does not require cost sharing as defined in the NIH Grants Policy Statement Section 1.2- Definitions of Terms.
Number of Applications
Applicant organizations may submit more than one application, provided that each application is scientifically distinct.
The NIH will not accept duplicate or highly overlapping applications under review at the same time per NIH Grants Policy Statement Section 2.3.7.4 Submission of Resubmission Application . This means that the NIH will not accept:
1. Requesting an Application Package
The application forms package specific to this opportunity must be accessed through ASSIST or an institutional system-to-system solution. A button to apply using ASSIST is available in Part 1 of this NOFO. See the administrative office for instructions if planning to use an institutional system-to-system solution.
It is critical that applicants follow the Multi-Project (M) Instructions in the How to Apply - Application Guide , except where instructed in this notice of funding opportunity to do otherwise and where instructions in the How to Apply - Application Guide are directly related to the Grants.gov downloadable forms currently used with most NIH opportunities. Conformance to the requirements in the How to Apply - Application Guide is required and strictly enforced. Applications that are out of compliance with these instructions may be delayed or not accepted for review.
Although a letter of intent is not required, is not binding, and does not enter into the review of a subsequent application, the information that it contains allows IC staff to estimate the potential review workload and plan the review.
By the date listed in Part 1. Overview Information , prospective applicants are asked to submit a letter of intent that includes the following information:
The letter of intent should be sent to:
Denise Russo, Ph.D. Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) Telephone: 301-435-6871 Email: [email protected]
All page limitations described in the How to Apply- Application Guide and the Table of Page Limits must be followed.
| Component | Component Type for Submission | Page Limit | Required/Optional | Minimum | Maximum |
| --- | --- | --- | --- | --- | --- |
| Overall | Overall | 12 | Required | 1 | 1 |
| Scientific Administrative Core | SAC | 12 | Required | 1 | 1 |
| Cores | Cores | 6 | Optional | 0 | 2 |
| Data Management and Analysis Core | DMAC | 12 | Required | 1 | 1 |
| Projects | Projects | 12 | Required | 2 | 3 |
When preparing the application, use Component Type ‘Overall’.
All instructions in the How to Apply - Application Guide must be followed, with the following additional instructions, as noted.
SF424(R&R) Cover (Overall)
Complete entire form.
PHS 398 Cover Page Supplement (Overall)
Note: Human embryonic stem cell lines from other components should be repeated in the cell line table in the Overall component.
Research & Related Other Project Information (Overall)
Follow standard instructions.
Plan for Enhancing Diverse Perspectives (PEDP)
Examples of items that advance inclusivity in research and may be appropriate for a PEDP can include, but are not limited to:
Examples of items that are not appropriate in a PEDP include, but are not limited to:
For further information on the Plan for Enhancing Diverse Perspectives (PEDP), please see PEDP guidance materials .
Project/Performance Site Locations (Overall)
Enter primary site only.
A summary of Project/Performance Sites in the Overall section of the assembled application image in eRA Commons compiled from data collected in the other components will be generated upon submission.
Include only the Project Director/Principal Investigator (PD/PI) and any multi-PDs/PIs (if applicable to this NOFO) for the entire application.
The U19 Program Project PD/PI(s)
A summary of Senior/Key Persons followed by their Biographical Sketches in the Overall section of the assembled application image in eRA Commons will be generated upon submission.
Budget (Overall)
The only budget information included in the Overall component is the Estimated Project Funding section of the SF424 (R&R) Cover.
PEDP implementation costs: Applicants may include allowable costs associated with PEDP implementation (as outlined in the Grants Policy Statement section 7): https://grants.nih.gov/grants/policy/nihgps/html5/section_7/7.1_general.htm.
A budget summary in the Overall section of the assembled application image in eRA Commons compiled from detailed budget data collected in the other components will be generated upon submission.
PHS 398 Research Plan (Overall)
Introduction to Application: For Resubmission and Revision applications, an Introduction to Application is required in the Overall component.
Specific Aims: Should comprehensively address the overall goals of the U19.
Research Strategy: Summarize the overall research objectives and strategic plan for the multi-project application. Applications responding to this NOFO should describe the central theme of the proposed Program and explain how the proposed Research Projects are synergistic and fit under the overarching Program theme.
Letters of Support: Include letters of support/agreement for any collaborative/cooperative arrangements, subcontracts, or consultants. Letter of support for the U19 Cooperative Multi-Program Projects overall should be included with the Overall Component. Letter of support for individual Research Projects or Cores should be included with those components of the applications. For program activities to be conducted off site, i.e., at an institution other than the application institution, a letter of assurance or comparable documentation, signed by the collaborator as well as the off-site institutional officials, must be submitted with the application.
Resource Sharing Plan: Individuals are required to comply with the instructions for the Resource Sharing Plans as provided in the How to Apply - Application Guide.
Other Plan(s):
All instructions in the How to Apply - Application Guide must be followed, with the following additional instructions:
Only limited items are allowed in the Appendix. Follow all instructions for the Appendix as described in the How to Apply - Application Guide; any instructions provided here are in addition to the How to Apply - Application Guide instructions.
PHS Human Subjects and Clinical Trials Information (Overall)
When involving human subjects research, clinical research, and/or NIH-defined clinical trials, follow all instructions for the PHS Human Subjects and Clinical Trials Information form in the How to Apply - Application Guide, with the following additional instructions:
If you answered Yes to the question Are Human Subjects Involved? on the R&R Other Project Information form, there must be at least one human subjects study record using the Study Record: PHS Human Subjects and Clinical Trials Information form or a Delayed Onset Study record within the application. The study record(s) must be included in the component(s) where the work is being done, unless the same study spans multiple components. To avoid the creation of duplicate study records, a single study record with sufficient information for all involved components must be included in the Overall component when the same study spans multiple components.
Study Record: PHS Human Subjects and Clinical Trials Information
All instructions in the How to Apply - Application Guide must be followed.
Delayed Onset Study
Note: Delayed onset does NOT apply to a study that can be described but will not start immediately (i.e., delayed start). All instructions in the How to Apply - Application Guide must be followed.
PHS Assignment Request Form (Overall)
All instructions in the How to Apply - Application Guide must be followed.
When preparing your application, use Component Type ‘Administrative Core’.
All instructions in the How to Apply - Application Guide must be followed, with the following additional instructions, as noted.
Note: Effective for due dates on or after January 25, 2023, the Data Management and Sharing Plan will be attached in the Other Plan(s) attachment in FORMS-H application forms packages. If required, the Data Management and Sharing (DMS) Plan must be provided in the Overall component.
Complete only the following fields:
Enter Human Embryonic Stem Cells in each relevant component.
Human Subjects: Answer only the ‘Are Human Subjects Involved?’ and ‘Is the Project Exempt from Federal regulations?’ questions.
Vertebrate Animals: Answer only the ‘Are Vertebrate Animals Used?’ question.
Project Narrative: Do not complete. Note: ASSIST screens will show an asterisk for this attachment indicating it is required. However, eRA systems only enforce this requirement in the Overall component and applications will not receive an error if omitted in other components.
List all performance sites that apply to the specific component.
Note: The Project Performance Site form allows up to 300 sites prior to using an additional attachment for further entries.
Funding for the overall administrative efforts, including secretarial and/or other administrative services, expenses for publications demonstrating collaborative efforts, and communication expenses, should be requested in the budget for this core.
Budget forms appropriate for the specific component will be included in the application package.
Note: The R&R Budget form included in many of the component types allows for up to 100 Senior/Key Persons in section A and 100 Equipment Items in section C prior to using attachments for additional entries. All other SF424 (R&R) instructions apply.
Specific Aims: List, in priority order, the broad, long-range objectives and goals of the Scientific Administrative Core. State the Core's relationship to the multi-project program goals and how it relates to the Research Projects and any other Cores in the application. Include a brief list of Specific Aims outlining the objectives and functions of the Scientific Administrative Core.
Research Strategy: The overview of the Scientific Administrative Core should articulate the strategy that the Program Project will adopt to achieve its scientific goals and describe the processes/approaches that will be used in decision-making and implementation of activities, including the establishment of scientific priorities and the strategies used to manage the Program Project.
Letters of Support: Provide letters of support specific to this component.
Resource Sharing Plan: Individuals are required to comply with the instructions for the Resource Sharing Plans as provided in the How to Apply - Application Guide. The Data Management and Sharing (DMS) Plan must be provided in the Overall component.
Only limited items are allowed in the Appendix. Follow all instructions for the Appendix as described in the How to Apply - Application Guide; any instructions provided here are in addition to the Application Guide instructions.
When involving human subjects research, clinical research, and/or NIH-defined clinical trials, follow all instructions for the PHS Human Subjects and Clinical Trials Information form in the How to Apply - Application Guide, with the following additional instructions:
If you answered Yes to the question Are Human Subjects Involved? on the R&R Other Project Information form, you must include at least one human subjects study record using the Study Record: PHS Human Subjects and Clinical Trials Information form or a Delayed Onset Study record.
Delayed Onset Study:
Note: Delayed onset does NOT apply to a study that can be described but will not start immediately (i.e., delayed start). All instructions in the How to Apply - Application Guide must be followed.
When preparing your application, use Component Type ‘Data Core’.
All instructions in the SF424 (R&R) Application Guide must be followed, with the following additional instructions, as noted.
PHS 398 Cover Page Supplement (Data Management and Analysis Core)
Research & Related Other Project Information (Data Management and Analysis Core)
Project Narrative: Do not complete. Note: ASSIST screens will show an asterisk for this attachment indicating it is required. However, eRA systems only enforce this requirement in the Overall component and applications will not receive an error if omitted in other components.
Research & Related Senior/Key Person Profile (Data Management and Analysis Core)
PHS 398 Research Plan (Data Management and Analysis Core)
Specific Aims: List, in priority order, the broad, long-range objectives and goals of the proposed Core. In addition, state the Core's relationship to the Program Project and how it relates to the individual Research Projects or other Cores in the application. Include a brief list of Specific Aims outlining the objectives and functions of the Data Management and Analysis Core.
Research Strategy: Describe the organizational structure and role of the Data Management and Analysis Core in the overall Program Project research activities and include a strategy for management of data activities that describes internal and external data acquisition strategies to achieve harmonization of systems and procedures for data management, data quality, data analyses, and dissemination for all data and data-related materials. Provide information on innovative capabilities in data analysis and visualization and how these will be developed. Describe the strategies and processes that will be used to manage the DMAC and achieve the overall goals, including monitoring progress on milestones, implementation of the Project Management Plan and proposed Timelines. The DMAC must demonstrate that existing datasets pertinent to the research proposed are usable and accessible through DASH or other publicly accessible data systems.
Describe the utilization of the Core and include the following :
Describe how DMAC will demonstrate that existing datasets pertinent to the research proposed are usable and accessible through DASH or other publicly accessible data systems.
Resource Sharing Plan:
Individuals are required to comply with the instructions for the Resource Sharing Plans as provided in the SF424 (R&R) Application Guide. The Data Management and Sharing (DMS) Plan must be provided in the Overall component.
Only limited items are allowed in the Appendix. Follow all instructions for the Appendix as described in the SF424 (R&R) Application Guide; any instructions provided here are in addition to the SF424 (R&R) Application Guide instructions.
When involving human subjects research, clinical research, and/or NIH-defined clinical trials follow all instructions for the PHS Human Subjects and Clinical Trials Information form in the SF424 (R&R) Application Guide, with the following additional instructions:
If you answered Yes to the question Are Human Subjects Involved? on the R&R Other Project Information form, you must include at least one human subjects study record using the Study Record: PHS Human Subjects and Clinical Trials Information form or a Delayed Onset Study record.
All instructions in the SF424 (R&R) Application Guide must be followed.
Note: Delayed onset does NOT apply to a study that can be described but will not start immediately (i.e., delayed start). All instructions in the SF424 (R&R) Application Guide must be followed.
When preparing your application, use Component Type ‘Project’.
PHS 398 Cover Page Supplement (Research Project)
Research & Related Other Project Information (Research Project)
Project/Performance Site Location(s) (Research Project)
Research & Related Senior/Key Person Profile (Research Project)
PHS 398 Research Plan (Research Project)
Specific Aims: Provide Specific Aims for the Research Project.
Research Strategy: Following the instructions in the SF424 (R&R) Application Guide, start each section with the appropriate section heading—Significance, Innovation, Approach.
Letters of Support: Provide letters of support specific to the Research Projects.
Individuals are required to comply with the instructions for the Resource Sharing Plans as provided in the SF424 (R&R) Application Guide. The Data Management and Sharing (DMS) Plan must be provided in the Overall component.
Note: Delayed onset does NOT apply to a study that can be described but will not start immediately (i.e., delayed start). All instructions in the SF424 (R&R) Application Guide must be followed.
When preparing your application, use Component Type ‘Core’.
Cores are optional and may be included to provide investigators with core resources and/or facilities that are essential for the activities of two or more Research Projects. Core activities must not overlap with each other or with the activities of a Research Project.
PHS 398 Cover Page Supplement (Optional Core)
Research & Related Other Project Information (Optional Core)
The Core (optional) will be evaluated as Acceptable or Not Acceptable based on whether it is essential for the proposed research and has the capability to fulfill the proposed function.
Research & Related Senior/Key Person Profile (Optional Core)
Budget (Optional Core)
PHS 398 Research Plan (Optional Core)
Specific Aims:
Include a brief list of Specific Aims outlining the objectives and functions of the Core.
Research Strategy:
Provide the following information:
Letters of Support: Include letters of support specific to this component.
Only limited items are allowed in the Appendix. Follow all instructions for the Appendix as described in the SF424 (R&R) Application Guide; any instructions provided here are in addition to the SF424 (R&R) Application Guide instructions.
3. Unique Entity Identifier and System for Award Management (SAM)
See Part 2, Section III.1 for information regarding the requirement for obtaining a unique entity identifier and for completing and maintaining active registrations in the System for Award Management (SAM), NATO Commercial and Government Entity (NCAGE) Code (if applicable), eRA Commons, and Grants.gov.
Part I contains information about Key Dates and times. Applicants are encouraged to submit applications before the due date to ensure they have time to make any application corrections that might be necessary for successful submission. When a submission date falls on a weekend or Federal holiday, the application deadline is automatically extended to the next business day.
Organizations must submit applications to Grants.gov (the online portal to find and apply for grants across all Federal agencies) using ASSIST or other electronic submission systems. Applicants must then complete the submission process by tracking the status of the application in the eRA Commons, NIH's electronic system for grants administration. NIH and Grants.gov systems check the application against many of the application instructions upon submission. Errors must be corrected and a changed/corrected application must be submitted to Grants.gov on or before the application due date and time. If a Changed/Corrected application is submitted after the deadline, the application will be considered late. Applications that miss the due date and time are subject to the NIH Grants Policy Statement Section 2.3.9.2, Electronically Submitted Applications.
Applicants are responsible for viewing their application before the due date in the eRA Commons to ensure accurate and successful submission.
Information on the submission process and a definition of on-time submission are provided in the How to Apply - Application Guide.
This initiative is not subject to intergovernmental review .
All NIH awards are subject to the terms and conditions, cost principles, and other considerations described in the NIH Grants Policy Statement .
Pre-award costs are allowable only as described in the NIH Grants Policy Statement Section 7.9.1 Selected Items of Cost.
Applications must be submitted electronically following the instructions described in the How to Apply - Application Guide . Paper applications will not be accepted.
Applicants must complete all required registrations before the application due date. Section III. Eligibility Information contains information about registration.
For assistance with your electronic application or for more information on the electronic submission process, visit How to Apply – Application Guide . If you encounter a system issue beyond your control that threatens your ability to complete the submission process on-time, you must follow the Dealing with System Issues guidance. For assistance with application submission, contact the Application Submission Contacts in Section VII .
Important reminders:
All PD(s)/PI(s) must include their eRA Commons ID in the Credential field of the Senior/Key Person Profile form. Failure to register in the Commons and to include a valid PD/PI Commons ID in the credential field will prevent the successful submission of an electronic application to NIH. See Section III of this NOFO for information on registration requirements.
The applicant organization must ensure that the unique entity identifier provided on the application is the same identifier used in the organization's profile in the eRA Commons and for the System for Award Management. Additional information may be found in the How to Apply - Application Guide.
See more tips for avoiding common errors.
Applications must include a PEDP submitted as Other Project Information as an attachment. Applications that fail to include a PEDP will be considered incomplete and will be administratively withdrawn before review.
Upon receipt, applications will be evaluated for completeness and compliance with application instructions by the Center for Scientific Review, and for responsiveness by components of participating NIH organizations. Applications that are incomplete, non-compliant, and/or nonresponsive will not be reviewed.
Use of Common Data Elements in NIH-funded Research
Many NIH ICs encourage the use of common data elements (CDEs) in basic, clinical, and applied research, patient registries, and other human subject research to facilitate broader and more effective use of data and advance research across studies. CDEs are data elements that have been identified and defined for use in multiple data sets across different studies. Use of CDEs can facilitate data sharing and standardization to improve data quality and enable data integration from multiple studies and sources, including electronic health records. NIH ICs have identified CDEs for many clinical domains (e.g., neurological disease), types of studies (e.g., genome-wide association studies (GWAS)), types of outcomes (e.g., patient-reported outcomes), and patient registries (e.g., the Global Rare Diseases Patient Registry and Data Repository). NIH has established a Common Data Element (CDE) Resource Portal ( http://cde.nih.gov/ ) to assist investigators in identifying NIH-supported CDEs when developing protocols, case report forms, and other instruments for data collection. The Portal provides guidance about and access to NIH-supported CDE initiatives and other tools and resources for the appropriate use of CDEs and data standards in NIH-funded research. Investigators are encouraged to consult the Portal and describe in their applications any use they will make of NIH-supported CDEs in their projects.
Recipients or subrecipients must submit any information related to violations of federal criminal law involving fraud, bribery, or gratuity violations potentially affecting the federal award. See Mandatory Disclosures, 2 CFR 200.113 and NIH Grants Policy Statement Section 4.1.35 .
Send written disclosures to the NIH Chief Grants Management Officer listed on the Notice of Award for the IC that funded the award and to the HHS Office of Inspector General Grant Self-Disclosure Program at [email protected].
Post Submission Materials
Applicants are required to follow the instructions for post-submission materials, as described in the policy .
1. Criteria
Only the review criteria described below will be considered in the review process. Applications submitted to NIH in support of the NIH mission are evaluated for scientific and technical merit through the NIH peer review system.
Reviewers will provide an overall impact score to reflect their assessment of the likelihood for the program to exert a sustained, powerful influence on the research field(s) involved, in consideration of the following review criteria and additional review criteria (as applicable for the program proposed).
As part of the overall impact score, reviewers should consider and indicate how the Plan to Enhance Diverse Perspectives affects the scientific merit of the program.
Reviewers will consider each of the review criteria below in the determination of scientific merit and give a separate score for each. An application does not need to be strong in all categories to be judged likely to have major scientific impact. For example, a program that by its nature is not innovative may be essential to advance a field.
Significance
Does the program address an important problem or a critical barrier to progress in the field? Is the prior research that serves as the key support for the proposed program rigorous? If the aims of the program are achieved, how will scientific knowledge, technical capability, and/or clinical practice be improved? How will successful completion of the aims change the concepts, methods, technologies, treatments, services, or preventative interventions that drive this field?
Investigator(s)
Are the PD(s)/PI(s), collaborators, and other researchers well suited to the program? If Early Stage Investigators or those in the early stages of independent careers, do they have appropriate experience and training? If established, have they demonstrated an ongoing record of accomplishments that have advanced their field(s)? If the program is collaborative or multi-PD/PI, do the investigators have complementary and integrated expertise; are their leadership approach, governance and organizational structure appropriate for the program?
Specific to this NOFO: Are Early Stage Investigators (ESIs) involved in different components of the program application? If a multi-PD/PI application, are they part of the leadership plan?
Does the application challenge and seek to shift current research or clinical practice paradigms by utilizing novel theoretical concepts, approaches or methodologies, instrumentation, or interventions? Are the concepts, approaches or methodologies, instrumentation, or interventions novel to one field of research or novel in a broad sense? Is a refinement, improvement, or new application of theoretical concepts, approaches or methodologies, instrumentation, or interventions proposed?
Are the overall strategy, methodology, and analyses well-reasoned and appropriate to accomplish the specific aims of the program? Have the investigators included plans to address weaknesses in the rigor of prior research that serves as the key support for the proposed program? Have the investigators presented strategies to ensure a robust and unbiased approach, as appropriate for the work proposed? Are potential problems, alternative strategies, and benchmarks for success presented? If the program is in the early stages of development, will the strategy establish feasibility and will particularly risky aspects be managed? Have the investigators presented adequate plans to address relevant biological variables, such as sex, for studies in vertebrate animals or human subjects?
If the program involves human subjects and/or NIH-defined clinical research, are the plans to address:
1) the protection of human subjects from research risks, and 2) inclusion (or exclusion) of individuals on the basis of sex/gender, race, and ethnicity, as well as the inclusion or exclusion of individuals of all ages (including children and older adults), justified in terms of the scientific goals and research strategy proposed?
Specific to this NOFO :
Is there robust synergy/integration across the program proposed, including the Projects and Cores?
Environment
Will the scientific environment in which the work will be done contribute to the probability of success? Are the institutional support, equipment and other physical resources available to the investigators adequate for the program proposed? Will the program benefit from unique features of the scientific environment, subject populations, or collaborative arrangements?
As applicable for the program proposed, reviewers will evaluate the following additional items while determining scientific and technical merit, and in providing an overall impact score, but will not give separate scores for these items.
For research that involves human subjects but does not involve one of the categories of research that are exempt under 45 CFR Part 46, the committee will evaluate the justification for involvement of human subjects and the proposed protections from research risk relating to their participation according to the following five review criteria: 1) risk to subjects, 2) adequacy of protection against risks, 3) potential benefits to the subjects and others, 4) importance of the knowledge to be gained, and 5) data and safety monitoring for clinical trials.
For research that involves human subjects and meets the criteria for one or more of the categories of research that are exempt under 45 CFR Part 46, the committee will evaluate: 1) the justification for the exemption, 2) human subjects involvement and characteristics, and 3) sources of materials. For additional information on review of the Human Subjects section, please refer to the Guidelines for the Review of Human Subjects .
When the proposed program involves human subjects and/or NIH-defined clinical research, the committee will evaluate the proposed plans for the inclusion (or exclusion) of individuals on the basis of sex/gender, race, and ethnicity, as well as the inclusion (or exclusion) of individuals of all ages (including children and older adults) to determine if it is justified in terms of the scientific goals and research strategy proposed. For additional information on review of the Inclusion section, please refer to the Guidelines for the Review of Inclusion in Clinical Research .
The committee will evaluate the involvement of live vertebrate animals as part of the scientific assessment according to the following three points: (1) a complete description of all proposed procedures including the species, strains, ages, sex, and total numbers of animals to be used; (2) justifications that the species is appropriate for the proposed research and why the research goals cannot be accomplished using an alternative non-animal model; and (3) interventions including analgesia, anesthesia, sedation, palliative care, and humane endpoints that will be used to limit any unavoidable discomfort, distress, pain and injury in the conduct of scientifically valuable research. Methods of euthanasia and justification for selected methods, if NOT consistent with the AVMA Guidelines for the Euthanasia of Animals, is also required but is found in a separate section of the application. For additional information on review of the Vertebrate Animals Section, please refer to the Worksheet for Review of the Vertebrate Animals Section.
Reviewers will assess whether materials or procedures proposed are potentially hazardous to research personnel and/or the environment, and if needed, determine whether adequate protection is proposed.
Additional Review Considerations - Overall
As applicable for the program proposed, reviewers will consider each of the following items, but will not give scores for these items, and should not consider them in providing an overall impact score.
Select Agent Research
Reviewers will assess the information provided in this section of the application, including 1) the Select Agent(s) to be used in the proposed research, 2) the registration status of all entities where Select Agent(s) will be used, 3) the procedures that will be used to monitor possession, use, and transfer of Select Agent(s), and 4) plans for appropriate biosafety, biocontainment, and security of the Select Agent(s).
Reviewers will comment on whether the Resource Sharing Plan(s) (e.g., Sharing Model Organisms ) or the rationale for not sharing the resources, is reasonable.
For programs involving key biological and/or chemical resources, reviewers will comment on the brief plans proposed for identifying and ensuring the validity of those resources.
Reviewers will consider whether the budget and the requested period of support are fully justified and reasonable in relation to the proposed research.
Reviewers will evaluate the following items in determining scientific and technical merit. Reviewers will provide a single impact score for the Scientific Administrative Core. Reviewers will not give separate scores for the individual items. Reviewers will not provide criteria scores.
Reviewers will evaluate the following items in determining scientific and technical merit. Reviewers will provide a single impact score for the Data Management and Analysis Core. Reviewers will not give separate scores for the individual items. Reviewers will not provide criteria scores.
Reviewers will rate the Optional Core as Acceptable or Not Acceptable based on whether it is essential and justified for the proposed research and has the capability to fulfill the proposed function (reviewers will evaluate the number of Projects serviced by the Core; the Core must service two or more Projects).
Reviewers will evaluate the following items in determining scientific and technical merit.
The following items should be considered in providing an overall evaluation of the optional Core(s) as Acceptable or Not Acceptable.
Overall Impact - Research Projects
Reviewers will provide an overall impact score to reflect their assessment of the likelihood for the project to exert a sustained, powerful influence on the research field(s) involved, in consideration of the following review criteria and additional review criteria (as applicable for the project proposed).
Reviewers will consider each of the review criteria below in the determination of scientific merit and give a separate score for each. An application does not need to be strong in all categories to be judged likely to have major scientific impact. For example, a project that by its nature is not innovative may be essential to advance a field.
Does the project address an important problem or a critical barrier to progress in the field? Is the prior research that serves as the key support for the proposed project rigorous? If the aims of the project are achieved, how will scientific knowledge, technical capability, and/or clinical practice be improved? How will successful completion of the aims change the concepts, methods, technologies, treatments, services, or preventative interventions that drive this field?
Are the Project Leads, collaborators, and other researchers well suited to the project? If Early Stage Investigators or those in the early stages of independent careers, do they have appropriate experience and training? If established, have they demonstrated an ongoing record of accomplishments that have advanced their field(s)? If the project is collaborative or multi-PD/PI, do the investigators have complementary and integrated expertise; are their leadership approach, governance, and organizational structure appropriate for the project?
Does the application challenge and seek to shift current research or clinical practice paradigms by utilizing novel theoretical concepts, approaches or methodologies, instrumentation, or interventions? Are the concepts, approaches or methodologies, instrumentation, or interventions novel to one field of research or novel in a broad sense? Is a refinement, improvement, or new application of theoretical concepts, approaches or methodologies, instrumentation, or interventions proposed?
Are the overall strategy, methodology, and analyses well-reasoned and appropriate to accomplish the specific aims of the project? Have investigators included plans to address weaknesses in the rigor of prior research that serves as the key support for the proposed project? Have the investigators presented strategies to ensure a robust and unbiased approach, as appropriate for the work proposed? Are potential problems, alternative strategies, and benchmarks for success presented? If the project is in the early stages of development, will the strategy establish feasibility, and will particularly risky aspects be managed? Have the investigators presented adequate plans to address relevant biological variables, such as sex, for studies in vertebrate animals or human subjects?
If the project involves human subjects and/or NIH-defined clinical research, are the plans to address:
1) the protection of human subjects from research risks, and
2) inclusion (or exclusion) of individuals on the basis of sex/gender, race, and ethnicity, as well as the inclusion or exclusion of individuals of all ages (including children and older adults), justified in terms of the scientific goals and research strategy proposed?
Specific to this NOFO : Has the research project's use of the Core services, including why they are needed, been adequately explained?
Will the scientific environment in which the work will be done contribute to the probability of success? Are the institutional support, equipment, and other physical resources available to the investigators adequate for the project proposed? Will the project benefit from unique features of the scientific environment, subject populations, or collaborative arrangements?
For research that involves human subjects but does not involve one of the categories of research that are exempt under 45 CFR Part 46, the committee will evaluate the justification for involvement of human subjects and the proposed protections from research risk relating to their participation according to the following five review criteria: 1) risk to subjects, 2) adequacy of protection against risks, 3) potential benefits to the subjects and others, 4) importance of the knowledge to be gained, and 5) data and safety monitoring for clinical trials.
For research that involves human subjects and meets the criteria for one or more of the categories of research that are exempt under 45 CFR Part 46, the committee will evaluate: 1) the justification for the exemption, 2) human subjects' involvement and characteristics, and 3) sources of materials. For additional information on review of the Human Subjects section, please refer to the Guidelines for the Review of Human Subjects .
When the proposed project involves human subjects and/or NIH-defined clinical research, the committee will evaluate the proposed plans for the inclusion (or exclusion) of individuals on the basis of sex/gender, race, and ethnicity, as well as the inclusion (or exclusion) of individuals of all ages (including children and older adults) to determine if it is justified in terms of the scientific goals and research strategy proposed. For additional information on review of the Inclusion section, please refer to the Guidelines for the Review of Inclusion in Clinical Research .
The committee will evaluate the involvement of live vertebrate animals as part of the scientific assessment according to the following criteria: (1) description of proposed procedures involving animals, including species, strains, ages, sex, and total number to be used; (2) justifications for the use of animals versus alternative models and for the appropriateness of the species proposed; (3) interventions to minimize discomfort, distress, pain and injury; and (4) justification for euthanasia method if NOT consistent with the AVMA Guidelines for the Euthanasia of Animals. Reviewers will assess the use of chimpanzees as they would any other application proposing the use of vertebrate animals. For additional information on review of the Vertebrate Animals section, please refer to the Worksheet for Review of the Vertebrate Animals Section .
Not Applicable
Reviewers will comment on whether the Resource Sharing Plan(s) (e.g., Sharing Model Organisms) or the rationale for not sharing the resources is reasonable.
For projects involving key biological and/or chemical resources, reviewers will comment on the brief plans proposed for identifying and ensuring the validity of those resources.
After the peer review of the application is completed, the PD/PI will be able to access their Summary Statement (written critique) via the eRA Commons. Refer to Part 1 for dates for peer review, advisory council review, and earliest start date.
Information regarding the disposition of applications is available in the NIH Grants Policy Statement Section 2.4.4, Disposition of Applications.
1. Award Notices
A Notice of Award (NoA) is the official authorizing document notifying the applicant that an award has been made and that funds may be requested from the designated HHS payment system or office. The NoA is signed by the Grants Management Officer and emailed to the recipient's business official.
In accepting the award, the recipient agrees that any activities under the award are subject to all provisions currently in effect or implemented during the period of the award, other Department regulations and policies in effect at the time of the award, and applicable statutory provisions.
Recipients must comply with any funding restrictions described in Section IV.6, Funding Restrictions. Any pre-award costs incurred before receipt of the NoA are at the applicant's own risk. For more information on the Notice of Award, please refer to the NIH Grants Policy Statement Section 5, The Notice of Award, and the NIH Grants & Funding website (see Award Process).
Institutional Review Board or Independent Ethics Committee Approval: Grantee institutions must ensure that protocols are reviewed by their IRB or IEC. To help ensure the safety of participants enrolled in NIH-funded studies, the recipient must provide NIH copies of documents related to all major changes in the status of ongoing protocols.
Prior Approval of Pilot Projects
Recipient-selected projects that involve clinical trials or studies involving greater than minimal risk to human subjects require NIH approval prior to initiation.
The following Federal wide and HHS-specific policy requirements apply to awards funded through NIH:
All federal statutes and regulations relevant to federal financial assistance, including those highlighted in NIH Grants Policy Statement Section 4 Public Policy Requirements, Objectives and Other Appropriation Mandates.
Recipients are responsible for ensuring that their activities comply with all applicable federal regulations. NIH may terminate awards under certain circumstances. See 2 CFR Part 200.340 Termination and NIH Grants Policy Statement Section 8.5.2, Remedies for Noncompliance or Enforcement Actions: Suspension, Termination, and Withholding of Support.
The following special terms of award are in addition to, and not in lieu of, otherwise applicable U.S. Office of Management and Budget (OMB) administrative guidelines, U.S. Department of Health and Human Services (HHS) grant administration regulations at 2 CFR Part 200, and other HHS, PHS, and NIH grant administration policies.
The administrative and funding instrument used for this program will be the cooperative agreement, an "assistance" mechanism (rather than an "acquisition" mechanism), in which substantial NIH programmatic involvement with the recipients is anticipated during the performance of the activities. Under the cooperative agreement, the NIH purpose is to support and stimulate the recipients' activities by involvement in and otherwise working jointly with the recipients in a partnership role; it is not to assume direction, prime responsibility, or a dominant role in the activities. Consistent with this concept, the dominant role and prime responsibility resides with the recipients for the project as a whole, although specific tasks and activities may be shared among the recipients and NIH as defined below.
The structure of this cooperative agreement encourages interaction and discussion among NIH staff and all involved investigators leading to more robust and innovative research strategies and methods for clinical research to enroll and retain vulnerable reproductive age young adult populations at risk for and living with HIV or at high risk for HIV. Substantive and frequent scientific and administrative involvement of the NICHD and the co-funding ICs (Institutes) Project Scientists will assist the investigators in developing the scientific agenda, refining study protocols, monitoring the progress of the clinical research and participant safety, and coordinating the activities of the Cohorts, including plans for data harmonization, curating, archiving and utilization. The cooperative agreement mechanism will also serve to facilitate cross-Cohort and multi-agency Collaborations, including efforts to ensure participants are prioritized in behavioral and biomedical clinical research.
PD(s)/PI(s) Responsibilities
PD(s)/PI(s) will have the primary responsibility for coordinating the Projects and Cores within the overall Program. Specifically, the PD(s)/PI(s) have primary responsibility as described below.
NIH staff have substantial programmatic involvement that is above and beyond the normal stewardship role in awards, as described below:
The NIH Project Scientists, representing each of the Institutes co-sponsoring the NOFO, will:
Additionally, an agency program official or IC program director will be responsible for the normal scientific and programmatic stewardship of the award and will be named in the award notice.
The duties of the agency Program Official include:
Areas of Joint Responsibility include:
The Project Scientist and the PD(s)/PI(s) will hold regular program-wide discussions to facilitate the achievement of program goals.
The Project Scientist and the PD(s)/PI(s) will collaborate during the course of the award to revise and/or update project milestones as appropriate.
Dispute Resolution:
Any disagreements that may arise in scientific or programmatic matters (within the scope of the award) between recipients and NIH may be brought to Dispute Resolution. A Dispute Resolution Panel composed of three members will be convened: a designee of the Steering Committee chosen without NIH staff voting, one NIH designee, and a third designee with expertise in the relevant area who is chosen by the other two; in the case of individual disagreement, the first member may be chosen by the individual recipient. This special dispute resolution procedure does not alter the recipient's right to appeal an adverse action that is otherwise appealable in accordance with PHS regulation 42 CFR Part 50, Subpart D and HHS regulation 45 CFR Part 16.
We encourage inquiries concerning this funding opportunity and welcome the opportunity to answer questions from potential applicants.
eRA Service Desk (Questions regarding ASSIST, eRA Commons, application errors and warnings, documenting system problems that threaten submission by the due date, and post-submission issues)
Finding Help Online: https://www.era.nih.gov/need-help (preferred method of contact) Telephone: 301-402-7469 or 866-504-9552 (Toll Free)
General Grants Information (Questions regarding application instructions, application processes, and NIH grant resources) Email: [email protected] (preferred method of contact) Telephone: 301-480-7075
Grants.gov Customer Support (Questions regarding Grants.gov registration and Workspace) Contact Center Telephone: 800-518-4726 Email: [email protected]
Denise Russo, Ph.D. Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) Telephone: 301-435-6871 Email: [email protected]
Kathleen Ruth Borgmann National Institute on Drug Abuse (NIDA) Telephone: 301-594-6561 Email: [email protected]
Anais Stenson, PhD National Institute of Mental Health (NIMH) Telephone: 240-926-7572 Email: [email protected]
Hiroko Iida, DDS, MPH National Institute of Dental & Craniofacial Research (NIDCR) Telephone: 301-594-7404 Email: [email protected]
Tia Morton, RN, MS National Institute of Allergy and Infectious Diseases (NIAID) Telephone: 240-627-3073 Email: [email protected]
Joanna Kubler-Kielb, PhD Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) Telephone: 301-435-6916 Email: [email protected]
Rehana A. Chowdhury Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) Telephone: 301-979-0259 Email: [email protected]
Pamela G Fleming National Institute on Drug Abuse (NIDA) Telephone: 301-480-1159 Email: [email protected]
Rita Sisco National Institute of Mental Health (NIMH) Telephone: 301-443-2805 Email: [email protected]
Gabriel Hidalgo, MBA National Institute of Dental & Craniofacial Research (NIDCR) Telephone: 301-827-4630 Email: [email protected]
Ann Devine National Institute of Allergy and Infectious Diseases (NIAID) Telephone: 240-669-2988 Email: [email protected]
Recently issued trans-NIH policy notices may affect your application submission. A full list of policy notices published by NIH is provided in the NIH Guide for Grants and Contracts . All awards are subject to the terms and conditions, cost principles, and other considerations described in the NIH Grants Policy Statement .
Awards are made under the authorization of Sections 301 and 405 of the Public Health Service Act as amended (42 USC 241 and 284) and under Federal Regulations 42 CFR Part 52 and 2 CFR Part 200.
Hypothesis Testing allows researchers to evaluate the validity of their assumptions and draw conclusions based on evidence. It provides a framework for making predictions and determining whether observed results are statistically significant or just occurred by chance. By applying various statistical tests, researchers can measure the strength of the evidence and quantify the uncertainty associated with their findings.
Understanding the importance of hypothesis testing is essential for conducting rigorous and reliable research. It enables researchers to make well-informed decisions, support or challenge existing theories, and contribute to the advancement of knowledge in their respective fields. So, whether you are a scientist, a market analyst, or a student working on a research project, grasp the power of hypothesis testing and elevate the impact of your data analysis.
Hypothesis testing is the cornerstone of the scientific method and plays a vital role in the research process. It allows researchers to make informed decisions and draw reliable conclusions from their data. By formulating a hypothesis and then testing it against the observed data, researchers can determine whether their initial assumptions are supported or refuted. This systematic approach is crucial for advancing knowledge and understanding in various fields, from medicine and psychology to economics and engineering. Hypothesis testing enables researchers to move beyond mere observations and anecdotal evidence, and instead rely on statistical analysis to quantify the strength of their findings. It helps them differentiate between genuine effects and random fluctuations, ensuring that the conclusions drawn are based on rigorous and objective analysis.
Moreover, hypothesis testing is not limited to academic research; it is equally important in the business world, where data-driven decision-making is essential for success. Marketers, for instance, can use hypothesis testing to evaluate the effectiveness of their advertising campaigns, while financial analysts can use it to assess the performance of investment strategies. By incorporating hypothesis testing into their decision-making processes, organizations can make more informed choices and optimize their operations.
At the heart of hypothesis testing lies the distinction between the null hypothesis (H0) and the alternative hypothesis (H1). The null hypothesis represents the status quo or the assumption that there is no significant difference or relationship between the variables being studied. Conversely, the alternative hypothesis suggests that there is a meaningful difference or relationship that is worth investigating.
Researchers begin by formulating their null and alternative hypotheses based on their research question and existing knowledge. For example, in a study examining the effect of a new drug on blood pressure, the null hypothesis might be that the drug has no effect on blood pressure, while the alternative hypothesis would be that the drug does have an effect on blood pressure.
Hypothesis testing is a structured process that involves several key steps: formulating the null and alternative hypotheses, choosing a significance level (α), selecting an appropriate statistical test, computing the test statistic and its p-value from the observed data, and deciding whether to reject the null hypothesis.
While hypothesis testing is a powerful tool for data analysis, it is not immune to errors. Two common types of errors in hypothesis testing are Type I errors and Type II errors.
A Type I error occurs when the null hypothesis is true, but it is incorrectly rejected. The probability of making a Type I error is typically denoted by the significance level (α), which is the threshold used to determine statistical significance.
Conversely, a Type II error occurs when the null hypothesis is false, but it is not rejected. In this case, the researcher fails to detect a significant effect that is actually present. The probability of making a Type II error is denoted by β.
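One way to build intuition for Type I errors is a small simulation: if the null hypothesis is genuinely true, a test conducted at α = 0.05 should wrongly reject it in roughly 5% of repeated experiments. Below is a minimal sketch using only Python's standard library; the function name and parameter choices are illustrative.

```python
import random
from math import sqrt
from statistics import NormalDist, mean

def simulate_type_i_rate(alpha=0.05, n=30, trials=2000, seed=1):
    """Estimate how often a two-sided z-test wrongly rejects a TRUE null.

    Samples are drawn from N(0, 1), so H0 (population mean == 0) really
    holds; the rejection rate should therefore be close to alpha.
    """
    rng = random.Random(seed)
    rejections = 0
    for _ in range(trials):
        sample = [rng.gauss(0, 1) for _ in range(n)]
        # z statistic for a known population sd of 1
        z = mean(sample) / (1 / sqrt(n))
        p = 2 * (1 - NormalDist().cdf(abs(z)))
        if p < alpha:
            rejections += 1  # a Type I error: H0 is true but rejected
    return rejections / trials

rate = simulate_type_i_rate()
print(f"Empirical Type I error rate: {rate:.3f} (nominal alpha = 0.05)")
```

Raising the sample size in this simulation does not change the Type I error rate (it is fixed by α); what larger samples reduce is β, the Type II error rate.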
When a hypothesis test is conducted, the researcher is provided with a p-value, which represents the probability of obtaining the observed results if the null hypothesis is true. If the p-value is less than the chosen significance level (typically 0.05), the null hypothesis is rejected, and the alternative hypothesis is supported.
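As a concrete illustration of this decision rule, here is a minimal sketch of a two-sided one-sample z-test using only Python's standard library. The function name and the sample data are illustrative, and the population standard deviation is assumed known (in practice a t-test is used when it is not).

```python
from math import sqrt
from statistics import NormalDist, mean

def one_sample_z_test(sample, mu0, sigma, alpha=0.05):
    """Two-sided one-sample z-test of H0: population mean == mu0.

    Assumes the population standard deviation `sigma` is known.
    Returns the test statistic, the p-value, and whether H0 is rejected.
    """
    n = len(sample)
    z = (mean(sample) - mu0) / (sigma / sqrt(n))
    # p-value: probability, under H0, of a result at least this extreme
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p, p < alpha

# Example: do these measurements differ from a hypothesised mean of 100?
data = [102.1, 99.8, 103.5, 101.2, 100.9, 104.0, 102.7, 101.5]
z, p, reject = one_sample_z_test(data, mu0=100, sigma=2.0)
print(f"z = {z:.2f}, p = {p:.4f}, reject H0: {reject}")
```

Because the p-value here falls below the 0.05 significance level, the null hypothesis is rejected in favour of the alternative.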
Various tools are available for conducting hypothesis tests, including statistical programming languages such as R and Python (with libraries such as SciPy and statsmodels) and dedicated statistical packages such as SPSS, SAS, and Minitab.
Hypothesis testing is a crucial tool for researchers across many disciplines. It allows them to make informed decisions, support or challenge theories, and contribute to knowledge advancement. By understanding and mastering hypothesis testing techniques, researchers can significantly enhance their data analysis impact.
Qualitative data collection and analysis approaches, such as those employing interviews and focus groups, provide rich insights into customer attitudes, sentiment, and behavior. However, manually analyzing qualitative data requires extensive time and effort to identify relevant topics and thematic insights. This study proposes a novel approach to address this challenge by leveraging Retrieval Augmented Generation (RAG) based Large Language Models (LLMs) for analyzing interview transcripts. The novelty of this work lies in strategizing the research inquiry as one that is augmented by an LLM that serves as a novice research assistant. This research explores the mental model of LLMs to serve as novice qualitative research assistants for researchers in the talent management space. A RAG-based LLM approach is extended to enable topic modeling of semi-structured interview data, showcasing the versatility of these models beyond their traditional use in information retrieval and search. Our findings demonstrate that the LLM-augmented RAG approach can successfully extract topics of interest, with significant coverage compared to manually generated topics from the same dataset. This establishes the viability of employing LLMs as novice qualitative research assistants. Additionally, the study recommends that researchers leveraging such models lean heavily on quality criteria used in traditional qualitative research to ensure rigor and trustworthiness of their approach. Finally, the paper presents key recommendations for industry practitioners seeking to reconcile the use of LLMs with established qualitative research paradigms, providing a roadmap for the effective integration of these powerful, albeit novice, AI tools in the analysis of qualitative datasets within talent management research.
Talent management researchers frequently work backwards from their customers, the employees at the organization. Understanding employee sentiment and behavior often involves conducting deep-dive interviews that are explanatory in nature, e.g., demystifying the why behind customer choices, attitudes, or behaviors (e.g., (Leino and Räihä, 2007)). Talent management research, at its core, seeks to use science to equip every employee with resources to help them best navigate their careers (Zhao, 2023).
Consequently, qualitative research methodology plays a critical role in talent management. Many of the key considerations around employee engagement, motivation, and workforce culture involve subjective, context-dependent factors that are best explored through in-depth interviews, focus groups, and other qualitative data collection approaches. Talent management professionals often rely on rich qualitative datasets to gain deep insights into employee experiences, organizational dynamics, and the nuances of human capital. However, these qualitative paradigms can clash with the more positivist, quantitative worldview that underlies many of the analytic tools used to evaluate talent management data. Talent management researchers may find that standard statistical techniques and data visualization approaches struggle to fully capture the complexities inherent in qualitative datasets, leading to potential misinterpretations or oversimplifications of the human elements involved in managing an organization’s workforce. Navigating this tension between qualitative and quantitative approaches is an ongoing challenge for talent management professionals.
Large language models (LLMs) like BERT, GPT-3 and PaLM have demonstrated strong aptitude for summarization (e.g., (Yang et al . , 2023 ) ), classification (e.g., (Pelaez et al . , 2024 ) ), and information extraction (e.g., (Dunn et al . , 2022 ) ) for text-based data. Consequently, LLMs are also increasingly being leveraged within talent management contexts for tasks such as interview analysis. However, language models are themselves designed primarily from a quantitative, data-driven paradigm. These models are trained on vast troves of text data using statistical machine learning techniques optimized for numerical patterns and correlations. While powerful at extracting insights from large-scale datasets, LLMs can often struggle to fully capture the nuanced, contextual nature of language (Bender et al . , 2021 ) , (Dwivedi et al . , 2023 ) that is critical for qualitative information sourced from interviews, focus groups, and other qualitative research methods common in talent management.
Talent management professionals must therefore continuously navigate a tension between the quantitative orientation of their analytical tools and the qualitative richness of the human dynamics they seek to understand. Bridging this gap requires innovative approaches that combine the opportunity for scale and speed offered by LLM-powered analysis augmented by borrowing evaluative nuances of traditional qualitative techniques. Talent leaders, thus, must carefully select and configure their AI-powered tools to ensure the voices and experiences of employees are authentically represented, rather than reduced to oversimplified metrics. Mastering this balance is an ongoing challenge, but one that is critical for talent management to yield truly holistic and impactful insights.
This paper presents results from leveraging LLMs as a novice qualitative researcher to augment qualitative research workstreams, specifically for data generated through semi-structured interviews.
The purpose of this paper is two-fold – 1) provide an overview of a successful implementation of a Retrieval Augmented Generation-based model for analyzing semi-structured interviews, and more importantly, 2) enumerate pragmatic take-aways and learnings drawing from traditional qualitative research to help fellow industry practitioners in reconciling the methodological paradigms. We posit the second purpose to be valuable to the larger discussion within talent management research communities on how and where to integrate AI capabilities across different talent management workstreams.
Quantitative and qualitative research represent two fundamental paradigms or philosophical frameworks that guide research strategies, methods, analysis, and use of results (Yilmaz, 2013 ) . While both methodological approaches seek to rigorously study research problems, they are based on distinct assumptions and procedures adapted to investigating particular types of questions and drawing different conclusions. Quantitative research is based on the assumptions of positivism, the philosophical tradition premised on the application of natural science methods to the study of social reality and beyond (Bryman, 2016 ) . Quantitative researchers believe that objective facts and truths about human behavior and society can be measured and quantified numerically. Quantitative methods such as surveys, structured observations, and experiments aim to test hypotheses derived from theories by examining relationships between precisely measured variables statistically analyzed using large sample sizes (Creswell and Creswell, 2017 ) . These methods seek to minimize subjectivity and generalize findings to a population. In contrast, qualitative research aligns with interpretivist and constructivist philosophical traditions by embracing subjectivity and focused meaning-making by and with research participants (Denzin et al . , 2023 ) .
Qualitative researchers often use an inductive approach aimed at discovering and understanding processes, experiences, and worldviews by collecting non-numerical data through methods like in-depth interviews, ethnographic fieldwork, and document analysis. Findings derive from themes that emerge openly from the data rather than testing predetermined hypotheses. Samples tend to be small and purposely selected to illuminate a phenomenon in depth and detail. The aim is particularization rather than generalization, with a priority on ecological validity and multiple realities situated in time, place, culture, and context.
While debates once positioned these paradigms in opposition, contemporary mixed methods research leverages the complementary strengths of quantitative and qualitative approaches (Halcomb and Hickman, 2015 ) . Mixed methods investigations integrate quantitative and qualitative data collection and analysis within a single program of inquiry by combining these approaches in creative ways to deepen understanding (Creamer, 2017 ) (Creamer, 2018 ) (Greene, 2008 ) . This reconciliation of methodological perspectives offers opportunities to generate more robust, contextualized insights to address complex research problems. The use of large language models (LLMs) as novice qualitative research assistants, as explored in this paper, can be considered an exercise in mixed methods research design.
Prior to LLMs, Natural Language Processing-based models of qualitative data from social science contexts were also used as sources of "novice insight," augmented by the more expert contextualization provided by human researchers (e.g., (Bhaduri, 2018), (Bhaduri et al., 2021)). Popular traditional topic modeling techniques (e.g., Latent Dirichlet Allocation), however, suffer from several limitations (e.g., the need to specify the number of clusters in advance) compared with existing deep learning-based methods. They also often fail to capture the contextual nuances and ambiguities inherent in natural language, as they rely heavily on predefined rules and patterns (Devlin, 2018) (Radford et al., 2019). This can make it challenging to handle the complexities and variations present in real-world text data, and may require domain-specific knowledge or fine-tuning to achieve acceptable performance (Lee and Hsiang, 2019). Recent advancements in LLMs, such as BERT and GPT, have largely overcome these limitations by leveraging deep neural networks to learn rich, contextual representations from large amounts of text data (Vaswani et al., 2017) (Devlin, 2018). These powerful models can capture subtle semantic and pragmatic features of language, and demonstrate strong generalization capabilities through transfer learning (Brown, 2020) (Radford et al., 2019).
Further, in traditional qualitative research, thematic analysis is the process of gathering themes across topics from qualitative data, such as interview data, by iteratively analyzing the dataset for topics of interest (Creamer, 2017). Inductive coding and deductive coding are two approaches to analyzing data from semi-structured interviews. Inductive coding starts with the raw data and gradually develops codes and categories based on patterns and topics that emerge as the researcher manually interacts with it (Patton, 2014) (Strauss and Corbin, 1998). This approach is bottom-up: the data drives the development of codes and theories (Glaser, 1965). Deductive coding, on the other hand, starts with preconceived codes or theories and applies them to the data (Pearse, 2019). This approach is top-down: existing theories or frameworks guide the coding process (Maxwell, 2018). Researchers in industry typically work backwards from the research question of interest. Most research questions driving qualitative data collection in industry are also explanatory (i.e., they tend to explain quantitative findings such as low customer satisfaction or low product adoption numbers) rather than exploratory (i.e., an ethnography of a community of interest or a phenomenon), and as a result deductive approaches are often more popular than inductive coding.
Ultimately, by augmenting traditional deep-dive qualitative analysis with the time and resource efficient pattern recognition and text processing capabilities of LLMs, researchers can integrate quantitative and qualitative techniques to enhance the speed, depth, and rigor of their investigations. This mental model of a novice-LLM approach holds promise for bridging the divide between positivist and interpretive paradigms, ultimately working towards a more comprehensive understanding of the phenomenon under study.
We used an open-source dataset (Paskevicius, 2018) to demonstrate how an LLM prompted as a novice researcher can enhance traditional qualitative deductive thematic coding. This dataset was originally collected to explore educators' experiences implementing open educational practices (Paskevicius, 2018). It contains eight transcripts, each from an hour-long interview conducted with educators to understand how they use openly accessible sources of knowledge and open-source tools. The original research involved a deep-dive qualitative analysis using a phenomenological approach to extract topics manually from the dataset. We chose this open-source dataset for two reasons: 1) its structural match to our proprietary dataset, and 2) its rich description and manually identified topics by an expert, which serve as a gold standard for measuring the efficacy of our LLM-based approach. Semi-structured interviews provide critical insights through participant perspectives, making them foundational in various industry settings.
The semi-structured approach used to create this dataset is a close match to proprietary talent management data from our organization, where employees are interviewed on a particular phenomenon to get deeper understanding of their related sentiment, attitudes, and behaviors. Manually extracted topics serve as gold standard for benchmarking findings from our LLM-based approach. The paper (Paskevicius, 2018 ) describing the dataset explains the manual process establishing how each transcript was read twice: first, for a comprehensive analysis, and subsequently, to initiate a thematic exploration. Additional reviewing continued as codes and topics emerged and intersected among the interviews. A manual qualitative coding approach was applied at each iteration to reveal themes, following constant comparison methodology (Glaser, 1965 ) .
We posit that our approach, as demonstrated on this sample semi-structured interview dataset, can easily extend to multiple industry settings in talent management research where researchers conduct interviews and focus groups.
In traditional, manual qualitative research, the deductive thematic analysis process begins with the researcher formulating the research questions. Then, upon collection of the data, such as interview transcripts, the researcher iterates manually through the transcripts to identify and extract themes or topics of interest. This labor-intensive process involves carefully reading through the data, taking notes, and organizing the topics iteratively into broader coherent themes that address the research questions. The researcher may go through multiple rounds of coding and analysis to refine the themes and ensure they comprehensively capture the key insights from the data. We find that LLMs can quickly uncover topics of interest from the dataset, which can then be iterated upon to garner broader themes across topics. Thus, for our novice-LLM-led approach, we leveraged the power of Large Language Models (LLMs) as a novice research assistant in the thematic analysis process. Specifically, we used the open-source framework Langchain to create dynamic prompt templates, such as few-shot prompts and chains of thought, that guided the LLM in performing topic modeling and generating insights from the interview transcripts. We then used Anthropic's Claude 2 model to execute these prompts and extract the relevant themes.
To initiate the analysis, we first selected a main research question and corresponding sub-questions from our dataset (Paskevicius, 2018 ) . We then fed these research questions, along with the interview transcripts, into the LLM-powered Langchain framework. The model was able to quickly identify and summarize the key topics, and iteratively, themes emerging from the data. This approach provided a quick yet relatively comprehensive analysis that would have taken a human researcher significant time and effort to reproduce manually.
In our LLM-based approaches, we experiment with four methods: zero-shot prompting, few-shot prompting, chain-of-thought reasoning, and Retrieval-Augmented Generation (RAG)-based question answering. In zero-shot prompting, we provide a single prompt to the model. In few-shot prompting, we provide a set of topics and anecdotes to the model as examples. In the chain-of-thought (CoT) approach, we provide a set of instructions for the model to follow. Finally, for RAG, we provide context and questions to the model, from which it extracts information.
Zero-shot prompts are simple instructions or tasks given to an LLM that has not been specifically trained on that task. Zero-shot prompting serves as a baseline because it demonstrates the model's fundamental ability to understand and respond to prompts based solely on its pre-training (Kong et al., 2023). In few-shot prompting, a small set of examples illustrating the desired outcome is manually selected and provided to the LLM. These examples allow the model to understand the task at hand and generate similar results (Brown, 2020). Chain-of-thought prompting provides a set of intermediate steps that guide the LLM to mimic human-like reasoning. This significantly improves the capability of the LLM to handle complex reasoning and generate better topics (Wei et al., [n. d.]). Retrieval-augmented generation (RAG) combines the capabilities of an LLM with a retrieval system to source and integrate additional information into its responses (Lewis et al., 2020). This provides contextually richer and ultimately more accurate outputs. We do this by providing all the interview transcripts to the LLM as a custom knowledge base. Two considerations helped the RAG approach outperform the other approaches:
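The first three strategies can be sketched as plain prompt templates. These are hypothetical templates written for illustration; the exact prompts used in the study are not reproduced here.

```python
# Hypothetical prompt templates illustrating the zero-shot, few-shot, and
# chain-of-thought strategies described above. The wording is made up for
# this sketch, not taken from the study's actual prompts.

def zero_shot_prompt(transcript: str) -> str:
    # A single instruction with no examples.
    return f"Identify the main topics discussed in this interview excerpt:\n\n{transcript}"

def few_shot_prompt(transcript: str, examples: list) -> str:
    # A handful of (excerpt, topic) pairs shown before the real task.
    shots = "\n\n".join(f"Excerpt: {e}\nTopic: {t}" for e, t in examples)
    return f"{shots}\n\nExcerpt: {transcript}\nTopic:"

def chain_of_thought_prompt(transcript: str) -> str:
    # Intermediate reasoning steps the model is asked to follow.
    steps = (
        "1. Read the excerpt and note recurring ideas.\n"
        "2. Group related ideas into candidate topics.\n"
        "3. Name each topic and quote a supporting anecdote."
    )
    return f"Follow these steps to extract topics:\n{steps}\n\nExcerpt:\n{transcript}"

prompt = few_shot_prompt(
    "We co-created course materials with students.",
    [("Students commented on each other's blogs.", "Peer collaboration")],
)
```

Any of these strings would then be passed to the model (in our case, via a Langchain prompt template executed against Claude 2).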
In our approach, the LLM searches the knowledge base to find and retrieve the parts of documents that are most relevant to the question in the query. This narrows the focus to the most relevant information and ensures attention to critical topics and nuances.
Using all transcripts as input in a single instance creates an information-overload scenario, ultimately leading to the dilution of important topics or nuances. If the dataset is too large or complex, the LLM might lose track of what is most relevant to a specific query, leading to hallucinations. Hallucinations, or inaccuracies, in this context refer to instances where the model generates information that is not grounded in the input data. In our approach, the use of RAG mitigates some of these hallucinations by anchoring LLM responses in relevant information and providing a form of contextual validation for the output.
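The retrieval step can be sketched as follows. A production RAG setup would embed chunks with a vector model and rank by embedding similarity; here, simple lexical overlap stands in for that similarity so the example stays self-contained.

```python
# Minimal sketch of the retrieval step in a RAG pipeline. Lexical word
# overlap is used as a stand-in for embedding similarity; the transcript
# text below is invented for illustration.

def chunk(text: str, size: int = 40) -> list:
    # Split a transcript into fixed-size word windows.
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(query: str, chunks: list, k: int = 2) -> list:
    # Rank chunks by how many query words they share, keep the top k.
    q = set(query.lower().split())
    scored = sorted(chunks, key=lambda c: len(q & set(c.lower().split())), reverse=True)
    return scored[:k]

transcripts = (
    "Students connected with people outside the course through blogs. "
    "The institution provided equipment for the science project. "
    "Open licensing let instructors co-create and share course material."
)
chunks = chunk(transcripts, size=10)
top = retrieve("How did students collaborate outside the course", chunks, k=1)
# Only the retrieved chunk, not the whole corpus, is passed to the LLM,
# which narrows its focus and reduces the risk of hallucination.
```

Passing only `top` (plus the question) to the model is what distinguishes RAG from the single-prompt approaches, where the entire corpus competes for the model's attention.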
Table 1. Precision, Recall, and F1-score of each prompting technique, by embedding model.

DistilBERT-base-uncased | Precision | Recall | F1-Score
Chain of Thought | 67% | 62% | 64%
Few Shot | 72% | 67% | 70%
Zero Shot | 68% | 66% | 67%
RAG | 79% | 80% | 79%

BERT-base-uncased | Precision | Recall | F1-Score
Chain of Thought | 56% | 48% | 52%
Few Shot | 64% | 56% | 60%
Zero Shot | 59% | 55% | 57%
RAG | 70% | 70% | 70%

RoBERTa-large | Precision | Recall | F1-Score
Chain of Thought | 89% | 85% | 87%
Few Shot | 90% | 87% | 88%
Zero Shot | 89% | 86% | 88%
RAG | 92% | 91% | 91%
In the paper describing the dataset leveraged for this work, the authors collected the data and conducted a manual analysis (Paskevicius, 2018). Their research led to the identification of significant, recurring topics within the interviews. Our evaluation strategy uses these manually generated topics as the gold standard against which to compare the topics generated by our LLM-based approach. We use Precision (Equation 1), Recall (Equation 2), and F1-score (Equation 3) to benchmark topics generated by our LLM-augmented qualitative research approach against the topics generated by the human researcher.
\[
\text{Precision} = \frac{1}{|\hat{x}|} \sum_{\hat{x}_j \in \hat{x}} \max_{x_i \in x} \cos(x_i, \hat{x}_j) \tag{1}
\]
\[
\text{Recall} = \frac{1}{|x|} \sum_{x_i \in x} \max_{\hat{x}_j \in \hat{x}} \cos(x_i, \hat{x}_j) \tag{2}
\]
\[
F_1 = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}} \tag{3}
\]
\[
\cos(u, v) = \frac{u \cdot v}{\lVert u \rVert \, \lVert v \rVert} \tag{4}
\]
where \(x\) denotes the embedded words of the reference (human-generated) text and \(\hat{x}\) those of the predicted (LLM-generated) text.
These metrics are the current evaluation standard for classification models, but they can be adapted for text generation tasks (Zhang et al., 2019). In the context of our experiment, each word in the reference text is matched to its most similar word in the predicted text to compute recall; this process is inverted to compute precision. The precision and recall values are then combined into an F1-score. These metrics use cosine similarity (Equation 4), in which each word is paired with the closest corresponding word in the other text, so as to maximize the similarity score.
In Table 1, the performance of four LLM prompting techniques (Chain of Thought, Few Shot, Zero Shot, and RAG) is compared across different embedding models (DistilBERT-base-uncased, BERT-base-uncased, and RoBERTa-large). This comparison aims to evaluate the robustness and effectiveness of these prompting techniques. Our results indicate that while each prompting technique shows varying levels of precision, recall, and F1-score, RAG consistently outperforms the others on all three metrics, achieving the highest performance across all models.
Table 2. Example outputs from the LDA and LLM approaches.

Example: Keywords from LDA Topic One
Students, Course, Develop, People, Institution, Project, Science, Discipline, Material, Start

Example: Output from LLM approach
Collaboration: Co-creating resources and connecting with others
Corresponding Anecdote: "You can also in your teaching have students connect with people outside the course in various ways. Like, maybe some people outside the course are commenting on blogs and students are getting in a conversation around that."
Treating large language models (LLMs) as novice research assistants during thematic analysis offered valuable insights for our research. By framing the LLM as a novice collaborator with little knowledge or insight into the context, prompts can be crafted to better guide the model and leverage its capabilities. Used prudently, similar novice-LLM-augmented approaches can significantly increase time and resource efficiency compared to traditional qualitative coding methods in talent management research. The following sections explore some of our key learnings, which may benefit other researchers considering deploying LLMs as novice researchers to optimize thematic analysis.
A novice is a person who "has no experience with the situations in which they are expected to perform tasks" (Benner, 1982). The novice is thus at a basic proficiency level of skill acquisition, with limited information and prior experience related to the task at hand (Montfort et al., 2013). For large qualitative datasets analyzed using LLMs, we propose that a novice-led approach to analysis is a good fit. In our approach, the human behaves as an expert prompting the novice LLM to provide insights related to topics of interest. We found this framework to be a helpful mental model for grounding the primary researcher prompting the LLM as they iteratively uncover insights from the dataset.
LLMs have advanced the field of natural language processing with their ability to understand and generate responses that closely mimic human language (Shanahan, 2024). The strengths of LLMs extend beyond metrics: these models are adept at processing vast amounts of text rapidly, demonstrating a level of topic modeling that can mimic human analysis. Manual topic modeling is labor-intensive and time-inefficient (Clarke and Braun, 2017). LLMs also enhance efficiency by streamlining the processing of large datasets, allowing topics to be extracted from qualitative data more quickly. Refinements of these models using techniques like few-shot and zero-shot learning further reduce the need for expensive data labeling and annotation. In a nutshell, LLMs boost speed, reduce human effort, scale to massive datasets, and lower labeling costs. However, human expertise is still essential for judgment, validation, and end-to-end framework design.
Using a RAG approach for LLM-augmented qualitative research on semi-structured interviews shows great promise compared to natural language processing methods like Latent Dirichlet Allocation (LDA). Currently, there are no widely accepted methods for comparing the two approaches, as there is no bridge between keywords and themes, except from a human evaluator's ease-of-interpretability standpoint. We performed topic modeling analysis on the same dataset with the broader aim of finding themes. Manually comparing both approaches, each researcher of this workstream independently found that every approach using an LLM yielded much greater context and, consequently, better interpretability than the traditional LDA approach. This is likely because, with LDA, the model outputs a list of words and a probability for each topic. From these words, the researcher must then manually define the topic. While this approach increases researcher flexibility, it remains time- and resource-consuming. In contrast, with the LLM approach, the output is richer in context about what particular topics mean. For example, our LDA model yielded 5 topics (see Appendix A, Figure 3). The first 10 words for topic 1 can also be seen in Table 2. Putting these words together into a comprehensive theme can be challenging without more context. However, an LLM is able to generate context grounded in the participant's voice for researchers to work with. An example of an extracted theme and its corresponding anecdote using an LLM can also be seen in Table 2, above.
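The interpretability gap between the two output formats can be made concrete with a small sketch. The probabilities and structure below are illustrative only, not values from our fitted models.

```python
# Illustrative contrast between the two output formats compared above.
# The probabilities are invented for the sketch.

# LDA hands the researcher a ranked (word, probability) list per topic...
lda_topic_one = [
    ("students", 0.041), ("course", 0.038), ("develop", 0.022),
    ("people", 0.019), ("institution", 0.017),
]
lda_label = None  # ...and a human must still supply the topic label.

# The LLM instead returns a named theme already grounded in an anecdote.
llm_theme = {
    "theme": "Collaboration: Co-creating resources and connecting with others",
    "anecdote": ("You can also in your teaching have students connect "
                 "with people outside the course in various ways."),
}
```

The manual labeling step (`lda_label`) is precisely the time- and resource-consuming work that the LLM output removes.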
Traditional qualitative research is evaluated based on several criteria that ensure the quality and rigor of the research, in terms of both methods and findings. Prior research has established four criteria for increasing the rigor and trustworthiness of qualitative research studies: credibility, dependability, confirmability, and transferability (Lincoln and Guba, 1988). We recommend three ways in which quality criteria from traditional qualitative research can be used by practitioners employing LLM-augmented analysis of qualitative data.
Member checks, i.e., the strategy of soliciting insights from research participants on research findings, are often relied on as the gold standard for increasing the trustworthiness of qualitative research approaches (e.g., Patton, 2014; Kornbluh, 2015). Qualitative researchers employing LLMs can work on deepening their understanding of the research context using appropriate data-collection methods and tools that work best for particular contexts, as well as conduct adequate member checking to ensure the accuracy of findings.
Qualitative researchers are advised to acknowledge and address their own biases, thus recognizing the influence of their own experiences and opinions on the research process (Finlay, 2002). Similar exercises in reflexivity can also be helpful for researchers augmenting qualitative data analysis with LLMs. Researcher reflexivity in such instances can extend to querying the LLM for the rationale behind why certain topics were extracted, grounding topics in anecdotes from the transcripts, and recognizing the influence the human researcher's prior knowledge and biases will have on the prompts used. Future work extending LLMs for qualitative research should continue to draw on evaluation criteria grounded in the traditional qualitative research paradigm.
Qualitative researchers are advised to thoroughly document all decisions that guide their analysis process by providing thick descriptions, allowing for increased transparency. This practice enhances the reliability and reproducibility of the research (Lincoln and Guba, 1988). Qualitative researchers employing LLMs should similarly strategize to maximize transparency through mechanisms such as documenting changes in workflow, sharing prompts, and detailing model preferences.
The approach outlined in this paper offers a promising avenue for industry-based talent management practitioners seeking to increase the time and resource efficiency of qualitative interview data analysis. By leveraging large language models (LLMs) as novice qualitative research assistants, organizations can potentially accelerate the coding, categorization, and thematic synthesis of rich interview data - a critical bottleneck in many talent management research initiatives.
However, as the field of LLM-assisted qualitative research matures, it will be essential to not only benchmark model performance against traditional quantitative evaluation metrics, but also consider quality criteria more prominent within the qualitative research paradigm. Factors such as credibility, transferability, dependability, and confirmability will need to be carefully evaluated as LLMs are integrated into qualitative workflows. Furthermore, the ethical use of AI assistants in sensitive domains like talent management will require close, multi-disciplinary attention to issues at the intersection of data privacy, algorithmic bias, and model transparency, for which researchers will have to be trained (Mackenzie et al . , 2024 ) .
Future research should seek to establish guidelines and best practices for LLM-augmented qualitative analysis that uphold the rigor and trustworthiness expected within the qualitative research community. Only by doing so can talent management scholars and practitioners unlock the full potential of these powerful language models, while respecting the epistemological foundations of qualitative inquiry. As the field evolves, we believe that a judicious, ethically-grounded approach to LLM integration can yield substantial gains in research efficiency and organizational impact.
Traditional topic modeling approaches such as Latent Dirichlet Allocation (LDA) often present the most representative words for each generated topic. For instance, for Topic 1, words such as "students", "develop", "institution", and "science" were found important. Interpreting the underlying thematic meaning of these word lists can be challenging without additional contextual information about how those words were used within the original corpus. In contrast, large language models (LLMs) have demonstrated the capability to synthesize semantically related words and phrases into more coherent topical representations. This ability of LLMs to generate primitive yet formative contextual information, threading together words and phrases of interest, provides researchers with a more insightful starting point for further analysis and interpretation of the latent topics uncovered through the LDA process.