
FUTURE RESEARCH

Types of future research suggestion.

The Future Research section of your dissertation is often combined with the Research Limitations section of your final, Conclusions chapter. This is because your future research suggestions generally arise out of the research limitations you have identified in your own dissertation. In this article, we discuss six types of future research suggestion. These include: (1) building on a particular finding in your research; (2) addressing a flaw in your research; examining (or testing) a theory (framework or model) either (3) for the first time or (4) in a new context, location and/or culture; (5) re-evaluating and (6) expanding a theory (framework or model). The goal of the article is to help you think about the potential types of future research suggestion that you may want to include in your dissertation.

Before we discuss each of these types of future research suggestion, we should explain why we use the word "examining" and then put "or testing" in brackets. This is simply because the word "examining" may be considered more appropriate when students use a qualitative research design, whereas the word "testing" fits better with dissertations drawing on a quantitative research design. We also put the words "framework or model" in brackets after the word "theory". We do this because a theory, a framework and a model are not the same thing. In the sections that follow, we discuss six types of future research suggestion:

  • Addressing research limitations in your dissertation
  • Building on a particular finding or aspect of your research
  • Examining a conceptual framework (or testing a theoretical model) for the first time
  • Examining a conceptual framework (or testing a theoretical model) in a new context, location and/or culture
  • Expanding a conceptual framework (or theoretical model)
  • Re-evaluating a conceptual framework (or theoretical model)

In the Research Limitations section of your Conclusions chapter, you will have inevitably detailed the potential flaws (i.e., research limitations) of your dissertation. These may include:

  • An inability to answer your research questions
  • Theoretical and conceptual problems
  • Limitations of your research strategy
  • Problems of research quality

Identifying what these research limitations were and proposing future research suggestions that address them is arguably the easiest and quickest way to complete the Future Research section of your Conclusions chapter.

Often, the findings from your dissertation research will highlight a number of new avenues that could be explored in future studies. These can be grouped into two categories: unanticipated findings and unanswered aspects of your research questions.

Your dissertation will inevitably lead to findings that you did not anticipate from the start. These are useful when making future research suggestions because they can lead to entirely new avenues to explore in future studies. If this was the case, it is worth (a) briefly describing what these unanticipated findings were and (b) suggesting a research strategy that could be used to explore such findings in future.

Sometimes, dissertations manage to address all aspects of the research questions that were set. However, this is seldom the case. Typically, there will be aspects of your research questions that could not be answered. This is not necessarily a flaw in your research strategy, but may simply reflect the fact that the findings did not provide all the answers you hoped for. If this was the case, it is worth (a) briefly describing what aspects of your research questions were not answered and (b) suggesting a research strategy that could be used to explore such aspects in future.

You may want to recommend that future research examines the conceptual framework (or tests the theoretical model) that you developed. This is based on the assumption that the primary goal of your dissertation was to set out a conceptual framework (or build a theoretical model). It is also based on the assumption that whilst such a conceptual framework (or theoretical model) was presented, your dissertation did not attempt to examine (or test) it in the field. The focus of your dissertation was most likely a review of the literature rather than something that involved you conducting primary research.

Whilst it is quite rare for dissertations at the undergraduate and master's level to be primarily theoretical in nature like this, it is not unknown. If this was the case, you should think about how the conceptual framework (or theoretical model) that you have presented could best be examined (or tested) in the field. In understanding the how, you should think about two factors in particular:

  • What is the context, location and/or culture that would best lend itself to my conceptual framework (or theoretical model) if it were to be examined (or tested) in the field?
  • What research strategy is most appropriate to examine my conceptual framework (or test my theoretical model)?

If the future research suggestion that you want to make is based on examining your conceptual framework (or testing your theoretical model) in the field, you need to suggest the best scenario for doing so.

More often than not, you will not only have set out a conceptual framework (or theoretical model), as described in the previous section, but you will also have examined (or tested) it in the field. When you do this, focus is typically placed on a specific context, location and/or culture.

If this is the case, the obvious future research suggestion that you could propose would be to examine your conceptual framework (or test the theoretical model) in a new context, location and/or culture. For example, perhaps you focused on consumers (rather than businesses), or Canada (rather than the United Kingdom), or a more individualistic culture like the United States (rather than a more collectivist culture like China).

When you propose a new context, location and/or culture as your future research suggestion, make sure you justify the choice that you make. For example, there may be little value in future studies looking at different cultures if culture is not an important component underlying your conceptual framework (or theoretical model). If you are not sure whether a new context, location or culture is more appropriate, or which one you should select, a review of the literature will often help clarify where your focus should be.

Expanding a conceptual framework (or theoretical model)

Assuming that you have set out a conceptual framework (or theoretical model) and examined (or tested) it in the field, another series of future research suggestions comes out of expanding that conceptual framework (or theoretical model).

We talk about a series of future research suggestions because there are so many ways that you can expand on your conceptual framework (or theoretical model). For example, you can do this by:

  • Examining constructs (or variables) that were included in your conceptual framework (or theoretical model) but were not the focus of your study.
  • Looking further at a particular relationship within your conceptual framework (or theoretical model).
  • Adding new constructs (or variables) to the conceptual framework (or theoretical model) you set out (if justified by the literature).

It would be possible to include one or a number of these as future research suggestions. Again, make sure that any suggestions you make are justified, either by your findings or the literature.

With the dissertation process at the undergraduate and master's level lasting between 3 and 9 months, a lot can happen in that time. For example, a specific event (e.g., 9/11, the economic crisis) may occur, or new theory or evidence may emerge that undermines (or calls into question) the literature and assumptions underpinning your conceptual framework (or theoretical model). Clearly, there is little you can do about this. However, if this happens, reflecting on it and re-evaluating your conceptual framework (or theoretical model), as well as your findings, is an obvious source of future research suggestions.

Surviving Grad School (and hopefully thriving)

How to Write: Future Work/Conclusions

This post is the final part of a series on how to write a paper. The first was on abstracts, the second on introductions, the third on related work and the fourth on methodology and analysis of results.

I’m combining future work and conclusions into a single post since they are often found combined in a single section in a paper. While a conclusion is always necessary, people sometimes leave out future work. I don’t think a dedicated future work section is always necessary, but I would argue that it’s always worthwhile to include at least some mention of future work.

Let’s start with Future Work.

The future work section is a place for you to explain to your readers where you think the results can lead you. What do you think are the next steps to take? What other questions do your results raise? Do you think certain paths seem to be more promising than others?

Another way to look at the future work section is as a way to sort of “claim” an area of research. This is not to say that others can’t research the same things, but if your paper gets published, it’s out there that you had the idea. This lets people know what you’re thinking of doing next, and they may ask to collaborate if your future research area crosses over with theirs.

If you do include a future work section, it should be pretty short. The goal should not be to go into a bunch of details, but instead to give just a sentence or two explaining each idea. It should provide just enough information to suggest a possible research path and why that path may be important. Motivation is always key in research. I stressed earlier that you need to motivate your research. This also applies to future work. If you can’t motivate a good reason to continue research down some path, then why should/would you?

Conclusions

Conclusions are the last section people read in your paper, and therefore it’s what they leave remembering. You need to make sure they walk away thinking about your paper just the way you want them to.

Your conclusion needs to do three main things:

  • Recap what you did. In about one paragraph recap what your research question was and how you tackled it.
  • Highlight the big accomplishments. Spend another paragraph explaining the highlights of your results. These are the main results you want the reader to remember after they put down the paper, so ignore any small details.
  • Conclude. Finally, finish off with a sentence or two that wraps up your paper. I find this can often be the hardest part to write. You want the paper to feel finished after they read these. One way to do this is to try and tie your research to the “real world.” Can you somehow relate how your research is important outside of academia? Or, if your results leave you with a big question, finish with that. Put it out there for the reader to think about too.
  • Optional: If you don’t have a future work section, put in a paragraph before you conclude detailing the questions you think arise from the work and where you think researchers need to be looking next.

Things not to do in your conclusion:

  • Introduce new information. The conclusion is for wrapping up everything you’ve done. It’s not a place to say “oh yeah, and we also got result y.” All results should be first presented and detailed in the results section. Think of the conclusion as a place to reflect on what you’ve already said earlier in the paper.
  • Directly re-quote anything you’ve already written. I’ve seen conclusions that are almost identical to the abstract or a collection of sentences from throughout the paper. As a reader, it makes me think the author was lazy and couldn’t be bothered to actually summarize their results for the paper. Take the time to write a proper conclusion so that the reader walks away with good thoughts about your work.
  • Write a conclusion longer than your introduction. A conclusion should be short and to the point. You’ll rarely see them over three paragraphs, and three is often long; a lot of the time they are only one or two. Think about a conclusion as a chance to see how concisely you can summarize your entire research project. It’s your “30 second” research spiel.


How to Write the Discussion Section of a Research Paper

The discussion section of a research paper analyzes and interprets the findings, provides context, compares them with previous studies, identifies limitations, and suggests future research directions.


Structure your discussion section right, and you’ll be cited more often while doing a greater service to the scientific community. So, what actually goes into the discussion section? And how do you write it?

The discussion section of your research paper is where you let the reader know how your study is positioned in the literature, what to take away from your paper, and how your work helps them. It can also include your conclusions and suggestions for future studies.

First, we’ll define all the parts of your discussion paper, and then look into how to write a strong, effective discussion section for your paper or manuscript.

Discussion section: what it is and what it does

The discussion section comes later in your paper, following the introduction, methods, and results. The discussion sets up your study’s conclusions. Its main goals are to present, interpret, and provide a context for your results.

What is it?

The discussion section provides an analysis and interpretation of the findings, compares them with previous studies, identifies limitations, and suggests future directions for research.

This section combines information from the preceding parts of your paper into a coherent story. By this point, the reader already knows why you did your study (introduction), how you did it (methods), and what happened (results). In the discussion, you’ll help the reader connect the ideas from these sections.

Why is it necessary?

The discussion provides context and interpretations for the results. It also answers the questions posed in the introduction. While the results section describes your findings, the discussion explains what they mean. This is also where you can describe the impact or implications of your research.

Adds context for your results

Most research studies aim to answer a question, replicate a finding, or address limitations in the literature. These goals are first described in the introduction. However, in the discussion section, the author can refer back to them to explain how the study's objective was achieved. 

Shows what your results actually mean and real-world implications

The discussion can also describe the effect of your findings on research or practice. How are your results significant for readers, other researchers, or policymakers?

What to include in your discussion (in the correct order)

A complete and effective discussion section should at least touch on the points described below.

Summary of key findings

The discussion should begin with a brief factual summary of the results. Concisely overview the main results you obtained.

Begin with key findings with supporting evidence

Your results section described a list of findings, but what message do they send when you look at them all together?

Your findings were detailed in the results section, so there’s no need to repeat them here, but do provide at least a few highlights. This will help refresh the reader’s memory and help them focus on the big picture.

Read the first paragraph of the discussion section in this article (PDF) for an example of how to start this part of your paper. Notice how the authors break down their results and follow each description sentence with an explanation of why each finding is relevant. 

State clearly and concisely

Following a clear and direct writing style is especially important in the discussion section. After all, this is where you will make some of the most impactful points in your paper. While the results section often contains technical vocabulary, such as statistical terms, the discussion section lets you describe your findings more clearly. 

Interpretation of results

Once you’ve given your reader an overview of your results, you need to interpret those results. In other words, what do your results mean? Discuss the findings’ implications and significance in relation to your research question or hypothesis.

Analyze and interpret your findings

Look into your findings and explore what’s behind them or what may have caused them. If your introduction cited theories or studies that could explain your findings, use these sources as a basis to discuss your results.

For example, look at the second paragraph in the discussion section of this article on waggling honey bees. Here, the authors explore their results based on information from the literature.

Unexpected or contradictory results

Sometimes, your findings are not what you expect. Here’s where you describe this and try to find a reason for it. Could it be because of the method you used? Does it have something to do with the variables analyzed? Comparing your methods with those of other similar studies can help with this task.

Context and comparison with previous work

Refer to related studies to place your research within the larger context of the literature. Compare and contrast your findings with existing literature, highlighting similarities, differences, and/or contradictions.

How your work compares or contrasts with previous work

Studies with similar findings to yours can be cited to show the strength of your findings. Information from these studies can also be used to help explain your results. Differences between your findings and others in the literature can also be discussed here. 

How to divide this section into subsections

If you have more than one objective in your study or many key findings, you can dedicate a separate section to each of these. Here’s an example of this approach. You can see that the discussion section is divided into topics and even has a separate heading for each of them. 

Limitations

Many journals require you to include the limitations of your study in the discussion. Even if they don’t, there are good reasons to mention these in your paper.

Why limitations don’t have a negative connotation

A study’s limitations are points to be improved upon in future research. While some of these may be flaws in your method, many may be due to factors you couldn’t predict.

Examples include time constraints or small sample sizes. Pointing this out will help future researchers avoid or address these issues. This part of the discussion can also include any attempts you have made to reduce the impact of these limitations, as in this study.

How limitations add to a researcher's credibility

Pointing out the limitations of your study demonstrates transparency. It also shows that you know your methods well and can conduct a critical assessment of them.  

Implications and significance

The final paragraph of the discussion section should contain the take-home messages for your study. It can also cite the “strong points” of your study, to contrast with the limitations section.

Restate your hypothesis

Remind the reader what your hypothesis was before you conducted the study. 

How was it proven or disproven?

Identify your main findings and describe how they relate to your hypothesis.

How your results contribute to the literature

Were you able to answer your research question? Or address a gap in the literature?

Future implications of your research

Describe the impact that your results may have on the topic of study. Your results may show, for instance, that there are still limitations in the literature for future studies to address. There may be a need for studies that extend your findings in a specific way. You also may need additional research to corroborate your findings. 

Sample discussion section

This fictitious example covers all the aspects discussed above. Your actual discussion section will probably be much longer, but you can read this to get an idea of everything your discussion should cover.

Our results showed that the presence of cats in a household is associated with higher levels of perceived happiness by its human occupants. These findings support our hypothesis and demonstrate the association between pet ownership and well-being. 

The present findings align with those of Bao and Schreer (2016) and Hardie et al. (2023), who observed greater life satisfaction in pet owners relative to non-owners. Although the present study did not directly evaluate life satisfaction, this factor may explain the association between happiness and cat ownership observed in our sample.

Our findings must be interpreted in light of some limitations, such as the focus on cat ownership only rather than pets as a whole. This may limit the generalizability of our results.

Nevertheless, this study had several strengths. These include its strict exclusion criteria and use of a standardized assessment instrument to investigate the relationships between pets and owners. These attributes bolster the accuracy of our results and reduce the influence of confounding factors, increasing the strength of our conclusions. Future studies may examine the factors that mediate the association between pet ownership and happiness to better comprehend this phenomenon.

This brief discussion begins with a quick summary of the results and hypothesis. The next paragraph cites previous research and compares its findings to those of this study. Information from previous studies is also used to help interpret the findings. After discussing the results of the study, some limitations are pointed out. The paper also explains why these limitations may influence the interpretation of results. Then, final conclusions are drawn based on the study, and directions for future research are suggested.

How to make your discussion flow naturally

If you find writing in scientific English challenging, the discussion and conclusions are often the hardest parts of the paper to write. That’s because you’re not just listing studies, methods, and outcomes. You’re actually expressing your thoughts and interpretations in words.

  • How formal should it be?
  • What words should you use, or not use?
  • How do you meet strict word limits, or make it longer and more informative?



Writing a Research Paper Conclusion | Step-by-Step Guide

Published on October 30, 2022 by Jack Caulfield . Revised on April 13, 2023.

A research paper conclusion should:

  • Restate the problem statement addressed in the paper
  • Summarize your overall arguments or findings
  • Suggest the key takeaways from your paper


The content of the conclusion varies depending on whether your paper presents the results of original empirical research or constructs an argument through engagement with sources .


Step 1: Restate the problem

The first task of your conclusion is to remind the reader of your research problem . You will have discussed this problem in depth throughout the body, but now the point is to zoom back out from the details to the bigger picture.

While you are restating a problem you’ve already introduced, you should avoid phrasing it identically to how it appeared in the introduction . Ideally, you’ll find a novel way to circle back to the problem from the more detailed ideas discussed in the body.

For example, an argumentative paper advocating new measures to reduce the environmental impact of agriculture and an empirical paper studying the relationship of Instagram use with body image issues would each restate their problem in their own way; full example conclusions for both are given later in this guide.

“In conclusion …”

Avoid starting your conclusion with phrases like “In conclusion” or “To conclude,” as this can come across as too obvious and make your writing seem unsophisticated. The content and placement of your conclusion should make its function clear without the need for additional signposting.


Step 2: Sum up the paper

Having zoomed back in on the problem, it’s time to summarize how the body of the paper went about addressing it, and what conclusions this approach led to.

Depending on the nature of your research paper, this might mean restating your thesis and arguments, or summarizing your overall findings.

Argumentative paper: Restate your thesis and arguments

In an argumentative paper, you will have presented a thesis statement in your introduction, expressing the overall claim your paper argues for. In the conclusion, you should restate the thesis and show how it has been developed through the body of the paper.

Briefly summarize the key arguments made in the body, showing how each of them contributes to proving your thesis. You may also mention any counterarguments you addressed, emphasizing why your thesis holds up against them, particularly if your argument is a controversial one.

Don’t go into the details of your evidence or present new ideas; focus on outlining in broad strokes the argument you have made.

Empirical paper: Summarize your findings

In an empirical paper, this is the time to summarize your key findings. Don’t go into great detail here (you will have presented your in-depth results and discussion already), but do clearly express the answers to the research questions you investigated.

Describe your main findings, even if they weren’t necessarily the ones you expected or hoped for, and explain the overall conclusion they led you to.

Step 3: Discuss the implications

Having summed up your key arguments or findings, the conclusion ends by considering the broader implications of your research. This means expressing the key takeaways, practical or theoretical, from your paper—often in the form of a call for action or suggestions for future research.

Argumentative paper: Strong closing statement

An argumentative paper generally ends with a strong closing statement. In the case of a practical argument, make a call for action: What actions do you think should be taken by the people or organizations concerned in response to your argument?

If your topic is more theoretical and unsuitable for a call for action, your closing statement should express the significance of your argument—for example, in proposing a new understanding of a topic or laying the groundwork for future research.

Empirical paper: Future research directions

In a more empirical paper, you can close by either making recommendations for practice (for example, in clinical or policy papers), or suggesting directions for future research.

Whatever the scope of your own research, there will always be room for further investigation of related topics, and you’ll often discover new questions and problems during the research process .

Finish your paper on a forward-looking note by suggesting how you or other researchers might build on this topic in the future and address any limitations of the current paper.

Research paper conclusion examples

Full examples of research paper conclusions are shown below: one for an argumentative paper, the other for an empirical paper.

Argumentative paper example:

While the role of cattle in climate change is by now common knowledge, countries like the Netherlands continually fail to confront this issue with the urgency it deserves. The evidence is clear: To create a truly futureproof agricultural sector, Dutch farmers must be incentivized to transition from livestock farming to sustainable vegetable farming. As well as dramatically lowering emissions, plant-based agriculture, if approached in the right way, can produce more food with less land, providing opportunities for nature regeneration areas that will themselves contribute to climate targets. Although this approach would have economic ramifications, from a long-term perspective, it would represent a significant step towards a more sustainable and resilient national economy. Transitioning to sustainable vegetable farming will make the Netherlands greener and healthier, setting an example for other European governments. Farmers, policymakers, and consumers must focus on the future, not just on their own short-term interests, and work to implement this transition now.

Empirical paper example:
As social media becomes increasingly central to young people’s everyday lives, it is important to understand how different platforms affect their developing self-conception. By testing the effect of daily Instagram use among teenage girls, this study established that highly visual social media does indeed have a significant effect on body image concerns, with a strong correlation between the amount of time spent on the platform and participants’ self-reported dissatisfaction with their appearance. However, the strength of this effect was moderated by pre-test self-esteem ratings: Participants with higher self-esteem were less likely to experience an increase in body image concerns after using Instagram. This suggests that, while Instagram does impact body image, it is also important to consider the wider social and psychological context in which this usage occurs: Teenagers who are already predisposed to self-esteem issues may be at greater risk of experiencing negative effects. Future research into Instagram and other highly visual social media should focus on establishing a clearer picture of how self-esteem and related constructs influence young people’s experiences of these platforms. Furthermore, while this experiment measured Instagram usage in terms of time spent on the platform, observational studies are required to gain more insight into different patterns of usage—to investigate, for instance, whether active posting is associated with different effects than passive consumption of social media content.

If you’re unsure about the conclusion, it can be helpful to ask a friend or fellow student to read your conclusion and summarize the main takeaways.

  • Do they understand from your conclusion what your research was about?
  • Are they able to summarize the implications of your findings?
  • Can they answer your research question based on your conclusion?

You can also get an expert to proofread and give feedback on your paper with a paper editing service.


Frequently asked questions about research paper conclusions

The conclusion of a research paper has several key elements you should make sure to include:

  • A restatement of the research problem
  • A summary of your key arguments and/or findings
  • A short discussion of the implications of your research

No, it’s not appropriate to present new arguments or evidence in the conclusion . While you might be tempted to save a striking argument for last, research papers follow a more formal structure than this.

All your findings and arguments should be presented in the body of the text (more specifically in the results and discussion sections if you are following a scientific structure). The conclusion is meant to summarize and reflect on the evidence and arguments you have already presented, not introduce new ones.


Implications in Research – Types, Examples and Writing Guide


Implications in research refer to the potential consequences, applications, or outcomes of the findings and conclusions of a research study. These can include both theoretical and practical implications that extend beyond the immediate scope of the study and may impact various stakeholders, such as policymakers, practitioners, researchers, or the general public.

Structure of Implications

The format of implications in research typically follows the structure below:

  • Restate the main findings: Begin by restating the main findings of the study in a brief summary.
  • Link to the research question/hypothesis: Clearly articulate how the findings are related to the research question/hypothesis.
  • Discuss the practical implications: Discuss the practical implications of the findings, including their potential impact on the field or industry.
  • Discuss the theoretical implications: Discuss the theoretical implications of the findings, including their potential impact on existing theories or the development of new ones.
  • Identify limitations: Identify the limitations of the study and how they may affect the generalizability of the findings.
  • Suggest directions for future research: Suggest areas for future research that could build on the current study’s findings and address any limitations.

Types of Implications in Research

Types of Implications in Research are as follows:

Theoretical Implications

These are the implications that a study has for advancing theoretical understanding in a particular field. For example, a study that finds a new relationship between two variables can have implications for the development of theories and models in that field.

Practical Implications

These are the implications that a study has for solving practical problems or improving real-world outcomes. For example, a study that finds a new treatment for a disease can have implications for improving the health of patients.

Methodological Implications

These are the implications that a study has for advancing research methods and techniques. For example, a study that introduces a new method for data analysis can have implications for how future research in that field is conducted.

Ethical Implications

These are the implications that a study has for ethical considerations in research. For example, a study that involves human participants must consider the ethical implications of the research on the participants and take steps to protect their rights and welfare.

Policy Implications

These are the implications that a study has for informing policy decisions. For example, a study that examines the effectiveness of a particular policy can have implications for policymakers who are considering whether to implement or change that policy.

Societal Implications

These are the implications that a study has for society as a whole. For example, a study that examines the impact of a social issue such as poverty or inequality can have implications for how society addresses that issue.

Forms of Implications in Research

Forms of Implications are as follows:

Positive Implications

These refer to the positive outcomes or benefits that may result from a study’s findings. For example, a study that finds a new treatment for a disease can have positive implications for patients, healthcare providers, and the wider society.

Negative Implications

These refer to the negative outcomes or risks that may result from a study’s findings. For example, a study that finds a harmful side effect of a medication can have negative implications for patients, healthcare providers, and the wider society.

Direct Implications

These refer to the immediate consequences of a study’s findings. For example, a study that finds a new method for reducing greenhouse gas emissions can have direct implications for policymakers and businesses.

Indirect Implications

These refer to the broader or long-term consequences of a study’s findings. For example, a study that finds a link between childhood trauma and mental health issues can have indirect implications for social welfare policies, education, and public health.

Importance of Implications in Research

The following are some of the reasons why implications are important in research:

  • To inform policy and practice: Research implications can inform policy and practice decisions by providing evidence-based recommendations for actions that can be taken to address the issues identified in the research. This can lead to more effective policies and practices that are grounded in empirical evidence.
  • To guide future research: Implications can also guide future research by identifying areas that need further investigation, highlighting gaps in current knowledge, and suggesting new directions for research.
  • To increase the impact of research: By communicating the practical and theoretical implications of their research, researchers can increase the impact of their work by demonstrating its relevance and importance to a wider audience.
  • To enhance the credibility of research: Implications can help to enhance the credibility of research by demonstrating that the findings have practical and theoretical significance and are not just abstract or academic exercises.
  • To foster collaboration and engagement: Implications can also foster collaboration and engagement between researchers, practitioners, policymakers, and other stakeholders by providing a common language and understanding of the practical and theoretical implications of the research.

Example of Implications in Research

Here are some examples of implications in research:

  • Medical research: A study on the efficacy of a new drug for a specific disease can have significant implications for medical practitioners, patients, and pharmaceutical companies. If the drug is found to be effective, it can be used to treat patients with the disease, improve their health outcomes, and generate revenue for the pharmaceutical company.
  • Educational research: A study on the impact of technology on student learning can have implications for educators and policymakers. If the study finds that technology improves student learning outcomes, educators can incorporate technology into their teaching methods, and policymakers can allocate more resources to technology in schools.
  • Social work research: A study on the effectiveness of a new intervention program for individuals with mental health issues can have implications for social workers, mental health professionals, and policymakers. If the program is found to be effective, social workers and mental health professionals can incorporate it into their practice, and policymakers can allocate more resources to the program.
  • Environmental research: A study on the impact of climate change on a particular ecosystem can have implications for environmentalists, policymakers, and industries. If the study finds that the ecosystem is at risk, environmentalists can advocate for policy changes to protect the ecosystem, policymakers can allocate resources to mitigate the impact of climate change, and industries can adjust their practices to reduce their carbon footprint.
  • Economic research: A study on the impact of minimum wage on employment can have implications for policymakers and businesses. If the study finds that increasing the minimum wage does not lead to job losses, policymakers can implement policies to increase the minimum wage, and businesses can adjust their payroll practices.

How to Write Implications in Research

Writing implications in research involves discussing the potential outcomes or consequences of your findings and the practical applications of your study’s results. Here are some steps to follow when writing implications in research:

  • Summarize your key findings: Before discussing the implications of your research, briefly summarize your key findings. This will provide context for your implications and help readers understand how your research relates to your conclusions.
  • Identify the implications: Identify the potential implications of your research based on your key findings. Consider how your results might be applied in the real world, what further research might be necessary, and what other areas of study could be impacted by your research.
  • Connect implications to research question: Make sure that your implications are directly related to your research question or hypotheses. This will help to ensure that your implications are relevant and meaningful.
  • Consider limitations: Acknowledge any limitations or weaknesses of your research, and discuss how these might impact the implications of your research. This will help to provide a more balanced view of your findings.
  • Discuss practical applications: Discuss the practical applications of your research and how your findings could be used in real-world situations. This might include recommendations for policy or practice changes, or suggestions for future research.
  • Be clear and concise: When writing implications in research, be clear and concise. Use simple language and avoid jargon or technical terms that might be confusing to readers.
  • Provide a strong conclusion: Provide a strong conclusion that summarizes your key implications and leaves readers with a clear understanding of the significance of your research.

Purpose of Implications in Research

The purposes of implications in research include:

  • Informing practice: The implications of research can provide guidance for practitioners, policymakers, and other stakeholders about how to apply research findings in practical settings.
  • Generating new research questions: Implications can also inspire new research questions that build upon the findings of the original study.
  • Identifying gaps in knowledge: Implications can help to identify areas where more research is needed to fully understand a phenomenon.
  • Promoting scientific literacy: Implications can also help to promote scientific literacy by communicating research findings in accessible and relevant ways.
  • Facilitating decision-making: The implications of research can assist decision-makers in making informed decisions based on scientific evidence.
  • Contributing to theory development: Implications can also contribute to the development of theories by expanding upon or challenging existing theories.

When to Write Implications in Research

Here are some specific situations of when to write implications in research:

  • Research proposal: When writing a research proposal, it is important to include a section on the potential implications of the research. This section should discuss the potential impact of the research on the field and its potential applications.
  • Literature review: The literature review is an important section of the research paper where the researcher summarizes existing knowledge on the topic. This is also a good place to discuss the potential implications of the research. The researcher can identify gaps in the literature and suggest areas for further research.
  • Conclusion or discussion section: The conclusion or discussion section is where the researcher summarizes the findings of the study and interprets their meaning. This is a good place to discuss the implications of the research and its potential impact on the field.

Advantages of Implications in Research

Implications are an important part of research that can provide a range of advantages. Here are some of the key advantages of implications in research:

  • Practical applications: Implications can help researchers to identify practical applications of their research findings, which can be useful for practitioners and policymakers who are interested in applying the research in real-world contexts.
  • Improved decision-making: Implications can also help decision-makers to make more informed decisions based on the research findings. By clearly identifying the implications of the research, decision-makers can understand the potential outcomes of their decisions and make better choices.
  • Future research directions: Implications can also guide future research directions by highlighting areas that require further investigation or by suggesting new research questions. This can help to build on existing knowledge and fill gaps in the current understanding of a topic.
  • Increased relevance: By highlighting the implications of their research, researchers can increase the relevance of their work to real-world problems and challenges. This can help to increase the impact of their research and make it more meaningful to stakeholders.
  • Enhanced communication: Implications can also help researchers to communicate their findings more effectively to a wider audience. By highlighting the practical applications and potential benefits of their research, researchers can engage with stakeholders and communicate the value of their work more clearly.


The future of research: Emerging trends and new directions in scientific inquiry

The world of research is constantly evolving, and staying on top of emerging trends is crucial for advancing scientific inquiry. With the rapid development of technology and the increasing focus on interdisciplinary research, the future of research is filled with exciting opportunities and new directions.

In this article, we will explore the future of research, including emerging trends and new directions in scientific inquiry. We will examine the impact of technological advancements, interdisciplinary research, and other factors that are shaping the future of research.

One of the most significant trends shaping the future of research is the rapid development of technology. From big data analytics to machine learning and artificial intelligence, technology is changing the way we conduct research and opening up new avenues for scientific inquiry. With the ability to process vast amounts of data in real-time, researchers can gain insights into complex problems that were once impossible to solve.

Another important trend in the future of research is the increasing focus on interdisciplinary research. As the boundaries between different fields of study become more fluid, interdisciplinary research is becoming essential for addressing complex problems that require diverse perspectives and expertise. By combining the insights and methods of different fields, researchers can generate new insights and solutions that would not be possible with a single-discipline approach.

One emerging trend in research is the use of virtual and augmented reality (VR/AR) to enhance scientific inquiry. VR/AR technologies have the potential to transform the way we conduct experiments, visualize data, and collaborate with other researchers. For example, VR/AR simulations can allow researchers to explore complex data sets in three dimensions, enabling them to identify patterns and relationships that would be difficult to discern in two-dimensional representations.

Another emerging trend in research is the use of open science practices. Open science involves making research data, methods, and findings freely available to the public, facilitating collaboration and transparency in the scientific community. Open science practices can help to accelerate the pace of research by enabling researchers to build on each other’s work more easily and reducing the potential for duplication of effort.

The future of research is also marked by scientific innovation, with new technologies and approaches being developed to address some of the world’s most pressing problems. For example, gene editing technologies like CRISPR-Cas9 have the potential to revolutionize medicine by allowing scientists to edit DNA and cure genetic diseases. Similarly, nanotechnology has the potential to create new materials with unprecedented properties, leading to advances in fields like energy, electronics, and medicine.

One new direction in research is the focus on sustainability and the environment. With climate change and other environmental issues becoming increasingly urgent, researchers are turning their attention to developing sustainable solutions to the world’s problems. This includes everything from developing new materials and technologies to reduce greenhouse gas emissions to developing sustainable agricultural practices that can feed the world’s growing population without damaging the environment.

Another new direction in research is the focus on mental health and wellbeing. With mental health issues becoming increasingly prevalent, researchers are exploring new approaches to understanding and treating mental illness. This includes everything from developing new therapies and medications to exploring the role of lifestyle factors like diet, exercise, and sleep in mental health.

In conclusion, the future of research is filled with exciting opportunities and new directions. By staying on top of emerging trends, embracing interdisciplinary research, and harnessing the power of technological innovation, researchers can make significant contributions to scientific inquiry and address some of the world’s most pressing problems.


Suggestions for Future Research

Your dissertation needs to include suggestions for future research. Depending on the requirements of your university, these can either be integrated into the Research Limitations section or presented as a separate section.

You will need to propose 4-5 suggestions for future studies and these can include the following:

1. Building upon the findings of your research. These may relate to findings of your study that you did not anticipate. Moreover, you may suggest future research to address unanswered aspects of your research problem.

2. Addressing limitations of your research. Your research will not be free from limitations, and these may relate to the formulation of the research aim and objectives, the application of the data collection method, sample size, the scope of discussions and analysis, etc. You can propose future research suggestions that address the limitations of your study.

3. Constructing the same research in a new context, location and/or culture. It is most likely that you have addressed your research problem within the settings of a specific context, location and/or culture. Accordingly, you can propose future studies that address the same research problem in different settings, contexts, locations and/or cultures.

4. Re-assessing and expanding the theory, framework or model you have addressed in your research. Future studies can address the effects of a specific event, the emergence of a new theory or evidence and/or other recent phenomena on your research problem.

My e-book,  The Ultimate Guide to Writing a Dissertation in Business Studies: a step by step assistance  offers practical assistance to complete a dissertation with minimum or no stress. The e-book covers all stages of writing a dissertation starting from the selection to the research area to submitting the completed version of the work within the deadline. John Dudovskiy

Open access | Published: 16 October 2023

Forecasting the future of artificial intelligence with machine learning-based link prediction in an exponentially growing knowledge network

Mario Krenn, Lorenzo Buffoni, Bruno Coutinho, Sagi Eppel, Jacob Gates Foster, Andrew Gritsevskiy, Harlin Lee, Yichao Lu, João P. Moutinho, Nima Sanjabi, Rishi Sonthalia, Ngoc Mai Tran, Francisco Valente, Yangxinyu Xie, Rose Yu & Michael Kopp

Nature Machine Intelligence 5, 1326–1335 (2023)


Subjects: Complex networks, Computer science, Research data

A tool that could suggest new personalized research directions and ideas by taking insights from the scientific literature could profoundly accelerate the progress of science. A field that might benefit from such an approach is artificial intelligence (AI) research, where the number of scientific publications has been growing exponentially over recent years, making it challenging for human researchers to keep track of the progress. Here we use AI techniques to predict the future research directions of AI itself. We introduce a graph-based benchmark based on real-world data, the Science4Cast benchmark, which aims to predict the future state of an evolving semantic network of AI. For that, we use more than 143,000 research papers and build up a knowledge network with more than 64,000 concept nodes. We then present ten diverse methods to tackle this task, ranging from pure statistical to pure learning methods. Surprisingly, the most powerful methods use a carefully curated set of network features rather than an end-to-end AI approach. These results indicate considerable potential yet to be unleashed by pure ML approaches that require no human knowledge. Ultimately, better predictions of new future research directions will be a crucial component of more advanced research suggestion tools.


The corpus of scientific literature grows at an ever-increasing speed. Specifically, in the field of artificial intelligence (AI) and machine learning (ML), the number of papers published every month is growing exponentially with a doubling rate of roughly 23 months (Fig. 1). Simultaneously, the AI community is embracing diverse ideas from many disciplines such as mathematics, statistics and physics, making it challenging to organize different ideas and uncover new scientific connections. We envision a computer program that can automatically read, comprehend and act on AI literature. It can predict and suggest meaningful research ideas that transcend individual knowledge and cross-domain boundaries. If successful, it could greatly improve the productivity of AI researchers, open up new avenues of research and help drive progress in the field.

Figure 1. Number of papers per month in the arXiv categories cs.AI, cs.LG, cs.NE and stat.ML. The doubling rate is roughly 23 months, which might eventually lead to problems for publishing in these fields.

In this work, we address the ambitious vision of developing a data-driven approach to predict future research directions 1 . As new research ideas often emerge from connecting seemingly unrelated concepts 2 , 3 , 4 , we model the evolution of AI literature as a temporal network. We construct an evolving semantic network that encapsulates the content and development of AI research since 1994, with approximately 64,000 nodes (representing individual concepts) and 18 million edges (connecting jointly investigated concepts).

We use the semantic network as an input to ten diverse statistical and ML methods to predict the future evolution of the semantic network with high accuracy. That is, we can predict which combinations of concepts AI researchers will investigate in the future. Being able to predict what scientists will work on is a first crucial step for suggesting new topics that might have a high impact.

Several of the methods were contributed to the Science4Cast competition hosted by the 2021 IEEE International Conference on Big Data (IEEE BigData 2021). Broadly, we can divide the methods into two classes: those that use hand-crafted network-theoretical features and those that learn features automatically. We found that models using carefully hand-crafted features outperform methods that attempt to learn features autonomously. This somewhat surprising finding indicates great potential for improving models that are free of human priors.

Our paper introduces a real-world graph benchmark for AI, presents ten methods for solving it and discusses how this task contributes to the larger goal of AI-driven research suggestions in AI and other disciplines. All methods are available on GitHub 5.

Semantic networks

The goal here is to extract knowledge from the scientific literature that can subsequently be processed by computer algorithms. At first glance, a natural first step would be to use large language models (such as GPT-3 6, Gopher 7, Megatron 8 or PaLM 9) on each article to extract concepts and their relations automatically. However, these methods still struggle with reasoning capabilities 10, 11; thus, it is not yet clear how such models can be used for identifying and suggesting new ideas and concept combinations.

Rzhetsky et al. 12 pioneered an alternative approach, creating semantic networks in biochemistry from co-occurring concepts in scientific papers. There, nodes represent scientific concepts, specifically biomolecules, and are linked when a paper mentions both in its title or abstract. This evolving network captures the field’s history and, using supercomputer simulations, provides insights into scientists’ collective behaviour and suggests more efficient research strategies 13 . Although creating semantic networks from concept co-occurrences extracts only a small amount of knowledge from each paper, it captures non-trivial and actionable content when applied to large datasets 2 , 4 , 13 , 14 , 15 . PaperRobot extends this approach by predicting new links from large medical knowledge graphs and formulating new ideas in human language as paper drafts 16 .

This approach was applied and extended to quantum physics 17 by building a semantic network of over 6,000 concepts. There, the authors (including one of us) formulated the prediction of new research trends and connections as an ML task, with the goal of identifying concept pairs not yet jointly discussed in the literature but likely to be investigated in the future. This prediction task was one component for personalized suggestions of new research ideas.

Link prediction in semantic networks

We formulate the prediction of future research topics as a link-prediction task in an exponentially growing semantic network in the AI field. The goal is to predict which unconnected nodes, representing scientific concepts not yet jointly researched, will be connected in the future.

Link prediction is a common problem in computer science, addressed with classical metrics and features, as well as ML techniques. Network theory-based methods include local motif-based approaches 18 , 19 , 20 , 21 , 22 , linear optimization 23 , global perturbations 24 and stochastic block models 25 . ML works optimized a combination of predictors 26 , with further discussion in a recent review 27 .

In ref. 17, 17 hand-crafted features were used for this task. In the Science4Cast competition, the goal was to find more precise methods for link prediction in semantic networks, using a semantic network of AI ten times larger than the one in ref. 17.

Potential for idea generation in science

The long-term goal of predictions and suggestions in semantic networks is to provide new ideas to individual researchers. In a way, we hope to build a creative artificial muse in science 28. We can bias or constrain the model to give topic suggestions related to the research interests of an individual scientist, or of a pair of scientists, to suggest topics for collaboration in an interdisciplinary setting.

Generation and analysis of the dataset

Dataset construction.

We create a dynamic semantic network using papers published on arXiv from 1992 to 2020 in the categories cs.AI, cs.LG, cs.NE and stat.ML. The 64,719 nodes represent AI concepts extracted from 143,000 paper titles and abstracts using Rapid Automatic Keyword Extraction (RAKE) and normalized via natural language processing (NLP) techniques and custom methods 29 . Although high-quality taxonomies such as the Computer Science Ontology (CSO) exist 30 , 31 , we choose not to use them for two reasons: the rapid growth of AI and ML may result in new concepts not yet in the CSO, and not all scientific domains have high-quality taxonomies like CSO. Our goal is to build a scalable approach applicable to any domain of science. However, future research could investigate merging these approaches (see ‘Extensions and future work’).

Concepts form the nodes of the semantic network, and edges are drawn when concepts co-appear in a paper title or abstract. Edges carry time stamps based on the papers' publication dates, and multiple time-stamped edges between two concepts are common. The network is edge-weighted, and the weight of an edge is the number of papers that connect the two concepts. In total, this creates a time-evolving semantic network, depicted in Fig. 2.
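As a concrete illustration, the following minimal sketch builds such a time-stamped, edge-weighted co-occurrence network with networkx. The `papers` input and its format are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch: a time-stamped concept co-occurrence network (networkx).
# `papers` is an assumed input format: (publication_date, list_of_concepts).
from itertools import combinations
import networkx as nx

def build_semantic_network(papers):
    G = nx.Graph()
    for date, concepts in papers:
        for u, v in combinations(sorted(set(concepts)), 2):
            if G.has_edge(u, v):
                G[u][v]["weight"] += 1          # one more paper joins u and v
                G[u][v]["dates"].append(date)   # keep every time stamp
            else:
                G.add_edge(u, v, weight=1, dates=[date])
    return G

G = build_semantic_network([
    ("2019-04-01", ["neural network", "reinforcement learning", "case study"]),
    ("2020-07-15", ["neural network", "case study"]),
])
print(G["neural network"]["case study"])  # {'weight': 2, 'dates': [...]}
```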

Figure 2. Utilizing 143,000 AI and ML papers on arXiv from 1992 to 2020, we create a list of concepts using RAKE and other NLP tools; these form the nodes of a semantic network. Edges connect concepts that co-occur in titles or abstracts, resulting in an evolving network that expands as more concepts are jointly investigated. The task is to predict which unconnected nodes (concepts not yet studied together) will connect within a few years. We present ten diverse statistical and ML methods to address this challenge.

Network-theoretical analysis

The published semantic network has 64,719 nodes and 17,892,352 unique undirected edges, with a mean node degree of 553. Many hub nodes greatly exceed this mean degree, as shown in Fig. 3. For example, the highest node degrees are 466,319 (neural network), 198,050 (deep learning), 195,345 (machine learning), 169,555 (convolutional neural network), 159,403 (real world), 150,227 (experimental result), 127,642 (deep neural network) and 115,334 (large scale). We fit a power-law curve to the degree distribution p(k) using ref. 32 and obtained p(k) ∝ k^(−2.28) for degrees k ≥ 1,672. However, the degree distributions of real complex networks often follow power laws with exponential cut-offs 33, and recent work 34 has indicated that lognormal distributions fit most real-world networks better than power laws. Likelihood-ratio tests from ref. 32 suggest that truncated power-law (P = 0.0031), lognormal (P = 0.0045) and lognormal-positive (P = 0.015) distributions fit better than a power law, while exponential (P = 3 × 10^(−10)) and stretched exponential (P = 6 × 10^(−5)) distributions fit worse. We could not conclusively determine the best fit at P ≤ 0.1.
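These likelihood-ratio tests can be reproduced in spirit with the `powerlaw` package of ref. 32; here is a short sketch on the degree sequence of the network built earlier (the variable `G` is carried over from the construction sketch above).

```python
# Sketch: heavy-tailed fits and likelihood-ratio comparisons with `powerlaw`.
import powerlaw

degrees = [d for _, d in G.degree() if d > 0]   # non-zero node degrees
fit = powerlaw.Fit(degrees)
print(fit.power_law.alpha, fit.power_law.xmin)  # exponent and fitted cut-off

# R > 0 favours the first distribution; a small p makes the comparison reliable.
for alt in ["truncated_power_law", "lognormal", "exponential"]:
    R, p = fit.distribution_compare("power_law", alt)
    print(alt, R, p)
```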

Figure 3. Degree distribution of the semantic network, plotted in log scale. Nodes with the highest (466,319) and lowest (2) non-zero degrees are neural network and video compression technique, respectively. The most frequent non-zero degree is 64 (which occurs 313 times). The plot omits 1,247 nodes with zero degree.

We observe changes in network connectivity over time. Although degree distributions remained heavy-tailed, the ordering of nodes within the tail changed due to popularity trends. The most connected nodes and the years they became so include decision tree (1994), machine learning (1996), logic program (2000), neural network (2005), experimental result (2011), machine learning (2013, for a second time) and neural network (2015).

Connected component analysis in Fig. 4 reveals that the network grew more connected over time, with the largest group expanding and the number of connected components decreasing. Mid-sized connected components’ trajectories may expose trends, like image processing. A connected component with four nodes appeared in 1999 (brightness change, planar curve, local feature, differential invariant), and three more joined in 2000 (similarity transformation, template matching, invariant representation). In 2006, a paper discussing support vector machine and local feature merged this mid-sized group with the largest connected component.
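The component statistics behind this analysis reduce to a few networkx calls; the toy snapshot below reuses concept names from the example above and is purely illustrative.

```python
# Sketch: connected-component statistics on a toy snapshot of the network.
import networkx as nx

G_snapshot = nx.Graph([
    ("brightness change", "planar curve"),
    ("planar curve", "local feature"),
    ("support vector machine", "local feature"),   # the 2006 merging paper
    ("decision tree", "machine learning"),
])
components = [c for c in nx.connected_components(G_snapshot) if len(c) > 1]
largest = max(components, key=len)
print(len(components), len(largest))  # 2 components; the largest has 4 nodes
```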

Figure 4. Primary (left, blue) vertical axis: number of connected components with more than one node. Secondary (right, orange) vertical axis: number of nodes in the largest connected component. For example, the network in 2019 comprises one large connected component with 63,472 nodes and 1,247 isolated nodes (nodes with no edges), whereas the 2001 network has 19 connected components of size greater than one, the largest of which has 2,733 nodes.

The semantic network reveals increasing centralization over time, with a smaller percentage of nodes (concepts) contributing to a larger fraction of edges (concept combinations). Figure 5 shows that the fraction of edges for high-degree nodes rises, while it decreases for low-degree nodes. The decreasing average clustering coefficient over time supports this trend, suggesting nodes are more likely to connect to high-degree central nodes. This could be due to the AI community’s focus on a few dominating methods or more consistent terminology use.

Figure 5. Cumulative histogram illustrating the fraction of nodes (concepts) that contribute a given fraction of edges (connections) for the years 1999, 2003, 2007, 2011, 2015 and 2019. The graph was generated by adding the edges and nodes dated before each year, with nodes sorted by increasing degree. The y value at x = 80 represents the fraction of edges contributed by all nodes in and below the 80th percentile of degrees.

Problem formulation

At a high level, we aim to make predictions in an exponentially growing semantic network. The specific task is to predict which pairs of nodes v1 and v2 with degrees d(v1), d(v2) ≥ c that lack an edge in the year (2021 − δ) will have w edges in 2021. We use δ = 1, 3, 5, c = 0, 5, 25 and w = 1, 3, where c is a minimal degree. Note that c = 0 is an intriguing special case in which the nodes may not have any edge in the initial year, requiring the model to predict which nodes will gain entirely new connections. The task w = 3 goes beyond simple link prediction and seeks to identify uninvestigated concept pairs that will appear together in at least three papers. An interesting alternative task could be predicting the fastest-growing links, denoted 'trend' prediction.

In this task, we provide a list of 10 million unconnected node pairs (each node having a degree ≥ c) for the year (2021 − δ), with the goal of sorting this list by descending probability that the pairs will have at least w edges in 2021.

For evaluation, we employ the receiver operating characteristic (ROC) curve 35, which plots the true-positive rate against the false-positive rate at various threshold settings, and use the area under the curve (AUC) as our evaluation metric. The advantage of AUC over mean square error is its independence from the data distribution. Specifically, in our case, where the two classes are highly asymmetric (only about 1–3% of pairs become newly connected edges) and the distribution changes over time, AUC offers a meaningful interpretation: it is the probability that a randomly chosen true element is ranked higher than a randomly chosen false one. Perfect predictions yield AUC = 1, whereas random predictions result in AUC = 0.5. For other metrics, see ref. 36.
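As a sanity check of the metric, the following sketch computes the AUC of a deliberately imperfect scorer on a class balance similar to ours (roughly 2% positives); the data are synthetic.

```python
# Sketch: AUC on a heavily imbalanced toy problem (~2% positives).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
labels = rng.random(10_000) < 0.02           # True = pair gains >= w edges
scores = labels * 0.5 + rng.random(10_000)   # imperfect model: positives shifted up
print(roc_auc_score(labels, scores))         # 1.0 = perfect, 0.5 = random
```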

To tackle this task, models can use the complete information of the semantic network from the year (2021 − δ) in any way possible. In our case, all presented models generate a dataset for learning to make predictions from (2021 − 2δ) to (2021 − δ). Once the models successfully complete this task, they are applied to the test dataset to make predictions from (2021 − δ) to 2021. All reported AUCs are based on the test dataset. Note that solving the test dataset is especially challenging due to the δ-year shift, which causes systematic changes such as in the number of papers and the density of the semantic network.

AI-based solutions

We demonstrate various methods to predict new links in a semantic network, ranging from pure statistical approaches and neural networks with hand-crafted features (NF) to ML models without NF. The results are shown in Fig. 6 , with the highest AUC scores achieved by methods using NF as ML model inputs. Pure network features without ML are competitive, while pure ML methods have yet to outperform those with NF. Predicting links generated at least three times can achieve a quasi-deterministic AUC > 99.5%, suggesting an interesting target for computational sociology and science of science research. We have performed numerous tests to exclude data leakage in the benchmark dataset, overfitting or data duplication both in the set of articles and the set of concepts. We rank methods based on their performance, with model M1 as the best performing and model M8 as the least effective (for the prediction of a new edge with δ  = 3, c  = 0). Models M4 and M7 are subdivided into M4A, M4B, M7A and M7B, differing in their focus on feature or embedding selection (more details in Methods ).

Figure 6. AUC values for different models that use machine learning (ML), hand-crafted network features (NF) or a combination thereof. The left plot shows results for the prediction of a single new link (w = 1); the right plot shows results for the prediction of new triple links (w = 3). The task is to predict δ = [1, 3, 5] years into the future, with cut-off values c = [0, 5, 25]. We sort the models by their results on the task (w = 1, δ = 3, c = 0), which was the task in the Science4Cast competition. Data points that are not shown either have an AUC below 0.6 or were not computed owing to computational costs. All reported AUC values are computed on a validation dataset δ years ahead of the training dataset, which the models have never seen. Note that the prediction of new triple edges can be performed nearly deterministically; it will be interesting to understand the origin of this quasi-deterministic pattern in AI research, for example, by connecting it to the research interests of scientists 88.

Model M1: NF + ML. This approach combines tree-based gradient boosting with graph neural networks, using extensive feature engineering to capture node centralities, proximity and temporal evolution 37 . The Light Gradient Boosting Machine (LightGBM) model 38 is employed with heavy regularization to combat overfitting due to the scarcity of positive examples, while a time-aware graph neural network learns dynamic node representations.

Model M2: NF + ML. This method utilizes node and edge features (as well as their first and second derivatives) to predict link formation probabilities 39 . Node features capture popularity, and edge features measure similarity. A multilayer perceptron with rectified linear unit (ReLU) activation is used for learning. Cold start issues are addressed with feature imputation.

Model M3: NF + ML. This method captures hand-crafted node features over multiple time snapshots and employs a long short-term memory (LSTM) to learn time dependencies 40 . The features were selected to be highly informative while having a low computational cost. The final configuration uses degree centrality, degree of neighbours and common neighbours as features. The LSTM outperforms fully connected neural networks.

Model M4: pure NF. Two purely statistical methods, preferential attachment 41 and common neighbours 27 , are used 42 . Preferential attachment is based on node degrees, while common neighbours relies on the number of shared neighbours. Both methods are computationally inexpensive and perform competitively with some learning-based models.

Model M5: NF + ML. Here, ten groups of first-order graph features are extracted to obtain neighbourhood and similarity properties, with principal component analysis 43 applied for dimensionality reduction 44 . A random forest classifier is trained on the balanced dataset to predict new links.

Model M6: NF + ML. The baseline solution uses 15 hand-crafted features as input to a four-layer neural network, predicting the probability of link formation between node pairs 17 .

Model M7: end-to-end ML (auto node embedding). The baseline solution is modified to use node2vec 45 and ProNE embeddings 46 instead of hand-crafted features. The embeddings are input to a neural network with two hidden layers for link prediction.

Model M8: end-to-end ML (transformers). This method learns features in an unsupervised manner using transformers 47 . Node2vec embeddings 45 , 48 are generated for various snapshots of the adjacency matrix, and a transformer model 49 is pre-trained as a feature extractor. A two-layer ReLU network is used for classification.

Extensions and future work

Developing an AI that suggests research topics to scientists is a complex task, and our link-prediction approach in temporal networks is just the beginning. We highlight key extensions and future work directly related to the ultimate goal of AI for AI.

High-quality predictions without feature engineering. Interestingly, the most effective methods utilized carefully crafted features on a graph with extracted concepts as nodes and edges representing their joint publication history. Investigating whether end-to-end deep learning can solve tasks without feature engineering will be a valuable next step.

Fully automated concept extraction. Current concept lists, generated by RAKE’s statistical text analysis, demand time-consuming code development to address irrelevant term extraction (for example, verbs, adjectives). A fully automated NLP technique that accurately extracts meaningful concepts without manual code intervention would greatly enhance the process.

Leveraging ontology taxonomies. Alongside fully automated concept extraction, utilizing established taxonomies such as the CSO 30 , 31 , Wikipedia-extracted concepts, book indices 17 or PhySH key phrases is crucial. Although not comprehensive for all domains, these curated datasets often contain hierarchical and relational concept information, greatly improving prediction tasks.

Incorporating relation extraction. Future work could explore relation extraction techniques for constructing more accurate, sparser semantic networks. By discerning and classifying meaningful concept relationships in abstracts 50 , 51 , a refined AI literature representation is attainable. Using NLP tools for entity recognition, relationship identification and classification, this approach may enhance prediction performance and novel research direction identification.

Generation of new concepts. Our work predicts links between known concepts, but generating new concepts using AI remains a challenge. This unsupervised task, as explored in refs. 52 , 53 , involves detecting concept clusters with dynamics that signal new concept formation. Incorporating emerging concepts into the current framework for suggesting research topics is an intriguing future direction.

Semantic information beyond concept pairs. Currently, abstracts and titles are compressed into concept pairs, but more comprehensive information extraction could yield meaningful predictions. Exploring complex data structures such as hypergraphs 54 may be computationally demanding, but clever tricks could reduce complexity, as shown in ref. 55 . Investigating sociological factors or drawing inspiration from material science approaches 56 may also improve prediction tasks. A recent dataset for the study of the science of science also includes more complex data structures than the ones used in our paper, including data from social networks such as Twitter 57 .

Predictions of scientific success. While predicting new links between concepts is valuable, assessing their potential impact is essential for high-quality suggestions. Introducing a metric of success, like estimated citation numbers or citation growth rate, can help gauge the importance of these connections. Adapting citation prediction techniques from the science of science 58 , 59 , 60 , 61 to semantic networks offers a promising research direction.

Anomaly detections. Predicting likely connections may not align with finding surprising research directions. One method for identifying surprising suggestions involves constraining cosine similarity between vertices 62 , which measures shared neighbours and can be associated with semantic (dis)similarity. Another approach is detecting anomalies in semantic networks, which are potential links with extreme properties 63 , 64 . While scientists often focus on familiar topics 3 , 4 , greater impact results from unexpected combinations of distant domains 12 , encouraging the search for surprising associations.

End-to-end formulation. Our method breaks down the goal of extracting knowledge from scientific literature into subtasks, contrasting with end-to-end deep learning that tackles problems directly without subproblems 65 , 66 . End-to-end approaches have shown great success in various domains 67 , 68 , 69 . Investigating whether such an end-to-end solution can achieve similar success in our context would be intriguing.

Our method represents a crucial step towards developing a tool that can assist scientists in uncovering novel avenues for exploration. We are confident that the ideas and extensions outlined here pave the way towards practical, personalized, interdisciplinary AI-based suggestions for new impactful discoveries. We firmly believe that such a tool holds the potential to become an influential catalyst, transforming the way scientists approach research questions and collaborate in their respective fields.

Details on concept set generation and application

In this section, we provide details on the generation of our list of 64,719 concepts. For more information, the code is accessible on GitHub. The entire approach is designed for immediate scalability to other domains.

Initially, we utilized approximately 143,000 arXiv papers from the categories cs.AI, cs.LG, cs.NE and stat.ML spanning 1992 to 2020. The omission of earlier data has a negligible effect on our research question, as we show below. We then iterated over each individual article, employing RAKE (with an extended stopword list) to suggest concept candidates, which were subsequently stored.

Following the iteration, we retained concepts composed of at least two words (for example, neural network) appearing in six or more articles, as well as concepts comprising a minimum of three words (for example, recurrent neural network) appearing in three or more articles. This initial filter substantially reduced noise generated by RAKE, resulting in a list of 104,948 concepts.

Lastly, we developed an automated filtering tool to further enhance the quality of the concept list. This tool identified common, domain-independent errors made by RAKE, which primarily included phrases that were not concepts (for example, dataset provided or discuss open challenge). We compiled a list of 543 words not part of meaningful concepts, including verbs, ordinal numbers, conjunctions and adverbials. Ultimately, this process produced our final list of 64,719 concepts employed in our study. No further semantic concept/entity linking is applied.
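A rough stand-in for this pipeline, using the `rake_nltk` package (the authors' extended stopword list and the 543-word error filter are not reproduced here, and `abstracts` is a placeholder for the corpus):

```python
# Sketch: RAKE candidate extraction plus the two frequency filters above.
# Requires the NLTK 'stopwords' and 'punkt' data to be downloaded.
from collections import Counter
from rake_nltk import Rake

rake = Rake()  # optionally pass an extended stopword list here
candidate_counts = Counter()
for abstract in abstracts:                     # `abstracts`: placeholder corpus
    rake.extract_keywords_from_text(abstract)
    candidate_counts.update(set(rake.get_ranked_phrases()))

concepts = [c for c, n in candidate_counts.items()
            if (len(c.split()) >= 3 and n >= 3)    # >=3 words in >=3 papers
            or (len(c.split()) == 2 and n >= 6)]   # 2 words in >=6 papers
```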

By this construction, the test sets with c  = 0 could lead to very rare contamination of the dataset. That is because each concept will have at least one edge in the final dataset. The effects, however, are negligible.

The distribution of concepts in the articles can be seen in Extended Data Fig. 1 . As an example, we show the extraction of concepts from five randomly chosen papers:

Memristor hardware-friendly reinforcement learning 70 : ‘actor critic algorithm’, ‘neuromorphic hardware implementation’, ‘hardware neural network’, ‘neuromorphic hardware system’, ‘neural network’, ‘large number’, ‘reinforcement learning’, ‘case study’, ‘pre training’, ‘training procedure’, ‘complex task’, ‘high performance’, ‘classical problem’, ‘hardware implementation’, ‘synaptic weight’, ‘energy efficient’, ‘neuromorphic hardware’, ‘control theory’, ‘weight update’, ‘training technique’, ‘actor critic’, ‘nervous system’, ‘inverted pendulum’, ‘explicit supervision’, ‘hardware friendly’, ‘neuromorphic architecture’, ‘hardware system’.

Automated deep learning analysis of angiography video sequences for coronary artery disease 71 : ‘deep learning approach’, ‘coronary artery disease’, ‘deep learning analysis’, ‘traditional image processing’, ‘deep learning’, ‘image processing’, ‘f1 score’, ‘video sequence’, ‘error rate’, ‘automated analysis’, ‘coronary artery’, ‘vessel segmentation’, ‘key frame’, ‘visual assessment’, ‘analysis method’, ‘analysis pipeline’, ‘coronary angiography’, ‘geometrical analysis’.

Demographic influences on contemporary art with unsupervised style embeddings 72 : ‘classification task’, ‘social network’, ‘data source’, ‘visual content’, ‘graph network’, ‘demographic information’, ‘social connection’, ‘visual style’, ‘historical dataset’, ‘novel information’

The utility of general domain transfer learning for medical language tasks 73 : 'natural language processing', 'long short term memory', 'logistic regression model', 'transfer learning technique', 'short term memory', 'average f1 score', 'class classification model', 'domain transfer learning', 'weighted average f1 score', 'medical natural language processing', 'natural language process', 'transfer learning', 'f1 score', 'natural language', 'deep model', 'logistic regression', 'model performance', 'classification model', 'text classification', 'regression model', 'nlp task', 'short term', 'medical domain', 'weighted average', 'class classification', 'bert model', 'language processing', 'biomedical domain', 'domain transfer', 'nlp model', 'main model', 'general domain', 'domain model', 'medical text'.

Fast neural architecture construction using envelopenets 74 : ‘neural network architecture’, ‘neural architecture search’, ‘deep network architecture’, ‘image classification problem’, ‘neural architecture search method’, ‘neural network’, ‘reinforcement learning’, ‘deep network’, ‘image classification’, ‘objective function’, ‘network architecture’, ‘classification problem’, ‘evolutionary algorithm’, ‘neural architecture’, ‘base network’, ‘architecture search’, ‘training epoch’, ‘search method’, ‘image class’, ‘full training’, ‘automated search’, ‘generated network’, ‘constructed network’, ‘gpu day’.

Time gap between the generation of edges

We use articles from arXiv, which goes back only to the year 1992; of course, the field of AI has existed since at least the 1960s 75. This raises the question of whether the omission of the first 30–40 years of research has a crucial impact on the prediction task we formulate, specifically, whether edges that we consider new might not be so new after all. In Extended Data Fig. 2, we therefore compute the time between the formation of edges between the same concepts, taking into account either all edges or just the first edge. We see that the vast majority of edges are formed within short time periods; thus, omitting the early publications has a negligible effect on our question. Of course, other questions might be crucially affected by the early data; a careful choice of the data source is therefore crucial 61.

Positive examples in the test dataset

Table 1 shows the number of positive cases within the 10 million examples in the 18 test datasets that are used for evaluation.

Publication rates in quantum physics

Another field of research that has gained a lot of attention in recent years is quantum physics. This field is also a strong adopter of arXiv; thus, we analyse it in the same way as for AI in Fig. 1. In Extended Data Fig. 3 we find no obvious exponential increase in papers per month. A detailed analysis of other domains is beyond the current scope, but it will be interesting to investigate the growth rates of different scientific disciplines in more detail, especially given that exponential increases have been observed in several aspects of the science of science 3, 76.

Details on models M1–M8

What follows are more detailed explanations of the models presented in the main text. All code is available on GitHub. The feature importance of the best model, M1, is shown here; those of the other models are analysed in the respective workshop contributions (cited in the subsections below).

Details on M1

The best-performing solution is based on a blend of a tree-based gradient boosting approach and a graph neural network approach 37 . Extensive feature engineering was conducted to capture the centralities of the nodes, the proximity between node pairs and their evolution over time. The centrality of a node is captured by the number of neighbours and the PageRank score 77 , while the proximity between a node pair is derived using the Jaccard index. We refer the reader to ref. 37 for the list of all features and their feature importance.

The tree-based gradient boosting approach uses LightGBM 38 and applies heavy regularization to combat overfitting due to the scarcity of positive samples. The graph neural network approach employs a time-aware graph neural network to learn node representations on dynamic semantic networks. The feature importance of model M1, averaged over the 18 datasets, is shown in Table 2. It shows that the temporal features contribute substantially to the model performance, but the model remains strong even when they are removed. An example of the evolution of the training set (from 2016 to 2019) and test set (2019 to 2021) for δ = 3, c = 25, w = 1 is shown in Extended Data Fig. 4.
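A heavily regularized LightGBM classifier in the spirit of M1 might look as follows; the hyperparameter values are illustrative assumptions rather than those of ref. 37, and `X_train`/`y_train` stand for the hand-crafted feature matrix and labels.

```python
# Sketch: regularized gradient boosting for link prediction (LightGBM).
import lightgbm as lgb

clf = lgb.LGBMClassifier(
    n_estimators=2000,
    learning_rate=0.02,
    num_leaves=15,           # small trees limit model capacity
    min_child_samples=200,   # many samples required per leaf
    reg_alpha=5.0,           # L1 regularization
    reg_lambda=10.0,         # L2 regularization
    subsample=0.8,
    colsample_bytree=0.8,
)
clf.fit(X_train, y_train)                 # features: degrees, PageRank, Jaccard, ...
scores = clf.predict_proba(X_eval)[:, 1]  # probabilities to rank the 10M pairs
```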

Details on M2

The second method assumes that the probability that nodes u and v form an edge in the future is a function of the node features f(u), f(v) and some edge feature h(u, v). We chose node features f that capture popularity at the current time t0 (such as degree, clustering coefficient 78, 79 and PageRank 77). We also use these features' first and second time derivatives to capture the evolution of the node's popularity over time. After variable selection during training, we chose h to consist of the HOP-rec score (high-order proximity for implicit recommendation) 80, 81 and a variation of the Dice similarity score 82 as a measure of similarity between nodes. In summary, we use 31 node features for each node and two edge features, which gives 31 × 2 + 2 = 64 features in total. These features are then fed into a small multilayer perceptron (five layers, each with 13 neurons) with ReLU activation.

Cold start is the problem that some nodes in the test set do not appear in the training set. Our strategy for a cold start is imputation. We say a node v is seen if it appeared in the training data, and unseen otherwise; similarly, we say that a node is born at time t if t is the first time stamp where an edge linking this node has appeared. The idea is that an unseen node is simply a node born in the future, so its features should look like a recently born node in the training set. If a node is unseen, then we impute its features as the average of the features of the nodes born recently. We found that with imputation during training, the test AUC scores across all models consistently increased by about 0.02. For a complete description of this method, we refer the reader to ref. 39 .
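The imputation strategy can be sketched in a few lines; all names are illustrative.

```python
# Sketch: cold-start imputation. Unseen nodes inherit the mean feature
# vector of recently born nodes from the training data.
import numpy as np

def impute_unseen(features, birth_year, current_year, window=2):
    # features: dict node -> feature vector; birth_year: dict node -> year
    recent = [features[v] for v in features
              if birth_year[v] >= current_year - window]
    return np.mean(recent, axis=0)   # assigned to every unseen node
```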

Details on M3

This approach, detailed in ref. 40 , uses hand-crafted node features that have been captured in multiple time snapshots (for example, every year) and then uses an LSTM to benefit from learning the time dependencies of these features. The final configuration uses two main types of feature: node features including degree and degree of neighbours, and edge features including common neighbours. In addition, to balance the training data, the same number of positive and negative instances have been randomly sampled and combined.

One of the goals was to identify features that are very informative with a very low computational cost. We found that the degree centrality of the nodes is the most important feature, and the degree centrality of the neighbouring nodes and the degree of mutual neighbours gave us the best trade-off. As all of the extracted features’ distributions are highly skewed to the right, meaning most of the features take near zero values, using a power transform such as Yeo–Johnson 83 helps to make the distributions more Gaussian, which boosts the learning. Finally, for the link-prediction task, we saw that LSTMs perform better than fully connected neural networks.
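The Yeo–Johnson step corresponds directly to scikit-learn's PowerTransformer, sketched here on synthetic right-skewed features:

```python
# Sketch: Gaussianizing right-skewed features with the Yeo-Johnson transform.
import numpy as np
from sklearn.preprocessing import PowerTransformer

X = np.random.default_rng(0).exponential(size=(1000, 3))  # skewed toy features
pt = PowerTransformer(method="yeo-johnson")  # also standardizes by default
X_gauss = pt.fit_transform(X)
```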

Details on M4

The following two methods are based on a purely statistical analysis of the test data and are explained in detail in ref. 42 .

Preferential attachment. In the network analysis, we concluded that the growth of this dataset tends to maintain a heavy-tailed degree distribution, often associated with scale-free networks. As mentioned before, the γ value of the degree distribution is very close to 2, suggesting that preferential attachment 41 is probably the main organizational principle of the network. As such, we implemented a simple prediction model following this procedure. Preferential attachment scores in link prediction are often quantified as

s(i, j) = k_i k_j,

with k_i and k_j the degrees of nodes i and j. However, this assumes the scoring of links between nodes that are already connected to the network, that is, k_i, k_j > 0, which is not the case for all the links we must score in the dataset. As a result, we define our preferential attachment model as

s(i, j) = (k_i + 1)(k_j + 1).

Using this simple model with no free parameters, we could score new links and compare them with the other models. We immediately note that preferential attachment outperforms some learning-based models; although it never reaches the top AUC, it is extremely simple and has negligible computational cost.

Common neighbours. We explore another network-based approach to score the links. Indeed, while the preferential attachment model we derived performed well, it uses no information about the distance between i and j, which is a popular feature used in link-prediction methods 27. As such, we decided to test a method known as common neighbours 18. We define Γ(i) as the set of neighbours of node i and Γ(i) ∩ Γ(j) as the set of common neighbours of nodes i and j. We can then score a pair of nodes with

s(i, j) = |Γ(i) ∩ Γ(j)|,

the intuition being that nodes that share a larger number of neighbours are more likely to be connected than distant nodes that do not share any.

Evaluating this score for each pair (i, j) in the dataset of unconnected pairs, which can be computed via the second power of the adjacency matrix, A², we obtained an AUC that is sometimes higher and sometimes lower than that of preferential attachment, but still consistently close to the best learning-based models.
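Both scores can be computed directly from a sparse adjacency matrix. In the sketch below, the (k + 1)(k + 1) form follows the preferential-attachment definition reconstructed above, and the toy matrix is a stand-in for the real network.

```python
# Sketch: preferential-attachment and common-neighbour scores from a
# sparse adjacency matrix A.
import numpy as np
import scipy.sparse as sp

A = sp.random(1000, 1000, density=0.01, format="csr")
A = ((A + A.T) > 0).astype(int)          # symmetric, unweighted toy network
k = np.asarray(A.sum(axis=1)).ravel()    # node degrees

def preferential_attachment(i, j):
    return (k[i] + 1) * (k[j] + 1)       # well defined even for k = 0

A2 = A @ A                               # (A^2)[i, j] = number of common neighbours

def common_neighbours(i, j):
    return A2[i, j]
```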

Details on M5

This method is based on ref. 44 . First, ten groups of first-order graph features are extracted to get some neighbourhood and similarity properties from each pair of nodes: degree centrality of nodes, pair’s total number of neighbours, common neighbours index, Jaccard coefficient, Simpson coefficient, geometric coefficient, cosine coefficient, Adamic–Adar index, resource allocation index and preferential attachment index. They are obtained for three consecutive years to capture the temporal dynamics of the semantic network, leading to a total of 33 features. Second, principal component analysis 43 is applied to reduce the correlation between features, speed up the learning process and improve generalization, which results in a final set of seven latent variables. Lastly, a random forest classifier is trained (using a balanced dataset) to estimate the likelihood of new links between the AI concepts.

In this paper, one modification was made relative to the original formulation of the method 44: two of the original features, the average neighbour degree and the clustering coefficient, were infeasible to extract for some of the tasks covered here, as their computation is too heavy for such a large network, and they were discarded. Owing to memory constraints, it was not possible to run the model for some of the tasks covered in this study, so those results are missing.
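Under the stated assumptions (33 features reduced to seven components, a balanced training sample), the M5 pipeline maps onto a short scikit-learn sketch; `X_balanced` and `y_balanced` are placeholders.

```python
# Sketch: PCA for dimensionality reduction followed by a random forest.
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

model = make_pipeline(
    PCA(n_components=7),                     # 33 graph features -> 7 components
    RandomForestClassifier(n_estimators=300),
)
model.fit(X_balanced, y_balanced)            # balanced positive/negative sample
probas = model.predict_proba(X_eval)[:, 1]
```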

Details on M6

The baseline solution for the Science4Cast competition was closely related to the model presented in ref. 17. It uses 15 hand-crafted features of a pair of nodes v1 and v2: the degrees of v1 and v2 in the current year and the two previous years (six properties); the total numbers of neighbours of v1 and of v2 in the current year and the two previous years (six properties); and the number of shared neighbours between v1 and v2 in the current year and the two previous years (three properties). These 15 features are the input of a neural network with four layers (15, 100, 10 and 1 neurons), intended to predict whether the nodes v1 and v2 will have w edges in the future. After training, the model computes the probability for all 10 million evaluation examples; this list is sorted and the AUC is computed.
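The 15 → 100 → 10 → 1 architecture can be approximated with scikit-learn's MLPClassifier; the original implementation lives in the GitHub repository, so this stand-in is an assumption.

```python
# Sketch: the baseline's four-layer network (15, 100, 10 and 1 neurons).
from sklearn.neural_network import MLPClassifier

mlp = MLPClassifier(hidden_layer_sizes=(100, 10), activation="relu")
mlp.fit(X15_train, y_train)              # X15_train: 15 features per node pair
p = mlp.predict_proba(X15_eval)[:, 1]    # sorted afterwards to compute the AUC
```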

Details on M7

The solution M7 was not part of the Science4Cast competition and is therefore not described in the corresponding proceedings; we thus add more details here.

The most immediate way to apply ML to this problem is by automating the detection of features. Quite simply, the baseline solution M6 is modified such that, instead of 15 hand-crafted features, the neural network is trained on features extracted from a graph embedding. We use two different embedding approaches. The first employs node2vec (M7A) 45, for which we use the implementation provided in the nodevectors Python package 84. The second uses the ProNE embedding (M7B) 46, which is based on sparse matrix factorizations modulated by the higher-order Cheeger inequality 85.

The embeddings generate a 32-dimensional representation for each node, resulting in edge representations in [0, 1]^64. These features are input into a neural network with two hidden layers of sizes 1,000 and 30. As for M6, the model computes the probability for each evaluation example to determine the ROC. We compare ProNE to node2vec, a common graph-embedding method using a biased random-walk procedure with return and in-out parameters, which greatly affect the network encoding. Initial experiments used default values for a 64-dimensional encoding before input into the neural network. The higher variance in the node2vec predictions is probably due to its sensitivity to hyperparameters. While ProNE is better suited for general multi-dataset link prediction, node2vec's sensitivity may help identify crucial network features for predicting temporal evolution.
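Using the nodevectors package named above, M7-style edge features can be sketched as follows; the hyperparameters are assumptions, and `G` stands for the networkx graph of the semantic network.

```python
# Sketch: node2vec embeddings via `nodevectors`, concatenated into
# 64-dimensional edge features for the downstream classifier.
import numpy as np
from nodevectors import Node2Vec

g2v = Node2Vec(n_components=32)   # 32-dimensional node embeddings
g2v.fit(G)                        # G: networkx graph of the semantic network

def edge_features(u, v):
    return np.concatenate([g2v.predict(u), g2v.predict(v)])  # 64-dim vector
```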

Details on M8

This model, which is detailed in ref. 47, does not use any hand-crafted features but learns them in a completely unsupervised manner. To do so, we extract various snapshots of the adjacency matrix through time, capturing graphs in the form of A_t for t = 1994, …, 2019. We then embed each of these graphs into 128-dimensional Euclidean space via node2vec 45, 48. For each node u in the semantic graph, we thus obtain 128-dimensional vector embeddings n_u(A_1994), …, n_u(A_2019).

Transformers have performed extremely well in NLP tasks 49; thus, we apply them to learn the dynamics of the embedding vectors. We pre-train a transformer to help classify node pairs. The encoder and decoder have six layers each; we use 128 as the embedding dimension, 2,048 as the feed-forward dimension and 8-headed attention. This transformer acts as our feature extractor. Once the transformer is pre-trained, we add a two-layer ReLU network with hidden dimension 128 as a classifier on top.
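A compact PyTorch sketch of this setup follows; it uses only an encoder (a simplification of the encoder-decoder described above), and the pooling and shapes are illustrative assumptions.

```python
# Sketch: transformer over yearly node2vec embeddings as a feature extractor,
# with a two-layer ReLU classifier head (hidden dimension 128).
import torch
import torch.nn as nn

encoder_layer = nn.TransformerEncoderLayer(d_model=128, nhead=8,
                                           dim_feedforward=2048)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=6)
classifier = nn.Sequential(nn.Linear(2 * 128, 128), nn.ReLU(),
                           nn.Linear(128, 1))

# Toy input: embeddings of one node for the 26 snapshots 1994..2019,
# shaped (sequence, batch, embedding).
seq_u = torch.randn(26, 1, 128)
seq_v = torch.randn(26, 1, 128)
feat_u = encoder(seq_u).mean(dim=0)   # pool each sequence into one vector
feat_v = encoder(seq_v).mean(dim=0)
logit = classifier(torch.cat([feat_u, feat_v], dim=-1))
```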

Data availability

All 18 datasets tested in this paper are available via Zenodo at https://doi.org/10.5281/zenodo.7882892 (ref. 86).

Code availability

All of the models and code described above can be found via GitHub at https://github.com/artificial-scientist-lab/FutureOfAIviaAI (ref. 5) and in a permanent Zenodo record at https://zenodo.org/record/8329701 (ref. 87).

Clauset, A., Larremore, D. B. & Sinatra, R. Data-driven predictions in the science of science. Science 355 , 477–480 (2017).


Evans, J. A. & Foster, J. G. Metaknowledge. Science 331 , 721–725 (2011).


Fortunato, S. et al. Science of science. Science 359 , eaao0185 (2018).

Wang, D. & Barabási, A.-L. The Science of Science (Cambridge Univ. Press, 2021).

Krenn, M. et al. FutureOfAIviaAI. GitHub https://github.com/artificial-scientist-lab/FutureOfAIviaAI (2023).

Brown, T. et al. Language models are few-shot learners. Adv. Neural Inf. Process. Syst. 33 , 1877–1901 (2020).


Rae, J. W. et al. Scaling language models: methods, analysis & insights from training gopher. Preprint at https://arxiv.org/abs/2112.11446 (2021).

Smith, S. et al. Using DeepSpeed and Megatron to train Megatron-Turing NLG 530B, a large-scale generative language model. Preprint at https://arxiv.org/abs/2201.11990 (2022).

Chowdhery, A. et al. Palm: scaling language modeling with pathways. Preprint at https://arxiv.org/abs/2204.02311 (2022).

Kojima, T., Gu, S. S., Reid, M., Matsuo, Y. & Iwasawa, Y. Large language models are zero-shot reasoners. Preprint at https://arxiv.org/abs/2205.11916 (2022).

Zhang, H., Li, L. H., Meng, T., Chang, K.-W. & Broeck, G. V. d. On the paradox of learning to reason from data. Preprint at https://arxiv.org/abs/2205.11502 (2022).

Rzhetsky, A., Foster, J. G., Foster, I. T. & Evans, J. A. Choosing experiments to accelerate collective discovery. Proc. Natl Acad. Sci. USA 112 , 14569–14574 (2015).

Foster, J. G., Rzhetsky, A. & Evans, J. A. Tradition and innovation in scientists’ research strategies. Am. Sociol. Rev. 80 , 875–908 (2015).

Van Eck, N. J. & Waltman, L. Text mining and visualization using vosviewer. Preprint at https://arxiv.org/abs/1109.2058 (2011).

Van Eck, N. J. & Waltman, L. in Measuring Scholarly Impact: Methods and Practice (eds Ding, Y. et al.) 285–320 (Springer, 2014).

Wang, Q. et al. Paperrobot: Incremental draft generation of scientific ideas. Preprint at https://arxiv.org/abs/1905.07870 (2019).

Krenn, M. & Zeilinger, A. Predicting research trends with semantic and neural networks with an application in quantum physics. Proc. Natl Acad. Sci. USA 117 , 1910–1916 (2020).

Liben-Nowell, D. & Kleinberg, J. The link-prediction problem for social networks. J. Am. Soc. Inf. Sci. Technol. 58 , 1019–1031 (2007).

Albert, I. & Albert, R. Conserved network motifs allow protein–protein interaction prediction. Bioinformatics 20 , 3346–3352 (2004).

Zhou, T., Lü, L. & Zhang, Y.-C. Predicting missing links via local information. Eur. Phys. J. B 71 , 623–630 (2009).


Kovács, I. A. et al. Network-based prediction of protein interactions. Nat. Commun. 10 , 1240 (2019).

Muscoloni, A., Abdelhamid, I. & Cannistraci, C. V. Local-community network automata modelling based on length-three-paths for prediction of complex network structures in protein interactomes, food webs and more. Preprint at bioRxiv https://doi.org/10.1101/346916 (2018).

Pech, R., Hao, D., Lee, Y.-L., Yuan, Y. & Zhou, T. Link prediction via linear optimization. Physica A 528 , 121319 (2019).

Lü, L., Pan, L., Zhou, T., Zhang, Y.-C. & Stanley, H. E. Toward link predictability of complex networks. Proc. Natl Acad. Sci. USA 112 , 2325–2330 (2015).

Guimerà, R. & Sales-Pardo, M. Missing and spurious interactions and the reconstruction of complex networks. Proc. Natl Acad. Sci. USA 106 , 22073–22078 (2009).

Ghasemian, A., Hosseinmardi, H., Galstyan, A., Airoldi, E. M. & Clauset, A. Stacking models for nearly optimal link prediction in complex networks. Proc. Natl Acad. Sci. USA 117 , 23393–23400 (2020).

Zhou, T. Progresses and challenges in link prediction. iScience 24 , 103217 (2021).

Krenn, M. et al. On scientific understanding with artificial intelligence. Nat. Rev. Phys. 4 , 761–769 (2022).

Rose, S., Engel, D., Cramer, N. & Cowley, W. in Text Mining: Applications and Theory (eds Berry, M. W. & Kogan, J.) Ch. 1 (Wiley, 2010).

Salatino, A. A., Thanapalasingam, T., Mannocci, A., Osborne, F. & Motta, E. The computer science ontology: a large-scale taxonomy of research areas. In Proc. Semantic Web–ISWC 2018: 17th International Semantic Web Conference Part II Vol. 17, 187–205 (Springer, 2018).

Salatino, A. A., Osborne, F., Thanapalasingam, T. & Motta, E. The CSO classifier: ontology-driven detection of research topics in scholarly articles. In Proc. Digital Libraries for Open Knowledge: 23rd International Conference on Theory and Practice of Digital Libraries Vol. 23, 296–311 (Springer, 2019).

Alstott, J., Bullmore, E. & Plenz, D. powerlaw: a Python package for analysis of heavy-tailed distributions. PLoS ONE 9 , e85777 (2014).

Fenner, T., Levene, M. & Loizou, G. A model for collaboration networks giving rise to a power-law distribution with an exponential cutoff. Soc. Netw. 29 , 70–80 (2007).

Broido, A. D. & Clauset, A. Scale-free networks are rare. Nat. Commun. 10 , 1017 (2019).

Fawcett, T. ROC graphs: notes and practical considerations for researchers. Pattern Recognit. Lett. 31 , 1–38 (2004).

Sun, Y., Wong, A. K. & Kamel, M. S. Classification of imbalanced data: a review. Int. J. Pattern Recognit. Artif. Intell. 23 , 687–719 (2009).

Lu, Y. Predicting research trends in artificial intelligence with gradient boosting decision trees and time-aware graph neural networks. In 2021 IEEE International Conference on Big Data (Big Data) 5809–5814 (IEEE, 2021).

Ke, G. et al. LightGBM: a highly efficient gradient boosting decision tree. In Proc. 31st International Conference on Neural Information Processing Systems 3149–3157 (Curran Associates Inc., 2017).

Tran, N. M. & Xie, Y. Improving random walk rankings with feature selection and imputation Science4Cast competition, team Hash Brown. In 2021 IEEE International Conference on Big Data (Big Data) 5824–5827 (IEEE, 2021).

Sanjabi, N. Efficiently predicting scientific trends using node centrality measures of a science semantic network. In 2021 IEEE International Conference on Big Data (Big Data) 5820–5823 (IEEE, 2021).

Barabási, A.-L. Network science. Phil. Trans. R. Soci. A 371 , 20120375 (2013).

Moutinho, J. P., Coutinho, B. & Buffoni, L. Network-based link prediction of scientific concepts—a Science4Cast competition entry. In 2021 IEEE International Conference on Big Data (Big Data) 5815–5819 (IEEE, 2021).

Jolliffe, I. T. & Cadima, J. Principal component analysis: a review and recent developments. Phil. Trans. R. Soc. A 374 , 20150202 (2016).

Valente, F. Link prediction of artificial intelligence concepts using low computational power. In 2021 IEEE International Conference on Big Data (Big Data) 5828–5832 (2021).

Grover, A. & Leskovec, J. node2vec: scalable feature learning for networks. In Proc. 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 855–864 (ACM, 2016).

Zhang, J., Dong, Y., Wang, Y., Tang, J. & Ding, M. ProNE: fast and scalable network representation learning. In Proc. Twenty-Eighth International Joint Conference on Artificial Intelligence 4278–4284 (International Joint Conferences on Artificial Intelligence Organization, 2019).

Lee, H., Sonthalia, R. & Foster, J. G. Dynamic embedding-based methods for link prediction in machine learning semantic network. In 2021 IEEE International Conference on Big Data (Big Data) 5801–5808 (IEEE, 2021).

Liu, R. & Krishnan, A. PecanPy: a fast, efficient and parallelized python implementation of node2vec. Bioinformatics 37 , 3377–3379 (2021).

Vaswani, A. et al. Attention is all you need. In Proc. 31st International Conference on Neural Information Processing Systems 6000–6010 (Curran Associates Inc., 2017).

Zelenko, D., Aone, C. & Richardella, A. Kernel methods for relation extraction. J. Mach. Learn. Res. 3 , 1083–1106 (2003).


Bach, N. & Badaskar, S. A review of relation extraction. Literature Review for Language and Statistics II 2 , 1–15 (2007).

Salatino, A. A., Osborne, F. & Motta, E. How are topics born? Understanding the research dynamics preceding the emergence of new areas. PeerJ Comput. Sc. 3 , e119 (2017).

Salatino, A. A., Osborne, F. & Motta, E. AUGUR: forecasting the emergence of new research topics. In Proc. 18th ACM/IEEE on Joint Conference on Digital Libraries 303–312 (IEEE, 2018).

Battiston, F. et al. The physics of higher-order interactions in complex systems. Nat. Phys. 17 , 1093–1098 (2021).

Coutinho, B. C., Wu, A.-K., Zhou, H.-J. & Liu, Y.-Y. Covering problems and core percolations on hypergraphs. Phys. Rev. Lett. 124 , 248301 (2020).


Olivetti, E. A. et al. Data-driven materials research enabled by natural language processing and information extraction. Appl. Phys. Rev. 7 , 041317 (2020).

Lin, Z., Yin, Y., Liu, L. & Wang, D. SciSciNet: a large-scale open data lake for the science of science research. Sci. Data 10 , 315 (2023).

Azoulay, P. et al. Toward a more scientific science. Science 361 , 1194–1197 (2018).

Liu, H., Kou, H., Yan, C. & Qi, L. Link prediction in paper citation network to construct paper correlation graph. EURASIP J. Wirel. Commun. Netw. 2019 , 1–12 (2019).

Reisz, N. et al. Loss of sustainability in scientific work. New J. Phys. 24 , 053041 (2022).

Frank, M. R., Wang, D., Cebrian, M. & Rahwan, I. The evolution of citation graphs in artificial intelligence research. Nat. Mach. Intell. 1 , 79–85 (2019).

Newman, M. Networks (Oxford Univ. Press, 2018).

Kwon, D. et al. A survey of deep learning-based network anomaly detection. Cluster Comput. 22 , 949–961 (2019).

Pang, G., Shen, C., Cao, L. & Hengel, A. V. D. Deep learning for anomaly detection: a review. ACM Comput. Surv. 54 , 1–38 (2021).

Collobert, R. et al. Natural language processing (almost) from scratch. J. Mach. Learn. Res. 12 , 2493–2537 (2011).


LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521 , 436–444 (2015).

Krizhevsky, A., Sutskever, I. & Hinton, G. E. ImageNet classification with deep convolutional neural networks. Commun. ACM 60 , 84–90 (2017).

Mnih, V. et al. Human-level control through deep reinforcement learning. Nature 518 , 529–533 (2015).

Silver, D. et al. Mastering the game of Go with deep neural networks and tree search. Nature 529 , 484–489 (2016).

Wu, N., Vincent, A., Strukov, D. & Xie, Y. Memristor hardware-friendly reinforcement learning. Preprint at https://arxiv.org/abs/2001.06930 (2020).

Zhou, C. et al. Automated deep learning analysis of angiography video sequences for coronary artery disease. Preprint at https://arxiv.org/abs/2101.12505 (2021).

Huckle, N., Garcia, N. & Nakashima, Y. Demographic influences on contemporary art with unsupervised style embeddings. In Proc. Computer Vision–ECCV 2020 Workshops Part II Vol. 16, 126–142 (Springer, 2020).

Ranti, D. et al. The utility of general domain transfer learning for medical language tasks. Preprint at https://arxiv.org/abs/2002.06670 (2020).

Kamath, P., Singh, A. & Dutta, D. Fast neural architecture construction using envelopenets. Preprint at https://arxiv.org/abs/1803.06744 (2018).

Minsky, M. Steps toward artificial intelligence. Proc. IRE 49 , 8–30 (1961).

Bornmann, L., Haunschild, R. & Mutz, R. Growth rates of modern science: a latent piecewise growth curve approach to model publication numbers from established and new literature databases. Humanit. Soc. Sci. Commun. 8 , 224 (2021).

Brin, S. & Page, L. The anatomy of a large-scale hypertextual web search engine. Comput. Netw. ISDN Syst. 30 , 107–117 (1998).

Holland, P. W. & Leinhardt, S. Transitivity in structural models of small groups. Comp. Group Studies 2 , 107–124 (1971).

Watts, D. J. & Strogatz, S. H. Collective dynamics of ‘small-world’ networks. Nature 393 , 440–442 (1998).

Yang, J.-H., Chen, C.-M., Wang, C.-J. & Tsai, M.-F. HOP-rec: high-order proximity for implicit recommendation. In Proc. 12th ACM Conference on Recommender Systems 140–144 (2018).

Lin, B.-Y. OGB_collab_project. GitHub https://github.com/brucenccu/OGB_collab_project (2021).

Sorensen, T. A. A method of establishing groups of equal amplitude in plant sociology based on similarity of species content and its application to analyses of the vegetation on danish commons. Biol. Skar. 5 , 1–34 (1948).

Yeo, I.-K. & Johnson, R. A. A new family of power transformations to improve normality or symmetry. Biometrika 87 , 954–959 (2000).

Ranger, M. nodevectors. GitHub https://github.com/VHRanger/nodevectors (2021).

Bandeira, A. S., Singer, A. & Spielman, D. A. A Cheeger inequality for the graph connection Laplacian. SIAM J. Matrix Anal. Appl. 34 , 1611–1630 (2013).

Krenn, M. et al. Predicting the future of AI with AI. Zenodo https://doi.org/10.5281/zenodo.7882892 (2023).

Krenn, M. et al. FutureOfAIviaAI code. Zenodo https://zenodo.org/record/8329701 (2023).

Jia, T., Wang, D. & Szymanski, B. K. Quantifying patterns of research-interest evolution. Nat. Hum. Behav. 1 , 0078 (2017).

Download references

Future directions in research: an example

National Academies Press: OpenBook

Improving the Nation's Water Security: Opportunities for Research (2007)

Chapter 6: Recommendations for Future Research Directions

Progress has been made in the Environmental Protection Agency's (EPA's) water security research program (see Chapter 4), but many important research questions and technical support needs remain. In Chapter 3, a framework is suggested for evaluating water security research initiatives that gives priority to research that improves response and recovery and/or develops risk reduction or consequence mitigation measures. The research should also produce tools with a reasonable likelihood of implementation and, where feasible, dual-use benefits. Based on this framework and the review of water security efforts already under way, two important water security research gaps are identified and discussed briefly in this chapter. In addition, short- and long-term water security research recommendations are made. The research recommendations are organized in this chapter according to the three long-term program objectives proposed in Chapter 5, emphasizing pre-incident, incident, and post-incident applications: (1) develop products to support more resilient design and operation of facilities and systems, (2) improve the ability of operators and responders to detect and assess incidents, and (3) improve response and recovery. Both drinking water and wastewater research priorities are addressed together within these three objectives to maximize research synergies that may exist.

KEY RESEARCH GAPS

The Water Security Research and Technical Support Action Plan (EPA, 2004a) set out a comprehensive guide for the EPA's near-term research initiatives. Although the Action Plan was intended to provide a short-term (three- to four-year) research agenda, the previous National Research Council review (NRC, 2004) noted that several of the Action Plan projects represented long-term research questions not easily addressed in the original time frame. Therefore, the Action Plan provides a reasonable starting point for building the EPA's future research program. Nevertheless, the short-term planning horizon of the Action Plan prevented consideration of two key subjects that are critical to a long-term water security research program: behavioral science and innovative system design. The committee recommends the EPA work in collaboration with other organizations to build research initiatives in these two areas.

Behavioral Science

The threat of bioterrorism presents new and different types of risks that are dynamic and pose difficult trade-offs, bringing about intellectual challenges and an emotional valence possibly more important than the risks themselves. Developing an effective communication strategy that meets the needs of the broad range of stakeholders (e.g., response organizations, water organizations and utilities, public health agencies, the public, the media) while addressing security concerns is clearly a high-priority research area. The EPA's work on risk communication is focused primarily on the development of guidance, protocols, and training, and little emphasis has been devoted to interdisciplinary behavioral science research to better prepare stakeholders for water security incidents or to build confidence in their ability to respond. Behavioral science research could help address, for example, what the public's beliefs, opinions, and knowledge about water security risks are; how risk perception and other psychological factors affect responses to water-related events; and how to communicate these risks to the public (Gray and Ropeik, 2002; Means, 2002; Roberson and Morley, 2005b). A better understanding of what short-term disruptions customers are prepared to tolerate may also guide response and recovery planning and the development of recovery technologies.

Previous experience with natural disasters and environmental risks provides a basis for investigating and predicting human behavior in risky situations (Fischhoff, 2005). Existing models of human behavior during other kinds of crises, however, may not be adequate to forecast human behavior during bioterrorism or water security incidents (DiGiovanni et al., 2003).

Risk communicators consider empirical findings from psychology, cognitive science, communications, and other behavioral and social sciences to varying extents (Bostrom and Lofstedt, 2003). Although decision makers frequently predict panic and irrational behavior in times of crisis, behavioral science researchers have found that people respond reasonably to such challenges (e.g., Fischhoff, 2005). Given the urgency of terror risk communication, risk communicators are obliged to incorporate existing behavioral science research as it relates to water security risks.

The EPA should take advantage of existing behavioral science research that may be applicable to water security issues, but this requires knowledge and experience in behavioral science research. Where gaps exist, the EPA will need to engage in interdisciplinary, rigorous empirical research to obtain the necessary knowledge.

Innovative Designs for Secure and Resilient Water and Wastewater Systems

Innovative designs for water and wastewater infrastructure were not addressed in the EPA Action Plan, but the topic deserves a place in a long-term water security research program. The EPA’s research mission has traditionally included the development and testing of new concepts, technologies, and management structures for water and wastewater utilities to achieve practical objectives in public health, sustainability and cost-effectiveness. The addition of homeland security to its mission provides a unique opportunity to take a holistic view of current design and management of water and wastewater infrastructures. Innovation is needed to address the problem of aging infrastructures while making new water systems more resilient to natural hazards and malicious incidents. The EPA should, therefore, take a leadership role in providing guidance for the planning, design, and implementation of new, more sustainable and resilient water and wastewater facilities for the 21st century.

Disaggregation of large water and wastewater systems should be an overarching theme of innovation. Large and complex systems have developed in the United States following the pattern of urban and suburban sprawl. While there are clear economies of scale for large utilities in construction and system management, there are distinct disadvantages as well. The complexity of large systems makes security measures difficult to implement and complicates the response to an attack. For example, locating the source of incursion within the distribution system and isolating contaminated sections are more difficult in large and complex water systems. Long water residence times are also more likely to occur in large drinking water systems, and, as a result, disinfectant residual may be lacking in the extremities of the system because of the chemical and biological reactions that occur during transport. From a security perspective, inadequate disinfectant residual means less protection against intentional contamination by a microbial agent.

A breadth of possibilities exists for improving security through innovative infrastructure design. Satellite water treatment plants could boost water quality. Strategic placement of treatment devices (e.g., ultraviolet lamp arrays) within the distribution system could counter a bioterrorism attack. Wastewater treatment systems could be interconnected to provide more flexibility in case of attack, and diversion devices could be installed to isolate contaminants. Box 6-1 describes some of these concepts in greater detail, and specific research recommendations are suggested in the following section.

RESEARCH RECOMMENDATIONS: DEVELOP PRODUCTS TO SUPPORT MORE RESILIENT DESIGN AND OPERATION OF FACILITIES AND SYSTEMS

Specific research topics are suggested here in two areas to support development of more resilient water and wastewater systems: (1) innovative designs for water and wastewater and (2) improved methods for risk assessment, including processes for threat and consequence assessments.

Innovative Designs for Water and Wastewater Systems

Innovative changes to water infrastructure will require long-term investment in research. Existing systems have been in place for more than a century in older cities. Thus, bold new directions will understandably require intensive research at the outset to produce a defensible economic argument for change. On the other hand, the EPA has the opportunity to develop innovative approaches that can be implemented almost immediately in relatively new, as well as planned, urban and suburban areas. The first step in research would be to enumerate the opportunities for innovation, recognizing the constraints brought about by the size, age, and complexity of existing water and wastewater infrastructures. A broad-gauge economic analysis should follow that would quantify the costs and multiple benefits of these innovative designs (e.g., increased security, improved drinking water quality, enhanced sustainability of water resources). In addition, there is an implicit need for EPA researchers to coordinate with the agency's regulatory branch to validate the feasibility of the innovative concepts that are proposed.

Box 6-1

Three infrastructure concepts illustrate potential innovative approaches for improving the security and resilience of water and wastewater systems: (1) distribution and collection system interventions, (2) the use of distributed networks, and (3) the implementation of dual piping systems. Water distribution system interventions could include multiple points of treatment within the distribution system (e.g., UV disinfection, chemical addition) or effective inline monitoring and localized diversion, using multiple valves and interconnections for routing contaminated water out of the distribution system network. In wastewater collection systems, new designs might include real-time monitoring, interventions to isolate portions of the collection system should toxic or explosive constituents be detected (e.g., sensor-activated inflatable dams), and interconnections or online storage capacity for diversion, containment, and treatment.

The “distributed optimal technology network” (DOT-Net) concept (Weber, 2002; 2004) includes a holistic approach to decentralization of both water and wastewater treatment. The premise is that advanced water treatment would be installed most economically at the scale of households, apartment complexes, or neighborhoods using POU/POE technology. These devices offer protection against chemical and biological agents that escape conventional water treatment as well as agents that may be added to the distribution system subsequent to treatment. An almost infinite number of infrastructure variations involving water and wastewater are possible, even including the localized processing of wastewater for energy recovery.

An alternative concept of a dual water distribution system has been proposed to address water quality concerns in aging infrastructures while meeting demand for fire protection (Okun, 1997). Additional benefits could be gained by incorporating satellite and decentralized wastewater treatment facilities. In this concept, the existing water distribution system and storage tanks would be used for delivery of reclaimed water and for fire demand, and a new water distribution system would deliver potable water through much smaller diameter pipes (Snyder et al., 2002). The dual distribution system concept offers several security advantages. For example, fire protection would not depend upon the integrity of the potable water supply in the event of a terrorist attack. The installation of small diameter stainless steel pipes would reduce residence time in the system (and related water quality degradation) and also speed the recovery process from a chemical or biological attack. However, a dual distribution system might also make a contamination attack on the drinking water supply easier because less contaminant mass would be needed to produce a toxic effect.


Each of the infrastructure concepts illustrated in Box 6-1 requires far more research to become feasible. The recommendations below outline specific research topics that, if addressed, could improve the safety and sustainability of water resources in the 21st century.

Disaggregation of Water and Wastewater Systems

The “distributed optimal technology network” (DOT-Net) concept (Norton and Weber, 2005; Weber, 2002; 2004) hinges upon the feasibility of distributed treatment via point-of-use (POU)/point-of-entry (POE) devices installed at the scale of individual buildings or perhaps small neighborhoods. The corollary premise is that installation of expensive advanced treatment technology at the centralized water treatment facility is unnecessary when only a fraction of the service area outside a “critical radius” requires additional protection. Only a broad economic analysis of this concept has been published thus far for a hypothetical urban center, but the assumptions need to be verified for actual systems, particularly because of the unique characteristics of individual cities. In addition, far more research is needed on the utility management required to ensure the reliability of POU/POE devices in widespread implementation.

Dual water systems have also been proposed to address aging infrastructure (see Box 6-1; Okun, 1997; 2005). As with the DOT-Net concept, long-term research is needed to determine the costs and benefits of constructing an entirely new paradigm for distribution system design. Research issues would include assessing the acceptability of reclaimed water for progressively more intense levels of nonpotable use (e.g., irrigation, toilet flushing, laundering), the acceptability and management demands of decentralized wastewater treatment facilities, and the net benefits to water security.

In-Pipe Interventions to Reduce Exposure

In-pipe engineering interventions (see Box 6-1) are deserving of research in a long-term water security research strategy. For example, research is needed to optimize the location of disinfection booster stations or to examine the effectiveness and feasibility of in situ ultraviolet (UV) irradiation systems as a decontamination strategy. EPA research could also examine various pipe materials (e.g., stainless steel) and evaluate their benefits for security and sustainability relative to their costs.

Infrastructure Designs to Enable Isolation and Interconnection

Most large drinking water systems have the ability to isolate portions of their distribution systems during necessary system repairs, but security concerns provide a new impetus for rapid and effective isolation mechanisms. Research on innovative mechanisms to isolate or divert contaminated water in drinking water and wastewater systems would be useful. The EPA should identify these design options, research their costs and benefits (including dual-use benefits) and their feasibility both for existing systems and new infrastructure, and make this information available to system managers.

Improved Risk Assessment Procedures

A sound risk assessment process allows utilities to make better resource management decisions for enhancing their recovery capacity or security strategies to mitigate the consequences of an attack. The risk assessment process includes assessments of threat, consequences, and vulnerability. To date, most of the efforts to guide utilities in their own risk assessments have focused on vulnerabilities.

Threat Assessment

Water and wastewater utilities today are making resource management decisions related to security without adequate information about the nature and likelihood of threats to their systems. As discussed in Chapter 4, the EPA has focused its efforts on identifying contaminant threats without conducting similarly detailed analyses of possible physical and cyber threats. Both the nature and likelihood of these threats are needed for efficient allocation of resources at the utility level and within the EPA's research program. Improved threat assessment would require the EPA and/or a consortium of water experts to work closely with the intelligence community and local law enforcement agencies. Other national and federal laboratory expertise within the Department of Energy and Department of Defense, as well as the public and private sectors, might be needed as well. Threat assessments for water and wastewater should be periodically reviewed to identify threat scenarios that should be added to the list and to remove those that are no longer a concern. Developing a threat assessment process for local water and wastewater utilities, consistent with current techniques used in other infrastructure sectors, would also be helpful, provided the threat information could be communicated to those who need it (ASME, 2004; Sandia National Laboratories, 2001).

Consequence Assessment

A consequence assessment should accompany the threat assessment within the risk assessment process. Consequence assessments would provide decision makers with information on the potential for fatalities, public health impacts, economic impacts, property damage, systems disruption, effects on other infrastructures, and loss of public confidence. Procedures for determining the expected consequences from an attack or natural disaster are not currently being systematically developed. As a result, water system managers do not have sufficient data to make decisions about the benefits of risk reduction relative to the costs. The development and application of a consequence assessment procedure would provide decision makers with the information needed to decide whether to mitigate the consequences, upgrade with countermeasures, take steps to improve response and recovery capacity, and/or accept the level of risk and take no further action. A fault tree analysis that includes, for example, options for redundant systems or contingency water supplies could provide vital information on whether to invest in security upgrades or less costly consequence mitigation strategies. Many of these approaches have already been developed for other infrastructures (e.g., Risk Assessment Methodology [RAM]-T for the high-voltage power transmission industry or RAM-D for dams, locks, and levees; see Sandia National Laboratories, 2001; 2002). A thorough review of other RAM methodologies could provide guidance for consequence assessment strategies that could be incorporated into the Risk Assessment Methodology for Water Utilities (RAM-W).
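To make the fault tree idea concrete, the following minimal sketch shows how redundant systems and a contingency supply enter the arithmetic for a top event such as loss of potable supply. All probabilities are invented placeholders; none of the numbers or event names come from RAM-W or any EPA methodology.

```python
# Minimal fault-tree sketch with hypothetical, illustrative probabilities.

def and_gate(*probs):
    """Top event occurs only if every (independent) input event occurs."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(*probs):
    """Top event occurs if at least one (independent) input event occurs."""
    p = 1.0
    for q in probs:
        p *= 1.0 - q
    return 1.0 - p

p_primary_fails = 0.02      # hypothetical: primary treatment/pumping fails
p_backup_fails = 0.10       # hypothetical: redundant system also fails
p_contingency_fails = 0.05  # hypothetical: contingency supply unavailable

# Loss of supply requires every layer of redundancy to fail (AND gate).
p_loss = and_gate(p_primary_fails, p_backup_fails, p_contingency_fails)

# A broader "service disruption" top event could OR several branches.
p_disruption = or_gate(p_loss, 0.001)  # second branch is hypothetical
print(f"P(loss of supply) = {p_loss:.1e}, P(disruption) = {p_disruption:.1e}")
```

Even this toy calculation makes the trade-off visible: adding a contingency supply multiplies the top-event probability by the chance that the contingency itself fails, which is exactly the kind of comparison a decision maker needs when weighing security upgrades against consequence mitigation.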

The EPA has worked to develop the AT Planner tool to assist utilities in assessing the consequences of physical attacks (see Chapter 4). While AT Planner has been validated against actual blast test data for nonwater systems, there remains significant uncertainty in the applicability of the modeling for water security because it has not been validated against the structures specific to those systems. Therefore, the ongoing evaluation of AT Planner by the EPA and select water utility operators should include an assessment of the applicability of AT Planner for each of the critical and high-consequence components of a water system. The EPA and water utilities should then consider whether any additional validation testing is needed to determine specific failure modes of relevant water system components (e.g., actual storage tanks, pumps, water conduits, chlorine tanks) and possible countermeasures.

Summary of Research Priorities for Secure and Resilient Systems

Short-Term Priorities

Develop an improved understanding of physical, cyber, and contaminant threats to water and wastewater systems, especially focusing on physical and cyber threats.

Communicate information on threats and consequences to water system managers through training and information exchange.

Develop an improved threat assessment procedure for water and wastewater utilities that will assist local utilities with their security and response planning.

Develop a process to assist local utilities in determining the consequences from physical, cyber, and contaminant attacks.

Update the risk assessment methodology for water systems to incorporate the latest approaches used in other industries, including developing credible threat descriptions and identifying cascading consequences.

Long-Term Priorities

Develop innovative design strategies for drinking water and wastewater systems that mitigate security risks and identify their costs and benefits in the context of public health, sustainability, cost-effectiveness, and homeland security. These designs might include:

In-pipe intervention strategies for drinking water systems,

Disaggregation of water and wastewater treatment facilities to achieve dual-use benefits, and

Designs that allow for interconnections and isolation.

Evaluate the need to validate AT Planner against structures specific to water systems.

Periodically review the EPA’s prioritized list of threats, contaminants, and threat scenarios to identify items that should be added to the list and remove items that are no longer a concern.

Continue development of technology transfer/training programs so that utilities understand the value of the EPA’s products for both homeland security incidents and natural disasters and know how to utilize the tools to their full extent.

Implementation of Priorities

Some of the research recommendations to support more resilient design and operation of drinking water and wastewater systems lie outside of the EPA’s traditional areas of expertise. To support the Action Plan efforts so far, the EPA has relied heavily on expert contractors to conduct this type of work. The EPA should continue to seek the relevant expertise of other federal agencies and national laboratories in these future efforts. However, the EPA will need to consider how best to balance intramural and extramural research funding to carry out this research, while maintaining appropriate oversight and input into the research activities (see also Chapter 5 ). Increasing staff expertise in some key areas, such as physical security, will be necessary to build a strong and well-rounded water security research program to support more resilient system design and operation.

RESEARCH RECOMMENDATIONS: IMPROVE THE ABILITY OF OPERATORS AND RESPONDERS TO DETECT AND ASSESS INCIDENTS

Suggestions are provided in this section for future research that should improve the ability of operators and responders to detect and assess water security incidents. Specific research suggestions are discussed below in the areas of (1) analytical methodologies and monitoring and (2) distribution system modeling.

Analytical Methodologies and Monitoring

Expanding Existing Analytical Methods

For some analytes of relevance to water security concerns, the available or approved detection methods are poor (e.g., some nonregulated analytes). More work needs to be done to expand existing methods to a broader range of analytes. For example, method 300.1 (EPA, 2000) covers only the common anions but could be extended to others, including toxic substances. The extension of existing methods to new analytes would allow a broader range of laboratories to expand their capabilities into the water security area.

Screening methods using conventional gas chromatography (GC) or high-performance liquid chromatography (HPLC) should also be investigated. Modern high-resolution chromatography combined with high-sensitivity detection (e.g., electron capture, fluorescence) is a powerful, yet accessible tool. Protocols should be developed to make the best use of these widely available capabilities. Software will have to be developed to facilitate the documentation of normal, background signals (fingerprint-type chromatograms). This background information can then be used to detect anomalies. Final protocols would have to be tested thoroughly against priority chemical contaminants. Chromatographic fingerprints have been used to monitor water supplies for nonintentional contamination, so this line of research would provide a dual benefit (D. Metz, Ohio River, personal communication, 2006; P. Schulhof, Seine River, personal communication, 2006).
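As a rough illustration of the fingerprinting idea, the sketch below compares a new chromatogram against a stored background fingerprint and flags a low-similarity run for confirmatory analysis. The synthetic signals, retention-time grid, and the 0.95 similarity threshold are assumptions for the example, not a validated protocol.

```python
# Hedged sketch: flag a chromatogram that deviates from the background
# "fingerprint". Signals and threshold are illustrative assumptions.
import numpy as np

def fingerprint_similarity(baseline: np.ndarray, sample: np.ndarray) -> float:
    """Pearson correlation between two aligned chromatograms."""
    b = (baseline - baseline.mean()) / baseline.std()
    s = (sample - sample.mean()) / sample.std()
    return float(np.mean(b * s))

retention = np.linspace(0, 30, 3000)          # minutes (assumed grid)
baseline = np.exp(-((retention - 12.0) ** 2)) # one routine background peak
# A new, narrow peak appears late in the run (simulated anomaly):
sample = baseline + 0.8 * np.exp(-((retention - 21.5) ** 2) / 0.1)

if fingerprint_similarity(baseline, sample) < 0.95:  # assumed threshold
    print("Anomalous chromatogram: schedule confirmatory analysis")
```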

Progress is being made with the protocol to concentrate samples and identify biological contaminants by polymerase chain reaction (PCR) analysis. Continued research, however, needs to be directed towards reducing the time and effort required to collect, process, and identify samples by automating portions of the protocol, such as the concentration step. Such automated collection and sample processing systems would be especially valuable in response to security threats, when water samples could be channeled to existing or new detection technologies capable of onsite processing. The EPA should continue to expand the number of biothreat agents tested with the concentration/PCR protocol to include microbes other than spores, prioritizing test organisms that are both a threat to public health and resistant to chlorine (Morales-Morales et al., 2003; Straub and Chandler, 2003). Continued testing of the concentration/PCR protocol should include various mixed suspensions of a target microbe and background microbes to determine specificity of detection and various dilutions of the target microbe to determine sensitivity of detection. The protocol should also be tested on chloraminated water samples.
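The sensitivity and specificity estimates that such testing would produce reduce to simple proportions over trial outcomes; the sketch below uses invented trial counts purely to show the bookkeeping.

```python
# Illustrative calculation of detection sensitivity and specificity from
# spiked-sample trials of a concentration/PCR protocol. Counts are invented.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Fraction of spiked (target-present) samples correctly detected."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Fraction of background-only samples correctly reported negative."""
    return true_neg / (true_neg + false_pos)

# Target microbe spiked into mixed background suspensions (hypothetical):
print(f"sensitivity = {sensitivity(true_pos=47, false_neg=3):.2f}")
# Background-only (unspiked) suspensions (hypothetical):
print(f"specificity = {specificity(true_neg=58, false_pos=2):.2f}")
```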

Developing New Monitoring Technologies

Chemical Detection. New chemical monitoring technologies for security-relevant analytes should be investigated. Examples include quartz crystal microbalance (QCM) sensors, microfluidic devices (lab-on-a-chip), ion-sensitive field-effect transistors (ISFETs), and larger-scale optrodes. Extramural agency and corporate partnerships developed by the EPA, together with longer-term research projects, would help the agency evaluate and consider a broader range of detection platforms.

Biological Detection. Biological monitoring devices are essential to assess the type and extent of contamination in a suspected water security event. A broader range of innovative and developing detection technologies for biological agents, including methods that are field deployable and reagent-free, should be considered and evaluated. Innovative, field-deployable detection technologies (e.g., genetic fingerprinting, immunodetection, other technologies in development by universities, the Department of Defense, and industry) could reduce the time and effort for detection and enable earlier response efforts (Iqbal et al., 2000; Ivnitski et al., 2003; Lim et al., 2005; Monk and Walt, 2004; Yu and Bruno, 1996; Zhu et al., 2004). These new technologies might also increase the accuracy of detecting deliberate contamination events and reduce false alarms. Methods that can detect multiple biological agents and those with dual-use benefits should be emphasized over those methods limited to very specific agents (Peruski and Peruski, 2003; Rogers and Mulchandani, 1998). For example, DNA fingerprinting might be more useful than immunodetection systems dependent on a highly specific antibody for operation. The accuracy of these detection methods will depend on availability of quality reagents such as antibodies and primers; therefore, researchers will need to work closely with the Centers for Disease Control and Prevention (CDC) and other agencies that have access to such reagents.

Monitoring Devices for Wastewater Collection Systems. Contamination incidents have the potential to disrupt wastewater biological treatment systems; thus, a long-term research program should also include research on monitoring technologies relevant to wastewater security concerns. Although a number of devices are available that can be used to monitor physical, chemical, and biological parameters, none of the currently available devices is robust or reliable enough when used in untreated wastewater to meet security requirements. The EPA should, therefore, encourage development of robust, reliable monitoring devices for wastewater infrastructure.

Syndromic Surveillance Tools. Syndromic surveillance tools may have the potential for detecting disease outbreaks and for investigating the possible role of water in such outbreaks (Berger et al., 2006). The EPA is already working to test two syndromic disease surveillance tools (RODS, ESSENCE) against prior water contamination outbreak data. Substantive research needs remain, however. Clearly, the improvement of existing syndromic surveillance tools is a long-term research objective. For syndromic surveillance to become worthwhile, it should achieve a favorable cost-benefit ratio considering the costs of false positives, and it should be adequately integrated into response plans. The implementation of syndromic surveillance systems on a large scale would require a more detailed linkage between the disparate databases used in the public health sector and the water supply sector. Research to develop tools that allow local systems to readily fuse information from these disparate sources would be desirable. Such linkages would improve detection of and response to waterborne disease outbreaks and more rapidly exclude water as a possible vehicle of disease. This would have important applications for both intentional and nonintentional water contamination events.
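A minimal sketch of what fusing these disparate sources could look like is given below. The record layouts, zone names, baseline, and the spike rule (cases above twice an assumed expected count) are illustrative assumptions, not features of RODS, ESSENCE, or any actual utility database.

```python
# Hedged sketch: correlate daily syndromic counts from a public health
# feed with water-quality anomaly flags from a utility, by date and zone.
from collections import defaultdict

health = [  # (date, zone, gastrointestinal_cases) -- invented records
    ("2006-05-01", "Z1", 4), ("2006-05-02", "Z1", 19), ("2006-05-02", "Z2", 3),
]
water = [   # (date, zone, water_quality_anomaly) -- invented records
    ("2006-05-02", "Z1", True), ("2006-05-02", "Z2", False),
]

anomalous_water = {(d, z) for d, z, flag in water if flag}
baseline = defaultdict(lambda: 5)  # assumed expected daily cases per zone

for date, zone, cases in health:
    if cases > 2 * baseline[zone] and (date, zone) in anomalous_water:
        print(f"{date} {zone}: syndromic spike coincides with water anomaly")
```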

Real-Time Monitoring Systems

The development of a fully functional, easy-to-maintain, real-time monitoring system (RTMS) that could someday be used to prevent harm from deliberate attacks on the water system (“detect to prevent”), even with substantial research investments, is many years away. Therefore, the primary emphasis of future research on RTMSs, at least in the near term, should be on developing these technologies to assess the spread of contaminants, not to prevent exposure.

The committee also questions the likelihood of implementation of real-time monitoring devices for specific chemical or biological parameters that are not useful in the day-to-day operation of a system (see Chapters 2 and 4). However, there are a few scenarios where implementation of continuous monitors for biological contaminants might be valuable, such as their use in certain water systems under heightened threat conditions (e.g., utilities for which specific intelligence information indicates they may be targeted). As discussed in Chapter 4, deployment under these circumstances has a greater likelihood of success because the probability of an event is estimated to be much higher and the length of monitoring time is shortened. The use of highly sensitive and specific detection devices under such targeted circumstances would significantly lower the probability of false alarms and reduce the problem of poor positive predictive value (see Chapter 2) while also minimizing implementation and maintenance costs. Thus, improving monitoring systems for specific chemical or biological agents in drinking water is a valid long-term research goal. The EPA may find that longer-term research on more speculative sensor development could benefit from a further broadening of the circle of collaborators. Such speculative research may be more appropriately funded through the National Science Foundation or the Homeland Security Advanced Research Projects Agency, thus freeing up EPA resources for other purposes. To encourage such research, the EPA may wish to build its connections with the private sector on this technology.

Research on detection methods for RTMSs should proceed with careful consideration of the likelihood of implementation of the monitoring devices. In its near-term research plans, the EPA should adopt a first-stage approach to RTMSs, emphasizing generic sensors to detect intrusion or a system anomaly. The intrusion detection would then trigger more resource-intensive follow-up monitoring and analysis. Such an approach has significant dual-use benefits for routine contamination events that could outweigh the costs of implementing and operating these systems. Additional effort to develop cheaper, more accurate, and more easily deployable and maintainable sensors for routine water quality parameters would be useful both for anomaly detection and routine operation. Additional research is also needed, even in first-stage RTMSs, to understand normal water quality variations and distinguish variations that might be caused by a deliberate contamination attack. For example, continuous monitoring of chlorine residual at multiple points in the distribution system often reveals wide variations at different temporal scales due to changes in water demand that affect water residence time (e.g., operation of storage tanks). Although some work to understand inherent water quality variability in distribution systems is being conducted through the Water Sentinel program, a significant amount of work is needed to translate the findings of this research into criteria for RTMSs to develop systems that have a reasonable likelihood of implementation.
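One plausible form for such a first-stage, non-specific detector is a rolling statistic over a routine parameter such as chlorine residual. The sketch below is illustrative only; the window size, threshold, and synthetic readings are assumptions, and a real deployment would first have to characterize the normal diurnal variability discussed above.

```python
# First-stage anomaly detection sketch: rolling z-score on routine
# chlorine-residual readings. Window and threshold are assumed values.
from collections import deque
import statistics

def detect(readings, window=96, z_threshold=4.0):
    """Yield (index, value) for readings far outside the recent window."""
    history = deque(maxlen=window)
    for t, x in enumerate(readings):
        if len(history) == window:
            mu = statistics.fmean(history)
            sigma = statistics.pstdev(history) or 1e-9  # guard against 0
            if abs(x - mu) / sigma > z_threshold:
                yield t, x  # anomaly: trigger follow-up sampling/analysis
        history.append(x)

# Synthetic data: stable residual cycle, then a sudden drop at the end.
residuals = [1.2, 1.1, 1.3] * 40 + [0.2]
for t, x in detect(residuals):
    print(f"t={t}: chlorine residual {x} mg/L flagged as anomalous")
```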

An important component of RTMS research should include data fusion, whereby multiple anomalies must occur before an alarm signal is sent (see also Chapter 4). The private sector seems to be taking the lead on many types of multiparameter approaches to RTMSs and the processing of data, especially as described by contaminant or event signatures. It is important that the algorithms are open to peer review and can be accessed by all for the development of new and refined approaches.
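In its simplest form, data fusion of this kind is a k-of-n vote across sensor channels; the channel names and threshold below are hypothetical placeholders.

```python
# Data-fusion sketch: alarm only when at least k of n channels are
# simultaneously anomalous, suppressing single-channel false positives.

def fused_alarm(flags: dict, k: int = 2) -> bool:
    """True if at least k channels currently flag an anomaly."""
    return sum(flags.values()) >= k

snapshot = {"chlorine_residual": True, "turbidity": True, "conductivity": False}
print(fused_alarm(snapshot, k=2))  # True: two concurrent anomalies
```

Requiring concurrence across channels trades a small loss of sensitivity for a large reduction in the alarm rate, which is the central design choice in multiparameter event-signature approaches.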

RTMS sensor research should consider a broader range of technologies, including full-spectrum UV and visible absorption, fluorescence excitation emission matrices, and ionization sensors (Alupoaei et al., 2004; Fenselau and Demirev, 2001; Lay, 2001). Many of these techniques are used as nonspecific chromatography detectors, and as such, they are highly sensitive. Most prototype RTMSs are composed of existing sensors that are designed to measure a specific contaminant, and some technologies have been excluded because they have not led to sensors with a high degree of selectivity. However, RTMSs need not be contaminant-specific; they only need to detect anomalies. Detection of an anomaly can then be followed by more specific contaminant analyses.

The problem of false positive signals from real-time contaminant-specific warning systems has been discussed in Chapter 2. In essence, the problem is one of unfavorable arithmetic: the probability of a true positive is very small, as it would be for an intentional contamination attack on any one of the tens of thousands of water systems. Therefore, most contaminant-specific alarm signals will be false positives. The EPA should consider the consequences of various rates of false positive signals for both large and small utilities and collect information on how alarms are currently handled by utilities. Workshops and structured surveys on this issue would provide valuable information on current practices, the extent to which positive signals are confirmed, the costs of false alarms, and the views of utility operators on their tolerance for various levels and types of false alarms. This research would provide useful guidance for the developers of water quality monitoring devices, for utilities that are considering implementing devices that are commercially available, and for local and state regulatory agencies that will need assistance interpreting alarm signals in light of the public health consequences.
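The unfavorable arithmetic follows directly from Bayes' rule. With illustrative numbers (a 1-in-100,000 prior per monitoring period and a detector that is 99% sensitive and 99.9% specific; all three values are assumptions), almost every alarm is still false:

```python
# Positive predictive value (PPV) for a rare event, via Bayes' rule.
# Prior, sensitivity, and specificity below are illustrative assumptions.

def ppv(prior: float, sensitivity: float, specificity: float) -> float:
    true_pos = sensitivity * prior
    false_pos = (1 - specificity) * (1 - prior)
    return true_pos / (true_pos + false_pos)

print(f"PPV = {ppv(1e-5, 0.99, 0.999):.4f}")  # ~0.0098: ~99% of alarms false
```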

Technology Testing

The EPA has developed a rigorous technology testing program to provide security product guidance to end users, focusing on monitoring and decontamination technology. However, as noted in Chapter 4, the number of relevant security technologies and agents of interest exceeds the capacity and budget of the Technology Testing and Evaluation Program (TTEP). Therefore, developing a test-prioritization plan for TTEP seems especially important and is strongly recommended. Although the process of identifying technologies of interest has begun through the use of stakeholder meetings and advisory boards, activities to date have been weighted toward doing the easiest things first, and only some of these tests provided dual-use benefits. Balancing the homeland security benefits and the benefits to routine water system operations in TTEP will likely require additional strategic planning. One strategy has been to test equipment that is commercially available regardless of whether it addresses a high-risk agent. Instead, the EPA should look beyond the easy-to-identify commercially available equipment and make a greater effort to identify technologies in development that have the potential to address those agents identified as posing the greatest risk to water, considering the likelihood of the threat (including the ease of acquiring particular chemical or biological agents), the potential consequences, and the likelihood of implementing the technology. For a few of the highest-priority threats, the EPA may wish to consider providing technical support and/or funding to encourage more rapid development of a particularly promising technology that has a high likelihood of implementation and significant dual-use benefits, similar to the EPA Superfund Innovative Technology Evaluation (SITE) Emerging Technology Program.

Develop Laboratory Capability and Capacity

Adequate laboratory capacity is critical for responding to a terrorist incident affecting water supplies, and although this is not a research issue, the EPA has much to contribute from an applied perspective. The need for mobile analysis units capable of supplementing local laboratories and rapidly responding to geographical areas impacted by terrorist events should be considered. Such mobile laboratories could also address analytical needs that arise during natural catastrophes, such as Hurricane Katrina. Many states have begun to develop mobile laboratory capabilities as part of their water security activities, and the EPA could glean information on their experiences to date.

The EPA is working with utilities and state and federal agencies to build a national laboratory response network for water sample analysis (i.e., the Water Laboratory Alliance). Some university laboratories may have capabilities that could merit inclusion in the nationwide network. Other laboratories may be stimulated to conduct additional research on improved analytical methods for toxic and biothreat agents if they were better informed of the current state of knowledge and had access to reference standards (access to some reference standards is currently limited due to security concerns). To be successful, a dual-use philosophy should be adopted whenever possible in the development of laboratory capacity (e.g., employing methods/instruments that can also be used for standard analytes).

Distribution System Modeling Tools

Distribution system models provide valuable tools for locating the source of contamination or assessing the spread if the source is known, estimating exposure, identifying locations for sampling, and developing decontamination strategies (see also Chapter 4). Distribution system models also have important dual-use applications to routine water quality concerns, and the EPA should continue to emphasize the dual-use value of its modeling tools. Specific recommendations are provided below to advance the capabilities and implementation of the Threat Ensemble Vulnerability Assessment (TEVA) and EPANET models.

Experimental Verification of Species Interaction Subcomponent Models

The final goal of producing a more flexible EPANET model through Multi-Species EPANET (MS-EPANET) is commendable. However, the new subcomponents are based upon developing better fundamental knowledge of reactions within the distribution system involving chemistry (e.g., disinfection kinetics, chemical partitioning), biology (e.g., development of biofilms, release and attachment of microbes), and materials science (e.g., corrosion of pipe materials and its relationship to disinfection efficacy). The large number of system constants in both MS-EPANET and TEVA necessitates significant investment in sensitivity analysis research to quantify the accuracy of model predictions. The development and testing of all new features of MS-EPANET should be a long-term research goal. Until the validity of these subcomponents is verified and system constants can be assigned with more certainty, the water industry will be reluctant to use the full capability of MS-EPANET. Limitations in the accuracy of model predictions will need to be addressed in guidance to decision makers. A significant commitment of resources will be needed for experimental verification.

Alternate Approaches to Uncertainty Modeling

The Action Plan correctly acknowledges that distribution system model simulations should incorporate an analysis of uncertainty because the point of attack is unknown. This has led to the use of the well-known Monte Carlo analysis to randomize the location of the attack and run repeated distribution system model simulations (1,000 or more) to generate a probability distribution that relates the point of attack to human exposure impact. The focus on short-term results, however, has produced weaknesses in the current EPA approach to uncertainty research.
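The structure of such an analysis is easy to sketch. In the toy example below, a placeholder function stands in for an EPANET/TEVA simulation run; the network, populations, and run count are randomly generated assumptions, not any actual system.

```python
# Monte Carlo sketch: randomize the (unknown) point of attack over network
# nodes and accumulate a distribution of exposure impact.
import random

random.seed(1)
nodes = [f"N{i}" for i in range(50)]  # hypothetical network nodes
downstream_population = {n: random.randint(100, 20_000) for n in nodes}

def simulate_exposure(attack_node: str) -> int:
    """Placeholder for a hydraulic/water-quality simulation run."""
    return downstream_population[attack_node]

impacts = sorted(simulate_exposure(random.choice(nodes)) for _ in range(1000))
print(f"median exposure: {impacts[len(impacts) // 2]:,} people")
print(f"95th percentile: {impacts[int(0.95 * len(impacts))]:,} people")
```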

A broader discussion about how to incorporate uncertainty into the TEVA model should be invited. Approaches such as fuzzy logic (McKone and Deshpande, 2005) and Bayesian Maximum Entropy modeling (Serre and Christakos, 1999) are showing promise but have been applied mainly to homogeneous space rather than to network domains. The EPA should encourage alternative ideas for handling uncertainty. If the expertise is not available within the agency, there needs to be a mechanism to expand extramural support for research, particularly within the university community.

Technology Transfer and Training in Use of the TEVA and EPANET Models

Advances in the TEVA model add significant complexity to the EPANET model, which may limit its widespread implementation. The EPA should work to communicate the capabilities of EPANET, MS-EPANET, and TEVA to utilities, emphasizing their value for routine water quality concerns, advanced homeland security planning, and contamination assessment and response activities. Until TEVA and MS-EPANET are further developed and widely available, the EPA should consider an interim strategy to better inform water utilities on the value and use of existing distribution system models, such as EPANET. Progressive water utilities are already using EPANET to examine possible locations of attack and to track the concentration of contaminants within the distribution system.

Training in the use of MS-EPANET and the proposed TEVA model is also needed. Water utility managers need to be convinced that the costs for adapting a new model for their respective distribution systems are worthwhile, because many utilities have already invested heavily in development, verification, and calibration of existing models. The complexity of the TEVA model may increase these costs further, because many more implementation steps follow those for EPANET to adapt the TEVA “template” to the specifics of each water utility.

Summary of Research Priorities for Better Equipping Operators to Detect and Assess Incidents

Short-Term Priorities

Automate the concentration step of the concentration/PCR protocol.

Continue to test the concentration/PCR protocol:

Expand the number of biothreat agents tested to four or five organisms that include microbes other than spores, focusing on microbes that are both a threat to public health and resistant to chlorine.

Test the concentration/PCR protocol with chloraminated water samples.

Test the concentration/PCR protocol to determine sensitivity and specificity of detection.

Field-test RTMSs to determine false positive/false negative rates and maintenance requirements and develop basic criteria for the technology that might lead to a reasonable likelihood of implementation.

Continue research to develop a first-stage RTMS based on routine water quality sensors with dual-use applications.

Analyze the consequences of false positive signals from realtime monitoring systems, emphasizing current practices, the extent to which positive signals are confirmed, the costs of false alarms, and the tolerance of utility operators for false alarms.

Test standard chromatographic methods for their ability to screen for a broad range of toxic agents in routine laboratory testing.

Develop a test-prioritization strategy for TTEP to optimize the resources devoted to this effort.

Invite external peer review of the TEVA model before investing in field testing.

Long-Term Priorities

Continue to develop portable, field-deployable systems that can be used to collect and process samples at event locations.

Formulate protocols and develop software for using GC- and HPLC-based fingerprinting to detect suspicious anomalies.

Stimulate research and ultimately development of new sensors for water security analytes based on innovative technologies, such as QCM, ISFETS, and microfluidics.

Evaluate and develop new field-deployable detection technologies for biological agents, including genetic fingerprinting, immunodetection, and reagentless technologies, that have the necessary sensitivity, specificity, and multiplex capabilities.

Develop improved, cheaper, and accurate RTMSs for routine water quality measurements.

Examine the use of nonspecific detection technologies for RTMSs.

Develop data fusion approaches for RTMSs that can minimize false positives.

Develop and test new monitoring technologies suitable for wastewater security applications.

Improve syndromic surveillance tools and develop a health surveillance network with appropriate linkages to water quality monitoring.

Continue to develop and refine the efficiency of a system-wide laboratory response network, including the development of mobile analysis units.

Continue fundamental research to understand the chemical and biological reactions that affect the fate and transport of contaminants in distribution systems to verify the constants used in MS-EPANET and TEVA.

Include alternative approaches to uncertainty modeling (e.g., fuzzy logic, Bayesian Maximum Entropy) in the TEVA model that are based more strongly upon stochastic than deterministic principles, given that many of the input parameters to the current TEVA model are highly uncertain.

Develop projects for training water utilities in the value and use of EPANET, MS-EPANET, and TEVA.

Some of these research priorities may be more appropriately accomplished by universities, companies, or other agencies that have the necessary expertise, resources, and funding to successfully complete these tasks. The development of multiplex detection protocols and of portable, field-deployable platforms are examples of tasks that might be better managed by groups other than the EPA. Work to determine the sensitivity and specificity of designated protocols for different biothreat agents could be conducted by university laboratories or private industry, with collaborative input from the EPA, given its understanding of the needs of the water sector. Utilization of research resources outside the EPA would expand the variety of emerging, innovative analytical technologies that might be used to support the EPA's efforts to enhance the nation's water security.

RESEARCH RECOMMENDATIONS: IMPROVE RESPONSE AND RECOVERY

Recommendations are provided in this section for future research that should improve response and recovery after a water security incident. Research suggestions related to tools and data for emergency planning and response, contingencies, risk communication and behavioral sciences, decontamination, and lessons learned from natural disasters are presented below.

Tools and Data for Emergency Planning and Response

Continued Development of Emergency Response Databases

The EPA released preliminary versions of the Water Contamination Information Tool (WCIT) and the Consequence Assessment Tool (CAT) to provide data on contaminant properties, toxicity, and exposure threats (see Chapter 4), but the databases are still in their infancy, and numerous data gaps exist. The EPA will need to prioritize its continued efforts to further develop these response databases. Therefore, the EPA should develop strategic plans for WCIT and CAT, outlining the long-term goals for the databases and addressing questions such as:

What stakeholders will be served by the databases?

What categories of information do these stakeholders need?

How many contaminants should be included?

What linkages to other databases should be established?

The EPA will need to determine criteria for prioritizing what contaminants are added to the database and how to maintain and update the information. If WCIT and CAT are not continually revised to incorporate the latest scientific knowledge, the databases will become outdated. Expanding or even maintaining a database requires considerable resources, both intellectual and financial. If a commitment is not made initially for the necessary resources to update and maintain a database, spending the resources to create it becomes debatable. The EPA is currently facing similar issues maintaining its Integrated Risk Information System (IRIS) database.

The EPA should also clearly define the data quality objectives for WCIT/CAT and incorporate peer review of the data, as necessary, to meet these objectives. For example, the EPA may decide that some information about a contaminant is better than none, even if that information has limitations. This is a legitimate approach; however, the EPA should provide a mechanism that helps ensure that individuals using the databases understand the quality of the data and their limitations. One mechanism for accomplishing this would be to add quality notations for each datum. Regardless of the approach taken, the EPA needs to describe the extent to which the data have been reviewed.
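As a purely illustrative sketch of what per-datum quality notations might look like in practice (the field names and quality tiers below are our assumptions, not the actual WCIT/CAT schema):

```python
# Hypothetical record layout pairing each datum with a quality notation.
# Field names and quality tiers are illustrative assumptions only; they
# do not reflect the actual WCIT/CAT schema.
from dataclasses import dataclass

@dataclass
class Datum:
    value: float
    units: str
    quality: str   # e.g., "peer-reviewed", "provisional", "estimated"
    source: str    # provenance note or citation

record = {
    "solubility": Datum(1.2e3, "mg/L", "peer-reviewed", "published measurement"),
    "acute_ld50": Datum(5.0, "mg/kg", "estimated", "structure-activity model"),
}

# A database user can then filter on data quality before relying on a value:
vetted = {name: d for name, d in record.items() if d.quality == "peer-reviewed"}
print(vetted)
```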

Evaluation and Improvement of Tools and Databases

With the forthcoming completion of at least the first stages of many tools and databases (e.g., WCIT, CAT), the EPA should consider the evaluation/improvement cycle. This will require the development of procedures to evaluate the utility and usability of these tools by potential constituencies. In addition, the EPA should take advantage of the tests afforded in response to “real-life” incidents. For example, some of the tools and databases were used (albeit in an early stage of their development) in the response to Hurricane Katrina. A formal assessment of knowledge gained from this experience could assist in the improvement and development of the tools.

Filling Data Gaps

The state of knowledge of the health risks from water contaminants that could be used in a malicious event is quite limited, as shown by the limited number of chemicals and even fewer biologicals in the WCIT/CAT databases and the many blank data fields in these databases. Important experimental and computational research is under way at the EPA to address some of these data gaps (see Chapter 4, Section 3.6), but many gaps remain. There are two applications of toxicity/infectivity information that would be useful to the EPA for response and recovery efforts. The first is development of guidance for dissolved concentrations that would pose an immediate acute risk to exposed individuals, analogous to the inhalation immediate-danger-to-life-and-health values of the National Institute for Occupational Safety and Health. The EPA is currently working on this problem by developing a database on acute and chronic health effects associated with priority contaminants, although much work remains to be done. The second is guidance for determining the appropriate "acceptable" level remaining after cleanup/decontamination. This second aspect has not yet been strongly emphasized in the EPA research program. It is recommended that the EPA convene a working group to develop research and prioritization strategies for filling these data gaps and for ascertaining current gaps in knowledge with respect to rapid estimation of toxicity/infectivity in the absence of specific experimental information. Decisions for setting priorities for the data-gathering efforts should be made with full consideration of dual-use benefits.

Contingencies for Water System Emergencies

Further study of water supply alternatives should be a high priority, considering their pivotal role in response and recovery and their dual-use applications for natural disasters or system failures. However, the subject of water supply contingencies seems to have been given a low priority in the EPA’s research program to date. Completion of the work in progress should be the first priority. The committee debated the value of investing significant resources in developing technologies that could supply drinking water for large communities over long-term disruptions because of the rarity of the need for such technologies. Nevertheless, the EPA should draw upon the research and development efforts of the Department of Defense in this area and work to test the application of these technologies to water security scenarios.

The EPA should consider including new research on contingencies for failures of the human subsystem in water system security. Such research could examine current practices for identifying back-up operators in the case of widespread incapacitation in both short-term and long-term scenarios. This research could also identify best practices, which could be incorporated into EPA guidance to water utilities for their emergency response planning.

Preliminary research suggests that geographic information systems (GIS) could be of significant value to utilities for identifying contingencies in the event of system failures. Therefore, further efforts may be needed to inform utilities about the value of GIS for emergency response and provide guidance for integrating GIS into their emergency planning procedures. National geodata standards may be needed to promote consistency and facilitate data exchange among users.
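As a toy illustration of the kind of contingency analysis such a GIS-backed network model could support (the pipe network below is invented, and a real study would build the graph from utility geodata rather than hand-coded edges):

```python
# Toy contingency analysis on an invented pipe network: after a simulated
# main break, find which service zones lose their path to the plant and
# therefore need a contingency supply. Real analyses would load utility
# GIS data into the graph instead of hand-coding edges.
import networkx as nx

pipes = nx.Graph()
pipes.add_edges_from([
    ("plant", "main_A"), ("plant", "main_B"),
    ("main_A", "zone_1"), ("main_B", "zone_2"), ("main_B", "zone_3"),
])

pipes.remove_edge("plant", "main_A")  # simulated failure of main A

for zone in ("zone_1", "zone_2", "zone_3"):
    served = nx.has_path(pipes, "plant", zone)
    print(zone, "served" if served else "NEEDS CONTINGENCY SUPPLY")
```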

Behavioral Sciences and Risk Communication

The National Homeland Security Research Center (NHSRC) has made substantial progress in the development of risk communication guidance and training (see Chapter 4 ), but very little emphasis has been devoted to research on understanding how the public may respond to risk communication messages and how to improve communication of risks to the public. Terrorism presents risks that are new, evolving, and difficult to characterize; thus, water security poses communication challenges that should be addressed using scientifically rigorous research in the fields of risk communication and behavioral sciences. The EPA should continually reassess the role risk communication has in its overall risk management framework and fully integrate risk communication efforts into the overall risk management program. Behavioral science and associated risk communication research should be a high priority in the EPA’s future water security research plans. The following recommendations are targeted toward water-security events, but the proposed research has dual benefits for improving non-security-related communications with the public.

Analysis of Factors that Build Trust and Improve Communication

Research and experience show that one of the most important keys to communication success is an organization's ability to establish, maintain, and increase trust and credibility with key stakeholders, including employees, regulatory agencies, citizen groups, the public, and the media. To improve overall communication strategies in a water-related emergency, research is needed that analyzes factors that build trust and reduce fear (e.g., What types of concerns do people have related to public health emergencies, water security issues, or bioterrorism? How do utilities build trust and credibility with the public around water security incidents?). In addition, research is needed to analyze methods to counter and reduce the possibility of misinformation or false information being distributed to the public and key stakeholders.

Understanding Institutional Behavior

Building response and recovery capacity requires agencies that might be involved in a water security event to develop stronger working relationships. Although water utilities, public health agencies, law enforcement, emergency responders, and the media do not have a long history of collaborating and working together, several state drinking water programs have taken the lead in carrying out tabletop exercises as well as on-the-ground exercises to address this issue. These state programs have also undertaken measures to facilitate an understanding of the roles and responsibilities of the various potential players, including federal, state, and local law enforcement; state and local health agencies; state and local emergency response agencies; and water utilities. The EPA could glean useful information from these ongoing state and local activities. Nevertheless, additional research is needed to better understand the culture of the agencies that will be responding to events, how these agencies will interact in a water-related crisis, and what level of effort is needed to maintain collaboration in planning and preparedness. This research could identify barriers to more effective collaboration, and these findings could be used to create training scenarios that could improve coordination and resolve potential conflicts in advance. This research is a short-term priority given the importance of coordinated interaction during a crisis. The research could be performed relatively quickly because there is a wealth of experiences, particularly at the state level, related to agency interactions in water-related crises.

Investigate Applicability of Research in Behavioral Science

While some of the recommended research on risk communication and behavioral science may need to be managed by the EPA to address specific water security-related issues, the EPA should also take advantage of other behavioral science research currently being conducted through university-based partnerships, including those established by the Homeland Security Centers of Excellence program. For example, the University of Maryland's National Consortium for the Study of Terrorism and Responses to Terrorism (START) is conducting original research on issues that are poorly understood, including risk perception and communication, household and community preparedness for terrorist attacks, likely behavioral responses by the public, social and psychological vulnerability to terrorism, and strategies for mitigating negative psychological effects and enhancing resilience in the face of the terror threat. The START center is also synthesizing existing research findings in order to provide timely guidance for decision makers and the public, paying special attention to how diverse audiences react to and are affected by threats and preparedness efforts.

In addition, the CDC has developed a national network of 50 Centers for Public Health Preparedness (CPHP) to train the public health workforce to respond to threats to our nation's health, including bioterrorism. These centers work to strengthen terrorism preparedness and emergency public health response at the state and local level and to develop a network of academic-based programs contributing to national terrorism preparedness and emergency response capacity. Information from the CPHP may be relevant and useful to the water sector.

Pretesting Risk Communication Messages

Although the message mapping workshops are a good start to assist stakeholders in preparing messages that will be relevant in a water security incident, the messages have not been tested and evaluated. Therefore, the EPA should engage the research community in pretesting messages being developed by the Center for Risk Communication so that case studies and scenarios can be analyzed for effectiveness in reaching key audiences, and problems can be corrected in advance. Sophisticated evaluation techniques and standard research procedures are used by the CDC to pretest public messages. This evaluation research should be based on standard criteria established in the risk communication literature (e.g., Maibach and Parrott, 1995; National Cancer Institute, 2002; Witte et al., 2001).

Analysis of the Risks and Benefits of Releasing Security Information

The decision of when to release or withhold water security information is critical to the development of a risk communication strategy. Therefore, the EPA should analyze the risks and benefits of releasing water security information, considering input from its broad range of constituents, and develop transparent agency guidance on when to release information versus when to withhold it due to security concerns.

The committee considers this a priority because of the difficulty and importance of the information sharing problem.

Water-Related Risk Communication Training

As the lead U.S. agency in water system security, the EPA should assume the responsibility for developing a national training program on water-related risk communication planning and implementation for water managers. This should be done in collaboration with the water and wastewater organizations, state government agencies, public health officials, health care officials, and others engaged in communication of risks during water-related emergencies.

Decontamination

Decontamination research is critical to improving response and recovery, and the products are applicable to address unintentional contamination events from natural disasters (e.g., hurricanes, floods, earthquakes) and routine malfunctions (e.g., pipe breaks, negative pressures due to power losses). The EPA has numerous ongoing projects in this area that should be completed, but additional research topics are also suggested below.

Addressing Data Gaps

EPA decontamination research products released thus far have shown that fundamental physical, chemical, and/or biological characteristics of many threat agents of concern are not yet known. Therefore, additional laboratory research is needed related to the behavior of contaminants in water supply and wastewater systems and methods for decontaminating water infrastructure. For example, one research priority would be to develop inactivation rate data for all microbes of concern with both free and combined chlorine strategies, because both approaches are used in the water industry. Rate and equilibrium data for adsorption/desorption of contaminants on pipe walls are also needed, although the EPA could also take advantage of existing databases on structure-activity relationships to predict these behaviors. Long-term research, perhaps in partnership with other Office of Research and Development units, could enhance our understanding of the fate, transport, and transformation of toxics in water and wastewater environments.
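For orientation, inactivation rate data of the kind called for here are conventionally summarized with the Chick-Watson model, a standard disinfection-kinetics formulation; the agent- and disinfectant-specific constants are precisely the data gaps identified above:

$$\ln\frac{N}{N_0} = -k\,C^{\,n}\,t$$

where $N/N_0$ is the surviving fraction of organisms, $C$ is the disinfectant concentration (free or combined chlorine), $t$ is the contact time, $n$ is the dilution coefficient (often taken as 1), and $k$ is a rate constant specific to the agent and disinfectant. With $n = 1$, the concentration-time product $Ct$ required for a given log reduction follows directly, which is why such results are commonly tabulated as CT values.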

Decontamination Strategies

The EPA should build on its ongoing work in the area of decontamination and address gaps in the current knowledge base. For example, research is needed to examine readily available household inactivation methods for biological agents (including spore-formers), such as microwaving. The EPA should also work to further the development of innovative decontamination technologies that address important water security concerns. Research and development on new POU/POE technologies, such as superheated water devices, could help overcome operational disadvantages of the products currently on the market.

Prioritizing Future Surrogate Research

Surrogates are relevant to numerous water security research applications, including research on contaminant fate and transport, human exposure risks, and decontamination. Research is ongoing to identify surrogates or simulants for biological agents, to determine which surrogates are appropriate, and to determine the ability of typical drinking water disinfection practices (chlorination and chloramination) to inactivate those agents (see Chapter 4, Section 3.2). Much of the research has focused on Bacillus anthracis and other bacterial agents, but the EPA should determine if surrogates for research on biotoxins and viruses are needed and whether additional surrogates are needed for other bacterial agents. A viral simulant or surrogate would be helpful to examine virus survival in fresh water, drinking water, and sewage, as well as virus susceptibility to water disinfectants. Research in this area has relevance to viral bioterrorism agents and also has strong dual-use research applications because viral surrogates could facilitate risk assessment studies on natural viruses (e.g., SARS, avian influenza).

Surrogate research is a laborious experimental process (see Box 4-1) that must be conducted in one of the few laboratories already authorized to keep and work with select agents. Considerable research is required to compare the select agent with candidate surrogates under the experimental conditions of interest. As discussed in Chapter 4, surrogates need not mimic in all respects the agents they stand in for. For some important security or decontamination uses, it may only be necessary that they provide an appropriate bound on the characteristic of interest in the target agent (e.g., persistence, disinfectant sensitivity). Therefore, the EPA should carefully consider and prioritize the agents and the research applications for which surrogates are needed. The prioritization process for surrogates should consider the following:

Which types of research could be greatly facilitated through the availability of surrogates?

Which types of research with surrogates might have “dual-use” applications (i.e., could the properties of certain surrogates also be usefully extrapolated to other common organisms)?

Which types of research should be done only with select agents?

How closely should the surrogate properties of interest match those of the target organism?

What are the costs and benefits to the research program associated with surrogate development versus use of the pathogenic agents?

The EPA should engage a limited number of individuals (e.g., federal partners, academics) who are involved in similar research in this prioritization process.
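One hedged sketch of how such a prioritization might be operationalized is a simple weighted-criteria screen; the criteria mirror the committee's questions above, but the weights, candidates, and scores below are invented for illustration only.

```python
# Hypothetical weighted-criteria screen for prioritizing surrogate development.
# Criteria mirror the questions above; the weights, candidates, and scores
# are invented for illustration and carry no empirical meaning.
criteria_weights = {
    "research_enabled": 0.4,   # how much research the surrogate would unlock
    "dual_use": 0.2,           # applicability beyond water security
    "match_quality": 0.2,      # how closely properties must match the agent
    "cost_benefit": 0.2,       # cost vs. working directly with select agents
}

candidates = {
    "viral surrogate":    {"research_enabled": 5, "dual_use": 5, "match_quality": 3, "cost_benefit": 4},
    "biotoxin surrogate": {"research_enabled": 4, "dual_use": 2, "match_quality": 3, "cost_benefit": 3},
}

# Rank candidates by weighted score, highest priority first.
for name, scores in sorted(
        candidates.items(),
        key=lambda kv: -sum(criteria_weights[c] * s for c, s in kv[1].items())):
    total = sum(criteria_weights[c] * s for c, s in scores.items())
    print(f"{name}: weighted score {total:.2f}")
```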

Lessons Learned from Natural Disasters

Midway through the committee's work, NRC (2005; see Appendix A) suggested the EPA take advantage of experience gained in the aftermath of Hurricane Katrina so as to improve future response and recovery efforts for water security. While a hurricane caused this catastrophe, it is conceivable that a similar result might have occurred if the levees had been destroyed by terrorist explosives. Thus, New Orleans offered a living laboratory to study many aspects of the impacts of a disaster on water and wastewater systems of all sizes. Failure modes, infrastructure interdependencies, decontamination and service restoration strategies, the availability of alternative supplies, communication strategies, and the ability to serve special institutions (e.g., hospitals) and special-needs individuals could all have been examined in the immediate aftermath of the hurricane. To the best of the committee's knowledge, however, the EPA has not attempted to compile a knowledge base from this experience. As time passes, it will become increasingly difficult to reconstruct what transpired. Other natural or manmade disasters, such as the earthquakes in California in 1989 and 1994 or the "Great Flood of 1993" in the Midwest, or natural contamination events, such as the Milwaukee Cryptosporidium outbreak, may also offer opportunities to mine important data about the failure or recovery of water and wastewater systems, but detailed information on these earlier occurrences may be lacking. In the future, the NHSRC should be poised to seize opportunities for learning about response and recovery after major natural or man-made disasters affecting water or wastewater systems.

Summary of Research Priorities for Improving Response and Recovery

Determine strategic plans for managing and maintaining the WCIT/CAT databases, considering the likely uses and long-term goals for the databases.

Develop and implement a strategy for evaluating the utility and usability of the response tools and databases, including stakeholder feedback and lessons learned during their use under “real-life” incidents.

Convene a working group to develop research strategies for filling the data gaps in WCIT/CAT and other planned emergency response databases.

Contingencies for Water Emergencies

Complete the work in progress on contingencies and infrastructure interdependencies under Section 3.5 of the Action Plan.

Test and evaluate the most promising innovative water supply technologies that enable or enhance the short- or long-term delivery of drinking water in the event of systemic failure of water systems. Analyze the positive features and those areas needing improvement prior to full-scale deployment.

Conduct research on potential contingencies for failures of the “human subsystem.”

Analyze factors that build trust, reduce fear, and prevent panic to improve overall communication strategies in a water-related emergency.

Investigate the behavioral science research being conducted by the Homeland Security University Centers of Excellence and other federal agencies for applicability to the water sector.

Pretest messages being developed by the Center for Risk Communication and analyze case studies and scenarios for effectiveness.

Analyze the risks and benefits of releasing security information to inform the EPA’s risk communication strategies and its practices on information sharing.

Fully integrate risk communication efforts into the overall risk management program and provide adequate resources that ensure these efforts remain a high priority in the EPA’s future water security research program.

Conduct research to better understand how agencies will interact in a water-related crisis situation and determine what strategies will be most effective in encouraging and maintaining collaboration in planning and preparedness.

Complete the many decontamination projects in progress under Section 3.4 of the Action Plan.

Develop predictive models or laboratory data for inactivation of bioterrorism agents in both free chlorine and chloramines that can be used in MS-EPANET and the TEVA model.

Explore development and testing of new POU/POE devices that may overcome the disadvantages of existing devices.

Examine readily available household inactivation methods for biological agents (including spore-forming agents), such as microwaving.

Determine the costs and benefits of further research to identify additional surrogates, considering which agents under which conditions or applications should be prioritized for surrogate development research.

Use the remaining data from the experience of Hurricane Katrina to analyze the optimal response and recovery techniques (e.g., water supply alternatives, contingency planning, and infrastructure interdependencies) that would also apply to water security events.

Integrate experience with decontamination of the distribution system in New Orleans after Hurricane Katrina to improve EPA guidance for water security decontamination.

Evaluate risk communication strategies related to Hurricane Katrina or other past disaster events to determine if communication strategies related to drinking water safety reached the most vulnerable populations.

Develop a post-event strategy for learning from future natural disasters affecting water systems. This strategy should support on-site assessments of impacts and interdependencies and evaluations of successes and failures during response and recovery.

Continue to develop and maintain the WCIT/CAT databases according to the objectives set forth in the strategic database management plan. Incorporate a mechanism to provide ongoing peer review of the data to meet the databases' data quality objectives.

Continue experimental and computational research to fill critical data gaps in WCIT/CAT, including research on the health effects of both acute and chronic exposure to priority contaminants.

Develop new, innovative technologies for supplying drinking water to affected customers over both short- and long-term water system failures.

Risk Communication and Behavioral Sciences

Develop a program of interdisciplinary empirical research in behavioral sciences to better understand how to prepare stakeholders for water security incidents. The EPA should support original research that will help address critical knowledge gaps. For example:

What are the public’s beliefs, opinions, and knowledge about water security risks?

How do risk perception and other psychological factors affect responses to water-related events?

How can these risks be communicated more effectively to the public?

Develop a national training program on water-related risk communication planning and implementation for water managers.

Continue laboratory research to fill the data gaps related to behavior of contaminants in water supply and wastewater systems and methods for decontaminating water infrastructure.

Continue surrogate research based on the research prioritization determined in collaboration with an interagency working group. The EPA should also explore ways that this surrogate research could assist in responding to everyday agents or to other routes of exposure (e.g., inhalation, inactivating agents on surfaces).

The EPA has historically been a lead federal agency in understanding the fate and transport of contaminants in the environment and has a clear understanding of the practical concerns of the water sector. Thus, the EPA remains the appropriate lead agency to develop the tools for emergency response and to prioritize the research needed to fill the remaining gaps, with input from key stakeholders. The EPA is also well suited to develop a national training program on water-related risk communication and to evaluate lessons learned from Hurricane Katrina and other past disaster events. However, innovative technology development research, such as the development of novel technologies for supplying water during system failures, should be conducted by other agencies, university researchers, or firms with the greatest expertise. The EPA, instead, should focus its efforts on harvesting information on existing technologies, synthesizing this information for end users, and providing guidance to developers on unique technology needs for water security. Behavioral science research and evaluation research are more appropriately conducted by universities or other federal agencies (e.g., the CDC) that have the necessary expertise to complete these tasks. However, the EPA still needs in-house behavioral science experts able to supervise and use this work to best advantage.

CONCLUSIONS AND RECOMMENDATIONS

In this chapter, recommendations are provided for future research directions in the area of water security. Two key water security research gaps—behavioral science and innovative future system design—that were not considered in the short-term planning horizon of the Action Plan are identified. In accordance with the committee's charge (see Chapter 1), short- and long-term water security research priorities are presented in three areas: (1) developing products to support more resilient design and operation of facilities and systems, (2) improving the ability of operators and responders to detect and assess incidents, and (3) improving response and recovery.

The EPA should develop a program of interdisciplinary empirical research in behavioral science to better understand how to prepare stakeholders for water security incidents. The risks of terrorism are dynamic and uncertain and involve complex behavioral phenomena. The EPA should take advantage of existing behavioral science research that could be applied to water security issues to improve response and recovery efforts. At the same time, when gaps exist, the EPA should support rigorous empirical research that will help address, for example, what the public's beliefs, opinions, and knowledge about water security risks are; how risk perception and other psychological factors affect responses to water-related events; and how to communicate these risks effectively to the public.

The EPA should take a leadership role in providing guidance for the planning, design, and implementation of new, more sustainable and resilient water and wastewater facilities for the 21st century. Given the investments necessary to upgrade and sustain the country's water and wastewater systems, research on innovative approaches to make the infrastructure more sustainable and resilient to both routine and malicious incidents would provide substantial dual-use benefits. The EPA should help develop and test new concepts, technologies, and management structures for water and wastewater utilities to meet objectives of public health, sustainability, cost-effectiveness, and homeland security. Specific research topics related to drinking water and wastewater, such as decentralized systems and in-pipe interventions to reduce exposure from contaminants, are suggested.

Recommended research topics in the area of supporting more resilient design and operation of drinking water and wastewater systems include improved processes for threat and consequence assessments and innovative designs for water and wastewater. A thorough and balanced threat assessment encompassing physical, cyber, and contaminant threats is lacking. To date, the EPA has focused its threat assessments on contaminant threats, but physical and cyber threats deserve more attention and analysis because this information could influence the EPA’s future research priorities and utilities’ preparedness and response planning.

Research suggestions that improve the ability of operators and responders to detect and assess incidents build upon the EPA’s current research in the areas of analytical methodologies and monitoring and distribution system modeling. In the short term, the EPA should continue research to develop and refine a first-stage RTMS based on routine water quality parameters with dual-use applications. Long-term research recommendations include the development of innovative detection technologies and cheaper, more accurate RTMSs. To support the simulation models in development, a substantial amount of fundamental research is needed to improve understanding of the fate and transport of contaminants in distribution systems. Based on the number of emerging technologies and agents of interest, the EPA should develop a prioritization strategy for technology testing to optimize the resources devoted to this effort.

Recommendations for future research priorities to improve response and recovery emphasize the sustainability of tools for emergency planning and response (e.g., WCIT/CAT) and improving research on water security contingencies, behavioral sciences, and risk communication. The EPA should also evaluate the relative importance of future laboratory work on surrogate development and address data gaps in the knowledge of decontamination processes and behavior. So far, the EPA has not taken advantage of the many opportunities from Hurricane Katrina to harvest lessons learned related to response and recovery, and the window of opportunity is rapidly closing.

Some of the research recommendations provided in this chapter lie outside of the EPA’s traditional areas of expertise. The EPA will need to consider how best to balance intramural and extramural research funding to carry out this research, while maintaining appropriate oversight and input into the research activities. Increasing staff expertise in some key areas, such as physical security and behavioral sciences, will be necessary to build a strong and well-rounded water security research program.


Concern over terrorist attacks since 2001 has directed attention to potential vulnerabilities of the nation's water and wastewater systems. The Environmental Protection Agency (EPA), which leads federal efforts to protect the water sector, initiated a research program in 2002 to address immediate research and technical support needs. This report, conducted at EPA's request, evaluates research progress and provides a long-term vision for EPA's research program. The report recommends that EPA develop a strategic research plan, address gaps in expertise among EPA program managers and researchers, and improve its approaches to information dissemination. The report recommends several high-priority research topics for EPA, including conducting empirical research in behavioral science to better understand how to prepare people for water security incidents.


Donaldson MS, Mohr JJ; Institute of Medicine (US). Exploring Innovation and Quality Improvement in Health Care Micro-Systems: A Cross-Case Analysis. Washington (DC): National Academies Press (US); 2001.


CONCLUSIONS AND DIRECTIONS FOR FURTHER RESEARCH AND POLICY

  • Limitations of This Research

There are limitations to all sampling strategies and to qualitative research in particular. The strength of this method was that the sample selection used input from a pool of recognized experts in the organization, delivery, and improvement of health care. Even with a pool of recognized experts, it is reasonable to expect that some high-performing micro-systems were overlooked. It was also possible that less-than-high-performing micro-systems were included. In fact, a concern was how to ensure that the micro-systems included in the study were high-performing or successful micro-systems, and probes were included in the interview to assess what evidence micro-systems might offer to validate statements about their level of performance. We did not, however, seek validation from documents or other written materials. Although the intent of the sampling strategy was to study high-performing micro-systems, a very small number of apparently negative cases were useful for comparison. More importantly, as expected, each site had some areas of very strong performance and other areas that were undistinguished, and they formed a natural cross-case comparison group. Although the sites were selected on the basis of expert opinion, the database is limited by being self-report. It is possible that the leaders of the micro-systems had an interest in making their micro-system appear to be better than it is, and we did not have any independent verification of their assertions. For this reason, we did not make any judgments about the validity of respondents' assertions and have limited the analysis to descriptive summaries and themes based on the respondents' own words.

TABLE 18 Micro-System Examples of Investment in Improvement (each row pairs a quote illustrating low investment with one illustrating high investment)

Low (training, resources not available): "One change was to get people to carry medication cards in their wallets. We talked about it for 10 minutes or so and decided to do it. But it didn't work. We don't know how to implement it. We don't know how to flowchart. We don't know how to improve the system. We have closets full of good ideas but don't know how to implement them."

High: "We have a manager for staff development. She works on skill building and coaches the teams in how we get along. It's important to assign the role of staff development to someone."

Low: "Our micro-system is a prisoner of our macro-system. If it isn't important for the macro-system, we have no incentive to do it and improvement hasn't been a priority."

High: "We put together a guidance team and the idea was that this team would tell us what to work on. But I saw most of the good ideas coming from the front lines. The front line needed to be empowered to make the changes. So, now the guidance team will become the quality council. It will have membership from each of the three teams. Changes that teams want to work on will be presented to the Quality Council—'this is what we want to do, we want to use this method.' The Council's goal will be to provide guidance and facilitation. 'Yes, that project meets our overall goals, what resources do you need?'"

Low: "We look at the data and say, 'what can we do to make this better . . .' but there is so much pressure to reduce the time we see with patients and see more patients every day. Now there is pressure from the organization to see patients at 10 minute intervals. They are going to start to tie incentives to that. Each physician will have to decide how to deal with that: more money, less hours, etc."

High: "Remember that even when it seems you have accomplished something, new people come who were not party to the original plans. Before you know it, you've fallen back. We used to think that people would learn the systems by osmosis. Now, they have a formal induction system to explain and show people how the systems should work."

Low: "We started looking at the data because we had a high rate of wound infection after CABG. We brought together all the different people and looked at all the different issues over 2 years. We found that there is a strong correlation between diabetes and infection, which the national data shows too. We decided that we should work on managing blood sugars before, during, and after surgery. As it turns out, there are so many primary care providers referring patients—we couldn't agree on a way to work on blood sugars before surgery and they didn't want to invest the resources that would be necessary to do this. We couldn't get any primary care providers to work with us on this because working on improvement impacts their productivity, which impacts how much they are paid. Even though it was clear what needed to be done, they chose the easier way and started working on just the peri-operative phase. Two years later we found that the staff wouldn't make the changes because they wouldn't buy into what we wanted to do. And the leaders had forgotten why they ever bought into it to begin with. As it turned out, some of the physicians were offended because we came to them with these changes and they weren't involved with planning the changes. But they had forgotten that when we started all this they didn't want to be involved because they didn't have the time to do it. I am sick and tired of hearing that people are too busy to work on this. When I was younger and less experienced I believed it, but I don't want to hear that anymore."

High: "In a given week we are spending about 100 person-hours on teams. People are being paid to spend their time doing this, not just during their lunch hour. Someone said, 'You have to assume you'll be around here 5 years from now. Do you want to be doing things the same way?' Most of us don't. This requires a new attitude that results in understanding that industries must invest in change in these micro-systems. You have to tolerate pulling people off-line to work. This is a radically new way of thinking in medicine which traditionally views any sort of meeting as a waste of time. Traditionally, the view is that the only useful time is spent seeing patients. I think that unless you spend time considering how to deliver care better, much of that time seeing patients is wasted."

TABLE 19 Micro-System Examples of Alignment of Role and Training (quotes illustrate low versus high alignment)

Low: "The system wants me to simply be a 'broker.' They want me to just do my CHF part and then make referrals. I want to be more involved in the care process."

High: "The receptionist talks them through the systems of the office. They are trained to follow through specific areas of care such as screening, childhood immunization, and antenatal care, so they have one person to contact. They have become expert in their areas."

High: "We emphasize training medical assistants to a much higher level than most expect, use 2 NPs extensively. MAs trained in using technology, standardized triage functions, training patients in self-management. As a group they stay with the practice for long periods. We are trying to 'push the envelope' and rely less on credentialing and more on continually developing new skills."

High: "The system can be an advocate. It can be a reminder that a mammogram needs to be done, that there is a system in place to make sure it happens, that things go well. A system can empower the medical assistant to insist that a patient be seen, even if it means clashing with a provider."

High: "If the Respiratory Therapist notes an abnormal lab value, she is comfortable not just taking a blood sample and reporting it, but managing it. The technicians are caregivers. Expectations have changed. The ones that stay are good at adjusting therapy to within physiological parameters and are cross-trained so that they can take on nursing tasks, starting IVs when needed. When fully trained and confident they may tell an admitting doc that a patient is not ready to have a ventilator tube removed."

A second limitation of this study was that the interviews were not tape-recorded to provide a raw data "gold standard" for later reference. For this reason, we went to considerable effort to ensure the quality of note taking as described in the methods section, and we obtained respondents' consent to follow up with them to clarify notes. Follow-up was necessary in only a few instances. The notes were voluminous and rich in detail.

A third limitation is that for most of the interviews, one respondent represented each of the forty-three micro-systems. A more comprehensive assessment would include interviews with at least one person from each of the key roles within the micro-system, including patients. Such tradeoffs in qualitative analysis between breadth and depth are inevitable,31 but given that this was an exploratory study, we decided to include as many micro-systems as possible, with follow-up in later studies.

Research currently underway will expand on this work by taking a more comprehensive look at individual micro-systems and the outcomes of care provided to determine if high performing micro-systems achieve superior results for patients.

  • Directions for Further Research

This research has been exploratory in that it is the first systematic look at health care micro-systems. The power of the research is that it gave a voice to individual micro-systems and provided a way to explore them while creating constructs that may be generalizable to other micro-systems. It has begun the work of defining and characterizing health care micro-systems. The greater value of this analysis will be to go beyond the findings of this research to develop tools to help existing micro-systems improve and to replicate and extend the achievements of these micro-systems.

The basic concept of health care micro-systems—small, organized groups of providers and staff caring for a defined population of patients—is not new. The key components of micro-systems (patients, populations, providers, activities, and information technology) exist in every health care setting. However, current methods for organizing and delivering health care, preparing future health professionals, conducting health services research, and formulating policy have made it difficult to recognize the interdependence and function of the micro-system.

Further analysis of the database would likely yield additional themes. All can be the basis of hypothesis testing for continued work. For example, further work might establish criteria of effectiveness and test whether the features identified as the eight themes are predictive of effectiveness. More refined or additional questions might clarify aspects of the general themes that are critical. More intensive data gathering, for example from multiple members of the micro-system, including patients, could validate results and expand our understanding of these micro-systems.

Two questions were central as we undertook this study: (1) Would the term micro-system be meaningful to clinicians in the field? (2) Would they participate and give us detailed enough information to draw inferences? The answer to both questions was clearly yes.

Overall, we discovered that the idea of a micro-system was very readily understood by all we interviewed. They had no difficulty identifying and describing their own micro-systems and, when they directed several (such as several intensive care units), differentiating among them in terms of their characteristics.

The study was assisted in its work by an extremely able and distinguished steering group and Subcommittee, whose reputations in the field unquestionably enabled us to secure the participation of nearly all who were invited, despite our requesting an hour and a half of a busy clinician's time. Many of those interviewed willingly went on for longer than the allotted 90 minutes and sent us additional materials. Some who were interrupted by urgent clinical business rescheduled time to complete the interviews.

Although this was a selected, not a randomly sampled, group, there was clearly great enthusiasm for the innovative work going on at the grass-roots level. Many of those interviewed expressed clear ideas about how they were reorganizing practices, their principles for doing so, and their commitment to an ongoing process. Respondents described their early limited successes or outright failures. We heard what had and had not been successful as they tried to disseminate their practices throughout their organizations. We believe there is much that could profitably be learned and shared beyond the individual sites but that has not yet been pulled together by a unifying conceptual framework or an effective mechanism for deploying what is being learned.

We were struck by two findings in particular: first, the importance of leadership at the macro-system as well as the clinical level; and second, the general lack of information infrastructure in these practices. Micro-system leaders repeatedly stressed the importance of executive and governance-level support. This support was singled out repeatedly as a sine qua non of their ability to succeed. It was also apparent that although some steps have been taken to incorporate the explosion of information technologies being deployed for managing patient information, free-standing practices, as well as much of clinical practice within hospitals, have only begun to integrate data systems, to use them for real-time clinical practice, or to use them as information tools for improving the quality of care for a patient population. The potential is enormous but, as yet, almost untapped. These practices appear to be at the threshold of incorporating information technologies into daily practice. The potential created by the development of knowledge servers, decision support tools, consumer informatics,32 continuous electronic patient-clinician communication, and computer-based electronic health records puts most of these micro-systems almost at "time zero" for what will likely be dramatic changes in the integration of information for real-time patient care, and it provides a strong baseline for future comparison.

As research on micro-systems moves forward, it will be important to transfer what has been learned from research on teams and organizations to new research that will be conducted on micro-systems. For example, research that will be helpful includes information about the different stages of development and maturity of the organization, creating the organizational environment to support teams, socializing new members (clinicians and staff) to the team, environments that support micro-systems, the characteristics of effective leadership, and how micro-systems can build linkages that result in well-coordinated care within and across organizational boundaries.

  • IOM Quality of Care Study

This study was intended to provide more than a database for research, however. It was undertaken to provide an evidence base for the IOM Committee on the Quality of Health Care in America in formulating its conclusions and recommendations. Because that committee was charged with formulating recommendations about changes that can lead to threshold improvement in the quality of care in this country, its members believed it was extremely important to draw not only on their expertise and the literature but also on the best evidence they could find of excellent performance, and to do so in a systematic way, as exemplified by this study. Because that study was not limited by type of health care, the goals of such a project necessitated drawing from a wide range of sites serving a variety of patient populations. It also suggested a sample size that, for qualitative analytic methods, was quite broad but not unwieldy. The number of sites interviewed—43—served these purposes well. We had several of each "kind" of micro-system (e.g., primary care, critical care), but they varied in location, composition, and their own approaches to organizing and delivering care, thus providing a very rich database of observation. That report, which is expected to be published in early 2001, will use the responses and analysis described in this technical report to underpin its recommendations about how health care micro-systems, macro-systems, and other organizational forms that have not yet emerged can improve their performance.


Future directions in evaluation research: people, organizational, and social issues

Affiliation: Kaplan Associates, 59 Morris Street, Hamden, CT 06517, USA. [email protected]

PMID: 15227551

Objective: To review evaluation literature concerning people, organizational, and social issues and provide recommendations for future research.

Method: Analyze this research and make recommendations.

Results and conclusions: Evaluation research is key in identifying how people, organizational, and social issues - all crucial to system design, development, implementation, and use - interplay with informatics projects. Building on a long history of contributions and using a variety of methods, researchers continue developing evaluation theories and methods while producing significant, interesting studies. We recommend that future research:

1) Address concerns of the many individuals involved in or affected by informatics applications.
2) Conduct studies in sites of different types and sizes, with different scopes of systems and different groups of users; do multi-site or multi-system comparative studies.
3) Incorporate evaluation into all phases of a project.
4) Study failures, partial successes, and changes in project definition or outcome.
5) Employ evaluation approaches that take account of the shifting nature of health care and project environments, and do formative evaluations.
6) Incorporate people, social, organizational, cultural, and concomitant ethical issues into the mainstream of medical informatics.
7) Diversify research approaches and continue to develop new approaches.
8) Conduct investigations at different levels of analysis.
9) Integrate findings from different applications and contextual settings, different areas of health care, studies in other disciplines, and also work that is not published in traditional research outlets.
10) Develop and test theory to inform both further evaluation research and informatics practice.


cropped Screenshot 2023 08 20 at 23.18.57

Paradigms in Psychology: Definition, Examples, and Significance

Paradigms, the lenses through which psychologists view the complexities of the human mind, have shaped the evolution of psychological thought and practice, guiding research questions, methodologies, and therapeutic approaches. These fundamental frameworks serve as the bedrock upon which our understanding of human behavior and mental processes is built. But what exactly are paradigms in psychology, and why do they matter so much?

Imagine, if you will, a pair of glasses that not only correct your vision but also tint the world in a specific hue. That’s what a paradigm does for psychologists – it provides a particular perspective through which they interpret and make sense of the vast, intricate landscape of human psychology. These paradigms aren’t just abstract concepts; they’re the very scaffolding that supports the entire field of psychology, influencing everything from how research is conducted to how patients are treated in therapy.

Now, you might be thinking, “Hold on a second. Isn’t psychology just about understanding how people think and behave?” Well, yes and no. While that’s certainly the end goal, the journey to that understanding is far from straightforward. It’s a bit like trying to navigate a dense forest – you need a map, a compass, and a good dose of intuition. Paradigms provide psychologists with that map and compass, offering a structured way to approach the wilderness of the human mind.

What Exactly is a Paradigm in Psychology?

Let’s get down to brass tacks. A paradigm in psychology is a comprehensive theoretical framework that provides a set of concepts, research methods, and problem-solving approaches within the field. It’s like the operating system of a computer – it runs in the background, dictating how information is processed and interpreted.

But here’s where it gets interesting. A psychological paradigm isn’t just a static set of rules. It’s a living, breathing entity that evolves over time. It shapes how psychologists formulate theories, design experiments, and interpret results. In essence, it’s the lens through which they view the world of human behavior and mental processes.

The key components of a psychological paradigm include:

1. A set of fundamental assumptions about human nature and behavior
2. A methodology for conducting research and gathering data
3. A framework for interpreting and explaining psychological phenomena
4. A set of ethical guidelines for research and practice

These components work together to create a cohesive approach to understanding the human mind. It’s like a recipe for baking a cake – each ingredient plays a crucial role, and changing one can dramatically alter the end result.

Now, you might be wondering, “How do these paradigms actually shape psychological theories and research?” Well, imagine you’re a detective trying to solve a complex case. The paradigm you subscribe to would be like your investigative approach. It would influence which clues you pay attention to, how you interpret evidence, and ultimately, the conclusions you draw.

In the same way, psychological paradigms guide researchers in formulating hypotheses, designing experiments, and interpreting results. They provide a framework for asking questions and seeking answers. For instance, a behaviorist paradigm might lead a researcher to focus on observable behaviors and environmental influences, while a cognitive paradigm might prompt investigations into internal mental processes.

A Walk Through the History of Psychological Paradigms

The journey of paradigms in psychology is a fascinating tale of intellectual evolution, marked by dramatic shifts and heated debates. It’s a story that begins in the late 19th century when psychology was still finding its footing as a distinct scientific discipline.

In the early days, introspection reigned supreme. Psychologists like Wilhelm Wundt and Edward Titchener believed that the key to understanding the mind lay in carefully observing and reporting one’s own conscious experiences. It was a bit like trying to understand the workings of a clock by staring at its face – interesting, but limited.

Then came the behaviorists, led by John B. Watson and later B.F. Skinner. They turned the field on its head, arguing that psychology should focus solely on observable behaviors rather than unobservable mental states. It was a radical shift, akin to deciding that the best way to understand a clock is to ignore its inner workings entirely and just focus on what the hands do.

But the pendulum swung back in the mid-20th century with the emergence of cognitive psychology. Pioneers like Ulric Neisser and George Miller argued that to truly understand human behavior, we need to consider the mental processes that underlie it. This paradigm shift in psychology was like realizing that to understand the clock, we need to open it up and examine its gears and springs.

Today, the landscape of psychological paradigms is more diverse than ever. We have biological psychology, which explores the physical basis of behavior and mental processes. There’s evolutionary psychology, which examines how our evolutionary history shapes our minds. And let’s not forget about the burgeoning field of parapsychology, which pushes the boundaries of conventional psychological science.

The Big Players: Major Paradigms in Psychology

Now that we’ve got a bird’s eye view of how paradigms have evolved, let’s zoom in on some of the major players that have shaped the field of psychology. Each of these paradigms offers a unique perspective on human behavior and mental processes, like different facets of a complex gemstone.

1. Behaviorism: The poster child of observable behavior, behaviorism focuses on how environmental stimuli shape our actions. It’s like viewing humans as sophisticated input-output machines, responding to the world around them in predictable ways.

2. Cognitive Psychology: This paradigm shifts the focus inward, examining how we process information, form memories, and make decisions. It’s akin to treating the mind as a complex computer, with various programs and processes running simultaneously.

3. Psychoanalysis: Pioneered by Sigmund Freud, this paradigm delves into the unconscious mind, exploring how hidden desires and conflicts shape our behavior. It’s like viewing the mind as an iceberg, with most of its mass hidden beneath the surface.

4. Humanistic Psychology: This approach emphasizes individual potential and the importance of self-actualization. It’s like seeing humans as seeds, each with the inherent capacity to grow and flourish under the right conditions.

5. Biological Psychology: This paradigm explores how our biology – our brains, genes, and hormones – influences our behavior and mental processes. It’s like viewing humans as intricate biological machines, with behavior emerging from complex physiological processes.

Each of these paradigms has its strengths and limitations, and they often complement each other in unexpected ways. It’s a bit like the old parable of the blind men and the elephant – each paradigm grasps a different part of the complex beast that is human psychology.

How Paradigms Shape the Psychological Landscape

Now, you might be wondering, “So what? How do these paradigms actually impact real-world psychology?” Well, buckle up, because the influence of paradigms extends far beyond academic debates and theoretical musings.

First and foremost, paradigms shape the very questions psychologists ask. A behaviorist might wonder how environmental factors influence a particular behavior, while a cognitive psychologist might be more interested in the mental processes underlying that same behavior. It’s like two people looking at the same painting – one might focus on the use of color, while the other examines the composition.

But it doesn’t stop there. Paradigms also influence the methods psychologists use to answer these questions. A behaviorist might design a controlled experiment to observe behavior under different conditions, while a psychoanalyst might rely on in-depth case studies and dream analysis. It’s a bit like choosing between a microscope and a telescope – each tool reveals different aspects of the subject at hand.

Perhaps most importantly, paradigms shape how psychologists interpret their findings. The same set of data could lead to vastly different conclusions depending on the paradigm through which it’s viewed. It’s like the classic “is the glass half full or half empty” dilemma – the facts are the same, but the interpretation can vary wildly.

This influence extends to the realm of therapy as well. The paradigm a therapist subscribes to can dramatically affect their approach to treatment. A cognitive-behavioral therapist might focus on changing thought patterns, while a psychodynamic therapist might explore childhood experiences and unconscious conflicts. It’s like having different tools in a toolbox – each has its place and purpose.

The Great Debate: Challenges and Controversies

Now, if you think the world of psychological paradigms is all harmony and agreement, think again. Like any field of science, psychology is rife with debates, controversies, and heated arguments. It’s a bit like a family reunion – there’s a lot of love, but also plenty of disagreements.

One of the biggest challenges in the field is the occurrence of paradigm shifts. These are moments when a new paradigm emerges that fundamentally changes how psychologists view their field. It’s like suddenly realizing that the earth orbits the sun, not the other way around. These shifts can be met with significant resistance, as established researchers may be reluctant to abandon the frameworks they’ve built their careers on.

Another contentious issue is the limitation of single-paradigm approaches. While each paradigm offers valuable insights, relying too heavily on any one perspective can lead to a narrow understanding of human psychology. It’s like trying to understand a symphony by listening to just one instrument – you might get part of the picture, but you’re missing out on the full complexity.

This realization has led to a growing debate over integrative and eclectic approaches in psychology. Some argue that combining insights from multiple paradigms can lead to a more comprehensive understanding of human behavior and mental processes. Others worry that this approach risks creating a hodgepodge of incompatible ideas. It’s a bit like trying to create a new cuisine by mixing ingredients from different culinary traditions – it could be a delicious fusion or a disastrous mess.

These debates and controversies are not just academic squabbles. They have real-world implications for how psychology is practiced and how mental health issues are addressed. As these debates continue to evolve, they shape the very future of the field.

Wrapping It Up: The Ever-Evolving World of Psychological Paradigms

As we come to the end of our journey through the landscape of psychological paradigms, let’s take a moment to recap. A paradigm in psychology is a comprehensive framework that shapes how psychologists view, research, and interpret human behavior and mental processes. It’s the lens through which they examine the complexities of the human mind, influencing everything from research questions to therapeutic approaches.

But here’s the kicker – this landscape is far from static. Psychological paradigms are in a constant state of evolution, adapting to new discoveries, societal changes, and technological advancements. It’s like watching a time-lapse video of a growing forest – the overall shape might remain recognizable, but the details are in constant flux.

So, what does the future hold for psychological paradigms? Well, if the past is any indication, we can expect continued debate, integration, and innovation. Emerging fields like neuroscience and artificial intelligence are already pushing the boundaries of how we understand the mind. Who knows? The next great paradigm shift might be just around the corner.

As we look to the future, it’s worth considering the words of the philosophers of psychology who have shaped our understanding of the mind. Their insights remind us that while paradigms provide valuable frameworks, the human mind remains a frontier of endless fascination and discovery.

In the end, the story of paradigms in psychology is really the story of our ongoing quest to understand ourselves. It’s a journey marked by curiosity, controversy, and occasional breakthroughs. And the best part? We’re all part of this grand adventure, each of us a living, breathing example of the very phenomena psychologists seek to understand.

So the next time you find yourself pondering the workings of your own mind, remember – you’re not just thinking about psychology, you’re living it. And in that sense, we’re all contributors to the ever-evolving tapestry of psychological paradigms. Now isn’t that a thought to wrap your mind around?

References:

1. Kuhn, T. S. (1962). The Structure of Scientific Revolutions. University of Chicago Press.

2. Leahey, T. H. (2013). A History of Psychology: From Antiquity to Modernity. Pearson.

3. Miller, G. A. (2003). The cognitive revolution: a historical perspective. Trends in Cognitive Sciences, 7(3), 141-144.

4. Neisser, U. (1967). Cognitive Psychology. Appleton-Century-Crofts.

5. Skinner, B. F. (1938). The Behavior of Organisms: An Experimental Analysis. Appleton-Century.

6. Watson, J. B. (1913). Psychology as the behaviorist views it. Psychological Review, 20(2), 158-177.

7. Wundt, W. (1874). Grundzüge der physiologischen Psychologie [Principles of Physiological Psychology]. Engelmann.

8. Freud, S. (1900). The Interpretation of Dreams. Franz Deuticke.

9. Maslow, A. H. (1954). Motivation and Personality. Harper & Brothers.

10. Kandel, E. R. (1998). A new intellectual framework for psychiatry. American Journal of Psychiatry, 155(4), 457-469.


What 570 Experts Predict the Future of Work Will Look Like

  • Nicky Dries,
  • Joost Luyckx,
  • Philip Rogiers

How to make sense of divergent perspectives — and shape a better conversation about what comes next.

No one knows exactly what the future of work will look like, but many people have opinions. Research involving Belgian newspaper articles and experts shows that public commentators on the topic tend to fall into three buckets: optimists (largely tech entrepreneurs), skeptics (largely economists), and pessimists (authors and journalists). So, who’s right — should the future involve accelerated progress, degrowth, or something in between? Because each group uses different research and has different points of view, it’s impossible to tell. But it is possible to better understand all three arguments, and to think critically about what you, personally, want the future of work to look like. A robust public debate involving every citizen, policy maker, manager, and CEO is the best way to ensure all voices are heard; after all, the future is what we make it.


  • Nicky Dries is a full professor of Organizational Behavior at KU Leuven and at BI Norwegian Business School. Her research focuses on re-politicizing the future of work and stimulating democratic debate, using methods aimed at triggering people’s imagination about the future, such as media analysis, robotic art and design, virtual reality, and science fiction movies.
  • Joost Luyckx is an associate professor of Business and Society at IESEG School of Management and a research fellow at KU Leuven. His research focuses on legitimacy struggles over multinational companies in the global public debate, more desirable futures of work, alternative organizations, and neutralization of social movement activism.
  • Philip Rogiers is an assistant professor of organizational behavior and organizational theory at University Ramon Llull, Esade. His research focuses on the transformation and deconstruction of jobs, along with the exploration of alternative organizational forms that support a more human-centered future of work.
