
Chapter 5: Psychological Measurement

Reliability and Validity of Measurement

Learning Objectives

  • Define reliability, including the different types and how they are assessed.
  • Define validity, including the different types and how they are assessed.
  • Describe the kinds of evidence that would be relevant to assessing the reliability and validity of a particular measure.

Again, measurement involves assigning scores to individuals so that they represent some characteristic of the individuals. But how do researchers know that the scores actually represent the characteristic, especially when it is a construct like intelligence, self-esteem, depression, or working memory capacity? The answer is that they conduct research using the measure to confirm that the scores make sense based on their understanding of the construct being measured. This is an extremely important point. Psychologists do not simply  assume  that their measures work. Instead, they collect data to demonstrate  that they work. If their research does not demonstrate that a measure works, they stop using it.

As an informal example, imagine that you have been dieting for a month. Your clothes seem to be fitting more loosely, and several friends have asked if you have lost weight. If at this point your bathroom scale indicated that you had lost 10 pounds, this would make sense and you would continue to use the scale. But if it indicated that you had gained 10 pounds, you would rightly conclude that it was broken and either fix it or get rid of it. In evaluating a measurement method, psychologists consider two general dimensions: reliability and validity.

Reliability

Reliability  refers to the consistency of a measure. Psychologists consider three types of consistency: over time (test-retest reliability), across items (internal consistency), and across different researchers (inter-rater reliability).

Test-Retest Reliability

When researchers measure a construct that they assume to be consistent across time, then the scores they obtain should also be consistent across time.  Test-retest reliability  is the extent to which this is actually the case. For example, intelligence is generally thought to be consistent across time. A person who is highly intelligent today will be highly intelligent next week. This means that any good measure of intelligence should produce roughly the same scores for this individual next week as it does today. Clearly, a measure that produces highly inconsistent scores over time cannot be a very good measure of a construct that is supposed to be consistent.

Assessing test-retest reliability requires using the measure on a group of people at one time, using it again on the same group of people at a later time, and then looking at the test-retest correlation between the two sets of scores. This is typically done by graphing the data in a scatterplot and computing Pearson's r. Figure 5.2 shows the correlation between two sets of scores of several university students on the Rosenberg Self-Esteem Scale, administered two times, a week apart. Pearson's r for these data is +.95. In general, a test-retest correlation of +.80 or greater is considered to indicate good reliability.

[Figure 5.2: Scatterplot of test-retest scores, with score at time 1 on the x-axis and score at time 2 on the y-axis, showing fairly consistent scores.]
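
In practice, the test-retest correlation is just a Pearson correlation between the two administrations. Here is a minimal sketch of that computation, using made-up scores rather than the actual data behind Figure 5.2 and assuming the SciPy library is available:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical Rosenberg Self-Esteem totals for the same eight students,
# measured one week apart (illustrative numbers only).
time1 = np.array([22, 25, 18, 30, 27, 20, 24, 29])
time2 = np.array([23, 24, 19, 29, 28, 21, 25, 28])

r, _ = pearsonr(time1, time2)  # test-retest correlation
print(f"Test-retest r = {r:+.2f}")  # +.80 or greater is usually taken as good reliability
```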

Again, high test-retest correlations make sense when the construct being measured is assumed to be consistent over time, which is the case for intelligence, self-esteem, and the Big Five personality dimensions. But other constructs are not assumed to be stable over time. The very nature of mood, for example, is that it changes. So a measure of mood that produced a low test-retest correlation over a period of a month would not be a cause for concern.

Internal Consistency

A second kind of reliability is internal consistency, which is the consistency of people's responses across the items on a multiple-item measure. In general, all the items on such measures are supposed to reflect the same underlying construct, so people's scores on those items should be correlated with each other. On the Rosenberg Self-Esteem Scale, people who agree that they are a person of worth should tend to agree that they have a number of good qualities. If people's responses to the different items are not correlated with each other, then it would no longer make sense to claim that they are all measuring the same underlying construct. This is as true for behavioural and physiological measures as for self-report measures. For example, people might make a series of bets in a simulated game of roulette as a measure of their level of risk seeking. This measure would be internally consistent to the extent that individual participants' bets were consistently high or low across trials.

Like test-retest reliability, internal consistency can only be assessed by collecting and analyzing data. One approach is to look at a split-half correlation. This involves splitting the items into two sets, such as the first and second halves of the items or the even- and odd-numbered items. Then a score is computed for each set of items, and the relationship between the two sets of scores is examined. For example, Figure 5.3 shows the split-half correlation between several university students' scores on the even-numbered items and their scores on the odd-numbered items of the Rosenberg Self-Esteem Scale. Pearson's r for these data is +.88. A split-half correlation of +.80 or greater is generally considered good internal consistency.

[Figure 5.3: Scatterplot of the split-half correlation, with score on the even-numbered items on the x-axis and score on the odd-numbered items on the y-axis, showing fairly consistent scores.]
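
The same idea can be sketched in code: total the even- and odd-numbered items separately for each person and correlate the two totals. The response matrix below is invented for illustration; it is not the data behind Figure 5.3.

```python
import numpy as np

# Each row is one participant's responses to ten items on a 1-4 scale
# (hypothetical values chosen only to illustrate the computation).
responses = np.array([
    [4, 3, 4, 4, 3, 4, 3, 4, 4, 3],
    [2, 2, 1, 2, 2, 1, 2, 2, 1, 2],
    [3, 3, 3, 4, 3, 3, 4, 3, 3, 3],
    [1, 2, 1, 1, 2, 1, 1, 2, 1, 1],
    [4, 4, 4, 3, 4, 4, 3, 4, 4, 4],
    [2, 3, 2, 2, 3, 2, 2, 3, 2, 2],
])

odd_totals = responses[:, 0::2].sum(axis=1)   # items 1, 3, 5, 7, 9
even_totals = responses[:, 1::2].sum(axis=1)  # items 2, 4, 6, 8, 10

split_half_r = np.corrcoef(even_totals, odd_totals)[0, 1]
print(f"Split-half r = {split_half_r:+.2f}")
```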

Perhaps the most common measure of internal consistency used by researchers in psychology is a statistic called  Cronbach’s α  (the Greek letter alpha). Conceptually, α is the mean of all possible split-half correlations for a set of items. For example, there are 252 ways to split a set of 10 items into two sets of five. Cronbach’s α would be the mean of the 252 split-half correlations. Note that this is not how α is actually computed, but it is a correct way of interpreting the meaning of this statistic. Again, a value of +.80 or greater is generally taken to indicate good internal consistency.
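
Although α is normally obtained from statistical software, the standard item-variance formula is short enough to sketch directly. The function below is a minimal illustration of that formula, applied to a hypothetical participants-by-items matrix; it is not the exact procedure any particular package uses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a participants-by-items matrix of scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of the item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical five-item responses for six participants (illustration only).
responses = np.array([
    [4, 3, 4, 4, 3], [2, 2, 1, 2, 2], [3, 3, 3, 4, 3],
    [1, 2, 1, 1, 2], [4, 4, 4, 3, 4], [2, 3, 2, 2, 3],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```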

Inter-rater Reliability

Many behavioural measures involve significant judgment on the part of an observer or a rater. Inter-rater reliability is the extent to which different observers are consistent in their judgments. For example, if you were interested in measuring university students' social skills, you could make video recordings of them as they interacted with another student whom they were meeting for the first time. Then you could have two or more observers watch the videos and rate each student's level of social skills. To the extent that each participant does in fact have some level of social skills that can be detected by an attentive observer, different observers' ratings should be highly correlated with each other. Inter-rater reliability would also have been measured in Bandura's Bobo doll study. In this case, the observers' ratings of how many acts of aggression a particular child committed while playing with the Bobo doll should have been highly positively correlated. Inter-rater reliability is often assessed using Cronbach's α when the judgments are quantitative or an analogous statistic called Cohen's κ (the Greek letter kappa) when they are categorical.
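
For two raters making categorical judgments, Cohen's κ compares observed agreement with the agreement expected by chance. The sketch below assumes two hypothetical observers coding the same ten behaviours; it illustrates the standard formula rather than any particular software routine.

```python
import numpy as np

def cohens_kappa(rater1, rater2) -> float:
    """Chance-corrected agreement between two raters' categorical judgments."""
    rater1, rater2 = np.asarray(rater1), np.asarray(rater2)
    categories = np.union1d(rater1, rater2)
    observed = np.mean(rater1 == rater2)  # proportion of exact agreement
    # Chance agreement: product of the raters' marginal proportions, summed over categories.
    expected = sum(np.mean(rater1 == c) * np.mean(rater2 == c) for c in categories)
    return (observed - expected) / (1 - expected)

# Two hypothetical observers coding ten acts as aggressive ("agg") or not ("not").
rater_a = ["agg", "not", "agg", "agg", "not", "not", "agg", "not", "agg", "agg"]
rater_b = ["agg", "not", "agg", "not", "not", "not", "agg", "not", "agg", "agg"]
print(f"Cohen's kappa = {cohens_kappa(rater_a, rater_b):.2f}")
```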

Validity

Validity is the extent to which the scores from a measure represent the variable they are intended to. But how do researchers make this judgment? We have already considered one factor that they take into account—reliability. When a measure has good test-retest reliability and internal consistency, researchers should be more confident that the scores represent what they are supposed to. There has to be more to it, however, because a measure can be extremely reliable but have no validity whatsoever. As an absurd example, imagine someone who believes that people's index finger length reflects their self-esteem and therefore tries to measure self-esteem by holding a ruler up to people's index fingers. Although this measure would have extremely good test-retest reliability, it would have absolutely no validity. The fact that one person's index finger is a centimetre longer than another's would indicate nothing about which one had higher self-esteem.

Discussions of validity usually divide it into several distinct “types.” But a good way to interpret these types is that they are other kinds of evidence—in addition to reliability—that should be taken into account when judging the validity of a measure. Here we consider three basic kinds: face validity, content validity, and criterion validity.

Face Validity

Face validity  is the extent to which a measurement method appears “on its face” to measure the construct of interest. Most people would expect a self-esteem questionnaire to include items about whether they see themselves as a person of worth and whether they think they have good qualities. So a questionnaire that included these kinds of items would have good face validity. The finger-length method of measuring self-esteem, on the other hand, seems to have nothing to do with self-esteem and therefore has poor face validity. Although face validity can be assessed quantitatively—for example, by having a large sample of people rate a measure in terms of whether it appears to measure what it is intended to—it is usually assessed informally.

Face validity is at best a very weak kind of evidence that a measurement method is measuring what it is supposed to. One reason is that it is based on people's intuitions about human behaviour, which are frequently wrong. It is also the case that many established measures in psychology work quite well despite lacking face validity. The Minnesota Multiphasic Personality Inventory-2 (MMPI-2) measures many personality characteristics and disorders by having people decide whether each of 567 different statements applies to them—where many of the statements do not have any obvious relationship to the construct that they measure. For example, the items “I enjoy detective or mystery stories” and “The sight of blood doesn’t frighten me or make me sick” both measure the suppression of aggression. In this case, it is not the participants’ literal answers to these questions that are of interest, but rather whether the pattern of the participants’ responses to a series of questions matches those of individuals who tend to suppress their aggression.

Content Validity

Content validity  is the extent to which a measure “covers” the construct of interest. For example, if a researcher conceptually defines test anxiety as involving both sympathetic nervous system activation (leading to nervous feelings) and negative thoughts, then his measure of test anxiety should include items about both nervous feelings and negative thoughts. Or consider that attitudes are usually defined as involving thoughts, feelings, and actions toward something. By this conceptual definition, a person has a positive attitude toward exercise to the extent that he or she thinks positive thoughts about exercising, feels good about exercising, and actually exercises. So to have good content validity, a measure of people’s attitudes toward exercise would have to reflect all three of these aspects. Like face validity, content validity is not usually assessed quantitatively. Instead, it is assessed by carefully checking the measurement method against the conceptual definition of the construct.

Criterion Validity

Criterion validity is the extent to which people’s scores on a measure are correlated with other variables (known as criteria) that one would expect them to be correlated with. For example, people’s scores on a new measure of test anxiety should be negatively correlated with their performance on an important school exam. If it were found that people’s scores were in fact negatively correlated with their exam performance, then this would be a piece of evidence that these scores really represent people’s test anxiety. But if it were found that people scored equally well on the exam regardless of their test anxiety scores, then this would cast doubt on the validity of the measure.

A criterion can be any variable that one has reason to think should be correlated with the construct being measured, and there will usually be many of them. For example, one would expect test anxiety scores to be negatively correlated with exam performance and course grades and positively correlated with general anxiety and with blood pressure during an exam. Or imagine that a researcher develops a new measure of physical risk taking. People’s scores on this measure should be correlated with their participation in “extreme” activities such as snowboarding and rock climbing, the number of speeding tickets they have received, and even the number of broken bones they have had over the years. When the criterion is measured at the same time as the construct, criterion validity is referred to as concurrent validity; however, when the criterion is measured at some point in the future (after the construct has been measured), it is referred to as predictive validity (because scores on the measure have “predicted” a future outcome).

Criteria can also include other measures of the same construct. For example, one would expect new measures of test anxiety or physical risk taking to be positively correlated with existing measures of the same constructs. This is known as convergent validity.

Assessing convergent validity requires collecting data using the measure. Researchers John Cacioppo and Richard Petty did this when they created their self-report Need for Cognition Scale to measure how much people value and engage in thinking (Cacioppo & Petty, 1982)[1]. In a series of studies, they showed that people’s scores were positively correlated with their scores on a standardized academic achievement test, and that their scores were negatively correlated with their scores on a measure of dogmatism (a tendency toward rigid, closed-minded thinking). In the years since it was created, the Need for Cognition Scale has been used in literally hundreds of studies and has been shown to be correlated with a wide variety of other variables, including the effectiveness of an advertisement, interest in politics, and juror decisions (Petty, Briñol, Loersch, & McCaslin, 2009)[2].

Discriminant Validity

Discriminant validity, on the other hand, is the extent to which scores on a measure are not correlated with measures of variables that are conceptually distinct. For example, self-esteem is a general attitude toward the self that is fairly stable over time. It is not the same as mood, which is how good or bad one happens to be feeling right now. So people’s scores on a new measure of self-esteem should not be very highly correlated with their moods. If the new measure of self-esteem were highly correlated with a measure of mood, it could be argued that the new measure is not really measuring self-esteem; it is measuring mood instead.

When they created the Need for Cognition Scale, Cacioppo and Petty also provided evidence of discriminant validity by showing that people’s scores were not correlated with certain other variables. For example, they found only a weak correlation between people’s need for cognition and a measure of their cognitive style—the extent to which they tend to think analytically by breaking ideas into smaller parts or holistically in terms of “the big picture.” They also found no correlation between people’s need for cognition and measures of their test anxiety and their tendency to respond in socially desirable ways. All these low correlations provide evidence that the measure is reflecting a conceptually distinct construct.
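
Convergent and discriminant evidence can be summarized with the same correlational tools used earlier in the chapter. The sketch below simulates a new self-esteem measure, an established self-esteem measure, and a mood measure (all hypothetical data), then checks that the first correlation is high while the second is near zero:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100

# Simulated scores for illustration only.
established_self_esteem = rng.normal(size=n)
new_self_esteem = 0.8 * established_self_esteem + rng.normal(scale=0.6, size=n)
mood = rng.normal(size=n)

convergent_r = np.corrcoef(new_self_esteem, established_self_esteem)[0, 1]
discriminant_r = np.corrcoef(new_self_esteem, mood)[0, 1]
print(f"Convergent r (same construct, should be high):  {convergent_r:+.2f}")
print(f"Discriminant r (distinct construct, should be small): {discriminant_r:+.2f}")
```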

Key Takeaways

  • Psychological researchers do not simply assume that their measures work. Instead, they conduct research to show that they work. If they cannot show that they work, they stop using them.
  • There are two distinct criteria by which researchers evaluate their measures: reliability and validity. Reliability is consistency across time (test-retest reliability), across items (internal consistency), and across researchers (inter-rater reliability). Validity is the extent to which the scores actually represent the variable they are intended to.
  • Validity is a judgment based on various types of evidence. The relevant evidence includes the measure’s reliability, whether it covers the construct of interest, and whether the scores it produces are correlated with other variables they are expected to be correlated with and not correlated with variables that are conceptually distinct.
  • The reliability and validity of a measure are not established by any single study but by the pattern of results across multiple studies. The assessment of reliability and validity is an ongoing process.
  • Practice: Ask several friends to complete the Rosenberg Self-Esteem Scale. Then assess its internal consistency by making a scatterplot to show the split-half correlation (even- vs. odd-numbered items). Compute Pearson’s r too if you know how.
  • Discussion: Think back to the last college exam you took and think of the exam as a psychological measure. What construct do you think it was intended to measure? Comment on its face and content validity. What data could you collect to assess its reliability and criterion validity?
  • Cacioppo, J. T., & Petty, R. E. (1982). The need for cognition. Journal of Personality and Social Psychology, 42 , 116–131. ↵
  • Petty, R. E., Briñol, P., Loersch, C., & McCaslin, M. J. (2009). The need for cognition. In M. R. Leary & R. H. Hoyle (Eds.), Handbook of individual differences in social behaviour (pp. 318–329). New York, NY: Guilford Press. ↵

Glossary

Reliability: The consistency of a measure.

Test-retest reliability: The consistency of a measure over time.

Test-retest correlation: The consistency of a measure across the same group of people at different times.

Internal consistency: The consistency of people’s responses across the items on a multiple-item measure.

Split-half correlation: A method of assessing internal consistency by splitting the items into two sets and examining the relationship between them.

Cronbach’s α: A statistic that is, conceptually, the mean of all possible split-half correlations for a set of items.

Inter-rater reliability: The extent to which different observers are consistent in their judgments.

Validity: The extent to which the scores from a measure represent the variable they are intended to.

Face validity: The extent to which a measurement method appears to measure the construct of interest.

Content validity: The extent to which a measure “covers” the construct of interest.

Criterion validity: The extent to which people’s scores on a measure are correlated with other variables that one would expect them to be correlated with.

Criteria: In reference to criterion validity, the variables that one would expect to be correlated with the measure.

Concurrent validity: When the criterion is measured at the same time as the construct.

Predictive validity: When the criterion is measured at some point in the future (after the construct has been measured).

Convergent validity: When new measures positively correlate with existing measures of the same constructs.

Discriminant validity: The extent to which scores on a measure are not correlated with measures of variables that are conceptually distinct.

Research Methods in Psychology - 2nd Canadian Edition Copyright © 2015 by Paul C. Price, Rajiv Jhangiani, & I-Chant A. Chiang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Chapter Three: Research Methodology

3.9 Validity and Reliability

Data validity refers to the appropriateness, meaningfulness, and usefulness of the specific inferences made from measures (Dooley, 1990). That is, it concerns the effectiveness of the research instruments in measuring what they are intended to measure. Validity therefore belongs not just to a measure itself but depends on the relationship between the measure and the inferences drawn from it. Validity can be content validity (validity of the measuring instrument) or construct validity (the degree of relationship between the study problem, instruments, and variables). Reliability refers to the degree to which observed scores are “free from errors of measurement” (Dooley, 1990). Reliability can be estimated from the consistency of scores; for example, the agreement between different items of the same questionnaire, or between different raters using a measure, can be checked. The value of a measure depends not only on its reliability and validity but also on its specific purpose; a measure with modest reliability and validity may prove adequate for an initial study but too crude for making an important decision about a particular phenomenon. In order to reduce bias and improve reliability, multiple methods were employed in this study, namely interviews and questionnaires. Because this study used mixed methods, i.e. both qualitative and quantitative research were carried out, it is important to note that the approaches to ensuring reliability and validity differ significantly between the two.

3.9.1 Validity and reliability of qualitative research

In qualitative research, the appropriateness of validity and reliability is a hot topic of discussion. Some authors argue that validity and reliability are inappropriate in qualitative research, while others say these terms are relevant to qualitative research just as they are to quantitative research. For instance, Yardley (2008) argues that qualitative research accepts and works with the influence of errors caused by the researcher’s influence, whereas quantitative research depends on the elimination of such errors, and therefore concludes that validity and reliability are irrelevant to qualitative research. However, this argument contradicts the concept of rigour as elaborated by Aroni et al. (1999), which insists that a rigorous research process results in more trustworthy data. Some researchers have even explained how to improve the rigour of qualitative research and thereby ensure the validity and reliability of qualitative findings. Elliot et al. (1999) state that validity and reliability in qualitative research can be improved by credibility checks through feedback, coherence of a story, triangulation, and verification.

Phase one of this study adopted some of the methods mentioned by Elliot et al. (1999) to improve validity and reliability. The qualitative data were collected from three different sources: the incubator managers, well-informed incubatees, and the financiers. This provides an opportunity to check the validity and reliability of data from one source against another. For instance, the incubatees were asked what puts them in a better position to access finance, financiers were asked what makes them prefer to provide finance to incubatees, and incubator managers were asked what puts incubatees in a better position to access finance. After data triangulation, the answers showed a similar pattern, i.e. many concepts from different sources were in agreement; this supports the reliability of the data, in contrast to a situation in which the data differed greatly from one source to another.

3.9.2 Validity and reliability in quantitative research

In quantitative research, validity and reliability are very important measures of research quality. To ensure that the quantitative research was valid and reliable, the following steps were taken. Repeated readings of the developed questionnaire were carried out to check the correctness of the wording, whether the questions measure what they are supposed to measure, whether there is any bias, and whether respondents can understand the questions as the researcher intends. A pilot study was conducted to make sure the questionnaire yielded valid information; the pilot study showed that respondents understood the questions clearly, so the questionnaire was used for data collection. Factor analysis and reliability testing were done to ensure construct validity and reliability.

To ensure the validity of the survey in phase two of this study, the questionnaire was developed by the researcher before data collection, and two experts in the area of the study evaluated it and agreed that the questions effectively captured the topic under investigation. Secondly, a pilot study was done to see whether the respondents understood the questions and provided relevant answers. Thirdly, the collected data were subjected to factor analysis.

The reliability of the constructs was tested before and after factor analysis so as to ensure the reliability of the constructs and thereby improve the reliability of the inferential results. The table below presents the reliability results for all nine constructs in this study before and after factor analysis.

Table 3.15: Construct reliabilities before and after factor analysis

Construct | Items before | Cronbach's α before | Items after | Cronbach's α after
Business incubator's monitoring services | 6 | 0.713 | 5 | 0.732
Incubatee's financial management capabilities | 18 | 0.646 | 12 | 0.714
Incubatee's bonding social capital | 6 | 0.736 | 5 | 0.742
Incubatee's bridging social capital | 4 | 0.641 | 4 | 0.641
Incubatee's linking social capital | 4 | 0.888 | 4 | 0.888
Incubator manager's bonding social capital | 6 | 0.828 | 6 | 0.828
Incubator manager's bridging social capital | 4 | 0.913 | 4 | 0.913
Incubator manager's linking social capital | 4 | 0.864 | 4 | 0.864
MSMEs' financial accessibility | 8 | 0.840 | 8 | 0.840

The Cronbach’s alpha results in the table above are all at an acceptable level. However, comparing Cronbach’s alpha before and after factor analysis reveals slight differences. As stated in the factor analysis section, some variable items were eliminated by the factor analysis, and the reliability of the constructs whose items were reduced has therefore been affected. Comparing the construct reliabilities before and after factor analysis, as presented in Table 3.15, shows that factor analysis improved the reliability of some constructs.

The reliability of the business incubator’s monitoring services construct increased slightly after factor analysis, owing to the removal of one variable item, “Provision of qualified trainers”. Correspondingly, the reliability of incubatees’ financial management capabilities increased significantly after factor analysis, due to the reduction of variable items from 18 to 12. The reliability of incubatees’ bonding social capital also increased slightly, due to the reduction of variable items from 6 to 5. In the remaining constructs there were no changes: the number of variable items remained the same, and so did the reliability of the constructs, before and after factor analysis.
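
As a rough illustration of how such before-and-after comparisons can be made, the sketch below recomputes Cronbach’s alpha after dropping each item of a simulated six-item construct in turn. The data are invented, and this is not the procedure or software actually used in the study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    # Standard item-variance formula for Cronbach's alpha.
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                            / items.sum(axis=1).var(ddof=1))

# Simulated responses: five items driven by a common factor plus one weak item.
rng = np.random.default_rng(42)
factor = rng.normal(size=(120, 1))
items = np.hstack([factor + rng.normal(scale=1.0, size=(120, 5)),
                   rng.normal(size=(120, 1))])

print(f"Alpha with all 6 items: {cronbach_alpha(items):.3f}")
for j in range(items.shape[1]):
    reduced = np.delete(items, j, axis=1)
    print(f"Alpha if item {j + 1} is dropped: {cronbach_alpha(reduced):.3f}")
```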

Research Methodology

First Online: 29 June 2019


Vaneet Kaur

Part of the book series: Innovation, Technology, and Knowledge Management (ITKM)


The chapter presents the methodology employed to examine the framework developed during the literature review for the purposes of the present study. In light of the research objectives, the chapter sets out the ontology, epistemology, and methodology adopted for the study. The research is based on a positivist philosophy, which postulates that phenomena of interest in the social world can be studied as concrete cause-and-effect relationships, following a quantitative research design and a deductive approach. Consequently, the present study has used the existing body of literature to deduce relationships between constructs and develops a strategy to test the proposed theory, with the ultimate objective of confirming and building upon the existing knowledge in the field. Further, the chapter presents a roadmap for the study, which shows the journey towards achieving the research objectives in a series of well-defined logical steps. The process followed for building the survey instrument as well as the sampling design is laid down in a similar manner. While the survey design enumerates the various methods adopted along with justifications, the sampling design sets forth the target population, sampling frame, sampling units, sampling method, and suitable sample size for the study. The chapter also spells out the operational definitions of the key variables before exhibiting the three-stage research process followed in the present study. In the first stage, the questionnaire was developed based upon key constructs from various theories and researchers in the field. Thereafter, the draft questionnaire was refined with the help of a pilot study, and its reliability and validity were tested. Finally, in light of the results of the pilot study, the questionnaire was finalized and the final data were collected. In doing so, the step-by-step process of gathering data from various sources is presented. Towards the end, the chapter turns to the various statistical methods employed for the analysis of data, along with the rationale for selecting the specific techniques used to present the outcomes of the present research.



Mendonca, J., & Sen, A. (2016). IT companies including TCS, Infosys, Wipro bracing for slowest topline expansion on annual basis. Retrieved February 19 2017 from http://economictimes.indiatimes.com/markets/stocks/earnings/it-companies-including-tcs-infosys-wipro-bracing-for-slowest-topline-expansion-on-annual-basis/articleshow/51639858.cms .

Mesina, F., De Deyne, C., Judong, M., Vandermeersch, E., & Heylen, R. (2005). Quality survey of pre-operative assessment: Influence of a standard questionnaire: A-38. European Journal of Anaesthesiology (EJA), 22 , 11.

Michailova, S., & Zhan, W. (2014). Dynamic capabilities and innovation in MNC subsidiaries. Journal of World Business , 1–9.

Miller, R., Salmona, M., & Melton, J. (2012). Modeling student concern for professional online image. Journal of Internet Social Networking & Virtual Communities, 3 (2), 1.

Minarro-Viseras, E., Baines, T., & Sweeney, M. (2005). Key success factors when implementing strategic manufacturing initiatives. International Journal of Operations & Production Management, 25 (2), 151–179.

Monferrer, D., Blesa, A., & Ripollés, M. (2015). Catching dynamic capabilities through market-oriented networks. European Journal of International Management, 9 (3), 384–408.

Moyer, J. E. (2007). Learning from leisure reading: A study of adult public library patrons. Reference & User Services Quarterly, 46 , 66–79.

Mulaik, S. A., James, L. R., Van Alstine, J., Bennett, N., Lind, S., & Stilwell, C. D. (1989). Evaluation of goodness-of-fit indices for structural equation models. Psychological Bulletin, 105 (3), 430–445.

Murphy, T. H., & Terry, H. R. (1998). Faculty needs associated with agricultural distance education. Journal of Agricultural Education, 39 , 17–27.

Murphy, C., Hearty, C., Murray, M., & McCaul, C. (2005). Patient preferences for desired post-anaesthesia outcomes-a comparison with medical provider perspective: A-40. European Journal of Anaesthesiology (EJA), 22 , 11.

Nair, A., Rustambekov, E., McShane, M., & Fainshmidt, S. (2014). Enterprise risk management as a dynamic Capability: A test of its effectiveness during a crisis. Managerial and Decision Economics, 35 , 555–566.

Nandan, S. (2010). Determinants of customer satisfaction on service quality: A study of railway platforms in India. Journal of Public Transportation, 13 (1), 6.

NASSCOM Indian IT-BPM Industry Report. (2016). NASSCOM Indian IT-BPM Industry Report 2016. Retrieved January 11, 2017 from http://www.nasscom.in/itbpm-sector-india-strategic-review-2016 .

Nedzinskas, Š. (2013). Dynamic capabilities and organizational inertia interaction in volatile environment. Retrieved from http://archive.ism.lt/handle/1/301 .

Nguyen, T. N. Q. (2010). Knowledge management capability and competitive advantage: An empirical study of Vietnamese enterprises.

Nguyen, N. T. D., & Aoyama, A. (2014). Achieving efficient technology transfer through a specific corporate culture facilitated by management practices. The Journal of High Technology Management Research, 25 (2), 108–122.

Nguyen, Q. T. N., & Neck, P. A. (2008, July). Knowledge management as dynamic capabilities: Does it work in emerging less developed countries. In Proceedings of the 16th Annual Conference on Pacific Basin Finance, Economics, Accounting and Management (pp. 1–18).

Nieves, J., & Haller, S. (2014). Building dynamic capabilities through knowledge resources. Tourism Management, 40 , 224–232.

Nirmal, R. (2016). Indian IT firms late movers in digital race. Retrieved February 19, 2017 from http://www.thehindubusinessline.com/info-tech/indian-it-firms-late-movers-in-digital-race/article8505379.ece .

Numthavaj, P., Bhongmakapat, T., Roongpuwabaht, B., Ingsathit, A., & Thakkinstian, A. (2017). The validity and reliability of Thai Sinonasal outcome Test-22. European Archives of Oto-Rhino-Laryngology, 274 (1), 289–295.

Obwoge, M. E., Mwangi, S. M., & Nyongesa, W. J. (2013). Linking TVET institutions and industry in Kenya: Where are we. The International Journal of Economy, Management and Social Science, 2 (4), 91–96.

Oktemgil, M., & Greenley, G. (1997). Consequences of high and low adaptive capability in UK companies. European Journal of Marketing, 31 (7), 445–466.

Ouyang, Y. (2015). A cyclic model for knowledge management capability-a review study. Arabian Journal of Business and Management Review, 5 (2), 1–9.

Paloniemi, R., & Vainio, A. (2011). Legitimacy and empowerment: Combining two conceptual approaches for explaining forest owners’ willingness to cooperate in nature conservation. Journal of Integrative Environmental Sciences, 8 (2), 123–138.

Pant, S., & Lado, A. (2013). Strategic business process offshoring and Competitive advantage: The role of strategic intent and absorptive capacity. Journal of Information Science and Technology, 9 (1), 25–58.

Paramati, S. R., Gupta, R., Maheshwari, S., & Nagar, V. (2016). The empirical relationship between the value of rupee and performance of information technology firms: Evidence from India. International Journal of Business and Globalisation, 16 (4), 512–529.

Parida, V., Oghazi, P., & Cedergren, S. (2016). A study of how ICT capabilities can influence dynamic capabilities. Journal of Enterprise Information Management, 29 (2), 1–22.

Parkhurst, K. A., Conwell, Y., & Van Orden, K. A. (2016). The interpersonal needs questionnaire with a shortened response scale for oral administration with older adults. Aging & Mental Health, 20 (3), 277–283.

Payne, A. A., Gottfredson, D. C., & Gottfredson, G. D. (2006). School predictors of the intensity of implementation of school-based prevention programs: Results from a national study. Prevention Science, 7 (2), 225–237.

Pereira-Moliner, J., Font, X., Molina-Azorín, J., Lopez-Gamero, M. D., Tarí, J. J., & Pertusa-Ortega, E. (2015). The holy grail: Environmental management, competitive advantage and business performance in the Spanish hotel industry. International Journal of Contemporary Hospitality Management, 27 (5), 714–738.

Persada, S. F., Razif, M., Lin, S. C., & Nadlifatin, R. (2014). Toward paperless public announcement on environmental impact assessment (EIA) through SMS gateway in Indonesia. Procedia Environmental Sciences, 20 , 271–279.

Pertusa-Ortega, E. M., Molina-Azorín, J. F., & Claver-Cortés, E. (2010). Competitive strategy, structure and firm performance: A comparison of the resource-based view and the contingency approach. Management Decision, 48 (8), 1282–1303.

Peters, M. D., Wieder, B., Sutton, S. G., & Wake, J. (2016). Business intelligence systems use in performance measurement capabilities: Implications for enhanced competitive advantage. International Journal of Accounting Information Systems, 21 (1–17), 1–17.

Protogerou, A., Caloghirou, Y., & Lioukas, S. (2011). Dynamic capabilities and their indirect impact on firm performance. Industrial and Corporate Change, 21 (3), 615–647.

Rapiah, M., Wee, S. H., Ibrahim Kamal, A. R., & Rozainun, A. A. (2010). The relationship between strategic performance measurement systems and organisational competitive advantage. Asia-Pacific Management Accounting Journal, 5 (1), 1–20.

Reuner, T. (2016). HfS blueprint Report, ServiceNow services 2016, excerpt for Cognizant. Retrieved February 2, 2017 from https://www.cognizant.com/services-resources/Services/hfs-blueprint-report-servicenow-2016.pdf .

Ríos, V. R., & del Campo, E. P. (2013). Business research methods: Theory and practice . Madrid: ESIC Editorial.

Sachitra, V. (2015). Review of Competitive advantage measurements: The case of agricultural firms. IV, 303–317.

Sahney, S., Banwet, D. K., & Karunes, S. (2004). Customer requirement constructs: The premise for TQM in education: A comparative study of select engineering and management institutions in the Indian context. International Journal of Productivity and Performance Management, 53 (6), 499–520.

Sampe, F. (2012). The influence of organizational learning on performance in Indonesian SMEs.

Sarlak, M. A., Shafiei, M., Sarlak, M. A., Shafiei, M., Capability, M., Capability, I., & Competitive, S. (2013). A research in relationship between entrepreneurship, marketing Capability, innovative Capability and sustainable Competitive advantage. Kaveh Industrial City, 7 (8), 1490–1497.

Saunders, M., Lewis, P., & Thornhill, A. (2012). Research methods for business students . Pearson.

Schiff, J. H., Fornaschon, S., Schiff, M., Martin, E., & Motsch, J. (2005). Measuring patient dissatisfaction with anethesia care: A-41. European Journal of Anaesthesiology (EJA), 22 , 11.

Schwartz, S. J., Coatsworth, J. D., Pantin, H., Prado, G., Sharp, E. H., & Szapocznik, J. (2006). The role of ecodevelopmental context and self-concept in depressive and externalizing symptoms in Hispanic adolescents. International Journal of Behavioral Development, 30 (4), 359–370.

Scott, V. C., Sandberg, J. G., Harper, J. M., & Miller, R. B. (2012). The impact of depressive symptoms and health on sexual satisfaction for older couples: Implications for clinicians. Contemporary Family Therapy, 34 (3), 376–390.

Shafia, M. A., Shavvalpour, S., Hosseini, M., & Hosseini, R. (2016). Mediating effect of technological innovation capabilities between dynamic capabilities and competitiveness of research and technology organisations. Technology Analysis & Strategic Management, 28 , 1–16. https://doi.org/10.1080/09537325.2016.1158404 .

Shahzad, K., Faisal, A., Farhan, S., Sami, A., Bajwa, U., & Sultani, R. (2016). Integrating knowledge management (KM) strategies and processes to enhance organizational creativity and performance: An empirical investigation. Journal of Modelling in Management, 11 (1), 1–34.

Sharma, A. (2016). Five reasons why you should avoid investing in IT stocks. Retrieved February 19, 2017 from http://www.businesstoday.in/markets/company-stock/five-reasons-why-you-should-avoid-investing-in-infosys-tcs-wipro/story/238225.html .

Sharma, J. K., & Singh, A. K. (2012). Absorptive capability and competitive advantage: Some insights from Indian pharmaceutical Industry. International Journal of Management and Business Research, 2 (3), 175–192.

Shepherd, R. M., & Edelmann, R. J. (2005). Reasons for internet use and social anxiety. Personality and Individual Differences, 39 (5), 949–958.

Singh, R., & Khanduja, D. (2010). Customer requirements grouping–a prerequisite for successful implementation of TQM in technical education. International Journal of Management in Education, 4 (2), 201–215.

Small, M. J., Gupta, J., Frederic, R., Joseph, G., Theodore, M., & Kershaw, T. (2008). Intimate partner and nonpartner violence against pregnant women in rural Haiti. International Journal of Gynecology & Obstetrics, 102 (3), 226–231.

Srivastava, M. (2016). IT biggies expect weaker Sept quarter. Retrieved February 19, 2017 from http://www.business-standard.com/article/companies/it-biggies-expect-weaker-sept-quarter-116100400680_1.html .

Stoten, D. W. (2016). Discourse, knowledge and power: The continuing debate over the DBA. Journal of Management Development, 35 (4), 430–447.

Sudarvel, J., & Velmurugan, R. (2015). Semi month effect in Indian IT sector with reference to BSE IT index. International Journal of Advance Research in Computer Science and Management Studies, 3 (10), 155–159.

Sylvia, M., & Terhaar, M. (2014). An approach to clinical data Management for the Doctor of nursing practice curriculum. Journal of Professional Nursing, 30 (1), 56–62.

Tabachnick, B. G., & Fidell, L. S. (2007). Multivariate analysis of variance and covariance. Using Multivariate Statistics, 3 , 402–407.

Teece, D. J. (2014). The foundations of Enterprise performance: Dynamic and ordinary capabilities in an (economic) theory of firms. The Academy of Management Perspectives, 28 (4), 328–352.

Teece, D. J., Pisano, G., & Shuen, A. (1997). Dynamic capabilities and strategic management. Strategic Management Journal, 18 (7), 509–533.

Thomas, J. B., Sussman, S. W., & Henderson, J. C. (2001). Understanding “strategic learning”: Linking organizational learning, knowledge management, and sensemaking. Organization Science, 12 (3), 331–345.

Travis, S. E., & Grace, J. B. (2010). Predicting performance for ecological restoration: A case study using Spartina alterniflora. Ecological Applications, 20 (1), 192–204.

Tseng, S., & Lee, P. (2014). The effect of knowledge management capability and dynamic capability on organizational performance. Journal of Enterprise Information Management, 27 (2), 158–179.

Turker, D. (2009). Measuring corporate social responsibility: A scale development study. Journal of Business Ethics, 85 (4), 411–427.

Vanham, D., Mak, T. N., & Gawlik, B. M. (2016). Urban food consumption and associated water resources: The example of Dutch cities. Science of the Total Environment, 565 , 232–239.

Visser, P. S., Krosnick, J. A., & Lavrakas, P. J. (2000). Survey research. In H.T. Reis & C.M. Judd (Eds.), Handbook of research methods in social and personality psychology (pp. 223-252). New York: Cambridge.

Vitale, G., Sala, F., Consonni, F., Teruzzi, M., Greco, M., Bertoli, E., & Maisano, P. (2005). Perioperative complications correlate with acid-base balance in elderly trauma patients: A-37. European Journal of Anaesthesiology (EJA), 22 , 10–11.

Wang, C. L., & Ahmed, P. K. (2004). Leveraging knowledge in the innovation and learning process at GKN. International Journal of Technology Management, 27 (6/7), 674–688.

Wang, C. L., Senaratne, C., & Rafiq, M. (2015). Success traps, dynamic capabilities and firm performance. British Journal of Management, 26 , 26–44.

Wasswa Katono, I. (2011). Student evaluation of e-service quality criteria in Uganda: The case of automatic teller machines. International Journal of Emerging Markets, 6 (3), 200–216.

Wasylkiw, L., Currie, M. A., Meuse, R., & Pardoe, R. (2010). Perceptions of male ideals: The power of presentation. International Journal of Men's Health, 9 (2), 144–153.

Wilhelm, H., Schlömer, M., & Maurer, I. (2015). How dynamic capabilities affect the effectiveness and efficiency of operating routines under high and Low levels of environmental dynamism. British Journal of Management , 1–19.

Wilkens, U., Menzel, D., & Pawlowsky, P. (2004). Inside the black-box : Analysing the generation of Core competencies and dynamic capabilities by exploring collective minds. An organizational learning perspective. Management Review, 15 (1), 8–27.

Willemsen, M. C., & de Vries, H. (1996). Saying “no” to environmental tobacco smoke: Determinants of assertiveness among nonsmoking employees. Preventive Medicine, 25 (5), 575–582.

Williams, M., Peterson, G. M., Tenni, P. C., & Bindoff, I. K. (2012). A clinical knowledge measurement tool to assess the ability of community pharmacists to detect drug-related problems. International Journal of Pharmacy Practice, 20 (4), 238–248.

Wintermark, M., Huss, D. S., Shah, B. B., Tustison, N., Druzgal, T. J., Kassell, N., & Elias, W. J. (2014). Thalamic connectivity in patients with essential tremor treated with MR imaging–guided focused ultrasound: In vivo Fiber tracking by using diffusion-tensor MR imaging. Radiology, 272 (1), 202–209.

Wipro Annual Report. (2015). Wipro annual report 2014–15. Retrieved February 16, 2017 from http://www.wipro.com/documents/investors/pdf-files/Wipro-annual-report-2014-15.pdf .

Wu, J., & Chen, X. (2012). Leaders’ social ties, knowledge acquisition capability and firm competitive advantage. Asia Pacific Journal of Management, 29 (2), 331–350.

Yamane, T. (1967). Elementary Sampling Theory Prentice Inc. Englewood Cliffs. NS, USA, 1, 371–390.

Zahra, S., Sapienza, H. J., & Davidsson, P. (2006). Entrepreneurship and dynamic capabilities: A review, model and research agenda. Journal of Management Studies, 43 (4), 917–955.

Zaied, A. N. H. (2012). An integrated knowledge management capabilities framework for assessing organizational performance. International Journal of Information Technology and Computer Science, 4 (2), 1–10.

Zakaria, Z. A., Anuar, H. S., & Udin, Z. M. (2015). The relationship between external and internal factors of information systems success towards employee performance: A case of Royal Malaysia custom department. International Journal of Economics, Finance and Management, 4 (2), 54–60.

Zheng, S., Zhang, W., & Du, J. (2011). Knowledge-based dynamic capabilities and innovation in networked environments. Journal of Knowledge Management, 15 (6), 1035–1051.

Zikmund, W. G., Babin, B. J., Carr, J. C., & Griffin, M. (2010). Business research methods . Mason: South Western Cengage Learning.


Methods for Identifying Health Research Gaps, Needs, and Priorities: a Scoping Review

Eunice C. Wong, Alicia R. Maher, Aneesa Motala, Rachel Ross, Olamigoke Akinniranye, Jody Larkin, and Susanne Hempel

1 RAND Corporation, Santa Monica, CA, USA

2 Department of Population and Public Health Sciences, University of Southern California Gehr Family Center for Health Systems Science & Innovation, Los Angeles, USA

Well-defined, systematic, and transparent processes to identify health research gaps, needs, and priorities are vital to ensuring that available funds target areas with the greatest potential for impact.

The purpose of this review is to characterize methods conducted or supported by research funding organizations to identify health research gaps, needs, or priorities.

We searched MEDLINE, PsycINFO, and the Web of Science up to September 2019. Eligible studies reported on methods to identify health research gaps, needs, and priorities that had been conducted or supported by research funding organizations. Using a published protocol, we extracted data on the method, criteria, involvement of stakeholders, evaluations, and whether the method had been replicated (i.e., used in other studies).

Among 10,832 citations, 167 studies were eligible for full data extraction. More than half of the studies employed methods to identify both needs and priorities, whereas about a quarter of studies focused singularly on identifying gaps (7%), needs (6%), or priorities (14%) only. The most frequently used methods were the convening of workshops or meetings (37%), quantitative methods (32%), and the James Lind Alliance approach, a multi-stakeholder research needs and priority setting process (28%). The most widely applied criteria were importance to stakeholders (72%), potential value (29%), and feasibility (18%). Stakeholder involvement was most prominent among clinicians (69%), researchers (66%), and patients and the public (59%). Stakeholders were identified through stakeholder organizations (51%) and purposive (26%) and convenience sampling (11%). Only 4% of studies evaluated the effectiveness of the methods and 37% employed methods that were reproducible and used in other studies.

To ensure optimal targeting of funds to meet the greatest areas of need and maximize outcomes, a much more robust evidence base is needed to ascertain the effectiveness of methods used to identify research gaps, needs, and priorities.

Supplementary Information

The online version contains supplementary material available at 10.1007/s11606-021-07064-1.

Introduction

Well-defined, systematic, and transparent methods to identify health research gaps, needs, and priorities are vital to ensuring that available funds target areas with the greatest potential for impact. 1 , 2 As defined in the literature, 3 , 4 research gaps are areas or topics in which the ability to draw a conclusion for a given question is prevented by insufficient evidence. Research gaps are not necessarily synonymous with research needs , which are those knowledge gaps that significantly inhibit the decision-making ability of key stakeholders, who are end users of research, such as patients, clinicians, and policy makers. The selection of research priorities is often necessary when all identified research gaps or needs cannot be pursued because of resource constraints. Methods to identify health research gaps, needs, and priorities (hereafter referred to as gaps, needs, and priorities) vary widely, and there does not appear to be general consensus on best practices. 3 , 5

Several published reviews highlight the diverse methods that have been used to identify gaps and priorities. In a review of methods used to identify gaps from systematic reviews, Robinson et al. noted the wide range of organizing principles that were employed in published literature between 2001 and 2009 (e.g., care pathway, decision tree, and the patient, intervention, comparison, outcome framework). 6 In a more recent review spanning 2007 to 2017, Nyanchoka et al. found that the vast majority of studies with a primary focus on the identification of gaps (83%) relied solely on knowledge synthesis methods (e.g., systematic review, scoping review, evidence mapping, literature review). A much smaller proportion (9%) relied exclusively on primary research methods (i.e., quantitative survey, qualitative study). 7

With respect to research priorities, in a review limited to a PubMed database search covering the period from 2001 to 2014, Yoshida documented a wide range of methods to identify priorities including the use of not only knowledge synthesis (i.e., literature reviews) and primary research methods (i.e., surveys) but also multi-stage, structured methods such as Delphi, Child Health and Nutrition Research Initiative (CHNRI), James Lind Alliance Priority Setting Partnership (JLA PSP), and Essential National Health Research (ENHR). 2 The CHNRI method, originally developed for the purpose of setting global child health research priorities, typically employs researchers and experts to specify a long list of research questions, the criteria that will be used to prioritize research questions, and the technical scoring of research questions using the defined criteria. 8 During the latter stages, non-expert stakeholders’ input is incorporated by using their ratings of the importance of selected criteria to weight the technical scores. The ENHR method, initially designed for health research priority setting at the national level, involves researchers, decision-makers, health service providers, and communities throughout the entire process of identifying and prioritizing research topics. 9 The JLA PSP method convenes patients, carers, and clinicians as equal partners to jointly identify questions about healthcare that are important to all groups and cannot be answered by existing evidence (i.e., research needs). 10 The identified research needs are then prioritized by the groups, resulting in a final list (often a top 10) of research priorities. Non-clinical researchers are excluded from voting on research needs or priorities but can be involved in other processes (e.g., knowledge synthesis). CHNRI, ENHR, and JLA PSP usually employ a mix of knowledge synthesis and primary research methods to first identify a set of gaps or needs that are then prioritized. Thus, even though CHNRI, ENHR, and JLA PSP have been referred to as priority setting methods, they actually consist of a gaps or needs identification stage that feeds into a research prioritization stage.

Nyanchoka et al.’s review found that the majority of studies focused on the identification of gaps alone (65%), whereas the remaining studies focused either on research priorities alone (17%) or on both gaps and priorities (19%). 7 In an update to Robinson et al.’s review, 6 Carey et al. reviewed the literature between 2010 and 2011 and observed that the studies conducted during this latter period of time focused more on research priorities than gaps and had increased stakeholder involvement, and that none had evaluated the reproducibility of the methods. 11

The increasing development and diversity of formal processes and methods to identify gaps and priorities are indicative of a developing field. 2 , 12 To facilitate more standardized and systematic processes, other important areas warrant further investigation. Prior reviews did not distinguish between the identification of gaps versus research needs. The Agency for Healthcare Research and Quality Evidence-based Practice Center (AHRQ EPC) Program issued a series of method papers related to establishing research needs as part of comparative effectiveness research. 13 – 15 The AHRQ EPC Program defined research needs as “evidence gaps” identified within systematic reviews that are prioritized by stakeholders according to their potential impact on practice or care. 16 Furthermore, Nyanchoka et al. relied on author designations to classify studies as focusing on gaps versus research priorities and noted that definitions of gaps varied across studies, highlighting the need to apply consistent taxonomy when categorizing studies in reviews. 7 Given the rise in the use of stakeholders in both gaps and prioritization exercises, a greater understanding of the range of practices involving stakeholders is also needed. This includes the roles and responsibilities of stakeholders (e.g., consultants versus final decision-makers), the composition of stakeholders (e.g., non-research clinicians, patients, caregivers, policymakers), and the methods used to recruit stakeholders. The lack of consensus of best practices also highlights the importance of learning the extent to which evaluations to determine the effectiveness of gaps, needs, and prioritization exercises have been conducted, and if so, what were the resultant outcomes.

To better inform efforts and organizations that fund health research, we conducted a scoping review of methods used to identify gaps, needs, and priorities that were linked to potential or actual health research funding decision-making. Hence, this scoping review was limited to studies in which the identification of health research gaps, needs, or priorities was supported or conducted by funding organizations, to address the following questions: (1) What are the characteristics of methods to identify health research gaps, needs, and priorities? and (2) To what extent have evaluations of the impact of these methods been conducted? Given that scoping reviews may be executed to characterize the ways an area of research has been conducted, 17 , 18 this approach is appropriate for the broad nature of this study’s aims.

Methods

Protocol and Registration

We employed methods that conform to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews. 19 See Appendix A in the Supplementary Information. The scoping review protocol is registered with the Open Science Framework ( https://osf.io/5zjqx/ ).

Eligibility Criteria

Studies published in English that described methods to identify health research gaps, needs, or priorities that were supported or conducted by funding organizations were eligible for inclusion. We excluded studies that reported only the results of the exercise (e.g., a list of priorities) without information on the methods used. We also excluded studies involving evidence synthesis (e.g., literature or systematic reviews) that were solely descriptive and did not employ an explicit method to identify research gaps, needs, or priorities.

Information Sources and Search Strategy

We searched the following electronic databases: MEDLINE, PsycINFO, and Web of Science. Our database search also included an update of the Nyanchoka et al. scoping review, which entailed executing their database searches for the time period following 2017 (the study’s search end date). 7 Nyanchoka et al. did not include database searches for research needs. The electronic database search and scoping review update were completed in August and September 2019, respectively. The search strategy employed for each of the databases is presented in Appendix B in the Supplementary Information.

Selection of Sources of Evidence and Data Charting Process

Two reviewers screened titles and abstracts and full-text publications. Citations that one or both reviewers considered potentially eligible were retrieved for full-text review. Relevant background articles and scoping and systematic reviews were reference mined to screen for eligible studies. Full-text publications were screened against detailed inclusion and exclusion criteria. Data was extracted by one reviewer and checked by a second reviewer. Discrepancies were resolved through discussion by the review team.

Information on study characteristics was extracted from each article including the aims of the exercise (i.e., gaps, needs, priorities, or a combination) and health condition (i.e., physical or psychological). Based on definitions in the literature, 3 – 5 the aims of the exercise were coded according to the activities that were conducted, which may not have always corresponded with the study authors’ labeling of the exercises. For instance, the JLA PSP method is often described as a priority exercise but we categorized it as a needs and priority exercise. Priority exercises can be preceded by exercises to identify gaps or needs, which then feed into the priority exercise such as in JLA PSP; however, standalone priority exercises can also be conducted (e.g., stakeholders prioritize an existing list of emerging diseases).

For each type of exercise, information on the methods was recorded. An initial list of methods was created based on previous reviews. 9 , 12 , 20 During the data extraction process, any methods not included in the initial list were subsequently added. If more than one exercise was reported within an article (e.g., gaps and priorities), information was extracted for each exercise separately. Reviewers extracted the following information: methods employed (e.g., qualitative, quantitative), criteria used (e.g., disease burden, importance to stakeholders), stakeholder involvement (e.g., stakeholder composition, method for identifying stakeholders), and whether an evaluation was conducted on the effectiveness of the exercise (see Appendix C in the Supplementary Information for the full data extraction form).

Synthesis of results entailed descriptive statistics of study characteristics (e.g., proportion of studies by aims of exercise) and of the characteristics of methods employed across all studies and by each type of study (e.g., gaps, needs, priorities).

Results

The electronic database search yielded a total of 10,548 titles. Another 284 articles were identified after searching the reference lists of full-text publications, including three systematic reviews 21 – 23 and one scoping review 24 that had met eligibility criteria. Moreover, a total of 99 publications designated as relevant background articles were also reference mined to screen for eligible studies. We conducted full-text screening for 2524 articles, which resulted in 2344 exclusions (440 studies were designated as background articles). A total of 167 exercises related to the identification of gaps, needs, or priorities that were supported or conducted by a research funding organization were described across 180 publications and underwent full data extraction. See Figure 1 for the flow diagram of our search strategy and reasons for exclusion.

Figure 1. Literature flow

Characteristics of Sources of Evidence

Among the published exercises, the majority of studies (152/167) conducted gaps, needs, or prioritization exercises related to physical health, whereas only a small fraction of studies focused on psychological health (12/167) (see Appendix D in the Supplementary Information).

Methods for Identifying Gaps, Needs, and Priorities

As seen in Table 1, only about a quarter of studies involved a singular type of exercise with 7% focused on the identification of gaps only (i.e., areas with insufficient information to draw a conclusion for a given question), 6% on needs only (i.e., knowledge gaps that inhibit the decision-making of key stakeholders), and 14% priorities only (i.e., ranked gaps or needs often because of resource constraints). Studies more commonly conducted a combination of multiple types of exercises with more than half focused on the identification of both research needs and priorities, 14% on gaps and priorities, 3% gaps, needs, and priorities, and 3% gaps and needs.

Table 1. Methods for Identifying Health Research Gaps, Needs, and Priorities (number and percentage of the 167 studies using each method)

  • Framework tool: 6 (4%)
  • JLA PSP: 46 (28%)
  • ENHR: 2 (1%)
  • CHNRI: 11 (7%)
  • Systematic review: 1 (1%)
  • Literature review: 29 (17%)
  • Evidence mapping: 1 (1%)
  • Qualitative methods: 28 (17%)
  • Quantitative methods: 54 (32%)
  • Consensus methods: 22 (13%)
  • Workshop/conference: 61 (37%)
  • Stakeholder consultation: 7 (4%)
  • Review of in-progress data: 12 (7%)
  • Review of source materials: 25 (15%)
  • Other: 28 (17%)

JLA PSP, James Lind Alliance Priority Setting Partnerships; ENHR, Essential National Health Research; CHNRI, Child Health and Nutrition Research Initiative. Counts may add up to more than the total N, and percentages to more than 100%, since some studies employed more than one method.

Across the 167 studies, the three most frequently used methods were the convening of workshops/meetings/conferences (37%), quantitative methods (32%), and the JLA PSP approach (28%). This was followed by methods involving literature reviews (17%), qualitative methods (17%), consensus methods (13%), and reviews of source materials (15%). Other methods included the CHNRI process (7%), reviews of in-progress data (7%), consultation with (non-researcher) stakeholders (4%), applying a framework tool (4%), ENHR (1%), systematic reviews (1%), and evidence mapping (1%).

The criterion most widely applied across the 167 studies was importance to stakeholders (72%) (see Table 2). Almost one-third of studies (29%) considered potential value, and 18% considered feasibility. Burden of disease (9%), addressing inequities (8%), costs (6%), alignment with the organization’s mission (3%), and patient centeredness (2%) were adopted as criteria to a lesser extent.

Table 2. Criteria for Identifying Health Research Gaps, Needs, and Priorities (number and percentage of the 167 studies applying each criterion)

  • Costs: 10 (6%)
  • Burden of disease: 15 (9%)
  • Importance to stakeholders: 120 (72%)
  • Patient centeredness: 4 (2%)
  • Aligned with organization mission: 5 (3%)
  • Potential value: 49 (29%)
  • Potential risk from inaction: 5 (3%)
  • Addresses inequities: 13 (8%)
  • Feasibility: 30 (18%)
  • Other: 37 (22%)
  • Not reported: 14 (8%)
  • Not applicable: 13 (8%)
  • Unclear: 12 (7%)

Counts may add up to more than the total N, and percentages to more than 100%, since some studies employed more than one criterion.

About two-thirds of the studies included researchers (66%) and clinicians (69%) as stakeholders (see Appendix E in the Supplementary Information). Patients and the public were involved in 59% of the studies. A smaller proportion included policy makers (20%), funders (13%), product makers (8%), payers (5%), and purchasers (2%) as stakeholders. Nearly half of the studies (51%) relied on stakeholder organizations to identify stakeholders (see Appendix F in the Supplementary Information). A quarter of studies (26%) used purposive sampling and some convenience sampling (11%). Few (9%) used snowball sampling to identify stakeholders. Only a minor fraction of studies, seven of the 167 (4%), reported some type of effectiveness evaluation. 25 – 31

Discussion

Our scoping review revealed that approaches to identifying gaps, needs, and priorities are less likely to occur as discrete processes and more often involve a combination of exercises. Approaches encompassing multiple exercises (e.g., gaps and needs) were far more prevalent than singular standalone exercises (e.g., gaps only) (73% vs. 27%). Findings underscore the varying importance placed on gaps, needs, and priorities, which reflects key principles of the Value of Information approach (i.e., not all gaps are important, addressing gaps does not necessarily address needs, nor does addressing needs necessarily address priorities). 32

Findings differ from Nyanchoka et al.’s review in which studies involving the identification of gaps only outnumbered studies involving both gaps and priorities. 7 However, Nyanchoka et al. relied on author definitions to categorize exercises, whereas our study made designations based on our review of the activities described in the article and applied definitions drawn from the literature. 3 , 4 Lack of consensus on definitions of gaps and priority setting has been noted in the literature. 33 , 34 To the authors’ knowledge, no prior scoping review has focused on methods related to the identification of “research needs.” Findings underscore the need to develop and apply more consistent taxonomy to this growing field of research.

More than 40% of studies employed methods with a structured protocol including JLA PSP, ENHR, CHNRI, World Café, and the Dialogue model. 10 , 35 – 40 The World Café and Dialogue models particularly value the experiential perspectives of stakeholders. The World Café centers on creating a special environment, often modeled after a café, in which rounds of multi-stakeholder, small-group conversations are facilitated and prefaced with questions designed for the specific purpose of the session. Insights and results are reported and shared back to the entire group with no expectation to achieve consensus, but rather diverse perspectives are encouraged. 36 The Dialogue model is a multi-stakeholder, participatory, priority setting method involving the following phases: exploratory (informal discussions), consultation (separate stakeholder consultations), prioritization (stakeholder ratings), and integration (dialog between stakeholders). 39 Findings may indicate a trend away from non-replicable methods to approaches that afford greater transparency and reproducibility. 41 For instance, of the 17 studies published between 2000 and 2009, none had employed CHNRI and 6% used JLA PSP compared to the 141 studies between 2010 and 2019 in which 8% applied CHNRI and 32% JLA PSP. However, notable variations in implementing CHNRI and JLA PSP have been observed. 41 – 43 Though these protocols help to ensure a more standardized process, which is essential when testing the effectiveness of methods, such evaluations are infrequent but necessary to establish the usefulness of replicable methods.

Convening workshops, meetings, or conferences was the method used by the greatest proportion of studies (37%). The operationalization of even this singular method varied widely in duration (e.g., single vs. multi-day conferences), format (e.g., expert panel presentations, breakout discussion groups), processes (e.g., use of formal/informal consensus methods), and composition of stakeholders. The operationalization of other methods (e.g., quantitative, qualitative) also exhibited great diversity.

The use of explicit criteria to determine gaps, needs, or priorities is a key component of certain structured protocols 40 , 44 and frameworks. 9 , 45 In our scoping review, the criterion applied most frequently across studies (72%) was “importance to stakeholders” followed by potential value (29%) and feasibility (18%). Stakeholder values are being incorporated into gaps, needs, and priorities exercises across a significant proportion of studies, but how this is operationalized varies widely across studies. For instance, the CHNRI typically employs multiple criteria that are scored by technical experts and these scores are then weighted based on stakeholder ratings of their relative importance. Other studies totaled scores across multiple criteria, whereas JLA PSP asks multiple stakeholders to rank the top ten priorities. The importance of involving stakeholders, especially patients and the public, in priority setting is increasingly viewed as vital to ensuring the needs of end users are met, 46 , 47 particularly in light of evidence demonstrating mismatches between the research interests of patients and researchers and clinicians. 48 – 50 In our review, clinicians (69%) and researchers (66%) were the most widely represented stakeholder groups across studies. Patients and the public (e.g., caregivers) were included as stakeholders in 59% of the studies. Only a small fraction of studies involved exercises in which stakeholders were limited to researchers only. Patients and the public were involved as stakeholders in 12% of studies published between 2000 and 2009 compared to 60% of studies between 2010 and 2019. Findings may reflect a trend away from researchers traditionally serving as the sole drivers of determining which research topics should be pursued.

More than half of the studies reported relying on stakeholder organizations to identify participants. Partnering with stakeholder organizations has been noted as one of the primary methods for identifying stakeholders for priority setting exercises. 34 Purposive sampling was the next most frequently used stakeholder identification method. In contrast, convenience sampling (e.g., recommendations by study team) and snowball sampling (e.g., identified stakeholders refer other stakeholders who then refer additional stakeholders) were not as frequently employed, but were documented as common methods in a prior review conducted almost a decade ago. 14 The greater use of stakeholder organizations than convenience or snowball sampling may be partly due to the more recent proliferation of published studies using structured protocols like JLA PSP, which rely heavily on partnerships with stakeholder organizations. Though methods such as snowball sampling may introduce more bias than random sampling, 14 there are no established best practices for stakeholder identification methods. 51 Nearly a quarter of studies provided either unclear or no information on stakeholder identification methods, which has been documented as a barrier to comparing across studies and assessing the validity of research priorities. 34

Determining the effectiveness of gaps, needs, and priority exercises is challenging given that outcome evaluations are rarely conducted. Only seven studies reported conducting an evaluation. 25 – 31 Evaluations varied with respect to their focus on process- (e.g., balanced stakeholder representation, stakeholder satisfaction) versus outcome-related impact (e.g., prioritized topics funded, knowledge production, benefits to health). There is no consensus on what constitutes optimal outcomes, which has been found to vary by discipline. 52

More than 90% of studies involved exercises related to physical health in contrast to a minor portfolio of work being dedicated to psychological health, which may be an indication of the low priority placed on psychological health policy research. Understanding whether funding decisions for physical versus psychological health research are similarly or differentially governed by more systematic, formal processes may be important to the extent that this affects the effective targeting of funds.

Limitations

By limiting studies to those supported or conducted by funding organizations, we may have excluded global, national, or local priority setting exercises. In addition, our scoping review categorized approaches according to the actual exercises conducted and definitions provided in the scientific literature rather than relying on the terminology employed by studies. This resulted in instances in which the category assigned to an exercise within our scoping review could diverge from the category employed by the study authors. Lastly, this study’s findings are subject to limitations often characteristic of scoping reviews such as publication bias, language bias, lack of quality assessment, and search, inclusion, and extraction biases. 53

Conclusions

The diversity and growing establishment of formal processes and methods to identify health research gaps, needs, and priorities are characteristic of a developing field. Even with the emergence of more structured and systematic approaches, the inconsistent categorization and definition of gaps, needs, and priorities inhibit efforts to evaluate the effectiveness of varied methods and processes; such evaluations are rare but sorely needed to build an evidence base to guide best practices. The immense variation occurring within structured protocols, across different combinations of disparate methods, and even within singular methods, further emphasizes the importance of using clearly defined approaches, which are essential to conducting investigations of the effectiveness of these varied approaches. The recent development of reporting guidelines for priority setting for health research may facilitate more consistent and clear documentation of processes and methods, which includes the many facets of involving stakeholders. 34 To ensure optimal targeting of funds to meet the greatest areas of need and maximize outcomes, a much more robust evidence base is needed to ascertain the effectiveness of methods used to identify research gaps, needs, and priorities.


Acknowledgements

This scoping review is part of research that was sponsored by Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury (now Psychological Health Center of Excellence).



Internal Consistency

A second kind of reliability is  internal consistency , which is the consistency of people’s responses across the items on a multiple-item measure. In general, all the items on such measures are supposed to reflect the same underlying construct, so people’s scores on those items should be correlated with each other. On the Rosenberg Self-Esteem Scale, people who agree that they are a person of worth should tend to agree that they have a number of good qualities. If people’s responses to the different items are not correlated with each other, then it would no longer make sense to claim that they are all measuring the same underlying construct. This is as true for behavioural and physiological measures as for self-report measures. For example, people might make a series of bets in a simulated game of roulette as a measure of their level of risk seeking. This measure would be internally consistent to the extent that individual participants’ bets were consistently high or low across trials.

Like test-retest reliability, internal consistency can only be assessed by collecting and analyzing data. One approach is to look at a  split-half correlation . This involves splitting the items into two sets, such as the first and second halves of the items or the even- and odd-numbered items. Then a score is computed for each set of items, and the relationship between the two sets of scores is examined. For example, Figure 5.3 shows the split-half correlation between several university students’ scores on the even-numbered items and their scores on the odd-numbered items of the Rosenberg Self-Esteem Scale. Pearson’s  r  for these data is +.88. A split-half correlation of +.80 or greater is generally considered good internal consistency.

Score on even-numbered items is on the x-axis and score on odd-numbered items is on the y-axis, showing fairly consistent scores
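For readers who want to try this, here is a minimal Python sketch of a split-half correlation. The item responses are simulated purely for illustration (they are not the Rosenberg Self-Esteem Scale data shown in the figure), and NumPy is assumed to be available.

```python
import numpy as np

# Simulated responses for illustration only: 50 respondents answering a
# 10-item scale scored 1-4, generated so that the items share a common factor.
rng = np.random.default_rng(0)
trait = rng.normal(size=(50, 1))  # each respondent's latent level of the construct
items = np.clip(np.round(2.5 + trait + rng.normal(scale=0.7, size=(50, 10))), 1, 4)

odd_half = items[:, 0::2].sum(axis=1)   # items 1, 3, 5, 7, 9 (odd-numbered items)
even_half = items[:, 1::2].sum(axis=1)  # items 2, 4, 6, 8, 10 (even-numbered items)

split_half_r = np.corrcoef(odd_half, even_half)[0, 1]  # Pearson's r between the halves
print(f"Split-half correlation: {split_half_r:+.2f}")
```

Odd/even splits are often preferred over first-half/second-half splits because item order effects, such as fatigue toward the end of a questionnaire, then influence both halves roughly equally.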

Perhaps the most common measure of internal consistency used by researchers in psychology is a statistic called  Cronbach’s α  (the Greek letter alpha). Conceptually, α is the mean of all possible split-half correlations for a set of items. For example, there are 252 ways to split a set of 10 items into two sets of five. Cronbach’s α would be the mean of the 252 split-half correlations. Note that this is not how α is actually computed, but it is a correct way of interpreting the meaning of this statistic. Again, a value of +.80 or greater is generally taken to indicate good internal consistency.
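Although α is not computed in practice by averaging split-half correlations, the usual computational formula is straightforward: α = (k / (k − 1)) × (1 − sum of the item variances / variance of the total scores), where k is the number of items. Below is a minimal Python sketch of that formula, again using made-up data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of scores."""
    k = items.shape[1]                              # number of items
    item_variances = items.var(axis=0, ddof=1)      # variance of each item across respondents
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses: 6 respondents x 10 items, each scored 1-4 (illustrative only).
items = np.array([
    [3, 4, 3, 3, 4, 3, 4, 3, 3, 4],
    [2, 2, 1, 2, 2, 2, 1, 2, 2, 2],
    [4, 4, 4, 3, 4, 4, 4, 4, 3, 4],
    [3, 3, 2, 3, 3, 2, 3, 3, 3, 3],
    [1, 2, 1, 1, 2, 1, 1, 2, 1, 1],
    [3, 3, 3, 4, 3, 3, 3, 3, 4, 3],
])
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
```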

Inter-rater Reliability

Many behavioural measures involve significant judgment on the part of an observer or a rater. Inter-rater reliability is the extent to which different observers are consistent in their judgments. For example, if you were interested in measuring university students’ social skills, you could make video recordings of them as they interacted with another student whom they are meeting for the first time. Then you could have two or more observers watch the videos and rate each student’s level of social skills. To the extent that each participant does in fact have some level of social skills that can be detected by an attentive observer, different observers’ ratings should be highly correlated with each other. Inter-rater reliability would also have been measured in Bandura’s Bobo doll study. In this case, the observers’ ratings of how many acts of aggression a particular child committed while playing with the Bobo doll should have been highly positively correlated. Inter-rater reliability is often assessed using Cronbach’s α when the judgments are quantitative or an analogous statistic called Cohen’s κ (the Greek letter kappa) when they are categorical.
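For categorical judgments, Cohen’s κ corrects the raw proportion of agreement between two raters for the agreement expected by chance: κ = (p_o − p_e) / (1 − p_e), where p_o is the observed agreement and p_e is the chance agreement implied by each rater’s marginal frequencies. The Python sketch below implements this directly for hypothetical codings of ten video clips; the ratings are invented purely for illustration.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters' categorical judgments of the same cases."""
    n = len(rater1)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n               # observed agreement
    counts1, counts2 = Counter(rater1), Counter(rater2)
    categories = set(rater1) | set(rater2)
    p_e = sum((counts1[c] / n) * (counts2[c] / n) for c in categories)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of 10 clips as "A" (aggressive act) or "N" (no aggressive act).
rater1 = ["A", "A", "N", "A", "N", "N", "A", "N", "A", "A"]
rater2 = ["A", "N", "N", "A", "N", "N", "A", "N", "A", "A"]
print(f"Cohen's kappa = {cohens_kappa(rater1, rater2):.2f}")
```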

Validity

Validity is the extent to which the scores from a measure represent the variable they are intended to. But how do researchers make this judgment? We have already considered one factor that they take into account—reliability. When a measure has good test-retest reliability and internal consistency, researchers should be more confident that the scores represent what they are supposed to. There has to be more to it, however, because a measure can be extremely reliable but have no validity whatsoever. As an absurd example, imagine someone who believes that people’s index finger length reflects their self-esteem and therefore tries to measure self-esteem by holding a ruler up to people’s index fingers. Although this measure would have extremely good test-retest reliability, it would have absolutely no validity. The fact that one person’s index finger is a centimetre longer than another’s would indicate nothing about which one had higher self-esteem.

Discussions of validity usually divide it into several distinct “types.” But a good way to interpret these types is that they are other kinds of evidence—in addition to reliability—that should be taken into account when judging the validity of a measure. Here we consider four basic kinds: face validity, content validity, criterion validity, and discriminant validity.

Face Validity

Face validity  is the extent to which a measurement method appears “on its face” to measure the construct of interest. Most people would expect a self-esteem questionnaire to include items about whether they see themselves as a person of worth and whether they think they have good qualities. So a questionnaire that included these kinds of items would have good face validity. The finger-length method of measuring self-esteem, on the other hand, seems to have nothing to do with self-esteem and therefore has poor face validity. Although face validity can be assessed quantitatively—for example, by having a large sample of people rate a measure in terms of whether it appears to measure what it is intended to—it is usually assessed informally.

Face validity is at best a very weak kind of evidence that a measurement method is measuring what it is supposed to. One reason is that it is based on people’s intuitions about human behaviour, which are frequently wrong. It is also the case that many established measures in psychology work quite well despite lacking face validity. The Minnesota Multiphasic Personality Inventory-2 (MMPI-2) measures many personality characteristics and disorders by having people decide whether each of 567 different statements applies to them—where many of the statements do not have any obvious relationship to the construct that they measure. For example, the items “I enjoy detective or mystery stories” and “The sight of blood doesn’t frighten me or make me sick” both measure the suppression of aggression. In this case, it is not the participants’ literal answers to these questions that are of interest, but rather whether the pattern of the participants’ responses to a series of questions matches those of individuals who tend to suppress their aggression.

Content Validity

Content validity  is the extent to which a measure “covers” the construct of interest. For example, if a researcher conceptually defines test anxiety as involving both sympathetic nervous system activation (leading to nervous feelings) and negative thoughts, then his measure of test anxiety should include items about both nervous feelings and negative thoughts. Or consider that attitudes are usually defined as involving thoughts, feelings, and actions toward something. By this conceptual definition, a person has a positive attitude toward exercise to the extent that he or she thinks positive thoughts about exercising, feels good about exercising, and actually exercises. So to have good content validity, a measure of people’s attitudes toward exercise would have to reflect all three of these aspects. Like face validity, content validity is not usually assessed quantitatively. Instead, it is assessed by carefully checking the measurement method against the conceptual definition of the construct.

Criterion Validity

Criterion validity  is the extent to which people’s scores on a measure are correlated with other variables (known as  criteria ) that one would expect them to be correlated with. For example, people’s scores on a new measure of test anxiety should be negatively correlated with their performance on an important school exam. If it were found that people’s scores were in fact negatively correlated with their exam performance, then this would be a piece of evidence that these scores really represent people’s test anxiety. But if it were found that people scored equally well on the exam regardless of their test anxiety scores, then this would cast doubt on the validity of the measure.

A criterion can be any variable that one has reason to think should be correlated with the construct being measured, and there will usually be many of them. For example, one would expect test anxiety scores to be negatively correlated with exam performance and course grades and positively correlated with general anxiety and with blood pressure during an exam. Or imagine that a researcher develops a new measure of physical risk taking. People’s scores on this measure should be correlated with their participation in “extreme” activities such as snowboarding and rock climbing, the number of speeding tickets they have received, and even the number of broken bones they have had over the years. When the criterion is measured at the same time as the construct, criterion validity is referred to as concurrent validity ; however, when the criterion is measured at some point in the future (after the construct has been measured), it is referred to as predictive validity (because scores on the measure have “predicted” a future outcome).
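Numerically, assessing criterion validity just means correlating scores on the new measure with each criterion and checking that the correlations have the expected direction and size. The hypothetical Python sketch below does this for a made-up test-anxiety measure with one concurrent criterion (exam score) and one predictive criterion (end-of-term course grade); all names and values are illustrative assumptions, not real data.

```python
import numpy as np

# Hypothetical data for 8 students (illustrative values only).
test_anxiety = np.array([12, 25, 18, 30,  8, 22, 27, 15])   # new measure
exam_score   = np.array([88, 70, 80, 62, 92, 74, 66, 85])   # concurrent criterion
course_grade = np.array([85, 72, 78, 60, 90, 75, 68, 83])   # predictive criterion, measured later

def r(x, y):
    return np.corrcoef(x, y)[0, 1]

print(f"Anxiety vs. exam score (concurrent):   r = {r(test_anxiety, exam_score):+.2f}")
print(f"Anxiety vs. course grade (predictive): r = {r(test_anxiety, course_grade):+.2f}")
# Strong negative correlations here would count as evidence of criterion validity.
```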

Criteria can also include other measures of the same construct. For example, one would expect new measures of test anxiety or physical risk taking to be positively correlated with existing measures of the same constructs. This is known as convergent validity .

Assessing convergent validity requires collecting data using the measure. Researchers John Cacioppo and Richard Petty did this when they created their self-report Need for Cognition Scale to measure how much people value and engage in thinking (Cacioppo & Petty, 1982) [1] . In a series of studies, they showed that people’s scores were positively correlated with their scores on a standardized academic achievement test, and that their scores were negatively correlated with their scores on a measure of dogmatism (which represents a tendency toward obedience). In the years since it was created, the Need for Cognition Scale has been used in literally hundreds of studies and has been shown to be correlated with a wide variety of other variables, including the effectiveness of an advertisement, interest in politics, and juror decisions (Petty, Briñol, Loersch, & McCaslin, 2009) [2] .

Discriminant Validity

Discriminant validity , on the other hand, is the extent to which scores on a measure are not correlated with measures of variables that are conceptually distinct. For example, self-esteem is a general attitude toward the self that is fairly stable over time. It is not the same as mood, which is how good or bad one happens to be feeling right now. So people’s scores on a new measure of self-esteem should not be very highly correlated with their moods. If the new measure of self-esteem were highly correlated with a measure of mood, it could be argued that the new measure is not really measuring self-esteem; it is measuring mood instead.

When they created the Need for Cognition Scale, Cacioppo and Petty also provided evidence of discriminant validity by showing that people’s scores were not correlated with certain other variables. For example, they found only a weak correlation between people’s need for cognition and a measure of their cognitive style—the extent to which they tend to think analytically by breaking ideas into smaller parts or holistically in terms of “the big picture.” They also found no correlation between people’s need for cognition and measures of their test anxiety and their tendency to respond in socially desirable ways. All these low correlations provide evidence that the measure is reflecting a conceptually distinct construct.
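Convergent and discriminant validity are evaluated with the same arithmetic: the new measure should correlate substantially with an established measure of the same construct and only weakly with measures of conceptually distinct constructs. The sketch below simulates this pattern with random data; everything in it (the simulated scores, the 0.8 weighting, the sample size) is an assumption chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# Simulated standardized scores for 50 hypothetical participants.
new_self_esteem = rng.normal(size=n)
established_self_esteem = 0.8 * new_self_esteem + rng.normal(scale=0.6, size=n)  # same construct
mood = rng.normal(size=n)                                                        # distinct construct

def r(x, y):
    return np.corrcoef(x, y)[0, 1]

print(f"New vs. established self-esteem measure (convergent): r = {r(new_self_esteem, established_self_esteem):+.2f}")
print(f"New self-esteem measure vs. mood (discriminant):      r = {r(new_self_esteem, mood):+.2f}")
```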

Key Takeaways

  • Psychological researchers do not simply assume that their measures work. Instead, they conduct research to show that they work. If they cannot show that they work, they stop using them.
  • There are two distinct criteria by which researchers evaluate their measures: reliability and validity. Reliability is consistency across time (test-retest reliability), across items (internal consistency), and across researchers (inter-rater reliability). Validity is the extent to which the scores actually represent the variable they are intended to.
  • Validity is a judgment based on various types of evidence. The relevant evidence includes the measure’s reliability, whether it covers the construct of interest, and whether the scores it produces are correlated with other variables they are expected to be correlated with and not correlated with variables that are conceptually distinct.
  • The reliability and validity of a measure are not established by any single study but by the pattern of results across multiple studies. The assessment of reliability and validity is an ongoing process.
  • Practice: Ask several friends to complete the Rosenberg Self-Esteem Scale. Then assess its internal consistency by making a scatterplot to show the split-half correlation (even- vs. odd-numbered items). Compute Pearson’s  r too if you know how.
  • Discussion: Think back to the last college exam you took and think of the exam as a psychological measure. What construct do you think it was intended to measure? Comment on its face and content validity. What data could you collect to assess its reliability and criterion validity?
  • Cacioppo, J. T., & Petty, R. E. (1982). The need for cognition. Journal of Personality and Social Psychology, 42, 116–131.
  • Petty, R. E., Briñol, P., Loersch, C., & McCaslin, M. J. (2009). The need for cognition. In M. R. Leary & R. H. Hoyle (Eds.), Handbook of individual differences in social behaviour (pp. 318–329). New York, NY: Guilford Press.

The consistency of a measure.

The consistency of a measure over time.

The consistency of a measure on the same group of people at different times.

Consistency of people’s responses across the items on a multiple-item measure.

Method of assessing internal consistency through splitting the items into two sets and examining the relationship between them.

A statistic in which α is the mean of all possible split-half correlations for a set of items.

The extent to which different observers are consistent in their judgments.

The extent to which the scores from a measure represent the variable they are intended to.

The extent to which a measurement method appears to measure the construct of interest.

The extent to which a measure “covers” the construct of interest.

The extent to which people’s scores on a measure are correlated with other variables that one would expect them to be correlated with.

In reference to criterion validity, variables that one would expect to be correlated with the measure.

When the criterion is measured at the same time as the construct.

When the criterion is measured at some point in the future (after the construct has been measured).

When new measures positively correlate with existing measures of the same constructs.

The extent to which scores on a measure are not correlated with measures of variables that are conceptually distinct.

Research Methods in Psychology - 2nd Canadian Edition Copyright © 2015 by Paul C. Price, Rajiv Jhangiani, & I-Chant A. Chiang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Chapter Three: Research Methodology, 3.9 Validity and Reliability

Data validity refers to the appropriateness, meaningfulness, and usefulness of the specific inferences made from measures (Dooley, 1990); that is, the effectiveness of the research instruments in measuring what they are intended to measure. Validity therefore belongs not just to a measure but depends on the relationship between the measure and the inferences drawn from it. Validity can be content validity (validity of the measuring instrument) or construct validity (the degree of relationship between the study problem, instruments, and variables). Reliability refers to the degree to which observed scores are “free from errors of measurement” (Dooley, 1990). Reliability can be estimated from the consistency of scores; for example, the agreement between different items of the same questionnaire, or between different raters using a measure, can be checked. The value of a measure depends not only on its reliability and validity but also on its specific purpose: a measure with modest reliability and validity may prove adequate for an initial study but too crude for making an important decision about a particular phenomenon. In order to reduce bias and improve reliability, multiple methods were employed in this study, namely interviews and questionnaires. Because this study used mixed methods, i.e. both qualitative and quantitative research were carried out, it is important to note that there are significant differences between the approaches to ensuring reliability and validity in the two.

3.9.1 Validity and reliability of qualitative research

In qualitative research, the appropriateness of validity and reliability is a contested topic. Some authors argue that validity and reliability are inappropriate in qualitative research, while others say these terms are just as relevant to qualitative research as they are to quantitative research. For instance, Yardley (2008) argues that qualitative research accepts and works with the influence of errors caused by the researcher’s influence, whereas quantitative research depends on eliminating such errors, and therefore concludes that validity and reliability are irrelevant to qualitative research. However, this argument contradicts the concept of rigour as elaborated by Aroni et al. (1999), which insists that a rigorous research process results in more trustworthy data. Some researchers have even explained how to improve the rigour of qualitative research and thereby ensure the validity and reliability of qualitative findings. Elliot et al. (1999) state that validity and reliability in qualitative research can be improved by credibility checks through feedback, coherence of a story, triangulation, and verification.

Phase one of this study adopted some of the methods mentioned by Elliot et al. (1999) to improve validity and reliability. The qualitative data were collected from three different sources: the incubator managers, well-informed incubatees, and the financiers. This provides an opportunity to check the validity and reliability of data from one source against another. For instance, the incubatees were asked what puts them in a better position to access finance, the financiers were asked what makes them prefer to provide finance to incubatees, and the incubator managers were asked what puts incubatees in a better position to access finance. After data triangulation, the answers showed a similar pattern, i.e. many concepts from the different sources were in agreement, which supports the reliability of the data; had the answers differed greatly from one source to another, reliability would have been in doubt.

3.9.2 Validity and reliability in quantitative research

In quantitative research, validity and reliability are very important indicators of research quality. To ensure that the quantitative research was valid and reliable, the following steps were taken: repeated readings of the developed questionnaire were carried out to check the correctness of the wording, whether the questions measure what they are supposed to measure, whether there is any bias, and whether respondents can understand the questions as the researcher intends. A pilot study was conducted to make sure the questionnaire yielded valid information; the pilot study showed that respondents understood the questions clearly, so the questionnaire was used for data collection. Factor analysis and reliability testing were then done to ensure construct validity and reliability.

To ensure the validity of the survey in phase two of this study, three steps were taken. First, before data collection, the questionnaire was developed by the researcher, and two experts in the area of the study evaluated it and agreed that the questions effectively captured the topic under investigation. Second, a pilot study was done to see whether the respondents understood the questions and could provide relevant answers. Third, the collected data were subjected to factor analysis.

The reliability of the constructs was tested before and after factor analysis so as to ensure the reliability of the constructs and thereby improve the reliability of the inferential results. Table 3.15 presents the reliability results for all nine constructs in this study before and after factor analysis.

Table 3.15: Construct reliabilities before and after factor analysis

Construct                                        Before factor analysis        After factor analysis
                                                 Items   Cronbach's alpha      Items   Cronbach's alpha
Business incubator's monitoring services           6         0.713               5         0.732
Incubatee's financial management capabilities     18         0.646              12         0.714
Incubatee's bonding social capital                  6         0.736               5         0.742
Incubatee's bridging social capital                 4         0.641               4         0.641
Incubatee's linking social capital                  4         0.888               4         0.888
Incubator manager's bonding social capital          6         0.828               6         0.828
Incubator manager's bridging social capital         4         0.913               4         0.913
Incubator manager's linking social capital          4         0.864               4         0.864
MSMEs' financial accessibility                      8         0.840               8         0.840

The Cronbach’s alpha results in Table 3.15 are all at an acceptable level. However, comparing Cronbach’s alpha before and after factor analysis, there are slight differences. As stated in the factor analysis section, some variable items were eliminated by the factor analysis, and the reliability of the constructs whose items were reduced has therefore been affected. Comparing the construct reliabilities before and after factor analysis, as presented in Table 3.15, shows that factor analysis improved the reliability of some constructs.

The reliability of the business incubator’s monitoring services construct increased slightly after factor analysis, because one variable item (“Provision of qualified trainers”) was removed. Likewise, the reliability of the incubatee’s financial management capabilities construct increased substantially after factor analysis, owing to the reduction of variable items from 18 to 12. The reliability of the incubatee’s bonding social capital construct also increased slightly, following the reduction of variable items from 6 to 5. In the remaining constructs there were no changes: the number of variable items and the reliability of the constructs remained the same before and after factor analysis.
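The pattern described above, where alpha rises when weak items are dropped, can be checked directly by recomputing Cronbach’s alpha with each item deleted in turn (the familiar “alpha if item deleted” diagnostic). The Python sketch below does this on a small made-up response matrix; the data and the 5-point scale are assumptions for illustration only, not the study’s data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of scores."""
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                            / items.sum(axis=1).var(ddof=1))

# Hypothetical responses: 8 respondents x 6 items on a 5-point scale.
# Item 6 is deliberately inconsistent with the others, so dropping it should raise alpha.
X = np.array([
    [4, 4, 5, 4, 4, 2],
    [2, 3, 2, 2, 3, 5],
    [5, 5, 4, 5, 5, 1],
    [3, 3, 3, 3, 4, 4],
    [4, 5, 4, 4, 4, 2],
    [2, 2, 2, 3, 2, 5],
    [5, 4, 5, 5, 5, 3],
    [3, 3, 4, 3, 3, 1],
])

print(f"Alpha with all items: {cronbach_alpha(X):.3f}")
for j in range(X.shape[1]):
    reduced = np.delete(X, j, axis=1)  # drop item j
    print(f"Alpha if item {j + 1} deleted: {cronbach_alpha(reduced):.3f}")
```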




Research Methodology

Vaneet Kaur. Chapter in the book series Innovation, Technology, and Knowledge Management (ITKM), first published online 29 June 2019.

The chapter presents the methodology employed to examine the framework developed during the literature review for the present study. In light of the research objectives, the chapter addresses the ontology, epistemology, and methodology adopted for the study. The research is based on a positivist philosophy, which holds that phenomena of interest in the social world can be studied as concrete cause-and-effect relationships, following a quantitative research design and a deductive approach. Consequently, the study uses the existing body of literature to deduce relationships between constructs and develops a strategy to test the proposed theory, with the ultimate objective of confirming and building upon existing knowledge in the field. The chapter then presents a roadmap for the study, showing the journey towards achieving the research objectives in a series of well-defined, logical steps. The process followed for building the survey instrument and the sampling design is laid out in a similar manner: the survey design enumerates the methods adopted along with justifications, while the sampling design sets out the target population, sampling frame, sampling units, sampling method, and a suitable sample size. The chapter also spells out the operational definitions of the key variables before describing the three-stage research process followed in the study. In the first stage, a questionnaire was developed based on key constructs from various theories and researchers in the field. The draft questionnaire was then refined with the help of a pilot study, and its reliability and validity were tested. Finally, in light of the pilot study results, the questionnaire was finalized and the final data were collected, with the step-by-step process of gathering data from various sources presented. Towards the end, the chapter highlights the statistical methods employed for data analysis, along with the rationale for selecting the specific techniques used to present the outcomes of the research.



Hashim, Y. A. (2010). Determining sufficiency of sample size in management survey research activities. International Journal of Organisational Management & Entrepreneurship Development, 6 (1), 119–130.

Hill, R. (1998). What sample size is “enough” in internet survey research. Interpersonal Computing and Technology: An Electronic Journal for the 21st Century, 6 (3–4), 1–12.

Hinkin, T. R. (1995). A review of scale development practices in the study of organizations. Journal of Management, 21 (5), 967–988.

Hogan, S. J., Soutar, G. N., McColl-Kennedy, J. R., & Sweeney, J. C. (2011). Reconceptualizing professional service firm innovation capability: Scale development. Industrial Marketing Management, 40 (8), 1264–1273.

Holm, K. E., LaChance, H. R., Bowler, R. P., Make, B. J., & Wamboldt, F. S. (2010). Family factors are associated with psychological distress and smoking status in chronic obstructive pulmonary disease. General Hospital Psychiatry, 32 (5), 492–498.

Horng, J. S., Teng, C. C., & Baum, T. G. (2009). Evaluating the quality of undergraduate hospitality, tourism and leisure programmes. Journal of Hospitality, Leisure, Sport and Tourism Education, 8 (1), 37–54.

Huan, Y., & Li, D. (2015). Effects of intellectual capital on innovative performance: The role of knowledge- based dynamic capability. Management Decision, 53 (1), 40–56.

Huckleberry, S. D. (2011). Commitment to coaching: Using the sport commitment model as a theoretical framework with soccer coaches (Doctoral dissertation, Ohio University).

Humborstad, S. I. W., & Perry, C. (2011). Employee empowerment, job satisfaction and organizational commitment: An in-depth empirical investigation. Chinese Management Studies, 5 (3), 325–344.

Infosys Annual Report. (2015). Infosys annual report 2015. Retrieved February 12, 2017 from https://www.infosys.com/investors/reports-filings/annual-report/annual/Documents/infosys-AR-15.pdf .

Investment Standard. (2016). Cognizant is the best pick out of the 4 information technology service providers. Retrieved February 19, 2017 from http://seekingalpha.com/article/3961500-cognizant-best-pick-4-information-technology-service-providers .

Jansen, J. J., Van Den Bosch, F. A., & Volberda, H. W. (2005). Managing potential and realized absorptive capacity: How do organizational antecedents matter? Academy of Management Journal, 48 (6), 999–1015.

John, N. A., Seme, A., Roro, M. A., & Tsui, A. O. (2017). Understanding the meaning of marital relationship quality among couples in peri-urban Ethiopia. Culture, Health & Sexuality, 19 (2), 267–278.

Joo, J., & Sang, Y. (2013). Exploring Koreans’ smartphone usage: An integrated model of the technology acceptance model and uses and gratifications theory. Computers in Human Behavior, 29 (6), 2512–2518.

Kaehler, C., Busatto, F., Becker, G. V., Hansen, P. B., & Santos, J. L. S. (2014). Relationship between adaptive capability and strategic orientation: An empirical study in a Brazilian company. iBusiness .

Kajfez, R. L. (2014). Graduate student identity: A balancing act between roles.

Kam Sing Wong, S., & Tong, C. (2012). The influence of market orientation on new product success. European Journal of Innovation Management, 15 (1), 99–121.

Karttunen, V., Sahlman, H., Repo, J. K., Woo, C. S. J., Myöhänen, K., Myllynen, P., & Vähäkangas, K. H. (2015). Criteria and challenges of the human placental perfusion–Data from a large series of perfusions. Toxicology In Vitro, 29 (7), 1482–1491.

Kaur, V., & Mehta, V. (2016a). Knowledge-based dynamic capabilities: A new perspective for achieving global competitiveness in IT sector. Pacific Business Review International, 1 (3), 95–106.

Kaur, V., & Mehta, V. (2016b). Leveraging knowledge processes for building higher-order dynamic capabilities: An empirical evidence from IT sector in India. JIMS 8M , July- September.

Kaya, A., Iwamoto, D. K., Grivel, M., Clinton, L., & Brady, J. (2016). The role of feminine and masculine norms in college women’s alcohol use. Psychology of Men & Masculinity, 17 (2), 206–214.

Kenny, A., McLoone, S., Ward, T., & Delaney, D. (2006). Using user perception to determine suitable error thresholds for dead reckoning in distributed interactive applications.

Kianpour, K., Jusoh, A., & Asghari, M. (2012). Importance of Price for buying environmentally friendly products. Journal of Economics and Behavioral Studies, 4 (6), 371–375.

Kim, J., & Forsythe, S. (2008). Sensory enabling technology acceptance model (SE-TAM): A multiple-group structural model comparison. Psychology & Marketing, 25 (9), 901–922.

Kim, Y. J., Oh, Y., Park, S., Cho, S., & Park, H. (2013). Stratified sampling design based on data mining. Healthcare Informatics Research, 19 (3), 186–195.

Kim, R., Yang, H., & Chao, Y. (2016). Effect of brand equity& country origin on Korean consumers’ choice for beer brands. The Business & Management Review, 7 (3), 398.

Kimweli, J. M. (2013). The role of monitoring and evaluation practices to the success of donor funded food security intervention projects a case study of Kibwezi District. International Journal of Academic Research in Business and Social Sciences, 3 (6), 9.

Kinsfogel, K. M., & Grych, J. H. (2004). Interparental conflict and adolescent dating relationships: Integrating cognitive, emotional, and peer influences. Journal of Family Psychology, 18 (3), 505–515.

Kivimäki, M., Vahtera, J., Pentti, J., Thomson, L., Griffiths, A., & Cox, T. (2001). Downsizing, changes in work, and self-rated health of employees: A 7-year 3-wave panel study. Anxiety, Stress and Coping, 14 (1), 59–73.

Klemann, B. (2012). The unknowingly consumers of Fairtrade products.

Kothari, C. R. (2004). Research methodology: Methods and techniques . New Delhi: New Age International.

Krause, D. R. (1999). The antecedents of buying firms’ efforts to improve suppliers. Journal of Operations Management, 17 (2), 205–224.

Krejcie, R. V., & Morgan, D. W. (1970). Determining sample size for research activities. Educational and Psychological Measurement., 30 , 607–610.

Krige, S. M., Mahomoodally, F. M., Subratty, A. H., & Ramasawmy, D. (2012). Relationship between socio-demographic factors and eating practices in a multicultural society. Food and Nutrition Sciences, 3 (3), 286–295.

Krzakiewicz, K. (2013). Dynamic capabilities and knowledge management. Management, 17 (2), 1–15.

Kuzic, J., Fisher, J., Scollary, A., Dawson, L., Kuzic, M., & Turner, R. (2005). Modus vivendi of E-business. PACIS 2005 Proceedings , 99.

Laframboise, K., Croteau, A. M., Beaudry, A., & Manovas, M. (2009). Interdepartmental knowledge transfer success during information technology projects. International Journal of Knowledge Management , 189–210.

Landaeta, R. E. (2008). Evaluating benefits and challenges of knowledge transfer across projects. Engineering Management Journal, 20 (1), 29–38.

Lee, Y., Chen, A., Yang, Y. L., Ho, G. H., Liu, H. T., & Lai, H. Y. (2005). The prophylactic antiemetic effects of ondansetron, propofol, and midazolam in female patients undergoing sevoflurane anaesthesia for ambulatory surgery: A-42. European Journal of Anaesthesiology (EJA), 22 , 11–12.

Lee, V. H., Foo, A. T. L., Leong, L. Y., & Ooi, K. B. (2016). Can competitive advantage be achieved through knowledge management? A case study on SMEs. Expert Systems with Applications, 65 , 136–151.

Leech, N. L., Barrett, K. C., & Morgan, G. A. (2005). SPSS for intermediate statistics: Use and interpretation . New Jersey: Psychology Press.

Leonardi, F., Spazzafumo, L., & Marcellini, F. (2005). Subjective Well-being: The constructionist point of view. A longitudinal study to verify the predictive power of top-down effects and bottom-up processes. Social Indicators Research, 70 (1), 53–77.

Li, D. Y., & Liu, J. (2014). Dynamic capabilities, environmental dynamism, and competitive advantage: Evidence from China. Journal of Business Research, 67 (1), 2793–2799.

Liao, S. H., Fei, W. C., & Chen, C. C. (2007). Knowledge sharing, absorptive capacity, and innovation capability: An empirical study of Taiwan’s knowledge-intensive industries. Journal of Information Science, 33 (3), 340–359.

Liao, S. H., & Wu, C. C. (2009). The relationship among knowledge management, organizational learning, and organizational performance. International Journal of Business and Management, 4 (4), 64.

Liao, T. S., Rice, J., & Lu, J. C. (2014). The vicissitudes of Competitive advantage: Empirical evidence from Australian manufacturing SMEs. Journal of Small Business Management, 53 (2), 469–481.

Liu, S., & Deng, Z. (2015). Understanding knowledge management capability in business process outsourcing: A cluster analysis. Management Decision, 53 (1), 1–11.

Liu, C. L. E., Ghauri, P. N., & Sinkovics, R. R. (2010). Understanding the impact of relational capital and organizational learning on alliance outcomes. Journal of World Business, 45 (3), 237–249.

Luís, C., Cothran, E. G., & do Mar Oom, M. (2007). Inbreeding and genetic structure in the endangered Sorraia horse breed: Implications for its conservation and management. Journal of Heredity, 98 (3), 232–237.

MacDonald, C. M., & Atwood, M. E. (2014, June). What does it mean for a system to be useful?: An exploratory study of usefulness. In Proceedings of the 2014 conference on designing interactive systems (pp. 885–894). New York: ACM.

Mafini, C., & Dlodlo, N. (2014). The relationship between extrinsic motivation, job satisfaction and life satisfaction amongst employees in a public organisation. SA Journal of Industrial Psychology, 40 (1), 01–12.

Mafini, C., Dhurup, M., & Mandhlazi, L. (2014). Shopper typologies amongst a generation Y consumer cohort and variations in terms of age in the fashion apparel market: Original research. Acta Commercii, 14 (1), 1–11.

Mageswari, S. U., Sivasubramanian, C., & Dath, T. S. (2015). Knowledge management enablers, processes and innovation in Small manufacturing firms: A structural equation modeling approach. IUP Journal of Knowledge Management, 13 (1), 33.

Mahoney, J. T. (2005). Resource-based theory, dynamic capabilities, and real options. In Foundations for organizational science. Economic foundations of strategy . Thousand Oaks: SAGE Publications.

Malhotra, N., Hall, J., Shaw, M., & Oppenheim, P. (2008). Essentials of marketing research, 2nd Australian edition.

Manan, R. M. (2016). The use of hangman game in motivating students in Learning English. ELT Perspective, 4 (2).

Manco-Johnson, M., Morrissey-Harding, G., Edelman-Lewis, B., Oster, G., & Larson, P. (2004). Development and validation of a measure of disease-specific quality of life in young children with haemophilia. Haemophilia, 10 (1), 34–41.

Marek, L. (2016). Guess which Illinois company uses the most worker visas. Retrieved February 13, 2017 from http://www.chicagobusiness.com/article/20160227/ISSUE01/302279994/guess-which-illinois-company-uses-the-most-worker-visas .

Martin, C. M., Roach, V. A., Nguyen, N., Rice, C. L., & Wilson, T. D. (2013). Comparison of 3D reconstructive technologies used for morphometric research and the translation of knowledge using a decision matrix. Anatomical Sciences Education, 6 (6), 393–403.

Maskatia, S. A., Altman, C. A., Morris, S. A., & Cabrera, A. G. (2013). The echocardiography “boot camp”: A novel approach in pediatric cardiovascular imaging education. Journal of the American Society of Echocardiography, 26 (10), 1187–1192.

Matson, J. L., Boisjoli, J., Rojahn, J., & Hess, J. (2009). A factor analysis of challenging behaviors assessed with the baby and infant screen for children with autism traits. Research in Autism Spectrum Disorders, 3 (3), 714–722.

Matusik, S. F., & Heeley, M. B. (2005). Absorptive capacity in the software Industry: Identifying dimensions that affect knowledge and knowledge creation activities. Journal of Management, 31 (4), 549–572.

Matveev, A. V. (2002). The advantages of employing quantitative and qualitative methods in intercultural research: Practical implications from the study of the perceptions of intercultural communication competence by American and Russian managers. Bulletin of Russian Communication Association Theory of Communication and Applied Communication, 1 , 59–67.

McDermott, E. P., & Ervin, D. (2005). The influence of procedural and distributive variables on settlement rates in employment discrimination mediation. Journal of Dispute Resolution, 45 , 1–16.

McKelvie, A. (2007). Innovation in new firms: Examining the role of knowledge and growth willingness.

Mendonca, J., & Sen, A. (2016). IT companies including TCS, Infosys, Wipro bracing for slowest topline expansion on annual basis. Retrieved February 19 2017 from http://economictimes.indiatimes.com/markets/stocks/earnings/it-companies-including-tcs-infosys-wipro-bracing-for-slowest-topline-expansion-on-annual-basis/articleshow/51639858.cms .

Mesina, F., De Deyne, C., Judong, M., Vandermeersch, E., & Heylen, R. (2005). Quality survey of pre-operative assessment: Influence of a standard questionnaire: A-38. European Journal of Anaesthesiology (EJA), 22 , 11.

Michailova, S., & Zhan, W. (2014). Dynamic capabilities and innovation in MNC subsidiaries. Journal of World Business , 1–9.

Miller, R., Salmona, M., & Melton, J. (2012). Modeling student concern for professional online image. Journal of Internet Social Networking & Virtual Communities, 3 (2), 1.

Minarro-Viseras, E., Baines, T., & Sweeney, M. (2005). Key success factors when implementing strategic manufacturing initiatives. International Journal of Operations & Production Management, 25 (2), 151–179.

Monferrer, D., Blesa, A., & Ripollés, M. (2015). Catching dynamic capabilities through market-oriented networks. European Journal of International Management, 9 (3), 384–408.

Moyer, J. E. (2007). Learning from leisure reading: A study of adult public library patrons. Reference & User Services Quarterly, 46 , 66–79.

Mulaik, S. A., James, L. R., Van Alstine, J., Bennett, N., Lind, S., & Stilwell, C. D. (1989). Evaluation of goodness-of-fit indices for structural equation models. Psychological Bulletin, 105 (3), 430–445.

Murphy, T. H., & Terry, H. R. (1998). Faculty needs associated with agricultural distance education. Journal of Agricultural Education, 39 , 17–27.

Murphy, C., Hearty, C., Murray, M., & McCaul, C. (2005). Patient preferences for desired post-anaesthesia outcomes-a comparison with medical provider perspective: A-40. European Journal of Anaesthesiology (EJA), 22 , 11.

Nair, A., Rustambekov, E., McShane, M., & Fainshmidt, S. (2014). Enterprise risk management as a dynamic Capability: A test of its effectiveness during a crisis. Managerial and Decision Economics, 35 , 555–566.

Nandan, S. (2010). Determinants of customer satisfaction on service quality: A study of railway platforms in India. Journal of Public Transportation, 13 (1), 6.

NASSCOM Indian IT-BPM Industry Report. (2016). NASSCOM Indian IT-BPM Industry Report 2016. Retrieved January 11, 2017 from http://www.nasscom.in/itbpm-sector-india-strategic-review-2016 .

Nedzinskas, Š. (2013). Dynamic capabilities and organizational inertia interaction in volatile environment. Retrieved from http://archive.ism.lt/handle/1/301 .

Nguyen, T. N. Q. (2010). Knowledge management capability and competitive advantage: An empirical study of Vietnamese enterprises.

Nguyen, N. T. D., & Aoyama, A. (2014). Achieving efficient technology transfer through a specific corporate culture facilitated by management practices. The Journal of High Technology Management Research, 25 (2), 108–122.

Nguyen, Q. T. N., & Neck, P. A. (2008, July). Knowledge management as dynamic capabilities: Does it work in emerging less developed countries. In Proceedings of the 16th Annual Conference on Pacific Basin Finance, Economics, Accounting and Management (pp. 1–18).

Nieves, J., & Haller, S. (2014). Building dynamic capabilities through knowledge resources. Tourism Management, 40 , 224–232.

Nirmal, R. (2016). Indian IT firms late movers in digital race. Retrieved February 19, 2017 from http://www.thehindubusinessline.com/info-tech/indian-it-firms-late-movers-in-digital-race/article8505379.ece .

Numthavaj, P., Bhongmakapat, T., Roongpuwabaht, B., Ingsathit, A., & Thakkinstian, A. (2017). The validity and reliability of Thai Sinonasal outcome Test-22. European Archives of Oto-Rhino-Laryngology, 274 (1), 289–295.

Obwoge, M. E., Mwangi, S. M., & Nyongesa, W. J. (2013). Linking TVET institutions and industry in Kenya: Where are we. The International Journal of Economy, Management and Social Science, 2 (4), 91–96.

Oktemgil, M., & Greenley, G. (1997). Consequences of high and low adaptive capability in UK companies. European Journal of Marketing, 31 (7), 445–466.

Ouyang, Y. (2015). A cyclic model for knowledge management capability-a review study. Arabian Journal of Business and Management Review, 5 (2), 1–9.

Paloniemi, R., & Vainio, A. (2011). Legitimacy and empowerment: Combining two conceptual approaches for explaining forest owners’ willingness to cooperate in nature conservation. Journal of Integrative Environmental Sciences, 8 (2), 123–138.

Pant, S., & Lado, A. (2013). Strategic business process offshoring and Competitive advantage: The role of strategic intent and absorptive capacity. Journal of Information Science and Technology, 9 (1), 25–58.

Paramati, S. R., Gupta, R., Maheshwari, S., & Nagar, V. (2016). The empirical relationship between the value of rupee and performance of information technology firms: Evidence from India. International Journal of Business and Globalisation, 16 (4), 512–529.

Parida, V., Oghazi, P., & Cedergren, S. (2016). A study of how ICT capabilities can influence dynamic capabilities. Journal of Enterprise Information Management, 29 (2), 1–22.

Parkhurst, K. A., Conwell, Y., & Van Orden, K. A. (2016). The interpersonal needs questionnaire with a shortened response scale for oral administration with older adults. Aging & Mental Health, 20 (3), 277–283.

Payne, A. A., Gottfredson, D. C., & Gottfredson, G. D. (2006). School predictors of the intensity of implementation of school-based prevention programs: Results from a national study. Prevention Science, 7 (2), 225–237.

Pereira-Moliner, J., Font, X., Molina-Azorín, J., Lopez-Gamero, M. D., Tarí, J. J., & Pertusa-Ortega, E. (2015). The holy grail: Environmental management, competitive advantage and business performance in the Spanish hotel industry. International Journal of Contemporary Hospitality Management, 27 (5), 714–738.

Persada, S. F., Razif, M., Lin, S. C., & Nadlifatin, R. (2014). Toward paperless public announcement on environmental impact assessment (EIA) through SMS gateway in Indonesia. Procedia Environmental Sciences, 20 , 271–279.

Pertusa-Ortega, E. M., Molina-Azorín, J. F., & Claver-Cortés, E. (2010). Competitive strategy, structure and firm performance: A comparison of the resource-based view and the contingency approach. Management Decision, 48 (8), 1282–1303.

Peters, M. D., Wieder, B., Sutton, S. G., & Wake, J. (2016). Business intelligence systems use in performance measurement capabilities: Implications for enhanced competitive advantage. International Journal of Accounting Information Systems, 21 (1–17), 1–17.

Protogerou, A., Caloghirou, Y., & Lioukas, S. (2011). Dynamic capabilities and their indirect impact on firm performance. Industrial and Corporate Change, 21 (3), 615–647.

Rapiah, M., Wee, S. H., Ibrahim Kamal, A. R., & Rozainun, A. A. (2010). The relationship between strategic performance measurement systems and organisational competitive advantage. Asia-Pacific Management Accounting Journal, 5 (1), 1–20.

Reuner, T. (2016). HfS blueprint Report, ServiceNow services 2016, excerpt for Cognizant. Retrieved February 2, 2017 from https://www.cognizant.com/services-resources/Services/hfs-blueprint-report-servicenow-2016.pdf .

Ríos, V. R., & del Campo, E. P. (2013). Business research methods: Theory and practice . Madrid: ESIC Editorial.

Sachitra, V. (2015). Review of Competitive advantage measurements: The case of agricultural firms. IV, 303–317.

Sahney, S., Banwet, D. K., & Karunes, S. (2004). Customer requirement constructs: The premise for TQM in education: A comparative study of select engineering and management institutions in the Indian context. International Journal of Productivity and Performance Management, 53 (6), 499–520.

Sampe, F. (2012). The influence of organizational learning on performance in Indonesian SMEs.

Sarlak, M. A., Shafiei, M., Sarlak, M. A., Shafiei, M., Capability, M., Capability, I., & Competitive, S. (2013). A research in relationship between entrepreneurship, marketing Capability, innovative Capability and sustainable Competitive advantage. Kaveh Industrial City, 7 (8), 1490–1497.

Saunders, M., Lewis, P., & Thornhill, A. (2012). Research methods for business students . Pearson.

Schiff, J. H., Fornaschon, S., Schiff, M., Martin, E., & Motsch, J. (2005). Measuring patient dissatisfaction with anethesia care: A-41. European Journal of Anaesthesiology (EJA), 22 , 11.

Schwartz, S. J., Coatsworth, J. D., Pantin, H., Prado, G., Sharp, E. H., & Szapocznik, J. (2006). The role of ecodevelopmental context and self-concept in depressive and externalizing symptoms in Hispanic adolescents. International Journal of Behavioral Development, 30 (4), 359–370.

Scott, V. C., Sandberg, J. G., Harper, J. M., & Miller, R. B. (2012). The impact of depressive symptoms and health on sexual satisfaction for older couples: Implications for clinicians. Contemporary Family Therapy, 34 (3), 376–390.

Shafia, M. A., Shavvalpour, S., Hosseini, M., & Hosseini, R. (2016). Mediating effect of technological innovation capabilities between dynamic capabilities and competitiveness of research and technology organisations. Technology Analysis & Strategic Management, 28 , 1–16. https://doi.org/10.1080/09537325.2016.1158404 .

Shahzad, K., Faisal, A., Farhan, S., Sami, A., Bajwa, U., & Sultani, R. (2016). Integrating knowledge management (KM) strategies and processes to enhance organizational creativity and performance: An empirical investigation. Journal of Modelling in Management, 11 (1), 1–34.

Sharma, A. (2016). Five reasons why you should avoid investing in IT stocks. Retrieved February 19, 2017 from http://www.businesstoday.in/markets/company-stock/five-reasons-why-you-should-avoid-investing-in-infosys-tcs-wipro/story/238225.html .

Sharma, J. K., & Singh, A. K. (2012). Absorptive capability and competitive advantage: Some insights from Indian pharmaceutical Industry. International Journal of Management and Business Research, 2 (3), 175–192.

Shepherd, R. M., & Edelmann, R. J. (2005). Reasons for internet use and social anxiety. Personality and Individual Differences, 39 (5), 949–958.

Singh, R., & Khanduja, D. (2010). Customer requirements grouping–a prerequisite for successful implementation of TQM in technical education. International Journal of Management in Education, 4 (2), 201–215.

Small, M. J., Gupta, J., Frederic, R., Joseph, G., Theodore, M., & Kershaw, T. (2008). Intimate partner and nonpartner violence against pregnant women in rural Haiti. International Journal of Gynecology & Obstetrics, 102 (3), 226–231.

Srivastava, M. (2016). IT biggies expect weaker Sept quarter. Retrieved February 19, 2017 from http://www.business-standard.com/article/companies/it-biggies-expect-weaker-sept-quarter-116100400680_1.html .

Stoten, D. W. (2016). Discourse, knowledge and power: The continuing debate over the DBA. Journal of Management Development, 35 (4), 430–447.

Sudarvel, J., & Velmurugan, R. (2015). Semi month effect in Indian IT sector with reference to BSE IT index. International Journal of Advance Research in Computer Science and Management Studies, 3 (10), 155–159.

Sylvia, M., & Terhaar, M. (2014). An approach to clinical data Management for the Doctor of nursing practice curriculum. Journal of Professional Nursing, 30 (1), 56–62.

Tabachnick, B. G., & Fidell, L. S. (2007). Multivariate analysis of variance and covariance. Using Multivariate Statistics, 3 , 402–407.

Teece, D. J. (2014). The foundations of Enterprise performance: Dynamic and ordinary capabilities in an (economic) theory of firms. The Academy of Management Perspectives, 28 (4), 328–352.

Teece, D. J., Pisano, G., & Shuen, A. (1997). Dynamic capabilities and strategic management. Strategic Management Journal, 18 (7), 509–533.

Thomas, J. B., Sussman, S. W., & Henderson, J. C. (2001). Understanding “strategic learning”: Linking organizational learning, knowledge management, and sensemaking. Organization Science, 12 (3), 331–345.

Travis, S. E., & Grace, J. B. (2010). Predicting performance for ecological restoration: A case study using Spartina alterniflora. Ecological Applications, 20 (1), 192–204.

Tseng, S., & Lee, P. (2014). The effect of knowledge management capability and dynamic capability on organizational performance. Journal of Enterprise Information Management, 27 (2), 158–179.

Turker, D. (2009). Measuring corporate social responsibility: A scale development study. Journal of Business Ethics, 85 (4), 411–427.

Vanham, D., Mak, T. N., & Gawlik, B. M. (2016). Urban food consumption and associated water resources: The example of Dutch cities. Science of the Total Environment, 565 , 232–239.

Visser, P. S., Krosnick, J. A., & Lavrakas, P. J. (2000). Survey research. In H.T. Reis & C.M. Judd (Eds.), Handbook of research methods in social and personality psychology (pp. 223-252). New York: Cambridge.

Vitale, G., Sala, F., Consonni, F., Teruzzi, M., Greco, M., Bertoli, E., & Maisano, P. (2005). Perioperative complications correlate with acid-base balance in elderly trauma patients: A-37. European Journal of Anaesthesiology (EJA), 22 , 10–11.

Wang, C. L., & Ahmed, P. K. (2004). Leveraging knowledge in the innovation and learning process at GKN. International Journal of Technology Management, 27 (6/7), 674–688.

Wang, C. L., Senaratne, C., & Rafiq, M. (2015). Success traps, dynamic capabilities and firm performance. British Journal of Management, 26 , 26–44.

Wasswa Katono, I. (2011). Student evaluation of e-service quality criteria in Uganda: The case of automatic teller machines. International Journal of Emerging Markets, 6 (3), 200–216.

Wasylkiw, L., Currie, M. A., Meuse, R., & Pardoe, R. (2010). Perceptions of male ideals: The power of presentation. International Journal of Men's Health, 9 (2), 144–153.

Wilhelm, H., Schlömer, M., & Maurer, I. (2015). How dynamic capabilities affect the effectiveness and efficiency of operating routines under high and Low levels of environmental dynamism. British Journal of Management , 1–19.

Wilkens, U., Menzel, D., & Pawlowsky, P. (2004). Inside the black-box : Analysing the generation of Core competencies and dynamic capabilities by exploring collective minds. An organizational learning perspective. Management Review, 15 (1), 8–27.

Willemsen, M. C., & de Vries, H. (1996). Saying “no” to environmental tobacco smoke: Determinants of assertiveness among nonsmoking employees. Preventive Medicine, 25 (5), 575–582.

Williams, M., Peterson, G. M., Tenni, P. C., & Bindoff, I. K. (2012). A clinical knowledge measurement tool to assess the ability of community pharmacists to detect drug-related problems. International Journal of Pharmacy Practice, 20 (4), 238–248.

Wintermark, M., Huss, D. S., Shah, B. B., Tustison, N., Druzgal, T. J., Kassell, N., & Elias, W. J. (2014). Thalamic connectivity in patients with essential tremor treated with MR imaging–guided focused ultrasound: In vivo Fiber tracking by using diffusion-tensor MR imaging. Radiology, 272 (1), 202–209.

Wipro Annual Report. (2015). Wipro annual report 2014–15. Retrieved February 16, 2017 from http://www.wipro.com/documents/investors/pdf-files/Wipro-annual-report-2014-15.pdf .

Wu, J., & Chen, X. (2012). Leaders’ social ties, knowledge acquisition capability and firm competitive advantage. Asia Pacific Journal of Management, 29 (2), 331–350.

Yamane, T. (1967). Elementary Sampling Theory Prentice Inc. Englewood Cliffs. NS, USA, 1, 371–390.

Zahra, S., Sapienza, H. J., & Davidsson, P. (2006). Entrepreneurship and dynamic capabilities: A review, model and research agenda. Journal of Management Studies, 43 (4), 917–955.

Zaied, A. N. H. (2012). An integrated knowledge management capabilities framework for assessing organizational performance. International Journal of Information Technology and Computer Science, 4 (2), 1–10.

Zakaria, Z. A., Anuar, H. S., & Udin, Z. M. (2015). The relationship between external and internal factors of information systems success towards employee performance: A case of Royal Malaysia custom department. International Journal of Economics, Finance and Management, 4 (2), 54–60.

Zheng, S., Zhang, W., & Du, J. (2011). Knowledge-based dynamic capabilities and innovation in networked environments. Journal of Knowledge Management, 15 (6), 1035–1051.

Zikmund, W. G., Babin, B. J., Carr, J. C., & Griffin, M. (2010). Business research methods . Mason: South Western Cengage Learning.


Methods for Identifying Health Research Gaps, Needs, and Priorities: a Scoping Review

Eunice C. Wong, Alicia R. Maher, Aneesa Motala, Rachel Ross, Olamigoke Akinniranye, Jody Larkin, and Susanne Hempel

1 RAND Corporation, Santa Monica, CA, USA

2 Department of Population and Public Health Sciences, University of Southern California Gehr Family Center for Health Systems Science & Innovation, Los Angeles, USA

Well-defined, systematic, and transparent processes to identify health research gaps, needs, and priorities are vital to ensuring that available funds target areas with the greatest potential for impact.

The purpose of this review is to characterize methods conducted or supported by research funding organizations to identify health research gaps, needs, or priorities.

We searched MEDLINE, PsycINFO, and the Web of Science up to September 2019. Eligible studies reported on methods to identify health research gaps, needs, and priorities that had been conducted or supported by research funding organizations. Using a published protocol, we extracted data on the method, criteria, involvement of stakeholders, evaluations, and whether the method had been replicated (i.e., used in other studies).

Among 10,832 citations, 167 studies were eligible for full data extraction. More than half of the studies employed methods to identify both needs and priorities, whereas about a quarter of studies focused singularly on identifying gaps (7%), needs (6%), or priorities (14%) only. The most frequently used methods were the convening of workshops or meetings (37%), quantitative methods (32%), and the James Lind Alliance approach, a multi-stakeholder research needs and priority setting process (28%). The most widely applied criteria were importance to stakeholders (72%), potential value (29%), and feasibility (18%). Stakeholder involvement was most prominent among clinicians (69%), researchers (66%), and patients and the public (59%). Stakeholders were identified through stakeholder organizations (51%) and purposive (26%) and convenience sampling (11%). Only 4% of studies evaluated the effectiveness of the methods and 37% employed methods that were reproducible and used in other studies.

To ensure optimal targeting of funds to meet the greatest areas of need and maximize outcomes, a much more robust evidence base is needed to ascertain the effectiveness of methods used to identify research gaps, needs, and priorities.

Supplementary Information

The online version contains supplementary material available at 10.1007/s11606-021-07064-1.

Well-defined, systematic, and transparent methods to identify health research gaps, needs, and priorities are vital to ensuring that available funds target areas with the greatest potential for impact.1,2 As defined in the literature,3,4 research gaps are areas or topics in which insufficient evidence prevents a conclusion from being drawn for a given question. Research gaps are not necessarily synonymous with research needs, which are those knowledge gaps that significantly inhibit the decision-making ability of key stakeholders, the end users of research, such as patients, clinicians, and policy makers. The selection of research priorities is often necessary when resource constraints prevent all identified research gaps or needs from being pursued. Methods to identify health research gaps, needs, and priorities (hereafter gaps, needs, and priorities) vary widely, and there does not appear to be general consensus on best practices.3,5

Several published reviews highlight the diverse methods that have been used to identify gaps and priorities. In a review of methods used to identify gaps from systematic reviews, Robinson et al. noted the wide range of organizing principles employed in the literature published between 2001 and 2009 (e.g., care pathway; decision tree; and patient, intervention, comparison, outcome framework).6 In a more recent review spanning 2007 to 2017, Nyanchoka et al. found that the vast majority of studies with a primary focus on the identification of gaps (83%) relied solely on knowledge synthesis methods (e.g., systematic review, scoping review, evidence mapping, literature review). A much smaller proportion (9%) relied exclusively on primary research methods (i.e., quantitative surveys, qualitative studies).7

With respect to research priorities, in a review limited to a PubMed database search covering the period from 2001 to 2014, Yoshida documented a wide range of methods to identify priorities, including not only knowledge synthesis (i.e., literature reviews) and primary research methods (i.e., surveys) but also multi-stage, structured methods such as Delphi, the Child Health and Nutrition Research Initiative (CHNRI), the James Lind Alliance Priority Setting Partnership (JLA PSP), and Essential National Health Research (ENHR).2 The CHNRI method, originally developed for the purpose of setting global child health research priorities, typically employs researchers and experts to specify a long list of research questions, the criteria that will be used to prioritize those questions, and the technical scoring of research questions against the defined criteria.8 During the latter stages, non-expert stakeholders' input is incorporated by using their ratings of the importance of selected criteria to weight the technical scores. The ENHR method, initially designed for health research priority setting at the national level, involves researchers, decision-makers, health service providers, and communities throughout the entire process of identifying and prioritizing research topics.9 The JLA PSP method convenes patients, carers, and clinicians to equally and jointly identify questions about healthcare that are important to all groups but cannot be answered by existing evidence (i.e., research needs).10 The identified research needs are then prioritized by the groups, resulting in a final list (often a top 10) of research priorities. Non-clinical researchers are excluded from voting on research needs or priorities but can be involved in other processes (e.g., knowledge synthesis). CHNRI, ENHR, and JLA PSP usually employ a mix of knowledge synthesis and primary research methods to first identify a set of gaps or needs that are then prioritized. Thus, even though CHNRI, ENHR, and JLA PSP have been referred to as priority setting methods, they actually consist of a gaps or needs identification stage that feeds into a research prioritization stage.
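As a rough illustration of the CHNRI weighting step described above, the sketch below (Python; the questions, criteria, and scores are all hypothetical) combines expert criterion scores with stakeholder-derived importance weights to rank candidate research questions. Actual CHNRI exercises differ in their criteria, scales, and scoring details.

```python
# Minimal sketch of CHNRI-style weighted scoring (hypothetical data).
# Experts score each candidate research question against each criterion (0-1);
# stakeholders rate the relative importance of the criteria, and those ratings
# are used to weight the expert scores before ranking.

expert_scores = {
    # question: {criterion: mean expert score on a 0-1 scale}
    "Q1: early screening uptake": {"answerability": 0.8, "potential_impact": 0.9, "equity": 0.6},
    "Q2: adherence support tools": {"answerability": 0.7, "potential_impact": 0.5, "equity": 0.9},
}

# Stakeholder ratings of criterion importance, normalized to sum to 1.
stakeholder_importance = {"answerability": 0.2, "potential_impact": 0.5, "equity": 0.3}

def weighted_priority_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-criterion expert scores using stakeholder-derived weights."""
    return sum(scores[criterion] * weights[criterion] for criterion in weights)

ranked = sorted(
    expert_scores.items(),
    key=lambda item: weighted_priority_score(item[1], stakeholder_importance),
    reverse=True,
)
for question, scores in ranked:
    print(f"{question}: {weighted_priority_score(scores, stakeholder_importance):.2f}")
```

The key design feature this illustrates is the division of labor: experts contribute technical judgments about each question, while stakeholders contribute value judgments about which criteria matter most.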

Nyanchoka et al.’s review found that the majority of studies focused on the identification of gaps alone (65%), whereas the remaining studies focused either on research priorities alone (17%) or on both gaps and priorities (19%). 7 In an update to Robinson et al.’s review, 6 Carey et al. reviewed the literature between 2010 and 2011 and observed that the studies conducted during this latter period of time focused more on research priorities than gaps and had increased stakeholder involvement, and that none had evaluated the reproducibility of the methods. 11

The increasing development and diversity of formal processes and methods to identify gaps and priorities are indicative of a developing field.2,12 To facilitate more standardized and systematic processes, other important areas warrant further investigation. Prior reviews did not distinguish between the identification of gaps versus research needs. The Agency for Healthcare Research and Quality Evidence-based Practice Center (AHRQ EPC) Program issued a series of method papers related to establishing research needs as part of comparative effectiveness research.13–15 The AHRQ EPC Program defined research needs as "evidence gaps" identified within systematic reviews that are prioritized by stakeholders according to their potential impact on practice or care.16 Furthermore, Nyanchoka et al. relied on author designations to classify studies as focusing on gaps versus research priorities and noted that definitions of gaps varied across studies, highlighting the need to apply consistent taxonomy when categorizing studies in reviews.7 Given the rise in the use of stakeholders in both gaps and prioritization exercises, a greater understanding of the range of practices involving stakeholders is also needed. This includes the roles and responsibilities of stakeholders (e.g., consultants versus final decision-makers), the composition of stakeholders (e.g., non-research clinicians, patients, caregivers, policymakers), and the methods used to recruit stakeholders. The lack of consensus on best practices also highlights the importance of learning the extent to which evaluations of the effectiveness of gaps, needs, and prioritization exercises have been conducted and, if so, what the resultant outcomes were.

To better inform efforts and organizations that fund health research, we conducted a scoping review of methods used to identify gaps, needs, and priorities that were linked to potential or actual health research funding decision-making. Hence, this scoping review was limited to studies in which the identification of health research gaps, needs, or priorities was supported or conducted by funding organizations, and it addresses the following questions: (1) What are the characteristics of methods to identify health research gaps, needs, and priorities? and (2) To what extent have evaluations of the impact of these methods been conducted? Given that scoping reviews may be executed to characterize the ways an area of research has been conducted,17,18 this approach is appropriate for the broad nature of this study's aims.

Protocol and Registration

We employed methods that conform to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews. 19 See Appendix A in the Supplementary Information. The scoping review protocol is registered with the Open Science Framework ( https://osf.io/5zjqx/ ).

Eligibility Criteria

Studies published in English that described methods to identify health research gaps, needs, or priorities that were supported or conducted by funding organizations were eligible for inclusion. We excluded studies that reported only the results of the exercise (e.g., list of priorities) absent of information on the methods used. We also excluded studies involving evidence synthesis (e.g., literature or systematic reviews) that were solely descriptive and did not employ an explicit method to identify research gaps, needs, or priorities.

Information Sources and Search Strategy

We searched the following electronic databases: MEDLINE, PsycINFO, and Web of Science. Our database search also included an update of the Nyanchoka et al. scoping review, which entailed executing their database searches for the time period following 2017 (that study's search end date).7 Nyanchoka et al. did not include database searches for research needs. The electronic database search and the scoping review update were completed in August and September 2019, respectively. The search strategy employed for each of the databases is presented in Appendix B in the Supplementary Information.

Selection of Sources of Evidence and Data Charting Process

Two reviewers screened titles and abstracts as well as full-text publications. Citations that one or both reviewers considered potentially eligible were retrieved for full-text review. Relevant background articles and scoping and systematic reviews were reference mined to screen for eligible studies. Full-text publications were screened against detailed inclusion and exclusion criteria. Data were extracted by one reviewer and checked by a second reviewer. Discrepancies were resolved through discussion by the review team.
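To make the screening rule concrete, here is a minimal sketch (Python; the citation records are hypothetical) of the decision applied at the title and abstract stage: a citation advances to full-text review if at least one of the two reviewers flags it as potentially eligible.

```python
# Sketch of dual-reviewer title/abstract screening (hypothetical records).
# A citation is retrieved for full-text review if one or both reviewers
# consider it potentially eligible.

screening_decisions = [
    {"id": "cit-001", "reviewer_a": True,  "reviewer_b": False},
    {"id": "cit-002", "reviewer_a": False, "reviewer_b": False},
    {"id": "cit-003", "reviewer_a": True,  "reviewer_b": True},
]

advance_to_full_text = [
    record["id"]
    for record in screening_decisions
    if record["reviewer_a"] or record["reviewer_b"]
]
print(advance_to_full_text)  # ['cit-001', 'cit-003']
```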

Information on study characteristics was extracted from each article, including the aims of the exercise (i.e., gaps, needs, priorities, or a combination) and the health condition addressed (i.e., physical or psychological). Based on definitions in the literature,3–5 the aims of the exercise were coded according to the activities that were conducted, which may not have always corresponded with the study authors' labeling of the exercises. For instance, the JLA PSP method is often described as a priority exercise, but we categorized it as a needs and priority exercise. Priority exercises can be preceded by exercises to identify gaps or needs, which then feed into the priority exercise, as in JLA PSP; however, standalone priority exercises can also be conducted (e.g., stakeholders prioritize an existing list of emerging diseases).

For each type of exercise, information on the methods used was recorded. An initial list of methods was created based on previous reviews.9,12,20 During the data extraction process, any methods not included in the initial list were subsequently added. If more than one exercise was reported within an article (e.g., gaps and priorities), information was extracted for each exercise separately. Reviewers extracted the following information: methods employed (e.g., qualitative, quantitative), criteria used (e.g., disease burden, importance to stakeholders), stakeholder involvement (e.g., stakeholder composition, method for identifying stakeholders), and whether an evaluation was conducted on the effectiveness of the exercise (see Appendix C in the Supplementary Information for the full data extraction form).

Synthesis of results entailed quantitative descriptives of study characteristics (e.g., proportion of studies by aims of exercise) and characteristics of methods employed across all studies and by each type of study (e.g., gaps, needs, priorities).
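The quantitative descriptives mentioned here amount to simple tabulations once each exercise has been coded. The following is a small sketch (Python; the coded records are hypothetical, not the study's actual extraction data) of computing the proportion of studies by the aims of the exercise.

```python
from collections import Counter

# Hypothetical extraction records: each study coded by the aims of its exercise(s).
studies = [
    {"id": "s1", "aims": "needs + priorities"},
    {"id": "s2", "aims": "gaps only"},
    {"id": "s3", "aims": "needs + priorities"},
    {"id": "s4", "aims": "priorities only"},
]

# Tally how many studies fall into each category and report proportions.
counts = Counter(study["aims"] for study in studies)
total = len(studies)
for aims, n in counts.most_common():
    print(f"{aims}: {n}/{total} ({n / total:.0%})")
```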

The electronic database search yielded a total of 10,548 titles. Another 284 articles were identified by searching the reference lists of full-text publications, including three systematic reviews21–23 and one scoping review24 that had met eligibility criteria. Moreover, a total of 99 publications designated as relevant background articles were also reference mined to screen for eligible studies. We conducted full-text screening for 2524 articles, which resulted in 2344 exclusions (440 studies were designated as background articles). A total of 167 exercises related to the identification of gaps, needs, or priorities that were supported or conducted by a research funding organization were described across 180 publications and underwent full data extraction. See Figure 1 for the flow diagram of our search strategy and reasons for exclusion.

Figure 1. Literature flow
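The counts in the literature flow can be cross-checked with simple arithmetic; the snippet below only restates the totals reported in the text above.

```python
# Cross-check of the literature flow counts reported in the text.
database_hits = 10_548
reference_mining = 284
total_citations = database_hits + reference_mining
assert total_citations == 10_832  # total screened citations reported in the abstract

full_text_screened = 2_524
excluded_at_full_text = 2_344
included_publications = full_text_screened - excluded_at_full_text  # = 180
# Consistent with the 180 publications describing the 167 included exercises.
print(total_citations, included_publications)
```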

Characteristics of Sources of Evidence

Among the published exercises, the majority of studies (152/167) conducted gaps, needs, or prioritization exercises related to physical health, whereas only a small fraction of studies focused on psychological health (12/167) (see Appendix D in the Supplementary Information).

Methods for Identifying Gaps, Needs, and Priorities

As seen in Table 1, only about a quarter of studies involved a singular type of exercise, with 7% focused on the identification of gaps only (i.e., areas with insufficient information to draw a conclusion for a given question), 6% on needs only (i.e., knowledge gaps that inhibit the decision-making of key stakeholders), and 14% on priorities only (i.e., ranked gaps or needs, often because of resource constraints). Studies more commonly conducted a combination of multiple types of exercises, with more than half focused on the identification of both research needs and priorities, 14% on gaps and priorities, 3% on gaps, needs, and priorities, and 3% on gaps and needs.

Table 1. Methods for Identifying Health Research Gaps, Needs, and Priorities (number and percentage of the 167 studies employing each method)

Framework tool: 6 (4%)
JLA PSP: 46 (28%)
ENHR: 2 (1%)
CHNRI: 11 (7%)
Systematic review: 1 (1%)
Literature review: 29 (17%)
Evidence mapping: 1 (1%)
Qualitative methods: 28 (17%)
Quantitative methods: 54 (32%)
Consensus methods: 22 (13%)
Workshop/conference: 61 (37%)
Stakeholder consultation: 7 (4%)
Review of in-progress data: 12 (7%)
Review of source materials: 25 (15%)
Other: 28 (17%)

JLA PSP, James Lind Alliance Priority Setting Partnership; ENHR, Essential National Health Research; CHNRI, Child Health and Nutrition Research Initiative. Numbers add up to more than the total N, and percentages to more than 100%, because some studies employed more than one method.

Across the 167 studies, the three most frequently used methods were the convening of workshops/meetings/conferences (37%), quantitative methods (32%), and the JLA PSP approach (28%). This was followed by methods involving literature reviews (17%), qualitative methods (17%), consensus methods (13%), and reviews of source materials (15%). Other methods included the CHNRI process (7%), reviews of in-progress data (7%), consultation with (non-researcher) stakeholders (4%), applying a framework tool (4%), ENHR (1%), systematic reviews (1%), and evidence mapping (1%).

The criterion most widely applied across the 167 studies was importance to stakeholders (72%) (see Table 2). Almost one-third of studies (29%) considered potential value, and 18% considered feasibility as a criterion. Burden of disease (9%), addressing inequities (8%), costs (6%), alignment with the organization's mission (3%), and patient centeredness (2%) were adopted as criteria to a lesser extent.

Table 2. Criteria for Identifying Health Research Gaps, Needs, and Priorities (number and percentage of the 167 studies applying each criterion)

Costs: 10 (6%)
Burden of disease: 15 (9%)
Importance to stakeholders: 120 (72%)
Patient centeredness: 4 (2%)
Aligned with organization mission: 5 (3%)
Potential value: 49 (29%)
Potential risk from inaction: 5 (3%)
Addresses inequities: 13 (8%)
Feasibility: 30 (18%)
Other: 37 (22%)
Not reported: 14 (8%)
Not applicable: 13 (8%)
Unclear: 12 (7%)

Numbers add up to more than the total N, and percentages to more than 100%, because some studies employed more than one criterion.

About two-thirds of the studies included researchers (66%) and clinicians (69%) as stakeholders (see Appendix E in the Supplementary Information). Patients and the public were involved in 59% of the studies. A smaller proportion included policy makers (20%), funders (13%), product makers (8%), payers (5%), and purchasers (2%) as stakeholders. Nearly half of the studies (51%) relied on stakeholder organizations to identify stakeholders (see Appendix F in the Supplementary Information). About a quarter of studies (26%) used purposive sampling, and 11% used convenience sampling. Few (9%) used snowball sampling to identify stakeholders. Only a minor fraction of studies, seven of the 167 (4%), reported some type of effectiveness evaluation.25–31

Our scoping review revealed that approaches to identifying gaps, needs, and priorities are less likely to occur as discrete processes and more often involve a combination of exercises. Approaches encompassing multiple exercises (e.g., gaps and needs) were far more prevalent than singular standalone exercises (e.g., gaps only) (73% vs. 27%). Findings underscore the varying importance placed on gaps, needs, and priorities, which reflects key principles of the Value of Information approach (i.e., not all gaps are important; addressing gaps does not necessarily address needs, nor does addressing needs necessarily address priorities).32

Findings differ from Nyanchoka et al.’s review in which studies involving the identification of gaps only outnumbered studies involving both gaps and priorities. 7 However, Nyanchoka et al. relied on author definitions to categorize exercises, whereas our study made designations based on our review of the activities described in the article and applied definitions drawn from the literature. 3 , 4 Lack of consensus on definitions of gaps and priority setting has been noted in the literature. 33 , 34 To the authors’ knowledge, no prior scoping review has focused on methods related to the identification of “research needs.” Findings underscore the need to develop and apply more consistent taxonomy to this growing field of research.

More than 40% of studies employed methods with a structured protocol, including JLA PSP, ENHR, CHNRI, the World Café, and the Dialogue model.10,35–40 The World Café and Dialogue models particularly value the experiential perspectives of stakeholders. The World Café centers on creating a special environment, often modeled after a café, in which rounds of multi-stakeholder, small-group conversations are facilitated and prefaced with questions designed for the specific purpose of the session. Insights and results are reported and shared back to the entire group with no expectation of achieving consensus; rather, diverse perspectives are encouraged.36 The Dialogue model is a multi-stakeholder, participatory, priority setting method involving the following phases: exploratory (informal discussions), consultation (separate stakeholder consultations), prioritization (stakeholder ratings), and integration (dialog between stakeholders).39 Findings may indicate a trend away from non-replicable methods toward approaches that afford greater transparency and reproducibility.41 For instance, of the 17 studies published between 2000 and 2009, none had employed CHNRI and 6% used JLA PSP, compared with the 141 studies published between 2010 and 2019, in which 8% applied CHNRI and 32% JLA PSP. However, notable variations in implementing CHNRI and JLA PSP have been observed.41–43 Though these protocols help to ensure a more standardized process, which is essential when testing the effectiveness of methods, such evaluations are infrequent but necessary to establish the usefulness of replicable methods.

Convening workshops, meetings, or conferences was the method used by the greatest proportion of studies (37%). The operationalization of even this singular method varied widely in duration (e.g., single vs. multi-day conferences), format (e.g., expert panel presentations, breakout discussion groups), processes (e.g., use of formal/informal consensus methods), and composition of stakeholders. The operationalization of other methods (e.g., quantitative, qualitative) also exhibited great diversity.

The use of explicit criteria to determine gaps, needs, or priorities is a key component of certain structured protocols 40 , 44 and frameworks. 9 , 45 In our scoping review, the criterion applied most frequently across studies (71%) was “importance to stakeholders,” followed by potential value (31%) and feasibility (18%). Stakeholder values are being incorporated into the identification of gaps, needs, and priorities across a significant proportion of studies, but how this is operationalized varies widely. For instance, CHNRI typically employs multiple criteria that are scored by technical experts, and these scores are then weighted based on stakeholder ratings of their relative importance. Other studies totaled scores across multiple criteria, whereas JLA PSP asks multiple stakeholders to rank the top ten priorities. The importance of involving stakeholders, especially patients and the public, in priority setting is increasingly viewed as vital to ensuring that the needs of end users are met, 46 , 47 particularly in light of evidence demonstrating mismatches between the research interests of patients and those of researchers and clinicians. 48 – 50 In our review, clinicians (69%) and researchers (66%) were the most widely represented stakeholder groups across studies. Patients and the public (e.g., caregivers) were included as stakeholders in 59% of the studies. Only a small fraction of studies involved exercises in which stakeholders were limited to researchers only. Patients and the public were involved as stakeholders in 12% of studies published between 2000 and 2009 compared with 60% of studies published between 2010 and 2019. Findings may reflect a trend away from researchers traditionally serving as the sole drivers of determining which research topics should be pursued.
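To make the arithmetic behind such weighted, multi-criteria scoring concrete, the short sketch below ranks candidate topics by combining per-criterion expert scores with stakeholder-derived weights. It is an illustrative simplification rather than the exact CHNRI procedure, and all criterion names, weights, and scores are invented for the example.

```python
# Illustrative sketch of weighted multi-criteria scoring for ranking research topics.
# Not the exact CHNRI algorithm; criterion names, weights, and scores are hypothetical.

# Stakeholder-derived weights reflecting the relative importance of each criterion
weights = {"importance_to_stakeholders": 0.5, "potential_value": 0.3, "feasibility": 0.2}

# Mean expert scores (0-100) for each candidate research topic on each criterion
topics = {
    "Topic A": {"importance_to_stakeholders": 80, "potential_value": 60, "feasibility": 70},
    "Topic B": {"importance_to_stakeholders": 65, "potential_value": 90, "feasibility": 50},
}

def weighted_score(scores, weights):
    """Combine per-criterion expert scores into a single weighted total."""
    return sum(weights[c] * scores[c] for c in weights)

# Rank topics from highest to lowest weighted score
for topic in sorted(topics, key=lambda t: weighted_score(topics[t], weights), reverse=True):
    print(f"{topic}: {weighted_score(topics[topic], weights):.1f}")
```

In practice, the choice of criteria, the scale on which experts score them, and the way stakeholder weights are elicited all vary considerably across studies, which is part of the operational diversity noted above.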

More than half of the studies reported relying on stakeholder organizations to identify participants. Partnering with stakeholder organizations has been noted as one of the primary methods for identifying stakeholders for priority setting exercises. 34 Purposive sampling was the next most frequently used stakeholder identification method. In contrast, convenience sampling (e.g., recommendations by the study team) and snowball sampling (e.g., identified stakeholders refer other stakeholders, who in turn refer additional stakeholders) were not as frequently employed, although they were documented as common methods in a prior review conducted almost a decade ago. 14 The greater use of stakeholder organizations than convenience or snowball sampling may be partly due to the more recent proliferation of published studies using structured protocols like JLA PSP, which rely heavily on partnerships with stakeholder organizations. Though methods such as snowball sampling may introduce more bias than random sampling, 14 there are no established best practices for stakeholder identification. 51 Nearly a quarter of studies provided either unclear or no information on stakeholder identification methods, which has been documented as a barrier to comparing across studies and assessing the validity of research priorities. 34
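As a rough illustration of the chain-referral logic behind snowball sampling, the sketch below grows a stakeholder sample from a small seed list, with each recruited participant nominating a few further contacts until a target size is reached. The names, referral counts, and stopping rule are hypothetical and serve only to show the mechanics, not any study's actual recruitment procedure.

```python
import random

# Hypothetical seed stakeholders, e.g., contacts supplied by partner organizations
seeds = ["patient_rep_1", "clinician_1", "researcher_1"]

def snowball_sample(seeds, target_size=15, max_referrals=3, rng_seed=0):
    """Grow a sample via chain referral: each recruited stakeholder nominates
    up to max_referrals new contacts until target_size participants are reached."""
    rng = random.Random(rng_seed)
    sample = list(seeds)
    queue = list(seeds)
    counter = 0
    while queue and len(sample) < target_size:
        current = queue.pop(0)
        # Simulate the referrals this stakeholder provides (hypothetical contacts)
        for _ in range(rng.randint(1, max_referrals)):
            if len(sample) >= target_size:
                break
            counter += 1
            referral = f"referral_{counter}_via_{current}"
            sample.append(referral)
            queue.append(referral)
    return sample

print(snowball_sample(seeds))
```

Because each wave of referrals depends on who was recruited earlier, the resulting sample can cluster around the seeds' own networks, which is one reason such methods are considered more prone to bias than random sampling.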

Determining the effectiveness of gaps, needs, and priority exercises is challenging given that outcome evaluations are rarely conducted. Only seven studies reported conducting an evaluation. 25 – 31 Evaluations varied with respect to their focus on process- (e.g., balanced stakeholder representation, stakeholder satisfaction) versus outcome-related impact (e.g., prioritized topics funded, knowledge production, benefits to health). There is no consensus on what constitutes optimal outcomes, which has been found to vary by discipline. 52

More than 90% of studies involved exercises related to physical health, in contrast to the minor portfolio of work dedicated to psychological health, which may indicate the low priority placed on psychological health policy research. Understanding whether funding decisions for physical versus psychological health research are similarly or differentially governed by more systematic, formal processes may be important to the extent that this affects the effective targeting of funds.

Limitations

By limiting studies to those supported or conducted by funding organizations, we may have excluded global, national, or local priority setting exercises. In addition, our scoping review categorized approaches according to the actual exercises conducted and definitions provided in the scientific literature rather than relying on the terminology employed by the studies themselves. This resulted in instances in which the category assigned to an exercise within our scoping review could diverge from the category employed by the study authors. Lastly, this study’s findings are subject to limitations often characteristic of scoping reviews, such as publication bias, language bias, lack of quality assessment, and search, inclusion, and extraction biases. 53

Conclusions

The diversity and growing establishment of formal processes and methods to identify health research gaps, needs, and priorities are characteristic of a developing field. Even with the emergence of more structured and systematic approaches, the inconsistent categorization and definition of gaps, needs, and priorities inhibit efforts to evaluate the effectiveness of varied methods and processes; such efforts are rare yet sorely needed to build an evidence base to guide best practices. The immense variation occurring within structured protocols, across different combinations of disparate methods, and even within singular methods further underscores the importance of using clearly defined approaches, which are essential to investigating the effectiveness of these varied approaches. The recent development of reporting guidelines for priority setting in health research may facilitate more consistent and clear documentation of processes and methods, including the many facets of stakeholder involvement. 34 To ensure optimal targeting of funds to the greatest areas of need and to maximize outcomes, a much more robust evidence base is needed to ascertain the effectiveness of the methods used to identify research gaps, needs, and priorities.

Acknowledgements

This scoping review is part of research that was sponsored by Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury (now Psychological Health Center of Excellence).

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
