What is Field Research: Definition, Methods, Examples and Advantages

What is Field Research?

Field research is defined as a qualitative method of data collection that aims to observe, interact with, and understand people while they are in a natural environment. For example, nature conservationists observe the behavior of animals in their natural surroundings and the way they react to certain scenarios. In the same way, social scientists conducting field research may conduct interviews or observe people from a distance to understand how they behave in a social environment and how they react to the situations around them.

Field research encompasses a diverse range of social research methods, including direct observation, limited participation, analysis of documents and other information, informal interviews, surveys, etc. Although field research is generally characterized as qualitative research, it often incorporates aspects of quantitative research as well.

Field research typically begins in a specific setting, with the objective of observing and analyzing the behavior of subjects in that setting. The cause and effect of a given behavior, though, is tough to analyze due to the presence of multiple uncontrolled variables in a natural environment, so most of the data collected speaks to correlation rather than cause and effect. And while field research looks for correlation, small sample sizes make it difficult to establish a causal relationship between two or more variables.
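Because field data usually supports correlational rather than causal claims, a simple check a researcher can run on quantified field observations is a correlation coefficient. Below is a minimal sketch in Python; the variable names and data are hypothetical, purely for illustration:

```python
# Pearson correlation between two quantified field observations.
# A strong coefficient suggests association, NOT causation.

def pearson_r(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical observations: hours spent on site vs. interactions recorded.
hours = [1, 2, 3, 4, 5]
interactions = [3, 5, 7, 9, 11]

r = pearson_r(hours, interactions)
print(round(r, 3))  # perfectly linear illustrative data -> 1.0
```

A coefficient near 1 or -1 indicates a strong association, but in a natural setting a lurking third variable could drive both measures, which is exactly why field researchers hesitate to claim causation.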

Methods of Field Research

Field research is typically conducted using five distinct methods. They are:

  • Direct Observation

In this method, data is collected by observing subjects in a natural environment. The researcher does not interfere with the behavior or outcome of the situation in any way. The advantage of direct observation is that it offers contextual data on people, situations, interactions, and surroundings. This method of field research is widely used in public settings but not in private ones, where it raises ethical dilemmas.

  • Participant Observation

In this method of field research, the researcher is deeply involved in the research process, not purely as an observer but also as a participant. This method, too, is conducted in a natural environment, the only difference being that the researcher gets involved in the discussions and can mould their direction. Researchers immerse themselves in the participants' environment to make the participants comfortable enough to open up to in-depth discussions.

  • Ethnography

Ethnography is an expanded observation of social research, encompassing the social perspective and cultural values of an entire social setting. In ethnography, entire communities are observed objectively. For example, a researcher who wants to understand how an Amazon tribe lives and operates may choose to live among them and silently observe their day-to-day behavior.

  • Qualitative Interviews

Qualitative interviews consist of open-ended questions asked directly of the research subjects. The interviews can be informal and conversational, semi-structured, standardized and open-ended, or a mix of all three. They provide a wealth of data that the researcher can sort through, and they also help collect relational data. This method of field research can use a mix of one-on-one interviews, focus groups, and text analysis.

  • Case Study Research

A case study is an in-depth analysis of a person, situation, or event. This method may look difficult to operate; however, it is one of the simplest ways of conducting research, as it involves a deep dive into, and a thorough understanding of, the data collection methods and the inferences drawn from the data.

Steps in Conducting Field Research

Due to the nature of field research and the magnitude of the timelines and costs involved, it can be very tough to plan, implement, and measure. Some basic steps in the management of field research are:

  • Build the Right Team: Having the right team is essential to conducting field research. The roles of the researcher and any ancillary team members are very important, and the tasks they have to carry out, with relevant milestones, must be defined. It is also important that upper management is invested in the field research for it to succeed.
  • Recruiting People for the Study: The success of field research depends on the people the study is conducted on. Using sampling methods, it is important to select the people who will be a part of the study.
  • Data Collection Methodology: As discussed at length above, data collection methods for field research are varied: they could be a mix of surveys, interviews, case studies, and observation. All these methods, and the milestones for each, have to be chalked out at the outset. For example, in the case of a survey, the survey should be designed and tested even before the research begins.
  • Site Visit: A site visit is important to the success of field research, and it is always conducted outside of traditional locations, in the actual natural environment of the respondents. Hence, planning a site visit along with the methods of data collection is important.
  • Data Analysis: Analysis of the collected data is important to validate the premise of the field research and decide its outcome.
  • Communicating Results: Once the data is analyzed, it is important to communicate the results to the stakeholders of the research so that they can be acted upon.
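The recruitment step above often relies on probability sampling. A minimal sketch of simple random sampling in Python, assuming a hypothetical sampling frame of candidate participants (the frame and sample size are illustrative):

```python
import random

# Hypothetical sampling frame: IDs of everyone eligible for the study.
frame = [f"respondent_{i:03d}" for i in range(1, 201)]

# Draw a simple random sample without replacement.
# A fixed seed makes the draw reproducible for the research record.
rng = random.Random(42)
sample = rng.sample(frame, k=20)

print(len(sample))       # 20 participants selected
print(len(set(sample)))  # no duplicates: sampling is without replacement
```

Recording the seed alongside the frame lets another researcher reproduce exactly which respondents were invited, which supports the audit trail that field studies need.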

Field Research Notes

Keeping an ethnographic record is very important in conducting field research. Field notes are one of the most important aspects of the ethnographic record. Note-taking begins while the researcher is engaged in observation; the notes are then expanded and written up later.

Types of Field Research Notes

The four different kinds of field notes are:

  • Jotted Notes: These notes are taken while the researcher is in the study, possibly in close proximity to and in open sight of the subject. The notes are short, concise, and in condensed form so that the researcher can build on them later. Most researchers avoid this method, though, for fear that the respondent may not take them seriously.
  • Field Notes Proper: These notes are expanded on immediately after the completion of events. They have to be detailed, and the wording has to be as close as possible to that of the subject being studied.
  • Methodological Notes: These notes document the research methods used by the researcher, any newly proposed research methods, and the way their progress is monitored. Methodological notes can be kept with field notes or filed separately, but they find their way into the end report of a study.
  • Journals and Diaries: This method of field notes offers an insight into the life of the researcher. It tracks all aspects of the researcher's life and helps eliminate the halo effect or any other bias that may have crept in during the field research.

Reasons to Conduct Field Research

Field research has been commonly used in the social sciences since the 20th century. In general, though, it takes a long time to conduct and complete, is expensive, and is in many cases invasive. So why is it so widely used, and why do researchers prefer it to validate data? We look at four major reasons:

  • Overcoming a lack of data: Field research resolves the major issue of gaps in data. Very often, there is limited or no data about a topic under study, especially in a specific environment. The research problem might be known or suspected, but there is no way to validate it without primary research and data. Conducting field research helps not only plug gaps in data but also collect supporting material, and hence it is a preferred method of researchers.
  • Understanding the context of the study: In many cases, the data collected is adequate, but field research is still conducted to gain insight into the existing data. For example, existing data may state that horses from a particular stable generally win races because the horses are pedigreed and the stable owner hires the best jockeys; field research can throw light on other factors that influence success, like the quality of fodder and care provided and conducive weather conditions.
  • Increasing the quality of data: Since this research method uses more than one tool to collect data, the data is of higher quality. Inferences can be made from the data collected, and it can be statistically analyzed via the triangulation of data.
  • Collecting ancillary data: Field research puts researchers in a position of localized thinking, which opens them up to new lines of thinking. This can help collect data that the study didn't originally set out to collect.
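Triangulation, mentioned above, combines evidence from more than one collection tool. One simple quantitative form is tallying how often a coded theme appears across independent sources; here is a sketch in Python in which the theme codes and sources are hypothetical:

```python
from collections import Counter

# Hypothetical coded themes from three data sources in the same study.
survey_codes = ["trust", "cost", "access", "trust"]
interview_codes = ["trust", "access", "stigma"]
observation_codes = ["access", "trust", "cost"]

# Triangulate: a theme supported by two or more independent sources
# is stronger evidence than one appearing in a single source.
combined = Counter()
for source in (survey_codes, interview_codes, observation_codes):
    combined.update(set(source))  # count each theme once per source

triangulated = [theme for theme, n in combined.items() if n >= 2]
print(sorted(triangulated))  # ['access', 'cost', 'trust']
```

Counting each theme at most once per source (via `set`) keeps a theme repeated many times in a single interview from masquerading as cross-source corroboration.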

Examples of Field Research

Some examples of field research are:

  • Decipher social metrics in a slum: Purely by using observational methods and in-depth interviews, researchers can become part of a community to understand its social metrics and social hierarchy. Such a study can also capture the financial independence and day-to-day operational nuances of a slum. Analysis of this data can provide insight into how different a slum is from structured societies.
  • Understand the impact of sports on a child's development: This kind of field research takes multiple years to conduct, and the sample size can be very large. The data analysis provides insight into how children of different geographical locations and backgrounds respond to sports and the impact of sports on their all-round development.
  • Study animal migration patterns: Field research is used extensively to study flora and fauna. A major use case is scientists monitoring and studying animal migration patterns with the change of seasons. Field research helps collect data across years and draw conclusions about how to expedite the safe passage of animals.

Advantages of Field Research

The advantages of field research are:

  • It is conducted in a real-world, natural environment where no variables are tampered with and the environment is not doctored.
  • Because the study is conducted in a comfortable environment, data can be collected even on ancillary topics.
  • The researcher gains a deep understanding of the research subjects due to their proximity, and hence the research is extensive, thorough, and accurate.

Disadvantages of Field Research

The disadvantages of field research are:

  • The studies are expensive and time-consuming and can take years to complete.
  • It is very difficult for the researcher to distance themselves from a bias in the research study.
  • Notes must capture exactly what the respondent says, but such verbatim record-keeping is very difficult to maintain.
  • It is an interpretive method and this is subjective and entirely dependent on the ability of the researcher.
  • In this method, it is impossible to control external variables and this constantly alters the nature of the research.


Field Research: A Graduate Student's Guide

Ezgi Irgil, Anne-Kathrin Kreft, Myunghee Lee, Charmaine N Willis, Kelebogile Zvobgo, Field Research: A Graduate Student's Guide, International Studies Review , Volume 23, Issue 4, December 2021, Pages 1495–1517, https://doi.org/10.1093/isr/viab023

What is field research? Is it just for qualitative scholars? Must it be done in a foreign country? How much time in the field is “enough”? A lack of disciplinary consensus on what constitutes “field research” or “fieldwork” has left graduate students in political science underinformed and thus underequipped to leverage site-intensive research to address issues of interest and urgency across the subfields. Uneven training in Ph.D. programs has also left early-career researchers underprepared for the logistics of fieldwork, from developing networks and effective sampling strategies to building respondents’ trust, and related issues of funding, physical safety, mental health, research ethics, and crisis response. Based on the experience of five junior scholars, this paper offers answers to questions that graduate students puzzle over, often without the benefit of others’ “lessons learned.” This practical guide engages theory and praxis, in support of an epistemologically and methodologically pluralistic discipline.

Days before embarking on her first field research trip, a Ph.D. student worries about whether she will be able to collect the qualitative data that she needs for her dissertation. Despite sending dozens of emails, she has received only a handful of responses to her interview requests. She wonders if she will be able to gain more traction in-country. Meanwhile, in the midst of drafting her thesis proposal, an M.A. student speculates about the feasibility of his project, given a modest budget. Thousands of miles away from home, a postdoc is concerned about their safety, as protests erupt outside their window and state security forces descend into the streets.

These anecdotes provide a small glimpse into the concerns of early-career researchers undertaking significant projects with a field research component. Many of these fieldwork-related concerns arise from an unfortunate shortage in curricular offerings for qualitative and mixed-method research in political science graduate programs ( Emmons and Moravcsik 2020 ), 1 as well as the scarcity of instructional materials for qualitative and mixed-method research, relative to those available for quantitative research ( Elman, Kapiszewski, and Kirilova 2015 ; Kapiszewski, MacLean, and Read 2015 ; Mosley 2013 ). A recent survey among the leading United States Political Science programs in Comparative Politics and International Relations found that among graduate students who have carried out international fieldwork, 62 percent had not received any formal fieldwork training and only 20 percent felt very or mostly prepared for their fieldwork ( Schwartz and Cronin-Furman 2020 , 7–8). This shortfall in training and instruction means that many young researchers are underprepared for the logistics of fieldwork, from developing networks and effective sampling strategies to building respondents’ trust. In addition, there is a notable lack of preparation around issues of funding, physical safety, mental health, research ethics, and crisis response. This is troubling, as field research is highly valued and, in some parts of the field, it is all but expected, for instance in comparative politics.

Beyond subfield-specific expectations, research that leverages multiple types of data and methods, including fieldwork, is one of the ways that scholars throughout the discipline can more fully answer questions of interest and urgency. Indeed, multimethod work, a critical means by which scholars can parse and evaluate causal pathways, is on the rise ( Weller and Barnes 2016 ). The growing appearance of multimethod research in leading journals and university presses makes adequate training and preparation all the more significant ( Seawright 2016 ; Nexon 2019 ).

We are five political scientists interested in providing graduate students and other early-career researchers helpful resources for field research that we lacked when we first began our work. Each of us has recently completed or will soon complete a Ph.D. at a United States or Swedish university, though we come from many different national backgrounds. We have conducted field research in our home countries and abroad. From Colombia and Guatemala to the United States, from Europe to Turkey, and throughout East and Southeast Asia, we have spanned the globe to investigate civil society activism and transitional justice in post-violence societies, conflict-related sexual violence, social movements, authoritarianism and contentious politics, and the everyday politics and interactions between refugees and host-country citizens.

While some of us have studied in departments that offer strong training in field research methods, most of us have had to self-teach, learning through trial and error. Some of us have also been fortunate to participate in short courses and workshops hosted by universities such as the Consortium for Qualitative Research Methods and interdisciplinary institutions such as the Peace Research Institute Oslo. Recognizing that these opportunities are not available to or feasible for all, and hoping to ease the concerns of our more junior colleagues, we decided to compile our experiences and recommendations for first-time field researchers.

Our experiences in the field differ in several key respects, from the time we spent in the field to the locations we visited, and how we conducted our research. The diversity of our experiences, we hope, will help us reach and assist the broadest possible swath of graduate students interested in field research. Some of us have spent as little as ten days in a given country or as much as several months, in some instances visiting a given field site location just once and in other instances returning several times. At times, we have been able to plan weeks and months in advance. Other times, we have quickly arranged focus groups and impromptu interviews. Other times still, we have completed interviews virtually, when research participants were in remote locations or when we ourselves were unable to travel, of note during the coronavirus pandemic. We have worked in countries where we are fluent or have professional proficiency in the language, and in countries where we have relied on interpreters. We have worked in settings with precarious security as well as in locations that feel as comfortable as home. Our guide is not intended to be prescriptive or exhaustive. What we offer is a set of experience-based suggestions to be implemented as deemed relevant and appropriate by the researcher and their advisor(s).

In terms of the types of research and data sources and collection, we have conducted archival research, interviews, focus groups, and ethnographies with diplomats, bureaucrats, military personnel, ex-combatants, civil society advocates, survivors of political violence, refugees, and ordinary citizens. We have grappled with ethical dilemmas, chief among them how to get useful data for our research projects in ways that exceed the minimal standards of human subjects’ research evaluation panels. Relatedly, we have contemplated how to use our platforms to give back to the individuals and communities who have so generously lent us their time and knowledge, and shared with us their personal and sometimes harrowing stories.

Our target audience is first and foremost graduate students and early-career researchers who are interested in possibly conducting fieldwork but who either (1) do not know the full potential or value of fieldwork, (2) know the potential and value of fieldwork but think that it is excessively cost-prohibitive or otherwise infeasible, or (3) who have the interest, the will, and the means but not necessarily the know-how. We also hope that this resource will be of value to graduate programs, as they endeavor to better support students interested in or already conducting field research. Further, we target instructional faculty and graduate advisors (and other institutional gatekeepers like journal and book reviewers), to show that fieldwork does not have to be year-long, to give just one example. Instead, the length of time spent in the field is a function of the aims and scope of a given project. We also seek to formalize and normalize the idea of remote field research, whether conducted because of security concerns in conflict zones, for instance, or because of health and safety concerns, like the Covid-19 pandemic. Accordingly, researchers in the field for shorter stints or who conduct fieldwork remotely should not be penalized.

We note that several excellent resources on fieldwork such as the bibliography compiled by Advancing Conflict Research (2020) catalogue an impressive list of articles addressing questions such as ethics, safety, mental health, reflexivity, and methods. Further resources can be found about the positionality of the researcher in the field while engaging vulnerable communities, such as in the research field of migration ( Jacobsen and Landau 2003 ; Carling, Bivand Erdal, and Ezzati 2014 ; Nowicka and Cieslik 2014 ; Zapata-Barrero and Yalaz 2019 ). However, little has been written beyond conflict-affected contexts, fragile settings, and vulnerable communities. Moreover, as we consulted different texts and resources, we found no comprehensive guide to fieldwork explicitly written with graduate students in mind. It is this gap that we aim to fill.

In this paper, we address five general categories of questions that graduate students puzzle over, often without the benefit of others’ “lessons learned.” First, What is field research? Is it just for qualitative scholars? Must it be conducted in a foreign country? How much time in the field is “enough”? Second, What is the purpose of fieldwork? When does it make sense to travel to a field site to collect data? How can fieldwork data be used? Third, What are the nuts and bolts? How does one get ready and how can one optimize limited time and financial resources? Fourth, How does one conduct fieldwork safely? What should a researcher do to keep themselves, research assistants, and research subjects safe? What measures should they take to protect their mental health? Fifth, How does one conduct ethical, beneficent field research?

Finally, the Covid-19 pandemic has impressed upon the discipline the volatility of research projects centered around in-person fieldwork. Lockdowns and closed borders left researchers sequestered at home and unable to travel, forced others to cut short any trips already begun, and unexpectedly confined others still to their fieldwork sites. Other factors that may necessitate a (spontaneous) readjustment of planned field research include natural disasters, a deteriorating security situation in the field site, researcher illness, and unexpected changes in personal circumstances. We, therefore, conclude with a section on the promise and potential pitfalls of remote (or virtual) fieldwork. Throughout this guide, we engage theory and praxis to support an epistemologically and methodologically pluralistic discipline.

The concept of “fieldwork” is not well defined in political science. While several symposia discuss the “nuts and bolts” of conducting research in the field within the pages of political science journals, few ever define it ( Ortbals and Rincker 2009 ; Hsueh, Jensenius, and Newsome 2014 ). Defining the concept of fieldwork is important because assumptions about what it is and what it is not underpin any suggestions for conducting it. A lack of disciplinary consensus about what constitutes “fieldwork,” we believe, explains the lack of a unified definition. Below, we discuss three areas of current disagreement about what “fieldwork” is, including the purpose of fieldwork, where it occurs, and how long it should be. We follow this by offering our definition of fieldwork.

First, we find that many in the discipline view fieldwork as squarely in the domain of qualitative research, whether interpretivist or positivist. However, field research can also serve quantitative projects—for example, by providing crucial context, supporting triangulation, or illustrating causal mechanisms. For instance, Kreft (2019) elaborated her theory of women's civil society mobilization in response to conflict-related sexual violence based on interviews she carried out in Colombia. She then examined cross-national patterns through statistical analysis. Conversely, Willis's research on the United States military in East Asia began with quantitative data collection and analysis of protest events before turning to fieldwork to understand why protests occurred in some instances but not others. Researchers can also find quantifiable data in the field that is otherwise unavailable to them at home ( Read 2006 ; Chambers-Ju 2014 ; Jensenius 2014 ). Accordingly, fieldwork is not in the domain of any particular epistemology or methodology, as its purpose is simply to acquire data for further analysis.

Second, comparative politics and international relations scholars often opine that fieldwork requires leaving the country in which one's institution is based. Instead, we propose that what matters most is the nature of the research project, not the locale. For instance, some of us in the international relations subfield have interviewed representatives of intergovernmental organizations (IGOs) and international nongovernmental organizations (INGOs), whose headquarters are generally located in Global North countries. For someone pursuing a Ph.D. in the United States and writing on transnational advocacy networks, interviews with INGO representatives in New York certainly count as fieldwork ( Zvobgo 2020 ). Similarly, a graduate student who returns to her home country to interview refugees and native citizens is conducting a field study as much as a researcher for whom the context is wholly foreign. Such interviews can provide necessary insights and information that would not have been gained otherwise—one of the key reasons researchers conduct fieldwork in the first place. In other instances, conducting any in-person research is simply not possible, due to financial constraints, safety concerns, or other reasons. For example, the Covid-19 pandemic has forced many researchers to shift their face-to-face research plans to remote data collection, either over the phone or virtually ( Howlett 2021 , 2). For some research projects, gathering data through remote methods may yield the same, or at least similar, information as in-person research ( Howlett 2021 , 3–4). As Howlett (2021 , 11) notes, digital platforms may offer researchers the ability to “embed ourselves in other contexts from a distance” and glimpse into our subjects’ lives in ways similar to in-person research. By adopting a broader definition of fieldwork, researchers can be more flexible in getting access to data sources and interacting with research subjects.

Third, there is a tendency, especially among comparativists, to only count fieldwork that spans the better part of a year; even “surgical strike” field research entails one to three months, according to some scholars ( Ortbals and Rincker 2009 ; Weiss, Hicken, and Kuhonta 2017 ). The emphasis on spending as much time as possible in the field is likely due to ethnographic research traditions, reflected in classics such as James Scott's Weapons of the Weak , which entail year-long stints of research. However, we suggest that the appropriate amount of time in the field should be assessed on a project-by-project basis. Some studies require the researcher to be in the field for long periods; others do not. For example, Willis's research on the discourse around the United States’ military presence in overseas host communities has required months in the field. By contrast, Kreft only needed ten days in New York to carry out interviews with diplomats and United Nations staff, in a context with which she already had some familiarity from a prior internship. Likewise, Zvobgo spent a couple of weeks in her field research sites, conducting interviews with directors and managers of prominent human rights nongovernmental organizations. This population is not so large as to require a whole month or even a few months. This was also the case for Irgil, who spent one month at her field site conducting interviews with ordinary citizens; the goal of her project was to acquire information on citizens’ perceptions of refugees. As we discuss in the next section, when deciding how long to spend in the field, scholars must consider the information their project requires and the practicalities of fieldwork, notably cost.

Thus, we highlight three essential points in fieldwork and offer a definition accordingly: fieldwork involves acquiring information, using any set of appropriate data collection techniques, for qualitative, quantitative, or experimental analysis through embedded research whose location and duration are dependent on the project. We argue that adopting such a definition of “fieldwork” is necessary to include the multitude of forms fieldwork can take, including remote methods, whose value and challenges the Covid-19 pandemic has impressed upon the discipline.

When does a researcher need to conduct fieldwork? Fieldwork can be effective for (1) data collection, (2) theory building, and (3) theory testing. First, when a researcher is interested in a research topic but cannot find an available and/or reliable data source for it, fieldwork can provide plenty of options. Some research agendas may require researchers to visit archives to review historical documents. For example, Greitens (2016) visited national archives in the Philippines, South Korea, Taiwan, and the United States to find historical documents about the development of coercive institutions in past authoritarian governments for her book, Dictators and Their Secret Police . Also, newly declassified archival documents can open new possibilities for researchers to examine restricted topics. To illustrate, thanks to the newly released archival records of the Chinese Communist Party's communications and exchanges of visits with the European communist world, Sarotte (2012) was able to study the Party's decision to crack down on Tiananmen protesters, a topic previously deemed unstudiable due to limited data.

Other research agendas may require researchers to conduct (semistructured) in-depth interviews to understand human behavior or a situation more closely, for example, by revealing what concepts mean to people and showing how people perceive the world. For example, O'Brien and Li (2005) conducted in-depth interviews with activists, elites, and villagers to understand how these actors interact with each other and what the outcomes of those interactions are in contentious movements in rural China. Through this research, they revealed that protests have deeply influenced all these actors’ minds, a fact not directly observable without in-depth interviews.

Finally, data collection through fieldwork should not be confined to qualitative data ( Jensenius 2014 ). While some quantitative datasets can be easily compiled or accessed through use of the internet or contact with data-collection agencies, other datasets can only be built or obtained through relationships with “gatekeepers” such as government officials, and thus require researchers to visit the field ( Jensenius 2014 ). Researchers can even collect their own quantitative datasets by launching surveys or quantifying data contained in archives. In a nutshell, fieldwork will allow researchers to use different techniques to collect and access original/primary data sources, whether these are qualitative, quantitative, or experimental in nature, and regardless of the intended method of analysis. 2

But fieldwork is not just for data collection as such. Researchers can accomplish two other fundamental elements of the research process: theory building and theory testing. When a researcher finds a case where existing theories about a phenomenon do not provide plausible explanations, they can build a theory through fieldwork ( Geddes 2003 ). Lee's experience provides a good example. When studying the rise of a protest movement in South Korea for her dissertation, Lee applied commonly discussed social movement theories (grievances, political opportunity, resource mobilization, and repression) to explain the movement's eruption and found that these theories did not offer a convincing explanation. She then moved on to fieldwork and conducted interviews with the movement participants to understand their motivations. Through those interviews, she developed an alternative theory: the participants’ collective identity, shaped during the authoritarian past, served as a unifying factor and ultimately led them to join the movement. Her example shows that theorization can take place through careful review and rigorous inference during fieldwork.

Moreover, researchers can test their theory through fieldwork. Quantitative observational data has limitations in revealing causal mechanisms ( Esarey 2017 ). Therefore, many political scientists turn their attention to conducting field experiments or lab-in-the-field experiments to reveal causality ( Druckman et al. 2006 ; Beath, Christia, and Enikolopov 2013 ; Finseraas and Kotsadam 2017 ), or to leveraging in-depth insights or historical records gained through qualitative or archival research in process-tracing ( Collier 2011 ; Ricks and Liu 2018 ). Surveys and survey experiments may also be useful tools to substantiate a theoretical story or test a theory ( Marston 2020 ). Of course, for most Ph.D. students, especially those not affiliated with more extensive research projects, some of these options will be financially prohibitive.

A central concern for graduate students, especially those working with a small budget and limited time, is optimizing time in the field and integrating remote work. We offer three pieces of advice: have a plan, build in flexibility, and be strategic, focusing on collecting data that are unavailable at home. We also discuss working with local translators or research assistants. Before we turn to these more practical issues arising during fieldwork, we address a no less important issue: funding.

The challenge of securing funds is often overlooked in discussions of what constitutes field research. Months- or year-long in-person research can be cost-prohibitive, something academic gatekeepers must consider when evaluating “what counts” and “what is enough.” Unlike their predecessors, many graduate students today have a significant amount of debt and little savings. 3 Additionally, if researchers are not able to procure funding, they have to pay out of pocket and possibly take on more debt. Not only is in-person fieldwork costly, but researchers may also have to forego working while they are in the field, making long stretches in the field infeasible for some.

For researchers whose fieldwork involves travelling to another location, procuring funding via grants, fellowships, or other sources is a necessity, regardless of how long one plans to be in the field. A good mantra for applying for research funding is “apply early and often” ( Kelsky 2015 , 110). Funding applications take a considerable amount of time to prepare, from writing research statements to requesting letters of recommendation. Even adapting one's materials for different applications takes time. Not only is the application process itself time-consuming, but the time between applying for and receiving funds, if successful, can be quite long, from several months to a year. For example, after defending her prospectus in May 2019, Willis began applying to funding sources for her dissertation, all of which had deadlines between June and September. She received notifications between November and January; however, funds from her successful applications were not available until March and April, almost a year later. 4 Accordingly, we recommend applying for funding as early as possible; this not only increases one's chances of hitting the ground running in the field, but the application process can also help clarify the goals and parameters of one's research.

Graduate students should also apply often for funding opportunities. There are different types of funding for fieldwork: some are larger, more competitive grants, such as the National Science Foundation Political Science Doctoral Dissertation Improvement Grant in the United States; others, including sources through one's own institution, are smaller. Some countries, like Sweden, boast a plethora of smaller funding agencies that disburse grants of 20,000–30,000 Swedish Kronor (approx. 2,500–3,500 U.S. dollars) to Ph.D. students in the social sciences. Listings of potential funding sources are often found on various websites, including those belonging to universities, professional organizations (such as the American Political Science Association or the European Consortium for Political Research), and governmental institutions dealing with foreign affairs. Once you have identified fellowships and grants for which you and your project are a good match, we highly recommend soliciting information and advice from colleagues who have successfully applied for them. This can include asking them to share their applications with you and, if possible, having them or other colleagues read through your project description and research plan (especially for bigger awards) to ensure that you have made the best possible case for why you should be selected. While both large and small pots of funding are worth applying for, many researchers end up funding their fieldwork through several small grants or fellowships. One small award may not be sufficient to fund the entirety of one's fieldwork, but several may. For example, Willis's fieldwork in Japan and South Korea was supported through fellowships within each country. Similarly, Irgil was able to conduct her fieldwork abroad through two relatively small grants by applying to each of them annually.

Of course, situations vary across countries with respect to what kinds of grants from what kinds of funders are available. An essential part of preparing for fieldwork is researching the funding landscape well in advance, even as early as the start of the Ph.D. First-time field researchers should know that universities and departments may themselves not be aware of the full range of available funds, so it is always a good idea to do your own research and watch research-related social media channels. The amount of funding needed depends on the nature of one's project and how long one intends to be in the field. As we elaborate in the next section, scholars should think carefully about their project goals, the data required to meet those goals, and the requisite time to attain them. For some projects, even a couple of weeks in the field is sufficient to get the needed information.

Preparing to Enter the Field

It is important to prepare for the field as much as possible. What kind of preparations do researchers need? For someone conducting interviews with NGO representatives, this might involve identifying the largest possible pool of potential respondents, securing their contact information, sending them study invitation letters, finding a mutually agreeable time to meet, and pulling together short biographies for each interviewee in order to use your time together most effectively. If you plan to travel to conduct interviews, you should reach out to potential respondents roughly four to six weeks prior to your arrival. For individuals who do not respond, you can follow up one to two weeks before you arrive and, if needed, once more when you are there. This is still no guarantee of success, of course. For Kreft, contacting potential interviewees in Colombia initially proved more challenging than anticipated, as many of the people she targeted did not respond to her emails. It turned out that many Colombians prefer communicating via phone or, in particular, WhatsApp. Some of those who responded to her emails in advance of her field trip asked her simply to be in touch once she was in the country, to set up appointments on short notice. This made planning and arranging her interview schedule more complicated. Therefore, a general piece of advice is to research your target population's preferred communication channels in the field site if email requests yield few or no responses.

In general, we note for the reader that contacting potential research participants should come after one has designed an interview questionnaire (plus an informed consent protocol) and sought and received, where applicable, approval from institutional review boards (IRBs) or other ethical review procedures in place (both at one's home institution and in its country, as well as in the country where one plans to conduct research if travelling abroad). The most obvious advantage of having the interview questionnaire in place and having secured all necessary institutional approvals before you start contacting potential interviewees is that you have a clearer idea of the universe of individuals you would like to interview, and for what purpose. Therefore, it is better to start sooner rather than later and to be mindful of “high seasons,” when institutional and ethical review boards are receiving, processing, and making decisions on numerous proposals. It may take a few months for them to issue approvals.

On the subject of ethics and review panels, we encourage you to talk openly and honestly with your supervisors and/or funders about situations where a written consent form may not be suitable and might need to be replaced with “verbal consent.” For instance, doing fieldwork in politically unstable contexts, highly scrutinized environments, or vulnerable communities, like refugees, might create obstacles for the interviewees as well as the researcher. The literature discusses the dilemma of promising interviewees anonymity and total confidentiality while requesting signed written consent ( Jacobsen and Landau 2003 ; Mackenzie, McDowell, and Pittaway 2007 ; Saunders, Kitzinger, and Kitzinger 2015 ). In such situations, the researcher may need to decide how to proceed while keeping the interviews as rigorous as possible. Irgil faced this situation in her fieldwork, as the political context of Turkey could not guarantee that there would be no adverse consequences for interviewees on either side of her story: citizens of Turkey and Syrian refugees. Consequently, she took hand-written notes and asked interviewees for their verbal consent in a safe interview atmosphere, something respondents greatly appreciated ( Irgil 2020 ).

Ethical considerations, of course, also affect the research design itself, with ramifications for fieldwork. When Kreft began developing her Ph.D. proposal to study women's political and civil society mobilization in response to conflict-related sexual violence, she initially aimed to recruit interviewees from the universe of victims of this violence, to examine variation among those who did and those who did not mobilize politically. As a result of deeper engagement with the literature on researching conflict-related sexual violence, conversations with senior colleagues who had interviewed victims, and critical self-reflection on her status as a researcher (with no background in psychology or social work), she decided to change focus and shift toward representatives of civil society organizations and victims’ associations. This constituted a major reconfiguration of her research design, from one geared toward identifying the factors that drive mobilization of victims to one using insights from interviews to better understand how those who mobilize perceive and “make sense” of conflict-related sexual violence. Needless to say, this required alterations to research strategies and interview guides, including reassessing her planned fieldwork. Kreft's primary consideration was not to cause harm to her research participants, particularly in the form of re-traumatization. She opted to speak only with women who, on account of their work, are used to speaking about conflict-related sexual violence. In no instance did she inquire about interviewees’ personal experiences with sexual violence, although several brought this up on their own during the interviews.

Finally, if you are conducting research in another country where you have less-than-professional fluency in the language, pre-fieldwork planning should include hiring a translator or research assistant, for example, through an online hiring platform like Upwork or through a local university. Your national embassy or consulate is another option; many diplomatic offices keep lists of individuals they have previously contracted. More generally, establishing contact with a local university can be beneficial, either in the form of a visiting researcher arrangement, which grants access to research groups and facilities like libraries, or by informally contacting individual researchers. The latter may have valuable insights into the local context and contacts to potential research participants, and they may even be able to recommend translators or research assistants. Kreft, for example, hired local research assistants recommended by researchers at a Bogotá-based university and remunerated them at the salary they would have received as graduate research assistants at the university, while also covering necessary travel expenses. Irgil, on the other hand, established contacts with native citizens and Syrian gatekeepers, shop owners in the area where she conducted her research, because she had the opportunity to visit the fieldwork site multiple times.

Depending on the research agenda, researchers may visit national archives, local government offices, etc. Before visiting, researchers should contact these facilities to make sure the materials they need are accessible. For example, Lee visited the Ronald Reagan Presidential Library Archives to find the United States’ strategic evaluations of South Korea's dictator in the 1980s. Before her visit, she contacted the archives’ librarians to explain her visit plans and research purpose. The librarians suggested which categories she should start reviewing based on her research goal, so she was able to make a list of the materials she needed, saving her a great deal of time.

Accessibility of and access to certain facilities and libraries can differ by location, country, and type of facility. Facilities in authoritarian countries might not be easily accessible to foreign researchers, and within democratic countries some facilities are more restrictive than others. Situations like a pandemic or national holidays can also restrict accessibility. Therefore, researchers are well advised to do preliminary research on whether a facility is open during the time of their visit and is accessible to researchers regardless of their citizenship status. Moreover, researchers should contact facility staff to learn whether identity verification is needed and, if so, what kind of documents (photo I.D. or passport) must be presented.

Adapting to the Reality of the Field

Researchers need to be flexible: you may meet people you did not make appointments with, come across opportunities you did not expect, or stumble upon new ideas about collecting data in the field. These happenings will enrich your field experience and ultimately benefit your research. Similarly, do not be discouraged by interviews that do not go according to plan; they present an opportunity to pursue relevant people who can provide an alternative path for your work. Note that planning ahead does not preclude fortuitous encounters or epiphanies. Rather, it provides a structure for them to happen.

If your fieldwork entails travelling abroad, you will also be able to recruit more interviewees once you arrive at your research site. In fact, you may have greater success in-country; not everyone is willing to respond to a cold email from an unknown researcher in a foreign country. In her fieldwork, Irgil contacted store owners who are well known in the area and know the community, which eased her introduction into the community and her recruitment of interviewees. Zvobgo had fewer than a dozen interviews scheduled when she travelled to Guatemala to study civil society activism and transitional justice since the internal armed conflict, but she was able to recruit additional participants in-country. Interviewees with whom she built a rapport connected her to other NGOs, government offices, and the United Nations country office, sometimes even making the call and scheduling interviews for her. Through snowball sampling, she was able to triple the number of participants. Likewise, snowball sampling was central to Kreft's recruitment of interview partners. Several of her interviewees connected her to highly relevant individuals she would never have been able to identify and contact through web searches alone.

While in the field, you may nonetheless encounter obstacles that necessitate adjustments to your original plans. Once Kreft arrived in Colombia, for example, it quickly became clear that carrying out in-person interviews in more remote and rural areas was nearly impossible given her means: these areas were not easily accessible by bus or coach, and a complex security situation further complicated travel. Instead, she adjusted her research design and shifted her focus to the big cities, where most of the major civil society organizations are based. She complemented the in-person interviews carried out there with a smaller number of phone interviews with civil society activists in rural areas, and she was also able to meet a few activists operating in rural or otherwise inaccessible areas while they were visiting the major cities. The resulting focus on urban settings changed the kinds of generalizations she was able to make based on her fieldwork data and produced a somewhat different study than initially anticipated.

Irgil, too, had to adjust, despite her prior arrangements with Syrian gatekeepers. She had acquired research clearance a year earlier, during her interviews with native citizens, before turning to interviews with Syrian refugees. Her questionnaire was ready, based on the previously collected data and on a media search she had conducted for over a year before travelling to the field site. Because she was able to visit the field site multiple times, she developed a schedule with the Syrian gatekeepers and informants two months before conducting the refugee interviews. Yet once she was in the field, influenced by Turkey's recent political events and the policy of increasing control over Syrian refugees, half of the previously agreed informants changed their minds or declined to participate. Because Irgil was closely following the policies and news related to Syrian refugees in Turkey, this did not come as a great surprise, but it upended her recruitment strategy. She therefore shifted to recruiting interviewees in the field site directly, asking people almost one by one whether they would participate. In the end, she could not find as many willing Syrian women refugees as she had planned, which resulted in a male-dominated sample. When researchers encounter such situations, it is essential to remember that not everything can go according to plan and that “different” does not equate to “worse,” but also to consider what changes to fieldwork data collection and sampling imply for the study's overall findings and its contribution to the literature.

We should note that conducting interviews is very taxing—especially when opportunities multiply, as in Zvobgo's case. Depending on the project, each interview can take an hour, if not two or more. Hence, you should make a reasonable schedule: we recommend no more than two interviews per day. You do not want to have to cut off an interview because you need to rush to another one, whether the interviews are in-person or remote. And you do not want to be too exhausted to engage robustly with a respondent who is generously lending you their time. Limiting the number of interviews per day is also important to ensure that you can write comprehensive and meaningful fieldnotes, which becomes even more essential where it is not possible to audio-record your interviews. Also, remember to eat, stay hydrated, and try to get enough sleep.

Finally, whether to provide gifts or payments to subjects also requires adapting to the reality of the field. You must think about payments when you apply for IRB approval (or whatever other ethical review processes may be in place), since these applications usually contain questions about payments. Obviously, the first step is to evaluate carefully whether the gifts or payments provided could harm the subject or unduly affect their responses to your questions. If not, you will have to make payment decisions based on your budget, the field situation, and difficulties in recruitment. Payment of respondents is usually more common in survey research and less common in interviews and focus groups.

Nevertheless, payment practices vary depending on the field and the target group. In some cases, it may be customary to provide small gifts or payments when interviewing a certain group; in others, interviewees might be offended if they are offered money. Therefore, knowing past practices and the field situation is important. For example, Lee provided small coffee gift cards to one group but not to another, based on the previous practices of other researchers: for that particular group, it had become customary for interviewers to pay interviewees. Sometimes, you may want to reimburse your subjects’ interview costs, such as travel expenses, and provide beverages and snacks during the research, as Kreft did when conducting focus groups in Colombia. To express your gratitude to respondents, you can prepare small gifts such as university memorabilia (e.g., notebooks and pens). Since past practices about payments can affect your interactions and interviews with a target group, seek advice from colleagues and other researchers who have experience interacting with that group. If you cannot find such researchers, search published works on the target population to see whether the authors share their interview experiences. You may also consider contacting the authors for advice before your interviews.

Researching Strategically

Distinguishing between things that can only be done in person at a particular site and things that can be accomplished later at home is vital. Prioritize the former over the latter. Lee's fieldwork experience serves as a good example. She studied a conservative protest movement called the Taegeukgi Rally in South Korea and planned to conduct interviews with rally participants to examine their motivations for participating. But she had only one month in South Korea. So she focused on things that could only be done in the field: she went to the rally sites, observed how the protests proceeded and which tactics and chants were used, and met participants for casual conversations. She then used the contacts she made while attending the rallies to build a social network from which to solicit interviews with ordinary protesters, her target population. Through good rapport with the people she met, she was able to recruit twenty-five interviewees. The actual interviews proceeded via phone after she returned to the United States. In a nutshell, we advise you not to fixate on finishing interviews while in the field. Sometimes it is more beneficial to use your time there to build relationships and networks.

Working With Assistants and Translators

A final logistical consideration is working with research assistants or translators, which affects how you carry out interviews, focus groups, and the like. To what extent constant back-and-forth translation is necessary or advisable depends on the researcher's skills in the interview language and on considerations of time and efficiency. For example, Kreft soon realized that she was generally able to follow along quite well during her interviews in Colombia. To avoid losing precious time to translation, she had her research assistant follow the interview guide she had developed, and she interjected follow-up questions in Spanish or English (then to be translated) as they arose.

Irgil's and Zvobgo's interviews went a little differently. Irgil's Syrian refugee interviewees in Turkey were native Arabic speakers, and Zvobgo's interviewees in Guatemala were native Spanish speakers. Both worked with research assistants: Irgil's assistant was a Syrian man from outside the area, while Zvobgo's was an undergraduate from her home institution with a Spanish language background. Irgil and Zvobgo began preparing their assistants a couple of months before entering the field, over Skype for Irgil and in person for Zvobgo, offering them readings and other resources to provide the necessary background to work well. Both assistants joined the interviews and did most of the speaking: introducing the principal investigator, explaining the research, and then asking the questions. In Zvobgo's case, interviewee responses were relayed via a professional interpreter whom she had also hired. After every interview, Irgil and Zvobgo discussed with their respective assistants the interviewees’ answers and potential improvements in phrasing, and elaborated on their hand-written interview notes. As a backup, Zvobgo, with the consent of her respondents, made accompanying audio recordings.

Researchers may carry out fieldwork in a country that is considerably less safe than what they are used to, for instance a setting affected by conflict violence or high crime rates. Feelings of insecurity can be compounded by linguistic barriers, cultural particularities, and being far away from friends and family. Insecurity is also often gendered, differentially affecting women and raising the specter of unwanted sexual advances, street harassment, or even sexual assault ( Gifford and Hall-Clifford 2008 ; Mügge 2013 ). In a recent survey of political science graduate students in the United States, about half of those who had done fieldwork internationally reported having encountered safety issues in the field (54 percent of women, 47 percent of men), and only 21 percent agreed that their Ph.D. programs had prepared them to carry out their fieldwork safely ( Schwartz and Cronin-Furman 2020 , 8–9).

Preventative measures scholars may adopt in an unsafe context may involve, at their most fundamental, adjustments to everyday routines and habits, restricting one's movements temporally and spatially. Reliance on gatekeepers may also necessitate new strategies, such as rejecting unwanted sexual advances less vehemently and coldly than one ordinarily would, as Mügge (2013) illustratively discusses. At the same time, a competitive academic job market, imperatives to collect novel and useful data, and harmful discourses surrounding dangerous fieldwork also, problematically, create incentives for junior researchers to relax their own standards of what constitutes acceptable risk (Gallien 2021).

Others have carefully compiled a range of safety precautions that field researchers in fragile or conflict-affected settings may take before and during fieldwork (Hilhorst et al. 2016). We therefore keep our discussion of recommendations concise, focusing on the specific situation of graduate students. Apart from ensuring that supervisors and university administrators have the researcher's contact information in the field (and possibly also that of a local contact person), researchers can register with their country's embassy or foreign office and any crisis monitoring and prevention systems it has in place. That way, they will be informed of any unfolding emergencies, and the authorities will have a record of their presence in the country.

It may also be advisable to set up more individualized safety protocols with one or two trusted individuals, such as friends, supervisors, or colleagues at home or in the fieldwork setting itself. The latter option makes sense in particular if one has an official affiliation with a local institution for the duration of the fieldwork, which is often advisable. Still, we would also recommend establishing relationships with local researchers in the absence of a formal affiliation. To keep others informed of her whereabouts, Kreft, for instance, made arrangements with her supervisors to be in touch via email at regular intervals to report on progress and wellbeing. This kept her supervisors in the loop, while an interruption in communication would have alerted them early if something were wrong. In addition, she announced planned trips to other parts of the country and granted her supervisors and a colleague at her home institution emergency reading access to her digital calendar. Moreover, she was accompanied to most of her interviews by her local research assistant/translator. If the nature of the research, ethical considerations, and the safety situation allow, it might also be possible to bring a local friend along to interviews as an "assistant," purely for safety reasons. This option needs to be considered carefully as early as the planning stage and should, particularly in settings of fragility or when researching politically exposed individuals, be noted in any required ethical and institutional review processes. Adequate compensation for such an assistant should be ensured. Finally, it is advisable to put in place an emergency plan: choose emergency contacts back home and "in the field," know whom to contact if something happens, and know how to get to the nearest hospital or clinic.

We would be remiss if we did not mention that, when in an unfamiliar context, one's safety radar may be misguided, so it is essential to listen to people who know the context. Locals, for instance, can give advice on which means of transport are safe and which are not, a question of the utmost importance when traveling to appointments. Kreft, for example, was warned that in Colombia regular taxis are often unsafe, especially if waved down in the street, and that to get to her interviews safely, she should rely on a ride-share service. In one instance, a Colombian friend suggested that when there was no alternative to a regular taxi, Kreft should book through the app and share the order details, including the taxi registration number or license plate, with a friend. Likewise, sharing one's cell phone location with a trusted friend while traveling or when one feels unsafe may be a viable option. Finally, it is prudent to heed the safety recommendations and travel advisories provided by state authorities and embassies to determine when and where it is safe to travel. Especially because researchers have a responsibility not only for themselves but also for research assistants and research participants, safety must be a top priority.

This does not mean that a researcher should be careless in a context they know, either. Conducting fieldwork in a familiar context of course offers many advantages, but one should be prepared to encounter unwanted events there too. For instance, Irgil conducted fieldwork in her country of origin, in a city she knows very well. Access to the site, moving around it, and blending in were therefore not a problem, and she had the added advantage of speaking the native language. Yet she took notes on the streets she walked, as she often returned from the field site after dark and worried she might get confused after a tiring day. She also established closer relationships with two or three store owners in different parts of the field site, in case she needed something urgently, such as charging a phone whose battery had run out. Above all, one should always be aware of one's surroundings and use common sense. If something feels unsafe, chances are it is.

Fieldwork may negatively affect the researcher's mental health and wellbeing regardless of where one's "field" is, whether the strain relates to concerns about crime and insecurity, linguistic barriers, social isolation, or the practicalities of identifying, contacting, and interviewing research participants. Coping with these different sources of stress can be both mentally and physically exhausting. Then there are the things you may hear, see, and learn during the research itself, such as gruesome accounts of violence and suffering conveyed in interviews or in archival documents one peruses. Kreft and Zvobgo have spoken with women victims of conflict-related sexual violence, who sometimes displayed strong emotions of pain and anger during the interviews. Likewise, Irgil and Willis have spoken with members of other vulnerable populations such as refugees and former sex workers (Willis 2020).

Prior accounts (Wood 2006; Loyle and Simoni 2017; Skjelsbæk 2018; Hummel and El Kurd 2020; Williamson et al. 2020; Schulz and Kreft 2021) show that it is natural for sensitive research and fieldwork challenges to affect or even (vicariously) traumatize the researcher. By removing researchers from their regular routines and support networks, fieldwork may also exacerbate existing mental health conditions (Hummel and El Kurd 2020). Nonetheless, mental wellbeing is rarely incorporated into fieldwork courses and guidelines, where these exist at all. But even if you know to anticipate some sort of reaction, you rarely know what that reaction will be until you experience it. When researching sensitive or difficult topics, for example, reactions can include sadness, frustration, anger, fear, helplessness, and flashbacks to personal experiences of violence (Williamson et al. 2020). For example, Kreft responded with episodic feelings of depression and both mental and physical exhaustion. But curiously, these reactions emerged most strongly after she had returned from fieldwork and in particular as she spent extended periods analyzing her interview data, reliving some of the more emotional scenes during the interviews and being confronted with accounts of (sexual) violence against women in a concentrated fashion. This is a crucial reminder that fieldwork does not end when one returns home; the after-effects may linger. Likewise, Zvobgo was physically and mentally drained upon her return from the field. Both Kreft and Zvobgo were unable to concentrate for long periods of time and experienced lower-than-normal levels of productivity for weeks afterward, patterns that formal and informal conversations with other scholars confirm to be common (Schulz and Kreft 2021). Furthermore, the boundaries between "field" and "home" are blurred when conducting remote fieldwork (Howlett 2021, 11).

Nor are these adverse reactions limited to cases where the researcher has carried out the interviews themselves. Accounts of violence, pain, and suffering conveyed in reports, secondary literature, or other sources can evoke similar emotional stress, as Kreft experienced when engaging in a concentrated fashion with additional accounts of conflict-related sexual violence in Colombia and with the feminist literature on sexual and gender-based violence in the comfort of her Swedish office. The same applies to Irgil's fieldwork: the refugees she interviewed sometimes relived traumas during the interviews or recalled specific events triggered by her questions. Likewise, Lee has reviewed primary and secondary materials on North Korean defectors in the national archives, materials that contain violent, intense, emotional narratives.

Fortunately, there are several strategies to cope with and manage such adverse consequences. In a candid and insightful piece, other researchers have discussed the usefulness of distractions, sharing with colleagues, counseling, exercise, and, probably less advisable in the long term, comfort eating and drinking (Williamson et al. 2020; see also Loyle and Simoni 2017; Hummel and El Kurd 2020). Our experiences largely tally with their observations. In this section, we explore some of these in more detail.

First, in the face of adverse consequences on your mental wellbeing, whether in the field or after your return, it is essential to be patient and generous with yourself. Negative effects on the researcher's mental wellbeing can hit in unexpected ways and at unexpected times. Even if you think that certain reactions are disproportionate or unwarranted at that specific moment, they may simply have been building up over a long time. They are legitimate. Second, the importance of taking breaks and finding distractions, whether that is exercise, socializing with friends, reading a good book, or watching a new series, cannot be overstated. It is easy to fall into a mode of thinking that you constantly have to be productive while you are “in the field,” to maximize your time. But as with all other areas in life, balance is key and rest is necessary. Taking your mind off your research and the research questions you puzzle over is also a good way to more fully soak up and appreciate the context in which you find yourself, in the case of in-person fieldwork, and about which you ultimately write.

Third, we cannot stress enough the importance of investing in social relations. Before going on fieldwork, researchers may want to consult others who have done it before them. Try to find (junior) scholars who have done fieldwork on similar kinds of topics or in the same country or countries you are planning to visit. Utilizing colleagues’ contacts and forging connections using social media are valuable strategies to expand your networks (in fact, this very paper is the result of a social media conversation and several of the authors have never met in person). Having been in the same situation before, most field researchers are, in our experience, generous with their time and advice. Before embarking on her first trip to Colombia, Kreft contacted other researchers in her immediate and extended network and received useful advice on questions such as how to move around Bogotá, whom to speak to, and how to find a research assistant. After completing her fieldwork, she has passed on her experiences to others who contacted her before their first fieldwork trip. Informal networks are, in the absence of more formalized fieldwork preparation, your best friend.

In the field, seeking the company of locals and of other researchers who are also doing fieldwork alleviates anxiety and makes fieldwork more enjoyable. Exchanging experiences, advice and potential interviewee contacts with peers can be extremely beneficial and make the many challenges inherent in fieldwork (on difficult topics) seem more manageable. While researchers conducting remote fieldwork may be physically isolated from other researchers, even connecting with others doing remote fieldwork may be comforting. And even when there are no precise solutions to be found, it is heartening or even cathartic to meet others who are in the same boat and with whom you can talk through your experiences. When Kreft shared some of her fieldwork-related struggles with another researcher she had just met in Bogotá and realized that they were encountering very similar challenges, it was like a weight was lifted off her shoulders. Similarly, peer support can help with readjustment after the fieldwork trip, even if it serves only to reassure you that a post-fieldwork dip in productivity and mental wellbeing is entirely natural. Bear in mind that certain challenges are part of the fieldwork experience and that they do not result from inadequacy on the part of the researcher.

Finally, we would like to stress a point made by Inger Skjelsbæk (2018, 509) that has not received sufficient attention: as a discipline, we need to take the question of researcher mental wellbeing more seriously—not only in graduate education, fieldwork preparation, and at conferences, but also in reflecting on how it affects the research process itself: "When strong emotions arise, through reading about, coding, or talking to people who have been impacted by [conflict-related sexual violence] (as victims or perpetrators), it may create a feeling of being unprofessional, nonscientific, and too subjective."

We contend that this is a challenge not only for research on sensitive issues but for fieldwork more generally. To what extent is it possible, and desirable, to uphold the image of the objective researcher during fieldwork, when we are, at our foundation, human beings? And going even further, how do the (anticipated) effects of our research on our wellbeing, and the safety precautions we take (Gifford and Hall-Clifford 2008), affect the kinds of questions we ask, the kinds of places we visit, and with whom we speak? How do they affect the methods we use and how we interpret our findings? An honest discussion of affective responses to our research in methods sections seems utopian, as emotionality in the research process continues to be silenced and relegated to the personal, often in gendered ways, which in turn is considered unconnected to the objective and scientific research process (Jamar and Chappuis 2016). But as Gifford and Hall-Clifford (2008, 26) aptly put it: "Graduate education should acknowledge the reality that fieldwork is scholarly but also intimately personal," and we contend that the two shape each other. Therefore, we encourage political science as a discipline to reflect more carefully on researcher wellbeing and affective responses to fieldwork, and we see the need for methods courses that embrace a more holistic notion of the subjectivity of the researcher.

Interacting with people in the field is one of the most challenging yet rewarding parts of the work that we do, especially in comparison to the impersonal, often tedious wrangling and analysis of quantitative data. Field researchers often make personal connections with their interviewees, so maintaining boundaries can be tricky. Here, we recommend being honest with everyone with whom you interact, without overstating what a researcher can do. This is a particular challenge in the field when you empathize with people and when, beyond being "human subjects" (Fujii 2012), they share profound parts of their lives with you for your research. For instance, when Irgil interviewed native citizens about the changes in their neighborhood following the arrival of Syrian refugees, many interviewees asked what she would offer them in return for their participation. Irgil responded that her primary contribution would be her published work. She also noted, however, that academic papers can take a year, sometimes longer, to go through the peer-review process and that, once published, many studies have a limited audience. The Syrian refugees posed similar questions. Irgil responded not only with honesty but also, given this population's vulnerable status, provided them with contact information for NGOs with which they could connect if they needed help or answers to specific questions.

For her part, Zvobgo was very upfront with her interviewees about her role as a researcher: she recognized that she is not someone who is on the frontlines of the fight for human rights and transitional justice like they are. All she could do, and can do, is use her platform to amplify their stories, bringing attention to their vital work through her future peer-reviewed publications. She also committed to sending them copies of the work, as electronic journal articles are often inaccessible due to paywalls and university press books are very expensive, especially for nonprofits. Interviewees were very receptive; some were even moved by this degree of self-awareness and the commitment to do right by them. In some cases, it prompted them to share even more, because they knew that the researcher was really there to listen and learn. This is something that junior scholars, and all scholars really, should always remember: we enter the field to be taught. Likewise, Kreft circulated among her interviewees Spanish-language versions of an academic article and a policy brief based on the fieldwork she had carried out in Colombia.

As researchers from the Global North, we recognize a possible power differential between us and our research subjects, and certainly an imbalance in power between the countries where we have been trained and some of the countries where we have done and continue to do field research, particularly in politically dynamic contexts (Knott 2019). This is why we are so concerned with being open and transparent with everyone with whom we come into contact in the field, and why we are committed to giving back to those who so generously lend us their time and knowledge. As Knott (2019, 148) summarizes: "Reflexive openness is a form of transparency that is methodologically and ethically superior to providing access to data in its raw form, at least for qualitative data."

We also recognize that academics, including in the social sciences and especially those hailing from countries in the Global North, have a long and troubled history of exploiting their power over others for the sake of their research—including failing to be upfront about their research goals, misrepresenting the on-the-ground realities of their field research sites (including in remote fieldwork), and publishing essentializing, paternalistic, and damaging views and analyses of the people there. No one should build their career on the backs of others, least of all in a field concerned with the possession and exercise of power. It is thus crucial to acknowledge the power hierarchies between the researcher and the interviewees, and to reflect on them both in the field and upon return.

A major challenge arises when researchers' carefully crafted research designs cannot be executed as planned due to unforeseen events outside of their control, such as pandemics, natural disasters, deteriorating security situations in the field, or even the researcher falling ill. As the Covid-19 pandemic has made painfully clear, researchers may face situations where in-person research is simply not possible. In some cases, researchers may be barred entry to their fieldwork site; in others, the ethical implications of entering the field greatly outweigh the benefits of fieldwork. Such barriers to conducting in-person research require us to reconsider conventional notions of what constitutes fieldwork. Researchers may need to shift their data collection methods, for example, conducting interviews remotely instead of in person. Even while researchers are in the field, they may still need to carry out part of their interviews or surveys virtually or by phone. For example, Kreft (2020) carried out a small number of interviews remotely while she was based in Bogotá, because some of the women's civil society activists with whom she intended to speak were based in parts of the country that were difficult and/or dangerous to access.

Remote field research, which we define as the collection of data over the internet or over the phone where in-person fieldwork is not possible due to security, health, or other risks, comes with its own set of challenges. For one, there may be populations that researchers cannot reach remotely due to a lack of internet connectivity or of technology such as cellphones and computers. In such instances, there will be a sampling bias toward individuals and groups that do have these resources, a point worth noting when scholars interpret their research findings. In the case of virtual research, the risk of online surveillance, hacking, or wiretapping may also make interviewees reluctant to discuss sensitive issues that could compromise their safety. Researchers need to consider carefully how the use of digital technology may increase the risk to research participants and what changes to the research design and any interview guides this necessitates. In general, it is imperative that researchers reflect on how they can ethically use digital technology in their fieldwork (Van Baalen 2018). Remote interviews may also be challenging to arrange for researchers who have not connected in person with people in their community of interest.

Some of the serendipitous happenings we discussed earlier may also be less likely, and snowball sampling more difficult. In phone or virtual interviews, for example, it is harder to build rapport and trust with interviewees than in face-to-face interviews. Accordingly, researchers should take extra care in communicating with interviewees and in creating a comfortable interview environment. Especially when dealing with sensitive topics, researchers may have to make several phone calls and sometimes share something of themselves to establish trust. Researchers must also take particular care to protect interviewees in phone or virtual interviews when dealing with topics that are sensitive in the countries where the interviewees reside.

The inability to physically visit one's community of interest may also encourage scholars to reflect critically on how much time in the field is essential to completing their research and to consider creative, alternative means of accessing information to complete their projects. While data collection techniques such as face-to-face interviews and archival work in the field may be ideal in normal times, other data sources can provide comparably useful information. For example, in her research on the role of framing in United States base politics, Willis found that social media accounts and websites yielded information useful to her project. Many archives across the world have also been digitized. Researchers may also consider crowdsourcing data from the field among their networks, as fellow academics tend to collect much more data in the field than they ever use in their published works. They may also elect to hire someone, perhaps a graduate student, in a city or country where they cannot travel and have that individual access, scan, and send archival materials. This final suggestion may prove generally useful to researchers with limited time and financial resources.

Remote qualitative data collection techniques, while they will likely never be "the gold standard," also offer several advantages. These techniques may help researchers avoid some of the issues mentioned previously. Remote interviews, for example, require no travel to the interview site (Archibald et al. 2019), so researchers may experience less interview fatigue and/or be able to conduct more interviews. For example, while Willis had little energy to do anything else after an in-person interview (or two) in a given day, she had much more energy after completing remote interviews. Second, remote fieldwork helps researchers avoid the potentially dangerous situations in the field mentioned previously. Lastly, remote fieldwork generally presents fewer financial barriers than in-person research (Archibald et al. 2019). In that sense, considering remote qualitative data collection a type of "fieldwork" may make fieldwork accessible to a greater number of scholars.

Many of the substantive, methodological, and practical challenges that arise during fieldwork can be anticipated. Proper preparation can help you hit the ground running once you enter your fieldwork destination, whether in person or virtually. Nonetheless, there is no such thing as being perfectly prepared for the field. Some things will simply be beyond your control, especially as a newcomer to field research, and you should be prepared for things not to go as planned. New questions will arise, interview participants may cancel appointments, and you might not get the answers you expected. Be ready to make adjustments to research plans, interview guides, or questionnaires. And be mindful of your affective reactions to the overall fieldwork situation and gentle with yourself.

We recommend approaching fieldwork as a learning experience as much as, or perhaps even more than, a data collection effort. This also applies to your research topic. While it is prudent always to exercise a healthy amount of skepticism about what people tell you and why, the participants in your research will likely have unique perspectives and knowledge that will challenge yours. Be an attentive listener and remember that they are experts on their own experiences.

We encourage more institutions to offer courses that cover field research preparation and planning, practical advice on safety and wellbeing, and discussion of ethics. Specifically, we align with Schwartz and Cronin-Furman's (2020, 3) contention "that treating fieldwork preparation as the methodology will improve individual scholars' experiences and research." In this article, we outline a set of issue areas in which we think formal preparation is necessary, but we note that our discussion is by no means exhaustive. Formal fieldwork preparation should also extend beyond what we have covered here, to issues such as data security and preparing for nonqualitative fieldwork methods. We also note that field research is one area that has yet to be comprehensively addressed in conversations on diversity and equity in the political science discipline and the broader academic profession. In a recent article, Brielle Harbin (2021) begins to fill this gap by sharing her experiences conducting in-person election surveys as a Black woman in a conservative and predominantly white region of the United States and the challenges that she encountered. Beyond race and gender, citizenship, immigration status, one's Ph.D. institution, and distance to the field also affect who is able to do what type of field research, where, and for how long. Future research should explore these and related questions in greater detail because limits on who can conduct field research constrict the sociological imagination of our field.

While Emmons and Moravcsik (2020) focus on leading Political Science Ph.D. programs in the United States, these trends likely obtain both in lower-ranked institutions in the United States and in graduate education throughout North America and Europe.

As all the authors have carried out qualitative fieldwork, this is the primary focus of this guide. This does not, however, mean that we exclude quantitative or experimental data collection from our definition of fieldwork.

There is great variation in graduate students' financial situations, even in the Global North. For example, higher education is tax-funded in most European countries, and Ph.D. students in countries such as Sweden, Norway, Denmark, the Netherlands, and Switzerland receive a comparatively generous full-time salary, healthcare, and contributions to pension schemes. Ph.D. programs in other contexts, such as the United States and the United Kingdom, have (high) enrollment fees and rely on scholarships, stipends, or departmental duties like teaching to (partially) offset them. In still other contexts, such as Germany, Ph.D. positions are commonly financed by part-time (50 percent) employment at the university, with tasks substantively unrelated to the dissertation. These different preconditions leave many Ph.D. students struggling financially and even incurring debt, while others are in a more comfortable financial position. Likewise, Ph.D. programs around the globe differ in structure, such as required coursework, duration, and supervision relationships. Naturally, all of these factors have a bearing on the extent to which fieldwork is feasible. We acknowledge these unequal preconditions across institutions and contexts, and trust that Ph.D. students interested in pursuing fieldwork are best able to assess the structural and institutional context in which they operate and what it implies for how, when, and how long to carry out fieldwork.

In our experience, this is not only the general cycle for graduate students in North America, but also in Europe and likely elsewhere.

For helpful advice and feedback on earlier drafts, we wish to thank the editors and reviewers at International Studies Review, and Cassandra Emmons. We are also grateful to our interlocutors in Argentina, Canada, Colombia, Germany, Guatemala, Japan, Kenya, Norway, the Philippines, Sierra Leone, South Korea, Spain, Sweden, Turkey, the United Kingdom, and the United States, without whom this reflection on fieldwork would not have been possible. All authors contributed equally to this manuscript.

This material is based upon work supported by the Forskraftstiftelsen Theodor Adelswärds Minne, Knut and Alice Wallenberg Foundation (KAW 2013.0178), National Science Foundation Graduate Research Fellowship Program (DGE-1418060), Southeast Asia Research Group (Pre-Dissertation Fellowship), University at Albany (Initiatives for Women and the Benevolent Association), University of Missouri (John D. Bies International Travel Award Program and Kinder Institute on Constitutional Democracy), University of Southern California (Provost Fellowship in the Social Sciences), Vetenskapsrådet (Diarienummer 2019-06298), Wilhelm och Martina Lundgrens Vetenskapsfond (2016-1102; 2018-2272), and William & Mary (Global Research Institute Pre-doctoral Fellowship).

Advancing Conflict Research. 2020. The ARC Bibliography. Accessed September 6, 2020. https://advancingconflictresearch.com/resources-1.

Archibald Mandy M., Ambagtsheer Rachel C., Casey Mavourneen G., Lawless Michael. 2019. "Using Zoom Videoconferencing for Qualitative Data Collection: Perceptions and Experiences of Researchers and Participants." International Journal of Qualitative Methods 18: 1–18.

Beath Andrew, Christia Fotini, Enikolopov Ruben. 2013. "Empowering Women Through Development Aid: Evidence from a Field Experiment in Afghanistan." American Political Science Review 107 (3): 540–57.

Carling Jorgen, Erdal Marta Bivand, Ezzati Rojan. 2014. "Beyond the Insider–Outsider Divide in Migration Research." Migration Studies 2 (1): 36–54.

Chambers-Ju Christopher. 2014. "Data Collection, Opportunity Costs, and Problem Solving: Lessons from Field Research on Teachers' Unions in Latin America." PS: Political Science & Politics 47 (2): 405–9.

Collier David. 2011. "Understanding Process Tracing." PS: Political Science & Politics 44 (4): 823–30.

Druckman James N., Green Donald P., Kuklinski James H., Lupia Arthur. 2006. "The Growth and Development of Experimental Research in Political Science." American Political Science Review 100 (4): 627–35.

Elman Colin, Kapiszewski Diana, Kirilova Dessislava. 2015. "Learning Through Research: Using Data to Train Undergraduates in Qualitative Methods." PS: Political Science & Politics 48 (1): 39–43.

Emmons Cassandra V., Moravcsik Andrew M. 2020. "Graduate Qualitative Methods Training in Political Science: A Disciplinary Crisis." PS: Political Science & Politics 53 (2): 258–64.

Esarey Justin. 2017. "Causal Inference with Observational Data." In Analytics, Policy, and Governance, edited by Bachner Jennifer, Hill Kathryn Wagner, and Ginsberg Benjamin, 40–66. New Haven: Yale University Press.

Finseraas Henning, Kotsadam Andreas. 2017. "Does Personal Contact with Ethnic Minorities Affect Anti-immigrant Sentiments? Evidence from a Field Experiment." European Journal of Political Research 56: 703–22.

Fujii Lee Ann. 2012. "Research Ethics 101: Dilemmas and Responsibilities." PS: Political Science & Politics 45 (4): 717–23.

Gallien Max. 2021. "Solitary Decision-Making and Fieldwork Safety." In The Companion to Peace and Conflict Fieldwork, edited by Mac Ginty Roger, Brett Roddy, and Vogel Birte, 163–74. Cham, Switzerland: Palgrave Macmillan.

Geddes Barbara. 2003. Paradigms and Sand Castles: Theory Building and Research Design in Comparative Politics. Ann Arbor: University of Michigan Press.

Gifford Lindsay, Hall-Clifford Rachel. 2008. "From Catcalls to Kidnapping: Towards an Open Dialogue on the Fieldwork Experiences of Graduate Women." Anthropology News 49 (6): 26–27.

Greitens Sheena C. 2016. Dictators and Their Secret Police: Coercive Institutions and State Violence. Cambridge: Cambridge University Press.

Harbin Brielle M. 2021 . “ Who's Able to Do Political Science Work? My Experience with Exit Polling and What It Reveals about Issues of Race and Equity .” PS: Political Science & Politics 54 ( 1 ): 144 – 6 .

Hilhorst Dorothea , Hogson Lucy , Jansen Bram , Mena Rodrigo Fluhmann . 2016 . Security Guidelines for Field Research in Complex, Remote and Hazardous Places . Accessed August 25, 2020, http://hdl.handle.net/1765/93256 .

Howlett Marnie. 2021 . “ Looking At the ‘Field’ Through a Zoom Lens: Methodological Reflections on Conducting Online Research During a Global Pandemic .” Qualitative Research . Online first .

Hsueh Roselyn , Jensenius Francesca Refsum , Newsome Akasemi . 2014 . “ Fieldwork in Political Science: Encountering Challenges and Crafting Solutions: Introduction .” PS: Political Science & Politics 47 ( 2 ): 391 – 3 .

Hummel Calla , El Kurd Dana . 2020 . “ Mental Health and Fieldwork .” P.S.: Political Science & Politics 54 ( 1 ): 121 – 5 .

Irgil Ezgi. 2020 . “ Broadening the Positionality in Migration Studies: Assigned Insider Category .” Migration Studies . Online first .

Jacobsen Karen , Landau Lauren B. . 2003 . “ The Dual Imperative in Refugee Research: Some Methodological and Ethical Considerations in Social Science Research on Forced Migration .” Disasters 27 ( 3 ): 185 – 206 .

Jamar Astrid , Chappuis Fairlie . 2016 . “ Conventions of Silence: Emotions and Knowledge Production in War-Affected Research Environments .” Parcours Anthropologiques 11 : 95 – 117 .

Jensenius Francesca R. 2014 . “ The Fieldwork of Quantitative Data Collection .” P.S.: Political Science & Politics 47 ( 2 ): 402 – 4 .

Kapiszewski Diana , MacLean Lauren M. , Read Benjamin L. . 2015 . Field Research in Political Science: Practices and Principles . Cambridge : Cambridge University Press .

Kelsky Karen . 2015 . The Professor Is In: The Essential Guide to Turning Your Ph.D. Into a Job . New York : Three Rivers Press .

Knott Eleanor . 2019 . “ Beyond the Field: Ethics After Fieldwork in Politically Dynamic Contexts .” Perspectives on Politics 17 ( 1 ): 140 – 53 .

Kreft Anne-Kathrin . 2019 . “ Responding to Sexual Violence: Women's Mobilization in War .” Journal of Peace Research 56 ( 2 ): 220 – 33 .

Kreft Anne-Kathrin . 2020 . “ Civil Society Perspectives on Sexual Violence in Conflict: Patriarchy and War Strategy in Colombia .” International Affairs 96 ( 2 ): 457 – 78 .

Loyle Cyanne E. , Simoni Alicia . 2017 . “ Researching Under Fire: Political Science and Researcher Trauma .” P.S.: Political Science & Politics 50 ( 1 ): 141 – 5 .

Mackenzie Catriona , McDowell Christopher , Pittaway Eileen . 2007 . “ Beyond ‘do No Harm’: The Challenge of Constructing Ethical Relationships in Refugee Research .” Journal of Refugee Studies 20 ( 2 ): 299 – 319 .

Marston Jerome F. 2020 . “ Resisting Displacement: Leveraging Interpersonal Ties to Remain Despite Criminal Violence in Medellín, Colombia .” Comparative Political Studies 53 ( 13 ): 1995 – 2028 .

Mosley Layna , ed. 2013 . Interview Research in Political Science . Ithaca : Cornell University Press .

Mügge Liza M. 2013 . “ Sexually Harassed by Gatekeepers: Reflections on Fieldwork in Surinam and Turkey .” International Journal of Social Research Methodology 16 ( 6 ): 541 – 6 .

Nexon Daniel. 2019 . International Studies Quarterly (ISQ) 2019 Annual Editorial Report . Accessed August 25, 2020, https://www.isanet.org/Portals/0/Documents/ISQ/2019_ISQ%20Report.pdf?ver = 2019-11-06-103524-300 .

Nowicka Magdalena , Cieslik Anna . 2014 . “ Beyond Methodological Nationalism in Insider Research with Migrants .” Migration Studies 2 ( 1 ): 1 – 15 .

O'Brien Kevin J. , Li Lianjiang . 2005 . “ Popular Contention and Its Impact in Rural China .” Comparative Political Studies 38 ( 3 ): 235 – 59 .

Ortbals Candice D. , Rincker Meg E. . 2009 . “ Fieldwork, Identities, and Intersectionality: Negotiating Gender, Race, Class, Religion, Nationality, and Age in the Research Field Abroad: Editors’ Introduction .” P.S.: Political Science & Politics 42 ( 2 ): 287 – 90 .

Read Benjamin. 2006 . “ Site-intensive Methods: Fenno and Scott in Search of Coalition .” Qualitative & Multi-method Research 4 ( 2 ): 10 – 3 .

Ricks Jacob I. , Liu Amy H. . 2018 . “ Process-Tracing Research Designs: A Practical Guide .” P.S.: Political Science & Politics 51 ( 4 ): 842 – 6 .

Sarotte Mary E. 2012 . “ China's Fear of Contagion: Tiananmen Square and the Power of the European Example .” International Security 37 ( 2 ): 156 – 82 .

Saunders Benjamin , Kitzinger Jenny , Kitzinger Celia . 2015 . “ Anonymizing Interview Data: Challenges and Compromise in Practice .” Qualitative Research 15 ( 5 ): 616 – 32 .

Schulz Philipp , Kreft Anne-Kathrin . 2021 . “ Researching Conflict-Related Sexual Violence: A Conversation Between Early Career Researchers .” International Feminist Journal of Politics . Advance online access .

Schwartz Stephanie , Cronin-Furman Kate . 2020 . “ Ill-Prepared: International Fieldwork Methods Training in Political Science .” Working Paper .

Seawright Jason . 2016 . “ Better Multimethod Design: The Promise of Integrative Multimethod Research .” Security Studies 25 ( 1 ): 42 – 9 .

Skjelsbæk Inger . 2018 . “ Silence Breakers in War and Peace: Research on Gender and Violence with an Ethics of Engagement .” Social Politics: International Studies in Gender , State & Society 25 ( 4 ): 496 – 520 .

Van Baalen Sebastian . 2018 . “ ‘Google Wants to Know Your Location’: The Ethical Challenges of Fieldwork in the Digital Age .” Research Ethics 14 ( 4 ): 1 – 17 .

Weiss Meredith L. , Hicken Allen , Kuhonta Eric Martinez . 2017 . “ Political Science Field Research & Ethics: Introduction .” The American Political Science Association—Comparative Democratization Newsletter 15 ( 3 ): 3 – 5 .

Weller Nicholas , Barnes Jeb . 2016 . “ Pathway Analysis and the Search for Causal Mechanisms .” Sociological Methods & Research 45 ( 3 ): 424 – 57 .

Williamson Emma , Gregory Alison , Abrahams Hilary , Aghtaie Nadia , Walker Sarah-Jane , Hester Marianne . 2020 . “ Secondary Trauma: Emotional Safety in Sensitive Research .” Journal of Academic Ethics 18 ( 1 ): 55 – 70 .

Willis Charmaine . 2020 . “ Revealing Hidden Injustices: The Filipino Struggle Against U.S. Military Presence .” Minds of the Movement (blog). October 27, 2020, https://www.nonviolent-conflict.org/blog_post/revealing-hidden-injustices-the-filipino-struggle-against-u-s-military-presence/ .

Wood Elizabeth Jean . 2006 . “ The Ethical Challenges of Field Research in Conflict Zones .” Qualitative Sociology 29 ( 3 ): 373 – 86 .

Zapata-Barrero Ricard , Yalaz Evren . 2019 . “ Qualitative Migration Research Ethics: Mapping the Core Challenges .” GRITIM-UPF Working Paper Series No. 42 .

Zvobgo Kelebogile . 2020 . “ Demanding Truth: The Global Transitional Justice Network and the Creation of Truth Commissions .” International Studies Quarterly 64 ( 3 ): 609 – 25 .



Field Study and Field Experiment

The field study method and its variation, the field experiment, along with the laboratory and questionnaire survey methods, are the oldest and most classic means of studying organizations. The field study method is used to gather information on organizational or work-system functioning through systematic direct observation. This information is most often used to identify possible causal relationships between work-system variables and to identify problems with organizational functioning.


Research Methods In Psychology

Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD, is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

Research methods in psychology are systematic procedures used to observe, describe, predict, and explain behavior and mental processes. They include experiments, surveys, case studies, and naturalistic observations, ensuring data collection is objective and reliable to understand and explain psychological phenomena.


Hypotheses are statements that predict the results of an investigation and can be supported or refuted by evidence.

There are four types of hypotheses:
  • Null hypotheses (H0) – these predict that no difference will be found in the results between the conditions. Typically these are written ‘There will be no difference…’
  • Alternative hypotheses (Ha or H1) – these predict that a significant difference will be found in the results between the conditions. This is also known as the experimental hypothesis.
  • One-tailed (directional) hypotheses – these state the specific direction the researcher expects the results to move in, e.g. higher, lower, more, less. In a correlation study, the predicted direction of the correlation can be either positive or negative.
  • Two-tailed (non-directional) hypotheses – these state that a difference will be found between the conditions of the independent variable but do not state the direction of the difference or relationship. Typically these are written ‘There will be a difference…’

All research has an alternative hypothesis (either one-tailed or two-tailed) and a corresponding null hypothesis.

Once the research is conducted and the results are analyzed, psychologists must decide between the two hypotheses.

If a significant difference is found, the psychologist rejects the null hypothesis in favor of the alternative; if no difference is found, the null hypothesis is retained.

Sampling techniques

Sampling is the process of selecting a representative group from the population under study.


A sample is the participants you select from a target population (the group you are interested in) to make generalizations about.

Representative means the extent to which a sample mirrors a researcher’s target population and reflects its characteristics.

Generalizability means the extent to which findings can be applied to the larger population from which the sample was drawn.

  • Volunteer sample : where participants pick themselves through newspaper adverts, noticeboards or online.
  • Opportunity sampling : also known as convenience sampling , uses people who are available at the time the study is carried out and willing to take part. It is based on convenience.
  • Random sampling : when every person in the target population has an equal chance of being selected. An example of random sampling would be picking names out of a hat.
  • Systematic sampling : when a system is used to select participants. Picking every Nth person from all possible participants. N = the number of people in the research population / the number of people needed for the sample.
  • Stratified sampling : when you identify the subgroups and select participants in proportion to their occurrences.
  • Snowball sampling : when researchers find a few participants, and then ask them to find participants themselves and so on.
  • Quota sampling : when researchers will be told to ensure the sample fits certain quotas, for example they might be told to find 90 participants, with 30 of them being unemployed.
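Under stated assumptions, the mechanics of the random, systematic, and stratified techniques can be sketched in a few lines of Python (the population of 100 and the 70/30 employed/unemployed split are hypothetical):

```python
import random

# Hypothetical target population of 100 numbered participants.
population = list(range(1, 101))
sample_size = 10

# Random sampling: every member has an equal chance of selection.
random_sample = random.sample(population, sample_size)

# Systematic sampling: pick every Nth person, where
# N = population size / sample size.
n = len(population) // sample_size            # N = 100 / 10 = 10
systematic_sample = population[::n][:sample_size]

# Stratified sampling: select from each subgroup in proportion
# to its occurrence in the population (here a 70/30 split).
strata = {"employed": population[:70], "unemployed": population[70:]}
stratified_sample = []
for group in strata.values():
    share = round(sample_size * len(group) / len(population))
    stratified_sample += random.sample(group, share)

print(len(random_sample), len(systematic_sample), len(stratified_sample))
```

Each technique returns a sample of ten, but only stratified sampling guarantees the subgroup proportions of the population are preserved.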

Experiments always have an independent and dependent variable .

  • The independent variable is the one the experimenter manipulates (the thing that changes between the conditions the participants are placed into). It is assumed to have a direct effect on the dependent variable.
  • The dependent variable is the thing being measured, or the results of the experiment.


Operationalization of variables means making them measurable/quantifiable. We must use operationalization to ensure that variables are in a form that can be easily tested.

For instance, we can’t really measure ‘happiness’, but we can measure how many times a person smiles within a two-hour period. 

By operationalizing variables, we make it easy for someone else to replicate our research. Remember, this is important because we can check if our findings are reliable.

Extraneous variables are all variables other than the independent variable that could affect the results of the experiment.

It can be a natural characteristic of the participant, such as intelligence levels, gender, or age for example, or it could be a situational feature of the environment such as lighting or noise.

Demand characteristics are a type of extraneous variable that occurs when participants work out the aims of the research study and begin to behave in the way they think is expected.

For example, in Milgram’s research , critics argued that participants worked out that the shocks were not real and they administered them as they thought this was what was required of them. 

Extraneous variables must be controlled so that they do not affect (confound) the results.

Randomly allocating participants to their conditions or using a matched pairs experimental design can help to reduce participant variables. 

Situational variables are controlled by using standardized procedures, ensuring every participant in a given condition is treated in the same way.

Experimental Design

Experimental design refers to how participants are allocated to each condition of the independent variable, such as a control or experimental group.
  • Independent design ( between-groups design ): each participant is selected for only one group. With the independent design, the most common way of deciding which participants go into which group is by means of randomization. 
  • Matched participants design : each participant is selected for only one group, but the participants in the two groups are matched for some relevant factor or factors (e.g. ability; sex; age).
  • Repeated measures design ( within groups) : each participant appears in both groups, so that there are exactly the same participants in each group.
  • The main problem with the repeated measures design is that there may well be order effects. Their experiences during the experiment may change the participants in various ways.
  • They may perform better when they appear in the second group because they have gained useful information about the experiment or about the task. On the other hand, they may perform less well on the second occasion because of tiredness or boredom.
  • Counterbalancing is the best way of preventing order effects from disrupting the findings of an experiment, and involves ensuring that each condition is equally likely to be used first and second by the participants.

If we wish to compare two groups with respect to a given independent variable, it is essential to make sure that the two groups do not differ in any other important way. 
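Counterbalancing, described above, can be illustrated with a short sketch that alternates the order of two conditions across participants (the participant IDs are hypothetical):

```python
# Counterbalancing for a repeated measures design: half the
# participants complete condition A first, half complete B first,
# so order effects are spread evenly across conditions.
participants = ["P1", "P2", "P3", "P4", "P5", "P6"]  # hypothetical IDs

orders = []
for i, p in enumerate(participants):
    if i % 2 == 0:
        orders.append((p, ["A", "B"]))   # A then B
    else:
        orders.append((p, ["B", "A"]))   # B then A

a_first = sum(1 for _, order in orders if order[0] == "A")
print(a_first, len(participants) - a_first)  # prints "3 3"
```

Because each order is used equally often, any practice or fatigue effect falls equally on both conditions rather than systematically favoring one.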

Experimental Methods

All experimental methods involve an IV (independent variable) and a DV (dependent variable).

  • Field experiments are conducted in the everyday (natural) environment of the participants. The experimenter still manipulates the IV, but in a real-life setting. It may be possible to control extraneous variables, though such control is more difficult than in a lab experiment.
  • Natural experiments are when a naturally occurring IV is investigated that isn’t deliberately manipulated, it exists anyway. Participants are not randomly allocated, and the natural event may only occur rarely.

Case studies are in-depth investigations of a person, group, event, or community. They use information from a range of sources, such as the person concerned as well as their family and friends.

Many techniques may be used such as interviews, psychological tests, observations and experiments. Case studies are generally longitudinal: in other words, they follow the individual or group over an extended period of time. 

Case studies are widely used in psychology and among the best-known ones carried out were by Sigmund Freud . He conducted very detailed investigations into the private lives of his patients in an attempt to both understand and help them overcome their illnesses.

Case studies provide rich qualitative data and have high levels of ecological validity. However, it is difficult to generalize from individual cases as each one has unique characteristics.

Correlational Studies

Correlation means association; it is a measure of the extent to which two variables are related. One of the variables can be regarded as the predictor variable with the other one as the outcome variable.

Correlational studies typically involve obtaining two different measures from a group of participants, and then assessing the degree of association between the measures. 

The predictor variable can be seen as occurring before the outcome variable in some sense. It is called the predictor variable, because it forms the basis for predicting the value of the outcome variable.

Relationships between variables can be displayed on a graph or as a numerical score called a correlation coefficient.


  • If an increase in one variable tends to be associated with an increase in the other, then this is known as a positive correlation .
  • If an increase in one variable tends to be associated with a decrease in the other, then this is known as a negative correlation .
  • A zero correlation occurs when there is no relationship between variables.

After looking at the scattergraph, if we want to be sure that a significant relationship does exist between the two variables, a statistical test of correlation can be conducted, such as Spearman’s rho.

The test will give us a score called a correlation coefficient. This is a value between -1 and +1, and the closer its absolute value is to 1, the stronger the relationship between the variables. The coefficient can be positive, e.g. +0.63, or negative, e.g. -0.63.
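As a sketch of what such a test computes, the Pearson correlation coefficient can be calculated by hand in Python (the revision-hours and exam-marks data are hypothetical; Spearman's rho applies the same formula to the ranks of the scores rather than the raw values):

```python
import math

def correlation(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores: hours spent revising vs. exam marks.
hours = [1, 2, 3, 4, 5]
marks = [52, 55, 61, 70, 74]
print(round(correlation(hours, marks), 2))  # 0.99 — strong positive
```

A coefficient this close to +1 indicates a strong positive correlation, but, as noted below, it says nothing about whether revising caused the higher marks.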


A correlation between variables, however, does not automatically mean that the change in one variable is the cause of the change in the values of the other variable. A correlation only shows if there is a relationship between variables.

Correlation does not prove causation; a third variable may be responsible for the relationship.


Interview Methods

Interviews are commonly divided into two types: structured and unstructured.

In a structured interview, a fixed, predetermined set of questions is put to every participant in the same order and in the same way.

Responses are recorded on a questionnaire, and the researcher presets the order and wording of questions, and sometimes the range of alternative answers.

The interviewer stays within their role and maintains social distance from the interviewee.

In an unstructured interview, there are no set questions; participants can raise whatever topics they feel are relevant and discuss them in their own way, with follow-up questions posed in response to their answers.

Unstructured interviews are most useful in qualitative research to analyze attitudes and values.

Though they rarely provide a valid basis for generalization, their main advantage is that they enable the researcher to probe social actors’ subjective point of view. 

Questionnaire Method

Questionnaires can be thought of as a kind of written interview. They can be carried out face to face, by telephone, or post.

The choice of questions is important because of the need to avoid bias or ambiguity in the questions, ‘leading’ the respondent or causing offense.

  • Open questions are designed to encourage a full, meaningful answer using the subject’s own knowledge and feelings. They provide insights into feelings, opinions, and understanding. Example: “How do you feel about that situation?”
  • Closed questions can be answered with a simple “yes” or “no” or specific information, limiting the depth of response. They are useful for gathering specific facts or confirming details. Example: “Do you feel anxious in crowds?”

Other practical advantages of questionnaires are that they are cheaper than face-to-face interviews and can be used to contact many respondents scattered over a wide area relatively quickly.

Observations

There are different types of observation methods :
  • Covert observation is where the researcher doesn’t tell the participants they are being observed until after the study is complete. This method raises ethical problems of deception and informed consent.
  • Overt observation is where a researcher tells the participants they are being observed and what they are being observed for.
  • Controlled : behavior is observed under controlled laboratory conditions (e.g., Bandura’s Bobo doll study).
  • Natural : Here, spontaneous behavior is recorded in a natural setting.
  • Participant : Here, the observer has direct contact with the group of people they are observing. The researcher becomes a member of the group they are researching.  
  • Non-participant (aka “fly on the wall”): The researcher does not have direct contact with the people being observed; participants’ behavior is observed from a distance.

Pilot Study

A pilot study is a small-scale preliminary study conducted in order to evaluate the feasibility of the key steps in a future, full-scale project.

A pilot study is an initial run-through of the procedures to be used in an investigation; it involves selecting a few people and trying out the study on them. It is possible to save time, and in some cases, money, by identifying any flaws in the procedures designed by the researcher.

A pilot study can help the researcher spot any ambiguities or confusion in the information given to participants, or problems with the task devised.

Sometimes the task is too hard, and the researcher may get a floor effect, because none of the participants can score at all or can complete the task – all performances are low.

The opposite effect is a ceiling effect, when the task is so easy that all achieve virtually full marks or top performances and are “hitting the ceiling”.

Research Design

In cross-sectional research, a researcher compares multiple segments of the population at the same time.

Sometimes, we want to see how people change over time, as in studies of human development and lifespan. Longitudinal research is a research design in which data-gathering is administered repeatedly over an extended period of time.

In cohort studies , the participants must share a common factor or characteristic such as age, demographic, or occupation. A cohort study is a type of longitudinal study in which researchers monitor and observe a chosen population over an extended period.

Triangulation means using more than one research method to improve the study’s validity.

Reliability

Reliability is a measure of consistency: if a particular measurement is repeated and the same result is obtained, then it is described as reliable.

  • Test-retest reliability :  assessing the same person on two different occasions which shows the extent to which the test produces the same answers.
  • Inter-observer reliability : the extent to which there is an agreement between two or more observers.
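Inter-observer reliability is often quantified as simple percentage agreement, as in this sketch (the two observers' codings of ten behaviors are hypothetical):

```python
# Inter-observer reliability as percentage agreement: two observers
# independently code the same ten behaviors into categories.
observer_1 = ["play", "rest", "play", "feed", "rest",
              "play", "feed", "rest", "play", "rest"]
observer_2 = ["play", "rest", "play", "feed", "play",
              "play", "feed", "rest", "play", "rest"]

# Count the observations on which the two codings match.
agreements = sum(a == b for a, b in zip(observer_1, observer_2))
agreement_rate = agreements / len(observer_1)
print(f"{agreement_rate:.0%}")  # prints "90%"
```

Here the observers agree on nine of ten observations; a high agreement rate suggests the behavioral categories are well defined and applied consistently.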

Meta-Analysis

A meta-analysis is a systematic review that involves identifying an aim and then searching for research studies that have addressed similar aims/hypotheses.

This is done by looking through various databases, and then decisions are made about what studies are to be included/excluded.

Strengths: Increases the validity of the conclusions drawn, as they’re based on a wider range of studies.

Weaknesses: Research designs in studies can vary, so they are not truly comparable.

Peer Review

A researcher submits an article to a journal. The choice of the journal may be determined by the journal’s audience or prestige.

The journal selects two or more appropriate experts (psychologists working in a similar field) to peer review the article without payment. The peer reviewers assess: the methods and designs used, originality of the findings, the validity of the original research findings and its content, structure and language.

Feedback from the reviewer determines whether the article is accepted. The article may be: Accepted as it is, accepted with revisions, sent back to the author to revise and re-submit or rejected without the possibility of submission.

The editor makes the final decision whether to accept or reject the research report based on the reviewers’ comments/recommendations.

Peer review is important because it prevents faulty data from entering the public domain, provides a way of checking the validity of findings and the quality of the methodology, and is used to assess the research rating of university departments.

Peer reviews may be an ideal, whereas in practice there are lots of problems. For example, it slows publication down and may prevent unusual, new work being published. Some reviewers might use it as an opportunity to prevent competing researchers from publishing work.

Some people doubt whether peer review can really prevent the publication of fraudulent research.

The advent of the internet means that more research and academic comment is being published without official peer review than before, though systems are evolving online in which everyone has a chance to offer their opinions and police the quality of research.

Types of Data

  • Quantitative data is numerical data e.g. reaction time or number of mistakes. It represents how much or how long, how many there are of something. A tally of behavioral categories and closed questions in a questionnaire collect quantitative data.
  • Qualitative data is virtually any type of information that can be observed and recorded that is not numerical in nature and can be in the form of written or verbal communication. Open questions in questionnaires and accounts from observational studies collect qualitative data.
  • Primary data is first-hand data collected for the purpose of the investigation.
  • Secondary data is information that has been collected by someone other than the person who is conducting the research e.g. taken from journals, books or articles.

Validity means how well a piece of research actually measures what it sets out to, or how well it reflects the reality it claims to represent.

Validity is whether the observed effect is genuine and represents what is actually out there in the world.

  • Concurrent validity is the extent to which a psychological measure relates to an existing similar measure and obtains close results. For example, a new intelligence test compared to an established test.
  • Face validity : does the test measure what it’s supposed to measure ‘on the face of it’. This is done by ‘eyeballing’ the measuring or by passing it to an expert to check.
  • Ecological validity is the extent to which findings from a research study can be generalized to other settings / real life.
  • Temporal validity is the extent to which findings from a research study can be generalized to other historical times.

Features of Science

  • Paradigm – A set of shared assumptions and agreed methods within a scientific discipline.
  • Paradigm shift – The result of the scientific revolution: a significant change in the dominant unifying theory within a scientific discipline.
  • Objectivity – When all sources of personal bias are minimized so as not to distort or influence the research process.
  • Empirical method – Scientific approaches that are based on the gathering of evidence through direct observation and experience.
  • Replicability – The extent to which scientific procedures and findings can be repeated by other researchers.
  • Falsifiability – The principle that a theory cannot be considered scientific unless it admits the possibility of being proved untrue.

Statistical Testing

A significant result is one where there is a low probability that chance factors were responsible for any observed difference, correlation, or association in the variables tested.

If our test is significant, we can reject our null hypothesis and accept our alternative hypothesis.

If our test is not significant, we can accept our null hypothesis and reject our alternative hypothesis. A null hypothesis is a statement of no effect.

In Psychology, we use p < 0.05 (as it strikes a balance between making a Type I and a Type II error), but p < 0.01 is used in research where an error could cause harm, such as trials of a new drug.

A Type I error is when the null hypothesis is rejected when it should have been accepted (more likely when a lenient significance level is used; an error of optimism).

A Type II error is when the null hypothesis is accepted when it should have been rejected (more likely when a stringent significance level is used; an error of pessimism).
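To make the decision rule concrete, the sketch below (plain Python, standard library only) runs a one-tailed sign test on invented data, 9 of 10 participants improving, and checks it against both significance levels discussed above; the scenario and numbers are purely illustrative.

```python
from math import comb

def sign_test_p(successes: int, n: int) -> float:
    """One-tailed sign test: probability of at least `successes`
    improvements out of n under the null hypothesis (p = 0.5)."""
    return sum(comb(n, k) for k in range(successes, n + 1)) / 2 ** n

# Invented data: 9 of 10 participants improved after the treatment.
p = sign_test_p(9, 10)
print(round(p, 4))   # 0.0107
print(p < 0.05)      # True  -> reject the null hypothesis at p < 0.05
print(p < 0.01)      # False -> not significant at the stricter level
```

Note how the same result is significant at the conventional 0.05 level but not at the stricter 0.01 level reserved for research where errors could cause harm.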

Ethical Issues

  • Informed consent is when participants are able to make an informed judgment about whether to take part. However, revealing the study’s purpose may lead participants to guess its aims and change their behavior.
  • To deal with this, researchers can gain presumptive consent or ask participants to formally indicate their agreement to participate, but this may invalidate the purpose of the study, and there is no guarantee that participants would fully understand what they are agreeing to.
  • Deception should only be used when it is approved by an ethics committee, as it involves deliberately misleading or withholding information. Participants should be fully debriefed after the study but debriefing can’t turn the clock back.
  • All participants should be informed at the beginning that they have the right to withdraw if they ever feel distressed or uncomfortable.
  • This right can introduce bias, as the participants who stay tend to be more obedient, and some may not withdraw because they have been given incentives or feel they would be spoiling the study. Researchers can also offer the right to withdraw data after participation.
  • Participants should all have protection from harm . The researcher should avoid risks greater than those experienced in everyday life and they should stop the study if any harm is suspected. However, the harm may not be apparent at the time of the study.
  • Confidentiality concerns the communication of personal information. Researchers should not record any names but use numbers or false names instead, though full anonymity is not always achievable, as it can sometimes be worked out who the participants were.


Seven Examples of Field Experiments for Sociology

Table of Contents

Last Updated on June 12, 2023 by Karl Thompson

Field experiments aren’t the most widely used research method in Sociology, but the examiners seem to love asking questions about them – below are seven examples of this research method.

Looked at collectively, the results of the field experiments below reveal punishingly depressing findings about human action –  they suggest that people are racist, sexist, shallow, passive, and prepared to commit violence when ordered to do so by authority figures.

The experiments are outlined in the form of a timeline, with the most recent first providing contemporary examples of field experiments, and those towards the end being the more classic examples I’m sure everyone has heard of (Rosenthal and Jacobson, for example).

2014 – The Domestic Abuse in the Lift Experiment

Researchers set up a hidden camera in a lift while members of the group played an abusive boyfriend and his victim. The male actors swore at the women and physically assaulted them while members of the public were in the lift.

Most of the lift’s passengers ignored the abuse; only one of the 53 people intervened in an attempt to stop it.

The experiment was organised by STHLM Panda, which describes itself as “doing social experiments, joking with people and documenting the society we live in”.

2010 – The Ethnicity/ Gender and Bike Theft Experiment

In this experiment, two young male actors, dressed in a similar manner (one white, the other black), take it in turns to act out stealing a bike chained to a post in a public park. Each actor spends an hour hacksawing or bolt-cuttering his way through the bike lock (acting this out several times over) as about 100 people walk by in each case.

Towards the end of the film, a third actor steps in – an attractive young, blonde female – people actually help her to steal the bike!

2009 – The Ethnicity and Job Application Experiment

Researchers sent nearly 3,000 job applications under false identities in an attempt to discover if employers were discriminating against jobseekers with foreign names.

They found that an applicant who appeared to be white would send 9 applications before receiving a positive response of either an invitation to an interview or an encouraging telephone call. Minority candidates with the same qualifications and experience had to send 16 applications before receiving a similar response.

All the job vacancies were in the private, public and voluntary sectors and were based in Birmingham, Bradford, Bristol, Glasgow, Leeds, London and Manchester. The report concludes that there was no plausible explanation for the difference in treatment found between white British and ethnic minority applicants other than racial discrimination.

2008 – The £5 Note Theft and Social Disorder Experiment

The experiment was actually a bit more complex (for the full details see the Keizer et al source below), and it was one of six experiments designed to test Wilson and Kelling’s 1982 ‘broken windows theory’.

1971 – The Stanford Prison Experiment

The simulated prison included three six by nine foot prison cells. Each cell held three prisoners and included three cots. Other rooms across from the cells were utilized for the prison guards and warden. One very small space was designated as the solitary confinement room, and yet another small room served as the prison yard.

The Stanford Prison Experiment demonstrates the powerful role that the situation can play in human behaviour. Because the guards were placed in a position of power, they began to behave in ways they would not normally act in their everyday lives or in other situations. The prisoners, placed in a situation where they had no real control, became passive and depressed.

1968 – Rosenthal and Jacobson’s ‘Self-Fulfilling Prophecy’ Experiment

All of the pupils were re-tested 8 months later, and the ‘spurters’ had gained an average of 12 IQ points, compared to an average gain of 8 among the other pupils.

1924-32 The Hawthorne Factory Experiments

The workers’ productivity seemed to improve with any changes made, and slumped when the study ended. It was suggested that the productivity gain occurred because the workers were more motivated due to the increased interest being shown in them during the experiments.

Related Posts 

Field Experiments in Sociology  – covers the strengths and limitations of the method

Swedish social experiment shows people ignoring domestic abuse in a lift – The Guardian

Keizer et al – The Spreading of Disorder – Science Express Report

The Hawthorne Effect – Wikipedia



4 - Field Experiments with Survey Outcomes

from Part I - Experimental Designs

Published online by Cambridge University Press:  08 March 2021

Field experiments with survey outcomes are experiments where outcomes are measured by surveys but treatments are delivered by a separate mechanism in the real world, such as by mailers, door-to-door canvasses, phone calls, or online ads. Such experiments combine the realism of field experimentation with the ability to measure psychological and cognitive processes that play a key role in theories throughout the social sciences. However, common designs for such experiments are often prohibitively expensive and vulnerable to bias. In this chapter, we review how four methodological practices currently uncommon in such experiments can dramatically reduce costs and improve the accuracy of experimental results when at least two are used in combination: (1) online surveys recruited from a defined sampling frame (2) with at least one baseline wave prior to treatment (3) with multiple items combined into an index to measure outcomes and, (4) when possible, a placebo control for the purpose of identifying which subjects can be treated. We provide a general and extensible framework that allows researchers to determine the most efficient mix of these practices in diverse applications. We conclude by discussing limitations and potential extensions.


  • Field Experiments with Survey Outcomes
  • By Joshua L. Kalla, David E. Broockman, Jasjeet S. Sekhon
  • Edited by James N. Druckman, Northwestern University, Illinois, and Donald P. Green, Columbia University, New York
  • Book: Advances in Experimental Political Science
  • Online publication: 08 March 2021
  • Chapter DOI: https://doi.org/10.1017/9781108777919.006


Online field experiments: a selective survey of methods

  • Experimental Tools
  • Published: 19 May 2015
  • Volume 1, pages 29–42 (2015)


  • Yan Chen & Joseph Konstan


The Internet presents today’s researchers with unprecedented opportunities to conduct field experiments. Using examples from Economics and Computer Science, we present an analysis of the design choices, with particular attention to the underlying technologies, in conducting online field experiments and report on lessons learned.


1 Introduction

Field experiments allow researchers to combine the control of laboratory experiments with some of the ecological validity of field studies. Areas such as medicine (Lohr et al. 1986 ), economics (Harrison and List 2004 ), and social psychology (Lerner et al. 2003 ) have all incorporated field experiments in their research. One of the challenges of field experiments, however, is the substantial cost of conducting them, particularly at a sufficient scale for studying high-variance social phenomena. Online communities present a more practical and cost effective venue for conducting field experiments. Given sufficient access to a community of users and the software substrate for their community, researchers can study both short- and long-term effects of a wide range of manipulations.

In this paper, we present an analysis of the design choices for online field experiments using representative studies from both Economics and Computer Science. Within Computer Science, we focus on two subfields: Human–Computer Interaction (HCI) and Computer-Supported Cooperative Work (CSCW). We first summarize current methods for conducting online field experiments, with particular emphasis on the underlying technologies, and then offer some insights and design advice for social scientists interested in conducting such studies.

From the extensive catalog of online field experiments, we choose a representative set of academic studies which use a variety of technologies and cover a broad spectrum of online sites, including those focused on social networking (Facebook, LinkedIn), user-generated content (Wikipedia, MovieLens), e-commerce (eBay, Yahoo!), online games (World of Warcraft), crowdfunding (Kiva), and crowdsourcing (Google Answers, TopCoder, oDesk, Taskcn). Note that we do not include experiments conducted on Amazon’s Mechanical Turk, as they have been covered in a separate survey (Horton et al. 2011 ). Nor do we include experiments conducted on platforms designed for behavioral experimentation, such as LabintheWild (Reinecke and Gajos 2015 ) and TestMyBrain (Germine et al. 2012 ).

We also note that IT companies, including Amazon, eBay, Facebook, Google, LinkedIn, Microsoft, Netflix, ShopDirect, Yahoo, and Zynga, conduct a large number of commercial online field experiments, sometimes called A/B testing, to evaluate new ideas, guide product development, and improve interface design. Footnote 1 Although the vast majority of these experiments are not intended for publication and are thus not discussed here, their validity nonetheless depends on the same methodological issues that concern academic researchers, which we discuss in this paper.

2 Technologies for intervention

In this section, we discuss four basic experimental technologies for intervention within an online community: email and SMS/texting, modified web interfaces, bots, and add-ons. For each technology, we provide case studies to demonstrate how the underlying technology has been used for intervention.

2.1 Email and text

Email is one of the most common intervention technologies used by researchers. Compared to a modified web interface, email is more likely to capture participants’ attention. To illustrate the use of email as a tool for intervention, we examine studies in user-generated content and crowdfunding.

In the first study, Ling et al. ( 2005 ) conduct four field experiments with members of the MovieLens online movie recommender community ( http://www.movielens.org ). In three of these experiments, selected users of the system receive an email message asking them to rate more movies (i.e., to contribute effort and knowledge to the community). In all, over 2400 users receive an email message crafted to test hypotheses based on the Collective Effort Model from social psychology (Karau and Williams 1993 ). These experiments yield several interesting results. First, the researchers find that highlighting a member’s uniqueness by pointing out that the member had rated movies rarely rated by others increases rating behavior. Second, they find that setting specific rating goals (either for individuals or for a group) also increases rating behavior. Surprisingly, highlighting the benefits of movie ratings, either to the member or to others, does not increase the number of ratings.

This experiment demonstrates how to conduct an email intervention. It also underscores the importance of proper controls in an online field experiment. Rating activity peaked after the mailings, but also after the post-experiment thank-you email. Indeed, they find that any reminder about the site seems to promote more visits. In general, this shows it is good practice to have two control conditions in online field experiments, one without any contact from the experimenters and a placebo condition in which participants are contacted but do not receive any treatment content.
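The design practice described above, a treatment arm plus both a placebo-contact condition and a no-contact control, amounts to simple three-arm random assignment. A minimal sketch in Python, with arm names and user IDs that are illustrative rather than taken from the MovieLens study:

```python
import random

def assign_arms(user_ids, seed=42):
    """Shuffle users, then deal them round-robin into three equal-sized arms:
    treatment, placebo contact (contacted, but no treatment content), and
    no-contact control."""
    rng = random.Random(seed)      # fixed seed so the assignment is reproducible
    shuffled = list(user_ids)
    rng.shuffle(shuffled)
    arms = {"treatment": [], "placebo_contact": [], "no_contact": []}
    names = list(arms)
    for i, uid in enumerate(shuffled):
        arms[names[i % 3]].append(uid)
    return arms

arms = assign_arms(range(12))
print({name: len(members) for name, members in arms.items()})
# {'treatment': 4, 'placebo_contact': 4, 'no_contact': 4}
```

Using a fixed seed keeps the assignment reproducible, which makes the analysis easier to audit.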

Our second example is from a recent field experiment on Kiva ( http://www.kiva.org ), the first microlending website to match lenders with entrepreneurs in developing countries. In this study, Chen et al. ( 2015 ) run a large-scale randomized field experiment ( n  = 22,233) by posting team forum messages. Footnote 2 Kiva lenders make zero-interest loans to entrepreneurs in developing countries, often out of pro-social motives (Liu et al. 2012 ). A unique mechanism to increase lender engagement is the Kiva lending team, a system through which lenders can create teams or join existing teams. A team leaderboard sorts teams by the total loan amounts designated by their team members. To understand the effects of the lending teams mechanism on pro-social lending, the researchers examine the role of coordination in reducing search costs and the role of competition through team goal setting. Compared to the control, they find that goal-setting significantly increases lending activities of previously inactive teams. In their experimental design, Chen et al. use a built-in feature in Kiva to summarize daily forum messages into one email that is sent to each team member’s inbox. Thus, their experimental intervention is incorporated into the normal flow of emails that lenders receive.

To prepare for an online field experiment, it is often useful to analyze site archival data through a public application programming interface (API), which enables researchers to download data the site collects about its users. For example, through their empirical analysis of the Kiva archival data, Chen et al. are able to assess the role of teams in influencing lending activities, information which provides guidance for the design of their subsequent field experiment.
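As an illustration of this kind of pre-experiment archival pass, the sketch below totals lending per team and flags inactive teams, the subgroup for which goal-setting proved most effective. The JSON shape and field names are hypothetical, not Kiva's actual API schema:

```python
import json

# Hypothetical archival payload: the field names below are illustrative,
# not Kiva's real API schema.
payload = json.loads("""
{"teams": [
  {"name": "Team A", "loans": [25, 50, 25]},
  {"name": "Team B", "loans": []},
  {"name": "Team C", "loans": [100]}
]}
""")

# Total lending per team, and teams with no recent loans -- the inactive
# subgroup a goal-setting treatment might target.
totals = {team["name"]: sum(team["loans"]) for team in payload["teams"]}
inactive = [team["name"] for team in payload["teams"] if not team["loans"]]
print(totals)    # {'Team A': 100, 'Team B': 0, 'Team C': 100}
print(inactive)  # ['Team B']
```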

Similar to email interventions, text messages have been used effectively to implement field experiments. In developing countries in particular, where cell phone penetration has far exceeded that of the personal computer, texting may be a better method of intervention. Compared to email, the unique challenge of text messaging is the character limit, as a message should be concise enough to fit a cell phone screen. We refer the reader to Kast et al. (2011) as an example of a field experiment using text messages to encourage savings among low-income micro-entrepreneurs in Chile.

2.2 Modified web interface

Another technology utilized in online field experiments is the modified web interface. In particular, randomized experiments through modified web interface are often used in the technology industry to evaluate the effects of changes in user interface design. Software packages, such as PlanOut, Footnote 3 have been developed to facilitate such experimentation (Bakshy et al. 2014 ). We examine how modified web interfaces have been used in settings such as ad placement and online employment.
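At the core of such tooling is deterministic assignment: hashing a (user, experiment) pair so that every user sees a stable variant without any stored lookup table. The sketch below is a simplified illustration of that idea, not PlanOut's actual API; the experiment and variant names are invented:

```python
import hashlib

AD_VARIANTS = ["0_ads", "1_ad", "2_ads", "3_ads", "4_ads"]

def bucket(user_id: str, experiment: str, variants) -> str:
    """Deterministically map a (user, experiment) pair to a variant by
    hashing, so each user always sees the same condition."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Repeated calls give the same assignment for the same user and experiment.
v1 = bucket("user-123", "north_ads", AD_VARIANTS)
v2 = bucket("user-123", "north_ads", AD_VARIANTS)
print(v1 == v2)  # True
```

Because assignment depends only on the hash, no per-user state needs to be stored, and each new experiment name re-randomizes users independently of other experiments.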

In a large-scale field experiment on Yahoo!, Reiley et al. (2010) investigate whether competing sponsored advertisements placed at the top of a webpage (north ads) exert externalities on each other. To study this question, they randomize the number of north ads from zero to four for a representative sample of search queries. Two experiments were conducted with about 100,000 observations per treatment among Yahoo! Search users. Interestingly, the researchers find that rival north ads impose a positive externality on existing north listings. That is, a topmost north ad receives more clicks when additional north ads appear below it. This experiment uses a modified web interface to measure user behavior in a domain about which existing theory has little to say but companies care a great deal.

In a social-advertising experiment, Bakshy et al. ( 2012 ) use a modified web interface to investigate the effect of social cues on consumer responses to ads on Facebook. In their first experiment ( n  = 23,350,087), the researchers provide one to three social cues in word-of-mouth advertising, and then measure how responses increase as a function of the number of cues. In their second experiment ( n  = 5,735,040), they examine the effect of augmenting ad units with a minimal social cue about a single peer. Their findings show that a social cue significantly increases consumer clicks and connections with the advertised entity. Using a measurement of tie strength based on the total amount of communication between subjects and their peers, they find that these influence effects are greatest for strong ties. Their field experiment allows them to measure the magnitude of effects predicted by network theory.

More recently, Gee (2014) presents a field experiment which varies the amount of information seen by 2 million job seekers viewing 100,000 job postings on LinkedIn (https://www.linkedin.com/job/). Users are randomized into a treatment group, who see the true number of people who previously started an application, and a control group, who see no such information for any of the postings during the 16 days of the experiment. The findings show that the additional information in the treatment increases the likelihood that a person will start and finish an application by 2 to 5 percent. Furthermore, Gee finds that the treatment increases the number of female applicants, a finding of interest to firms in the high-tech and finance industries, where women are under-represented. In this case, the researcher brings her theoretical knowledge and academic perspective to an existing randomized field experiment designed and conducted by a company to gain richer insights.

As a tool for intervention, a modified web interface can also be used in combination with emails. For example, Chen et al. (2010a) design a field experiment on MovieLens that sends 398 users a personalized email newsletter, with either social or personal information. Each newsletter contains the same five links: (1) rate popular movies, (2) rate rare movies, (3) invite a buddy to use MovieLens, (4) help us update the MovieLens database, and (5) visit the MovieLens home page. Participants who visit MovieLens during the month after receiving the newsletter receive a slightly modified interface with the four links from the email newsletter included in the “shortcuts” pane of the main MovieLens interface. The authors find that users receiving behavioral information about a median user’s total number of movie ratings demonstrate a 530% increase in their number of ratings if they are below the median. They also find that users who receive the average user’s net benefit score increase activities that help others if they are above average.

From a design standpoint, this study follows user behavior for an extended period of time, which enables the experimenters to detect whether the effects are long-lasting or merely temporal substitution. To correctly estimate the effects of an intervention, the experimenter should consider temporal substitution or spatial displacement, whichever is appropriate. Footnote 4 This study contributes to our understanding of the effects of social information interventions.

2.3 Bots

Another technology available for online field experiments is the bot, a program or script that makes automated edits or suggestions. Wikipedia is one community that allows bots, provided the experimenter receives approval from a group of designated Wikipedia users with the technical skills and wiki-experience to oversee and make decisions on bot activity. Footnote 5 Bots on Wikipedia are governed by the following policy: “The burden of proof is on the bot-maker to demonstrate that the bot is harmless, is useful, is not a server hog, and has been approved” by the Bot Approvals Group.

One study that makes use of a bot is that of Cosley et al. ( 2007 ), who deploy an intelligent task-routing agent, SuggestBot, to study how Wikipedia workload distribution interfaces affect the amount of work members undertake and complete. They deploy SuggestBot to pre-process a dump of Wikipedia data to build a learning model of what articles a user might be interested in editing based on their past editing behavior. SuggestBot then recommends editing jobs to users through their talk pages. Their findings show that personalized recommendations lead to nearly four times as many actual edits as random suggestions.

One challenge in using bots on a third-party website is the level of detail of observation available (e.g., observation of edits, but not reading behavior), but this is all determined by the nature of the programming interface for extending the underlying site or browser to implement monitoring. Footnote 6 Nonetheless, bots can be used to address technical design questions motivated by social science. They can also be a way to enhance matching at a relatively low cost.

2.4 Add-ons

A final technology that can be utilized by researchers is the add-on, such as a browser extension, Footnote 7 that can monitor a participant’s behavior across multiple sites. For example, once users install the MTogether browser extension or mobile app, researchers can access their cross-site behavior over an extended period of time, creating a large-scale longitudinal panel to facilitate data collection and intervention (Resnick et al. forthcoming).

The advantage of a browser extension is significant when the experimental intervention is based on information gathered across multiple sites. For example, Munson et al. ( 2013 ) deploy a browser extension to nudge users to read balanced political viewpoints. The extension monitors users’ web browsing, accesses and classifies their browsing history, dynamically aggregates the political leaning of their reading selections, and then encourages those whose reading leans one way or the other to read a more balanced selection of news. Users see a balance icon that indicates their leaning as well as recommendations of sites representing more neutral positions or the “other” side. Users in the control group receive aggregate statistics only after the 28th day of the experiment. Compared to the control group, users in the treatment show a modest move toward balanced exposure. This study provides a practical tool to potentially alleviate the polarization of the US political landscape.

Another advantage of an add-on is that interventions can be carried out in real time. For example, in response to earlier research showing that being reverted (and in many cases being reverted without comment or rudely) is a cause for attrition among new Wikipedia editors, Halfaker et al. ( 2011 ) deploy an add-on (built in JavaScript) to alert Wikipedia editors who are performing revert operations that they are reverting a new editor. It also provides a convenient interface for sending an explanatory message to that new editor. This intervention has led to significant changes in interaction and an increase in retention of new editors (based on the messaging and warning, respectively). It provides a much needed technology to increase the retention of new Wikipedia editors, which can be extended to other online communities as well.

In sum, the technology available within online communities provides researchers with the opportunity to conduct large-scale and real-time interventions that better capture participant behavior in field experiments.

3 Design choices

To aid researchers interested in conducting an online field experiment, we outline a number of design considerations, including (1) the access and degree of control the experimenter exerts over the online venue, (2) issues of recruiting, informed consent, and the IRB, (3) the identifiability and authentication of subjects, and (4) the nature of the control group. Note that these dimensions exclude the three core design features of any experiment—the hypotheses, the experimental conditions, and the presentation of the experimental manipulation, as these vary substantially with each individual study. We also do not include power analysis as this can be found in statistics textbooks.

3.1 Access and degree of control

When a researcher has the flexibility to choose among several different sites on which to conduct a study, the degree of experimenter control is an important factor to consider.

Experimenter-as-user involves minimal or no collaboration with the site owners. On many online sites, experimenters may establish identities as users for the purposes of both gathering field data and introducing interventions. Naturally, both the types of manipulation possible and the data that can be gathered are limited by the system design. Furthermore, some online communities have usage agreements or codes of conduct that prohibit such research uses. The experimenter-as-user approach has been used since the first economic field experiment conducted over the Internet, where Lucking-Reiley (1999) auctioned off pairs of identical collectible Magic: the Gathering trading cards using different auction formats to test the revenue equivalence theorem. Using an Internet newsgroup exclusively devoted to the trading of cards, with substantial trading volume and a variety of auction mechanisms, he found that the Dutch auction produced 30-percent higher revenues than the first-price auction. These results run counter to well-known theoretical predictions and previous laboratory results, which might be due to several differences between the field setting and that of the laboratory: (1) simultaneous auctions for multiple items, (2) real versus induced values, and (3) slow Dutch auctions. A subsequent lab experiment shows that (3) matters: the slower the speed of the Dutch auction, the more revenue it raises (Katok and Kwasnica 2008). This pair of studies demonstrates the complementarity between laboratory and field experiments.

In another study, Resnick et al. ( 2006 ) conducted a field experiment on eBay to study Internet reputation systems. In their design, a high-reputation, established eBay seller sold matched pairs of vintage postcards under his regular identity as well as under seven new seller identities (also operated by him). With this approach, they were able to measure the difference in buyers’ willingness-to-pay, and put a price on good reputation. Since eBay was not involved in the experiment, data were collected directly from the eBay webpage using a Web crawler, an Internet bot that systematically browses and copies webpages for indexing and data collection. The significance of this experiment is their empirical estimate for a widely discussed parameter, the value of reputation.
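
The crawler-based data collection used by Resnick et al. can be sketched in miniature. The following hypothetical Python example uses the standard library's html.parser to extract prices from a saved listing page; the tag and class names here are illustrative, not eBay's actual markup:

```python
from html.parser import HTMLParser

class PriceParser(HTMLParser):
    """Collect the text of elements marked class="price" (illustrative markup)."""
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs supplied by html.parser.
        if tag == "span" and ("class", "price") in attrs:
            self.in_price = True

    def handle_data(self, data):
        if self.in_price:
            self.prices.append(data.strip())
            self.in_price = False

# In a real crawler the HTML would come from urllib.request.urlopen(url).read();
# here we parse a saved snippet instead.
page = '<div><span class="price">$12.50</span><span class="price">$8.00</span></div>'
parser = PriceParser()
parser.feed(page)
print(parser.prices)  # ['$12.50', '$8.00']
```

A production crawler would additionally fetch pages over the network, respect robots.txt, and rate-limit its requests.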

Similarly, the experimenter-as-employer model has been used for crowdsourcing experiments, testing social preferences on the now-defunct Google Answers (Harper et al. 2008 ; Chen et al. 2010b ), labor market sorting on TopCoder (Boudreau and Lakhani 2011 ), and contest behavior on Taskcn (Liu et al. 2014 ). In one such experimenter-as-employer study, Pallais ( 2014 ) evaluates the effects of employment and feedback on subsequent employment outcomes on oDesk ( https://www.odesk.com/ ), an online labor market for freelance workers. In this study, 952 randomly-selected workers are hired for data entry jobs. After job completion, each receives either a detailed or coarse public evaluation. Using oDesk administrative data, Pallais finds that both the act of hiring a worker and the provision of a detailed evaluation substantially improve a participant’s subsequent employment rates, earnings and reservation wages. Her results have important public policy implications for markets for inexperienced workers as well as reputation building.

A site with a public interface is another option that allows for substantial experimenter control. Facebook, LinkedIn, and Wikipedia all encourage the integration of third-party applications. For example, Cosley et al. ( 2007 ) use the Wikipedia data dumps to build a model of users (based on editing behavior) and articles to identify the articles a user might be interested in editing. They then deploy SuggestBot to recommend articles to potential editors. Their study illustrates the challenges of working through an open interface, as their profiles are limited to those with existing editing experience. In the online gaming area, Williams et al. ( 2006 ) use a public interface to study the social aspect of guilds in World of Warcraft. They gather their data through player bots, interfaces that provide statistics on currently active players.

A collaborative relationship with a site owner is another choice that can provide a fair amount of data and control. For example, Chen et al. ( 2006 ) worked with the Internet Public Library (IPL) to test the effectiveness of various fund-raising mechanisms proposed in the literature. These were implemented through a variety of solicitation delivery interfaces (e.g., pop-up messages, pop-under messages, and in-window links). Their findings show that Seed Money (i.e., an initial large donation) and Matching mechanisms (i.e., a benefactor commits to match donations at a fixed rate) each generate significantly higher user click-through response rates than a Premium mechanism, which offers donors an award or prize correlated with the gift size. In this case, their collaboration with the IPL staff allows them to collect micro-behavioral data, such as user click-streams. Such collaborative relationships can be extremely effective, but tend to develop slowly as the site owner gains trust in the collaborating researcher. As such they are best viewed as a substantial investment in research infrastructure rather than as a quick target for a single study. Finally, a variation of the collaborative model is to partner with companies through shared research projects that involve doctoral students as interns. Furthermore, many IT companies have been hiring academics, who conduct online field experiments both for the company and for pure academic research.

Lastly, owning your own site is the option that gives the experimenter the most control and flexibility in the experimental design and data collection. One site, MovieLens, was created by researchers more than a decade ago, and has provided the ability to control and measure every aspect of the system and of user interaction with it over time. For example, it allows researchers to modify the interface, implement varying interfaces for different experimental groups, analyze usage data to assign users into experimental groups, and email users who opt in to invite them to participate in experiments. One study conducted with MovieLens examines the effects of social information on contribution behavior by providing personalized email newsletters with varying social comparison information (Chen et al. 2010a ). The experimenters have access to user history data (e.g., number of movies rated, frequency of login, and other usage data) that aids in assigning subjects to groups and in personalizing their newsletters. They were able to track user activity in the month following the newsletter mailing (and beyond) to determine the effect of the manipulation on user interaction with the site as a whole. Finally, the site allows for a modified web interface to present the email newsletter links within the site itself. This level of control and observation would be difficult without direct control over the site.

Despite the advantages, site ownership can be costly. The original MovieLens implementation took about one month of development with two masters students working on it. The fixed cost was small because the site was a clone of the EachMovie site that DEC was shutting down, with few features and no design. Since then, the research team has maintained a solid investment in MovieLens, with a full-time staff member supporting its maintenance, ongoing development, and related research—usually working together with two or three part-time undergrads and masters students, and occasionally several Ph.D. students. During an experiment, the costs increase, with a full-time staff member who devotes about a quarter of his time to site maintenance, a Ph.D. student who devotes about ten hours a week to system development and enhancements, and rotating responsibility in the lab for handling customer support for about one to two hours per week.

Starting a site from scratch involves higher fixed costs. For example, launching LabintheWild required six months of programming effort. Subsequently, it takes approximately ten programming hours to maintain the site (excluding the construction of new experiments) and an additional ten hours per week for general maintenance, including writing blog posts, updating its Facebook page and answering participant emails.

3.2 Recruiting, informed consent and the IRB

In addition to considering what type of experimenter control is best suited to their study, researchers must consider issues related to subject recruiting and to the ethics of the experiment. Online field experiments use two types of subject recruiting. The first type is natural selection. In the eBay field experiments discussed above, the experimental tasks are natural tasks that participants interested in the item undertake. These are natural field experiments (Harrison and List 2004 ), where participants do not know that they are in an experiment. In nearly all cases, no informed consent is presented to the participants because awareness of the research study and being monitored can affect behavior (List 2008 ).

The second type of online recruiting method is sampling. An experimenter with access to a database of site users can generate a pool of potential subjects, and in some way recruit them to participate. From the pool, the experimenter may invite a random sample, may create a stratified or other systematic sample, or may simply implement the experimental interface across the entire pool. In one study, Leider et al. ( 2009 ) recruit their subjects from Facebook, but then direct them to the researchers’ own website to conduct the actual experiment.
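
As an illustration of the sampling approach, a stratified draw from a pool of site users might look like the following sketch (the pool, the stratum variable, and the sample sizes are all hypothetical):

```python
import random

def stratified_sample(users, stratum_key, n_per_stratum, seed=0):
    """Draw n users from each stratum of the pool (simple illustrative sketch)."""
    rng = random.Random(seed)  # seeded so the draw is reproducible
    strata = {}
    for u in users:
        strata.setdefault(stratum_key(u), []).append(u)
    sample = []
    for members in strata.values():
        sample.extend(rng.sample(members, min(n_per_stratum, len(members))))
    return sample

# Hypothetical pool: stratify by activity level so that light and heavy
# users are both represented in the invitation list.
pool = [{"id": i, "active": i % 3 == 0} for i in range(30)]
invited = stratified_sample(pool, lambda u: u["active"], n_per_stratum=5)
print(len(invited))  # 10
```

The same skeleton covers the other options mentioned above: a simple random sample is the degenerate case of a single stratum, and inviting the entire pool skips the sampling step altogether.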

Subject recruitment may be explicit, as in Chen et al. ( 2010a ), who recruit via email, with those who reply becoming subjects. Other experiments, such as the email studies in Ling et al. ( 2005 ), randomly select users and assign them into groups, where being sent the email is the experimental treatment. By contrast, Sen et al.’s ( 2006 ) tagging experiments present the interface to the entire community. For experiments which accept convenience samples of those users who volunteer, who visit the site, or who otherwise discover the manipulation, there is the concern of sample selection bias. Even studies that do not require explicit consent, such as Cosley et al. ( 2007 ) or Sen et al. ( 2006 ), face samples biased towards frequent or attentive users.

The recruitment strategy for online field experiments is closely related to the question of informed consent. Compared with laboratory experiments, it is much more common for field experiments to request a waiver of informed consent so as to avoid changing the behavior of the subject.

In general, researchers who plan to run online field experiments should go through the IRB process so that a disinterested third party evaluates the ethical aspects of the proposed experiment, even though the process might not screen out all unethical studies. In our experience, some university IRBs are reasonable and efficient, while others are bureaucratic. In industry, to our knowledge, Yahoo Research established an IRB process for online field experiments, whereas other major IT companies do not, although some have tight privacy controls on all use of individual-level data.

3.3 Identification and authentication

Researchers interested in conducting online field experiments need to consider how they will accurately identify users and track individual activities, as most studies benefit from the ability to identify users over a period of time.

Identification requires that a user offer a unique identifier, such as a registered login name. Authentication is a process that verifies the proffered identity, to increase the confidence that the user proffering the identity is actually the owner of that identity. An identification and authentication system may also ensure that a real-world user has only a single identity in the online community of interest (Friedman and Resnick 2001 ). Sites that provide personalization or reputation systems typically require login with an ID and password. E-commerce sites may require login, but often not until a purchase is being made. In contrast, many information services, from CNN.com and ESPN.com to the Internet Public Library, do not require any user identification. For these sites, creating an identification system that requires users to create accounts and login might adversely affect usage and public satisfaction with the service, and would therefore likely be discouraged by the site owners.

Three methods commonly used for tracking users on sites without logins are session tracking, IP addresses, and cookies. Each method has both strengths and weaknesses. For example, session tracking on a web server can identify a sequence of user actions within a session, but not across sessions. IP addresses, on the other hand, can be used to track a user across multiple sessions originating from the same computer. However, they cannot follow a user from computer to computer and are often reissued to a new computer with the original computer receiving a new address. Cookies are small files that a website can ask a user’s web browser to store on the user’s computer and deliver at a later time. Cookies can identify a user even if her IP address changes, but not if a user moves to a different computer or browser, or chooses to reject cookies.

In one study, Chen et al. ( 2006 ) use cookies to ensure that a user remains in the same experimental group throughout the experiment. Users who store cookies receive the same campaign message. For other users, the researchers attempt to write a cookie to create an ID for the user in the experimental database. This approach cannot protect against users returning via multiple machines, but it is a practical solution. We should note that people who reject cookies may be more technologically savvy than the average user, which raises sample bias questions for some studies. In the end, there is no perfect method for determining online identification and authentication. Whenever possible, researchers should conduct online field experiments on sites which require login with an ID and password.
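
When users do log in with stable IDs, a common engineering pattern for keeping each user in the same experimental group (a general sketch, not the method of any particular study above) is to hash the user ID together with an experiment name:

```python
import hashlib

def assign_condition(user_id, experiment, conditions):
    """Deterministically map a stable user ID to an experimental condition.

    Hashing (experiment, user_id) means the same user always lands in the
    same group, and assignments are independent across experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return conditions[int(digest, 16) % len(conditions)]

conditions = ["placebo", "treatment"]
# Same user, same experiment -> same condition on every visit and device.
a = assign_condition("user42", "newsletter-2015", conditions)
b = assign_condition("user42", "newsletter-2015", conditions)
print(a == b)  # True
```

Because the assignment is a pure function of the ID, no cookie or server-side lookup table is needed, and the same user sees the same condition from any machine.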

3.4 Control group

Finally, designing appropriate control conditions for online field experiments can be challenging. In many cases, it is necessary to have at least two different control groups. One group receives a carefully matched stimulus, with the exception of the hypothesized active ingredient. For example, if studying personalization, the control group could receive an unpersonalized version of the interface; if studying varying content, the control group could receive the same media, but different content; if studying the medium, the control group could receive the same content, but with a different medium. We call this type of control the placebo, as it is similar to the placebo in medical experiments. The placebo design can improve the precision with which the causal effects are estimated (Johnson et al. 2015 ; chapter 5 in Gerber and Green 2012 ). However, an online experiment often requires an additional control in which users are not contacted by the experimenters, to help estimate the extent of any Hawthorne effects or mere-contact effects (Ling et al. 2005 ). To be effective, this control needs to be selected from the group of recruits, volunteers, or other eligible subjects.
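
The design described above, one placebo arm plus a no-contact control drawn from the same eligible pool, can be sketched as a simple randomization (the arm names and the equal split are illustrative):

```python
import random

def randomize(eligible, arms=("treatment", "placebo", "no_contact"), seed=1):
    """Randomly split the eligible pool into a treatment arm, a matched
    placebo arm, and a no-contact control used to estimate Hawthorne or
    mere-contact effects."""
    rng = random.Random(seed)
    shuffled = eligible[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    assignment = {arm: [] for arm in arms}
    for i, subject in enumerate(shuffled):
        assignment[arms[i % len(arms)]].append(subject)
    return assignment

groups = randomize(list(range(90)))
print([len(groups[a]) for a in ("treatment", "placebo", "no_contact")])  # [30, 30, 30]
```

Crucially, the no-contact group is carved out of the same eligible pool before any contact occurs, so all three arms are comparable.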

4 Conclusion

The number of online field experiments has grown tremendously in recent years, spanning fields in economics and computer science as diverse as public finance, labor economics, industrial organization, development economics, human–computer interaction, computer-supported cooperative work, and e-commerce. With the expansion of the Web and e-commerce, we expect this number to grow even more. While some experiments do not lend themselves to Internet testing, we expect that many field experiments on charitable giving, social networks, auctions, personalized advertisement, and health behavior will be conducted online.

Compared to their offline counterparts, online field experiments tend to have both a larger number of observations and natural language as variables of interest, which sometimes require new tools for data manipulation and analysis. We refer the reader to Varian ( 2014 ) for an overview of these new tools and machine learning techniques.

Working at the intersection of economics and computer science, this paper has provided a discussion of the main technologies for conducting such experiments, including case studies to highlight the potential for online field experiments. It has also provided insight into some of the design considerations for researchers in navigating the online field experiment arena.

Google alone conducts more than 10,000 online field experiments per year (private communication with Hal Varian). At Microsoft’s Bing, over 200 concurrent experiments are running on any given day, involving about 100 million active monthly customers (Kohavi et al. 2013 ).

Founded in 2005, Kiva partners with microfinance institutions and matches individual lenders from developed countries with low-income entrepreneurs in developing countries as well as in selected cities in the United States. Through Kiva’s platform, anyone can make a zero-interest loan of $25 or more to support an entrepreneur. As of January 2015, more than 2 million lenders across 208 countries have contributed $666 million in loans, reaching over 1.5 million borrowers in more than 73 countries.

PlanOut is an open source software package developed by Facebook researchers. For detailed information, see https://facebook.github.io/planout/ .

An example of spatial displacement in the blood donation context is reported in Lacetera et al. ( 2012 ). The authors find that, while economic incentives increase blood donations, a substantial proportion of this increase is explained by donors leaving neighboring drives without incentives to attend those with incentives.

Wikipedia’s bot policy can be found at https://en.wikipedia.org/wiki/Wikipedia:Bot_policy . The approval procedure can be found at https://en.wikipedia.org/wiki/Wikipedia:Bots/Requests_for_approval .

For example, Chrome, Firefox and Internet Explorer all have interfaces where researchers can extend the browser to monitor page views and scrolling activities. In some cases, researchers can extend underlying sites through a programming interface that may permit notice of reading or editing behavior.

A browser extension is a computer program that extends the functionality of a web browser, such as improving its user interface, without directly affecting viewable content of a web page. Source: https://msdn.microsoft.com/en-us/library/aa753587(VS.85).aspx .

Private communication with Katharina Reinecke.

Bakshy, E., Eckles, D., & Bernstein, M. S. (2014). Designing and deploying online field experiments. In Proceedings of the 23rd International Conference on World Wide Web , WWW ’14 ACM New York, NY, USA, pp. 283–292.

Bakshy, E., Eckles, D., Yan, R., & Rosenn, I. (2012). Social influence in social advertising: Evidence from field experiments. In Proceedings of the 13th ACM Conference on Electronic Commerce , EC ’12 ACM New York, NY, USA, pp. 146–161.

Boudreau, K. J., & Lakhani, K. (2011). The confederacy of heterogeneous software organizations and heterogeneous developers: field experimental evidence on sorting and worker effort. doi: 10.2139/ssrn.1898277

Chen, Y., Li, X., & MacKie-Mason, J. (2006). Online fund-raising mechanisms: A field experiment. Contributions to Economic Analysis and Policy , Berkeley Electronic Press, 5 (2), Article 4.

Chen, Y., Harper, F. M., Konstan, J., & Li, S. X. (2010a). Social comparisons and contributions to online communities: A field experiment on MovieLens. American Economic Review , 100 (4), 1358–1398.

Chen, Y., Ho, T.-H., & Kim, Y.-M. (2010b). Knowledge market design: A field experiment at Google Answers. Journal of Public Economic Theory , 12 (4), 641–664.

Chen, R., Chen, Y., Liu, Y., & Mei, Q. (2015). Does team competition increase pro-social Lending? Evidence from online microfinance. Games and Economic Behavior . doi: 10.1016/j.geb.2015.02.001 .

Cosley, D., Frankowski, D., Terveen, L., & Riedl, J. (2007). SuggestBot: Using intelligent task routing to help people find work in Wikipedia. In Proceedings of the 12th international conference on Intelligent user interfaces , pp. 32–41.

Friedman, E. J., & Resnick, P. (2001). The social cost of cheap pseudonyms. Journal of Economics and Management Strategy , 10 (2), 173–199.

Gee, L. K. (2014). The More You Know: Information Effects in Job Application Rates by Gender in A Large Field Experiment. Tufts University Manuscript.

Gerber, A. S., & Green, D. P. (2012). Field experiments: Design, analysis, and interpretation . New York: WW Norton & Company, Inc.

Germine, L., Nakayama, K., Duchaine, B. C., Chabris, C. F., Chatterjee, G., & Wilmer, J. B. (2012). Is the Web as good as the lab? Comparable performance from Web and lab in cognitive/perceptual experiments. Psychonomic Bulletin and Review , 19 (5), 847–857.

Halfaker, A., Song, B., Stuart, D. A., Kittur, A., & Riedl, J. (2011). NICE: Social translucence through UI intervention. In Proceedings of the 7th International Symposium on Wikis and Open Collaboration , WikiSym ’11 ACM New York, NY, USA, pp. 101–104.

Harper, F. M., Raban, D., Rafaeli, S., & Konstan, J. A. (2008). Predictors of answer quality in online Q&A sites. In CHI ’08: Proceeding of the 26th Annual SIGCHI Conference on Human Factors in Computing Systems , ACM New York, NY, pp. 865–874.

Harrison, G. W., & List, J. A. (2004). Field experiments. Journal of Economic Literature , 42 (4), 1009–1055.

Horton, J. J., Rand, D. G., & Zeckhauser, R. J. (2011). The online laboratory: Conducting experiments in a real labor market. Experimental Economics , 14 (3), 399–425.

Johnson, G. A., Lewis, R. A., & Reiley, D. (2015). Location, location, location: repetition and proximity increase advertising effectiveness. http://www.davidreiley.com/papers/LocationLocationLocation.pdf .

Karau, S. J., & Williams, K. D. (1993). Social loafing: A meta-analytic review and theoretical integration. Journal of Personality and Social Psychology , 65 , 681–706.

Kast, F., Meier, S., & Pomeranz, D. (2011). Under-savers anonymous: Evidence on self-help groups and peer pressure as a savings commitment device. Working Paper, Columbia Business School.

Katok, E., & Kwasnica, A. M. (2008). Time is money: The effect of clock speed on seller’s revenue in Dutch auctions. Experimental Economics , 11 (4), 344–357.

Kohavi, R., Deng, A., Frasca, B., Walker, T., Xu, Y., & Pohlmann, N. (2013). Online controlled experiments at large scale. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining , KDD ’13 ACM New York, NY, USA, pp. 1168–1176.

Lacetera, N., Macis, M., & Slonim, R. (2012). Will there be blood? Incentives and displacement effects in pro-social behavior. American Economic Journal: Economic Policy , 4 (1), 186–223.

Leider, S., Mobius, M. M., Rosenblat, T., & Do, Q.-A. (2009). Directed altruism and enforced reciprocity in social networks: How much is a friend worth? Quarterly Journal of Economics , 124 (4), 1815–1851.

Lerner, J. S., Gonzalez, R. M., Small, D. A., & Fischhoff, B. (2003). Effects of fear and anger on perceived risks of terrorism a national field experiment. Psychological Science , 14 (2), 144–150.

Ling, K., Beenen, G., Ludford, P., Wang, X., Chang, K., Li, X., et al. (2005). Using social psychology to motivate contributions to online communities. Journal of Computer-Mediated Communication , 10 (4). doi: 10.1111/j.1083-6101.2005.tb00273.x .

List, J. A. (2008). Informed consent in social science. Science , 322 , 672.

Liu, T. X., Yang, J., Adamic, L. A., & Chen, Y. (2014). Crowdsourcing with all-pay auctions: a field experiment on Taskcn. Management Science 60 (8), 2020–2037.

Liu, Y., Chen, R., Chen, Y., Mei, Q., & Salib, S. (2012). “I loan because...”: Understanding motivations for pro-social lending. In Proceedings of the fifth ACM international conference on Web search and data mining , WSDM ’12 ACM New York, NY, USA, pp. 503–512.

Lohr, K. N., Brook, R. H., Kamberg, C. J., Goldberg, G. A., Leibowitz, A., Keesey, J., et al. (1986). Use of medical care in the RAND health insurance experiment: Diagnosis-and service-specific analyses in a randomized controlled trial. Medical Care 24 (9 Suppl), S1–S87.

Lucking-Reiley, D. (1999). Using field experiments to test equivalence between auction formats: Magic on the internet. American Economic Review , 89 (5), 1063–1080.

Munson, S., Lee, S., & Resnick, P. (2013). Encouraging reading of diverse political viewpoints with a browser widget. In International AAAI Conference on Weblogs and Social Media , ICWSM 2013, Boston, USA.

Pallais, A. (2014). Inefficient hiring in entry-level labor markets. American Economic Review , 104 (11), 3565–3599.

Reiley, D. H., Li, S.-M., & Lewis, R. A. (2010). Northern exposure: A field experiment measuring externalities between search advertisements. In Proceedings of the 11th ACM Conference on Electronic Commerce , EC’10 ACM New York, NY, USA, pp. 297–304.

Reinecke, K., & Gajos, K. (2015). LabintheWild: Conducting large-scale online experiments with uncompensated samples. In Computer supported cooperative work and social computing (CSCW) , Vancouver, BC, Canada.

Resnick, P., Adar, E., & Lampe, C. What social media data we are missing and how to get it. The Annals of the American Academy of Political and Social Science (forthcoming).

Resnick, P., Zeckhauser, R., Swanson, J., & Lockwood, K. (2006). The value of reputation on eBay: A controlled experiment. Experimental Economics , 9 (2), 79–101.

Sen, S., Lam, S. K., Rashid, A. M., Cosley, D., Frankowski, D., Osterhouse, J., et al. (2006). Tagging, communities, vocabulary, evolution. In Proceedings of the 2006 20th anniversary conference on Computer supported cooperative work , ACM, pp. 181–190.

Varian, H. R. (2014). Big data: New tricks for econometrics. The Journal of Economic Perspectives , 28 (2), 3–27.

Williams, D., Ducheneaut, N., Xiong, L., Zhang, Y., Yee, N., & Nickell, E. (2006). From tree house to barracks the social life of guilds in world of warcraft. Games and Culture , 1 (4), 338–361.

Acknowledgments

We would like to thank Eytan Bakshy, Tawanna Dillahunt, Sara Kiesler, Nancy Kotzian, Robert Kraut, David Reiley, Katharina Reinecke, Paul Resnick, John Riedl and Loren Terveen for helpful conversations on the topic and comments on a previous version. We are grateful to Robert Slonim and two anonymous referees for their comments and suggestions, which significantly improved the paper. The financial support from the National Science Foundation through Grants Nos. IIS-0325837 and BCS-1111019 is gratefully acknowledged. Chen: School of Information, University of Michigan, 105 State Street, Ann Arbor, MI 48109-2112. Email: [email protected]. Konstan: Department of Computer Science and Engineering, University of Minnesota, 200 Union Street SE, Minneapolis, MN 55455. Email: [email protected].

Author information

Authors and affiliations

Yan Chen, School of Information, University of Michigan, Ann Arbor, MI, USA

Joseph Konstan, Department of Computer Science and Engineering, University of Minnesota, 200 Union Street SE, Minneapolis, MN, 55455, USA

Corresponding author

Correspondence to Yan Chen .

About this article

Chen, Y., Konstan, J. Online field experiments: a selective survey of methods. J Econ Sci Assoc 1 , 29–42 (2015). https://doi.org/10.1007/s40881-015-0005-3

Received: 01 November 2014

Revised: 12 March 2015

Accepted: 16 March 2015

Published: 19 May 2015

Issue Date: July 2015


Keywords

  • Online field experiment
  • A/B testing

Key Differences

Difference Between Survey and Experiment

While a survey collects data provided by informants, an experiment tests various premises by trial and error. This article sheds light on the differences between a survey and an experiment.

Comparison Chart

Basis for Comparison | Survey | Experiment
Meaning | A technique of gathering information regarding a variable under study from the respondents of the population. | A scientific procedure wherein the factor under study is isolated to test a hypothesis.
Used in | Descriptive research | Experimental research
Samples | Large | Relatively small
Suitable for | Social and behavioural sciences | Physical and natural sciences
Example of | Field research | Laboratory research
Data collection | Observation, interview, questionnaire, case study, etc. | Several readings of the experiment

Definition of Survey

By the term survey, we mean a method of securing information relating to the variable under study from all or a specified number of respondents of the universe. It may be a sample survey or a census survey. This method relies on questioning the informants on a specific subject. A survey follows a structured form of data collection, in which a formal questionnaire is prepared and the questions are asked in a predefined order.

Informants are asked questions concerning their behaviour, attitude, motivation, demographic and lifestyle characteristics, etc., through observation, direct communication over telephone or mail, or personal interview. Questions may be posed verbally, in writing, or by way of a computer, and respondents’ answers are obtained in the same form.

Definition of Experiment

The term experiment means a systematic and logical scientific procedure in which one or more independent variables under test are manipulated, and any change in one or more dependent variables is measured, while controlling for the effect of extraneous variables. Here, an extraneous variable is an independent variable that is not associated with the objective of the study but may affect the response of the test units.

In an experiment, the investigator intentionally intervenes and observes the outcome, in order to test a hypothesis, discover something new, or demonstrate a known fact. An experiment aims at drawing conclusions about the effect of the factor on the study group and at making inferences from the sample to the larger population of interest.
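
To make the causal logic concrete, here is a minimal sketch with hypothetical scores: because random assignment balances extraneous variables in expectation, a simple difference in group means estimates the effect of the manipulated variable.

```python
from statistics import mean

# Hypothetical outcome scores from a small randomized experiment.
treated = [8, 7, 9, 8]   # units that received the manipulation
control = [6, 5, 7, 6]   # units that did not

# With random assignment, extraneous variables are balanced in expectation,
# so the difference in means estimates the causal effect of the manipulation.
effect = mean(treated) - mean(control)
print(effect)
```

With real data, a researcher would also report a measure of uncertainty (e.g., a standard error) rather than the point estimate alone.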

Key Differences Between Survey and Experiment

The differences between survey and experiment can be drawn clearly on the following grounds:

  • A technique of gathering information regarding a variable under study, from the respondents of the population, is called survey. A scientific procedure wherein the factor under study is isolated to test hypothesis is called an experiment.
  • Surveys are performed when the research is of a descriptive nature, whereas experiments are conducted in experimental research.
  • Survey samples are large, as the response rate is low, especially when the survey is conducted through a mailed questionnaire. On the other hand, the samples required in the case of experiments are relatively small.
  • Surveys are considered suitable for the social and behavioural sciences. As against this, experiments are an important characteristic of the physical and natural sciences.
  • Field research refers to research conducted outside the laboratory or workplace; surveys are the best example of field research. On the contrary, an experiment is an example of laboratory research, i.e. research carried out in a room equipped with scientific tools and equipment.
  • In surveys, the data collection methods employed can be observation, interview, questionnaire, or case study. In an experiment, by contrast, the data are obtained through several readings of the experiment.

While a survey studies the possible relationship between the data and the unknown variable, an experiment determines that relationship. Further, correlation analysis is vital in surveys, as in social and business surveys the researcher’s interest rests in understanding and controlling relationships between variables, whereas in experiments causal analysis is significant.


Have a language expert improve your writing

Run a free plagiarism check in 10 minutes, generate accurate citations for free.

  • Knowledge Base

Methodology

  • Questionnaire Design | Methods, Question Types & Examples

Questionnaire Design | Methods, Question Types & Examples

Published on July 15, 2021 by Pritha Bhandari. Revised on June 22, 2023.

A questionnaire is a list of questions or items used to gather data from respondents about their attitudes, experiences, or opinions. Questionnaires can be used to collect quantitative and/or qualitative information.

Questionnaires are commonly used in market research as well as in the social and health sciences. For example, a company may ask for feedback about a recent customer service experience, or psychology researchers may investigate health risk perceptions using questionnaires.

Table of contents

  • Questionnaires vs. surveys
  • Questionnaire methods
  • Open-ended vs. closed-ended questions
  • Question wording
  • Question order
  • Step-by-step guide to design
  • Other interesting articles
  • Frequently asked questions about questionnaire design

A survey is a research method where you collect and analyze data from a group of people. A questionnaire is a specific tool or instrument for collecting the data.

Designing a questionnaire means creating valid and reliable questions that address your research objectives, placing them in a useful order, and selecting an appropriate method for administration.

But designing a questionnaire is only one component of survey research. Survey research also involves defining the population you’re interested in, choosing an appropriate sampling method, administering questionnaires, data cleansing and analysis, and interpretation.

Sampling is important in survey research because you’ll often aim to generalize your results to the population. Gather data from a sample that represents the range of views in the population for externally valid results. There will always be some differences between the population and the sample, but minimizing these will help you avoid several types of research bias, including sampling bias, ascertainment bias, and undercoverage bias.
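
A simple random sample of the kind described here can be drawn with Python's standard library; the frame of 1,000 hypothetical respondents is an assumption for illustration, and a real study would draw from an actual sampling frame.

```python
import random

def simple_random_sample(frame, n, seed=None):
    """Draw n units without replacement from a sampling frame."""
    rng = random.Random(seed)           # seeding makes the draw reproducible
    return rng.sample(frame, n)

frame = [f"respondent_{i}" for i in range(1000)]
sample = simple_random_sample(frame, 50, seed=42)
print(len(sample), len(set(sample)))    # → 50 50 (distinct units, no repeats)
```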

Questionnaire methods

Questionnaires can be self-administered or researcher-administered. Self-administered questionnaires are more common because they are easy to implement and inexpensive, but researcher-administered questionnaires allow deeper insights.

Self-administered questionnaires

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or through mail. All questions are standardized so that all respondents receive the same questions with identical wording.

Self-administered questionnaires can be:

  • cost-effective
  • easy to administer for small and large groups
  • anonymous and suitable for sensitive topics

But they may also be:

  • unsuitable for people with limited literacy or verbal skills
  • susceptible to a nonresponse bias (most people invited may not complete the questionnaire)
  • biased towards people who volunteer because impersonal survey requests often go ignored.

Researcher-administered questionnaires

Researcher-administered questionnaires are interviews that take place by phone, in-person, or online between researchers and respondents.

Researcher-administered questionnaires can:

  • help you ensure the respondents are representative of your target audience
  • allow clarifications of ambiguous or unclear questions and answers
  • have high response rates because it’s harder to refuse an interview when personal attention is given to respondents

But researcher-administered questionnaires can be limiting in terms of resources. They are:

  • costly and time-consuming to perform
  • more difficult to analyze if you have qualitative responses
  • likely to contain experimenter bias or demand characteristics
  • likely to encourage social desirability bias in responses because of a lack of anonymity

Your questionnaire can include open-ended or closed-ended questions or a combination of both.

Using closed-ended questions limits your responses, while open-ended questions enable a broad range of answers. You’ll need to balance these considerations with your available time and resources.

Closed-ended questions

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. Closed-ended questions are best for collecting data on categorical or quantitative variables.

Categorical variables can be nominal or ordinal. Quantitative variables can be interval or ratio. Understanding the type of variable and level of measurement means you can perform appropriate statistical analyses for generalizable results.

Examples of closed-ended questions for different variables

Nominal variables include categories that can’t be ranked, such as race or ethnicity. This includes binary or dichotomous categories.

It’s best to include categories that cover all possible answers and are mutually exclusive. There should be no overlap between response items.

In binary or dichotomous questions, you’ll give respondents only two options to choose from.

  • White
  • Black or African American
  • American Indian or Alaska Native
  • Asian
  • Native Hawaiian or Other Pacific Islander
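
The two requirements above, mutually exclusive options and coverage of every answer respondents actually give, can be audited with a quick script. This is a hypothetical sketch: the function name and the sample answers are invented for illustration.

```python
def check_response_items(options, answers):
    """Flag duplicated response options and answers no option covers."""
    duplicates = {o for o in options if options.count(o) > 1}
    uncovered = {a for a in answers if a not in options}
    return duplicates, uncovered

options = ["White", "Black or African American",
           "American Indian or Alaska Native", "Asian",
           "Native Hawaiian or Other Pacific Islander"]
observed_answers = ["Asian", "Multiracial"]
dups, missing = check_response_items(options, observed_answers)
# dups is empty, but "Multiracial" is uncovered: add an open "Other" item.
print(dups, missing)
```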

Ordinal variables include categories that can be ranked. Consider how wide or narrow a range you’ll include in your response items, and their relevance to your respondents.

Likert scale questions collect ordinal data using rating scales with 5 or 7 points.

When you have four or more Likert-type questions, you can treat the composite data as quantitative data on an interval scale. Intelligence tests, psychological scales, and personality inventories use multiple Likert-type questions to collect interval data.
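
Combining Likert-type items into a composite can be sketched as follows; the item names, the reverse-scored item, and the 5-point scale are illustrative assumptions, not part of the article.

```python
def composite_score(item_responses, reverse_items=(), scale_max=5):
    """Sum Likert-type items into a single interval-scale composite.

    item_responses maps item name -> response (1..scale_max);
    reverse_items names negatively worded items to reverse-score.
    """
    total = 0
    for item, value in item_responses.items():
        if item in reverse_items:
            value = scale_max + 1 - value   # 5 becomes 1 on a 5-point scale
        total += value
    return total

responses = {"q1": 4, "q2": 5, "q3": 2, "q4": 4}
print(composite_score(responses, reverse_items={"q3"}))  # → 17 (q3: 2 -> 4)
```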

With interval or ratio scales, you can apply strong statistical hypothesis tests to address your research aims.

Pros and cons of closed-ended questions

Well-designed closed-ended questions are easy to understand and can be answered quickly. However, you might still miss important answers that are relevant to respondents. An incomplete set of response items may force some respondents to pick the closest alternative to their true answer. These types of questions may also miss out on valuable detail.

To solve these problems, you can make questions partially closed-ended, and include an open-ended option where respondents can fill in their own answer.

Open-ended questions

Open-ended, or long-form, questions allow respondents to give answers in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered. For example, respondents may want to answer “multiracial” for the question on race rather than selecting from a restricted list.

  • How do you feel about open science?
  • How would you describe your personality?
  • In your opinion, what is the biggest obstacle for productivity in remote work?

Open-ended questions have a few downsides.

They require more time and effort from respondents, which may deter them from completing the questionnaire.

For researchers, understanding and summarizing responses to these questions can take a lot of time and resources. You’ll need to develop a systematic coding scheme to categorize answers, and you may also need to involve other researchers in data analysis for high reliability.
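
Inter-coder reliability for such a coding scheme is often quantified with Cohen's kappa, which corrects raw agreement for chance. A from-scratch Python sketch, with two hypothetical coders' labels:

```python
def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' category labels."""
    n = len(coder_a)
    categories = set(coder_a) | set(coder_b)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    expected = sum((coder_a.count(c) / n) * (coder_b.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)

# Two coders categorize six open-ended answers by sentiment.
coder_1 = ["pos", "pos", "neg", "neu", "pos", "neg"]
coder_2 = ["pos", "neg", "neg", "neu", "pos", "neg"]
print(round(cohens_kappa(coder_1, coder_2), 3))  # → 0.739
```

Values near 1 indicate the coding scheme is applied consistently; values near 0 mean agreement is no better than chance.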

Question wording can influence your respondents’ answers, especially if the language is unclear, ambiguous, or biased. Good questions need to be understood by all respondents in the same way (reliable) and measure exactly what you’re interested in (valid).

Use clear language

You should design questions with your target audience in mind. Consider their familiarity with your questionnaire topics and language and tailor your questions to them.

For readability and clarity, avoid jargon or overly complex language. Don’t use double negatives because they can be harder to understand.

Use balanced framing

Respondents often answer in different ways depending on the question framing. Positive frames are interpreted as more neutral than negative frames and may encourage more socially desirable answers.

  • Positive frame: Should protests of pandemic-related restrictions be allowed?
  • Negative frame: Should protests of pandemic-related restrictions be forbidden?

Use a mix of both positive and negative frames to avoid research bias, and ensure that your question wording is balanced wherever possible.

Unbalanced questions focus on only one side of an argument. Respondents may be less likely to oppose the question if it is framed in a particular direction. It’s best practice to provide a counter argument within the question as well.

  • Unbalanced: Do you favor…? Balanced: Do you favor or oppose…?
  • Unbalanced: Do you agree that…? Balanced: Do you agree or disagree that…?

Avoid leading questions

Leading questions guide respondents towards answering in specific ways, even if that’s not how they truly feel, by explicitly or implicitly providing them with extra information.

It’s best to keep your questions short and specific to your topic of interest.

  • The average daily work commute in the US takes 54.2 minutes and costs $29 per day. Since 2020, working from home has saved many employees time and money. Do you favor flexible work-from-home policies even after it’s safe to return to offices?
  • Experts agree that a well-balanced diet provides sufficient vitamins and minerals, and multivitamins and supplements are not necessary or effective. Do you agree or disagree that multivitamins are helpful for balanced nutrition?

Keep your questions focused

Ask about only one idea at a time and avoid double-barreled questions. Double-barreled questions ask about more than one item at a time, which can confuse respondents. For example: “Do you agree or disagree that the government should be responsible for providing clean drinking water and high-speed internet to everyone?”

This question could be difficult to answer for respondents who feel strongly about the right to clean drinking water but not high-speed internet. They might only answer about the topic they feel passionate about or provide a neutral answer instead, but neither of these options captures their true answer.

Instead, you should ask two separate questions to gauge respondents’ opinions.

Do you agree or disagree that the government should be responsible for providing high-speed internet to everyone?

  • Strongly Agree • Agree • Undecided • Disagree • Strongly Disagree

Question order

You can organize the questions logically, with a clear progression from simple to complex. Alternatively, you can randomize the question order between respondents.

Logical flow

Using a logical flow to your question order means starting with simple questions, such as behavioral or opinion questions, and ending with more complex, sensitive, or controversial questions.

The question order that you use can significantly affect the responses by priming them in specific directions. Question order effects, or context effects, occur when earlier questions influence the responses to later questions, reducing the validity of your questionnaire.

While demographic questions are usually unaffected by order effects, questions about opinions and attitudes are more susceptible to them.

  • How knowledgeable are you about Joe Biden’s executive orders in his first 100 days?
  • Are you satisfied or dissatisfied with the way Joe Biden is managing the economy?
  • Do you approve or disapprove of the way Joe Biden is handling his job as president?

It’s important to minimize order effects because they can be a source of systematic error or bias in your study.

Randomization

Randomization involves presenting individual respondents with the same questionnaire but with different question orders.

When you use randomization, order effects will be minimized in your dataset. But a randomized order may also make it harder for respondents to process your questionnaire. Some questions may need more cognitive effort, while others are easier to answer, so a random order could require more time or mental capacity for respondents to switch between questions.
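
Per-respondent randomization can be implemented by seeding a random generator with the respondent's ID, so each person gets a stable but shuffled order. A minimal sketch; the question names are hypothetical.

```python
import random

QUESTIONS = ["q_commute", "q_remote_work", "q_team_size", "q_satisfaction"]

def randomized_order(respondent_id, questions=QUESTIONS):
    """Shuffle question order deterministically per respondent."""
    rng = random.Random(respondent_id)  # same ID always yields the same order
    order = list(questions)
    rng.shuffle(order)
    return order

print(randomized_order("resp-001"))  # one stable shuffled order
print(randomized_order("resp-002"))  # typically a different order
```

Seeding by respondent ID also means a respondent who reloads the questionnaire sees the same order again, which keeps partially completed responses consistent.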

Step 1: Define your goals and objectives

The first step of designing a questionnaire is determining your aims.

  • What topics or experiences are you studying?
  • What specifically do you want to find out?
  • Is a self-report questionnaire an appropriate tool for investigating this topic?

Once you’ve specified your research aims, you can operationalize your variables of interest into questionnaire items. Operationalizing concepts means turning them from abstract ideas into concrete measurements. Every question needs to address a defined need and have a clear purpose.

Step 2: Use questions that are suitable for your sample

Create appropriate questions by taking the perspective of your respondents. Consider their language proficiency and available time and energy when designing your questionnaire.

  • Are the respondents familiar with the language and terms used in your questions?
  • Would any of the questions insult, confuse, or embarrass them?
  • Do the response items for any closed-ended questions capture all possible answers?
  • Are the response items mutually exclusive?
  • Do the respondents have time to respond to open-ended questions?

Consider all possible options for responses to closed-ended questions. From a respondent’s perspective, a lack of response options reflecting their point of view or true answer may make them feel alienated or excluded. In turn, they’ll become disengaged or inattentive to the rest of the questionnaire.

Step 3: Decide on your questionnaire length and question order

Once you have your questions, make sure that the length and order of your questions are appropriate for your sample.

If respondents are not being incentivized or compensated, keep your questionnaire short and easy to answer. Otherwise, your sample may be biased with only highly motivated respondents completing the questionnaire.

Decide on your question order based on your aims and resources. Use a logical flow if your respondents have limited time or if you cannot randomize questions. Randomizing questions helps you avoid bias, but it can take more complex statistical analysis to interpret your data.

Step 4: Pretest your questionnaire

When you have a complete list of questions, you’ll need to pretest it to make sure what you’re asking is always clear and unambiguous. Pretesting helps you catch any errors or points of confusion before performing your study.

Ask friends, classmates, or members of your target audience to complete your questionnaire using the same method you’ll use for your research. Find out if any questions were particularly difficult to answer or if the directions were unclear or inconsistent, and make changes as necessary.

If you have the resources, running a pilot study will help you test the validity and reliability of your questionnaire. A pilot study is a practice run of the full study, and it includes sampling, data collection, and analysis. You can find out whether your procedures are unfeasible or susceptible to bias and make changes in time, but you can’t test a hypothesis with this type of study because it’s usually statistically underpowered.
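
The "statistically underpowered" point can be made concrete with a textbook normal-approximation formula for the per-group sample size needed to compare two group means; the effect sizes below are assumptions for illustration, not recommendations from the article.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided, two-sample comparison of
    means, using the normal approximation with a standardized effect size."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A pilot with 10 people per group is far below what a medium
# standardized effect (0.5) needs for 80% power:
print(n_per_group(0.5))  # → 63
```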

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Student’s t-distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Prospective cohort study

Research bias

  • Implicit bias
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic
  • Social desirability bias

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analyzing data from people using questionnaires.

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviors. It is made up of 4 or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey, you present participants with Likert-type questions or statements, and a continuum of items, usually with 5 or 7 possible responses, to capture their degree of agreement.

You can organize the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire easier and quicker, but it may lead to bias. Randomization can minimize the bias from order effects.

Questionnaires can be self-administered or researcher-administered.

Researcher-administered questionnaires are interviews that take place by phone, in-person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.


Bhandari, P. (2023, June 22). Questionnaire Design | Methods, Question Types & Examples. Scribbr. Retrieved July 2, 2024, from https://www.scribbr.com/methodology/questionnaire/


Types of Experiment: Overview

Last updated 6 Sept 2022


Different types of methods are used in research, which loosely fall into one of two categories:

Experimental (Laboratory, Field & Natural) & Non-experimental (correlations, observations, interviews, questionnaires and case studies).

All three types of experiment have characteristics in common. They all have:

  • an independent variable (I.V.), which is either manipulated or naturally occurring
  • a dependent variable (D.V.), which is measured
  • at least two conditions in which participants produce data.

Note – natural and quasi experiments are often used synonymously but are not strictly the same: in quasi experiments participants cannot be randomly assigned, so rather than participants being allocated to a condition, they already belong to a pre-existing condition.

Laboratory Experiments

These are conducted under controlled conditions, in which the researcher deliberately changes something (I.V.) to see the effect of this on something else (D.V.).

Control – lab experiments have a high degree of control over the environment and other extraneous variables, which means that the researcher can accurately assess the effects of the I.V., so lab experiments have higher internal validity.

Replicable – due to the researcher’s high levels of control, research procedures can be repeated so that the reliability of results can be checked.

Limitations

Lacks ecological validity – due to the involvement of the researcher in manipulating and controlling variables, findings cannot be easily generalised to other (real life) settings, resulting in poor external validity.

Field Experiments

These are carried out in a natural setting, in which the researcher manipulates something (I.V.) to see the effect of this on something else (D.V.).

Validity – field experiments have some degree of control but also are conducted in a natural environment, so can be seen to have reasonable internal and external validity.

Less control – field experiments have less control than lab experiments, so extraneous variables are more likely to distort findings, and internal validity is therefore likely to be lower.
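
The loss of internal validity from extraneous variables can be illustrated with a small simulation: the same true effect is estimated under tight (lab-like) and loose (field-like) control of unexplained variation. The effect size, noise levels, and group sizes are invented for illustration.

```python
import random
import statistics

def simulate_effect_estimates(noise_sd, true_effect=5.0, n=30,
                              trials=500, seed=0):
    """Spread of estimated I.V. effects when extraneous variation adds
    noise_sd of unexplained variation to each participant's score."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(trials):
        control = [rng.gauss(50.0, noise_sd) for _ in range(n)]
        treatment = [rng.gauss(50.0 + true_effect, noise_sd) for _ in range(n)]
        estimates.append(statistics.mean(treatment) - statistics.mean(control))
    return estimates

lab = simulate_effect_estimates(noise_sd=2)    # tight experimental control
field = simulate_effect_estimates(noise_sd=8)  # extraneous variables at play
# Both recover the true effect on average, but the field estimates scatter
# roughly four times as widely, so any single field result is less trustworthy.
print(round(statistics.stdev(lab), 2), round(statistics.stdev(field), 2))
```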

Natural / Quasi Experiments

These are typically carried out in a natural setting, in which the researcher measures the effect of a naturally occurring variable (I.V.) on something else (D.V.). Note that in this case there is no deliberate manipulation of a variable; the I.V. is already naturally changing, which means the researcher is merely measuring the effect of something that is already happening.

High ecological validity – due to the lack of involvement of the researcher; variables are naturally occurring so findings can be easily generalised to other (real life) settings, resulting in high external validity.

Lack of control – natural experiments have no control over the environment and other extraneous variables, which means that the researcher cannot always accurately assess the effects of the I.V., so natural experiments have low internal validity.

Not replicable – due to the researcher’s lack of control, research procedures cannot be repeated, so the reliability of results cannot be checked.


July 3, 2024

Is a Drug Even Needed to Induce a Psychedelic Experience?

A Stanford anesthesiologist deconstructs the component parts of what it means to undergo a psychedelic trip

By Gary Stix

[Illustration: nested passageways in the shape of a human head, opening onto a blue sky. Credit: Jorm Sangsorn/Getty Images]

A debate has long percolated among researchers as to whether what happens after taking a psychedelic drug results from the placebo effect—rooted in a person’s belief that taking psilocybin or ketamine is going to give them a transformative experience. Boris D. Heifets, an associate professor of anesthesiology at the Stanford University School of Medicine, has been tackling this question amid his broader laboratory investigations of what exactly happens in mind and brain when someone takes a psychedelic. How much of this sometimes life-altering experience is chemical and empirical, and how much is mental and subjective? It turns out the effects may consist of a lot more than just a simple biochemical response to a drug activating, say, the brain’s serotonin receptors. Heifets recently talked with Scientific American about his years-long quest to define the essence of the psychedelic experience.

[ An edited transcript of the interview follows .]

Are we coming any closer to understanding how psychedelics work and how they work in the context of therapy? Are we closer to using these transformational experiences to treat psychiatric disorders?


Having been in this field for a while, there’s still this inescapable problem of how to study psychedelics. One framework that I find very useful is thinking about it in three categories.

There's the biochemical drug effect, which interacts with basic brain biology—chemicals interacting with receptors on cells. That happens whether or not you can “feel” the effect of the drug. Then there is the conscious experience related to changes in sensation and revelatory, hallucinatory and ecstatic feelings. These experiences are closely tied to taking the drug, and usually we think of them as caused by the drug. But it is actually quite difficult to say whether a lasting change in mood or outlook was a result of the drug—a biochemical effect—or the trip itself, the experiential effect.

The third factor, then, is all of those aspects of the overall drug experience that are independent of the drug or trip—the non-drug factors, what [psychologist and psychedelics advocate] Timothy Leary called the “set and setting.” How much does your state of mind and the setting in which you take a drug influence the outcome? This category includes expectations about improvement in, say, your depression, expectations about the experience, the stress level in the environment. It would also include integration, making sense of these intense experiences afterward and integrating them into your life. And it’s useful to put each of these things in its own box because I think each of them is somewhat isolated. The goal is to make each box smaller and smaller, to really deconstruct the pieces.

So how have you gone about examining all this?

One example of how we’ve used this framework in our research is an experiment in which we gave [participants with depression] ketamine during general anesthesia. The idea was to explore just the biochemical drug effect by blanking out conscious experience to see whether people got better from their depression.

Our intention with this experiment was to get at this question that a lot of people have been asking: Is it the drug or the trip that is making someone better? You can address that question in a couple of different ways. One is to redesign the drug to eliminate the trip. But that is a very long process. As an anesthesiologist, my solution of course was to address the problem with the use of general anesthesia. We used the anesthetics to basically suppress conscious experience of the associated psychological effects of ketamine, which many people think may be relevant and even crucial to the antidepressant effects.

We collaborated closely with psychiatrists Laura Hack and Alan Schatzberg, [both] at Stanford, and we designed this study to look like every ketamine study in the past 15 years. We picked the same type of participants: [people] with moderate to severe major depressive disorder who had failed other treatments for moderate or severe depression. We administered the same questionnaires; we gave the same dose of ketamine.

The difference was these participants happened to be coming in for surgery for hips, knees, hernias, and while they were under general anesthesia, we gave them a standard antidepressant dose of ketamine. Because the patients were under anesthesia and couldn’t tell whether they were on a drug or not, this may have been the first blinded study of ketamine.

What was surprising was that the placebo group [who received no ketamine] also got better, indistinguishably from those who received the drug. Almost 60 percent of the patients had their symptom load cut in half, and there was at least 30 percent remission from major depressive disorder. These were patients who had been sick for years, and that finding was a big surprise. In a sense, it was a failed trial in that we couldn't tell the difference between our two groups.

What I take from that is really that this doesn't say much about how ketamine works. What it does say is just how big a therapeutic effect you can attribute to nondrug factors. That’s what people call the placebo effect.

It’s a word that describes everything from sugar pills to our surgeries. In our case, it may have had something to do with the preparation for the surgery. We messaged patients early; we engaged with them early. They weren’t used to people being interested in their mental health.

What did you discuss?

We talked to them for hours; we heard about their histories; we got to know them. I think they felt seen and heard in a way that many patients don’t, going into surgery. I’m thinking about parallels with the preparation steps for psychedelic trials. Patients in both types of research are motivated to be in these studies. In our study, they were told that they were testing the therapeutic potential of a drug and that there was a 50–50 chance they might get it. And then there was the big event of actually having the surgery. In this case, it was similar to having a psychedelic trial—a big, stressful, life-impacting event.

The patients closed their eyes and opened them after the surgery, and in many cases, they had the sense that no time had passed. They knew they went through something because they had the bandages and scars to prove it. What I take from that is that these nondrug effects, such as expectations of a particular outcome, are almost certainly present in most psychedelic trials and are independently able to drive a big therapeutic effect.

It became obvious that people had powerful experiences. Most people don't spontaneously improve from years of depression. After surgery, they get worse. That's what the data show. And the fact that we're able to make this degree of a positive impact after hours and hours of interpersonal contact and messaging, that’s important. This was a really clear demonstration to me that nondrug factors, such as expectations and feelings of hope, contribute a substantial portion to the effects we’ve seen. And you would be foolish to disregard those components in designing a therapy. And, you know, the truth is that most clinicians make use of these techniques every day in building a rapport with patients, leveraging this placebo response.

Does that suggest in any way that the effects of psychedelics might be substantially—or perhaps entirely—placebo effects?

So this is where I think you have to ask the question: What do we mean by placebo? Characteristically, people use the word placebo in a kind of a dismissive way, right? If a person responds to placebo, the subtle implication is there was nothing wrong. And that’s not what we’re talking about here.

Think about everyday situations that bring about life changes. A heart attack or near-death experience may cause someone in a high stress job to change their job and lifestyle habits—exercising and eating better. That all can be grouped under the label of a placebo effect.

Another possibility to achieve the same goal is having a transformational experience that you then use to make changes in your life. So the question is: How do you do this in a practical way? You can’t exactly go out and give people heart attacks or even send them on life-changing experiences, such as skydiving or on trips to the Riviera. But you can give them a psychedelic. That’s a big, powerful experience. In many cases, that is unique in some people’s lives and confers the opportunity to make changes for the better.

How does giving an actual psychedelic drug to someone in a clinical trial relate to the three categories you mentioned earlier?

Let’s circle back to this idea that psychedelic transformation could rely either on the biochemical effect, the experience of the trip itself, or nondrug factors. Our study of ketamine during anesthesia really highlighted the role of nondrug factors such as expectation but didn’t really get at the question of “Is it the drug or the trip?”

To answer that, some [of my] scientist colleagues are testing nonpsychedelics, or nonhallucinogenic psychedelic derivatives, to see whether patients with depression, for example, get better after treatment with a drug that can cause some of the same biochemical changes as a classical psychedelic but doesn’t have a “trip” associated with it. That’s “taking the trip out of the drug.” But what if you could “take the drug out of the trip,” meaning [the creation of] an experience that is reproducible across people that checks many of the same boxes as a classic psychedelic-induced trip but that doesn’t actually require the use of a psychedelic molecule? So what you provide people with, in this context, is a profound experience that can even be somewhat standardized so you can study it. And it would be powerful and vivid and meaningful and revelatory. Do you get the same types of effects?

That would not be definitive evidence. But it would strongly suggest that maybe there’s nothing intrinsically special about the activity of a drug that activates a particular receptor that mediates the effects of psychedelics. What that would do is put front and center the role of human experience in psychological transformation.

So you might be able to bypass the need for a psychedelic drug if you can get the same result with a nonpsychoactive drug?

Maybe you can—we just don’t know. That’s an empirical question.

To try to answer that question, I’ve worked closely with Harrison Chow, also an anesthesiologist at Stanford, on a protocol that we call “dreaming during anesthesia.” It's really a state of consciousness that happens before emergence from anesthesia. When patients awaken from surgery, they progress from a state that is deeper than sleep. And they pass through a number of conscious states, some of which produce dreams. They wake up, and about 20 percent of patients will have some memory of dream imagery.

What we do is prolong that process and use EEG [electroencephalography] to home in on a specific biomarker of that state. We can hold someone in this preemergent state for 15 minutes. Participants wake up, and the stories they tell are very hard to ignore. These are some of the most vivid dreams they’ve ever had. They say things like “that was more real than real.” The participants with trauma dream of reintegrating their body map, reimagining their body [as] once again whole. We had a participant who had been assigned male at birth and had [gender-affirming] surgery. She had been in the military and reimagined her life before her gender-affirming care. She saw herself doing high-intensity military training exercises, now with her body aligning with her gender.

These are intense experiences—vivid, emotionally salient, possibly hallucinatory. We have now published a couple of case reports in which we actually saw therapeutic effects on a par with what we see in psychedelic medicine: powerful experiences followed by a resolution of symptoms in a psychiatric disorder.

What we’re seeing is a shared physiology in terms of EEG results for these dream states and the EEGs present for psychedelics. We see at least some shared phenomenology in terms of description of the experiences, and there are also similar therapeutic effects.

What are some of your next steps?

In addition to possibly producing a very compelling therapeutic using the common anesthetic propofol, we are working hard to develop experimental tools using anesthesia, using our knowledge of how placebo works in the brain to separate these three factors: the drug effect, the experiential effect and nondrug factors. At least two of those big effects, neither of which depends on administering a psychedelic, appear to be capable of generating a profound therapeutic impact that certainly would be sufficient on its own to account for the outcomes seen in psychedelic trials. And that, to me, shows that maybe the emphasis is misplaced when we're focused on reengineering the drug to get rid of hallucinogenic effects. We should be focused on reengineering the experience.

But we're still working on number three, the drug effect. We have collaborations with David Olson, a chemist at the University of California, Davis, who has pioneered the use of nonhallucinogenic psychedelics. We are helping to characterize the profound neuroplastic effects of a drug he has developed that appears, at least in mice, not to trigger the same type of brain activation that classical psychedelics do. What I’m trying to convey is that, using these approaches, we are able to get some traction to experimentally define, isolate and identify the components of this very complex therapeutic package we call psychedelic therapy.

Human Subjects Office

Medical terms in lay language.

Please use these descriptions in place of medical jargon in consent documents, recruitment materials and other study documents. Note: These terms are not the only acceptable plain language alternatives for these vocabulary words.

This glossary of terms is derived from a list copyrighted by the University of Kentucky, Office of Research Integrity (1990).

For clinical research-specific definitions, see also the Clinical Research Glossary developed by the Multi-Regional Clinical Trials (MRCT) Center of Brigham and Women’s Hospital and Harvard and the Clinical Data Interchange Standards Consortium (CDISC).

Alternative Lay Language for Medical Terms for use in Informed Consent Documents

A   B   C   D   E   F   G   H   I  J  K   L   M   N   O   P   Q   R   S   T   U   V   W  X  Y  Z

ABDOMEN/ABDOMINAL body cavity below diaphragm that contains stomach, intestines, liver and other organs ABSORB take up fluids, take in ACIDOSIS condition when blood contains more acid than normal ACUITY clearness, keenness, esp. of vision and hearing ACUTE new, recent, sudden, urgent ADENOPATHY swollen lymph nodes (glands) ADJUVANT helpful, assisting, aiding, supportive ADJUVANT TREATMENT added treatment (usually to a standard treatment) ADVERSE EFFECT side effect, bad reaction, unwanted response ALLERGIC REACTION rash, hives, swelling, trouble breathing AMBULATE/AMBULATION/AMBULATORY walk, able to walk ANAPHYLAXIS serious, potentially life-threatening allergic reaction ANEMIA decreased red blood cells; low red cell blood count ANESTHETIC a drug or agent used to decrease the feeling of pain, or eliminate the feeling of pain by putting you to sleep ANGINA pain resulting from not enough blood flowing to the heart ANGINA PECTORIS pain resulting from not enough blood flowing to the heart ANOREXIA disorder in which person will not eat; lack of appetite ANTECUBITAL related to the inner side of the forearm ANTIBIOTIC drug that kills bacteria and other germs ANTIBODY protein made in the body in response to foreign substance ANTICONVULSANT drug used to prevent seizures ANTILIPEMIC a drug that lowers fat levels in the blood ANTIMICROBIAL drug that kills bacteria and other germs ANTIRETROVIRAL drug that works against the growth of certain viruses ANTITUSSIVE a drug used to relieve coughing ARRHYTHMIA abnormal heartbeat; any change from the normal heartbeat ASPIRATION fluid entering the lungs, such as after vomiting ASSAY lab test ASSESS to learn about, measure, evaluate, look at ASTHMA lung disease associated with tightening of air passages, making breathing difficult ASYMPTOMATIC without symptoms AXILLA armpit

BENIGN not malignant, without serious consequences BID twice a day BINDING/BOUND carried by, to make stick together, transported BIOAVAILABILITY the extent to which a drug or other substance becomes available to the body BLOOD PROFILE series of blood tests BOLUS a large amount given all at once BONE MASS the amount of calcium and other minerals in a given amount of bone BRADYARRHYTHMIAS slow, irregular heartbeats BRADYCARDIA slow heartbeat BRONCHOSPASM breathing distress caused by narrowing of the airways

CARCINOGENIC cancer-causing CARCINOMA type of cancer CARDIAC related to the heart CARDIOVERSION return to normal heartbeat by electric shock CATHETER a tube for withdrawing or giving fluids CATHETER (INDWELLING EPIDURAL) a tube placed near the spinal cord and used for anesthesia during surgery CENTRAL NERVOUS SYSTEM (CNS) brain and spinal cord CEREBRAL TRAUMA damage to the brain CESSATION stopping CHD coronary heart disease CHEMOTHERAPY treatment of disease, usually cancer, by chemical agents CHRONIC continuing for a long time, ongoing CLINICAL pertaining to medical care CLINICAL TRIAL an experiment involving human subjects COMA unconscious state COMPLETE RESPONSE total disappearance of disease CONGENITAL present before birth CONJUNCTIVITIS redness and irritation of the thin membrane that covers the eye CONSOLIDATION PHASE treatment phase intended to make a remission permanent (follows induction phase) CONTROLLED TRIAL research study in which the experimental treatment or procedure is compared to a standard (control) treatment or procedure COOPERATIVE GROUP association of multiple institutions to perform clinical trials CORONARY related to the blood vessels that supply the heart, or to the heart itself CT SCAN (CAT) computerized series of x-rays (computerized tomography) CULTURE test for infection, or for organisms that could cause infection CUMULATIVE added together from the beginning CUTANEOUS relating to the skin CVA stroke (cerebrovascular accident)

DERMATOLOGIC pertaining to the skin DIASTOLIC lower number in a blood pressure reading DISTAL toward the end, away from the center of the body DIURETIC "water pill" or drug that causes increase in urination DOPPLER device using sound waves to diagnose or test DOUBLE BLIND study in which neither investigators nor subjects know what drug or treatment the subject is receiving DYSFUNCTION state of improper function DYSPLASIA abnormal cells

ECHOCARDIOGRAM sound wave test of the heart EDEMA excess fluid collecting in tissue EEG electric brain wave tracing (electroencephalogram) EFFICACY effectiveness ELECTROCARDIOGRAM electrical tracing of the heartbeat (ECG or EKG) ELECTROLYTE IMBALANCE an imbalance of minerals in the blood EMESIS vomiting EMPIRIC based on experience ENDOSCOPIC EXAMINATION viewing an internal part of the body with a lighted tube ENTERAL by way of the intestines EPIDURAL outside the spinal cord ERADICATE get rid of (such as disease) EVALUATED, ASSESSED examined for a medical condition EXPEDITED REVIEW rapid review of a protocol by the IRB Chair without full committee approval, permitted with certain low-risk research studies EXTERNAL outside the body EXTRAVASATE to leak outside of a planned area, such as out of a blood vessel

FDA U.S. Food and Drug Administration, the branch of federal government that approves new drugs FIBROUS having many fibers, such as scar tissue FIBRILLATION irregular beat of the heart or other muscle

GENERAL ANESTHESIA pain prevention by giving drugs to cause loss of consciousness, as during surgery GESTATIONAL pertaining to pregnancy

HEMATOCRIT amount of red blood cells in the blood HEMATOMA a bruise, a black and blue mark HEMODYNAMIC MEASURING blood flow HEMOLYSIS breakdown in red blood cells HEPARIN LOCK needle placed in the arm with blood thinner to keep the blood from clotting HEPATOMA cancer or tumor of the liver HERITABLE DISEASE can be transmitted to one’s offspring, resulting in damage to future children HISTOPATHOLOGIC pertaining to the disease status of body tissues or cells HOLTER MONITOR a portable machine for recording heart beats HYPERCALCEMIA high blood calcium level HYPERKALEMIA high blood potassium level HYPERNATREMIA high blood sodium level HYPERTENSION high blood pressure HYPOCALCEMIA low blood calcium level HYPOKALEMIA low blood potassium level HYPONATREMIA low blood sodium level HYPOTENSION low blood pressure HYPOXEMIA a decrease of oxygen in the blood HYPOXIA a decrease of oxygen reaching body tissues HYSTERECTOMY surgical removal of the uterus, ovaries (female sex glands), or both uterus and ovaries

IATROGENIC caused by a physician or by treatment IDE investigational device exemption, the license to test an unapproved new medical device IDIOPATHIC of unknown cause IMMUNITY defense against, protection from IMMUNOGLOBIN a protein that makes antibodies IMMUNOSUPPRESSIVE drug which works against the body's immune (protective) response, often used in transplantation and diseases caused by immune system malfunction IMMUNOTHERAPY giving of drugs to help the body's immune (protective) system; usually used to destroy cancer cells IMPAIRED FUNCTION abnormal function IMPLANTED placed in the body IND investigational new drug, the license to test an unapproved new drug INDUCTION PHASE beginning phase or stage of a treatment INDURATION hardening INDWELLING remaining in a given location, such as a catheter INFARCT death of tissue due to lack of blood supply INFECTIOUS DISEASE transmitted from one person to the next INFLAMMATION swelling that is generally painful, red, and warm INFUSION slow injection of a substance into the body, usually into the blood by means of a catheter INGESTION eating; taking by mouth INTERFERON drug which acts against viruses; antiviral agent INTERMITTENT occurring (regularly or irregularly) between two time points; repeatedly stopping, then starting again INTERNAL within the body INTERIOR inside of the body INTRAMUSCULAR into the muscle; within the muscle INTRAPERITONEAL into the abdominal cavity INTRATHECAL into the spinal fluid INTRAVENOUS (IV) through the vein INTRAVESICAL in the bladder INTUBATE the placement of a tube into the airway INVASIVE PROCEDURE puncturing, opening, or cutting the skin INVESTIGATIONAL NEW DRUG (IND) a new drug that has not been approved by the FDA INVESTIGATIONAL METHOD a treatment method which has not been proven to be beneficial or has not been accepted as standard care ISCHEMIA decreased oxygen in a tissue (usually because of decreased blood flow)

LAPAROTOMY surgical procedure in which an incision is made in the abdominal wall to enable a doctor to look at the organs inside LESION wound or injury; a diseased patch of skin LETHARGY sleepiness, tiredness LEUKOPENIA low white blood cell count LIPID fat LIPID CONTENT fat content in the blood LIPID PROFILE (PANEL) fat and cholesterol levels in the blood LOCAL ANESTHESIA creation of insensitivity to pain in a small, local area of the body, usually by injection of numbing drugs LOCALIZED restricted to one area, limited to one area LUMEN the cavity of an organ or tube (e.g., blood vessel) LYMPHANGIOGRAPHY an x-ray of the lymph nodes or tissues after injecting dye into lymph vessels (e.g., in feet) LYMPHOCYTE a type of white blood cell important in immunity (protection) against infection LYMPHOMA a cancer of the lymph nodes (or tissues)

MALAISE a vague feeling of bodily discomfort, feeling badly MALFUNCTION condition in which something is not functioning properly MALIGNANCY cancer or other progressively enlarging and spreading tumor, usually fatal if not successfully treated MEDULLOBLASTOMA a type of brain tumor MEGALOBLASTOSIS change in red blood cells METABOLIZE process of breaking down substances in the cells to obtain energy METASTASIS spread of cancer cells from one part of the body to another METRONIDAZOLE drug used to treat infections caused by parasites (invading organisms that take up living in the body) or other causes of anaerobic infection (not requiring oxygen to survive) MI myocardial infarction, heart attack MINIMAL slight MINIMIZE reduce as much as possible MOBILITY ease of movement MONITOR check on; keep track of; watch carefully MORBIDITY undesired result or complication MORTALITY death MOTILITY the ability to move MRI magnetic resonance imaging, diagnostic pictures of the inside of the body, created using magnetic rather than x-ray energy MUCOSA, MUCOUS MEMBRANE moist lining of digestive, respiratory, reproductive, and urinary tracts MYALGIA muscle aches MYOCARDIAL pertaining to the heart muscle MYOCARDIAL INFARCTION heart attack

NASOGASTRIC TUBE placed in the nose, reaching to the stomach NCI the National Cancer Institute NECROSIS death of tissue NEOPLASIA/NEOPLASM tumor, may be benign or malignant NEUROBLASTOMA a cancer of nerve tissue NEUROLOGICAL pertaining to the nervous system NEUTROPENIA decrease in the main part of the white blood cells NIH the National Institutes of Health NONINVASIVE not breaking, cutting, or entering the skin NOSOCOMIAL acquired in the hospital

OCCLUSION closing; blockage; obstruction ONCOLOGY the study of tumors or cancer OPHTHALMIC pertaining to the eye OPTIMAL best, most favorable or desirable ORAL ADMINISTRATION by mouth ORTHOPEDIC pertaining to the bones OSTEOPETROSIS rare bone disorder characterized by dense bone OSTEOPOROSIS softening of the bones OVARIES female sex glands

PARENTERAL given by injection PATENCY condition of being open PATHOGENESIS development of a disease or unhealthy condition PERCUTANEOUS through the skin PERIPHERAL not central PER OS (PO) by mouth PHARMACOKINETICS the study of the way the body absorbs, distributes, and gets rid of a drug PHASE I first phase of study of a new drug in humans to determine action, safety, and proper dosing PHASE II second phase of study of a new drug in humans, intended to gather information about safety and effectiveness of the drug for certain uses PHASE III large-scale studies to confirm and expand information on safety and effectiveness of new drug for certain uses, and to study common side effects PHASE IV studies done after the drug is approved by the FDA, especially to compare it to standard care or to try it for new uses PHLEBITIS irritation or inflammation of the vein PLACEBO an inactive substance; a pill/liquid that contains no medicine PLACEBO EFFECT improvement seen with giving subjects a placebo, though it contains no active drug/treatment PLATELETS small particles in the blood that help with clotting POTENTIAL possible POTENTIATE increase or multiply the effect of a drug or toxin (poison) by giving another drug or toxin at the same time (sometimes an unintentional result) POTENTIATOR an agent that helps another agent work better PRENATAL before birth PRN as needed PROGNOSIS outlook, probable outcomes PRONE lying on the stomach PROPHYLAXIS a drug given to prevent disease or infection PROSPECTIVE STUDY following patients forward in time PROSTHESIS artificial part, most often limbs, such as arms or legs PROTOCOL plan of study PROXIMAL closer to the center of the body, away from the end PULMONARY pertaining to the lungs

QD every day; daily QID four times a day

RADIATION THERAPY x-ray or cobalt treatment RANDOM by chance (like the flip of a coin) RANDOMIZATION chance selection RBC red blood cell RECOMBINANT formation of new combinations of genes RECONSTITUTION putting back together the original parts or elements RECUR happen again REFRACTORY not responding to treatment REGENERATION re-growth of a structure or of lost tissue REGIMEN pattern of giving treatment RELAPSE the return of a disease REMISSION disappearance of evidence of cancer or other disease RENAL pertaining to the kidneys REPLICABLE possible to duplicate RESECT remove or cut out surgically RETROSPECTIVE STUDY looking back over past experience

SARCOMA a type of cancer SEDATIVE a drug to calm or make less anxious SEMINOMA a type of testicular cancer (found in the male sex glands) SEQUENTIALLY in a row, in order SOMNOLENCE sleepiness SPIROMETER an instrument to measure the amount of air taken into and exhaled from the lungs STAGING an evaluation of the extent of the disease STANDARD OF CARE a treatment plan that the majority of the medical community would accept as appropriate STENOSIS narrowing of a duct, tube, or one of the blood vessels in the heart STOMATITIS mouth sores, inflammation of the mouth STRATIFY arrange in groups for analysis of results (e.g., stratify by age, sex, etc.) STUPOR stunned state in which it is difficult to get a response or the attention of the subject SUBCLAVIAN under the collarbone SUBCUTANEOUS under the skin SUPINE lying on the back SUPPORTIVE CARE general medical care aimed at symptoms, not intended to improve or cure underlying disease SYMPTOMATIC having symptoms SYNDROME a condition characterized by a set of symptoms SYSTOLIC top number in blood pressure; pressure during active contraction of the heart

TERATOGENIC capable of causing malformations in a fetus (developing baby still inside the mother’s body) TESTES/TESTICLES male sex glands THROMBOSIS clotting THROMBUS blood clot TID three times a day TITRATION a method for deciding on the strength of a drug or solution; gradually increasing the dose T-LYMPHOCYTES type of white blood cells TOPICAL on the surface TOPICAL ANESTHETIC applied to a certain area of the skin and reducing pain only in the area to which applied TOXICITY side effects or undesirable effects of a drug or treatment TRANSDERMAL through the skin TRANSIENTLY temporarily TRAUMA injury; wound TREADMILL walking machine used to test heart function

UPTAKE absorbing and taking in of a substance by living tissue

VALVULOPLASTY plastic repair of a valve, especially a heart valve VARICES enlarged veins VASOSPASM narrowing of the blood vessels VECTOR a carrier that can transmit disease-causing microorganisms (germs and viruses) VENIPUNCTURE needle stick, blood draw, entering the skin with a needle VERTICAL TRANSMISSION spread of disease from parent to offspring (e.g., from mother to baby)

WBC white blood cell

The Federal Register

The daily journal of the United States government.


Agency Information Collection Activities; Submission to the Office of Management and Budget (OMB) for Review and Approval; Comment Request; Generic Clearance for Census Bureau Field Tests and Evaluations

A Notice by the Census Bureau on 07/02/2024


The Department of Commerce will submit the following information collection request to the Office of Management and Budget (OMB) for review and clearance in accordance with the Paperwork Reduction Act of 1995, on or after the date of publication of this notice. We invite the general public and other Federal agencies to comment on proposed and continuing information collections, which helps us assess the impact of our information collection requirements and minimize the public's reporting burden. Public comments were previously requested via the Federal Register on 7/26/2022 during a 60-day comment period. This notice allows for an additional 30 days for public comments.

Agency: U.S. Census Bureau.

Title: Burden Increase for the Generic Clearance for Census Bureau Field Tests and Evaluations.

OMB Control Number: 0607-0971.

Form Number(s): Not yet determined.

Type of Request: Request for a burden increase.

Number of Respondents: 113,791 per year.

Average Hours per Response: 26.58 minutes.

Burden Hours: 50,424.33 hours annually.

Needs and Uses: The U.S. Census Bureau is committed to conducting research to identify possible cost and burden reductions in future census and survey operations, while maintaining high-quality results. The Census Bureau requests an increase of 60,500 hours to the existing burden estimates for this Generic Clearance. The Census Bureau is making no other changes to this Clearance. This increase will bring the total burden hours for this Clearance to 211,773 hours over the three-year period. Studies to research and evaluate how to improve data collection activities for data collection programs at the Census Bureau have outpaced the original burden estimates. Larger sample sizes will allow us to continue to explore how the Census Bureau can improve efficiency, data quality, and response rates and reduce respondent burden in future census and survey operations, evaluations and experiments. This research program is for respondent communication, questionnaire and procedure development, and evaluation purposes. We will use data tabulations to evaluate the results of testing.
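As a quick consistency check on the figures above (illustrative only; the helper function below is hypothetical and not part of the notice), multiplying the respondent count by the average response time nearly reproduces the stated annual burden: 113,791 respondents × 26.58 minutes ÷ 60 ≈ 50,409 hours, against the published 50,424.33 hours.

```python
# Illustrative check of the notice's annual burden arithmetic.
# burden_hours is a made-up helper, not part of the notice itself.
def burden_hours(respondents: int, avg_minutes: float) -> float:
    """Annual burden hours = respondents * average minutes per response / 60."""
    return respondents * avg_minutes / 60

print(round(burden_hours(113_791, 26.58), 2))  # ~50,409.41 hours
print(round(50_424.33 * 60 / 113_791, 2))      # implied unrounded average: ~26.59 minutes
```

The small gap between roughly 50,409 hours and the published 50,424.33 hours is consistent with the 26.58-minute average having been rounded for publication.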

Affected Public: Individuals or households, businesses or other for-profit organizations, farms.

Frequency: Once.

Respondent's Obligation: Voluntary or Mandatory, depending on cited authority.

Legal Authority: Data collection for this project is authorized under the authorizing legislation for the questionnaire being tested. This may be 13 U.S.C. 131 , 141 , 161 , 181 , 182 , 193 , and 301 for Census Bureau sponsored surveys, and title 13 and 15 for surveys sponsored by other Federal agencies. We do not now know what other titles will be referenced, since we do not know what survey questionnaires will be pretested during the course of the clearance.

Written comments and recommendations for the proposed change should be submitted within 30 days of the publication of this notice on the following website www.reginfo.gov/​public/​do/​PRAMain . Find this particular information collection by selecting “Currently under 30-day Review—Open for Public Comments” or by using the search function and entering either the title of the collection or the OMB Control Number 0607-0971.

Mary Lenaiyasa,

PRA Program Manager, Policy Coordination Office, U.S. Census Bureau.

[ FR Doc. 2024-14529 Filed 7-1-24; 8:45 am]

BILLING CODE 3510-07-P

IMAGES

  1. Questionnaire from Experiment 1.

    field experiment questionnaire

  2. Solved 8. In a field experiment, researchers planted S.

    field experiment questionnaire

  3. Field test questionnaire results

    field experiment questionnaire

  4. Sample choice set included in the final discrete choice experiment

    field experiment questionnaire

  5. Sample choice tasks from choice experiment surveys

    field experiment questionnaire

  6. The Result of the Practicality Questionnaire on field test

    field experiment questionnaire

VIDEO

  1. Lecture 11 Ch9 Evaluation studies Part 5 Field Testing and Questionnaire types PSSUQ SUS

  2. FieldView App

  3. Andy Field Interview Part 3

  4. Introduction about Field Testing of WBM

  5. Steps in Research

  6. Sociology and Research Design

COMMENTS

  1. What is Field Research: Definition, Methods, Examples and Advantages

    Field research is defined as a qualitative method of data collection that aims to observe, interact and understand people while they are in a natural environment. This article talks about the reasons to conduct field research and their methods and steps. This article also talks about examples of field research and the advantages and disadvantages of this research method.

  2. Field Experiments

    FIELD EXPERIMENTS. As the name suggests, a field study is an experiment performed in the real world. Unlike case studies and observational studies, a field experiment still follows all of the steps of the scientific process; basically, the same rules apply: an independent variable is manipulated to see how it affects a dependent variable.

  3. PDF Field experimentation Methods for Social Psychology

    This course instructs students how to design, analyze, and interpret psychology field experiments. Students will employ modern design and software tools in order to integrate social psychology questions into established research methodologies. After designing ecologically valid field experiments, this course will imbue students with the ...

  4. Experimental Method In Psychology

    2. Field Experiment. A field experiment is a research method in psychology that takes place in a natural, real-world setting. It is similar to a laboratory experiment in that the experimenter manipulates one or more independent variables and measures the effects on the dependent variable.

  5. A hands-on guide to conducting field experiments using mobile

    The value of field experimentation in business and psychological research has been increasingly acknowledged in recent years (e.g., Gneezy, 2017; van Heerde et al., 2021; Morales et al., 2017; Viglia et al., 2021). In field experiments, participants are unaware of their involvement in a study where researchers manipulate factors to test hypotheses in natural settings.

  6. Field Research: A Graduate Student's Guide

    Therefore, many political scientists turn their attention to conducting field experiments or lab-in-the-field experiments to reveal causality (Druckman et al. 2006; Beath, ... She also had her questionnaire ready based on the previously collected data and the media search she had conducted for over a year before travelling to the field site. As ...

  7. Field Experiment

    Field Experiment. It is a psychological experiment conducted in the context of everyday life and is one of the important methods of psychological research. The Soviet psychological community credited the method to the Russian psychologist A. F. Lazursky, who based his study of personality on the use of natural experiments in 1910 and published ...

  8. Field experiment

    Field experiments are experiments carried out outside of laboratory settings. They randomly assign subjects (or other sampling units) to either treatment or control groups to test claims of causal relationships. Random assignment helps establish the comparability of the treatment and control group so that any differences between them that emerge after the treatment has been administered ...

  9. Experimental Methods in Survey Research

    A thorough and comprehensive guide to the theoretical, practical, and methodological approaches used in survey experiments across disciplines such as political science, health sciences, sociology, economics, psychology, and marketing This book explores and explains the broad range of experimental designs embedded in surveys that use both probability and non-probability samples. It approaches ...

  10. Field Study and Field Experiment

    ABSTRACT. The field study method and its variation, the field experiment, along with the laboratory and questionnaire survey methods, are the oldest and most classic means of studying organizations. The field study method is used to gather information on organizational or work-system functioning through systematic direct observation.

  11. PDF Field Experimentation Methods For Social Psychology

    Course Overview. This course instructs students how to design, analyze, and interpret psychology field experiments. Students will employ design and software tools in order to integrate social psychology questions into established research methodologies. This course will imbue students with the hypothesis testing and visualization tools needed ...

  12. Survey Research

    A questionnaire, where a list of ... The priorities of a research design can vary depending on the field, but you usually have to specify: Your research questions and/or hypotheses; Your overall approach (e.g., qualitative or quantitative) The type of design you're using (e.g., a survey, experiment, or case study) Your sampling methods or ...

  13. Guide to Experimental Design

    Step 1: Define your variables. Step 2: Write your hypothesis. Step 3: Design your experimental treatments. Step 4: Assign your subjects to treatment groups. Step 5: Measure your dependent variable. Other interesting articles. Frequently asked questions about experiments.

  14. Research Methods In Psychology

    Field experiments are conducted in the everyday (natural) environment of the participants. The experimenter still manipulates the IV, but in a real-life setting. It may be possible to control extraneous variables, though such control is more difficult than in a lab experiment. ... Responses are recorded on a questionnaire, and the researcher ...

  15. Seven Examples of Field Experiments for Sociology

    Field experiments aren't the most widely used research method in Sociology, but the examiners seem to love asking questions about them - below are seven examples of this research method. Looked at collectively, the results of the field experiments below reveal punishingly depressing findings about human action - they suggest that people are racist, sexist, shallow, passive, and prepared ...

  16. PDF FIELD STUDIES QUESTIONNAIRES

    Explain when and why questionnaires may be an appropriate evaluation technique, and discuss their pros and cons. List different styles of questions (open, closed, Likert, etc.), give examples of what each is appropriate for, and give examples of the data different kinds of questions can collect. Discuss important considerations for designing and ...

  17. Field Experiments with Survey Outcomes (Chapter 4)

    Summary. Field experiments with survey outcomes are experiments where outcomes are measured by surveys but treatments are delivered by a separate mechanism in the real world, such as by mailers, door-to-door canvasses, phone calls, or online ads. Such experiments combine the realism of field experimentation with the ability to measure ...

  18. Online field experiments: a selective survey of methods

    Field experiments allow researchers to combine the control of laboratory experiments with some of the ecological validity of field studies. Areas such as medicine (Lohr et al. 1986), economics (Harrison and List 2004), and social psychology (Lerner et al. 2003) have all incorporated field experiments in their research. One of the challenges of field experiments, however, is the substantial cost ...

  19. Field Experiments

    Experiments look for the effect that manipulated variables (independent variables) have on measured variables (dependent variables), i.e. causal effects. Field experiments are conducted in a natural setting (e.g. at a sports event or on public transport), as opposed to the artificial environment created in laboratory experiments. Some variables cannot be controlled due to the unpredictability ...

  20. Difference Between Survey and Experiment (with Comparison Chart)

    Observation, interview, questionnaire, case study, etc. Through several readings of the experiment. Definition of Survey. ... Surveys are the best example of field research. On the contrary, an experiment is an example of laboratory research. Laboratory research is research carried out inside a room equipped with scientific tools and ...

  21. Questionnaire Design

    Questionnaires vs. surveys. A survey is a research method where you collect and analyze data from a group of people. A questionnaire is a specific tool or instrument for collecting the data. Designing a questionnaire means creating valid and reliable questions that address your research objectives, placing them in a useful order, and selecting an appropriate method for administration.

  22. Types of Experiment: Overview

    Experimental (laboratory, field and natural) and non-experimental (correlations, observations, interviews, questionnaires and case studies). All three types of experiment have characteristics in common: in each, there will be at least two conditions in which participants produce data. Note - natural and quasi experiments are often ...

  23. Field Experiment

    Field Experimentation. Donald P. Green and Alan S. Gerber, in Encyclopedia of Social Measurement, 2005. Conclusion: The results from large-scale field experiments command unusual attention in both academic circles and the public at large. Although every experiment has its limitations, field experiments are widely regarded as exceptionally authoritative.

  24. What Makes a Psychedelic Experience? Not Always a Drug, It Turns Out

    One example of how we've used this framework in our research is an experiment in which we gave [participants with depression] ketamine during general anesthesia. The idea was to explore just the ...

  25. Medical Terms in Lay Language

    Human Subjects Office / IRB Hardin Library, Suite 105A 600 Newton Rd Iowa City, IA 52242-1098. Voice: 319-335-6564 Fax: 319-335-7310

  26. Federal Register :: Agency Information Collection Activities

    This research program is for respondent communication, questionnaire and procedure development, and evaluation purposes. We will use data tabulations to evaluate the results of testing. Affected Public: Individuals or households, businesses or other for profit, farms. Frequency: Once.

  27. Artificial intelligence literacy in sustainable development: A

    another questionnaire inquiring into their learning experiences. The questions asked for the students' reflections and evaluations on their current experiences, knowledge, skills and expectations regarding artificial intelligence and sustainability. They were also asked to reflect on their learning experiences, in short narratives.

  28. PDF 54768 Federal Register /Vol. 89, No. 127/Tuesday, July 2 ...

    questionnaire being tested. This may be 13 U.S.C. 131, 141, 161, 181, 182, 193, and 301 for Census Bureau sponsored surveys, and title 13 and 15 for surveys sponsored by other Federal agencies. We do not now know what other titles will be referenced, since we do not know what survey questionnaires will be pretested during the course of the ...
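Several of the entries above describe the same core procedure for a field experiment: randomly assign subjects to a treatment or a control group, then estimate the causal effect as the difference in average outcomes between the two groups. A minimal sketch in Python — the subjects, the simulated outcome model, and the effect size of 5 points are all hypothetical, for illustration only:

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# 100 hypothetical subjects, randomly split into treatment and control,
# as in the field-experiment designs described above.
subjects = list(range(100))
random.shuffle(subjects)
treatment = set(subjects[:50])

def observed_outcome(subject_id):
    """Simulated outcome: treated subjects score about 5 points higher on average."""
    base = random.gauss(50, 10)
    return base + (5 if subject_id in treatment else 0)

outcomes = {s: observed_outcome(s) for s in subjects}

# Because assignment was random, the groups are comparable, so the
# difference in group means estimates the causal effect of the treatment.
treated_mean = statistics.mean(outcomes[s] for s in treatment)
control_mean = statistics.mean(outcomes[s] for s in subjects if s not in treatment)
effect = treated_mean - control_mean
print(f"Estimated treatment effect: {effect:.2f}")
```

In a real field experiment the outcomes would be measured in the field (e.g., via the questionnaires discussed above) rather than simulated; random assignment is what makes the simple difference in means an unbiased estimate of the causal effect.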