National Institute of Environmental Health Sciences

What is Ethics in Research & Why is it Important?

by David B. Resnik, J.D., Ph.D.

December 23, 2020

The ideas and opinions expressed in this essay are the author’s own and do not necessarily represent those of the NIH, NIEHS, or US government.


When most people think of ethics (or morals), they think of rules for distinguishing between right and wrong, such as the Golden Rule ("Do unto others as you would have them do unto you"), a code of professional conduct like the Hippocratic Oath ("First of all, do no harm"), a religious creed like the Ten Commandments ("Thou Shalt not kill..."), or wise aphorisms like the sayings of Confucius. This is the most common way of defining "ethics": norms for conduct that distinguish between acceptable and unacceptable behavior.

Most people learn ethical norms at home, at school, in church, or in other social settings. Although most people acquire their sense of right and wrong during childhood, moral development occurs throughout life, and human beings pass through different stages of growth as they mature. Ethical norms are so ubiquitous that one might be tempted to regard them as simple common sense. On the other hand, if morality were nothing more than common sense, why are there so many ethical disputes and issues in our society?


One plausible explanation of these disagreements is that all people recognize some common ethical norms but interpret, apply, and balance them in different ways in light of their own values and life experiences. For example, two people could agree that murder is wrong but disagree about the morality of abortion because they have different understandings of what it means to be a human being.

Most societies also have legal rules that govern behavior, but ethical norms tend to be broader and more informal than laws. Although most societies use laws to enforce widely accepted moral standards, and ethical and legal rules use similar concepts, ethics and law are not the same. An action may be legal but unethical, or illegal but ethical. We can also use ethical concepts and principles to criticize, evaluate, propose, or interpret laws. Indeed, in the last century, many social reformers have urged citizens to disobey laws they regarded as immoral or unjust. Peaceful civil disobedience is an ethical way of protesting laws or expressing political viewpoints.

Another way of defining 'ethics' focuses on the disciplines that study standards of conduct, such as philosophy, theology, law, psychology, or sociology. For example, a "medical ethicist" is someone who studies ethical standards in medicine. One may also define ethics as a method, procedure, or perspective for deciding how to act and for analyzing complex problems and issues. For instance, in considering a complex issue like global warming, one may take an economic, ecological, political, or ethical perspective on the problem. While an economist might examine the costs and benefits of various policies related to global warming, an environmental ethicist could examine the ethical values and principles at stake.


Many different disciplines, institutions, and professions have standards for behavior that suit their particular aims and goals. These standards also help members of the discipline to coordinate their actions or activities and to establish the public's trust of the discipline. For instance, ethical standards govern conduct in medicine, law, engineering, and business. Ethical norms also serve the aims or goals of research and apply to people who conduct scientific research or other scholarly or creative activities. There is even a specialized discipline, research ethics, which studies these norms. See Glossary of Commonly Used Terms in Research Ethics and Research Ethics Timeline.

There are several reasons why it is important to adhere to ethical norms in research. First, norms promote the aims of research, such as knowledge, truth, and avoidance of error. For example, prohibitions against fabricating, falsifying, or misrepresenting research data promote the truth and minimize error.


Second, since research often involves a great deal of cooperation and coordination among many different people in different disciplines and institutions, ethical standards promote the values that are essential to collaborative work, such as trust, accountability, mutual respect, and fairness. For example, many ethical norms in research, such as guidelines for authorship, copyright and patenting policies, data sharing policies, and confidentiality rules in peer review, are designed to protect intellectual property interests while encouraging collaboration. Most researchers want to receive credit for their contributions and do not want to have their ideas stolen or disclosed prematurely.

Third, many of the ethical norms help to ensure that researchers can be held accountable to the public. For instance, federal policies on research misconduct, conflicts of interest, human subjects protections, and animal care and use are necessary in order to make sure that researchers who are funded by public money can be held accountable to the public.

Fourth, ethical norms in research also help to build public support for research. People are more likely to fund a research project if they can trust the quality and integrity of research.

Finally, many of the norms of research promote a variety of other important moral and social values, such as social responsibility, human rights, animal welfare, compliance with the law, and public health and safety. Ethical lapses in research can significantly harm human and animal subjects, students, and the public. For example, a researcher who fabricates data in a clinical trial may harm or even kill patients, and a researcher who fails to abide by regulations and guidelines relating to radiation or biological safety may jeopardize his health and safety or the health and safety of staff and students.

Codes and Policies for Research Ethics

Given the importance of ethics for the conduct of research, it should come as no surprise that many different professional associations, government agencies, and universities have adopted specific codes, rules, and policies relating to research ethics. Many government agencies have ethics rules for funded researchers.

  • National Institutes of Health (NIH)
  • National Science Foundation (NSF)
  • Food and Drug Administration (FDA)
  • Environmental Protection Agency (EPA)
  • US Department of Agriculture (USDA)
  • Singapore Statement on Research Integrity
  • American Chemical Society, The Chemist Professional’s Code of Conduct
  • Code of Ethics (American Society for Clinical Laboratory Science)
  • American Psychological Association, Ethical Principles of Psychologists and Code of Conduct
  • Statement on Professional Ethics (American Association of University Professors)
  • Nuremberg Code
  • World Medical Association's Declaration of Helsinki

Ethical Principles

The following is a rough and general summary of some ethical principles that various codes address*:

Honesty

Strive for honesty in all scientific communications. Honestly report data, results, methods and procedures, and publication status. Do not fabricate, falsify, or misrepresent data. Do not deceive colleagues, research sponsors, or the public.

Objectivity

Strive to avoid bias in experimental design, data analysis, data interpretation, peer review, personnel decisions, grant writing, expert testimony, and other aspects of research where objectivity is expected or required. Avoid or minimize bias or self-deception. Disclose personal or financial interests that may affect research.

Integrity

Keep your promises and agreements; act with sincerity; strive for consistency of thought and action.

Carefulness

Avoid careless errors and negligence; carefully and critically examine your own work and the work of your peers. Keep good records of research activities, such as data collection, research design, and correspondence with agencies or journals.

Openness

Share data, results, ideas, tools, resources. Be open to criticism and new ideas.

Transparency

Disclose methods, materials, assumptions, analyses, and other information needed to evaluate your research.

Accountability

Take responsibility for your part in research and be prepared to give an account (i.e. an explanation or justification) of what you did on a research project and why.

Intellectual Property

Honor patents, copyrights, and other forms of intellectual property. Do not use unpublished data, methods, or results without permission. Give proper acknowledgement or credit for all contributions to research. Never plagiarize.

Confidentiality

Protect confidential communications, such as papers or grants submitted for publication, personnel records, trade or military secrets, and patient records.

Responsible Publication

Publish in order to advance research and scholarship, not to advance just your own career. Avoid wasteful and duplicative publication.

Responsible Mentoring

Help to educate, mentor, and advise students. Promote their welfare and allow them to make their own decisions.

Respect for Colleagues

Respect your colleagues and treat them fairly.

Social Responsibility

Strive to promote social good and prevent or mitigate social harms through research, public education, and advocacy.

Non-Discrimination

Avoid discrimination against colleagues or students on the basis of sex, race, ethnicity, or other factors not related to scientific competence and integrity.

Competence

Maintain and improve your own professional competence and expertise through lifelong education and learning; take steps to promote competence in science as a whole.

Legality

Know and obey relevant laws and institutional and governmental policies.

Animal Care

Show proper respect and care for animals when using them in research. Do not conduct unnecessary or poorly designed animal experiments.

Human Subjects Protection

When conducting research on human subjects, minimize harms and risks and maximize benefits; respect human dignity, privacy, and autonomy; take special precautions with vulnerable populations; and strive to distribute the benefits and burdens of research fairly.

* Adapted from Shamoo A and Resnik D. 2015. Responsible Conduct of Research, 3rd ed. (New York: Oxford University Press).

Ethical Decision Making in Research

Although codes, policies, and principles are very important and useful, like any set of rules, they do not cover every situation, they often conflict, and they require interpretation. It is therefore important for researchers to learn how to interpret, assess, and apply various research rules and how to make decisions and act ethically in various situations. The vast majority of decisions involve the straightforward application of ethical rules. For example, consider the following case:

The research protocol for a study of a drug on hypertension requires the administration of the drug at different doses to 50 laboratory mice, with chemical and behavioral tests to determine toxic effects. Tom has almost finished the experiment for Dr. Q. He has only 5 mice left to test. However, he really wants to finish his work in time to go to Florida on spring break with his friends, who are leaving tonight. He has injected the drug in all 50 mice but has not completed all of the tests. He therefore decides to extrapolate from the 45 completed results to produce the 5 additional results.

Many different research ethics policies would hold that Tom has acted unethically by fabricating data. If this study were sponsored by a federal agency, such as the NIH, his actions would constitute a form of research misconduct, which the government defines as "fabrication, falsification, or plagiarism" (or FFP). Actions that nearly all researchers classify as unethical are viewed as misconduct. It is important to remember, however, that misconduct occurs only when researchers intend to deceive: honest errors related to sloppiness, poor record keeping, miscalculations, bias, self-deception, and even negligence do not constitute misconduct. Also, reasonable disagreements about research methods, procedures, and interpretations do not constitute research misconduct. Consider the following case:

Dr. T has just discovered a mathematical error in his paper that has been accepted for publication in a journal. The error does not affect the overall results of his research, but it is potentially misleading. The journal has just gone to press, so it is too late to catch the error before it appears in print. In order to avoid embarrassment, Dr. T decides to ignore the error.

Dr. T's error is not misconduct, nor is his decision to take no action to correct the error. Most researchers, as well as many different policies and codes, would say that Dr. T should tell the journal (and any coauthors) about the error and consider publishing a correction or erratum. Failing to publish a correction would be unethical because it would violate norms relating to honesty and objectivity in research.

There are many other activities that the government does not define as "misconduct" but which are still regarded by most researchers as unethical. These are sometimes referred to as "other deviations" from acceptable research practices and include:

  • Publishing the same paper in two different journals without telling the editors
  • Submitting the same paper to different journals without telling the editors
  • Not informing a collaborator of your intent to file a patent in order to make sure that you are the sole inventor
  • Including a colleague as an author on a paper in return for a favor even though the colleague did not make a serious contribution to the paper
  • Discussing with your colleagues confidential data from a paper that you are reviewing for a journal
  • Using data, ideas, or methods you learn about while reviewing a grant or a paper without permission
  • Trimming outliers from a data set without discussing your reasons in the paper
  • Using an inappropriate statistical technique in order to enhance the significance of your research (see the illustrative sketch below)
  • Bypassing the peer review process and announcing your results through a press conference without giving peers adequate information to review your work
  • Conducting a review of the literature that fails to acknowledge the contributions of other people in the field or relevant prior work
  • Stretching the truth on a grant application in order to convince reviewers that your project will make a significant contribution to the field
  • Stretching the truth on a job application or curriculum vita
  • Giving the same research project to two graduate students in order to see who can do it the fastest
  • Overworking, neglecting, or exploiting graduate or post-doctoral students
  • Failing to keep good research records
  • Failing to maintain research data for a reasonable period of time
  • Making derogatory comments and personal attacks in your review of an author's submission
  • Promising a student a better grade for sexual favors
  • Using a racist epithet in the laboratory
  • Making significant deviations from the research protocol approved by your institution's Animal Care and Use Committee or Institutional Review Board for Human Subjects Research without telling the committee or the board
  • Not reporting an adverse event in a human research experiment
  • Wasting animals in research
  • Exposing students and staff to biological risks in violation of your institution's biosafety rules
  • Sabotaging someone's work
  • Stealing supplies, books, or data
  • Rigging an experiment so you know how it will turn out
  • Making unauthorized copies of data, papers, or computer programs
  • Owning over $10,000 in stock in a company that sponsors your research and not disclosing this financial interest
  • Deliberately overestimating the clinical significance of a new drug in order to obtain economic benefits

These actions would be regarded as unethical by most scientists, and some might even be illegal. Most of these would also violate different professional ethics codes or institutional policies. However, they do not fall into the narrow category of actions that the government classifies as research misconduct. Indeed, there has been considerable debate about the definition of "research misconduct," and many researchers and policy makers are not satisfied with the government's narrow definition that focuses on FFP. However, given the huge list of potential offenses that might fall into the category "other serious deviations," and the practical problems with defining and policing these other deviations, it is understandable why government officials have chosen to limit their focus.
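To make the statistical point concrete, here is a minimal, hypothetical sketch (in Python, assuming numpy and scipy are available); it is not part of the NIEHS text or any official guidance. It simulates the questionable practice flagged above: testing twenty outcomes and reporting only the smallest p-value. Even though no real effect exists in the simulated data, a majority of the simulated "studies" can then claim a significant finding.

```python
# Hypothetical illustration of why uncorrected multiple testing inflates
# apparent significance; all names and numbers are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_outcomes, alpha = 2000, 20, 0.05

studies_with_false_positive = 0
for _ in range(n_studies):
    # Two groups drawn from the SAME distribution: there is no true effect.
    control = rng.normal(0.0, 1.0, size=(n_outcomes, 30))
    treated = rng.normal(0.0, 1.0, size=(n_outcomes, 30))
    # Test every outcome, then keep only the smallest p-value (the questionable step).
    p_values = [stats.ttest_ind(c, t).pvalue for c, t in zip(control, treated)]
    if min(p_values) < alpha:
        studies_with_false_positive += 1

print(f"Nominal false-positive rate per test: {alpha:.2f}")
print(f"Share of studies reporting a 'significant' result: "
      f"{studies_with_false_positive / n_studies:.2f}")
# With 20 independent tests this is roughly 1 - 0.95**20, i.e. about 0.64.
```

Disclosing how many tests were run and correcting for them (for example, with a Bonferroni or false-discovery-rate adjustment) is what separates legitimate exploratory analysis from the deviation listed above.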

Finally, situations frequently arise in research in which different people disagree about the proper course of action and there is no broad consensus about what should be done. In these situations, there may be good arguments on both sides of the issue and different ethical principles may conflict. These situations create difficult decisions for researchers, known as ethical or moral dilemmas. Consider the following case:

Dr. Wexford is the principal investigator of a large, epidemiological study on the health of 10,000 agricultural workers. She has an impressive dataset that includes information on demographics, environmental exposures, diet, genetics, and various disease outcomes such as cancer, Parkinson’s disease (PD), and ALS. She has just published a paper on the relationship between pesticide exposure and PD in a prestigious journal. She is planning to publish many other papers from her dataset. She receives a request from another research team that wants access to her complete dataset. They are interested in examining the relationship between pesticide exposures and skin cancer. Dr. Wexford was planning to conduct a study on this topic.

Dr. Wexford faces a difficult choice. On the one hand, the ethical norm of openness obliges her to share data with the other research team. Her funding agency may also have rules that obligate her to share data. On the other hand, if she shares data with the other team, they may publish results that she was planning to publish, thus depriving her (and her team) of recognition and priority. It seems that there are good arguments on both sides of this issue and Dr. Wexford needs to take some time to think about what she should do. One possible option is to share data, provided that the investigators sign a data use agreement. The agreement could define allowable uses of the data, publication plans, authorship, etc. Another option would be to offer to collaborate with the researchers.

The following are some steps that researchers, such as Dr. Wexford, can take to deal with ethical dilemmas in research:

What is the problem or issue?

It is always important to get a clear statement of the problem. In this case, the issue is whether to share information with the other research team.

What is the relevant information?

Many bad decisions are made as a result of poor information. To know what to do, Dr. Wexford needs to have more information concerning such matters as university or funding agency or journal policies that may apply to this situation, the team's intellectual property interests, the possibility of negotiating some kind of agreement with the other team, whether the other team also has some information it is willing to share, the impact of the potential publications, etc.

What are the different options?

People may fail to see different options due to a limited imagination, bias, ignorance, or fear. In this case, there may be other choices besides 'share' or 'don't share,' such as 'negotiate an agreement' or 'offer to collaborate with the researchers.'

How do ethical codes or policies as well as legal rules apply to these different options?

The university or funding agency may have policies on data management that apply to this case. Broader ethical rules, such as openness and respect for credit and intellectual property, may also apply to this case. Laws relating to intellectual property may be relevant.

Are there any people who can offer ethical advice?

It may be useful to seek advice from a colleague, a senior researcher, your department chair, an ethics or compliance officer, or anyone else you can trust. In this case, Dr. Wexford might want to talk to her supervisor and research team before making a decision.

After considering these questions, a person facing an ethical dilemma may decide to ask more questions, gather more information, explore different options, or consider other ethical rules. However, at some point he or she will have to make a decision and then take action. Ideally, a person who makes a decision in an ethical dilemma should be able to justify his or her decision to himself or herself, as well as colleagues, administrators, and other people who might be affected by the decision. He or she should be able to articulate reasons for his or her conduct and should consider the following questions in order to explain how he or she arrived at his or her decision:

  • Which choice will probably have the best overall consequences for science and society?
  • Which choice could stand up to further publicity and scrutiny?
  • Which choice could you not live with?
  • Think of the wisest person you know. What would he or she do in this situation?
  • Which choice would be the most just, fair, or responsible?

After considering all of these questions, one still might find it difficult to decide what to do. If this is the case, then it may be appropriate to consider other ways of making the decision, such as going with a gut feeling or intuition, seeking guidance through prayer or meditation, or even flipping a coin. Endorsing these methods in this context need not imply that ethical decisions are irrational, however. The main point is that human reasoning plays a pivotal role in ethical decision-making, but there are limits to its ability to solve all ethical dilemmas in a finite amount of time.

Promoting Ethical Conduct in Science


Most academic institutions in the US require undergraduate, graduate, or postgraduate students to have some education in the responsible conduct of research (RCR). The NIH and NSF have both mandated training in research ethics for students and trainees. Many academic institutions outside of the US have also developed educational curricula in research ethics.

Those of you who are taking or have taken courses in research ethics may be wondering why you are required to have education in research ethics. You may believe that you are highly ethical and know the difference between right and wrong. You would never fabricate or falsify data or plagiarize. Indeed, you also may believe that most of your colleagues are highly ethical and that there is no ethics problem in research.

If you feel this way, relax. No one is accusing you of acting unethically. Indeed, the evidence produced so far shows that misconduct is a very rare occurrence in research, although there is considerable variation among various estimates. The rate of misconduct has been estimated to be as low as 0.01% of researchers per year (based on confirmed cases of misconduct in federally funded research) to as high as 1% of researchers per year (based on self-reports of misconduct on anonymous surveys). See Shamoo and Resnik (2015), cited above.

Clearly, it would be useful to have more data on this topic, but so far there is no evidence that science has become ethically corrupt, despite some highly publicized scandals. Even if misconduct is only a rare occurrence, it can still have a tremendous impact on science and society because it can compromise the integrity of research, erode the public’s trust in science, and waste time and resources. Will education in research ethics help reduce the rate of misconduct in science? It is too early to tell. The answer to this question depends, in part, on how one understands the causes of misconduct. There are two main theories about why researchers commit misconduct. According to the "bad apple" theory, most scientists are highly ethical. Only researchers who are morally corrupt, economically desperate, or psychologically disturbed commit misconduct. Moreover, only a fool would commit misconduct because science's peer review system and self-correcting mechanisms will eventually catch those who try to cheat the system. In any case, a course in research ethics will have little impact on "bad apples," one might argue.

According to the "stressful" or "imperfect" environment theory, misconduct occurs because various institutional pressures, incentives, and constraints encourage people to commit misconduct, such as pressures to publish or obtain grants or contracts, career ambitions, the pursuit of profit or fame, poor supervision of students and trainees, and poor oversight of researchers (see Shamoo and Resnik 2015). Moreover, defenders of the stressful environment theory point out that science's peer review system is far from perfect and that it is relatively easy to cheat the system. Erroneous or fraudulent research often enters the public record without being detected for years. Misconduct probably results from environmental and individual causes, i.e. when people who are morally weak, ignorant, or insensitive are placed in stressful or imperfect environments.

In any case, a course in research ethics can be useful in helping to prevent deviations from norms even if it does not prevent misconduct. Education in research ethics can help people get a better understanding of ethical standards, policies, and issues and improve ethical judgment and decision making. Many of the deviations that occur in research may occur because researchers simply do not know or have never thought seriously about some of the ethical norms of research.

For example, some unethical authorship practices probably reflect traditions and practices that have not been questioned seriously until recently. If the director of a lab is named as an author on every paper that comes from his lab, even if he does not make a significant contribution, what could be wrong with that? That's just the way it's done, one might argue. Another example where there may be some ignorance or mistaken traditions is conflicts of interest in research. A researcher may think that a "normal" or "traditional" financial relationship, such as accepting stock or a consulting fee from a drug company that sponsors her research, raises no serious ethical issues. Or perhaps a university administrator sees no ethical problem in taking a large gift with strings attached from a pharmaceutical company. Maybe a physician thinks that it is perfectly appropriate to receive a $300 finder's fee for referring patients into a clinical trial.

If "deviations" from ethical conduct occur in research as a result of ignorance or a failure to reflect critically on problematic traditions, then a course in research ethics may help reduce the rate of serious deviations by improving the researcher's understanding of ethics and by sensitizing him or her to the issues.

Finally, education in research ethics should be able to help researchers grapple with the ethical dilemmas they are likely to encounter by introducing them to important concepts, tools, principles, and methods that can be useful in resolving these dilemmas. Scientists must deal with a number of different controversial topics, such as human embryonic stem cell research, cloning, genetic engineering, and research involving animal or human subjects, which require ethical reflection and deliberation.

Unethical Research Practices to Avoid: Examples & Detection

by Busayo Longe

Almost every aspect of human life is guided by guidelines, rules, and regulations. That is why, from time immemorial, there have been established ethics and rules that serve as a guide and moderator for human activities.

Without these recognized ethics, everyone would approach issues in ways they deem appropriate. This applies to research and the research community.

Research guidelines provide information about accepted research ethics to the research community and to researchers. These guidelines offer ethics, advice, and guidance. They help researchers develop ethical discretion, prevent scientific misconduct, and promote good scientific practice.

We are going to discuss research ethics, what they mean, how they are adopted, how important they are, and how they affect research and research institutions.

What are Research Ethics?

Ethics are a set of rules, written and unwritten, that govern expectations about one's own behavior and the behavior of others.

While society broadly agrees on some ethical values, such as the wrongness of murder, wide variation also exists in how these values are interpreted in practice.

Research ethics refers to the values and norms an institution has put in place to help regulate scientific activities; it is, in effect, a collection of scientific morals for the line of duty. These guidelines specify the traits or behaviors that are recognized by the research community, based on the general ethics of science and society at large.

Research guidelines are binding on both the researcher and the institution. This is because there are responsibilities to be carried out by both the researcher and the institution to ensure that their research is reliable. However, it is important that institutions are clear about research ethics roles and responsibilities at every point. Part of the duties of the institution is to have good administrative management and funding that allow researchers to comply with the designed ethical guidelines and norms.


The guidelines primarily cover research and other research-related activities, which include teaching, dissemination of research information, and the management of institutions. Research ethics guidelines are also used as tools in the assessment of individual cases, in the planning of research, and when reporting or publishing the outcomes and findings of a study.

The research ethics guidelines cover the projects of students at all levels, as well as those of doctoral research fellows. It is also the responsibility of the institution to provide relevant training in research ethics to students and doctoral research fellows. This is because the research norms and guidelines apply to all research, whether commissioned research, applied research, or basic research.

Research conducted by public or private institutions is also subject to these guidelines and ethics. Consulting firms that perform research-related tasks, such as systematic acquisition and processing of information about individuals, groups, and organizations, are also not excluded.


There are guidelines regulating research at different levels based on recognized norms for research ethics. We are going to look at these research norms below.

  • There are norms for good scientific practice. They relate to finding accuracy and relevant knowledge in research. These norms are originality, trustworthiness, academic freedom, and openness.
  • There are norms for the research communities. They guide the relationship between the people who take part in research together. These norms are respect, accountability, confidentiality, integrity, impartiality, constructive criticism, informed and free consent, and human dignity.
  • There are norms that guide the researchers’ relationship with the rest of society. These norms are social responsibility, dissemination of research, and independence.

The first two groups listed above are internal ethical norms: they regulate the research communities themselves, while the third group relates to the relationship between research and the outside world, or society. Many times, the lines between these norms get blurred.

Research ethics are the standard ethics set by the supervising institution or bodies to govern how scientific research and other types of research are conducted in research institutions such as universities, and to moderate how they are interpreted.


Why is Research Ethics Important?

The aim of research ethics is to guide researchers to conduct their studies and report their findings without deception or intent to directly or indirectly cause harm to their subjects or any member of society, as the case may be.

Also, research ethics establish the validity of a researcher’s study or research. They establish that the research is authentic and error/bias-free. This gives the researcher credibility within the institution and with the public.

Research ethics also ensure the safety of research or study subjects and the researcher. This is because it is a must that all researchers follow these guidelines.

Another benefit of research ethics is that they show that your research publications are not plagiarized and that your readers are not reading unverified data. This is achieved through the research manuscript: your research findings must adhere to the set guidelines.

The last point to consider is that research ethics provide the researcher with a sense of responsibility. This makes it easy to find appropriate solutions in the case of any misconduct.


Examples of Unethical Research Practices

Here’s a list of unethical practices every researcher must avoid

1. Duplicate publication

It is unethical for a researcher to submit the same research paper or publication to two or more journals, whether with or without acknowledgment of the other journals. This practice is known as duplicate submission or duplicate publication.

Some authors practice duplicate publication to increase their number of publications; however, it is unethical, it wastes the time of publication resources and journal reviewers, and it serves no benefit to the scientific community or humanity at large.

Submit your research paper to only one journal at a time.

2. Research data falsification

The falsification or fabrication of research data occurs when a researcher manipulates the procedures used in conducting research, or the important findings, just to obtain the researcher’s desired result.

Recording non-existent data or falsifying a data recording is known as research fabrication.

Research data fabrication is common in the pharmaceutical industry. This fabrication is done to market a specific drug to the general public without considering the drug’s side effects. The act is unethical, and it is also a waste of the limited resources available for research.

It can result in the revocation of a physician’s clinical license, the prosecution of the physician, and huge mistrust in the minds of the public.

3. Plagiarism

Plagiarism is a huge offense in the research community. It is the practice of taking another person’s research, work, or ideas and incorporating them into your own writing without giving due credit. In some cases, just for recognition, a researcher may even publish another person’s research as their own.

In other cases, the researcher may reword someone else’s publication into their own words without referencing the original author. Reusing large portions of one’s own previously published work without disclosure is known as self-plagiarism.

With technology, there are more tools to detect plagiarism, which means it is now very easy for journal editors to detect it. Plagiarism may not always be intentional; it may happen accidentally. However, you can avoid it by referencing all the sources you used in writing your own scientific paper.

Ensure that all the authors whose work you have used are properly cited in your paper, including those from previous publications.
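As a rough illustration of how detection tools work in principle, here is a hypothetical sketch in Python; it is not the method of any particular plagiarism checker, and the passages and threshold are invented for illustration. It scores how much a submitted passage overlaps with a known source using the standard-library difflib module.

```python
# Hypothetical sketch of a naive text-overlap check. Real plagiarism detectors
# compare manuscripts against large indexed corpora with far more robust methods.
from difflib import SequenceMatcher

def overlap_score(source: str, submission: str) -> float:
    """Return a 0-1 similarity ratio between two passages."""
    return SequenceMatcher(None, source.lower(), submission.lower()).ratio()

source = "Ethical norms promote the aims of research, such as knowledge and truth."
submission = "Ethical norms promote the aims of research such as truth and knowledge."

score = overlap_score(source, submission)
print(f"Overlap score: {score:.2f}")
if score > 0.8:  # illustrative threshold, not an established standard
    print("High overlap: check that the source is quoted and cited properly.")
```

The lesson for authors is the same regardless of the tool: quote and cite your sources so that any overlap is attributable rather than plagiarized.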

4. Authorship Conflict

ICMJE (International Committee of Medical Journal Editors) guidelines provide that anyone who has contributed to the conception or design of the research, contributed to the data analysis, helped to draft or revise the manuscript, and approved the final version before publication has an authorship claim to the paper.

An authorship conflict can arise if the name of a person who has contributed to the work in any of these ways is not included in the publication.

If one of the persons named in the paper does not give consent or agree to its publication, that is also an authorship conflict, and it is unethical.

If an additional author is added while the name of an already cited author is removed, whether before or after publication, it is an authorship conflict.

Another cause of authorship conflict is citing a person’s name based on seniority in practice or family affiliation when the said person has contributed nothing to the research or the documentation of the research findings.

Authorship conflict can be avoided by selecting the authors before conducting the research and by journals asking authors to submit a checklist that contains the criteria for authorship.

5. Conflict of interest

Conflict of interest arises in research when the author or the researcher gets influenced by financial reasons or personal issues that ultimately affect the quality of the outcome of the study.

When these conflicts of interest arise, whether from personal circumstances, financial considerations, or other sources, the researcher should truthfully disclose the situation to the editorial team, and do so completely without leaving out any detail.


Research ethics guidelines are designed to guide researchers in the conduct and publication of research.

This is why all researchers should develop habits of self-awareness, self-restraint, and a sense of responsibility. These habits enable them to give due weight to the welfare of the members of the research community, the public, and their own reputation, and, bearing all this in mind, should keep them from engaging in any misconduct in their research and publications.

Implications and Consequences of Unethical Research Practices

A researcher’s ethical obligations are to be taken seriously; they are truly no laughing matter. Here is what is at stake if they are violated.

  • A researcher risks an unapproved study or publication if the research proposal submitted to the supervising institution does not meet the research ethical requirements. This implies that the researcher will not be allowed to proceed with the research until the ethical conditions have been met according to the standard by the supervising institution.
  • If you have gotten a go-ahead for your study or research, failure to comply with the guidelines can result in your research being declared void and retracted. This means that you have to follow every step throughout the research, or else you can face disciplinary action.
  • If your research publication is connected to your doctorate degree, your doctorate title might be revoked. If the nature of your ethics breach is criminal, then the supervising institution can take legal action against the researcher. This may lead to prison sentences for the researcher.
  • Also, an unethical omission can cause the researcher’s reliability to be questioned, as well as the validity of the work.
  • Unethical conduct in research can attract bad media coverage and damage your reputation and that of your research institution.

Researchers should remember the consequence of unethical conduct is humiliation. Aside from the loss of reputation, there could be legal consequences. That is why researchers should not take shortcuts when conducting their research.

Retraction Watch is a website where retracted papers are publicized, and no researcher would want to be featured on it for unethical conduct.


How to Detect Unethical Research Practices

Here are some tools and mechanisms you can use to prevent and detect unethical practices. 

1. Management responsibility

In its day-to-day dealings, the management must maintain the highest standards of integrity. If senior management is dishonest and corrupt, dishonest and fraudulent acts will spread to all levels. It is management’s responsibility to set the highest standards when it comes to integrity.

The management should serve as an example for everyone in the research organization by pointing out correct and acceptable behavior to the staff. They must also make sure that the organization has the procedures and control measures in place to ensure maximum security.

2. Code of ethics

Every establishment should set up a code of ethics for all employees to follow. This should be a formal statement containing ethical codes of conduct for the organization’s employees. The code of ethics should unambiguously state the type of behavior expected from employees and what is unacceptable.

3. Personnel policies and procedures

If policies and procedures are open and fair, and the organization’s personnel are efficient, the organization’s exposure to fraud will be minimal. Organizations should consider putting effective policies in place.

Ethical Principles in Research

Respect for Individuals

  • Researchers must base their study on fundamental respect for human dignity.
  • Researchers must respect participants’ privacy; the autonomy and integrity of the individual must be protected.
  • Researchers have a duty to inform participants: they must be provided with adequate information about the research and its purpose.
  • Informed consent must be obtained: participants must willingly grant consent before the research can proceed.
  • Confidentiality must be practiced: all personal data must be handled with the utmost care.
  • There is also a responsibility to protect children and not to cause others harm.

Respect for Institutions

  • The research must be in accordance with the rules of the public administration.
  • There must be adequate respect given to private establishments that do not want to give out their data or information.
  • The interest of the vulnerable groups must be protected at all times.
  • Research on other cultures should be carried out appropriately, and those cultures must be respected.
  • Cultural monuments such as archives, artifacts, and texts must be treated with maximum care and preserved.

Formplus Features for Ethical Research

If you want to conduct a survey or make use of questionnaires in your research, the best website to use is Formplus. 

Formplus has all you need to develop your form, administer your form, gather the data from your survey or questionnaire, and interpret it.

Formplus also has in place all the requirements a researcher needs to follow ethical guidelines. Your forms are secured, private, and protected.

  • GDPR-Compliant

The GDPR-compliant consent form helps you comply with the European Union’s data privacy laws. You can collect personal information such as names, emails, phone numbers, and addresses using the GDPR-compliant form builder on Formplus.

  • Privacy Policy and Security

The privacy protection policy of Formplus is so transparent that it will tell you what the data collected from you will be used for, and who it is shared with. Also, your data is so secure that not even Formplus can access it without your permission.

Security is also 100% guaranteed. Formplus takes extra steps to ensure the privacy of its users is protected. Formplus website complies with all international laws and requirements you can think of. All your sensitive information is protected online and offline. 

Rules and guidelines are important, not only because they give you instructions on how to carry out research procedures, but also because they make you a responsible and respectable member of society.

There are set ethical guidelines that protect the researcher, the research community, and the general public. It is in a researcher’s best interest to follow these laid-down rules, because the consequences of violating them leave a long-lasting dent on whoever is involved and might even end their career.



National Institutes of Health (NIH) - Turning Discovery into Health


NIH Clinical Research Trials and You: Guiding Principles for Ethical Research

Pursuing Potential Research Participants Protections


“When people are invited to participate in research, there is a strong belief that it should be their choice based on their understanding of what the study is about, and what the risks and benefits of the study are,” said Dr. Christine Grady, chief of the NIH Clinical Center Department of Bioethics, to Clinical Center Radio in a podcast.

Clinical research advances the understanding of science and promotes human health. However, it is important to remember the individuals who volunteer to participate in research. There are precautions researchers can take – in the planning, implementation and follow-up of studies – to protect these participants in research. Ethical guidelines are established for clinical research to protect patient volunteers and to preserve the integrity of the science.

NIH Clinical Center researchers published seven main principles to guide the conduct of ethical research:

  • Social and clinical value
  • Scientific validity
  • Fair subject selection
  • Favorable risk-benefit ratio
  • Independent review
  • Informed consent
  • Respect for potential and enrolled subjects

Social and clinical value

Every research study is designed to answer a specific question. The answer should be important enough to justify asking people to accept some risk or inconvenience for others. In other words, answers to the research question should contribute to scientific understanding of health or improve our ways of preventing, treating, or caring for people with a given disease to justify exposing participants to the risk and burden of research.

Scientific validity

A study should be designed in a way that will get an understandable answer to the important research question. This includes considering whether the question asked is answerable, whether the research methods are valid and feasible, and whether the study is designed with accepted principles, clear methods, and reliable practices. Invalid research is unethical because it is a waste of resources and exposes people to risk for no purpose.

Fair subject selection

The primary basis for recruiting participants should be the scientific goals of the study — not vulnerability, privilege, or other unrelated factors. Participants who accept the risks of research should be in a position to enjoy its benefits. Specific groups of participants (for example, women or children) should not be excluded from research opportunities without a good scientific reason or a particular susceptibility to risk.

Favorable risk-benefit ratio

Uncertainty about the degree of risks and benefits associated with a clinical research study is inherent. Research risks may be trivial or serious, transient or long-term. Risks can be physical, psychological, economic, or social. Everything should be done to minimize the risks and inconvenience to research participants, to maximize the potential benefits, and to determine that the potential benefits are proportionate to, or outweigh, the risks.

Independent review

To minimize potential conflicts of interest and make sure a study is ethically acceptable before it starts, an independent review panel should review the proposal and ask important questions, including: Are those conducting the trial sufficiently free of bias? Is the study doing all it can to protect research participants? Has the trial been ethically designed and is the risk–benefit ratio favorable? The panel also monitors a study while it is ongoing.

Informed consent

Potential participants should make their own decision about whether they want to participate or continue participating in research. This is done through a process of informed consent in which individuals (1) are accurately informed of the purpose, methods, risks, benefits, and alternatives to the research, (2) understand this information and how it relates to their own clinical situation or interests, and (3) make a voluntary decision about whether to participate.

Respect for potential and enrolled participants

Individuals should be treated with respect from the time they are approached for possible participation — even if they refuse enrollment in a study — throughout their participation and after their participation ends. This includes:

  • respecting their privacy and keeping their private information confidential
  • respecting their right to change their mind, to decide that the research does not match their interests, and to withdraw without a penalty
  • informing them of new information that might emerge in the course of research, which might change their assessment of the risks and benefits of participating
  • monitoring their welfare and, if they experience adverse reactions, unexpected effects, or changes in clinical status, ensuring appropriate treatment and, when necessary, removal from the study
  • informing them about what was learned from the research


American Psychological Association

Research ethics


Leading the charge to address research misconduct


New guidance for protecting research participants

Thomas Plante investigates the ethical life

Facebook as a research tool

Two fictional scenarios explore plagiarism and fabrication or falsification of data.

Learning from cases of research misconduct

Five principles for research ethics

APA Ethics Code Section 8: Research and Publication

Conflict of Interest (NIH Office of Extramural Research)

Conflicts of Interests in Research

Sample Policy and Procedures for Responding to Allegations of Scientific Misconduct

Responsible Science, Volume I: Ensuring the Integrity of the Research Process

APA publications

Practical Ethics for Psychologists

Handbook of Research Ethics in Psychological Science

Essentials of Consensual Qualitative Research

Research Ethics in Psychological Science

Ethical Conflicts in Psychology, 5th Ed.

Further reading

On Being a Scientist: Conflicts of Interest

Resources for Research Ethics Education: Conflicts of Interests

Resources for Research Ethics Education: Whistleblowing

Responsible Conduct of Research Module on Conflicts of Interest

Responsible Conduct of Research Module on Research Misconduct

APA programs and governance

Guidelines for Ethical Conduct in the Care and Use of Nonhuman Animals in Research

Committee on Animal Research and Ethics (CARE)

Committee on Human Research (CHR)


Ethical guidance for the COVID-19 era

APA Ethics Office

  • Original article
  • Open access
  • Published: 01 April 2021

Unethical practices within medical research and publication – An exploratory study

S. D. Sivasubramaniam, M. Cosentino, L. Ribeiro & F. Marino

International Journal for Educational Integrity, volume 17, Article number: 7 (2021)


Abstract

The data produced by the scientific community impacts on academia, clinicians, and the general public; therefore, the scientific community and other regulatory bodies have been focussing on ethical codes of conduct. Despite the measures taken by several research councils, unethical research, publishing and/or reviewing behaviours still take place. This exploratory study considers some of the current unethical practices and the reasons behind them, and explores ways to discourage these within research and other professional disciplinary bodies. The study draws on interviews/discussions with PhD students, technicians, and academics/principal investigators (PIs) (N = 110), conducted mostly in European higher education institutions, including the UK, Italy, Ireland, Portugal, the Czech Republic and the Netherlands.

Through collegiate discussions, sharing experiences, and examining previously published/reported information, the authors have identified several less-reported behaviours. Some of these practices are influenced either by undue institutional expectations of research esteem or by changes in the journal review process. These malpractices can be divided into two categories: (a) methodological malpractices, including data management, and (b) those that contravene publishing ethics. The former is mostly related to "committed bias", by which the author selectively uses data to suit their own hypothesis, and to methodological malpractice such as the selection of outdated protocols that are not suited to the intended work. Although these are usually unintentional, incidences of intentional manipulation have been reported to the authors of this study; for example, carrying out investigations without positive (or negative) controls but including controls from a previous study. Other methodological malpractices include unfair repetitions to gain statistical significance, or retrospective ethical approvals. Publication-related malpractices, such as authorship malpractice and ethical clearance irregularities, have also been reported. The findings suggest that a globalised approach with clear punitive measures for offenders is needed to tackle this problem.

Introduction

Scientific research depends on effectively planned, innovative investigation coupled with truthful, critically analysed reporting. Research findings impact academia, clinicians, and the general public, but the scientific community is usually expected to "self-regulate", focussing on ethical codes of conduct (or behaviour). The concept of self-regulation is built in from the early stages of a research grant application until the submission of manuscripts for gaining impact. However, increasing demands on research esteem, coupled with the way this is captured/assessed, have created a relentless pressure to publish at all costs; this has resulted in numerous cases of scientific misconduct (Rawat and Meena 2014). Since the beginning of this century, cases of blatant scientific misconduct have received significant attention. For example, questionable research practices (QRPs) have been exposed by whistle-blowers within the scientific community and publicised by the media (Altman 2006; John et al. 2012). Moreover, organisations such as the Centre for Scientific Integrity (CSI) concentrate on the transparency, integrity and reproducibility of published data, and promote best practices (www1 n.d.). These measures focus on "scholarly conduct" and promote ethical behaviour in research and the way it is reported/disseminated, yet the number of misconduct cases and/or QRPs is on the rise. In 2008, a survey amongst researchers funded by the National Institutes of Health (NIH) suggested there might be as many as 1,000 cases of potential scientific misconduct going unreported each year (Titus et al. 2008). Another report on bioRxiv (an open access pre-print repository) showed that 6% of the papers (59 out of 960) published in one journal (Molecular and Cellular Biology - MCB) between 2009 and 2016 contained inappropriately duplicated images (Bik et al. 2018). Brainard (2018) recently reported that the number of articles retracted by scientific journals had increased 10-fold in the past 10 years. If the reported incidence of scientific misconduct is this high, then one can predict the prevalence of other, unreported forms of misconduct. The World Association of Medical Editors (WAME) has identified the following as the most commonly reported forms of misconduct: fabrication, falsification, plagiarism/ghost writing, image/data manipulation, improprieties of authorship, misappropriation of the ideas of others, violation of local and international regulations (including animal/human rights and ethics), and inappropriate/false reporting (i.e. wrongful whistle-blowing) (www2 n.d.).

However, WAME failed to identify other forms of scientific misconduct, such as reviewer bias (including bias arising from reviewers' own scientific, religious or political beliefs) (Adler and Stayer 2017), conflicts of interest (Bero 2017), and peer-review fixing, which is widespread, especially after the introduction of author-appointed peer reviewers (Ferguson et al. 2014; Thomas 2018). The most recent Retraction Watch report has shown that more than 500 published manuscripts have been retracted due to peer-review fixing; many of these are from a small group of authors (cited in Meadows 2017). Other reasons for retraction include intentional/unintentional misconduct, fraud and, to a lesser extent, honest errors. According to Fang et al. (2012), in a detailed study of 2,047 retracted articles within the biomedical and life sciences, 67.4% of retractions were due to some form of misconduct (including fraud/suspected fraud, duplicate publication, and plagiarism). Only 21.3% of retractions were due to genuine error. As can be seen, most of the information regarding academic misconduct is reported, detected or meta-analysed from databases. As for reporting (or whistle-blowing), many scientists have been reticent to raise concerns, mainly because of fear of the aftermath or implications of doing so (Bouter and Hendrix 2017). An anonymous information-gathering exercise amongst scientists, junior scientists, technicians and PhD students may highlight the misconduct issues being encountered in their day-to-day laboratory, and scholarly, activities. Therefore, this exploratory, interview-based study reports potentially undivulged misconduct and tries to link it with previously reported misconduct that is being enforced, practised or discussed within scientific communities.

Methodology

This qualitative exploratory study was based on informal mini-interviews conducted through collegiate discussions with technicians, PhD scholars, and fellow academics (N=110) within the medical and biomedical sciences, mainly in European higher education institutions including the UK, Italy, Ireland, Portugal, the Czech Republic and the Netherlands (only 5 PhD students). PhD students (n=75), technicians (mostly in the UK; n=25) and academics/principal investigators (PIs; n=10) from around Europe took part in this qualitative narrative exploration study. These mini-interviews were carried out in accordance with local ethical guidance and processes. The discussions or conversations were not voice recorded, nor were the details of interviewees taken, in order to maintain anonymity. The data were captured (in long-hand) by summarising the participants' views on the following three questions (see below).

These answers/notes were then grouped according to their similarities and summarised (see Tables  1 and  2 ). The mini-interviews were semi-structured, based around three questions.

Have you encountered any individual or institutional malpractices in your research area/laboratory?

If so, could you give a short description of this misconduct?

What measures, in your opinion, are needed to minimise or eliminate this misconduct?

We also examined recently published and/or reported (in the media) unethical practice or misconduct to compare with our findings (see Table 2). Fig. 1 summarises the methodology and its meta-cognitive reflection (similar to Eaton et al. 2019).

Figure 1: Interactive enquiry-based explorative methodology used in this study

Results and discussion

As stated above, this manuscript is an exploratory study of unethical practice amongst medical researchers that is not well known or previously reported. Hence, the methodology applied was exploratory, with minimal focus on standardisation, on the details of the qualitative approach and paradigm, or on the impact of researcher characteristics and reflexivity (British Medical Journal (BMJ) – www3 n.d.). Most importantly, our initial informal meetings prior to this study clearly indicated that the participants were reluctant to provide information that would assist an analysis linked to researcher characteristics and/or reflexivity. Thus, the level of data presented herein would not be suitable for a full thematic analysis. We accept this as a research limitation.

This study has identified some less reported (not well-known) unethical behaviours or misconduct. The findings from technicians/PhD scholars and academics/PIs are summarised in Tables 1 and 2. The study initially aimed to identify previously unreported unethical research conduct; however, the data show that many previously identified forms of misconduct are still common amongst researchers. Since the interviews were not audio recorded (to reassure participants of anonymity), participants openly reported the unethical practices within their laboratories (or elsewhere). This may cast doubt on the accuracy of data interpretation. To minimise this, we captured a summary of each conversation in long-hand.

We were able to generalise two emerging themes linked to the periods of a typical research cycle (as described by Hevner 2007): (a) methodological malpractices (including data management), and (b) those that contravene publishing ethics. Researcher-linked behaviours happen during the laboratory investigation stage, where researchers employ questionable research practices; these include self-imposed as well as acquired (or taught) habits. As can be seen from Tables 1 and 2, these forms of misconduct are mainly carried out by PhD scholars, post-doctoral scientists or early career researchers. These reported "practices" may be common amongst laboratory staff, especially given that some of them have been nicknamed (e.g. ghost repeats, data mining etc. – see Table 1). Individual or researcher-linked unethical behaviours are mostly related to "committed bias", by which the researcher selectively uses data to suit their own hypothesis or what they perceive as ground-breaking. This often results in conduct where the research (and in some cases the data/results) is statistically manipulated to suit the perceived conclusion.

Although this is a small-scale pilot study, we feel it reflects a common trend in laboratory-based research. As mentioned earlier, although this study set out to detect unreported research misconduct/malpractices, participants also reported some behaviours that had already been described in previous studies.

In contrast, established academics, professors and PIs tend to commit publication-related misconduct. This can be divided into author-related and reviewer-related misconduct. The former includes QRPs during manuscript preparation (such as selective use of data, omitting outliers, improper ethical clearance, authorship demands, etc.). The latter is carried out by academics when they review others' manuscripts and includes delaying review decisions, reciprocal reviewing, etc.

From the tables above, it seems that most of the reported misconduct could easily be prevented if specific and accurate guidelines or codes of conduct were present in each research laboratory (see below). This aspect, for example, has less impact in clinical research, where the study protocol is rigorously detailed in advance, the specific analyses to be included in the final report are specified in advance with clear primary and secondary endpoints, and all analyses/reports must be stored for the final study revision/conclusion. All these different steps are regulated by Good Clinical Practice guidelines (GCP; National Institute for Health Research Clinical Research Network (NIHR CRN) – www4 n.d.).

This by no means indicates that fraud does not exist in clinical research, but rather that it is easier to discover there than in laboratory-based investigations. The paper by Verhagen et al. (2003) clearly refers to a specific situation that commonly arises in a research laboratory. The majority of experiments within biomedical research are conducted on tissues or cells. Therefore, the experimental set-ups, including negative and positive controls, can easily (and frequently) be manipulated. This can only be prevented by using Standard Operating Procedures (SOPs), well-written and clear regulations such as Good Laboratory Practice (GLP; Directive 2004/9/EC), and written protocols. However, at present, no such regulations exist outside industry-based research, where GLP is mandatory. In a survey-based systematic review, Fanelli (2009) reported that approximately 2% of scientists claimed they had fabricated their data at some point in their research career. It is worth noting that Fanelli's study (as well as ours) only reported data from those who were willing to admit engaging in these activities. This casts doubt on the actual number of occurrences, as many would not have reported their misconduct. Other authors have highlighted the same issue and cast doubt on the reproducibility of scientific data (Resnik and Shamoo 2017; Brall et al. 2017; Shamoo 2016; Collins and Tabak 2014; Kornfeld and Titus 2016).

The interview responses

We also wanted to understand the causes of these QRPs in order to obtain a clearer picture of the misconduct. Based on the interview responses, we have tried to give a narrative but critical description of individual perceptions, and their rationalisations, in relation to previously published information.

Methodological malpractices

The data reported herein show that PhD scholars/post-doctoral fellows are mostly involved in laboratory-linked methodological misconduct. Many of them (especially the post-doctoral scientists) blamed supervisory/institutional pressures not only to enhance their publishing record but also to maintain high impact. One post-doctoral scientist claimed "there is always a constant pressure on publication; my supervisor said the reason you are not producing any meaningful data is because you are a perfectionist". He further recalled his supervisor once saying "if the data is 80% correct, you should accept it as valid and stop repeating until you are satisfied".

Likewise, another researcher who recently returned from the US said "I was an excellent researcher here (home country), but when I went to America, they demanded at least one paper every six months". "When I was unable to deliver this (and missed a year without publishing any papers), my supervisor stopped meeting me, I was not invited to any laboratory meetings, presentations, or proposal discussions; in fact, they made me quit the job". A PhD student recalled his supervisor jokingly hinting "if you want a perfect negative control, use water; it will not produce any results". Comments and demands like these must have played a big role in encouraging laboratory-based misconduct. In particular, the pressure to publish more papers in a limited period led to misconduct such as data manipulation (removing outliers, duplicate replications, etc.) or changing the aim of the study and, as a consequence, including data sets that were not previously considered because the results were not in line with the original aim. All these aspects force young researchers to adopt an attitude that leads them to obtain publishable results by any means (ethical or not) – a "Machiavellian personality trait", as put by Tijdink et al. (2016). Indeed, an immoral message is being delivered to these young researchers (future scientists), encouraging cheating behaviours. In fact, Buljan et al. (2018) have recently highlighted the research environment in which a scientist works as one of the potential causes of misconduct.

Behaviours that contravene publishing ethics

Academics (and PIs) mostly identified misconduct linked to contravening publishing ethics. This finding itself suggests that most of the academics who took part in this study have less "presence" within their laboratories. When confronted with the data obtained from PhD scholars and technicians, some of them vehemently denied these claims. Others came up with a variety of excuses. One lecturer/researcher said, "I have got far too much teaching to be in my laboratory". Another professor said, "I have post-docs within my laboratory, they will look after the rest; to be honest, my research skills are too old to refresh!" One PI replied, "why should I check them? No one checked me when I was doing research". All these replies show a lack of care about research malpractice. It is true that academics are under pressure to deliver high-impact research, carry out consultancy work, get involved with internationalisation within academia, and teach (Edwards and Roy 2017). However, these pressures should not undermine research ethics.

One researcher claimed to have noticed at least two different versions of "convenient ethical clearance". According to him, some researchers, especially those using human tissues, avoid specifying their research aims and instead write an application in such a way that they can use the samples for a variety of different projects (bearing in mind possible future developments). For example, if they aim to use the tissue to study a particular protein, the ethical application would mention all the related proteins and linked pathways. They justify this by claiming the tissues are precious, and that they are therefore "maximising the effective use of available material". Whilst understanding the rationale of this argument, the academic who witnessed the practice asked: "how ethical is it to supply misleading information in an ethical application?" He also highlighted issues with backdating ethical approval in one institution; that is, the ethical approval was obtained (or in his words "staged") after the study had been completed. Although this is one incident reported by one whistle-blower, it highlights institutional malpractice.

Selective use of data is another category reported here and elsewhere (Satalkar and Shaw 2019; Blatt 2013; Bornmann 2013). One academic reported incidences of researchers purposely omitting data to maximise statistical significance. If this is the case, then the validity of the reported work, its statistical significance, and in some cases its clinical usage are in question. What is interesting is that, as elegantly reported by Fanelli (2010), the vast majority of published papers report findings that are in line with the original hypothesis. In fact, the number of published papers reporting negative results is very limited.
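To make concrete why practices such as omitting inconvenient data points or repeating an experiment "until it works" inflate apparent significance, the short simulation below is an illustrative sketch only (it is not part of the study above; the batch sizes, thresholds and group sizes are arbitrary assumptions). It tests two identical groups after every new batch of observations and stops as soon as p < 0.05. Even though there is no real effect, this optional stopping produces "significant" results far more often than the nominal 5%.

```python
# Illustrative only: how "optional stopping" (re-testing after every new batch of
# data until p < 0.05) inflates the false-positive rate when no real effect exists.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def false_positive_rate(n_simulations=2000, batch=10, max_batches=10, alpha=0.05):
    hits = 0
    for _ in range(n_simulations):
        a, b = [], []
        for _ in range(max_batches):
            # Both groups are drawn from the SAME distribution, so any
            # "significant" difference is a false positive by construction.
            a.extend(rng.normal(0, 1, batch))
            b.extend(rng.normal(0, 1, batch))
            if stats.ttest_ind(a, b).pvalue < alpha:
                hits += 1   # stop as soon as the result looks "significant"
                break
    return hits / n_simulations

print("Nominal alpha: 0.05")
print("False-positive rate with optional stopping:", round(false_positive_rate(), 3))
# Typically prints a value well above 0.05, illustrating the inflation.
```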

Misconduct relating to authorship has been highlighted in many previous studies (Ploug 2018; Vera-Badillo et al. 2016; Jenn 2006). The British Medical Journal (BMJ – www5 n.d.) has classified two main types of authorship misconduct: (a) omission of a collaborator who has contributed significantly to the project, and (b) inclusion of an author who has not contributed (or has contributed only minimally). Interestingly, in this study one academic claimed that he was under pressure to include the research co-ordinator of his department as an author on every publication.

He recalled the first instance in which he was pressurised to include the co-ordinator: "It was my first paper as a PI but due to my institutional policy, all potential publications needed to be scrutinised by the co-ordinator for their worthiness of publication", "so when I submitted for internal scrutiny, I was called by the co-ordinator who simply said there is nothing wrong with this study, but one important name is missing in the authors' list" (indirectly requesting her name to be included). Likewise, another PI said, "it is an unwritten institutional policy to include at least one professor on every publication". Yet another PI claimed, "this is common in my laboratory – all post-doctoral scientists would have a chance to be an author", "by this way we would build their research esteem". His justification for this was "many post-doctoral scientists spend a substantial amount of time mentoring other scientists and PhD students, therefore they deserve honorary authorships". Similar malpractices have been highlighted by other authors (Vera-Badillo et al. 2016; Gøtzsche et al. 2007), but the worrying finding is that in many cases the practice is institutionalised. With regard to authorship, according to the International Committee of Medical Journal Editors (ICMJE – www6 n.d.), authorship can only be given to those with (a) a substantial contribution (at least to a significant part of the investigation) and (b) involvement in manuscript preparation, including contribution to critical review. However, our discussions revealed complimentary (gift) authorships, authorship denial, etc.

Malpractices in peer-review process

The final QRP highlighted by our interviewees relates to the reviewing process. One academic openly admitted, "I and Dr X usually use each other as reviewers because we both understand our research"; he further added, "the blind reviewing is a thing of the past, every author has his own writing style, and if you are in one particular research field, with time, you would be able to predict the origin of the manuscript you are reviewing (whether it is your friend or a person with a conflicting research interest)!". Another academic said that "the era of blind reviewing is long gone, authors are intentionally or unintentionally identifying themselves within the manuscripts with sentences such as 'previously we have shown'. This allows the reviewer to identify the authors from the reference list". He further claimed he had also experienced reviewers intentionally delaying acceptance or asking for further experiments to be carried out, simply because they wanted their own manuscript (on a related topic) to be published first. Incidences like these, though minimal, cast doubt on the reviewing process itself.

Recent reports by Thomas (2018) and Preston (2017) (see also Adler and Stayer 2017) have highlighted issues (or scams) such as an author reviewing his own manuscripts. Of course, many journals do not use the suggested reviewers; instead, they build a database of reviewers and randomly select appropriate ones. Still, it is not clear how robust this approach is in curtailing reviewer-based misconduct. Organisations such as Retraction Watch constantly pick up and report these malpractices, yet there are no definite sanctions or punishments for the culprits (Zimmerman 2017).

One of the academic interviewees recalled an incident in which an author had been dismissed over a serious image manipulation scam, yet obtained a research tenure at another institution within 3 months of the dismissal. Galbraith (2017) reviewed summaries of 284 integrity-related cases published by the Office of Research Integrity (ORI) and found that in around 47% of cases the researchers received moderate punishment and were often permitted to continue their research. This highlights the need for a globalised approach, with clear sanctions, to tackle research misconduct. Although this is a small-scale study, it has shown that, despite the measures taken by research regulatory bodies, the problem of misconduct persists. The main problem behind this is "the lack of care", underpinned by pressures for esteem.

Limitations

This is an exploratory study with minimal focus on standardisation, on the details of the qualitative approach and paradigm, or on the impact of researcher characteristics and reflexivity. Therefore, the level of data presented herein is not suited to a full thematic analysis. Also, this is a small-scale study with a sample size of 110 participants, who are further divided into sub-groups (PhD students, technicians and PIs). This limits the scope for analysing variability in the responses of individual sub-groups and therefore might have resulted in voluntary response bias (i.e. responses influenced by individual perceptions of research misconduct). Yet the study has highlighted that the issue of research misconduct is worth pursuing with a larger sample. It has also highlighted the common QRPs (both laboratory- and publication-related) that need further attention, enabling us to establish the right research design for future studies.

The way forward

This exploratory study (and previously reported large-scale studies) showed that QRPs are still a problem in science and medical research. So what is the way forward to stop these types of misconduct? Whilst it is important to set up clear criteria for individual research conduct, it is also important to set up institutional policies. These policies should aim at promoting academic/research integrity, with paramount attention to the training of young researchers in research integrity. The focus should be on young researchers attaining rigorous learning/application of the best methodological and professional standards in their research. In fact, the Singapore Statement on Research Integrity (www7 n.d.) not only highlights the importance of individual researchers maintaining integrity in their research, but also insists on the role of institutions in creating/sustaining research integrity via educational programmes with continuous monitoring (Cosentino and Picozzi 2013). Considering the findings from this study, it would also be appropriate to suggest an international regulatory body to regularly monitor these practices, involving all stakeholders including governments.

In fact, this study (and others) have highlighted the importance of re-validating the "voluntary commitment" to research integrity. With respect to individual researchers, we propose a unified approach for early career researchers (ECRs). They should be educated about the importance of ethics/ethical behaviours (see Table 3 for our suggestions for ECRs). We feel it is vital to provide compulsory ethical training throughout their career (not just at the beginning). It is also advisable to regularly carry out "peer review" visits/processes between laboratories for ethical and health/safety aspects. Most importantly, it is time for the research community to move away from the expectation of "self-governance" and establish international research governance guidelines that can be monitored by individual countries.

We do agree that this is a small-scale pilot study and that, due to the way it was conducted, we were unable to carry out a full thematic analysis. This was mainly because the participants were extremely reluctant to offer information that would allow researcher characteristics to be formulated. Also, the study data in many cases conform to the previously reported finding that QRPs and research misconduct are still a problem within science and medicine. Yet this study has attempted to narrate the previously unreported justifications given by the interviewees. In addition, we were able to highlight that these activities are becoming regular occurrences (hence the nicknamed behaviours). We also described how academic pressures are inflicted upon early career researchers and provided some recommendations regarding the training of ECRs.

Significance

The study has highlighted that the negative influence of supervisory/peer pressures and/or inappropriate training may be the main cause of this misconduct, underlining the importance of devising and implementing a universal research code of conduct. Although this was an exploratory investigation, the data presented herein indicate that unethical practices may still be widespread within the biomedical field. The study highlighted the fact that, despite the proactive/reflective measures taken by research governance organisations, these practices are still going on in different countries within Europe. As the study was explorative, we had the flexibility to adapt and evolve our questions in response to the answers received. This will help us to carry out detailed, systematic research on this topic involving an international audience of researchers.

Concluding remarks

To summarise, this small-scale, interview-based narrative study has highlighted that QRPs and research misconduct are still a problem within science and medicine. Although they may be influenced by institutional and career-related pressures, these practices seriously undermine ethical standards and call into question the validity of the data being reported. The findings also suggest that both methodological and publication-related malpractices continue, despite being widely reported. The measures taken by journal editors and other regulatory bodies such as WAME and ICMJE may not be sufficient to curtail these practices. Therefore, it would be important to take steps towards providing a universal research code of conduct. Without a globalised approach with clear punitive measures for offenders, research misconduct and QRPs will not only affect the reliability, reproducibility, and integrity of research, but also hinder public trust in medical research. This study has also highlighted the importance of carrying out large-scale studies to obtain a clear picture of the misconduct undermining the research ethics culture.

Availability of data and materials

The authors confirm that the data supporting the findings of this study are available within the article.

Adler AC, Stayer SA (2017) Bias Among Peer Reviewers. JAMA. 318(8):755. https://doi.org/10.1001/jama.2017.9186


Altman, LK (2006). For science gatekeepers, a credibility gap. The New York Times. Retrieved from http://www.nytimes.com/2006/05/02/health/02docs.html?pagewanted=all . Accessed 26 July 2019

Bero L (2017) Addressing Bias and Conflict of Interest Among Biomedical Researchers. JAMA 317(17):1723–1724. https://doi.org/10.1001/jama.2017.3854

Bik EM, Fang FC, Kullas AL, Davis RJ, Casadevall A (2018) Analysis and correction of inappropriate image duplication: the Molecular and Cellular Biology experience. Mol Cell Biol. https://doi.org/10.1128/MCB.00309-18

Blatt M (2013) Manipulation and Misconduct in the Handling of Image Data. Plant Physiol 163(1):3–4. https://doi.org/10.1104/pp.113.900471

Bornmann L (2013) Research Misconduct—Definitions, Manifestations and Extent. Publications. 1:87–98. https://doi.org/10.3390/publications1030087

Bouter LM, Hendrix S (2017) Both whistle-blowers and the scientists they accuse are vulnerable and deserve protection. Account Res 24(6):359–366. https://doi.org/10.1080/08989621.2017.1327814

Brainard J (2018) Rethinking retractions. Science. 362(6413):390–393. https://doi.org/10.1126/science.362.6413.390

Brall C, Maeckelberghe E, Porz R, Makhoul J, Schröder-Bäck P (2017) Research Ethics 2.0: New perspectives on norms, values, and integrity in genomic research in times of even scarcer resources. Public Health Genomics 20:27–35. https://doi.org/10.1159/000462960

Buljan I, Barać L, Marušić A (2018) How researchers perceive research misconduct in biomedicine and how they would prevent it: A qualitative study in a small scientific community. Account Res 25(4):220–238. https://doi.org/10.1080/08989621.2018.1463162

Collins FS and Tabak LA (2014) Policy: NIH plans to enhance reproducibility. NATURE (Comment) - https://www.nature.com/news/policy-nih-plans-to-enhance-reproducibility-1.14586

Cosentino M, Picozzi M (2013) Transparency for each research article: institutions must also be accountable for research integrity. BMJ 347:f5477. https://doi.org/10.1136/bmj.f5477

Directive 2004/9/EC of the European Parliament and of the Council of 11 February 2004 on the inspection and verification of good laboratory practice (GLP).  https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2004:050:0028:0043:EN:PDF . Accessed  07 Sep 2019

Eaton SE, Chibry N, Toye MA, Rossi S (2019) Interinstitutional perspectives on contract cheating: a qualitative narrative exploration from Canada. Int J Educ Integr 15:9. https://doi.org/10.1007/s40979-019-0046-0

Edwards MA, Roy S (2017) Academic research in the 21st century: maintaining scientific integrity in a climate of perverse incentives and hypercompetition. Environ Eng Sci 34(1):51–61. https://doi.org/10.1089/ees.2016.0223

Fanelli D (2009) How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data. Plos One 4(5):e5738

Fanelli D (2010) Do Pressures to Publish Increase Scientists' Bias? An Empirical Support from US States Data. PLoS One 5(4):e10271. https://doi.org/10.1371/journal.pone.0010271

Fang FC, Steen RG, Casadevall A (2012) Misconduct accounts for the majority of retracted scientific publications. PNAS 109(42):17028–17033. https://doi.org/10.1073/pnas.1212247109

Ferguson C, Marcus A, Oransky I (2014) Publishing: The peer-review scam. Nature (News review) 515(7528):480–482. http://www.nature.com/news/publishing-the-peer-review-scam-1.16400. Accessed 21 Nov 2019

Galbraith KL (2017) Life after research misconduct: Punishments and the pursuit of second chances. J Empir Res Hum Res Ethics 12(1):26–32. https://doi.org/10.1177/1556264616682568

Gøtzsche PC, Hróbjartsson A, Johansen HK, Haahr MT, Altman DG, Chan A-W (2007) Ghost authorship in industry-initiated randomised trials. PLoS Med. https://doi.org/10.1371/journal.pmed.0040019

Hevner AR (2007) A Three Cycle View of Design Science Research. Scand J Inf Syst 19(2):4 https://aisel.aisnet.org/sjis/vol19/iss2/4


Jenn NC (2006) Common Ethical Issues In Research And Publication. Malays Fam Physician 1(2-3):74–76

John LK, Loewenstein G, Prelec D (2012) Measuring the prevalence of questionable research practices with incentives for truth telling. Psychol Sci. 23(5):524–532

Kornfeld DS, Titus SL (2016) Stop ignoring misconduct. Nature 537(7618):29–30. https://doi.org/10.1038/537029a

Meadows A (2017) What does transparent peer review mean and why is it important? The Scholarly Kitchen [blog of the Society for Scholarly Publishing]

Ploug T (2018) Should all medical research be published? The moral responsibility of medical journal editors. J Med Ethics 44:690–694

Preston A (2017) The future of peer review. Sci Am. Retrieved from https://blogs.scientificamerican.com/observations/the-future-of-peer-review/

Rawat S, Meena S (2014) Publish or perish: Where are we heading? J Res Med Sci. 19(2):87–89

Resnik DB, Shamoo AE (2017) Reproducibility and Research Integrity. Account Res. 24(2):116–123. https://doi.org/10.1080/08989621.2016.1257387

Satalkar P, Shaw D (2019) How do researchers acquire and develop notions of research integrity? A qualitative study among biomedical researchers in Switzerland. BMC Med Ethics 20:72. https://doi.org/10.1186/s12910-019-0410-x

Shamoo AE (2016) Audit of research data. Account Res. 23(1):1–3. https://doi.org/10.1080/08989621.2015.1096727

Thomas SP (2018) Current controversies regarding peer review in scholarly journals. Issues Ment Health Nurs 39(2):99–101. https://doi.org/10.1080/01612840.2018.1431443.

Tijdink JK, Bouter LM, Veldkamp CL, van de Ven PM, Wicherts JM, Smulders YM (2016) Personality traits are associated with research misbehavior in Dutch scientists: A cross-sectional study. Plos One. https://doi.org/10.1371/journal.pone.0163251

Titus SL, Wells JA, Rhoades LJ (2008) Repairing research integrity. Nature 453:980–982

Vera-Badillo FE, Napoleone M, Krzyzanowska MK, Alibhai SMH, Chan A-W, Ocana A, Templeton AJ, Seruga B, Amir E, Tannock IF (2016) Honorary and ghost authorship in reports of randomised clinical trials in oncology. Eur J Cancer 66. https://doi.org/10.1016/j.ejca.2016.06.023

Verhagen H, Aruoma OI, van Delft JH, Dragsted LO, Ferguson LR, Knasmüller S, Pool-Zobel BL, Poulsen HE, Williamson G, Yannai S (2003) The 10 basic requirements for a scientific paper reporting antioxidant, antimutagenic or anticarcinogenic potential of test substances in in vitro experiments and animal studies in vivo. Food Chem Toxicol. 41(5):603–610

www1 n.d.:  https://retractionwatch.com/the-center-for-scientific-integrity/ . Accessed 13 Nov 2019

www2 n.d.: http://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html . Accessed 07 July 2019

www3 n.d.: https://bmjopen.bmj.com/content/bmjopen/8/12/e024499/DC1/embed/inline-supplementary-material-1.pdf?download=true . Accessed 26 July 2019

www4 n.d.: http://www.crn.nihr.ac.uk/learning-development/ - National Institute for Health Research Clinical Research Network (NIHR CRN). Accessed 13 Nov 2019

www5 n.d.: https://www.bmj.com/about-bmj/resources-authors/forms-policies-and-checklists/scientific-misconduct . Accessed 07 July 2019

www6 n.d.: http://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html - Accessed 0 July 2019

www7 n.d.: http://www.singaporestatement.org . Accessed 10 Aug 2019

Zimmerman SV (2017), "The Canadian Experience: A Response to ‘Developing Standards for Research Practice: Some Issues for Consideration’ by James Parry", Finding Common Ground: Consensus in Research Ethics Across the Social Sciences (Advances in Research Ethics and Integrity, Vol. 1) Emerald Publishing Limited, pp. 103-109. https://doi.org/10.1108/S2398-601820170000001009


Acknowledgements

The authors wish to thank the organising committee of the 5th international conference "Plagiarism across Europe and Beyond" in Vilnius, Lithuania, for accepting this paper for presentation at the conference. We also sincerely thank Dr Carol Stalker, School of Psychology, University of Derby, for her critical advice on the statistical analysis.

Funding

Not applicable – the study was carried out as a collaborative effort amongst the authors.

Author information

Authors and Affiliations

School of Human Sciences, University of Derby, Derby, DE22 1GB, UK

S. D. Sivasubramaniam

Center of Research in Medical Pharmacology, University of Insubria, Via Ravasi, 2, 21100, Varese, VA, Italy

M. Cosentino & F. Marino

Faculty of Medicine, University of Porto, Porto, Portugal

L. Ribeiro

Contributions

Dr Sivasubramaniam produced the questionnaire and interview format with contributions from all other authors. He also prepared the manuscript with the help of Prof Cosentino, who additionally contributed to the initial literature survey and discussion. Drs Marino and Ribeiro helped with data collection and analysis. The author(s) read and approved the final manuscript.

Corresponding author

Correspondence to S. D. Sivasubramaniam .

Ethics declarations

Competing interests.

The authors can certify that they have NO affiliations with or involvement in any organization or entity with any financial or non-financial interests (including personal or professional relationships, affiliations, knowledge or beliefs) in the subject matter or materials discussed in this manuscript.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Sivasubramaniam, S.D., Cosentino, M., Ribeiro, L. et al. Unethical practices within medical research and publication – An exploratory study. Int J Educ Integr 17 , 7 (2021). https://doi.org/10.1007/s40979-021-00072-y

Download citation

Received : 17 July 2020

Accepted : 24 January 2021

Published : 01 April 2021

DOI : https://doi.org/10.1007/s40979-021-00072-y


  • Medical research
  • Research misconduct
  • Committed bias
  • Unethical practices

International Journal for Educational Integrity

ISSN: 1833-2595


Ethical Considerations in Research | Types & Examples

Published on October 18, 2021 by Pritha Bhandari . Revised on May 9, 2024.

Ethical considerations in research are a set of principles that guide your research designs and practices. Scientists and researchers must always adhere to a certain code of conduct when collecting data from people.

The goals of human research often include understanding real-life phenomena, studying effective treatments, investigating behaviors, and improving lives in other ways. What you decide to research and how you conduct that research involve key ethical considerations.

These considerations work to

  • protect the rights of research participants
  • enhance research validity
  • maintain scientific or academic integrity

Table of contents

  • Why do research ethics matter?
  • Getting ethical approval for your study
  • Types of ethical issues
  • Voluntary participation
  • Informed consent
  • Confidentiality
  • Potential for harm
  • Results communication
  • Examples of ethical failures
  • Other interesting articles
  • Frequently asked questions about research ethics

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe for research subjects.

You’ll balance pursuing important research objectives with using ethical research methods and procedures. It’s always necessary to prevent permanent or excessive harm to participants, whether inadvertent or not.

Defying research ethics will also lower the credibility of your research because it’s hard for others to trust your data if your methods are morally questionable.

Even if a research idea is valuable to society, it doesn’t justify violating the human rights or dignity of your study participants.


Before you start any study involving data collection with people, you’ll submit your research proposal to an institutional review board (IRB) .

An IRB is a committee that checks whether your research aims and research design are ethically acceptable and follow your institution’s code of conduct. They check that your research materials and procedures are up to code.

If successful, you’ll receive IRB approval, and you can begin collecting data according to the approved procedures. If you want to make any changes to your procedures or materials, you’ll need to submit a modification application to the IRB for approval.

If unsuccessful, you may be asked to re-submit with modifications or your research proposal may receive a rejection. To get IRB approval, it’s important to explicitly note how you’ll tackle each of the ethical issues that may arise in your study.

There are several ethical issues you should always pay attention to in your research design, and these issues can overlap with each other.

You’ll usually outline ways you’ll deal with each issue in your research proposal if you plan to collect data from participants.

Voluntary participation: Your participants are free to opt in or out of the study at any point in time.
Informed consent: Participants know the purpose, benefits, risks, and funding behind the study before they agree or decline to join.
Anonymity: You don’t know the identities of the participants. Personally identifiable data is not collected.
Confidentiality: You know who the participants are, but you keep that information hidden from everyone else. You anonymize personally identifiable data so that it can’t be linked to other data by anyone else.
Potential for harm: Physical, social, psychological, and all other types of harm are kept to an absolute minimum.
Results communication: You ensure your work is free of plagiarism and research misconduct, and you accurately represent your results.

Voluntary participation means that all research subjects are free to choose to participate without any pressure or coercion.

All participants are able to withdraw from, or leave, the study at any point without feeling an obligation to continue. Your participants don’t need to provide a reason for leaving the study.

It’s important to make it clear to participants that there are no negative consequences or repercussions to their refusal to participate. After all, they’re taking the time to help you in the research process , so you should respect their decisions without trying to change their minds.

Voluntary participation is an ethical principle protected by international law and many scientific codes of conduct.

Take special care to ensure there’s no pressure on participants when you’re working with vulnerable groups of people who may find it hard to stop the study even when they want to.

Informed consent refers to a situation in which all potential participants receive and understand all the information they need to decide whether they want to participate. This includes information about the study’s benefits, risks, funding, and institutional approval.

You make sure to provide all potential participants with all the relevant information about

  • what the study is about
  • the risks and benefits of taking part
  • how long the study will take
  • your supervisor’s contact information and the institution’s approval number

Usually, you’ll provide participants with a text for them to read and ask them if they have any questions. If they agree to participate, they can sign or initial the consent form. Note that this may not be sufficient for informed consent when you work with particularly vulnerable groups of people.

If you’re collecting data from people with low literacy, make sure to verbally explain the consent form to them before they agree to participate.

For participants with very limited English proficiency, you should always translate the study materials or work with an interpreter so they have all the information in their first language.

In research with children, you’ll often need informed permission for their participation from their parents or guardians. Although children cannot give informed consent, it’s best to also ask for their assent (agreement) to participate, depending on their age and maturity level.

Anonymity means that you don’t know who the participants are and you can’t link any individual participant to their data.

You can only guarantee anonymity by not collecting any personally identifying information—for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, and videos.

In many cases, it may be impossible to truly anonymize data collection . For example, data collected in person or by phone cannot be considered fully anonymous because some personal identifiers (demographic information or phone numbers) are impossible to hide.

You’ll also need to collect some identifying information if you give your participants the option to withdraw their data at a later stage.

Data pseudonymization is an alternative method where you replace identifying information about participants with pseudonymous, or fake, identifiers. The data can still be linked to participants but it’s harder to do so because you separate personal information from the study data.
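As a concrete illustration of this idea, the sketch below (hypothetical field names and key, not drawn from this article) replaces direct identifiers with a keyed hash, so the working dataset contains only stable pseudonyms while the secret key needed for re-identification is stored separately.

```python
# Illustrative pseudonymisation sketch (hypothetical field names).
# The secret key is kept apart from the research data, e.g. by the study lead,
# so re-identifying a participant requires both pieces.
import hmac
import hashlib

SECRET_KEY = b"store-this-key-separately-from-the-data"

def pseudonymise(identifier: str) -> str:
    """Return a stable pseudonym: the same input always maps to the same code."""
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:12]

records = [
    {"name": "Alice Example", "email": "alice@example.org", "score": 42},
    {"name": "Bob Example", "email": "bob@example.org", "score": 37},
]

# Keep only the pseudonym and the study variables; drop names and emails.
pseudonymised = [{"pid": pseudonymise(r["email"]), "score": r["score"]} for r in records]
print(pseudonymised)
```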

Confidentiality means that you know who the participants are, but you remove all identifying information from your report.

All participants have a right to privacy, so you should protect their personal data for as long as you store or use it. Even when you can’t collect data anonymously, you should secure confidentiality whenever you can.

Some research designs aren’t conducive to confidentiality, but it’s important to make all attempts and inform participants of the risks involved.

As a researcher, you have to consider all possible sources of harm to participants. Harm can come in many different forms.

  • Psychological harm: Sensitive questions or tasks may trigger negative emotions such as shame or anxiety.
  • Social harm: Participation can involve social risks, public embarrassment, or stigma.
  • Physical harm: Pain or injury can result from the study procedures.
  • Legal harm: Reporting sensitive data could lead to legal risks or a breach of privacy.

It’s best to consider every possible source of harm in your study as well as concrete ways to mitigate them. Involve your supervisor to discuss steps for harm reduction.

Make sure to disclose all possible risks of harm to participants before the study to get informed consent. If there is a risk of harm, prepare to provide participants with resources or counseling or medical services if needed.

Some survey questions may bring up negative emotions, for example, so you inform participants about the sensitive nature of the survey and assure them that their responses will be confidential.

The way you communicate your research results can sometimes involve ethical issues. Good science communication is honest, reliable, and credible. It’s best to make your results as transparent as possible.

Take steps to actively avoid plagiarism and research misconduct wherever possible.

Plagiarism means submitting others’ works as your own. Although it can be unintentional, copying someone else’s work without proper credit amounts to stealing. It’s an ethical problem in research communication because you may benefit by harming other researchers.

Self-plagiarism is when you republish or re-submit parts of your own papers or reports without properly citing your original work.

This is problematic because you may benefit from presenting your ideas as new and original even though they’ve already been published elsewhere in the past. You may also be infringing on your previous publisher’s copyright, violating an ethical code, or wasting time and resources by doing so.

In extreme cases of self-plagiarism, entire datasets or papers are sometimes duplicated. These are major ethical violations because they can skew research findings if taken as original data.

For example, you might notice that two published studies have similar characteristics even though they are from different years. Their sample sizes, locations, treatments, and results are highly similar, and the studies share one author in common.

Research misconduct

Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. It’s a form of academic fraud.

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement about data analyses.

Research misconduct is a serious ethical issue because it can undermine academic integrity and institutional credibility. It leads to a waste of funding and resources that could have been used for alternative research.

A well-known example is Andrew Wakefield’s retracted 1998 study claiming a link between the MMR vaccine and autism. Later investigations revealed that he fabricated and manipulated data to show a nonexistent link between vaccines and autism. Wakefield also neglected to disclose important conflicts of interest, and his medical license was taken away.

This fraudulent work sparked vaccine hesitancy among parents and caregivers. The rate of MMR vaccinations in children fell sharply, and measles outbreaks became more common due to a lack of herd immunity.

Research scandals with ethical failures are littered throughout history, but some took place not that long ago.

Some scientists in positions of power have historically mistreated or even abused research participants in order to investigate research problems at any cost. These participants were prisoners, people under their care, or people who otherwise trusted the researchers to treat them with dignity.

To demonstrate the importance of research ethics, we’ll briefly review two research studies that violated human rights in modern history.

During World War II, Nazi doctors performed experiments on concentration camp prisoners without their consent. These experiments were inhumane and resulted in trauma, permanent disabilities, or death in many cases.

After some Nazi doctors were put on trial for their crimes, the Nuremberg Code of research ethics for human experimentation was developed in 1947 to establish a new standard for human experimentation in medical research.

In the Tuskegee syphilis study, which began in 1932, the US Public Health Service recruited African American men with syphilis under the pretense of providing free medical care. In reality, the actual goal was to study the effects of the disease when left untreated, and the researchers never informed participants about their diagnoses or the research aims.

Although participants experienced severe health problems, including blindness and other complications, the researchers only pretended to provide medical care.

When treatment became possible in 1943, 11 years after the study began, none of the participants were offered it, despite their health conditions and high risk of death.

Ethical failures like these resulted in severe harm to participants, wasted resources, and lower trust in science and scientists. This is why all research institutions have strict ethical guidelines for performing research.

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Measures of central tendency
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Thematic analysis
  • Cohort study
  • Peer review
  • Ethnography

Research bias

  • Implicit bias
  • Cognitive bias
  • Conformity bias
  • Hawthorne effect
  • Availability heuristic
  • Attrition bias
  • Social desirability bias

Ethical considerations in research are a set of principles that guide your research designs and practices. These principles include voluntary participation, informed consent, anonymity, confidentiality, potential for harm, and results communication.

Scientists and researchers must always adhere to a certain code of conduct when collecting data from others .

These considerations protect the rights of research participants, enhance research validity , and maintain scientific integrity.

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe.

Anonymity means you don’t know who the participants are, while confidentiality means you know who they are but remove identifying information from your research report. Both are important ethical considerations .

You can only guarantee anonymity by not collecting any personally identifying information—for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, or videos.

You can keep data confidential by using aggregate information in your research report, so that you only refer to groups of participants rather than individuals.
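As a minimal sketch of this aggregation step (hypothetical column names, assuming pandas is available), you would publish only group-level summaries rather than row-level records:

```python
# Illustrative only: report aggregate statistics per group so that
# no individual participant can be singled out from the report.
import pandas as pd

df = pd.DataFrame({
    "pid":   ["a1", "a2", "a3", "a4"],   # pseudonymous IDs, never published
    "group": ["treatment", "treatment", "control", "control"],
    "score": [42, 37, 30, 28],
})

summary = df.groupby("group")["score"].agg(["count", "mean", "std"])
print(summary)  # only these group-level figures would appear in the report
```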

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement but a serious ethical failure.

Cite this Scribbr article


Bhandari, P. (2024, May 09). Ethical Considerations in Research | Types & Examples. Scribbr. Retrieved June 27, 2024, from https://www.scribbr.com/methodology/research-ethics/


Ethical Issues in Research

  • Living reference work entry
  • First Online: 05 March 2021

  • Juwel Rana 2 , 3 , 4 ,
  • Segufta Dilshad 2 &
  • Md. Ali Ahsan 5  


The most important human endeavor is the striving for morality in our actions. Our inner balance and even our very existence depend on it. Only morality in our actions can give beauty and dignity to life – Albert Einstein.

Keywords: Ethics; Methodology; Mixed-method research; Observation; Qualitative research; Quantitative research; Research; Research design; Research ethics

Ethics is a set of standards, a code, or a value system, worked out from human reason and experience, by which free human actions are determined as ultimately right or wrong, good or evil. If an action agrees with these standards, it is ethical; otherwise it is unethical.

Scientific research refers to a persistent exercise towards producing new knowledge to unveil a new stream of ideas in academia for humankind.

Research ethics refers to some of the norms that researchers follow to protect rights while developing research strategies and building a trusted relationship between the...




Should unethical research be retracted?

By William Bülow

It is no news that researchers sometimes make mistakes, or that some of us even commit fraudulent acts, such as data fabrication or falsification. Despite precautionary measures, such as careful editorial practices and peer-review, fraudulent or flawed research papers sometimes get published. When this happens, these papers should be retracted. This is crucial in order to ensure integrity in research and to retain trust, both among researchers and from the general public.

Recent work in research ethics has come to devote important attention to how to improve the system of paper retraction. The retraction of a paper is often very stigmatizing, which in turn might discourage self-reported or author-initiated retractions due to honest mistakes. This is of course very unfortunate. There is an ongoing discussion of how to reform the retraction system in order to create a change of attitudes towards retractions which, as a result, could encourage author-initiated retractions of this sort. Among suggested measures are increased transparency and openness about the reason for retraction, as well as who initiated it. This type of information is often lacking in retraction notices.

We very much welcome this engagement with the retraction system and suggestions for how it might be improved. However, in our forthcoming paper “Why unethical research should be retracted,” my co-authors and I focus on an issue that, at least to our knowledge, has received much less attention in this context. This is the issue of whether papers that report unethical research – for example, research performed without appropriate concern for the moral rights and interests of the research participants – should also be retracted. If yes, why is that so?

As we began to think about this normative issue we became interested in exploring to what extent this issue is acknowledged in the retraction policies of academic publishers. We therefore performed a literature survey. As we point out in our paper, many journals do not have explicit policies for how to handle unethical research. As a group of ethicists with an interest in publication ethics we then began to think about what would be an appropriate policy for this matter.

In our paper we identify and discuss four normative arguments for why unethical research should be retracted. The first argument is that it is wrong in itself to publish unethical research and that any such publications should therefore be retracted. As our analysis shows, however, this is a rather weak argument. The second argument is that it is important to retract such papers in order to communicate that unethical research is unacceptable and under no circumstances should be encouraged. The third argument that we have identified is that retractions are appropriate for unethical research because such research calls the trustworthiness of the whole paper into question. We argue that even if this type of argument provides a reason to start an inquiry about the paper’s scientific validity, it does not in itself provide a reason to retract the paper. The fourth argument, which we refer to as the argument from complicity, suggests that failure to retract papers that report deeply unethical research might constitute a form of complicity, since the publisher makes it possible for the researchers to complete their unethical research by bringing it all the way to publication of results. Having discussed all of these arguments, our conclusion is that there is a strong case for retracting papers resulting from unethical research. However, we grant that there might be exceptions – for instance, if the data from unethical research could promote an important good. In these cases other researchers should get access to the data, methods, and records needed for replication, follow-ups, or confirmatory research with a different design, even though the paper is retracted. Here some kind of digital repository controlled by the journal might be a solution.

As we acknowledge in the paper, there is an issue of what should count as unethical research, and whether all instances of unethical research call for retraction. In the light of our discussion of this matter, our normative analysis and our empirical survey, we end our paper with the following recommendation for a policy that scientific and scholarly journals can adapt:

  • Papers that are the result of either misconduct or seriously unethical research should be retracted, with only their abstracts left online, always clearly marked as ‘retracted’.
  • Retraction notices should be informative and easily accessible. Not only should they spell out that the reason for retraction is that the research is unethical (when that is the case) but also in what way and to what extent it is unethical.
  • Corrections should only be used for honest errors.
  • An editorial note, tied to the paper, is advisable when a paper is unethical in some respect but this is considered insufficient ground for retraction.

Paper title: Why unethical research should be retracted

Author(s): William Bülow 1, Tove Godskesen 2,3, Gert Helgesson 4, Stefan Eriksson 3

  • Department of Philosophy, Stockholm University
  • The Department of Health Care Sciences, Ersta Sköndal Bräcke University College
  • Centre for Research Ethics & Bioethics, Uppsala University, Sweden
  • Stockholm Centre for Healthcare Ethics, Dept. of Learning, Informatics, Management and Ethics, Karolinska Institutet, Stockholm, Sweden

Competing interests: None



Unethical human research in the field of neuroscience: a historical review

  • PMID: 29460160
  • DOI: 10.1007/s10072-018-3245-1

Understanding the historical foundations of ethics in human research is key to illuminating future human research and clinical trials. This paper gives an overview of the most remarkable unethical human research and of how past misconduct helped develop ethical guidelines on human experimentation, such as the Nuremberg Code of 1947 following WWII. Unethical research in the field of neuroscience also proved to be incredibly distressing: participants were often left with life-long cognitive disabilities. This emphasizes the importance of implementing strict rules and ethical guidelines in neuroscience research that protect participants and respect their dignity. The experiments conducted by the German Nazis in the concentration camps during WWII are probably the most inhumane and brutal ever conducted. The Nuremberg Code of 1947, one of the few positive outcomes of the Nazi experiments, is often considered the first document to set out ethical regulations of human research. It consists of numerous necessary criteria; to highlight a few, the subject must give informed consent, there must be a concrete scientific basis for the experiment, and the experiment should yield positive results that cannot be obtained in any other way. In the end, we must remember that the interest of the patient must always prevail over the interest of science or society.

Keywords: Ethics; History; Neuroscience; Unethical research.


20 Most Unethical Experiments in Psychology

Humanity often pays a high price for progress and understanding — at least, that seems to be the case in many famous psychological experiments. Human experimentation is a very interesting topic in the world of human psychology. While some famous experiments in psychology have left test subjects temporarily distressed, others have left their participants with life-long psychological issues. In either case, it’s easy to ask the question: “What’s ethical when it comes to science?” Then there are the experiments that involve children, animals, and test subjects who are unaware they’re being experimented on. How far is too far, if the result means a better understanding of the human mind and behavior? We think we’ve found 20 answers to that question with our list of the most unethical experiments in psychology.

  • Emma Eckstein
  • Electroshock Therapy on Children
  • Operation Midnight Climax
  • The Monster Study
  • Project MKUltra
  • The Aversion Project
  • Unnecessary Sexual Reassignment
  • Stanford Prison Experiment
  • Milgram Experiment
  • The Monkey Drug Trials
  • Facial Expressions Experiment
  • Little Albert
  • Bobo Doll Experiment
  • The Pit of Despair
  • The Bystander Effect
  • Learned Helplessness Experiment
  • Racism Among Elementary School Students
  • UCLA Schizophrenia Experiments
  • The Good Samaritan Experiment
  • Robbers Cave Experiment


5 Unethical Medical Experiments Brought Out of the Shadows of History

Prisoners and other vulnerable populations often bore the brunt of unethical medical experimentation.


Most people are aware of some of the heinous medical experiments of the past that violated human rights. Participation in these studies was either forced or coerced under false pretenses. Some of the most notorious examples include the experiments by the Nazis, the Tuskegee syphilis study, the Stanford Prison Experiment, and the CIA’s LSD studies.

But there are many other lesser-known experiments on vulnerable populations that have flown under the radar. Study subjects often didn’t — or couldn’t — give consent. Sometimes they were lured into participating with a promise of improved health or a small amount of compensation. Other times, details about the experiment were disclosed but the extent of the risks involved wasn’t.

This perhaps isn’t surprising, as doctors who conducted these experiments were representative of prevailing attitudes at the time of their work. But unfortunately, even after informed consent was introduced in the 1950s, disregard for the rights of certain populations continued. Some of these researchers’ work did result in scientific advances — but they came at the expense of harmful and painful procedures on unknowing subjects.

Here are five medical experiments of the past that you probably haven’t heard about. They illustrate just how far the ethical and legal guidepost, which emphasizes respect for human dignity above all else, has moved.

The Prison Doctor Who Did Testicular Transplants

From 1913 to 1951, eugenicist Leo Stanley was the chief surgeon at San Quentin State Prison, California’s oldest correctional institution. After performing vasectomies on prisoners, whom he recruited through promises of improved health and vigor, Stanley turned his attention to the emerging field of endocrinology, which involves the study of certain glands and the hormones they regulate. He believed the effects of aging and decreased hormones contributed to criminality, weak morality, and poor physical attributes. Transplanting the testicles of younger men into those who were older would restore masculinity, he thought.  

Stanley began by using the testicles of executed prisoners — but he ran into a supply shortage. He solved this by using the testicles of animals, including goats and deer. At first, he physically implanted the testicles directly into the inmates. But that had complications, so he switched to a new plan: He ground up the animal testicles into a paste, which he injected into prisoners’ abdomens. By the end of his time at San Quentin, Stanley did an estimated 10,000 testicular procedures.

The Oncologist Who Injected Cancer Cells Into Patients and Prisoners

During the 1950s and 1960s, Sloan-Kettering Institute oncologist Chester Southam conducted research to learn how people’s immune systems would react when exposed to cancer cells. In order to find out, he injected live HeLa cancer cells into patients, generally without their permission. When patient consent was given, details around the true nature of the experiment were often kept secret. Southam first experimented on terminally ill cancer patients, to whom he had easy access. The result of the injection was the growth of cancerous nodules, which led to metastasis in one person.

Next, Southam experimented on healthy subjects, which he felt would yield more accurate results. He recruited prisoners, and, perhaps not surprisingly, their healthier immune systems responded better than those of cancer patients. Eventually, Southam returned to infecting the sick and arranged to have patients at the Jewish Chronic Disease Hospital in Brooklyn, NY, injected with HeLa cells. But this time, there was resistance. Three doctors who were asked to participate in the experiment refused, resigned, and went public.

The scandalous newspaper headlines shocked the public, and legal proceedings were initiated against Southam. Some in the scientific and medical community condemned his experiments, while others supported him. Initially, Southam’s medical license was suspended for one year, but it was then reduced to probation. His career continued to be illustrious, and he was subsequently elected president of the American Association for Cancer Research.

The Aptly Named ‘Monster Study’

Pioneering speech pathologist Wendell Johnson suffered from severe stuttering that began early in his childhood. His own experience motivated his focus on finding the cause, and hopefully a cure, for stuttering. He theorized that stuttering in children could be impacted by external factors, such as negative reinforcement. In 1939, under Johnson’s supervision, graduate student Mary Tudor conducted a stuttering experiment, using 22 children at an Iowa orphanage. Half received positive reinforcement. But the other half were ridiculed and criticized for their speech, whether or not they actually stuttered. This resulted in a worsening of speech issues for the children who were given negative feedback.

The study was never published due to the multitude of ethical violations. According to The Washington Post, Tudor was remorseful about the damage caused by the experiment and returned to the orphanage to help the children with their speech. Despite his ethical mistakes, the Wendell Johnson Speech and Hearing Clinic at the University of Iowa bears Johnson's name and is a nod to his contributions to the field.

The Dermatologist Who Used Prisoners As Guinea Pigs

One of the biggest breakthroughs in dermatology was the invention of Retin-A, a cream that can treat sun damage, wrinkles, and other skin conditions. Its success led to fortune and fame for co-inventor Albert Kligman, a dermatologist at the University of Pennsylvania. But Kligman is also known for his nefarious dermatology experiments on prisoners that began in 1951 and continued for around 20 years. He conducted his research on behalf of companies including DuPont and Johnson & Johnson.

Kligman’s work often left prisoners with pain and scars as he used them as study subjects in wound healing and exposed them to deodorants, foot powders, and more for chemical and cosmetic companies. Dow once enlisted Kligman to study the effects of dioxin, a chemical in Agent Orange, on 75 inmates at Pennsylvania's Holmesburg Prison. The prisoners were paid a small amount for their participation but were not told about the potential side effects.

In the University of Pennsylvania’s journal, Almanac, Kligman’s obituary focused on his medical advancements, awards, and philanthropy. There was no acknowledgement of his prison experiments. However, it did mention that as a “giant in the field,” he “also experienced his fair share of controversy.”

The Endocrinologist Who Irradiated Prisoners

When the Atomic Energy Commission wanted to know how radiation affected male reproductive function, they looked to endocrinologist Carl Heller. In a study involving Oregon State Penitentiary prisoners between 1963 and 1973, Heller designed a contraption that would radiate their testicles at varying amounts to see what effect it had, particularly on sperm production. The prisoners also were subjected to repeated biopsies and were required to undergo vasectomies once the experiments concluded.

Although study participants were paid, it raised ethical issues about the potential coercive nature of financial compensation to prison populations. The prisoners were informed about the risks of skin burns, but likely were not told about the possibility of significant pain, inflammation, and the small risk of testicular cancer.


The high cost of medical whistleblowing


If you worked in medicine and saw something unethical, would you speak up? Would you report your boss or colleagues?    

That’s the situation that confronted Carl Elliott, a medical school graduate and philosophy professor who at the time was working as a bioethicist at the University of Minnesota Medical School.

He blew the whistle on his employer after a vulnerable man died by suicide while enrolled in a psychiatric drug study. After being ignored by university officials and dropped by colleagues, Elliott was vindicated years later when a state inquiry corroborated his concerns.  

The experience left him shaken and curious. What happens to other whistleblowers?   


In his new book, “The Occasional Human Sacrifice: Medical Experimentation and the Price of Saying No,” Elliott interviews the people who spoke up against wrongdoing in six medical research studies.

They range from the public health social worker who exposed the Tuskegee syphilis study to the young doctor who blew the whistle on a Seattle cancer center for enrolling uninformed patients in an experimental transplant study that was more likely to kill them than a standard treatment.  

MPR News guest host Euan Kerr talked with Elliott about what motivates whistleblowers, the isolation and hopelessness that accompany speaking out, and what could make it easier to report and stop unethical medical research.

Carl Elliott is a professor in the Department of Philosophy at the University of Minnesota and previously a professor of bioethics in the University of Minnesota Medical School. He writes for academic journals and publications such as The New Yorker, The Atlantic Monthly and The New York Times. He’s written seven books including “Better Than Well: American Medicine Meets the American Dream” and “White Coat, Black Hat: Adventures on the Dark Side of Medicine.” His most recent book, published in May, is about whistleblowers in medical research, “The Occasional Human Sacrifice: Medical Experimentation and the Price of Saying No.”


A Study of Unethical Research Practices


Pooja Umashankar

Independent

Date Written: April 15, 2022

Ethics is a set of principles for right conduct in a particular field, and there are likewise certain ethics to be followed while writing a research paper. Research ethics validate the study conducted by the researcher and make the work authentic and original. As with any set of norms and rules, there are ways of deviating from research ethics; these deviations are known as unethical research practices. Some unethical research practices include plagiarism, duplicate publication, and authorship conflict. Unethical research practices create a negative impact on the field of research, and they need to be controlled and penalised. Research not only spreads knowledge in society but also influences public opinion to a great extent. Thus it is essential that research be free from unethical means; only when research is conducted ethically can it be a foundation for the development of society. This paper deals with the various types of unethical research practices and their impact, and suggests ways in which these practices can be controlled.

Keywords: Ethics, research, plagiarism, copyright

JEL Classification: z10


New Research Says Having a Black Teacher Reduces Special Education Referrals for Black Students

The reduction in special education referrals was most significant among economically disadvantaged Black boys.


As teachers, we all strive to create an environment that fosters fair opportunities for all students. In our schools today, diversity among students is common, yet the demographics of certain classrooms, especially special education (SPED) classrooms, are concerning. White teachers often dominate schools, making us wonder why special education classes are sometimes the most racially diverse. How does this discrepancy arise? Is it mere coincidence, or are there underlying biases and systemic structures influencing which students are referred to these programs? What if the racial makeup of our teaching staffs could influence student success more than we realized? Recent findings from a study in North Carolina provide some compelling insights into these questions.

Having a Black teacher reduces special education referrals for Black students.

This research conducted by Cassandra M.D. Hart and Constance A. Lindsay explores the impact of teacher-student racial congruence on the identification of Black students for discretionary educational services, specifically gifted and special education programs. Are there underlying biases influencing which students are referred to these programs? Here’s what their research has to say:

Key findings from Hart and Lindsay (2024)

  • Reduction in special education referrals for Black students: The study demonstrates that Black students matched with Black teachers are significantly less likely to receive referrals to special education compared to their peers with teachers of other races. This effect is especially pronounced among economically disadvantaged Black boys.
  • Impact on disability categories with high discretion: The findings highlight a stronger relationship for disabilities that have a more discretionary component in their identification, such as specific learning disabilities. This suggests that the teacher’s race can play a critical role in the decision-making process for referrals, potentially reducing subjective bias in identification.
  • No impact on gifted program identification: Black teachers did not increase the likelihood of identifying Black students for gifted programs. This indicates that teacher-student race may be more significant in preventing unwarranted SPED referrals than in enhancing access to gifted education.
  • Variability based on student characteristics: The study examined how Black teachers’ effects varied among students with different characteristics, such as economic disadvantage and gender. Economically disadvantaged Black boys experienced the most significant reduction in special education referrals, underscoring the importance of considering student background in educational strategies.

Can we trust this research?

Not all research measures up equally! Here’s what our We Are Teachers “Malarkey Odometer” says when it comes to this publication based on four key factors.

  • Peer-reviewed? Yes! While these data come from 2007 through 2013, this manuscript likely went through many rounds of the peer-review process.
  • Sample size: Their sample size is huge! They have an n = 408,959 for their gifted and talented portion of the study and an n = 546,433 for the SPED portion. This study has huge statistical power (see the sketch after this list for what that means in practice)!
  • Researchers’ credentials: Hart and Lindsay have amassed over 6,000 citations in the academic field, even though they are considered fairly new academics. This manuscript was published in the high-impact American Educational Research Journal, a dream for any researcher.
  • Methodology: This is a “semi”-causal study. Since random assignment based on race is unethical, researchers employed a “quasi-experimental” approach to study outcomes. This means they looked for naturally occurring situations that approximate a controlled experiment. They also used data from North Carolina public schools, where there is a significant, but varying, presence of Black teachers. With all methodology considered, these researchers utilized the strongest tools they could in this situation.
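To make the claim about statistical power concrete, here is a rough sketch of my own (not taken from Hart and Lindsay; the 50/50 group split, the significance level, and the example referral rates are all hypothetical) showing the smallest effect a sample of this size could reliably detect:

```python
# Rough power illustration for a sample of ~546,433 (hypothetical 50/50 split).
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

n_total = 546_433
n_per_group = n_total // 2  # hypothetical even split between two groups

# Smallest standardized effect (Cohen's h) detectable with 80% power at
# alpha = 0.05 in a two-sided, two-proportion comparison.
min_h = NormalIndPower().solve_power(
    nobs1=n_per_group, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"Minimum detectable effect size (Cohen's h): {min_h:.4f}")

# For context: the effect size implied by a hypothetical shift in referral
# rates from 10.0% to 10.5% is already larger than that threshold.
print(f"Cohen's h for 10.0% vs 10.5%: {proportion_effectsize(0.105, 0.10):.4f}")
```

In other words, with hundreds of thousands of observations, even very small differences in referral rates would not be missed by chance alone.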

What does this mean for teachers?

The findings suggest that the race of the person who stands in front of the classroom can significantly impact the educational trajectory of Black students. But how can teachers apply these findings?

  • Advocate for diversity: Promote and support initiatives in your school to hire and retain Black educators, or advocate for a Grow Your Own program to start in your district. A diverse teaching staff provides crucial role models and enhances cultural competence within the school community.
  • Reflect on bias: All educators should engage in self-reflection to identify and address their own biases in student interactions and evaluations. Participate in professional development opportunities focused on cultural competency and anti-racist teaching practices to minimize biased decisions.
  • Engage in policy changes: Join efforts to advocate for policies that advance racial equity in teacher recruitment, hiring practices, and ongoing professional development. Encourage your school district to implement standards that prioritize diversity and inclusion.
  • Dr. Constance Lindsay told We Are Teachers: “All teachers can benefit from having diverse colleagues in service of improving student outcomes, particularly novice teachers.”

As educators, our role extends beyond academics: We also shape an equitable and inclusive educational environment. Hart and Lindsay’s (2024) findings highlight that our teaching staffs’ composition profoundly influences student outcomes and opportunities. I know it’s easy to think, “Well, it’s not my responsibility to change the diversity of the teacher workforce,” but it is your responsibility to check your bias. By reflecting, advocating for systemic changes, and embracing diversity, we make educational equity a reality, not an aspiration. Let’s be the educators who not only wonder about change but also enact it, recognizing and nurturing every child’s potential.



Monitoring Employees Makes Them More Likely to Break Rules

  • Chase Thiel,
  • Julena M. Bonner,
  • David Welsh,
  • Niharika Garud


Researchers found that when workers know they’re being surveilled, they often feel less responsible for their own conduct.

As remote work becomes the norm, more and more companies have begun tracking employees through desktop monitoring, video surveillance, and other digital tools. These systems are designed to reduce rule-breaking — and yet new research suggests that in some cases, they can seriously backfire. Specifically, the authors found across two studies that monitored employees were substantially more likely to break rules, including engaging in behaviors such as cheating on a test, stealing equipment, and purposely working at a slow pace. They further found that this effect was driven by a shift in employees’ sense of agency and personal responsibility: Monitoring employees led them to subconsciously feel less responsibility for their own conduct, ultimately making them more likely to act in ways that they would otherwise consider immoral. However, when employees feel that they are being treated fairly, the authors found that they are less likely to suffer a drop in agency and are thus less likely to lose their sense of moral responsibility in response to monitoring. As such, the authors suggest that in cases where monitoring is necessary, employers should take steps to enhance perceptions of justice and thus preserve employees’ sense of agency.

In April 2020, global demand for employee monitoring software more than doubled. Online searches for “how to monitor employees working from home” increased by 1,705%, and sales for systems that track workers’ activity via desktop monitoring, keystroke tracking, video surveillance, GPS location tracking, and other digital tools went through the roof. Some of these systems purport to use employee data to improve wellbeing — for example, Microsoft is developing a system that would use smart watches to collect data on employees’ blood pressure and heart rate, producing personalized “anxiety scores” to inform wellness recommendations. But the vast majority of employee monitoring tools are focused on tracking performance, increasing productivity, and deterring rule-breaking.


  • Chase Thiel is the Bill Daniels Chair of Business Ethics and an associate professor of management at the University of Wyoming’s College of Business. His research examines causes of organizational misconduct through a behavioral lens, characteristics of moral people, and the role of leaders in the creation and maintenance of ethical workplaces.
  • Julena M. Bonner is an Associate Professor of management in the Marketing and Strategy Department of the Jon M. Huntsman School of Business at Utah State University. She received her PhD in Management from Oklahoma State University. Her research interests include behavioral ethics, ethical leadership, moral emotions, and workplace deviance. See her faculty page here.
  • John Bush is an Assistant Professor of Management in the College of Business at the University of Central Florida. His research focuses on employee ethicality and performance in organizations.
  • David Welsh is an associate professor in the Department of Management and Entrepreneurship at Arizona State University’s W.P. Carey School of Business. He holds a Ph.D. in Management from the University of Arizona. His research focuses primarily on issues related to unethical behavior in the workplace. See his faculty page here.
  • Niharika Garud is an associate professor in the Department of Management and Marketing at University of Melbourne’s Faculty of Business & Economics. Her research focuses primarily on understanding management of people, performance, and innovation in organizations. See her faculty page here.



Guest Essay

Political Scientists Want to Know Why We Hate One Another This Much


By Thomas B. Edsall

Mr. Edsall contributes a weekly column from Washington, D.C., on politics, demographics and inequality.

Who among us are the most willing to jettison democratic elections? Which voters not only detest their political adversaries but also long for their destruction?

These questions are now at the heart of political science.

Five scholars have capitalized on new measurement techniques to identify partisan sectarian voters, a category that they said “does indeed predict antidemocratic tendencies.”

In their recent paper “Partisan Antipathy and the Erosion of Democratic Norms,” Eli Finkel of Northwestern, James Druckman of the University of Rochester, Alexander Landry of Stanford, Jay Van Bavel of N.Y.U. and Rick H. Hoyle of Duke made the case that earlier studies of partisan hostility used ratings of the two parties on a scale of 0 (cold) to 100 (very warm) but that that measure failed to show a linkage between such hostility and antidemocratic views.

The five scholars wrote, “Partisan antipathy is indeed to blame, but the guilty party is political sectarianism,” not the thermometer rating system:

Insofar as people experience othering, aversion and moralization toward opposing partisans, they are more likely to support using undemocratic tactics to pass partisan policies: gerrymandering congressional districts, reducing the number of polling stations in locations that support the opposing party, ignoring unfavorable court rulings by opposition-appointed judges, failing to accept the results of elections that one loses and using violence and intimidation toward opposing partisans.

Who, then, falls into this subset of partisan sectarians?

The authors cited nine polling questions that asked voters to assess their feelings toward members of the opposition on a scale of 1 to 6, with 6 the most hostile.

The first set of questions measured what the authors called othering. The most extreme answers were:

I felt as if they and I are on separate planets. I am as different from them as can be. It’s impossible for me to see the world the way they do.

The second set of questions measured aversion:

My feelings toward them are overwhelmingly negative. I have a fierce hatred for them. They have every negative trait in the book.

The third set of questions measured moralization:

They are completely immoral. They are completely evil in every way. They lack any shred of integrity.

How, then, to identify voters high in antidemocratic views? Representative questions here were: “Democratic/Republican governors should ignore unfavorable court rulings by Republican/Democratic-appointed judges” and “Democrats/Republicans should not accept election results if they lose.”
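As a purely illustrative sketch of how a nine-item battery like the one described above might be combined into a single score (the items are rated 1 to 6; the averaging scheme here is my own assumption, not necessarily the scoring used in the paper):

```python
# Hypothetical scoring sketch for a nine-item partisan-sectarianism battery.
from statistics import mean

def sectarianism_score(othering, aversion, moralization):
    """Each argument is a list of three ratings on a 1-6 scale."""
    subscales = {
        "othering": mean(othering),
        "aversion": mean(aversion),
        "moralization": mean(moralization),
    }
    # Overall score: the mean of the three subscale means (an assumption).
    subscales["overall"] = mean(subscales.values())
    return subscales

# Example respondent: moderate othering, low aversion, low moralization.
print(sectarianism_score([4, 3, 4], [2, 1, 2], [1, 2, 1]))
```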

The Finkel et al. analysis linking partisan sectarianism to antidemocratic views received strong support but not a wholesale endorsement from Nicolas Campos and Christopher Federico, political scientists at the University of Minnesota, who modified the Finkel approach.



The Rationalization of Unethical Research: Revisionist Accounts of the Tuskegee Syphilis Study and the New Zealand “Unfortunate Experiment”

C. Paul conceptualized the idea, in discussion with B. Brookes; reviewed the literature; and wrote the first draft of the article. B. Brookes revised the article for important intellectual content.

Two studies, widely condemned in the 1970s and 1980s—the Tuskegee study of men with untreated syphilis and the New Zealand study of women with untreated carcinoma in situ of the cervix—received new defenses in the 21st century.

We noted remarkable similarities in both the studies and their defenses. Here we evaluate the scientific, political, and moral claims of the defenders.

The scientific claims are largely based on incomplete or misinterpreted evidence and exaggeration of the uncertainties of science. The defenders’ political arguments mistakenly claim that identity politics clouded the original critiques; in fact such politics opened the eyes of the public to exploitation. The moral defenses demonstrate an overreliance on codes of conduct and have implications for research ethics today.

In Dark Medicine: Rationalizing Unethical Medical Research , William LaFleur describes how rationalizations initially masquerade as reasons—but they are insufficient or invalid. 1 Even Nazi physicians experimenting in the concentration camps rationalized their actions. These acts were grossly immoral, yet behind them were moral justifications. Those involved used these justifications to suppress and subdue their moral intuitions. 2 These were rationalizations offered at the time, but such justifications can also be made in the present for the past actions of others, though the motives for doing so are different. We use the idea of rationalization to explore recent revisionist accounts of what have been widely regarded as unethical medical studies.

The Tuskegee syphilis study in Macon County, Alabama, has been described as an egregious case of blatant racism. 3 Nevertheless, it has always had its defenders, such as those who rushed to justify it when the study first came to public attention in the 1970s. 4 In the 21st century, physician Robert White and cultural anthropologist Richard Shweder have proffered a fresh defense, suggesting that the study was neither unethical nor racist. 5 Similarly, by the late 1980s a cervical precancer study in Auckland, New Zealand, was widely considered to be emblematic of unethical research and of the failures of professional self-regulation. 6 This “unfortunate experiment” 7 at the National Women’s Hospital had many similarities to the Tuskegee study. Defense of the unfortunate experiment is not new, 8 but the established accounts of the study, that women with carcinoma in situ (CIS) of the cervix 9 were followed but not treated, have recently been described as misjudged, both factually and ethically. 10

The status afforded these new accounts is important. The Tuskegee and New Zealand studies have provided models of unethical practice for a generation of physicians and researchers. The US Belmont Report was prompted by the Tuskegee study revelations. 11 The New Zealand cervical cancer study was subject to a judicial inquiry and report (the Cartwright Inquiry), 12 and the subsequent recommendations led to wide-ranging changes in that country, including a legally enforceable code of patients’ rights and the appointment of a health and disability commissioner with powers to investigate complaints of breaches of the code. 13

It would be wrong to exaggerate the importance of the new defenses. Though a Lancet Infectious Diseases editorial in 2005 used White’s and Shweder’s revisionist accounts to question whether the Tuskegee study was unethical or racist, 14 these interpretations have not been widely cited. Similarly, though the New Zealand Medical Journal carried an invited editorial by the defender of the cervical cancer study, historian Linda Bryder, 15 and the Royal Society of London published an admiring review of Bryder’s book, 16 elsewhere her work has been strongly criticized. 17 The latest revelation about the involvement of Tuskegee investigator John Cutler in deliberately infecting people with gonorrhea and syphilis in Guatemala in the 1940s should have put an end to defenses of the ethics of those investigators. 18 Nevertheless, as historian Susan Reverby points out about the Tuskegee study, “There is a truth to what actually happened, and trying to understand it does matter.” She says that the revisionists’ arguments should be considered, “if for no other reason than to understand why they are being made.” 19

Here we provide an overview, informed by the reports of the official inquiries, of the 2 studies’ similarities and differences. 20 We draw on evidence from the original inquiries, historical accounts, and reports by the study investigators themselves to evaluate the scientific claims made by the defenders and explain their errors in reasoning. We also explore the political context in which the studies came to attention and the defenders’ moral claims.

Although the studies began in different historical periods (1932 in Alabama, 1966 in New Zealand), they came to light at a similar time (1970s and 1980s), and the revisionist accounts were written at a similar period (the first decade of the 21st century). Examining the accounts together casts a light on the way rationalization of unethical practices in the past takes a particular form in our own day.

OVERVIEW OF STUDIES

In 1932 in Macon County, Alabama, the US Public Health Service (USPHS) started a study of 400 African American men with latent or late syphilis who had not been treated. Men without syphilis (about 200) were also enrolled as control participants. The study was originally designed to last for 6 to 8 months, but it turned into a long-term study that continued for 40 years. The arsenic and bismuth compounds (metal therapy) used for treatment at the time were not offered to the men. The aims of the study were never clearly stated, but they certainly included documenting the natural course of untreated syphilis in African American men and examining whether the disease had a different course according to race. When penicillin became available in the 1940s and 1950s, this was also withheld.

In 1966, gynecologist Herbert Green, at National Women’s Hospital in Auckland obtained approval from the Hospital Medical Committee to withhold the conventional treatment used at the hospital (cone biopsy) from women with CIS of the cervix. His recorded aim was “to attempt to prove that CIS is not a pre-malignant disease.” 21 This entailed following without treatment a selected group of women with a new diagnosis (obtained by a small “punch” biopsy) of CIS. The condition of the women with subsequent positive smears (indicating persistent disease) was monitored with repeated smears and biopsies to check for invasive cancer. Green wrote that he was attempting to “follow indefinitely patients with diagnosed but untreated lesions.” 22 He withheld treatment, for varying lengths of time, from more than 100 women with CIS and microinvasive cancer of the cervix, vagina, and vulva. 23

The similarities between these studies, which took place at different times and in different places, are striking. Both followed and recorded the natural course of untreated disease. They did not seek informed consent for this departure from normal care. Both continued for many years (1932–1972 in Alabama and 1966–1987 in New Zealand). The adverse outcomes for many of the participants were detailed in published reports before the studies were stopped. The investigators attempted to deny some participants treatment elsewhere. 24 Many also received some treatment in the course of the studies. The studies were poorly run. Internal complaints (by Peter Buxton at the USPHS and physicians Bill McIndoe and Jock Mclean at National Women’s Hospital) led to internal inquiries that recommended continuation. 25 Both studies ended only after journalists brought them to public notice. 26 Finally, in neither case were criminal charges filed, but in both some compensation was eventually provided. 27

The studies also had several relevant differences. No ethical review preceded the Tuskegee study, whereas the Hospital Medical Committee performed that function for Green’s study. The natural course of syphilis was followed until death (it seems fairly clear that men who developed symptoms of late syphilis remained untreated). By contrast, when women with CIS developed clinically invasive cancer (though not microinvasive cancer), they were offered treatment. The Tuskegee study clearly deceived participants: they were told they were receiving treatment when they were not. In the cervical cancer study, Green reassured women they did not need treatment to remove the lesion. 28 Green misled his patients because he did not tell them that his view went against prevailing opinion, but arguably he did not deliberately deceive them because he believed what he told them. We have no evidence that the published results of the Tuskegee study were deliberately misrepresented, unlike in New Zealand, where Green misreported emerging results to look more favorable to his hypothesis. 29 Few expressions of concern arose within the USPHS, whereas physicians at National Women’s Hospital made repeated complaints about Green’s study.

Research participants in both studies were relatively powerless: poor rural African Americans and urban public hospital patients in New Zealand’s largest city, including European, Maori, and Pacific Islands women, at a time when women were expected to be passive recipients of reproductive health care. But the men of Macon County, at a time of severe poverty, racial segregation, and almost no access to health services, were much more disadvantaged. 30

Finally, the official inquiries into the 2 studies were very different. For the inquiry into the Tuskegee study, the US Department of Health, Education, and Welfare empaneled citizens representing medicine, law, religion, labor, education, health administration, and public affairs. The investigation was conducted less formally than the Cartwright Inquiry: some panel members went to Macon County to interview some of the men, and public hearings were held in Washington, DC. Set up in August 1972, it reported in April 1973. A minority report by Jay Katz contained stronger criticisms of the study. Subsequently, the Tuskegee panel was criticized for not adequately addressing some important facts, particularly the extent of deception, because panel members had no access to the early documents. 31

By contrast, a district court judge, assisted by legal and medical advisers, conducted the Cartwright Inquiry. Six months of public hearings obtained evidence from numerous international and national experts. The inquiry subpoenaed relevant documents, examined all patients’ medical records, and interviewed 81 patients or relatives. Almost all parties had legal representation. Set up in June 1987, the inquiry reported in July 1988.

Just as the studies themselves had similarities and differences, so do the defenses. Shweder largely confines himself to questioning the received account of the Tuskegee study (and pointing out the myths) and is receptive to debate. White puts forward alternative evidence on empirical matters but accepts some of the received account, including the charge that the men were deceived. Bryder goes further in her discussions of the cervical cancer study, arguing against almost every aspect of the received account.

SCIENTIFIC CLAIMS

The status of these claims has been addressed in detail elsewhere, 32 so we summarize the claims and rebuttal for both studies here. First, the defenders assert that treatment at the time the studies started (and throughout their course) was of uncertain value and hazardous. 33 In fact, the evidence available in 1932 indicated that treatment of latent syphilis with arsenic and bismuth compounds substantially reduced the risk of tertiary syphilis. The relevant USPHS cooperative clinical studies reports on treatment of latent syphilis were published in 1932 and 1933, 34 and the Department of Health, Education, and Welfare panel relied on them. Progression to symptomatic, tertiary syphilis occurred in 2% to 5% of the treated group and 20% to 30% of the untreated group. Treatment with arsenic and bismuth carried significant risk, but was considered less harmful than not treating. 35 In 1947 the Syphilis Study Group of the USPHS regarded treatment with arsenic and bismuth compounds as having proven value, 36 though by then the Venereal Disease Division recommended penicillin because it offered the advantages of brevity and safety, and indirect evidence suggested that it would be effective against latent disease. 37 Nevertheless, the supply of penicillin was still limited. By the mid-1950s, the USPHS recommended that penicillin should be used for all stages of syphilis. 38 It is not plausible to propose that the study investigators, who worked for the USPHS, would have had clinical grounds for withholding penicillin from the Tuskegee men once it became available.

In the 1960s, the effectiveness of cervical screening in reducing the incidence of cervical cancer in a population was still a matter of dispute, 39 but the effectiveness of treatment of CIS was not. In studies among women whose CIS remained untreated, 24% to 75% progressed to invasive cancer over periods of up to 17 years, 40 but treatment reduced the risk of developing invasive cancer to 1% to 2%. 41

The defenders’ claims that both studies began in a climate of substantial scientific uncertainty about the balance of benefit versus harm are based on omission or misreading of the relevant evidence. In modern parlance, the alternatives were not in equipoise: the balance of benefits and harms favored treating according to the scientific standards of the time.

The second defense claim is that other, contemporaneous studies withheld treatment; hence these cases should not be singled out for criticism. A study of untreated latent syphilis, similar to the Tuskegee study, was published in 1948, and although White describes treatment as being “willfully and intentionally denied,” 42 the study’s investigators described their actions as “to permit patients older than 50 years to remain untreated.” 43 The investigators conjectured that the harms of treatment might be greater than the benefits at that age, but the results led them to conclude that at least up to age 60 years, if patients were in good health, treatment should be offered. This study had less potential for bias than the earlier studies but showed similar effectiveness of treatment. 44

Likewise, Bryder claimed that “the international medical press reported on many studies in which doctors followed up cases of CIS without treatment,” that “there were many follow-up studies of women diagnosed with CIN1, CIN2, CIN3 (CIS),” and that “there was no shortage of studies but no definitive answers.” 45 Yet none of the contemporary studies cited by Bryder was similar to Green’s study, except for a study of women with abnormal smears who were aged 20 or younger, an age group for which cervical screening was not recommended and where genuine uncertainty existed. 46 Bryder conflated follow-up studies of women after treatment with studies of follow-up without treatment; moreover, the apparently comparable studies of withheld treatment enrolled women with lesser degrees of abnormality (in particular, cervical dysplasia), used CIS as the endpoint, and obtained the women’s informed consent. 47 Therefore, claims that the studies were not unusual at the time demonstrate a lack of care in making comparisons.

The third claim the defenders make is that morbidity and mortality were not worse because of participation in the studies. Shweder asserts that no treatment would have been available to these men if they had not been part of the study, so they were not harmed by participation. 48 Whether morbidity and mortality were worse for the men in the Tuskegee study does depend on whether they would have received treatment outside the study. Nevertheless, the conclusion of the investigators themselves was that the men were harmed. In the early years of the study, published comparisons of the untreated Tuskegee men with treated men led the authors to conclude that

adequate antisyphilitic treatment prevented all forms of clinical relapse during the first 15 years of infection, whereas only one fourth of the Negroes with untreated syphilis were normal. 49

This demonstrates that the investigators believed that treatment was effective, and extrapolation from later studies makes it certain that lack of treatment was seriously harmful to the Tuskegee men.

Bryder’s case for the cervical cancer study is built around Green’s defense at the inquiry that his “conservative management” had been a “fortunate programme” because unnecessary surgery (removal of the lesion) was avoided. 50 Bryder agrees that a later analysis showed harm in this approach, but asserts that this knowledge was not available at the time. 51 In fact, ample evidence existed that women in Green’s study did worse than those who received standard treatment (cone biopsy) at the same hospital, starting with Green’s own publication in 1974. 52 Moreover, far from benefiting from being spared treatment, many of the women in Green’s care were required to undergo multiple biopsies (each under general anesthetic) to check for invasion, 53 more than 30 developed cancer, and at least 8 died. 54 In both studies the relevant material is ignored or misinterpreted by the defenders.

POLITICAL CLIMATE

The revisionist accounts can be seen as part of a reassessment of, or backlash against, the change in intellectual climate between the 1960s and the 1990s, encompassing feminism and Black studies. 55 A common idea in these movements is that subtle oppression and exploitation are ubiquitous, and in this period activists identified abuses in many areas of modern society that had traditionally been perceived as benign. The backlash emphasizes the excesses of these movements or misreads what is at stake. Surveying the scene in the United States, Daniel Rodgers observes that some of those reacting against these new movements saw in them nothing short of a collapse of truth. Abandon the ground of moral reasoning, the argument goes, and “all that was left was the play of power and identity politics.” 56 In line with such critiques, the defenders of the original studies contend that the politics of oppression originally led African Americans and feminists to misread the scientific evidence to advance political ends. They claim that political activists sought to rescue the untold stories of poor African American men and of unsuspecting women.

To White, Shweder, and Bryder, these stories are suspect: they flow from “rhetoric” not “reason,” 57 or from emotion rather than reason. Bryder argues that the Cartwright Inquiry was driven by a feminist agenda because Cartwright said she conducted the inquiry “as a feminist and as a lawyer.” 58 Bryder, in an interview, said that Judge Cartwright misunderstood the evidence, was confused, and got it wrong: “She was interviewing women who had cancer—now it’s a horrible disease and she is a very sympathetic listener and I think she was taken by that.” 59 Of the defenders themselves, White is African American and Bryder is a woman; perhaps such defenses are more convincing coming from members of the affected groups.

It was indeed the case that, at times, identity politics got out of hand in the responses to publicity about both studies. The wrongs revealed were sometimes exaggerated to create new myths. Thus, a survey in 1999 found that 80% of African Americans believed that the Tuskegee men had been injected with syphilis. 60 Furthermore, because of the complexity of both syphilis and cervical cancer, it is unsurprising that many people wrongly assumed, after the studies were exposed, that all latent syphilis would progress to death without treatment or that all untreated CIS would progress to invasive cervical cancer, when in fact over 20 years only about 30% to 50% of people with either disease would have had their condition progress. 61

In the New Zealand case it did not help that the Cartwright Inquiry’s long title referred to “allegations concerning the treatment of cervical cancer”; 62 in fact it was the precursor of this condition that Green failed to treat. This may have led Iain Chalmers, a prominent epidemiologist, to his initial belief that all the women in Green’s study had cervical cancer (not CIS) at the outset. When Chalmers discovered this was not so, he wanted to clear a “totally unjustified slur” that arose when Green’s study was

referred to in the same breath as the scandalous long-term study of untreated syphilis in poor black sharecroppers in Tuskegee, Alabama. 63

Chalmers may have assumed a qualitative difference between the studies: in one, disease was left untreated and in the other, a precursor, but not the disease of interest itself, was left untreated.

The closing submission to the Cartwright Inquiry by the Ministry of Women’s Affairs expanded Green’s wrongs to the whole of the medical profession:

Ultimately the issues are who controls medicine and how, about who benefits from it and who are its victims. Thus as so many witnesses have clearly stated, the central issue, above all others, is power. 64

Statements such as this caused some people to think that the inquiry was simply a vehicle to wrest power away from the medical profession. An early defender of Green responded to criticism of her own claims in a piece entitled “Have You Been Burned at the Stake Yet?” 65

The rhetoric of those who made exaggerated claims about the wrongs of the Tuskegee and New Zealand studies may have encouraged recent commentators to see African American and feminist health activists as wrong on all counts. These defenders therefore suspect that the studies themselves received unfair press coverage at the hands of polemicists, who were not to be trusted. In an effort to rescue the physicians found to be at fault in both studies, the defenders highlight evidence that might exculpate them and explain their actions. For White, this involves emphasizing the role of African American health workers; Bryder highlights the agency of Green’s patients (meanwhile ignoring the fact that although they had been referred to his care for the best available treatment, they were unknowingly denied it). Yet neither of these positions materially supports White’s and Bryder’s claims that the original critiques of the studies were clouded by politics and that once the clouds were cleared little of substance remained.

Defending the studies may also speak to a desire to create a more comfortable medical and national narrative in which progress and the uncertainties of science are the touchstones and medicine evolves in step with society. As Reverby writes, “how easily medical uncertainty masks ethical blindness.” 66 In both Shweder’s and Bryder’s accounts, claimed uncertainty is used to exculpate the key actors.

MORAL ARGUMENTS

The defenders’ key moral claims are that the studies, at the time, did not violate medical research codes; that they were medically acceptable; and that, for Tuskegee, participants were not harmed because treatment was unavailable outside the study for such men.

By contrast, the official inquiries concluded that the studies were ethically wrong in proceeding without informed consent and in inflicting harms that could have been anticipated, for no commensurate benefit. Katz argued in his minority report in 1973 that the most fundamental reason for condemning the Tuskegee study at its inception was not that all participants should have been treated—some might not have wished to be treated—

but rather that they were never fairly consulted about the research project, its consequences for them, and the alternatives available to them. 67

Similarly, Cartwright concluded that

had patients been . . . informed of the types of treatment available to them, informed of the risks of procedures which were not conventional, definitive treatment for carcinoma in situ, and given the opportunity freely to decide whether or not to be part of the trial, then the trial could not be so severely criticised. 68

The defenders argue that standards of consent were not violated because “in 1932 the concept of informed consent had not even been imagined by medical professionals,” 69 no written standards for experimentation existed in the 1930s, 70 and the existing Helsinki Declaration gave “a strong exemption for patient consent in therapeutic research.” 71 Furthermore, they argue that the behavior of the researchers was widely acceptable at the time. Shweder points out that doctors in the 1930s did not disclose information because of a benign paternalism that kept their eyes “fixed on some imagined greater good.” 72 Bryder argues that Green’s relationship with his patients was little different from that of other medical professionals of his era. She writes,

Like others, Green stressed a confident approach: “. . . if the physician does not worry too much about the disease then neither will the patient!” 73

The defenders are wrong in their interpretation of the ethical codes, wrong in not knowing about codes and principles of conduct in existence before these studies were instigated, and wrong that this behavior was widely acceptable at the time. In the United States, attempts to regulate medical research and to develop codes of practice began years before the Tuskegee study started. 74 In 1907, a group that included the philosopher William James urged the passage of a bill to ensure “that no experiment should be performed in any other human being without his intelligent written consent.” 75 In 1928, Harvard professor of medicine Richard C. Cabot argued that

experimentation upon a human being without his consent and without the expectation of benefit to him is without any ethical justification. 76

Ethical guidelines for human experimentation, requiring consent, were issued by the government in Germany in 1931. 77 The Declaration of Helsinki was promulgated before Green’s study, but Bryder misreads the declaration in claiming that it gives “strong exemption” for patient consent. 78

Even if the defenders were right that the actions of the Tuskegee and New Zealand researchers were not forbidden by codes of research ethics, and were broadly accepted at the time, the question would become whether and how one should make moral judgments in the absence of agreed sanctions.

The defenders claim that research ethics cannot be judged except according to written codes of conduct. This cannot be the case. Historically, codes of conduct worked to codify existing practices—or existing aspirations. The Helsinki Declaration explicitly mentions older professional obligations: “The health of my patients shall be my first consideration.” 79 From Hippocrates to Claude Bernard in the 19th century to our own day, tradition gives primacy to a morality for medicine as the science and art of healing. For centuries, medicine has been understood by practitioners and lay people to involve not only knowledge and skills, but also the application of these in the service of important moral ends. Drawing on this long tradition, Edmund Pellegrino describes the medical profession as a moral community by virtue of several characteristics:

The inequality of the medical relationship, the nature of medical decisions, the nature of medical knowledge and the ineradicable moral complicity of the physician in whatever happens to his patient. 80

Contemporary physicians made moral critiques of both studies during their progress. Two physicians wrote to investigators in the Tuskegee study after reading published reports of the study. In 1955 Count D. Gibson wrote,

It seems to me that the continued observation of an ignorant individual suffering with a chronic disease for which therapeutic measures are available, cannot be justified on the basis of any accepted moral standard. 81

He mentioned Hippocrates, Maimonides, and the American Medical Association Code of Ethics. In 1964 Irwin Schatz wrote, “I am utterly astounded by the fact that physicians allow patients with potentially fatal disease to remain untreated when effective therapy is available.” 82 He called on the physicians associated with the study to reevaluate their moral judgments.

Similarly, in 1973 Green’s colleagues McIndoe and McLean wrote to the superintendent of National Women’s Hospital outlining their concerns about the harms to patients of Green’s management. 83 These complaints led to an internal inquiry, and the subsequent publication of the results of Green’s study by McIndoe and McLean, together with Ron Jones and Peter Mullins, 84 eventually brought the study to public notice.

A related claim is that these studies might be unethical by today’s standards, but they need to be understood in the sociocultural context and medical culture of their time. Certainly good historical inquiries can shed light on the social and medical environments that contributed to the formulation and continuation of these projects. In each of our examples, historical inquiry has demonstrated the ways the existing power structures made it very difficult to criticize the studies or to allow the criticism to lead to change. 85 In the case of the poor African American sharecroppers in the American South, even African American doctors and nurses did not raise criticisms. At National Women’s Hospital, with female patients and male doctors, there is no record of a patient or a nurse complaining about Green. Formal complaints came only from physicians—and only from those outside the academic hierarchy.

Shweder also argues that the men with syphilis in the Tuskegee study were not worse off than their counterparts outside the study in the same situation in Macon County, because treatment was inaccessible to such people at that time. 86 David Rothman pointed out the dangers in experiments that build on social deprivation, including the fact that they are likely to manipulate the consent of participants. 87 And indeed this happened. Because the men were deceived into believing they were receiving treatment, they had no chance even to try to obtain medical care. There might have been justification for a short-term trial (with consent), investigating the conjectured harms of lack of treatment in African American men with latent and late syphilis and thus providing evidence on which to base treatment of all such men. But this was not the Tuskegee situation, and for the study to continue into the 1940s—let alone later—when outside the study people with syphilis in Macon County were being routinely offered treatment shows that Shweder’s justification cannot hold up.

By the 1970s and 1980s, the people outside the medical profession who brought the studies to public notice did not hesitate to judge them to be wrong. The moral resources they used to discern this were not new and not relative. It was obvious that the participants were treated as means to scientific ends by the investigators; that they were not told the truth, and even repeatedly lied to; and that they were subjected to avoidable harm. Perhaps it is a cramped morality, consisting only of formal obligations or rules and cut off from its source in moral intuitions, that blinds the defenders. 88 In his reflections on experimenting on people, philosopher Hans Jonas cited the Golden Rule in its negative form: do not do unto others what you do not want done unto yourself. 89 The exercise of the moral imagination should enable one to put oneself in the place of the participant in research. For the men in the USPHS, it was 1 step too far to imagine that the poor men of Macon County were just like them. Likewise, Green could not imagine that a woman might wish to make up her own mind about treatment options rather than trust to his (impaired) judgment. When an interviewer asked Tuskegee investigator Sidney Olansky about withholding penicillin (“Wouldn’t you have treated yourselves?”), Olansky replied, “I don’t know . . . It is a trick question.” 90

The 2 studies came to public notice in the ferment of the civil rights movement and the women’s movement. Indeed, the explosion of moral insight and the determination that characterized these movements provided a climate in which the existing power structures could be defied and the unethical practices exposed to public view. Of course the effect of this exposure was not simply a benefit for medicine and research. The legacies of both Tuskegee and the New Zealand study have suffered abuses. 91 Yet it is perhaps a sign of the different times that we live in now, when those social movements are no longer so strong, that the defenders’ arguments seem more acceptable.

A distinguished African American physician, Charles MacDonald, writing in 1974 in the Journal of the National Medical Association , picked up the sense of having one’s eyes opened:

In my opinion the greatest contribution of the Tuskegee Study lies not in the scientific merit of the publications that have emanated from it, but in the anguish and concern its revelation has provoked in the minds of lay persons, physicians, medical investigators and others . . . our entire nation has been stimulated to rethink and redefine our present day position and practices as they relate to human experimentation. 92

CONCLUSIONS

Even though we can appreciate the context in which these arguments in defense of the studies have emerged, they are mistaken in all substantive disagreements with the original critiques. In their attempt to explain and justify past medical research, the defenders have used existing power relations, overreliance on and limited interpretation of codes of conduct, confusion about scientific issues, and exaggeration of the uncertainties of science to make their case. In the end their accounts rely on an impoverished view of ethics: they have become whitewashes for studies that caused real harms.

Both studies came to light when existing power structures were under intense scrutiny from feminism and opposition to racism. Yet when the clouds of identity politics are cleared, we can see that those politics did not affect the substance of the original critiques. Moreover, the political focus on rescuing untold stories of African Americans and women enabled the wider public to see the racism and exploitation with fresh eyes.

Finally, the rationalizations tell us something important about research ethics today. Both the Tuskegee study and the “unfortunate experiment” deserve repeated revisiting because of the serious moral issues at stake. Yet retellings that rely too heavily on the existence and the authority of codes of ethics serve to reinforce a view that such codes are all that is necessary to protect vulnerable research participants. Their existence has not prevented institutional review board approval of studies today that are potentially exploitative. 93 More than rules are required; as moral philosopher Jonathan Glover writes, a code of ethics “should include the imagination to look through the rules to the human reality.” 94 Codes and guidelines are necessary, but they require thoughtful moral interpretation, alert to context. 95 Those who rationalize mistakes made in the past impair our ability to make just decisions today.

Acknowledgments

We thank Jing-Bao Nie for very helpful discussion of the ethical issues and Joanna Manning, Tom Cunningham, and Ron Paterson for helpful comments on a previous draft.

Note. C. Paul was 1 of 3 medical advisers to the Cartwright Inquiry. She assisted in compiling the medical review for the Inquiry.

Human Participant Protection

Institutional review board approval was not needed because no human participants were involved.
