The Classroom | Empowering Students in Their College Journey

The Relationship Between Scientific Method & Critical Thinking

Scott Neuffer

What Is the Function of the Hypothesis?

Critical thinking, the mind's ability to analyze claims about the world, is the intellectual basis of the scientific method. The scientific method can be viewed as an extensive, structured mode of critical thinking that involves hypothesis, experimentation and conclusion.

Critical Thinking

Broadly speaking, critical thinking is any analytical thought aimed at determining the validity of a specific claim. It can be as simple as a nine-year-old questioning a parent’s claim that Santa Claus exists, or as complex as physicists questioning the relativity of space and time. Critical thinking is the point when the mind turns in opposition to an accepted truth and begins analyzing its underlying premises. As American philosopher John Dewey said, it is the “active, persistent and careful consideration of a belief or supposed form of knowledge in light of the grounds that support it, and the further conclusions to which it tends.”

Critical thinking initiates the act of hypothesis. In the scientific method, the hypothesis is the initial supposition, or theoretical claim about the world, based on questions and observations. If critical thinking asks the question, then the hypothesis is the best attempt at the time to answer it using observable phenomena. For example, an astrophysicist may question existing theories of black holes based on his own observations. He may posit a contrary hypothesis, arguing that black holes actually produce white light. It is not a final conclusion, however, as the scientific method requires specific forms of verification.

Experimentation

The scientific method uses formal experimentation to analyze any hypothesis. The rigorous and specific methodology of experimentation is designed to gather unbiased empirical evidence that either supports or contradicts a given claim. Controlled variables are used to provide an objective basis of comparison. For example, researchers studying the effects of a certain drug may provide half the test population with a placebo pill and the other half with the real drug. The effects of the real drug can then be assessed relative to the control group.

In the scientific method, conclusions are drawn only after tested, verifiable evidence supports them. Even then, conclusions are subject to peer review and often retested before general consensus is reached. Thus, what begins as an act of critical thinking becomes, in the scientific method, a complex process of testing the validity of a claim. English philosopher Francis Bacon put it this way: “If a man will begin with certainties, he shall end in doubts; but if he will be content to begin with doubts, he shall end in certainties.”


  • How We Think: John Dewey
  • The Advancement of Learning: Francis Bacon

Scott Neuffer is an award-winning journalist and writer who lives in Nevada. He holds a bachelor's degree in English and spent five years as an education and business reporter for Sierra Nevada Media Group. His first collection of short stories, "Scars of the New Order," was published in 2014.


A Guide to Using the Scientific Method in Everyday Life


The scientific method, the process scientists use to understand the natural world, has the merit of investigating natural phenomena in a rigorous manner. Working from hypotheses, scientists draw conclusions based on empirical data. These data are validated across large numbers of observations and take into account the intrinsic variability of the real world. To people unfamiliar with its jargon and formalities, science may seem esoteric, and that is a huge problem: science invites criticism when it is not easily understood. So why is it important that every person understand how science is done?

Because the scientific method is, first of all, a matter of logical reasoning and only afterwards, a procedure to be applied in a laboratory.

Individuals without training in logical reasoning fall victim more easily to distorted perspectives about themselves and the world. One example is the so-called "cognitive biases": systematic mistakes that individuals make when they try to think rationally, and which lead to erroneous or inaccurate conclusions. People can easily overestimate the relevance of their own behaviors and choices. They can lack the ability to assess the quality of their own performances and thoughts. Unconsciously, they may even end up selecting only the arguments that support their hypotheses or beliefs. This is why the scientific framework should be conceived not only as a mechanism for understanding the natural world, but also as a framework for engaging in logical reasoning and discussion.

A brief history of the scientific method

The scientific method has its roots in the sixteenth and seventeenth centuries. The philosophers Francis Bacon and René Descartes are often credited with formalizing it because they rejected the idea that research should be guided by preconceived metaphysical concepts of the nature of reality, a position that, at the time, was widely supported by their colleagues. In essence, Bacon thought that inductive reasoning based on empirical observation was critical to the formulation of hypotheses and the generation of new understanding: general or universal principles describing how nature works are derived only from observations of recurring phenomena and the data recorded from them. The inductive method was used, for example, by the scientist Rudolf Virchow to formulate the third principle of the famous cell theory, according to which every cell derives from a pre-existing one. The rationale behind this conclusion is that because all observations of cell behavior show that cells are derived only from other cells, the assertion must always be true.

Inductive reasoning, however, is not immune to mistakes and limitations. Referring back to cell theory, there may be rare occasions in which a cell does not arise from a pre-existing one, even though we have not yet observed it; our observations of cell behavior, although numerous, can still benefit from additional observations to either refute or support the conclusion that all cells arise from pre-existing ones. This is where limited observations can lead to erroneous conclusions reasoned inductively. In another example, if one has never seen a swan that is not white, one might conclude that all swans are white, even though we know that black swans do exist, however rare they may be.

The universally accepted scientific method, as it is used in science laboratories today, is grounded in hypothetico-deductive reasoning. Research progresses via iterative empirical testing of formulated, testable hypotheses (themselves formulated through inductive reasoning). A testable hypothesis is one that can be rejected (falsified) by empirical observations, a concept known as the principle of falsification. Initially, ideas and conjectures are formulated. Experiments are then performed to test them. If the body of evidence fails to reject the hypothesis, the hypothesis stands; it stands, however, only until another empirical observation, even a single one, falsifies it. Yet, just as with inductive reasoning, hypothetico-deductive reasoning is not immune to pitfalls: assumptions built into hypotheses can be shown to be false, thereby nullifying previously unrejected hypotheses. The bottom line is that science does not work to prove anything about the natural world. Instead, it builds hypotheses that explain the natural world and then attempts to find the hole in the reasoning (i.e., it works to disprove things about the natural world).

How do scientists test hypotheses?

Controlled experiments

The word “experiment” can be misleading because it implies a lack of control over the process. Therefore, it is important to understand that science uses controlled experiments in order to test hypotheses and contribute new knowledge. So what exactly is a controlled experiment, then? 

Let us take a practical example. Our starting hypothesis is the following: we have a novel drug that we think inhibits the division of cells, meaning that it prevents one cell from dividing into two cells (recall the description of cell theory above). To test this hypothesis, we could treat some cells with the drug on a plate that contains nutrients and fuel required for their survival and division (a standard cell biology assay). If the drug works as expected, the cells should stop dividing. This type of drug might be useful, for example, in treating cancers because slowing or stopping the division of cells would result in the slowing or stopping of tumor growth.

Although this experiment is relatively easy to do, the mere process of doing science means that several experimental variables (like temperature of the cells or drug, dosage, and so on) could play a major role in the experiment. This could result in a failed experiment when the drug actually does work, or it could give the appearance that the drug is working when it is not. Given that these variables cannot be eliminated, scientists always run control experiments in parallel to the real ones, so that the effects of these other variables can be determined.  Control experiments  are designed so that all variables, with the exception of the one under investigation, are kept constant. In simple terms, the conditions must be identical between the control and the actual experiment.     

Coming back to our example: an administered drug is rarely pure; often it is dissolved in a solvent like water or oil. Therefore, the perfect control for the actual experiment would be to administer pure solvent (without the added drug) at the same time and with the same tools, with all other experimental variables (like temperature, as mentioned above) kept the same between the two (Figure 1). Any difference in the effect on cell division in the actual experiment can then be attributed to the drug, because the effects of the solvent were controlled.
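As a toy illustration of this control logic, one might simulate both plates in code. This is a hedged sketch, not a real assay: the starting cell count, division probabilities, and time span are all invented, and the `grow` function is a deliberately crude model of cell division:

```python
import random

def grow(n_cells, division_prob, hours, rng):
    """Crude model: each cell divides with probability division_prob per hour."""
    for _ in range(hours):
        n_cells += sum(1 for _ in range(n_cells) if rng.random() < division_prob)
    return n_cells

rng = random.Random(42)  # fixed seed for reproducibility

# Identical starting conditions on both plates; only the drug differs.
control = grow(n_cells=100, division_prob=0.30, hours=24, rng=rng)  # solvent only
treated = grow(n_cells=100, division_prob=0.05, hours=24, rng=rng)  # solvent + drug

print(control, treated)
```

Because everything except the division probability is held constant between the two calls, any systematic gap between `control` and `treated` plays the role that the drug effect plays in the real experiment.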


In order to provide evidence of the quality of a single, specific experiment, it needs to be performed multiple times under the same experimental conditions. We call these multiple runs "replicates" of the experiment (Figure 2). The more replicates of the same experiment, the more confident the scientist can be about its conclusions under the given conditions. However, multiple replicates under the same experimental conditions are of no help when scientists aim to acquire more empirical evidence to support their hypothesis. Instead, they need independent experiments (Figure 3), in their own lab and in other labs across the world, to validate their results.
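A rough way to see why replicates increase confidence is the standard error of the mean, which shrinks as the number of replicates grows. The cell counts below are invented for illustration:

```python
from math import sqrt
from statistics import mean, stdev

# Invented cell counts from six replicate plates of the same condition.
replicates = [310, 295, 330, 305, 318, 299]

m = mean(replicates)
sem = stdev(replicates) / sqrt(len(replicates))  # standard error of the mean

print(f"mean = {m:.1f} +/- {sem:.1f} cells")
```

Roughly quadrupling the number of replicates halves the standard error; but, as noted above, this tightens confidence only under these exact conditions, and independent experiments are still needed to generalize the conclusion.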


Oftentimes, especially when a given experiment has been repeated and its outcome is still not fully clear, it is better to find alternative experimental assays to test the hypothesis.


Applying the scientific approach to everyday life

So, what can we take from the scientific approach to apply to our everyday lives?

A few weeks ago, I had an agitated conversation with a bunch of friends concerning the following question: What is the definition of intelligence?

Defining “intelligence” is not easy. At the beginning of the conversation, everybody had a different, “personal” conception of intelligence in mind, which – tacitly – implied that the conversation could have taken several different directions. We realized rather soon that someone thought that an intelligent person is whoever is able to adapt faster to new situations; someone else thought that an intelligent person is whoever is able to deal with other people and empathize with them. Personally, I thought that an intelligent person is whoever displays high cognitive skills, especially in abstract reasoning. 

The scientific method has the merit of providing a reference system, with precise protocols and rules to follow. Remember: experiments must be reproducible, which means that an independent scientist in a different laboratory, when provided with the same equipment and protocols, should get comparable results. Fruitful conversations likewise need precise language, a kind of reference vocabulary everybody agrees upon, in order to discuss the same "content". This is something we often forget, and something that was missing at the opening of the aforementioned conversation: even among friends, we should always agree on our premises, and define them rigorously, so that they are the same for everybody. When speaking about "intelligence", we must all make sure we understand the meaning and context of the vocabulary adopted in the debate (Figure 4, point 1). This is the first step in "controlling" a conversation.

There is another pitfall that a discussion well grounded in a scientific framework would avoid: failing to structure the debate so that all its elements, except for the one under investigation, are kept constant (Figure 4, point 2). This is particularly true when people aim to make comparisons between groups to support their claim. For example, they may try to define what intelligence is by comparing the achievements in life of different individuals: "Stephen Hawking is a brilliant example of intelligence because of his great contribution to the physics of black holes." This statement does not help to define what intelligence is, simply because it compares Stephen Hawking, a famous and exceptional physicist, to any other person, who, statistically speaking, knows nothing about physics. Hawking first went to the University of Oxford, then moved to the University of Cambridge. He was in contact with the most influential physicists on Earth. Other people were not. None of this, of course, disproves Hawking's intelligence; but from a logical and methodological point of view, given the multitude of variables involved in the comparison, it cannot prove it either. Thus, the sentence "Stephen Hawking is a brilliant example of intelligence because of his great contribution to the physics of black holes" is not a valid argument for describing what intelligence is. If we really intend to approximate a definition of intelligence, Stephen Hawking should be compared to other physicists, ideally ones who were his classmates in college and his colleagues during his years of academic research.

In simple terms, as scientists do in the lab, while debating we should try to compare groups of elements that display identical, or highly similar, features. As previously mentioned, all variables – except for the one under investigation – must be kept constant.

This insightful piece presents a detailed analysis of how and why science can help to develop critical thinking.


In a nutshell

Here is how to approach a daily conversation in a rigorous, scientific manner:

  • First agree on the reference vocabulary; then discuss the content itself. Think of a researcher writing down an experimental protocol that will be used by thousands of other scientists on different continents. If the protocol is rigorously written, all scientists using it should get comparable experimental outcomes. In science this means reproducible knowledge; in daily life it means fruitful conversations in which individuals are on the same page.
  • Adopt "controlled" arguments to support your claims. When making comparisons between groups, visualize two blank scenarios. As you start to add details to both of them, you have two options. If your aim is to hide a specific detail, the better strategy is to design the two scenarios in completely different ways, that is, to increase the number of variables. But if your intention is to help the observer isolate a specific detail, the better strategy is to design identical scenarios, with the exception of the intended detail, that is, to keep most of the variables constant. This is precisely how scientists design adequate experiments to isolate new pieces of knowledge, and how individuals should orchestrate their thoughts in order to test them and make them easier for others to understand.

The scientific method should offer individuals not an elitist way to investigate reality, but an accessible tool for reasoning and conversing about it properly.

Edited by Jason Organ, PhD, Indiana University School of Medicine.


Simone is a molecular biologist on the verge of obtaining a doctoral title at the University of Ulm, Germany. He is Vice-Director at Culturico (https://culturico.com/), where his writings span from Literature to Sociology, from Philosophy to Science. His writings recently appeared in Psychology Today, openDemocracy, Splice Today, Merion West, Uncommon Ground and The Society Pages. Follow Simone on Twitter: @simredaelli


What’s the Difference Between Critical Thinking and Scientific Thinking?

Thinking deeply about things is a defining feature of what it means to be human, but, surprising as it may seem, there isn’t just one way to ‘think’ about something; instead, humans have been developing organized and varied schools of thought for thousands of years.

Discussions about morality, religion, and the meaning of life often drive knowledge-seeking inquiry, leading people to wonder what the difference is between critical thinking and scientific thinking.

Critical thinkers prioritize objectivity to analyze a problem, deduce logical solutions, and examine what the ramifications of those solutions are.

While scientific thinking often relies heavily on critical thinking, scientific inquiry is more dedicated to acquiring knowledge rather than mere abstraction.

What Is Critical Thinking?

A critical thinker may discern what they already know about a subject, what that information suggests, why that information is relevant, and how that information could be linked to further lines of inquiry. Critical thinking is, therefore, simply the ability to think clearly and logically.

Naturally, the ability to think critically is highly prized in an academic setting, and most educators seek to enable their students to think critically.

So much information can be interlinked to develop our understanding of the world, and critical thinking is the basis for using objectivity not only to establish likely outcomes of a scenario, but also to inquire into the repercussions of those outcomes and reflect on the process by which one came to that conclusion.

What Is Scientific Thinking?

Scientific thinking applies critical thinking within the structure of the scientific method: observe, hypothesize, experiment, and draw a conclusion. As you might imagine, this process can be repeated ad infinitum. So, you draw a conclusion that's scientifically verifiable? Great! Now you can take that conclusion and use it as the basis for a new experiment. Of course, the scientific method has limits.

Physics is known as the perfect science because the forces that comprise our world are well understood and don’t tend to exhibit anomalies, making the empirically verified scientific method perfect for improving our understanding of the natural world.

How Are Critical Thinking and Scientific Thinking Similar and Different?

Both scientific thinking and critical thinking tend to draw links between concepts, evaluating how they are related and what knowledge may be gleaned from that connection.

The main difference between the two, however, is the goal of each discipline.

Scientific thinkers develop a hypothesis, test it, and then rinse and repeat until the phenomenon is understood. As such, scientific thinkers are obsessed with why questions. Why does this phenomenon happen?

Why does matter behave like this? In the end, both schools of thought have a lot of interesting ideas guiding them, and most of us probably use both throughout our daily lives.

https://psycnet.apa.org/record/2010-22950-019


PLoS Comput Biol, 15(9), September 2019

Perspective: Dimensions of the scientific method

Eberhard O. Voit

Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, Georgia, United States of America

The scientific method has been guiding biological research for a long time. It not only prescribes the order and types of activities that give a scientific study validity and a stamp of approval but also has substantially shaped how we collectively think about the endeavor of investigating nature. The advent of high-throughput data generation, data mining, and advanced computational modeling has thrown the formerly undisputed, monolithic status of the scientific method into turmoil. On the one hand, the new approaches are clearly successful and expect the same acceptance as the traditional methods, but on the other hand, they replace much of the hypothesis-driven reasoning with inductive argumentation, which philosophers of science consider problematic. Intrigued by the enormous wealth of data and the power of machine learning, some scientists have even argued that significant correlations within datasets could make the entire quest for causation obsolete. Many of these issues have been passionately debated during the past two decades, often with scant agreement. It is proffered here that hypothesis-driven, data-mining–inspired, and “allochthonous” knowledge acquisition, based on mathematical and computational models, are vectors spanning a 3D space of an expanded scientific method. The combination of methods within this space will most certainly shape our thinking about nature, with implications for experimental design, peer review and funding, sharing of results, education, medical diagnostics, and even questions of litigation.

The traditional scientific method: Hypothesis-driven deduction

Research is the undisputed core activity defining science. Without research, the advancement of scientific knowledge would come to a screeching halt. While it is evident that researchers look for new information or insights, the term “research” is somewhat puzzling. Never mind the prefix “re,” which simply means “coming back and doing it again and again,” the word “search” seems to suggest that the research process is somewhat haphazard, that not much of a strategy is involved in the process. One might argue that research a few hundred years ago had the character of hoping for enough luck to find something new. The alchemists come to mind in their quest to turn mercury or lead into gold, or to discover an elixir for eternal youth, through methods we nowadays consider laughable.

Today’s sciences, in stark contrast, are clearly different. Yes, we still try to find something new—and may need a good dose of luck—but the process is anything but unstructured. In fact, it is prescribed in such rigor that it has been given the widely known moniker “scientific method.” This scientific method has deep roots going back to Aristotle and Herophilus (approximately 300 BC), Avicenna and Alhazen (approximately 1,000 AD), Grosseteste and Robert Bacon (approximately 1,250 AD), and many others, but solidified and crystallized into the gold standard of quality research during the 17th and 18th centuries [ 1 – 7 ]. In particular, Sir Francis Bacon (1561–1626) and René Descartes (1596–1650) are often considered the founders of the scientific method, because they insisted on careful, systematic observations of high quality, rather than metaphysical speculations that were en vogue among the scholars of the time [ 1 , 8 ]. In contrast to their peers, they strove for objectivity and insisted that observations, rather than an investigator’s preconceived ideas or superstitions, should be the basis for formulating a research idea [ 7 , 9 ].

Bacon and his 19th century follower John Stuart Mill explicitly proposed gaining knowledge through inductive reasoning: Based on carefully recorded observations, or from data obtained in a well-planned experiment, generalized assertions were to be made about similar yet (so far) unobserved phenomena [ 7 ]. Expressed differently, inductive reasoning attempts to derive general principles or laws directly from empirical evidence [ 10 ]. An example is the 19th century epigram of the physician Rudolf Virchow, Omnis cellula e cellula . There is no proof that indeed “every cell derives from a cell,” but like Virchow, we have made the observation time and again and never encountered anything suggesting otherwise.

In contrast to induction, the widely accepted, traditional scientific method is based on formulating and testing hypotheses. From the results of these tests, a deduction is made whether the hypothesis is presumably true or false. This type of hypothetico-deductive reasoning goes back to William Whewell, William Stanley Jevons, and Charles Peirce in the 19th century [ 1 ]. By the 20th century, the deductive, hypothesis-based scientific method had become deeply ingrained in the scientific psyche, and it is now taught as early as middle school in order to teach students valid means of discovery [ 8 , 11 , 12 ]. The scientific method has not only guided most research studies but also fundamentally influenced how we think about the process of scientific discovery.

Alas, because biology has almost no general laws, deduction in the strictest sense is difficult. It may therefore be preferable to use the term abduction, which refers to the logical inference toward the most plausible explanation, given a set of observations, although this explanation cannot be proven and is not necessarily true.

Over the decades, the hypothesis-based scientific method did experience variations here and there, but its conceptual scaffold remained essentially unchanged ( Fig 1 ). Its key is a process that begins with the formulation of a hypothesis that is to be rigorously tested, either in the wet lab or computationally; nonadherence to this principle is seen as lacking rigor and can lead to irreproducible results [ 1 , 13 – 15 ].

Fig 1. The central concept of the traditional scientific method is a falsifiable hypothesis regarding some phenomenon of interest. This hypothesis is to be tested experimentally or computationally. The test results support or refute the hypothesis, triggering a new round of hypothesis formulation and testing.

Going further, the prominent philosopher of science Sir Karl Popper argued that a scientific hypothesis can never be verified but that it can be disproved by a single counterexample. He therefore demanded that scientific hypotheses had to be falsifiable, because otherwise, testing would be moot [ 16 , 17 ] (see also [ 18 ]). As Gillies put it, “successful theories are those that survive elimination through falsification” [ 19 ]. Kelley and Scott agreed to some degree but warned that complete insistence on falsifiability is too restrictive as it would mark many computational techniques, statistical hypothesis testing, and even Darwin’s theory of evolution as nonscientific [ 20 ].

While the hypothesis-based scientific method has been very successful, its exclusive reliance on deductive reasoning is dangerous because according to the so-called Duhem–Quine thesis, hypothesis testing always involves an unknown number of explicit or implicit assumptions, some of which may steer the researcher away from hypotheses that seem implausible, although they are, in fact, true [ 21 ]. According to Kuhn, this bias can obstruct the recognition of paradigm shifts [ 22 ], which require the rethinking of previously accepted “truths” and the development of radically new ideas [ 23 , 24 ]. The testing of simultaneous alternative hypotheses [ 25 – 27 ] ameliorates this problem to some degree but not entirely.

The traditional scientific method is often presented in discrete steps, but it should really be seen as a form of critical thinking, subject to review and independent validation [8]. It has proven very influential, not only by prescribing valid experimentation, but also by shaping the way we attempt to understand nature [18], by informing teaching [8, 12], reporting, publishing, and otherwise sharing information [28], peer review and the awarding of funds by research-supporting agencies [29, 30], medical diagnostics [7], and even litigation [31].

A second dimension of the scientific method: Data-mining–inspired induction

A major shift in biological experimentation occurred with the -omics revolution of the early 21st century. All of a sudden, it became feasible to perform high-throughput experiments that generated thousands of measurements, typically characterizing the expression or abundances of very many—if not all—genes, proteins, metabolites, or other biological quantities in a sample.

The strategy of measuring large numbers of items in a nontargeted fashion is fundamentally different from the traditional scientific method and constitutes a new, second dimension of the scientific method. Instead of hypothesizing and testing whether gene X is up-regulated under some altered condition, the leading question becomes which of the thousands of genes in a sample are up- or down-regulated. This shift in focus elevates the data to the supreme role of revealing novel insights by themselves ( Fig 2 ). As an important, generic advantage over the traditional strategy, this second dimension is free of a researcher’s preconceived notions regarding the molecular mechanisms governing the phenomenon of interest, which are otherwise the key to formulating a hypothesis. The prominent biologists Patrick Brown and David Botstein commented that “the patterns of expression will often suffice to begin de novo discovery of potential gene functions” [ 32 ].
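As an illustrative sketch of this nontargeted leading question, the following Python snippet asks, with no prior hypothesis about any particular gene, which genes changed. The tiny dataset and the `screen` function are invented for illustration; a real -omics experiment would involve thousands of genes.

```python
import math

# Hypothetical expression values (arbitrary units) for a handful of genes,
# measured under a control and an altered condition.
expression = {
    "geneA": (100.0, 410.0),
    "geneB": (250.0, 245.0),
    "geneC": (80.0, 19.0),
    "geneD": (500.0, 520.0),
}

def screen(data, threshold=1.0):
    """Return genes whose |log2 fold change| exceeds the threshold.

    No gene-specific hypothesis is formulated in advance: every gene is
    inspected, and the data themselves nominate the interesting ones.
    """
    hits = {}
    for gene, (control, treated) in data.items():
        log2_fc = math.log2(treated / control)
        if abs(log2_fc) >= threshold:
            hits[gene] = round(log2_fc, 2)
    return hits

print(screen(expression))  # geneA up-regulated, geneC down-regulated
```

Only after such an untargeted screen nominates candidates would targeted, hypothesis-driven follow-up work begin.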

Fig 2. Data-driven research begins with an untargeted exploration, in which the data speak for themselves. Machine learning extracts patterns from the data, which suggest hypotheses that are to be tested in the lab or computationally.

This data-driven, discovery-generating approach is at once appealing and challenging. On the one hand, very many data are explored simultaneously and essentially without bias. On the other hand, the large datasets supporting this approach create a genuine challenge to understanding and interpreting the experimental results because the thousands of data points, often superimposed with a fair amount of noise, make it difficult to detect meaningful differences between sample and control. This situation can only be addressed with computational methods that first “clean” the data, for instance, through the statistically valid removal of outliers, and then use machine learning to identify statistically significant, distinguishing molecular profiles or signatures. In favorable cases, such signatures point to specific biological pathways, whereas other signatures defy direct explanation but may become the launch pad for follow-up investigations [ 33 ].
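A minimal sketch of these two computational stages follows, with invented numbers and a simple median-absolute-deviation rule standing in for genuine statistical outlier removal, and a bare mean shift standing in for a machine-learned signature:

```python
import statistics

# Toy illustration of the two stages described above:
# (1) "clean" the data by removing statistical outliers, then
# (2) look for a distinguishing difference between sample and control
#     (here, simply a mean shift). All values are invented.
sample = [5.1, 5.3, 4.9, 5.2, 19.0, 5.0]   # 19.0: a measurement artifact
control = [3.0, 3.2, 2.9, 3.1, 3.0, 2.8]

def remove_outliers(values, k=3.0):
    """Drop points more than k median absolute deviations from the median."""
    med = statistics.median(values)
    mad = statistics.median([abs(v - med) for v in values])
    return [v for v in values if abs(v - med) <= k * mad]

clean_sample = remove_outliers(sample)     # the artifact 19.0 is removed
clean_control = remove_outliers(control)
shift = statistics.mean(clean_sample) - statistics.mean(clean_control)
print(f"mean shift after cleaning: {shift:.2f}")
```

In a real study, the signature-extraction step would of course involve genuine machine learning (clustering, classification) over thousands of noisy data points rather than a single mean comparison.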

Today’s scientists are very familiar with this discovery-driven exploration of “what’s out there” and might consider it a quaint quirk of history that this strategy was at first widely chastised and ridiculed as a “fishing expedition” [ 30 , 34 ]. Strict traditionalists were outraged that rigor was leaving science with the new approach and that sufficient guidelines were unavailable to assure the validity and reproducibility of results [ 10 , 35 , 36 ].

From the viewpoint of the philosophy of science, this second dimension of the scientific method uses inductive reasoning and reflects Bacon’s idea that observations can and should dictate the research question to be investigated [1, 7]. Allen [36] forcefully rejected this type of reasoning, stating “the thinking goes, we can now expect computer programs to derive significance, relevance and meaning from chunks of information, be they nucleotide sequences or gene expression profiles… In contrast with this view, many are convinced that no purely logical process can turn observation into understanding.” His conviction goes back to the 18th-century philosopher David Hume and again to Popper, who identified the overriding problem with inductive reasoning as its inability ever to truly reveal causality, even if a phenomenon is observed time and again [16, 17, 37, 38]. No number of observations, even if they always have the same result, can guard against an exception that would violate the generality of a law inferred from these observations [1, 35]. Worse, Popper argued, through inference by induction, we cannot even know the probability of something being true [10, 17, 36].

Others argued that data-driven and hypothesis-driven research actually do not differ all that much in principle, as long as there is cycling between developing new ideas and testing them with care [27]. In fact, Kell and Oliver [34] maintained that the exclusive acceptance of hypothesis-driven programs misrepresents the complexities of biological knowledge generation. Similarly refuting the prominent role of deduction, Platt [26] and Beard and Kushmerick [27] argued that repeated inductive reasoning, called strong inference, corresponds to a logically sound decision tree of disproving or refining hypotheses that can rapidly yield firm conclusions; nonetheless, Platt had to admit that inductive inference is not as certain as deduction, because it projects into the unknown. Lander compared the task of obtaining causality by induction to the problem of inferring the design of a microprocessor from input-output readings, which in a strict sense is impossible, because the microprocessor could be arbitrarily complicated; even so, inference often leads to novel insights and therefore is valuable [39].

An interesting special case of almost pure inductive reasoning is epidemiology, where hypothesis-driven reasoning is rare and instead, the fundamental question is whether data-based evidence is sufficient to associate health risks with specific causes [ 31 , 34 ].

Recent advances in machine learning and “big-data” mining have driven the use of inductive reasoning to unprecedented heights. As an example, machine learning can greatly assist in the discovery of patterns, for instance, in biological sequences [ 40 ]. Going a step further, a pithy article by Andersen [ 41 ] proffered that we may not need to look for causality or mechanistic explanations anymore if we just have enough correlation: “With enough data, the numbers speak for themselves, correlation replaces causation, and science can advance even without coherent models or unified theories.”

Of course, the proposal to abandon the quest for causality caused pushback on philosophical as well as mathematical grounds. Allen [10, 35] considered the idea “absurd” that data analysis could enhance understanding in the absence of a hypothesis. He felt confident “that even the formidable combination of computing power with ease of access to data cannot produce a qualitative shift in the way that we do science: the making of hypotheses remains an indispensable component in the growth of knowledge” [36]. Succi and Coveney [42] refuted the “most extravagant claims” of big-data proponents very differently, namely by analyzing the theories on which machine learning is founded. They contrasted the assumptions underlying these theories, such as the law of large numbers, with the mathematical reality of complex biological systems. Specifically, they carefully identified genuine features of these systems, such as nonlinearities, nonlocality of effects, fractal aspects, and high dimensionality, and argued that these features fundamentally violate some of the statistical assumptions implicitly underlying big-data analysis, like the independence of events. They concluded that these discrepancies “may lead to false expectations and, at their nadir, even to dangerous social, economical and political manipulation.” To ameliorate the situation, the field of big-data analysis would need new strong theorems characterizing the validity of its methods and the numbers of data required for obtaining reliable insights. Succi and Coveney went so far as to state that too many data are just as bad as insufficient data [42].

While philosophical doubts regarding inductive methods will always persist, one cannot deny that -omics-based, high-throughput studies, combined with machine learning and big-data analysis, have been very successful [43]. Yes, induction cannot truly reveal general laws, no matter how large the datasets, but such studies do provide insights that are very different from what science had offered before and may at least suggest novel patterns, trends, or principles. As a case in point, if many transcriptomic studies indicate that a particular gene set is involved in certain classes of phenomena, there is probably some truth to the observation, even though it is not mathematically provable. Kepler’s laws of astronomy were arguably derived solely from inductive reasoning [34].

Notwithstanding the opposing views on inductive methods, successful strategies shape how we think about science. Thus, to take advantage of all experimental options while ensuring the quality of research, we must not accept that “anything goes” but instead identify and characterize standard operating procedures and controls that render this emerging scientific method valid and reproducible. A laudable step in this direction was the wide acceptance of the “minimum information about a microarray experiment” (MIAME) standards for microarray experiments [44].

A third dimension of the scientific method: Allochthonous reasoning

Parallel to the blossoming of molecular biology and the rapid rise in the power and availability of computing in the late 20th century, the use of mathematical and computational models became increasingly recognized as relevant and beneficial for understanding biological phenomena. Indeed, mathematical models eventually achieved cornerstone status in the new field of computational systems biology.

Mathematical modeling has been used as a tool of biological analysis for a long time [ 27 , 45 – 48 ]. Interesting for the discussion here is that the use of mathematical and computational modeling in biology follows a scientific approach that is distinctly different from the traditional and the data-driven methods, because it is distributed over two entirely separate domains of knowledge. One consists of the biological reality of DNA, elephants, and roses, whereas the other is the world of mathematics, which is governed by numbers, symbols, theorems, and abstract work protocols. Because the ways of thinking—and even the languages—are different in these two realms, I suggest calling this type of knowledge acquisition “allochthonous” (literally Greek: in or from a “piece of land different from where one is at home”; one could perhaps translate it into modern lingo as “outside one’s comfort zone”). De facto, most allochthonous reasoning in biology presently refers to mathematics and computing, but one might also consider, for instance, the application of methods from linguistics in the analysis of DNA sequences or proteins [ 49 ].

One could argue that biologists have employed “models” for a long time, for instance, in the form of “model organisms,” cell lines, or in vitro experiments, which more or less faithfully reflect features of the organisms of true interest but are easier to manipulate. However, this type of biological model use is rather different from allochthonous reasoning, as it does not leave the realm of biology and uses the same language and often similar methodologies.

A brief discussion of three experiences from our lab may illustrate the benefits of allochthonous reasoning. (1) In a case study of renal cell carcinoma, a dynamic model was able to explain an observed yet nonintuitive metabolic profile in terms of the enzymatic reaction steps that had been altered during the disease [50]. (2) A transcriptome analysis had identified several genes as displaying significantly different expression patterns during malaria infection in comparison to the state of health. Considered by themselves and focusing solely on genes coding for specific enzymes of purine metabolism, the findings showed patterns that did not make sense. However, integrating the changes in a dynamic model revealed that purine metabolism globally shifted, in response to malaria, from guanine compounds to adenine, inosine, and hypoxanthine [51]. (3) Data capturing the dynamics of malaria parasites suggested growth rates that were biologically impossible. Speculation regarding possible explanations led to the hypothesis that many parasite-harboring red blood cells might “hide” from circulation and thereby from detection in the bloodstream. While experimental testing of the feasibility of the hypothesis would have been expensive, a dynamic model confirmed that such a concealment mechanism could indeed quantitatively explain the apparently very high growth rates [52]. In all three cases, the insights gained inductively from computational modeling would have been difficult to obtain purely with experimental laboratory methods.

Purely deductive allochthonous reasoning is the ultimate goal of the search for design and operating principles [53–55], which strives to explain why certain structures or functions are employed by nature time and again. An example is a linear metabolic pathway, in which feedback inhibition is essentially always exerted on the first step [56, 57].
This generality allows the deduction that a so far unstudied linear pathway is most likely (or even certain to be) inhibited at the first step. Not strictly deductive—but rather abductive—was a study in our lab in which we analyzed time series data with a mathematical model that allowed us to infer the most likely regulatory structure of a metabolic pathway [ 58 , 59 ].

A typical allochthonous investigation begins in the realm of biology with the formulation of a hypothesis ( Fig 3 ). Instead of testing this hypothesis with laboratory experiments, the system encompassing the hypothesis is moved into the realm of mathematics. This move requires two sets of ingredients. One set consists of the simplification and abstraction of the biological system: Any distracting details that seem unrelated to the hypothesis and its context are omitted or represented collectively with other details. This simplification step carries the greatest risk of the entire modeling approach, as omission of seemingly negligible but, in truth, important details can easily lead to wrong results. The second set of ingredients consists of correspondence rules that translate every biological component or process into the language of mathematics [ 60 , 61 ].

Fig 3. This mathematical and computational approach is distributed over two realms, which are connected by correspondence rules.

Once the system is translated, it has become an entirely mathematical construct that can be analyzed purely with mathematical and computational means. The results of this analysis are also strictly mathematical. They typically consist of values of variables, magnitudes of processes, sensitivity patterns, signs of eigenvalues, or qualitative features like the onset of oscillations or the potential for limit cycles. Correspondence rules are used again to move these results back into the realm of biology. As an example, the mathematical result that “two eigenvalues have positive real parts” does not make much sense to many biologists, whereas the interpretation that “the system is not stable at the steady state in question” is readily explained. New biological insights may lead to new hypotheses, which are tested either by experiments or by returning once more to the realm of mathematics. The model design, diagnosis, refinements, and validation consist of several phases, which have been discussed widely in the biomathematical literature. Importantly, each iteration of a typical modeling analysis consists of a move from the biological to the mathematical realm and back.
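The eigenvalue example can be made concrete with a short sketch. The 2×2 Jacobians below are hypothetical stand-ins for a linearized two-variable system, and the helper functions are my own; the point is the round trip from a mathematical result (signs of real parts of eigenvalues) back to a biological statement (stability of a steady state).

```python
import cmath

def eigenvalues_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via the characteristic polynomial."""
    trace, det = a + d, a * d - b * c
    disc = cmath.sqrt(trace * trace - 4 * det)
    return (trace + disc) / 2, (trace - disc) / 2

def interpret(eigs):
    """Correspondence rule back to biology: stable iff all Re(lambda) < 0."""
    if all(ev.real < 0 for ev in eigs):
        return "the system is stable at this steady state"
    return "the system is not stable at this steady state"

# A Jacobian with eigenvalues -1 and -3: both real parts negative.
print(interpret(eigenvalues_2x2(-2.0, 1.0, 1.0, -2.0)))
# A Jacobian with an eigenvalue of positive real part: unstable.
print(interpret(eigenvalues_2x2(1.0, 2.0, 0.0, -1.0)))
```

The computation itself is entirely mathematical; only the final `interpret` step translates the result into the biological realm.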

The reasoning within the realm of mathematics is often deductive, in the form of an Aristotelian syllogism, such as the well-known “All men are mortal; Socrates is a man; therefore, Socrates is mortal.” However, the reasoning may also be inductive, as is the case with large-scale Monte-Carlo simulations that generate arbitrarily many “observations,” although they cannot reveal universal principles or theorems. An example is a simulation randomly drawing numbers in an attempt to show that every real number has an inverse. The simulation will always attest to this hypothesis but fail to discover the truth, because it will never randomly draw 0. Generically, computational models may be considered sets of hypotheses, formulated as equations or as algorithms that reflect our perception of a complex system [27].

Impact of the multidimensional scientific method on learning

Almost all we know in biology has come from observation, experimentation, and interpretation. The traditional scientific method not only offered clear guidance for this knowledge gathering, but it also fundamentally shaped the way we think about the exploration of nature. When presented with a new research question, scientists were trained to think immediately in terms of hypotheses and alternatives, pondering the best feasible ways of testing them, and designing in their minds strong controls that would limit the effects of known or unknown confounders. Shaped by the rigidity of this ever-repeating process, our thinking became trained to move forward one well-planned step at a time. This modus operandi was rigid and exact. It also minimized the erroneous pursuit of long speculative lines of thought, because every step required testing before a new hypothesis was formed. While effective, the process was also very slow and driven by ingenuity—as well as bias—on the scientist’s part. This bias was sometimes a hindrance to necessary paradigm shifts [ 22 ].

High-throughput data generation, big-data analysis, and mathematical-computational modeling changed all that within a few decades. In particular, the acceptance of inductive principles and of the allochthonous use of nonbiological strategies to answer biological questions created an unprecedented mix of successes and chaos. To the horror of traditionalists, the importance of hypotheses became minimized, and the suggestion spread that the data would speak for themselves [36]. Importantly, within this fog of “anything goes,” the fundamental question arose of how to determine whether an experiment was valid.

Because agreed-upon operating procedures affect research progress and interpretation, thinking, teaching, and sharing of results, this question requires a deconvolution of scientific strategies. Here I proffer that the single scientific method of the past should be expanded toward a vector space of scientific methods, with spanning vectors that correspond to different dimensions of the scientific method ( Fig 4 ).

Fig 4. The traditional hypothesis-based deductive scientific method is expanded into a 3D space that allows for synergistic blends of methods, including data-mining–inspired, inductive knowledge acquisition and mathematical model-based, allochthonous reasoning.

Obviously, all three dimensions have their advantages and drawbacks. The traditional, hypothesis-driven deductive method is philosophically “clean,” except that it is confounded by preconceptions and assumptions. The data-mining–inspired inductive method cannot offer universal truths but helps us explore very large spaces of factors that contribute to a phenomenon. Allochthonous, model-based reasoning can be performed mentally, with paper and pencil, through rigorous analysis, or with a host of computational methods that are precise and disprovable [27]. At the same time, these methods are incomparably faster, cheaper, and much more comprehensive than experiments in molecular biology. This reduction in cost and time, and the increase in coverage, may eventually have far-reaching consequences, as we can already fathom from much of modern physics.

Due to its long history, the traditional dimension of the scientific method is supported by clear and very strong standard operating procedures. Similarly, strong procedures need to be developed for the other two dimensions. The MIAME rules for microarray analysis provide an excellent example [44]. On the mathematical modeling front, no such rules are generally accepted yet, but trends toward them seem to be emerging on the horizon. For instance, it seems to be becoming common practice to include sensitivity analyses in typical modeling studies and to assess the identifiability or sloppiness of ensembles of parameter combinations that fit a given dataset well [62, 63].

From a philosophical point of view, it seems unlikely that objections against inductive reasoning will disappear. However, instead of pitting hypothesis-based deductive reasoning against inductivism, it seems more beneficial to determine how the different methods can be synergistically blended ( cf . [ 18 , 27 , 34 , 42 ]) as linear combinations of the three vectors of knowledge acquisition ( Fig 4 ). It is at this point unclear to what degree the identified three dimensions are truly independent of each other, whether additional dimensions should be added [ 24 ], or whether the different versions could be amalgamated into a single scientific method [ 18 ], especially if it is loosely defined as a form of critical thinking [ 8 ]. Nobel Laureate Percy Bridgman even concluded that “science is what scientists do, and there are as many scientific methods as there are individual scientists” [ 8 , 64 ].

Combinations of the three spanning vectors of the scientific method have been emerging for some time. Many biologists already use inductive high-throughput methods to develop specific hypotheses that are subsequently tested with deductive or further inductive methods [ 34 , 65 ]. In terms of including mathematical modeling, physics and geology have been leading the way for a long time, often by beginning an investigation in theory, before any actual experiment is performed. It will benefit biology to look into this strategy and to develop best practices of allochthonous reasoning.

The blending of methods may take quite different shapes. Early on, Ideker and colleagues [ 65 ] proposed an integrated experimental approach for pathway analysis that offered a glimpse of new experimental strategies within the space of scientific methods. In a similar vein, Covert and colleagues [ 66 ] included computational methods into such an integrated approach. Additional examples of blended analyses in systems biology can be seen in other works, such as [ 43 , 67 – 73 ]. Generically, it is often beneficial to start with big data, determine patterns in associations and correlations, then switch to the mathematical realm in order to filter out spurious correlations in a high-throughput fashion. If this procedure is executed in an iterative manner, the “surviving” associations have an increased level of confidence and are good candidates for further experimental or computational testing (personal communication from S. Chandrasekaran).
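The value of that mathematical filtering step can be illustrated with a small simulation of my own devising, using invented, purely random data: among a few hundred noise "features" measured in only a handful of samples, some pairs correlate strongly by sheer chance and would survive a naive big-data association step.

```python
import random
from statistics import mean

# None of the variables below carries any real signal: 200 features of
# pure Gaussian noise, each observed in only 10 samples. Yet a scan over
# all feature pairs finds at least one strong "association" by chance,
# which is exactly the kind of spurious correlation that a subsequent
# mathematical filtering step must weed out.
random.seed(42)
n_features, n_samples = 200, 10
features = [[random.gauss(0, 1) for _ in range(n_samples)]
            for _ in range(n_features)]

def pearson(x, y):
    """Plain Pearson correlation coefficient of two equal-length lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

max_abs_r = max(abs(pearson(features[i], features[j]))
                for i in range(n_features)
                for j in range(i + 1, n_features))
print(f"strongest correlation among pure noise: {max_abs_r:.2f}")
```

With so few samples per feature, the strongest chance correlation is substantial, which is why iterating between data-driven association finding and mathematical filtering raises confidence in the "surviving" associations.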

If each component of a blended scientific method follows strict, commonly agreed guidelines, “linear combinations” within the 3D space can also be checked objectively, by deconvolution. In addition, guidelines for synergistic blends of component procedures should be developed. If we carefully monitor such blends, time will presumably indicate which method is best for which task and how the different approaches optimally inform each other. For instance, it will be interesting to study whether there is an optimal sequence of experiments along the three axes for a particular class of tasks. Big-data analysis together with inductive reasoning might be optimal for creating initial hypotheses and possibly refuting wrong speculations (“we had thought this gene would be involved, but apparently it isn’t”). If the logic of an emerging hypothesis can be tested with mathematical and computational tools, it will almost certainly be faster and cheaper than an immediate launch into wet-lab experimentation. It is also likely that mathematical reasoning will be able to refute some apparently feasible hypotheses and suggest amendments. Ultimately, the “surviving” hypotheses must still be tested for validity through conventional experiments. Deconvolving current practices and optimizing the combination of methods within the 3D or higher-dimensional space of scientific methods will likely result in better planning of experiments and in synergistic blends of approaches that have the potential to address some of the grand challenges in biology.

Acknowledgments

The author is very grateful to Dr. Sriram Chandrasekaran and Ms. Carla Kumbale for superb suggestions and invaluable feedback.

Funding Statement

This work was supported in part by grants from the National Science Foundation ( https://www.nsf.gov/div/index.jsp?div=MCB ) grant NSF-MCB-1517588 (PI: EOV), NSF-MCB-1615373 (PI: Diana Downs) and the National Institute of Environmental Health Sciences ( https://www.niehs.nih.gov/ ) grant NIH-2P30ES019776-05 (PI: Carmen Marsit). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.


Scientific Thinking and Critical Thinking in Science Education 

Two Distinct but Symbiotically Related Intellectual Processes

  • Open access
  • Published: 05 September 2023


  • Antonio García-Carmona (ORCID: orcid.org/0000-0001-5952-0340)


Scientific thinking and critical thinking are two intellectual processes considered key to the basic and comprehensive education of citizens. For this reason, their development is also among the main objectives of science education. However, in the literature on the two types of thinking in the context of science education, one or the other is quite frequently invoked indistinctly to refer to the same cognitive and metacognitive skills, usually leaving unclear what their differences are and what aspects they have in common. The present work therefore aimed to elucidate the differences and relationships between these two types of thinking. The conclusion reached was that, while they differ in the purposes of their application and in some skills or processes, they also share others and are related symbiotically in a metaphorical sense; i.e., each one makes sense or develops appropriately when it is nourished or enriched by the other. Finally, an orientative proposal is presented for an integrated development of the two types of thinking in science classes.


Education is not the learning of facts, but the training of the mind to think. (Albert Einstein)

1 Introduction

In consulting technical reports, theoretical frameworks, research, and curricular reforms related to science education, one commonly finds appeals to scientific thinking and critical thinking as essential educational processes or objectives. This is confirmed in some studies that include exhaustive reviews of the literature in this regard, such as those of Bailin (2002), Costa et al. (2020), and Santos (2017) on critical thinking, and of Klahr et al. (2019) and Lehrer and Schauble (2006) on scientific thinking. However, conceptualizing and differentiating between both types of thinking based on the above-mentioned documents of science education are generally difficult. In many cases, they are referred to without being defined, or they are used interchangeably to represent virtually the same thing. Thus, for example, the document A Framework for K-12 Science Education points out that “Critical thinking is required, whether in developing and refining an idea (an explanation or design) or in conducting an investigation” (National Research Council (NRC), 2012, p. 46). The same document also refers to scientific thinking when it suggests that basic scientific education should “provide students with opportunities for a range of scientific activities and scientific thinking, including, but not limited to inquiry and investigation, collection and analysis of evidence, logical reasoning, and communication and application of information” (NRC, 2012, p. 251).

A few years earlier, the report Science Teaching in Schools in Europe: Policies and Research (European Commission/Eurydice, 2006) included the dimension “scientific thinking” as part of standardized national science tests in European countries. This dimension consisted of three basic abilities: (i) to solve problems formulated in theoretical terms, (ii) to frame a problem in scientific terms, and (iii) to formulate scientific hypotheses. In contrast, critical thinking was not even mentioned in that report. However, in subsequent similar reports by the European Commission/Eurydice (2011, 2022), there are some references to the fact that the development of critical thinking should be a basic objective of science teaching, although these reports do not define it at any point.

The ENCIENDE report on early-year science education in Spain also includes an explicit allusion to critical thinking among its recommendations: “Providing students with learning tools means helping them to develop critical thinking, to form their own opinions, to distinguish between knowledge founded on the evidence available at a certain moment (evidence which can change) and unfounded beliefs” (Confederation of Scientific Societies in Spain (COSCE), 2011, p. 62). However, the report makes no explicit mention of scientific thinking. More recently, the document “Enseñando ciencia con ciencia” (Teaching science with science) (Couso et al., 2020), sponsored by Spain’s Ministry of Education, also addresses critical thinking:

(…) with the teaching approach through guided inquiry students learn scientific content, learn to do science (procedures), learn what science is and how it is built, and this (...) helps to develop critical thinking , that is, to question any statement that is not supported by evidence. (Couso et al., 2020 , p. 54)

On the other hand, in referring to what is practically the same thing, the European report Science Education for Responsible Citizenship speaks of scientific thinking when it establishes that one of the challenges of scientific education should be: “To promote a culture of scientific thinking and inspire citizens to use evidence-based reasoning for decision making” (European Commission, 2015 , p. 14). However, the Pisa 2024 Strategic Vision and Direction for Science report does not mention scientific thinking but does mention critical thinking in noting that “More generally, (students) should be able to recognize the limitations of scientific inquiry and apply critical thinking when engaging with its results” (Organization for Economic Co-operation and Development (OECD), 2020 , p. 9).

The new Spanish science curriculum for basic education (Royal Decree 217/2022) does make explicit reference to scientific thinking. For example, one of the STEM (Science, Technology, Engineering, and Mathematics) competency descriptors for compulsory secondary education reads:

Use scientific thinking to understand and explain the phenomena that occur around them, trusting in knowledge as a motor for development, asking questions and checking hypotheses through experimentation and inquiry (...) showing a critical attitude about the scope and limitations of science. (p. 41,599)

Furthermore, when developing the curriculum for the subjects of physics and chemistry, the same provision clarifies that "The essence of scientific thinking is to understand what are the reasons for the phenomena that occur in the natural environment to then try to explain them through the appropriate laws of physics and chemistry" (Royal Decree 217/2022, p. 41,659). However, within the science subjects (i.e., Biology and Geology, and Physics and Chemistry), critical thinking is not mentioned as such. Footnote 1 It is only more or less directly alluded to with expressions such as "critical analysis", "critical assessment", "critical reflection", "critical attitude", and "critical spirit", with no attempt to conceptualize it as is done for scientific thinking.

The above is just a small sample showing that the concepts of scientific thinking and critical thinking are differentiated only in some cases, while in others they are presented as interchangeable, with one or the other used indistinctly to refer to the same cognitive/metacognitive processes or practices. In fairness, however, it has to be acknowledged—as said at the beginning—that it is far from easy to conceptualize these two types of thinking (Bailin, 2002; Dwyer et al., 2014; Ennis, 2018; Lehrer & Schauble, 2006; Kuhn, 1993, 1999) since they feed back on each other, partially overlap, and share certain features (Cáceres et al., 2020; Vázquez-Alonso & Manassero-Mas, 2018). Neither is there unanimity in the literature on how to characterize each of them, and rarely have they been analyzed comparatively (e.g., Hyytinen et al., 2019). For these reasons, I believed it necessary to address this issue in the present work in order to offer some guidelines for science teachers interested in delving deeper into these two intellectual processes so as to promote them in their classes.

2 An Attempt to Delimit Scientific Thinking in Science Education

For many years, cognitive science has been interested in studying what scientific thinking is and how it can be taught in order to improve students' science learning (Klahr et al., 2019; Zimmerman & Klahr, 2018). To this end, Kuhn et al. propose adopting a characterization of science as argument (Kuhn, 1993; Kuhn et al., 2008). They argue that this is a suitable way of linking how scientists think with how students and the public in general think, since science is a social activity subject to ongoing debate, in which the construction of arguments plays a key role. Lehrer and Schauble (2006) link scientific thinking with scientific literacy, paying special attention to the different images of science which, according to those authors, would guide the development of that literacy in class. The images of science that Lehrer and Schauble highlight as characterizing scientific thinking are: (i) science-as-logical reasoning (the role of domain-general forms of scientific reasoning, including formal logic, heuristics, and strategies applied in different fields of science), (ii) science-as-theory change (science is subject to permanent revision and change), and (iii) science-as-practice (scientific knowledge and reasoning are components of a larger set of activities that include rules of participation, procedural skills, epistemological knowledge, etc.).

Based on a literature review, Jirout (2020) defines scientific thinking as an intellectual process whose purpose is the intentional search for information about a phenomenon or facts by formulating questions, testing hypotheses, carrying out observations, recognizing patterns, and making inferences (a detailed description of all these scientific practices or competencies can be found, for example, in NRC, 2012; OECD, 2019). For Jirout, therefore, the development of scientific thinking would involve bringing into play the basic science skills/practices common to the inquiry-based approach to learning science (García-Carmona, 2020; Harlen, 2014). For other authors, scientific thinking would include a whole spectrum of scientific reasoning competencies (Krell et al., 2022; Moore, 2019; Tytler & Peterson, 2004). However, these competencies usually cover the same science skills/practices mentioned above. Indeed, a conceptual overlap between scientific thinking, scientific reasoning, and scientific inquiry is often found in science education goals (Krell et al., 2022), although, according to Lehrer and Schauble (2006), scientific thinking is a broader construct that encompasses the other two.

It could be said that scientific thinking is a particular way of searching for information using science practices Footnote 2 (Klahr et al., 2019; Zimmerman & Klahr, 2018; Vázquez-Alonso & Manassero-Mas, 2018). This intellectual process provides the individual with the ability to evaluate the robustness of evidence for or against a certain idea in order to explain a phenomenon (Clouse, 2017). But the development of scientific thinking also requires metacognitive processes. As Kuhn (2022) argues, metacognition is fundamental to the permanent control or revision of what individuals think and know, as well as of what the other individuals with whom they interact think and know, when engaging in scientific practices. In short, scientific thinking demands a good connection between reasoning and metacognition (Kuhn, 2022). Footnote 3

From that perspective, Zimmerman and Klahr (2018) have synthesized a taxonomy of scientific thinking that relates cognitive processes to the corresponding science practices (Table 1). It has to be noted that this taxonomy was prepared in line with the categorization of scientific practices proposed in the document A Framework for K-12 Science Education (NRC, 2012). This is why, for example, the cognitive process of elaborating and refining hypotheses is not explicitly associated with a scientific practice of hypothesizing but only with the formulation of questions. Indeed, the K-12 Framework document does not establish hypothesis formulation as a basic scientific practice. Lederman et al. (2014) justify this by arguing that not all scientific research necessarily allows or requires the testing of hypotheses, for example, in cases of exploratory or descriptive research. However, the aforementioned document (NRC, 2012, p. 50) does refer to hypotheses when describing the practice of developing and using models, appealing to the fact that models facilitate the testing of hypothetical explanations.

In the literature, there are also other interesting taxonomies characterizing scientific thinking for educational purposes. One of them is that of Vázquez-Alonso and Manassero-Mas (2018) who, instead of science practices, refer to skills associated with scientific thinking. Their characterization basically consists of breaking down in greater detail the content of those science practices that would be related to the different cognitive and metacognitive processes of scientific thinking. Also, unlike Zimmerman and Klahr's (2018) proposal, that of Vázquez-Alonso and Manassero-Mas (2018) explicitly mentions metacognition as one of the aspects of scientific thinking, which they call a meta-process. In my opinion, the latter authors' proposal, which breaks scientific thinking down into a broader range of skills/practices, may be more conducive to addressing this intellectual process in science classes, as teachers would have more options to choose from depending on their teaching interests, the educational needs of their students, and/or the learning objectives pursued. Table 2 presents an adapted characterization of Vázquez-Alonso and Manassero-Mas's (2018) proposal for addressing scientific thinking in science education.

3 Contextualization of Critical Thinking in Science Education

Theorization and research about critical thinking also have a long tradition in the field of the psychology of learning (Ennis, 2018; Kuhn, 1999), and their application extends far beyond science education (Dwyer et al., 2014). Indeed, the development of critical thinking is commonly accepted as an essential goal of people's overall education (Ennis, 2018; Hitchcock, 2017; Kuhn, 1999; Willingham, 2008). However, its conceptualization is not simple, and there is no unanimous position on it in the literature (Costa et al., 2020; Dwyer et al., 2014), especially when trying to relate it to scientific thinking. Thus, while Tena-Sánchez and León-Medina (2022) Footnote 4 and McBain et al. (2020) consider critical thinking to be the basis of, or to form part of, scientific thinking, Dowd et al. (2018) understand scientific thinking to be just a subset of critical thinking. Vázquez-Alonso and Manassero-Mas (2018), for their part, do not seek to determine whether critical thinking encompasses scientific thinking or vice versa; they consider that the two types of thinking share numerous skills/practices and that the progressive development of one fosters the development of the other in a virtuous circle of improvement. Other authors, such as Schafersman (1991), even go so far as to say that critical thinking and scientific thinking are the same thing. In addition, some views on the relationship between critical thinking and scientific thinking seem to be context-dependent. For example, Hyytinen et al. (2019) point out that, in the perspective of scientific thinking as a component of critical thinking, the former is often used to designate evidence-based thinking in the sciences, a view that tends to dominate in Europe but not in the USA. Perhaps because of this lack of consensus, the two types of thinking are often confused, overlapping, or conceived as interchangeable in education.

Even in the absence of a unanimous or consensus vision, there are some interesting theoretical frameworks and definitions for the development of critical thinking in education. One of the most popular definitions is that proposed by The National Council for Excellence in Critical Thinking (1987, cited in Inter-American Teacher Education Network, 2015, p. 6), which conceives of it as "the intellectually disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action". In other words, critical thinking can be regarded as a reflective and reasonable kind of thinking that provides people with the ability to evaluate multiple defensible statements or positions and then decide which is the most defensible (Clouse, 2017; Ennis, 2018). It thus requires, in addition to basic scientific competency, notions of epistemology (Kuhn, 1999) in order to understand how knowledge is constructed. Similarly, it requires skills for metacognition (Hyytinen et al., 2019; Kuhn, 1999; Magno, 2010) since critical thinking "entails awareness of one's own thinking and reflection on the thinking of self and others as objects of cognition" (Dean & Kuhn, 2003, p. 3).

In science education, one of the most suitable scenarios or resources, though not the only one, Footnote 5 for addressing all these aspects of critical thinking is the analysis of socioscientific issues (SSI) (Taylor et al., 2006; Zeidler & Nichols, 2009). Without wishing to expand on this here, I will only say that interesting works can be found in the literature analyzing how the discussion of SSIs can favor the development of critical thinking skills (see, e.g., López-Fernández et al., 2022; Solbes et al., 2018). For example, López-Fernández et al. (2022) focused their teaching-learning sequence on the following critical thinking skills: information analysis, argumentation, decision making, and communication of decisions. Some authors even add the nature of science (NOS) to this framework (i.e., SSI-NOS-critical thinking), as Yacoubian and Khishfe (2018) do, in order to develop critical thinking, which can in turn favor the understanding of NOS (Yacoubian, 2020). In effect, as I argued in another work on the COVID-19 pandemic as an SSI in which special emphasis was placed on critical thinking, an informed understanding of how science works would have helped the public understand why scientists were changing their criteria to face the pandemic in the light of new data and its reinterpretations, or why it was not possible to go faster in obtaining an effective and safe medical treatment for the disease (García-Carmona, 2021b).

In the recent literature, there have also been some proposals intended to characterize critical thinking in the context of science education. Table 3 presents two of these by way of example. As can be seen, both proposals share various components for the development of critical thinking (respect for evidence, critically analyzing/assessing the validity/reliability of information, adoption of independent opinions/decisions, participation, etc.), but that of Blanco et al. (2017) is more clearly contextualized in science education. Likewise, their proposal includes some additional aspects (or at least does so more explicitly), such as developing epistemological Footnote 6 knowledge of science (vision of science…) and of its interactions with technology, society, and environment (STSA relationships), as well as communication skills. It therefore offers a wider range of options for choosing critical thinking skills/processes to promote in science classes. However, neither proposal refers to metacognitive skills, which are also essential for developing critical thinking (Kuhn, 1999).

3.1 Critical thinking vs. scientific thinking in science education: differences and similarities

In accordance with the above, it could be said that scientific thinking is nourished by critical thinking, especially when deciding between several possible interpretations and explanations of the same phenomenon, since this generally takes place in a context of debate in the scientific community (Acevedo-Díaz & García-Carmona, 2017). Thus, the scientific attitude that is perhaps most clearly linked to critical thinking is the skepticism with which scientists tend to welcome new ideas (Normand, 2008; Sagan, 1987; Tena-Sánchez & León-Medina, 2022), especially if they are contrary to well-established scientific knowledge (Bell, 2009). A good example of this was the OPERA experiment (García-Carmona & Acevedo-Díaz, 2016a), which initially seemed to find that neutrinos could move faster than light, a finding that would have invalidated Albert Einstein's theory of relativity (it was later proved wrong). In response, Nobel laureate in physics Sheldon L. Glashow went so far as to state that:

the result obtained by the OPERA collaboration cannot be correct. If it were, we would have to give up so many things, it would be such a huge sacrifice... But if it is, I am officially announcing it: I will shout to Mother Nature: I'm giving up! And I will give up Physics. (BBVA Foundation, 2011)

Indeed, scientific thinking is ultimately focused on obtaining evidence that may support an idea or explanation of a phenomenon, and consequently allow others that are less convincing or precise to be discarded. Therefore, when, with the evidence available, science has more than one equally defensible position with respect to a problem, the investigation is considered inconclusive (Clouse, 2017). In certain cases, this gives rise to scientific controversies (Acevedo-Díaz & García-Carmona, 2017) which are not always resolved exclusively on the basis of epistemic or rational factors (Elliott & McKaughan, 2014; Vallverdú, 2005). Hence, it is also necessary to integrate non-epistemic practices into the framework of scientific thinking (García-Carmona, 2021a; García-Carmona & Acevedo-Díaz, 2018), practices that transcend purely rational or cognitive processes, including, for example, those related to emotional or affective issues (Sinatra & Hofer, 2021). From an educational point of view, this suggests that for students to become more authentically immersed in the scientific way of working or thinking, they should also learn to feel as scientists do when they carry out their work (Davidson et al., 2020). Davidson et al. (2020) call this epistemic affect, and they suggest that it could be approached in science classes by teaching students to manage their frustrations when they fail to achieve the expected results, Footnote 7 or, for example, to moderate their enthusiasm over favorable results in a scientific inquiry by activating a certain skepticism that encourages them to do more testing. And, as mentioned above, for some authors, having a skeptical attitude is one of the actions that best illustrate the application of critical thinking within the framework of scientific thinking (Normand, 2008; Sagan, 1987; Tena-Sánchez & León-Medina, 2022).

On the other hand, critical thinking also draws on many of the skills or practices of scientific thinking, as discussed above. However, in contrast to scientific thinking, the coexistence of two or more defensible ideas is not, in principle, a problem for critical thinking, since its purpose is not so much to invalidate some ideas or explanations with respect to others as to provide individuals with the foundations on which to align themselves with the idea/argument they find most defensible among the several possible (Ennis, 2018). For example, science with its methods has managed to explain the greenhouse effect, the phenomenon of the tides, and the transmission mechanism of the coronavirus. To do so, it had to discard other possible explanations that proved less valid in the investigations carried out. These are therefore issues resolved by the scientific community which generate hardly any discussion at the present time. However, taking a position for or against the production of energy in nuclear power plants transcends the scope of scientific thinking, since both positions are, in principle, equally defensible. Indeed, within the scientific community itself there are supporters and detractors of the two positions, based on the same scientific knowledge. Consequently, it is critical thinking, which requires the management of scientific knowledge and skills, a basic understanding of epistemic (rational or cognitive) and non-epistemic (social, ethical/moral, economic, psychological, cultural, ...) aspects of the nature of science, as well as metacognitive skills, that helps individuals forge a personal foundation on which to position themselves one way or the other, or to maintain an uncertain, undecided opinion.

In view of the above, one can summarize that scientific thinking and critical thinking are two intellectual processes that differ in purpose but are related symbiotically (i.e., one would make no sense without the other, and each feeds on the other) and that, in their performance, share a fair number of features, actions, or mental skills. According to Cáceres et al. (2020) and Hyytinen et al. (2019), the intellectual skills most clearly common to both types of thinking are searching for relationships between evidence and explanations, as well as investigating and thinking logically to make inferences. To this common space, I would also add skills for metacognition, in accordance with what has been discussed regarding both types of thinking (Kuhn, 1999, 2022).

In order to compile in a compact way all that has been argued so far, Table 4 presents my overview of the relationship between scientific thinking and critical thinking. I would like to point out that I do not intend the compilation to be exhaustive, in the sense that more elements could possibly be added to the different sections; rather, it is meant to represent above all the aspects that distinguish the two types of thinking and those they share, as well as the mutual enrichment (or symbiosis) between them.

4 A Proposal for the Integrated Development of Critical Thinking and Scientific Thinking in Science Classes

Once the differences, common aspects, and relationships between critical thinking and scientific thinking have been discussed, it is relevant to establish a specific proposal for fostering them in science classes. Table 5 includes a possible script for addressing various skills or processes of both types of thinking in an integrated manner. However, before giving guidance on how such skills/processes could be approached, I would like to clarify that, while all of them could be dealt with in the context of a single school activity, I will not proceed in this way. First, because doing so could give the impression that the proposal is valid only if applied all at once in a specific learning situation, which could also discourage science teachers from implementing it in class due to lack of time or training. Second, because I think it more interesting to conceive of the proposal as a set of thinking skills or actions that can be dealt with across the different science contents, selecting (if so decided) only some of them, according to the educational needs or the characteristics of the learning situation posed in each case. Therefore, in the orientations for each point of the script, or for groupings of points, I will use different examples and/or contexts. Likewise, these orientations, although founded in the literature, should be considered only as possibilities among many others.

Motivation and predisposition to reflect and discuss (point i) demand, on the one hand, that issues be chosen which are attractive to the students. This can be achieved, for example, by asking the students directly what current issues related to science and its impact or repercussions they would like to learn about, and then deciding on which issue to focus (García-Carmona, 2008). Alternatively, the teacher can put the issue forward directly in class, trying for it to be current and present in the media, social networks, etc., or choosing what they think may interest their students based on their teaching experience. In this way, each student is encouraged to feel questioned or concerned as a citizen by the issue that is going to be addressed (García-Carmona, 2008). Also of possible interest is the analysis of contemporary, as yet unresolved socioscientific issues (Solbes et al., 2018), such as climate change, science and social justice, transgenic foods, homeopathy, and alcohol and drug use in society. Everyday questions that demand a decision can also be investigated, such as "What car should I buy?" (Moreno-Fontiveros et al., 2022) or "How can we prevent the arrival of another pandemic?" (Ushola & Puig, 2023).

On the other hand, it is essential that the discussion of the chosen issue be planned through an instructional process that generates an environment conducive to reflection and debate, with a view to engaging the students' participation. This can be achieved, for example, by setting up a role-play game (Blanco-López et al., 2017), especially if the issue is socioscientific, or through the critical and reflective reading of advertisements with scientific content (Campanario et al., 2001) or of science-related news in the daily media (García-Carmona, 2014, 2021a; Guerrero-Márquez & García-Carmona, 2020; Oliveras et al., 2013) for subsequent discussion; all this in a collaborative learning setting and with a clear democratic spirit.

Respect for scientific evidence (point ii) should be the indispensable condition in any analysis and discussion through the prisms of scientific and critical thinking (Erduran, 2021). Although scientific knowledge may be impregnated with subjectivity during its construction and is revisable in the light of new evidence (the tentativeness of scientific knowledge), once accepted by the scientific community it is as objective as possible (García-Carmona & Acevedo-Díaz, 2016b). Therefore, promoting trust in and respect for scientific evidence should be one of the primary educational challenges in combating pseudoscientists and science deniers (Díaz & Cabrera, 2022), whose arguments are based on false beliefs and assumptions, anecdotes, and conspiracy theories (Normand, 2008). Nevertheless, promoting respect for scientific evidence is no simple task (Fackler, 2021) since science deniers, for example, consider science to be unreliable because it is imperfect (McIntyre, 2021). Hence the need to promote a basic understanding of NOS (point iii) as a fundamental pillar for the development of both scientific thinking and critical thinking. A good way to do this would be through explicit and reflective discussion of controversies from the history of science (Acevedo-Díaz & García-Carmona, 2017) or contemporary controversies (García-Carmona, 2021b; García-Carmona & Acevedo-Díaz, 2016a).

Also with respect to point iii of the proposal, it is necessary to draw on basic scientific knowledge when developing scientific and critical thinking skills (Willingham, 2008). Without it, it will be impossible to develop a minimally serious and convincing argument on the issue being analyzed. For example, if one does not know the transmission mechanism of a certain disease, it is likely to be very difficult to understand or justify certain patterns of social behavior in the face of it. In general, possessing appropriate scientific knowledge on the issue in question helps one make the best interpretation of the data and evidence available on that issue (OECD, 2019).

The search for information from reliable sources, together with its analysis and interpretation (points iv to vi), are essential practices both in purely scientific contexts (e.g., learning about the behavior of a given physical phenomenon from the literature or through inquiry) and in the application of critical thinking (e.g., when one wishes to take a personal, but informed, position on a particular socioscientific issue). With regard to determining the credibility of information with scientific content on the Internet, Osborne et al. (2022) propose, among other strategies, checking whether the source is free of conflicts of interest, i.e., whether or not it is biased by ideological, political, or economic motives. It should also be checked whether the source and the author(s) of the information are sufficiently reputable.

Regarding the interpretation of data and evidence, several studies have shown the difficulties that students often have with this practice in the context of inquiry activities (e.g., Gobert et al., 2018; Kanari & Millar, 2004; Pols et al., 2021) or when analyzing science news in the press (Norris et al., 2003). They are also found to have significant difficulties in choosing the most appropriate data to support their arguments in causal analyses (Kuhn & Modrek, 2022). However, it must be recognized that making interpretations or inferences from data is not a simple task; among other reasons, because their construction is influenced by multiple factors, both epistemic (prior knowledge, experimental designs, etc.) and non-epistemic (personal expectations, ideology, sociopolitical context, etc.), which means that such interpretations are not always the same for all scientists (García-Carmona, 2021a; García-Carmona & Acevedo-Díaz, 2018). For this reason, this scientific practice constitutes one of the phases or processes that generates the most debate or discussion in a scientific community for as long as no consensus is reached. In order to improve students' practice of making inferences, Kuhn and Lerman (2021) propose activities that help them develop their own epistemological norms for causally connecting their statements with the available evidence.

Point vii refers, on the one hand, to an essential scientific practice: the elaboration of evidence-based scientific explanations which, generally in a reasoned way, account for the causality, properties, and/or behavior of phenomena (Brigandt, 2016). On the other hand, point vii concerns the practice of argumentation. Unlike scientific explanation, argumentation tries to justify an idea, explanation, or position with the clear purpose of persuading those who defend different ones (Osborne & Patterson, 2011). As noted above, the complexity of most socioscientific issues implies that they have no unique valid solution or response. Therefore, the content of the arguments used to defend one position or another is not always based solely on purely rational factors such as data and scientific evidence. Some authors defend the need to also deal with non-epistemic aspects of the nature of science when teaching it (García-Carmona, 2021a; García-Carmona & Acevedo-Díaz, 2018) since many scientific and socioscientific controversies are resolved by different factors, or by factors that go beyond the purely epistemic (Vallverdú, 2005).

To defend an idea or position taken on an issue, it is not enough to have scientific evidence that supports it. It is also essential to have skills for the communication and discussion of ideas (point viii). The history of science shows how the difficulties some scientists had in communicating their ideas scientifically led to those ideas not being accepted at the time. A good example for students to become aware of this is the historical case of Semmelweis and puerperal fever (Aragón-Méndez et al., 2019). Its reflective reading makes it possible to conclude that this doctor's proposal that gynecologists disinfect their hands when passing from one parturient to another, to avoid the contagions that provoked the fever, was rejected by the medical community not only for epistemic reasons but also because of the difficulties he had in communicating his idea. The history of science also reveals that, at certain historical moments, some scientific interpretations were imposed on others thanks to the rhetorical skills of their proponents, even though none of the explanations convincingly accounted for the phenomenon studied. An example is the controversy between Pasteur and Liebig over the phenomenon of fermentation (García-Carmona & Acevedo-Díaz, 2017), whose reading and discussion in science class would also be recommended in the context of this critical and scientific thinking skill. With the COVID-19 pandemic, for example, the arguments of some charlatans in the media and on social networks managed to gain a certain influence over the population, even though scientifically they were muddled nonsense (García-Carmona, 2021b). Therefore, the reflective reading of news on current SSIs such as this also constitutes a good resource for the same educational purpose. In general, according to Spektor-Levy et al. (2009), scientific communication skills should be addressed explicitly in class, in a progressive and continuous manner, including tasks of information seeking, reading, scientific writing, representation of information, and representation of the knowledge acquired.

Finally (point ix), good scientific/critical thinkers must be aware of what they know and of what they have doubts about or do not know, continuously practicing metacognitive exercises to this end (Dean & Kuhn, 2003; Hyytinen et al., 2019; Magno, 2010; Willingham, 2008). At the same time, they must recognize the weaknesses and strengths of their peers' arguments in the debate in order to be self-critical if necessary, as well as revise their own ideas and arguments to improve and reorient them (self-regulation). I see one of the keys to both scientific and critical thinking as being the capacity or willingness to change one's mind without it being frowned upon; indeed, quite the opposite, since one assumes that such change occurs because the arguments have become enriched and more solidly founded. In other words, scientific and critical thinking do not sit well with arrogance or haughtiness towards the rectification of ideas or opinions.

5 Final Remarks

For decades, scientific thinking and critical thinking have received particular attention from different disciplines such as psychology, philosophy, and pedagogy, as well as from specific areas of the latter such as science education. The two types of thinking represent intellectual processes whose development in students, and in society in general, is considered indispensable for the exercise of responsible citizenship in accord with the demands of today's society (European Commission, 2006, 2015; NRC, 2012; OECD, 2020). As has been shown, however, the task of conceptualizing them is complex, and teaching students to think scientifically and critically is a difficult educational challenge (Willingham, 2008).

Aware of this, and after many years dedicated to science education, I felt the need to organize my ideas regarding these two types of thinking. In consulting the literature, I found that in many publications scientific thinking and critical thinking are presented or perceived as interchangeable or indistinguishable, a conclusion also shared by Hyytinen et al. (2019). Rarely have their differences, relationships, or common features been explicitly studied. I therefore considered it a matter needing to be addressed because, in science education, the development of scientific thinking is an inherent objective; but when critical thinking is added to the learning objectives, more than reasonable doubts arise about when one or the other, or both at the same time, would be used. The present work was motivated by this, with the intention of making a particular contribution, grounded in the relevant literature, to advancing on the question raised. It converges in conceiving of scientific thinking and critical thinking as two intellectual processes that overlap and feed into each other in many respects, but that differ with regard to certain cognitive skills and in terms of their purpose. In the case of scientific thinking, the aim is to choose the best possible explanation of a phenomenon based on the available evidence, which therefore involves rejecting alternative explanatory proposals that prove less coherent or convincing. From the perspective of critical thinking, in contrast, the purpose is to choose the most defensible idea/option among others that are also defensible, using both scientific and extra-scientific (i.e., moral, ethical, political, etc.) arguments. With this in mind, I have described a proposal to guide their development in the classroom, integrating them under a conception that I have called, metaphorically, a symbiotic relationship between two modes of thinking.

Critical thinking is mentioned explicitly in other subjects of the curricular provisions, such as Education in Civics and Ethical Values or Geography and History (Royal Decree 217/2022).

García-Carmona (2021a) conceives of them as activities that require the comprehensive application of procedural skills, cognitive and metacognitive processes, and both scientific knowledge and knowledge of the nature of scientific practice.

Kuhn (2021) argues that the relationship between scientific reasoning and metacognition is especially fostered by what she calls inhibitory control, which basically consists of breaking down the whole of a thought into parts in such a way that attention to some of those parts is inhibited, allowing a focused examination of the intended mental content.

Specifically, Tena-Sánchez and León-Medina (2022) assume that critical thinking is at the basis of the rational or scientific skepticism that leads to questioning any claim that lacks empirical support.

As discussed in the introduction, the inquiry-based approach is also considered conducive to addressing critical thinking in science education (Couso et al., 2020; NRC, 2012).

Epistemic skills should not be confused with epistemological knowledge (García-Carmona, 2021a). The former refers to the skills needed to construct, evaluate, and use knowledge; the latter to an understanding of the origin, nature, scope, and limits of scientific knowledge.

For this purpose, it can be very useful to address in class, with the help of the history and philosophy of science, the fact that scientists get more wrong than right in their research, and that error is always an opportunity to learn (García-Carmona & Acevedo-Díaz, 2018).

Acevedo-Díaz, J. A., & García-Carmona, A. (2017). Controversias en la historia de la ciencia y cultura científica [Controversies in the history of science and scientific culture]. Los Libros de la Catarata.

Aragón-Méndez, M. D. M., Acevedo-Díaz, J. A., & García-Carmona, A. (2019). Prospective biology teachers’ understanding of the nature of science through an analysis of the historical case of Semmelweis and childbed fever. Cultural Studies of Science Education , 14 (3), 525–555. https://doi.org/10.1007/s11422-018-9868-y

Bailin, S. (2002). Critical thinking and science education. Science & Education, 11 (4), 361–375. https://doi.org/10.1023/A:1016042608621


BBVA Foundation (2011). El Nobel de Física Sheldon L. Glashow no cree que los neutrinos viajen más rápido que la luz [Physics Nobel laureate Sheldon L. Glashow does not believe neutrinos travel faster than light]. https://www.fbbva.es/noticias/nobel-fisica-sheldon-l-glashow-no-cree-los-neutrinos-viajen-mas-rapido-la-luz/ . Accessed 5 February 2023.

Bell, R. L. (2009). Teaching the nature of science: Three critical questions. In Best Practices in Science Education . National Geographic School Publishing.


Blanco-López, A., España-Ramos, E., & Franco-Mariscal, A. J. (2017). Estrategias didácticas para el desarrollo del pensamiento crítico en el aula de ciencias [Teaching strategies for the development of critical thinking in the teaching of science]. Ápice. Revista de Educación Científica, 1 (1), 107–115. https://doi.org/10.17979/arec.2017.1.1.2004

Brigandt, I. (2016). Why the difference between explanation and argument matters to science education. Science & Education, 25 (3-4), 251–275. https://doi.org/10.1007/s11191-016-9826-6

Cáceres, M., Nussbaum, M., & Ortiz, J. (2020). Integrating critical thinking into the classroom: A teacher’s perspective. Thinking Skills and Creativity, 37 , 100674. https://doi.org/10.1016/j.tsc.2020.100674

Campanario, J. M., Moya, A., & Otero, J. (2001). Invocaciones y usos inadecuados de la ciencia en la publicidad [Invocations and misuses of science in advertising]. Enseñanza de las Ciencias, 19 (1), 45–56. https://doi.org/10.5565/rev/ensciencias.4013

Clouse, S. (2017). Scientific thinking is not critical thinking. https://medium.com/extra-extra/scientific-thinking-is-not-critical-thinking-b1ea9ebd8b31

Confederación de Sociedades Científicas de España [COSCE]. (2011). Informe ENCIENDE: Enseñanza de las ciencias en la didáctica escolar para edades tempranas en España [ENCIENDE report: Science education in the early school years in Spain]. COSCE.

Costa, S. L. R., Obara, C. E., & Broietti, F. C. D. (2020). Critical thinking in science education publications: the research contexts. International Journal of Development Research, 10 (8), 39438. https://doi.org/10.37118/ijdr.19437.08.2020

Couso, D., Jiménez-Liso, M.R., Refojo, C. & Sacristán, J.A. (coords.) (2020). Enseñando ciencia con ciencia [Teaching science with science]. FECYT & Fundacion Lilly / Penguin Random House

Davidson, S. G., Jaber, L. Z., & Southerland, S. A. (2020). Emotions in the doing of science: Exploring epistemic affect in elementary teachers' science research experiences. Science Education, 104 (6), 1008–1040. https://doi.org/10.1002/sce.21596

Dean, D., & Kuhn, D. (2003). Metacognition and critical thinking. ERIC document. Reproduction No. ED477930 . https://files.eric.ed.gov/fulltext/ED477930.pdf

Díaz, C., & Cabrera, C. (2022). Desinformación científica en España . FECYT/IBERIFIER https://www.fecyt.es/es/publicacion/desinformacion-cientifica-en-espana

Dowd, J. E., Thompson, R. J., Jr., Schiff, L. A., & Reynolds, J. A. (2018). Understanding the complex relationship between critical thinking and science reasoning among undergraduate thesis writers. CBE—Life Sciences Education, 17 (1), ar4. https://doi.org/10.1187/cbe.17-03-0052

Dwyer, C. P., Hogan, M. J., & Stewart, I. (2014). An integrated critical thinking framework for the 21st century. Thinking Skills and Creativity, 12 , 43–52. https://doi.org/10.1016/j.tsc.2013.12.004

Elliott, K. C., & McKaughan, D. J. (2014). Non-epistemic values and the multiple goals of science. Philosophy of Science, 81 (1), 1–21. https://doi.org/10.1086/674345

Ennis, R. H. (2018). Critical thinking across the curriculum: A vision. Topoi, 37 (1), 165–184. https://doi.org/10.1007/s11245-016-9401-4

Erduran, S. (2021). Respect for evidence: Can science education deliver it? Science & Education, 30 (3), 441–444. https://doi.org/10.1007/s11191-021-00245-8

European Commission. (2015). Science education for responsible citizenship . Publications Office https://op.europa.eu/en/publication-detail/-/publication/a1d14fa0-8dbe-11e5-b8b7-01aa75ed71a1

European Commission / Eurydice. (2011). Science education in Europe: National policies, practices and research . Publications Office. https://op.europa.eu/en/publication-detail/-/publication/bae53054-c26c-4c9f-8366-5f95e2187634

European Commission / Eurydice. (2022). Increasing achievement and motivation in mathematics and science learning in schools . Publications Office. https://eurydice.eacea.ec.europa.eu/publications/mathematics-and-science-learning-schools-2022

European Commission/Eurydice. (2006). Science teaching in schools in Europe. Policies and research . Publications Office. https://op.europa.eu/en/publication-detail/-/publication/1dc3df34-acdf-479e-bbbf-c404fa3bee8b

Fackler, A. (2021). When science denial meets epistemic understanding. Science & Education, 30 (3), 445–461. https://doi.org/10.1007/s11191-021-00198-y

García-Carmona, A. (2008). Relaciones CTS en la educación científica básica. II. Investigando los problemas del mundo [STS relationships in basic science education II. Researching the world problems]. Enseñanza de las Ciencias, 26 (3), 389–402. https://doi.org/10.5565/rev/ensciencias.3750

García-Carmona, A. (2014). Naturaleza de la ciencia en noticias científicas de la prensa: Análisis del contenido y potencialidades didácticas [Nature of science in press articles about science: Content analysis and pedagogical potential]. Enseñanza de las Ciencias, 32 (3), 493–509. https://doi.org/10.5565/rev/ensciencias.1307

García-Carmona, A., & Acevedo-Díaz, J. A. (2016). Learning about the nature of science using newspaper articles with scientific content. Science & Education, 25 (5–6), 523–546. https://doi.org/10.1007/s11191-016-9831-9

García-Carmona, A., & Acevedo-Díaz, J. A. (2016b). Concepciones de estudiantes de profesorado de Educación Primaria sobre la naturaleza de la ciencia: Una evaluación diagnóstica a partir de reflexiones en equipo [Preservice elementary teachers' conceptions of the nature of science: a diagnostic evaluation based on team reflections]. Revista Mexicana de Investigación Educativa, 21 (69), 583–610. https://www.redalyc.org/articulo.oa?id=14045395010

García-Carmona, A., & Acevedo-Díaz, J. A. (2017). Understanding the nature of science through a critical and reflective analysis of the controversy between Pasteur and Liebig on fermentation. Science & Education, 26 (1–2), 65–91. https://doi.org/10.1007/s11191-017-9876-4

García-Carmona, A., & Acevedo-Díaz, J. A. (2018). The nature of scientific practice and science education. Science & Education, 27 (5–6), 435–455. https://doi.org/10.1007/s11191-018-9984-9

García-Carmona, A. (2020). From inquiry-based science education to the approach based on scientific practices. Science & Education, 29 (2), 443–463. https://doi.org/10.1007/s11191-020-00108-8

García-Carmona, A. (2021a). Prácticas no-epistémicas: ampliando la mirada en el enfoque didáctico basado en prácticas científicas [Non-epistemic practices: extending the view in the didactic approach based on scientific practices]. Revista Eureka sobre Enseñanza y Divulgación de las Ciencias, 18 (1), 1108. https://doi.org/10.25267/Rev_Eureka_ensen_divulg_cienc.2021.v18.i1.1108

García-Carmona, A. (2021b). Learning about the nature of science through the critical and reflective reading of news on the COVID-19 pandemic. Cultural Studies of Science Education, 16 (4), 1015–1028. https://doi.org/10.1007/s11422-021-10092-2

Guerrero-Márquez, I., & García-Carmona, A. (2020). La energía y su impacto socioambiental en la prensa digital: temáticas y potencialidades didácticas para una educación CTS [Energy and its socio-environmental impact in the digital press: issues and didactic potentialities for STS education]. Revista Eureka sobre Enseñanza y Divulgación de las Ciencias, 17(3), 3301. https://doi.org/10.25267/Rev_Eureka_ensen_divulg_cienc.2020.v17.i3.3301

Gobert, J. D., Moussavi, R., Li, H., Sao Pedro, M., & Dickler, R. (2018). Real-time scaffolding of students’ online data interpretation during inquiry with Inq-ITS using educational data mining. In M. E. Auer, A. K. M. Azad, A. Edwards, & T. de Jong (Eds.), Cyber-physical laboratories in engineering and science education (pp. 191–217). Springer.


Harlen, W. (2014). Helping children’s development of inquiry skills. Inquiry in Primary Science Education, 1 (1), 5–19. https://ipsejournal.files.wordpress.com/2015/03/3-ipse-volume-1-no-1-wynne-harlen-p-5-19.pdf

Hitchcock, D. (2017). Critical thinking as an educational ideal. In On reasoning and argument (pp. 477–497). Springer.

Hyytinen, H., Toom, A., & Shavelson, R. J. (2019). Enhancing scientific thinking through the development of critical thinking in higher education. In M. Murtonen & K. Balloo (Eds.), Redefining scientific thinking for higher education . Palgrave Macmillan.

Jiménez-Aleixandre, M. P., & Puig, B. (2022). Educating critical citizens to face post-truth: the time is now. In B. Puig & M. P. Jiménez-Aleixandre (Eds.), Critical thinking in biology and environmental education, Contributions from biology education research (pp. 3–19). Springer.

Jirout, J. J. (2020). Supporting early scientific thinking through curiosity. Frontiers in Psychology, 11 , 1717. https://doi.org/10.3389/fpsyg.2020.01717

Kanari, Z., & Millar, R. (2004). Reasoning from data: How students collect and interpret data in science investigations. Journal of Research in Science Teaching, 41 (7), 748–769. https://doi.org/10.1002/tea.20020

Klahr, D., Zimmerman, C., & Matlen, B. J. (2019). Improving students’ scientific thinking. In J. Dunlosky & K. A. Rawson (Eds.), The Cambridge handbook of cognition and education (pp. 67–99). Cambridge University Press.

Krell, M., Vorholzer, A., & Nehring, A. (2022). Scientific reasoning in science education: from global measures to fine-grained descriptions of students’ competencies. Education Sciences, 12 , 97. https://doi.org/10.3390/educsci12020097

Kuhn, D. (1993). Science as argument: Implications for teaching and learning scientific thinking. Science education, 77 (3), 319–337. https://doi.org/10.1002/sce.3730770306

Kuhn, D. (1999). A developmental model of critical thinking. Educational Researcher, 28 (2), 16–46. https://doi.org/10.3102/0013189X028002016

Kuhn, D. (2022). Metacognition matters in many ways. Educational Psychologist, 57 (2), 73–86. https://doi.org/10.1080/00461520.2021.1988603

Kuhn, D., Iordanou, K., Pease, M., & Wirkala, C. (2008). Beyond control of variables: What needs to develop to achieve skilled scientific thinking? Cognitive Development, 23 (4), 435–451. https://doi.org/10.1016/j.cogdev.2008.09.006

Kuhn, D., & Lerman, D. (2021). Yes but: Developing a critical stance toward evidence. International Journal of Science Education, 43 (7), 1036–1053. https://doi.org/10.1080/09500693.2021.1897897

Kuhn, D., & Modrek, A. S. (2022). Choose your evidence: Scientific thinking where it may most count. Science & Education, 31 (1), 21–31. https://doi.org/10.1007/s11191-021-00209-y

Lederman, J. S., Lederman, N. G., Bartos, S. A., Bartels, S. L., Meyer, A. A., & Schwartz, R. S. (2014). Meaningful assessment of learners' understandings about scientific inquiry—The views about scientific inquiry (VASI) questionnaire. Journal of Research in Science Teaching, 51 (1), 65–83. https://doi.org/10.1002/tea.21125

Lehrer, R., & Schauble, L. (2006). Scientific thinking and science literacy. In K. A. Renninger, I. E. Sigel, W. Damon, & R. M. Lerner (Eds.), Handbook of child psychology: Child psychology in practice (pp. 153–196). John Wiley & Sons, Inc.

López-Fernández, M. D. M., González-García, F., & Franco-Mariscal, A. J. (2022). How can socio-scientific issues help develop critical thinking in chemistry education? A reflection on the problem of plastics. Journal of Chemical Education, 99 (10), 3435–3442. https://doi.org/10.1021/acs.jchemed.2c00223

Magno, C. (2010). The role of metacognitive skills in developing critical thinking. Metacognition and Learning, 5 , 137–156. https://doi.org/10.1007/s11409-010-9054-4

McBain, B., Yardy, A., Martin, F., Phelan, L., van Altena, I., McKeowen, J., Pembertond, C., Tosec, H., Fratuse, L., & Bowyer, M. (2020). Teaching science students how to think. International Journal of Innovation in Science and Mathematics Education, 28 (2), 28–35. https://openjournals.library.sydney.edu.au/CAL/article/view/14809/13480

McIntyre, L. (2021). Talking to science deniers and sceptics is not hopeless. Nature, 596 (7871), 165–165. https://doi.org/10.1038/d41586-021-02152-y

Moore, C. (2019). Teaching science thinking. Using scientific reasoning in the classroom . Routledge.

Moreno-Fontiveros, G., Cebrián-Robles, D., Blanco-López, A., & España-Ramos, E. (2022). Decisiones de estudiantes de 14/15 años en una propuesta didáctica sobre la compra de un coche [Fourteen/fifteen-year-old students’ decisions in a teaching proposal on the buying of a car]. Enseñanza de las Ciencias, 40 (1), 199–219. https://doi.org/10.5565/rev/ensciencias.3292

National Research Council [NRC]. (2012). A framework for K-12 science education . National Academies Press.

Inter-American Teacher Education Network [ITEN]. (2015). Critical thinking toolkit . OAS/ITEN.

Normand, M. P. (2008). Science, skepticism, and applied behavior analysis. Behavior Analysis in Practice, 1 (2), 42–49. https://doi.org/10.1007/BF03391727

Norris, S. P., Phillips, L. M., & Korpan, C. A. (2003). University students’ interpretation of media reports of science and its relationship to background knowledge, interest, and reading difficulty. Public Understanding of Science, 12 (2), 123–145. https://doi.org/10.1177/09636625030122001

Oliveras, B., Márquez, C., & Sanmartí, N. (2013). The use of newspaper articles as a tool to develop critical thinking in science classes. International Journal of Science Education, 35 (6), 885–905. https://doi.org/10.1080/09500693.2011.586736

Organisation for Economic Co-operation and Development [OECD]. (2019). PISA 2018. Assessment and Analytical Framework . OECD Publishing. https://doi.org/10.1787/b25efab8-en


Organisation for Economic Co-operation and Development [OECD]. (2020). PISA 2024: Strategic Vision and Direction for Science. https://www.oecd.org/pisa/publications/PISA-2024-Science-Strategic-Vision-Proposal.pdf

Osborne, J., Pimentel, D., Alberts, B., Allchin, D., Barzilai, S., Bergstrom, C., Coffey, J., Donovan, B., Kivinen, K., Kozyreva, A., & Wineburg, S. (2022). Science Education in an Age of Misinformation . Stanford University.

Osborne, J. F., & Patterson, A. (2011). Scientific argument and explanation: A necessary distinction? Science Education, 95 (4), 627–638. https://doi.org/10.1002/sce.20438

Pols, C. F. J., Dekkers, P. J. J. M., & De Vries, M. J. (2021). What do they know? Investigating students’ ability to analyse experimental data in secondary physics education. International Journal of Science Education, 43 (2), 274–297. https://doi.org/10.1080/09500693.2020.1865588

Royal Decree 217/2022. (2022). of 29 March, which establishes the organisation and minimum teaching of Compulsory Secondary Education (Vol. 76 , pp. 41571–41789). Spanish Official State Gazette. https://www.boe.es/eli/es/rd/2022/03/29/217

Sagan, C. (1987). The burden of skepticism. Skeptical Inquirer, 12 (1), 38–46. https://skepticalinquirer.org/1987/10/the-burden-of-skepticism/

Santos, L. F. (2017). The role of critical thinking in science education. Journal of Education and Practice, 8 (20), 160–173. https://eric.ed.gov/?id=ED575667

Schafersman, S. D. (1991). An introduction to critical thinking. https://facultycenter.ischool.syr.edu/wp-content/uploads/2012/02/Critical-Thinking.pdf . Accessed 10 May 2023.

Sinatra, G. M., & Hofer, B. K. (2021). How do emotions and attitudes influence science understanding? In Science denial: why it happens and what to do about it (pp. 142–180). Oxford Academic.

Solbes, J., Torres, N., & Traver, M. (2018). Use of socio-scientific issues in order to improve critical thinking competences. Asia-Pacific Forum on Science Learning & Teaching, 19 (1), 1–22. https://www.eduhk.hk/apfslt/

Spektor-Levy, O., Eylon, B. S., & Scherz, Z. (2009). Teaching scientific communication skills in science studies: Does it make a difference? International Journal of Science and Mathematics Education, 7 (5), 875–903. https://doi.org/10.1007/s10763-009-9150-6

Taylor, P., Lee, S. H., & Tal, T. (2006). Toward socio-scientific participation: changing culture in the science classroom and much more: Setting the stage. Cultural Studies of Science Education, 1 (4), 645–656. https://doi.org/10.1007/s11422-006-9028-7

Tena-Sánchez, J., & León-Medina, F. J. (2022). Y aún más al fondo del “bullshit”: El papel de la falsificación de preferencias en la difusión del oscurantismo en la teoría social y en la sociedad [And even deeper into “bullshit”: The role of preference falsification in the difussion of obscurantism in social theory and in society]. Scio, 22 , 209–233. https://doi.org/10.46583/scio_2022.22.949

Tytler, R., & Peterson, S. (2004). From “try it and see” to strategic exploration: Characterizing young children's scientific reasoning. Journal of Research in Science Teaching, 41 (1), 94–118. https://doi.org/10.1002/tea.10126

Uskola, A., & Puig, B. (2023). Development of systems and futures thinking skills by primary pre-service teachers for addressing epidemics. Research in Science Education , 1–17. https://doi.org/10.1007/s11165-023-10097-7

Vallverdú, J. (2005). ¿Cómo finalizan las controversias? Un nuevo modelo de análisis: la controvertida historia de la sacarina [How does controversies finish? A new model of analysis: the controversial history of saccharin]. Revista Iberoamericana de Ciencia, Tecnología y Sociedad, 2 (5), 19–50. http://www.revistacts.net/wp-content/uploads/2020/01/vol2-nro5-art01.pdf

Vázquez-Alonso, A., & Manassero-Mas, M. A. (2018). Más allá de la comprensión científica: educación científica para desarrollar el pensamiento [Beyond understanding of science: science education for teaching fair thinking]. Revista Electrónica de Enseñanza de las Ciencias, 17 (2), 309–336. http://reec.uvigo.es/volumenes/volumen17/REEC_17_2_02_ex1065.pdf

Willingham, D. T. (2008). Critical thinking: Why is it so hard to teach? Arts Education Policy Review, 109 (4), 21–32. https://doi.org/10.3200/AEPR.109.4.21-32

Yacoubian, H. A. (2020). Teaching nature of science through a critical thinking approach. In W. F. McComas (Ed.), Nature of Science in Science Instruction (pp. 199–212). Springer.

Yacoubian, H. A., & Khishfe, R. (2018). Argumentation, critical thinking, nature of science and socioscientific issues: a dialogue between two researchers. International Journal of Science Education, 40 (7), 796–807. https://doi.org/10.1080/09500693.2018.1449986

Zeidler, D. L., & Nichols, B. H. (2009). Socioscientific issues: Theory and practice. Journal of elementary science education, 21 (2), 49–58. https://doi.org/10.1007/BF03173684

Zimmerman, C., & Klahr, D. (2018). Development of scientific thinking. In J. T. Wixted (Ed.), Stevens’ handbook of experimental psychology and cognitive neuroscience (Vol. 4 , pp. 1–25). John Wiley & Sons, Inc..


Conflict of Interest

The author declares no conflict of interest.

Funding for open access publishing: Universidad de Sevilla/CBUA

Author information

Authors and Affiliations

Departamento de Didáctica de las Ciencias Experimentales y Sociales, Universidad de Sevilla, Seville, Spain

Antonio García-Carmona


Corresponding author

Correspondence to Antonio García-Carmona .

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

García-Carmona, A. Scientific Thinking and Critical Thinking in Science Education . Sci & Educ (2023). https://doi.org/10.1007/s11191-023-00460-5


Accepted: 30 July 2023

Published: 05 September 2023

DOI: https://doi.org/10.1007/s11191-023-00460-5


  • Cognitive skills
  • Critical thinking
  • Metacognitive skills
  • Science education
  • Scientific thinking

APS

  • Teaching Tips

On Critical Thinking

Several years ago some teaching colleagues were talking about the real value of teaching psychology students to think critically. After some heated discussion, the last word was had by a colleague from North Carolina. “The real value of being a good critical thinker in psychology is so you won’t be a jerk,” he said with a smile. That observation remains one of my favorites in justifying why teaching critical thinking skills should be an important goal in psychology. However, I believe it captures only a fraction of the real value of teaching students to think critically about behavior.

What Is Critical Thinking?

Although there is little agreement about what it means to think critically in psychology, I like the following broad definition: the propensity and skills to engage in activity with reflective skepticism focused on deciding what to believe or do.

Students often arrive at their first introductory course with what they believe is a thorough grasp of how life works. After all, they have been alive for at least 18 years, have witnessed their fair share of crises, joys, and tragedies, and have successfully navigated their way into your classroom.

These students have had a lot of time to develop their own personal theories about how the world works, and most are quite satisfied with the results. They often pride themselves on how good they are with people as well as how astute they are in understanding and explaining the motives of others. And they think they know what psychology is. Many are surprised, and sometimes disappointed, to discover that psychology is a science, and the rigor of psychological research is a shock. The breadth and depth of psychology feel daunting. Regardless of their sophistication in the discipline, students often are armed with a single strategy to survive the experience: memorize the book and hope it works out on the exam. In many cases, this strategy will serve them well. Unfortunately, student exposure to critical thinking skill development may be more accidental than deliberate on the part of most teachers. Collaboration in my department and with other colleagues over the years has persuaded me that we need to approach critical thinking skills in a purposeful, systematic, and developmental manner from the introductory course through the capstone experience. I propose that we need to teach critical thinking skills in three domains of psychology: practical (the “jerk avoidance” function), theoretical (developing scientific explanations for behavior), and methodological (testing scientific ideas). I will explore each of these areas and then offer some general suggestions about how psychology teachers can improve their purposeful pursuit of critical thinking objectives.

Practical Domain

Practical critical thinking is often expressed as a long-term, implicit goal of teachers of psychology, even though they may not spend much academic time teaching how to transfer critical thinking skills to make students wise consumers, more careful judges of character, or more cautious interpreters of behavior. Accurate appraisal of behavior is essential, yet few teachers invest time in helping students understand how vulnerable their own interpretations are to error.

Encourage practice in accurate description and interpretation of behavior by presenting students with ambiguous behavior samples. Ask them to distinguish what they observe (What is the behavior?) from the inferences they draw from the behavior (What is the meaning of the behavior?). I have found that cartoons, such as Simon Bond’s Unspeakable Acts, can be a good resource for refining observation skills. Students quickly recognize that crisp behavioral descriptions are typically consistent from observer to observer, but inferences vary wildly. They recognize that their interpretations are highly personal and sometimes biased by their own values and preferences. As a result of experiencing such strong individual differences in interpretation, students may learn to be appropriately less confident of their immediate conclusions, more tolerant of ambiguity, and more likely to propose alternative explanations. As they acquire a good understanding of scientific procedures, effective control techniques, and legitimate forms of evidence, they may be less likely to fall victim to the multitude of off-base claims about behavior that confront us all. (How many Elvis sightings can be valid in one year?)

Theoretical Domain

Theoretical critical thinking involves helping the student develop an appreciation for scientific explanations of behavior. This means learning not just the content of psychology but how and why psychology is organized into concepts, principles, laws, and theories. Developing theoretical skills begins in the introductory course where the primary critical thinking objective is understanding and applying concepts appropriately. For example, when you introduce students to the principles of reinforcement, you can ask them to find examples of the principles in the news or to make up stories that illustrate the principles.

Mid-level courses in the major require more sophistication, moving students beyond application of concepts and principles to learning and applying theories. For instance, you can provide a rich case study in abnormal psychology and ask students to make sense of the case from different perspectives, emphasizing theoretical flexibility or accurate use of existing and accepted frameworks in psychology to explain patterns of behavior. In advanced courses we can justifiably ask students to evaluate theory, selecting the most useful or rejecting the least helpful. For example, students can contrast different models to explain drug addiction in physiological psychology. By examining the strengths and weaknesses of existing frameworks, they can select which theories serve best as they learn to justify their criticisms based on evidence and reason.

Capstone, honors, and graduate courses go beyond theory evaluation to encourage students to create theory. Students select a complex question about behavior (for example, identifying mechanisms that underlie autism or language acquisition) and develop their own theory-based explanations for the behavior. This challenge requires them to synthesize and integrate existing theory as well as devise new insights into the behavior.

Methodological Domain

Most departments offer many opportunities for students to develop their methodological critical thinking abilities by applying different research methods in psychology. Beginning students must first learn what the scientific method entails. The next step is to apply that understanding by identifying design elements in existing research. For example, any detailed description of an experimental design can help students practice distinguishing the independent from the dependent variable and identifying how researchers controlled for alternative explanations. Subsequent methodological critical thinking goals include evaluating the quality of existing research designs and challenging the conclusions of research findings. Students may need to feel empowered by the teacher to overcome the reverence they sometimes demonstrate for anything in print, including their textbooks. Asking students to do a critical analysis of a fairly sophisticated design may simply be too big a leap for them to make. They are likely to fare better if given examples of bad design first, so they can build their critical abilities and confidence before tackling more sophisticated designs. (Examples of bad design can be found in The Critical Thinking Companion for Introductory Psychology, or they can be easily constructed with a little time and imagination.)

Students will develop and execute their own research designs in their capstone methodology courses. Asking students to conduct their own independent research, whether a comprehensive survey on parental attitudes, a naturalistic study of museum patrons’ behavior, or a well-designed experiment on paired-associate learning, prompts students to integrate their critical thinking skills and gives them practice with conventional writing forms in psychology. In evaluating their work I have found it helpful to ask students to identify the strengths and weaknesses of their own work (an additional opportunity to think critically) before giving them my feedback.
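For computationally inclined students, the design elements discussed above can be made concrete with a small simulation. The sketch below is a hypothetical illustration, not part of the original exercises (the function name and parameter values are invented): group assignment (placebo vs. drug) plays the role of the independent variable, a numeric outcome score is the dependent variable, and the placebo group serves as the control condition that provides the basis of comparison.

```python
import random
import statistics

def run_simulated_experiment(n_per_group=50, treatment_effect=5.0, seed=42):
    """Simulate a two-group design. The independent variable is group
    assignment (placebo vs. drug); the dependent variable is a score."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    # Control group: baseline scores drawn from a normal distribution.
    placebo = [rng.gauss(50, 10) for _ in range(n_per_group)]
    # Treatment group: same baseline plus a hypothesized treatment effect.
    drug = [rng.gauss(50 + treatment_effect, 10) for _ in range(n_per_group)]
    # The control group provides the objective basis of comparison:
    # the effect is assessed relative to the placebo condition.
    return statistics.mean(drug) - statistics.mean(placebo)

observed_difference = run_simulated_experiment()
print(f"Mean difference (drug - placebo): {observed_difference:.2f}")
```

Students can vary `treatment_effect` and `n_per_group` to see how sample size affects how reliably a real effect stands out from random noise, which leads naturally into discussion of formal significance testing.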

Additional Suggestions

Adopting explicit critical thinking objectives, regardless of the domain of critical thinking, may entail some strategy changes on the part of the teacher.

• Introduce psychology as an open-ended, growing enterprise. Students often think that their entry into the discipline represents an end-point where everything good and true has already been discovered. That conclusion encourages passivity rather than criticality. Point out that research is psychology’s way of growing and developing. Each new discovery in psychology represents a potentially elegant act of critical thinking. A lot of room for discovery remains. New ideas will be developed and old conceptions discarded.

• Require student performance that goes beyond memorization. Group work, essays, debates, themes, letters to famous psychologists, journals, current event examples: all of these and more can be used as a means of developing the higher skills involved in critical thinking in psychology. Find faulty cause-effect conclusions in the tabloids (e.g., “Eating broccoli increases your IQ!”) and have students design studies to confirm or discredit the headline’s claims. Ask students to identify what kinds of evidence would warrant belief in commercial claims. Although it is difficult, even well-designed objective test items can capture critical thinking skills so that students are challenged beyond mere repetition and recall.

• Clarify your expectations about performance with explicit, public criteria. Devising clear performance criteria for psychology projects will enhance student success. Students often complain that they don’t understand “what you want” when you assign work. Performance criteria specify the standards that you will use to evaluate their work. For example, performance criteria for the observation exercise described earlier might include the following: the student describes behavior accurately, offers inference that is reasonable for the context, and identifies personal factors that might influence inference. Performance criteria facilitate giving detailed feedback easily and can also promote student self-assessment.

• Label good examples of critical thinking when these occur spontaneously. Students may not recognize when they are thinking critically. When you identify examples of good thinking, or exploit examples that could be improved, you enhance students’ ability to understand. One of my students made this vivid for me when she commented on the good connection she had made between a course concept and an insight from her literature class: “That is what you mean by critical thinking?” Thereafter I have been careful to label a good critical thinking insight.

• Endorse a questioning attitude. Students often assume that if they have questions about their reading, then they are somehow being dishonorable, rude, or stupid. Having discussions early in the course about the role of good questions in enhancing the quality of the subject and expanding the sharpness of the mind may set a more critical stage on which students can play. Model critical thinking from some insights you have had about behavior or from some research you have conducted in the past. Congratulate students who offer good examples of the principles under study. Thank students who ask concept-related questions and describe why you think their questions are good. Leave time and space for more. Your own excitement about critical thinking can be a great incentive for students to seek that excitement.

• Brace yourself. When you include more opportunity for student critical thinking in class, there is much more opportunity for the class to go astray. Stepping away from the podium and engaging the students to perform what they know necessitates some loss of control, or at least some enhanced risk. However, the advantage is that no class will ever feel completely predictable, and this can be a source of stimulation for students and the professor as well.

critical thinking and the scientific method

As far back as I can remember, over 50 years ago, I have been talking psychology with friends, or helping them to solve problems. I never thought about psychology back then, but now I realize I really love helping people. How can I become a critical thinker without condemning people?

Using a case study, explain the use of critical thinking in the counseling process.

Do you have any current readings on critical thinking skills in psychology, besides John Ruscio’s work?


About the Author

Jane Halonen received her PhD from the University of Wisconsin-Milwaukee in 1980. She is Professor of Psychology at Alverno College in Milwaukee, Wisconsin, where she has served as Chair of Psychology and Dean of the Behavior Sciences Department. Halonen is past president of the Council for Teachers of Undergraduate Psychology. A fellow of APA's Division 2 (Teaching), she has been active on the Committee of Undergraduate Education, helped design the 1991 APA Conference on Undergraduate Educational Quality, and currently serves as a committee member to develop standards for the teaching of high school psychology.


Scientific Method

Science is an enormously successful human enterprise. The study of scientific method is the attempt to discern the activities by which that success is achieved. Among the activities often identified as characteristic of science are systematic observation and experimentation, inductive and deductive reasoning, and the formation and testing of hypotheses and theories. How these are carried out in detail can vary greatly, but characteristics like these have been looked to as a way of demarcating scientific activity from non-science, where only enterprises which employ some canonical form of scientific method or methods should be considered science (see also the entry on science and pseudo-science). Others have questioned whether there is anything like a fixed toolkit of methods which is common across science and only science. Some reject privileging one view of method as part of rejecting broader views about the nature of science, such as naturalism (Dupré 2004); some reject any restriction in principle (pluralism).

Scientific method should be distinguished from the aims and products of science, such as knowledge, predictions, or control. Methods are the means by which those goals are achieved. Scientific method should also be distinguished from meta-methodology, which includes the values and justifications behind a particular characterization of scientific method (i.e., a methodology) — values such as objectivity, reproducibility, simplicity, or past successes. Methodological rules are proposed to govern method and it is a meta-methodological question whether methods obeying those rules satisfy given values. Finally, method is distinct, to some degree, from the detailed and contextual practices through which methods are implemented. The latter might range over: specific laboratory techniques; mathematical formalisms or other specialized languages used in descriptions and reasoning; technological or other material means; ways of communicating and sharing results, whether with other scientists or with the public at large; or the conventions, habits, enforced customs, and institutional controls over how and what science is carried out.

While it is important to recognize these distinctions, their boundaries are fuzzy. Hence, accounts of method cannot be entirely divorced from their methodological and meta-methodological motivations or justifications. Moreover, each aspect plays a crucial role in identifying methods. Disputes about method have therefore played out at the detail, rule, and meta-rule levels. Changes in beliefs about the certainty or fallibility of scientific knowledge, for instance (which is a meta-methodological consideration of what we can hope for methods to deliver), have meant different emphases on deductive and inductive reasoning, or on the relative importance attached to reasoning over observation (i.e., differences over particular methods). Beliefs about the role of science in society will affect the place one gives to values in scientific method.

The issue which has shaped debates over scientific method the most in the last half century is the question of how pluralist we need to be about method. Unificationists continue to hold out for one method essential to science; nihilism is a form of radical pluralism, which considers the effectiveness of any methodological prescription to be so context sensitive as to render it not explanatory on its own. Some middle degree of pluralism regarding the methods embodied in scientific practice seems appropriate. But the details of scientific practice vary with time and place, from institution to institution, across scientists and their subjects of investigation. How significant are the variations for understanding science and its success? How much can method be abstracted from practice? This entry describes some of the attempts to characterize scientific method or methods, as well as arguments for a more context-sensitive approach to methods embedded in actual scientific practices.

1. Overview and organizing themes
2. Historical review: Aristotle to Mill
3.1 Logical constructionism and operationalism
3.2 H-D as a logic of confirmation
3.3 Popper and falsificationism
3.4 Meta-methodology and the end of method
4. Statistical methods for hypothesis testing
5.1 Creative and exploratory practices
5.2 Computer methods and the ‘new ways’ of doing science
6.1 “The scientific method” in science education and as seen by scientists
6.2 Privileged methods and ‘gold standards’
6.3 Scientific method in the court room
6.4 Deviating practices
7. Conclusion
Other Internet Resources
Related Entries

1. Overview and organizing themes

This entry could have been given the title Scientific Methods and gone on to fill volumes, or it could have been extremely short, consisting of a brief summary rejection of the idea that there is any such thing as a unique Scientific Method at all. Both unhappy prospects are due to the fact that scientific activity varies so much across disciplines, times, places, and scientists that any account which manages to unify it all will either consist of overwhelming descriptive detail, or trivial generalizations.

The choice of scope for the present entry is more optimistic, taking a cue from the recent movement in philosophy of science toward a greater attention to practice: to what scientists actually do. This “turn to practice” can be seen as the latest form of studies of methods in science, insofar as it represents an attempt at understanding scientific activity, but through accounts that are neither meant to be universal and unified, nor singular and narrowly descriptive. To some extent, different scientists at different times and places can be said to be using the same method even though, in practice, the details are different.

Whether the context in which methods are carried out is relevant, or to what extent, will depend largely on what one takes the aims of science to be and what one’s own aims are. For most of the history of scientific methodology the assumption has been that the most important output of science is knowledge and so the aim of methodology should be to discover those methods by which scientific knowledge is generated.

Science was seen to embody the most successful form of reasoning (but which form?) to the most certain knowledge claims (but how certain?) on the basis of systematically collected evidence (but what counts as evidence, and should the evidence of the senses take precedence, or rational insight?) Section 2 surveys some of the history, pointing to two major themes. One theme is seeking the right balance between observation and reasoning (and the attendant forms of reasoning which employ them); the other is how certain scientific knowledge is or can be.

Section 3 turns to 20th-century debates on scientific method. In the second half of the 20th century the epistemic privilege of science faced several challenges and many philosophers of science abandoned the reconstruction of the logic of scientific method. Views changed significantly regarding which functions of science ought to be captured and why. For some, the success of science was better identified with social or cultural features. Historical and sociological turns in the philosophy of science were made, with a demand that greater attention be paid to the non-epistemic aspects of science, such as sociological, institutional, material, and political factors. Even outside of those movements there was an increased specialization in the philosophy of science, with more and more focus on specific fields within science. The combined upshot was very few philosophers arguing any longer for a grand unified methodology of science. Sections 3 and 4 survey the main positions on scientific method in 20th-century philosophy of science, focusing on where they differ in their preference for confirmation or falsification or for waiving the idea of a special scientific method altogether.

In recent decades, attention has primarily been paid to scientific activities traditionally falling under the rubric of method, such as experimental design and general laboratory practice, the use of statistics, the construction and use of models and diagrams, interdisciplinary collaboration, and science communication. Sections 4–6 attempt to construct a map of the current domains of the study of methods in science.

As these sections illustrate, the question of method is still central to the discourse about science. Scientific method remains a topic for education, for science policy, and for scientists. It arises in the public domain where the demarcation or status of science is at issue. Some philosophers have recently returned, therefore, to the question of what it is that makes science a unique cultural product. This entry will close with some of these recent attempts at discerning and encapsulating the activities by which scientific knowledge is achieved.

2. Historical review: Aristotle to Mill

Attempting a history of scientific method compounds the vast scope of the topic. This section briefly surveys the background to modern methodological debates. What can be called the classical view goes back to antiquity, and represents a point of departure for later divergences.[1]

We begin with a point made by Laudan (1968) in his historical survey of scientific method:

Perhaps the most serious inhibition to the emergence of the history of theories of scientific method as a respectable area of study has been the tendency to conflate it with the general history of epistemology, thereby assuming that the narrative categories and classificatory pigeon-holes applied to the latter are also basic to the former. (1968: 5)

To see knowledge about the natural world as falling under knowledge more generally is an understandable conflation. Histories of theories of method would naturally employ the same narrative categories and classificatory pigeon holes. An important theme of the history of epistemology, for example, is the unification of knowledge, a theme reflected in the question of the unification of method in science. Those who have identified differences in kinds of knowledge have often likewise identified different methods for achieving that kind of knowledge (see the entry on the unity of science ).

Different views on what is known, how it is known, and what can be known are connected. Plato distinguished the realms of things into the visible and the intelligible (The Republic, 510a, in Cooper 1997). Only the latter, the Forms, could be objects of knowledge. The intelligible truths could be known with the certainty of geometry and deductive reasoning. What could be observed of the material world, however, was by definition imperfect and deceptive, not ideal. The Platonic way of knowledge therefore emphasized reasoning as a method, downplaying the importance of observation. Aristotle disagreed, locating the Forms in the natural world as the fundamental principles to be discovered through the inquiry into nature (Metaphysics Z, in Barnes 1984).

Aristotle is recognized as giving the earliest systematic treatise on the nature of scientific inquiry in the western tradition, one which embraced observation and reasoning about the natural world. In the Prior and Posterior Analytics, Aristotle reflects first on the aims and then the methods of inquiry into nature. A number of features can be found which are still considered by most to be essential to science. For Aristotle, empiricism, careful observation (but passive observation, not controlled experiment), is the starting point. The aim is not merely recording of facts, though. For Aristotle, science (epistêmê) is a body of properly arranged knowledge or learning; the empirical facts, but also their ordering and display, are of crucial importance. The aims of discovery, ordering, and display of facts partly determine the methods required of successful scientific inquiry. Also determinant is the nature of the knowledge being sought, and the explanatory causes proper to that kind of knowledge (see the discussion of the four causes in the entry on Aristotle on causality).

In addition to careful observation, then, scientific method requires a logic as a system of reasoning for properly arranging, but also inferring beyond, what is known by observation. Methods of reasoning may include induction, prediction, or analogy, among others. Aristotle’s system (along with his catalogue of fallacious reasoning) was collected under the title the Organon. This title would be echoed in later works on scientific reasoning, such as Novum Organum by Francis Bacon, and Novum Organon Renovatum by William Whewell (see below). In Aristotle’s Organon reasoning is divided primarily into two forms, a rough division which persists into modern times. The division, known most commonly today as deductive versus inductive method, appears in other eras and methodologies as analysis/synthesis, non-ampliative/ampliative, or even confirmation/verification. The basic idea is there are two “directions” to proceed in our methods of inquiry: one away from what is observed, to the more fundamental, general, and encompassing principles; the other, from the fundamental and general to instances or implications of principles.

The basic aim and method of inquiry identified here can be seen as a theme running throughout the next two millennia of reflection on the correct way to seek after knowledge: carefully observe nature and then seek rules or principles which explain or predict its operation. The Aristotelian corpus provided the framework for a commentary tradition on scientific method independent of science itself (cosmos versus physics). During the medieval period, figures such as Albertus Magnus (1206–1280), Thomas Aquinas (1225–1274), Robert Grosseteste (1175–1253), Roger Bacon (1214/1220–1292), William of Ockham (1287–1347), Andreas Vesalius (1514–1546), and Giacomo Zabarella (1533–1589) all worked to clarify the kind of knowledge obtainable by observation and induction, the source of justification of induction, and the best rules for its application.[2] Many of their contributions we now think of as essential to science (see also Laudan 1968). As Aristotle and Plato had employed a framework of reasoning either “to the forms” or “away from the forms”, medieval thinkers employed directions away from the phenomena or back to the phenomena. In analysis, a phenomenon was examined to discover its basic explanatory principles; in synthesis, explanations of a phenomenon were constructed from first principles.

During the Scientific Revolution these various strands of argument, experiment, and reason were forged into a dominant epistemic authority. The 16th–18th centuries were a period not only of dramatic advance in knowledge about the operation of the natural world—advances in mechanical, medical, biological, political, and economic explanations—but also of self-awareness of the revolutionary changes taking place, and intense reflection on the source and legitimation of the method by which the advances were made. The struggle to establish the new authority included methodological moves. The Book of Nature, according to the metaphor of Galileo Galilei (1564–1642) or Francis Bacon (1561–1626), was written in the language of mathematics, of geometry and number. This motivated an emphasis on mathematical description and mechanical explanation as important aspects of scientific method. Through figures such as Henry More and Ralph Cudworth, a neo-Platonic emphasis on the importance of metaphysical reflection on nature behind appearances, particularly regarding the spiritual as a complement to the purely mechanical, remained an important methodological thread of the Scientific Revolution (see the entries on Cambridge platonists; Boyle; Henry More; Galileo).

In Novum Organum (1620), Bacon was critical of the Aristotelian method for leaping from particulars to universals too quickly. The syllogistic form of reasoning readily mixed those two types of propositions. Bacon aimed at the invention of new arts, principles, and directions. His method would be grounded in methodical collection of observations, coupled with correction of our senses (and particularly, directions for the avoidance of the Idols, as he called them, kinds of systematic errors to which naïve observers are prone). The community of scientists could then climb, by a careful, gradual and unbroken ascent, to reliable general claims.

Bacon’s method has been criticized as impractical and too inflexible for the practicing scientist. Whewell would later criticize Bacon for paying too little attention to the practices of scientists. It is hard to find convincing examples of Bacon’s method being put into practice in the history of science, but there are a few who have been held up as real examples of 17th-century scientific, inductive method, even if not in the rigid Baconian mold: figures such as Robert Boyle (1627–1691) and William Harvey (1578–1657) (see the entry on Bacon).

It is to Isaac Newton (1642–1727), however, that historians of science and methodologists have paid greatest attention. Given the enormous success of his Principia Mathematica and Opticks, this is understandable. The study of Newton’s method has had two main thrusts: the implicit method of the experiments and reasoning presented in the Opticks, and the explicit methodological rules given as the Rules for Philosophising (the Regulae) in Book III of the Principia.[3] Newton’s law of gravitation, the linchpin of his new cosmology, broke with explanatory conventions of natural philosophy, first for apparently proposing action at a distance, but more generally for not providing “true”, physical causes. The argument for his System of the World (Principia, Book III) was based on phenomena, not reasoned first principles. This was viewed (mainly on the continent) as insufficient for proper natural philosophy. The Regulae counter this objection, re-defining the aims of natural philosophy by re-defining the method natural philosophers should follow. (See the entry on Newton’s philosophy.)

To his list of methodological prescriptions should be added Newton’s famous phrase “hypotheses non fingo” (commonly translated as “I frame no hypotheses”). The scientist was not to invent systems but infer explanations from observations, as Bacon had advocated. This would come to be known as inductivism. In the century after Newton, significant clarifications of the Newtonian method were made. Colin Maclaurin (1698–1746), for instance, reconstructed the essential structure of the method as having complementary analysis and synthesis phases, one proceeding away from the phenomena in generalization, the other from the general propositions to derive explanations of new phenomena. Denis Diderot (1713–1784) and the editors of the Encyclopédie did much to consolidate and popularize Newtonianism, as did Francesco Algarotti (1712–1764). The emphasis was often as much on the character of the scientist as on their process, a character which is still commonly assumed. The scientist is humble in the face of nature, not beholden to dogma, obeys only his eyes, and follows the truth wherever it leads. It was certainly Voltaire (1694–1778) and du Chatelet (1706–1749) who were most influential in propagating the latter vision of the scientist and their craft, with Newton as hero. Scientific method became a revolutionary force of the Enlightenment. (See also the entries on Newton, Leibniz, Descartes, Boyle, Hume, and the enlightenment, as well as Shank 2008 for a historical overview.)

Not all 18th-century reflections on scientific method were so celebratory. Famous also are George Berkeley’s (1685–1753) attack on the mathematics of the new science, as well as on the over-emphasis of Newtonians on observation; and David Hume’s (1711–1776) undermining of the warrant offered for scientific claims by inductive justification (see the entries on George Berkeley; David Hume; Hume’s Newtonianism and Anti-Newtonianism). Hume’s problem of induction motivated Immanuel Kant (1724–1804) to seek new foundations for empirical method, though as an epistemic reconstruction, not as any set of practical guidelines for scientists. Both Hume and Kant influenced the methodological reflections of the next century, such as the debate between Mill and Whewell over the certainty of inductive inferences in science.

The debate between John Stuart Mill (1806–1873) and William Whewell (1794–1866) has become the canonical methodological debate of the 19th century. Although often characterized as a debate between inductivism and hypothetico-deductivism, the role of the two methods on each side is actually more complex. On the hypothetico-deductive account, scientists work to come up with hypotheses from which true observational consequences can be deduced, hence hypothetico-deductive. Because Whewell emphasizes both hypotheses and deduction in his account of method, he can be seen as a convenient foil to the inductivism of Mill. However, equally if not more important to Whewell’s portrayal of scientific method is what he calls the “fundamental antithesis”: knowledge is a product of the objective (what we see in the world around us) and the subjective (the contributions of our mind to how we perceive and understand what we experience, which he called the Fundamental Ideas). Both elements are essential according to Whewell, and he was therefore critical of Kant for too much focus on the subjective, and of John Locke (1632–1704) and Mill for too much focus on the senses. Whewell’s fundamental ideas can be discipline relative: an idea can be fundamental even if it is necessary for knowledge only within a given scientific discipline (e.g., chemical affinity for chemistry). This distinguishes fundamental ideas from Kant’s forms and categories of intuition. (See the entry on Whewell.)

Clarifying fundamental ideas would therefore be an essential part of scientific method and scientific progress. Whewell called this process “Discoverer’s Induction”. It was induction, following Bacon or Newton, but Whewell sought to revive Bacon’s account by emphasizing the role of ideas in the clear and careful formulation of inductive hypotheses. Whewell’s induction is not merely the collecting of objective facts. The subjective plays a role through what Whewell calls the Colligation of Facts, a creative act of the scientist, the invention of a theory. A theory is then confirmed by testing, where more facts are brought under the theory, called the Consilience of Inductions. Whewell felt that this was the method by which the true laws of nature could be discovered: clarification of fundamental concepts, clever invention of explanations, and careful testing. Mill, in his critique of Whewell, and others who have cast Whewell as a forerunner of the hypothetico-deductivist view, seem to have underestimated the importance of this discovery phase in Whewell’s understanding of method (Snyder 1997a,b, 1999). Downplaying the discovery phase would come to characterize methodology of the early 20th century (see section 3).

Mill, in his System of Logic, put forward a narrower view of induction as the essence of scientific method. For Mill, induction is the search first for regularities among events. Among those regularities, some will continue to hold for further observations, eventually gaining the status of laws. One can also look for regularities among the laws discovered in a domain, i.e., for a law of laws. Which law of laws will hold is time and discipline dependent and open to revision. One example is the Law of Universal Causation, and Mill put forward specific methods for identifying causes, now commonly known as Mill’s methods. These five methods look for circumstances which are common among the phenomena of interest, those which are absent when the phenomena are absent, or those for which both vary together. Mill’s methods are still seen as capturing basic intuitions about experimental methods for finding the relevant explanatory factors (System of Logic (1843); see the entry on Mill). The methods advocated by Whewell and Mill, in the end, look similar: both involve inductive generalization to covering laws. They differ dramatically, however, with respect to the necessity of the knowledge arrived at, that is, at the meta-methodological level (see the entries on Whewell and Mill).
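Mill’s methods lend themselves to a simple algorithmic reading. The sketch below is illustrative only (the circumstance names and observations are invented, not Mill’s own examples): the method of agreement retains the circumstances common to every instance of the phenomenon, and the method of difference compares a positive instance with a negative one.

```python
# Hypothetical observations: each pairs the set of circumstances present
# with whether the phenomenon of interest occurred.
observations = [
    ({"heat", "oxygen", "fuel"}, True),
    ({"heat", "oxygen", "wind"}, True),
    ({"heat", "fuel", "damp"}, False),
]

def method_of_agreement(obs):
    """Circumstances common to all instances where the phenomenon occurs."""
    positives = [circs for circs, occurred in obs if occurred]
    return set.intersection(*positives)

def method_of_difference(positive, negative):
    """Circumstances present in a positive instance but absent in a negative one."""
    return positive - negative

print(method_of_agreement(observations))
print(method_of_difference({"heat", "oxygen", "fuel"}, {"heat", "fuel", "damp"}))
```

The method of concomitant variation would similarly look for circumstances whose degree co-varies with the degree of the phenomenon, which is closer to modern correlational reasoning.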

3. Logic of method and critical responses

The quantum and relativistic revolutions in physics in the early 20th century had a profound effect on methodology. The conceptual foundations of both theories were taken to show the defeasibility of even the most seemingly secure intuitions about space, time and bodies. Certainty of knowledge about the natural world was therefore recognized as unattainable. Instead, a renewed empiricism was sought which rendered science fallible but still rationally justifiable.

Analyses of the reasoning of scientists emerged, according to which the aspects of scientific method of primary importance were the means of testing and confirming theories. A distinction in methodology was made between the contexts of discovery and justification. The distinction could be used as a wedge between the particularities of where and how theories or hypotheses are arrived at, on the one hand, and, on the other, the underlying reasoning scientists use (whether or not they are aware of it) when assessing theories and judging their adequacy on the basis of the available evidence. By and large, for most of the 20th century, philosophy of science focused on the second context, although philosophers differed on whether to focus on confirmation or refutation as well as on the many details of how confirmation or refutation could or could not be brought about. By the mid-20th century these attempts at defining the method of justification, and the context distinction itself, came under pressure. During the same period, philosophy of science developed rapidly, and from section 4 this entry will therefore shift from a primarily historical treatment of the scientific method towards a primarily thematic one.

Advances in logic and probability held out the promise of elaborate reconstructions of scientific theories and empirical method, the best example being Rudolf Carnap’s The Logical Structure of the World (1928). Carnap attempted to show that a scientific theory could be reconstructed as a formal axiomatic system, that is, a logic. That system could refer to the world because some of its basic sentences could be interpreted as observations or operations which one could perform to test them. The rest of the theoretical system, including sentences using theoretical or unobservable terms (like electron or force), would then be meaningful either because they could be reduced to observations or because they had purely logical meanings (called analytic, like mathematical identities). This has been referred to as the verifiability criterion of meaning. According to the criterion, any statement that is neither analytic nor verifiable is strictly meaningless. Although the view was endorsed by Carnap in 1928, he would later come to see it as too restrictive (Carnap 1956). Another familiar version of this idea is the operationalism of Percy Williams Bridgman. In The Logic of Modern Physics (1927) Bridgman asserted that every physical concept could be defined in terms of the operations one would perform to verify the application of that concept. Making good on the operationalisation of a concept even as simple as length, however, can easily become enormously complex (for measuring very small lengths, for instance) or impractical (for measuring large distances like light years).

Carl Hempel’s (1950, 1951) criticisms of the verifiability criterion of meaning had enormous influence. He pointed out that universal generalizations, such as most scientific laws, were not strictly meaningful on the criterion. Verifiability and operationalism both seemed too restrictive to capture standard scientific aims and practice. The tenuous connection between these reconstructions and actual scientific practice was criticized in another way. In both approaches, scientific methods were recast in reconstructive roles: measurements, for example, were looked to as ways of giving meanings to terms. The aim of the philosopher of science was not to understand the methods per se, but to use them to reconstruct theories, their meanings, and their relation to the world. When scientists perform these operations, however, they will not report that they are doing them to give meaning to terms in a formal axiomatic system. This disconnect between methodology and the details of actual scientific practice would seem to violate the empiricism the Logical Positivists and Bridgman were committed to. The view that methodology should correspond to practice (to some extent) has been called historicism, or intuitionism. We turn to these criticisms and responses in section 3.4.[4]

Positivism also had to contend with the recognition that a purely inductivist approach, along the lines of Bacon-Newton-Mill, was untenable. There was no pure observation, for starters: all observation was theory laden. Theory is required to make any observation; therefore not all theory can be derived from observation alone. (See the entry on theory and observation in science.) Even granting an observational basis, Hume had already pointed out that one could not deductively justify inductive conclusions without begging the question by presuming the success of the inductive method. Likewise, positivist attempts at analyzing how a generalization can be confirmed by observations of its instances were subject to a number of criticisms. Goodman (1965) and Hempel (1965) both point to paradoxes inherent in standard accounts of confirmation. Recent attempts at explaining how observations can serve to confirm a scientific theory are discussed in section 4 below.

The standard starting point for a non-inductive analysis of the logic of confirmation is known as the Hypothetico-Deductive (H-D) method. In its simplest form, a sentence of a theory which expresses some hypothesis is confirmed by its true consequences. As noted in section 2, this method had been advanced by Whewell in the 19th century, as well as by Nicod (1924) and others in the 20th century. Often, Hempel’s (1966) description of the H-D method, illustrated by the case of Semmelweis’s inferential procedures in establishing the cause of childbed fever, has been presented as a key account of H-D as well as a foil for criticism of the H-D account of confirmation (see, for example, Lipton’s (2004) discussion of inference to the best explanation; also the entry on confirmation). Hempel described Semmelweis’s procedure as examining various hypotheses explaining the cause of childbed fever. Some hypotheses conflicted with observable facts and could be rejected as false immediately. Others needed to be tested experimentally by deducing which observable events should follow if the hypothesis were true (what Hempel called the test implications of the hypothesis), then conducting an experiment and observing whether or not the test implications occurred. If the experiment showed a test implication to be false, the hypothesis could be rejected. If the experiment showed the test implications to be true, however, this did not prove the hypothesis true. The confirmation of a test implication does not verify a hypothesis, though Hempel did allow that “it provides at least some support, some corroboration or confirmation for it” (Hempel 1966: 8). The degree of this support then depends on the quantity, variety and precision of the supporting evidence.

Another approach that took off from the difficulties with inductive inference was Karl Popper’s critical rationalism or falsificationism (Popper 1959, 1963). Falsification is deductive and similar to H-D in that it involves scientists deducing observational consequences from the hypothesis under test. For Popper, however, the important point was not the degree of confirmation that successful prediction offered to a hypothesis. The crucial thing was the logical asymmetry between confirmation, based on inductive inference, and falsification, which can be based on a deductive inference. (This simple opposition was later questioned, by Lakatos, among others. See the entry on historicist theories of scientific rationality.)

Popper stressed that, regardless of the amount of confirming evidence, we can never be certain that a hypothesis is true without committing the fallacy of affirming the consequent. Instead, Popper introduced the notion of corroboration as a measure for how well a theory or hypothesis has survived previous testing—but without implying that this is also a measure for the probability that it is true.

Popper was also motivated by his doubts about the scientific status of theories like the Marxist theory of history or psycho-analysis, and so wanted to demarcate between science and pseudo-science. Popper saw this as an importantly different distinction from that between science and metaphysics. The latter demarcation was the primary concern of many logical empiricists. Popper used the idea of falsification to draw a line instead between pseudo and proper science. Science was science because its method involved subjecting theories to rigorous tests which offered a high probability of failing and thus refuting the theory.

A commitment to the risk of failure was important. Avoiding falsification could be done all too easily. If a consequence of a theory is inconsistent with observations, an exception can be added by introducing auxiliary hypotheses designed explicitly to save the theory, so-called ad hoc modifications. This Popper saw done in pseudo-science, where ad hoc theories appeared capable of explaining anything in their field of application. In contrast, science is risky. If observations showed the predictions from a theory to be wrong, the theory would be refuted. Hence, scientific hypotheses must be falsifiable. Not only must there exist some possible observation statement which could falsify the hypothesis or theory were it observed (Popper called these the hypothesis’s potential falsifiers), but it is also crucial to the Popperian scientific method that such falsifications be sincerely attempted on a regular basis.

The more potential falsifiers of a hypothesis, the more falsifiable it would be, and the more the hypothesis claimed. Conversely, hypotheses without falsifiers claimed very little or nothing at all. Originally, Popper thought that this meant the introduction of ad hoc hypotheses only to save a theory should not be countenanced as good scientific method. These would undermine the falsifiability of a theory. However, Popper later came to recognize that the introduction of modifications (immunizations, he called them) was often an important part of scientific development. Responding to surprising or apparently falsifying observations often generated important new scientific insights. Popper’s own example was the observed motion of Uranus, which originally did not agree with Newtonian predictions. The ad hoc hypothesis of an outer planet explained the disagreement and led to further falsifiable predictions. Popper sought to reconcile the view by blurring the distinction between falsifiable and not falsifiable, speaking instead of degrees of testability (Popper 1985: 41f.).

From the 1960s on, sustained meta-methodological criticism emerged that drove philosophical focus away from scientific method. A brief look at those criticisms follows, with recommendations for further reading at the end of the entry.

Thomas Kuhn’s The Structure of Scientific Revolutions (1962) begins with a well-known shot across the bow for philosophers of science:

History, if viewed as a repository for more than anecdote or chronology, could produce a decisive transformation in the image of science by which we are now possessed. (1962: 1)

The image Kuhn thought needed transforming was the a-historical, rational reconstruction sought by many of the Logical Positivists, though Carnap and other positivists were actually quite sympathetic to Kuhn’s views. (See the entry on the Vienna Circle.) Kuhn shares with others of his contemporaries, such as Feyerabend and Lakatos, a commitment to a more empirical approach to philosophy of science. Namely, the history of science provides important data, and necessary checks, for philosophy of science, including any theory of scientific method.

The history of science reveals, according to Kuhn, that scientific development occurs in alternating phases. During normal science, the members of the scientific community adhere to the paradigm in place. Their commitment to the paradigm means a commitment to the puzzles to be solved and the acceptable ways of solving them. Confidence in the paradigm remains so long as steady progress is made in solving the shared puzzles. Method in this normal phase operates within a disciplinary matrix (Kuhn’s later concept of a paradigm) which includes standards for problem solving, and defines the range of problems to which the method should be applied. An important part of a disciplinary matrix is the set of values which provide the norms and aims for scientific method. The main values that Kuhn identifies are prediction, problem solving, simplicity, consistency, and plausibility.

An important by-product of normal science is the accumulation of puzzles which cannot be solved with the resources of the current paradigm. Once the accumulation of these anomalies has reached a critical mass, it can trigger a communal shift to a new paradigm and a new phase of normal science. Importantly, the values that provide the norms and aims for scientific method may have transformed in the meantime. Method may therefore be relative to discipline, time or place.

Feyerabend also identified the aim of science as progress, but argued that any methodological prescription would only stifle that progress (Feyerabend 1988). His arguments are grounded in re-examining accepted “myths” about the history of science. Heroes of science, like Galileo, are shown to be just as reliant on rhetoric and persuasion as they are on reason and demonstration. Others, like Aristotle, are shown to be far more reasonable and far-reaching in their outlooks than they are given credit for. As a consequence, the only rule that could provide what he took to be sufficient freedom was the vacuous “anything goes”. More generally, even the methodological restriction that science is the best way to pursue knowledge, and to increase knowledge, is too restrictive. Feyerabend suggested instead that science might, in fact, be a threat to a free society, because it and its myth had become so dominant (Feyerabend 1978).

An even more fundamental kind of criticism was offered by several sociologists of science from the 1970s onwards, who rejected the methodology of providing philosophical accounts of the rational development of science and sociological accounts of its irrational mistakes. Instead, they adhered to a symmetry thesis on which any causal explanation of how scientific knowledge is established needs to be symmetrical, explaining truth and falsity, rationality and irrationality, success and mistakes, by the same causal factors (see, e.g., Barnes and Bloor 1982, Bloor 1991). Movements in the Sociology of Science, like the Strong Programme, or in the social dimensions and causes of knowledge more generally, led to extended and close examination of detailed case studies in contemporary science and its history. (See the entries on the social dimensions of scientific knowledge and social epistemology.) Well-known examinations by Latour and Woolgar (1979/1986), Knorr-Cetina (1981), Pickering (1984), and Shapin and Schaffer (1985) seemed to bear out that it was social ideologies (on a macro-scale) or individual interactions and circumstances (on a micro-scale) that were the primary causal factors in determining which beliefs gained the status of scientific knowledge. As they saw it, therefore, explanatory appeals to scientific method were not empirically grounded.

A late, and largely unexpected, criticism of scientific method came from within science itself. Beginning in the early 2000s, a number of scientists attempting to replicate the results of published experiments could not do so. There may be a close conceptual connection between reproducibility and method. For example, if reproducibility means that the same scientific methods ought to produce the same result, and all scientific results ought to be reproducible, then whatever it takes to reproduce a scientific result ought to be called scientific method. Space limits us to the observation that, insofar as reproducibility is a desired outcome of proper scientific method, it is not strictly a part of scientific method. (See the entry on reproducibility of scientific results.)

By the close of the 20 th century the search for the scientific method was flagging. Nola and Sankey (2000b) could introduce their volume on method by remarking that “For some, the whole idea of a theory of scientific method is yester-year’s debate …”.

4. Statistical methods for hypothesis testing

Despite the many difficulties that philosophers encountered in trying to provide a clear methodology of confirmation (or refutation), important progress has nevertheless been made on understanding how observation can provide evidence for a given theory. Work in statistics has been crucial for understanding how theories can be tested empirically, and in recent decades a huge literature has developed that attempts to recast confirmation in Bayesian terms. Here these developments can be covered only briefly, and we refer to the entry on confirmation for further details and references.

Statistics has come to play an increasingly important role in the methodology of the experimental sciences from the 19th century onwards. At that time, statistics and probability theory took on a methodological role as an analysis of inductive inference, and attempts to ground the rationality of induction in the axioms of probability theory have continued throughout the 20th century and into the present. Developments in the theory of statistics itself, meanwhile, have had a direct and immense influence on the experimental method, including methods for measuring the uncertainty of observations, such as the Method of Least Squares developed by Legendre and Gauss in the early 19th century, criteria for the rejection of outliers proposed by Peirce by the mid-19th century, and the significance tests developed by Gosset (a.k.a. “Student”), Fisher, Neyman & Pearson and others in the 1920s and 1930s (see, e.g., Swijtink 1987 for a brief historical overview; and also the entry on C.S. Peirce).
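The Method of Least Squares mentioned above can be illustrated with a minimal sketch (the data points are invented). For a straight-line model y = a + bx, the least-squares estimates minimize the sum of squared residuals, and in this simple case they have a closed form:

```python
# Ordinary least squares for a line y = a + b*x, via the closed-form
# estimates: b = cov(x, y) / var(x), a = mean(y) - b * mean(x).
def least_squares_line(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Noise-free illustration: points lying exactly on y = 1 + 2x,
# so the fit should recover intercept 1 and slope 2.
a, b = least_squares_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # 1.0 2.0
```

With noisy measurements the same formula returns the line minimizing the total squared measurement error, which is precisely the sense in which least squares quantifies the uncertainty of observations.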

These developments within statistics then in turn led to a reflective discussion among both statisticians and philosophers of science on how to perceive the process of hypothesis testing: whether it was a rigorous statistical inference that could provide a numerical expression of the degree of confidence in the tested hypothesis, or whether it should be seen as a decision between different courses of action that also involved a value component. This led to a major controversy between Fisher on the one side and Neyman and Pearson on the other (see especially Fisher 1955, Neyman 1956 and Pearson 1955, and for analyses of the controversy, e.g., Howie 2002, Marks 2000, Lenhard 2006). On Fisher’s view, hypothesis testing was a methodology for deciding when to accept or reject a statistical hypothesis, namely that a hypothesis should be rejected by evidence if this evidence would be unlikely relative to other possible outcomes, given that the hypothesis were true. In contrast, on Neyman and Pearson’s view, the consequences of error also had to play a role when deciding between hypotheses. Introducing the distinction between the error of rejecting a true hypothesis (type I error) and that of accepting a false hypothesis (type II error), they argued that it depends on the consequences of the error whether it is more important to avoid rejecting a true hypothesis or accepting a false one. Hence, Fisher aimed for a theory of inductive inference that enabled a numerical expression of confidence in a hypothesis. To him, the important point was the search for truth, not utility. In contrast, the Neyman-Pearson approach provided a strategy of inductive behaviour for deciding between different courses of action. Here, the important point was not whether a hypothesis was true, but whether one should act as if it were.
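The type I / type II trade-off can be made concrete with a small sketch (the numbers and the coin-tossing scenario are invented for illustration). Suppose a test rejects the null hypothesis H0: p = 0.5 about a coin whenever at least k of n tosses land heads; both error rates then follow directly from the binomial distribution:

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, k = 80, 50  # reject H0: p = 0.5 if at least k of n tosses are heads

alpha = binom_tail(n, k, 0.5)      # type I error: rejecting a true H0
beta = 1 - binom_tail(n, k, 0.7)   # type II error: not rejecting when really p = 0.7

print(f"type I error  (alpha): {alpha:.4f}")
print(f"type II error (beta):  {beta:.4f}")
```

Raising the threshold k lowers alpha but raises beta; the Neyman-Pearson point is that which balance to strike depends on the comparative costs of the two errors, not on the evidence alone.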

Similar discussions are found in the philosophical literature. On the one side, Churchman (1948) and Rudner (1953) argued that because scientific hypotheses can never be completely verified, a complete analysis of the methods of scientific inference includes ethical judgments, in which the scientist must decide whether the evidence is sufficiently strong, or the probability sufficiently high, to warrant the acceptance of the hypothesis, which in turn will depend on the importance of making a mistake in accepting or rejecting it. Others, such as Jeffrey (1956) and Levi (1960), disagreed and instead defended a value-neutral view of science on which scientists should bracket their attitudes, preferences, temperament, and values when assessing the correctness of their inferences. For more details on this value-free ideal in the philosophy of science and its historical development, see Douglas (2009) and Howard (2003). For a broad set of case studies examining the role of values in science, see e.g. Elliott & Richards 2017.

In recent decades, philosophical discussions of the evaluation of probabilistic hypotheses by statistical inference have largely focused on Bayesianism, which understands probability as a measure of a person’s degree of belief in an event, given the available information, and frequentism, which instead understands probability as a long-run frequency of a repeatable event. Hence, for Bayesians probabilities refer to a state of knowledge, whereas for frequentists probabilities refer to frequencies of events (see, e.g., Sober 2008, chapter 1 for a detailed introduction to Bayesianism and frequentism as well as to likelihoodism). Bayesianism aims at providing a quantifiable, algorithmic representation of belief revision, where belief revision is a function of prior beliefs (i.e., background knowledge) and incoming evidence. Bayesianism employs a rule based on Bayes’ theorem, a theorem of the probability calculus which relates conditional probabilities. The probability that a particular hypothesis is true is interpreted as a degree of belief, or credence, of the scientist. There will also be a probability, and a degree of belief, that a hypothesis will be true conditional on a piece of evidence (an observation, say) being true. Bayesianism prescribes that it is rational for the scientist to update their belief in the hypothesis to that conditional probability should it turn out that the evidence is, in fact, observed (see, e.g., Sprenger & Hartmann 2019 for a comprehensive treatment of Bayesian philosophy of science). Originating in the work of Neyman and Pearson, frequentism aims at providing the tools for reducing long-run error rates, such as the error-statistical approach developed by Mayo (1996), which focuses on how experimenters can avoid both type I and type II errors by building up a repertoire of procedures that detect errors if and only if they are present.
Both Bayesianism and frequentism have developed over time; they are interpreted in different ways by their various proponents, and their relations to earlier criticisms of attempts at defining scientific method are seen differently by proponents and critics. The literature of surveys, reviews and criticism in this area is vast, and the reader is referred to the entries on Bayesian epistemology and confirmation.
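The Bayesian update rule just described is easy to state concretely (the numbers below are invented for illustration). Bayes’ theorem gives P(H|E) = P(E|H)P(H)/P(E), and conditionalization says the scientist’s new credence in H, once E is observed, should be exactly that conditional probability:

```python
def bayes_update(prior, likelihood, likelihood_if_false):
    """Posterior credence in H after observing E, by Bayes' theorem."""
    # P(E) by the law of total probability over H and not-H.
    p_e = likelihood * prior + likelihood_if_false * (1 - prior)
    return likelihood * prior / p_e

# Invented example: credence 0.2 in hypothesis H; the evidence E is
# expected with probability 0.9 if H is true and 0.3 if H is false.
posterior = bayes_update(prior=0.2, likelihood=0.9, likelihood_if_false=0.3)
print(round(posterior, 3))  # 0.429
```

Because the evidence is three times likelier under H than under its negation, the credence rises from 0.2 to about 0.43; evidence equally likely either way would leave the prior unchanged.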

5. Method in Practice

Attention to scientific practice, as we have seen, is not itself new. However, the recent turn to practice in the philosophy of science can be seen as a correction to the pessimism with respect to method in philosophy of science in the later parts of the 20th century, and as an attempted reconciliation between sociological and rationalist explanations of scientific knowledge. Much of this work sees method as detailed and context-specific problem-solving procedures, and methodological analyses as at once descriptive, critical and advisory (see Nickles 1987 for an exposition of this view). The following sections survey some of these practice focuses. From here on we turn fully to topics rather than chronology.

5.1 Creative and exploratory practices

A problem with the distinction between the contexts of discovery and justification that figured so prominently in philosophy of science in the first half of the 20th century (see section 2) is that no such distinction can be clearly seen in scientific activity (see Arabatzis 2006). Thus, in recent decades, it has been recognized that the study of conceptual innovation and change should not be confined to psychology and sociology of science, but is also an important aspect of scientific practice which philosophy of science should address (see also the entry on scientific discovery). Looking for the practices that drive conceptual innovation has led philosophers to examine both the reasoning practices of scientists and the wide realm of experimental practices that are not directed narrowly at testing hypotheses, that is, exploratory experimentation.

Examining the reasoning practices of historical and contemporary scientists, Nersessian (2008) has argued that new scientific concepts are constructed as solutions to specific problems by systematic reasoning, and that analogy, visual representation and thought-experimentation are among the important reasoning practices employed. These ubiquitous forms of reasoning are reliable, but also fallible, methods of conceptual development and change. On her account, model-based reasoning consists of cycles of construction, simulation, evaluation and adaptation of models that serve as interim interpretations of the target problem to be solved. Often, this process will lead to modifications or extensions of the model, and a new cycle of simulation and evaluation. However, Nersessian also emphasizes that

creative model-based reasoning cannot be applied as a simple recipe, is not always productive of solutions, and even its most exemplary usages can lead to incorrect solutions. (Nersessian 2008: 11)

Thus, while on the one hand she agrees with many previous philosophers that there is no logic of discovery, discoveries can derive from reasoned processes, such that a large and integral part of scientific practice is

the creation of concepts through which to comprehend, structure, and communicate about physical phenomena …. (Nersessian 1987: 11)

Similarly, work on heuristics for discovery and theory construction by scholars such as Darden (1991) and Bechtel & Richardson (1993) presents science as problem solving and investigates scientific problem solving as a special case of problem-solving in general. Drawing largely on cases from the biological sciences, much of their focus has been on reasoning strategies for the generation, evaluation, and revision of mechanistic explanations of complex systems.

Addressing another aspect of the context distinction, namely the traditional view that the primary role of experiments is to test theoretical hypotheses according to the H-D model, other philosophers of science have argued for additional roles that experiments can play. The notion of exploratory experimentation was introduced to describe experiments driven by the desire to obtain empirical regularities and to develop concepts and classifications in which these regularities can be described (Steinle 1997, 2002; Burian 1997; Waters 2007). However, the difference between theory-driven experimentation and exploratory experimentation should not be seen as a sharp distinction. Theory-driven experiments are not always directed at testing hypotheses, but may also be directed at various kinds of fact-gathering, such as determining numerical parameters. Vice versa, exploratory experiments are usually informed by theory in various ways and are therefore not theory-free. Instead, in exploratory experiments phenomena are investigated without first limiting the possible outcomes of the experiment on the basis of extant theory about the phenomena.

The development of high throughput instrumentation in molecular biology and neighbouring fields has given rise to a special type of exploratory experimentation that collects and analyses very large amounts of data. These new ‘omics’ disciplines are often said to represent a break with the ideal of hypothesis-driven science (Burian 2007; Elliott 2007; Waters 2007; O’Malley 2007) and are instead described as data-driven research (Leonelli 2012; Strasser 2012) or as a special kind of “convenience experimentation” in which many experiments are done simply because they are extraordinarily convenient to perform (Krohs 2012).

5.2 Computer methods and ‘new ways’ of doing science

The field of omics just described is possible because of the ability of computers to process, in a reasonable amount of time, the huge quantities of data required. Computers allow for more elaborate experimentation (higher speed, better filtering, more variables, sophisticated coordination and control), but also, through modelling and simulations, might constitute a form of experimentation themselves. Here, too, we can pose a version of the general question of method versus practice: does the practice of using computers fundamentally change scientific method, or merely provide a more efficient means of implementing standard methods?

Because computers can be used to automate measurements, quantifications, calculations, and statistical analyses where, for practical reasons, these operations cannot be otherwise carried out, many of the steps involved in reaching a conclusion on the basis of an experiment are now made inside a “black box”, without the direct involvement or awareness of a human. This has epistemological implications, regarding what we can know, and how we can know it. To have confidence in the results, computer methods are therefore subjected to tests of verification and validation.

The distinction between verification and validation is easiest to characterize in the case of computer simulations. In a typical computer simulation scenario computers are used to numerically integrate differential equations for which no analytic solution is available. The equations are part of the model the scientist uses to represent a phenomenon or system under investigation. Verifying a computer simulation means checking that the equations of the model are being correctly approximated. Validating a simulation means checking that the equations of the model are adequate for the inferences one wants to make on the basis of that model.
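To make the verification side of this distinction concrete, here is a minimal, hypothetical Python sketch (the decay equation, parameter values, and step counts are illustrative assumptions, not drawn from the literature discussed here). Verification can be carried out internally, by checking that the numerical scheme converges to a known analytic solution; validation, by contrast, asks whether the model's equation is adequate to the target phenomenon, which requires comparison with empirical data and cannot be established by the program alone.

```python
import math

# Model: exponential decay dy/dt = -k*y, with analytic solution
# y(t) = y0 * exp(-k*t). We "verify" a simple forward Euler integrator
# by checking that its output approaches this known solution as the
# step size shrinks.

def euler_decay(y0, k, t_end, n_steps):
    """Numerically integrate dy/dt = -k*y with the forward Euler method."""
    dt = t_end / n_steps
    y = y0
    for _ in range(n_steps):
        y += dt * (-k * y)
    return y

y0, k, t_end = 1.0, 0.5, 2.0          # illustrative parameter values
exact = y0 * math.exp(-k * t_end)     # the analytic solution at t_end

# Verification: refining the approximation should shrink the error,
# i.e. the equations of the model are being correctly approximated.
coarse_error = abs(euler_decay(y0, k, t_end, 10) - exact)
fine_error = abs(euler_decay(y0, k, t_end, 10_000) - exact)
assert fine_error < coarse_error

# Validation cannot be checked inside the program at all: it asks
# whether dy/dt = -k*y is an adequate model of the target system,
# which requires comparison with empirical measurements.
```

Note that this check is only possible because an analytic solution happens to exist for the test case; in the typical simulation scenario described above, verification instead relies on indirect means such as convergence tests and benchmark problems.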

A number of issues related to computer simulations have been raised. The identification of verification and validation as the testing methods has been criticized. Oreskes et al. (1994) raise concerns that “validation”, because it suggests deductive inference, might lead to over-confidence in the results of simulations. The distinction itself is probably too clean, since actual practice in the testing of simulations mixes and moves back and forth between the two (Weissart 1997; Parker 2008a; Winsberg 2010). Computer simulations do seem to have a non-inductive character, given that the principles by which they operate are built in by the programmers, and any results of the simulation follow from those in-built principles in such a way that those results could, in principle, be deduced from the program code and its inputs. The status of simulations as experiments has therefore been examined (Kaufmann and Smarr 1993; Humphreys 1995; Hughes 1999; Norton and Suppe 2001). This literature considers the epistemology of these experiments: what we can learn by simulation, and also the kinds of justifications which can be given in applying that knowledge to the “real” world (Mayo 1996; Parker 2008b). As pointed out, part of the advantage of computer simulation derives from the fact that huge numbers of calculations can be carried out without requiring direct observation by the experimenter/simulator. At the same time, many of these calculations are approximations to the calculations which would be performed first-hand in an ideal situation. Both factors introduce uncertainties into the inferences drawn from what is observed in the simulation.

For many of the reasons described above, computer simulations do not seem to belong clearly to either the experimental or the theoretical domain. Rather, they seem to crucially involve aspects of both. This has led some authors, such as Fox Keller (2003: 200), to argue that we ought to consider computer simulation a “qualitatively different way of doing science”. The literature in general tends to follow Kaufmann and Smarr (1993) in referring to computer simulation as a “third way” for scientific methodology (theoretical reasoning and experimental practice are the first two ways). It should also be noted that the debates around these issues have tended to focus on the form of computer simulation typical in the physical sciences, where models are based on dynamical equations. Other forms of simulation might not have the same problems, or have problems of their own (see the entry on computer simulations in science).

In recent years, the rapid development of machine learning techniques has prompted some scholars to suggest that the scientific method has become “obsolete” (Anderson 2008; Carrol and Goodstein 2009). This has resulted in an intense debate on the relative merits of data-driven and hypothesis-driven research (for samples, see e.g. Mazzocchi 2015 or Succi and Coveney 2018). For a detailed treatment of this topic, we refer to the entry on scientific research and big data.

6. Discourse on scientific method

Despite philosophical disagreements, the idea of the scientific method still figures prominently in contemporary discourse on many different topics, both within science and in society at large. Often, reference to scientific method is used in ways that either convey the legend of a single, universal method characteristic of all science, or grant a particular method or set of methods privileged status as a special ‘gold standard’, often with reference to particular philosophers to vindicate the claims. Discourse on scientific method also typically arises when there is a need to distinguish between science and other activities, or to justify the special status conveyed to science. In these areas, the philosophical attempts at identifying a set of methods characteristic of scientific endeavors are closely related to the philosophy of science’s classical problem of demarcation (see the entry on science and pseudo-science) and to the philosophical analysis of the social dimension of scientific knowledge and the role of science in democratic society.

One of the settings in which the legend of a single, universal scientific method has been particularly strong is science education (see, e.g., Bauer 1992; McComas 1996; Wivagg & Allchin 2002).[5] Often, ‘the scientific method’ is presented in textbooks and educational web pages as a fixed four- or five-step procedure, starting from observations and description of a phenomenon, progressing to the formulation of a hypothesis which explains the phenomenon, designing and conducting experiments to test the hypothesis, analyzing the results, and ending with drawing a conclusion. Such references to a universal scientific method can be found in educational material at all levels of science education (Blachowicz 2009), and numerous studies have shown that the idea of a general and universal scientific method often forms part of both students’ and teachers’ conception of science (see, e.g., Aikenhead 1987; Osborne et al. 2003). In response, it has been argued that science education needs to focus more on teaching about the nature of science, although views have differed on whether this is best done through student-led investigations, contemporary cases, or historical cases (Allchin, Andersen & Nielsen 2014).

Although occasionally phrased with reference to the H-D method, important historical roots of the legend in science education of a single, universal scientific method are the American philosopher and psychologist Dewey’s account of inquiry in How We Think (1910) and the British mathematician Karl Pearson’s account of science in Grammar of Science (1892). On Dewey’s account, inquiry is divided into the five steps of

(i) a felt difficulty, (ii) its location and definition, (iii) suggestion of a possible solution, (iv) development by reasoning of the bearing of the suggestions, (v) further observation and experiment leading to its acceptance or rejection. (Dewey 1910: 72)

Similarly, on Pearson’s account, scientific investigations start with measurement of data and observation of their correlation and sequence, from which scientific laws can be discovered with the aid of creative imagination. These laws have to be subject to criticism, and their final acceptance will have equal validity for “all normally constituted minds”. Both Dewey’s and Pearson’s accounts should be seen as generalized abstractions of inquiry and not restricted to the realm of science—although both Dewey and Pearson referred to their respective accounts as ‘the scientific method’.

Occasionally, scientists make sweeping statements about a simple and distinct scientific method, as exemplified by Feynman’s simplified version of a conjectures and refutations method presented, for example, in the last of his 1964 Cornell Messenger lectures.[6] However, just as often, scientists have come to the same conclusion as recent philosophy of science: that there is no unique, easily described scientific method. For example, the physicist and Nobel Laureate Weinberg described in the paper “The Methods of Science … And Those By Which We Live” (1995) how

The fact that the standards of scientific success shift with time does not only make the philosophy of science difficult; it also raises problems for the public understanding of science. We do not have a fixed scientific method to rally around and defend. (1995: 8)

Interview studies with scientists on their conception of method show that scientists often find it hard to figure out whether available evidence confirms their hypothesis, and that there are no direct translations between general ideas about method and specific strategies to guide how research is conducted (Schickore & Hangel 2019; Hangel & Schickore 2017).

Reference to the scientific method has also often been used to argue for the scientific nature or special status of a particular activity. Philosophical positions that argue for a simple and unique scientific method as a criterion of demarcation, such as Popperian falsification, have often attracted practitioners who felt that they had a need to defend their domain of practice. For example, references to conjectures and refutation as the scientific method are abundant in much of the literature on complementary and alternative medicine (CAM)—alongside the competing position that CAM, as an alternative to conventional biomedicine, needs to develop its own methodology different from that of science.

Also within mainstream science, reference to the scientific method is used in arguments regarding the internal hierarchy of disciplines and domains. A frequently seen argument is that research based on the H-D method is superior to research based on induction from observations because in deductive inferences the conclusion follows necessarily from the premises. (See, e.g., Parascandola 1998 for an analysis of how this argument has been made to downgrade epidemiology compared to the laboratory sciences.) Similarly, based on an examination of the practices of major funding institutions such as the National Institutes of Health (NIH), the National Science Foundation (NSF) and the Biotechnology and Biological Sciences Research Council (BBSRC) in the UK, O’Malley et al. (2009) have argued that funding agencies seem to have a tendency to adhere to the view that the primary activity of science is to test hypotheses, while descriptive and exploratory research is seen as merely preparatory activity that is valuable only insofar as it fuels hypothesis-driven research.

In some areas of science, scholarly publications are structured in a way that may convey the impression of a neat and linear process of inquiry, from stating a question, devising the methods by which to answer it, and collecting the data, to drawing a conclusion from the analysis of the data. For example, the codified format of publications in most biomedical journals known as the IMRAD format (Introduction, Methods, Results, And Discussion) is explicitly described by the journal editors as “not an arbitrary publication format but rather a direct reflection of the process of scientific discovery” (see the so-called “Vancouver Recommendations”, ICMJE 2013: 11). However, scientific publications do not in general reflect the process by which the reported scientific results were produced. For example, under the provocative title “Is the scientific paper a fraud?”, Medawar argued that scientific papers generally misrepresent how the results have been produced (Medawar 1963/1996). Similar views have been advanced by philosophers, historians and sociologists of science (Gilbert 1976; Holmes 1987; Knorr-Cetina 1981; Schickore 2008; Suppe 1998) who have argued that scientists’ experimental practices are messy and often do not follow any recognizable pattern. Publications of research results, they argue, are retrospective reconstructions of these activities that often do not preserve the temporal order or the logic of these activities, but are instead often constructed in order to screen off potential criticism (see Schickore 2008 for a review of this work).

Philosophical positions on the scientific method have also made it into the court room, especially in the US, where judges have drawn on philosophy of science in deciding when to confer special status to scientific expert testimony. A key case is Daubert v. Merrell Dow Pharmaceuticals (92–102, 509 U.S. 579, 1993). In this case, the Supreme Court argued in its 1993 ruling that trial judges must ensure that expert testimony is reliable, and that in doing this the court must look at the expert’s methodology to determine whether the proffered evidence is actually scientific knowledge. Further, referring to the works of Popper and Hempel, the court stated that

ordinarily, a key question to be answered in determining whether a theory or technique is scientific knowledge … is whether it can be (and has been) tested. (Justice Blackmun, Daubert v. Merrell Dow Pharmaceuticals; see Other Internet Resources for a link to the opinion)

But as argued by Haack (2005a,b, 2010) and by Foster & Huber (1999), by equating the question of whether a piece of testimony is reliable with the question of whether it is scientific as indicated by a special methodology, the court produced an inconsistent mixture of Popper’s and Hempel’s philosophies, and this has led to considerable confusion in subsequent case rulings that drew on the Daubert case (see Haack 2010 for a detailed exposition).

The difficulties around identifying the methods of science are also reflected in the difficulties of identifying scientific misconduct in the form of improper application of the method or methods of science. One of the first and most influential attempts at defining misconduct in science was the US definition from 1989 that defined misconduct as

fabrication, falsification, plagiarism, or other practices that seriously deviate from those that are commonly accepted within the scientific community. (Code of Federal Regulations, part 50, subpart A, August 8, 1989; italics added)

However, the “other practices that seriously deviate” clause was heavily criticized because it could be used to suppress creative or novel science. For example, the National Academy of Sciences stated in its report Responsible Science (1992) that it

wishes to discourage the possibility that a misconduct complaint could be lodged against scientists based solely on their use of novel or unorthodox research methods. (NAS: 27)

This clause was therefore later removed from the definition. For an entry into the key philosophical literature on conduct in science, see Shamoo & Resnik (2009).

The question of the source of the success of science has been at the core of philosophy since the beginning of modern science. If viewed as a matter of epistemology more generally, scientific method is a part of the entire history of philosophy. Over that time, science and whatever methods its practitioners may employ have changed dramatically. Today, many philosophers have taken up the banners of pluralism or of practice to focus on what are, in effect, fine-grained and contextually limited examinations of scientific method. Others hope to shift perspectives in order to provide a renewed general account of what characterizes the activity we call science.

One such perspective has been offered recently by Hoyningen-Huene (2008, 2013), who argues from the history of philosophy of science that after three lengthy phases of characterizing science by its method, we are now in a phase where the belief in the existence of a positive scientific method has eroded and what has been left to characterize science is only its fallibility. First was a phase from Plato and Aristotle up until the 17th century where the specificity of scientific knowledge was seen in its absolute certainty established by proof from evident axioms; next was a phase up to the mid-19th century in which the means to establish the certainty of scientific knowledge had been generalized to include inductive procedures as well. In the third phase, which lasted until the last decades of the 20th century, it was recognized that empirical knowledge was fallible, but it was still granted a special status due to its distinctive mode of production. But now in the fourth phase, according to Hoyningen-Huene, historical and philosophical studies have shown how “scientific methods with the characteristics as posited in the second and third phase do not exist” (2008: 168) and there is no longer any consensus among philosophers and historians of science about the nature of science. For Hoyningen-Huene, this is too negative a stance, and he therefore urges that the question about the nature of science be raised anew. His own answer to this question is that “scientific knowledge differs from other kinds of knowledge, especially everyday knowledge, primarily by being more systematic” (Hoyningen-Huene 2013: 14). Systematicity can have several different dimensions: among them are more systematic descriptions, explanations, predictions, defense of knowledge claims, epistemic connectedness, ideal of completeness, knowledge generation, representation of knowledge and critical discourse.
Hence, what characterizes science is the greater care in excluding possible alternative explanations, the more detailed elaboration with respect to data on which predictions are based, the greater care in detecting and eliminating sources of error, the more articulate connections to other pieces of knowledge, etc. On this position, what characterizes science is not that the methods employed are unique to science, but that the methods are more carefully employed.

Another, similar approach has been offered by Haack (2003). She starts, like Hoyningen-Huene, from a dissatisfaction with the recent clash between what she calls Old Deferentialism and New Cynicism. The Old Deferentialist position is that science progressed inductively by accumulating true theories confirmed by empirical evidence, or deductively by testing conjectures against basic statements; the New Cynics’ position is that science has no epistemic authority and no uniquely rational method and is merely politics. Haack insists that, contrary to the views of the New Cynics, there are objective epistemic standards, and there is something epistemologically special about science, even though the Old Deferentialists pictured this in the wrong way. Instead, she offers a new Critical Commonsensist account on which standards of good, strong, supportive evidence and well-conducted, honest, thorough and imaginative inquiry are not exclusive to the sciences, but are the standards by which we judge all inquirers. In this sense, science does not differ in kind from other kinds of inquiry, but it may differ in the degree to which it requires broad and detailed background knowledge and a familiarity with a technical vocabulary that only specialists may possess.

  • Aikenhead, G.S., 1987, “High-school graduates’ beliefs about science-technology-society. III. Characteristics and limitations of scientific knowledge”, Science Education , 71(4): 459–487.
  • Allchin, D., H.M. Andersen and K. Nielsen, 2014, “Complementary Approaches to Teaching Nature of Science: Integrating Student Inquiry, Historical Cases, and Contemporary Cases in Classroom Practice”, Science Education , 98: 461–486.
  • Anderson, C., 2008, “The end of theory: The data deluge makes the scientific method obsolete”, Wired magazine, 16(7): 16–07.
  • Arabatzis, T., 2006, “On the inextricability of the context of discovery and the context of justification”, in Revisiting Discovery and Justification , J. Schickore and F. Steinle (eds.), Dordrecht: Springer, pp. 215–230.
  • Barnes, J. (ed.), 1984, The Complete Works of Aristotle, Vols I and II , Princeton: Princeton University Press.
  • Barnes, B. and D. Bloor, 1982, “Relativism, Rationalism, and the Sociology of Knowledge”, in Rationality and Relativism , M. Hollis and S. Lukes (eds.), Cambridge: MIT Press, pp. 1–20.
  • Bauer, H.H., 1992, Scientific Literacy and the Myth of the Scientific Method , Urbana: University of Illinois Press.
  • Bechtel, W. and R.C. Richardson, 1993, Discovering complexity , Princeton, NJ: Princeton University Press.
  • Berkeley, G., 1734, The Analyst in De Motu and The Analyst: A Modern Edition with Introductions and Commentary , D. Jesseph (trans. and ed.), Dordrecht: Kluwer Academic Publishers, 1992.
  • Blachowicz, J., 2009, “How science textbooks treat scientific method: A philosopher’s perspective”, The British Journal for the Philosophy of Science , 60(2): 303–344.
  • Bloor, D., 1991, Knowledge and Social Imagery, Chicago: University of Chicago Press, 2nd edition.
  • Boyle, R., 1682, New experiments physico-mechanical, touching the air , Printed by Miles Flesher for Richard Davis, bookseller in Oxford.
  • Bridgman, P.W., 1927, The Logic of Modern Physics , New York: Macmillan.
  • –––, 1956, “The Methodological Character of Theoretical Concepts”, in The Foundations of Science and the Concepts of Science and Psychology , Herbert Feigl and Michael Scriven (eds.), Minnesota: University of Minneapolis Press, pp. 38–76.
  • Burian, R., 1997, “Exploratory Experimentation and the Role of Histochemical Techniques in the Work of Jean Brachet, 1938–1952”, History and Philosophy of the Life Sciences , 19(1): 27–45.
  • –––, 2007, “On microRNA and the need for exploratory experimentation in post-genomic molecular biology”, History and Philosophy of the Life Sciences , 29(3): 285–311.
  • Carnap, R., 1928, Der logische Aufbau der Welt , Berlin: Bernary, transl. by R.A. George, The Logical Structure of the World , Berkeley: University of California Press, 1967.
  • –––, 1956, “The methodological character of theoretical concepts”, Minnesota studies in the philosophy of science , 1: 38–76.
  • Carrol, S., and D. Goodstein, 2009, “Defining the scientific method”, Nature Methods , 6: 237.
  • Churchman, C.W., 1948, “Science, Pragmatics, Induction”, Philosophy of Science , 15(3): 249–268.
  • Cooper, J. (ed.), 1997, Plato: Complete Works , Indianapolis: Hackett.
  • Darden, L., 1991, Theory Change in Science: Strategies from Mendelian Genetics, Oxford: Oxford University Press.
  • Dewey, J., 1910, How we think , New York: Dover Publications (reprinted 1997).
  • Douglas, H., 2009, Science, Policy, and the Value-Free Ideal , Pittsburgh: University of Pittsburgh Press.
  • Dupré, J., 2004, “Miracle of Monism”, in Naturalism in Question, Mario De Caro and David Macarthur (eds.), Cambridge, MA: Harvard University Press, pp. 36–58.
  • Elliott, K.C., 2007, “Varieties of exploratory experimentation in nanotoxicology”, History and Philosophy of the Life Sciences , 29(3): 311–334.
  • Elliott, K. C., and T. Richards (eds.), 2017, Exploring inductive risk: Case studies of values in science , Oxford: Oxford University Press.
  • Falcon, Andrea, 2005, Aristotle and the science of nature: Unity without uniformity , Cambridge: Cambridge University Press.
  • Feyerabend, P., 1978, Science in a Free Society, London: New Left Books.
  • –––, 1988, Against Method, London: Verso, 2nd edition.
  • Fisher, R.A., 1955, “Statistical Methods and Scientific Induction”, Journal of The Royal Statistical Society. Series B (Methodological) , 17(1): 69–78.
  • Foster, K. and P.W. Huber, 1999, Judging Science. Scientific Knowledge and the Federal Courts , Cambridge: MIT Press.
  • Fox Keller, E., 2003, “Models, Simulation, and ‘computer experiments’”, in The Philosophy of Scientific Experimentation , H. Radder (ed.), Pittsburgh: Pittsburgh University Press, 198–215.
  • Gilbert, G., 1976, “The transformation of research findings into scientific knowledge”, Social Studies of Science , 6: 281–306.
  • Gimbel, S., 2011, Exploring the Scientific Method , Chicago: University of Chicago Press.
  • Goodman, N., 1965, Fact, Fiction, and Forecast, Indianapolis: Bobbs-Merrill.
  • Haack, S., 1995, “Science is neither sacred nor a confidence trick”, Foundations of Science , 1(3): 323–335.
  • –––, 2003, Defending science—within reason , Amherst: Prometheus.
  • –––, 2005a, “Disentangling Daubert: an epistemological study in theory and practice”, Journal of Philosophy, Science and Law , 5, Haack 2005a available online . doi:10.5840/jpsl2005513
  • –––, 2005b, “Trial and error: The Supreme Court’s philosophy of science”, American Journal of Public Health , 95: S66-S73.
  • –––, 2010, “Federal Philosophy of Science: A Deconstruction-and a Reconstruction”, NYUJL & Liberty , 5: 394.
  • Hangel, N. and J. Schickore, 2017, “Scientists’ conceptions of good research practice”, Perspectives on Science, 25(6): 766–791.
  • Harper, W.L., 2011, Isaac Newton’s Scientific Method: Turning Data into Evidence about Gravity and Cosmology , Oxford: Oxford University Press.
  • Hempel, C., 1950, “Problems and Changes in the Empiricist Criterion of Meaning”, Revue Internationale de Philosophie , 41(11): 41–63.
  • –––, 1951, “The Concept of Cognitive Significance: A Reconsideration”, Proceedings of the American Academy of Arts and Sciences , 80(1): 61–77.
  • –––, 1965, Aspects of scientific explanation and other essays in the philosophy of science , New York–London: Free Press.
  • –––, 1966, Philosophy of Natural Science , Englewood Cliffs: Prentice-Hall.
  • Holmes, F.L., 1987, “Scientific writing and scientific discovery”, Isis , 78(2): 220–235.
  • Howard, D., 2003, “Two left turns make a right: On the curious political career of North American philosophy of science at midcentury”, in Logical Empiricism in North America , G.L. Hardcastle & A.W. Richardson (eds.), Minneapolis: University of Minnesota Press, pp. 25–93.
  • Hoyningen-Huene, P., 2008, “Systematicity: The nature of science”, Philosophia , 36(2): 167–180.
  • –––, 2013, Systematicity. The Nature of Science , Oxford: Oxford University Press.
  • Howie, D., 2002, Interpreting probability: Controversies and developments in the early twentieth century , Cambridge: Cambridge University Press.
  • Hughes, R., 1999, “The Ising Model, Computer Simulation, and Universal Physics”, in Models as Mediators , M. Morgan and M. Morrison (eds.), Cambridge: Cambridge University Press, pp. 97–145
  • Hume, D., 1739, A Treatise of Human Nature , D. Fate Norton and M.J. Norton (eds.), Oxford: Oxford University Press, 2000.
  • Humphreys, P., 1995, “Computational science and scientific method”, Minds and Machines , 5(1): 499–512.
  • ICMJE, 2013, “Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals”, International Committee of Medical Journal Editors, available online, accessed August 13, 2014.
  • Jeffrey, R.C., 1956, “Valuation and Acceptance of Scientific Hypotheses”, Philosophy of Science , 23(3): 237–246.
  • Kaufmann, W.J., and L.L. Smarr, 1993, Supercomputing and the Transformation of Science , New York: Scientific American Library.
  • Knorr-Cetina, K., 1981, The Manufacture of Knowledge , Oxford: Pergamon Press.
  • Krohs, U., 2012, “Convenience experimentation”, Studies in History and Philosophy of Biological and Biomedical Sciences, 43: 52–57.
  • Kuhn, T.S., 1962, The Structure of Scientific Revolutions, Chicago: University of Chicago Press.
  • Latour, B. and S. Woolgar, 1986, Laboratory Life: The Construction of Scientific Facts, Princeton: Princeton University Press, 2nd edition.
  • Laudan, L., 1968, “Theories of scientific method from Plato to Mach”, History of Science , 7(1): 1–63.
  • Lenhard, J., 2006, “Models and statistical inference: The controversy between Fisher and Neyman-Pearson”, The British Journal for the Philosophy of Science , 57(1): 69–91.
  • Leonelli, S., 2012, “Making Sense of Data-Driven Research in the Biological and the Biomedical Sciences”, Studies in the History and Philosophy of the Biological and Biomedical Sciences , 43(1): 1–3.
  • Levi, I., 1960, “Must the scientist make value judgments?”, Journal of Philosophy, 57(11): 345–357.
  • Lindley, D., 1991, Theory Change in Science: Strategies from Mendelian Genetics , Oxford: Oxford University Press.
  • Lipton, P., 2004, Inference to the Best Explanation, London: Routledge, 2nd edition.
  • Marks, H.M., 2000, The progress of experiment: science and therapeutic reform in the United States, 1900–1990 , Cambridge: Cambridge University Press.
  • Mazzocchi, F., 2015, “Could Big Data be the end of theory in science?”, EMBO reports, 16: 1250–1255.
  • Mayo, D.G., 1996, Error and the Growth of Experimental Knowledge , Chicago: University of Chicago Press.
  • McComas, W.F., 1996, “Ten myths of science: Reexamining what we think we know about the nature of science”, School Science and Mathematics , 96(1): 10–16.
  • Medawar, P.B., 1963/1996, “Is the scientific paper a fraud”, in The Strange Case of the Spotted Mouse and Other Classic Essays on Science , Oxford: Oxford University Press, 33–39.
  • Mill, J.S., 1963, Collected Works of John Stuart Mill, J.M. Robson (ed.), Toronto: University of Toronto Press.
  • NAS, 1992, Responsible Science: Ensuring the integrity of the research process , Washington DC: National Academy Press.
  • Nersessian, N.J., 1987, “A cognitive-historical approach to meaning in scientific theories”, in The process of science , N. Nersessian (ed.), Berlin: Springer, pp. 161–177.
  • –––, 2008, Creating Scientific Concepts , Cambridge: MIT Press.
  • Newton, I., 1726, Philosophiae naturalis Principia Mathematica (3rd edition), in The Principia: Mathematical Principles of Natural Philosophy: A New Translation, I.B. Cohen and A. Whitman (trans.), Berkeley: University of California Press, 1999.
  • –––, 1704, Opticks or A Treatise of the Reflections, Refractions, Inflections & Colors of Light , New York: Dover Publications, 1952.
  • Neyman, J., 1956, “Note on an Article by Sir Ronald Fisher”, Journal of the Royal Statistical Society. Series B (Methodological) , 18: 288–294.
  • Nickles, T., 1987, “Methodology, heuristics, and rationality”, in Rational changes in science: Essays on Scientific Reasoning , J.C. Pitt (ed.), Berlin: Springer, pp. 103–132.
  • Nicod, J., 1924, Le problème logique de l’induction , Paris: Alcan. (Engl. transl. “The Logical Problem of Induction”, in Foundations of Geometry and Induction , London: Routledge, 2000.)
  • Nola, R. and H. Sankey, 2000a, “A selective survey of theories of scientific method”, in Nola and Sankey 2000b: 1–65.
  • –––, 2000b, After Popper, Kuhn and Feyerabend. Recent Issues in Theories of Scientific Method , London: Springer.
  • –––, 2007, Theories of Scientific Method , Stocksfield: Acumen.
  • Norton, S., and F. Suppe, 2001, “Why atmospheric modeling is good science”, in Changing the Atmosphere: Expert Knowledge and Environmental Governance , C. Miller and P. Edwards (eds.), Cambridge, MA: MIT Press, 88–133.
  • O’Malley, M., 2007, “Exploratory experimentation and scientific practice: Metagenomics and the proteorhodopsin case”, History and Philosophy of the Life Sciences , 29(3): 337–360.
  • O’Malley, M., C. Haufe, K. Elliot, and R. Burian, 2009, “Philosophies of Funding”, Cell , 138: 611–615.
  • Oreskes, N., K. Shrader-Frechette, and K. Belitz, 1994, “Verification, Validation and Confirmation of Numerical Models in the Earth Sciences”, Science , 263(5147): 641–646.
  • Osborne, J., S. Simon, and S. Collins, 2003, “Attitudes towards science: a review of the literature and its implications”, International Journal of Science Education , 25(9): 1049–1079.
  • Parascandola, M., 1998, “Epidemiology—2 nd -Rate Science”, Public Health Reports , 113(4): 312–320.
  • Parker, W., 2008a, “Franklin, Holmes and the Epistemology of Computer Simulation”, International Studies in the Philosophy of Science , 22(2): 165–83.
  • –––, 2008b, “Computer Simulation through an Error-Statistical Lens”, Synthese , 163(3): 371–84.
  • Pearson, K. 1892, The Grammar of Science , London: J.M. Dents and Sons, 1951
  • Pearson, E.S., 1955, “Statistical Concepts in Their Relation to Reality”, Journal of the Royal Statistical Society , B, 17: 204–207.
  • Pickering, A., 1984, Constructing Quarks: A Sociological History of Particle Physics , Edinburgh: Edinburgh University Press.
  • Popper, K.R., 1959, The Logic of Scientific Discovery , London: Routledge, 2002
  • –––, 1963, Conjectures and Refutations , London: Routledge, 2002.
  • –––, 1985, Unended Quest: An Intellectual Autobiography , La Salle: Open Court Publishing Co..
  • Rudner, R., 1953, “The Scientist Qua Scientist Making Value Judgments”, Philosophy of Science , 20(1): 1–6.
  • Rudolph, J.L., 2005, “Epistemology for the masses: The origin of ‘The Scientific Method’ in American Schools”, History of Education Quarterly , 45(3): 341–376
  • Schickore, J., 2008, “Doing science, writing science”, Philosophy of Science , 75: 323–343.
  • Schickore, J. and N. Hangel, 2019, “‘It might be this, it should be that…’ uncertainty and doubt in day-to-day science practice”, European Journal for Philosophy of Science , 9(2): 31. doi:10.1007/s13194-019-0253-9
  • Shamoo, A.E. and D.B. Resnik, 2009, Responsible Conduct of Research , Oxford: Oxford University Press.
  • Shank, J.B., 2008, The Newton Wars and the Beginning of the French Enlightenment , Chicago: The University of Chicago Press.
  • Shapin, S. and S. Schaffer, 1985, Leviathan and the air-pump , Princeton: Princeton University Press.
  • Smith, G.E., 2002, “The Methodology of the Principia”, in The Cambridge Companion to Newton , I.B. Cohen and G.E. Smith (eds.), Cambridge: Cambridge University Press, 138–173.
  • Snyder, L.J., 1997a, “Discoverers’ Induction”, Philosophy of Science , 64: 580–604.
  • –––, 1997b, “The Mill-Whewell Debate: Much Ado About Induction”, Perspectives on Science , 5: 159–198.
  • –––, 1999, “Renovating the Novum Organum: Bacon, Whewell and Induction”, Studies in History and Philosophy of Science , 30: 531–557.
  • Sober, E., 2008, Evidence and Evolution. The logic behind the science , Cambridge: Cambridge University Press
  • Sprenger, J. and S. Hartmann, 2019, Bayesian philosophy of science , Oxford: Oxford University Press.
  • Steinle, F., 1997, “Entering New Fields: Exploratory Uses of Experimentation”, Philosophy of Science (Proceedings), 64: S65–S74.
  • –––, 2002, “Experiments in History and Philosophy of Science”, Perspectives on Science , 10(4): 408–432.
  • Strasser, B.J., 2012, “Data-driven sciences: From wonder cabinets to electronic databases”, Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences , 43(1): 85–87.
  • Succi, S. and P.V. Coveney, 2018, “Big data: the end of the scientific method?”, Philosophical Transactions of the Royal Society A , 377: 20180145. doi:10.1098/rsta.2018.0145
  • Suppe, F., 1998, “The Structure of a Scientific Paper”, Philosophy of Science , 65(3): 381–405.
  • Swijtink, Z.G., 1987, “The objectification of observation: Measurement and statistical methods in the nineteenth century”, in The probabilistic revolution. Ideas in History, Vol. 1 , L. Kruger (ed.), Cambridge MA: MIT Press, pp. 261–285.
  • Waters, C.K., 2007, “The nature and context of exploratory experimentation: An introduction to three case studies of exploratory research”, History and Philosophy of the Life Sciences , 29(3): 275–284.
  • Weinberg, S., 1995, “The methods of science… and those by which we live”, Academic Questions , 8(2): 7–13.
  • Weissert, T., 1997, The Genesis of Simulation in Dynamics: Pursuing the Fermi-Pasta-Ulam Problem , New York: Springer Verlag.
  • William H., 1628, Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus , in On the Motion of the Heart and Blood in Animals , R. Willis (trans.), Buffalo: Prometheus Books, 1993.
  • Winsberg, E., 2010, Science in the Age of Computer Simulation , Chicago: University of Chicago Press.
  • Wivagg, D. & D. Allchin, 2002, “The Dogma of the Scientific Method”, The American Biology Teacher , 64(9): 645–646

Copyright © 2021 by Brian Hepburn <brian.hepburn@wichita.edu> and Hanne Andersen <hanne.andersen@ind.ku.dk>


The Need for Critical Thinking and the Scientific Method


The book exposes many of the misunderstandings about the scientific method and its application to critical thinking. It argues for a better understanding of the scientific method and for nurturing critical thinking in the community. This knowledge helps readers analyze issues more objectively and warns of the dangers of bias and propaganda. The principles are illustrated through several currently debated issues, including anthropogenic global warming (often loosely referred to as climate change), threats to the preservation of the Great Barrier Reef, the expansion of the gluten-free food market, and genetic engineering.

TABLE OF CONTENTS

  • Chapter One (11 pages): Introduction
  • Chapter Two (8 pages): The Scientific Method
  • Chapter Three (12 pages): How the Lack of Scientific Input Impacts Research Organizations
  • Chapter Four (19 pages): How Could This Have Happened?
  • Chapter Five (12 pages): How the Media Influences Public Thinking
  • Chapter Six (14 pages): Dangers to Progress in Science
  • Chapter Seven (18 pages): Applying Scientific Thinking to Some Current Controversies
  • Chapter Eight (11 pages): Implementing Scientific Thinking and Critical Analysis
  • Chapter Nine (17 pages): Bringing It Together
  • Chapter Ten (4 pages): Where Will the Future Take Us?



What Are the Scientific Method Steps?

Explore with a well-organized and curious approach.


The scientific method not only teaches students how to conduct experiments, but it also enables them to think critically about processes that extend beyond science and into all aspects of their academic lives. Just like detectives, scientists, and explorers, students can use this scientific method structured-steps approach to explore, question, and discover. 

What is the scientific method?


The scientific method is like a structured adventure for exploring the world that encourages discovery by finding answers and solving puzzles. With the scientific method steps, students get to ask questions, observe, make educated guesses (called hypotheses), run experiments, collect and organize data, draw sensible conclusions, and share what they’ve learned. Students can explore the natural world with a well-organized and curious approach. 

The scientific method steps can vary by name, but the process as a whole is the same across grade levels. There are as many as seven steps, but sometimes they are combined. Below are six steps that make the process accessible to younger learners.

1. Question

Encourage students to ask why, what, when, where, or how about a particular phenomenon or topic. Get them wondering about something that they find interesting or have a passion for. 

2. Research

Teach them to use their senses to gather information and take notes about what they observe: what they are seeing, hearing, and so on.

3. Hypothesize

Based on observations, students will then make a hypothesis, which is an educated guess—it’s what they think will happen in an experiment. 

4. Experiment

To test their hypothesis, students can conduct an investigation or experiment and collect data. Data collection can involve charts, graphs, and observations.

5. Analyze

Students can then look at the results of their experiment and interpret what they mean in the grand scheme of their original question. From the data collected, students can apply their new knowledge to their original question.

6. Communicate

Just like real scientists, students can communicate their findings to their classmates in a presentation, a lab write-up, or many other ways.
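The six steps form a cycle that repeats until the data answers the question. As a purely illustrative sketch (every function name here is invented for this example, not taken from the article), the cycle might look like this in Python:

```python
# Illustrative sketch of the scientific method as a loop.
# All names here are hypothetical, invented for this example.

def scientific_method(question, form_hypothesis, run_experiment, supports):
    """Hypothesize, experiment, and analyze until the data supports a hypothesis."""
    findings = []                                         # observations gathered so far
    hypothesis = form_hypothesis(question, findings)      # 3. hypothesize
    while True:
        data = run_experiment(hypothesis)                 # 4. experiment
        findings.append((hypothesis, data))               # 5. analyze: keep the record
        if supports(hypothesis, data):
            return hypothesis, findings                   # 6. communicate the findings
        hypothesis = form_hypothesis(question, findings)  # revise and try again

# Toy usage: which guessed value, when squared, exceeds 40?
guesses = iter([2, 5, 7])
answer, history = scientific_method(
    question="which value works?",
    form_hypothesis=lambda q, f: next(guesses),
    run_experiment=lambda h: h * h,
    supports=lambda h, data: data > 40,
)
```

Here `answer` ends up as 7 after two refuted guesses, mirroring how students revise a hypothesis when an experiment contradicts it.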


The scientific method fosters critical thinking in students by promoting curiosity, observation, hypothesis formation, problem-solving, data analysis, logical reasoning, and effective communication. It isn't just about experiments; this structured approach equips students with vital skills for science and everyday life while encouraging open-mindedness, adaptability, and reflective thinking. By using the scientific method, students begin a lifelong habit of questioning, learning, and solving problems.

Even students as young as kindergarten can begin learning and exploring the scientific method steps. Plus, the scientific method is used all the way through high school and beyond, so it’s not a one-and-done skill. If you’re looking for hands-on ways for students to practice the scientific method, we compiled science experiments, labs, and demonstrations for elementary through middle school teachers to share with their students:

  • Kindergarten Experiments and Projects
  • 1st Grade Experiments and Projects
  • 2nd Grade Experiments and Projects
  • 3rd Grade Experiments and Projects
  • 4th Grade Experiments and Projects
  • 5th Grade Experiments and Projects
  • 6th Grade Experiments and Projects
  • 7th Grade Experiments and Projects
  • 8th Grade Experiments and Projects
  • High School Experiments and Projects


scientific method

scientific method, mathematical and experimental technique employed in the sciences. More specifically, it is the technique used in the construction and testing of a scientific hypothesis.

The process of observing, asking questions, and seeking answers through tests and experiments is not unique to any one field of science. In fact, the scientific method is applied broadly in science, across many different fields. Many empirical sciences, especially the social sciences, use mathematical tools borrowed from probability theory and statistics, together with outgrowths of these, such as decision theory, game theory, utility theory, and operations research. Philosophers of science have addressed general methodological problems, such as the nature of scientific explanation and the justification of induction.


The scientific method is critical to the development of scientific theories, which explain empirical (experiential) laws in a scientifically rational manner. In a typical application of the scientific method, a researcher develops a hypothesis, tests it through various means, and then modifies the hypothesis on the basis of the outcome of the tests and experiments. The modified hypothesis is then retested, further modified, and tested again, until it becomes consistent with observed phenomena and testing outcomes. In this way, hypotheses serve as tools by which scientists gather data. From that data and the many different scientific investigations undertaken to explore hypotheses, scientists are able to develop broad general explanations, or scientific theories.

See also Mill’s methods ; hypothetico-deductive method .

What Is the Scientific Method?


The scientific method is a systematic way of conducting experiments or studies so that you can explore the things you observe in the world and answer questions about them. The scientific method, also known as the hypothetico-deductive method, is a series of steps that can help you accurately describe the things you observe or improve your understanding of them.

Ultimately, your goal when you use the scientific method is to:

  • Find a cause-and-effect relationship by asking a question about something you observed
  • Collect as much evidence as you can about what you observed, as this can help you explore the connection between your evidence and what you observed
  • Determine if all your evidence can be combined to answer your question in a way that makes sense

Francis Bacon and René Descartes are usually credited with formalizing the process in the 16th and 17th centuries. Both philosophers argued that research should not be guided by preset metaphysical ideas about how reality works. Bacon championed inductive reasoning, building general hypotheses from accumulated observations, while Descartes emphasized deductive reasoning from carefully established first principles.

Scientific Method Steps

The scientific method is a step-by-step problem-solving process. These steps include:

Observe the world around you. This will help you come up with a topic you are interested in and want to learn more about. In many cases, you already have a topic in mind because you have a related question for which you couldn't find an immediate answer.

Either way, you'll start the process by finding out what people before you already know about the topic, as well as any questions that people are still asking about. You may need to look up and read books and articles from academic journals or talk to other people so that you understand as much as you possibly can about your topic. This will help you with your next step.

Ask questions. Asking questions about what you observed and learned from reading and talking to others can help you figure out what the "problem" is. Scientists try to ask questions that are both interesting and specific and can be answered with the help of a fairly easy experiment or series of experiments. Your question should have one part (called a variable) that you can change in your experiment and another variable that you can measure. Your goal is to design an experiment that is a "fair test," which is when all the conditions in the experiment are kept the same except for the one you change (called the experimental or independent variable).
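A "fair test" can be sketched in code: one independent variable changes across trials while every control stays fixed. This is a hypothetical illustration only; the plant-growth model and all names below are made up for this example.

```python
# Hypothetical "fair test" sketch: vary one independent variable,
# hold all other conditions (controls) constant, measure the dependent variable.

def fair_test(levels, run_trial, controls):
    """Run one trial per level of the independent variable, same controls each time."""
    return {level: run_trial(level, **controls) for level in levels}

# Made-up growth model: height depends on fertilizer, water, and sunlight.
def plant_height(fertilizer_g, water_ml, sunlight_h):
    return 1.0 + 0.5 * fertilizer_g + 0.01 * water_ml + 0.2 * sunlight_h

heights = fair_test(
    levels=[0, 5, 10],                            # the one variable we change
    run_trial=plant_height,
    controls={"water_ml": 100, "sunlight_h": 6},  # held constant in every trial
)
```

Because water and sunlight are identical in every trial, any difference in `heights` can be attributed to fertilizer alone.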

Form a hypothesis and make predictions based on it.  A hypothesis is an educated guess about the relationship between two or more variables in your question. A good hypothesis lets you predict what will happen when you test it in an experiment. Another important feature of a good hypothesis is that, if the hypothesis is wrong, you should be able to show that it's wrong. This is called falsifiability. If your experiment shows that your prediction is true, then your hypothesis is supported by your data.

Test your prediction by doing an experiment or making more observations.  The way you test your prediction depends on what you are studying. The best support comes from an experiment, but in some cases it's too hard or impossible to change the variables in an experiment. Sometimes you may need to do descriptive research, gathering more observations instead of experimenting. You will carefully record notes and measurements during your experiments or studies so that you can share them with other people interested in the same question. Ideally, you will also repeat your experiment a few more times, because a single result can occur by chance, but it is much less likely that the same result will recur by chance.
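The point about repetition can also be sketched as code: run the same trial several times and count how often the prediction holds, since a single success can be a fluke. Everything here (the trial, the 90% success rate, the fixed seed) is invented for illustration.

```python
import random

# Hypothetical replication sketch: repeat one trial n times and tally successes.

def replicate(trial, n=5):
    """Run the same trial n times; return (successes, total)."""
    outcomes = [trial() for _ in range(n)]
    return sum(outcomes), n

random.seed(0)  # fixed seed so the sketch is reproducible
# Made-up trial whose prediction holds about 90% of the time.
hits, total = replicate(lambda: random.random() < 0.9, n=5)
```

One supportive outcome is weak evidence; five supportive repetitions are much harder to explain away as chance.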

Draw a conclusion. You will analyze what you already know about your topic from your literature research and the data gathered during your experiment. This will help you decide if the conclusion you draw from your data supports or contradicts your hypothesis. If your results contradict your hypothesis, you can use this observation to form a new hypothesis and make a new prediction. This is why scientific research is ongoing and scientific knowledge is changing all the time. It's very common for scientists to get results that don't support their hypotheses. In fact, you sometimes learn more about the world when your experiments don't support your hypotheses because it leads you to ask more questions. And this time around, you already know that one possible explanation is likely wrong.

Use your results to guide your next steps (iterate). For instance, if your hypothesis is supported, you may do more experiments to confirm it. Or you could come up with a hypothesis about why it works this way and design an experiment to test that. If your hypothesis is not supported, you can come up with another hypothesis and do experiments to test it. You'll rarely get the right hypothesis in one go. Most of the time, you'll have to go back to the hypothesis stage and try again. Every attempt offers you important information that helps you improve your next round of questions, hypotheses, and predictions.

Share your results. Science is a collective effort. You may be able to run an experiment or a series of experiments on your own, but you can't come up with all the ideas or do all the experiments by yourself.

Scientists and researchers usually share information by publishing it in a scientific journal or by presenting it to their colleagues during meetings and scientific conferences. These journals are read and the conferences are attended by other researchers who are interested in the same questions. If there's anything wrong with your hypothesis, prediction, experiment design, or conclusion, other researchers will likely find it and point it out to you.

It can be scary, but it's a critical part of doing scientific research. You must let your research be examined by other researchers who are as interested and knowledgeable about your question as you. This process helps other researchers by pointing out hypotheses that have been proved wrong and why they are wrong. It helps you by identifying flaws in your thinking or experiment design. And if you don't share what you've learned and let other people ask questions about it, it's not helpful to your or anyone else's understanding of what happens in the world.

Scientific Method Example

Here's an everyday example of how you can apply the scientific method to understand more about your world so you can solve your problems in a helpful way.

Let's say you put slices of bread in your toaster and press the button, but nothing happens. Your toaster isn't working, but you can't afford to buy a new one right now. You might be able to rescue it from the trash can if you can figure out what's wrong with it. So, let's figure out what's wrong with your toaster.

Observation. Your toaster isn't working to toast your bread.

Ask a question. In this case, you're asking, "Why isn't my toaster working?" You could even do a bit of preliminary research by looking in the owner's manual for your toaster. The manufacturer has likely tested your toaster model under many conditions, and they may have some ideas for where to start with your hypothesis.

Form a hypothesis and make predictions based on it. Your hypothesis should be a potential explanation or answer to the question, one you can test to see if it's correct. One possible explanation we could test is that the power outlet is broken. Our prediction: if the outlet is broken, then plugging the toaster into a different outlet should make it work again.

Test your prediction by doing an experiment or making more observations. You plug the toaster into a different outlet and try to toast your bread.

If that works, then your hypothesis is supported by your experimental data. Results that support your hypothesis don't prove it right; they simply suggest that it's a likely explanation. This uncertainty arises because, in the real world, we can't rule out the possibility of mistakes, wrong assumptions, or weird coincidences affecting the results. If the toaster doesn’t work even after plugging it into a different outlet, then your hypothesis is not supported and it's likely the wrong explanation.

Use your results to guide your next steps (iteration). If your toaster worked, you may decide to do further tests to confirm your hypothesis or refine it. For example, you could plug something else that you know is working into the first outlet to see if it stops working too. That would be further confirmation that your hypothesis is correct.

If your toaster failed to toast when plugged into the second outlet, you need a new hypothesis. For example, your next hypothesis might be that the toaster has a shorted wire. You could test this hypothesis directly if you have the right equipment and training, or you could take it to a repair shop where they could test that hypothesis for you.

Share your results. For this everyday example, you probably wouldn't want to write a paper, but you could share your problem-solving efforts with your housemates or anyone you hire to repair your outlet or help you test if the toaster has a short circuit.
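The toaster walkthrough is essentially sequential hypothesis testing, which can be sketched as a short function. The fault model and all names below are hypothetical, not from the article:

```python
# Hypothetical sketch of the toaster diagnosis: test hypotheses one at a time,
# keeping the first one whose suggested fix actually restores the toaster.

def diagnose(toaster_works_after, hypotheses):
    """Return the first hypothesis whose fix works, or None if all are refuted."""
    for hypothesis, fix in hypotheses:
        if toaster_works_after(fix):
            return hypothesis     # supported: a likely (not proven) explanation
    return None                   # every hypothesis refuted; form new ones

# Toy model where the real fault is a shorted wire, so only rewiring helps.
works_after = lambda fix: fix == "repair the wiring"

culprit = diagnose(works_after, [
    ("broken outlet", "plug into a different outlet"),  # refuted first
    ("shorted wire", "repair the wiring"),              # supported second
])
```

As in the article, a supported hypothesis is only a likely explanation; ruling out the outlet is what justified moving on to the wiring.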

What the Scientific Method Is Used For

The scientific method is useful whenever you need to reason logically about your questions and gather evidence to support your problem-solving efforts. So, you can use it in everyday life to answer many of your questions; however, when most people think of the scientific method, they likely think of using it to:

Describe how nature works. It can be hard to accurately describe how nature works because it's almost impossible to account for every variable that's involved in a natural process. Researchers may not even know about many of the variables that are involved. In some cases, all you can do is make assumptions. But you can use the scientific method to logically disprove wrong assumptions by identifying flaws in the reasoning.

Do scientific research in a laboratory to develop things such as new medicines.

Develop critical thinking skills.  Using the scientific method may help you develop critical thinking in your daily life because you learn to systematically ask questions and gather evidence to find answers. Without logical reasoning, you might be more likely to have a distorted perspective or bias. Bias is the inclination we all have to favor one perspective (usually our own) over another.

The scientific method doesn't perfectly solve the problem of bias, but it does make it harder for an entire field to be biased in the same direction. That's because it's unlikely that all the people working in a field have the same biases. It also helps make the biases of individuals more obvious because if you repeatedly misinterpret information in the same way in multiple experiments or over a period, the other people working on the same question will notice. If you don't correct your bias when others point it out to you, you'll lose your credibility. Other people might then stop believing what you have to say.

Why Is the Scientific Method Important?

When you use the scientific method, your goal is to do research in a fair, unbiased, and repeatable way. The scientific method helps meet these goals because:

It's a systematic approach to problem-solving. It can help you figure out where you're going wrong in your thinking and research if you're not getting helpful answers to your questions. Helpful answers solve problems and keep you moving forward. So, a systematic approach helps you improve your problem-solving abilities if you get stuck.

It can help you solve your problems.  The scientific method helps you isolate problems by focusing on what's important. In addition, it can help you make your solutions better every time you go through the process.

It helps you eliminate (or become aware of) your personal biases.  It can help you limit the influence of your own personal, preconceived notions. A big part of the process is considering what other people already know and think about your question. It also involves sharing what you've learned and letting other people ask about your methods and conclusions. At the end of the process, even if you still think your answer is best, you have considered what other people know and think about the question.

The scientific method is a systematic way of conducting experiments or studies so that you can explore the world around you and answer questions using reason and evidence. It's a step-by-step problem-solving process that involves: (1) observation, (2) asking questions, (3) forming hypotheses and making predictions, (4) testing your hypotheses through experiments or more observations, (5) using what you learned through experiment or observation to guide further investigation, and (6) sharing your results.


Critical Thinking, the Scientific Method

* Note: it is page 43 in the 6th edition

Dany S. Adams, Department of Biology, Smith College, Northampton, MA 01063



…Sidelight in Gilbert's Developmental Biology (2000, Sinauer Associates); that is, I harp on correlation, necessity, and sufficiency, and the kinds of experiments required to gather each type of evidence. In my own class, an upper-division Developmental Biology lecture course, I use these techniques, which include both verbal and written reinforcement, to encourage students to evaluate claims about cause and effect, that is, to distinguish between correlation and causation. However, I believe that, with very slight modifications, these tricks can be applied in a much greater array of situations.
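The correlation-versus-causation distinction stressed above is easy to demonstrate numerically. In this hypothetical Python sketch (the data and variable names are invented for illustration), a hidden factor z drives both x and y, so the two correlate strongly even though neither causes the other:

```python
# Illustration of "correlation is not causation": a hidden common cause z
# drives both x and y; x and y correlate even though neither causes the other.
import random

random.seed(0)
z = [random.gauss(0, 1) for _ in range(1000)]    # hidden common cause
x = [zi + random.gauss(0, 0.3) for zi in z]      # x depends only on z
y = [zi + random.gauss(0, 0.3) for zi in z]      # y depends only on z

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

r = pearson(x, y)
print(f"correlation(x, y) = {r:.2f}")  # strong, yet x does not cause y
```

An intervention experiment (manipulating x directly and watching y) is what separates the two cases; correlation alone cannot.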


I am impressed over and over again by the improvement in my students' ability to UNDERSTAND the primary literature, to ASSESS the validity of claims, and to THINK critically about how to answer questions.



"…for one of my other classes and am reading this book on the microbes. I came across this paragraph, part of which I have to share with you!! It talks about how 'the intimin of … was shown to be NECESSARY BUT NOT SUFFICIENT to induce lesions.' I just thought it was so cool that I am reading this highly scientific book and can make sense of concepts that would have been so foreign to me not all that long ago!!"



…warning the students that they will be asked to think about the experimental basis of knowledge. I read this out loud during the first class. Difference: it takes an extra two minutes.

Every time a technique is mentioned in class, we pull out the toolbox and write notes about the technique in the appropriate box. Difference: by the end of the semester, the students have been introduced to, and thought about how to use, an impressive number of techniques, and they UNDERSTAND the power and the limitations of those techniques. On a very practical level, they end up with a list of techniques and controls they can consult in the future.

Difference: students actually UNDERSTAND controls.

…always worth 50%, that asks the students to make a hypothesis about an unfamiliar observation, then design experiments to test the hypothesis:

|         | Technique           | Positive control                                          | Negative control               |
|---------|---------------------|-----------------------------------------------------------|--------------------------------|
| Ion     |                     |                                                           |                                |
| DNA     |                     |                                                           |                                |
| RNA     |                     |                                                           |                                |
| Protein | Immunocytochemistry | Western blot w/ pure protein; stain known positive cells  | Pre-immune serum; 2nd Ab only  |
| Cell    |                     |                                                           |                                |
| Tissue  |                     |                                                           |                                |

|         | Technique           | Positive control                                          | Negative control               |
|---------|---------------------|-----------------------------------------------------------|--------------------------------|
| Ion     |                     |                                                           |                                |
| DNA     |                     |                                                           |                                |
| RNA     |                     |                                                           |                                |
| Protein |                     |                                                           |                                |
| Cell    |                     |                                                           |                                |
| Tissue  | Remove tissue       | Stain for marker; histology                               | Remove, then return            |

|         | Technique                                      | Positive control                          | Negative control           |
|---------|------------------------------------------------|-------------------------------------------|----------------------------|
| Ion     |                                                |                                           |                            |
| DNA     | Transfect gene (w/ inducible promoter & reporter) | Look for reporter; northern &/or western | Transfect with neutral DNA |
| RNA     |                                                |                                           |                            |
| Protein |                                                |                                           |                            |
| Cell    |                                                |                                           |                            |
| Tissue  |                                                |                                           |                            |
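The positive and negative controls in the toolbox tables exist so that a result can be interpreted at all: if the positive control fails, the assay could not have detected anything; if the negative control lights up, any signal is suspect. A minimal, hypothetical Python sketch of that logic (the function, the signal values, and the threshold are invented for illustration):

```python
# Hypothetical sketch of control-based interpretation: an experimental
# readout counts only when the positive control responds and the negative
# control does not. Signal units and threshold are arbitrary, made-up values.

def interpret(sample_signal, positive_ctrl, negative_ctrl, threshold=1.0):
    """Return 'detected'/'not detected', or 'invalid' if a control fails."""
    if positive_ctrl <= threshold:    # e.g., pure-protein western gave no band
        return "invalid: positive control failed"
    if negative_ctrl > threshold:     # e.g., 2nd-antibody-only stain lit up
        return "invalid: negative control failed"
    return "detected" if sample_signal > threshold else "not detected"

print(interpret(2.5, positive_ctrl=3.0, negative_ctrl=0.2))  # detected
print(interpret(2.5, positive_ctrl=0.5, negative_ctrl=0.2))  # invalid: positive control failed
```

The same pattern applies to each filled table row: the controls bracket the technique so that both "signal" and "no signal" outcomes are meaningful.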

Posted on the SDB Web Site Monday, July 26, 1999, Modified Wednesday, December 27, 2000


