What is: Experimental Error


Experimental error refers to the difference between the measured value and the true value of a quantity in scientific experiments. It is an inherent aspect of any experimental process, arising from various sources such as measurement inaccuracies, environmental factors, and limitations in the experimental design. Understanding experimental error is crucial for data analysis and interpretation in fields like statistics, data science, and research.


Types of Experimental Error

There are two primary types of experimental error: systematic error and random error. Systematic errors are consistent and repeatable inaccuracies that occur due to flaws in the measurement system or experimental setup. In contrast, random errors are unpredictable fluctuations that can arise from various sources, including human error, environmental changes, or limitations in measurement tools. Both types of errors can significantly impact the reliability of experimental results.

Systematic Error Explained

Systematic error can lead to biased results, as it consistently skews measurements in a particular direction. This type of error can often be identified and corrected through calibration of instruments or adjustments in the experimental procedure. For instance, if a scale consistently reads 0.5 grams too high, all measurements taken with that scale will be systematically biased. Recognizing and mitigating systematic errors is essential for achieving accurate and reliable data.
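Correcting a known systematic bias, as in the scale example above, is a simple arithmetic shift. A minimal sketch in Python (the function name and readings are hypothetical):

```python
def correct_systematic_bias(readings, bias):
    """Remove a known constant (systematic) bias from each reading.

    bias is the amount by which the instrument over-reads; e.g. a scale
    that consistently reads 0.5 grams too high has bias = 0.5.
    """
    return [r - bias for r in readings]

# Hypothetical readings (grams) from a scale known to read 0.5 g high:
raw = [10.5, 12.5, 10.0]
corrected = correct_systematic_bias(raw, 0.5)
print(corrected)  # [10.0, 12.0, 9.5]
```

Note that this only works once the bias has been identified, typically by calibration against a known standard.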

Random Error Explained

Random error, on the other hand, is characterized by its unpredictable nature. It can arise from various factors, such as fluctuations in environmental conditions, variations in the measurement process, or even human error during data collection. Unlike systematic errors, random errors can be reduced by increasing the number of observations or measurements, as the average of a large number of trials tends to converge on the true value. Understanding random error is vital for statistical analysis and hypothesis testing.
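The claim that averaging many trials reduces the effect of random error can be checked with a quick simulation. The noise model, seed, and values below are illustrative assumptions, not part of the text:

```python
import random

def mean(xs):
    return sum(xs) / len(xs)

def simulate_measurements(true_value, noise, n, rng):
    """Simulate n measurements corrupted by additive random error."""
    return [true_value + rng.uniform(-noise, noise) for _ in range(n)]

rng = random.Random(42)  # fixed seed for reproducibility
true_value = 9.81
many = simulate_measurements(true_value, 0.5, 5000, rng)

# The average of a large number of trials converges on the true value:
print(abs(mean(many) - true_value))  # small compared to the 0.5 spread
```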

Impact of Experimental Error on Data Analysis

Experimental error can significantly affect the outcomes of data analysis and the conclusions drawn from experimental results. When errors are not accounted for, they can lead to incorrect interpretations and potentially flawed decisions based on the data. Researchers must employ statistical methods to quantify and minimize the impact of experimental error, ensuring that their findings are robust and reliable.

Quantifying Experimental Error

Quantifying experimental error involves calculating the uncertainty associated with measurements. This can be done using various statistical techniques, such as calculating the standard deviation, confidence intervals, and error propagation. These methods help researchers understand the degree of uncertainty in their measurements and provide a framework for making informed decisions based on the data collected.
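The techniques just mentioned can be sketched in Python; the data are hypothetical, and the 1.96 multiplier assumes a normal distribution and a not-too-small sample (a t-multiplier would be more appropriate for five points):

```python
import math

def sample_mean_and_std(xs):
    n = len(xs)
    m = sum(xs) / n
    # Sample standard deviation: N - 1 in the denominator
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
    return m, s

def confidence_interval_95(xs):
    """Approximate 95% confidence interval for the mean (normal-based)."""
    m, s = sample_mean_and_std(xs)
    half_width = 1.96 * s / math.sqrt(len(xs))
    return m - half_width, m + half_width

data = [5.1, 5.3, 4.9, 5.2, 5.0]  # hypothetical repeated measurements
m, s = sample_mean_and_std(data)
lo, hi = confidence_interval_95(data)
```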

Reducing Experimental Error

To enhance the accuracy of experimental results, researchers can implement several strategies to reduce experimental error. These include improving measurement techniques, using high-quality instruments, standardizing procedures, and conducting repeated trials. By systematically addressing potential sources of error, researchers can improve the reliability of their findings and contribute to the overall integrity of scientific research.

Role of Experimental Error in Scientific Research

Experimental error plays a critical role in scientific research, as it influences the validity and reliability of experimental findings. Acknowledging and addressing experimental error is essential for maintaining the integrity of scientific inquiry. Researchers must be transparent about the limitations of their studies and the potential sources of error, allowing for a more accurate interpretation of results and fostering trust in the scientific community.

Conclusion on Experimental Error

In summary, understanding experimental error is fundamental for anyone involved in statistics, data analysis, and data science. By recognizing the types of errors, quantifying their impact, and implementing strategies to minimize them, researchers can enhance the accuracy and reliability of their experimental results. This knowledge is crucial for making informed decisions based on data and advancing scientific knowledge.


How to Calculate Experimental Error in Chemistry


Author credentials: Ph.D., Biomedical Sciences, University of Tennessee at Knoxville; B.A., Physics and Mathematics, Hastings College

Error is a measure of the accuracy of the values in your experiment. It is important to be able to calculate experimental error, but there is more than one way to calculate and express it. Here are the most common ways to calculate experimental error:

Error Formula

In general, error is the difference between an accepted or theoretical value and an experimental value.

Error = Experimental Value - Known Value

Relative Error Formula

Relative Error = Error / Known Value

Percent Error Formula

% Error = Relative Error x 100%

Example Error Calculations

Let's say a researcher measures the mass of a sample to be 5.51 grams. The actual mass of the sample is known to be 5.80 grams. Calculate the error of the measurement.

Experimental Value = 5.51 grams
Known Value = 5.80 grams

Error = Experimental Value - Known Value
Error = 5.51 g - 5.80 g
Error = -0.29 g

Relative Error = Error / Known Value
Relative Error = -0.29 g / 5.80 g
Relative Error = -0.050

% Error = Relative Error x 100%
% Error = -0.050 x 100%
% Error = -5.0%
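These calculations translate directly into code. A minimal sketch reproducing the worked example:

```python
def experimental_error(experimental, known):
    """Return (error, relative error, percent error) for a measurement."""
    error = experimental - known
    relative = error / known
    percent = relative * 100
    return error, relative, percent

err, rel, pct = experimental_error(5.51, 5.80)
# err ≈ -0.29, rel ≈ -0.050, pct ≈ -5.0, matching the worked example
```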

Chapter 3

Experimental Errors and Error Analysis

This chapter is largely a tutorial on handling experimental errors of measurement. Much of the material has been extensively tested with science undergraduates at a variety of levels at the University of Toronto.

Whole books can and have been written on this topic but here we distill the topic down to the essentials. Nonetheless, our experience is that for beginners an iterative approach to this material works best. This means that the users first scan the material in this chapter; then try to use the material on their own experiment; then go over the material again; then ...

The software accompanying this chapter provides functions to ease the calculations required by propagation of errors, and those functions are introduced in Section 3.3. These error propagation functions are summarized in Section 3.5.

3.1 Introduction

3.1.1 The Purpose of Error Analysis

For students who only attend lectures and read textbooks in the sciences, it is easy to get the incorrect impression that the physical sciences are concerned with manipulating precise and perfect numbers. Lectures and textbooks often contain phrases like:

For an experimental scientist this specification is incomplete. Does it mean that the acceleration is closer to 9.8 than to 9.9 or 9.7? Does it mean that the acceleration is closer to 9.80000 than to 9.80001 or 9.79999? Often the answer depends on the context. If a carpenter says a length is "just 8 inches" that probably means the length is closer to 8 0/16 in. than to 8 1/16 in. or 7 15/16 in. If a machinist says a length is "just 200 millimeters" that probably means it is closer to 200.00 mm than to 200.05 mm or 199.95 mm.

We all know that the acceleration due to gravity varies from place to place on the earth's surface. It also varies with the height above the surface, and gravity meters capable of measuring the variation from the floor to a tabletop are readily available. Further, any physical quantity such as g can only be determined by means of an experiment, and since a perfect experimental apparatus does not exist, it is impossible even in principle to ever know g perfectly. Thus, the specification of g given above is useful only as a possible exercise for a student. In order to give it some meaning it must be changed to something like:

Two questions arise about the measurement. First, is it "accurate," in other words, did the experiment work properly and were all the necessary factors taken into account? The answer to this depends on the skill of the experimenter in identifying and eliminating all systematic errors. These are discussed in Section 3.4.

The second question regards the "precision" of the experiment. In this case the precision of the result is given: the experimenter claims the precision of the result is within 0.03 m/s². Several points about such error analysis are worth making:

1. The person who did the measurement probably had some "gut feeling" for the precision and "hung" an error on the result primarily to communicate this feeling to other people. Common sense should always take precedence over mathematical manipulations.

2. In complicated experiments, error analysis can identify dominant errors and hence provide a guide as to where more effort is needed to improve an experiment.

3. There is virtually no case in the experimental physical sciences where the correct error analysis is to compare the result with a number in some book. A correct experiment is one that is performed correctly, not one that gives a result in agreement with other measurements.

4. The best precision possible for a given experiment is always limited by the apparatus. Polarization measurements in high-energy physics require tens of thousands of person-hours and cost hundreds of thousands of dollars to perform, and a good measurement is within a factor of two. Electrodynamics experiments are considerably cheaper, and often give results to 8 or more significant figures. In both cases, the experimenter must struggle with the equipment to get the most precise and accurate measurement possible.

3.1.2 Different Types of Errors

As mentioned above, there are two types of errors associated with an experimental result: the "precision" and the "accuracy". One well-known text explains the difference this way:

" " E.M. Pugh and G.H. Winslow, p. 6.

The object of a good experiment is to minimize both the errors of precision and the errors of accuracy.

Usually, a given experiment has one or the other type of error dominant, and the experimenter devotes the most effort toward reducing that one. For example, in measuring the height of a sample of geraniums to determine an average value, the random variations within the sample of plants are probably going to be much larger than any possible inaccuracy in the ruler being used. Similarly for many experiments in the biological and life sciences, the experimenter worries most about increasing the precision of his/her measurements. Of course, some experiments in the biological and life sciences are dominated by errors of accuracy.

On the other hand, in titrating a sample of HCl acid with NaOH base using a phenolphthalein indicator, the major error in the determination of the original concentration of the acid is likely to be one of the following: (1) the accuracy of the markings on the side of the burette; (2) the transition range of the phenolphthalein indicator; or (3) the skill of the experimenter in splitting the last drop of NaOH. Thus, the accuracy of the determination is likely to be much worse than the precision. This is often the case for experiments in chemistry, but certainly not all.

Question: Most experiments use theoretical formulas, and usually those formulas are approximations. Is the error of approximation one of precision or of accuracy?

3.1.3 References

There is extensive literature on the topics in this chapter. The following lists some well-known introductions.

D.C. Baird, Experimentation: An Introduction to Measurement Theory and Experiment Design (Prentice-Hall, 1962)

E.M. Pugh and G.H. Winslow, The Analysis of Physical Measurements (Addison-Wesley, 1966)

J.R. Taylor, An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements (University Science Books, 1982)

In addition, there is a web document written by the author of this chapter that is used to teach this topic to first-year Physics undergraduates at the University of Toronto. The following hyperlink points to that document.

3.2 Determining the Precision

3.2.1 The Standard Deviation

In the nineteenth century, Gauss' assistants were doing astronomical measurements. However, they were never able to exactly repeat their results. Finally, Gauss got angry and stormed into the lab, claiming he would show these people how to do the measurements once and for all. The only problem was that Gauss wasn't able to repeat his measurements exactly either!

After he recovered his composure, Gauss made a histogram of the results of a particular measurement and discovered the famous Gaussian or bell-shaped curve.

Many people's first introduction to this shape is the grade distribution for a course. Here is a sample of such a distribution.

We use a standard package to generate a probability distribution function (PDF) for such a "Gaussian" or "normal" distribution. The mean is chosen to be 78 and the standard deviation is chosen to be 10; both the mean and standard deviation are defined below.

We then normalize the distribution so the maximum value is close to the maximum number in the histogram and plot the result.

In this graph, the position of the peak is set by the mean and its width by the standard deviation.

Finally, we look at the histogram and plot together.

We can see the functional form of the Gaussian distribution by giving symbolic values.

In this formula, x̄ is the mean and σ is the standard deviation. The definition of σ is as follows:

σ = sqrt( Σ (x_i - x̄)² / N )

Here N is the total number of measurements and x_i is the result of measurement number i.

The standard deviation is a measure of the width of the peak, meaning that a larger value gives a wider peak.

If we look at the area under the curve within one standard deviation of the mean, we find that this area is 68 percent of the total area. Thus, any result chosen at random has a 68% chance of being within one standard deviation of the mean. We can show this by evaluating the integral. For convenience, we choose the mean to be zero.
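The 68% figure can also be reproduced without symbolic integration, since the integral of the Gaussian is the error function. A sketch in Python:

```python
import math

def prob_within_sigmas(k):
    """Probability that a normally distributed result lies within
    k standard deviations of the mean: erf(k / sqrt(2))."""
    return math.erf(k / math.sqrt(2))

print(round(100 * prob_within_sigmas(1), 1))  # 68.3
print(round(100 * prob_within_sigmas(2), 1))  # 95.4
```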

Now, we numericalize this and multiply by 100 to find the percent.

The only problem with the above is that the measurement must be repeated an infinite number of times before the standard deviation can be determined exactly. If N is less than infinity, one can only estimate σ. For a finite set of N measurements, this is the best estimate:

s = sqrt( Σ (x_i - x̄)² / (N - 1) )

The major difference between this estimate and the definition is the N - 1 in the denominator. This is reasonable since if N = 1 we know we can't determine σ from a single measurement.

Here is an example. Suppose we are to determine the diameter of a small cylinder using a micrometer. We repeat the measurement 10 times along various points on the cylinder and get the following results, in centimeters.

The number of measurements is the length of the list.

The average or mean is now calculated.

Then the standard deviation is estimated to be 0.00185173.
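The two forms of the standard deviation (N in the denominator for the definition, N - 1 for the estimate) can be compared directly. The readings below are hypothetical stand-ins, not the chapter's actual data:

```python
import math

def std_population(xs):
    """Definition form: divide by N."""
    n = len(xs)
    m = sum(xs) / n
    return math.sqrt(sum((x - m) ** 2 for x in xs) / n)

def std_estimate(xs):
    """Best estimate from a finite sample: divide by N - 1."""
    n = len(xs)
    m = sum(xs) / n
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))

# Hypothetical micrometer readings in cm:
diameters = [1.6515, 1.6516, 1.6500, 1.6513, 1.6540,
             1.6541, 1.6512, 1.6511, 1.6518, 1.6522]
print(std_estimate(diameters))  # slightly larger than std_population
```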

We repeat the calculation in a functional style.

Note that the statistics package, which is standard with Mathematica, includes functions to calculate all of these quantities and a great deal more.

We close with two points:

1. The standard deviation has been associated with the error in each individual measurement. Section 3.3.2 discusses how to find the error in the estimate of the average.

2. This calculation of the standard deviation is only an estimate. In fact, we can find the expected error in the estimate itself.

As discussed in more detail in Section 3.3, this means that the true standard deviation probably lies within a range of values around the estimate.

Viewed in this way, it is clear that the last few digits in the numbers above are not significant; the package's significant-figure adjustment function trims these figures based on the error. That function is discussed further in Section 3.3.1.

3.2.2 The Reading Error

There is another type of error associated with a directly measured quantity, called the "reading error". Referring again to the example of Section 3.2.1, the measurements of the diameter were performed with a micrometer. The particular micrometer used had scale divisions every 0.001 cm. However, it was possible to estimate the reading of the micrometer between the divisions, and this was done in this example. But, there is a reading error associated with this estimation. For example, the first data point is 1.6515 cm. Could it have been 1.6516 cm instead? How about 1.6519 cm? There is no fixed rule to answer the question: the person doing the measurement must guess how well he or she can read the instrument. A reasonable guess of the reading error of this micrometer might be 0.0002 cm on a good day. If the experimenter were up late the night before, the reading error might be 0.0005 cm.

An important and sometimes difficult question is whether the reading error of an instrument is "distributed randomly". Random reading errors are caused by the finite precision of the experiment. If an experimenter consistently reads the micrometer 1 cm lower than the actual value, then the reading error is not random.

For a digital instrument, the reading error is ± one-half of the last digit. Note that this assumes that the instrument has been properly engineered to round a reading correctly on the display.

3.2.3 "THE" Error

So far, we have found two different errors associated with a directly measured quantity: the standard deviation and the reading error. So, which one is the actual real error of precision in the quantity? The answer is both! However, fortunately it almost always turns out that one will be larger than the other, so the smaller of the two can be ignored.

In the diameter example being used in this section, the estimate of the standard deviation was found to be 0.00185 cm, while the reading error was only 0.0002 cm. Thus, we can use the standard deviation estimate to characterize the error in each measurement. Another way of saying the same thing is that the observed spread of values in this example is not accounted for by the reading error. If the observed spread were more or less accounted for by the reading error, it would not be necessary to estimate the standard deviation, since the reading error would be the error in each measurement.

Of course, everything in this section is related to the precision of the experiment. Discussion of the accuracy of the experiment is in Section 3.4.

3.2.4 Rejection of Measurements

Often when repeating measurements one value appears to be spurious and we would like to throw it out. Also, when taking a series of measurements, sometimes one value appears "out of line". Here we discuss some guidelines on rejection of measurements; further information appears in Chapter 7.

It is important to emphasize that the whole topic of rejection of measurements is awkward. Some scientists feel that the rejection of data is never justified unless there is evidence that the data in question is incorrect. Other scientists attempt to deal with this topic by using quasi-objective rules such as Chauvenet's Criterion. Still others, often incorrectly, throw out any data that appear to be incorrect. In this section, some principles and guidelines are presented; further information may be found in many references.

First, we note that it is incorrect to expect each and every measurement to overlap within errors. For example, if the error in a particular quantity is characterized by the standard deviation, we only expect 68% of the measurements from a normally distributed population to be within one standard deviation of the mean. Ninety-five percent of the measurements will be within two standard deviations, 99.7% within three standard deviations, etc., but we never expect 100% of the measurements to overlap within any finite-sized error for a truly Gaussian distribution.

Of course, for most experiments the assumption of a Gaussian distribution is only an approximation.

If the error in each measurement is taken to be the reading error, again we only expect most, not all, of the measurements to overlap within errors. In this case the meaning of "most", however, is vague and depends on the optimism/conservatism of the experimenter who assigned the error.

Thus, it is always dangerous to throw out a measurement. Maybe we are unlucky enough to make a valid measurement that lies ten standard deviations from the population mean. A valid measurement from the tails of the underlying distribution should not be thrown out. It is even more dangerous to throw out a suspect point indicative of an underlying physical process. Very little science would be known today if the experimenter always threw out measurements that didn't match preconceived expectations!

In general, there are two different types of experimental data taken in a laboratory and the question of rejecting measurements is handled in slightly different ways for each. The two types of data are the following:

1. A series of measurements taken with one or more variables changed for each data point. An example is the calibration of a thermocouple, in which the output voltage is measured when the thermocouple is at a number of different temperatures.

2. Repeated measurements of the same physical quantity, with all variables held as constant as experimentally possible. An example is the measurement of the height of a sample of geraniums grown under identical conditions from the same batch of seed stock.

For a series of measurements (case 1), when one of the data points is out of line the natural tendency is to throw it out. But, as already mentioned, this means you are assuming the result you are attempting to measure. As a rule of thumb, unless there is a physical explanation of why the suspect value is spurious and it is no more than three standard deviations away from the expected value, it should probably be kept. Chapter 7 deals further with this case.

For repeated measurements (case 2), the situation is a little different. Say you are measuring the time for a pendulum to undergo 20 oscillations and you repeat the measurement five times. Assume that four of these trials are within 0.1 seconds of each other, but the fifth trial differs from these by 1.4 seconds (i.e., more than three standard deviations away from the mean of the "good" values). There is no known reason why that one measurement differs from all the others. Nonetheless, you may be justified in throwing it out. Say that, unknown to you, just as that measurement was being taken, a gravity wave swept through your region of spacetime. However, if you are trying to measure the period of the pendulum when there are no gravity waves affecting the measurement, then throwing out that one result is reasonable. (Although trying to repeat the measurement to find the existence of gravity waves will certainly be more fun!) So whatever the reason for a suspect value, the rule of thumb is that it may be thrown out provided that fact is well documented and that the measurement is repeated a number of times more to convince the experimenter that he/she is not throwing out an important piece of data indicating a new physical process.

3.3 Propagation of Errors of Precision

3.3.1 Discussion and Examples

Usually, errors of precision are probabilistic. This means that the experimenter is saying that the actual value of some parameter is within a specified range. For example, if the half-width of the range equals one standard deviation, then the probability is about 68% that over repeated experimentation the true mean will fall within the range; if the half-width of the range is twice the standard deviation, the probability is 95%, etc.

If we have two variables, say and , and want to combine them to form a new variable, we want the error in the combination to preserve this probability.

The correct procedure to do this is to combine errors in quadrature, which is the square root of the sum of the squares. The package supplies functions for this.

For simple combinations of data with random errors, the correct procedure can be summarized in three rules. Δx, Δy, and Δz will stand for the errors of precision in x, y, and z, respectively. We assume that x and y are independent of each other.

Note that all three rules assume that the error, say Δx, is small compared to the value of x.

If

z = x * y

or

z = x / y

then

Δz / z = sqrt( (Δx / x)² + (Δy / y)² )

In words, the fractional error in z is the quadrature of the fractional errors in x and y.

If

z = x + y

or

z = x - y

then

Δz = sqrt( (Δx)² + (Δy)² )

In words, the error in z is the quadrature of the errors in x and y.

If

z = x^n

then

Δz / z = n (Δx / x)

or equivalently

Δz = n x^(n-1) Δx
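The three rules are easy to sketch in Python (the function names are ours, not the package's):

```python
import math

def err_product(x, dx, y, dy):
    """Rule 1: z = x * y (or x / y): fractional errors combine in quadrature."""
    z = x * y
    dz = abs(z) * math.hypot(dx / x, dy / y)
    return z, dz

def err_sum(x, dx, y, dy):
    """Rule 2: z = x + y (or x - y): absolute errors combine in quadrature."""
    return x + y, math.hypot(dx, dy)

def err_power(x, dx, n):
    """Rule 3: z = x**n: the fractional error in z is n times that in x."""
    z = x ** n
    return z, abs(z) * abs(n) * dx / abs(x)

print(err_sum(10.0, 0.3, 5.0, 0.4))  # z = 15.0, dz = sqrt(0.09 + 0.16) = 0.5
```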

The package includes functions to combine data using the above rules.

Imagine we have pressure data, measured in centimeters of Hg, and volume data measured in arbitrary units. Each data point consists of a {value, error} pair.

We calculate the pressure times the volume.

In the above, the values of the pressure and volume have been multiplied and the errors have been combined using Rule 1.

There is an equivalent form for this calculation.

Consider the first of the volume data: {11.28156820762763, 0.031}. The error means that the true value is claimed by the experimenter to probably lie between 11.25 and 11.31. Thus, all the significant figures presented to the right of 11.28 for that data point really aren't significant. The function will adjust the volume data.

Notice that, by default, the adjustment uses the two most significant digits in the error for adjusting the values. This can be controlled with an option.

For most cases, the default of two digits is reasonable. As discussed in Section 3.2.1, if we assume a normal distribution for the data, then the fractional error in the determination of the standard deviation depends on the number of measurements N and can be written as follows:

Δσ / σ ≈ 1 / sqrt( 2 (N - 1) )

Thus, using this as a general rule of thumb for all errors of precision, the estimate of the error is only good to about 10% (i.e., one significant figure), unless N is greater than 51. Nonetheless, keeping two significant figures handles cases such as 0.035 vs. 0.030, where some significance may be attached to the final digit.
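The rule of thumb can be checked numerically; a sketch, assuming the fractional error in an estimated standard deviation is approximately 1 / sqrt(2 (N - 1)):

```python
import math

def sigma_fractional_error(n):
    """Approximate fractional error in an estimated standard deviation
    based on n measurements of normally distributed data."""
    return 1.0 / math.sqrt(2 * (n - 1))

print(sigma_fractional_error(51))  # 0.1: ten percent at N = 51
print(round(sigma_fractional_error(10), 3))  # ~0.236 for a 10-point sample
```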

You should be aware that when a datum is massaged in this way, the extra digits are dropped.

By default, these and the other functions adjust the significant figures of their output; this behavior is controlled using an option.

The number of digits can be adjusted.

To form a power, say a square, we might be tempted simply to multiply the quantity by itself. The errors in the two factors, however, are completely correlated rather than independent, so the rules above do not apply; the package supplies a dedicated power function for this case.

Finally, imagine that for some reason we wish to form a general combination of the data, z = f(x, y). Rather than repeated applications of the rules above, the error can be obtained directly from the total derivative:

Δz = sqrt( (∂z/∂x)² (Δx)² + (∂z/∂y)² (Δy)² )

Here is an example using the pressure and volume data. We shall use separate symbols below to avoid overwriting the symbols used for the data. First we calculate the total derivative.

Next we form the error.

Now we can evaluate using the pressure and volume data to get a list of errors.

Next we form the list of pairs.

The function combines these steps with default significant figure adjustment.

The function can be used in place of the other functions discussed above.

In this example, the function will be somewhat faster.

There is a caveat in using this function. The expression must contain only symbols, numerical constants, and arithmetic operations. Otherwise, the function will be unable to take the derivatives of the expression necessary to calculate the form of the error. The other functions have no such limitation.
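The derivative-based procedure just described can be sketched numerically; here the partial derivatives are estimated by central differences rather than symbolically (the function name and data values are illustrative):

```python
import math

def propagate(f, values, errors, h=1e-6):
    """Quadrature error propagation for z = f(x1, ..., xn):
    dz = sqrt(sum_i (df/dx_i * dx_i)^2),
    with partial derivatives estimated by central differences."""
    z = f(*values)
    total = 0.0
    for i, (v, dv) in enumerate(zip(values, errors)):
        step = h * max(abs(v), 1.0)
        up = list(values); up[i] = v + step
        dn = list(values); dn[i] = v - step
        deriv = (f(*up) - f(*dn)) / (2 * step)
        total += (deriv * dv) ** 2
    return z, math.sqrt(total)

# z = p * v with one hypothetical pressure-volume pair:
z, dz = propagate(lambda p, v: p * v, [760.0, 11.28], [2.0, 0.031])
```

For a product, this reproduces Rule 1, since ∂(pv)/∂p = v and ∂(pv)/∂v = p.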

3.3.1.1 Another Approach to Error Propagation: The Data and Datum Constructs

A Data object wraps a list of {value, error} pairs, and a Datum wraps a single such pair.

Data[{{789.7, 2.2}, {790.8, 2.3}, {791.2, 2.3}, {792.6, 2.4}, {791.8, 2.5},
{792.2, 2.5}, {794.7, 2.6}, {794., 2.6}, {794.4, 2.7}, {795.3, 2.8},
{796.4, 2.8}}]

The wrapper can be removed.

{{789.7, 2.2}, {790.8, 2.3}, {791.2, 2.3}, {792.6, 2.4}, {791.8, 2.5},
{792.2, 2.5}, {794.7, 2.6}, {794., 2.6}, {794.4, 2.7}, {795.3, 2.8}, {796.4, 2.8}}

The reason why the output of the previous two commands has been formatted this way is that the pairs are typeset using ± for output.

A similar construct can be used with individual data points.

Datum[{70, 0.04}]

Just as for Data, the typesetting of Datum uses ±.

The Data and Datum constructs provide "automatic" error propagation for multiplication, division, addition, subtraction, and raising to a power. Another advantage of these constructs is that the built-in rules know how to combine data with constants.

The rules also know how to propagate errors for many transcendental functions.

This rule assumes that the error is small relative to the value, so we can approximate.
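For a single-argument function f, this small-error approximation is Δz ≈ |f′(x)| Δx. A sketch for sin (our own helper, not the package's rule):

```python
import math

def sin_with_error(x, dx):
    """Propagate a small error through sin(x): dz ≈ |cos(x)| * dx."""
    return math.sin(x), abs(math.cos(x)) * dx

z, dz = sin_with_error(0.5, 0.01)
# dz is about 0.0088, smaller than dx because |cos(0.5)| < 1
```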

For a Datum or Data argument with value x and error δx, the propagated error in f(x) is therefore given approximately by |f′(x)| δx.

We have seen that the Data and Datum constructs are typeset using ±. The ± operator (PlusMinus) can also be used directly, and provided its arguments are numeric, errors will be propagated.

One may typeset the ± into the input expression, and errors will again be propagated.

The ± input mechanism can combine terms by addition, subtraction, multiplication, division, raising to a power, and addition to or multiplication by a constant number. The rules used for ± apply only to numeric arguments.

This restriction makes the ± mechanism different from the Data and Datum constructs.
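The "automatic" propagation rules described above can be imitated with a small amount of code. The following Python class is an illustrative sketch, not the EDA implementation: it stores a value ± error pair and applies the standard quadrature rules for sums, differences, products, quotients, powers, and combination with constants, assuming independent errors that are small relative to the values.

```python
import math

class PM:
    """A minimal value ± error pair with quadrature error propagation."""

    def __init__(self, value, error=0.0):
        self.value, self.error = value, abs(error)

    def __repr__(self):
        return f"{self.value} ± {self.error}"

    @staticmethod
    def _wrap(x):
        # Plain numbers are treated as exact constants (zero error).
        return x if isinstance(x, PM) else PM(x)

    def __add__(self, other):
        o = PM._wrap(other)
        return PM(self.value + o.value, math.hypot(self.error, o.error))
    __radd__ = __add__

    def __sub__(self, other):
        o = PM._wrap(other)
        return PM(self.value - o.value, math.hypot(self.error, o.error))

    def __mul__(self, other):
        o = PM._wrap(other)
        # relative errors add in quadrature for a product
        return PM(self.value * o.value,
                  math.hypot(self.error * o.value, o.error * self.value))
    __rmul__ = __mul__

    def __truediv__(self, other):
        o = PM._wrap(other)
        v = self.value / o.value
        return PM(v, math.hypot(self.error / o.value, o.error * v / o.value))

    def __pow__(self, n):
        # assumes a plain numeric exponent n
        return PM(self.value ** n,
                  abs(n * self.value ** (n - 1)) * self.error)
```

For example, PM(70, 0.04) * 2 gives 140 ± 0.08, mirroring how the built-in rules combine data with constants.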

3.3.1.2 Why Quadrature?

Here we give arguments to justify combining errors in quadrature. Although they are not proofs in the usual pristine mathematical sense, they are correct and can be made rigorous if desired.

First, you may already know about the "Random Walk" problem, in which a player starts at the point x = 0 and at each move steps either forward (toward +x) or backward (toward −x). The choice of direction is made randomly for each move by, say, flipping a coin. If each step covers a distance L, then after N steps the expected most probable distance of the player from the origin can be shown to be L√N.

Thus, the distance goes up as the square root of the number of steps.

Now consider a situation where N measurements of a quantity are performed, each with an identical random error ε. We find the sum of the measurements.

Each error is equally likely to be +ε as −ε, so the errors accumulate like the steps of a random walk, which is essentially random. Thus, the expected most probable error in the sum goes up as the square root of the number of measurements: ε√N.

This is exactly the result obtained by combining the errors in quadrature.
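This √N behavior is easy to check numerically. The following Python sketch simulates many trials of summing N equally likely ± errors and measures the spread of the sums; the trial counts are arbitrary.

```python
import random
import statistics

def rms_sum_error(n_measurements, err=1.0, trials=5000, seed=1):
    """Spread (standard deviation) of the sum of n independent errors,
    each equally likely to be +err or -err, estimated by simulation."""
    rng = random.Random(seed)
    sums = [
        sum(rng.choice((-err, err)) for _ in range(n_measurements))
        for _ in range(trials)
    ]
    return statistics.pstdev(sums)

# The spread grows like sqrt(N): going from 4 to 100 measurements
# (a factor of 25) only multiplies the expected error by about 5.
spread_4 = rms_sum_error(4)
spread_100 = rms_sum_error(100)
```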

Another similar way of thinking about the errors is that in an abstract linear error space, the errors span the space. If the errors are probabilistic and uncorrelated, the errors in fact are linearly independent (orthogonal) and thus form a basis for the space. Thus, we would expect that to add these independent random errors, we would have to use Pythagoras' theorem, which is just combining them in quadrature.

3.3.2 Finding the Error in an Average

The rules for propagation of errors, discussed in Section 3.3.1, allow one to find the error in an average or mean of a number of repeated measurements. Recall that to compute the average, first the sum of all the measurements is found, and the rule for addition of quantities gives the error in the sum. Next, the sum is divided by the number of measurements, and the rule for division of quantities gives the error in the result (that is, the error of the mean).

In the case that the error in each measurement has the same value, the result of applying these rules for propagation of errors can be summarized as a theorem.

Theorem: If the measurement of a random variable x is repeated n times, and the random variable has standard deviation σ, then the standard deviation in the mean is σ/√n.

Proof: One makes n measurements, each with error errx.

{x1, errx}, {x2, errx}, ... , {xn, errx}

We calculate the sum.

sumx = x1 + x2 + ... + xn

We calculate the error in the sum.

This last line is the key: by repeating the measurements n times, the error in the sum only goes up as √n.

The mean is the sum divided by the number of measurements n.

Applying the rule for division we get the following.

This completes the proof.

The quantity σ/√n is called the standard error of the mean.
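The theorem can also be checked by simulation. The Python sketch below draws many samples of n Gaussian measurements with standard deviation σ and confirms that the spread of the sample means is close to σ/√n; all the numbers here are illustrative.

```python
import math
import random
import statistics

rng = random.Random(2)
sigma, n, trials = 0.5, 25, 5000

# Draw many samples of n measurements and record each sample mean;
# the theorem predicts the means scatter with sigma / sqrt(n).
means = [
    statistics.fmean(rng.gauss(10.0, sigma) for _ in range(n))
    for _ in range(trials)
]
observed = statistics.pstdev(means)
predicted = sigma / math.sqrt(n)   # 0.5 / 5 = 0.1
```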

Here is an example. In Section 3.2.1, 10 measurements of the diameter of a small cylinder were discussed. The mean of the measurements was 1.6514 cm and the standard deviation was 0.00185 cm. Now we can calculate the mean and its error, adjusted for significant figures.

Note that presenting this result without significant figure adjustment makes no sense.

The above number implies that there is meaning in the one-hundred-millionth part of a centimeter.
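For the cylinder data, the arithmetic is simply 0.00185/√10. A quick Python check, using the mean and standard deviation quoted above:

```python
import math

mean = 1.6514      # cm, mean of the 10 diameter measurements
stdev = 0.00185    # cm, standard deviation of the measurements
n = 10

sem = stdev / math.sqrt(n)   # error (standard error) of the mean
# sem is about 0.000585 cm, so with significant-figure adjustment
# the result is quoted as 1.6514 +/- 0.0006 cm.
```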

Here is another example. Imagine you are weighing an object on a "dial balance" in which you turn a dial until the pointer balances, and then read the mass from the marking on the dial. You find M = 26.10 ± 0.01 g. The 0.01 g is the reading error of the balance, and is about as good as you can read that particular piece of equipment. You remove the mass from the balance, put it back on, weigh it again, and get M = 26.10 ± 0.01 g. You get a friend to try it and she gets the same result. You get another friend to weigh the mass and he also gets M = 26.10 ± 0.01 g. So you have four measurements of the mass of the body, each with an identical result. Do you think the theorem applies in this case? If yes, you would quote M = 26.100 ± 0.01/√4 = 26.100 ± 0.005 g. How about if you went out on the street and started bringing strangers in to repeat the measurement, each and every one of whom got M = 26.10 ± 0.01 g? After a few weeks, you would have 10,000 identical measurements. Would the error in the mass, as measured on that $50 balance, really be 0.01/√10000 = 0.0001 g?

The point is that these rules of statistics are only a rough guide, and in a situation like this example, where they probably don't apply, don't be afraid to ignore them and use your "uncommon sense". In this example, presenting your result as M = 26.10 ± 0.01 g is probably the reasonable thing to do.

3.4 Calibration, Accuracy, and Systematic Errors

In Section 3.1.2, we made the distinction between errors of precision and accuracy by imagining that we had performed a timing measurement with a very precise pendulum clock, but had set its length wrong, leading to an inaccurate result. Here we discuss these types of errors of accuracy. To get some insight into how such a wrong length can arise, you may wish to try comparing the scales of two rulers made by different companies — discrepancies of 3 mm across 30 cm are common!

If we have access to a ruler we trust (i.e., a "calibration standard"), we can use it to calibrate another ruler. One reasonable way to use the calibration is that if our instrument gives one reading and the standard gives another, we can multiply all readings of our instrument by the ratio of the standard's reading to our instrument's reading. Since the correction is usually very small, it will practically never affect the error of precision, which is also small. Calibration standards are, almost by definition, too delicate and/or expensive to use for direct measurement.

Here is an example. We are measuring a voltage using an analog Philips multimeter, model PM2400/02. The result is 6.50 V, measured on the 10 V scale, and the reading error is taken to be 0.03 V, which is 0.5%. Repeating the measurement gives identical results. The experimenter calculates that the effect of the voltmeter on the circuit being measured is less than 0.003% and hence negligible. However, the manufacturer of the instrument only claims an accuracy of 3% of full scale (10 V), which here corresponds to 0.3 V.

What this claimed accuracy means is that the manufacturer claims to control the tolerances of the components inside the box to the point where the value read on the meter will be within 3% of full scale of the actual value. Furthermore, this is not a random error: a given meter will supposedly always read either too high or too low when measurements are repeated on the same scale. Thus, repeating measurements will not reduce this error.

A further problem with this accuracy is that while most good manufacturers (including Philips) tend to be quite conservative and give trustworthy specifications, there are some manufacturers who have the specifications written by the sales department instead of the engineering department. And even Philips cannot take into account that maybe the last person to use the meter dropped it.

Nonetheless, in this case it is probably reasonable to accept the manufacturer's claimed accuracy and take the measured voltage to be 6.5 ± 0.3 V. If you want or need to know the voltage better than that, there are two alternatives: use a better, more expensive voltmeter to take the measurement or calibrate the existing meter.

Using a better voltmeter, of course, gives a better result. Say you used a Fluke 8000A digital multimeter and measured the voltage to be 6.63 V. However, you're still in the same position of having to accept the manufacturer's claimed accuracy, in this case (0.1% of reading + 1 digit) = 0.02 V. To do better than this, you must use an even better voltmeter, which again requires accepting the accuracy of this even better instrument and so on, ad infinitum, until you run out of time, patience, or money.

Say we decide instead to calibrate the Philips meter using the Fluke meter as the calibration standard. Such a procedure is usually justified only if a large number of measurements were performed with the Philips meter. Why spend half an hour calibrating the Philips meter for just one measurement when you could use the Fluke meter directly?

We measure four voltages using both the Philips and the Fluke meter. For the Philips instrument we are not interested in its accuracy, which is why we are calibrating the instrument. So we will use the reading error of the Philips instrument as the error in its measurements and the accuracy of the Fluke instrument as the error in its measurements.

We form lists of the results of the measurements.

We can examine the differences between the readings either by dividing the Fluke results by the Philips or by subtracting the two values.

The second set of numbers is closer to a single constant value than the first set, so in this case adding a correction to the Philips measurement is perhaps more appropriate than multiplying by a correction factor.

We form a new data set of format { }.

We can guess, then, that for a Philips measurement of 6.50 V the appropriate correction factor is 0.11 ± 0.04 V, where the estimated error is a guess based partly on a fear that the meter's inaccuracy may not be as smooth as the four data points indicate. Thus, the corrected Philips reading can be calculated.
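The arithmetic of that correction can be spelled out: the correction is added to the reading, and the reading error and the estimated error in the correction combine in quadrature. A Python sketch with the numbers from this example:

```python
import math

reading, reading_err = 6.50, 0.03        # Philips reading and reading error (V)
correction, correction_err = 0.11, 0.04  # estimated additive correction (V)

corrected = reading + correction
# Independent errors combine in quadrature under addition.
corrected_err = math.hypot(reading_err, correction_err)
# corrected = 6.61 V with an error of 0.05 V, consistent with the
# Fluke measurement of 6.63 +/- 0.02 V.
```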

(You may wish to know that all the numbers in this example are real data and that when the Philips meter read 6.50 V, the Fluke meter measured the voltage to be 6.63 ± 0.02 V.)

Finally, a further subtlety: Ohm's law states that the resistance R is related to the voltage V across and the current I through the resistor according to the following equation.

V = IR

Imagine that we are trying to determine an unknown resistance using this law and are using the Philips meter to measure the voltage. Essentially the resistance is the slope of a graph of voltage versus current.

If the Philips meter is systematically reading all voltages too high by a constant offset, that systematic error of accuracy will have no effect on the slope and therefore will have no effect on the determination of the resistance R. (A purely multiplicative miscalibration, by contrast, would scale the slope, and hence R, by the same factor.) So in this case and for this measurement, we may be quite justified in ignoring the inaccuracy of the voltmeter entirely and using the reading error to determine the uncertainty in the determination of R.
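The insensitivity of the slope to a constant offset is easy to verify with a least-squares fit. In the Python sketch below (with a hypothetical resistance and hypothetical currents), adding a constant voltage offset shifts every point but leaves the fitted slope unchanged; note that scaling the voltages instead would scale the slope.

```python
def slope(xs, ys):
    """Least-squares slope of ys versus xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

R_true = 100.0                                # ohms (hypothetical)
currents = [0.01, 0.02, 0.03, 0.04, 0.05]     # amperes (hypothetical)
volts = [R_true * i for i in currents]
volts_offset = [v + 0.1 for v in volts]       # meter reads 0.1 V too high

# The constant offset leaves the slope, and hence the inferred
# resistance, unchanged.
r_fit = slope(currents, volts)
r_fit_offset = slope(currents, volts_offset)
```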

3.5 Summary of the Error Propagation Routines



How To Write A Lab Report | Step-by-Step Guide & Examples

Published on May 20, 2021 by Pritha Bhandari . Revised on July 23, 2023.

A lab report conveys the aim, methods, results, and conclusions of a scientific experiment. The main purpose of a lab report is to demonstrate your understanding of the scientific method by performing and evaluating a hands-on lab experiment. This type of assignment is usually shorter than a research paper .

Lab reports are commonly used in science, technology, engineering, and mathematics (STEM) fields. This article focuses on how to structure and write a lab report.


The sections of a lab report can vary between scientific fields and course requirements, but they usually contain the purpose, methods, and findings of a lab experiment .

Each section of a lab report has its own purpose.

  • Title: expresses the topic of your study
  • Abstract : summarizes your research aims, methods, results, and conclusions
  • Introduction: establishes the context needed to understand the topic
  • Method: describes the materials and procedures used in the experiment
  • Results: reports all descriptive and inferential statistical analyses
  • Discussion: interprets and evaluates results and identifies limitations
  • Conclusion: sums up the main findings of your experiment
  • References: list of all sources cited using a specific style (e.g. APA )
  • Appendices : contains lengthy materials, procedures, tables or figures

Although most lab reports contain these sections, some sections can be omitted or combined with others. For example, some lab reports contain a brief section on research aims instead of an introduction, and a separate conclusion is not always required.

If you’re not sure, it’s best to check your lab report requirements with your instructor.


Your title provides the first impression of your lab report – effective titles communicate the topic and/or the findings of your study in specific terms.

Create a title that directly conveys the main focus or purpose of your study. It doesn’t need to be creative or thought-provoking, but it should be informative.

  • The effects of varying nitrogen levels on tomato plant height.
  • Testing the universality of the McGurk effect.
  • Comparing the viscosity of common liquids found in kitchens.

An abstract condenses a lab report into a brief overview of about 150–300 words. It should provide readers with a compact version of the research aims, the methods and materials used, the main results, and the final conclusion.

Think of it as a way of giving readers a preview of your full lab report. Write the abstract last, in the past tense, after you’ve drafted all the other sections of your report, so you’ll be able to succinctly summarize each section.

To write a lab report abstract, use these guiding questions:

  • What is the wider context of your study?
  • What research question were you trying to answer?
  • How did you perform the experiment?
  • What did your results show?
  • How did you interpret your results?
  • What is the importance of your findings?

Nitrogen is a necessary nutrient for high quality plants. Tomatoes, one of the most consumed fruits worldwide, rely on nitrogen for healthy leaves and stems to grow fruit. This experiment tested whether nitrogen levels affected tomato plant height in a controlled setting. It was expected that higher levels of nitrogen fertilizer would yield taller tomato plants.

Levels of nitrogen fertilizer were varied between three groups of tomato plants. The control group did not receive any nitrogen fertilizer, while one experimental group received low levels of nitrogen fertilizer, and a second experimental group received high levels of nitrogen fertilizer. All plants were grown from seeds, and heights were measured 50 days into the experiment.

The effects of nitrogen levels on plant height were tested between groups using an ANOVA. The plants with the highest level of nitrogen fertilizer were the tallest, while the plants with low levels of nitrogen exceeded the control group plants in height. In line with expectations and previous findings, the effects of nitrogen levels on plant height were statistically significant. This study strengthens the importance of nitrogen for tomato plants.

Your lab report introduction should set the scene for your experiment. One way to write your introduction is with a funnel (an inverted triangle) structure:

  • Start with the broad, general research topic
  • Narrow your topic down to your specific study focus
  • End with a clear research question

Begin by providing background information on your research topic and explaining why it’s important in a broad real-world or theoretical context. Describe relevant previous research on your topic and note how your study may confirm it or expand it, or fill a gap in the research field.

This lab experiment builds on previous research from Haque, Paul, and Sarker (2011), who demonstrated that tomato plant yield increased at higher levels of nitrogen. However, the present research focuses on plant height as a growth indicator and uses a lab-controlled setting instead.

Next, go into detail on the theoretical basis for your study and describe any directly relevant laws or equations that you’ll be using. State your main research aims and expectations by outlining your hypotheses .

Based on the importance of nitrogen for tomato plants, the primary hypothesis was that the plants with the high levels of nitrogen would grow the tallest. The secondary hypothesis was that plants with low levels of nitrogen would grow taller than plants with no nitrogen.

Your introduction doesn’t need to be long, but you may need to organize it into a few paragraphs or with subheadings such as “Research Context” or “Research Aims.”

A lab report Method section details the steps you took to gather and analyze data. Give enough detail so that others can follow or evaluate your procedures. Write this section in the past tense. If you need to include any long lists of procedural steps or materials, place them in the Appendices section but refer to them in the text here.

You should describe your experimental design, your subjects, materials, and specific procedures used for data collection and analysis.

Experimental design

Briefly note whether your experiment is a within-subjects  or between-subjects design, and describe how your sample units were assigned to conditions if relevant.

A between-subjects design with three groups of tomato plants was used. The control group did not receive any nitrogen fertilizer. The first experimental group received a low level of nitrogen fertilizer, while the second experimental group received a high level of nitrogen fertilizer.

Describe human subjects in terms of demographic characteristics, and animal or plant subjects in terms of genetic background. Note the total number of subjects as well as the number of subjects per condition or per group. You should also state how you recruited subjects for your study.

List the equipment or materials you used to gather data and state the model names for any specialized equipment.

List of materials

35 Tomato seeds

15 plant pots (15 cm tall)

Light lamps (50,000 lux)

Nitrogen fertilizer

Measuring tape

Describe your experimental settings and conditions in detail. You can provide labelled diagrams or images of the exact set-up necessary for experimental equipment. State how extraneous variables were controlled through restriction or by fixing them at a certain level (e.g., keeping the lab at room temperature).

Light levels were fixed throughout the experiment, and the plants were exposed to 12 hours of light a day. Temperature was restricted to between 23 and 25℃. The pH and carbon levels of the soil were also held constant throughout the experiment as these variables could influence plant height. The plants were grown in rooms free of insects or other pests, and they were spaced out adequately.

Your experimental procedure should describe the exact steps you took to gather data in chronological order. You’ll need to provide enough information so that someone else can replicate your procedure, but you should also be concise. Place detailed information in the appendices where appropriate.

In a lab experiment, you’ll often closely follow a lab manual to gather data. Some instructors will allow you to simply reference the manual and state whether you changed any steps based on practical considerations. Other instructors may want you to rewrite the lab manual procedures as complete sentences in coherent paragraphs, while noting any changes to the steps that you applied in practice.

If you’re performing extensive data analysis, be sure to state your planned analysis methods as well. This includes the types of tests you’ll perform and any programs or software you’ll use for calculations (if relevant).

First, tomato seeds were sown in wooden flats containing soil about 2 cm below the surface. Each seed was kept 3-5 cm apart. The flats were covered to keep the soil moist until germination. The seedlings were removed and transplanted to pots 8 days later, with a maximum of 2 plants to a pot. Each pot was watered once a day to keep the soil moist.

The nitrogen fertilizer treatment was applied to the plant pots 12 days after transplantation. The control group received no treatment, while the first experimental group received a low concentration, and the second experimental group received a high concentration. There were 5 pots in each group, and each plant pot was labelled to indicate the group the plants belonged to.

50 days after the start of the experiment, plant height was measured for all plants. A measuring tape was used to record the length of the plant from ground level to the top of the tallest leaf.

In your results section, you should report the results of any statistical analysis procedures that you undertook. You should clearly state how the results of statistical tests support or refute your initial hypotheses.

The main results to report include:

  • any descriptive statistics
  • statistical test results
  • the significance of the test results
  • estimates of standard error or confidence intervals

The mean heights of the plants in the control group, low nitrogen group, and high nitrogen groups were 20.3, 25.1, and 29.6 cm respectively. A one-way ANOVA was applied to calculate the effect of nitrogen fertilizer level on plant height. The results demonstrated statistically significant ( p = .03) height differences between groups.

Next, post-hoc tests were performed to assess the primary and secondary hypotheses. In support of the primary hypothesis, the high nitrogen group plants were significantly taller than the low nitrogen group and the control group plants. Similarly, the results supported the secondary hypothesis: the low nitrogen plants were taller than the control group plants.
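For readers who want to see what such an analysis looks like in practice, here is a minimal one-way ANOVA F statistic computed from scratch in Python. The plant heights are hypothetical values chosen to match the group means reported above; the real data are not given in this article.

```python
import statistics

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = statistics.fmean(x for g in groups for x in g)
    means = [statistics.fmean(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical heights (cm) chosen to match the reported group means
# of 20.3, 25.1, and 29.6 cm.
control = [19.1, 20.8, 21.0, 20.3]
low_nitrogen = [24.2, 25.9, 25.0, 25.3]
high_nitrogen = [28.7, 30.2, 29.8, 29.7]
f_stat = one_way_anova_f(control, low_nitrogen, high_nitrogen)
```

A large F statistic relative to the F distribution with (k − 1, n − k) degrees of freedom corresponds to a small p value; in practice one would use a statistics package to obtain the p value itself.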

These results can be reported in the text or in tables and figures. Use text for highlighting a few key results, but present large sets of numbers in tables, or show relationships between variables with graphs.

You should also include sample calculations in the Results section for complex experiments. For each sample calculation, provide a brief description of what it does and use clear symbols. Present your raw data in the Appendices section and refer to it to highlight any outliers or trends.

The Discussion section will help demonstrate your understanding of the experimental process and your critical thinking skills.

In this section, you can:

  • Interpret your results
  • Compare your findings with your expectations
  • Identify any sources of experimental error
  • Explain any unexpected results
  • Suggest possible improvements for further studies

Interpreting your results involves clarifying how your results help you answer your main research question. Report whether your results support your hypotheses.

  • Did you measure what you set out to measure?
  • Were your analysis procedures appropriate for this type of data?

Compare your findings with other research and explain any key differences in findings.

  • Are your results in line with those from previous studies or your classmates’ results? Why or why not?

An effective Discussion section will also highlight the strengths and limitations of a study.

  • Did you have high internal validity or reliability?
  • How did you establish these aspects of your study?

When describing limitations, use specific examples. For example, if random error contributed substantially to the measurements in your study, state the particular sources of error (e.g., imprecise apparatus) and explain ways to improve them.

The results support the hypothesis that nitrogen levels affect plant height, with increasing levels producing taller plants. These statistically significant results are taken together with previous research to support the importance of nitrogen as a nutrient for tomato plant growth.

However, unlike previous studies, this study focused on plant height as an indicator of plant growth. Importantly, plant height may not always reflect plant health or fruit yield, so measuring other indicators would have strengthened the study findings.

Another limitation of the study is the plant height measurement technique, as the measuring tape was not suitable for plants with extreme curvature. Future studies may focus on measuring plant height in different ways.

The main strengths of this study were the controls for extraneous variables, such as pH and carbon levels of the soil. All other factors that could affect plant height were tightly controlled to isolate the effects of nitrogen levels, resulting in high internal validity for this study.

Your conclusion should be the final section of your lab report. Here, you’ll summarize the findings of your experiment, with a brief overview of the strengths and limitations, and implications of your study for further research.

Some lab reports may omit a Conclusion section because it overlaps with the Discussion section, but you should check with your instructor before doing so.


A lab report conveys the aim, methods, results, and conclusions of a scientific experiment . Lab reports are commonly assigned in science, technology, engineering, and mathematics (STEM) fields.

The purpose of a lab report is to demonstrate your understanding of the scientific method with a hands-on lab experiment. Course instructors will often provide you with an experimental design and procedure. Your task is to write up how you actually performed the experiment and evaluate the outcome.

In contrast, a research paper requires you to independently develop an original argument. It involves more in-depth research and interpretation of sources and data.

A lab report is usually shorter than a research paper.

The sections of a lab report can vary between scientific fields and course requirements, but it usually contains the following:

  • Abstract: summarizes your research aims, methods, results, and conclusions
  • References: list of all sources cited using a specific style (e.g. APA)
  • Appendices: contains lengthy materials, procedures, tables or figures

The results chapter or section simply and objectively reports what you found, without speculating on why you found these results. The discussion interprets the meaning of the results, puts them in context, and explains why they matter.

In qualitative research , results and discussion are sometimes combined. But in quantitative research , it’s considered important to separate the objective results from your interpretation of them.



BSCI 1510L Literature and Stats Guide: 5.5 Error bars in figures

5.5 Error bars in figures

It is not particularly easy to examine the table shown in Fig. 12 of section 5.4 and quickly tell which of the 95% confidence intervals overlapped.  For this reason, a figure containing a column chart is often used to compare the results of a number of different trials.

5.5.1 Examples from journal articles

Fig. 13  Sample mean log reductions and 95% confidence intervals from Fig. 2 of Sickbert-Bennett et al. (2005)

The results from the first column of Fig. 12 are presented in graphical form in Fig. 13.  The mean log reduction of each agent is graphed as the height of its column.  The upper and lower 95% confidence intervals are shown as error bars that extend above and below the top of the mean column.  These error bars serve the same purpose as the error bars in Figs. 9 - 11 of section 5.4.

It is important to note that in publications error bars do not always represent 95% confidence limits.  It is also common for error bars to represent plus or minus one standard error of the mean ("SE" or "SEM").  Because plus or minus one standard error spans a smaller range than the 95% confidence interval, overlapping SE bars indicate that the means are not significantly different.  However, non-overlapping SE bars may or may not correspond to a significant difference, so some other method must be used to indicate differences.  Often letters are used, labeling significantly different groups with different letters (Fig. 14).
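The difference between the two conventions is easy to quantify. The Python sketch below computes ±1 SE bars and approximate 95% CI bars for a hypothetical sample; the multiplier 2.365 is the two-sided 95% critical value of the t distribution for 7 degrees of freedom.

```python
import math
import statistics

sample = [4.8, 5.1, 5.5, 4.9, 5.3, 5.0, 5.2, 5.4]  # hypothetical data
n = len(sample)
mean = statistics.fmean(sample)
sem = statistics.stdev(sample) / math.sqrt(n)  # half-length of +/- 1 SE bars

t_crit = 2.365  # two-sided 95% t critical value for n - 1 = 7 df
ci_half = t_crit * sem  # half-length of 95% confidence interval bars

# The 95% CI bars are more than twice as long as the +/- 1 SE bars,
# which is why a figure legend must state which convention is used.
```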


Fig. 2. Desmanthus seed production in relation to timing of vole access. Seed production continued to be lower where voles had continual and early-season access in 2003 even after being protected from vole herbivory in 2004. The number of seeds shown for each treatment is the mean of 2003 and 2004; bars represent standard error. Different letters indicate significant differences in access treatment means from repeated-measures ANOVA (Hotelling's T², P < 0.05).

Fig. 14 Mean seed production with error bars representing standard error of the mean from Fig. 2 of Sullivan and Howe (2009).

In some cases error bars represent +/- 2 SE, and in a few cases, where showing the variability of the data is important, error bars may represent plus or minus one standard deviation.  Because of this variety of interpretations, it is extremely important to specify in the figure legend what the error bars actually represent.
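For the same sample, these conventions produce error bars of quite different sizes, which is why the legend must say which one is in use. A quick sketch (Python, with made-up data; the t critical value comes from a t table for df = 9):

```python
import math
import statistics

# Hypothetical sample of 10 measurements
data = [4.8, 5.1, 5.0, 4.7, 5.3, 4.9, 5.2, 5.0, 4.6, 5.4]

n = len(data)
sd = statistics.stdev(data)      # sample standard deviation
sem = sd / math.sqrt(n)          # standard error of the mean
t_crit = 2.262                   # Student's t, df = 9, two-tailed alpha = 0.05
ci_half = t_crit * sem           # 95% CI half-width

# For this sample the bars widen in the order SE < 2 SE < 95% CI < SD
print(f"SE = {sem:.3f}, 2 SE = {2*sem:.3f}, "
      f"95% CI = {ci_half:.3f}, SD = {sd:.3f}")
```

Note that the ordering of 2 SE and the 95% CI depends on the sample size: the two are nearly equal for large n, while the standard deviation bar stays wide regardless of n because it describes the spread of the data rather than the uncertainty of the mean.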

5.5.2 Creating a column chart with error bars using Excel

To create a column chart of sample means having error bars based on a statistical analysis, begin by following the instructions in Section 5.4.4.  To graph several sample means, place the data for each sample group in adjacent columns of the spreadsheet and select the block of columns during the Input Range selection step.  When the descriptive statistics are generated, the labels for the statistical values will be repeated unnecessarily, so you can delete the extra label columns that separate the columns of results.

Highlight the mean values in the row labeled "Mean".  On the Insert ribbon, click on the Insert Column Chart icon and select the Clustered Column option.  Excel should create a column chart with a column for each sample mean.  To label the columns, click on the funnel-shaped icon to the right of the chart, then click on the "Select data…" link at the bottom of the dialog box.  Click on the Edit button under Horizontal (Category) Axis Labels.  In the Axis Labels selection dialog box, click on the select button and select the column heading cells.  Click OK, then OK again.

To further customize the chart, click on the + sign to the upper right of the chart.  Deselect Chart Title.  Select Axis Title and edit the labels appropriately.  Check the Error Bars checkbox, then click on the triangle to the right of the option and select "More Options…".  Under "Error Amount", click the Custom radio button, then click on Specify Value.  The Custom Error Bars dialog box will appear.  The positive and negative error values are the amounts to be added to and subtracted from the means to produce the upper and lower limits; they are not the actual values of the limits themselves.  Since these amounts appear in the Confidence Level(95.0%) row of the generated Summary Statistics table, you can simply select the cells in that row for both the Positive Error Value and the Negative Error Value, then click OK to dismiss the dialog.
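The distinction between interval limits and error amounts is a common stumbling block. If a source reports the limits rather than the half-widths, the values Excel wants are the distances from the mean, as in this small sketch (Python, hypothetical numbers):

```python
# Suppose a summary table reports this mean and these 95% confidence limits:
mean = 2.32
lower_limit = 2.08   # lower end of the 95% CI
upper_limit = 2.56   # upper end of the 95% CI

# Excel's Custom Error Bars dialog wants the distances from the mean,
# not the limits themselves:
positive_error = upper_limit - mean   # distance above the mean
negative_error = mean - lower_limit   # distance below the mean
print(positive_error, negative_error)
```

Entering the limits themselves by mistake produces error bars that extend far beyond the columns, which is usually the first sign that the two quantities have been confused.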


If you want the error bars to represent standard error of the mean or standard deviation, you can select cells in those rows rather than the 95% confidence levels.

Because the chart was created in a Microsoft Office product, you should be able to copy and paste it easily into either a Word or PowerPoint document.  The default paste option is "Use Destination Theme & Link Data", which means that if you change the spreadsheet, the chart may change in your Word document.  To prevent this, you can change the paste option to Picture.  The chart will then be stable in your Word document, although it will also become uneditable.

Sickbert-Bennett, E.E., D.J. Weber, M.F. Gergen-Teague, M.D. Sobsey, G.P. Samsa, W.A. Rutala. 2005. Comparative efficacy of hand hygiene agents in the reduction of bacteria and viruses.  American Journal of Infection Control 33:67-77.  http://dx.doi.org/10.1016/j.ajic.2004.08.005

Sullivan, A.T. and H.F. Howe. 2009. Prairie forb response to timing of vole herbivory.  Ecology 90:1346-1355.  http://dx.doi.org/10.1890/08-0629.1

  • Last Updated: Jul 30, 2024 9:53 AM
  • URL: https://researchguides.library.vanderbilt.edu/bsci1510L

Creative Commons License
