Figure 1 shows two instruments used to measure the mass of an object. The digital scale has mostly replaced the double-pan balance in physics labs because it gives more accurate and precise measurements. But what exactly do we mean by accurate and precise? Aren’t they the same thing? In this section, we examine in detail the process of making and reporting a measurement.
Science is based on observation and experiment—that is, on measurements. Accuracy is how close a measurement is to the accepted reference value for that measurement. For example, let’s say we want to measure the length of standard printer paper. The packaging in which we purchased the paper states that it is 11.0 in. long. We then measure the length of the paper three times and obtain the following measurements: 11.1 in., 11.2 in., and 10.9 in. These measurements are quite accurate because they are very close to the reference value of 11.0 in. In contrast, if we had obtained a measurement of 12 in., our measurement would not be very accurate. Notice that the concept of accuracy requires that an accepted reference value be given.
The precision of measurements refers to how close the agreement is between repeated independent measurements (which are repeated under the same conditions). Consider the example of the paper measurements. The precision of the measurements refers to the spread of the measured values. One way to analyze the precision of the measurements is to determine the range, or difference, between the lowest and the highest measured values. In this case, the lowest value was 10.9 in. and the highest value was 11.2 in. Thus, the measured values deviated from each other by, at most, 0.3 in. These measurements were relatively precise because they did not vary too much in value. However, if the measured values had been 10.9 in., 11.1 in., and 11.9 in., then the measurements would not be very precise because there would be significant variation from one measurement to another. Notice that the concept of precision depends only on the actual measurements acquired and does not depend on an accepted reference value.
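The range calculation described above can be sketched in a few lines of Python, using the paper measurements from the text:

```python
# Paper-length measurements from the text, in inches.
measurements = [11.1, 11.2, 10.9]

# Precision as the range: largest minus smallest measured value.
spread = max(measurements) - min(measurements)
print(f"Range: {spread:.1f} in.")  # prints "Range: 0.3 in."
```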
The measurements in the paper example are both accurate and precise, but in some cases, measurements are accurate but not precise, or they are precise but not accurate. Let’s consider an example of a GPS attempting to locate the position of a restaurant in a city. Think of the restaurant location as existing at the center of a bull’s-eye target and think of each GPS attempt to locate the restaurant as a black dot. In Figure 2(a), we see the GPS measurements are spread out far apart from each other, but they are all relatively close to the actual location of the restaurant at the center of the target. This indicates a low-precision, high-accuracy measuring system. However, in Figure 2(b), the GPS measurements are concentrated quite closely to one another, but they are far away from the target location. This indicates a high-precision, low-accuracy measuring system.
The precision of a measuring system is related to the uncertainty in the measurements whereas the accuracy is related to the discrepancy from the accepted reference value. Uncertainty is a quantitative measure of how much your measured values deviate from one another. There are many different methods of calculating uncertainty, each of which is appropriate to different situations. Some examples include taking the range (that is, the largest minus the smallest) or finding the standard deviation of the measurements. Discrepancy (or “measurement error”) is the difference between the measured value and a given standard or expected value. If the measurements are not very precise, then the uncertainty of the values is high. If the measurements are not very accurate, then the discrepancy of the values is high.
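Both uncertainty estimates mentioned above, the range and the standard deviation, can be computed with Python's standard library; this is a quick sketch using the paper measurements from the text:

```python
import statistics

# Paper-length measurements from the text, in inches.
measurements = [11.1, 11.2, 10.9]

# Two common ways to quantify how much the values deviate from one another.
range_unc = max(measurements) - min(measurements)   # largest minus smallest
stdev_unc = statistics.stdev(measurements)          # sample standard deviation

print(f"range = {range_unc:.2f} in., stdev = {stdev_unc:.2f} in.")
```

Which estimate is appropriate depends on the situation; the range is simple but sensitive to a single outlier, while the standard deviation uses all of the measurements.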
Recall our example of measuring paper length; we obtained measurements of 11.1 in., 11.2 in., and 10.9 in., and the accepted value was 11.0 in. We might average the three measurements to say our best guess is 11.1 in.; in this case, our discrepancy is 11.1 – 11.0 = 0.1 in., which provides a quantitative measure of accuracy. We might calculate the uncertainty in our best guess by using half of the range of our measured values: 0.15 in. Then we would say the length of the paper is 11.1 in. plus or minus 0.15 in. The uncertainty in a measurement, A, is often denoted as δA (read “delta A”), so the measurement result would be recorded as A ± δA. Returning to our paper example, the measured length of the paper could be expressed as 11.1 ± 0.15 in. Since the discrepancy of 0.1 in. is less than the uncertainty of 0.15 in., we might say the measured value agrees with the accepted reference value to within experimental uncertainty.
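The best guess, uncertainty, and discrepancy from the paper example can be worked out as a short sketch:

```python
# Paper-length measurements and the accepted value from the packaging, in inches.
measurements = [11.1, 11.2, 10.9]
reference = 11.0

best = sum(measurements) / len(measurements)               # average, reported as 11.1
uncertainty = (max(measurements) - min(measurements)) / 2  # half the range
discrepancy = abs(round(best, 1) - reference)              # best guess minus reference

# Report as A ± dA, e.g. 11.1 ± 0.15 in.
print(f"A = {round(best, 1)} ± {uncertainty:.2f} in., discrepancy = {discrepancy:.1f} in.")
```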
Several factors contribute to the uncertainty in a measurement, including limitations of the measuring device, the skill of the person taking the measurement, irregularities in the object being measured, and other situational factors. In our example, such contributing factors could be that the smallest division on the ruler is 1/16 in., the person using the ruler has bad eyesight, the ruler is worn down on one end, or one side of the paper is slightly longer than the other. At any rate, the uncertainty in a measurement must be calculated to quantify its precision. If a reference value is known, it also makes sense to calculate the discrepancy to quantify its accuracy.
Another method of expressing uncertainty is as a percent of the measured value. If a measurement A is expressed with uncertainty δA, the percent uncertainty is defined as
A grocery store sells 5-lb bags of apples. Let’s say we purchase four bags during the course of a month and weigh the bags each time. We obtain the following measurements:
Week 1 weight: 4.8 lb
Week 2 weight: 5.3 lb
Week 3 weight: 4.9 lb
Week 4 weight: 5.4 lb
We then determine that the average weight of the 5-lb bag of apples is 5.1 ± 0.3 lb, using half of the range as the uncertainty. What is the percent uncertainty of the bag’s weight?
First, observe that the average value of the bag’s weight, A, is 5.1 lb. The uncertainty in this value, \(\delta A\), is 0.3 lb. We can use the following equation to determine the percent uncertainty of the weight:
\(\text{Percent uncertainty} = {\delta A \over A}×100\%.\)
Substitute the values into the equation:

\(\text{Percent uncertainty} = {0.3\;\text{lb} \over 5.1\;\text{lb}}×100\% \approx 5.9\% \approx 6\%.\)
We can conclude the average weight of a bag of apples from this store is 5.1 lb ± 6%. Notice the percent uncertainty is dimensionless because the units of weight in \(\delta A = 0.3\;lb\) canceled those in A = 5.1 lb when we took the ratio.
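The whole apple-bag calculation can be checked in a few lines of Python, using the weekly weights from the example:

```python
# Weekly bag weights from the example, in pounds.
weights = [4.8, 5.3, 4.9, 5.4]

A = sum(weights) / len(weights)           # average weight: 5.1 lb
dA = (max(weights) - min(weights)) / 2    # half the range: 0.3 lb
percent = dA / A * 100                    # percent uncertainty

print(f"{A:.1f} lb ± {percent:.0f}%")  # prints "5.1 lb ± 6%"
```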
Uncertainty exists in anything calculated from measured quantities. For example, the area of a floor calculated from measurements of its length and width has an uncertainty because the length and width have uncertainties. How big is the uncertainty in something you calculate by multiplication or division? If the measurements going into the calculation have small uncertainties (a few percent or less), then the method of adding percents can be used for multiplication or division. This method states that the percent uncertainty in a quantity calculated by multiplication or division is the sum of the percent uncertainties in the items used to make the calculation. For example, if a floor has a length of 4.00 m and a width of 3.00 m, with uncertainties of 2% and 1%, respectively, then the area of the floor is 12.0 m² and has an uncertainty of 3%. (Expressed as an area, this is 0.36 m² [12.0 m² × 0.03], which we round to 0.4 m² since the area of the floor is given to a tenth of a square meter.)
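The adding-percents rule for the floor example can be sketched as follows (values from the text; recall the rule is only a good approximation when the individual uncertainties are small):

```python
# Floor dimensions and their percent uncertainties, from the text.
length, length_pct = 4.00, 2.0   # meters, percent
width, width_pct = 3.00, 1.0     # meters, percent

area = length * width                 # 12.0 m²
area_pct = length_pct + width_pct     # adding percents: 2% + 1% = 3%
area_unc = area * area_pct / 100      # 0.36 m², rounded to 0.4 m² in the text

print(f"Area = {area:.1f} m² ± {area_pct:.0f}% (± {area_unc:.1f} m²)")
```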