There is no such thing as a perfect or ideal measurement that provides the “true value” of the measured quantity. There are many reasons for this, from the limitations of the instruments and of the observer to variations in the devices in the circuit on which the measurement is made. This does not mean that a good, useful measurement is impossible. Obtaining one, however, requires not only adequate instruments but also attention and vigilance against the gross mistakes that seem to lurk in any laboratory setup. Gross mistakes are errors such as connecting a voltmeter lead to the wrong point in a circuit or entering data incorrectly into a notebook or a computer. They can be avoided by following proper procedures, recording data carefully, and so on. Here we are concerned with two other important concepts: accuracy and precision.
Accuracy can be defined as the difference between the value obtained from a measurement and the real, “true” value of the quantity. It can be expressed in absolute terms, such as 10 mV, or in relative terms, such as 0.5%. In the first case the measured voltage may differ from the actual voltage by no more than 10 mV; in the second, by no more than the given percentage. Accuracy is difficult to determine, because we never know the real value of the measured quantity, but it can be roughly estimated if we know the precision of the instruments and the reliability of their calibration.
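As a quick numerical sketch of the two kinds of accuracy specification (the 5 V reading below is a made-up illustration; only the 10 mV and 0.5% figures come from the text), an absolute spec bounds the true value by a fixed band, while a relative spec scales with the reading:

```python
measured = 5.000   # volts (illustrative reading, not from the text)
abs_spec = 0.010   # 10 mV absolute accuracy
rel_spec = 0.005   # 0.5% relative accuracy

# Absolute spec: the true value lies within a fixed band around the reading.
lo_abs, hi_abs = measured - abs_spec, measured + abs_spec
# Relative spec: the band grows in proportion to the reading itself.
lo_rel, hi_rel = measured * (1 - rel_spec), measured * (1 + rel_spec)

print(lo_abs, hi_abs)   # band of +/- 10 mV
print(lo_rel, hi_rel)   # band of +/- 25 mV at this reading
```

For a 5 V reading the relative spec is the looser of the two; below 2 V the absolute 10 mV spec would be looser.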
Precision of a measurement is related to the smallest difference between measured values that can be distinguished. For example, if a voltmeter's precision is 0.1 V, we can measure the difference between 10.2 V and 10.3 V but no finer. A reading of 10.25 V could belong to either value; we could not tell which. Precision is often confused with the resolution of the instrument scale. Just because an instrument has a finely divided scale on which we can read numbers “precisely” (true for many digital instruments), it does not necessarily follow that the measurement is precise. It may happen that when you disconnect the meter and connect it again to the same source you get a different reading on the same “precise” scale. It is generally true, however, that more precise instruments are designed with finer scales or more digits in their numerical display.
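The 0.1 V example can be sketched as a toy model in which the meter can only distinguish voltages in 0.1 V steps (the `quantize` function below is purely illustrative, not a model of a real meter):

```python
def quantize(true_voltage, precision=0.1):
    """Round a voltage to the nearest step the meter can distinguish."""
    return round(true_voltage / precision) * precision

# 10.2 V and 10.3 V produce distinct readings,
# but a true value of 10.25 V collapses onto one of them;
# from the reading alone we cannot tell which.
readings = [quantize(v) for v in (10.2, 10.25, 10.3)]
print(readings)
```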
To understand the difference between accuracy and precision better, consider a voltmeter that measures voltage consistently and reliably with a precision of 1 mV. A measurement of the voltage of an accurate standard source, of the kind used for calibrating instruments, reads 5 mV too high. This error is the measure of the voltmeter's accuracy: its measurements are quite precise, but the instrument is not well calibrated and consistently shows higher values. Such an instrument is still quite useful, since we are often interested in comparing different voltages, and this meter can measure the ratio of two voltages much better than it measures their absolute values.
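One way to see why ratios survive miscalibration is to model the error, as an assumption on our part, as a constant scale factor (a gain error); a fixed offset would not cancel in the same way. The `read` function and `GAIN` constant below are hypothetical:

```python
GAIN = 1.005  # assumed miscalibration: every reading is 0.5% high

def read(true_v):
    """Meter precise to 1 mV (three decimals) but consistently scaled high."""
    return round(true_v * GAIN, 3)

# A 1.000 V calibration standard reads 5 mV high, as in the text:
print(read(1.000))                 # 1.005
# Yet the ratio of two readings recovers the true ratio of 2 almost exactly,
# because the scale factor cancels:
print(read(2.000) / read(1.000))
```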
In considering the effect of instrument precision on measurement errors, we are usually concerned with relative rather than absolute numbers. An error of 0.1 V in a measurement of the 117 V power-line voltage is quite acceptable, since it gives a relative error of 0.1/117 < 0.1%. The same absolute error in a measurement of a 1 V amplifier output gives a large relative error of 10%.
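The arithmetic above is just a ratio; as a quick check of both figures:

```python
def relative_error(absolute_error, value):
    """Relative error expressed as a fraction of the measured value."""
    return absolute_error / value

# The same 0.1 V error is negligible at 117 V but large at 1 V:
print(f"{relative_error(0.1, 117):.2%}")   # 0.09%, i.e. below 0.1%
print(f"{relative_error(0.1, 1.0):.0%}")   # 10%
```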