9.6 Practice problems

  1. Determine the mean, mode and range of the following dataset:
    20, 20, 25, 19, 17, 18, 17, 22, 23, 17, 23
  2. As part of a diagnostic test for levels of serum cortisol two standards were tested, a high concentration standard and a low concentration standard. The following results were obtained.
                        High    Low
    Mean                1.005   0.104
    Standard deviation  0.051   0.006

    Is the assay more precise at the low standard concentration or at the high standard concentration?

  3. Pretend you are in charge of determining a reference interval for the general population for a newly discovered blood analyte. Abnormally high or low levels of this analyte could be an indicator of kidney dysfunction. How would you go about selecting individuals to test? What factors would you need to consider in selecting them?



What do we do if measurements are inaccurate or imprecise?

Before this can be answered, we need to consider what an acceptable level of accuracy and precision is. There is no simple answer, as this must be defined for each variable or analyte being measured. In a biomedical context, this is normally based on medical significance.

A rule of thumb for many diagnostic assays is that the precision should be equal to or less than half of the ‘within-subject’ biological variation. Ultimately, the value of any medical diagnostic procedure or test is determined by how well it discriminates between the two conditions of interest (health and disease, two stages of a disease, etc.).
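The rule of thumb above can be expressed as a simple comparison of coefficients of variation. The sketch below is illustrative only; the function name and the CV values are assumptions, not data from any real assay.

```python
def meets_precision_goal(analytical_cv, within_subject_cv):
    """Return True if the assay's analytical CV (%) is at most
    half the within-subject biological CV (%), per the rule of thumb."""
    return analytical_cv <= 0.5 * within_subject_cv

# Hypothetical example: an assay with 2.0% CV measuring an analyte
# whose within-subject biological variation is 5.0% CV.
print(meets_precision_goal(2.0, 5.0))  # True  (2.0% <= 2.5%)
print(meets_precision_goal(3.0, 5.0))  # False (3.0% >  2.5%)
```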

Since it is sometimes difficult to know the true value of the quantity being measured, precision is often used as a proxy for accuracy. That is, when a number of measurements made under the same conditions are in ‘good’ agreement with one another, it is assumed that the measurements are accurate. This assumption is not always correct: it is possible for a measurement to be precise but not accurate, and it is also possible for a measurement to be accurate but not precise. Ideally, we want a diagnostic test to be both accurate and precise.

Poor precision for a measurement usually results from poor technique and is associated with ‘random errors’. That is, the error (the deviation from the mean) has a random sign and varying magnitude. For example, the technician performing a particular assay uses an automatic pipette which has a precision limitation that introduces random errors. This kind of error can be detected by examining the variability of the results and can be reduced by averaging over many repetitions.
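The effect of averaging on random error can be demonstrated with a small simulation. This is a minimal sketch with made-up numbers: the true value, the error magnitude, and the number of repetitions are all assumptions chosen for illustration.

```python
import random
import statistics

random.seed(42)  # fixed seed so the simulation is reproducible

TRUE_VALUE = 1.000  # hypothetical true concentration being measured

def measure():
    # Random error: a deviation with random sign and varying magnitude,
    # here modelled as Gaussian noise with standard deviation 0.05.
    return TRUE_VALUE + random.gauss(0, 0.05)

single = measure()
averaged = statistics.mean(measure() for _ in range(1000))

print(f"single measurement: {single:.3f}")
print(f"mean of 1000 runs:  {averaged:.3f}")
```

Because the errors have random sign, they tend to cancel when averaged, so the mean of many repetitions lies much closer to the true value than a typical single measurement does.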

In general, poor accuracy is associated with ‘systematic errors’. This kind of error will have a reproducible sign and magnitude. For example, the automatic pipette used in an assay is incorrectly calibrated and consistently dispenses a higher volume than expected. Systematic errors are often more difficult to detect and require either the use of known standards or verification by a different method.
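A known standard makes this kind of error visible, because the bias survives averaging. The sketch below uses an assumed standard concentration and an assumed pipette bias purely for illustration.

```python
import random
import statistics

random.seed(1)  # fixed seed so the simulation is reproducible

STANDARD_VALUE = 1.000  # known concentration of a reference standard (assumed)
BIAS = 0.08             # hypothetical systematic error: pipette dispenses too much

def biased_measure():
    # Systematic error (constant BIAS) plus a little random noise.
    return STANDARD_VALUE + BIAS + random.gauss(0, 0.01)

readings = [biased_measure() for _ in range(20)]
mean_reading = statistics.mean(readings)

# Averaging does NOT remove the bias; only comparison against the
# known standard value reveals it.
print(f"mean reading:   {mean_reading:.3f}")
print(f"estimated bias: {mean_reading - STANDARD_VALUE:+.3f}")
```

Note the contrast with random error: repeating the measurement twenty times leaves the offset intact, which is why systematic errors require known standards or an independent method to detect.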
