Implementation of a Quality Assurance/Quality Control (QA/QC) program will ensure that scientifically credible and meaningful data are collected. Quality Assurance (QA) is the collection of management processes and technical activities designed to guarantee that the defined standards of quality for data are met. Quality Control (QC) is an important part of the QA system and includes the activities that control the quality of the data so that it meets the needs of the users.
Quality Control applies to technical day-to-day activities, such as the use of standards, spiked samples, and blanks, whereas Quality Assurance is the management system that ensures effective quality control is in place and working as intended.
The following concepts are relevant to this topic:
Types of errors
The QA/QC Program can address different types of errors common to water sampling. These errors are systematic errors, random errors and blunders.
Systematic errors are always the same sign (direction) and magnitude, and produce biases. Examples of activities that may lead to systematic errors include improperly calibrated equipment or improper selection of sampling devices or storage containers leading to a consistent change in the water.
Random errors are unpredictable and vary in sign and magnitude; however, they will average out and approach zero if enough measurements are made.
Blunders are gross errors: simple mistakes that occur on occasion and produce erroneous results. Measuring the wrong sample, transcribing or transposing measured variables, and misreading a scale are examples of blunders. Appropriate QC procedures can minimize most kinds of blunders, but they cannot eliminate the carelessness that is their principal cause.
Although QA/QC programs try to remove all of these types of errors, random errors are extremely hard to control because they arise from natural variability rather than from any correctable procedure, and blunders stem from occasional carelessness. That is why sampling programs often include blanks and spiked samples in their sampling protocols.
Blanks and spiked samples
QA/QC programs often incorporate the use of blanks and spiked samples.
Blanks are water samples that have negligible or un-measurable amounts of the substances of interest in them and are commonly just distilled deionized water. They are used to measure incidental or accidental sample contamination during the whole process (sampling, transport, sample preparation, and analysis). Wherever a possibility exists for introduction of extraneous material into a sample or analytical procedure, a blank should be used to detect and measure the extraneous material.
Field Blanks are distilled deionized water samples that are carried to the sampling site. They are used to assess on-site contamination from dust or dirt in the air, or from steps of the sampling protocol such as filtering or preservative addition. The distilled water is transferred from the container in which it was transported to the sample container at the sampling site. The blanks are treated the same as a regular sample. For example, if a preservative is added to the regular sample, then it is added to the field blank as well. If the regular sample is filtered on-site, then the field blank is filtered on-site as well.
Lab Blanks are distilled deionized water samples that are used to assess potential contamination from analytical procedures. A blank sample is included in the sample run to ensure that the analytical equipment is not a source of contamination.
Spiked Samples (also known as Reference Samples) are made by preparing a sample with a known concentration of a particular analyte. The analytical results are then compared to the known concentration to determine the accuracy of the laboratory analyses.
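The comparison between a spiked sample's result and the known amount added is usually expressed as a percent recovery. A minimal sketch of that calculation, with illustrative numbers (the concentrations and any acceptance range are assumptions, not values from a specific method):

```python
def percent_recovery(spiked_result, unspiked_result, spike_added):
    """Percent recovery of a spiked analyte: the fraction of the
    known spike amount that the analysis actually measured."""
    return (spiked_result - unspiked_result) / spike_added * 100.0

# Hypothetical example: the unspiked sample measured 2.0 mg/L,
# 5.0 mg/L of analyte was added, and the spiked sample measured 6.8 mg/L.
recovery = percent_recovery(6.8, 2.0, 5.0)
print(f"Recovery: {recovery:.0f}%")  # Recovery: 96%
```

A recovery near 100% suggests the analysis is accurate for that analyte; what counts as an acceptable range depends on the method and laboratory.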
Sampling to reduce variability
QA/QC programs may also specify measures to reduce variability. There are three common approaches in sampling that can be utilized to reduce variability: split samples, replicate samples, and composite (pooled) samples.
Split Samples are obtained by dividing one sample into two or more identical sub-samples. Split samples are used to estimate the magnitude of error introduced by sample processing, either in the laboratory or in the field. Care must be taken to ensure that the sample is well agitated prior to dividing it into duplicates or triplicates.
Replicate Samples are two or more samples taken in a water body in the same sampling period. The first class of replicate samples comprises those taken at the same location. These replicate samples measure the inherent variability of the water body at a single location at a specific time. The second class comprises those taken at different locations within the water body. These replicate samples measure spatial variability within the water body.
A Composite Sample consists of several samples taken from the same water body in the same sampling period and mixed together to create a pooled sample. Similar to replicate samples, pooled samples can either be taken at the same location to reduce inherent variability, or from several locations to reduce spatial variability. Although a composite sample cannot measure variability, it is commonly used to reduce variability without increasing costs by adding several replicates. Care should be taken to ensure that the composite sample is well agitated prior to sub-sampling.
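The reason compositing reduces variability can be seen with a little statistics: an ideal composite of n well-mixed grab samples behaves like the mean of n individual measurements, so its standard deviation shrinks by a factor of the square root of n. A small sketch (the sd value is an illustrative assumption):

```python
import math

def composite_sd(single_sample_sd, n_subsamples):
    """Standard deviation of an ideal composite of n sub-samples,
    which behaves like the mean of n measurements: sd / sqrt(n)."""
    return single_sample_sd / math.sqrt(n_subsamples)

# If individual grab samples vary with sd = 2.0 mg/L, a composite
# of four grabs varies only half as much:
print(composite_sd(2.0, 4))  # 1.0
```

This is also why a composite cannot measure variability: the averaging that suppresses the scatter also hides it.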
Data and measurement quality objectives
Data and measurement quality objectives are key components of a QA/QC system. Data quality objectives (DQOs) are statements that define the level of confidence required to draw conclusions from the data. These objectives determine the degree of total variability, uncertainty, or error that can be tolerated in the data. These limits of variability are incorporated into the sampling plan and achieved through detailed sampling protocols.
An example of a quantitative DQO might be "Determine with 95% confidence that the water quality of the stream meets the guidelines". An example of a qualitative DQO might be "Determine the water quality of our stream using standard field methods, to compare to the water quality of streams monitored under the provincial long term monitoring program". DQOs should be thought of as statements that express the project outcomes that the data generated in the sampling program will be expected to support. They guide what type of accuracy or quality is required in the data, but do not set those specifics.
Measurement Quality Objectives (MQOs) are quantitative expressions of the quality of the data; i.e., is the data good enough to be used in making decisions? They specify what level of confidence is required in the data, but not how that confidence is achieved. They are usually expressed in terms such as standard deviation, percent recovery, and concentration.
MQOs should reflect error values that you would expect to fall outside the normal range of variability in the analytical or measurement technique being used. The decision on what number to use can be subjective; it may be based on previous knowledge, on statistical probability, or on familiarity with the analysis technique. MQOs might be set to reflect the quality of field blanks that are collected. For example, an MQO might state that less than 5% of field blanks show contamination. Often MQOs are set by analyzing replicate samples and specifying a tolerance for variability among the replicates. If the replicates don't meet the tolerance limit for a particular parameter, then all the data for that parameter should be considered questionable. The variability is expressed as the coefficient of variation (CV), which is equal to the standard deviation divided by the mean, multiplied by 100, and expressed as a percentage. Some "rule of thumb" criteria for tolerance values (above which the data should be viewed with caution) are:
- CV ≤ 25% for duplicates (i.e., a value exceeding 25% is considered too imprecise);
- CV ≤ 18% for triplicates;
- CV ≤ 10% for six or more replicates.
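The CV check described above is straightforward to compute. A minimal sketch, using hypothetical triplicate results and the 18% rule-of-thumb tolerance for triplicates:

```python
import statistics

def coefficient_of_variation(replicates):
    """CV (%) = sample standard deviation / mean * 100."""
    mean = statistics.mean(replicates)
    sd = statistics.stdev(replicates)  # n-1 (sample) standard deviation
    return sd / mean * 100.0

# Hypothetical triplicate results for one parameter (mg/L).
triplicate = [4.1, 4.6, 5.3]
cv = coefficient_of_variation(triplicate)
print(f"CV = {cv:.1f}%")  # CV = 12.9%
if cv > 18:  # rule-of-thumb tolerance for triplicates
    print("Data for this parameter should be viewed with caution.")
```

Here the triplicates pass the 18% tolerance; had the CV exceeded it, all data for that parameter would be flagged as questionable.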
DQOs and MQOs are not always necessary. In some sampling programs, particularly those geared towards awareness, you may not need to define these objectives.
Data collection and review
The QA/QC program should also contain procedures and protocols that pertain to data collection and review. Protocols should be specified with respect to instrument calibration and instrument malfunction.
Example: QC programs involve implementing backup plans when necessary.
There is usually more than one way to complete a task, and when necessary the sampling design should allow you to improvise and adapt plans when circumstances change.
For example, if the method of measuring stream flow could not be completed because of equipment malfunction, another method could be substituted.
One possibility could be measuring the velocity of the flow by timing the movement of an orange between two points. Something like an orange is preferred to a stick because an orange floats almost fully submerged and so is influenced less by the faster surface flow.
As long as the backup plan uses an accepted methodology, it's OK to improvise, but always make a note in your field book that alternate data collection methods were used. The other takeaway here is to be prepared: in addition to keeping your equipment in good operating order, carry spare batteries, bottles, preservative, and tools on sampling trips.
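The improvised float method above reduces to a simple distance-over-time calculation. A minimal sketch (the reach length, travel time, and the 0.85 surface-to-mean correction factor are illustrative assumptions; the appropriate factor depends on channel depth and roughness):

```python
def float_velocity(distance_m, travel_time_s):
    """Surface velocity (m/s) of a float timed over a measured reach."""
    return distance_m / travel_time_s

# Hypothetical example: an orange travels a 10 m reach in 25 s.
surface_v = float_velocity(10.0, 25.0)

# Surface velocity overestimates the mean channel velocity, so a
# correction factor (commonly around 0.85) is often applied.
mean_v = surface_v * 0.85
print(f"Surface: {surface_v:.2f} m/s, estimated mean: {mean_v:.2f} m/s")
# Surface: 0.40 m/s, estimated mean: 0.34 m/s
```

Repeating the timing over several runs and averaging improves the estimate, and the alternate method should be noted in the field book as described above.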
Ongoing data review not only ensures the work is on track but allows for early identification of suspect results or analysis issues.