Limit of detection

Probably much to the chagrin of practitioners, the official definition of limit of detection is not really simple:

Taking the detection decision at the critical level (LC) leads to a risk α of false detects. Given this critical level, the limit of detection (LD) is construed as the level that will lead to false non-detects with probability β.

Common values for α and β are 1% and 5%. The proper values are of course problem-dependent.

This definition may lead to surprises. It is, for example, quite possible to detect the analyte when its actual level is below LD, since the result should be compared with LC, not LD.

Admittedly, this definition is more involved than a standard hypothesis test, which considers the distribution of potential results under the null hypothesis only; here a second distribution, located at the LD level, comes into play as well.
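
As a minimal numerical sketch of this definition (assuming a homoscedastic Gaussian net signal with known standard deviation, an idealization chosen only to keep the arithmetic transparent; all numbers are illustrative), the two limits can be computed as follows:

    # Sketch: critical level (LC) and limit of detection (LD) for a
    # homoscedastic Gaussian net signal with known standard deviation.
    # The values of alpha, beta and sigma are illustrative assumptions.
    from scipy.stats import norm

    alpha = 0.01   # risk of a false detect (false positive)
    beta = 0.05    # risk of a false non-detect
    sigma = 1.0    # standard deviation of the net signal

    # LC: decision threshold; a true blank exceeds it with probability alpha
    LC = norm.ppf(1 - alpha) * sigma

    # LD: level whose signal falls below LC with probability beta
    LD = LC + norm.ppf(1 - beta) * sigma

    print(f"LC = {LC:.2f}, LD = {LD:.2f}")   # roughly 2.33 and 3.97

Note that LD lies well above LC: the detection decision is taken at LC, while LD merely characterizes the level that is detected reliably.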


Figure LOD 1: The detection decision takes place at the critical level (LC), not at the limit of detection (LD)!

The purpose of this page is to review the progress made concerning limit of detection estimation in the presence of varying spectral (i.e. direct overlap) and non-spectral interferences (e.g. matrix effects). In other words, the signal data are non-selective for the analyte of interest and the blank signal varies. These conditions can often be avoided by specific sample pretreatment and/or sufficiently expensive instrumentation. The straightforward, theoretically sound and cost-effective chemometric alternative is to use suitable calibration models.

This page is organized as follows:

  • Setting the stage
  • Univariate data: official literature
  • Generalization to multivariate data and beyond
  • Prediction uncertainty-based approach
  • References & further information

Setting the stage

The limit of detection is an analytical figure of merit that, owing to the complex statistics involved, deserves a separate treatment. It is of vital importance in trace analysis. One expects that the ever-increasing concern with respect to food safety, the environment, 'clean' sports, etc. will continue to stimulate efforts to characterize sophisticated instruments with a realistic estimate of their detection capability.

A good introduction to some relevant concepts is provided by the following tutorial:

  • V. Thomsen, D. Schatzlein and D. Mercuro
    Limits of detection in spectroscopy
    Spectroscopy, 18 (12) (2003) 112-114
    Download from the Spectroscopy Online site (PDF, 129 kB)


The following comments seem to be in order:

  • With a limit of detection one intends to characterize the capability of a method or instrument to detect the analyte of interest. Ensuring a useful characterization therefore implies preselecting a sufficiently small value for both α and β. It is important to note that the critical level (also known as the decision limit) is often mistaken for the limit of detection. However, the two limits coincide only when 50% false non-detects are allowed for. Clearly, such a high value for β is hardly useful in practice: it makes no sense to speak of detection capability if the analyte is missed in 50% of the cases!
  • A related misconception is that it is impossible to detect the analyte when the actual level is below the limit of detection. However, the illustration above clarifies that the analyte will be declared detected (by definition) whenever the experimental result exceeds the critical level. Of course, the probability of a false non-detect (β) increases with decreasing (actual) analyte level, but the risk of a false positive (α) remains controlled at the preselected value because the decision is still taken at the critical level.
  • Although the subject is firmly rooted in statistics, one may still find incorrect methodology being applied in practice. See an example in:

    • N.M. Faber and R. Boqué
      On the calculation of decision limits in doping control
      Accreditation and Quality Assurance, 11 (2006) 536-538
      Download (PDF, 54 kB; the original publication is available at www.springerlink.com)


  • One often encounters the conveniently simple recipe

    • limit of detection = blank value + k × standard deviation

    However, underlying this expression is the so-called homoscedastic assumption, i.e. the uncertainty in the experimental result does not depend on the actual analyte level. This assumption is usually violated. Moreover, when this value also doubles as the decision threshold, it is really a critical level, and the 50% false non-detect problem noted above applies; see the sketch after this list.
  • It is important to note that the opportunities for realistically assessing the detrimental effect of varying interferences markedly increase when moving from univariate to multivariate data and beyond. This is consistent with the superior outlier detection capability of multivariate calibration methods, which is even surpassed by some multiway methods.
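
A small simulation may make the first comment above concrete (purely illustrative: homoscedastic Gaussian noise and made-up numbers). If blank value + 3 × standard deviation is used both as the decision threshold and as the claimed limit of detection, roughly half of the samples that truly sit at that level are missed:

    # Sketch: using "blank + k*s" as both decision threshold and claimed LOD
    # implies ~50% false non-detects at that level (illustrative numbers).
    import numpy as np

    rng = np.random.default_rng(0)
    blank, s, k = 10.0, 1.0, 3.0
    threshold = blank + k * s      # doubles as decision threshold and "LOD"

    true_level = threshold         # analyte truly present at the claimed LOD
    signals = rng.normal(true_level, s, size=100_000)
    false_non_detect_rate = np.mean(signals < threshold)

    print(f"false non-detect rate at the claimed LOD: {false_non_detect_rate:.1%}")
    # prints a value close to 50%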


Univariate data: official literature



A major development with respect to methodology has been the harmonization of concepts by the International Organization for Standardization (ISO) and the International Union of Pure and Applied Chemistry (IUPAC), see:

  • L.A. Currie
    Detection: International update, and some emerging dilemmas involving calibration, the blank, and multiple detection decisions
    Chemometrics and Intelligent Laboratory Systems, 37 (1997) 151-181
    Download (PDF, 2,246 kB; contribution of the National Institute of Standards and Technology; not subject to copyright)


On page 152, Lloyd Currie makes the following intriguing statement - with our italics:

The meaning of 'detection limits' is perhaps clear to all, in a qualitative sense. That is, the detection limit is commonly accepted as the smallest amount or concentration of a particular substance that can be reliably detected in a given type of sample or medium by a specific measurement process. Within such a general definition, however, lurk many pitfalls in terminology, understanding, and formulation, that have led to several decades of miscommunication among scientists and between scientists and the public.

The subtleties of calibration-based limit of detection estimation are perhaps best illustrated using the univariate procedure developed in:

  • A. Hubaux and G. Vos
    Decision and detection limits for linear calibration curves
    Analytical Chemistry, 42 (1970) 849-855


Graphically, the Hubaux-Vos procedure for univariate, fully selective signal data looks like:


Figure LOD 2: Relationship between content and signal for the blank (0 and B), at the critical level (LC and SC) and at the limit of detection (LD and SD). True content values of 0 and LD give rise to distributions of observed signal that overlap to some extent. The detection decision is taken at the SC level, i.e. in the signal domain. Owing to uncertainties in the signal of the test sample and the estimated model, a (true) zero content can give rise to a false positive declaration with probability α. Likewise, with probability β one encounters a false non-detect when the analyte is present at the LD level. The requirement that a particular substance can be reliably detected (see quote from Currie's paper above) translates to taking sufficiently small values for α and β. The current plot is obtained using the rather common (but arbitrary) value of 5% for both error rates.


The main point to be taken from this figure is that two independent sources of uncertainty need to be considered when estimating the limit of detection for a calibration model, namely the uncertainty in the model itself and the uncertainty in the signal for the test sample. To quantify the uncertainty in the model, one has to account for the uncertainty in the signals and concentrations associated with the calibration set.

Clearly, this reasoning does not depend on the complexity of the signal (univariate, multivariate, multiway). Neither does it depend on the degree of selectivity.
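
For readers who prefer code to figures, the following sketch mirrors the Hubaux-Vos construction of Figure LOD 2 for an ordinary least-squares calibration line. The calibration data are made up, α = β = 5% as in the figure, and LD is found with a simple fixed-point iteration; it is meant to illustrate the two uncertainty sources, not to serve as a reference implementation:

    # Sketch of the Hubaux-Vos construction for a univariate linear
    # calibration (cf. Figure LOD 2); calibration data are made up.
    import numpy as np
    from scipy.stats import t as t_dist

    alpha = beta = 0.05
    x = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])        # concentrations
    y = np.array([0.12, 2.05, 3.96, 6.11, 7.92, 10.03])  # measured signals

    n = len(x)
    b, a = np.polyfit(x, y, 1)                 # slope, intercept
    s_res = np.sqrt(np.sum((y - (a + b * x))**2) / (n - 2))
    Sxx = np.sum((x - x.mean())**2)

    def s_pred(x0):
        # std. dev. of a single future signal at concentration x0:
        # test-sample noise (the 1) plus calibration-model uncertainty
        return s_res * np.sqrt(1 + 1/n + (x0 - x.mean())**2 / Sxx)

    t_a = t_dist.ppf(1 - alpha, n - 2)
    t_b = t_dist.ppf(1 - beta, n - 2)

    S_C = a + t_a * s_pred(0.0)                # critical level (signal domain)

    # L_D: concentration whose signal distribution leaves only beta below S_C
    L_D = (S_C - a + t_b * s_pred(0.0)) / b
    for _ in range(20):
        L_D = (S_C - a + t_b * s_pred(L_D)) / b

    print(f"S_C = {S_C:.3f} (signal), L_D = {L_D:.3f} (concentration)")

Noisier calibration data or fewer calibration points inflate s_pred and hence push both SC and LD upwards; that is precisely the model-uncertainty contribution discussed above.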


Generalization to multivariate data and beyond



Currie's document is fairly complete with respect to univariate calibration, but it leaves a few essential aspects of multivariate (and multiway) methods undiscussed, in particular how to deal with non-selective signal data, i.e. spectral as well as non-spectral interferences. An early discussion of the need for truly multivariate limit of detection estimators can be found in:

  • F.C. Garner and G.L. Robertson
    Evaluation of detection limit estimators
    Chemometrics and Intelligent Laboratory Systems, 3 (1988) 53-59


Garner and Robertson note that

There is currently no generally accepted multivariate model of instrumental signals incorporating detection limit estimators but there are no major reasons why such models cannot now be developed.

They continue their discussion of multivariate models with some suggestions and further state that

It is highly recommended that multivariate models and estimators be developed and used. Until this is done, decision and detection limits for multiple-signal instruments may be inappropriately estimated.

The progress made in the multivariate area up to 1996 is reviewed in:

  • R. Boqué and F.X. Rius
    Multivariate detection limit estimators
    Chemometrics and Intelligent Laboratory Systems, 32 (1996) 11-23


An excellent, more recent general review is:

  • H. van der Voet
    Detection Limits
    Encyclopedia of Environmetrics, Vol. 1, pp. 504-515, Wiley, Chichester (2002)


Finally, a thorough update covering the literature up to 2005 is:

  • A. Olivieri, N.M. Faber, J. Ferré, R. Boqué, J.H. Kalivas and H. Mark
    Guidelines for calibration in analytical chemistry
    Part 3. Uncertainty estimation and figures of merit for multivariate calibration
    Pure & Applied Chemistry, 78 (2006) 633-661
    Download (PDF, 645 kB; © IUPAC 2006)


Prediction uncertainty-based approach



A rather straightforward approach amounts to inserting an expression for multivariate sample-specific prediction uncertainty into the general defining equation for limits of detection given in Currie's document. This approach is comparable to the Hubaux-Vos procedure exemplified in Figure LOD 2. Clearly, the true critical level depends on the number and level of interferences - the matrix. Unlike other multivariate proposals, this sample-specific approach allows one to take a varying matrix into account. As a result, a more realistic assessment of detection capability is obtained.
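
In schematic form (a simplified rendering in our own notation, not the exact expressions from the papers cited below), the idea reads:

    L_C = t_{1-\alpha,\nu} \, \hat{\sigma}(0), \qquad
    L_D = L_C + t_{1-\beta,\nu} \, \hat{\sigma}(L_D)
        \approx ( t_{1-\alpha,\nu} + t_{1-\beta,\nu} ) \, \hat{\sigma}(0)

where \hat{\sigma}(c) is the sample-specific standard error of prediction delivered by the multivariate calibration model at level c, \nu the associated degrees of freedom, and the final approximation holds when the prediction uncertainty changes little between the blank and the detection limit. Because \hat{\sigma} depends on the full measured signal, and hence on the interferences present in the test sample, both limits vary with the matrix.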

The currently advocated approach builds on the fundamental relationship between limits of detection and multivariate prediction intervals. We have derived the required expressions for several multivariate calibration methods. The utility of one of them has been tested for a near-infrared (NIR) calibration using principal component regression (PCR) in:

  • R. Boqué, M.S. Larrechi and F.X. Rius
    Multivariate detection limits with fixed probabilities of error
    Chemometrics and Intelligent Laboratory Systems, 45 (1999) 397-408


The proposal has been validated by comparing theoretically predicted and experimentally observed error rates:

Table LOD 1: Theoretically predicted versus actually observed β-values at different α-probabilities.



Given the relatively small number of test samples (26), the agreement between predicted and observed error rates is as good as one might expect even for a univariate method. The only drawback might be that the intuitive, graphical interpretation afforded by, for example, the (univariate) Hubaux-Vos procedure is lacking.


References & further information



Open a list of references. In recent years, one observes a gradual increase in the number of studies dealing with multivariate (and multiway) limit of detection estimation.

For further information, please contact Ricard Boqué.