Authored by: Christophe Champod, Paul Chamberlain

Handbook of Forensic Science

Print publication date:  July  2009
Online publication date:  January  2013

Print ISBN: 9781843923121
eBook ISBN: 9781843927327
Adobe ISBN: 9781134028634




Christophe Champod received his MSc and PhD (summa cum laude) both in Forensic Science, from the University of Lausanne, in 1990 and 1995 respectively. He remained in academia, eventually holding the position of assistant professor in forensic science. From 1999 to 2003, he led the Interpretation Research Group of the Forensic Science Service (UK), before taking a professorship at the School of Criminal Sciences (ESC)/Institute of Forensic Science (IPS) of the University of Lausanne. He is in charge of education and research on identification methods (detection and identification). He is a member of the International Association for Identification and in 2004 was elected a member of the FBI-sponsored SWGFAST. His research is devoted to the statistical evaluation of forensic identification techniques. The value of fingerprint evidence is at the core of his interests.



Introduction and terminology

The term friction ridge skin refers to a specific type of skin, the surface of which has ridges and furrows, formed by folds in the skin surface. It can be observed on the distal phalanges of the fingers and thumbs, and also on the palms, toes and soles of the feet. We call these surfaces the volar surfaces. It is believed that the biological function of these surfaces is to increase the grip and the mechanical sensitivity of the skin to pressure, movement and vibration.

The general flow of the ridges forms various patterns. Three main categories can be defined: arches, loops and whorls. These patterns are formed by the conjunction of ridge systems articulated around key focal points called core(s) and delta(s).

Closer inspection reveals that the ridges may break (bifurcate), end, or may be limited in length, sometimes forming little more than a dot. These events are termed minutiae in this chapter, although various other terms such as points, characteristics, or Galton points are also used. Minutiae can also form combined arrangements such as lakes (formed by two opposing bifurcations), islands (formed by two connected ridge endings) or spurs (formed from a bifurcation and a ridge ending). The terminology regarding these latter formations lacks standardisation. The ridges bear pores indented in their summits, whose function is to allow the secretion of sweat from eccrine glands embedded in the dermis. The pores themselves vary in shape and relative position along the ridge. The edges of the ridges are also irregular, often with distinctive shapes. The features described above are often categorised into three levels: Level 1 relating to the general ridge flow, Level 2 to minutiae (or larger deviations of the ridge path) and Level 3 to pores, ridge edges or other details (Figure 3.1). In addition there are other features that may be observed. Creases (some of which are permanent, particularly when associated with flexures) may have distinctive shapes. Trauma to the skin, such as warts, blisters and scars, may also be seen.


Figure 3.1   Illustration of the three levels of details

The term fingerprint refers to an impression left by a friction ridge skin area of a finger. Likewise, a palm print is an impression from the palm. By convention a print is then a reference from a known sample taken with cooperation and under controlled conditions either using an inking process or an optical device (essentially a digital scanner commonly referred to as livescan) (Maltoni et al. 2003). Because of their pristine acquisition conditions, prints are a near perfect representation of the friction ridge skin.

The friction ridge skin area can also leave a representation of its characteristics when it comes into direct contact with any surface. We refer to these impressions as marks. They are made of sweat residue, a complex mixture of compounds originating from the eccrine and sebaceous glands. The friction ridge skin acts here as a ‘stamp’ contaminated with sweat residue. Marks are left adventitiously when one touches an object without gloves (or steps on a surface without footwear). Owing to the uncontrolled nature of the deposition, marks are often of lower and more variable quality than prints. If the term mark is qualified, as in finger marks or palm marks, it means that the corresponding position on the friction ridge skin has been established; otherwise the term mark alone will refer to an impression of any friction ridge skin area. The reader may also have come across the term latent mark, print or impression. The term latent has often been used to designate the large proportion of marks that cannot be seen without the application of detection techniques. The term area of friction ridge detail is also sometimes encountered.

Marks may not be complete; sections of the ridge flow may not be reproduced. The finger may be placed on a surface for a very short time, thus reducing the transfer of residue; it may be in contact for a longer time, or placed on the same surface multiple times. This gives rise to visible differences: marks may be fragmentary, with broken ridge flow; distorted by pressure, which may push residue into the furrows and change the appearance of the mark; or overlaid (superimposed).

The term quality is an assessment of accuracy of the representation of the impression (either a mark or a print) compared with the actual friction ridge skin surface. Prints tend to be of high quality when taken in appropriate conditions, whereas marks may vary due to the uncontrolled circumstances of their deposition (Figure 3.2).


Figure 3.2   Illustration of the range of quality obtained for the same finger on prints (depending on the acquisition techniques) and on marks (depending on the deposition)

In the UK fingerprint practitioners engaged in comparison are generally called fingerprint experts. Here we will use the term fingerprint examiner to mean a practitioner trained to competency.

The basis of all fingerprint examination is that configurations of fingerprint features have a high level of specificity or ‘uniqueness’ (the term often used); that configurations of ridges in sequence, minutiae, pores or ridge edge details do not change with time; and that such configurations will reform more or less exactly (except where there is heavy damage as the result of trauma). Because of these attributes, the comparison of friction ridge skin impressions helps to address issues of the identity of individuals.

Short historical perspective focused on the UK

Fingerprints are used in two related but distinct ways within the criminal justice system. First, as a biometric, that is, a method for the identification of arrested individuals, for example to establish whether they have come to police notice previously. Second, to attribute finger or palm marks recovered from a crime scene to an individual, in order to provide information for investigators.

The systematic use of fingerprints for identification purposes in the UK can be traced to the turn of the twentieth century. At that time the anthropometric system (developed in 1881 by Bertillon in France), using specific measurements of the human body (length of index finger, length of arm, circumference of head, etc.), was the prevalent biometric method of personal identification, alongside the developing photographic technologies. Its principles were the following: (1) bone lengths remain constant during adult life, but (2) vary from individual to individual, and (3) they can be measured with reasonable precision. Bertillon proposed the use of a description of the colour of the iris combined with 11 precise measurements. A classification method was also developed in order to structure these distinctive characteristics. The system addressed the increasing issues of identification with great success, although its limitations became evident as the size of databases increased. The limitations of the technique were: (1) uneven distributions of the measures in the population; (2) the correlation between features; (3) inter-operator variations due to lack of training, the quality of the (very expensive) instrumentation or non-cooperative subjects; and (4) the need for the body itself and the absence of anthropometric ‘traces’ left at crime scenes.

Francis Galton in England, and specific government committees (Troup and Belper), were tasked to assess the merits of Bertillon’s method and prepare recommendations for the UK government. The work of British scientists in the development of fingerprinting, namely H. Faulds, W. Herschel, F. Galton and E. Henry, is well covered in recent monographs (Beavan 2001; Cole 2001; Sengoopta 2003). William Herschel, a colonial administrator in India, proposed the use of fingerprints to identify individuals, especially prisoners, following his use of fingerprints to identify employees. His work over some 30 years also supported the concept of permanence. At the same time, Henry Faulds, who studied at Anderson’s College in Glasgow (now the University of Strathclyde) and was then a medical missionary in Japan, proposed in 1880 the use of fingerprints for investigative purposes, as finger marks could be detected at crime scenes. As a natural addition, Faulds designed a classification system based on ridge flow pattern and core characteristics. The main forensic operational contribution came from the work of Francis Galton. In 1892 he presented the basic axioms of fingerprinting: the notion of permanence (based on Herschel’s work and data) and uniqueness. He also suggested the possibility of reliably classifying fingerprint patterns into three basic types. Galton’s first classification method was judged unsuitable for handling large collections of individuals. The method was then greatly improved by Edward Henry (helped by his Indian colleagues). The resulting Galton-Henry classification allowed the storage and retrieval of a particular set of fingerprints from extremely large databases. This development was instrumental in persuading the Troup committee to adopt fingerprints as the main method of personal identification in the UK.

The first UK cases in which marks left at crime scenes were used to identify their donors are well documented: Jackson (1902) and the Stratton brothers (1905).

Due to these early successes and combined with a strong belief in the new modern virtues of science in the early twentieth century, the use of fingerprints as a means of personal identification became established and essentially remained unchallenged until fairly recently.

Until the 1990s New Scotland Yard maintained the national fingerprint database, whilst a similar collection was maintained for Scotland by SCRO (Scottish Criminal Record Office). Each arrestee would be fingerprinted, generally when charged with an offence although additional powers to fingerprint in respect of particular investigations existed. Outside of the London Metropolitan Police area two sets of fingerprints were taken, one for the national collection and one for the local police fingerprint department. The national collections formed the heart of the criminal record offices. Each set received was coded according to the Henry classification and searched to determine if the individual had previously come to notice. Sets were taken when the individuals were held in prisons and retained to prove previous convictions.

In the 1970s the New Scotland Yard collection was moved to videotape and whilst the Henry classification was still used as the filing method, speed of access increased. In the 1980s AFIS (i.e. Automatic Fingerprint Identification Systems) technologies were developed. The book by Komarinski serves as a good introduction to forensic AFIS systems (Komarinski 2005), the development in the UK is briefly outlined by Blain (2002). New Scotland Yard introduced a system and several provincial forces purchased stand-alone systems. In the early 1990s a consortium of forces purchased a networked system that allowed a degree of national searching. This was followed in 1996 by the roll-out of NAFIS (National Automated Fingerprint Identification System), the first truly national system. With the advent of NAFIS individual police forces undertook shared maintenance of the national database. Recent years have seen the increased use of new technology with the development of electronic scanning of fingerprints (using livescan devices), first in custody suites and latterly using mobile technologies.

The utilisation of these collections for investigation required the ability to search a mark from a crime scene. Originally the Henry coding system was refined to allow such searching with reasonable accuracy although this was highly labour intensive. In the 1930s the Battley single finger system further extended this ability by introducing a coding system for single prints and marks. The advent of videotape systems, albeit restricted to New Scotland Yard, extended the searching ability. It was however the introduction of the AFIS systems and in particular NAFIS that provided the first national search facility. NAFIS has moved now to IDENT1, offering in its extension both finger and palm print search capabilities (Suman and Whitaker 2005).

In parallel with the development of fingerprint comparison techniques, there has been an increase in the range and complexity of techniques to detect and recover crime scene marks. The original powdering technique remains widely used in view of its reasonable sensitivity and ease of use. But other chemical reagents have been introduced extending the ability to recover marks as well as the range of surfaces that can be examined. Both selectivity and sensitivity of the techniques have been increased by combining them in sequence, allowing targeting of specific compounds of the sweat residue. Researchers in the UK have pioneered innovation in this area. The review paper by Goode and Morris (1983) can be considered to be a landmark publication and subsequently the research programmes of the Home Office Scientific Development Branch, the Metropolitan Police Forensic Science Laboratory and the Forensic Science Service have contributed significantly to the ongoing development of techniques.

Use of fingerprints in investigations

Three fingerprint processes can be identified:

Print to prints:

The comparison of the fingerprints from an unknown individual (living or dead) against a database of known prints (or declared as such) from individuals, to prove identity. The comparison can be undertaken from the unknown material to the known or vice versa.

Mark to prints (or prints to marks):

The comparison of a mark left in circumstances of interest to an investigation with a reference collection of prints from known (or declared as such) individuals. The converse is also possible, meaning the search of the features from a new set of ten-prints (ten-prints is a term used to refer to a control set of finger and palm prints) against the database of (unresolved) marks.

Mark to marks:

The comparison of marks recovered from different offences to establish a connection.

The first two uses are by far the most prevalent in operational practice and we will concentrate on them hereinafter.

Print to prints

Currently, in England and Wales, fingerprints are taken on the arrest of an individual, retained permanently in a national database called IDENT1, and accessed using AFIS technology. Fingerprinting on arrest is a relatively recent change included in the Criminal Justice Act 2005. It replaced the system, still in force in Scotland, whereby fingerprints were taken only at charge or caution and subsequently destroyed where there were no proceedings or the individual was acquitted. Current AFIS technology will search an inquiry print against the database and then present a selection of fingerprints for comparison. Examiners have the opportunity to review all fingers and palms should they wish. The accuracy of the systems when searching fingerprints taken under controlled conditions is such that the single respondent produced is generally found to correspond. AFIS accuracies are now such that, when comparing prints, the concept of ‘lights out’ identification is being actively investigated: the computer system would be allowed to make the ‘call’ without human intervention. This would allow a round-the-clock accurate identification system which, when coupled to various mobile devices, would clearly benefit police and other agencies.

Ten-print forms are input to the system either through the capture of inked prints using scanners or, more commonly, through direct capture of the fingers and palms by livescan devices. Essentially a specialised scanner, a livescan device captures digital images that can be rapidly searched by direct transmission to the AFIS. However, this kind of apparatus still requires a degree of operator training and experience to obtain a high quality set of ten-prints: it is often subject to the loss of certain areas of the finger and palm, particularly the tips of the fingers and the centre of the palm. The latter has little impact on the ten-print search facility but may be important in the processing of crime scene marks. Whilst the accuracy of the identification and the maintenance of the database are of the highest standard, the identification process should not be considered to be without error. Most errors are not related to the fingerprint comparison, however, but to such activities as poorly taken fingerprints, fingerprints being associated with the wrong individual, duplication of fingers, etc. Errors such as these are rare but do occur.

Mark to prints (or prints to marks)

Marks are searched either manually against prints (when, for example, the investigation has reduced the scope to a limited number of individuals), or in conjunction with the AFIS system. Over recent years the use of AFIS systems has become widespread and to a certain extent the traditional practice of police agencies nominating potential suspects for a comparison has diminished. Nonetheless such work does still take place. Within the UK the traditional approach of using life-size photographs and a handheld magnifier is common. Enlargements are used occasionally. Some computer-based on-screen comparison systems do exist but are not in regular use at this time. The accuracy of AFIS when searching marks against prints is lower than when searching prints against prints, as a result of the lower quality of the marks: encoding programs are not yet sufficiently sophisticated to detect automatically the features of lower quality marks, as they can for good quality prints. Where a hit has been achieved using AFIS, it is still common practice for the mark and print to be compared by an examiner using a handheld magnifier. AFIS systems do not in fact ‘identify’ the mark but provide a list of potential candidates. Various algorithms exist, but all AFIS systems work on broadly similar lines: they compare a dataset extracted from the enquiry mark with templates obtained from the ten-prints in the database, and generate a score based on the similarity of the datasets. As system accuracy improves, the likelihood of the actual match appearing in first place on the candidate list increases. It is a common misconception that AFIS systems have replaced the traditional examination and the examiner. Although it is possible to conceive of a fully automatic comparison system, such a development is not yet feasible.
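The score-and-rank idea described above can be sketched in miniature. The representation below (minutiae as x, y coordinates plus a ridge angle, greedily paired one-to-one within distance and angle tolerances) is a hypothetical simplification for illustration only; operational AFIS matchers use far richer, proprietary templates and algorithms, and the tolerance values here are arbitrary.

```python
import math

def match_score(mark_minutiae, print_minutiae,
                dist_tol=10.0, angle_tol=math.radians(20)):
    """Greedy one-to-one pairing of minutiae; returns the fraction of
    mark minutiae matched in the print (a similarity score in [0, 1])."""
    if not mark_minutiae:
        return 0.0
    unused = list(print_minutiae)
    matched = 0
    for (x, y, theta) in mark_minutiae:
        best = None
        for cand in unused:
            cx, cy, ctheta = cand
            d = math.hypot(x - cx, y - cy)
            # Angular difference wrapped into [0, pi].
            dtheta = abs((theta - ctheta + math.pi) % (2 * math.pi) - math.pi)
            if d <= dist_tol and dtheta <= angle_tol:
                if best is None or d < best[0]:
                    best = (d, cand)
        if best is not None:
            matched += 1
            unused.remove(best[1])  # enforce one-to-one pairing
    return matched / len(mark_minutiae)

def candidate_list(mark, database, top_n=5):
    """Rank database entries (id -> minutiae list) by score, best first.
    This mimics the candidate list an AFIS presents to the examiner."""
    scores = sorted(((match_score(mark, m), pid)
                     for pid, m in database.items()), reverse=True)
    return [(pid, s) for s, pid in scores[:top_n]]
```

As in a real system, the output is a ranked candidate list, not an identification: the examiner still compares the mark against the top-ranked prints.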

The use of crime scene marks requires the detection of the marks on items and their recovery by a suitable method or sequence of techniques that will allow the search and then comparison.

Finger marks detection techniques

Crime scene marks may be patent, i.e. visible to the naked eye, or latent, formed of sweat residue or other substances that are not readily visible. Marks may also be formed in such substances as dust, paint or blood. Latent marks are the most often encountered in criminal investigation, in particular in planned crimes where the offender has taken some precautions against detection. Considerable investment has been made in a range of techniques to visualise these marks. Agencies such as the HOSDB (Home Office Scientific Development Branch), the FSS (Forensic Science Service) and a number of universities have research programmes to develop new techniques and improve those currently used. The HOSDB forms the main advisory body to the UK police and is influential in determining which techniques are utilised. Its long-standing research programmes have provided significant data on the efficiency of various techniques as well as on their safety. Its main publication, the Manual of Fingerprint Development Techniques, can be considered the most influential publication in this area (Bowman 2004).

Latent marks are generally formed from sweat left on the surface of the ridges. The deposited residue is composed mainly of water, proteins, amino acids, fatty acids, inorganic salts, cholesterol and squalene. The amount of residue left by an individual depends on a large number of variables, such as the conditions in which the friction ridge surface has been kept, and the diet, age, sex and physical condition of the donor. This variation implies that some detection techniques may prove successful for ‘good’ donors only. The residue also degrades with time since deposition, so the ability to detect marks may vary as a function of time for certain detection techniques.

The strategy adopted for the detection of marks is to choose reagents based on:

  • the nature of the substrate (porous, such as paper; semi-porous; or non-porous, such as glass and plastic surfaces);
  • the case circumstances (fire scene, outdoor wetted object);
  • the ability to transport the items to the laboratory versus the ability to deploy a technique directly on site;
  • the potential detrimental effect of the technique on the item.

Judgements are made on the effectiveness of the technique, although practical issues such as availability, examination time and safety have a significant influence. The use of many techniques in sequence to maximise the likelihood of recovery is not routine and is often reserved for serious crimes.
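This selection logic can be illustrated with a toy lookup table. The sequences below are assembled only from techniques named in this chapter and are purely illustrative; they do not reproduce the HOSDB manual's actual decision charts, and the `lab_only` set is an assumption for the sake of the example.

```python
# Hypothetical reagent-selection table keyed by (substrate, condition).
# Illustrative only -- not the HOSDB decision charts.
DETECTION_SEQUENCES = {
    ("porous", "dry"): ["optical examination", "DFO", "ninhydrin",
                        "physical developer"],
    ("porous", "wetted"): ["optical examination", "physical developer"],
    ("non-porous", "dry"): ["optical examination", "cyanoacrylate fuming",
                            "dye staining", "powder"],
    ("non-porous", "wetted"): ["optical examination",
                               "vacuum metal deposition"],
}

# Techniques assumed (for this sketch) to require the laboratory.
LAB_ONLY = {"vacuum metal deposition", "physical developer",
            "cyanoacrylate fuming", "dye staining", "DFO"}

def suggest_sequence(substrate, condition="dry", transportable=True):
    """Return a candidate ordering of techniques for a given exhibit.
    Falls back to optical examination and powder for unknown substrates;
    laboratory-only techniques are dropped when the item cannot be moved."""
    sequence = DETECTION_SEQUENCES.get((substrate, condition),
                                       ["optical examination", "powder"])
    if not transportable:
        sequence = [t for t in sequence if t not in LAB_ONLY]
    return sequence
```

Note how the non-destructive optical examination always comes first, matching the sequencing principle described in the text.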

Marks left on a porous surface have shown persistence for decades, the residue being absorbed and fixed by the surface. By contrast, marks on a non-porous surface are prone to abrasion; their persistence is therefore significantly influenced by the subsequent handling of the item.

Detection techniques may be broadly categorised into optical, physical or chemical, and are covered in detail elsewhere (Champod et al. 2004b). Advances in detection techniques are reviewed every three years for the Interpol Forensic Science Symposium (Champod et al. 2004a; Becue et al. 2007).

Optical techniques take advantage of the interaction between light and the surface to produce a suitable contrast, often without any adverse effect on the marks themselves, thus allowing the use of follow-up detection techniques. The surface can be searched using different wavelengths (from ultraviolet to infrared), different illumination conditions (diffuse/reflective mode, dark field examination, etc.) or in luminescence conditions. These techniques are the cornerstone of a successful detection process of marks, applied either at the outset of the examination or after each physical or chemical technique. Many marks are relatively easy to photograph; however, others require sound photographic knowledge to obtain a usable image. In any case, photographic and written documentation should indicate the position of the detected mark on the object, its relationship to the other marks, as well as a close-up view of the mark itself with sufficient resolution to optimise its quality (as defined previously). Each recorded image should carry a scale (using a ruler on the image itself, for example).

The oldest and most common form of physical detection is the use of powder. A number of commercial powders are available, differing in material, particle size, particle shape and colour. Powders have a tendency to adhere to the fatty components of the residue. Generally effective only on non-porous surfaces and of limited sensitivity, they have the advantage of simplicity. Marks developed using powders are then transferred onto an appropriate medium (such as a gelatine or adhesive lifter); occasionally they are only photographed. Another physical technique is vacuum metal deposition: metals tend to condense differently on a surface where it bears latent mark residue, resulting in contrast between the mark and the substrate.

A range of chemical reagents is also available. These reagents rely on specific chemical reactions between the reagent and one or more constituents of the residue. Ninhydrin (and its analogues) or DFO react with amino acids to form a visible purple compound or a fluorescent product visible under specific wavelengths of light. These are the methods of choice for porous surfaces. They may be followed by techniques targeted at lipid components (such as the physical developer). Fumes from heated cyanoacrylate (a constituent of superglues) tend to polymerise preferentially on the residue and form a white polymer on the ridges. It is a very efficient method on non-porous surfaces that is often followed by dye staining of the glued impressions to increase their visibility. Marks left in blood or on adhesive surfaces require specific detection techniques to target the heme, proteins or other specific content of the residue (Figure 3.3).


Figure 3.3   Examples of marks detected with various detection techniques

Increasingly, digital imaging systems are being used. These provide a range of tools to capture and optimise images. Some sophisticated systems may use techniques such as FFT (Fast Fourier Transform) filtering to remove backgrounds. A key issue with any digital technology is the ability to significantly alter the image. Any system or process must therefore record and maintain the raw image and subsequent optimised images with an appropriate audit trail (Russ 2001; Reis 2007).
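As a sketch of the FFT idea, the function below suppresses a strong periodic background (such as a woven fabric or printed security pattern) by notching out dominant off-centre peaks in the frequency domain. The `radius` and `threshold` parameters are arbitrary illustrative choices, not values from any operational system, and real mark-enhancement software applies far more careful, audited processing.

```python
import numpy as np

def suppress_periodic_background(image, radius=3, threshold=4.0):
    """Zero sharp off-centre peaks in the 2-D spectrum; such peaks
    typically correspond to a periodic background pattern."""
    f = np.fft.fftshift(np.fft.fft2(np.asarray(image, dtype=float)))
    mag = np.abs(f)
    h, w = mag.shape
    cy, cx = h // 2, w // 2
    # Protect a disc around the low-frequency centre (overall brightness
    # and large-scale structure of the mark).
    yy, xx = np.ogrid[:h, :w]
    centre = (yy - cy) ** 2 + (xx - cx) ** 2 <= (radius * 4) ** 2
    # A sharp periodic pattern stands far above the median spectral energy.
    peaks = (mag > threshold * np.median(mag)) & ~centre
    f[peaks] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))
```

Crucially for the audit-trail point made above, such a filter should be applied to a copy of the raw image, with the original and each processed version retained.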

Method of comparison and associated conclusions

Basic premises

The premises of friction ridge skin identification are twofold:

  1. Friction ridge skin arrangements are extremely discriminating, and reflect the variability between donors (fingerprint examiners would probably use the term ‘unique’, but we feel that this term is often misused). The discriminating power is expressed in the capacity of an examiner to distinguish between arrangements from different sources, and is rooted in the morphogenesis of the friction ridge skin. General flow alone presents variation that allows a fair discrimination, although the number of possible classes for the general pattern is limited. Some patterns are less frequent than others: arches represent roughly 5 per cent of the patterns observed on fingertips, whereas loops account for some 60 per cent. The factors (genetic and epigenetic) influencing the general flow are known (Kücken 2007). The discrimination is drastically increased by the specific arrangements of ridges in sequence. The morphogenesis of ridges is highly epigenetic and leads to arrangements with various lengths (ridges end with either endings or bifurcations) and sequences (Ashbaugh 1999; Wertheim and Maceo 2002). The between-source variability of these arrangements of ridges in sequence is extreme, to the point that no two individuals showing the same arrangements have ever been found. Finally, the shape of the ridge edges, as well as the form and relative position of the pores, is also very discriminative.
  2. Friction ridge skin arrangements are persistent. From about the 25th week of foetal life until decomposition of tissues after death, the arrangement of ridges will remain permanent in all respects (with the exception of size as the result of natural growth). Scars are permanent only when the dermis is damaged by the incident, otherwise the friction ridge skin will regenerate according to the ‘template’ held on the dermis (Maceo 2005).

It is due to this variability and persistence that friction ridge skin has gained the status of a ‘gold standard’ for inferring identity.

Protocol for a mark to print comparison: ACE-V 1

In general most examiners subscribe to the ACE-V methodology or a comparable protocol (Ashbaugh 1999). ACE-V is an acronym that stands for analysis, comparison, evaluation and verification, and implies four distinct stages in the comparison between a mark and a print. In UK practice (and elsewhere), it is common to find little distinction, or clear-cut separation, between its stages.

Analysis requires the examiner to examine the mark without the print in order to assess and locate the friction ridge skin features that may be used for further comparison. The analysis also allows an assessment of the quality of the mark and factors such as distortion, pressure, amount of residue and the nature of the substrate and of the detection technique used. Taking these factors into account will help to set the tolerances (or boundaries) that the examiner will have to allow for considering a potential correspondence. When the quality of the print is poor, a similar analysis will be carried out on the print (without undertaking any comparison work). Tolerances may then also be assigned to the print, when for example a blurred area is observed. It is advised to document the analysis phase by writing contemporaneous case notes, especially when the mark or print is of very limited quality. Another outcome of the analysis stage is a decision from the examiner regarding the capacity of the mark to be compared against known prints (a limited number of forms or using AFIS). In some instances, the analysis stage concludes as to whether or not the mark is expected to be identified should the corresponding source be available.

Comparison means locating the corresponding features from the mark in the print of interest. It takes the form of a side-by-side comparison. The print is either the output of a potential candidate from an AFIS search or a print from a form that showed some initial similarity. All features (from Level 1 to Level 3) should be considered in this phase, proceeding from general features to particulars. The comparison should always entail observation of features in the mark followed by their location in the print. The reverse process, from the print to the mark, should only be undertaken with care (and be documented). It is only when the general pattern is within tolerances that the comparison continues with the ridge arrangements. Instead of focusing on minutiae during the comparison process, it is advisable to compare every ridge and furrow (assessing comparatively their length and sequence) and then the shape of the pores and edges if the quality of the mark allows. The comparison stage is essentially factual and should lead to documentation of the features that have been found in correspondence and the features that have been found dissimilar. Documentation of this crucial stage is sometimes sparse due to time constraints, and it is not common practice to produce any form of chart to record the information (an example is given in Figure 3.4). The absence of documentation may lead to difficulties in disputed cases, and we would advise systematic documentation of the comparison stage.


Figure 3.4   Example of a comparison chart. On the top a close-up of the comparison without annotation (the mark on the left and the print on the right). In the middle, illustration of the agreement in terms of ridges in sequence and level 2 features with an indication of the minutiae found in correspondence. On the bottom, a close-up of some of the level 3 features retained during the comparison process

Evaluation is the fundamental inferential step of the process. It will lead to the formulation of a conclusion. There are three conclusions that are commonly used in the discipline: individualisation (generally referred to as an identification), exclusion and inconclusive (often termed ‘not identified’). An individualisation means that the mark and the print have an identical source to the exclusion of all other potential donors. An exclusion means that the mark and the print(s) do not have the same source. Inconclusive indicates that neither of the previous conclusions (individualisation or exclusion) has been reached. Observed differences in ridge flow (for example the mark being a loop and the print showing a whorl pattern) will enable an examiner to exclude the print from being from the same friction ridge skin area as the mark. In fact, as soon as a difference is observed between the mark and the print that cannot be reconciled in the light of the tolerances defined during the analysis stage, then an exclusion conclusion will be reached. Logically, the absence of any irreconcilable discrepancy is a prerequisite for identification.

In the absence of dissimilarities, the examiner will weigh the corresponding features against the standards for identification (see below). In a nutshell, individualisation will be declared when the examiner observes a level of agreement (across the three levels of legible features) that exceeds the highest level of correspondence he/she has observed, through training and experience, in comparisons involving non-matching entities. An identification is then concluded when the mark shows sufficient quality (clarity) of friction ridges in agreement with the print that the probability of observing such a correspondence, had a print from another source been submitted, is deemed negligible.

For all other cases, the comparison will be deemed ‘inconclusive’, even if there are significant and perhaps highly probative correspondences between the mark and print. Corroborative evidence of this type (i.e. less than ‘certain’) based on friction ridge skin impressions is rarely brought to the attention of the criminal justice system despite its potential to help address the issue of identity. We regret that the profession has adopted such a cautious approach, as it precludes the trier of fact from having the advantage of potentially very strong corroborative evidence (Champod and Evett 2001). This traditional and very conservative approach limits the potential of fingerprint evidence. The situation arises from the misconception that fingerprint evidence must be categoric, and from the unwillingness of the majority of examiners to accept the relevance of probabilistic evaluation. Indeed, most examiners would feel at ease expressing an opinion as to the definitive source (or otherwise) of a mark, but would refrain from providing an informed judgement in terms of the likelihood of common or different sources. Our view is that the development of statistical models (as discussed below) will provide the necessary tools for the development of a spectrum of conclusions, as in other forensic disciplines. The perception that this will in some way ‘water down’ fingerprint evidence by removing its simplistic black-or-white (identification or no identification) approach is, in our opinion, outweighed by the strengthening of the process through the introduction of objectivity and the potential additional evidence.

Verification is the examination of the mark and print by another qualified examiner who, following the Analysis, Comparison and Evaluation (ACE) protocol, independently reviews the conclusion of the first examiner. Most departments use the verification stage as the ultimate quality assurance measure. That importance means the stage needs to be fully documented and that the department needs processes in place to handle dissenting opinions. The weight to be given to this quality mechanism depends directly on the departmental standard operating procedures adopted in cases of failure to verify the original conclusion. In an ideal world the verification stage would be blind, meaning that the second examiner would have no information regarding the examination details and conclusions reached by the first. However, truly independent review is difficult to ensure in small departments or where there are heavy case workloads. Few departments, moreover, carry out regular checks of non-identified marks. Whilst there may well be practical reasons for this in terms of manpower, it may also reflect the way in which the outcomes are valued. A third check of the identification remains rooted in UK practice. The third check is much hailed as a failsafe, but in fact the simple process of adding further quasi-independent checks may not provide the degree of assurance required, particularly where the working environment is not receptive to the possibility of error and is dominated by the assumption that longer service equates to greater skill.

Standards for identification

The standards for identification are essentially the same whether we consider print-to-print or mark-to-print comparisons. The section that follows applies without distinction to both scenarios.

The basis of identification is first empirical, founded on the fact that no two individuals have been found to have the same fingerprints. This is undoubtedly true for complete sets of fingerprints and provides a suitable basis for use as a biometric tool. However, when dealing with marks from crime scenes, which by their very nature are of poor quality, from single fingers and partial, this basis can be questioned.

Fingerprint identification in the UK and around the world is understood to mean that a mark has been attributed to a particular individual to the exclusion of all others, although it is seldom articulated in this way. ‘Others’ often refers to any human in the world, living or dead. In fact such a claim may be unnecessary, as in all but a few scenarios the suspect (or source) may belong to a much smaller, restricted population. It is therefore interesting that examiners have felt the need to make a much greater claim, presumably to increase the perceived evidential value of the identification.

Prior to 2001, UK practice required 16 minutiae in agreement (without discrepancies) between a mark and a print for an ‘identification’. The 16-points standard, although termed a standard, was to all intents and purposes a numerical threshold above which it was considered safe to claim identification. It was adopted in 1924 after a poor-quality reproduction of a photograph, submitted by Alphonse Bertillon, of two prints allegedly from two individuals and charted to show 16 minutiae in agreement, was sent to the head of the Scotland Yard fingerprint department. Having examined the material, Superintendent Collins decided that he could see only ten in agreement and consequently set the standard at 16. The standard was a recommendation applied with flexibility before the 1950s. The number of 16 became a Home Office working agreement in 1953, following a case in which the fingerprint evidence consisted of two fingerprint identifications with respectively 12 and 15 points in agreement. The standard was regularly discussed; however, it remained unchanged by fingerprint practitioners, since no substantive erroneous decisions were apparent. The 16-points standard reinforced the unfounded belief in the accuracy and robustness of fingerprint identification in the UK. It was higher than those applied elsewhere in the world (many countries adopting 12, with reference to the work of Edmond Locard (Locard 1914)). A rather crude probabilistic calculation was used to show that such a 16-point configuration would give a probability smaller than the inverse of the world population, although examiners were advised never to make such an argument in court (Balthazard 1911). Note that the 16-points standard was never given legal force.

During the 1970s some variation was seen in the application of the standard. This may have been a reflection of the 1973 IAI (International Association for Identification) resolution declaring that there was no numerical basis for a fingerprint individualisation. For a period of time, so-called ‘non-provable’ (or partial) identifications were presented in serious cases. These were comparisons where the threshold could not be met but where there were sufficient minutiae for an examiner to have some degree of confidence that the source of the print and the mark were the same. Later guidelines allowed two marks with between 10 and 16 minutiae in agreement to be reported as a full identification, and single marks with 10 minutiae in agreement to be determined as full identifications where the crime was serious and the examiner had ‘significant experience’.

The Home Office undertook a full review of the 16-points rule in England and Wales in the mid 1980s and discovered that the paper by Bertillon mentioned above had been largely misunderstood. The purpose of the original image (and text) was to draw attention to the importance of dissimilarities more than of gross similarities (Champod et al. 1993). The Home Office review resulted in a landmark report by Evett and Williams, which drew attention to certain practices within the fingerprint community associated with the use of the numerical standard (Evett and Williams 1996). The report indicated that there was considerable variation in the number of minutiae seen by examiners in any one mark. The range of variation in the number of minutiae annotated was so large that the concept of a ‘standard’ became clearly questionable. The review concluded that the 16-points standard was not an efficient way of ensuring quality, and that other mechanisms should be explored, such as performance testing of experts, file audits and blind trials.

By the 1990s the 16-point standard had fallen into disuse with many decisions in court accepting identifications without 16 points. In the cases of R v. Thomas McAteer (1993), R v. Craig Eyre and Ian Andrew Reid v. DPP (1996) we find the courts accepting fingerprint evidence with less than 16 minutiae in agreement; in fact in the case of McAteer with just eight. In North America the development of ACE-V as a methodology and the growing use of other elements of the mark such as pore positions, ridge edge detail, creases etc. began to influence the UK.

In 1996 ACPO (the Association of Chief Police Officers) for England and Wales initiated a working party that recommended the abandonment of the 16-points standard in favour of a non-numerical, or holistic, approach, with a fingerprint profession supported by a strong training and quality assurance programme. The 16-points standard was abandoned in England and Wales in July 2001; Scotland adopted the same non-numerical approach in 2007. This meant that fingerprint experts could give their opinions unfettered by any arbitrary numerical threshold. The determination of identification therefore rests solely on the trained examiner’s experience. If he/she determines that the number of corresponding fingerprint features is sufficient (without discrepancies), then an identification will be declared. The ultimate safeguard is a verification process whereby two other examiners ‘independently’ assess the conclusion reached.

This holistic approach is fully in line with the practice in North America, Australia and the Nordic countries, which all adhere to the IAI 1973 resolution, modified slightly in Ne’Urim, Israel (1995) which states: ‘No scientific basis exists for requiring that a predetermined minimum number of friction ridge features must be present in two impressions in order to establish a positive identification.’

In the rest of Europe, the minimum standard is generally 12 (Italy is the exception with 16 to 17 points), although agencies have found mechanisms to bypass the rigid standard; for example where the pattern is clearly visible or there is a skin trauma present or perhaps through the assignment of greater weight to certain features.

From a logical perspective, there is no argument to recommend any predetermined minimum number of features for the following main reasons:

  1. The relative frequency of general flow varies greatly from class to class. Some types of arches would reduce the population of potential donors 10 times more than whorls. A numerical standard system would not make a distinction between general patterns.
  2. Minutiae frequency varies greatly as a function of their type and their position. Hence any system suggesting a fixed addition of points cannot be supported from a statistical perspective. Recent studies of statistical evaluation of partial fingermarks have shown that the discrimination offered by partial fingermarks is very high, even down to configurations of three minutiae. The random match probabilities involved compete with DNA profiling (Neumann et al. 2006; Neumann et al. 2007).
  3. When quality allows, small features such as pore positions and shapes and the topography of the edges of the ridges can add to the identification process. No numerical standard would account for these third-level details.
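The statistical objection in point 2 can be illustrated with a minimal sketch. Under a naive independence assumption (unrealistic, but sufficient to make the point), the weight of a configuration depends on which minutia types it contains, not merely on how many; the frequencies below are invented placeholders, not published values.

```python
# Illustrative sketch only: the minutia-type frequencies below are
# invented placeholders, not published values.
from math import prod

# Hypothetical relative frequencies of minutia types in a reference
# population (rarer types carry more evidential weight).
FREQ = {
    "ridge_ending": 0.6,
    "bifurcation": 0.3,
    "dot": 0.08,
    "lake": 0.02,
}

def naive_match_probability(minutiae):
    """Crude random-match probability under an (unrealistic)
    independence assumption: the product of the type frequencies."""
    return prod(FREQ[m] for m in minutiae)

# Two configurations with the SAME number of minutiae (three) which a
# fixed numerical standard would treat identically.
common = ["ridge_ending", "ridge_ending", "bifurcation"]
rare = ["dot", "lake", "dot"]

print(naive_match_probability(common))  # comparatively frequent configuration
print(naive_match_probability(rare))    # orders of magnitude rarer
```

Two configurations with the same count can differ by orders of magnitude in rarity, which is precisely why a fixed numerical threshold cannot capture evidential weight.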

Allowing the whole range of features to be accounted for in the identification process is referred to as ridgeology and has been promoted by David Ashbaugh (Ashbaugh 1999). It is this holistic approach that has been adopted as a policy for fingerprint identification in the UK.

Quality management systems have been introduced, but the emphasis to date has been on management systems through ISO 9002 rather than on more in-depth assessments such as ISO 17025. Thus the policy and management procedures are covered, but the fundamental standards are as yet not externally verified. At the time of writing, no uniform competence or proficiency scheme is employed across all UK fingerprint departments, although plans exist for its introduction. Only lately have development courses been established for long-serving examiners. With most examiners in the employ of the police, the independence that the court would wish to see may be open to challenge.

Bias and errors in fingerprint identifications

Research aimed at identifying the potential biases to which examiners may be susceptible has made significant recent progress (Dror 2005; Dror et al. 2005; Dror and Charlton 2006; Dror et al. 2006). This empirical research highlights the impact of contextual information on the final decision making of fingerprint examiners, and the need for full awareness of that impact. The studies by Dror et al. focus mainly on the final decision arising from the comparison. We believe that strict adherence to the ACE-V protocol, with distinct analysis and comparison stages, is critical to mitigate the risk. Clearly the dangers may be more pronounced given the close working relationship between examiners and police investigators. A clear and documented analysis, which would minimise this risk, is not universally seen in the UK, and it is unlikely that marks are regularly analysed in this fashion.

The fingerprint profession claimed for many years that a misidentification was not a possible outcome (with the exception of intentional or fraudulent evidence). Recent academic interest in the area has led to the publication of accounts of 22 cases of misidentification (Cole 2005; Cole 2006b). Although Cole (2005) suggests that these cases represent the tip of the iceberg, the accuracy of this statement remains unknown. Among these 22 cases, two are from the UK, and neither is accepted as a misidentification.

The first case is that of McNamee, convicted in England in 1987 of conspiracy to cause explosions. He had been identified by fingerprint examiners from the Metropolitan Police as the man who left a thumbmark on a battery recovered from an explosive device in London. During his appeal in 1998, numerous fingerprint experts were called to comment on the identification. Some experts attributed the mark to McNamee’s print (although their conclusions were based on different sets of features); others maintained that the mark was not sufficient for individualisation. The original verdict was set aside by the Court of Appeal (R v. Gilbert Thomas Patrick McNamee, No. 9704481 S2, The Court of Appeal, Criminal Division, 17 December 1998).

The second case is the supposed misattribution (with allegedly 16 points in agreement) of a mark found at a scene in Scotland to the thumbprint of Shirley McKie. The McKie case is covered in a recent book (McKie and Russell 2007), and a full inquiry into the Scottish Criminal Record Office and Scottish Fingerprint Service has been undertaken by the Justice 1 Committee of the Scottish Parliament. In this case there is not a complete consensus: a minority of latent print examiners continue to claim that McKie was indeed the source of the disputed mark, whereas the majority have declared an exclusion. This lack of consistency between examiners is a worrying fact.

The other famous case of known misattribution is the FBI misidentification related to the terrorist attacks in Madrid, Spain, in 2004. Various reports of critical importance followed the discovery of the erroneous individualisation of Brandon Mayfield. The main recommendations of the internal review team were (Smrz et al. 2006; Stacey 2004): revisions to the latent print training programme; revisions to evidence-acceptance policies; detailed revisions to SOPs (standard operating procedures) and to casework documentation policies and procedures; revisions to SOPs regarding the decision-making process when determining the comparative value of a latent mark; and more stringent verification policies and procedures. An internal scientific review team explored the main issues facing the fingerprint profession (Budowle et al. 2006). A report of the Office of the Inspector General regarding this case has also been released (United States Department of Justice and Office of the Inspector General – Oversight and Review Division 2006). Four critical areas were identified:

  1. the fact that the known impression from Brandon Mayfield showed a degree of similarity (10 minutiae could be described as being in correspondence) with the mark from the true source;
  2. the fact that circular reasoning could have convinced examiners of the presence of distinctive features on the mark, when their visibility had first been established on the known impression;
  3. the misleading conclusions from the analysis stage with regard to the number of marks revealed (double tap or superimposition), and a questionable reliance on level 3 features;
  4. the potential bias of the verification stage caused by the verifying examiner’s knowledge of the conclusions of the first.

These reports should be considered by any laboratory doing friction ridge skin individualisation. The possibility for a misattribution exists and the standard operating procedures should recognise this.

From identification issues to activity issues

The examination of marks allows the association of an individual with a particular surface or item; a contact between the donor and the surface or item is a prerequisite. At present there is no scientific method that allows a reliable estimate of the timeframe in which a mark was left on a surface. Patent marks, particularly those in blood, provide significant evidence beyond that of identity. For example, whether a mark was made in the blood of a victim, or by deposition from a hand already carrying the blood of the victim, is clearly of significance. Such determinations are possible as the ridges and pores may show distinct differences.

An area of particular interest in the detection of criminal activities is the interpretation of mark placement. In certain cases it is possible to provide an opinion, either on the activity or the position of the hand when the mark was made. Of course, in many scenarios there will be more than one activity that could result in the observed marks. It is therefore important to consider all possible propositions. It is interesting to observe that many fingerprint examiners in the UK (and elsewhere) are unwilling to provide such opinions. There is a tendency to limit the evidence to that of association rather than explore the potential for more informative opinions regarding the activity of the person who touched the object. We believe that when the real issues in the case are in relation to both the source of the marks and the activity associated with them (handling or sequence of events), then the duty of the expert witness is to inform the trier of facts on both aspects.

Similarly, where no marks are found, this fact is often misinterpreted by investigators and legal professionals. Bearing in mind the number of factors that affect both the persistence and the detection of a mark, the inability to find any mark cannot be used to assert that there was no contact between the item or surface and the particular individual. This may not be made clear in statements and reports. Where some history can be associated with the item, it may be possible to draw some inference as to why no marks were found; for example, the expectation of finding marks on items that have been wetted is reduced. The limitation of such an approach lies in the availability of data and the number of conflicting factors that need to be considered.

Provision of fingerprint services in the UK

The majority of fingerprint examinations in the UK are provided by departments (traditionally called bureaux) attached to police forces. In England and Wales these departments are essentially autonomous although increasingly they are subject to the policies and direction of the National Fingerprint Board set up by ACPO. In Scotland the fingerprint departments were amalgamated into the Scottish Police Services Authority in 2007 (a merger not unconnected to the debate with regard to the McKie case).

In England and Wales each department deals with submissions relating to crime within the geographical area of the police force. There are some exceptions to this, in particular terrorist investigations that are more often undertaken by the Metropolitan Police and other larger forces.

The police services are supported in terms of research and development by HOSDB (Home Office Scientific Development Branch). Training is supplied by centralised organisations, and a degree of consistency has therefore been achieved. In England and Wales training is provided by the National Policing Improvement Agency (NPIA). Central instruction provides basic skills, with in-house training delivered within the student’s own department under the mentorship of a local trainer. A portfolio of work is prepared and a final assessment undertaken, covering basic knowledge, comparison skills and court presentation. In Scotland training is provided from within the Scottish Police Services Authority. The traditional five-year ‘apprenticeship’ has been removed, although the idea that length of service, and therefore experience, can be used as some measure of competence remains embedded. What may be missing from the training is the general concept of forensic science as a philosophical basis. Such studies may be all the more important as the majority of examiners work within police organisations and may therefore be subject to conflicting influences with regard to balanced and transparent reporting and the pursuit of the investigation.

In addition to the police, a number of commercial organisations also provide fingerprint services. LGC/Alliance provides fingerprint services in association with a police force and the FSS provides a range of services including specialist detection and recovery to police forces and government agencies. The FSS also maintains a programme of research and development. In recent years several smaller companies have been formed to deliver fingerprint expertise for the defence and undertake some contracted casework.

The UK national AFIS system is known as IDENT1 (formerly NAFIS although this term is still in general use). It is managed by the NPIA. The system comprises terminals within each of the force departments that provide access to the national fingerprint database and a link directly to the police national computer. Fingerprints taken in force areas are handled by the local departments who search the fingerprints, create new records and update police criminal records as necessary.

The Home Office has invested in Livescan technology for all charging stations. This has helped ensure that the biometric identification of individuals is fairly rapid. However, the quality of images being filed into the database may not be as consistently high as was envisaged. The matching algorithm supplied most recently by Sagem Défense Sécurité (SAFRAN Group, France) is highly effective, to the point that automatic identifications (based solely on technology, without a check by an examiner) using mobile technologies are opening future possibilities for identity checking.

Fingerprint Evidence in court

The first successes for fingerprints date from the early twentieth century; the first murder case, R v. Stratton (1905), is well documented. Two cases served to provide the necessary precedents for future use: R v. Castleton (3 Cr App R 74, 1909) and HM Advocate v. Hamilton (JC 1 1933). These set the tone of the legal response to fingerprints by accepting the system as ‘practically infallible’ and admissible as the sole grounds of identification. This is perhaps a reflection of the legal systems in the UK or of the social conditions that prevailed. No significant challenge was made to the validity of fingerprint identification for some considerable time, a situation similar to that in other jurisdictions, although notable challenges have arisen in the USA in recent years. In Hamilton, the High Court of Justiciary refrained from accepting the term ‘infallible’ as a qualifier of the fingerprint evidence, suggesting instead the term ‘reliable’. In R v. R.J. Buckley (Court of Appeal, Criminal Division, 143 SJ LB 159, The Times, 12 May 1999), their Lordships reviewed the previous UK cases in which fingerprint evidence had been admitted without 16 points of correspondence, the historical aspects of the 16-points standard and the associated review process undertaken under ACPO. Lord Justice Rose laid down the following guidelines in his decision (extracts from the decision):

  • If there are fewer than eight similar ridge characteristics, it is highly unlikely that a judge will exercise his discretion to admit such evidence and, save in wholly exceptional circumstances, the prosecution should not seek to adduce such evidence.
  • If there are eight or more similar ridge characteristics, a judge may or may not exercise his or her discretion in favour of admitting the evidence. How the discretion is exercised will depend on all the circumstances of the case, including in particular:
    • the experience and expertise of the witness;
    • the number of similar ridge characteristics;
    • whether there are dissimilar characteristics;
    • the size of the print relied on, in that the same number of similar ridge characteristics may be more compelling in a fragment of print than in an entire print; and
    • the quality and clarity of the print on the item relied on, which may involve, for example, consideration of possible injury to the person who left the print, as well as factors such as smearing or contamination.
  • In every case where fingerprint evidence is admitted, it will generally be necessary, as in relation to all expert evidence, for the judge to warn the jury that it is evidence of opinion only, that the expert’s opinion is not conclusive and that it is for the jury to determine whether guilt is proved in the light of all the evidence.

This ruling is very similar to Locard’s tripartite rule (Locard 1914), with the exception that the admissibility of fingerprint evidence has been limited to cases where at least eight corresponding minutiae are observed, whereas Locard opened the door to the use of marks of very limited quality as corroborative evidence (Champod 1995). As Lord Justice Rose said: ‘It may be that in the future, when sufficient new protocols have been established to maintain the integrity of fingerprint evidence, it will be properly receivable as a matter of discretion, without reference to any particular number of similar ridge characteristics.’ Indeed, we referred earlier to recent statistical studies that establish the high evidential contribution of very limited marks (in terms of number of minutiae) (Neumann et al. 2006; Neumann et al. 2007). On average, a configuration with three minutiae will be observed in the population with a match probability in the order of one in a thousand; one in 10,000 for four minutiae; and one in a million for six minutiae. Hence fingerprint evidence below eight points of coincidence may contribute very significantly to the detection of crime. The argument above is heavily focused on minutiae, but a similar point can be made invoking level 3 details such as pores and edge structure, if the mark and the alleged corresponding print are of adequate quality. Cases of individualisation where the number of corresponding minutiae is very limited have been published (Reneau 2003).
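The match probabilities quoted in the studies cited above can be turned into the likelihood ratios and posterior odds of a standard Bayesian evaluation. The sketch below is illustrative only: it assumes the probability of the observed correspondence given a common source is close to one, and the population of 100 potential donors is a hypothetical example, not a figure from the text.

```python
# Sketch of the standard likelihood-ratio / posterior-odds calculation,
# using the orders of magnitude quoted in the text (Neumann et al.).
MATCH_PROB = {3: 1e-3, 4: 1e-4, 6: 1e-6}  # random match probabilities

def likelihood_ratio(n_minutiae):
    # Assuming the probability of the correspondence given a common
    # source is ~1, the LR is the reciprocal of the match probability.
    return 1.0 / MATCH_PROB[n_minutiae]

def posterior_odds(prior_odds, n_minutiae):
    """Bayes' theorem in odds form: posterior odds = LR x prior odds."""
    return likelihood_ratio(n_minutiae) * prior_odds

# A donor drawn from a hypothetical restricted pool of 100 potential
# donors (prior odds 1:99), against a six-minutia correspondence:
print(posterior_odds(1 / 99, 6))  # posterior odds of roughly 10,000 to 1
```

Framed this way, even a mark well below eight minutiae carries quantifiable corroborative weight, especially when the relevant population of potential donors is far smaller than the world’s population.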

The debate regarding fingerprint evidence in UK courts contrasts with the polarised arguments in the US courts especially in the follow-up of the Daubert (Daubert v. Merrell Dow Pharmaceuticals [1993] 509 US 579) decision setting new guidelines for the admissibility of scientific evidence (Berger 2000).

Until January 2002, all Daubert hearings led to the admissibility of fingerprint evidence in US courtrooms. In January 2002 the first decision that (briefly) limited expert testimony on fingerprint identification was made. In US v. Llera Plaza (US v. Llera Plaza, Acosta and Rodriguez, US District Court for the Eastern District of Pennsylvania [2002] Criminal No. 98-362-10,11,12), Judge Pollak held that a fingerprint expert could not give an opinion of identification, and required that the expert limit his testimony to outlining the correspondences observed between the mark and the print, leaving to the court the assessment of the significance of those findings. Asked to reconsider his opinion, Judge Pollak later reversed his decision and admitted the evidence, mainly in the light of the UK Court of Appeal ruling in R v. Buckley. That decision led to increased scrutiny of, and interest in, the fingerprint area in the scientific literature, with unprecedented press coverage and reactions from the legal community (Steele 2004). Later, in Mitchell (United States v. Byron Mitchell, Court of Appeals for the Third Circuit [2004] No. 02-2859 (29 April 2004)), the Appeal Court, while accepting fingerprint evidence under Daubert, gave a fair assessment of the field: on balance the probative benefits outweigh the risks, but the field lacks clear standards and the debate is marred by the ill-defined concept of the criteria for sufficiency.

The scientific status of identification evidence and in particular fingerprint evidence still receives critical attention from scholars and commentators (Saks and Koehler 2005; Zabell 2005; Harmon et al. 2006). Simon Cole in particular published a series of papers pointing out some critical weaknesses in latent fingerprint identification (Cole 2004a; Cole 2004b; Cole 2005; Cole 2006a; Cole 2006b).

The present situation is that the UK courts accept the non-numerical standard and for the most part do not challenge the identification. The reasons for this may be twofold. First, the general perception amongst the public (including the judiciary) is that fingerprint evidence is irrefutable and safe, a view consistently reinforced by the media. Second, there may be a lack of adequate defence expertise. The defence therefore tend to devalue the evidence through claims of legitimate access, attacks on the chain of evidence or, occasionally, attempts to discredit the examiner. In fact a number of other lines of questioning may be appropriate (especially in the aftermath of the Mayfield case): for example, the lack of a demonstrable process, issues relating to the ongoing competence of the examiner, and the lack of contemporaneous notes and detailed records of the conclusion and its basis.

Examiners supply statements in accordance with the prevailing rules. In general, however, these statements do little more than offer the opinion of the examiner and provide rather limited detail on how the conclusion was reached. The practice of charting the mark and print to demonstrate the features in agreement is no longer common, so the transparency of the examination process may be questioned. It is common practice within the fingerprint community to document factually what has been done without documenting why a given conclusion has been drawn.

The future

Large investment in technologies to recover fingermarks continues, and one may therefore expect some extension in the range of surfaces from which marks may be revealed. Other associated research, such as the ageing of marks, may also bring benefits to the criminal justice system.

AFIS technologies continue to improve and the acceptance of ‘lights out’ checking for biometric use may not be too far in the future. Wireless transfer of mark images from crime scenes will also become standard, offering rapid turnaround of identifications.

But probably the most important development will be the design of statistical models to evaluate matches involving a limited number of features. Recent studies indicate a potential for extending the range of marks on which evidence can be given. Such research may also represent an adequate and sought-after response to the current debate (mainly in the USA) regarding the admissibility of fingerprint evidence. We foresee the fingerprint profession moving from the current situation, where the strength of the opinion rests mainly on the expert and his/her experience, to a time when fingerprint evidence will be supported by statistical models and full documentation of the process used to draw a conclusion from a comparison.

Finally, quality assurance and proficiency testing will tend to dominate the debate regarding fingerprint evidence in the future. This move towards transparency and accountability will see the fingerprint field held to the same standards as all other forensic evidence types.


We limit our presentation to the case where a mark is compared against a print, but the protocol remains essentially the same when prints are compared against prints. The only difference is the quality of the information on one side of the comparison process.


Ashbaugh, D.R. (1999) Qualitative-Quantitative Friction Ridge Analysis – An Introduction to Basic and Advanced Ridgeology. Boca Raton: CRC Press.
Balthazard, V. (1911) ‘De l’identification par les empreintes digitales’, Comptes rendus des séances de l’Académie des Sciences, 152: 1862–1864.
Beavan, C. (2001) Fingerprints – The Origins of Crime Detection and the Murder Case that Launched Forensic Science. New York: Hyperion.
Becue, A., Champod, C. and Margot, P.A. (2007) ‘Fingermarks, Bitemarks and other Impressions (Barefoot, Ears, Lips) – A Review (September 2004–July 2007)’, Proceedings of the 15th Interpol Forensic Science Symposium.
Berger, M.A. (2000) ‘The Supreme Court’s Trilogy on the Admissibility of Expert Testimony’, in Federal Judicial Center (ed.), Reference Manual on Scientific Evidence. Washington: Federal Judicial Center, 9–38.
Blain, B. (2002) ‘Automated Palm Identification’, Fingerprint Whorld, 28: 102–107.
Bowman, V. (ed.) (2004) Manual of Fingerprint Development Techniques. Sandridge: Home Office Scientific Research and Development Branch.
Budowle, B., Buscaglia, J. and Schwartz Perlman, R. (2006) ‘Review of the Scientific Basis for Friction Ridge Skin Comparisons as a Means of Identification: Committee Findings and Recommendations’, Forensic Science Communications, 8.
Champod, C. (1995) ‘Locard, Numerical Standards and “Probable” Identification’, Journal of Forensic Identification, 45: 132–159.
Champod, C., Egli, N. and Margot, P.A. (2004a) ‘Fingermarks, Shoesoles and Footprint Impressions, Tire Impressions, Ear Impressions, Toolmarks, Lipmarks, Bitemarks – A Review (2001–2004)’, Proceedings of the 14th Interpol Forensic Science Symposium.
Champod, C. and Evett, I.W. (2001) ‘A Probabilistic Approach to Fingerprint Evidence’, Journal of Forensic Identification, 51: 101–122.
Champod, C., Lennard, C. and Margot, P.A. (1993) ‘Alphonse Bertillon and Dactyloscopy’, Journal of Forensic Identification, 43: 604–625.
Champod, C., Lennard, C.J., Margot, P.A. and Stoilovic, M. (2004b) Fingerprints and other Ridge Skin Impressions. Boca Raton: CRC Press.
Cole, S. (2001) Suspect Identities: A History of Fingerprinting and Criminal Identification. Cambridge, MA: Harvard University Press.
Cole, S.A. (2004a) ‘Fingerprint Identification and the Criminal Justice System: Historical Lessons for the DNA Debate’, in D. Lazer (ed.), DNA and the Criminal Justice System. Cambridge, MA: MIT Press, 63–90.
Cole, S.A. (2004b) ‘Grandfathering Evidence: Fingerprint Admissibility Rulings from Jennings to Llera Plaza and Back Again’, American Criminal Law Review, 41: 1189–1276.
Cole, S.A. (2005) ‘More than Zero: Accounting for Error in Latent Fingerprint Identification’, The Journal of Criminal Law and Criminology, 95: 985–1078.
Cole, S.A. (2006a) ‘Is Fingerprint Identification Valid? Rhetorics of Reliability in Fingerprint Proponents’ Discourse’, Law and Policy, 28: 109–135.
Cole, S.A. (2006b) ‘The Prevalence and Potential Causes of Wrongful Conviction by Fingerprint Evidence’, Golden Gate University Law Review, 37: 39–105.
Dror, I. (2005) ‘Experts and Technology: Do’s & Don’ts’, Biometric Technology Today, 13: 7–9.
Dror, I.E. and Charlton, D. (2006) ‘Why Experts Make Errors’, Journal of Forensic Identification, 56: 600–616.
Dror, I.E., Charlton, D. and Péron, A.E. (2006) ‘Contextual Information Renders Experts Vulnerable to Making Erroneous Identifications’, Forensic Science International, 156: 74–78.
Dror, I.E., Péron, A., Hind, S.-L. and Charlton, D. (2005) ‘When Emotions Get the Better of Us: The Effect of Contextual Top-Down Processing on Matching Fingerprints’, Applied Cognitive Psychology, 19: 799–809.
Evett, I.W. and Williams, R. (1996) ‘A Review of the Sixteen Points Fingerprint Standard in England and Wales’, Journal of Forensic Identification, 46: 49–73.
Goode, G.C. and Morris, J.R. (1983) ‘Latent Fingerprints: A Review of their Origin, Composition and Methods for Detection’. Aldermaston, UK: Atomic Weapons Research Establishment, AWRE Report No. 022/83.
Harmon, R., Budowle, B., Langenburg, G. and Houck, M.M. (2006) ‘Letters: Questions About Forensic Science (with response)’, Science, 311: 607–610.
Komarinski, P. (2005) Automated Fingerprint Identification Systems (AFIS). New York: Elsevier Academic Press.
Kücken, M. (2007) ‘Models for Fingerprint Pattern Formation’, Forensic Science International, 171: 85–96.
Locard, E. (1914) ‘La preuve judiciaire par les empreintes digitales’, Archives d’anthropologie criminelle, de médecine légale et de psychologie normale et pathologique, 29: 321–348.
Maceo, A.V. (2005) ‘The Basis for the Uniqueness and Persistence of Scars in the Friction Ridge Skin’, Fingerprint Whorld, 31: 147–161.
Maltoni, D., Maio, D., Jain, A.K. and Prabhakar, S. (2003) Handbook of Fingerprint Recognition. New York: Springer-Verlag.
McKie, I. and Russell, M. (2007) Shirley McKie: The Price of Innocence. Edinburgh: Birlinn Ltd.
Neumann, C., Champod, C., Puch-Solis, R., Egli, N., Anthonioz, A. and Bromage-Griffiths, A. (2007) ‘Computation of Likelihood Ratios in Fingerprint Identification for Configurations of Any Number of Minutiae’, Journal of Forensic Sciences, 52: 54–64.
Neumann, C., Champod, C., Puch-Solis, R., Meuwly, D., Egli, N., Anthonioz, A. and Bromage-Griffiths, A. (2006) ‘Computation of Likelihood Ratios in Fingerprint Identification for Configurations of Three Minutiae’, Journal of Forensic Sciences, 51: 1255–1266.
Reis, G. (2007) Photoshop® CS3 for Forensics Professionals. Indianapolis: Wiley.
Reneau, R.D. (2003) ‘Unusual Latent Print Examinations’, Journal of Forensic Identification, 53: 531–537.
Russ, J.C. (2001) Forensic Uses of Digital Imaging. Boca Raton: CRC Press.
Saks, M.J. and Koehler, J.J. (2005) ‘The Coming Paradigm Shift in Forensic Identification Science’, Science, 309: 892–895.
Sengoopta, C. (2003) Imprint of the Raj – How Fingerprinting Was Born in Colonial India. London: Macmillan.
Smrz, M.A., Burmeister, S.G., Einseln, A., Fisher, C.L., Fram, R., Stacey, R.B., Theisen, C.E. and Budowle, B. (2006) ‘Review of FBI Latent Print Unit Processes and Recommendations to Improve Practices and Quality’, Journal of Forensic Identification, 56: 402–434.
Stacey, R.B. (2004) ‘A Report on the Erroneous Fingerprint Individualization in the Madrid Train Bombing Case’, Journal of Forensic Identification, 54: 706–718.
Steele, L.J. (2004) ‘The Defense Challenge to Fingerprints’, Criminal Law Bulletin, 40: 213–240.
Suman, A. and Whitaker, G. (2005) ‘Benchmarking the Operational Search Accuracy of a National Identification System’, Biometric Technology for Human Identification II, Proceedings of the SPIE, 5779: 232–241.
United States Department of Justice, Office of the Inspector General – Oversight and Review Division (2006) A Review of the FBI’s Handling of the Brandon Mayfield Case (unclassified and redacted). Washington, DC.
Wertheim, K. and Maceo, A. (2002) ‘The Critical Stage of Friction Ridge and Pattern Formation’, Journal of Forensic Identification, 52: 35–85.
Zabell, S.L. (2005) ‘Fingerprint Evidence’, Journal of Law and Policy, 13: 143–179.


We would like to thank Cédric Neumann for commenting on the draft and Flore Bochet and Damien Dessimoz for helping us with the illustrations.
