Traceology, the bedrock of forensic science and its associated semantics

Authored by: Pierre Margot

The Routledge International Handbook of Forensic Intelligence and Criminology

Print publication date:  December  2017
Online publication date:  December  2017

Print ISBN: 9781138688216
eBook ISBN: 9781315541945




The focus of this chapter is the major vector of information available to humans that enables them to know, with a certain degree of confidence, past facts that cannot be reproduced. Traces, vestiges and remnants are often more important than writings, which express the viewpoints, wills or conceptions of their authors. Archaeology has revisited many historical facts that were reported but subsequently shown to be wrong or biased once historical vestiges were considered. What is valid for ancient history is just as valid for recent histories such as crimes. Building models of what happened using science is a difficult but exceedingly rich process. Information, data and findings are of interest to investigators, courts and criminologists, as they negate impossible theories or propositions while supporting others to various documented degrees. This essentially problem-solving discipline is plagued by misuse of conceptual vocabulary, a practice that highlights the gap between practitioners coming from other traditional scientific disciplines and forensic scientists.



In the twentieth century, forensic science evolved from a method of investigation (Gross 1893) into a technological identification process focused on evidence in a legal procedure (Robertson et al. 2014). The high identification power obtained with fingermarks, tool marks, etc., and more recently with DNA, has fed technological developments focused on the identification capacities offered by chemistry, biology, or physics, with little or no relation to the object of study—the remnants or vestiges 1 left by criminals while committing their crimes. This has reached a point where extreme technological developments are highly controlled to safeguard and ensure the quality of analysis of specific items without any, or hardly any, consideration of the object itself. This has led to the creation of specialized and skilled laboratories akin to service factories, able to deliver analytical results to their clients: highly controlled deliverables but of questionable value. These developments have also resulted in an economic disaster—for example, the recent closure (in 2012) of the Forensic Science Service in the UK—and the turmoil of forensic science in the US with ongoing reviews that followed the scathing National Research Council Report (National Academy of Sciences 2009) after serious mistakes were identified. This crisis is further compounded today with the lack of consideration of electronic trace information (phones, computers, surveillance cameras, GPS, cyberspace) as it relates to other forensic science data.

A simple example should illustrate the difficult scientific questions fundamental to forensic science, questions that arise from the very nature of its object: the trace, a clue to an event that needs to be understood, to the identities of its participants, to their contribution to the event, etc.


A fight breaks out on a dance floor. One participant in the fight breaks a wine glass and uses the broken glass in his hand to cut the throat of his opponent. Realizing the potential consequences, he runs out of the dance club and flees in the direction of an underground station. An individual is stopped and arrested after traditional police investigation. This person denies having been at the dance club, despite fitting witness descriptions and despite a surveillance-camera recording apparently showing him running into the underground station. His clothing is collected; a fragment of broken glass is found in the wound of the deceased, and a couple of glass fragments are recovered from the dance floor.

In the laboratory, two small pieces of broken glass are found during the search of the clothing of the man under arrest. Laboratories can provide glass analyses ranging from simple density and refractive index (RI) measurements to multi-element qualitative and quantitative analyses (such as ICP–MS). Multi-element analytical research has focused on the analysis of windows, bottles, drinking glasses and windscreens, showing the internal variation of composition of various items, the variation between types of items, sometimes even focusing on differentiating production batches, etc. This is all done using sampling, i.e., selecting a given number of fragments from a source to delimit the spread of values for that (identified) source. This process is valuable for determining the discrimination power of the method applied and for documenting the diversity in production of a given item, but it has little relation to the problem at hand. The trace items found on the floor and on the clothing are not identified as standard source materials. It is not even certain that they are glass fragments, and if they are, the analyst does not know whether these fragments are representative of their source, since they were not selected and they represent only a vestige (what is left from an event, i.e., they are not formally ‘samples’).

Taking a general approach, it is first determined that all fragments are pieces of glass: they are all apparently colourless and transparent. Using refractive index measurements, they may fit the broad category of glass used for making drinking glasses (not bottle, window or spectacle glass). Refining the analyses, it now appears that the RI differs between the fragment in the wound, the fragments recovered from the floor and the fragments in the garments. One obvious conclusion is that they are not from the same source! This conclusion, however, may be utterly misleading, since a drinking glass may be made of one generic type of glass whose composition varies between the drinking bowl, the stem and the base. Three pieces of broken glass may be from a single source yet differ in their composition! The analytical response is correct, but the forensic conclusion would be wrong if it were decided that the broken pieces are not from the same source. On the other hand, some valuable forensic science research starting in the 1970s (Pearson et al. 1971) has been relegated to the past by laboratories and is ignored most of the time. This research involved population studies of clothing: the likelihood of finding glass fragments in a population of garments, what happens when glass breaks, what happens to glass transferred during the action, etc. These studies have shown that finding glass fragments on clothing in the general population is not unusual, but that glass transferred during an event is not persistent and falls off the clothing rapidly with activity. In the example, finding glass on the clothing is therefore relevant, since glass was broken. It has value because the source has material features typically found in drinking glass.
Going any further has little relevance, since the difference in composition cannot be explained by a difference in source but solely by the lack of homogeneity arising from the manufacture of drinking glasses, in that scenario and in general. Studying the activity (in time and space, pictures taken by witnesses with their mobile phones, surveillance cameras, activation of communication points, etc.) combined with the glass findings in such a scenario is much more relevant than determining the multi-element composition of a fragment whose source and sampling are not controlled. The logic applied in most forensic science laboratories does not follow this path, and most scientists are uncomfortable with this situation since it lies outside their technical skills.
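The pitfall in this glass scenario can be made concrete with a small sketch. In the following Python fragment all refractive index values are hypothetical and the tolerance is an assumed analytical figure, not a published one; it groups fragments naively by RI, and three fragments from the bowl, stem and base of a single drinking glass end up in three different groups, which is exactly the misleading ‘different sources’ conclusion discussed above.

```python
# Illustrative sketch (hypothetical RI values): naive grouping of glass
# fragments by refractive index within an assumed analytical tolerance.
# A single drinking glass may yield different values for bowl, stem and
# base, so "different RI" does not entail "different source".

TOLERANCE = 0.0002  # assumed measurement tolerance (illustrative)

def group_by_ri(fragments, tol=TOLERANCE):
    """Group (label, ri) pairs whose RI lies within tol of a group mean."""
    groups = []
    for label, ri in sorted(fragments, key=lambda f: f[1]):
        for g in groups:
            mean = sum(r for _, r in g) / len(g)
            if abs(ri - mean) <= tol:
                g.append((label, ri))
                break
        else:
            groups.append([(label, ri)])
    return groups

# Hypothetical fragments, all from ONE broken drinking glass:
fragments = [
    ("wound", 1.5201),     # bowl
    ("floor", 1.5223),     # stem
    ("clothing", 1.5219),  # base
]

for g in group_by_ri(fragments):
    print([label for label, _ in g])  # three singleton groups
```

The point is not the grouping algorithm but its blindness: without knowledge of within-item heterogeneity, an analytically correct partition supports a forensically wrong conclusion about sources.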

This example is simple and almost a caricature. It concerns a closed set, a single crime with a limited number of participants but with significant and difficult forensic questions that are not approached by the current discussions on forensic science worldwide.

Linkage blindness

Looking at the wider sociological picture, Egger studied the reasons for failures in criminal investigation. He identified many factors, but one was found to dominate all others, which he coined ‘linkage blindness’ (Egger 1984). Linkage blindness is the incapacity to link crimes committed by the same criminal or group of criminals: crimes that are linked in nature but that investigations have failed to connect. Linkage blindness accounts for a large number of these failures in criminal investigation, since a great number of crimes are committed by a small number of criminals (habitual, professional, organized, terrorist) with a high recidivist rate. Reducing this linkage blindness has been key to the development of various databases collecting modus operandi and data on crimes, victims, etc., such as VICAP (violent crime apprehension programme) in the US or ViCLAS (violent crime linkage analysis system) in Canada (Collins et al. 1998). But this approach has failed to deliver the expected successes because of its subjective and highly administrative nature. Forensic science, maintaining its focus on analytical technology rather than on the problem to be solved, was unable to propose a solution. It was only with the advent of DNA, with its high individualizing capacity and its presentation of profiles as sequences of digits that computers handle perfectly, that it became apparent that DNA traces at various crime scenes were connecting crimes even when the criminal was neither in the database nor in the frame of the investigation (Kind 1987). One essential feature of DNA is that, in principle, all cells from the source share an identical DNA profile. Nevertheless, all efforts remained on the discrimination power and identifying strength of DNA, multiplying research into more loci rather than exploiting the connecting power of even partial profiles.
Looking for connections, it is necessary to move away from the highly selective to a much lower level of distinction, in order to group what is similar (which allows linkage) and separate what is different (which allows dissociation).

Research starting in the 1990s looked at highly repetitive crimes, volume crimes such as burglaries, by collecting files of solved linked cases. It was found that most files contained indications that material evidence had been collected but was rarely, if ever, used for investigative purposes, but usually only retrospectively, after a person was arrested, to demonstrate the extent of the crimes (Ribaux, Taroni et al. 1995). The investigative strength of material evidence was lost. Since then, research and practice have demonstrated the power and strength of such material evidence in describing criminal phenomena and reducing linkage blindness in a great variety of crimes from burglaries to terrorism, doping, counterfeiting, etc. (Ribaux and Margot 1999; Walsh et al. 2002; Ribaux, Baylon et al. 2010a; Ribaux, Baylon et al. 2010b; Rossy et al. 2013; Morelato et al. 2014; Baechler et al. 2015). The study of material left or modified by criminals while committing their crimes, or taken with them from the scenes of their crimes, got back on its original tracks, which it should not have left. Electronic information is further nourishing this capacity of linkage but with an explosion in the volume of information that needs to be searched, measured, compared, connected, etc. The study and understanding of the meaning of traces as a whole is therefore fundamental to forensic science. Indeed, the Berlin school, created in the 1920s, maintained teaching, research and development in the use of traces for investigation in direct line with the ‘Kriminalistik’ of Gross (1893). The name it gave for the major defining course in forensic science was ‘traceology’, which we now adopt.
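The connecting power of even partial profiles, as opposed to their discriminating power, can be illustrated with a minimal sketch. The data and the linkage threshold below are invented; real DNA case linkage uses far richer statistical models, but the principle of grouping scenes through compatible partial profiles, without any suspect in a reference database, is the same.

```python
# Minimal sketch (hypothetical data): linking crime scenes through partial
# DNA profiles. Scenes are connected when their profiles agree on every
# locus they share and overlap on at least MIN_SHARED loci, even if the
# donor appears in no reference database.

MIN_SHARED = 3  # assumed linkage threshold (illustrative)

def compatible(p, q, min_shared=MIN_SHARED):
    """True if profiles overlap enough and agree on all shared loci."""
    shared = set(p) & set(q)
    return len(shared) >= min_shared and all(p[l] == q[l] for l in shared)

def link_scenes(profiles):
    """Group scene ids into connected components of compatible profiles."""
    ids = list(profiles)
    parent = {i: i for i in ids}
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for a in ids:
        for b in ids:
            if a < b and compatible(profiles[a], profiles[b]):
                parent[find(a)] = find(b)
    groups = {}
    for i in ids:
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# Hypothetical partial profiles (locus -> genotype):
profiles = {
    "burglary_A": {"D3": "15/16", "vWA": "17/18", "FGA": "21/22", "TH01": "6/9"},
    "burglary_B": {"D3": "15/16", "vWA": "17/18", "FGA": "21/22"},  # partial, same donor
    "robbery_C":  {"D3": "14/14", "vWA": "16/19", "FGA": "20/23", "TH01": "7/8"},
}

print(link_scenes(profiles))  # links A and B, leaves C apart
```

Lowering the selectivity threshold links more scenes at the price of more false links; tuning that trade-off is precisely the move from identification toward intelligence discussed in the text.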

Trace and traceology

The word ‘trace’ comes from Latin (tractus) and the old French to mean the path someone or something takes; since the fourteenth century, it has been extended to mean the vestige or mark of a presence, an existence or an action of someone or something in a location or space that did not belong to that space initially (Margot 2014).

The fact that it is a vestige or a remnant indicates that it is affected by the asymmetry of time (we cannot use the rewind button) and is determined by the past. Cleland argues (Cleland 2002) that there is an over-determination of the past; the trace tells something of the past that is not haphazard. This also means that it may not be representative of its original state, but most of the time it has a retrodictive capacity sufficient to allow a reasonable explanation of a common cause, as highlighted by Cleland (Cleland 2001; Cleland 2013), because it exists! This material existence is independent of its potential meaning, unlike a discourse or a written statement, which incorporates an author’s meaning. It allows detection, measurement, classification and comparison using a scientific methodology. The information content of this vestige may be sufficient to identify its source (as with DNA, fingermarks, toolmarks, etc.), to specify the ‘presence’ or ‘existence’ of ‘someone’ or ‘something’, but it also gives information related to space (where and how: orientation, action, electronic connection, images) and time (‘initially’ or when: succession, sequence, communication, etc.) to describe the overall activity that created the trace. Since it is not possible to go back in time, we can only construct a model that is descriptive of a given crime scenario, supported by what is observed. This is not a general model, but a specific retrodictive model that can only be probabilistic in nature.

In the majority of cases, the quality of the vestige is such that it is incomplete, imperfect and degraded by time passing, and these losses increase uncertainty or may support only approximations about the past event. These approximations need to be revised as new or complementary information becomes available. This may be unsettling for scientists focused on the precision and accuracy of measurements.

The definition of ‘trace’ as used here is clearly divergent from a recent usage that has come from chemistry, where a trace is a very small quantity, near the detection limits of technologies, and often described as an impurity or imperfection in a product or process. This comes from the development of chemistry in the nineteenth century and a definition given by Faraday in 1807 (according to the Oxford English Dictionary). Here the perception of the trace is negative; it is considered an undesirable factor, whereas in forensic science the trace is the fundamental and positive vector of the information. A trace is a vector of knowledge and is capable of being questioned depending on the scenarios that are proposed, since it is the result of an activity under investigation that needs to be understood or demonstrated. Looking at the literature, toxicologists often refer to ‘impurities’ when trying to link illicit drug seizures. They may be impurities for the producer of the illicit drug, but they are the essential trace for forensic science. This contrasting view emphasizes the essential difference in perspectives (Lociciro et al. 2007).

It must be emphasized that the role of traceology is not to determine cause but to indicate how the trace findings measure up when different versions of causes are given. Mathematical models help quantify how likely the observed traces are, given various propositions about the crime and how it was committed. A whole research field studies this type of interpretation, based on Bayes’ theorem (Aitken and Taroni 2004). This approach is intellectually satisfying and helps with both understanding the reasoning processes involved in retrodiction and developing tools for forensic expert systems integrated with databases (Neumann et al. 2012; Taroni et al. 2014).
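A minimal numerical sketch of this evaluative logic follows; all probabilities below are invented for illustration and are not drawn from actual transfer studies. The scientist reports a likelihood ratio for the findings under two competing propositions, and the court, not the scientist, combines it with the prior odds.

```python
# Illustrative sketch (hypothetical numbers) of Bayesian evaluation:
# the scientist reports a likelihood ratio (LR); the court combines it
# with its prior odds on the propositions.

def likelihood_ratio(p_e_given_hp, p_e_given_hd):
    """LR = P(findings | prosecution proposition) / P(findings | defence proposition)."""
    return p_e_given_hp / p_e_given_hd

def posterior_odds(prior_odds, lr):
    """Bayes' theorem in odds form: posterior odds = prior odds * LR."""
    return prior_odds * lr

# Hypothetical: glass consistent with the broken drinking glass is found
# on the suspect's clothing. Assume P(findings | Hp) = 0.6 (transfer and
# persistence) and P(findings | Hd) = 0.03 (background glass on clothing).
lr = likelihood_ratio(0.6, 0.03)    # about 20: findings ~20x more likely under Hp
print(posterior_odds(1 / 100, lr))  # prior odds of 1:100 become roughly 1:5
```

The division of labour is the point: the scientist owns the likelihood ratio, the prior and posterior odds belong to the court, which is exactly the findings/evidence distinction developed below.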

Unfortunately, this end product has little value if the relevance of the traces detected is faulty, since the environment is full of traces from the ‘normal’ uses and activities that contaminate scenes of investigation. These constitute a background noise (Hazard 2014), and this is why heuristics should be developed to target the searches toward known trace transfers with a rich potential (point of entry, object not in its usual place, point of contact observed by a surveillance camera, etc.) or toward situations that fit a crime model (see Delémont et al., ‘The practice of crime scene examination in an intelligence-based perspective’, chapter 8, in this volume). Specific traces may also be targeted (even of poor quality) if a phenomenon (serial crime) has been detected and is known to be ongoing. The focus can be directed toward specific traces identified in earlier cases of the series (Girod et al. 2008).

Further, if no precautions are taken, the scenes of investigation may be further polluted by careless handling of scenes (carelessness may also be detected by surveillance cameras!).

The trace exists, whether detected or not, and will remain without meaning unless detected. Its presence and its detection is a first indication of relevance and may lead to the detection of further traces that will help develop an understanding of the crime and its circumstances. From a hazy picture, each relevant finding unveils contours with more or less acuity and may, in certain cases, offer a clear picture with information of maximum quality. The whole range of information will therefore go from a simple clue to a full probative story line.

Understanding this grey scale offered by trace findings, and how they can feed the other processes (investigation, intelligence, prevention, court proceedings, etc.), is what forensic science is about. Traces constitute a sign language, and forensic science develops the methods and the grammar to understand the meaning of these signs with their ambiguities, confusions and strengths. Combining findings and analyzing them as a logical puzzle, where pieces only fit in certain combinations, belongs to the overall forensic science process as a problem-solving quest. The quest starts in a territory full of unknown elements (not in the laboratory), and as contours become clearer, full laboratory analyses may become necessary to resolve ambiguities.

This argumentation makes apparent that technological analytical developments, although important in their detection and measurement capabilities, remain blind if they are not applied within a developed logical framework and a well-understood investigative methodology: the Kriminalistik of Gross (Gross 1893).

Forensic science and its object: the semantics of the trace and its context

Unfortunately, the language and semantics used within the profession have been led astray, by practice and/or under the influence primarily of chemistry, but also of biology, physics, mathematics, etc. The language used thus needs clarification, because it underlies fundamental misunderstandings and raises fundamental issues relevant to forensic science and its role.

The first confusion is between ‘findings’ and ‘evidence’. Forensic scientists should not use the latter, despite the courts’ claim that ‘expert witnesses are giving evidence’. Forensic scientists are highlighting traces they have found and their information content. Given certain propositions, they may highlight the likelihood of their findings under one or other proposition. The courts will receive these findings and decide whether they help in understanding the issues to be judged. Findings become evidence if they help in any measure to decide on the issue and the causality of a given case. This subtle distinction clarifies the situation and makes plain that the scientist cannot take the role of the judge. Evidence concerns the issue and the means used to reach a verdict, and findings become evidence only if the judge finds them relevant and informative.

A second confusion appears with ‘sample’. In science, a sample has a definite and narrow meaning: it is part of a whole, selected to be representative of that whole. Sampling gives an indication of the range of values typical for an item, and the selection process is an essential element of the representativeness. A trace is never ‘selected’ because it is representative; it is taken because it is present and, apparently, relevant. It may be, more often than not, representative of its source, but this is only an assumption that cannot be made without logical information supporting it. The example given in the introduction with the glass fragments highlights this misuse: by saying ‘we analyzed the glass samples collected from the clothing’, the inference is that they are representative of their source (which they are not!); therefore, if they differ in their physics (RI) or chemistry, we should exclude that they come from the same source as the fragment in the wound. It is proposed that ‘specimen’ is a more correct word, since a specimen is a part of a whole but expresses the fact that it is a ‘one off’ element, without indication of selection or representativeness. Other words can also be used, such as ‘item’ and ‘collected trace’. Using ‘sample’ in this case is not only wrong and misleading, but shows a fundamental misunderstanding of the way the trace is created. It is true that some traces, like DNA, do share the same profile as their source, since all living cells contain identical DNA (with few exceptions), but the trace that contains DNA may be partially degraded and will therefore be only partially representative of its source. This is where scientific knowledge and interpretation come in.

A third confusion concerns the term ‘print’ as in ‘fingerprint’, ‘shoe print’, etc. The term ‘finger print’ or ‘fingerprint’ was used by Galton (Galton 1892) to designate the pattern made by papillary ridges on the skin of the hands, fingers and feet, patterns that were ‘printed’ to register habitual criminals in order to identify them again after a later arrest. The perception that it could be used to identify a source through the finger traces left at scenes of investigation was a separate, fundamental change in perspective proposed by Faulds (Faulds 1880). It became the embodiment of what could be used to identify with alleged certainty. The term was therefore adopted by biologists (DNA fingerprinting; virus, bacteria or plant fingerprinting), by chemists (the fingerprint region of an IR spectrum), by forensic scientists (voiceprint) and even by computer scientists (website or server fingerprinting) to indicate the highly selective nature of a technique, feature or method. A single term should apply to all of those: a ‘profile’ that emerges from the analyses. A bibliographic database search for ‘fingerprint’ will deliver thousands of papers that have nothing to do with the ridge features of the skin, or with forensic science for that matter. But the confusion is not only created outside the field; within forensic science some have started to designate, without distinction, the pattern (made by the ridges), the reproduction of the pattern for databases or comparison (the print of this pattern, the true fingerprint) and the traces left unknowingly while touching an object (the mark made by the pattern, the fingermark), making the discussion quite confusing at times.
In the UK, the distinction is provided by the ‘fingerprint’ for the reference print and ‘fingermark’ for the trace: ‘the fingermark which, generally, implies a lesser quality impression that includes latent, partial, distorted, reversed (tonally or laterally), or superimposed impressions’ (Champod et al. 2016).

Kind and colleagues (Kind et al. 1979) showed, as far back as 1978 in Wichita, that the diversity of vocabulary in use was very confusing: a single word could describe different concepts, while different words could apply to a single concept. Although they did not distinguish between samples and specimens (see above), they found 21 different terms designating samples (i.e., reference material). These authors did not go very far, but stopped at defining three specific concepts:

  1. ‘samples with known association with crime’, which they call the ‘crime sample’: this may be paint from a door where a burglar used a tool, the soil from the garden where a body was found, etc.—that is, samples representative of the environment with which the criminal has had contact (paint on a tool or soil on car mat are traces that can be compared with these samples);
  2. ‘samples with suspected association with crime’ for which they offer the name of ‘questioned sample’. This definition does not make clear whether it is a sample, but it is apparent from the context that these are the specimens—trace material that may be compared to the crime samples of (1);
  3. ‘samples from reference collection’ or ‘reference samples’: these constitute the accumulated events, as they say, in the statistical sense, that compose the collection of data.

This initial attempt at defining what was perceived to be highly confusing was not pursued, but occasional questions arise in the literature. They also failed to note some very important distinctions among reference samples, and indeed their ‘crime sample’ is a form of reference in itself, perhaps the most relevant.

References can cover many relevant concepts for forensic science, less so in other disciplines:

  • The controls, which are of three types: blanks, positive controls and negative controls:
    • Blanks are usually crime samples taken to document background information. This process was used, as far as we know, for the first time in forensic science by Orfila, considered the founder of forensic toxicology with his treatise on poisons (Orfila 1814), who took soil from the environment of an exhumed body suspected to have been poisoned by arsenic (in the Lafarge case in 1840). The presence of arsenic in the body could not be explained by diffusion from the environment, since the soil was devoid of arsenic; the arsenic found in the body must have been ingested, and the allegation of poisoning could be decided upon by the court. Such blanks are important controls in forensic toxicology and fire investigation.
    • Positive controls are samples known to contain the type of material for which the analyses are done; they should show the expected positive results, demonstrating that the technique or method works as it should. This type of control was systematically used on reaction plates in serology when testing for blood groups or species: wells were arranged with a blank control (saline extraction of the background), a well with human blood of group A, a well with group B and a well with group O. Antibodies were tested to verify that they reacted with the correct antigen; antibody anti-A should react with A and only A.
    • Negative controls. In the serology example above, when testing antibody anti-A, there should be no reaction with samples of blood group B or O; these are the negative controls. The absence of reaction with the blank corresponds to the blank control described above.
  • The standards: certified materials/measures against which a specimen is tested. Standards organizations provide or validate these standards, which may be used to build a reference collection that may be shared and against which methods and techniques are calibrated. This allows for comparisons on a larger scale, across borders and jurisdictions.
  • Population collections: collections built from samples of a relevant population, which help determine the frequency or rarity of features relevant to the interpretation of findings.
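The control logic of the serology example above can be written down as a simple validity check; the plate layout and well names here are illustrative only, not a documented protocol.

```python
# Minimal sketch (hypothetical plate layout) of the control logic for a
# serology test of antibody anti-A: the blank and negative-control wells
# must not react, the positive-control well must.

def plate_valid(reactions):
    """reactions: well name -> bool (observed agglutination)."""
    return (reactions["blank"] is False        # background extract must not react
            and reactions["group_A"] is True   # positive control: anti-A reacts with A
            and reactions["group_B"] is False  # negative control
            and reactions["group_O"] is False) # negative control

# A plate behaving as expected passes; a reacting blank invalidates the run.
print(plate_valid({"blank": False, "group_A": True,
                   "group_B": False, "group_O": False}))
```

Only when all three kinds of control behave as expected can the result on the questioned specimen be reported at all, which is the sense in which these references belong to the discipline rather than to courtroom afterthoughts.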

It is apparent that these distinctions may be made when questions are raised in court or after reading reports, but they should be part of the overall discipline because they demonstrate a deep understanding of the role of forensic science and traceology.

Another dimension requiring clarification or emphasis (already mentioned earlier in this chapter) is the distinction between contamination and pollution. Any investigation scene is contaminated. Contamination cannot be avoided, and one may say that relevance is the essential factor distinguishing the clue-giving trace from the background that constitutes the history of the investigation scene. There is an illusion that contamination can be avoided by careful handling of the scene (overalls, gloves, masks, etc.). These measures are useful and necessary to avoid introducing further, unwanted elements to the scene. Awareness of this risk justifies the use of an essentially negative term, pollution, for something that investigative teams know they must avoid, whereas background and relevant traces belong to the fundamental baggage of forensic scientists.

Finally, a problem that was almost unknown 30 or 40 years ago is now becoming extremely demanding and needs attention: the distinction between an original and a copy. With current electronic developments, the distinction between the original of a document and a copy (mail, letter, contract, image) may be very difficult to make. Indeed, the signal used to produce one document is identical for a second copy of the same document, a third, a fourth, etc., and resolving the question may require massive investigation into the metadata of huge quantities of electronic events or actions. The change of scale and the critical issue of chain of custody become even more pressing. This issue is essential in electronic investigations and has unfortunately received too little attention so far; but changes are rapid, and this observation may be obsolete by the time this chapter is published!


Forensic science has a specific object of analysis: the trace, left as witness of an event that needs to be investigated and understood. This vector is a rich support for information whose reading may be very complex and is obtained through multiple and various dimensions. The complexity makes its approach a permanent research problem whose solution is never known; it may be alleged, supposed and expected, but the ground truth remains in the past and is largely not accessible. Science, using logic and analyses, is capable of providing rich information of value to investigators, to researchers of crime and to courts. Because of its materiality, the trace can be questioned, measured and analyzed in different ways without uncontrollable variations in its dimensions, unlike less robust information such as that produced by witnesses. The difficulty is not only scientific, since the trace’s status often remains accessory to the more valued human investigation in traditional policing, and in many instances single pieces of particular interest in a given inquiry are afforded poor recognition. Science has a role to play, but that role needs to be defined and based on science, not on scientists as providers of certified and accredited tests. Scientists trained in the discipline also need to understand that the language used in their endeavours underscores the problems they tackle and covers essential concepts unique, at least to some extent, to the discipline. This chapter highlights some of these conceptual difficulties, mostly ignored or misunderstood by scientists working in forensic science who are specialists of their own trade, be it chemistry, physics, biology or computer science. Their disciplines are useful and rich, but there is a large gap to bridge before they offer their full potential in the study of crimes and their signs.
Forensic science has been in crisis around the world for the last twenty years, through lack of will or understanding and the limited research dedicated to the scientifically relevant questions. Calls have been published focusing on relevant research and the problem-solving paradigm (Margot 2011; Margot 2011b; De Forest 2001), but current initiatives in many countries continue to focus attention on technologies, standards and regulatory measures rather than on science. This is a wake-up call.


The distinction between remnants and vestiges is subtle and shows a gradation, the remnant being a residue or a small remaining part that may not be visible, whereas the vestige is usually a visible sign of something. They are used synonymously in this chapter.


Aitken, C.G.G., and F. Taroni. 2004. Statistics and the Evaluation of Evidence for Forensic Scientists. Chichester, UK: John Wiley & Sons.
Baechler, S., M. Morelato, O. Ribaux, A. Beavis, M. Tahtouh, K.P. Kirkbride, P. Esseiva, P. Margot and C. Roux. 2015. ‘Forensic Intelligence Framework. Part II: Study of the Main Generic Building Blocks and Challenges through the Examples of Illicit Drugs and False Identity Documents Monitoring’. Forensic Science International, 250(May): 44–52.
Champod, C., C. Lennard, P. Margot and M. Stoilovic. 2016. Fingerprints and Other Skin Ridge Impressions. Boca Raton, FL: CRC Press.
Cleland, C.E. 2001. ‘Historical Science, Experimental Science, and the Scientific Method’. Geology, 29(11): 987–990.
Cleland, C.E. 2002. ‘Methodological and Epistemic Differences between Historical Science and Experimental Science’. Philosophy of Science, 69: 474–496.
Cleland, C.E. 2013. ‘Common Cause Explanation and the Search for a Smoking Gun’. Geological Society of America Special Papers, 502: 1–9.
Collins, P.I., G.F. Johnson, A. Choy, K.T. Davidson and R.E. MacKay. 1998. ‘Advances in Violent Crime Analysis and Law Enforcement: The Canadian Violent Crime Linkage Analysis System’. Journal of Government Information, 25(3): 277–284.
De Forest, P.R. 2001. ‘What Is Trace Evidence?’ In B. Caddy, ed., Forensic Examination of Glass and Paint: Analysis and Interpretation, 1–25. London, UK: Taylor and Francis.
Egger, S.A. 1984. ‘A Working Definition of Serial Murder and the Reduction of Linkage Blindness’. Journal of Police Science and Administration, 12(3): 348–355.
Faulds, H. 1880. ‘On the Skin-Furrows of the Hand’. Nature, 22: 605.
Galton, F. 1892. Finger Prints. London: Macmillan and Company.
Girod, A., C. Champod and O. Ribaux. 2008. Traces de souliers. Lausanne: Presses polytechniques et universitaires romandes.
Gross, H. 1893. Handbuch für Untersuchungsrichter als System der Kriminalistik. Graz: Leuschner und Lubensky.
Hazard, D. 2014. ‘La pertinence en science forensique’. PhD diss., Université de Lausanne.
Kind, S.S. 1987. The Scientific Investigation of Crime. Harrogate: Forensic Science Services Ltd.
Kind, S.S., R. Wigmore, P.H. Whitehead and D.X. Loxley. 1979. ‘Terminology in Forensic Science’. Journal of the Forensic Science Society, 19(3): 189–191.
Lociciro, S., P. Hayoz, P. Esseiva, L. Dujourdy, F. Besacier and P. Margot. 2007. ‘Cocaine Profiling for Strategic Intelligence Purposes: A Cross-Border Project between France and Switzerland. Part I. Optimisation and Harmonisation of the Profiling Method’. Forensic Science International, 167(2–3): 220–228.
Margot, P. 2011. ‘Forensic Science on Trial—What Is the Law of the Land?’ Australian Journal of Forensic Sciences, 43(2–3): 83–97.
Margot, P. 2011b. ‘Commentary on The Need for a Research Culture in the Forensic Sciences’. UCLA Law Review, 58: 795–801.
Margot, P. 2014. ‘Traçologie: la trace, vecteur fondamental de la police scientifique’. Revue internationale de criminologie et de police technique et scientifique, 67(1): 72–97.
Morelato, M., S. Baechler, O. Ribaux, A. Beavis, M. Tahtouh, P. Kirkbride, C. Roux and P. Margot. 2014. ‘Forensic Intelligence Framework. Part I: Induction of a Transversal Model by Comparing Illicit Drugs and False Identity Documents Monitoring’. Forensic Science International, 236(March): 181–190.
National Academy of Sciences. 2009. Strengthening Forensic Science in the United States: A Path Forward. Washington, DC: National Academies Press.
Neumann, C., I.W. Evett and J. Skerrett. 2012. ‘Quantifying the Weight of Evidence from a Forensic Fingerprint Comparison: A New Paradigm’. Journal of the Royal Statistical Society. Series A (Statistics in Society), 175(2): 371–415.
Orfila, M.J.B. 1814. Le traité des poisons. Paris: Crochard.
Pearson, E.F., R.W. May and M.G.D. Dabbs. 1971. ‘Glass and Paint Fragments Found in Men’s Outer Clothing—A Report of a Survey’. Journal of Forensic Sciences, 16(2): 283–302.
Ribaux, O., A. Baylon, C. Roux, O. Delémont, E. Lock, C. Zingg and P. Margot. 2010a. ‘Intelligence-Led Crime Scene Processing. Part I: Forensic Intelligence’. Forensic Science International, 195(1–3): 10–16.
Ribaux, O., A. Baylon, E. Lock, O. Delémont, C. Roux, C. Zingg and P. Margot. 2010b. ‘Intelligence-Led Crime Scene Processing. Part II: Intelligence and Crime Scene Examination’. Forensic Science International, 199(1–3): 63–71.
Ribaux, O., and P. Margot. 1999. ‘Inference Structures for Crime Analysis and Intelligence: The Example of Burglary Using Forensic Science Data’. Forensic Science International, 100(3): 193–210.
Ribaux, O., F. Taroni and P. Margot. 1995. ‘La recherche et la gestion des liens dans l’investigation criminelle: une étape vers l’exploitation systématique des données de police’. Revue internationale de criminologie et de police technique, 48: 229–242.
Robertson, B., G.A. Vignaux and C. Berger. 2014. Interpreting Evidence—Evaluating Forensic Science in the Courtroom. Chichester, UK: John Wiley & Sons Inc.
Rossy, Q., S. Ioset, D. Dessimoz and O. Ribaux. 2013. ‘Integrating Forensic Information in a Crime Intelligence Database’. Forensic Science International, 230(1–3): 137–146.
Taroni, F., A. Biedermann, S. Bozza, P. Garbolino and C. Aitken. 2014. Bayesian Networks for Probabilistic Inference and Decision Analysis in Forensic Science. Chichester, UK: John Wiley & Sons.
Walsh, S.J., C. Roux, A. Ross, O. Ribaux and J.S. Buckleton. 2002. ‘Forensic DNA Profiling: Beyond Identification’. Law Enforcement Forum, 2(3): 13–21.