Dead time
For detection systems that record discrete events, such as particle and nuclear detectors, the dead time is the time after each event during which the system is not able to record another event.
An everyday example is what happens when someone takes a photo using a flash: another picture cannot be taken immediately afterward, because the flash needs a few seconds to recharge. In addition to lowering the detection efficiency, dead times can have other effects, such as creating possible exploits in quantum cryptography.
Overview
The total dead time of a detection system is usually due to the contributions of the intrinsic dead time of the detector (for example the drift time in a gaseous ionization detector), of the analog front end (for example the shaping time of a spectroscopy amplifier) and of the data acquisition (DAQ) system (the conversion time of the ADCs and the readout and storage times).
The intrinsic dead time of a detector is often due to its physical characteristics; for example a spark chamber is "dead" until the potential between the plates recovers above a high enough value. In other cases the detector, after a first event, is still "live" and does produce a signal for the successive event, but the signal is such that the detector readout is unable to discriminate and separate them, resulting in an event loss or in a so called "pile-up" event where, for example, a (possibly partial) sum of the deposited energies from the two events is recorded instead. In some cases this can be minimised by an appropriate design, but often only at the expense of other properties like energy resolution.
The analog electronics can also introduce dead time; in particular, a shaping spectroscopy amplifier needs to integrate a fast-rise, slow-fall signal over the longest possible time (usually from 0.5 up to 10 microseconds) to attain the best possible resolution, so the user needs to choose a compromise between event rate and resolution.
Trigger logic is another possible source of dead time; besides the signal-processing time itself, spurious triggers caused by noise need to be taken into account.
Finally, digitisation, readout and storage of the event, especially in detection systems with a large number of channels like those used in modern high-energy physics experiments, also contribute to the total dead time. To alleviate the issue, medium and large experiments use sophisticated pipelining and multi-level trigger logic to reduce the readout rates.
From the total time a detection system is running, the dead time must be subtracted to obtain the live time.
Paralyzable and non-paralyzable behaviour
A detector, or detection system, can be characterized by paralyzable or non-paralyzable behaviour.
In a non-paralyzable detector, an event happening during the dead time since the previous event is simply lost, so with an increasing event rate the detector will reach a saturation rate equal to the inverse of the dead time.
In a paralyzable detector, an event happening during the dead time since the previous one will not just be missed, but will restart the dead time, so with increasing rate the detector will reach a saturation point beyond which it will be incapable of recording any event at all.
A semi-paralyzable detector exhibits an intermediate behaviour, in which an event arriving during the dead time does extend it, but not by the full amount, resulting in a detection rate that decreases when the event rate approaches saturation.
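The difference between the two behaviours can be illustrated with a small Monte Carlo sketch (hypothetical Python code, not part of the original article): the same kind of Poisson stream of arrivals is fed to both dead-time models, and the recorded rates are compared.

```python
import random

def simulate(rate, tau, total_time, paralyzable, seed=0):
    """Count events recorded by a detector with dead time tau (seconds).

    Arrivals form a Poisson process of the given rate; in the paralyzable
    model a lost arrival restarts the dead time, while in the
    non-paralyzable model it is simply ignored.
    """
    rng = random.Random(seed)
    t = 0.0            # current time
    dead_until = -1.0  # end of the current dead period
    recorded = 0
    while True:
        t += rng.expovariate(rate)     # wait for the next Poisson arrival
        if t > total_time:
            return recorded
        if t >= dead_until:
            recorded += 1              # detector was live: event recorded
            dead_until = t + tau
        elif paralyzable:
            dead_until = t + tau       # lost event still extends dead time

# with a 100 us dead time, the non-paralyzable model saturates near
# 1/tau = 10000/s, while the paralyzable one collapses at high rates
for f in (1e3, 1e4, 1e5):
    n = simulate(f, 1e-4, 10.0, paralyzable=False)
    p = simulate(f, 1e-4, 10.0, paralyzable=True)
    print(f"true {f:8.0f}/s  non-paralyzable {n / 10:8.1f}/s  paralyzable {p / 10:8.1f}/s")
```

At the highest rate the non-paralyzable detector records close to its saturation rate, while the paralyzable one records almost nothing, matching the qualitative description above.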
Analysis
It will be assumed that the events occur randomly with an average frequency of f, that is, they constitute a Poisson process. The probability that an event will occur in an infinitesimal time interval dt is then f dt. It follows that the probability P(t) that an event will occur at time t to t + dt, with no events occurring between t = 0 and time t, is given by the exponential distribution (Lucke 1974, Meeks 2008):

P(t) dt = f e^(−f t) dt

The expected time between events is then

⟨t⟩ = ∫₀^∞ t f e^(−f t) dt = 1/f
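This expectation can be checked numerically; the sketch below (hypothetical Python, using the standard library's exponential sampler) draws inter-event times at rate f and compares the sample mean with 1/f.

```python
import random

f = 50.0  # assumed average event rate, per second
rng = random.Random(1)
intervals = [rng.expovariate(f) for _ in range(200_000)]
mean = sum(intervals) / len(intervals)
print(f"sample mean interval {mean:.5f} s, expected 1/f = {1 / f:.5f} s")
```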
Non-paralizable analysis
For the non-paralyzable case, with a dead time of τ, the probability of measuring an event between t = 0 and t = τ is zero. Otherwise the probabilities of measurement are the same as the event probabilities. The probability of measuring an event at time t with no intervening measurements is then given by an exponential distribution shifted by τ:

P(t) dt = 0 for t < τ
P(t) dt = f e^(−f (t − τ)) dt for t ≥ τ

The expected time between measurements is then

⟨t⟩ = τ + 1/f
In other words, if M counts are recorded during a time interval T and the dead time τ is known, the actual number of events N may be estimated by

N ≈ M / (1 − M τ / T)
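A sketch of that correction for a non-paralyzable detector (hypothetical Python; `dead_time_correction` is an illustrative name, not an established API):

```python
def dead_time_correction(m, total_time, tau):
    """Estimate the true number of events from m recorded counts.

    Each recorded count costs tau seconds of live time, so the live
    fraction is (1 - m*tau/total_time) and the true count is
    m / (1 - m*tau/total_time). Valid for a non-paralyzable detector.
    All times are in seconds.
    """
    dead_fraction = m * tau / total_time
    if dead_fraction >= 1.0:
        raise ValueError("recorded rate exceeds the saturation rate 1/tau")
    return m / (1.0 - dead_fraction)

# 9000 counts in one second with a 10 us dead time: 9% of the time was dead
print(dead_time_correction(9000, 1.0, 1e-5))
```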
If the dead time is not known, a statistical analysis can yield the correct count. For example (Meeks 2008), if {tᵢ} is a set of measured intervals between events, the tᵢ will follow a shifted exponential distribution; but if a fixed value D is subtracted from each interval, with negative values discarded, the resulting distribution will be exponential as long as D is greater than the dead time τ. For an exponential distribution the following relationship holds:

⟨tⁿ⟩ = n! ⟨t⟩ⁿ

where n is any integer. If this relation is tested on many measured intervals with various values of D subtracted (and for various values of n), it should be found that for values of D above a certain threshold the relation is nearly satisfied, and the count rate derived from these modified intervals is equal to the true count rate.
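A numerical sketch of this procedure (hypothetical Python, testing the n = 2 case, i.e. ⟨t²⟩ = 2⟨t⟩² for an exponential distribution): intervals are drawn from a shifted exponential with dead time τ = 2 ms, and the moment ratio and derived rate are computed for several trial values of D.

```python
import random

f, tau = 100.0, 0.002  # true rate and (pretend unknown) dead time, seconds
rng = random.Random(2)
intervals = [tau + rng.expovariate(f) for _ in range(100_000)]

results = {}
for d in (0.0, 0.001, 0.002, 0.003):
    shifted = [t - d for t in intervals if t - d > 0]  # discard negatives
    m1 = sum(shifted) / len(shifted)
    m2 = sum(t * t for t in shifted) / len(shifted)
    ratio = m2 / (2 * m1 * m1)  # equals 1 for a pure exponential (n = 2)
    results[d] = (ratio, 1 / m1)
    print(f"D = {d:.3f}: <t^2>/(2<t>^2) = {ratio:.3f}, derived rate = {1 / m1:.1f}/s")
```

Once D reaches the true dead time, the moment ratio settles near 1 and the derived rate matches the true 100/s, even though τ itself was never used in the analysis.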
Time-To-Count
With a modern microprocessor-based ratemeter, one technique for measuring field strength with detectors that have a recovery time (e.g., Geiger–Müller tubes) is Time-To-Count. In this technique, the detector is armed at the same time as a counter is started. When a strike occurs, the counter is stopped. If this happens many times in a certain time period (e.g., two seconds), the mean time between strikes can be determined, and thus the count rate. Live time, dead time, and total time are thus measured, not estimated. This technique is used quite widely in radiation monitoring systems at nuclear power generating stations.

See also
- Data acquisition (DAQ)
- Allan variance
- Photomultiplier
- Positron emission tomography
- Class-D amplifier