Interrupt latency
In real-time operating systems, interrupt latency is the time between the generation of an interrupt by a device and the servicing of the device which generated the interrupt. For many operating systems, devices are serviced as soon as the device's interrupt handler is executed. Interrupt latency may be affected by interrupt controllers, interrupt masking, and the operating system's (OS) interrupt handling methods.
Background
There is usually a trade-off between interrupt latency, throughput (the average rate of successful message delivery over a communication channel), and processor utilization. Many of the techniques of CPU and OS design that improve interrupt latency will decrease throughput and increase processor utilization. Techniques that increase throughput may increase interrupt latency and increase processor utilization. Lastly, trying to reduce processor utilization may increase interrupt latency and decrease throughput.
Minimum interrupt latency is largely determined by the interrupt controller circuit and its configuration. The controller and its configuration can also affect the jitter in the interrupt latency, which can drastically affect the real-time schedulability of the system. The Intel APIC Architecture is well known for producing a large amount of interrupt latency jitter.
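As a rough illustration of how latency jitter can be observed, the following C sketch times a periodic timer interrupt against its scheduled expiry and tracks the spread of the measured latency. The timer functions are hypothetical placeholders assumed for this example, not a real vendor API.

```c
#include <stdint.h>

/* Hypothetical hardware interfaces, assumed for this example only. */
extern uint64_t timer_now_ticks(void);       /* free-running counter */
extern uint64_t timer_expected_ticks(void);  /* tick at which the IRQ was scheduled */

static uint64_t latency_min = UINT64_MAX;
static uint64_t latency_max;

/* Timer interrupt handler: the difference between "now" and the scheduled
 * expiry is this interrupt's latency. The spread between the smallest and
 * largest observed latency is the jitter that can hurt schedulability. */
void timer_irq_handler(void)
{
    uint64_t latency = timer_now_ticks() - timer_expected_ticks();
    if (latency < latency_min) latency_min = latency;
    if (latency > latency_max) latency_max = latency;
    /* jitter observed so far = latency_max - latency_min */
}
```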
Maximum interrupt latency is largely determined by the methods an OS uses for interrupt handling. For example, most processors allow programs to disable interrupts, deferring the execution of interrupt handlers, in order to protect critical sections of code. During the execution of such a critical section, all interrupt handlers that cannot execute safely within it are blocked (they save the minimum amount of information required to restart the interrupt handler after all critical sections have been exited). The interrupt latency for a blocked interrupt is therefore extended to the end of the critical section, plus the time needed to service any interrupts of equal or higher priority that arrived while the block was in place.
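A minimal C sketch of this pattern is shown below; the irq_save_and_disable()/irq_restore() primitives are hypothetical placeholders for whatever a given processor or toolchain provides to mask and unmask interrupts, and are not taken from any particular API.

```c
#include <stdint.h>

/* Hypothetical platform primitives; the names are illustrative only.
 * They are assumed to mask/unmask all maskable interrupts on this CPU. */
extern uint32_t irq_save_and_disable(void);   /* returns previous interrupt state */
extern void     irq_restore(uint32_t state);  /* restores the saved state */

static volatile uint32_t shared_counter;      /* data shared with an interrupt handler */

/* Update shared state inside a critical section. While interrupts are
 * masked here, any pending interrupt's latency grows by the time spent
 * in this section, so the section is kept as short as possible. */
void increment_shared_counter(void)
{
    uint32_t state = irq_save_and_disable();  /* begin critical section */
    shared_counter++;                          /* not atomic on many CPUs without masking */
    irq_restore(state);                        /* end critical section: pending IRQs run now */
}
```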
Many computer systems require low interrupt latencies, especially embedded systems that need to control machinery in real time. Sometimes these systems use a real-time operating system (RTOS). An RTOS promises that no more than an agreed-upon maximum amount of time will pass between executions of subroutines. In order to do this, the RTOS must also guarantee that interrupt latency will never exceed a predefined maximum.
Considerations
There are many methods that hardware may use to increase the interrupt latency that can be tolerated. These include buffers and flow control. For example, most network cards implement transmit and receive ring buffers, interrupt rate limiting, and hardware flow control. Buffers allow data to be stored until it can be transferred, and flow control allows the network card to pause communications without having to discard data if the buffer is full.
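To illustrate the buffering idea, here is a minimal single-producer/single-consumer ring buffer sketch in C, of the kind a driver might use to queue received data until the processor can service it. The names and sizes are illustrative assumptions, not taken from any particular driver, and memory barriers are omitted for brevity.

```c
#include <stdint.h>
#include <stdbool.h>

#define RING_SIZE 256u   /* power of two so indices wrap with a simple mask */

struct ring {
    volatile uint32_t head;        /* next slot to write (producer, e.g. ISR) */
    volatile uint32_t tail;        /* next slot to read (consumer, e.g. task) */
    uint8_t data[RING_SIZE];
};

/* Called from the receive interrupt handler: store a byte if space remains.
 * Returning false models the "buffer full" case in which flow control (or a
 * drop) would be needed. */
static bool ring_put(struct ring *r, uint8_t byte)
{
    uint32_t head = r->head;
    if (head - r->tail == RING_SIZE)
        return false;                      /* full: caller must pause or drop */
    r->data[head & (RING_SIZE - 1u)] = byte;
    r->head = head + 1u;
    return true;
}

/* Called from non-interrupt context to drain buffered data later, which is
 * what lets the device tolerate a longer interrupt-service delay. */
static bool ring_get(struct ring *r, uint8_t *byte)
{
    uint32_t tail = r->tail;
    if (tail == r->head)
        return false;                      /* empty */
    *byte = r->data[tail & (RING_SIZE - 1u)];
    r->tail = tail + 1u;
    return true;
}
```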
Modern hardware also implements interrupt rate limiting. This helps prevent interrupt storms or livelock by having the hardware wait a programmable minimum amount of time between each interrupt it generates. Interrupt rate limiting reduces the amount of time spent servicing interrupts, allowing the processor to spend more time doing useful work. Exceeding the latency that the hardware can tolerate results in a soft (recoverable) or hard (non-recoverable) error.
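As an illustration of the rate-limiting idea, the following C sketch models a device that refuses to raise a new interrupt until a programmable minimum interval has elapsed since the previous one, coalescing events that arrive in between. The timer source, interval, and function names are assumptions made for this example.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical free-running timer, assumed to count microseconds. */
extern uint64_t timer_now_us(void);
/* Hypothetical hook that asserts the device's interrupt line. */
extern void assert_irq(void);

static uint64_t min_interval_us = 50;   /* programmable rate limit */
static uint64_t last_irq_us;
static bool     event_pending;

/* Called whenever the device has new work (e.g. a received packet).
 * Instead of interrupting immediately, it honours the rate limit; events
 * that arrive in the meantime are coalesced into one later interrupt. */
void device_event(void)
{
    uint64_t now = timer_now_us();
    if (now - last_irq_us >= min_interval_us) {
        last_irq_us = now;
        event_pending = false;
        assert_irq();               /* interrupt fires at most once per interval */
    } else {
        event_pending = true;       /* remember the event; raise it later */
    }
}

/* Called periodically (or on a timer expiry) to flush coalesced events. */
void device_poll(void)
{
    uint64_t now = timer_now_us();
    if (event_pending && now - last_irq_us >= min_interval_us) {
        last_irq_us = now;
        event_pending = false;
        assert_irq();
    }
}
```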
See also
- Advanced Programmable Interrupt Controller (APIC)
- Ethernet flow control
- IEEE 802.3 (802.3x PAUSE frames for flow control)
- Inter-processor interrupt (IPI)
- Interrupt
- Interrupt handler
- Non-maskable interrupt (NMI)
- Programmable Interrupt Controller (PIC)
- Response time (technology)
- Latency (engineering)
- Computer hardware and operating system latency