Influence diagrams approach
Influence Diagrams Approach (IDA) is a technique used in the field of Human Reliability Assessment (HRA) to evaluate the probability of a human error occurring throughout the completion of a specific task. From such analyses, measures can then be taken to reduce the likelihood of errors occurring within a system and therefore improve the overall level of safety. There are three primary reasons for conducting an HRA: error identification, error quantification and error reduction. The many techniques used for these purposes fall into one of two classifications: first-generation techniques and second-generation techniques. First-generation techniques work on the basis of the simple dichotomy of ‘fits/doesn’t fit’ in matching the error situation in context with related error identification and quantification, whereas second-generation techniques are more theory-based in their assessment and quantification of errors. HRA techniques have been utilised in a range of industries, including healthcare, engineering, nuclear, transportation and the business sector; each technique has varying uses within different disciplines.
An influence diagram (ID) is essentially a graphical representation of the probabilistic interdependence between Performance Shaping Factors (PSFs), factors which are likely to influence the success or failure of the performance of a task. The approach originates from the field of decision analysis and uses expert judgement in its formulations. It rests on the principle that human reliability results from the combination of organisational and individual factors, which in turn combine to provide an overall influence. There exists a chain of influences in which each successive level affects the next. The role of the ID is to depict these influences and the nature of their interrelationships in a more comprehensible format. In this way, the diagram may be used to represent the shared beliefs of a group of experts on the outcome of a particular action and the factors that may or may not influence that outcome. For each of the identified influences, quantitative values are calculated, which are then used to derive final Human Error Probability (HEP) estimates.
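In outline, and using generic notation that is not taken from the source, the final estimate combines the conditional assessments by the law of total probability, summing the probability of the error over the possible states of the influences:

```latex
\mathrm{HEP} = P(\text{error}) = \sum_{j} P(\text{error} \mid I = i_j)\, P(I = i_j)
```

where I denotes the combined state of the identified influences and i_j ranges over its possible states.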
Background
IDA is a decision-analysis-based framework which is developed by eliciting expert judgement in group workshops. Unlike other first-generation HRA techniques, IDA explicitly considers the interdependency of operator and organisational PSFs. The IDA approach was first outlined by Howard and Matheson [1], and then developed specifically for the nuclear industry by Embrey et al. [2].
IDA Methodology
The IDA methodology is conducted in a series of 10 steps, as follows:
1. Describe all relevant conditioning events
Experts who have sufficient knowledge of the situation under evaluation form a group; in-depth knowledge is essential for the technique to be used to its optimal potential. The chosen individuals include a range of experts, typically those with first-hand experience in the operational context under consideration, such as plant supervisors, reliability assessors, human factors specialists and designers. The group collectively assesses and gradually develops a representation of the most significant influences which will affect the success of the situation. The resultant diagram is useful in that it identifies both the immediate and the underlying influences of the considered factors, with regard to their effect on the situation under assessment and upon one another.
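As a purely illustrative sketch of the kind of structure such a group might record, the following Python fragment lists bottom-level and middle-level influences and notes which bottom-level influences condition each middle-level one; all node names are hypothetical and are not taken from the source.

```python
# Hypothetical influence-diagram structure: bottom-level influences condition
# middle-level influences, which in turn condition the target event.
influence_diagram = {
    "bottom": ["task_design", "procedures", "team_formation"],
    "middle": {
        # each middle-level influence lists the bottom-level influences
        # judged to condition it
        "quality_of_information": ["task_design", "procedures"],
        "organisation": ["procedures", "team_formation"],
    },
    "target": "task_failure",
}

# Each influence is judged to be in one of two states.
states = {name: ("favourable", "unfavourable")
          for name in influence_diagram["bottom"]}

print(influence_diagram["middle"]["quality_of_information"])
print(states["task_design"])
```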
2. Refine the target event definition
The event which forms the basis of the assessment must be defined as tightly as possible.
3. Balance of Evidence
The next stage is to select a middle-level event in the situation and, using each of the bottom-level influences, assess the weight of evidence, also known as the ‘balance of evidence’; this represents expert analysis of the likelihood that a specific state of an influence, or a combination of the various influences, exists within the considered situation.
4. Assess the weight of evidence for this middle-level influence, conditional on the bottom-level influences
5. Repeat 3 and 4 for the remaining middle-level and bottom-level influences
These three steps are conducted with the aim of determining the extent to which the influences exist in the process, alone and in different combinations, and their conditional effects.
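A minimal numerical sketch of steps 3 to 5 is given below; the influence names and all figures are hypothetical and serve only to show the shape of the elicited judgements, namely a balance of evidence for each bottom-level influence and a weight of evidence for a middle-level influence conditional on each combination of bottom-level states.

```python
from itertools import product

# Balance of evidence that each bottom-level influence is in its
# favourable state (hypothetical expert judgements).
p_bottom = {"task_design_good": 0.7, "procedures_meaningful": 0.6}

# Weight of evidence that the middle-level influence "quality of
# information is good" holds, conditional on each combination of
# bottom-level states (True = favourable). Hypothetical figures.
p_info_good_given = {
    (True, True): 0.9,
    (True, False): 0.6,
    (False, True): 0.5,
    (False, False): 0.1,
}

# Likelihood of each combination of bottom-level states, alongside the
# conditional assessment elicited for that combination.
for design_good, proc_meaningful in product((True, False), repeat=2):
    w = ((p_bottom["task_design_good"] if design_good
          else 1 - p_bottom["task_design_good"])
         * (p_bottom["procedures_meaningful"] if proc_meaningful
            else 1 - p_bottom["procedures_meaningful"]))
    print(f"combination {(design_good, proc_meaningful)}: "
          f"weight {w:.2f}, conditional assessment "
          f"{p_info_good_given[(design_good, proc_meaningful)]:.2f}")
```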
6. Assess probabilities of target event conditional on middle-level influences
7. Calculate the unconditional probability of target event and unconditional weight of evidence of middle-level influences
For the various combinations of influences that have been considered, the experts provide direct estimates of the likelihood of either success or failure.
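A minimal sketch of steps 6 and 7 follows, using two binary middle-level influences and hypothetical figures, and, purely for brevity, treating the two middle-level influences as independent: the unconditional probability of the target event is obtained by weighting the conditional failure estimates by the unconditional weights of evidence of the middle-level states.

```python
from itertools import product

# Step 7 (part): unconditional weights of evidence that each middle-level
# influence is in its favourable state (hypothetical figures).
p_info_good = 0.65       # quality of information is good
p_org_requisite = 0.55   # organisation is requisite

# Step 6: expert estimates of failure of the target event conditional on
# each combination of middle-level states (True = favourable). Hypothetical.
p_fail_given_middle = {
    (True, True): 0.001,
    (True, False): 0.01,
    (False, True): 0.02,
    (False, False): 0.1,
}

# Step 7: unconditional probability of the target event (the HEP) by the
# law of total probability over the middle-level states.
hep = 0.0
for info_good, org_req in product((True, False), repeat=2):
    weight = ((p_info_good if info_good else 1 - p_info_good)
              * (p_org_requisite if org_req else 1 - p_org_requisite))
    hep += p_fail_given_middle[(info_good, org_req)] * weight

print(f"Estimated HEP for the target event: {hep:.4f}")  # ~0.0229
```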
8. Compare these results to the holistic judgements of HEPs by the assessors. Revise if necessary to reduce discrepancies.
At this stage the probabilities derived from the use of the technique are compared to holistic estimates from the experts, which have been derived through an Absolute Probability Judgement (APJ) process. Discrepancies are discussed and resolved within the group as required.
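As a hypothetical illustration of the comparison in step 8 (the figures and the factor-of-two tolerance below are arbitrary and not taken from the source):

```python
ida_hep = 0.023   # HEP propagated through the influence diagram
apj_hep = 0.010   # holistic Absolute Probability Judgement estimate

# Flag discrepancies larger than an (arbitrary) factor of two for discussion.
ratio = max(ida_hep, apj_hep) / min(ida_hep, apj_hep)
if ratio > 2.0:
    print(f"Discrepancy of a factor of {ratio:.1f}: revisit the assessments")
else:
    print("IDA-derived and holistic estimates are broadly consistent")
```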
9. Repeat above steps until assessors are finished refining their judgements
The above steps are iterated: all experts share opinions, highlight new aspects of the problem and revise their initial assessments of the situation. The process is deemed complete when all participants reach a consensus that any remaining misgivings about the discrepancies have been resolved.
10. Perform sensitivity analyses
If individual experts remain unsure about the assessments which have been made, sensitivity analysis can be used to determine the extent to which individual influence assessments affect the target-event HEP. A cost-benefit analysis may also be conducted at this stage of the process.
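A minimal sketch of such a sensitivity analysis, reusing the hypothetical figures from the step 6-7 sketch above: the weight of evidence for one middle-level influence is swept over a range and the resulting change in the target-event HEP is observed.

```python
from itertools import product

# Hypothetical conditional failure probabilities (as in the step 6-7 sketch).
P_FAIL = {(True, True): 0.001, (True, False): 0.01,
          (False, True): 0.02, (False, False): 0.1}

def hep(p_info_good, p_org_requisite):
    """Unconditional failure probability for given middle-level weights."""
    total = 0.0
    for info, org in product((True, False), repeat=2):
        w = ((p_info_good if info else 1 - p_info_good)
             * (p_org_requisite if org else 1 - p_org_requisite))
        total += P_FAIL[(info, org)] * w
    return total

baseline = hep(0.65, 0.55)
# Sweep the 'quality of information' weight and compare against the baseline.
for p in (0.40, 0.65, 0.90):
    print(f"P(information good) = {p:.2f} -> HEP = {hep(p, 0.55):.4f} "
          f"(baseline {baseline:.4f})")
```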
Example
The diagram below depicts an influence diagram which can be applied to any human reliability assessment [3]. The diagram was originally developed for use in the HRA of a scenario in a nuclear power setting. It depicts the direct influences of each of the factors on the situation under consideration, as well as providing an indication of the way in which some of the factors affect each other.
There are seven first-level influences on the outcome of the high-level task, numbered 1 to 7. Each of these describes an aspect of the task under assessment, which must be judged to be in one of two states.
- The design of the task is judged to be either good or bad
- The procedures involved in the completion of the task are either meaningful or not meaningful
- The operators either possess a role in the task that is of primary importance or one that is not considered a primary role
- For the purposes of completing the task, teams of individuals may or may not be formed
- The stress levels associated with the task can affect performance, rendering individuals either functional or not functional
- The surrounding work ethic and environment in which the task takes place provide either a good level of morale or a poor level of motivation
- The competence of the individuals responsible for carrying out the task is either high or low
Differing combinations of these first-level influences affect the state of those on the second level.
- The quality of information, which can be classed as either good or bad, depends upon the meaningfulness of the procedures of the task and the task design.
- The organisation, assessed as either requisite or not requisite, is determined by the role of operations functions in completing the task, the meaningfulness of the procedures and whether or not teams are formed to complete the task.
- The personal aspect of the task can be judged as either favourable for successful completion or unfavourable. This assessment depends on the competence of the individuals concerned, the stress levels present, their morale/motivation levels and whether or not teams are formed to complete the task.
By assessing the states of the second-level influences (the quality of information, the organisation and the personal factors), the overall likelihood of either success or failure of the task can be calculated by means of conditional probability calculations.
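A purely illustrative numerical sketch of that calculation is given below. None of the probabilities are taken from the source, and the rule used to fill in the conditional tables (a second-level influence is assumed favourable in proportion to the fraction of its first-level parents that are favourable) is a simplifying assumption made only to keep the example short.

```python
from itertools import product

# Balance of evidence that each first-level influence is in its favourable
# state (hypothetical figures).
p_bottom = {
    "task_design": 0.7, "procedures": 0.6, "operations_role": 0.8,
    "team_formation": 0.5, "stress_functional": 0.6, "morale": 0.7,
    "competence": 0.9,
}

# First-level parents of each second-level influence, as described above.
parents = {
    "information": ["task_design", "procedures"],
    "organisation": ["operations_role", "procedures", "team_formation"],
    "personal": ["competence", "stress_functional", "morale", "team_formation"],
}

def p_middle_favourable(name):
    """Weight of evidence that a second-level influence is favourable,
    assuming (for illustration only) it is favourable with probability equal
    to the fraction of its parents in their favourable state."""
    total = 0.0
    names = parents[name]
    for combo in product((True, False), repeat=len(names)):
        weight = 1.0
        for parent, state in zip(names, combo):
            weight *= p_bottom[parent] if state else 1 - p_bottom[parent]
        total += (sum(combo) / len(names)) * weight
    return total

p_mid = {name: p_middle_favourable(name) for name in parents}

# Hypothetical failure probabilities conditional on the second-level states
# (information good, organisation requisite, personal factors favourable).
p_fail = {combo: 0.001 if all(combo) else 0.01 * (4 - sum(combo))
          for combo in product((True, False), repeat=3)}

# Overall failure probability, conditioning on the second-level states
# (treated as independent here purely to keep the sketch short).
hep = 0.0
for combo in product((True, False), repeat=3):
    weight = 1.0
    for name, state in zip(p_mid, combo):
        weight *= p_mid[name] if state else 1 - p_mid[name]
    hep += p_fail[combo] * weight

print({name: round(value, 3) for name, value in p_mid.items()})
print(f"Overall probability of task failure: {hep:.4f}")
```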
Advantages of IDA
- Dependence between PSFs is explicitly acknowledged and modelled [3]
- It can be used at any task “level”, i.e. it can be used in a strategic overview or in a very fine breakdown of a task element [3]
- Data requirements are small and no calibration is necessary [3]
- PSFs are precisely defined and their influence is explored in depth [3]
- PSFs and other influences creating error-producing conditions are prioritised and, if desired, the less significant ones may be ignored
- Sensitivity analysis is possible with use of this technique [3]
- It is possible to generate large amounts of qualitative data through the group discussion process
Disadvantages of IDA
- Building IDAs is highly resource-intensive in terms of organising and supporting an extensive group session involving a suitable range of experts [3]
- Eliciting unbiased HEPs requires further research with regard to their accuracy and justification [3]