Trusted system
In the security engineering subspecialty of computer science, a trusted system is a system that is relied upon to a specified extent to enforce a specified security policy. As such, a trusted system is one whose failure may break a specified security policy.
Trusted systems in classified information
Trusted systems are used for the processing, storage and retrieval of sensitive or classified information.
Central to the concept of U.S. Department of Defense-style "trusted systems" is the notion of a "reference monitor", which is an entity that occupies the logical heart of the system and is responsible for all access control decisions. Ideally, the reference monitor is (a) tamperproof, (b) always invoked, and (c) small enough to be subject to independent testing, the completeness of which can be assured. Per the U.S. National Security Agency's 1983 Trusted Computer System Evaluation Criteria (TCSEC), or Orange Book, a set of "evaluation classes" was defined that described the features and assurances that the user could expect from a trusted system.
The highest levels of assurance were guaranteed by significant system engineering directed toward minimization of the size of the trusted computing base, or TCB, defined as that combination of hardware, software, and firmware that is responsible for enforcing the system's security policy.
Because failure of the TCB breaks the trusted system, higher assurance is provided by minimizing the TCB. An inherent engineering conflict arises in higher-assurance systems: the smaller the TCB, the larger the set of hardware, software, and firmware that lies outside the TCB. This may lead to some philosophical arguments about the nature of trust, based on the notion that a "trustworthy" implementation may not necessarily be a "correct" implementation from the perspective of users' expectations.
In contrast to the TCSEC's precisely defined hierarchy of six evaluation classes, the more recently introduced Common Criteria (CC)—which derive from a blend of more or less technically mature standards from various NATO countries—provide a more tenuous spectrum of seven "evaluation classes" that intermix features and assurances in an arguably non-hierarchical manner and lack the philosophic precision and mathematical stricture of the TCSEC. In particular, the CC tolerate very loose identification of the "target of evaluation" (TOE) and support—even encourage—an intermixture of security requirements culled from a variety of predefined "protection profiles." While a strong case can be made that even the more seemingly arbitrary components of the TCSEC contribute to a "chain of evidence" that a fielded system properly enforces its advertised security policy, not even the highest (EAL7) level of the CC can truly provide analogous consistency and stricture of evidentiary reasoning.
The mathematical notions of trusted systems for the protection of classified information derive from two independent but interrelated corpora of work. In 1974, David Bell and Leonard LaPadula of MITRE, working under the close technical guidance and economic sponsorship of Maj. Roger Schell, Ph.D., of the U.S. Air Force Electronic Systems Division (Hanscom AFB, MA), devised what is known as the Bell-LaPadula model, in which a more or less trustworthy computer system is modeled in terms of objects (passive repositories or destinations for data, such as files, disks, and printers) and subjects (active entities—perhaps users, or system processes or threads operating on behalf of those users—that cause information to flow among objects). The entire operation of a computer system can indeed be regarded as a "history" (in the serializability-theoretic sense) of pieces of information flowing from object to object in response to subjects' requests for such flows.
At the same time, Dorothy Denning at Purdue University was publishing her Ph.D. dissertation, which dealt with "lattice-based information flows" in computer systems. (A mathematical "lattice" is a partially ordered set, characterizable as a directed acyclic graph, in which the relationship between any two vertices is either "dominates," "is dominated by," or neither.) She defined a generalized notion of "labels"—corresponding more or less to the full security markings one encounters on classified military documents, e.g., TOP SECRET WNINTEL TK DUMBO—that are attached to entities. Bell and LaPadula integrated Denning's concept into their landmark MITRE technical report, entitled Secure Computer System: Unified Exposition and Multics Interpretation, whereby labels attached to objects represent the sensitivity of the data contained within the object (though there can be, and often is, a subtle semantic difference between the sensitivity of the data within the object and the sensitivity of the object itself), while labels attached to subjects represent the trustworthiness of the user executing the subject. The concepts are unified by two properties: the "simple security property" (a subject can only read from an object that it dominates ["is greater than" is a close enough, albeit mathematically imprecise, interpretation]) and the "confinement property," or "*-property" (a subject can only write to an object that dominates it). (These properties are loosely referred to as "no-read-up" and "no-write-down," respectively.) Jointly enforced, these properties ensure that information cannot flow "downhill" to a repository whence insufficiently trustworthy recipients may discover it. By extension, assuming that the labels assigned to subjects are truly representative of their trustworthiness, then the no-read-up and no-write-down rules rigidly enforced by the reference monitor are provably sufficient to constrain Trojan horses, one of the most general classes of attack (the popularly reported worms and viruses, for instance, are specializations of the Trojan horse concept).
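These two properties can be made concrete with a short sketch. The following Python fragment is illustrative only: the linear ordering of levels and the dominates, can_read, and can_write names are invented for this example, and it ignores compartments, which a real lattice-based TCB would also compare.

```python
# Illustrative sketch of the Bell-LaPadula "no-read-up" / "no-write-down" rules.
# A full lattice would compare both the hierarchical level and the category set.

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def dominates(a: str, b: str) -> bool:
    """True if label a dominates (is at least as high as) label b."""
    return LEVELS[a] >= LEVELS[b]

def can_read(subject_label: str, object_label: str) -> bool:
    # Simple security property: read only what you dominate ("no read up").
    return dominates(subject_label, object_label)

def can_write(subject_label: str, object_label: str) -> bool:
    # *-property (confinement): write only to what dominates you ("no write down").
    return dominates(object_label, subject_label)

if __name__ == "__main__":
    assert can_read("SECRET", "CONFIDENTIAL")          # read down: allowed
    assert not can_read("CONFIDENTIAL", "TOP SECRET")  # read up: denied
    assert can_write("CONFIDENTIAL", "SECRET")         # write up: allowed
    assert not can_write("SECRET", "CONFIDENTIAL")     # write down: denied
```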
The Bell-LaPadula model technically enforces only "confidentiality," or "secrecy," controls, i.e., it addresses the problem of the sensitivity of objects and the attendant trustworthiness of subjects not to disclose it inappropriately. The dual problem of "integrity," i.e., the problem of the accuracy (even the provenance) of objects and the attendant trustworthiness of subjects not to modify or destroy it inappropriately, is addressed by mathematically affine models, the most important of which is named for its creator, K. J. Biba. Other integrity models include the Clark-Wilson model and Shockley and Schell's program integrity model.
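Biba's strict integrity model is, roughly, the dual of Bell-LaPadula: labels are compared the same way, but the permitted read and write directions are inverted ("no read down, no write up"). A minimal sketch, with invented integrity labels and the same simplifying linear ordering as the previous example:

```python
# Illustrative sketch of Biba strict integrity, the dual of Bell-LaPadula.

INTEGRITY = {"LOW": 0, "MEDIUM": 1, "HIGH": 2}

def i_dominates(a: str, b: str) -> bool:
    return INTEGRITY[a] >= INTEGRITY[b]

def can_read(subject: str, obj: str) -> bool:
    # "No read down": a high-integrity subject must not consume low-integrity data.
    return i_dominates(obj, subject)

def can_write(subject: str, obj: str) -> bool:
    # "No write up": a low-integrity subject must not contaminate high-integrity data.
    return i_dominates(subject, obj)
```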
An important feature of the class of security controls described supra, termed mandatory access controls, or MAC, is that they are entirely beyond the control of any user: the TCB automatically attaches labels to any subjects executed on behalf of users; to files created, deleted, read, or written by users; and so forth. In contrast, an additional class of controls, termed discretionary access controls (DAC), are under the direct control of the system users. Familiar protection mechanisms such as permission bits (supported by UNIX since the late 1960s and—in a more flexible and powerful form—by Multics since earlier still) and access control lists (ACLs) are examples of discretionary access controls.
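A discretionary check can likewise be sketched in a few lines. The permission-bit test below uses Python's standard stat module; the toy ACL structure and the check_acl helper are hypothetical simplifications (real ACL semantics also involve groups, masks, and default entries):

```python
import stat

# UNIX-style permission bits: does this mode grant the owner write access?
mode = 0o640                                  # rw-r-----
owner_may_write = bool(mode & stat.S_IWUSR)   # True

# A toy discretionary ACL: the object's owner edits this list at will,
# which is exactly what makes the control "discretionary" rather than mandatory.
acl = [
    ("alice", {"read", "write"}),
    ("bob",   {"read"}),
]

def check_acl(user: str, wanted: str) -> bool:
    """Grant access only if some ACL entry for this user includes the requested right."""
    return any(user == who and wanted in rights for who, rights in acl)

assert check_acl("bob", "read")
assert not check_acl("bob", "write")
```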
The behavior of a trusted system is often characterized in terms of a mathematical model—which may be more or less rigorous, depending upon applicable operational and administrative constraints—that takes the form of a finite state machine (FSM) with state criteria; state transition constraints; a set of "operations" that correspond to state transitions (usually, but not necessarily, one operation per transition); and a descriptive top-level specification, or DTLS, entailing a user-perceptible interface (e.g., an API, a set of system calls [in UNIX parlance] or system exits [in mainframe parlance]), each element of which engenders one or more model operations.
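As a deliberately tiny illustration of this finite-state-machine view, the sketch below models a single session whose permitted transitions are enumerated explicitly; any operation that is not a listed transition is rejected, which is the essence of a state transition constraint. The state names, operations, and Session class are invented for this example.

```python
# A minimal finite state machine with explicit transition constraints.
# Operations correspond to state transitions; anything not listed is rejected.

ALLOWED = {
    ("logged_out", "login"):  "logged_in",
    ("logged_in",  "open"):   "file_open",
    ("file_open",  "close"):  "logged_in",
    ("logged_in",  "logout"): "logged_out",
}

class Session:
    def __init__(self) -> None:
        self.state = "logged_out"

    def request(self, operation: str) -> bool:
        """Apply an operation only if it is an allowed transition from the current state."""
        nxt = ALLOWED.get((self.state, operation))
        if nxt is None:
            return False          # transition constraint violated; state unchanged
        self.state = nxt
        return True

s = Session()
assert s.request("login") and s.request("open")
assert not s.request("login")     # already logged in: not an allowed transition
```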
Trusted systems in trusted computing
The Trusted Computing Group creates specifications that are meant to address particular requirements of trusted systems, including attestation of configuration and safe storage of sensitive information.
Trusted systems in policy analysis
Trusted systems in the context of national or homeland security, law enforcement, or social control policy are systems in which some conditional prediction about the behavior of people or objects within the system has been determined prior to authorizing access to system resources.
For example, trusted systems include the use of "security envelopes" in national security and counterterrorism applications, "trusted computing" initiatives in technical systems security, and the use of credit or identity scoring systems in financial and anti-fraud applications; in general, they include any system (i) in which probabilistic threat or risk analysis is used to assess "trust" for decision-making before authorizing access or for allocating resources against likely threats (including their use in the design of system constraints to control behavior within the system), or (ii) in which deviation analysis or systems surveillance is used to ensure that behavior within systems complies with expected or authorized parameters.
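A default-deny, risk-scored authorization gate of the kind described in clause (i) might look like the following sketch; the indicator names, weights, and threshold are hypothetical placeholders rather than any fielded system's actual model.

```python
# Sketch of a DEFAULT=DENY authorization gate driven by a probabilistic risk score.
# Indicator weights and the threshold are invented for illustration.

WEIGHTS = {"unverified_identity": 0.5, "anomalous_location": 0.3, "prior_flags": 0.2}
THRESHOLD = 0.4

def risk_score(indicators: dict[str, bool]) -> float:
    """Combine boolean risk indicators into a score in [0, 1]."""
    return sum(w for name, w in WEIGHTS.items() if indicators.get(name, False))

def authorize(indicators: dict[str, bool]) -> bool:
    # Default state is deny; access is granted only when assessed risk is low enough.
    return risk_score(indicators) < THRESHOLD

assert authorize({"prior_flags": True})                                   # low risk: allow
assert not authorize({"unverified_identity": True, "prior_flags": True})  # deny
```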
The widespread adoption of these authorization-based security strategies (where the default state is DEFAULT=DENY) for counterterrorism, anti-fraud, and other purposes is helping accelerate the ongoing transformation of modern societies from a notional Beccarian model of criminal justice based on accountability for deviant actions after they occur (see Cesare Beccaria, On Crimes and Punishments, 1764) to a Foucauldian model based on authorization, preemption, and general social compliance through ubiquitous preventative surveillance and control through system constraints (see Michel Foucault, Discipline and Punish, 1975; Alan Sheridan, tr., 1977, 1995).
In this emergent model, "security" is geared not towards policing but towards risk management through surveillance, exchange of information, auditing, communication, and classification. These developments have led to general concerns about individual privacy and civil liberty and to a broader philosophical debate about the appropriate forms of social governance methodologies.
Trusted systems in information theory
Trusted systems in the context of information theory are based on the definition of trust given by Ed Gerck: "Trust is that which is essential to a communication channel but cannot be transferred from a source to a destination using that channel."
In Information Theory, information has nothing to do with knowledge or meaning. In the context of Information Theory, information is simply that which is transferred from a source to a destination, using a communication channel. If, before transmission, the information is available at the destination, then the transfer is zero. Information received by a party is that which the party does not expect—as measured by the uncertainty of the party as to what the message will be.
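The sense in which received information is "that which the party does not expect" is the usual Shannon measure: a message whose content the destination already knows carries zero information, while a maximally uncertain source carries the most. A small illustration (the example distributions are arbitrary):

```python
import math

def entropy(probs: list[float]) -> float:
    """Shannon entropy in bits: the receiver's average uncertainty about the message."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([1.0]))          # 0.0 -> message fully expected: no information transferred
print(entropy([0.5, 0.5]))     # 1.0 -> one bit of information per message
print(entropy([0.25] * 4))     # 2.0 -> more uncertainty, more information
```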
Likewise, trust as defined by Gerck has nothing to do with friendship, acquaintances, employee-employer relationships, loyalty, betrayal, and other overly variable concepts. Trust is not taken in the purely subjective sense either, nor as a feeling or something purely personal or psychological—trust is understood as something potentially communicable. Further, this definition of trust is abstract, allowing different instances and observers in a trusted system to communicate based on a common idea of trust (otherwise communication would be isolated in domains), where all necessarily different subjective and intersubjective realizations of trust in each subsystem (man and machines) may coexist.
Taken together in the model of Information Theory, information is what you do not expect and trust is what you know. Linking both concepts, trust is seen as qualified reliance on received information. In terms of trusted systems, an assertion of trust cannot be based on the record itself, but on information from other information channels. The deepening of these questions leads to complex conceptions of trust which have been thoroughly studied in the context of business relationships. It also leads to conceptions of information where the "quality" of information integrates trust or trustworthiness in the structure of the information itself and of the information system(s) in which it is conceived: higher quality in terms of particular definitions of accuracy and precision means higher trustworthiness.
An introduction to the calculus of trust (for example: "If I connect two trusted systems, are they more or less trusted when taken together?") has also been given within this framework.
The IBM Federal Software Group has suggested that this is the most useful definition of trust for application in an information technology environment, because it is related to other information theory concepts and provides a basis for measuring trust. In a network-centric enterprise services environment, such a notion of trust is considered to be requisite for achieving the desired collaborative, service-oriented architecture vision.
See also
- accuracy and precision
- computer security
- data quality
- information quality
- secure computing
- trusted computing