Turing machine equivalents
The following article is a supplement to the article Turing machine. Many of the machines described here have articles that offer much more information.
Machines equivalent to the Turing machine model
Turing equivalence
Many machines that might be thought to have more computational capability than a simple universal Turing machine can be shown to have no more power (Hopcroft and Ullman p. 159; cf. Minsky). They might compute faster, use less memory, or have a smaller instruction set, but they cannot compute more powerfully (i.e., compute more mathematical functions). (Recall that the Church-Turing thesis hypothesizes this to be true: that anything that can be "computed" can be computed by some Turing machine.)
While none of the following models have been shown to have more power than the single-tape, one-way infinite, multi-symbol Turing-machine model, their authors defined and used them to investigate questions and solve problems more easily than they could have if they had stayed with Turing's a-machine model.
The sequential-machine models:
All of the following are called "sequential machine models" to distinguish them from "parallel machine models" (van Emde Boas (1990) p. 18).
Tape-based Turing machines
- For more see the article Turing machine.
Turing's a-machine model: Turing's (1936) a-machine (his name) was left-ended and right-end-infinite. He provided the symbols əə to mark the left end. Any of a finite number of tape symbols were permitted. The instructions (if a universal machine), the "input", and the "output" were written only on "F-squares", while markers were to appear on "E-squares". In essence he divided his machine into two tapes that always moved together. The instructions appeared in a tabular form called "5-tuples" and were not executed sequentially.
Single-tape machines with restricted symbols and/or restricted instructions
The following models are single-tape Turing machines restricted with (i) a tape alphabet of { mark, blank }, and/or (ii) sequential, computer-like instructions, and/or (iii) fully atomized machine actions.
Post's "Formulation 1" model of computation
- For more see the article Post-Turing machine.
Emil Post (1936), in an independent description of a computational process, reduced the symbols allowed to the equivalent binary set of marks on the tape { "mark", "blank" = not_mark }. He changed the notion of "tape" from one-way infinite to the right to a two-way infinite sequence of rooms, each with a sheet of paper. He atomized the Turing 5-tuples into 4-tuples—motion instructions separate from print/erase instructions. Although his (1936) model is ambiguous about this, Post's (1947) model did not require sequential instruction execution.
His extremely simple model can emulate any Turing machine, and although his 1936 Formulation 1 does not use the word "program" or "machine", it is effectively a formulation of a very primitive programmable computer and associated programming language
, with the boxes acting as an unbounded bitstring memory, and the set of instructions constituting a program.
Wang machines
In an influential paper, Hao Wang (1954, 1957) reduced Post's "Formulation 1" to machines that still use a two-way infinite binary tape, but whose instructions are simpler — being the "atomic" components of Post's instructions — and are by default executed sequentially (like a "computer program"). His stated principal purpose was to offer, as an alternative to Turing's theory, one that "is more economical in the basic operations". His results were "program formulations" of a variety of such machines, including the 5-instruction Wang W-machine with the instruction-set
- { SHIFT-LEFT, SHIFT-RIGHT, MARK-SQUARE, ERASE-SQUARE, JUMP-IF-SQUARE-MARKED-to xxx }
and his most severely reduced 4-instruction Wang B-machine
("B" for "basic") with the instruction-set
- { SHIFT-LEFT, SHIFT-RIGHT, MARK-SQUARE, JUMP-IF-SQUARE-MARKED-to xxx }
which does not even have an ERASE-SQUARE instruction.
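To make the B-machine's operation concrete, here is a minimal interpreter sketch in Python (not from Wang's papers): the tuple encoding of instructions, the representation of the two-way infinite tape as a set of marked positions, and halting when control runs past the last instruction are assumptions of this sketch.

```python
# Minimal sketch of a Wang B-machine interpreter.  Assumptions: the tape is
# modelled as the set of marked positions, and the machine halts when the
# program counter runs past the last instruction.
def run_b_machine(program, marked=None, max_steps=10_000):
    """program: list of ('LEFT',), ('RIGHT',), ('MARK',), ('JUMP_IF_MARKED', target)."""
    tape = set(marked or [])          # positions currently holding a mark
    head, pc, steps = 0, 0, 0
    while 0 <= pc < len(program) and steps < max_steps:
        op = program[pc]
        if op[0] == 'LEFT':
            head -= 1
        elif op[0] == 'RIGHT':
            head += 1
        elif op[0] == 'MARK':
            tape.add(head)            # note: the B-machine has no ERASE
        elif op[0] == 'JUMP_IF_MARKED':
            if head in tape:
                pc, steps = op[1], steps + 1
                continue
        pc += 1
        steps += 1
    return tape, head

# Example: starting on a marked square, run right over a block of marks
# and mark the first blank square after it (i.e. extend the block by one).
prog = [
    ('RIGHT',),              # 0
    ('JUMP_IF_MARKED', 0),   # 1: still inside the block? keep moving
    ('MARK',),               # 2: first blank found: mark it, then halt
]
tape, head = run_b_machine(prog, marked={0, 1, 2})   # tape -> {0, 1, 2, 3}
```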
Many authors later introduced variants of the machines discussed by Wang:
Minsky (1961) evolved Wang's notion with his version of the (multi-tape) "counter machine" model that allowed SHIFT-LEFT and SHIFT-RIGHT motion of the separate heads but no printing at all. In this case the tapes would be left-ended, each end marked with a single "mark" to indicate the end. He was able to reduce this to a single tape, but at the expense of introducing multi-tape-square motion equivalent to multiplication and division rather than the much simpler { SHIFT-LEFT = DECREMENT, SHIFT-RIGHT = INCREMENT }.
Davis, adding an explicit HALT instruction to one of the machines discussed by Wang, used a model with the instruction-set
- { SHIFT-LEFT, SHIFT-RIGHT, ERASE, MARK, JUMP-IF-SQUARE-MARKED-to xxx, JUMP-to xxx, HALT }
and also considered versions with tape-alphabets of size larger than 2.
Böhm's theoretical machine language P"
- For details see the article P"
In keeping with Wang's project to seek a Turing-equivalent theory "economical in the basic operations", and wishing to avoid unconditional jumps, a notable theoretical language is the 4-instruction language P" introduced by Corrado Böhm
in 1964 — the first "GOTO-less" imperative "structured programming
" language to be proved Turing-complete.
Multi-tape Turing machines
In practical analysis, various types of multi-tape Turing machines are often used. Multi-tape machines are similar to single-tape machines, but there is some constant k number of independent tapes. The TABLE has full independent control over all the heads, any or all of which move and print/erase on their own tapes (cf. Aho-Hopcroft-Ullman 1974 p. 26). Most models have tapes with left ends and unbounded right ends.
This model intuitively seems much more powerful than the single-tape model, but any multi-tape machine, no matter how large k is, can be simulated by a single-tape machine using only quadratically more computation time (Papadimitriou 1994, Thm. 2.1). Thus, multi-tape machines cannot calculate any more functions than single-tape machines, and none of the robust complexity classes (such as polynomial time
) are affected by a change between single-tape and multi-tape machines.
Two-stack Turing machine
Two-stack Turing machines have a read-only input and two storage tapes. If a head moves left on either tape a blank is printed on that tape, but one symbol from a "library" can be printed.
Formal definition: multi-tape Turing machine
A k-tape Turing machine can be described as a 6-tuple M = ⟨Q, Γ, s, b, F, δ⟩ where
- Q is a finite set of states
- Γ is a finite set of the tape alphabet
- s ∈ Q is the initial state
- b ∈ Γ is the blank symbol
- F ⊆ Q is the set of final or accepting states
- δ : Q × Γ^k → Q × (Γ × {L, R, S})^k is a partial function called the transition function, where L is left shift, R is right shift, S is no shift.
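A minimal sketch of how this 6-tuple can be represented and stepped in Python follows; the dictionary encoding of δ, the sparse tape representation, and the halt-when-δ-is-undefined convention are illustrative assumptions rather than part of the formal definition.

```python
# Illustrative k-tape Turing machine simulator.  delta maps
# (state, (sym_1, ..., sym_k)) -> (new_state, ((write_1, move_1), ..., (write_k, move_k)))
# with moves in {'L', 'R', 'S'}; the machine halts when delta is undefined
# or a state in F is reached (assumed conventions for this sketch).
from collections import defaultdict

def run_k_tape(delta, s, b, F, inputs, max_steps=10_000):
    k = len(inputs)
    tapes = [defaultdict(lambda: b, enumerate(w)) for w in inputs]  # sparse tapes
    heads = [0] * k
    state = s
    for _ in range(max_steps):
        if state in F:
            break
        scanned = tuple(tapes[i][heads[i]] for i in range(k))
        if (state, scanned) not in delta:
            break                                   # no applicable rule: halt
        state, actions = delta[(state, scanned)]
        for i, (write, move) in enumerate(actions):
            tapes[i][heads[i]] = write
            heads[i] += {'L': -1, 'R': +1, 'S': 0}[move]
    return state, tapes, heads

# Example: copy tape 1 onto tape 2 until a blank is read.
delta = {
    ('copy', ('0', '_')): ('copy', (('0', 'R'), ('0', 'R'))),
    ('copy', ('1', '_')): ('copy', (('1', 'R'), ('1', 'R'))),
    ('copy', ('_', '_')): ('done', (('_', 'S'), ('_', 'S'))),
}
state, tapes, heads = run_k_tape(delta, 'copy', '_', {'done'}, ['101', ''])
```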
Deterministic and non-deterministic Turing machines
- For more see the article Non-deterministic Turing machine.
If the action table has at most one entry for each combination of symbol and state then the machine is a "deterministic Turing machine" (DTM). If the action table contains multiple entries for a combination of symbol and state then the machine is a "non-deterministic Turing machine" (NDTM). The two are computationally equivalent, that is, it is possible to turn any NDTM into a DTM (and vice versa).
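One way to see the NDTM-to-DTM direction is to simulate the nondeterministic machine deterministically by breadth-first search over its configurations, accepting if any branch accepts. The sketch below is illustrative only (single tape, dictionary-encoded transition relation, a bound on explored configurations); it is not the textbook tape-based construction.

```python
# Breadth-first search over NDTM configurations: a deterministic procedure
# that accepts exactly when some nondeterministic computation accepts.
from collections import deque

def ndtm_accepts(delta, start, accept, blank, word, max_configs=100_000):
    """delta: (state, symbol) -> list of (new_state, write, move), move in {-1, 0, +1}."""
    tape = tuple(word) if word else (blank,)
    start_cfg = (start, tape, 0)                   # (state, tape contents, head position)
    seen, queue = {start_cfg}, deque([start_cfg])
    while queue and len(seen) < max_configs:
        state, tape, head = queue.popleft()
        if state == accept:
            return True
        for new_state, write, move in delta.get((state, tape[head]), []):
            new_tape = list(tape)
            new_tape[head] = write
            new_head = head + move
            if new_head < 0:                       # extend the tape with blanks as needed
                new_tape.insert(0, blank)
                new_head = 0
            elif new_head >= len(new_tape):
                new_tape.append(blank)
            cfg = (new_state, tuple(new_tape), new_head)
            if cfg not in seen:
                seen.add(cfg)
                queue.append(cfg)
    return False

# Example: nondeterministically guess whether the input contains a '1'.
delta = {
    ('scan', '0'): [('scan', '0', +1)],
    ('scan', '1'): [('scan', '1', +1), ('yes', '1', 0)],   # two choices at each '1'
}
print(ndtm_accepts(delta, 'scan', 'yes', '_', '0010'))      # True
print(ndtm_accepts(delta, 'scan', 'yes', '_', '000'))       # False
```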
Oblivious Turing machines
An oblivious Turing machine is a Turing machine where the movement of the various heads is a fixed function of time, independent of the input. In other words, there is a predetermined sequence in which the various tapes are scanned, advanced, and written to. Pippenger and Fischer (1979) showed that any computation that can be performed by a multi-tape Turing machine in n steps can be performed by an oblivious two-tape Turing machine in O(n log n) steps.
Register machine models
- For more see the article Register machine.
van Emde Boas (1990) includes all machines of this type in one category (group, class, collection): "the register machine". However, historically the literature has also called the most primitive member of this group, i.e. the "counter machine", a "register machine", and the most primitive embodiment of a counter machine is sometimes called the "Minsky machine".
The "counter machine", also called a "register machine" model
- For more see the article Counter machine.
The primitive register machine model is, in effect, a multi-tape 2-symbol Post-Turing machine with its behavior restricted so that its tapes act like simple "counters".
By the time of Melzak, Lambek, and Minsky (all 1961) the notion of a "computer program" produced a different type of simple machine with many left-ended tapes cut from a Post-Turing tape. In all cases the models permit only two tape symbols { mark, blank }.
Some versions represent the positive integers as only a string/stack of marks in a "register" (i.e. a left-ended tape), with a blank tape representing the count "0". Minsky (1961) eliminated the PRINT instruction at the expense of providing his model with a mandatory single mark at the left end of each tape.
In this model the single-ended tapes-as-registers are thought of as "counters", their instructions restricted to only two (or three if the TEST/DECREMENT instruction is atomized). Two common instruction sets are the following:
- { INC ( r ), DEC ( r ), JZ ( r,z ) }, i.e. { INCrement contents of register #r; DECrement contents of register #r; IF contents of #r = Zero THEN Jump to instruction #z }
- { CLR ( r ), INC ( r ), JE ( ri, rj, z ) }, i.e. { CLeaR contents of register r; INCrement contents of r; compare contents of ri to rj and IF Equal THEN Jump to instruction z }
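As an illustration of the first instruction set, here is a minimal counter-machine interpreter and a small addition program in Python; the tuple encoding of instructions, the convention that DEC leaves an empty register at 0, and halting by running past the last instruction are assumptions of this sketch.

```python
# Illustrative counter-machine interpreter for the instruction set
# { INC(r), DEC(r), JZ(r, z) }.  Registers hold non-negative integers;
# DEC on an empty register is assumed to leave it at 0, and the machine
# halts when the program counter runs past the last instruction.
def run_counter_machine(program, registers, max_steps=100_000):
    regs = dict(registers)
    pc, steps = 0, 0
    while 0 <= pc < len(program) and steps < max_steps:
        op, *args = program[pc]
        if op == 'INC':
            regs[args[0]] = regs.get(args[0], 0) + 1
        elif op == 'DEC':
            regs[args[0]] = max(0, regs.get(args[0], 0) - 1)
        elif op == 'JZ':
            r, target = args
            if regs.get(r, 0) == 0:
                pc, steps = target, steps + 1
                continue
        pc += 1
        steps += 1
    return regs

# Example: ADD r1 into r0 (destroys r1): while r1 != 0 { DEC r1; INC r0 }.
add = [
    ('JZ', 'r1', 4),    # 0: if r1 == 0, jump past the program (halt)
    ('DEC', 'r1'),      # 1
    ('INC', 'r0'),      # 2
    ('JZ', 'r2', 0),    # 3: unconditional jump back via an always-zero register r2
]
print(run_counter_machine(add, {'r0': 2, 'r1': 3}))   # {'r0': 5, 'r1': 0}
```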
Although his model is more complicated than this simple description, the Melzak "pebble" model extended this notion of "counter" to permit multi-pebble adds and subtracts.
The Random Access Machine (RAM) model
- For more see the article Random access machine.
Melzak (1961) recognized a couple of serious defects in his register/counter-machine model: (i) without a form of indirect addressing he would not be able to "easily" show the model is Turing equivalent; (ii) the program and registers were in different "spaces", so self-modifying programs would not be easy. When Melzak added indirect addressing to his model he created a random access machine model.
(However, with Gödel numbering of the instructions, Minsky (1961) offered a proof that with such numbering the general recursive functions were indeed possible; in Minsky (1967) he offers a proof that μ recursion is indeed possible.)
Unlike the RASP model, the RAM model does not allow the machine's actions to modify its instructions. Sometimes the model works only register-to-register with no accumulator, but most models seem to include an accumulator.
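To make the role of indirect addressing concrete, here is a minimal accumulator-based RAM sketch in Python; the mnemonics (LOAD, LOAD_I, ADD, STORE, STORE_I) and the straight-line program format (no jumps) are assumptions chosen for brevity, not a standard instruction set.

```python
# Illustrative accumulator-based RAM fragment showing direct vs. indirect
# addressing (mnemonics and encoding are assumptions of this sketch).
def run_ram(program, memory):
    mem = dict(memory)          # register file: address -> non-negative integer
    acc = 0                     # accumulator
    for op, n in program:
        if op == 'LOAD':        # direct: acc <- [n]
            acc = mem.get(n, 0)
        elif op == 'LOAD_I':    # indirect: acc <- [[n]], register n holds an address
            acc = mem.get(mem.get(n, 0), 0)
        elif op == 'ADD':       # direct: acc <- acc + [n]
            acc += mem.get(n, 0)
        elif op == 'STORE':     # direct: [n] <- acc
            mem[n] = acc
        elif op == 'STORE_I':   # indirect: [[n]] <- acc
            mem[mem.get(n, 0)] = acc
    return acc, mem

# Register 1 holds a pointer to register 5; the indirect instructions follow it:
# "add register 2 into whatever register register 1 points at".
acc, mem = run_ram([('LOAD_I', 1), ('ADD', 2), ('STORE_I', 1)],
                   {1: 5, 2: 10, 5: 7})
# acc == 17 and mem[5] == 17
```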
van Emde Boas (1990) divides the various RAM models into a number of sub-types:
- SRAM: the "successor RAM", with only one arithmetic instruction, the successor (INCREMENT h). The others include "CLEAR h" and an IF equality-between-registers THEN jump-to xxx.
- RAM: the standard model with addition and subtraction
- MRAM: the RAM augmented with multiplication and division
- BRAM, MBRAM: Bitwise Boolean versions of the RAM and MRAM
- N****: Non-deterministic versions of any of the above with an N before the name
The Random Access Stored Program (RASP) machine model
- For more see the article Random access stored program machine.
The RASP is a RAM with the instructions stored together with their data in the same 'space', i.e. the same sequence of registers. The notion of a RASP was described at least as early as Kaphengst (1959). His model had a "mill" (an accumulator), but now the instructions were in the registers with the data: the so-called von Neumann architecture.
When the RASP has alternating even and odd registers, the even holding the "operation code" (instruction) and the odd holding its "operand" (parameter), indirect addressing is achieved by simply modifying an instruction's operand (cf. Cook and Reckhow 1973).
The original RASP model of Elgot and Robinson (1964) had only three instructions in the fashion of the register-machine model, but they placed them in the register space together with their data. (Here COPY takes the place of CLEAR when one register, e.g. "z" or "0", starts with and always contains 0. This trick is not unusual. The unit 1 in register "unit" or "1" is also useful.)
- { INC ( r ), COPY ( ri, rj ), JE ( ri, rj, z ) }
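A minimal sketch of such a RASP follows, with the program laid out in even (opcode) and odd (operand) registers as described above; the numeric opcodes and the particular accumulator instruction set are illustrative assumptions, not a published model.

```python
# Illustrative RASP: program and data share one register space, with even
# registers holding opcodes and odd registers holding operands.  Opcodes
# 0=HALT, 1=LOAD n, 2=STORE n, 3=ADD n are assumptions for this sketch.
def run_rasp(registers, max_steps=10_000):
    reg = dict(registers)
    acc, pc = 0, 0
    for _ in range(max_steps):
        op, n = reg.get(pc, 0), reg.get(pc + 1, 0)
        if op == 0:                       # HALT
            break
        elif op == 1:                     # LOAD n: acc <- [n]
            acc = reg.get(n, 0)
        elif op == 2:                     # STORE n: [n] <- acc
            reg[n] = acc
        elif op == 3:                     # ADD n: acc <- acc + [n]
            acc += reg.get(n, 0)
        pc += 2
    return reg

# Indirect addressing by self-modification: register 20 holds a pointer (30).
# The program writes that pointer into register 5, which is the operand of
# the LOAD at address 4, then executes the modified LOAD.
registers = {
    0: 1, 1: 20,      # LOAD 20   (fetch the pointer)
    2: 2, 3: 5,       # STORE 5   (overwrite the next LOAD's operand)
    4: 1, 5: 0,       # LOAD ?    (becomes LOAD 30 at run time)
    6: 2, 7: 21,      # STORE 21  (save the fetched value)
    8: 0, 9: 0,       # HALT
    20: 30, 30: 7,    # data: pointer and pointee
}
print(run_rasp(registers)[21])   # 7
```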
The RASP models allow indirect as well as direct addressing; some allow "immediate" instructions too, e.g. "Load accumulator with the constant 3". The instructions may be of a highly restricted set such as the following 16 instructions of Hartmanis (1971). This model uses an accumulator A. The mnemonics are those that the authors used (their CLA is "load accumulator" with constant or from register; STO is "store accumulator"). Their syntax, excepting the jumps, is: "n", "< n >", "<< n >>" for "immediate", "direct" and "indirect". Jumps are via two "Transfer instructions": TRA, an unconditional jump, directly by "n" or indirectly by "< n >", jamming the contents of register n into the instruction counter; and TRZ, a conditional jump taken if the accumulator is zero, in the same manner as TRA:
- { ADD n, ADD < n >, ADD << n >>, SUB n, SUB < n >, SUB << n >>, CLA n, CLA < n >, CLA << n >>, STO < n >, STO << n >>, TRA n, TRA < n >, TRZ n, TRZ < n >, HALT }
The Pointer machine model
- For more see the article Pointer machine.
A relative latecomer is Schönhage's Storage Modification Machine (1970), or pointer machine. Other versions are the Kolmogorov-Uspenskii machine and Knuth's "linking automaton" proposal (for references see pointer machine). Like a state-machine diagram, a node emits at least two labelled "edges" (arrows) that point to another node or nodes, which in turn point to other nodes, etc. The outside world points at the center node.
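The following Python sketch illustrates the kind of structure being described: nodes with one labelled out-edge per alphabet letter, addressed by words of edge labels followed from the center node. The class and method names are illustrative assumptions and do not reproduce Schönhage's actual instruction set.

```python
# Simplified sketch of a storage modification machine's storage: a finite
# directed graph whose nodes each carry one out-edge per letter of a fixed
# alphabet, addressed by words of edge labels followed from a center node.
class SMM:
    def __init__(self, alphabet=('0', '1')):
        self.alphabet = alphabet
        self.center = 0
        # every edge of a fresh node initially points back at the node itself
        self.nodes = {0: {a: 0 for a in alphabet}}

    def follow(self, word):
        """Return the node reached from the center by following the labels in word."""
        node = self.center
        for label in word:
            node = self.nodes[node][label]
        return node

    def new(self, word):
        """Create a fresh node and make the edge addressed by word point at it."""
        fresh = max(self.nodes) + 1
        self.nodes[fresh] = {a: fresh for a in self.alphabet}
        node = self.follow(word[:-1])
        self.nodes[node][word[-1]] = fresh
        return fresh

    def set(self, word_a, word_b):
        """Redirect the edge addressed by word_a to the node addressed by word_b."""
        node = self.follow(word_a[:-1])
        self.nodes[node][word_a[-1]] = self.follow(word_b)

# Example: build a two-node chain and check that the word '10' leads back home.
m = SMM()
m.new('1')          # the node reached by '1' is now a fresh node
m.set('10', '')     # its '0'-edge points back at the center
assert m.follow('10') == m.center
```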
Machines with input and output
Any of the above tape-based machines can be equipped with input and output tapes; any of the above register-based machines can be equipped with dedicated input and output registers. For example, the Schönhage pointer-machine model has two instructions called "input λ0,λ1" and "output β" (Schönhage 1990 p. 493).
It is difficult to study sublinear space complexity on multi-tape machines with the traditional model, because an input of size n already takes up space n. Thus, to study small DSPACE
classes, we must use a different model. In some sense, if we never "write to" the input tape, we don't want to charge ourselves for this space. And if we never "read from" our output tape, we don't want to charge ourselves for this space.
We solve this problem by introducing a k-string Turing machine with input and output. This is the same as an ordinary k-string Turing machine, except that the transition function is restricted so that the input tape can never be changed, and so that the output head can never move left. This model allows us to define deterministic space classes smaller than linear. Turing machines with input and output also have the same time complexity as other Turing machines; in the words of Papadimitriou 1994, Prop. 2.2:
- For any k-string Turing machine M operating within time bound f(n) there is a (k+2)-string Turing machine M' with input and output, which operates within time bound O(f(n)).
k-string Turing machines with input and output are used in the formal definition of the complexity resource DSPACE
in, for example, Papadimitriou 1994 (Def. 2.6).
Other equivalent machines and methods
- Multidimensional Turing machine: For example, a model by Schönhage (1990) uses the four head-movement commands { North, South, East, West }.
- Single-tape, multi-head Turing machine: In an undecidability proof of the "problem of tag", Minsky (1961) and Shepherdson and Sturgis (1963) described machines with a single tape that could write along the tape with one head and read further along the tape with another.
- Markov's (1954) Normal Algorithm is another remarkably simple computational model equivalent to the Turing machines.
- Lambda calculus
- Queue machine