Random access stored program machine
In theoretical computer science, the Random Access Stored Program (RASP) machine model is an abstract machine used for the purposes of algorithm development and algorithm complexity theory. The RASP is a Random Access Machine (RAM) model that, unlike the RAM, has its program in its "registers" together with its input. The registers are unbounded (infinite in capacity); whether the number of registers is finite is model-specific. Thus the RASP is to the RAM as the Universal Turing machine is to the Turing machine. The RASP is an example of the von Neumann architecture, whereas the RAM is an example of the Harvard architecture.
The RASP is the closest of all the abstract models to the common notion of a computer. But unlike actual computers, the RASP model usually has a very simple instruction set, greatly reduced from those of CISC and even RISC processors to the simplest arithmetic, register-to-register "moves", and "test/jump" instructions. Some models have a few extra registers such as an accumulator.
Together with the register machine, the RAM, and the pointer machine, the RASP makes up the four common sequential machine models, so called to distinguish them from the "parallel" models (e.g. the parallel random access machine) [cf. van Emde Boas (1990)].
Informal definition: Random Access Stored Program model (RASP)
Nutshell description of a RASP:
- The RASP is a Universal Turing Machine (UTM) built on a Random Access Machine (RAM) chassis.
The reader will remember that the UTM is a Turing machine
with a "Universal" finite-state table of instructions that can interpret any well-formed "program" written on the tape as a string of Turing 5-tuples, hence its universality. While the classical UTM model expects to find Turing 5-tuples on its tape, any program-set imaginable can be put there given that the Turing machine expects to find them -- given that its finite-state table can interpret them and convert them to the desired action. Along with the program, printed on the tape will be the input data/parameters/numbers (usually to the program's right), and eventually the output data/numbers (usually to the right of both, or intermingled with the input, or replacing it). The "user" must position the Turing machine's head over the first instruction, and the input must be placed in a specified place and format appropriate to both the program-on-tape and the finite-state machine's instruction-table.
The RASP mimics this construction: it places the "program" and "data" in the holes (registers). But unlike the UTM the RASP proceeds to "fetch" its instructions in a sequential manner, unless the conditional test sends it elsewhere.
A point of confusion: two sets of instructions: Unlike the UTM, the RASP model has two sets of instructions -- the state machine table of instructions (the "interpreter") and the "program" in the holes. The two sets do not have to be drawn from the same set.
An example of a RAM working as a RASP
The following example of a program will move the contents of register (hole) #18 to register (hole) #19, erasing the contents of #18 in the process. To keep the example short, the program-instructions will be drawn from a simple set:
- Target RASP program-instructions: { INC h; DEC h; JZ h,xxx; HALT }
To ease the example we will equip the state machine of the RAM-as-RASP with the primitive instructions drawn from the same set, but augmented with two indirect copy instructions:
- RAM state machine instructions:
- { INC h; DEC h; JZ h,xxx; CPY <<i>>,<j>; CPY <i>,<<j>> }
As the RASP machine's state machine interprets the program in the registers, what exactly will the state machine be doing? The "state machine instruction" column of the trace tables below will list, in time sequence, the state machine's actions as it "interprets" -- converts to action -- the program:
| hole # → | 1 (PC) | 2 (IR) | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| program, parameters → | 5 |  |  |  | JZ | 18 | 15 | DEC | 18 | INC | 19 | JZ | 19 | 5 | H |  |  | n |  |
| encoded program → | 5 |  |  |  | 3 | 18 | 15 | 2 | 18 | 1 | 19 | 3 | 19 | 5 | 0 |  |  | n |  |
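To make this layout concrete, here is a minimal Python sketch (not part of the original presentation; the helper name initial_registers and the list representation of the holes are illustrative assumptions) that builds the register file exactly as shown in the table, with the encoded program in holes #5-#15 and the datum n in hole #18:

```python
# Opcodes of the target RASP program-instructions, per the "encoded program" row above.
HALT, INC, DEC, JZ = 0, 1, 2, 3

def initial_registers(n, size=20):
    """Return the holes as a list; index 0 is unused so hole #k is regs[k]."""
    regs = [0] * size
    regs[1] = 5                    # PC (hole #1) points at the first program-instruction
    regs[5:16] = [JZ, 18, 15,      # JZ 18,15
                  DEC, 18,         # DEC 18
                  INC, 19,         # INC 19
                  JZ, 19, 5,       # JZ 19,5
                  HALT]            # H
    regs[18] = n                   # the datum to be moved into hole #19
    return regs

print(initial_registers(3))
```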
Tradition divides the state-machine's actions into two major "phases" called Fetch and Execute. We will observe below that there are "sub-phases" within these two major phases. There is no agreed-to convention; every model will require its own precise description.
Fetch phase
The state machine has access to all the registers, both directly and indirectly. So it adopts #1 as "the program counter" PC. The role of the program counter is to "keep the place" in the program's listing; the state machine has its own state register for its private use. Upon start, the state machine expects to find a number in the PC -- the address of the first "Program-Instruction" of the program (i.e. 5, pointing at hole #5).
(Without the use of the indirect COPYs, the task of getting the pointed-to program-instruction into #2 is a bit arduous. The state machine would indirectly decrement the pointed-to register while directly incrementing (empty) register #2. During the "parse" phase it will restore the sacrificed contents of #5 by sacrificing the count in #2.)
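As a hedged illustration of this "arduous" alternative (the function name move_by_counting and the list representation are assumptions, not the article's own notation), the transfer-by-counting could be sketched as:

```python
# Without an indirect COPY, the state machine can still move the opcode pointed to
# by the PC into register #2 by repeatedly decrementing the pointed-to hole while
# incrementing #2 -- destroying the source, which must later be rebuilt the same way.
def move_by_counting(regs, pc=1, dest=2):
    src = regs[pc]                 # the hole the PC points to
    while regs[src] != 0:          # "indirectly decrement the pointed-to register..."
        regs[src] -= 1
        regs[dest] += 1            # "...while directly incrementing register #2"
    # regs[dest] now holds the opcode; regs[src] has been sacrificed (it is now 0).

regs = [0, 5, 0, 0, 0, 3] + [0] * 14   # PC = 5; hole #5 holds opcode 3 (JZ)
move_by_counting(regs)
print(regs[2], regs[5])                # -> 3 0
```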
The point of the above detour is to show that life is much easier when the state machine has access to two kinds of indirect copy:
- copy indirect from i and direct to j: CPY <<i>>,<j>
- copy direct from i and indirect to j: CPY <i>,<<j>>
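A small sketch of these two copies, assuming the same list representation of the holes used above (the function names are illustrative):

```python
def cpy_indirect_to_direct(regs, i, j):
    """CPY <<i>>,<j>: copy the contents of the hole that hole i points to into hole j."""
    regs[j] = regs[regs[i]]

def cpy_direct_to_indirect(regs, i, j):
    """CPY <i>,<<j>>: copy the contents of hole i into the hole that hole j points to."""
    regs[regs[j]] = regs[i]

regs = [0, 5, 0, 7, 0, 42] + [0] * 14
cpy_indirect_to_direct(regs, 1, 2)   # regs[2] <- regs[regs[1]] = regs[5] = 42
cpy_direct_to_indirect(regs, 2, 3)   # regs[regs[3]] = regs[7] <- regs[2] = 42
print(regs[2], regs[7])              # -> 42 42
```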
The following example shows what happens during the state-machine's "fetch" phase. The state-machine's operations are listed in the column labelled "state machine instruction". Observe that at the end of the fetch, register #2 contains the numerical value 3 -- the "operation code" ("opcode") of the first instruction JZ:
In the trace tables that follow, only the registers whose contents change or are consulted are shown: #1 (the PC), #2 (the PIR, also written IR), the spare register #3, and the data registers #18 and #19. The encoded program -- 3, 18, 15, 2, 18, 1, 19, 3, 19, 5, 0 in holes #5 through #15 -- is never modified during this run. Brackets [ ] mark a register the current instruction has just written, "i" marks a register being used as an indirect pointer, and "!" marks the zero that triggers a conditional jump.

| step | label | state machine instruction | #1 (PC) | #2 (PIR) | #3 | #18 | #19 |
|---|---|---|---|---|---|---|---|
| 1 | fetch_instr: | CPY <<1>>,<2> | 5 i | [3] |  | n |  |
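In code, the whole fetch phase is a single indirect copy. A minimal sketch, assuming the list representation of the holes used earlier (the name fetch is illustrative):

```python
def fetch(regs, PC=1, PIR=2):
    """Fetch phase: CPY <<PC>>,<PIR> -- copy the opcode the PC points to into the PIR."""
    regs[PIR] = regs[regs[PC]]

#       1  2  3  4  5  6   7   8  9   10 11  12 13  14 15 16 17 18 19
regs = [0, 5, 0, 0, 0, 3, 18, 15, 2, 18, 1, 19, 3, 19, 5, 0, 0, 0, 7, 0]
fetch(regs)
print(regs[2])    # -> 3, the opcode of the first instruction JZ
```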
Parse phase: Now that the number of the program-instruction (e.g. 3 = "JZ") is in register #2 -- the "Program-Instruction Register" PIR -- the state machine proceeds to decrement the number until the IR is empty:
If the IR were empty before decrement then the program-instruction would be 0 = HALT, and the machine would jump to its "HALT" routine. After the first decrement, if the hole were empty the instruction would be INC, and the machine would jump to instruction "inc_routine". After the second decrement, the empty IR would represent DEC, and the machine would jump to the "dec_routine". After the third decrement, the IR is indeed empty, and this causes a jump to the "JZ_routine" routine. If an unexpected number were still in the IR, then the machine would have detected an error and could HALT (for example).
| step | label | state machine instruction | #1 (PC) | #2 (IR) | #3 | #18 | #19 |
|---|---|---|---|---|---|---|---|
| 1 | fetch_instr: | CPY <<1>>,<2> | 5 i | [3] |  | n |  |
| 2 | parse_instr: | JZ 2,halt | 5 | 3 |  | n |  |
| 3 |  | DEC 2 | 5 | [2] |  | n |  |
| 4 |  | JZ 2,inc_routine | 5 | 2 |  | n |  |
| 5 |  | DEC 2 | 5 | [1] |  | n |  |
| 6 |  | JZ 2,dec_routine | 5 | 1 |  | n |  |
| 7 |  | DEC 2 | 5 | [0] |  | n |  |
| 8 |  | JZ 2,JZ_routine | 5 | 0 ! |  | n |  |
| -- | halt: | HALT |  |  |  |  |  |
| -- | inc_routine: | etc. |  |  |  |  |  |
| -- | dec_routine: | etc. |  |  |  |  |  |
| 9 | JZ_routine: | etc. | 5 | 0 |  | n |  |
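The parse phase traced above can be sketched as a decrement-and-test cascade. A hedged illustration (the routine names mirror the labels in the table; returning them as strings is an assumption made for the example):

```python
def parse(regs, PIR=2):
    """Decode the opcode in the PIR by decrementing it, testing for zero after each step."""
    if regs[PIR] == 0: return "halt"          # JZ 2,halt
    regs[PIR] -= 1                            # DEC 2
    if regs[PIR] == 0: return "inc_routine"   # JZ 2,inc_routine
    regs[PIR] -= 1                            # DEC 2
    if regs[PIR] == 0: return "dec_routine"   # JZ 2,dec_routine
    regs[PIR] -= 1                            # DEC 2
    if regs[PIR] == 0: return "JZ_routine"    # JZ 2,JZ_routine
    return "error"                            # unexpected opcode: the machine could HALT

regs = [0, 5, 3] + [0] * 17                   # the PIR holds 3, the opcode of JZ
print(parse(regs))                            # -> 'JZ_routine'
```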
Execute phase, JZ_routine
Now the state machine knows what program-instruction to execute; indeed it has jumped to the "JZ_routine" sequence of instructions. The JZ instruction has 2 operands -- (i) the number of the register to test, and (ii) the address to go to if the test is successful (the hole is empty).
(i) Operand fetch -- which register to test for empty?: Analogous to the fetch phase, the finite state machine moves the contents of the register pointed to by the PC, i.e. hole #6, into the Program-Instruction Register PIR #2. It then uses the contents of register #2 to point to the register to be tested for zero, i.e. register #18. Hole #18 contains a number "n". To do the test, the state machine now uses the contents of the PIR to indirectly copy the contents of register #18 into a spare register, #3. So there are two eventualities: (ia) register #18 is empty, (ib) register #18 is not empty.
(ia): If register #3 is empty then the state machine jumps to (ii) Second operand fetch -- fetch the jump-to address.
(ib): If register #3 is not empty then the state machine can skip (ii) Second operand fetch. It simply increments the PC twice and then unconditionally jumps back to the instruction-fetch phase, where it fetches program-instruction #8 (DEC).
(ii) Operand fetch -- jump-to address. If register #3 is empty, the state machine increments the PC (to #7) and then uses the PC to indirectly copy the contents of the register it points to (#7) into the PC itself. Now the PC holds the jump-to address 15. Then the state machine unconditionally goes back to the instruction-fetch phase, where it fetches program-instruction #15 (HALT).
| step | label | state machine instruction | #1 (PC) | #2 (IR) | #3 | #18 | #19 |
|---|---|---|---|---|---|---|---|
| 9 | JZ_routine: | INC 1 | [6] | 0 |  | n |  |
| 10 |  | CPY <<1>>,<2> | 6 i | [18] |  | n |  |
| 11 | test hole: | CPY <<2>>,<3> | 6 | 18 i | [n] | n |  |
| 12 | test hole: | JZ 3,jump | 6 | 18 | n | n |  |

Eventuality (ib) -- the tested register #18 is not empty (n is not 0), so no jump occurs:

| step | label | state machine instruction | #1 (PC) | #2 (IR) | #3 | #18 | #19 |
|---|---|---|---|---|---|---|---|
| 13 | no_jump: | INC 1 | [7] | 18 | n | n |  |
| 14 |  | INC 1 | [8] | 18 | n | n |  |
| 15 |  | J fetch_instr | 8 | 18 | n | n |  |
| 1 | fetch_instr: | CPY <<1>>,<2> | 8 i | [2] | n | n |  |
| 2 | parse: | etc. |  |  |  |  |  |

Eventuality (ia) -- the tested register #18 is empty (i.e. n is 0), so the jump is taken:

| step | label | state machine instruction | #1 (PC) | #2 (IR) | #3 | #18 | #19 |
|---|---|---|---|---|---|---|---|
| 13 | jump: | INC 1 | [7] | 18 | n | n |  |
| 14 |  | CPY <<1>>,<1> | [15] | 18 | n | n |  |
| 15 |  | J fetch_instr | 15 | 18 | n | n |  |
| 1 | fetch_instr: | CPY <<1>>,<2> | 15 i | [0] | n | n |  |
| 2 | parse: | etc. |  |  |  |  |  |
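A sketch of the JZ_routine traced above, covering the operand fetches, the zero test, and both eventualities (the helper names and list representation are assumptions):

```python
def jz_routine(regs, PC=1, PIR=2, SPARE=3):
    regs[PC] += 1                      # INC 1: step past the opcode to the first operand
    regs[PIR] = regs[regs[PC]]         # CPY <<1>>,<2>: which register to test (e.g. #18)
    regs[SPARE] = regs[regs[PIR]]      # CPY <<2>>,<3>: copy its contents into spare #3
    if regs[SPARE] != 0:               # JZ 3,jump fails: eventuality (ib), no jump
        regs[PC] += 2                  # INC 1; INC 1: skip over the jump-to address
    else:                              # eventuality (ia): take the jump
        regs[PC] += 1                  # INC 1: point at the jump-to address
        regs[PC] = regs[regs[PC]]      # CPY <<1>>,<1>: load the jump-to address into the PC

#       1  2  3  4  5  6   7   8  9   10 11  12 13  14 15 16 17 18 19
regs = [0, 5, 0, 0, 0, 3, 18, 15, 2, 18, 1, 19, 3, 19, 5, 0, 0, 0, 0, 0]   # hole #18 is empty
jz_routine(regs)
print(regs[1])    # -> 15: the PC now points at the HALT instruction
```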
Execute phase INC, DEC
The following completes the RAM's state-machine interpretation of the program-instructions INC h and DEC h, and thus completes the demonstration of how a RAM can "impersonate" a RASP:
- Target program instruction set: { INC h; DEC h; JZ h,xxx; HALT }
Without indirect state-machine instructions INCi and DECi, to execute the INC and DEC program-instructions the state machine must use indirect copy to get the contents of the pointed-to register into spare register #3, DEC or INC it, and then use indirect copy to send it back to the pointed-to register.
| step | label | state machine instruction | #1 (PC) | #2 (IR) | #3 | #18 | #19 |
|---|---|---|---|---|---|---|---|
| 15 |  | J fetch_instr | 8 | 18 | n | n |  |
| 16 | fetch_instr: | CPY <<1>>,<2> | 8 i | [2] | n | n |  |
| 17 | parse: | JZ 2,halt | 8 | 2 | n | n |  |
| 18 |  | DEC 2 | 8 | [1] | n | n |  |
| 19 |  | JZ 2,inc_routine | 8 | 1 | n | n |  |
| 20 |  | DEC 2 | 8 | [0] | n | n |  |
| 21 |  | JZ 2,dec_routine | 8 | 0 ! | n | n |  |
| 22 | dec_routine: | INC 1 | [9] | 0 | n | n |  |
| 23 |  | CPY <<1>>,<2> | 9 i | [18] | n | n |  |
| 24 |  | CPY <<2>>,<3> | 9 | 18 i | [n] | n |  |
| 25 |  | JZ 3,*+2 | 9 | 18 | n | n |  |
| 26 |  | DEC 3 | 9 | 18 | [n-1] | n |  |
| 27 |  | CPY <3>,<<2>> | 9 | 18 i | n-1 | [n-1] |  |
| 28 |  | INC 1 | [10] | 18 | n-1 | n-1 |  |
| 29 |  | J fetch_instr | 10 | 18 | n-1 | n-1 |  |
| 30 | fetch_instr: | CPY <<1>>,<2> | 10 i | [1] | n-1 | n-1 |  |
| 31 | parse: | JZ 2,halt | 10 | 1 | n-1 | n-1 |  |
| 32 |  | DEC 2 | 10 | [0] | n-1 | n-1 |  |
| 33 |  | JZ 2,inc_routine | 10 | 0 ! | n-1 | n-1 |  |
| 34 | inc_routine: | INC 1 | [11] | 0 | n-1 | n-1 |  |
| 35 |  | CPY <<1>>,<2> | 11 i | [19] | n-1 | n-1 |  |
| 36 |  | CPY <<2>>,<3> | 11 | 19 i | [0] | n-1 | 0 |
| 37 |  | INC 3 | 11 | 19 | [1] | n-1 | 0 |
| 38 |  | CPY <3>,<<2>> | 11 | 19 i | 1 | n-1 | [1] |
| 39 |  | INC 1 | [12] | 19 | 1 | n-1 | 1 |
| 40 |  | J fetch_instr | 12 | 19 | 1 | n-1 | 1 |
| 41 | fetch_instr: | etc. | 12 | 19 | 1 | n-1 | 1 |
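A sketch of the dec_routine and inc_routine traced above: the pointed-to hole is copied into spare register #3, bumped there, and copied back (helper names and list representation are illustrative assumptions):

```python
def dec_routine(regs, PC=1, PIR=2, SPARE=3):
    regs[PC] += 1                       # INC 1: point at the operand (the hole number)
    regs[PIR] = regs[regs[PC]]          # CPY <<1>>,<2>: e.g. 18
    regs[SPARE] = regs[regs[PIR]]       # CPY <<2>>,<3>: fetch the hole's contents
    if regs[SPARE] != 0:                # JZ 3,*+2: skip the DEC if the hole is already empty
        regs[SPARE] -= 1                # DEC 3
    regs[regs[PIR]] = regs[SPARE]       # CPY <3>,<<2>>: put the result back
    regs[PC] += 1                       # INC 1: on to the next program-instruction

def inc_routine(regs, PC=1, PIR=2, SPARE=3):
    regs[PC] += 1                       # same pattern as dec_routine, but INC 3
    regs[PIR] = regs[regs[PC]]
    regs[SPARE] = regs[regs[PIR]]
    regs[SPARE] += 1                    # INC 3
    regs[regs[PIR]] = regs[SPARE]
    regs[PC] += 1

#       1  2  3  4  5  6   7   8  9   10 11  12 13  14 15 16 17 18 19
regs = [0, 8, 0, 0, 0, 3, 18, 15, 2, 18, 1, 19, 3, 19, 5, 0, 0, 0, 3, 0]   # PC at DEC 18, #18 = 3
dec_routine(regs)
inc_routine(regs)                       # the PC has moved on to INC 19
print(regs[18], regs[19], regs[1])      # -> 2 1 12
```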
Alternate instructions: Although the demonstration resulted in a primitive RASP of only four program-instructions, the reader might imagine how an additional instruction such as "ADD <h>" or "MULT <h_a>,<h_b>" might be done.
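For instance, a hypothetical "ADD" program-instruction with two operands (source hole, destination hole) could be handled by the same operand-fetch machinery. The sketch below is purely illustrative -- the opcode number, the operand layout, and the use of host arithmetic (which a primitive state machine would instead perform with a loop of INC/DEC) are all assumptions:

```python
def add_routine(regs, PC=1, PIR=2, SPARE=3):
    regs[PC] += 1
    regs[PIR] = regs[regs[PC]]          # first operand: source hole a
    regs[SPARE] = regs[regs[PIR]]       # contents of hole a into spare #3
    regs[PC] += 1
    regs[PIR] = regs[regs[PC]]          # second operand: destination hole b
    regs[regs[PIR]] += regs[SPARE]      # hole b <- hole b + hole a (host arithmetic for brevity)
    regs[PC] += 1

regs = [0] * 20
regs[1], regs[5:8], regs[18], regs[19] = 5, [4, 18, 19], 2, 7   # assume opcode 4 = ADD
add_routine(regs)
print(regs[19])   # -> 9
```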
Self-Modifying RASP programs
When a RAM is acting as a RASP, something new has been gained: unlike the RAM, the RASP has the capacity for self-modification of its program-instructions (the state-machine instructions are frozen, unmodifiable by the machine). Cook-Reckhow (1973) (p. 75) comment on this in their description of their RASP model, as does Hartmanis (1971) (pp. 239ff). An early description of this notion can be found in Goldstine-von Neumann (1946):
- "We need an order [instruction] that can substitute a number into a given order... By means of such an order the results of a computation can be introduced into the instructions governing that or a different computation" (p. 93)
Such an ability makes the following possible:
- subroutines -- the calling routine (or perhaps the subroutine itself) stores the return address "return_address" in the subroutine's last command, i.e. "JMP return_address"
- so-called JUMP-tables
- self-modifying code
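As a purely illustrative sketch of the subroutine idiom above (the opcode number, hole layout, and helper name are assumptions, not the article's), the caller patches the operand of the subroutine's closing "JMP return_address" before transferring control:

```python
JMP = 9                                   # assumed opcode for an unconditional jump

def call(regs, subroutine_at, jmp_param_hole, return_to):
    """Self-modification: write the return address into the subroutine's final JMP."""
    regs[jmp_param_hole] = return_to      # patch the JMP's parameter hole
    return subroutine_at                  # the caller then sets the PC to the subroutine

regs = [0] * 30
regs[20:22] = [JMP, 0]                    # the subroutine ends with "JMP <to-be-filled>"
pc = call(regs, subroutine_at=15, jmp_param_hole=21, return_to=7)
print(pc, regs[21])                       # -> 15 7: the subroutine will jump back to hole #7
```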
RASP program-instruction set of Cook and Reckhow (1973)
In an influential paper, Stephen A. Cook and Robert A. Reckhow define their version of a RASP:
- "The Random Access Stored-Program Machine (RASP) described here is similar to the RASP's described by Hartmanis [1971]" (p. 74).
Their purpose was to compare the execution times of the various models -- RAM, RASP, and multi-tape Turing machine -- for use in the theory of complexity analysis.
The salient feature of their RASP model is that it makes no provision for indirect program-instructions (cf. their discussion, p. 75). They achieve this by requiring the program to modify itself: if necessary, an instruction can modify the "parameter" (their word, i.e. "operand") of a particular instruction. They have designed their model so that each "instruction" uses two consecutive registers, one for the "operation code" (their word) and one for the parameter, "either an address or an integer constant".
Their RASP's registers are unbounded in capacity and unbounded in number; likewise their accumulator AC and instruction counter IC are unbounded. The instruction set is the following:
| operation | mnemonic | operation code | description |
|---|---|---|---|
| load constant | LOD, k | 1 | put constant k into accumulator |
| add | ADD, j | 2 | add contents of register j to accumulator |
| subtract | SUB, j | 3 | subtract contents of register j from accumulator |
| store | STO, j | 4 | copy contents of accumulator into register j |
| branch on positive accumulator | BPA, xxx | 5 | IF contents of accumulator > 0 THEN jump to xxx ELSE next instruction |
| read | RD, j | 6 | next input into register j |
| print | PRI, j | 7 | output contents of register j |
| halt | HLT | any other - or + integer | stop |
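A minimal sketch of an interpreter for this instruction set, assuming a dictionary-backed register file and Python lists for the input and output streams (none of which are specified by Cook and Reckhow):

```python
from collections import defaultdict

def run(program, inputs):
    mem = defaultdict(int)                    # registers: unbounded in number and capacity
    for addr, value in enumerate(program):    # the program is stored in the low registers
        mem[addr] = value
    AC, IC, out, inp = 0, 0, [], list(inputs) # accumulator, instruction counter, I/O
    while True:
        op, param = mem[IC], mem[IC + 1]      # each instruction uses two consecutive registers
        IC += 2
        if   op == 1: AC = param              # LOD, k : put constant k into accumulator
        elif op == 2: AC += mem[param]        # ADD, j
        elif op == 3: AC -= mem[param]        # SUB, j
        elif op == 4: mem[param] = AC         # STO, j
        elif op == 5:                         # BPA, xxx : branch on positive accumulator
            if AC > 0: IC = param
        elif op == 6: mem[param] = inp.pop(0) # RD, j : next input into register j
        elif op == 7: out.append(mem[param])  # PRI, j : output contents of register j
        else: return out                      # HLT: any other operation code stops the machine

# read two numbers into registers 100 and 101, add them, print the sum, halt
prog = [6, 100, 6, 101, 1, 0, 2, 100, 2, 101, 4, 102, 7, 102, 0, 0]
print(run(prog, [3, 4]))    # -> [7]
```

Because the program sits in the same registers as the data, a program for this machine can overwrite its own opcode or parameter registers with STO, which is exactly the self-modification the authors rely on in place of indirect addressing.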