Connectionism
Connectionism is a set of approaches in the fields of artificial intelligence, cognitive psychology, cognitive science, neuroscience and philosophy of mind that models mental or behavioral phenomena as the emergent processes of interconnected networks of simple units. There are many forms of connectionism, but the most common forms use neural network models.
Basic principles
The central connectionist principle is that mental phenomena can be described by interconnected networks of simple and often uniform units. The form of the connections and the units can vary from model to model. For example, units in the network could represent neurons and the connections could represent synapses.
Spreading activation
In most connectionist models, networks change over time. A closely related and very common aspect of connectionist models is activation. At any time, a unit in the network has an activation, which is a numerical value intended to represent some aspect of the unit. For example, if the units in the model are neurons, the activation could represent the probability that the neuron would generate an action potential spike. Activation then spreads to all the other units connected to it. Spreading activation is always a feature of neural network models, and it is very common in connectionist models used by cognitive psychologists.
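To make spreading activation concrete, here is a minimal sketch in Python. The network size, the random weights, and the logistic squashing used below are illustrative assumptions, not features of any particular published model.

```python
import numpy as np

# A minimal, illustrative spreading-activation step (not any specific published model).
# Each unit's new activation is a squashed, weighted sum of the activations
# of the units connected to it.

rng = np.random.default_rng(0)
n_units = 5
weights = rng.normal(scale=0.5, size=(n_units, n_units))  # connection strengths
activations = np.zeros(n_units)
activations[0] = 1.0  # activate one unit; activity then spreads over the network


def step(activations: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """One update: spread activation along the weighted connections."""
    net_input = weights @ activations
    return 1.0 / (1.0 + np.exp(-net_input))  # logistic squashing keeps values bounded


for t in range(3):
    activations = step(activations, weights)
    print(f"t={t + 1}", np.round(activations, 3))
```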
Neural networks
Neural networks are by far the most commonly used connectionist model today. Though there is a large variety of neural network models, they almost always follow two basic principles regarding the mind:
- Any mental state can be described as an (N)-dimensional vector of numeric activation values over neural units in a network.
- Memory is created by modifying the strength of the connections between neural units. The connection strengths, or "weights", are generally represented as an (N×N)-dimensional matrix.
Most of the variety among neural network models comes from:
- Interpretation of units: units can be interpreted as neurons or groups of neurons.
- Definition of activation: activation can be defined in a variety of ways. For example, in a Boltzmann machine, the activation is interpreted as the probability of generating an action potential spike, and is determined via a logistic function on the sum of the inputs to a unit.
- Learning algorithm: different networks modify their connections differently. Generally, any mathematically defined change in connection weights over time is referred to as the "learning algorithm" (a minimal sketch follows this list).
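The following sketch illustrates the two basic principles and the Boltzmann-style logistic activation described above. The array sizes, the toy co-activation learning rule, and all function names are assumptions made for exposition, not part of any particular connectionist model.

```python
import numpy as np

# Illustrative sketch of the two principles above:
# a mental state as an N-dimensional activation vector, and memory as an N x N
# matrix of connection weights between units.

N = 4
rng = np.random.default_rng(1)
state = rng.random(N)                 # activation vector: one value per unit
weights = np.zeros((N, N))            # "memory": connection strengths between units


def logistic(x: np.ndarray) -> np.ndarray:
    """Logistic function of the summed input, as in Boltzmann-machine-style units."""
    return 1.0 / (1.0 + np.exp(-x))


def update_state(state: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """New activations from the logistic of each unit's summed weighted input."""
    return logistic(weights @ state)


def learn(weights: np.ndarray, state: np.ndarray, rate: float = 0.1) -> np.ndarray:
    """A toy learning rule: strengthen connections between co-active units."""
    return weights + rate * np.outer(state, state)


weights = learn(weights, state)       # "memory" changes by modifying the weights
print(np.round(update_state(state, weights), 3))
```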
Connectionists are in agreement that recurrent neural networks (networks in which connections can form a directed cycle) are a better model of the brain than feedforward neural networks (networks with no directed cycles). Many recurrent connectionist models also incorporate dynamical systems theory. Many researchers, such as the connectionist Paul Smolensky, have argued that connectionist models will evolve towards fully continuous, high-dimensional, non-linear, dynamic systems approaches.
Biological realism
The neural network branch of connectionism suggests that the study of mental activity is really the study of neural systems. This links connectionism to neuroscience, and models involve varying degrees of biological realism. Connectionist work in general need not be biologically realistic, but some neural network researchers (computational neuroscientists) try to model the biological aspects of natural neural systems very closely in so-called "neuromorphic networks". Many authors find the clear link between neural activity and cognition to be an appealing aspect of connectionism. This has been criticized as reductionist.
Learning
Connectionists generally stress the importance of learning in their models. Thus, connectionists have created many sophisticated learning procedures for neural networks. Learning always involves modifying the connection weights, generally using mathematical formulas to determine the change in weights given sets of data consisting of activation vectors for some subset of the neural units.
By formalizing learning in this way, connectionists have many tools. A very common strategy in connectionist learning methods is to incorporate gradient descent over an error surface in a space defined by the weight matrix. Gradient descent learning in connectionist models involves changing each weight in proportion to the negative of the partial derivative of the error surface with respect to that weight. Backpropagation, first made popular in the 1980s, is probably the most commonly known connectionist gradient descent algorithm today.
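As a concrete illustration of gradient descent over an error surface, the sketch below nudges each weight opposite the partial derivative of a squared error. The single linear unit, the data, and the learning rate are assumptions chosen for brevity rather than a canonical connectionist model.

```python
import numpy as np

# A minimal gradient-descent sketch for a single linear unit with squared error.
# Each weight moves opposite the partial derivative of the error with respect
# to that weight: w <- w - eta * dE/dw. Data and learning rate are illustrative.

rng = np.random.default_rng(2)
X = rng.normal(size=(20, 3))           # 20 input activation vectors, 3 units each
true_w = np.array([0.5, -1.0, 2.0])
y = X @ true_w                         # target activation of the output unit

w = np.zeros(3)                        # initial weights
eta = 0.1                              # learning rate

for epoch in range(100):
    pred = X @ w
    error = pred - y
    grad = X.T @ error / len(X)        # partial derivatives of mean squared error
    w -= eta * grad                    # gradient-descent update

print(np.round(w, 3))                  # approaches true_w
```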
History
Connectionism can be traced to ideas more than a century old, which were little more than speculation until the mid-to-late 20th century. It was not until the 1980s that connectionism became a popular perspective among scientists.
Parallel distributed processing
The prevailing connectionist approach today was originally known as parallel distributed processing (PDP). It was an artificial neural network approach that stressed the parallel nature of neural processing and the distributed nature of neural representations. It provided a general mathematical framework for researchers to operate in. The framework involved eight major aspects:
- A set of processing units, represented by a set of integers.
- An activation for each unit, represented by a vector of time-dependent functions.
- An output function for each unit, represented by a vector of functions on the activations.
- A pattern of connectivity among units, represented by a matrix of real numbers indicating connection strength.
- A propagation rule spreading the activations via the connections, represented by a function on the output of the units.
- An activation rule for combining inputs to a unit to determine its new activation, represented by a function on the current activation and propagation.
- A learning rule for modifying connections based on experience, represented by a change in the weights based on any number of variables.
- An environment which provides the system with experience, represented by sets of activation vectors for some subset of the units (a compact sketch of these eight aspects follows the list).
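For readers who prefer code, here is a deliberately compressed sketch of how those eight aspects might be laid out. The class name, the example rules, and the constants are assumptions for exposition; they are not the notation of the PDP books.

```python
import numpy as np
from dataclasses import dataclass, field
from typing import Callable

# A compressed, illustrative layout of the eight PDP aspects listed above.
# Names and defaults are assumptions for exposition only.


@dataclass
class PDPNetwork:
    units: list[int]                                    # 1. processing units
    activation: np.ndarray                              # 2. activation per unit
    output_fn: Callable[[np.ndarray], np.ndarray]       # 3. output function
    connectivity: np.ndarray                            # 4. pattern of connectivity (weights)
    propagate: Callable[[np.ndarray, np.ndarray], np.ndarray]        # 5. propagation rule
    activation_rule: Callable[[np.ndarray, np.ndarray], np.ndarray]  # 6. activation rule
    learn: Callable[[np.ndarray, np.ndarray], np.ndarray]            # 7. learning rule
    environment: list[np.ndarray] = field(default_factory=list)      # 8. experience

    def step(self) -> None:
        out = self.output_fn(self.activation)
        net = self.propagate(self.connectivity, out)
        self.activation = self.activation_rule(self.activation, net)
        self.connectivity = self.learn(self.connectivity, self.activation)


net = PDPNetwork(
    units=list(range(3)),
    activation=np.zeros(3),
    output_fn=lambda a: a,
    connectivity=np.eye(3) * 0.5,
    propagate=lambda w, out: w @ out,
    activation_rule=lambda a, net_in: np.tanh(a + net_in),
    learn=lambda w, a: w + 0.01 * np.outer(a, a),
)
net.activation[0] = 1.0
net.step()
print(np.round(net.activation, 3))
```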
These aspects are now the foundation for almost all connectionist models. A perceived limitation of PDP is that it is reductionistic; that is, it holds that all cognitive processes can be explained in terms of neural firing and communication.
Much of the research that led to the development of PDP was done in the 1970s, but PDP became popular in the 1980s with the release of the books Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1 (Foundations) and Volume 2 (Psychological and Biological Models), by James L. McClelland, David E. Rumelhart and the PDP Research Group. The books are now considered seminal connectionist works, and it is now common to fully equate PDP and connectionism, although the term "connectionism" is not used in the books.
Earlier work
PDP's direct roots were the perceptron theories of researchers such as Frank Rosenblatt from the 1950s and 1960s. But perceptron models were made very unpopular by the book Perceptrons by Marvin Minsky and Seymour Papert, published in 1969. It demonstrated the limits on the sorts of functions which single-layered perceptrons can calculate, showing that even simple functions like the exclusive disjunction (XOR) could not be handled properly. The PDP books overcame this limitation by showing that multi-level, non-linear neural networks were far more robust and could be used for a vast array of functions.
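To illustrate the point: no single threshold unit can compute exclusive disjunction, because the four XOR input cases are not linearly separable, whereas a small two-layer threshold network can. The weights below are hand-chosen for illustration and are not drawn from Perceptrons or the PDP volumes.

```python
import numpy as np

# XOR illustration: a single threshold unit cannot separate the XOR cases,
# but a small two-layer threshold network can. Weights are hand-chosen.

inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
xor_targets = np.array([0, 1, 1, 0])


def step(x):
    """Threshold (Heaviside) activation."""
    return (x > 0).astype(int)


def two_layer_xor(x: np.ndarray) -> int:
    h1 = step(x[0] + x[1] - 0.5)        # hidden unit: fires if at least one input is on
    h2 = step(x[0] + x[1] - 1.5)        # hidden unit: fires only if both inputs are on
    return int(step(h1 - h2 - 0.5))     # output: on for exactly one active input


for x, t in zip(inputs, xor_targets):
    print(x, two_layer_xor(x), "target:", t)
```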
Many earlier researchers advocated connectionist-style models, for example Warren McCulloch, Walter Pitts, Donald Olding Hebb, and Karl Lashley in the 1940s and 1950s. McCulloch and Pitts showed how neural systems could implement first-order logic: their classic paper "A Logical Calculus of Ideas Immanent in Nervous Activity" (1943) was important in this development. They were influenced by the work of Nicolas Rashevsky in the 1930s. Hebb contributed greatly to speculations about neural functioning and proposed a learning principle, Hebbian learning, that is still used today. Lashley argued for distributed representations as a result of his failure to find anything like a localized engram in years of lesion experiments.
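Hebb's principle is often summarized as "units that fire together wire together". The sketch below shows one common modern way to write a Hebbian update as an outer product of activations; the learning rate and the example vectors are illustrative assumptions, not Hebb's own formulation.

```python
import numpy as np

# Minimal Hebbian update sketch: the weight between two units grows in
# proportion to the product of their activations ("fire together, wire together").

def hebbian_update(weights: np.ndarray, pre: np.ndarray, post: np.ndarray,
                   rate: float = 0.05) -> np.ndarray:
    """Return weights increased by rate * post_i * pre_j for every connection."""
    return weights + rate * np.outer(post, pre)


pre = np.array([1.0, 0.0, 1.0])       # activations of presynaptic units
post = np.array([0.0, 1.0])           # activations of postsynaptic units
weights = np.zeros((2, 3))

weights = hebbian_update(weights, pre, post)
print(weights)                        # only co-active pre/post pairs are strengthened
```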
Connectionism apart from PDP
Though PDP is the dominant form of connectionism, other theoretical work should also be classified as connectionist.
Many connectionist principles can be traced to early work in psychology, such as that of William James. Psychological theories based on knowledge about the human brain were fashionable in the late 19th century. As early as 1869, the neurologist John Hughlings Jackson argued for multi-level, distributed systems. Following this lead, Herbert Spencer's Principles of Psychology, 3rd edition (1872), and Sigmund Freud's Project for a Scientific Psychology (composed in 1895) propounded connectionist or proto-connectionist theories. These tended to be speculative theories. But by the early 20th century, Edward Thorndike was conducting experiments on learning that posited a connectionist-type network.
In the 1950s, Friedrich Hayek proposed that spontaneous order in the brain arose out of decentralized networks of simple units. Hayek's work was rarely cited in the PDP literature until recently.
Another form of connectionist model was the relational network framework developed by the linguist Sydney Lamb in the 1960s. Relational networks have only been used by linguists and were never unified with the PDP approach; as a result, they are now used by very few researchers.
There are also hybrid connectionist models, mostly mixing symbolic representations with neural network models.
The hybrid approach has been advocated by some researchers (such as Ron Sun).
Connectionism vs. computationalism debate
As connectionism became increasingly popular in the late 1980s, some researchers reacted against it, including Jerry Fodor, Steven Pinker and others. They argued that connectionism, as it was being developed, was in danger of obliterating what they saw as the progress being made in the fields of cognitive science and psychology by the classical approach of computationalism. Computationalism is a specific form of cognitivism which argues that mental activity is computational, that is, that the mind operates by performing purely formal operations on symbols, like a Turing machine. Some researchers argued that the trend in connectionism was a reversion towards associationism and the abandonment of the idea of a language of thought, something they felt was mistaken. In contrast, it was those very tendencies that made connectionism attractive to other researchers.
Connectionism and computationalism need not be at odds, but the debate in the late 1980s and early 1990s led to opposition between the two approaches. Throughout the debate some researchers have argued that connectionism and computationalism are fully compatible, though full consensus on this issue has not been reached. The differences between the two approaches that are usually cited are the following:
- Computationalists posit symbolic models that do not resemble underlying brain structure at all, whereas connectionists engage in "low level" modeling, trying to ensure that their models resemble neurological structures.
- Computationalists generally focus on the structure of explicit symbols (mental models) and syntactical rules for their internal manipulation, whereas connectionists focus on learning from environmental stimuli and storing this information in the form of connections between neurons.
- Computationalists believe that internal mental activity consists of manipulation of explicit symbols, whereas connectionists believe that the manipulation of explicit symbols is a poor model of mental activity.
- Computationalists often posit domain-specific symbolic sub-systems designed to support learning in specific areas of cognition (e.g. language, intentionality, number), while connectionists posit one or a small set of very general learning mechanisms.
But despite these differences, some theorists have proposed that the connectionist architecture is simply the manner in which the symbol manipulation system happens to be implemented in the organic brain. This is logically possible, as it is well known that connectionist models can implement symbol manipulation systems of the kind used in computationalist models, as indeed they must be able to if they are to explain the human ability to perform symbol manipulation tasks. But the debate rests on whether this symbol manipulation forms the foundation of cognition in general, so this is not a potential vindication of computationalism. Nonetheless, computational descriptions may be helpful high-level descriptions of some aspects of cognition, such as logical reasoning.
The debate largely centred on logical arguments about whether connectionist networks were capable of producing the syntactic structure observed in this sort of reasoning. This was later achieved, although using processes unlikely to be possible in the brain, so the debate persisted. Today, progress in neurophysiology and general advances in the understanding of neural networks have led to the successful modelling of a great many of these early problems, and the debate about fundamental cognition has thus largely been decided amongst neuroscientists in favour of connectionism. However, these fairly recent developments have yet to reach consensus acceptance amongst those working in other fields, such as psychology or philosophy of mind.
Part of the appeal of computational descriptions is that they are relatively easy to interpret, and thus may be seen as contributing to our understanding of particular mental processes, whereas connectionist models are generally more opaque, to the extent that they may only be describable in very general terms (such as specifying the learning algorithm, the number of units, etc.), or in unhelpfully low-level terms. In this sense connectionist models may instantiate, and thereby provide evidence for, a broad theory of cognition (i.e. connectionism), without representing a helpful theory of the particular process which is being modelled. Seen this way, the debate might be considered to reflect, to some extent, a mere difference in the level of analysis at which particular theories are framed.
The recent popularity of dynamical systems in philosophy of mind has added a new perspective on the debate; some authors now argue that any split between connectionism and computationalism is better characterised as a split between computationalism and dynamical systems.
The recently proposed hierarchical temporal memory (HTM) model may help resolve this dispute, at least to some degree, given that it aims to explain how the neocortex extracts high-level (symbolic) information from low-level sensory input.
See also
- Emergence
- Eliminative materialism
- Self-organizing map
- System