Neats vs. scruffies
Neat and scruffy are labels for two different types of artificial intelligence research. Neats consider that solutions should be elegant, clear and provably correct. Scruffies believe that intelligence is too complicated (or computationally intractable) to be solved with the sorts of homogeneous systems that such neat requirements usually mandate.

Much success in AI has come from combining neat and scruffy approaches. For example, many cognitive models matching human psychological data have been built in Soar and ACT-R. Both of these systems have formal representations and execution systems, but the rules put into them to create the models are generated ad hoc.
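
The flavor of this combination can be suggested with a toy sketch. The Python below is illustrative only (it is not Soar or ACT-R code, and all names and rule contents are invented): a formally specified recognize-act loop plays the neat part, while the hand-authored, ad hoc rules fed to it play the scruffy part.

    # A minimal production-system sketch. The match-act cycle is formally
    # defined, but the rules themselves are ad hoc, hand-authored entries.
    # Illustrative only; this is not actual Soar or ACT-R syntax.

    working_memory = {"goal": "add", "a": 3, "b": 4}

    def rule_add(wm):
        # Ad hoc rule: fire when the goal is "add" and both addends exist.
        if wm.get("goal") == "add" and "a" in wm and "b" in wm:
            return {"result": wm["a"] + wm["b"], "goal": "done"}
        return None

    rules = [rule_add]  # modelers append more such rules, one by one

    def cycle(wm):
        """The neat part: a fixed, well-specified recognize-act loop."""
        while wm.get("goal") != "done":
            for rule in rules:
                update = rule(wm)
                if update:
                    wm.update(update)
                    break
            else:
                raise RuntimeError("impasse: no rule matched")
        return wm

    print(cycle(working_memory))  # {'goal': 'done', 'a': 3, 'b': 4, 'result': 7}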
History

The distinction was originally made by Roger Schank in the mid-1970s to characterize the difference between his work on natural language processing (which represented commonsense knowledge in the form of large amorphous semantic networks) and the work of John McCarthy, Allen Newell, Herbert Simon, Robert Kowalski and others, whose work was based on logic and formal extensions of logic.
The distinction was also partly geographical and cultural: "scruffy" was associated with AI research at MIT under Marvin Minsky in the 1960s. The laboratory was famously "freewheeling", and researchers often developed AI programs by spending long hours tweaking them until they showed the required behavior. This practice was named "hacking", and the laboratory gave birth to the hacker culture. Important and influential "scruffy" programs developed at MIT included Joseph Weizenbaum's ELIZA, which behaved as if it spoke English without any formal knowledge at all, and Terry Winograd's SHRDLU, which could successfully answer queries and carry out actions in a simplified world consisting of blocks and a robot arm. SHRDLU, while enormously successful, could not be scaled up into a useful natural language processing system: it had no overarching design, and maintaining a larger version of the program proved impossible. It was too scruffy to be extended.
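
ELIZA's trick can be conveyed in a few lines. The sketch below is a loose illustration, not Weizenbaum's actual script language; the patterns are invented stand-ins for the DOCTOR script's much larger set. It matches surface patterns and reflects pronouns, producing plausible dialogue with no model of meaning at all.

    import re

    # A few ELIZA-flavored surface patterns; the real DOCTOR script had
    # many more. No grammar, no semantics: just match and echo.
    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

    def reflect(fragment):
        return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

    def respond(utterance):
        m = re.match(r"i feel (.*)", utterance, re.IGNORECASE)
        if m:
            return f"Why do you feel {reflect(m.group(1))}?"
        m = re.match(r"my (.*)", utterance, re.IGNORECASE)
        if m:
            return f"Tell me more about your {reflect(m.group(1))}."
        return "Please go on."  # default when nothing matches

    print(respond("I feel my work is ignored"))
    # -> Why do you feel your work is ignored?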
Other AI laboratories (of which the largest were Stanford, Carnegie Mellon University and the University of Edinburgh) focused on logic and formal problem solving as a basis for AI. These institutions supported the work of John McCarthy, Herbert Simon, Allen Newell, Donald Michie, Robert Kowalski, and many other "neats".
The contrast between MIT's approach and other laboratories was also described as a "procedural/declarative distinction". Programs like SHRDLU were designed as agents that carried out actions; they executed "procedures". Other programs were designed as inference engines that manipulated formal statements (or "declarations") about the world and translated these manipulations into actions.
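
The distinction can be sketched in code. In the hypothetical fragment below (the blocks-world micro-domain and all names are invented for illustration), the procedural style encodes what to do directly, while the declarative style states facts and a rule and leaves the doing to a generic inference step.

    # Procedural style (SHRDLU-like): the knowledge of how to stack blocks
    # is embodied directly in executable steps with side effects.
    world = {"on": {}, "clear": {"A", "B"}}

    def put_on(block, target):
        world["on"][block] = target       # do it
        world["clear"].discard(target)    # and update state as you go

    # Declarative style: knowledge is a set of statements about the world,
    # manipulated by a generic rule that is separate from the facts.
    facts = {("block", "A"), ("block", "B"), ("clear", "A"), ("clear", "B")}

    def can_stack(x, y):
        """x may be stacked on y iff both are clear blocks and x != y."""
        return x != y and {("block", x), ("clear", x), ("clear", y)} <= facts

    put_on("A", "B")            # procedural: just does it
    print(can_stack("B", "A"))  # declarative: derives an answer (True)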
The debate reached its peak in the mid-1980s. Nils Nilsson, in his 1983 presidential address to the Association for the Advancement of Artificial Intelligence, discussed the issue, arguing that "the field needed both". He wrote: "much of the knowledge we want our programs to have can and should be represented declaratively in some kind of declarative, logic-like formalism. Ad hoc structures have their place, but most of these come from the domain itself." Alex P. Pentland and Martin Fischler of SRI International argued in response that "There is no question that deduction and logic-like formalisms will play an important role in AI research; however, it does not seem that they are up to the Royal role that Nils suggests. This pretender King, while not naked, appears to have a limited wardrobe." Many other researchers also weighed in on one side or the other of the issue.
The scruffy approach was applied to robotics by Rodney Brooks in the mid-1980s. He advocated building robots that were, as he put it, Fast, Cheap and Out of Control (the title of a 1989 paper co-authored with Anita Flynn). Unlike earlier robots such as Shakey or the Stanford Cart, they did not build up representations of the world by analyzing visual information with algorithms drawn from mathematical machine learning techniques, and they did not plan their actions using formalizations based on logic, such as the PLANNER language. They simply reacted to their sensors in a way that tended to help them survive and move.
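
The spirit of the approach can be sketched as a priority-layered sense-act loop. The following is a hypothetical, much-simplified nod to Brooks's subsumption architecture, not his actual implementation; the sensors and behaviors are invented.

    import random

    # A subsumption-flavored control loop: layered reflexes, no world model,
    # no planner. Higher-priority behaviors subsume lower ones.
    def read_sensors():
        return {"bump": random.random() < 0.1,
                "left_ir": random.random(), "right_ir": random.random()}

    def avoid(s):   # highest priority: back away from contact
        return "reverse" if s["bump"] else None

    def steer(s):   # middle priority: turn away from the nearer obstacle
        if abs(s["left_ir"] - s["right_ir"]) > 0.3:
            return "turn_left" if s["left_ir"] > s["right_ir"] else "turn_right"
        return None

    def wander(s):  # lowest priority: default forward motion
        return "forward"

    BEHAVIORS = [avoid, steer, wander]  # ordered by priority

    for _ in range(5):  # the robot just reacts, tick after tick
        sensors = read_sensors()
        for behavior in BEHAVIORS:
            command = behavior(sensors)
            if command is not None:
                break
        print(command)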
Doug Lenat's Cyc project, one of the oldest and most ambitious efforts to capture all of human knowledge in machine-readable form, is "a determinedly scruffy enterprise" (according to Pamela McCorduck). The Cyc database contains millions of facts about the complexities of the world, each of which must be entered one at a time by knowledge engineers. Each of these entries is an ad hoc addition to the intelligence of the system. While there may be a "neat" solution to the problem of commonsense knowledge (such as machine learning algorithms combined with natural language processing that could study the text available on the internet), no such project has yet been successful.
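
The labor-intensive character of the approach is easy to picture. The fragment below is an invented, Cyc-inspired illustration (it is not CycL, Cyc's actual representation language): each assertion stands for one hand-entered fact, and a trivial engine inherits properties along isa links.

    # Invented Cyc-flavored toy: every assertion below stands for a fact a
    # knowledge engineer typed in by hand, one at a time.
    isa = {"Fido": "Dog", "Dog": "Mammal", "Mammal": "Animal"}
    asserted = {("Mammal", "has_fur"), ("Animal", "breathes")}

    def holds(thing, prop):
        """Inherit properties up the isa chain: a tiny slice of inference."""
        while thing is not None:
            if (thing, prop) in asserted:
                return True
            thing = isa.get(thing)
        return False

    print(holds("Fido", "breathes"))  # True: Fido -> Dog -> Mammal -> Animal

Scaling this to commonsense breadth means millions of such hand-made entries, which is exactly Cyc's burden.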
New statistical and mathematical approaches to AI were developed in the 1990s, using highly developed formalisms such as Bayesian networks and mathematical optimization. This general trend towards more formal methods in AI is described as "the victory of the neats" by Peter Norvig and Stuart Russell. Pamela McCorduck wrote in 2004: "As I write, AI enjoys a Neat hegemony, people who believe that machine intelligence, at least, is best expressed in logical, even mathematical terms." Neat solutions have been highly successful in the 21st century and are now used throughout the technology industry. These solutions, however, have mostly been applied to specific problems with specific solutions, and the problem of general intelligence remains unsolved.
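
A taste of the formalisms involved: the toy Bayesian network below has two nodes and invented probabilities, and computes a posterior exactly by Bayes' rule, the kind of well-founded calculation the formal turn made routine.

    # Minimal Bayesian network, Rain -> WetGrass, with invented numbers.
    p_rain = 0.2
    p_wet_given_rain = 0.9
    p_wet_given_dry = 0.1

    # P(rain | wet) = P(wet | rain) * P(rain) / P(wet)
    p_wet = p_wet_given_rain * p_rain + p_wet_given_dry * (1 - p_rain)
    p_rain_given_wet = p_wet_given_rain * p_rain / p_wet

    print(round(p_rain_given_wet, 3))  # 0.692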
The terms "neat" and "scruffy" are rarely used by AI researchers in the 21st century, although the issue remains unresolved. "Neat" solutions to problems such as machine learning
and computer vision
, have become indispensable throughout the technology industry, but ad-hoc and detailed solutions still dominate research into robotics
and commonsense knowledge.
Typical methodologies

As might be guessed from the terms, neats use formal methods – such as logic or pure applied statistics – exclusively. Scruffies are hackers, who will cobble together a system built of anything – even logic. Neats care whether their reasoning is both provably sound and complete, and that their machine learning systems can be shown to converge in a known length of time. Scruffies would like their learning to converge too, but they are happier if empirical experience shows their systems working than to have mere equations and proofs showing that they ought to.
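
A concrete instance of the neat sensibility: forward chaining over propositional Horn clauses, sketched below with invented rules, is provably sound and complete for that fragment and is guaranteed to terminate, since each pass can only add atoms from a finite set.

    # Forward chaining over propositional Horn clauses. Sound and complete
    # for this fragment, and guaranteed to terminate. Rules are invented.
    rules = [({"wet", "cold"}, "icy"), ({"icy"}, "slippery"), (set(), "wet")]
    facts = {"cold"}

    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True

    print(sorted(facts))  # ['cold', 'icy', 'slippery', 'wet']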
To a neat, scruffy methods appear promiscuous, successful only by accident and unlikely to produce insights about how intelligence actually works. To a scruffy, neat methods appear to be hung up on formalism and to be too slow, fragile or boring to be applied to real systems.
Relation to philosophy and human intelligence

This conflict goes much deeper than programming practices (though it clearly has parallels in software engineering). For philosophical or possibly scientific reasons, some people believe that intelligence is fundamentally rational and can best be represented by logical systems incorporating truth maintenance. Others believe that intelligence is best implemented as a mass of learned or evolved hacks, not necessarily having internal consistency or any unifying organizational framework.
Ironically, the apparently scruffy philosophy may also turn out to be provably (under typical assumptions) optimal for many applications. Intelligence is often seen as a form of search, and as such is not believed to be perfectly solvable in a reasonable amount of time (see also NP and Simple Heuristics, commonsense reasoning, memetics, reactive planning).
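
For instance, a greedy hack that never looks more than one step ahead can still carry a guarantee: greedy set cover, sketched below on an invented instance, stays within a ln(n) factor of the optimal cover even though computing the true optimum is NP-hard.

    # Greedy set cover: a one-step-lookahead heuristic that is nonetheless
    # provably within ln(n) of optimal, which is NP-hard to compute exactly.
    universe = {1, 2, 3, 4, 5}
    subsets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]  # invented instance

    uncovered, cover = set(universe), []
    while uncovered:
        best = max(subsets, key=lambda s: len(s & uncovered))
        cover.append(best)
        uncovered -= best

    print(cover)  # [{1, 2, 3}, {4, 5}]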
It is an open question whether human intelligence is inherently scruffy or neat. Some claim that the question itself is unimportant: the famous neat John McCarthy has said publicly he has no interest in how human intelligence works, while famous scruffy Rodney Brooks is openly obsessed with creating humanoid intelligence. (See subsumption architecture, the Cog project (Brooks 2001).)
Well-known neats and scruffies
Neats
- John McCarthy
- Allen Newell
- Herbert Simon
- Edward Feigenbaum
- Robert Kowalski
- Judea Pearl

Scruffies
- Rodney Brooks
- Terry Winograd
- Marvin Minsky
- Roger Schank
- Doug Lenat