What is connectionism in artificial intelligence?
The connectionist approach in cognitive science is a method of studying human cognition with the help of mathematical models known as artificial neural networks, or connectionist networks. These models consist of densely interconnected processing units that resemble neurons.
It is essentially an approach to artificial intelligence that arose from attempts to understand how the human brain functions at the neural level. The connectionist model is also known as neuron-like computing.
Artificial neural networks are models that are loosely based on the human brain. They are composed of large numbers of neuron-like units, together with weights that determine the strength of the connections between those units.
The difference between the connectionist model and computational neuroscience has not been clearly defined. However, in cognitive science, connectionists usually concentrate more on high-level cognitive processes like comprehension, memory, grammatical competence, recognition, and reasoning, moving away from the specific details of neural functioning.
The connectionist approach to cognitive science was born in the 1940s. It was immensely popular in the 1960s, but interest declined sharply after significant limitations of early models were exposed. The approach was then revived in the 1980s.
Many people saw the connectionist model as a replacement for the classical computational theory of cognition, which was inspired by symbol-manipulating artifacts such as digital computers.
How does connectionism work?
The connectionist movement in cognitive science seeks to explain intellectual abilities by using artificial neural networks, also known as “neural networks” or “neural nets”. These are highly simplified models of the brain that are made up of large numbers of units (the analogs of biological neurons) along with weights that measure the strength of the connections between those units. The weights model the effects of the synapses that link one neuron to another. Experiments on models like these have shown that they can learn skills such as face recognition, reading, and the detection of simple grammatical structure.
Connectionism offers a cognitive theory based on simultaneous, distributed signal activity across connections whose strengths can be represented numerically, in which learning takes place by modifying those connection strengths based on experience.
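The idea above, that a network stores what it knows in numeric connection strengths and learns by adjusting them from experience, can be illustrated with a toy example. The sketch below trains a single threshold unit on the logical OR function using a simple error-correction update; the particular learning rate and number of epochs are illustrative assumptions, not values from any specific model.

```python
# Toy connectionist unit: one "neuron" learning the logical OR function.
# Weights, learning rate, and epoch count are illustrative assumptions.

def step(x):
    """Threshold output function: the unit fires (1) if its net input exceeds 0."""
    return 1 if x > 0 else 0

# Experience: input patterns and target outputs for OR.
patterns = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

weights = [0.0, 0.0]   # connection strengths, adjusted by learning
bias = 0.0
rate = 0.1             # learning rate

# Learning = repeatedly nudging connection strengths to reduce error.
for epoch in range(20):
    for (x1, x2), target in patterns:
        output = step(weights[0] * x1 + weights[1] * x2 + bias)
        error = target - output
        weights[0] += rate * error * x1
        weights[1] += rate * error * x2
        bias += rate * error

# After training, the unit reproduces OR: [0, 1, 1, 1]
predictions = [step(weights[0] * x1 + weights[1] * x2 + bias)
               for (x1, x2), _ in patterns]
```

Notice that no rule for OR is ever written down explicitly: the behavior emerges entirely from the learned connection strengths, which is the core of the connectionist picture.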
What are the advantages of the connectionist model and the connectionism theory?
Here are the most significant advantages of connectionist architectures in computer programs:
- It is applicable to a wide range of functions.
- It remains functional even when parts of the system fail (graceful degradation).
- Memory lookup is straightforward. It does not need exhaustive search.
- Connectionist systems have learning capabilities built into them (changing weights).
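The graceful-degradation point from the list above can be made concrete. In the toy sketch below (an illustration under assumed numbers, not a published model), a signal is carried redundantly across many weak connections, so "lesioning" a fraction of them weakens the output gradually instead of causing total failure.

```python
import random

# Toy illustration of graceful degradation: a signal carried redundantly
# by many weak connections degrades gradually when some are destroyed.
# The unit count and weight values are illustrative assumptions.

random.seed(0)

n_units = 100
weights = [0.01] * n_units           # each connection carries a small share
inputs = [1.0] * n_units

def output(ws):
    """Net output: weighted sum over all connections."""
    return sum(w * x for w, x in zip(ws, inputs))

intact = output(weights)             # ~1.0 with all connections working

# "Lesion" 20% of the connections by zeroing their weights.
damaged = list(weights)
for i in random.sample(range(n_units), 20):
    damaged[i] = 0.0

lesioned = output(damaged)           # ~0.8: degraded, not destroyed
```

A classical symbolic program with a deleted rule typically fails outright; here the distributed representation loses accuracy in proportion to the damage.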
What are the limitations of the theory of connectionism?
Here are the most significant disadvantages of the connectionist model and connectionist architectures:
- There is a lack of transparency: it is not always easy to understand how artificial neural networks process information.
- Since the neural plausibility argument is weak, it is unclear whether the larger connectionist networks built in the future would reflect the actual workings of the human brain more accurately than rule-based models do.
How is the connectionism approach in learning different from the computationalism approach?
Here are the main differences between the connectionist approach to cognitive science and the computationalism approach:
- Connectionists are concerned more with learning from environmental stimuli and seek to store that information as connections between units that bear a resemblance to biological neurons. Computationalists, on the other hand, care more about the structure of explicit symbols and the syntactical rules for their internal manipulation.
- While computationalists assert that internal mental activity consists of manipulation of explicit symbols, connectionists are of the opinion that the manipulation of explicit symbols does not provide an adequate model of mental activity.
- Connectionists focus on low-level modeling, attempting to make sure that connectionist models resemble neurological structures. Computationalists put forward high-level symbolic models that need not resemble underlying brain structures.
What is parallel distributed processing?
The connectionist approach to cognitive science that is predominant today was earlier referred to as parallel distributed processing (PDP). It was an artificial neural network approach that emphasized the parallel nature of neural processing and the distributed nature of neural representations. Parallel distributed processing offered a general mathematical framework for researchers to operate in. This framework was made up of eight major components:
- A set of processing units, denoted by a set of integers.
- An activation for every single unit, denoted by a vector of time-dependent functions.
- An output function for every unit, denoted by a vector of functions on the activations.
- A pattern of connectivity amongst units, denoted by a matrix of real numbers signifying the strength of the connections.
- A propagation rule that spreads the activations through the connections, denoted by a function on the output of the units.
- An activation rule to combine inputs to a unit for the purpose of determining its new activation, denoted by a function on the current activation and propagation.
- A learning rule that is used to modify connections on the basis of experience, denoted by a change in the weights based on any number of variables.
- An environment that provides the system with experience, denoted by sets of activation vectors for some subset of the units.
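The eight components above can be mapped onto code directly. The sketch below is a minimal, assumption-laden illustration: the PDP framework deliberately leaves the specific functions open, so the linear propagation rule, tanh activation rule, and Hebbian learning rule chosen here are just one possible instantiation.

```python
import math

# Minimal sketch of the eight PDP components. The specific choices
# (linear propagation, tanh activation, Hebbian learning) are
# illustrative assumptions; the framework leaves them open.

units = range(3)                           # 1. a set of processing units
activation = [0.0, 0.0, 0.0]               # 2. an activation for each unit

def output(acts):                          # 3. output function on activations
    return acts                            #    (identity, for simplicity)

W = [[0.0, 0.5, -0.3],                     # 4. pattern of connectivity:
     [0.5, 0.0, 0.8],                      #    W[i][j] is the strength of
     [-0.3, 0.8, 0.0]]                     #    the connection from j to i

def propagate(outs):                       # 5. propagation rule: net input
    return [sum(W[i][j] * outs[j] for j in units) for i in units]

def activate(current, net):                # 6. activation rule: combine the
    return [math.tanh(c + n)               #    current activation and the
            for c, n in zip(current, net)] #    propagated input

def learn(outs, rate=0.1):                 # 7. learning rule (Hebbian):
    for i in units:                        #    strengthen connections
        for j in units:                    #    between co-active units
            if i != j:
                W[i][j] += rate * outs[i] * outs[j]

environment = [[1.0, 0.0, 0.0],            # 8. environment: activation
               [0.0, 1.0, 0.0]]            #    patterns providing experience

for pattern in environment:                # one pass of experience
    activation = activate(pattern, propagate(output(pattern)))
    learn(output(activation))
```

After the pass, the connectivity matrix `W` has been reshaped by experience, which is exactly the sense in which knowledge in a PDP system lives in the connections rather than in stored symbols.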
Although much of the cognitive science research leading to the development and introduction of parallel distributed processing (PDP) was carried out in the 1970s, PDP only rose to popularity in the 1980s, after the books Parallel Distributed Processing: Explorations in the Microstructure of Cognition - Volume 1 (Foundations) and Volume 2 (Psychological and Biological Models) were released by James L. McClelland, David E. Rumelhart, and the PDP Research Group. These books are now regarded as seminal connectionist works, and today it is very common to fully equate PDP with connectionism, even though the term "connectionism" was not mentioned in either book.