For about a century, people have known that the brain is made up of neurons which connect to each other and perform computations through electrochemical transmission. For about half a century, people have known enough about computers to realize that the brain doesn't look much like one but still computes pretty well regardless. How?

Spreading Activation was one of the first models of mental computation. In this theory, you can imagine the brain as a bunch of nodes in a graph with labels like "Warlord", "Mongolia", "Barbarian", "Genghis Khan", and "Salmon". Each node has certain connections to the others; when two nodes get activated around the same time, the connection between them strengthens. When someone asks a question like "Who was that barbaric Mongol warlord, again?", it activates the nodes "Warlord", "Barbarian", and "Mongolia". The activation spreads to all the nodes connected to these, activating them too, and the most strongly activated node will be the one that's closely connected to all three - the barbaric Mongol warlord in question, Genghis Khan. All the while, "Salmon", which has no connection to any of these concepts, just sits on its own not being activated. This fits with experience: if someone asks us about barbaric Mongol warlords, the name "Genghis Khan" pops into our brain like magic, while we continue to not think about salmon if we weren't thinking about them before.
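To make the mechanics concrete, here is a minimal sketch of that toy network in Python. The node names and link strengths are invented for illustration, and activation only flows a single step along links; real models use many more nodes, graded decay, and repeated spreading.

```python
# A toy spreading-activation network for the example above. Node names and
# link strengths are made up; activation simply flows one step along links.

links = {
    "Warlord":   {"Genghis Khan": 1.0, "Mongolia": 0.5},
    "Barbarian": {"Genghis Khan": 1.0},
    "Mongolia":  {"Genghis Khan": 1.0},
    "Salmon":    {},                      # connected to none of the others
}

def spread(cued_nodes):
    """Send one step of activation from the cued nodes to their neighbours."""
    totals = {}
    for node in cued_nodes:
        for neighbour, strength in links.get(node, {}).items():
            totals[neighbour] = totals.get(neighbour, 0.0) + strength
    return totals

# "Who was that barbaric Mongol warlord, again?"
print(spread(["Warlord", "Barbarian", "Mongolia"]))
# {'Genghis Khan': 3.0, 'Mongolia': 0.5} - Genghis Khan wins; Salmon stays dark
```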

Bark leash bone wag puppy fetch. If the word "dog" is now running through your head, you may be a victim of spreading activation, as were participants in the Deese-Roediger-McDermott experiments: asked to quickly memorize a list of words like those and then tested on their retention several minutes later, they were more likely to "remember" "dog" than any of the words actually on the list.

So this does seem attractive, and it does avoid the folk psychology concept of a "belief". The spreading activation network above was able to successfully answer a question without any representation of propositional statements like "Genghis Khan was a barbaric Mongol warlord." And one could get really enthusiastic about this and try to apply it to motivation. Maybe we have nodes like "Hunger", "Food", "McDonalds", and "*GET IN CAR, DRIVE TO MCDONALDS*". The stomach could send a burst of activation to "Hunger", which in turn activates the closely related "Food", which in turn activates the closely related "McDonalds", which in turn activates the closely related "*GET IN CAR, DRIVE TO MCDONALDS*", and then before you know it you're ordering a Big Mac.

But when you try to implement this on a computer, you don't get very far. Although it can perform certain very basic computations, it has trouble correcting itself, handling anything too complicated (the question "name one person who is *not* a barbaric Mongol warlord" would still return "Genghis Khan" on our toy spreading activation network), or making good choices (you can convince the toy network McDonalds is your best dining choice just by saying its name a lot; the network doesn't care about food quality, prices, or anything else.)

This simple spreading activation model also runs up against modern neuroscience research, which mostly contradicts the idea of a "grandmother cell", i.e. a single neuron that represents a single concept like your grandmother. Mysteriously, all concepts seem to be represented everywhere at once - Karl Lashley found he could remove any part of a rat's cortex without significantly damaging any specific memory, suggesting that memories are not localized to a single spot. How can this be?

Computer research into neural nets developed a model that could answer these and other objections, transforming the immature spreading activation model into full-blown connectionism.

CONNECTIONISM

Connectionism is what happens when you try to implement associationism on a computer and find out it's a lot weirder than you thought.

Take a bunch of miniprocessors called "units" and connect them to each other with unidirectional links. Call some units "inputs" and others "outputs". Decide what you want to do with them: maybe learn to distinguish chairs from non-chairs.

Each unit computes a single value representing its "activity level"; each link has a "strength" with which it links its origin unit to its destination unit. When a unit is "activated" (gets an activity level > 0), it sends that activation along all of its outgoing links. If it has an activation level of .5, and two outgoing links, one to A with strength .33 and one to B with strength -.5, then it sends .165 activation to unit A and -.25 activation to unit B. A and B might also be getting lots of activation from other units they're connected to.
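In code, the rule is just one multiplication per link; the numbers below reproduce the example in the paragraph above, with placeholder unit names:

```python
# A unit's contribution downstream is its own activity level times the link strength.
activation = 0.5
outgoing = {"A": 0.33, "B": -0.5}   # strengths of the two outgoing links

sent = {unit: activation * strength for unit, strength in outgoing.items()}
print(sent)   # {'A': 0.165, 'B': -0.25}
```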

Name your two output units "CHAIR" and "NOT A CHAIR". Connect your many input units to sense-data about the objects you want to classify as chairs or non-chairs; each one could be the luminosity of a pixel in an image of the object, or you could be kind to it and feed it pre-processed input like "IS MADE OF WOOD" and "IS SENTIENT".

Suppose we decide to start with a nice wooden chair. The IS MADE OF WOOD node lights up to its maximum value of 1: it's definitely made of wood! The IS SENTIENT node stays dark; it's definitely not sentient. And then...nothing happens, because we forgot to set the link strengths to anything other than 0. IS MADE OF WOOD is sending activation all over, but it's getting multiplied by zero and everything else stays dark.

We now need a program to train the neural net (or a very dedicated human with lots of free time). The training program knows that the correct answer should have been CHAIR, and so the node we designated "CHAIR" should have lit up. It uses one of several algorithms to change the strengths of the links so that the next time this same pattern of input nodes lights up, CHAIR will light up too. For example, it might change the link from IS MADE OF WOOD to CHAIR to .3 (why not change it all the way to its maximum value? Because that would erase all previous learning and reduce the system's entire intelligence to what it learned from just this one case).

On the other hand, IS SENTIENT is dark, so the training program might infer that IS SENTIENT is not a characteristic of chairs, and change the link strength there accordingly.

The next time the program sees a picture of a wooden chair, IS MADE OF WOOD will light up, and it will send its activation to CHAIR, making CHAIR light up with .3 units of activation: the program has a weak suspicion that the picture is a chair.
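Here is a sketch of what such a training step might look like. This uses a simple error-driven, delta-rule-style update, which is just one possible choice among the "several algorithms" mentioned above; the feature names and the .3 step come from the example, and everything else is invented.

```python
# Toy chair-detector: one output unit ("CHAIR"), two input units, and a
# training step that nudges link strengths toward the right answer instead of
# jumping straight to the maximum.

features = ["IS MADE OF WOOD", "IS SENTIENT"]
weights = {f: 0.0 for f in features}      # all link strengths start at zero
LEARNING_RATE = 0.3

def chair_activation(inputs):
    return sum(weights[f] * inputs[f] for f in features)

def train(inputs, is_chair):
    target = 1.0 if is_chair else 0.0
    error = target - chair_activation(inputs)
    for f in features:
        weights[f] += LEARNING_RATE * error * inputs[f]   # only active inputs shift their links

wooden_chair = {"IS MADE OF WOOD": 1.0, "IS SENTIENT": 0.0}

print(chair_activation(wooden_chair))   # 0.0 - nothing happens while all links are zero
train(wooden_chair, is_chair=True)
print(weights)                          # {'IS MADE OF WOOD': 0.3, 'IS SENTIENT': 0.0}
print(chair_activation(wooden_chair))   # 0.3 - a weak suspicion that this is a chair
```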

This is a pretty boring neural network, but if we add several hundred input nodes with all conceivable properties relevant to chairhood and spend a lot of computing power, eventually the program will become pretty good at telling chairs from non-chairs, and will "learn" complicated rules like: a three-legged wooden object is a stool, which sort of counts as a chair, but a three-legged sentient being is an injured dog, and sitting on it will only make it angry.

Larger and more complicated neural nets contain "hidden nodes" - the equivalent of interneurons, which sit between the input and the output and exist only to perform computations - as well as feedback from output nodes back to earlier nodes, which can create stable loops of activation, and other complications. They can perform much more difficult classification problems - identifying words from speech, or people from a photograph.
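A minimal sketch of the hidden-layer structure, with arbitrary sizes and random, untrained weights; it only shows how activation flows from inputs through hidden units to outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
W_in_hidden = rng.normal(size=(4, 3))    # 4 input units -> 3 hidden units
W_hidden_out = rng.normal(size=(3, 2))   # 3 hidden units -> 2 output units

def forward(inputs):
    hidden = np.tanh(inputs @ W_in_hidden)    # hidden units do the intermediate computation
    return np.tanh(hidden @ W_hidden_out)     # outputs read off the hidden pattern

print(forward(np.array([1.0, 0.0, 0.5, 0.0])))   # two output activations, meaningless until trained
```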

This is interesting because it solves a problem that baffled philosophers for millennia: the difficulty of coming up with good boundaries for categories. Plato famously defined Man as "a featherless biped"; Diogenes famously responded by presenting him with a plucked chicken. There seem to be many soft constraints on humans (can use language, have two legs, have a heartbeat) but there are also examples of humans who violate these constraints (babies, amputees, Dick Cheney) yet still seem obviously human.

Classical computers get bogged down in these problems, but neural nets naturally reason with "cluster structures in thing-space" and are expert classifiers in the same way we ourselves are.

SIMILARITIES BETWEEN NETS AND BRAINS


Even aside from their skill at classifying and pattern-matching, connectionist networks share many properties with brains, such as:

- Obvious structural similarities: neural nets work by lots of units which activate with different strengths and then spread that activation through links; the brain works by lots of neurons which fire at different rates and then spread that activation through axons.

- Lack of a "grandmother cell". A classical computer sticks each bit of memory in a particular location. A neural net stores memories as patterns of activation across all units in the network. In a feedback network, specific oft-repeated patterns can form attractor states to which the network naturally tends if pushed anywhere in their region (a toy attractor network is sketched just after this list). Association between one idea and another is not through physical contiguity, but through similarities in the pattern: "grandmother" probably has most of the same neurons in the same state as "grandfather", and so it takes only a tiny stimulus to push the net from one attractor state to the other.

- Graceful failure: Classical computer programs do not fail gracefully; flip one bit, and the whole thing blows up and you have to spend the rest of your day messing around with a debugger. Destroying a few units in a neural net may only cost it a little bit of its processing power. This matches the brain: losing a couple of neurons may make you think less clearly; losing a lot of neurons may give you dementia, memory loss, and poor judgment. But there's no one neuron without which you just sit there near-catatonic, chanting "ERROR: NEURON 10559020481 NOT RESPONDING." And Karl Lashley could take out any part of a rat's cortex without affecting its memories too much.

- Remembering and forgetting: Neural nets can form memories, and the more often a stimulus recurs, the better they will remember it. But the longer they go without encountering the stimulus, the more likely it is that the units involved in the memory-pattern will strengthen other connections, and the harder it becomes to push them back into the memory pattern. This is much closer to how humans treat memory than the pristine, eternal encoding of classical computers.

- Ability to quickly locate solutions that best satisfy many soft constraints. What's a good place for dinner that's not too expensive, not more than twenty minutes away, serves decent cocktails, and has burgers for the kids? A classical computer would have to first identify the solution class as "restaurants", then search every restaurant it knows to see if they match each constraint, then fail to return an answer if no such restaurant exists. A neural net will just *settle* on the best answer, and if the cocktails there aren't really that good, it'll just settle but give the answer a lower strength.

- Context-sensitivity. Gold silver copper iron tin, and now when I say "lead", you're thinking of Element 82 (Pb), even though without the context a more natural interpretation is of the "leadership" variety. Currently active units can force others into a different pattern, giving context sensitivity not only to semantic priming as in the above example, but to emotions (people's thoughts follow different patterns when they're happy or sad), situations, and people.
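To make the "attractor state" idea from the list above concrete, here is a minimal Hopfield-style sketch: two distributed patterns that share most of their units are stored, and a slightly corrupted cue settles back into the nearest one. The ±1 vectors are arbitrary stand-ins for "grandmother" and "grandfather", not anything a real brain uses.

```python
import numpy as np

grandmother = np.array([ 1,  1, -1, -1,  1, -1,  1, -1])
grandfather = np.array([ 1,  1, -1, -1, -1,  1,  1, -1])   # differs in only two units
patterns = [grandmother, grandfather]

# Hebbian weights: units that are active together get stronger mutual links
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def settle(state, sweeps=5):
    state = state.copy()
    for _ in range(sweeps):
        for i in range(len(state)):                  # asynchronous unit updates
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

cue = grandmother.copy()
cue[0] = -cue[0]                                     # corrupt one unit of the memory
print(np.array_equal(settle(cue), grandmother))      # True: the net falls back into the attractor
```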

Neural nets have also been used to simulate the results of many popular psychological experiments, including different types of priming, cognitive dissonance, and several of the biases and heuristics.

CONNECTIONISM AND REINFORCEMENT LEARNING

The link between connectionism and associationism is pretty obvious, but the link between connectionism and behaviorism is more elegant.

In most artificial neural nets, you need a training program to teach the net whether it's right or wrong and which way to adjust the weights. Brains don't have that luxury. Instead, part of their training algorithm for cognitive tasks is based on surprise: if you did not expect the sun to rise today, and you saw it rise anyway, you should probably decrease the strength of whatever links led you to that conclusion, and increase the strengths of any links that would have correctly predicted the sunrise.

Motivational links, however, could be modified by reinforcement. If a certain action leads to reward, strengthen the links that led to that action; if it leads to punishment, strengthen the links that would have made you avoid that action.
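An invented illustration of that rule, reusing the McDonalds chain from earlier; the node names, numbers, and reward scheme are all made up, and nothing here is claimed to be the brain's actual algorithm:

```python
# Links along the chain that produced a rewarded action get stronger;
# a punished action (negative reward) weakens them instead.

links = {
    ("HUNGER", "FOOD"): 0.5,
    ("FOOD", "MCDONALDS"): 0.5,
    ("MCDONALDS", "GET IN CAR, DRIVE TO MCDONALDS"): 0.5,
}

def reinforce(path, reward, rate=0.1):
    for link in path:
        links[link] += rate * reward

reinforce(list(links), reward=+1.0)   # the Big Mac was good; food poisoning would be -1.0
print(links)                          # every link in the chain is now 0.6
```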

This explains behaviorist principles as a simple case of connectionism, the one where all the links are nice and straight, and you just have to worry about motivation and not about where cognition is coming from. Many of the animals typically studied by behaviorists were simple enough that this simple case was sufficient.

Although I think connectionism is our best current theory for how the mind works at a low level, it's hard to theorize about just because the networks are so complicated and so hard to simplify. Behaviorism is useful because it reduces the complexity of the networks to a few comprehensible rules, which allow higher level psychological theories and therapies to be derived from them.

COMMENTS

These do not strike me as failures to replicate human brains:

the question "name one person who is not a barbaric Mongol warlord" would still return "Genghis Khan"

Name an object that isn't a jar of peanut butter. What did you immediately think of? (Yeah, you correct yourself afterward. But still, I'd be more likely to blurt out "Genghis Khan" to that question than to the question "Name one person".)

you can convince the toy network McDonalds is your best dining choice just by saying its name a lot

That's how advertising works, isn't it? See also believing everything we're told and your own post on repeated affirmation.

Name an object that isn't a jar of peanut butter. What did you immediately think of?

A jar of peanut butter. Then a jar of jam. Sample size of one, but that looks a lot to me like filtering activated concepts.

The problem as I understand it is precisely that the spreading activation model doesn't include any natural way of doing that filtering.

The problem as I understand it is precisely that the spreading activation model doesn't include any natural way of doing that filtering.

Well, it may not be obvious how the error correction works, but it still explains the part that generates the hypotheses to be chosen.

This is similar to the Stroop effect, and from studying that kind of stuff, they've figured out which part of the brain (the anterior cingulate cortex, or ACC) actually does the error correction. Since error correction is handled by a completely separate part of the brain, there's no reason to think that the part that generates the errors works any differently.

Name an object that isn't a jar of peanut butter. What did you immediately think of?

An elephant, because the version of this question that's usually asked is about elephants; I thought of elephants before I finished reading the sentence.

Second, I thought of a jar of peanut butter.

I still haven't consciously thought of another object yet, except just now as I was thinking about what object I might think of, and thought of the spoon I was using to eat my food.

Name an object that isn't a jar of peanut butter. What did you immediately think of?

A jar of peanut butter. Then a knife, and then a piece of toast.

This simple spreading activation model also crashes up against modern neuroscience research, which mostly contradicts the idea of a "grandmother cell", ie a single neuron that represents a single concept like your grandmother.

...

Association between one idea and another is not through physical contiguity, but through similarities in the pattern. "Grandmother" probably has most of the same neurons in the same state as "grandfather", and so it takes only a tiny stimulus to push the net from one attractor state to the other.

This is extremely unlikely. Associations can be made between concepts long after the patterns for those concepts have been learned. For a different explanation, see my 2000 article, A neuronal basis for the fan effect. It used the idea of convergence zones, promoted by Antonio Damasio (Damasio, A. R. (1990), Synchronous activation in multiple cortical regions: A mechanism for recall. The Neurosciences 2:287–296). My paper did this:

  • Have binary-neuron network 1 represent one concept by a collection of activated nodes
  • Have network 2 (or the same network in the next timestep) represent another concept the same way
  • Have a third network (the convergence zone) learn associations between patterns in those two networks, using the Amari/Hopfield algorithm.

Then the settling of the neurons in the convergence zone into a low-energy state causes the presence of one pattern in network 1 to recall an associated pattern in network 2, with dynamics and error rates that closely mimic John Anderson's experiments on the quantitative measurement of spreading activation in humans.
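For concreteness, a rough sketch of the general recipe above in Python; this is not the actual model, patterns, or parameters from the paper, just the bare convergence-zone idea with toy ±1 patterns:

```python
import numpy as np

# Network 1 and network 2 each hold a distributed pattern for a concept; a
# Hopfield/Amari-style weight matrix over their concatenation learns the
# pairing, and presenting one pattern recalls its partner by settling.

concept_A = {"net1": np.array([ 1,  1,  1,  1]), "net2": np.array([ 1,  1, -1, -1])}
concept_B = {"net1": np.array([ 1, -1,  1, -1]), "net2": np.array([ 1, -1, -1,  1])}

pairs = [np.concatenate([c["net1"], c["net2"]]) for c in (concept_A, concept_B)]
W = sum(np.outer(p, p) for p in pairs).astype(float)
np.fill_diagonal(W, 0)

def recall(net1_pattern, sweeps=5):
    state = np.concatenate([net1_pattern, np.zeros(4, dtype=int)])
    for _ in range(sweeps):
        for i in range(len(state)):               # settle into a low-energy state
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state[4:]                              # read the recalled pattern off network 2

print(recall(concept_A["net1"]))   # [ 1  1 -1 -1] - concept_A's network-2 pattern
```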

(I was careful in my article to give credit to Amari, who invented the Hopfield network 10 years before Hopfield did. But I see now the editor "fixed" my reference to no longer give Amari priority.)

Thank you.

I know very little about connectionist networks beyond what I have read in a few review articles. I wrote this not because I was the best person to write it but because no one else has written anything on them yet and I had to stumble across a description of them while looking for other stuff, which upset me because I would have loved to have learned about them several years earlier. I would love if you or someone else who is an expert in the field would write something more up-to-date and accurate.

As far as I understand it, the "grandmother cell" hypothesis is mostly dead. At least in artificial neural networks, they tend to favor representing concepts as highly distributed patterns. So "grandma" would activate a neuron that represents "old", and another that represents "woman". And often they don't even form human-interpretable patterns like that.

Here are some videos of Geoffrey Hinton explaining the idea of distributed representations:

http://d396qusza40orc.cloudfront.net/neuralnets/recoded_videos%2Flec4a%20%5B199f7e86%5D%20.mp4

http://d396qusza40orc.cloudfront.net/neuralnets/recoded_videos%2Flec4b%20%5Bb6788b94%5D%20.mp4

A great example of this concept is word2vec, which learns distributed representations of words. You can take the vectors of each word that it learns and do cool stuff. Like "king"-"man"+"woman" returns a vector very close to the representation for "queen".
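For anyone who wants to try it, a sketch using gensim and a pretrained word2vec file; the filename below is the standard GoogleNews vectors download, which you would have to fetch separately, and any word2vec-format file works:

```python
from gensim.models import KeyedVectors

# Load pretrained vectors (a large download; the path is a placeholder for wherever you put it)
vectors = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)

# Nearest vector to king - man + woman
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
# should come out as something like [('queen', 0.71...)]
```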

And by representing concepts in fewer dimensions, you can generalize much better. If you know that old people have bad hearing, you can then predict grandma might have bad hearing.

Motivational links, however, could be modified by reinforcement. If a certain action leads to reward, strengthen the links that led to that action; if it leads to punishment, strengthen the links that would have made you avoid that action.

Reward comes along too much later for this to work for humans. Instead, the brain uses temporal difference learning. I no longer remember what was the first, classic paper demonstrating temporal difference error signals in the brain; it may have been A Neural Substrate of Prediction and Reward (1997). Google ("temporal difference learning", brain). "Temporal Difference Models and Reward-Related Learning in the Human Brain" , Neuron, 2003, will be one of the hits.
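For concreteness, a rough sketch of a TD(0) value update on a tiny three-step episode, with states, rewards, and parameters invented for illustration: the prediction error, and therefore the learning, creeps back from the reward to the earliest predictive cue over repeated episodes.

```python
states = ["cue", "wait", "reward_time"]
values = {s: 0.0 for s in states}     # learned predictions of future reward
ALPHA, GAMMA = 0.5, 1.0               # learning rate, discount factor

for episode in range(20):
    for i, s in enumerate(states):
        reward = 1.0 if s == "reward_time" else 0.0
        next_value = values[states[i + 1]] if i + 1 < len(states) else 0.0
        td_error = reward + GAMMA * next_value - values[s]   # the "surprise" signal
        values[s] += ALPHA * td_error

print(values)   # all three values approach 1.0, the cue last - prediction has propagated backward
```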

I agree that the brain uses temporal difference learning. I thought temporal difference learning was that reward propagates back to the earliest reliable stimulus, based on the difference between expected and observed reward, which is then reinforced. How is that different from the quoted text, except that the quoted version is simpler and doesn't use that language?

Connectionism may be the best we've got. But it is not very good.

Take the recent example of improving performance on a task by reading a manual. If we were to try to implement something similar in a connectionist/reinforcement model, we would have problems. We need positive and negative reinforcement to change the neural connection strengths, but we wouldn't get those whilst reading a book, so how do we assimilate the non-inductive information stored in there? It is possible with feedback loops, which can be used to store information quickly in a connectionist system; however, I haven't seen any systems use them, or learn them, on the sort of scale that would be needed for the Civilization problem.

There are also more complex processes which seem out of its reach, such as learning a language using a language e.g. "En francais, le mot pour 'cat' est 'chat'".

Neural nets don't need feedback. They can benefit from unsupervised learning too. In this case you would have it learn a model of the manual's text, and another model of playing the game, and connect them.

When words appear in the game, they will activate neurons in the text net. The game playing net might find that these correlate with successful actions in the game and make use of it.

The idea of "virtual machines" mentioned in [Your Brain is (Almost) Perfect](http://www.amazon.com/Your-Brain-Almost-Perfect-Decisions/dp/0452288843) is tempting me to think in the direction of "reading a manual will trigger the neurons involved in running the task, and the reinforcements will be implemented on those 'virtual' runs".

How reading a manual triggers this virtual run can be answered the same way as how hearing "get me a glass of water" triggers the neurons to do so, and if I get a "thank you" it will be reinforced. In the same way, reading "to turn on the TV, click the red button on the remote" might trigger the neurons for turning on a TV and reinforce the behavior in accordance with the manual.

I know this is quite a wild guess, but perhaps someone can elaborate on it in a more accurate manner

All the while, "salmon", which has no connection to any of these concepts, just sits on its own not being activated.

Screw you! Now, because of you, whenever I hear of Genghis Khan in the next coupla weeks, I will think of salmon!

Maybe I'm missing something, but I don't see anything in your article that actually shows how to make a neural network without some sort of "grandmother coding" (e.g. in the "hidden layer").

I'm also curious to see how you consider connectionist networks to stack up against, say, the memory-prediction framework for neural organization. ISTM that "stuff happens in a distributed way across a hidden layer" says a lot less about what we should anticipate if we cut open a brain and watch it working.

News: The idea that there is no grandmother cell seems to be challenged by this: http://www.nature.com/nature/journal/vnfv/ncurrent/full/nature11028.html

So how does "not" work, then? It seems like even if you put in a bunch of hidden nodes and distribute the knowledge throughout the brain, you're gonna have trouble with compositional semantics like that.

Couldn't "not" negatively reinforce a hidden node level between the input and output?

I'd like to hear what an expert like Phil has to say on this topic.

Normal artificial neural networks are Turing complete with a certain number of hidden layers (I think 4, but it has been a long time, and I don't know the reference offhand; this says 1 for universal approximation (paywalled)). A bit of googling says that recurrent neural networks are Turing complete.

Feed-forward neural networks can represent any computable function between their input and their output. They are not Turing complete with respect to past inputs and outputs, the way AIXI is.

Note this doesn't say anything about the set of training data needed to get the network to represent the function or how big the network would need to be. Just about the possibility.

You could use the difficulty of Taboo (the regular game, not the rationalist version) as another example.