From time to time I encounter people who claim that our brains are really slow compared to even an average laptop computer and can't process big numbers.

At the risk of revealing my complete lack of knowledge of neural networks and how the brain works, I want to ask if this is actually true?

It took massive amounts of number crunching to create movies like James Cameron's Avatar. Yet I am able to create more realistic and genuine worlds in front of my mind's eye, on the fly. I can even simulate other agents. For example, I can easily simulate sexual intercourse between me and another human, which includes tactile and olfactory information.

I am further able to run real-time egocentric world-simulations to extrapolate and predict the behavior of physical systems and other agents. You can do that too. Having a discussion or playing football are two examples.

Yet any computer can outperform me at simple calculations.

But it seems to me, maybe naively so, that most of my human abilities involve massive amounts of number crunching that no desktop computer could do.

So what's the difference? Can someone point me to some digestible material that I can read up on to dissolve possible confusions I have with respect to my question?

Emile

It took massive amounts of number crunching to create movies like James Cameron's Avatar. Yet I am able to create more realistic and genuine worlds in front of my mind's eye, on the fly.

You think you can. When you imagine a Dragon, it's very clear. But once you start trying to put it onto paper, you realize you don't really know how the wings are attached to the body, or how the wings and forelimbs connect, or how the hind limbs fold, etc.

Just like the mind can imagine "a dragon", it can also imagine the perception "and all the details are complete" without being able to actually fill out all the details if you ask it to.

And while you can imagine the details of some scenes (familiar scenes, or if you're a good artist), you can only do it on a small section, you're not summoning a fully detailed scene in your mind. Rendering Avatar requires getting the details for every inch of the screen, simultaneously.

And while you can imagine the details of some scenes (familiar scenes, or if you're a good artist), you can only do it on a small section, you're not summoning a fully detailed scene in your mind. Rendering Avatar requires getting the details for every inch of the screen, simultaneously.

Yes, but I can jump to any inch and simulate it on the fly if I want to. If I take my room or garden, I can simulate any part. Even the fine details of leaves. And the same is true for completely new environments.

I can't draw those scenes. But some people can. I never learnt to draw...

Blind mathematicians can even imagine higher dimensions.

The point was, doesn't this require a lot of number crunching? Big numbers, for what it's worth...

gwern

The point was, doesn't this require a lot of number crunching? Big numbers, for what it's worth...

Ask such a blind mathematician to calculate an abstruse property of a geometric figure with randomized values down to the, say, 60th decimal place. Will they be able to do it? After all, it's a trivial amount of computation compared to what you imply is going on in their heads when they do the math that so impresses you.

A general counter-example to your post is dreams: in a dream, one usually feels sure of the reality and convincingness of the dream (and when one has cultivated the rare & unusual skill of doubting dreams, then one can do things like lucid dreaming) and yet there's hardly any information or calculation involved. Have you ever tried to read a book in a dream? Has an excellent logical argument been explained to you in a dream and then you tried to remember it when you woke up? I've heard both examples before, and when I was working on lucid dreaming & kept a dream journal, I did both, to no effect.

What's going on is more a case of domain-specific calculating power being hijacked for other things, heuristics, and people looking where the light is.

(Why was research into fractals and chaotic functions delayed until the '50s and later, when the initial results could often be shown to stem from material in the 1800s? It doesn't require much calculating power, Mandelbrot did his weather simulations on a computer much weaker than a wristwatch. Because the calculating power required, unlike geometry say, was not one that fits nicely into the visual cortex or requires extremely few explicit arithmetical calculations.)
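To make that concrete: checking whether a single point is in the Mandelbrot set is nothing but a short loop of explicit multiplications and additions, repeated dozens or hundreds of times, trivial for a machine but tedious for a visual cortex. Here is a minimal Python sketch; the function name, iteration cap, and test values are illustrative choices, not anything from the comment above.

```python
# Minimal sketch of the explicit arithmetic behind the Mandelbrot set:
# iterate z -> z^2 + c and see whether the orbit escapes.
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Return True if c appears to stay bounded under z -> z^2 + c."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:       # once |z| > 2 the orbit provably escapes
            return False
    return True

print(in_mandelbrot(-1 + 0j))  # True: the orbit cycles between -1 and 0
print(in_mandelbrot(1 + 0j))   # False: 0, 1, 2, 5, 26, ... escapes quickly
```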

A square inch would require much less number crunching than a whole scene in Avatar; there are also details that humans don't easily imagine right - the way light is reflected on a necklace, the way ambient light colors shadows, the way the skin ripples when muscles move underneath. Those things are expensive to compute, but your mind can get away with not imagining them right, and still think it looks "right".

Those things are expensive to compute, but your mind can get away with not imagining them right, and still think it looks "right".

This is probably right, but I would have to devise a way to better estimate the precision with which a human brain can simulate a scene.

One could also look at how long it takes a modern supercomputer to simulate a square inch of a frame, minus some lighting details. Although I perceive my imagination to be much more photorealistic than a frame from the movie Avatar.

I don't think it's very meaningful to compare one's subjective impression of detail, and a computer rendering with too much precision; those are pretty different things.

The reasons why I think humans are worse at imagining detail than they usually notice:

  • I like drawing, and often I could imagine something clearly, but when I tried to put it on paper I needed a lot of tries to get it right, if I managed at all. Some things - expressions, folds in clothes, poses - are surprisingly difficult to get right, even when we see them all the time.

  • When playtesting the user interface of games, we sometimes ask the playtesters to draw what they saw on the screen. Often huge obvious stuff is missing, and sometimes they even draw things that weren't on the screen, and come from another game.

Those are just rough impressions, but I don't think it's useful to go much further than rough impressions on this specific topic.


[anonymous]

Mind Projection Fallacy.

There are artists with sufficient visual and spatial cognitive abilities who can do exactly what you claim cannot be done -- imagine and sketch out new anatomies on the fly.

Mind Projection Fallacy

The mind projection fallacy is confusing properties of your thoughts with properties of the environment; in this case it would be thinking that because your imagination of something was imperfect, the thing itself was really fuzzy in the real world. Confusing the way you think with the way others think is called the typical mind fallacy.

[anonymous]

Oops.

Agreed - I was thinking of that in the third paragraph; when it is possible, it's still pretty slow - faster than drawing, slower than rendering.

(Without requiring amazing artists, I think a lot of people would be capable of imagining their bedroom at a high level of detail.)

I think there are some confusions here about the mind's eye, and the way the visual cortex works.

First of all, I suggest you do the selective attention test; the well-known "Selective Attention Test" video will do.

This video illustrates the difference between looking at a scene and actually seeing it. Do pay attention closely or you might miss something important!

The bottom line is that when you look at the outside world, thinking that you see it, your brain is converting the external world of light images into an internal coding of that image. It cheats, royally, when it tells you that you're seeing the world as it is. You're perceiving a coded version of it, and that code is actually optimised for usefulness and the ability to respond quickly. It's not optimised for completeness - it's more about enabling you to pay attention to one main thing, and ignore everything else in your visual field that doesn't currently matter.

And that's where your comparisons later fall down. The computers rendering Avatar have to create images of the fictional world. Your own internal mind's eye doesn't have to do that - it only has to generate codes that stand for visual scenes. Where Avatar's computers had to render the dragon pixel by pixel, your internal eye only has to create a suitable symbol standing for "Dragon in visual field, dead centre." It doesn't bother to create nearly all of the rest of your imagined world, in the same way that some people miss the important thing in the attention test. Because you only EVER see the coded versions of the world, the two look the same to you. But it is a much cheaper operation, as it's working on small groups of codes, not millions of pixels.

The human brain is a very nice machine - but I also suspect it's not as fast as many people think it is. Time will tell.

Yes, your brain has much more processing power than anybody's laptop. But I think the people you quote are referring to the available general-purpose processing power.

Your visual cortex, for instance, packs a huge amount of processing power, but it's specialized for one task - processing visual information. You can't just tell it to crunch numbers and get a useful output, because it isn't built for that. Yes, there are clever tricks for employing your visual cortex in calculating big numbers, but even those don't take full advantage of it - the author of the linked paper says he was able to multiply two random 10-digit numbers together over a period of 7 hours. If you could use all the power in your visual cortex, you'd get the result instantly.

The brain packs a huge amount of power, but most of it is very specialized and hard to take advantage of, unless your task happens to be one similar to the domain that your brain has specialized circuitry for. Yes, with enough practice and the right tricks, people can learn to perform impressive feats of arithmetic and memory, so the circuitry is to some extent reconfigurable. But no human yet has managed to outdo modern computers in pure number-crunching. And getting even that far takes a lot of practice, so the circuitry isn't usefully available for most tasks.

We do have some processing power available for truly general-purpose computing. Given any (non-quantum) algorithm, a pen and some paper, you can simulate that algorithm with enough time. We had people do that before coming up with digital computers. But it's going to take a while, because your usefully available general-purpose computing power is very limited. In contrast, practically all of the laptop's processing power is general-purpose and usefully available - give it the same algorithm, and it's going to run through it a lot faster than you will.

The switching rate in a processor is faster than the firing rate of neurons.

as a rough estimate it is reasonable to estimate that a neuron can fire about once every 5 milliseconds, or about 200 times a second

All else being equal, a computer should be faster than an aggregate of neurons. But all isn't equal, even when comparing different processors. Comparing transistors in a modern processor to synapses in a human brain yields many more synapses than transistors. Furthermore, the brain is massively parallel, and has a specialized architecture. For what it does, it's well optimized, at least compared to how optimized our software and hardware are for similar tasks at this point.

For instance, laptop processors are general-purpose processors: because they can do many different tasks, they aren't really fast or good at any single one. Some specific tasks may make use of custom-made processors, which, even if their clock rate is slower or they have fewer transistors, will still vastly outperform a general-purpose processor at the task they were custom-built for.

It took 35,000 processor cores to render Avatar. If we assume that a Six-Core Opteron 2400 (2009, the same year as Avatar) has roughly 10^9 transistors, then we have (35,000/6)*10^9 = 5.83*10^12 transistors.

The primary visual cortex has 280 million neurons, while a typical neuron has 1,000 to 10,000 synapses. If we assume 10,000 synapses per neuron, that makes 2.8*10^8 * 10^4 = 2.8*10^12 synapses.

By this calculation it takes 5.83*10^12 transistors to render Avatar and 2.8*10^12 synapses to simulate something similar on the fly, which is roughly the same order of magnitude.

Since the clock rate of a processor is about 10^9 Hz and the firing rate of a neuron is about 200 Hz, does this mean that the algorithms our brain uses are, very roughly, (10^9)/200 = 5*10^6 times more efficient?
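For anyone who wants to check or tweak the arithmetic, here is a minimal Python sketch that just restates the estimates quoted above; all of the constants are the rough figures from this thread, not measurements.

```python
# Back-of-the-envelope comparison using only the figures quoted above;
# every number is a rough estimate from the thread, not a measurement.

cores_for_avatar     = 35_000   # render-farm cores cited for Avatar
cores_per_opteron    = 6        # Six-Core Opteron 2400 (2009)
transistors_per_chip = 1e9      # rough transistor count per Opteron

v1_neurons           = 2.8e8    # primary visual cortex neurons
synapses_per_neuron  = 1e4      # upper end of the 1,000-10,000 range

cpu_clock_hz         = 1e9      # order-of-magnitude CPU clock rate
neuron_rate_hz       = 200      # ~one spike per 5 ms

transistors_total = (cores_for_avatar / cores_per_opteron) * transistors_per_chip
synapses_total    = v1_neurons * synapses_per_neuron
speed_ratio       = cpu_clock_hz / neuron_rate_hz

print(f"transistors in the render farm: {transistors_total:.2e}")  # ~5.83e12
print(f"synapses in V1:                 {synapses_total:.2e}")     # ~2.80e12
print(f"clock-rate ratio (CPU/neuron):  {speed_ratio:.0e}")        # ~5e6
```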

I don't think this is a valid comparison; you have no idea whether rendering Avatar is similar to processing visual information.

Also, without mentioning the rate at which those processors rendered Avatar, the number of processors has much less meaning. You could probably do it with one processor, 35,000 times more slowly.

Some questions we would need to answer, then:

1) What is the effective level of visual precision computed by those processors for Avatar, versus the level of detail that's processed in the human visual cortex?

2) Is the synapse the equivalent of a transistor if we are to estimate the respective computing power of a brain and a computer chip? (I.e., is there more hidden computation going on at other levels? Since synapses use different neurotransmitters, does that add additional computational capability? Are there processes within neurons that similarly do computational work? Are other cells, such as glial cells, performing computationally relevant operations too?)

There are several issues here.

First, just because ~5x10^12 transistors were used to render Avatar (slower than real-time, by the way) does not mean that it minimally requires ~5x10^12 transistors to render Avatar.

For example, I have done some prototyping for fast, high-quality real-time volumetric rendering, and I'm pretty confident that the Avatar scenes (after appropriate database conversion) could be rendered in real time on a single modern GPU using fast voxel cone tracing algorithms. That entails only 5*10^9 transistors, though we should also mention storage, because these techniques would require many gigabytes of off-chip storage for the data (on a flash drive, for example).

Second, rendering and visual recognition are probably of roughly similar complexity, but it would be more accurate to do an apples to apples comparison of human V1 vs a fast algorithmic equivalent of V1.

Current published GPU neuron simulation techniques can handle a few million neurons per GPU, which would entail about 100 GPUs to simulate human V1.

Once again I don't think current techniques are near the lower bound, and I have notions of how V1-equivalent work could be done on around one modern GPU, but this is more speculative.
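For concreteness, the "about 100 GPUs" figure above follows from the V1 neuron count quoted earlier in the thread plus an assumed per-GPU capacity; the 3 million neurons per GPU below is an illustrative pick from the "few million" range, not a published benchmark.

```python
# Rough GPU count for simulating V1 with current techniques; the
# per-GPU capacity is an assumed value from the "few million" range above.
v1_neurons      = 2.8e8   # primary visual cortex neuron count quoted earlier
neurons_per_gpu = 3e6     # assumed "few million" neurons simulated per GPU

gpus_needed = v1_neurons / neurons_per_gpu
print(f"GPUs needed: ~{gpus_needed:.0f}")   # ~93, i.e. on the order of 100
```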

Furthermore, the brain is massively parallel, and has a specialized architecture. For what it does, it's well optimized, at least compared to how optimized our software and hardware are for similar tasks at this point. For instance, laptop processors are general-purpose processors: because they can do many different tasks, they aren't really fast or good at any single one.

Intuitively this doesn't seem right at all: I can think of plenty of things that a human plus an external memory aid (like a pencil and paper) can do that a laptop can't, but (aside from dumb hardware stuff like "connect to the internet" and so on) I can't think of anything for which the reverse is true. I can, however, think of plenty of things that they both can do, but that a laptop can do much faster. Or am I misinterpreting you?

I'm not sure I understand your question.

I guess part of my point is that a laptop processor is a very general purpose tool, while the human brain is a collection of specialized modules. Also, the more general a tool is, the less efficient it will be on average for any task.

The human brain might be seen as a generalist, but not in the same way a laptop computer processor is.

Besides, even a laptop processor has certain specializations and advantages over the human brain in certain narrow domains, such as number crunching and fast arithmetic operations.

But it seems to me, maybe naively so, that most of my human abilities involve massive amounts of number crunching that no desktop computer could do.

I think it's an unfair comparison because you are allowed to cheat. You don't have to produce 1500×1000 pixels, 25 times per second, consistently, with correct lights and reflections, etc. Different minds may work differently, but I suspect there is a lot of cheating. A bad result may seem OK, because you are allowed to fix any detail at the moment you start paying attention to it; you don't even have to notice that you are doing this.
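A minimal sketch of the throughput a renderer cannot cheat on, using just the frame size and rate quoted above (the numbers are the commenter's round figures, not actual Avatar specifications):

```python
# Pixel throughput a renderer must sustain, using the numbers in the comment.
width, height = 1500, 1000   # frame resolution quoted above
fps = 25                     # frames per second quoted above

pixels_per_second = width * height * fps
print(f"{pixels_per_second:,} pixels per second")   # 37,500,000
```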

On the other hand, it wouldn't surprise me if the human brain were really faster at some tasks -- those we are optimized for by evolution, such as image processing. The visual part of the brain does some massively parallel computation, and if a task allows parallel computation, many slow computers can outperform one fast computer (or a smaller group of fast computers).

Both effects may work together -- we have parallel hardware optimization for image processing and we are allowed to give imprecise answers and even cheat.

When you create a movie in your mind's eye, then firstly, it's not clear that it actually has the amount of detail that a computer-rendered movie does. But secondly, there is (probably) no structure in your visual cortex that would straightforwardly map to a number. (This is very unlike the case in a computer, where every voltage in some sense represents a number.) Instead your neurons are taking shortcuts that a human engineer would simulate by using explicit arithmetic. Recall the famous pitch-detecting circuit; it is clearly doing the equivalent of some arithmetic, but there's certainly no explicit representation within it of any numbers.

Just because something can be achieved by explicitly representing numbers, doesn't mean that your brain does it that way - not even under the hood. Consequently the comparison to things that were achieved by massive number-crunching capacity is not strong evidence of such capacity in your brain.

I once thought that mathematical geometry worked by a kind of detail crunching.

If a line is just a systematic set of infinitely many points, then checking whether two lines intersect would just be the "simple" operation of checking whether they have a point in common: take points from one line and check whether each is part of the other line. Doing this with a literally infinite number of points would amount to a supertask, so you could only do it to arbitrary precision, never exactly.

However, a very simple math problem like "find the intersection of the lines y=2x and y=3x+5" can be solved exactly in a small, finite number of symbol operations. In fact, the infinite set of points on the first line is described by the very finite expression "y=2x". There are also infinitely many such lines, but finding their intersections doesn't involve attending to the points pair by pair; the procedure of solving the two descriptions as a pair of equations can be expressed at a more meta, "more finite" level.
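A minimal sketch of that contrast: the same pair of lines solved exactly with a handful of symbol manipulations rather than point-by-point comparison (sympy is used here purely as an illustration; two lines of hand algebra would do just as well):

```python
# Solving "y = 2x" and "y = 3x + 5" exactly, with a handful of symbol
# manipulations instead of comparing infinitely many points.
from sympy import symbols, Eq, solve

x, y = symbols("x y")
solution = solve([Eq(y, 2 * x), Eq(y, 3 * x + 5)], [x, y])
print(solution)   # {x: -5, y: -10}
```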

So instead of just a big fleet of lowest-level comparisons, what really happens is a tiny amount of work at different levels. If one counted each symbol manipulation as a single number-crunching operation, the supertask of point comparisons would seem by far the most demanding. However, using multiple levels of symbols means supporting a wider array of symbol-manipulation operations.

So while it appears that I compare infinitely many points when I am doing simple geometry, I am really just bypassing the limits of one kind of calculation by using another kind.

Yet any computer can outperform me at simple calculations.

Humans aren't built to do math. This is true on both a lower level, in that an imperfectly designed neural network wouldn't be very efficient at precise calculations, and a higher level, in that humans do not have any sort of specialized part of their brain for math.

It is actually pretty interesting: software that tries to do something highly parallelizable that the brain does always ends up requiring something in the ballpark of the number of operations per second that the neurons perform in parallel. If we are to talk about Avatar, the more interesting fact is that this much computing power, roughly in the ballpark of the power of the visual cortex 'naively' computed, is necessary just to fool you. Likewise for other things. Even for things that are pretty narrow, where the brain performs badly, like chess, the computing power required is quite formidable. Much more so for something like Go.

The belief that your abilities are doable without immense number crunching is the notion of the overly optimistic AI researchers of the 1960s. It's been obsoleted everywhere but here. Here you have this notion that brain hardware is somehow very 'badly designed' by evolution, never mind that it packs an immense number of operations per second into a small volume with small power consumption, the only major failure being the use of organic materials rather than silicon. And you have the notion that the software is equally bad. And a total lack of awareness of why the world no longer thinks this is true.