Followup to: Economic Definition of Intelligence?

After I challenged Robin to show how economic concepts can be useful in defining or measuring intelligence, Robin responded by - as I interpret it - challenging me to show why a generalized concept of "intelligence" is any use in economics.

Well, I'm not an economist (as you may have noticed) but I'll try to respond as best I can.

My primary view of the world tends to be through the lens of AI.  If I talk about economics, I'm going to try to subsume it into notions like expected utility maximization (I manufacture lots of copies of something that I can use to achieve my goals) or information theory (if you manufacture lots of copies of something, my probability of seeing a copy goes up).  This subsumption isn't meant to be some kind of challenge for academic supremacy - it's just what happens if you ask an AI guy an econ question.

So first, let me describe what I see when I look at economics:

I see a special case of game theory in which some interactions are highly regular and repeatable:  You can take 3 units of steel and 1 unit of labor and make 1 truck that will transport 5 units of grain between Chicago and Manchester once per week, and agents can potentially do this over and over again.  If the numbers aren't constant, they're at least regular - there's diminishing marginal utility, or supply/demand curves, rather than rolling random dice every time.  Imagine economics if no two elements of reality were fungible - you'd just have a huge incompressible problem in non-zero-sum game theory.
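
To see how much work that regularity is doing, here is a minimal sketch of the truck example as a repeatable production rule - the function and all quantities are just the illustrative numbers above, not a real economic model:

```python
# A minimal sketch of a repeatable production rule, using the illustrative
# numbers from the truck example above - not a real economic model.

def build_trucks(steel: int, labor: int) -> int:
    """Each truck consumes 3 units of steel and 1 unit of labor."""
    return min(steel // 3, labor)

trucks = build_trucks(steel=30, labor=7)   # -> 7 (labor is the bottleneck)
grain_per_week = trucks * 5                # each truck moves 5 units/week
print(trucks, grain_per_week)              # 7 35
```

The point is that the same rule can be applied over and over; if no two inputs were fungible, there would be no such function to write down.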

This may be, for example, why we don't think of scientists writing papers that build on the work of other scientists in terms of an economy of science papers - if you turn an economist loose on science, they may measure scientist salaries paid in fungible dollars, or try to see whether scientists trade countable citations with each other.  But it's much less likely to occur to them to analyze the way that units of scientific knowledge are produced from previous units plus scientific labor.  Where information is concerned, two identical copies of a file are the same information as one file.  So every unit of knowledge is unique, non-fungible, and so is each act of production.  There isn't even a common currency that measures how much a given paper contributes to human knowledge.  (I don't know what economists don't know, so do correct me if this is actually extensively studied.)

Since "intelligence" deals with an informational domain, building a bridge from it to economics isn't trivial - but where do factories come from, anyway?  Why do humans get a higher return on capital than chimpanzees?

I see two basic bridges between intelligence and economics.

The first bridge is the role of intelligence in economics: the way that steel is put together into a truck involves choosing one out of an exponentially vast number of possible configurations.  With a more clever configuration, you may be able to make a truck using less steel, or less labor.  Intelligence also plays a role at a larger scale, in deciding whether or not to buy a truck, or where to invest money.  We may even be able to talk about something akin to optimization at a macro scale, the degree to which the whole economy has put itself together in a special configuration that earns a high rate of return on investment.  (Though this introduces problems for my own formulation, as I assume a central preference ordering / utility function that an economy doesn't possess - still, deflated monetary valuations seem like a good proxy.)
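
One way to make "choosing one configuration out of an exponentially vast number" quantitative is to count bits of improbability: what fraction of the whole design space is at least as good as the configuration actually chosen, and what is -log2 of that fraction?  A toy sketch, in which the 20-bit design space and the quality function are illustrative stand-ins, not a real car design:

```python
import math

# Toy measure of optimization: -log2 of the fraction of configurations at
# least as good as the chosen one. The 20-bit design space and the quality
# function are illustrative stand-ins, not a real design problem.

N_BITS = 20                                   # 2**20 possible "designs"

def quality(config: int) -> int:
    return bin(config).count("1")             # stand-in for design quality

chosen = 2**N_BITS - 1                        # the optimizer's pick: all ones
as_good = sum(1 for c in range(2**N_BITS) if quality(c) >= quality(chosen))
fraction = as_good / 2**N_BITS
print(f"{-math.log2(fraction):.0f} bits of optimization")   # 20 bits
```

On this toy measure, a cleverer configuration is one that random search would almost never stumble on - which is the sense in which making a truck using less steel is an informational achievement.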

The second bridge is the role of economics in intelligence: if you jump up a meta-level, there are repeatable cognitive algorithms underlying the production of unique information.  These cognitive algorithms use some resources that are fungible, or at least material enough that you can only use the resource on one task, creating a problem of opportunity costs.  (A unit of time will be an example of this for almost any algorithm.)  Thus we have Omohundro's resource balance principle, which says that the inside of an efficiently organized mind should have a common currency in expected utilons.
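
As a toy illustration of that common currency, here is a sketch that allocates a fungible resource - seconds of thought - across tasks with diminishing returns, always spending the next slice wherever it buys the most expected utilons.  The task names, payoff curves, and greedy allocator are illustrative assumptions of mine, not Omohundro's actual formulation:

```python
import math

# Greedy allocation of a fungible resource (time) by marginal expected
# utilons. Tasks and payoff curves are illustrative assumptions.

tasks = {"design_car": 5.0, "check_market": 3.0, "clear_inbox": 1.0}

def marginal_utilons(scale: float, t: float, dt: float = 0.01) -> float:
    # Diminishing returns: utilons(t) = scale * sqrt(t).
    return scale * (math.sqrt(t + dt) - math.sqrt(t))

allocated = {name: 0.0 for name in tasks}
budget, step = 10.0, 0.01                     # 10 seconds of thought
while budget > 1e-9:
    # Spend the next slice wherever it buys the most expected utilons.
    best = max(tasks, key=lambda n: marginal_utilons(tasks[n], allocated[n]))
    allocated[best] += step
    budget -= step

print({n: round(t, 2) for n, t in allocated.items()})
# Time ends up roughly proportional to scale**2: ~7.1, ~2.6, ~0.3 seconds.
```

At the optimum, marginal expected utilons per second are equalized across tasks - which is the resource balance principle restated.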

Says Robin:

'Eliezer has just raised the issue of how to define "intelligence", a concept he clearly wants to apply to a very wide range of possible systems.  He wants a quantitative concept that is "not parochial to humans," applies to systems with very "different utility functions," and that summarizes the system's performance over a broad "not ... narrow problem domain."  My main response is to note that this may just not be possible.  I have no objection to looking, but it is not obvious that there is any such useful broadly-applicable "intelligence" concept.'

Well, one might run into some trouble assigning a total ordering to all intelligences, as opposed to a partial ordering.  But that intelligence as a concept is useful - especially the way that I've defined it - is something I must strongly defend.  Our current science has advanced further on some problems than on others.  Right now, the steps carried out to construct a car are better understood than the cognitive algorithms that invented the unique car design.  But both are, to some degree, regular and repeatable; we don't all have different brain architectures.

I generally inveigh against focusing on relatively minor between-human variations when discussing "intelligence".  It is controversial what role is played in the modern economy by such variations in whatever-IQ-tests-try-to-measure.  Anyone who denies that some such role exists would be a poor deluded fool indeed.  But, on the whole, we needn't expect "the role played by IQ variations" to be at all the same sort of question as "the role played by intelligence".

You will surely find no cars, if you take away the mysterious "intelligence" that produces, from out of a vast exponential space, the information that describes one particular configuration of steel etc. constituting a car design.  Without optimization to conjure certain informational patterns out of vast search spaces, the modern economy evaporates like a puff of smoke.

So you need some account of where the car design comes from.

Why should you try to give the same account of "intelligence" across different domains?  When someone designs a car, or an airplane, or a hedge-fund trading strategy, aren't these different designs?

Yes, they are different informational goods.

And wasn't it a different set of skills that produced them?  You can't just take a car designer and plop them down in a hedge fund.

True, but where did the different skills come from?

From going to different schools.

Where did the different schools come from?

They were built by different academic lineages, compounding knowledge upon knowledge within a line of specialization.

But where did so many different academic lineages come from?  And how is this trick of "compounding knowledge" repeated over and over?

Keep moving meta, and you'll find a regularity, something repeatable: you'll find humans, with common human genes that construct common human brain architectures.

No, not every discipline puts the same relative strain on the same brain areas.  But they are all using human parts, manufactured by mostly-common DNA.  Not all the adult brains are the same, but they learn into unique adulthood starting from a much more regular underlying set of learning algorithms.  We should expect less variance in infants than in adults.

And all the adaptations of the human brain were produced by the (relatively much structurally simpler) processes of natural selection.  Without that earlier and less efficient optimization process, there wouldn't be a human brain design, and hence no human brains.

Subtract the human brains executing repeatable cognitive algorithms, and you'll have no unique adulthoods produced by learning; and no grown humans to invent the cultural concept of science; and no chains of discoveries that produce scientific lineages; and no engineers who attend schools; and no unique innovative car designs; and thus, no cars.

The moral being that you can generalize across domains, if you keep tracing back the causal chain and keep going meta.

It may be harder to talk about "intelligence" as a common factor in the full causal account of the economy than to talk about the repeated operation that puts together many instantiations of the same car design - but there is a common factor, and the economy could hardly exist without it.

As for generalizing away from humans - well, what part of the notion of "efficient cross-domain optimization" ought to apply only to humans?

Comments (12):

Whether or not there is a good definition of intelligence depends on whether there is a sufficiently unitary concept there to be defined. That is crucial, because it also determines whether AI is seedable or not.

Think about a clever optimising compiler that runs a big search, looking for clever ways of coding the source that it is compiling. Perhaps it is entered in a competition based on compiling a variety of programs, running them, and measuring their performance. Now use it to compile itself. It runs faster, so it can search more deeply and produce cleverer, faster code. So use it to compile itself again!

One hopes that the speed-ups from successive self-compilations keep adding a little: 1, 1+r, 1+r+r², 1+r+r²+r³, ... If it works like that, then the limiting speed-up is 1/(1-r), with a singularity at r=1 when the software wakes up. So far, software disappoints these hopes. The tricks work once, add a tiny improvement the second time around, and make things worse on the third go, for complicated and impenetrable reasons. It appears very different from the example of a nuclear reactor, in which each round of neutron multiplication is like the previous round and runaway is a real possibility.
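
In code, the hoped-for compounding looks like this (a minimal sketch; the values of r are illustrative):

```python
# Cumulative speedup after many self-compilations, each adding a factor r
# on top of the last: the geometric series 1 + r + r**2 + ...

def cumulative_speedup(r: float, rounds: int = 1000) -> float:
    return sum(r**k for k in range(rounds + 1))

for r in (0.5, 0.9, 0.99):
    print(r, round(cumulative_speedup(r), 2), round(1 / (1 - r), 2))
# For r < 1 the series converges to 1/(1-r); as r approaches 1 it blows
# up - the "software wakes up" case. Real compilers behave nothing like
# this: r collapses after a round or two.
```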

The core issue is the precise sense in which intelligence is real. If it is real in the sense of there being a unifying, codifiable theme, then we can define it and write a seed AI. But maybe it is real only in the "I know it when I see it" sense: each increment is unique and never comes as "more of the same".

I tend to think of the shared human programming (beyond what we share with other mammals) as being a very fancy bootloader for the loading of culture. Other, older facets of the human architecture act on the loaded culture to try to keep it directed somewhat towards satisfying the basic needs. This process modifies the culture and expands it, based on the experiences the human has with it, and the result is spread to the next generation.

Being a bootloader, it tells you very little about what the system will actually be able to do. E.g. it would depend on whether it was booting DOS or Windows XP - modern culture or stone-age tribalism.

So a system with more resources than the human brain would be able to load more of the culture of humanity to start with. But it wouldn't necessarily be able to have more experiences than the sum of humanity, or to modify that culture in enough different ways to provide the different biases that humanity has (which would be needed for hard takeoff). So just throwing processing power at the problem may not be enough.

The mention of "regularity" in this post convinces me more and more that true intelligence, at its core, is nothing else but the ability to make accurate predictions. That is, intelligence in any particular area is the ability to make accurate predictions about the future in that area. If you can predict cash flows accurately, you're an intelligent analyst. If you can predict game scores accurately, you're an intelligent gambler. Combining such intelligence with goals yields the following: if you can accurately predict whether your actions will bring you closer to your goal or not, you're intelligent. Thus, in a sense, there's nothing uniquely human: it's just a matter of how we developed our goals [through evolution], what tools we have to try to fulfill them [our brain, as evolved], and how good we are at that [how intelligent we are, which varies].
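
A minimal sketch of that definition (the world, goal, and error rate are all illustrative): an agent chooses whichever action its imperfect model predicts will land closest to the goal, and succeeds exactly insofar as its predictions are accurate.

```python
import random

# Intelligence-as-prediction, as a toy: the agent acts on its model's
# predictions, and a lower error rate ("more intelligence") means it
# tracks the goal better. World, goal, and noise are illustrative.

GOAL = 10
ACTIONS = (-1, 0, 1)

def predict(state: int, action: int, error_rate: float) -> int:
    """Imperfect model of 'state + action'; lower error_rate = smarter."""
    slip = random.choice(ACTIONS) if random.random() < error_rate else 0
    return state + action + slip

state = 0
for _ in range(20):
    action = min(ACTIONS, key=lambda a: abs(GOAL - predict(state, a, 0.3)))
    state += action                # the real world applies the action
print(state)                       # nearer GOAL when predictions are good
```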

There are lots of interesting connections between economics and intelligence, which have been articulated by several people such as Eric Baum ("Manifesto for an Evolutionary Economics of Intelligence") and David Wolpert ("Collective Intelligence").

The central question is: how can billions of neurons, each of which is very limited in terms of computational capacity and information supply, cooperate effectively to produce higher-level intelligence? There is a clear analogy to the human economy, where many individual agents with limited skills (do any OB readers know how to manufacture a pencil?) and limited information cooperate to produce incredibly complex and efficient economic systems.

The view expressed by the above authors is that the essential requirement is to establish a framework for interactions between agents, which will guarantee that a good global outcome will be achieved by agents maximizing their local utility functions (self-interest). Baum suggests that the nature of this framework should be similar to the rules of capitalism: you need to protect "property", ensure conservation of wealth (no agents can print money), etc.
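
A minimal sketch of those rules (agents, bids, and rewards are illustrative stand-ins, not Baum's actual Hayek system): each round, agents bid from their own wealth for the right to act, the winning bid is paid to the previous actor so that no agent prints money, and new value enters only as external reward.

```python
import random

# Toy agent economy with Baum-style rules: property (agents bid only
# their own wealth), conservation (bids change hands, never appear from
# nowhere), and external reward as the only source of new value.

class Agent:
    def __init__(self, name: str):
        self.name, self.wealth = name, 10.0

def reward(agent: Agent) -> float:
    return 1.0 if random.random() < 0.3 else 0.0   # stand-in environment

agents = [Agent(f"a{i}") for i in range(5)]
prev = None
for _ in range(1000):
    bids = {a: random.uniform(0, 0.1) * a.wealth for a in agents}
    winner = max(bids, key=lambda a: bids[a])
    winner.wealth -= bids[winner]       # paid from the winner's own funds
    if prev is not None:
        prev.wealth += bids[winner]     # conservation: the bid changes hands
    winner.wealth += reward(winner)     # external value enters here only
    prev = winner

print({a.name: round(a.wealth, 1) for a in agents})
```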

Capitalist economists do love their competition. Do they really see this in the human brain? OK, there's neural Darwinism, but that's hardly the basis of the day-to-day function of the brain. Rather, the brain works cooperatively - more like an ant colony than a capitalist economy. The brain cells are all clones of each other. Why would they compete? They wouldn't - and they don't.

So you need some account of where the car design comes from. [...] you'll find humans, with common human genes that construct common human brain architectures.

Yes, but what does this do to constrain your expectations about economics? If your intelligence idea is applicable to economics, it must have some explanatory power that standard economic approaches lack (failing that, it should at least reformulate standard economic insights in a simpler setting).

A general theory of intelligence designed for constructing AIs does not need to be universally applicable. It's not a weakness if economics isn't one of its applications (after all, economics manages to get very correct results by assuming a simplified homo economicus that does not exist, so correctly modelling economics is not a good test of the underlying assumptions), so there's no need to insist that it is.

A general theory of intelligence designed for constructing AIs does not need to be universally applicable.

I think the idea is that once that AI is running, it would be nice to have an objective measure of just how powerful it is, over and above how efficiently it can build a car.

We humans seem to display a significant cognitive bias while thinking about intelligence. ;)

Does intelligence admit a partial order? I have often thought about this question, and I find that the concept of intelligence and its partial order are highly suspect - probably the output of cognitive biases.

On the ocean floor, is the human more intelligent, or the octopus? Is HIV more intelligent than the common cold virus, and are we more intelligent than either of them? We are? From our perspective, or the viruses'? Or from some universal perspective? Is the neuron in your optical lobe more intelligent, or the one in your spinal cord?

Intelligence is very likely a wrong concept - popular, but likely wrong. I mean wrong in the sense that it is likely a purely cognitive myth. Humans consider whatever they perceive as outperforming them over medium to long time periods to be more intelligent. And they tend to apply the same concept extrapolatively backwards, too: a bee or a beaver would have to think of us as better. The fact that the concept breaks down when you try to apply it to things and activities that are not human-like is ample evidence.

Tim: "Capitalist economists do love their competition" To this capitalist and (amateur) economist the striking thing about the economy is not the competition, but the strength and depth of co-operation that pervades it aka 'division of labour'. The market is an optimiser (like evolution), one which increases coordination, giving resources to people and organisations which use them well, and taking resources from those who use them badly. To look for 'intelligence' in the economy is not quite as foolish as looking for intelligence in evolution, but it's still an error.