Epistemic Status

Exploratory and unconfident, but I believe the topic is important.

 

Acknowledgements

I'm grateful to "JustisMills", "sigilyph", "tomcatfish" and others for their valuable feedback on drafts of this post.

 


 

Introduction

By compact, I mostly mean "non-composite". General intelligence would be compact if there were universal/general optimisers for real world problem sets that weren't ensembles/compositions of many distinct narrow optimisers.

AIXI and its approximations are in this sense not "compact" (even if their Kolmogorov complexity might appear to be low).

By "general intelligence", I'm using Yudkowsky's conception of (general) intelligence as "efficient cross-domain optimisation".

Thus, when I say, "a general intelligence", I'm thinking of an optimisation process.

 

Note

I'm currently torn between the terms "compact" and "simple" to capture the intuition of "non-composite". Let me know which term you think is more legible/accessible (or if you have some other term that you think better captures "non-composite").

("Simple" has the inconvenience of being contrasted with "complex" in the Kolmogorov complexity sense.)

I may use "problem set" and "domain" interchangeably; they refer to the same thing.

 


 

Why Does This Matter?

Why should you care whether general intelligence is compact or not? How is it (supposedly) relevant to the development of artificial intelligence?

As best as I can tell — I'll briefly explain why later in this essay — the compactness (or not) of general intelligence may largely determine:

  • Whether algorithmic/architectural innovation can lead to a fast takeoff
    • This bears on the feasibility of fast takeoff via:
      • Designing successor agents
      • Recursive self-improvement
      • Other positive feedback loops in algorithmic/architectural innovation
        • E.g., AIs contributing to AI research
    • More generally, it constrains takeoff by bounding marginal returns to algorithmic/architectural innovation
    • Takeoff via scale will also be constrained as a second order effect
      • Scaling the size of cognitive engines
        • Examples of "scaling the size of cognitive engines":
          • Brains with more synaptic connections or more neurons
          • ML models with more parameters/hyperparameters
      • Scaling the available computational resources invested in AI systems:
        • Training compute
        • Inference compute
        • Training data
        • Inference data
        • Available memory
        • Etc.
  • What is possible in the limit of arbitrarily high intelligence
    • How far humans are from said limits
  • What "strongly superhuman intelligence" looks like
    • What cognitive capabilities would a strongly superhuman AI possess?
  • Whether "strongly superhuman intelligence" is economically viable
    • How do the relevant marginal returns behave?

In summary, it's crucial to determining the dynamics of AI takeoff.

 

In the remainder of this post, I'd like to present two distinct models of general intelligence, consider the implications of the two models on AI development/outcomes, and later speculate on which model reality most closely resembles.

 


 

Two Models of General Intelligence

This is an oversimplification, but to help gesture at what I'm talking about, I'd like to consider two distinct ways in which general intelligence might plausibly manifest. Our reality may not exactly match either of these models (it probably will not), but I expect it to look a lot more like one model than the other.

(In general, I expect the two examples I'll consider to serve as foci around which models of general intelligence might cluster.

Alternatively, you might consider general intelligence to manifest on a linear spectrum, with the two models I describe below serving as focal points at the extremes of that spectrum.

I believe that a distribution of ways in which general intelligence might manifest would be bimodal around the foci I'll consider.)

 

Compact General Intelligence Hypothesis (CGIH)

There exists a class of non-compositional optimisation algorithms that are universal optimisers for the domains that manifest in the real world (these algorithms need not be universal for arbitrary domains that are irrelevant for real world problems).

(To be clear, when I claim they are universal optimisers, there's an implicit assumption that they are efficient at optimising. Random search would get the right answer eventually for every search problem with a finite search space, but it's a lot less efficient than, say, binary search for ordered search spaces).


Alternatively, the generally intelligent algorithms optimise better for domains the more likely they are to manifest in reality/the more useful they are for influencing reality.

Alternatively, they are universal for many/most important problems.

Alternatively, they are universal for common problems.

(There are many ways of framing what "compact general intelligence" looks like; I hope that the various formulations above are gesturing at the notion of "universality" I have in mind).


General intelligence is implemented by algorithms in the aforementioned class. Architectural/algorithmic innovation of generally intelligent agents looks like optimisation over this class (finding more efficient/more performant/simpler/"better" [according to the criterion of interest] algorithms [or implementations thereof] within this class).

 

Ensemble General Intelligence Hypothesis (EGIH)

Non-compositional optimisation algorithms are either incredibly inefficient or not universal in the sense that is important to us. Alternatively, no such algorithms exist.

Instead, efficient cross domain optimisation functions by gluing together many narrow optimisers for the problem sets/domains. That is, general optimisers are compositions of narrow optimisers.

(This is not necessarily a one-to-one mapping; some narrow optimisers may apply to more than one problem set, and some problem sets may be tackled by multiple narrow optimisers).

A general optimiser could also dynamically generate narrow optimisers on the fly for the problem sets it's presented with.

A general intelligence might be described as an algorithm for selecting (a) narrow optimiser(s) to apply to a given problem set (given x examples from said set).


General intelligence is implemented by algorithms that orchestrate, generate, and synthesise these narrow optimisers. Architectural/algorithmic innovation of generally intelligent agents looks like meta optimisation over the class of narrow optimisers:

  • Better procedures for selecting/generating narrow optimisers given x examples of a problem set
  • Better procedures for synthesising results of narrow optimisers for a given problem
  • Better procedures for coordination among narrow optimisers
  • Etc.

 

Note

When I say "select" above, I'm imagining that there exists a (potentially infinite) set of all possible narrow optimisers a general intelligence might generate/select from, and there exists a function mapping problem sets (given  examples of said set) to tuples/subsets of narrow optimisers (perhaps with rules for how to combine/synthesise them).

However, this is just a sketch of how one might formalise a model of ensemble general intelligence. I do not intend to imply that any such representation of all possible narrow optimisers is stored internally in the agent, nor that the agent implements (something analogous to) a lookup table.

I use "selection" and "generation" somewhat interchangeably. In practice I imagine that ensemble general intelligence will function via generation, but the mechanics of selection are easier to reason about/formally specify.
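To make the sketch above slightly more concrete, here is a minimal schematic of the selection/generation framing in code. This is purely illustrative: the names `select_or_generate`, `combine`, and `NarrowOptimiser` are hypothetical placeholders, not claims about how such a system would actually be implemented.

```python
from typing import Callable, List, Sequence, Tuple

# A narrow optimiser maps a problem instance to a proposed solution.
NarrowOptimiser = Callable[[object], object]


def select_or_generate(examples: Sequence[object]) -> Tuple[List[NarrowOptimiser], Callable]:
    """Hypothetical mapping from x examples of a problem set to (i) a tuple/subset of
    narrow optimisers and (ii) a rule for combining/synthesising their outputs."""
    raise NotImplementedError  # the hard part; deliberately left unspecified here


def ensemble_general_intelligence(examples: Sequence[object], problem: object) -> object:
    """EGIH sketch: orchestrate narrow optimisers rather than run one universal algorithm."""
    optimisers, combine = select_or_generate(examples)
    candidate_solutions = [optimise(problem) for optimise in optimisers]
    return combine(candidate_solutions)
```

The point of the schematic is only that architectural/algorithmic progress under EGIH lives in `select_or_generate` and `combine` (the meta level), not in any single universal object-level optimiser.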

 


 

Implications of the Models

Which model of general intelligence our reality most closely resembles is important for reasoning about what is possible for advanced cognitive capabilities. If you want to reason about what "strongly superhuman intelligence" entails, you must make assumptions about what model of general intelligence you are dealing with.

Let's consider what the world looks like in either of the two models as applied to e.g., prediction:

  1. CGIH: there are compact universal algorithms for predicting stimuli in the real world.
    • Becoming better at prediction in one domain reliably transfers across several/many/all domains.
      • This could also be reframed as "there is only one/a few domain(s) under consideration when improving predictive accuracy"
  2. EGIH: there are only narrow algorithms good at predicting stimuli in distinct domains.
    • Becoming a good predictor in one domain doesn't necessarily transfer to other domains.

(Considerations like the above could be applied to the other cognitive abilities and aggregations of them).

 

Results

It should be immediately apparent that the curve of real-world capability returns to increased cognitive capability is much steeper in CGIH worlds than in EGIH worlds.

Marginal returns to cognitive investment would also exhibit a much steeper curve under CGIH than under EGIH. As a result, takeoff under EGIH would probably be considerably slower than takeoff under CGIH.

(The above is perhaps the main finding of this post — the reason I began writing it in the first place).


Details of the results depend on posts I've not yet published, so the rest of this section will be light on demonstration, and just aim to explain the high-level picture (reproducing my investigations on the relevant matters is beyond the scope of this post).

 

EGIH

Across many domains, the marginal returns to improving predictive accuracy diminish at an exponential rate (see the "Caveats and Clarifications" subsection below for where this does and doesn't apply).


Without any ability to improve predictive accuracy in many domains at once, marginal returns to cognitive capability [as leveraged via more accurate predictions] must also be sharply diminishing (the sum to infinity of an exponentially decaying sequence converges, so even if predictive accuracy as a function of cognitive capabilities grew superlinearly, the exponential decay would still result in sharply diminishing returns). 
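To spell out the parenthetical with a stylised toy assumption (mine, not a derived result): suppose the k-th unit of predictive accuracy in a domain yields real-world returns proportional to r^k for some 0 < r < 1, and that cognitive capability c buys accuracy superlinearly, say k(c) = ⌈c²⌉ units. Total returns are still bounded:

$$R(c) \;=\; \sum_{k=1}^{k(c)} a\, r^{k} \;\le\; \sum_{k=1}^{\infty} a\, r^{k} \;=\; \frac{a\, r}{1-r} \;<\; \infty .$$

However fast k(c) grows, R(c) stays below the same finite ceiling, so marginal returns to capability (leveraged this way, within a single domain) must eventually fall off sharply.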

Consider an AI with the ability to make accurate predictions in every domain (or maximally accurate ones given available information and underlying entropy). Let's call this ability "superprediction".

EGIH suggests that "superprediction" is:

  1. Infeasible
    • There is no universal way to efficiently transfer improvements in one domain to arbitrary other domains or to novel domains.
    • The only way to improve predictive accuracy in a target domain is to learn that domain.
      • Learning that domain wouldn't make you reliably better at other domains.
    • If one seeks to become exceptional at prediction in n domains, they would have to learn all n domains.
      • Contrast the CGIH world in which they'd have to learn only a single domain and could extrapolate somewhat to arbitrary others from there.
  2. Economically unrewarding
    • Marginal returns to improved predictive accuracy diminish at an exponential rate.
      • Again, see the "Caveats and Clarifications" subsection below for where this does and doesn't apply
    • There is no/limited possibility to transfer improvements in prediction across domains.
      • Marginal returns to investment in cognitive capabilities leveraged via improved predictions are sharply diminishing.
    • A rational agent may be disincentivised from investing economic resources in improving cognitive capabilities, as there are better returns to be found elsewhere.
      • E.g. marginal returns from acquiring more energy do not appear to diminish so harshly

 

In such a world, not only would we expect takeoff to be (excruciatingly) slow, but strongly superhuman intelligence might simply not manifest anytime soon (if ever). Not because it is impossible, but because the marginal returns to real world capabilities from increased cognitive capabilities diminish at too harsh a rate to justify investment in cognitive capabilities.

 

Broader Findings

A result that is analogous to the above should hold for other narrow cognitive skills and the aggregate of an agent's cognitive capabilities (albeit the behaviour of marginal returns in a narrow domain may be markedly different).

Improved performance in a single domain has limited transferability to other domains. To become competent in a new domain, in most cases the agent will have to learn that new domain.

I expect that: "the marginal returns to cognitive skill X in general are upper bounded by the marginal returns to X in a specific domain", i.e. there is limited opportunity for returns to cognitive capabilities to compound on each other.

This bounds the marginal returns to cognitive investment (chiefly via algorithmic/architectural innovation but perhaps also via scale).

 

Caveats and Clarifications

The exponential decay to marginal returns on predictive accuracy holds for domains where predictive accuracy is leveraged by actions analogous to betting on the odds implied by credences (e.g. by selling insurance policies).

I investigated such an operationalisation because I was looking for a way to turn subjective credence in a proposition [assuming the agent is well calibrated] into money [money is an excellent measure of real-world capabilities for reasons that I shall not cover here]. This was an attempt to measure real-world capability returns on increased predictive accuracy.
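As a toy illustration of this operationalisation (not a reproduction of the analysis behind the exponential-decay claim; every distribution and number below is invented), here is a sketch in which an agent with noisier or sharper credences chooses which unit-loss insurance policies to sell at market premiums:

```python
import random

def mean_profit(agent_noise: float, market_noise: float = 0.10,
                n_policies: int = 200_000, seed: int = 0) -> float:
    """Average profit per offered policy for an agent that sells a unit-loss policy
    whenever its credence that a claim occurs is below the market premium."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_policies):
        p = rng.random()                                                # true claim probability
        premium = min(1.0, max(0.0, p + rng.gauss(0, market_noise)))    # market's (noisy) price
        credence = min(1.0, max(0.0, p + rng.gauss(0, agent_noise)))    # agent's (noisy) estimate
        if credence < premium:        # agent thinks the premium overcompensates the risk
            total += premium - p      # expected profit of selling this policy
    return total / n_policies

# Shrinking credence error -> better selection of which policies to sell -> higher average profit.
for noise in (0.30, 0.20, 0.10, 0.05, 0.01):
    print(f"credence noise {noise:4.2f}: mean profit {mean_profit(noise):.4f}")
```

The sketch only illustrates how calibrated credences get converted into money in this kind of setup; the shape of the returns curve, and where it decays exponentially, is exactly what the fuller analysis is about.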

This is not a fully general result. There are some domains in which returns to predictive accuracy may behave more gracefully.

In multiagent scenarios with heavy-tailed outcomes (e.g. winner-takes-most or winner-takes-all dynamics), increased predictive accuracy could have sharply rising marginal returns across an interval of interest.

(In general, in multiagent scenarios, returns on a particular cognitive capability cannot be assessed without knowing the distribution of that capability among the other participant agents. Depending on the particular distribution and the nature of the "game" [in the game theoretic sense] under consideration, marginal returns on cognitive capability may exhibit various behaviours across different intervals).

For this and other reasons, the above result is not generally applicable.

 

Furthermore, the entire analysis is an oversimplification. Predictions are just one aspect of cognitive capabilities, and returns to other aspects do not necessarily diminish at an exponential rate (or necessarily diminish at all).

 

CGIH

On the other hand, if improvements in predictive accuracy reliably transfer across domains, then compounding returns to real-world capability from increased cognitive capability become probable.

An ability like "superprediction" would become not only feasible, but economically rewarding. It would be possible to improve predictive accuracy in n domains through investment/innovation in only one (or just a few) domain(s).

And this would transfer to entirely novel domains.

In general, superlative forms of all/most other cognitive skills would be feasible, as there's a "compact" algorithm governing performance on those skills across all/most domains, so one need only improve said universal algorithm to improve performance everywhere.

 

Conclusions of "Implications of the Models"

Which model of general intelligence our reality most closely resembles may mostly determine the returns to investment in cognitive capability.

The function governing marginal returns (see the "Clarifications" subsection below) under CGIH dominates the function governing marginal returns under EGIH.

(There are ways to rigorously state this via asymptotic analysis, but at this stage, such formalisms would be premature. The gist of what I'm pointing out is that the former function grows "much faster" than the latter function. E.g.:

As x → ∞, the gap between the two functions grows ever wider [also tending to ∞].

The asymptotic differences in the curves for the relevant marginal returns would also apply to models of general intelligence that fall somewhere on the spectrum between the two models. Models of general intelligence that are "closer" to CGIH would generally dominate models that are "closer" to EGIH.)
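As one loose sketch of the kind of statement such an analysis might eventually make (the notation is purely illustrative; R_CGIH and R_EGIH denote the respective marginal-returns curves as functions of cumulative cognitive investment x):

$$\lim_{x \to \infty} \big( R_{\mathrm{CGIH}}(x) - R_{\mathrm{EGIH}}(x) \big) = \infty, \qquad \text{or, more strongly,} \qquad \lim_{x \to \infty} \frac{R_{\mathrm{EGIH}}(x)}{R_{\mathrm{CGIH}}(x)} = 0 .$$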


There are several implications of the above:

  • Achievable optimum of cognitive capabilities in a given time frame under CGIH would be considerably higher than under EGIH
    • The gap grows the longer the time frame is
      • (Not necessarily proportional to the length of the time frame due to the relationships between the relevant functions)
  • Economic investment in cognitive amplification is considerably more attractive under CGIH than under EGIH
    • Sharp differences seem likely
      • It may be the case that cognitive amplification to strongly superhuman level is economically attractive under CGIH but not under EGIH.
        • I.e. strongly superhuman intelligences may simply not manifest in EGIH worlds because it's not an attractive use of economic resources.
      • This will apply to some level of cognitive ability.
        • There is some level of cognitive capabilities that will never be realised in EGIH worlds because it's economically prohibitive.
    • The difference in the economic attractiveness of cognitive amplification under the two models would further exacerbate the difference in takeoff dynamics by bounding the economic resources invested in cognitive amplification.
      • Less human capital allocated to AI research and development.
      • Less money spent purchasing computational resources to scale up AI models.
  • Takeoff under CGIH would be considerably faster than takeoff under EGIH
    • It's hard to quantify what "considerably faster" means at this stage (we lack formal specifications of the functions governing the relevant marginal returns), but I hope the idea of one function growing much faster than the other helps gesture at it.


As a result, whether we can have a "fast" takeoff at all — whether this is possible in principle — depends chiefly on what model of general intelligence our reality manifests.

 

Clarifications

The "marginal returns" mentioned earlier include:

  • Marginal returns to cognitive capabilities from cognitive investment
    • Via algorithmic innovation
    • Via architectural innovation
    • Via larger cognitive engines
      • ML models with more parameters/hyperparameters
      • Brains with more synaptic connections or neurons
  • Marginal returns to real world capabilities from cognitive capabilities
    • How much more capable in the real world does becoming n times smarter make you?
    • How much more capability in the real world does a linear increase in intelligence translate to?
  • Marginal (economic) returns from investment in cognitive capabilities
    • If you invest e.g. $1,000 worth of resources in making a system smarter, how much more would you get back from the system within a given horizon?
    • Alternatively, what is the difference in the net present value of an AI system as-is versus after you invest e.g. $1,000 extra in amplifying its cognitive capabilities?

These different marginal returns will have different functions describing them, but CGIH functions should grow much faster than their EGIH counterparts.
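For the third kind of return listed above (marginal economic returns), here is a minimal sketch of the comparison being described. All cashflows and the discount rate are invented purely for illustration:

```python
def npv(cashflows, discount_rate):
    """Net present value of a stream of yearly cashflows (year 0 first)."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical cashflows (in $): the system as-is vs. after an extra $1,000
# spent now on amplifying its cognitive capabilities.
baseline  = [0,     400, 400, 400]
amplified = [-1000, 800, 900, 1000]
marginal_economic_return = npv(amplified, 0.05) - npv(baseline, 0.05)
print(f"marginal (economic) return on the extra $1,000: {marginal_economic_return:.2f}")
```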

 


 

Interlude on Epistemic Status

Which model of general intelligence our reality most closely resembles is what I'll ponder for the remainder of this post. Though be forewarned, said ponderings are the main reason I'm unconfident in this post. 

I'm very unsure of the details of general intelligence in our reality, and of the considerations I highlighted to speculate on it.

 


 

General Intelligence in Humans

Our brain is an ensemble of some inherited and some dynamically generated (via neuroplasticity) narrow optimisers.

 

Inherited Narrow Optimisers

A non-exhaustive list of specialised neural machinery we inherit:

  • Visual cortex
    • Dedicated circuits for:
      • Face recognition
      • Object recognition
      • Place recognition
      • Movement recognition
  • Motor cortex
    • Movement
  • Parietal lobe
    • Language comprehension
  • Wernicke's area
    • Speech comprehension
  • Broca's area
    • Speech production
  • Auditory cortex
  • Somatosensory cortex
  • Olfactory cortex

 

Thoughts on Narrow Perception

Perceptual abilities are quite old in the evolutionary history of central nervous systems. Compared to evolutionarily newer skills like symbolic and abstract reasoning, perceptual machinery has been optimised and refined a lot more. That is, I would expect our perceptual machinery to be a lot closer to optimal (given the relevant constraints) than our machinery for higher reasoning.

As such, I think the nature of perception in mammals may be somewhat informative about implementations of perception in our universe.

 

The specialisations of visual systems for image recognition strike me as particularly compelling evidence against general optimisers in humans. We don't have a general optimiser that can do arbitrary image recognition. There's a particular region of the visual cortex involved in face perception, and if that region is damaged (in adults: children are much more adaptable), people are no longer able to distinguish faces. They generally retain their ability to discriminate between objects, but not faces. The name for this defect (in both its congenital and acquired forms) is "prosopagnosia".

The machinery for general object recognition cannot be applied to successfully discriminate between faces. 

There is an (exceedingly rare) mirror-image deficit that impairs the ability to discriminate or recognise objects but leaves facial recognition ability intact.

 

But the narrowness is even more specific than just specialised circuits for face recognition, object recognition, image recognition, etc. We are specialised to recognise certain kinds of faces by a phenomenon called "perceptual narrowing".

6-month-old human babies are roughly as good at distinguishing monkey faces as human faces. By the time they're 9 months old, they are more selective towards human faces (they can discriminate human faces better than monkey ones).

(Sourced from this lecture).

 

It's not just human faces vs. monkeys either. People who grow up only around faces from a particular race have their face recognition machinery narrow to that race. They lose their ability to adequately discriminate between faces of other races. 

From the Wikipedia article:

Most of the research done to date in the area of perceptual narrowing involves facial processing studies conducted with infants. Using a preferential looking procedure in cross racial studies, Caucasian infants were tested on their ability to distinguish two faces from four different racial groups. Facial prompts were presented from their own racial group, as well as, African, Asian, and Middle Eastern. At three months of age, infants were able to show recognition for familiar faces from all racial groups, but by six months, a pattern was beginning to emerge where the infants could only recognize faces from the Caucasian or Chinese groups—groups they had more familiarity with. At nine months, recognition took place only in the own-race group. These cross race studies provide strong evidence that children do start out with cross racial recognition abilities but as they age, they quickly begin to organize the data and select the stimuli that is most familiar to them, typically own-race faces


This result is kind of striking — it's not a phenomenon I would have expected before learning about it. If our machinery for facial recognition — already a narrow task — were fully general with respect to faces, we wouldn't expect to see narrowing to a particular race.

In general, if our machinery for narrow perception were fully general with respect to its domain, we wouldn't see any sort of perceptual narrowing. The phenomenon of perceptual narrowing therefore seems to me like strong evidence against fully general algorithms for perception.

 

Caveats and Clarifications

The visual cortex of people who were born blind is repurposed to perform other perceptual tasks such as reading braille or hearing words. It is often said that the only reason our visual cortex handles vision is that it's connected to the optic nerve; if the optic nerve were connected elsewhere, that region would become the visual cortex instead (experiments in infant monkeys have apparently validated this).

This is suggestive of flexibility in the brain and may be evidence for universal learning capabilities (the specialised "organs" of our neocortex can learn to perform functions different from the ones they were specialised to over the course of our evolution).

 

Dynamically Generated Narrow Optimisers

Frequent practitioners at a task may develop dedicated neural circuits to support:

  • Playing chess
  • Playing Go
  • Playing Scrabble
  • Playing a piano
  • Strumming a guitar
  • Playing a saxophone
  • Typing
  • Writing
  • Etc.

There's an entire phenomenon whereby the brain rewires itself to adapt to novel tasks. We are much better at this in childhood but retain the ability well into adulthood. It appears to be how we're so good at learning new tasks and operating in novel domains.

 

General Machinery

I'm guessing that we probably do have some general meta-machinery as a higher layer (for stuff like abstraction, planning, learning new tasks/rewiring our neural circuits, etc.; other cognitive skills that are useful in metacognition).

But it seems like we fundamentally learn/become good at new tasks by developing specialised neural circuits to perform those tasks, not leveraging a preexisting general optimiser.

(This seems to me an especially significant distinction).


We already self-modify our cognitive engine (just rarely in a conscious manner), and our ability to do general intelligence at all is strongly dependent on our self-modification ability.

Our general optimiser is just a system/procedure for dynamically generating narrow optimisers to fit individual tasks.

 

Conclusions of "General Intelligence in Humans"

It seems that general intelligence in humans more closely resembles EGIH.

 


 

General Intelligence and No Free Lunch Theorems

One reason to be strongly sceptical of CGIH is the family of no free lunch (NFL) theorems in search and optimisation:

In computational complexity and optimization the no free lunch theorem is a result that states that for certain types of mathematical problems, the computational cost of finding a solution, averaged over all problems in the class, is the same for any solution method.

...

It does not apply to the case where the search space has underlying structure (e.g., is a differentiable function) that can be exploited more efficiently (e.g., Newton's method in optimization) than random search or even has closed-form solutions (e.g., the extrema of a quadratic polynomial) that can be determined without search at all. For such probabilistic assumptions, the outputs of all procedures solving a particular type of problem are statistically identical. 

...

In formal terms, there is no free lunch when the probability distribution on problem instances is such that all problem solvers have identically distributed results. In the case of search, a problem instance is an objective function, and a result is a sequence of values obtained in evaluation of candidate solutions in the domain of the function. For typical interpretations of results, search is an optimization process. There is no free lunch in search if and only if the distribution on objective functions is invariant under permutation of the space of candidate solutions.[5][6][7] This condition does not hold precisely in practice,[6] but an "(almost) no free lunch" theorem suggests that it holds approximately.[8]

If we are being loose, we might summarise the theorem as: "all optimisation algorithms perform roughly the same when averaged over all possible objective functions".

A common rebuttal to the NFL theorems as applied to compact algorithms for general intelligence is that the search spaces of reality/the problems we care about are not maximum entropy distributions; they have underlying structure that can be exploited. Yudkowsky makes this argument quite elegantly in his reply to Francois Chollet on "The Impossibility of the Intelligence Explosion".
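To make both halves of this concrete, here is a small, self-contained sketch (my own toy construction, not drawn from either source): an adaptive strategy that assumes unimodal structure (ternary search) is compared against blind random sampling, first on an objective with exploitable structure and then averaged over structureless random objectives, where the NFL symmetry predicts a tie.

```python
import random
import statistics

def ternary_search(f, lo, hi):
    """Maximise f over the integers in [lo, hi], assuming (rightly or wrongly) that f is
    unimodal. Returns (best value seen, number of distinct points evaluated)."""
    cache = {}
    def ev(x):
        if x not in cache:
            cache[x] = f(x)
        return cache[x]
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        if ev(m1) < ev(m2):
            lo = m1 + 1
        else:
            hi = m2 - 1
    for x in range(lo, hi + 1):   # evaluate the final few candidates
        ev(x)
    return max(cache.values()), len(cache)

def best_of_random(f, domain, budget, rng):
    """Best value found by evaluating `budget` uniformly random distinct points."""
    return max(f(x) for x in rng.sample(domain, budget))

rng = random.Random(0)
n = 243
domain = list(range(n))

# 1) An exploitable (unimodal) objective: structure-aware search wins decisively.
def structured(x):
    return -(x - 170) ** 2
best, cost = ternary_search(structured, 0, n - 1)
print(f"unimodal objective: ternary search best = {best} in {cost} evaluations")
print(f"unimodal objective: random search best  = {best_of_random(structured, domain, cost, rng)}")

# 2) Averaged over structureless (uniformly random) objectives, the two strategies tie.
tern, rand = [], []
for _ in range(2000):
    values = list(range(n))
    rng.shuffle(values)              # a uniformly random objective function
    f = values.__getitem__
    best, cost = ternary_search(f, 0, n - 1)
    tern.append(best)
    rand.append(best_of_random(f, domain, cost, rng))
print(f"random objectives: mean best (ternary) = {statistics.mean(tern):.1f}")
print(f"random objectives: mean best (random)  = {statistics.mean(rand):.1f}")
```

The first comparison illustrates the "exploitable structure" escape hatch; the second is the NFL equality in miniature (over uniformly random objectives, any non-repeating query strategy yields the same distribution of best-value-found for a given evaluation budget).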

 

Speculation on the Applicability of NFL Theorems in General

There are distinct levels of structure and regularity. For maximum entropy distributions, no single algorithm outperforms random chance when averaged across all objective functions on that distribution. For very structured distributions (e.g., distributions for which closed form solutions exist), a single (compact) algorithm may perform optimally for most/all objective functions on that distribution.

It seems to me that you can talk about how much exploitable structure/regularity there is in a distribution, i.e. a degree to which optimisation on that distribution is constrained by NFL theorems.

Given that a distribution has some exploitable structure, I'd expect that exploitability (insomuch as we can coherently define it) is positively correlated with the breadth of applicability of the most general optimisation algorithms (defined on that distribution).

Thus:

  • The more exploitable a distribution is, the more closely general intelligence for that distribution will resemble CGIH rather than EGIH.
  • The less exploitable a distribution is, the more closely general intelligence for that distribution will resemble EGIH rather than CGIH.

 

Speculation on the Applicability of NFL Theorems to Reality

The underlying structure/regularity of reality is often posited as the reason humans can function as efficient cross domain optimisers in the first place. However, while we do in fact function as efficient cross domain optimisers, we do not do so via compact universal algorithms.

It seems to me that the ensemble-like nature of general intelligence in humans suggests that reality is perhaps not exploitable enough for us to totally escape the No Free Lunch theorems.

The more the NFL theorems are a practical constraint, the more I'd expect general intelligence to look like an ensemble of narrow optimisers as opposed to a compact universal optimiser.


Insofar as we have an example of general intelligence in our reality, it's not a compact implementation. This doesn't make compact general intelligence in our reality impossible — even in worlds where CGIH is true, ensemble intelligences would still be possible — but it is evidence in favour of EGIH over CGIH. We'd see general intelligence manifest as ensembles more often in worlds where EGIH was true than in worlds where CGIH was.

(Consider that in worlds where CGIH was true, ensemble-like implementations of general intelligence would not be particularly efficient. So, insomuch as you include efficiency as a consideration in your conception of general intelligence, the central examples of general intelligences would be compact optimisers.)
 

I think the question of how you update to a particular hypothesis about general intelligence given the nature of general intelligence in humans depends a lot on your priors about hominid evolution (and the evolution of brains more generally), how powerful evolution is as an optimisation process, whether we're stuck in/near a local optimum, etc.

It's possible that an ensemble-like implementation of general intelligence evolved further back in our evolutionary history, and the algorithmic/architectural innovation along the hominid line was just improving the ensemble algorithms/architecture. Perhaps there was simply no way to transition from an ensemble architecture to a compact architecture. This doesn't seem implausible given the way evolution works and what's required for complex interdependent mutations to reach fixation in a population. It's not necessarily the case that evolution would have produced a compact architecture if one were attainable. Perhaps the human brain would have been one, had evolution simply branched down a different path.


It's not readily apparent to me that there's an obvious conclusion to reach from this data.

Admittedly, I'm somewhat sceptical that the form of general intelligence that hominid evolution manifested was just sheer happenstance. Ensemble-like general intelligence in humans is mostly making me update towards the EGIH world.

 


 

Overall Conclusions

It seems to me that there is no compact general optimiser in humans.

Perhaps, none exist in our reality.

 


 

Next Steps

This section is mostly intended as a note for future me. That said, anyone else who wants to further this inquiry is welcome to consult it.

Commentary on the items listed and/or feedback on items you think should be included will be greatly appreciated.

 

Research

Stuff I'd like to learn about to clarify my thinking on the compactness of general intelligence:

  • Transfer learning in humans (and animals)
    • How well do learned cognitive skills generalise across domains?
    • How tightly linked do the domains need to be to see robust generalisations?
    • When can humans/animals display zero/one/few shot learning?
  • Transfer learning in ML models
    • Same questions as for humans and animals
  • Mathematical optimisation and No Free Lunch Theorems
    • How well do my intuitions of exploitability and regularity match the extant literature?
    • What determines how exploitable a given distribution is?
    • What determines how learnable it is?
  • Drexler's Comprehensive AI Services
    • This is possibly a sketch of what the future trajectory of AI development looks like given EGIH like models.
  • Steven Byrnes' sequence on brain-like AGI safety
    • The alignment work isn't relevant for this agenda, but it may be a comprehensive compilation of LW's knowledge on human cognition.
  • The human brain and cognition
    • Theories of how the brain works
      • Predictive processing
      • Jeff Hawkins' Thousand Brains Theory
      • Others
    • Neuroplasticity
    • Memory
      • How does it work?
      • What role does it play in human cognition?
    • Synaptic pruning
      • What function does it play in learning/knowledge formation?
    • Learning (very broadly)
      • How does learning work in humans and animals?
      • Does the brain implement (something analogous to) a universal learning algorithm?
    • Development of the brain from infancy through childhood
      • Development of cognitive skills in feral children
      • Development of cognitive organs in people born with sensory impairment
        • What happens to the brain areas traditionally specialised for the defective sense?
      • Development of cognitive organs in people who acquire sensory impairment
        • What happens to the brain areas traditionally specialised for the defective sense?
    • Neural implementations of concrete cognitive skills
      • Skills
        • Abstraction
        • Symbolic reasoning
        • Planning
        • Pattern recognition
        • Intuition
        • Inference
        • Concept synthesis
        • Imagination/generation/creativity
        • Linguistics
        • Arithmetic
      • Questions
        • How is a skill implemented?
          • What areas/regions of the brain are responsible?
        • Are the neural circuits underlying a given skill specialised to particular domains or can they be leveraged for new domains?
        • Which skills are specialised to particular domains, and which are more general?
        • How general are the most general skills?

 

Further Work

Stuff I might like to do in sequels to this post or other work that furthers this inquiry:

  • Investigate marginal returns to cognitive capabilities under the two models more
    • Marginal returns on population
      • How does adding more brains and having them collaborate improve cognitive capabilities?
    • Marginal returns on computational resources
      • Are there differences in how amenable the computations underlying cognitive capabilities are to parallelisation?
  • Formalise the two models of general intelligence
    • Formalise "mixtures" of these models
      • Other ways of specifying models of general intelligence that lie somewhere on the spectrum between these two models
      • Models where some skills (e.g., learning) are universal, whereas others (e.g., prediction) are narrow
      • Specify a mixture that more accurately describes the human brain
    • Illustrate mixtures graphically/diagrammatically.
  • Specify the differences in cognitive capabilities between the two models
    • Via e.g., asymptotic analysis of various cognitive tasks given the two models
    • Generalise to mixtures of these models
      • Specify for the human brain mixture
  • Specify the differences in marginal returns between the two models
    • Via asymptotic analysis
    • Generalise to mixtures of these models
      • Specify for the human brain mixture
  • Formalise the notion of "exploitability" of an environment
    • How exploitable is reality?
Comments
gjm

There are three imaginable classes of intelligent agent, not two. (To be clear, I am not suggesting that OP is unaware of this; I'm taking issue with the framing.)

  1. ("Simple".) Applies some single process to every task that comes along, without any sort of internal adaptation being needed.
  2. ("Universally adaptable".) Needs special-purpose processes for particular classes of task, but can generate those processes on the fly.
  3. ("Ensemble specialized".) Has special-purpose processes for particular classes of task, but limited ability to do anything beyond the existing capabilities of those processes.

It seems clear that human intelligence is more #2/#3 than #1. But for many purposes, isn't the more important distinction between #1/#2 and #3?

For instance, OP says that for a "simple" intelligence we expect improvement in prediction to transfer across domains much better than for an "ensemble" intelligence, but I would expect at least some kinds of improvement to generalize well for a "universally adaptable" intelligence: anything that improves it by making it better at making those special-purpose processes (or by e.g. improving some substrate on which they all run).

"But that only applies to some kinds of improvement!" Yes, but the same goes even for "simple" intelligences. Even if there's some general process used for everything, many specific applications of that process will depend on specific knowledge, which will often not generalize. E.g., if you get good at playing chess, then whether or not you're doing it by growing custom hardware or implementing special search procedures part of what you're doing will be learning about specific configurations of pieces on a chessboard, and that won't generalize even to similar-ish domains like playing go.

I don't think I quite buy the argument that simplicity of the best optimizers ~= exploitability of the domain being optimized over. The fuzzy mental image accompanying this not-buying-it is a comparison between two optimization-landscape-families: (1) a consistent broad parabolic maximum with a large amount of varying noise on top of it, and (2) a near-featureless plain with just a bit of varying noise on it, with a hundred very tall sharp spikes. 1 is not very exploitable because of all the noise, but the best you can do will be something nice and simple that models the parabolic peak. 2 is extremely exploitable, but to exploit it well you need to figure out where the individual peaks are and deal with them separately. (This fuzzy mental image should not be taken too literally.)

Our world is simple but complicated; there are simple principles underlying it, but historical accident and (something like) spontaneous symmetry breaking mean that different bits of it can reflect those simple principles in different ways, and it may be that the best way to deal with that variety is to have neither a single optimization process, nor a fixed ensemble of them, but a general way of learning domain-specific optimizers.

For my Ensemble General Intelligence model, I was mostly imagining #2 instead of #3.

I said of my ensemble general intelligence model:

It could also dynamically generate narrow optimisers on the fly for the problem sets.

General intelligence might be described as an algorithm for picking (a) narrow optimiser(s) to apply to a given problem set (given x examples from said set).

I did not intend to imply that the set of narrow optimisers the general optimiser is selecting from is represented within the agent. I was thinking of a rough mathematical model for how you can describe it.

That there exists a (potentially infinite) set of all possible narrow optimisers a general intelligence might generate/select from, and there exists a function mapping problem sets (given x examples of said set) to narrow optimisers does not imply that any such representation is stored internally in the agent, nor that the agent implements a lookup table.

I equivocated between selection and generation. In practice I imagine generation, but the mathematics of selection are easier to reason about.

I imagine that trying to implement ensemble specialised is impractical in the real world because there are too many possible problem sets. I did not at all consider it a potential model of general intelligence.

I might add this clarification when next I'm on my laptop.

 

It seems to me that the qualm is not about #2 vs #3 as models for humans, but how easily transfer learning happens for the relevant models of general intelligence, and what progress among the class of general intelligence that manifests in our world looks like.

Currently, I think that it's possible to improve the meta optimisation processes for generating object level optimisation processes, but this doesn't imply that an improvement to a particular object level optimisation process will transfer across domains.

This is important because improving object level processes and improving meta level processes are different. And improving meta level processes mostly looks like learning a new domain quicker as opposed to improved accuracy in all extant domains. Predictive accuracy still doesn't transfer across domains the way it would for a simple optimiser.

I can probably make this distinction clearer, elaborate on it more in the OP.

I'll think on this issue more in the morning.

 

The section I'm least confident/knowledgeable about is the speculation around applicability of NFL theorems and exploitation of structure/regularity, so I'll avoid discussing it.

I simply do not think it's a discussion I can contribute meaningfully to.

Future me with better models of optimisation processes would be able to reason better around it.

If general intelligence was like #3 (Ensemble Intelligence), how would the ability to learn new tasks arise? Who would learn?

I suppose new skills could be hard won after many subjective years of effort, and then transferred via language. Come to think of it, this does resemble how human civilization works. It took hundreds of years for humans to learn how to do math, or engineering, but these skills can be learned in less than 4 years (ie at college).

gjm

What distinguishes #2 from #3 is that in #3 you can't learn (well) to do new tasks that are too far outside the domains covered by your existing modules.

It's a spectrum, rather than binary. Humans are clearly at least somewhat #2-not-#3, and also I think clearly at least somewhat #3-not-#2. The more #2-not-#3 we are, the more we really qualify as general intelligences.

And yes, human learning can be pretty slow. (Slower than you give it credit for, maybe. To learn to do mathematical research or engineering good enough to make bridges etc. that are reasonably-priced, look OK, and reliably don't fall down, takes a bunch of what you learn in elementary and high school, plus those 4 years in college, plus further postgraduate work.)


CGIH: there are compact universal algorithms for predicting stimuli in the real world.

  • Becoming better at prediction in one domain reliably transfers across several/many/all domains.
    • This could also be reframed as "there is only one/a few domain(s) under consideration when improving predictive accuracy"

I believe this possibility tends to be contradicted by empiricism, e.g. people practicing one thing do not tend to become better at unrelated things, and AIs tend to be fairly specialized in practice.

The exponential decay to marginal returns on predictive accuracy holds for domains where predictive accuracy is leveraged by actions analogous to betting on the odds implied by credences (e.g. by selling insurance policies).

I think when betting, the value you gain is often based on your difference in ability compared to the person you're betting with, which in practice would get you a sigmoidal curve, with the inflection point being reached when you're ~as intelligent as the people you are betting with. So there would be exponential decay to being smarter than humans, but exponential return to approaching human smartness.

EGIH suggests that "superprediction" is:

Infeasible: If one seeks to become exceptional at prediction in n domains, they would have to learn all n domains.

Why is it infeasible to just learn all n domains? Especially for an AI that can presumably be run in parallel.

I helped edit this a (very minor) bit[1], so you'd probably expect this, but

I do really like this, and hope we can look into it a bit more. It seems really important to AI timelines.


  1. A few "word choice" things ↩︎