Resolving the Fermi Paradox: New Directions

Our sun appears to be a typical star: unremarkable in age, composition, galactic orbit, or even in its possession of many planets.  Billions of other stars in the Milky Way have similar general parameters and orbits that place them in the galactic habitable zone.  Extrapolations from recent exoplanet surveys reveal that most stars have planets, removing yet another potentially unique dimension for a great filter in the past.

Current estimates suggest there are on the order of 20 billion Earth-like planets in the galaxy.

A paradox indicates a flaw in our reasoning or our knowledge which, upon resolution, may cause some large update in our beliefs.

Ideally we could resolve this through massive multiscale Monte Carlo computer simulations to approximate Solomonoff induction on our current observational data.  If we survive and create superintelligence, we will probably do just that.

In the meantime, we are limited to constrained simulations, Fermi estimates, and other shortcuts that approximate the ideal Bayesian inference.
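
As a toy illustration of what such a Fermi estimate looks like in practice, here is a minimal Monte Carlo sketch of a Drake-style calculation.  All parameter ranges below are illustrative assumptions, not claims from this post:

```python
import random

def sample_n_civs():
    """One Monte Carlo draw of a Drake-style estimate for the number
    of technological civilizations that ever arise in the galaxy.
    All ranges are illustrative assumptions."""
    n_stars        = 2e11                            # stars in the Milky Way
    f_planets      = random.uniform(0.5, 1.0)        # fraction of stars with planets
    f_earthlike    = 10 ** random.uniform(-2, -0.5)  # earth-like planets per star
    f_life         = 10 ** random.uniform(-3, 0)     # planet develops life
    f_intelligence = 10 ** random.uniform(-3, 0)     # life -> intelligence
    f_tech         = 10 ** random.uniform(-2, 0)     # intelligence -> technology
    return (n_stars * f_planets * f_earthlike * f_life
            * f_intelligence * f_tech)

samples = sorted(sample_n_civs() for _ in range(100_000))
print("median:", f"{samples[50_000]:.3g}")
print("90% interval:", f"{samples[5_000]:.3g} to {samples[95_000]:.3g}")
```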

The Past

While there is still obvious uncertainty concerning the likelihood of each of the transitions along the path from the formation of an earth-like planet around a sol-like star up to an early tech civilization, the general direction of the recent evidence favours a strong Mediocrity Principle.

Here are a few highlight developments from the last few decades relating to an early filter:

  1. The time window between the formation of the earth and the earliest life has been narrowed to a brief interval.  Panspermia has also gained ground, with some recent complexity arguments favoring a common origin of life around 9 billion years ago.[1]
  2. The discovery of various extremophiles indicates that life is robust across a far wider range of environments than the norm on earth today.
  3. Advances in neuroscience and studies of animal intelligence lead to the conclusion that the human brain is not nearly as unique as once thought.  It is just an ordinary scaled-up primate brain, with a cortex enlarged to 4x the size of a chimpanzee's.  Elephants and some cetaceans have cortical neuron counts similar to the chimpanzee's, and demonstrate similar or greater levels of intelligence in terms of rituals, problem solving, tool use, communication, and even understanding rudimentary human language.  Elephants, cetaceans, and primates are widely separated lineages, indicating robustness and some degree of inevitability in the evolution of intelligence.

So, if there is a filter, it probably lies in the future (or at least the new evidence tilts us in that direction - but see this reply for an argument for an early filter).

The Future(s)

When modelling the future development of civilization, we must recognize that the future is a vast cloud of uncertainty compared to the past.  The best approach is to focus on the key general features of future postbiological civilizations, categorize the full space of models, and then update on our observations to determine which ranges of the parameter space are excluded and which regions remain open.

An abridged taxonomy of future civilization trajectories:

Collapse/Extinction:

Civilization is wiped out by an existential catastrophe that sterilizes the planet thoroughly enough to kill most large multicellular organisms, essentially resetting the evolutionary clock by a billion years.  Given the potential dangers of nanotech/AI/nuclear weapons - and then aliens - I believe this possibility is significant: i.e. in the 1% to 50% range.

Biological/Mixed Civilization:

This is the old-skool sci-fi scenario.  Humans or our biological descendants expand into space.  AI is developed but limited to roughly human intelligence, like C-3PO.  No or limited uploading.

This leads eventually to slow colonization, terraforming, and perhaps eventually Dyson spheres.

This scenario is almost not worth mentioning: prior < 1%.  Unfortunately SETI in its current form is still predicated on a world model that assigns a high prior to these futures.

PostBiological Warm-tech AI Civilization:

This is the Kurzweil/Moravec sci-fi scenario.  Humans become postbiological, merging with AI through uploading.  We become a computational civilization that then spreads out at some fraction of the speed of light to turn the galaxy into computronium.  This particular scenario is based on the assumption that energy is a key constraint, and that civilizations are essentially stellavores which harvest the energy of stars.

One of the very few reasonable assumptions we can make about any superintelligent postbiological civilization is that higher intelligence involves increased computational efficiency.  Advanced civs will upgrade into physical configurations that maximize computational capability given the local resources.

Thus to understand the physical form of future civs, we need to understand the physical limits of computation.

One key constraint is the Landauer limit, which states that the erasure (or cloning) of one bit of information requires a minimum of kT ln 2 joules.  At room temperature (293 K), this corresponds to a minimum of about 0.017 eV to erase one bit.  Minimum is, however, the keyword here: at the limit, the probability of the erasure succeeding is only 50%.  Reliable erasure requires some multiple of the minimal expenditure - a reasonable estimate being on the order of 100 kT (a few eV at room temperature) for bit erasures at today's levels of reliability.
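
These numbers are easy to check directly from the formula.  A minimal sketch (the ~100 kT reliability margin is the rough engineering assumption from above, not a law of physics):

```python
import math

k_B = 1.380649e-23     # Boltzmann constant, J/K
eV  = 1.602176634e-19  # joules per electronvolt

def landauer_joules(T):
    """Minimum energy to erase one bit at temperature T (kelvin)."""
    return k_B * T * math.log(2)

for T in (293.0, 2.7, 0.01):  # room temp, CMB, dilution-fridge scale
    E = landauer_joules(T)
    print(f"T = {T:6.2f} K : {E:.3e} J  ({E / eV:.2e} eV) per bit")

# Reliable erasure at ~100 kT (the engineering margin assumed above):
print(f"reliable erasure at 293 K : ~{100 * k_B * 293 / eV:.1f} eV")
```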

Now, the second key consideration is that Landauer's limit does not include the cost of interconnect, which already dominates the energy cost of modern computing.  Just moving bits around dissipates energy.

Moore's Law is approaching its asymptotic end within a decade or so due to these hard physical energy constraints and the related miniaturization limits.

I assign a prior to the warm-tech scenario that is about the same as my estimate of the probability that the more advanced cold-tech (reversible quantum computing, described next) is impossible: < 10%.

From Warm-tech to Cold-tech

There is a way forward to vastly increased energy efficiency, but it requires reversible computing (to increase the ratio of computations per bit erasure) and fully superconducting interconnect to reduce interconnect losses to near zero.

The path to enormously more powerful computational systems necessarily involves transitioning to very low temperatures, and the lower the better, for several key reasons:

  1. There is the obvious immediate gain from lowering the cost of bit erasures: a bit erasure at room temperature costs about 100 times more than a bit erasure at the cosmic background temperature, and roughly thirty thousand times more than an erasure at 0.01 K (the current achievable limit for large objects).
  2. Low temperatures are required for most superconducting materials regardless.
  3. The delicate coherence required for practical quantum computation requires, or at least works best at, ultra low temperatures.

At a more abstract level, the essence of computation is precise control over the physical configurations of a device as it undergoes complex state transitions.  Noise/entropy is the enemy of control, and temperature is a form of noise.

Assuming large scale quantum computing is possible, the ultimate computer is a reversible massively entangled quantum device operating at absolute zero.  Unfortunately, such a device would be delicate to a degree that is hard to imagine - even a single misplaced high energy particle could cause enormous damage.

In this model, an advanced computational civilization would take the form of a compact body (anywhere from asteroid to planet size) that employs layers of sophisticated shielding to deflect as much of the incoming particle flux as possible.  The ideal environment for such a device is as far away from hot stars as one can possibly go, and the farther the better.  The extreme energy efficiency of advanced low temperature reversible/quantum computing implies that energy is not a constraint.  These advanced civilizations could probably power themselves using fusion reactors for millions, if not billions, of years.

Stellar Escape Trajectories

For a cold-tech civilization, one interesting long term strategy involves escaping the local star's orbit to reach the colder interstellar medium, and eventually the intergalactic medium.

If we assume that these future civs have long planning horizons (reasonable), we can treat this as an investment with an initial cost - the energy required to achieve escape velocity - and a return measured as the future integral of computation gained over the trajectory due to increased energy efficiency.  Expendable boost mass in the system can be used, and domino chains of complex chaotic gravitational-assist maneuvers computed by deep simulations may offer a route to expel large objects using reasonable amounts of energy.[3]
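
For a sense of scale, here is a rough sketch of the direct (no-assist) energy cost of ejecting a large body, using the standard orbital relations; the body mass and starting orbit are illustrative assumptions:

```python
import math

G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30    # solar mass, kg
AU    = 1.496e11    # astronomical unit, m
L_sun = 3.828e26    # solar luminosity, W

def ejection_energy(mass_kg, r_m):
    """Extra kinetic energy to boost a body from a circular orbit at
    radius r to solar escape velocity (no gravitational assists)."""
    v_circ = math.sqrt(G * M_sun / r_m)
    v_esc  = math.sqrt(2.0) * v_circ
    return 0.5 * mass_kg * (v_esc**2 - v_circ**2)  # = G*M*m / (2r)

# A Ceres-mass body (~9.4e20 kg) starting from ~2.8 AU (illustrative):
E = ejection_energy(9.4e20, 2.8 * AU)
print(f"direct ejection energy: {E:.2e} J "
      f"(~{E / L_sun:.0f} seconds of total solar output)")
```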

The Great Game 

Given the constraints of known physics (ie no FTL), it appears that the computational brains housing more advanced cold-tech civs will be incredibly vulnerable to hostile aliens.  A relativistic kill vehicle is a simple technology that permits little avenue for direct defense.  The only strong defense is stealth.

Although the utility functions and ethics of future civs are highly speculative, we can observe that a very large space of utility functions lead to similar convergent instrumental goals involving control over one's immediate future light cone.  If we assume that some civs are essentially selfish, then the dynamics suggest successful strategies will involve stealth and deception to avoid detection combined with deep simulation sleuthing to discover potential alien civs and their locations.

If two civs discover each other's locations at around the same time, then MAD (mutually assured destruction) dynamics take over and cooperation has stronger benefits.  The vast distances involved suggest that one-sided discoveries are more likely.

Spheres of Influence

A new civ, upon achieving the early postbiological stage of development (earth in, say, 2050?), should be able to resolve the general answer to the Fermi paradox using advanced deep simulation alone - long before any probes could reach distant stars.  Assuming that the answer is "lots of aliens", further simulations could then be used to estimate the relative likelihood of elder civs having interacted with the past lightcone.

The first few civilizations would presumably realize that the galaxy is more likely to be mostly colonized, in which case the ideal strategy probably involves expansion of actuator-type devices (probes, construction machines) into nearby systems, combined with construction and expulsion of advanced stealthed coldtech brains out into the void.  On the other hand, the very nature of the stealth strategy suggests that it may be hard to confidently determine how colonized the galaxy is.

For civilizations appearing later, the situation is more complex.  The younger a civ estimates itself to be in the cosmic order, the more likely it becomes that its local system has already come under an alien influence.

From the perspective of an elder civ, an alien planet at a pre-singularity level of development has no immediate value.  Raw materials are plentiful - and most of the baryonic mass appears to be interstellar and free floating.  The tiny relative value of any raw materials on a biological world is probably outweighed - in the long run - by the potential future value of information trade with the resulting mature civ.

Each biological world - each seed of a future elder civ - although perhaps similar in the abstract, is unique in its details.  Each such world is valuable for the potential unique knowledge/insights it may eventually generate - directly or indirectly.  From a purely instrumental standpoint, there is thus some value in preserving biological worlds to increase general knowledge of civ development trajectories.

However, there could be cases where the elder civ may wish to intervene.  For example, if deep simulations predict that the younger world will probably develop into something unfriendly - like an aggressive selfish/unfriendly replicator - then small perturbations in the natural trajectory could be called for.  In short, the elder civ may have reasons to occasionally 'play god'.

On the other hand, any intervention itself would leave a detectable signature or trace in the historical trajectory which in turn could be detected by another rival or enemy civ!  In the best case these clues would only reveal the presence of an alien influence.  In the worst case they could reveal information concerning the intervening elder civ's home system and the likely locations of its key assets.

Around 70,000 years ago, we had a close encounter with Scholz's star, which passed within 0.8 light years of the sun (inside the Oort cloud).  If the galaxy is well colonized, flybys such as this have potentially interesting implications (that particular flyby corresponds to the estimated time of the Toba super-eruption, for example).

Conditioning on our Observational Data

Over the last few decades SETI has searched a small portion of the parameter space covering potential alien civs.  

SETI's original main focus concerned the detection of large permanent alien radio beacons.  We can reasonably rule out models that predict advanced civs constructing high energy omnidirectional radio beacons.

At this point we can also mostly rule out large warm-tech civilizations (energy constrained civilizations that harvest most of their energy from stars).

Obviously detecting cold-tech civilizations is considerably more difficult, and perhaps close to impossible if advanced stealth is a convergent strategy.

However, determining whether the galaxy as a whole is colonized by advanced stealth civs is a much easier problem.  In fact, one way or another the evidence is already right in front of us.  We now know that most of the mass in the galaxy is dark rather than light.  I have assumed that coldtech still involves baryonic matter and normal physics, but of course there is also the possibility that non-baryonic matter could be used for computation.  Either way, the dark matter situation is favorable.  Focusing on normal baryonic matter, the ratio of dark/cold to light/hot is still large - very favorable for colonization.

Observational Selection Effects

All advanced civs will have strong instrumental reasons to employ deep simulations to understand and model developmental trajectories for the galaxy as a whole and for civilizations in particular.  A very likely consequence is the production of large numbers of simulated conscious observers, a la the Simulation Argument.  Universes containing the more advanced low temperature reversible/quantum computing civilizations will tend to produce many more simulated observer moments, and are thus intrinsically more likely than one would otherwise expect - perhaps massively so.

 

Rogue Planets


If the galaxy is already colonized by stealthed coldtech civs, then one prediction is that some fraction of the stellar mass has been artificially ejected.  Some recent observations actually point - at least weakly - in this direction.

From "Nomads of The Galaxy"[4]

We estimate that there may be up to ∼ 10^5 compact objects in the mass range 10^−8 to 10^−2M⊙
per main sequence star that are unbound to a host star in the Galaxy. We refer to these objects as
nomads; in the literature a subset of these are sometimes called free-floating or rogue planets.

Although the error range is still large, it appears that free-floating planets outnumber planets bound to stars, perhaps by a rather large margin.

Assuming the galaxy is colonized: it could be that rogue planets form naturally outside of star systems and are then colonized.  It could be that they form around stars and are then ejected naturally (and colonized).  Artificial ejection - even if real - may be a rare event.  Or not.  But at least a few of these options could potentially be differentiated with future observations - for example, if we find an interesting discrepancy between the rogue planet distribution predicted by formation simulations (which obviously do not yet include aliens!) and actual observations.

Also: if rogue planets outnumber stars by a large margin, then it follows that rogue planet flybys are more common in the same proportion.
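
A crude version of that scaling argument, using the standard n·σ·v encounter-rate estimate (the density and velocity figures are illustrative assumptions):

```python
import math

# Encounter rate ~ n * pi * d^2 * v; all figures are rough assumptions.
LY          = 9.46e15   # metres per light year
n_stars     = 0.004     # stars per cubic light year (solar neighbourhood)
v_rel       = 3.0e4     # typical relative velocity, m/s
v_ly_per_yr = v_rel * 3.156e7 / LY

def flybys_per_year(n_per_ly3, within_ly):
    return n_per_ly3 * math.pi * within_ly**2 * v_ly_per_yr

r = flybys_per_year(n_stars, 0.8)          # Scholz-star-like passes
print(f"stellar flybys within 0.8 ly: one per ~{1 / r:,.0f} yr")
r = flybys_per_year(100 * n_stars, 0.8)    # if nomads outnumber stars 100:1
print(f"nomad flybys within 0.8 ly:   one per ~{1 / r:,.0f} yr")
```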

 

Conclusion

SETI to date allows us to exclude some regions of the parameter space for alien civs, but the regions excluded correspond to low prior probability models anyway, based on the postbiological perspective on the future of life.  The most interesting regions of the parameter space probably involve advanced stealthy aliens in the form of small compact cold objects floating in the interstellar medium.

The upcoming WFIRST telescope should shed more light on dark matter and significantly enhance our microlensing detection abilities.  Sadly, its planned launch date isn't until 2024.  Space development is slow.

 

Comments


First, in the interest of full disclosure, the reason I'm here on LW is to maximize my contribution to promoting intelligent life. It currently appears that maximizing the number of Quality Adjusted Life Years integrated over the period from now until the heat death of the universe can only be achieved through spaceflight and spreading life/AI through the solar system, and then the galaxy. This can be done through either directed panspermia or by spreading intelligent life/AI directly. I have spent the last year or so trying to find any flaws in my understanding, and so I'm about to do everything I can to tear your initial argument to shreds. That's not necessarily because I don't agree with you (although my reasoning diverges about halfway through), but rather a concerted effort to avoid confirmation bias. I don't want to devote my entire life to something sub-optimal, just because I'm afraid to put my views under scrutiny.

So, if there is a filter, it probably lies in the future (or at least the new evidence tilts us in that direction).

You mentioned several possibilities for a great filter in the past, but that was by no means a comprehensive list. Here's a longer list, off the top of my head:

  1. Habitable stars are rare. (roughly sun-sized, minimal solar flares, etc) Poor candidate, as you point out.

  2. Habitable planets are rare. (Orbit within the habitable zone, liquid H2O, ingredients for life) You touched on this, but our understanding of the source of Earth's water is poor, so I don't think we can discard this as a possibility. We have an oddly large moon, which may have played a role. First, its gravity ensured that the Earth's rotational axis stays roughly perpendicular to its orbital plane most of the time. This means that the planet is baked roughly evenly, rather than spending millions of years with the north pole facing the sun. Tidal forces also affect the mantle, which creates our magnetosphere, which in turn prevents atmospheric loss to space. There are a surprising number of other theories linking the moon to life on Earth.

  3. Panspermia / Abiogenesis is rare. (transport may be limited by radiation/mutations, while genesis of new life may require rare environments or energy sources) We have reasonable evidence that life could survive within rocks blasted off of a planet's surface long enough to seed nearby planets, but not necessarily that life could survive the long voyage between nearby stars. We've demonstrated that most, but not all, essential amino acids can be generated under conditions similar to those of early Earth. Also, there's a weird coincidence: the formation of the first life on earth seems to coincide with the end of the late heavy bombardment, which might have created conditions conducive to the formation of life late enough after planetary formation for geological activity to settle down a bit. There doesn't seem to be any reason why there should have been a second heavy bombardment period, though, so that may be unique to our solar system.

  4. Either photosynthesis is rare, or the Oxygen Catastrophe generally kills off all species. (High concentrations of oxygen are highly poisonous, which caused a massive extinction event. Additionally, losing all that CO2 from the atmosphere cooled earth tremendously, since the sun wasn't as bright back then. This caused the longest Snowball Earth episode in the planet's history, in which all the planet's oceans froze solid and all the land was covered in one massive glacier.) It seems plausible that life would usually fail to recover from such an episode.

  5. Prokaryotic life is common, but Eukaryotic life is rare. (It's really hard to evolve a cell nucleus.) Eukaryotes only appeared about 2 billion years after Prokaryotes; halfway through the chain of evolution from the first life until today.

  6. Eukaryotic life is common, but multicellular life is rare. We've only had it for ~500 million years.

  7. Multicellular life is common, but complex life on land is rare. It's possible that we could never have developed spines or crawled onto land, or that animal life itself might be rare. This seems much less plausible, since it seems to have sprung directly from the evolution of multicellular life, in a fairly spectacular explosion of complexity.

  8. Complex life is common, but is regularly wiped out before it can become intelligent. There have been 5 big extinction events in earth's history, most recently the meteor that killed the dinosaurs. Although these weren't enough to wipe out all life on earth, there are several cosmic threats that could. These include collision with another planet or other sufficiently large object, which might be caused by orbital periods synching up with Jupiter or by passing stars or black holes. Additionally, Gamma Ray Bursts are extremely common, and might regularly wipe out all life in the inner galaxy, where the stars are closer together. This would explain why we evolved out on the edge of a spiral arm of the Milky Way, and not closer to the galactic center.

  9. Complex life is common, but intelligent life is rare. There seem to be a lot of somewhat intelligent creatures that aren't closely related to us. (Parrots, octopuses, dolphins, etc.) There are even several animals that make limited use of tools. What is rare, however, appears to be the capacity for abstract thought. Chimps can learn from each other by copying, but have a hard time learning or teaching without demonstrating. We're also much better at learning by copying others, but we can also learn from abstract symbols written on a piece of paper. This appears to be a result of runaway evolution, where humans selected for mates with a high capacity for abstract thought, perhaps via a high capacity to predict others' actions and plot accordingly.

  10. Intelligent life is common, but technological civilizations are rare. We have had several steady-state conditions over our species' history. We used the first simple stone tools ~2.3 million years ago, and then stood upright and mastered fire 1.5 million years ago. We haven't evolved noticeably over the past 200,000 years, and yet we only developed agriculture and colonized the planet 10,000 years ago. Some of that may be due to the most recent ice age, but not all of it. We didn't invent bronze or written language until 5,000 years ago. All the great advanced civilizations made relatively small advances in technology, and put all their efforts into infrastructure rather than R&D. The only thing the Romans invented was concrete; everything else was an adaptation of ideas from other cultures. Western civilization is really the first culture to invest heavily in R&D, and we generally suck at it. Places like Silicon Valley are the exception to the rule.

Given all this, I wouldn't be so quick to assume that the great filter is in front of us. All this must be weighed against the risks posed by all the various existential risks. Nuclear war was a close call in the cold war, and the risk is an order of magnitude lower now, but is by no means gone. AI gets discussed a lot on here, but I don't think biological warfare gets the attention it deserves. Our understanding of biology is growing rapidly, and I think it may one day be relatively easy for anyone to genetically engineer an unusually dangerous virus or pandemic. Additionally, advanced civilizations in general tend to only last on the order of a hundred years, according to this paper. That's more or less in line with the Future of Humanity Institute's informal Global Catastrophic Risk Survey. (The mean estimate for humanity's chance of going extinct this century was on the order of 20%.) That said, Nick Bostrom himself appears to think that the great filter is more likely to lie behind us than ahead of us. To me, it seems like it could easily go either way, but since Bostrom has been researching this much longer than I have, I'm inclined to shift my probability estimate a bit further toward the great filter being behind us.

The above dealt primarily with the first half of your post, but let me also address the 2nd half. You've assigned several probability estimates to various outcomes of our civilization:

  1. Collapse/Extinction: "in the 1% to 50% range." I'm inclined to agree with you on this one, as described in the last paragraph of my above post.

  2. Biological/Mixed Civilization: “This scenario is almost not worth mentioning: prior < 1%” I think you've defined this a bit too narrowly. We don't yet see any limiting factor for AI advancement besides physics, but that doesn't mean that one won't make itself apparent. Maybe this factor will turn out to be teraFLOPS (aka limited by Moore's law) or energy (limited by our energy production capacity) or even matter (limited by the amount of rare earth elements necessary to make computronium). But it could also happen that we fail to make a super-intelligence at all, or that AI eventually achieves most, but not all, of humans' mental abilities. The likelihood of a general intelligence increases asymptotically with time, but I think it would be a mistake to assume that it is increasing asymptotically toward 1. It could easily be getting closer and closer to 0.8 or some other value which is hard to calculate. The existence of the human mind shows that consciousness can be built out of atoms, but not necessarily that it can be built out of a string of transistors, or that it is simple enough that we can ever understand it well enough to reproduce it in code. There's also the existential risk of developing a flawed AI. We only have 1 shot at it, and the evidence seems to be against developing one correctly on the first try. I suspect that the supermajority of civilizations that develop AIs develop flawed AIs. Even if 90% develop an AI before going to the stars, perhaps >99.9999% are wiped out by a poorly designed AI. This would lead to many more “Biological/Mixed Civilizations” than AI civilizations, if the flawed AIs tend to wipe themselves out or not to spread out into the universe.

  3. PostBiological Warm-tech AI Civilization: “I assign a prior to the warm-tech scenario that is about the same as my estimate of the probability that the more advanced cold-tech (reversible quantum computing, described next) is impossible: < 10%.” This seems slightly low to me, but not by much. “This particular scenario is based on the assumption that energy is a key constraint, and that civilizations are essentially stellavores which harvest the energy of stars.” Although this state doesn't flow from energy being a limiting factor (aka biological/mixed civilizations may also be energy limited) I agree that such a civilization would eventually become energy limited. I see 2 ways of solving this: better harvesting (aka Dyson swarms, since Dyson spheres are likely mass-limited) or broader civilization (if it takes less energy to send a colony to the nearest star, then you do that before you start building a Dyson swarm).

  4. From Warm-tech to Cold-tech: This seems to be where you are putting the majority of your probability mass. I'd probably put less, but that's not actually my main contention. I don't buy that this is sufficient reason to travel to the interstellar medium, away from such a ready energy and matter source as a solar system. You list 3 reasons: lower energy bit erasures, superconductivity, and quantum computer efficiency. Bit erasure costs seem like they would be more than made up for by a surplus of energy available from plentiful solar power, materials for fusion plants, etc. Only a few superconductors require temperatures below ~50 Kelvin, and you can get that anywhere perpetually shaded from the sun, such as the craters on the north and south poles of the moon (~30 Kelvin). If you want it somewhere else, stop an asteroid from spinning and build a computer on the dark side. I'm not sure that quantum computers need to be below that either. Anywhere you go, you'll still be heated by cosmic microwave background radiation to ~4 K. Is an order of magnitude decrease in temperature really worth several orders of magnitude decrease in energy/matter harvesting ability? In order to expand exponentially, such a system would still need huge amounts of matter for superconductors and whatever else.

Assuming large scale quantum computing is possible, the ultimate computer is a reversible massively entangled quantum device operating at absolute zero. Unfortunately, such a device would be delicate to a degree that is hard to imagine - even a single misplaced high energy particle could cause enormous damage. In this model, an advanced computational civilization would take the form of a compact body (anywhere from asteroid to planet size) that employs layers of sophisticated shielding to deflect as much of the incoming particle flux as possible. The ideal environment for such a device is as far away from hot stars as one can possibly go, and the farther the better. The extreme energy efficiency of advanced low temperature reversible/quantum computing implies that energy is not a constraint. These advanced civilizations could probably power themselves using fusion reactors for millions, if not billions, of years.

I don't understand why this predicts no Dyson spheres, no visible mega-engineering, etc, and convergent self-limiting to a handful of solar systems and cold brains per civilization.

Computing near the Sun costs more because it's hotter, sure. Fortunately, I understand that the Sun produces hundreds, even thousands of times more energy than a little fusion reactor does, so some inefficiencies are not a problem. You say that the reversible brains don't need that much energy. OK, but more computing power is always better, the cold brains want as much as possible, so what limits them? If it's energy, then they will want to pipe in as much energy as possible from their local star. If it's putting matter into the right configuration for cold brains and shielding, then they will... want to pipe in as much matter lifted by energy as possible from their local star so they can build even more cold brains. Space is vast, so it's not like they're going to run out of cold places to put cold brains, and even if they do, well, a Dyson sphere around a star will fix that, so they'll keep expanding with the matter & energy. Interconnects and IO use up a lot of energy? Well, we already know how to solve that. Whatever the binding limit to their computational power is, it seems to be solved by either more matter, more energy, or both, and the largest available source of both is stars, far from being 'trash heaps'.

And since they are already expanding, their massive redundancy and deep space stealth/mobility means relativistic strikes are irrelevant, and so the usual first-mover expansionary convergent argument applies. So you should get a universe of Dyson spheres feeding out mass-energy to the surrounding cold brains who are constantly colonizing fresh systems for more mass-energy to compute in the voids with. This doesn't sound remotely like a Fermi paradox resolution.

Computing near the Sun costs more because it's hotter, sure. Fortunately, I understand that the Sun produces hundreds, even thousands of times more energy than a little fusion reactor does, so some inefficiencies are not a problem.

Every practical computational tech substrate has some error-bounded compute/temperature curve, where computational capability quickly falls to zero past some upper bound temperature.  Even for our current tech, computational capacity essentially falls off a cliff somewhere well below 1,000K.

My general point is that the really advanced computing tech shifts all those curves over - towards lower temperatures.  This is a hard limit of physics; it cannot be overcome.  So for a really advanced reversible quantum computer employing superconduction and long-coherence quantum entanglement, operating at 1K may be just as impossible as operating at 1,000K is for our current tech.  It's not entirely a matter of efficiency.

Another way of looking at it - advanced tech just requires lower temperatures - as temperature is just a measure of entropy (undesired/unmodeled state transitions). Temperature is literally an inverse measure of computational potential. The ultimate computer necessarily must have a temperature of zero.

You say that the reversible brains don't need that much energy.

At the limits they need zero. Approaching anything close to those limits they have no need of stars. Not only that, but they couldn't survive any energy influx much larger than some limit, and that limit necessarily must go to zero as their computational capacity approaches theoretical limits.

If it's energy, then they will want to pipe in as much energy as possible from their local star.

No. There is an exactly correct amount of energy to pipe in, set by the viable operating temperature of their current tech level.  And this amount goes to zero as you advance up the tech ladder.

It may help to consider applying your statement to our current planet civ. What if we could pipe in 10000x more energy than we currently receive from the sun. Wouldn't that be great? No. It would cook the earth.

The same principle applies, but as you advance up the ultra-tech ladder, the temp ranges get lower and lower (because remember, temp is literally an inverse measure of maximum computational capability).

OK, but more computing power is always better, the cold brains want as much as possible, so what limits them?

Given some lump of matter, there is of course a maximum information storage capacity and a max compute rate - in a reversible computer the compute rate is bounded by the maximum energy density the system can structurally support, which is in turn bounded by its mass.  In terms of ultimate limits, it really depends on whether exotic options like creating new universes are practical or not.  If creating new universes is feasible, there probably are no hard limits; all limits become soft.
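
One way to make "bounded by its mass" concrete is the Margolus-Levitin theorem, which caps operations per second at 2E/(πħ); with E = mc² this yields Lloyd's "ultimate laptop" figure.  A minimal sketch:

```python
# Margolus-Levitin: a system with average energy E performs at most
# 2E / (pi * hbar) elementary operations per second.  Taking E = m c^2
# gives Lloyd's "ultimate laptop" bound for a given lump of matter.
import math

hbar = 1.054571817e-34  # reduced Planck constant, J s
c    = 2.99792458e8     # speed of light, m/s

def max_ops_per_second(mass_kg):
    return 2 * mass_kg * c**2 / (math.pi * hbar)

print(f"1 kg bound: {max_ops_per_second(1.0):.1e} ops/sec")  # ~5.4e50
```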

So you should get a universe of Dyson spheres feeding out mass-energy to the surrounding cold brains who are constantly colonizing fresh systems for more mass-energy to compute in the voids with

Dyson spheres are extremely unlikely to be economically viable/useful, given the low value of energy past a certain tech level (vastly lower energy need per unit mass).

Cold brains need some mass, the question then is how the colonization value of mass varies across space. Mass that is too close to a star would need to be moved away from the star, which is very expensive.

So the most valuable mass that gets colonized first would be the rogue planets/nomads - which apparently are more common than attached planets.

If colonization continues long enough, it will spread to lower and lower valued real estate. So eventually smaller rocky bodies in the outer system get stripped away, slowly progressing inward.

The big unknown variable is again what the end of tech in the universe looks like, which gets back to that new universe creation question.  If that kind of ultimate/magic tech is possible, civs will invest everything into that, and you get less colonization, depending on the difficulty/engineering tradeoffs.

Given some lump of matter, there is of course a maximum information storage capacity and a max compute rate - in a reversible computer the compute rate is bounded by the maximum energy density the system can structurally support, which is in turn bounded by its mass. In terms of ultimate limits, it really depends on whether exotic options like creating new universes are practical or not. If creating new universes is feasible, there probably are no hard limits; all limits become soft.

This still doesn't answer my question. I understand your points about why colder is better, my question is: why don't they expand constantly with ever more cold brains, which are collectively capable of ever more computation? My smartphone processor is more energy-efficient than my laptop, but that doesn't mean datacenters don't exist or are useless or aren't popping up like mushrooms.

At the limits they need zero.

Correct me if I'm wrong, but zero energy consumption assumes both coldness and slowness, doesn't it? Slowness is a problem for a superintelligence. What good is super-efficiency if it takes millennia to calculate answers which some more energy would have solved quicker? Time is not free.

It may help to consider applying your statement to our current planet civ. What if we could pipe in 10000x more energy than we currently receive from the sun. Wouldn't that be great? No. It would cook the earth.

That would be great. If we had 10,000x more energy (and advanced technology etc), we could disassemble the Earth, move the parts around, and come up with useful structures to compute with it which would dissipate that energy productively. Turn it into a Matrioshka brain or something from one of Anders Sandberg's papers on optimal large-scale computing artifacts.

Dyson spheres are extremely unlikely to be economically viable/useful, given the low value of energy past a certain tech level (vastly lower energy need per unit mass). Cold brains need some mass, the question then is how the colonization value of mass varies across space. Mass that is too close to a star would need to be moved away from the star, which is very expensive.

Yes, it is expensive. Good thing we have a star right there to move all that mass with. Maybe its energy could be harnessed with some sort of enclosure....

If colonization continues long enough, it will spread to lower and lower valued real estate. So eventually smaller rocky bodies in the outer system get stripped away, slowly progressing inward.

Which ends in everything being used up, which even if all that planet engineering and moving doesn't require Dyson spheres, is still inconsistent with our many observations of exoplanets and leaves the Fermi paradox unresolved.

I understand your points about why colder is better, my question is: why don't they expand constantly with ever more cold brains, which are collectively capable of ever more computation?

At any point in development, investing resources in physical expansion has a payoff/cost/risk profile, as does investing resources in tech advancement. Spatial expansion offers polynomial growth, which is pretty puny compared to the exponential growth from tech advancement. Furthermore, the distances between stars are pretty vast.

If you plot our current trajectory forward, we get to a computational singularity long before any serious colonization effort.  Space colonization is kind of comical in its economic payoff compared to chasing Moore's Law.  So everything depends on what the endpoint of the tech singularity is.  Does it actually end with some hard limit to tech?  If it does, and slow polynomial growth is the only option after that, then you get galactic colonization as the likely outcome.  If the tech singularity leads to stronger outcomes a la new universe manipulations, then you never need to colonize; it's best to just invest everything locally.  And of course there is the spectrum in between, where you get some colonization, but the timescale is slowed.
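
A toy comparison of the two growth modes (the 2-year doubling time is an illustrative assumption):

```python
# Toy comparison: lightspeed expansion grows resources roughly as t^3
# (the volume of an expanding sphere), while local tech progress is
# modeled here as exponential with an assumed 2-year doubling time.
for t in (10, 100, 500):
    expansion = t ** 3
    tech      = 2.0 ** (t / 2)
    print(f"t = {t:4d} yr : expansion x{expansion:.1e}, tech x{tech:.1e}")
```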

Correct me if I'm wrong, but zero energy consumption assumes both coldness and slowness, doesn't it?

No, not for reversible computing. The energy required to represent/compute a 1 bit state transition depends on reliability, temperature, and speed, but that energy is not consumed unless there is an erasure. (and as energy is always conserved, erasure really just means you lost track of a bit)

In fact the reversible superconducting designs are some of the fastest feasible in the near term.

That would be great. If we had 10,000x more energy (and advanced technology etc), we could disassemble the Earth, move the parts around, and come up with useful structures to compute with it which would dissipate that energy productively.

Biological computing (cells) doesn't work at those temperatures, and all the exotic tech far past bio computers requires even lower temperatures. The temperatures implied by 10,000x energy density on earth preclude all life or any interesting computation.

Yes, it is expensive. Good thing we have a star right there to move all that mass with. Maybe its energy could be harnessed with some sort of enclosure....

I'm not all that confident that moving mass out-system is actually better than just leaving it in place and doing best-effort cooling in situ.  The point is that energy is not the constraint for advancing computing tech; it's more mass limited than anything, or perhaps knowledge is the most important limit.  You'd never want to waste all that mass on a Dyson sphere.  All of the big designs are dumb - you want it to be as small, compact, and cold as possible.  More like a black hole.

Which ends in everything being used up, which even if all that planet engineering and moving doesn't require Dyson spheres, is still inconsistent with our many observations of exoplanets and

It's extremely unlikely that all the matter gets used up in any realistic development model, even with colonization. Life did not 'use up' more than a tiny fraction of the matter of earth, and so on.

leaves the Fermi paradox unresolved.

From the evidence for mediocrity, the lower Kolmogorov complexity of mediocrity, and the huge number of planets in the galaxy, I start with a prior strongly favoring a reasonably high number of civs per galaxy, and low odds on us being first.

We have high uncertainty on the end/late outcome of a post-singularity tech civ (or at least I do, I get the impression that people here inexplicably have extremely high confidence in the stellavore expansionist model, perhaps because of lack of familiarity with the alternatives? not sure).

If post-singularity tech allows new universe creation and other exotic options, you never have much colonization - at least not in this galaxy, from our perspective. If it does not, and there is an eventual end of tech progression, then colonization is expected.

But as I argued above, even colonization could be hard to detect - as advanced civs will be small/cold/dark.

Transcension is strongly favored a priori for anthropic reasons - transcendent universes create far more observers like us.  Then, updating on what we can see of the galaxy, colonization loses steam: our temporal rank is normal, whereas most colonization models predict we should be early.

For transcension, it's naturally hard to predict what that means... but one possibility is a local 'exit', at least from the perspective of outside observers.  Creation of lots of new universes, followed by physical civ-death in this universe, but effective immortality in new universes (a la game theoretic horse trading across the multiverse).  New universe creation could also potentially alter physics in ways that permit further tech progression.  Either way, all of the mass is locally invested/used up for 'magic' that is incomprehensibly more valuable than colonization.

If you plot our current trajectory forward, we get to a computational singularity long before any serious colonization effort. Space colonization is kind of comical in its economic payoff compared to chasing Moore's Law. So everything depends on what the endpoint of the tech singularity is. Does it actually end with some hard limit to tech? If it does, and slow polynomial growth is the only option after that, then you get galactic colonization as the likely outcome.

So your entire argument boils down to another person who thinks transcension is universally convergent and this is the solution to the Fermi paradox? I don't see what your reversible computing detour adds to the discussion, if you can't show that making only a few cold brains sans any sort of cosmic engineering is universally convergent.

Biological computing (cells) doesn't work at those temperatures, and all the exotic tech far past bio computers requires even lower temperatures. The temperatures implied by 10,000x energy density on earth preclude all life or any interesting computation.

I never said anything about using biology or leaving the Earth intact. I said quite the opposite.

It's extremely unlikely that all the matter gets used up in any realistic development model, even with colonization. Life did not 'use up' more than a tiny fraction of the matter of earth, and so on.

You need to show your work here. Why is it unlikely? Why don't they disassemble solar systems to build ever more cold brains? I keep asking this, and you keep avoiding it. Why is it better to have fewer cold brains rather than more? Why is it better to have less computational power than more? Why do all this intricate engineering for super-efficient reversible computers in the depths of the void, and only make a few and not use up all the local matter? Why are all the answers to these questions so iron-clad and so universally compelling that none of the trillions of civilizations you get from mediocrity will do anything different?

So your entire argument boils down to another person who thinks transcension is universally convergent and this is the solution to the Fermi paradox?

No... As I said above, even if transcension is possible, that doesn't preclude some expansion.  You'd only get zero expansion if transcension is really easy/fast.  On the convergence issue, we should expect that the main development outcomes are completely convergent.  Transcension is instrumentally convergent - it helps with any realistic goals.

I don't see what your reversible computing detour adds to the discussion, if you can't show that making only a few cold brains sans any sort of cosmic engineering is universally convergent.

The reversible computing stuff is important for modeling the structure of advanced civs.  Even in transcension models, you need enormous computation - and everything you could do with new universe creation is entirely compute limited.  Understanding the limits of computing is important for predicting what end-tech computation looks like for both transcend and expand models.  (For example, if end-tech optimal computation were energy limited, that would predict Dyson spheres to harvest solar energy.)

The temperatures implied by 10,000x energy density on earth preclude all life or any interesting computation.

I never said anything about using biology or leaving the Earth intact. I said quite the opposite.

Advanced computation doesn't happen at those temperatures, for the same basic reasons that advanced communication doesn't work for extremely large values of noise in SNR. I was trying to illustrate the connection between energy flow and temperature.

You need to show your work here. Why is it unlikely? Why don't they disassemble solar systems to build ever more cold brains? I keep asking this, and you keep avoiding it.

First let us consider the optimal compute configuration of a solar system without any large-scale re-positioning, and then we'll remove that constraint.

For any solid body (planet, moon, asteroid, etc), there is some optimal compute design given its structural composition, internal temp, and incoming irradiance from the sun.  Advanced compute tech doesn't require any significant energy - so being closer to the sun is not an advantage at all.  You need to expend more energy on cooling (for example, it takes about 15 kilowatts to cool a single current chip from earth temp down to low temps, although there have been some recent breakthroughs in passive metamaterial shielding that could change that picture).  So you just use/waste that extra energy on the best cooling you can manage.
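
The thermodynamic floor on that cooling cost follows from the Carnot limit; a quick sketch (the 300 K ambient and 0.5 W chip power are illustrative assumptions):

```python
# Carnot limit on refrigeration: pumping 1 J of heat out of a cold
# reservoir at T_cold into surroundings at T_hot costs at least
# (T_hot - T_cold) / T_cold joules of work.
def carnot_work_per_joule(t_cold, t_hot=300.0):
    return (t_hot - t_cold) / t_cold

for t_cold in (77.0, 4.0, 0.01):
    w = carnot_work_per_joule(t_cold)
    print(f"removing 1 J at {t_cold:5.2f} K costs >= {w:,.1f} J of work")

# A chip dissipating ~0.5 W held at 0.01 K needs >= ~15 kW at the wall,
# consistent with the figure quoted above.
print(f"0.5 W chip at 0.01 K: >= {0.5 * carnot_work_per_joule(0.01) / 1e3:.0f} kW")
```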

So, now consider moving the matter around. What would be the point of building a dyson sphere? You don't need more energy. You need more metal mass, lower temperatures and smaller size. A dyson sphere doesn't help with any of that.

Basically we can rule out config changes for the metal/rocky mass (useful for compute) that: 1.) increase temperature 2.) increase size

The gradient of improvement is all in the opposite direction: decreasing temperature and size (with tradeoffs of course).

So it may be worth while investing some energy in collecting small useful stuff (asteroids) into larger, denser computational bodies. It may even be worth while moving stuff farther from the star, but the specifics really depend on a complex set of unknowns.

One of the big unknowns of course being the timescale, which depends on the transcend issue.

Now for the star itself: it has most of the mass, but that mass is not really accessible, and most of it is in low value elements - we want more metals.  It could be that the best use of that matter is to simply continue cooking it in the stellar furnace to produce more metals - as there is no other way, as far as I know.

But doing anything with the star would probably take a very long amount of time, so it's only relevant in non-transcendent models.

In terms of predicted observations, in most of these models there are few if any large structures, but individual planetary bodies will probably be altered from their natural distributions. Some possible observables: lower than expected temperatures, unusual chemical distributions, and possibly higher than expected quantities/volumes of ejected bodies.

Some caveats: I don't really have much of an idea of the energy costs of new universe creation, which is important for the transcend case. That probably is not a reversible op, and so it may be a motivation for harvesting solar energy.

There's also KIC 8462852 of course.  If we assume that it is a Dyson-swarm-like object, we can estimate a rough model for civs in the galaxy.  KIC 8462852 has been dimming for at least a century.  It could represent the endphase of a tech civ, approaching its final transcend state.  Say that takes around 1,000 years (vaguely estimating from the 100 years of data we have).

This dimming star is one out of perhaps 10 million nearby stars we have observed in this way.  Say 1 in 10 systems will ever develop life, and the timescale spread or deviation is about a billion years - then we should expect to observe about 1 in 10 million stars in an endphase dimming state, given that the phase lasts only 1,000 years.  This would of course predict a large number of stars that finished dimming long ago, but given that we just barely detected KIC 8462852 because it was dimming, we probably can't yet detect stars that already dimmed and then stabilized long ago.
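
A quick reconstruction of that arithmetic, using only the assumed values stated above:

```python
# Inputs are the assumed values from the paragraph above, not data.
n_observed     = 1e7   # stars surveyed in a KIC 8462852-like way
f_life         = 0.1   # fraction of systems that ever develop life
spread_years   = 1e9   # spread in civilization arrival times
endphase_years = 1e3   # assumed duration of the dimming endphase

expected = n_observed * f_life * (endphase_years / spread_years)
print(f"expected currently-dimming stars in the sample: {expected:.0f}")
# -> ~1, consistent with observing a single anomalous dimmer.
```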

Advanced computation doesn't happen at those temperatures

Could it make sense to use an enormous amount of energy to achieve an enormous amount of cooling? Possibly using laser cooling or some similar technique?

Advanced computation doesn't happen at those temperatures, for the same basic reasons that advanced communication doesn't work for extremely large values of noise in SNR. I was trying to illustrate the connection between energy flow and temperature.

And I was trying to illustrate that there's more to life than considering one cold brain in isolation in the void without asking any questions about what else all that free energy could be used for.

So, now consider moving the matter around. What would be the point of building a dyson sphere? You don't need more energy. You need more metal mass, lower temperatures and smaller size. A dyson sphere doesn't help with any of that.

A Dyson sphere helps with moving matter around, potentially with elemental conversion, and with cooling. If nothing else, if the ambient energy of the star is a big problem, you can use it to redirect the energy elsewhere away from your cold brains.

But doing anything with the star would probably take a very long amount of time, so it's only relevant in non-transcendent models.

Exponential growth. I think Sandberg's calculated you can build a Dyson sphere in a century, apropos of KIC 8462852's oddly gradual dimming. And you hardly need to finish it before you get any benefits.

So it may be worth while investing some energy in collecting small useful stuff (asteroids) into larger, denser computational bodies. It may even be worth while moving stuff farther from the star, but the specifics really depend on a complex set of unknowns.

You say 'may', but that seems really likely. After all, what 'complex set of unknowns' will be so fine-tuned that the answer will, for all civilizations, be 0 rather than some astronomically large number? This is the heart of your argument! You need to show this, not handwave it! You cannot show that this resolves the Fermi paradox unless you make a solid case that cold brains will find harnessing solar systems' energy and matter totally useless! As it stands, this article reads like '1. reversible computing is awesome 2. ??? 3. no expansion, hence, transcension 4. Fermi paradox solved!' No, it's not. Stop handwaving and show that more cold brains are not better, that there are zero uses for all the stellar energy and mass, and there won't be any meaningful colonization or stellar engineering.

There's also KIC 8462852 of course. If we assume that it is a dyson swarm like object, we can estimate a rough model for civs in the galaxy. KIC 8462852 has been dimming for at least a century. It could represent the endphase of a tech civ, approaching it's final transcend state. Say that takes around 1,000 years (vaguely estimating from the 100 years of data we have).

Which is a highly dubious case, of course.

we probably can't yet detect stars that already dimmed and then stabilized long ago.

I don't see why the usual infrared argument doesn't apply to them or KIC 8462852.

I don't see why the usual infrared argument doesn't apply to them or KIC 8462852.

If by the infrared argument you refer to the idea that a Dyson swarm should radiate in the infrared, this is probably wrong.  It relies on the assumption that the alien civ operates at an earth temp of 300K or so.  As you reduce that temp down to 3K, the excess radiation diminishes to something indistinguishable from the CMB, so we can't detect large cold structures that way.  For the reasons discussed earlier, a non-zero operating temp would only be useful during initial construction phases, whereas near-zero temp is preferred in the long term.  The fact that KIC 8462852 has no infrared excess makes it more interesting, not less.
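
The T^4 scaling behind this claim is just the Stefan-Boltzmann law; a quick sketch:

```python
# Stefan-Boltzmann (P = sigma * T^4) and Wien's displacement law show
# why a ~3 K structure is nearly invisible against the 2.7 K CMB.
sigma = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
wien  = 2.898e-3   # Wien displacement constant, m K

for T in (300.0, 30.0, 3.0):
    power   = sigma * T**4
    peak_mm = wien / T * 1e3
    print(f"T = {T:5.1f} K : {power:.2e} W/m^2, peak at {peak_mm:.2f} mm")

# A 300 K swarm radiates ~1e8 times more per square metre than a 3 K
# one, whose ~1 mm emission peak sits on top of the CMB spectrum.
```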

A Dyson sphere helps with moving matter around, potentially with elemental conversion, and with cooling.

Moving matter - sure. But that would be a temporary use case, after which you'd no longer need that config, and you'd want to rearrange it back into a bunch of spherical dense computing planetoids.

potentially with elemental conversion

This is dubious. I mean in theory you could reflect/recapture star energy to increase temperature to potentially generate metals faster, but it seems to be a huge waste of mass for a small increase in cooking rate. You'd be giving up all of your higher intelligence by not using that mass for small compact cold compute centers.

If nothing else, if the ambient energy of the star is a big problem, you can use it to redirect the energy elsewhere away from your cold brains.

Yes, but that's just equivalent to shielding. That only requires redirecting the tiny volume of energy hitting the planetary surfaces. It doesn't require any large structures.

Exponential growth.

Exponential growth = transcend. Exponential growth will end unless you can overcome the speed of light, which requires exotic options like new universe creation or altering physics.

I think Sandberg's calculated you can build a Dyson sphere in a century, apropos of KIC 8462852's oddly gradual dimming. And you hardly need to finish it before you get any benefits.

Got a link? I found this FAQ, where he says:

Using self-replicating machinery the asteroid belt and minor moons could be converted into habitats in a few years, while disassembly of larger planets would take 10-1000 times longer (depending on how much energy and violence was used).

That's a lognormal dist over several decades to several millennia. A dimming time for KIC 8462852 in the range of centuries to a millennium is a near perfect (lognormal) dist overlap.
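
As a rough illustration of that overlap: treating Sandberg's "a few years x (10 to 1000)" as roughly 30-3,000 years, and taking those as 1-sigma bounds of a lognormal in log10 space, is my parameterization, not his:

```python
# Sketch: probability mass a lognormal over disassembly times puts on
# KIC 8462852's inferred dimming window of ~100-1,000 years.
import math
from statistics import NormalDist

lo, hi = 30.0, 3000.0
mu = (math.log10(lo) + math.log10(hi)) / 2      # median ~300 years
sigma = (math.log10(hi) - math.log10(lo)) / 2   # one decade in log10

build_time = NormalDist(mu, sigma)
p = build_time.cdf(math.log10(1000)) - build_time.cdf(math.log10(100))
print(f"P(100 yr <= disassembly time <= 1000 yr) ~ {p:.2f}")   # ~0.38
```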

So it may be worthwhile investing some energy in collecting small useful stuff (asteroids) into larger, denser computational bodies. It may even be worthwhile moving stuff farther from the star, but the specifics really depend on a complex set of unknowns.

You say 'may', but that seems really likely.

Recent advances in metamaterial shielding suggest that low temps could be reached even on Earth without expensive cooling, so the case I made for moving stuff away from the star for cooling is diminished.

Collecting/rearranging asteroids and concentrating rare elements of course still remain viable use cases, but they do not require as much energy, and those energy demands are transient.

After all, what 'complex set of unknowns' will be so fine-tuned that the answer will, for all civilizations, be 0 rather than some astronomically large number?

Physics. It's the same for all civilizations, and so their tech paths are all the same. Our uncertainty over those tech paths does not translate into diversity in the actual tech paths.

You cannot show that this resolves the Fermi paradox unless you make a solid case that cold brains will find harnessing solar systems' energy and matter totally useless!

There is no 'paradox'. Just a large high-D space of possibilities, and observation updates that constrain that space.

I never ever claimed that cold brains will "find harnessing solar systems' energy and matter totally useless", but I think you know that. The key question is what are their best uses for the energy/mass of a system, and what configs maximize those use cases.

I showed that reversible computing implies extremely low energy/mass ratios for optimal compute configs. This suggests that advanced civs in the timeframe 100 to 1,000 years ahead of us will be mass-limited (specifically rare-metal-element limited) rather than energy-limited, and would sooner convert excess energy into mass than the converse.
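
The physics behind the energy side of this claim, as a minimal sketch - the Landauer bound is standard, and the temperatures are illustrative:

```python
# Sketch: the Landauer bound, E = kT ln 2 per irreversibly erased bit,
# illustrating why cold (and reversible) computing is energy-cheap.
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_j_per_bit(temp_k: float) -> float:
    """Minimum dissipation per irreversible bit erasure at temperature T."""
    return k_B * temp_k * math.log(2)

for T in (300.0, 77.0, 3.0):
    print(f"T = {T:>5.0f} K: {landauer_j_per_bit(T):.2e} J/bit")
# The floor at 3 K is 100x below room temperature, and reversible logic can
# avoid most erasures altogether -- so energy stops being the binding limit.
```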

Which gets me back to a major point: endgames. For reasons I outlined earlier, I think the transcend scenarios more likely. They have a higher initial prior, and are far more compatible with our current observations.

In the transcend scenarios, exponential growth just continues up until some point in the near future where exotic space-time manipulations - creating new universes or whatever - are the only remaining options for continued exponential growth. This leads to an exit for the civ, where from the outside perspective it either physically dies, disappears, or transitions to some final inert config. Some of those outcomes would be observable, some not. Mapping out all of those outcomes in detail and updating on our observations would be exhausting - a fun exercise for another day.

The key variable here is the timeframe from our level to the final end-state. That timeframe determines the entire utility/futility tradeoff for exploitation of matter in the system, based on ROI curves.

For example, why didn't we start converting all of the useful matter of Earth into Babbage-style mechanical computers in the 19th century? Why didn't we start converting all of the matter into vacuum tube computers in the 1950s? And so on...

In an exponentially growing civ like ours, you always have limited resources, and investing those resources in replicating your current designs (building more citizens/compute/machines, whatever) always has complex opportunity cost tradeoffs. You are also expending resources advancing your tech - the designs themselves - and as such you never expend all of your resources on replicating current designs, partly because they are constantly being replaced, and partly because of the opportunity costs between advancing tech/knowledge vs expanding physical infrastructure.
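
A toy sketch of that tradeoff, with parameters invented purely for illustration: if compute-per-kilogram doubles every couple of years, converting a fixed mass stock later dominates converting it now.

```python
# Toy opportunity-cost sketch (all parameters invented): converting a fixed
# mass stock with today's designs vs. waiting while compute/kg doubles.
def ops_from_converting(mass_kg: float, wait_years: float,
                        base_ops_per_kg: float = 1e9,
                        doubling_years: float = 2.0) -> float:
    """Compute throughput obtained by converting the mass after waiting."""
    return mass_kg * base_ops_per_kg * 2 ** (wait_years / doubling_years)

mass = 1e12  # kg of raw material, arbitrary
now = ops_from_converting(mass, 0)
later = ops_from_converting(mass, 20)
print(f"now: {now:.1e} ops/s, after 20 yr: {later:.1e} ops/s ({later/now:.0f}x)")
```

As long as designs keep improving exponentially, converting everything into the current design is always dominated by waiting - which is the Babbage-computer point above.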

So civs tend to expand physically at some rate over time. The key question is how long? If transcension typically follows 1,000 years after our current tech level, then you don't get much interstellar colonization bar a few probes, but you possibly get temporary Dyson swarms. If it only takes 100 years, then civs are unlikely to even leave their home planet.

You only get colonization outcomes if transcension takes long enough, leading to colonization of nearby matter, which then all transcends roughly on a timescale set by its distance from the origin. Most of the nearby useful matter appears to be rogue planets, so colonization of stellar systems would take even longer, depending on how far down they are in the value chain.
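
The basic geometry, as a sketch; the probe speed and timescales are assumptions of mine, and this ignores ramp-up time, so real reach is smaller:

```python
# Sketch (toy geometry): an upper bound on colonized radius if transcension
# arrives t years after a civ reaches our tech level.
def max_radius_ly(t_transcend_yr: float, probe_speed_frac_c: float = 0.1) -> float:
    """Light-years reachable before the origin civ transcends."""
    return t_transcend_yr * probe_speed_frac_c

for t in (100, 1_000, 10_000):
    print(f"transcend after {t:>6} yr -> radius <= {max_radius_ly(t):>6,.0f} ly")
```

At 0.1c, a 1,000-year fuse buys a small local bubble of order 100 light-years, not a visibly colonized galaxy.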

And even in the non-transcend models (say the time to transcend is greater than millions of years), you can still get scenarios where the visible stars are not colonized much - if their value is really low compared to abundant, higher-value cold dark matter (rogue planets, etc.), colonization is slow/expensive, and the spread in civ ages is small.

Er, a few species of placental mammal are hardly "widely separated lineages". Trying to draw conclusions for completely alien biologies by looking at convergent evolution inside a clade with a single common ancestor in the last 2-or-3% of the history of life on Earth is absurd. And the fact that the Placentalia start with an unusually high EQ among vertebrates-as-a-whole makes it a particularly unsuitable lineage for estimating the possibilities of independent evolution of high animal intelligence.

Parrots and other birds seem to be about that intelligent, and octopi are close.

Perhaps that's an argument for the difficulty of the chimp to human jump: we have (nearly) ape-level intelligence evolving multiple times, so it can't be that hard, but most lineages plateaued there.

The conditions for the chimp-to-human jump require a series of changes where each brain increase enables better language/tools that pay for the increased costs.

Parrots/birds don't seem to have a feasible path like that - light bodies designed for flight, lack of hands. Cetaceans can easily grow and support large brains, but fire doesn't work under water and most tool potentials are limited. Elephants seem the most likely runner-up if primates weren't around - perhaps in a few tens or hundreds of millions of years there could have been a pachyderm civilization.

So yeah - it might be somewhat rare, but it's hard to say, as it didn't take that long on Earth.

Er, a few species of placental mammal are hardly "widely separated lineages".

Sure they are - given that the placental clade contains most of the extant mammal diversity.

Trying to draw conclusions for completely alien biologies by looking at convergent evolution inside a clade with a single common ancestor in the last 2-or-3% of the history of life on Earth is absurd.

Hardly. Using the "last 2-or-3% of the history of life on Earth" is perhaps disingenuous, as evolution is highly nonlinear. The entire period from the Cambrian explosion to now is what - 15% of the history of life?

More importantly - elephants, cetaceans and primates occupy widely diverse environments and niches.

And the fact that the Placentalia start with an unusually high EQ among vertebrates-as-a-whole

EQ is a rather poor indicator of intelligence compared to total synapse count.

The common Placentalia ancestors are believed to be small rodent-like insectivores with small brains - presumably on the order of 21 million cortical neurons, similar to rats. The fact that brains increased by 2 to 3 orders of magnitude in three divergent branches of Placentalia is evidence to me for robustness in selection for high intelligence.

Now of course, it's fairly easy for evolution to just make a brain bigger. The difficulty is in scaling up the brain in the right way to actually increase intelligence. Rodent brains scale better than lizard brains, and elephant, cetacean, and primate brains scale better still. So evolution found increasingly better scaling strategies over time, and on some occasions in parallel.
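
A quick sanity check on the "2 to 3 orders of magnitude" claim, using approximate published cortical neuron counts; the figures are my selection (roughly Herculano-Houzel-style numbers) and should be treated as ballpark:

```python
# Sanity check: orders of magnitude above an assumed rat-like baseline.
import math

cortical_neurons = {
    "rat (proxy for ancestral placental)": 2.1e7,  # baseline assumed above
    "elephant": 5.6e9,
    "chimpanzee": 6.0e9,
    "human": 1.6e10,
}

baseline = cortical_neurons["rat (proxy for ancestral placental)"]
for species, n in cortical_neurons.items():
    print(f"{species:<38} {n:.1e} (+{math.log10(n / baseline):.1f} orders)")
```

Elephants, chimps, and humans all land 2.5 to 3 orders of magnitude above the assumed baseline, consistent with the claim.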

Sure they are - given that the placental clade contains most of the extant mammal diversity.

The very issue is that "mammal diversity" is vastly insufficient to support any conclusions about general independent evolutionary trends. The number of potential explanations of the advantages of intelligence derived from features of the recent common evolutionary origin completely overwhelms any evidence for general factors.

For one example, if someone were to demonstrate that intelligence is usually useful for a species of animals where the adults, by a quirk of evolution, have to take active care of their young for an extended time — BOOM. A huge quantity of the "independence" is blown up in favor of a single ancestral cause, the existence of nursing of the young in mammals. And the same happens every other time you can show intelligence specifically helps given an ancestrally-derived feature or is promoted by an ancestrally-derived feature in the whole group. The placental mammals are far, far too alike in life cycle, biochemistry, et cetera for parallel evolution within the group to be good evidence of real evolutionary independence of a trait on a scale of completely separate planetary biome evolutions.

The entire period from the Cambrian explosion to now is what - 15% of the history of life?

That's not disingenuity, that's driving home the point. The octopus, separated by that whole stretch of 15%, is a far better case for evolutionary independence of intelligence than puttering around with various branches of the placental mammals — but still not nearly as good as if we had a non-animal example (or even better, a non-eukaryote). Unless and until we have good evidence of the probability of the evolution of animal-analogues, near-ape-level intelligence being (in general) weakly useful for animals (with Cephalopoda, Aves, and Mammalia being the only three classes we know have it or even strongly suspect from the fossil record have ever had it) is hardly strong evidence that near-ape-or-better intelligence is a highly probable feature of life-in-general.

Our universe might be fine-tuned for life because there are a huge number of universes, each with different laws of physics, and only under a tiny set of these laws can sentient life exist, so we shouldn't be surprised to live in one of these fine-tuned universes. Our universe might also be fine-tuned for the Fermi paradox, especially if advanced civilizations often create paperclip maximizers.

Perhaps if you look at the subset of all possible laws of physics under which sentient life can exist, in a tiny subset of these you will get a Fermi paradox because, say, some quirk in the laws of physics makes interstellar travel very hard or creates a trap that destroys all civilizations before they become spacefaring. Civilizations such as ours will constantly arise in these universes.

In contrast, imagine that in universes fine-tuned for life but not for the Fermi paradox, civilizations often create some kind of paperclip maximizer that spreads at the maximum possible speed, making the development of further life impossible. As a result, these universes tend to contain very few observers such as us. Consequently, even though a far higher percentage of universes might be fine-tuned for just life than for both life and the Fermi paradox, most civilizations might exist in the latter.

One of the most likely candidates for the filter (and a variant of our future) is not mentioned here: technological progress may simply end much sooner than usually expected, without any catastrophic events. There is not a filter, but a solid wall on the way from current technology to Dyson sphere and starship building.

I agree this is a possibility - a special subtype of the collapse. It seems unlikely to be a convergent enough, high-probability outcome to explain the Fermi paradox.

I don't mean collapse, but the possibility that the technologies necessary for interstellar flight and megascale engineering are either impossible in themselves or impossible for any civilization to obtain.

Does anybody know if dark matter can be explained as artificial systems based on known matter? It fits the description of a stealth civilization well, if there is no way to nullify gravitational interaction (which seems plausible). It would also explain why there is so much dark matter - most of the universe's mass was already used up by alien civs.

You can't get rid of the waste heat without it being visible. You can't even sequester it - you always need to dump it to a location of lower temperature.
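
To see the constraint quantitatively, a Stefan-Boltzmann sketch; the heat load is an arbitrary number of mine, and the T^-4 scaling is the point:

```python
# Sketch: black-body radiator area needed to dump a fixed heat load at
# temperature T. Area scales as T^-4.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiator_area_m2(power_w: float, temp_k: float) -> float:
    """Radiator area required to emit power_w at temperature temp_k."""
    return power_w / (SIGMA * temp_k ** 4)

load_w = 1e20  # assumed civ-scale waste heat
for T in (300.0, 30.0, 3.0):
    print(f"dump at {T:>5.1f} K: {radiator_area_m2(load_w, T):.1e} m^2")
```

Each 10x drop in dump temperature multiplies the required radiator area by 10^4: you can radiate cool, but only with enormous structures, which is the sense in which the waste heat stays visible.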

I like this quote from Next Big Future: "looking on planets and around stars could be like primitives looking into the best caves and wondering where the advanced people are."

most of the universe's mass was already used up by alien civs.

But then why not all of it? Why leave anything for civs like ours?

Why haven't we turned all of earth into one huge factory/computer/whatever? I discussed some of this in my post.

Mass has some value as raw material, but that does not imply that the mass near stars is the most valuable. On the contrary, the mass near stars is very low value, because it is far too hot, and cooling it requires an investment of energy.

Most of the mass is actually free-floating, and that is the high-value mass anyway, as it is already colder and/or easier to cool.

Furthermore, early biological civilizations also have present scientific value as objects of study, and potential future value as information/knowledge trading partners.

Why haven't we? We are very far from being in a steady state.

Maybe the elder civs aren't either. It might take billions of years to convert an entire light cone into dark computronium. And they're 84.5% of the way done.
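
In case the 84.5% is opaque: it matches dark matter's share of all matter under approximate Planck-era cosmological parameters - a sketch, with my numbers:

```python
# Sketch: dark matter as a fraction of all matter, using approximate
# Planck-era density fractions.
omega_dm = 0.268      # dark matter, fraction of total energy density
omega_baryon = 0.049  # ordinary (baryonic) matter fraction
print(f"dark / (dark + baryonic) = {omega_dm / (omega_dm + omega_baryon):.1%}")
# -> 84.5%
```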

I'm guessing the issue with this is that the proportion of dark matter doesn't change if you look at older or younger astronomical features.