
What evidence do you have about how much time it takes per day to maintain the effect after the end of the 2 weeks?

Answer by PeterMcCluskey, Mar 03, 2024

The "securities with huge variance" part of the strategy is already somewhat widely used. See how much EA charities get from crypto and tech-startup stock donations.

It's unclear whether the perfectly anti-correlated pair improves this kind of strategy. I'd guess you're trying to make the strategy more appealing to risk-averse investors? That seems like it should work, but it's hard in practice, because risk-averse investors don't want to be early adopters of a new strategy.
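
Here's a quick numeric sketch of the zero-variance property that I'd guess is the motivation (the return figures are made up for illustration): each leg of a perfectly anti-correlated pair can have huge variance while an equal-weight holding of both has essentially none.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical returns: asset A is hugely volatile; asset B is constructed
# to be perfectly anti-correlated with A (their returns sum to a constant).
a = rng.normal(loc=0.05, scale=2.0, size=n)
b = 0.10 - a

portfolio = 0.5 * a + 0.5 * b  # hold both legs in equal weight

print(np.corrcoef(a, b)[0, 1])  # -1.0: perfectly anti-correlated
print(a.std(), b.std())         # each leg alone is hugely volatile
print(portfolio.std())          # ~0.0: the combined position is riskless
```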

Doesn't this depend on what we value?

In particular, you appear to assume that we care about events outside of our lightcone in roughly the way we care about events in our near future. I'm guessing that a good deal of the skepticism about ECL (evidential cooperation in large worlds) results from people not caring much about distant events.

I had nitrous oxide once at a dentist's. It's a dissociative anesthetic, and it may have caused something like selective amnesia: I remember that the dentist was drilling, but I have no clear memory of pain associated with that. It's a bit hard to evaluate exactly what it does, but it definitely has some benefits. Maybe the pain seemed too distant from me to be worth my attention?

Answer by PeterMcCluskey, Jan 21, 2024

A much higher fraction of the benefits of prediction markets are public goods than is the case for insurance.

Most forms of insurance took a good deal of time and effort before they were widely accepted. It's unclear whether prediction markets are being adopted at a dramatically slower rate than insurance was.

I'm reaffirming my relatively extensive review of this post.

The simbox idea seems like a valuable guide for safely testing AIs, even if the rest of the post turns out to be wrong.

Here's my too-terse summary of the post's most important (and most controversial) proposal: have the AI grow up in an artificial society, learning self-empowerment and learning to model other agents. Then use something like retargeting the search to convert the AI's goals from self-empowerment to empowering other agents.

I'm reaffirming my relatively long review of Drexler's full QNR paper.

Drexler's QNR proposal seems like it would, if implemented, guide AI toward more comprehensible systems. It might modestly speed up advances in capabilities, but it would do somewhat more to make alignment easier.

Alas, the full paper is long, and not an easy read. I don't think I've managed to summarize its strengths well enough to persuade many people to read it.

This post didn't feel particularly important when I first read it.

Yet I notice that I've been acting on the post's advice since reading it, e.g. by being more optimistic about drug companies that measure a wide variety of biomarkers.

I wasn't consciously acting that way because of any update from the post. I'm unsure to what extent the post changed me via subconscious influence, versus my having derived the ideas independently.

Answer by PeterMcCluskey, Jan 11, 2024

Exchanges require more capital to move the price closer to the extremes than to move it closer to 50%.
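
Here's a minimal sketch of this effect, using Hanson's logarithmic market scoring rule (LMSR) as an illustrative pricing model (an assumption on my part; the point above is about exchanges generally): the capital needed to move the price by one percentage point grows steeply as the price approaches an extreme.

```python
import math

def lmsr_move_cost(p_from: float, p_to: float, b: float = 100.0) -> float:
    """Capital needed to push a binary LMSR market's price upward
    from p_from to p_to, given liquidity parameter b.

    For a two-outcome LMSR with cost function C(q) = b*ln(sum_i e^(q_i/b)),
    the cost of the trade works out to b*ln((1 - p_from) / (1 - p_to)).
    """
    return b * math.log((1 - p_from) / (1 - p_to))

# One percentage point near 50% vs. one percentage point near the extreme:
print(round(lmsr_move_cost(0.50, 0.51), 2))  # 2.02
print(round(lmsr_move_cost(0.98, 0.99), 2))  # 69.31 -- roughly 34x as much
```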

This post is one of the best available explanations of what has been wrong with the approach used by Eliezer and people associated with him.

I had a pretty favorable recollection of the post from when I first read it. Rereading it convinced me that I still managed to underestimate it.

In my first pass at reviewing posts from 2022, I had some trouble deciding which post best explained shard theory. Now that I've reread this post during my second pass, I've decided this is the most important shard theory post. Not because it explains shard theory best, but because it explains what important implications shard theory has for alignment research.

I keep being tempted to think that the first human-level AGIs will be utility maximizers. This post reminds me that maximization is perilous. So we ought to wait until we've brought greater-than-human wisdom to bear on deciding what to maximize before attempting to implement an entity that maximizes a utility function.
