Earlier today I was reading this post about the rationalist community's limited success betting on bitcoin and thinking about how the singularity is going to be the ultimate test of the rationalist community's ability to translate their unusual perspectives into wealth and influence.

There needs to be some default community advice here for people who believe that we're likely to create AGI in our lifetimes but don't know how to prepare for it. I think it would be an absolute shame if we missed opportunities to invest in the singularity the same way we missed opportunities to invest in Bitcoin (even though this community was clued in to crypto from a very early stage). I don't want to read some retrospective about how only 9% of readers made $1000 or more from the most important event in human history even though we were clued in to the promise and peril of AGI decades before the rest of the world.

John_Maxwell made a post along the same lines last year, but I'd like to expand on what he wrote.

Why is this important?

In addition to the obvious benefit of everyone getting rich, I think there are several other reasons why coming up with a standard set of community advice is important.

Betting on the eventual takeover of the entire world economy by AI is not yet a fashionable bet. But like Bitcoin, betting on AGI will inevitably become very fashionable over the next few decades, as early adopters buy in first and it then becomes a standard part of the financial advice given out by investment professionals.

In these early days, I think there is an opportunity for us to set the standard for how this type of investment is done. This should include not just a clear idea of how to invest in AGI's creation (via certain companies, ETFs, AI-focused SPACs, etc.), but also what should NOT be done.

For example, the community advice should probably advise against investing in companies without a strong AI alignment team, as capitalizing such companies will increase the likelihood that AI will destroy you and everything you love. We may also want to discourage investment in companies whose plans don't include a clause on how they intend to deal with race dynamics that could compromise safety. There are probably other considerations that I am not thinking of. OpenAI's charter seems like a pretty well thought-out set of guidelines for AGI creation. This site has a very healthy community of AI safety researchers whose advice on this topic I would very much appreciate.

Whatever the advice is, I think it's important that it subsidizes good behavior without diminishing expected returns too much. If we advise against investing in the organization that looks most likely to create AGI because they don't meet some safety standard, we run the risk of people ignoring the advice.

There is some small chance that if these guidelines are well thought out, they could eventually be adopted by investment companies or even governments. BlackRock, an investment management corporation with $8.67 trillion under management, has begun to divest from fossil fuels in the interest of attracting money from organizations concerned about climate change. If the public comes to see unaligned AI as a threat at some point in the future, existing guidelines already adopted by other investors or financial institutions could become an easy thing for investment managers to adopt so they can say they are "being proactive" about the risk from AI.

What would this advice look like?

Let's reflect on some of the lessons learned from the crypto craze. Here I will quote from several posts I've read.

A hindsight solution for rationalists to have reduced the setup costs of buying bitcoin would have been either to have had a Rationalist mining pool or arrange to have a few people buy in bulk and serve as points of distribution within the community.

This suggests that if a future opportunity appears to be worth the risk of investment, but has some barrier to entry that is individually costly but collectively trivial, we ought to work first to eliminate that barrier to entry, and then allow the community to evaluate more dispassionately on risk and return alone.

  • clarkey

I think lowering the barrier to doing something is a great idea but it's hard to know exactly what that would look like. Could we create our own ETF? Would it be best to create a list of stocks of companies that are both likely to create AGI and have good incentive structures set up to make proper alignment more likely? I think ideally there would be tiers of actions people could take depending on how much effort they wanted to expend, where the lowest tier with the least action would be "Set up a TD Ameritrade account and buy this ETF" or something and the most complicated would be "here is a summary for each company widely regarded by members of the AI alignment forum to have good alignment plans and here's a link to some resources to learn about them."

Is there really an opportunity here? Why would we expect to beat the market in this situation?

The answer to this is more complicated, and I'm sure other people have better answers than I do. But I'll give it a shot.

I realize that saying this sounds very unacademic, but the creation of AGI will be the most important moment in the history of life so far. If AGI does not destroy us or torture us or pump our brains full of dopamine for eternity, it will have transformative effects on the economy the likes of which we have never seen. It's plausible that worldwide GDP growth could accelerate by 10x or more. A well-aligned AI is a wish-granting machine whose only limitations are the laws of physics, which we don't yet fully understand.

Think about how nuts this sounds to the average hedge fund manager. They have no point of reference for AGI. It pattern-matches to happy magical fairy tales or moral fables from children's storybooks. It doesn't sound real. And I would bet the prospect of ridicule has prevented the few who actually buy the idea from bringing it up with investors. If you listen to interviews with top people from JP Morgan or Goldman Sachs, they use the same language to refer to AI as they use to refer to everything else in their investment portfolio. There's nothing to signal that this is fundamentally different from biotech or clean energy or SaaS products.

With such communication and conceptual barriers, why would we expect assets to be priced properly?

I'd welcome feedback here. Maybe I'm missing something, or maybe I've been listening to the wrong subset of the investment community. But my overwhelming impression is that almost no one on Wall Street or anywhere else truly buys into the vision of AGI as the last invention humans will ever make.

Here's my current strategy and why I find it unsatisfying

Earlier this year I got sick of not betting on my actual beliefs, and put about $10k into Google, Microsoft and Facebook in proportion to the number of publications they each made in NeurIPS and ICML over the last two years, treating publication count as a proxy for the likelihood that each company would create the first AGI. I would have put in more but I don't have much more.
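In case it helps anyone replicate or critique this, here's a minimal sketch of the allocation arithmetic. The publication counts below are made-up placeholders, not the actual NeurIPS/ICML totals I used:

```python
# Sketch of a publication-count-weighted allocation.
# The counts are hypothetical placeholders, NOT real NeurIPS/ICML totals.
pub_counts = {"GOOGL": 180, "MSFT": 95, "FB": 70}   # hypothetical paper counts
budget = 10_000                                      # dollars to allocate

total_pubs = sum(pub_counts.values())
allocation = {ticker: budget * count / total_pubs
              for ticker, count in pub_counts.items()}

for ticker, dollars in allocation.items():
    print(f"{ticker}: ${dollars:,.0f}")
```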

Though I think this is better than nothing, I can't help but think there must be a better, more targeted way to bet on AGI specifically. For example, I don't really care that much about Google's search business, but I am forced to buy it when I buy Google stock.

This strategy also neglects all small companies. I think there is a low enough level of hardware overhang right now that it is overwhelmingly likely AGI will be created in one or more big research labs. But perhaps the final critical piece of the puzzle will come from some startup that gets acquired by Microsoft AI labs and owning a piece of that startup will result in dramatically higher returns than buying the parent company directly.

Unfortunately, accredited investor laws literally make it illegal to invest in early-stage startups unless you're already rich. So all the rapid growth from early-stage startups is forever out of reach for people who aren't already rich. (By the way, these laws are one of the reasons private equity has averaged about double the returns of the S&P 500 over the last 30 years: rich people have a monopoly on startup equity.) SPACs are kind of a backdoor into early-stage startups for people without a lot of money, but companies have to agree to merge with a SPAC, so your options are still somewhat limited. However, I think the SPAC strategy is worth looking into.

You could always buy an AI ETF. I'll be honest and say I haven't really looked into them much, but I would appreciate feedback from anyone who has.

Anyways, those are my thoughts on this subject. Let me know what you think.

52 comments

One approach that feels a bit more direct is investing in semiconductor stocks. If we expect AGI to be a big deal and massively economically relevant, it seems likely that this will involve vast amounts of compute, and thus require a lot of computer chips. I believe ASML (Netherlands-based, which makes the lithography machines) and TSMC (Taiwan-based, the largest chip foundry) are two of the most important publicly traded companies in the semiconductor supply chain, though I'm unsure which countries let you easily invest in them.

Problems with this:

  • A bunch of their current business comes from crypto-mining, so this also has some crypto exposure. The stocks have done well over the last few years, and I believe this is mostly from the crypto boom rather than the AI boom
  • TSMC is based in Taiwan, and thus is exposed to Taiwan-China problems
  • This assumes AGI will require a lot of compute (which I personally believe, but YMMV)
  • It's unclear how much of the value of AGI will be captured by semiconductor manufacturers
A bunch of their current business comes from crypto-mining, so this also has some crypto exposure. The stocks have done well over the last few years, and I believe this is mostly from the crypto boom rather than the AI boom

According to Nic Carter, this isn't true:

Ultimately, Bitcoin miners represent a small fraction of TSMC revenue — around 1% according to Bernstein. The notion of a marginal, Tier II industry being responsible for chip shortages is fanciful. The more immediate cause is the supply inelasticity of foundry space (due to gargantuan fixed costs) and the massive surge of demand for electronics due to a global lockdown and new technologies coming online.

Source

But the global chip shortage means semiconductor foundries like Taiwan Semiconductor Manufacturing Co. are already scrambling to fill other orders. They are also cautious about adding new capacity given how finicky crypto demand has proven to be. Bernstein estimates that crypto will only contribute about 1% of TSMC's revenue this year, versus around 10% in the first half of 2018 during the last crypto boom.

Looking at the WSJ source, it looks like it's actually arguing that Bitcoin mining wasn't a big cause of the global chip shortage, and that the 1% figure was a low point; it had previously been around 10%.

Still less than I'd expected, but 10% seems plausibly enough to significantly boost profits?

A bunch of their current business comes from crypto-mining, so this also has some crypto exposure. The stocks have done well over the last few years, and I believe this is mostly from the crypto boom rather than the AI boom

This is particularly problematic given the scenario of a switch away from proof of work to proof of stake, which might happen in 1-2 years and tank crypto-mining completely.

which might happen in 1-2 years and tank crypto-mining completely.

Good point. But that would be a much better time to buy in for long-term value. 

One approach that feels a bit more direct is investing in semiconductor stocks.

I agree with this and the above points. 

One way to potentially overcome the issues with TSMC might be to supplement the investment by buying into commodities like silicon and coltan. This is still not guaranteed to capture most of the value, but might be a method of diversification. But there are many ethical considerations (particularly with coltan). 

We have so much uncertainty about pathways that I'm skeptical there is really any benefit here. If we knew enough to write such a guide, that would be great, but for reasons having nothing to do with our financial preparedness.

This seems like a very surprising claim to me. You can make money on stocks by knowing things above pure chance. Do you really think that for all stocks?

Your question ignores timeframes. I'm happy to argue that P(stock rises in the next 5 years | AGI in 20 years) ≈ P(stock rises in the next 5 years) for all stocks.
 

I'm a professional equity investor, and trust me, the market isn't that forward-looking. Unless you believe in AGI within the next 10 years, I suggest ignoring it when it comes to picking investments. Because for the intermediate timeframe until the market begins to take the concept seriously, the value of your investments will be determined by all the other factors which you're ignoring in favour of focusing on AGI, so unless you want your investment results to be meh for years-to-decades, then don't go for some all-out bet on AI.

Because for the intermediate timeframe until the market begins to take the concept seriously, the value of your investments will be determined by all the other factors which you're ignoring in favour of focusing on AGI, so unless you want your investment results to be meh for years-to-decades, then don't go for some all-out bet on AI.

For most people here, the choice is not between choosing AI stocks based on fundamental value, and choosing another set of stocks based on fundamental value. Rather, it's between choosing AI stocks based on their fundamental value, and choosing an index fund. If we assume that whatever short term variables affect AI stocks are just as unpredictable as those that affect index funds, the only real risk to investing in AI stocks is that you might not be diversified enough.

In other words, the main argument against investing in AI stocks (conditional on AGI being a real force later this century), is that you don't want to expose yourself to that much risk. But plenty of people are OK with that level of risk, so I don't see a problem with it.

The question is not whether you can build a portfolio where the expected gain conditional on AGI is positive; it's whether you can get enough of an advantage that it outweighs the costs of doing so and, in expectation, outperforms the obvious alternative strategy of index funds. If you're purely risk-neutral, this is somewhat easier. Otherwise, the portfolio benefits of reducing the probability of losses are hard to beat.

You may also have cases where P(stock rises | AGI by date X) >> P(stock rises), but P(stock falls | ~AGI by date X) is high enough that the bet isn't worthwhile.
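To make the shape of that trade-off concrete, here's a toy calculation; every probability and return below is invented purely for illustration:

```python
# Toy expected-return comparison; all numbers are invented for illustration.
p_agi = 0.2            # hypothetical P(AGI by date X)
ret_if_agi = 5.0       # hypothetical return multiple on the "AI stock" given AGI
ret_if_no_agi = 0.6    # hypothetical multiple if AGI doesn't arrive by date X
ret_index = 1.5        # hypothetical index-fund multiple over the same period

ev_ai_stock = p_agi * ret_if_agi + (1 - p_agi) * ret_if_no_agi
print(f"AI stock expected multiple:   {ev_ai_stock:.2f}")   # 1.48
print(f"Index fund expected multiple: {ret_index:.2f}")     # 1.50

# The conditional upside is huge (5x if AGI arrives), but the losses in the
# no-AGI worlds drag the expected multiple just below the boring index fund.
```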

I would add that money is probably much less valuable after AGI than before, indeed practically worthless. But it's still potentially a good idea to financially prepare for AGI, because plausibly the money would arrive before AGI does and thereby allow us to e.g. make large donations to last-ditch AI safety efforts.

If you think of it less like "possibly having a lot of money post-AGI" and more like "possibly owning a share of whatever the AGIs produce post-AGI", then I can imagine scenarios where that's very good and important. It wouldn't matter in the worst scenarios or best scenarios, but it might matter in some in-between scenarios, I guess. Hard to say though ...

This is a good point, but even taking it into account I think my overall claim still stands. The scenarios where it's very important to own a larger share of the AGI-produced pie [ETA: via the mechanism of pre-existing stock ownership] are pretty unlikely IMO compared to e.g. scenarios where we all die or where all humans are given equal consideration regardless of how much stock they own. And (a separate point) our money will probably be better spent prior to AGI, trying to improve the probability of AI going well, than held until after AGI to do stuff with the spoils.

I would add that money is probably much less valuable after AGI than before, indeed practically worthless.

Depending on your system of ethics, there may not be large diminishing returns to real wealth in the future. Of course, personally, if you're a billionaire, then $10,000 doesn't make much of a difference to you, whereas to someone who owns nothing, it could be life-saving.

But in inflation-adjusted terms, dollars represent the amount of resources that you control and the stuff that you can produce. For instance, if you care about maximizing happiness, and your utility function is linear in the number of happy beings, then each dollar you have goes just as far whether you are a billionaire or a trillionaire. It also makes sense from the perspective of average utilitarianism: from that perspective, what matters most is plausibly what fraction of beings you can cause to be happy, which implies that the fraction of global wealth you control matters immensely.

This would only be true if there were an indefinitely scalable way to convert dollars into happiness/suffering-reduction, and moreover there weren't any other, less scalable ways to convert dollars into happiness/suffering-reduction that are more cost-effective. This condition clearly does not obtain. Instead, the position I find myself in as a longtermist is one where I'm trying to maximize the probability of an OK future, and this is not the sort of thing that the billionth dollar is just as useful for as the thousandth. The low-hanging-fruit effect is real. (But I'm pretty sure there are diminishing returns for everyone, not just longtermists. The closest-to-counterexample I can think of is a shorttermist who is prioritizing, say, carbon offsets or GiveDirectly. Those things seem pretty scalable, up to billions of dollars at least, though not necessarily trillions.)

This would only be true if there were an indefinitely scalable way to convert dollars into happiness/suffering-reduction

I don't agree, but I think my argument assumed a different foundation than what you have in mind. Let me try to explain.

Assume for a moment that at some point we will exist in a world where the probability of existential risk changes negligibly as a result of marginal dollars thrown at attempts to mitigate it. This type of world seems plausible post-AGI, since if we already have superintelligences running around, so to speak, then it seems reasonably likely that we will have already done all we can do about existential risk.

The type of altruism most useful in such a world would probably be something like producing happy beings (if you're a classical utilitarian, but we can discuss other ethical frameworks too). In that case, the number of happy beings you can produce would scale nearly linearly with the number of dollars you own. Why? Because your total wealth will likely be tiny compared to global wealth, and so you aren't likely to hit large diminishing returns even if you spend all of your wealth towards that pursuit.

Quick intuition pump: suppose you were a paperclip maximizer in the actual world (this one, in 2021) but you weren't superintelligent, weren't super-rich, and there was no way you could alter existential risk. What would be the best action to take? Well, one obvious answer would be to use all of your wealth to buy paperclips (and by "all of your wealth" I mean everything you own, including the expected value of your future labor). Since your wealth is tiny compared to the overall paperclip market, your actions aren't likely to increase the price of paperclips by much, and thus the number of paperclips you cause to exist will be nearly linear in the number of dollars you own.
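A toy model of that near-linearity claim, with numbers invented purely for illustration:

```python
# Toy price-impact model: a tiny buyer gets nearly linear returns to spending.
# All numbers are invented; only the shape of the curve matters.
def clips_bought(budget, base_price=0.01, market_size=1_000_000_000):
    """Paperclips purchasable if the average price you pay rises in
    proportion to your share of total market demand (crude assumption)."""
    avg_price = base_price * (1 + budget / market_size)
    return budget / avg_price

for budget in (1_000, 1_000_000, 10_000_000):
    clips = clips_bought(budget)
    print(f"${budget:>11,} -> {clips:,.0f} clips ({clips / budget:.1f} per dollar)")

# Clips-per-dollar barely moves until the budget becomes a noticeable
# fraction of the whole market, i.e. output is close to linear in dollars.
```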

ETA: After thinking more, perhaps your objection is that in the future, we will be super-rich, so this analogy does not apply. But I think the main claims remain valid insofar as your total wealth is tiny compared to global wealth. I am not assuming that you are poor in some absolute sense, only that you literally don't control more than say, 0.01% of all wealth.

ETA2: I also noticed that you were probably just arguing that money spent pre-AGI goes further than money spent post-AGI. Seems plausible, so I might have just missed the point. I was just arguing the claim that inflation-adjusted dollars shouldn't have strongly diminishing marginal utility in the future under altruistic ethical systems.

Yeah, I think you misunderstood me. I'm saying that we should aim to spend our money prior to AGI, because it goes a lot farther prior to AGI (e.g. we can use it to reduce x-risk) compared to after AGI where either we are all dead, or maybe we live in a transhumanist utopia where money isn't relevant, or maybe we can buy things with money, but we still can't buy x-risk reduction since x-risk has already been reduced a lot so the altruism we can do is much less good than the altruism we can do now.

So, "financially preparing for AGI" to me (and to pretty much any effective altruist, I claim) means "trying to make lots of money in the run-up to AI takeoff to be spent just prior to AI takeoff" and not "trying to make lots of money from AGI, so as to spend it after AI takeoff."

money is probably much less valuable after AGI than before, indeed practically worthless.

I think this overstates the case against money. Humans will always value services provided by other humans, and these will still be scarce after AGI. Services provided by humans will grow in value (as measured by utility to humans) if AGI makes everything else cheap.  It seems plausible that money (in some form) will still be the human-to-human medium of exchange, so it will still have value after AGI.

It does not make the case against money at all; it just states the conclusion. If you want to hear the case against money, well, I guess I can write a post about it sometime. So far I haven't really argued at all, just stated things. I've been surprised by how many people disagree (I thought it was obvious).

To the specific argument you make: Yeah, sure, that's one factor. Ultimately a minor one in my opinion, doesn't change the overall conclusion.

Most rationalists are heavily invested in AGI in non-monetary ways — career paths, free time, hopes for longevity/coordination breakthroughs. As other commenters have pointed out, if humanity achieves aligned AGI in the future, financial returns will plausibly be far less important. Given that, maybe the best investments are bets against AGI, as a hedge for the case where humanity doesn't achieve it.

There are 3 futures: If we achieve aligned AGI, we win the game and nothing else matters*. If we achieve misaligned AGI, we die and nothing else matters. If we fail to achieve AGI at all, then we've wasted a lot of our time, careers, and hopes. In that case, we want investments to fall back on. 

In that 3rd future, what commodities and equities are most successful? Can we buy those now?

*subject to accepting the singularity-like premise.

Some people think that there will be something reasonably described as a singularity, but that money (or more generally property rights) will still plausibly matter.

Even if you don't think that, we might be able to predict that the market will ignore any *other* possibility (such as a singularity that obsoletes money). So we can predict that the market will predict that certain stocks will be valuable. So we can get money just ahead of the singularity, and then maybe use it to avert bad outcomes. Or maybe not, but that's one way it might make sense to bet on AGI even if you're confident it will obsolete money.

I think Vicarious AI is doing more AGI-relevant work than anyone. I pore over all their papers. They're private so this doesn't directly answer your question. But what bugs me is: Their investors include Good Ventures & Elon Musk ... So how do they get away with (AFAICT) doing no safety work whatsoever ...?

I think Vicarious AI is doing more AGI-relevant work than anyone

Interesting, can you say more about this/point me to any good resources on their work? I never hear about Vicarious in AI discussions

I know from some interviews I've watched that Musk's main reason for investing in AI startups is to have inside info about their progress so he can monitor what's going on. Perhaps he's just not really paying that much attention? He always has like 15 balls in the air, so perhaps he just doesn't realize how bad Vicarious's safety work is.

Come to think of it, if you or anyone you know have contact with Musk, this might be worth mentioning to him. He clearly cares about AI going well and has been willing to invest resources in increasing these odds in the past via OpenAI and then Neuralink. So perhaps he just doesn't know that Vicarious AI is being reckless when it comes to safety.

He clearly cares about AI going well and has been willing to invest resources in increasing these odds in the past via OpenAI and then Neuralink.

Both of these examples betray an extremely naive understanding of AI risk.

  • OpenAI was intended to address AI-xrisk by making the superintelligence open source. This is, IMO, not a credible way to avoid someone - probably someone in a hurry - getting a decisive strategic advantage.
  • Neuralink... I just don't see any scenario where humans have much to contribute to superintelligence, or where "merging" is even a coherent idea, etc. I'm also unenthusiastic on technical grounds.
  • SpaceX. Moving to another planet does not save you from misaligned superintelligence. (being told this is, I hear, what led Musk to his involvement in OpenAI)

So I'd attribute it to some combination of too many competing priorities, and simply misunderstanding the problem.


Moving to another planet does not save you from misaligned superintelligence.

Not only that, there are hardly any other existential risks to be avoided by Mars colonization, either.

Neuralink... I just don't see any scenario where humans have much to contribute to superintelligence, or where "merging" is even a coherent idea

The only way I can see Musk's position making sense is if it's actually a 4D chess move to crack the brain algorithm and use it to beat everyone else to AGI, and not the reasoning he usually gives in public for why Neuralink is relevant to AGI. Needless to say, I am very skeptical of this hypothesis.

Not only that, there are hardly any other existential risks to be avoided by Mars colonization, either.

Let's use Toby Ord's categorisation - and ignore natural risks, since the background rate is low. Assuming a self-sustaining civilisation on Mars which could eventually resettle Earth after a disaster:

  • nuclear war - avoids accidental/fast escalation; unlikely to help in deliberate war
  • extreme climate change or environmental damage - avoids this risk entirely
  • engineered pandemics - strong mitigation
  • unaligned artificial intelligence - lol nope.
  • dystopian scenarios - unlikely to help

So Mars colonisation handles about half of these risks, and maybe 1/4 of the total magnitude of risks. It's a very expensive mitigation, but IMO still clearly worth doing even solely on X-risk grounds.


I strongly believe that nuclear war and climate change are not existential risks, by a large margin.

For engineered pandemics, I don't see why Mars would be more helpful than any other isolated pockets on Earth - do you expect there to be less exchange of people and goods between Earth and Mars than, say, North Sentinel Island?

Curiously enough, the last scenario you pointed out - dystopias - might just become my new top candidate for x-risks that Mars colonization could actually help with. Need to think more about it though.

It does take substantially longer to get to Mars than to get to any isolated pockets on Earth. So unless the pandemic's incubation period is longer than the journey to Mars, it's likely that Martians would know that passengers aboard the ship were infected before it arrived.


The absolute travel time matters less for disease spread in this case. It doesn't matter how long it would theoretically take to travel to North Sentinel Island if nobody is actually going there for years on end. Disease won't spread to those places naturally.

And if an organization is so hell-bent on destroying humanity as to track down every last isolated pocket of human settlements on Earth (a difficult task in itself as they're obscure almost by definition) and plant the virus there, they'll most certainly have no trouble bringing it to Mars either.

I had always assumed that any organization trying to destroy the world with an engineered pathogen would basically release whatever they made and then hope it did its work.

IDK, this topic gets into a lot of information hazard, where I don't really want to speculate because I don't want to spread ideas for how to make the world a lot worse.

I'd worry that if we're looking at a potentially civilization-ending pandemic, a would-be warlord with a handful of followers might decide that North Sentinel Island seems like kind of an attractive place to go all of a sudden.


Moving to another planet does not save you from misaligned superintelligence.

 

Depends how super. FOOM to godhood isn't the only possible path for AI.

AI doesn't need godhood to affect another planet. Simply scaling up architectures that are equal in intelligence to the smartest humans to work a billion times in parallel is enough.

We should have a similar conversation [Question post?] for anticipating the consequences of transformative biotech.

Which biotech in particular?

As far as genetic engineering goes, I was thinking about writing up a post on that myself, to the effect of "why you should [or should not] consider having your kids via IVF."

But I haven't done much research on transformative biohazards like engineered pandemics and am wary of writing such a post.

I was listening to Buterin on the Tim Ferriss podcast this morning; he made an offhand comment that biotech is at a similar point to where computers were in the 50s. That left the idea salient when I read this post, but from conversation and general reading I also have a sense that there's a good chance a lot of progress is about to be unlocked in the field, due to machine learning, much cheaper / higher-throughput genetic sequencing and DNA/RNA/protein synthesis, and much better DNA editing techniques thanks to CRISPR.

IBM managed to extract some value from the computer revolution, but the real gains were made by companies that were founded later.

Big Pharma companies at the moment seem more dysfunctional than IBM was back then (there's a reason why DeepMind does protein folding prediction and Pfizer doesn't), so I would expect a good portion of the profits to be made by new biotech companies.

As far as biotech and Buterin go, VitaDAO is currently in formation. A longevity biotech DAO on the blockchain hits a lot of hip keywords.


I would love to hear some longevity-related biotech investment advice from rationalists; I (and presumably many others here) predict longevity to be the second-biggest deal in big-picture futurism.

The only investment ideas I can come up with myself are for-profit spin-off companies from SENS Research Foundation, but that's just the obvious option for someone without expertise in the field who trusts the most vocal expert.

Although some growth potential has already been lost due to the pandemic bringing a lot of attention towards this field, I think we're still early enough to capture some of the returns.

You mean consequences not limited to possible financial gain?

My impression from skimming a few AI ETFs is that they are more or less just generic technology ETFs with different branding and a few random stocks thrown in. So they're not catastrophically worse than the baseline "Google, Microsoft and Facebook" strategy you outlined, but I don't think they're better in any real way either.

Is AGI even something that should be invested in on the free market? The nature of most financial investments is that individuals expect a return on their investment. I may be wrong, but I can't really envision a friendly AGI being created with the purpose of creating financial value for its investors. I mean, sure, technically if friendly AGI is created the investors will almost certainly benefit regardless because the world will become a better place, but this could only be considered an investment in a rather loose sense. Investing in AGI won't provide any significant returns until AGI is created, and at that point it is likely that stock ownership will not matter. 

It's also possible that investing not in "what's most likely to make AGI" but "what's most likely to make me money before AGI based on market speculation" is throwing fuel on the ungrounded-speculation bonfire. Which attracts sociopaths rather than geeks. Which cripples real AGI efforts. Which is good. (Not endorsing this, just making a hypothesis.)

rationalist community's limited success betting on bitcoin

Wait, what?  The sum of the net worth of those who consider themselves members of the rationalist community is MUCH greater due to crypto than it was before.  What definition of "success" are you using which so devalues that outcome?

There needs to be some default community advice here for people who believe that we're likely to create AGI in our lifetimes but don't know how to prepare for it.

Do you really want default advice?  I'd rather have correct advice, and I'd rather still have correct personal behavior, regardless of advice.  "Correct", in this case, means "best possible experienced outcome", not "sounds good" or "best prediction at this point but still wrong".

Earlier this year I got sick of not betting on my actual beliefs, and put about $10k into Google, Microsoft and Facebook in proportion to the number of publications they each made in NeurIPS and ICML over the last two years, treating publication count as a proxy for the likelihood that each company would create the first AGI.

I think this summarizes your confusion pretty well.  Stock picks aren't bets about any particular outcome.  You're not making conditional predictions about what will actually happen.  You're claiming to predict who creates the first AGI, but not trying to figure out what happens when it does.  Why would the stock go up, as opposed to the employees in control just absconding with (or being absorbed into) the AGI and the stock becoming irrelevant?  Or someone else learning from the success and turning it into an actual financial boon.  Or any of a billion other sequences that would make it a dumb idea to pick a stock based on number of papers published in a narrow topic that may or may not correlate with AGI creation.

IMO, actual best advice given what we know now is to invest in fairly broad index funds, and look for opportunities where your personal expertise can be leveraged to identify opportunities (financial and otherwise) that are much better than average.

Wait, what? The sum of the net worth of those who consider themselves members of the rationalist community is MUCH greater due to crypto than it was before. What definition of "success" are you using which so devalues that outcome?

I'm mostly referring to the narrative from this post. There have been some successes, but those have mostly been due to a very small number of huge winners. And in the case of the biggest winner of all, Vitalik Buterin, he actually ended up joining the rationalist community AFTER he started Ethereum.

Do you really want default advice? I'd rather have correct advice, and I'd rather still have correct personal behavior, regardless of advice. "Correct", in this case, means "best possible experienced outcome", not "sounds good" or "best prediction at this point but still wrong".

I probably wasn't as clear as I could have been in the original post. What I mean by "default advice" is a set of actions people can take if they believe there is a decent chance AGI will be created in their lifetimes and want to prepare for it but are not willing to spend all the time to develop a detailed personal plan.

For example, if you believe the efficient market hypothesis, you can act on that belief by buying low-cost index funds. I'm thinking it would be useful to have a similar easy option for people who buy that we will likely see AGI in our lifetimes.

Why would the stock go up, as opposed to the employees in control just absconding with (or being absorbed into) the AGI and the stock becoming irrelevant? Or someone else learning from the success and turning it into an actual financial boon. Or any of a billion other sequences that would make it a dumb idea to pick a stock based on number of papers published in a narrow topic that may or may not correlate with AGI creation.

True, and this is why I said I am not particularly satisfied with my current strategy. I still think in the scenario where AGI has been created or is close to being created, Google's stock price is likely to go up more than an index fund of all stocks on the market.

set of actions people can take if they believe there is a decent chance AGI will be created in their lifetimes and want to prepare for it but are not willing to spend all the time to develop a detailed personal plan.

I don't think that's how markets and finance work. The actions you can take if you're not willing/able to get into the details of a personal plan are pretty much "follow the crowd". Perhaps you can pick a crowd to follow, in the form of slightly-less-general indexes.

For example, if you believe the efficient market hypothesis, you can act on that belief by buying low-cost index funds.

No, no, no. Regardless of what you believe, if the EMH is true, you can't do better than buying low-cost index funds. It's only in the case where you have TRUER beliefs than the market aggregate, and where you can predict the market shift that will happen when your belief becomes common, that you can invest better than the average. Without quite a bit of analysis and research, I don't think you can predict whether Google shareholders benefit or lose from AGI development any better than the market.

There's some major challenges here.

The first is trying to predict what will be a reliable store of value in a world where TAI may disrupt normal power dynamics. For example, if there's a superintelligent AI capable of unilaterally transforming all matter in your light cone into paperclips, is there any sense in which you have enough power to enforce your ownership of anything independent of such an AI? Seems like not, in which case it's very hard to know what assets you can meaningfully own that would be worth owning, let alone by what mechanisms you can meaningfully own things in such a world.

Now we might screen off bad outcomes since they don't matter to this question, but then we're still left with a lot of uncertainty. Maybe it just doesn't matter because we'll be expanding so rapidly that there's little value in existing assets (they'll be quickly dwarfed via expansion). Maybe we'll impose fairness rules that make held assets irrelevant for most things that matter to you. Maybe something else. There's a lot of uncertainty here that makes it hard to be very specific about anything beyond the run up to TAI.

We can, however, I think give some reasonable advice about the run up to TAI and what's likely to be best to have invested in just prior to TAI. Much of the advice about semiconductor equities, for example, seems to fall in this camp.

For example, if there's a superintelligent AI capable of unilaterally transforming all matter in your light cone into paperclips, is there any sense in which you have enough power to enforce your ownership of anything independent of such an AI?

No, which is why I "invest" in making bad outcomes a tiny bit less likely with monthly donations to the EA long-term future fund, which funds AI safety research and other X-risk mitigation work.

This bet would've paid major dividends in hindsight. Is there a way to bet on OpenAI, Anthropic, or other AI-safety-focused labs, both to give them more access to capital and to make a profit? Nvidia stock has already ballooned quite a bit, and seems to be mostly dual use. Also, I'm not confident about the safety credibility of many other AI companies. Although scoring each major foundation-model-building company for safety would be a useful project to do... (Pondering if I should do this.)

I'm asking this mostly to see if anyone else has already done their homework on this question.

You're assuming that success will be apparent enough to the market that it will notice & respond to an AGI research group that is about to succeed. You may want to clarify for yourself why you believe you're in the world where a signal would be apparent (in general), able to be received, and visible to the market at all.

edit to add: my own assumption is that money would have ?? value after a successful FAI, so I don't think this is worth optimizing for, with regards to AGI stocks or whatever.