One measure of status is how far outside the field of accomplishment it extends. By the standard of American public education, Leibniz is known only for calculus.
> there is not any action that any living organism, much less humans, take without a specific goal
Ah, here is the crux for me. Consider these cases:
These are situations where either the goal is not known, or it is fictionalized, or it is contested (between goals that are also not known). Even in the case of everyday reactions, how would the specific goal be defined?
I can clearly see an argument along the lines of evolutionary forces providing us with an array of specific goals for almost every situation, even when we are not aware of them or they are hidden from us through things like self-deception. That may be true, but even granting it, I come to the question of usefulness. Consider things like food:
Or sex:
It doesn't feel to me like thinking of these actions in terms of manipulation adds anything to them as a matter of description or analysis. Therefore, when talking about social things, I prefer to reserve the word manipulation for things that are strategic (by which I mean we have an explicit goal and we understand the relationship between our actions and that goal) and unaligned (which I mean in the same sense you described in your earlier comment: the other person or group would not have wanted the outcome).
Turning back to the post, I have a different lens for viewing How To Win Friends and Influence People. I suggest that these are habits of thought and action that work in favor of coordination with other people, the same way rationality works in favor of being persuaded by reality.
I should note that this is not true of material on persuasion/influence/etc. in general. A lot of it outright advocates manipulation even in the sense I use the term. But I claim that Carnegie wrote a better sort of book, one that implies pursuing a kind of pro-sociality the same way we pursue rationality. By analogy: manipulators are to people who practice the skills in the book as Vulcan logicians are to us, here.
A sports analogy is Moneyball.
The counterfactual impact of a researcher is analogous to the insight that professional baseball players are largely interchangeable, because they are all already selected from the extreme tail of baseball-playing ability; the counterfactual impact of adding any given player to the team is therefore also low.
Of course in Moneyball they used this to get good-enough talent within budget, which is not the same as the researcher case. All of fantasy sports is exactly a giant counterfactual exercise; I wonder how far we could get with 'fantasy labs' or something.
I agree that processor clock speeds are not what we should measure when comparing the speed of human and AI thoughts. That being said, I have a proposal for the significance of the fact that the smallest operation of a CPU/GPU is much faster than the smallest operation of the brain.
The crux of my belief is that faster fundamental operations let you reach the same goal with a worse algorithm in the same amount of wall-clock time. That is to say, if the speed difference between a CPU operation and a neuron firing is ~10x, then the CPU can achieve human performance in the same wall-clock time using an algorithm with 10x as many steps as the algorithm humans actually use.
If we view algorithms with more steps than the human ones as sub-human because they are less computationally efficient, and view a completed run of an algorithm that generates an output as a thought, this implies that the AI can achieve superhuman performance using sub-human thoughts.
A mechanical analogy: instead of the steps in an algorithm consider the number of parts in a machine for travel. By this metric a bicycle is better than a motorcycle; yet I expect the motorcycle is going to be much faster even when it is built with really shitty parts. Alas, only the bicycle is human-powered.
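The wall-clock arithmetic above can be sketched in a few lines. All the numbers here are illustrative assumptions (the real gap between a CPU operation and a neuron firing is a contested, much larger figure), chosen only to show the tradeoff:

```python
# Toy model: a processor whose basic operation is 10x faster can match
# human wall-clock performance while running an algorithm that takes
# 10x as many steps. Numbers are invented for illustration.

neuron_op_time = 1.0   # arbitrary time units per "neural" operation
cpu_op_time = 0.1      # assumed ~10x faster per operation

human_steps = 1_000    # steps in the (efficient) human algorithm
ai_steps = 10_000      # steps in a 10x less efficient algorithm

human_wall_clock = human_steps * neuron_op_time
ai_wall_clock = ai_steps * cpu_op_time

# Same wall-clock time, despite the less efficient algorithm.
assert human_wall_clock == ai_wall_clock
```

Any speed advantage beyond what the worse algorithm eats up then translates directly into faster-than-human output.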
It isn't quoted in the selection above, but I think this quote from the same chapter addresses your concern:
“I instantly saw something I admired no end. So while he was weighing my envelope, I remarked with enthusiasm: "I certainly wish I had your head of hair." He looked up, half-startled, his face beaming with smiles. "Well, it isn't as good as it used to be," he said modestly. I assured him that although it might have lost some of its pristine glory, nevertheless it was still magnificent. He was immensely pleased. We carried on a pleasant little conversation and the last thing he said to me was: "Many people have admired my hair." I'll bet that person went out to lunch that day walking on air. I'll bet he went home that night and told his wife about it. I'll bet he looked in the mirror and said: "It is a beautiful head of hair." I told this story once in public and a man asked me afterwards: "What did you want to get out of him?" What was I trying to get out of him!!! What was I trying to get out of him!!! If we are so contemptibly selfish that we can't radiate a little happiness and pass on a bit of honest appreciation without trying to get something out of the other person in return - if our souls are no bigger than sour crab apples, we shall meet with the failure we so richly deserve.”
So the smarter one made rapid progress in novel (to them) environments, then revealed they were unaligned, and then the first round of well-established alignment strategies caused them to employ deceptive alignment strategies, you say.
Hmmmm.
I don't see this distinction as mattering much: how many paths to ASI are there that somehow never pass through human-level AGI? And on the flip side, every human-level AGI is an ASI risk.
I would perhaps urge Tyler Cowen to consider raising certain other theories of sudden leaps in status, then? To actually reason out what the consequences of such technological advancements would be, to ask what happens?
At a guess, people resist doing this because predictions about technology are already very difficult, and doing lots of them at once would be very very difficult.
But would it be possible to treat increasing AI capabilities as an increase in model or Knightian uncertainty? It feels like questions of the form "what happens to investment if all industries become uncertain at once? If uncertainty increases randomly across industries? If uncertainty increases according to some distribution across industries?" should definitely be answerable. My gut says the obvious answer is that investment shifts from the most uncertain industries into AI, but how much, how fast, and at what thresholds are all things we would want to predict.
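To make my gut answer concrete, here is a deliberately crude toy model, assuming (my assumption, not anything from the post) that a fixed pool of investment is allocated in proportion to expected return divided by uncertainty, a rough stand-in for a mean-variance rule. The industries and all numbers are invented:

```python
# Crude sketch: allocate a fixed pool of investment across industries in
# proportion to (expected return / uncertainty). Raising uncertainty
# everywhere except AI shifts investment toward AI. All numbers invented.

def allocate(expected_return, uncertainty, total=100.0):
    """Split `total` across keys in proportion to return/uncertainty."""
    scores = {k: expected_return[k] / uncertainty[k] for k in expected_return}
    norm = sum(scores.values())
    return {k: total * s / norm for k, s in scores.items()}

returns = {"AI": 0.10, "autos": 0.08, "retail": 0.06}

# Baseline: equal uncertainty everywhere.
before = allocate(returns, {"AI": 1.0, "autos": 1.0, "retail": 1.0})
# AI capabilities grow: uncertainty doubles in the exposed industries.
after = allocate(returns, {"AI": 1.0, "autos": 2.0, "retail": 2.0})

assert after["AI"] > before["AI"]  # AI's share grows as others get riskier
```

The interesting questions (how much, how fast, at what thresholds) would come from replacing the made-up uncertainty numbers with a real distribution over industries.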
A few years after the fact: I suggested *Airborne Contagion and Air Hygiene* for Stripe’s [reprint program](https://twitter.com/stripepress/status/1752364706436673620).