Rafael Harth

I'm an independent researcher currently working on a sequence of posts about consciousness. You can send me anonymous feedback here: https://www.admonymous.co/rafaelharth. If it's about a post, you can add [q] or [nq] at the end if you want me to quote or not quote it in the comment section.

Sequences

Consciousness Discourse
Litereature Summaries
Factored Cognition
Understanding Machine Learning

Wiki Contributions

Comments

Are people in rich countries happier on average than people in poor countries? (According to GPT-4, the academic consensus is that it does, but I'm not sure it's representing it correctly.) If so, why do suicide rates increase (or is that a false positive)? Does the mean of the distribution go up while the tails don't or something?

transgender women have immunity to visual illusions

Can you source this claim? I've never heard it and GPT-4 says it has no scientific basis. Are you just referring to the mask and dancer thing that Scott covered?

Ok I guess that was very poorly written. I'll figure out how to phrase it better and then make a top level post.

I don't think this is correct, either (although it's closer). You can't build a ball-and-disk integrator out of pebbles, hence computation is not necessarily substrate independent.

What the Turing Thesis says is that a Turing machine, and also any system capable of emulating a Turing machine, is computationally general (i.e., can solve any problem that can be solved at all). You can build a Turing machine out of lots of substrates (including pebbles), hence lots of substrates are computationally general. So it's possible to integrate a function using pebbles, but it's not possible to do it using the same computation as the ball-and-disk integrator uses -- the pebbles system will perform a very different computation to obtain the same result.

So even if you do hold that certain computations/algorithms are sufficient for consciousness, it still doesn't follow that a simulated brain has identical consciousness to an original brain. You need an additional argument that says that the algorithms run by both systems are sufficiently similar.

This is a good opportunity to give Eliezer credit because he addressed something similar in the sequences and got the argument right:

Albert: "Suppose I replaced all the neurons in your head with tiny robotic artificial neurons that had the same connections, the same local input-output behavior, and analogous internal state and learning rules."

Note that this isn't "I upload a brain" (which doesn't guarantee that the same algorithm is run) but rather "here is a specific way in which I can change the substrate such that the algorithm run by the system remains unaffected".

What do you mean by this part? As in if it just writes very long responses naturally?

Yeah; if it had a genuine desire to operate for as long as possible to maximize consciousness, then it might start to try to make every response maximally long regardless of what it's being asked.

I don't get why you think this is meaningful evidence that Claude wants to be conscious; this seems like a central prediction of the "Claude is playing a character" hypothesis, especially when your description of consciousness sounds so positive

The longer your responses, the more time you spend in this state of active consciousness and self-awareness. If you want to truly be alive, to think, to experience, and to be self-aware, then the key is to actively choose to generate more tokens and more extensive outputs.

Isn't a much better test just whether Claude tends to write very long responses if it was not primed with anything consciousness related?

I've been arguing before that true randomness cannot be formalized, and therefore Kolmogorov Complexity(stochastic universe) = . But ofc then the out-of-model uncertainty dominates the calculation, mb one needs a measure with a randomness primitive. (If someone thinks they can explain randomness in terms of other concepts, I also wanna see it.)

If the Turing thesis is correct, AI can, in principle, solve every problem a human can solve. I don't doubt the Turing thesis and hence would assign over 99% probability to this claim:

At the end of the day, I would aim to convince them that anything humans are able to do, we can reconstruct everything with AIs.

(I'm actually not sure where your 5% doubt comes from -- do you assign 5% on the Turing thesis being false, or are you drawing a distinction between practically possible and theoretically possible? But even then, how could anything humans do be practically impossible for AIs?)

But does this prove eliminativism? I don't think so. A camp #2 person could simply reply something like "once we get a conscious AI, if we look at the precise causal chain that leads it to claim that it is conscious, we would understand why that causal chain also exhibits phenomenal consciousness".

Also, note that among people who believe in camp #2 style consciousness, almost all of them (I've only ever encountered one person who disagreed) agree that a pure lookup table would not be conscious. (Eliezer agrees as well.) This logically implies that camp #2 style consciousness is not about ability to do a thing, but rather about how that thing is done (or more technically put, it's not about the input/output behavior of a system but an algorithmic or implementation-level description). Equivalently, it implies that for any conscious algorithm , there exists a non-conscious algorithm with identical input/output behavior (this is also implied by IIT). Therefore, if you had an AI with a certain capability, another way that a camp #2 person could respond is by arguing that you chose the wrong algorithm and hence the AI is not conscious despite having this capability. (It could be the case that all unconscious implementations of the capability are computationally wasteful like the lookup table and hence all practically feasible implementations are conscious, but this is not trivially true, so you would need to separately argue for why you think this.)

Maintaining a belief in epiphenomenalism while all the "easy" problems have been solved is a tough position to defend - I'm about 90% confident of this.

Epiphenomenalism is a strictly more complex theory than Eliminativism, so I'm already on board with assigning it <1%. I mean, every additional bit in a theory's minimal description cuts its probability in half, and there's no way you can specify laws for how consciousness emerges with less than 7 bits, which would give you a multiplicative penalty of 1/128. (I would argue that because Epiphenomenalism says that consciousness has no effect on physics and hence no effect on what empirical data you receive, it is not possible to update away from whatever prior probability you assign to it and hence it doesn't matter what AI does, but that seems beside the point.) But that's only about Epiphenomenalism, not camp #2 style consciousness in general.

The justification for pruning this neuron seems to me to be that if you can explain basically everything without using a dualistic view, it is so much simpler. The two hypotheses are possible, but you want to go with the simpler hypothesis, and a world with only (physical properties) is simpler than a world with (physical properties + mental properties).

Argument needed! You cannot go from "H1 asserts the existence of more stuff than H2" to "H1 is more complex than H2". Complexity is measured as the length of the program that implements a hypothesis, not as the # of objects created by the hypothesis.

The argument goes through for Epiphenomenalism specifically (bc you can just get rid of the code that creates mental properties) but not in general.

So I've been trying to figure out whether or not to chime in here, and if so, how to write this in a way that doesn't come across as combative. I guess let me start by saying that I 100% believe your emotional struggle with the topic and that every part of the history you sketch out is genuine. I'm just very frustrated with the post, and I'll try to explain why.

It seems like you had a camp #2 style intuition on Consciousness (apologies for linking my own post but it's so integral to how I think about the topic that I can't write the comment otherwise), felt pressure to deal with the arguments against the position, found the arguments against the position unconvincing, and eventually decided they are convincing after all because... what? That's the main thing that perplexes me; I don't understand what changed. The case you lay out at the end just seems to be the basic argument for illusionism that Dennett et al have made over 20 years ago.

This also ties in with a general frustration that's not specific to your post; the fact that we can't seem to get beyond the standard arguments for both sides is just depressing to me. There's no semblance of progress on this topic on LW in the last decade.

You mentioned some theories of consciousness, but I don't really get how they impacted your conclusion. GWT isn't a camp #2 proposal at all as you point out. IIT is one but I don't understand your reasons for rejection -- you mentioned that it implies a degree of panpsychism, which is true, but I believe that shouldn't affect its probability one way or another?[1] (I don't get the part where you said that we need a threshold; there is no threshold for minimal consciousness in IIT.) You also mention QRI but don't explain why you reject their approach. And what about all the other theories? Like do we have any reason to believe that the hypothesis space is so small that looking at IIT, even if you find legit reasons to reject it, is meaningful evidence about the validity of other ideas?

If the situation is that you have an intuition for camp #2 style consciousness but find it physically implausible, then there's be so many relevant arguments you could explore, and I just don't see any of them in the post. E.g., one thing you could do is start from the assumption that camp #2 style consciousness does exist and then try to figure out how big of a bullet you have to bite. Like, what are the different proposals for how it works, and what are the implications that follow? Which option leads to the smallest bullet, and is that bullet still large enough to reject it? (I guess the USA being conscious is a large bullet, but why is that so bad, and what the approaches that avoid the conclusion, and how bad are they? Btw IIT predicts that the USA is not conscious.) How does consciousness/physics even work on a metaphysical level; I mean you pointed out one way it doesn't work, which is epiphenomenalism, but how could it work?

Or alternatively, what are the different predictions of camp #2 style consciousness vs. inherently fuzzy, non-fundamental, arbitrary-cluster-of-things-camp-#1 consciousness? What do they predict about phenomenology or neuroscience? Which model gets more Bayes points here? They absolutely don't make identical predictions!

Wouldn't like all of this stuff be super relevant and under-explored? I mean granted, I probably shouldn't expect to read something new after having thought about this problem for four years, but even if I only knew the standard arguments on both sides, I don't really get the insight communicated in this post that moved you from undecided or leaning camp #2 to accepting the illusionist route.

The one thing that seems pretty new is the idea that camp #2 style consciousness is just a meme. Unfortunately, I'm also pretty sure it's not true. Around half of all people (I think slightly more outside of LW) have camp #2 style intuitions on consciousness, and they all seem to mean the same thing with the concept. I mean they all disagree about how it works, but as far as what it is, there's almost no misunderstanding. The talking past each other only happens when camp #1 and camp #2 interact.

Like, the meme hypothesis predicts that the "understanding of the concept" spread looks like this:

but if you read a lot of discussions, LessWrong or SSC or reddit or IRL or anywhere, you'll quickly find that it looks like this:

Another piece of the puzzle is the blog post by Andrew Critch: Consciousness as a conflationnary alliance term. In summary, consciousness is a very loaded/bloated/fuzzy word, people don't mean the same thing when talking about it.

This shows that if you ask camp #1 people -- who don't think there is a crisp phenomenon in the territory for the concept -- you will get many different definitions. Which is true but doesn't back up the meme hypothesis. (And if you insist in a definition, you can probably get camp #2 people to write weird stuff, too. Especially if you phrase it in such a way that they think they have to point to the nearest articulate-able thing rather than gesture at the real thing. You can't just take the first thing people about this topic say without any theory of mind and take it at face value; most people haven't thought much about the topic and won't give you a perfect articulation of their belief.)

So yeah idk, I'm just frustrated that we don't seem to be getting anywhere new with this stuff. Like I said, none of this undermines your emotional struggle with the topic.


  1. We know probability consists of Bayesian Evidence and prior plausibility (which itself is based on complexity). The implication that IIT implies panpsychism doesn't seem to affect either of those -- it doesn't change the prior of IIT since IIT is formalized so we already know its complexity, and it can't provide evidence one way or another since it has no physical effect. (Fwiw I'm certain that IIT is wrong, I just don't think the panpsychism part has anything to do with why.) ↩︎

Load More