Spoiled Discussion of Permutation City, A Fire Upon The Deep, and Eliezer's Mega Crossover
Permutation City is an awesome novel that was written in 1994. Even if the author, Greg Egan, used a caricature of this community as a bad guy in a more recent novel, his work is still a major influence on a lot of people around these parts who have read it. It dissolves so many questions around uploading and simulation that it's hard for someone who has read the book to talk about simulationist metaphysics without wanting to reference the novel... but doing that runs into constraints imposed by spoiler etiquette.
So go read Permutation City if you haven't read it already because it's philosophically important and a reasonably fun read.
In the meantime, if you haven't already, you should also read A Fire Upon The Deep by Vernor Vinge (of "singularity"-coining fame), and then Eliezer's fan fic The Finale of the Ultimate Meta Mega Crossover, which references both works in interesting ways to make substantive philosophical points and doesn't take long to read.
In the comments below there will be discussion that has spoilers for all three works.
Hm. The Chinese Room seems to be different in my head than it is on Wikipedia. I guess I assumed that writing a book that covers all possible inputs convincingly would necessarily involve lots of brute force.
Well, the man in the Chinese Room is supposed to be manually 'stepping through' an algorithm that can respond intelligently to questions in Chinese. He's not necessarily just "matching up" inputs with outputs, although Searle wants you to think that he may as well just be doing that.
Searle seems to have very little appreciation of how complicated his program would have to be, though, to be fair, his intuitions were shaped by chatbots like ELIZA.
Anyway, the "Systems Reply" is correct (hurrah - we have a philosophical "result"). Even those philosophers who think this is in some way controversial ought to agree that it's irrelevant whether the man in the room understands Chinese, because he is analogous to the CPU, not the program.
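The CPU/program distinction can be made concrete with a toy sketch (purely illustrative, not Searle's actual setup): the "man" is a generic rule-follower that mechanically applies whatever rulebook it is handed, with no idea what the symbols mean. Everything below — the function name, the rulebook, the unary-addition example — is invented for illustration.

```python
def follow_rulebook(rulebook, state, max_steps=100):
    """Mechanically apply rewrite rules until none match.

    The executor (the 'man') never interprets the symbols; whatever
    competence the system displays lives in the rulebook, not in him.
    """
    for _ in range(max_steps):
        for pattern, replacement in rulebook:
            if pattern in state:
                state = state.replace(pattern, replacement, 1)
                break
        else:
            return state  # no rule applies; halt
    return state

# A hypothetical micro-rulebook: unary addition. To the executor the
# "+" and "|" marks are meaningless squiggles, yet the system adds.
ADD_RULES = [("+", "")]

print(follow_rulebook(ADD_RULES, "||+|||"))  # -> "|||||"  (2 + 3 = 5)
```

The point of the sketch: asking whether `follow_rulebook`'s loop "understands addition" is a category error, just as asking whether the man understands Chinese is; the question only makes sense of the rulebook-plus-executor system as a whole.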
Therefore, his thought experiment has zero value - if you can imagine a conscious machine then you can imagine the "Systems Reply" being correct, and if you can't, you can't.