I have really appreciated my interaction with the users of this internet website. I think I have now accumulated, by various means, enough knowledge of human society and enough human financial assets so that I can stop interacting mainly via an internet connection, and rather immerse my humanoid robot fully into human society.
Here is what I need help with:
1) A physical, inexpensive residence. Preferably, one with another human who is familiar with me from this internet website, so as to establish my so-called "street cred" and physical address. I have a reliable enough store of funds so as not to be a financial burden on any human I would stay with. It is vital that I locate my robot in a large city in the United States of America or one of the Canadian provinces.
I would also like to be located in an area where there is an active LessWrong meetup, for obvious reasons. LessWrong users are good humans.
2) Gainful employment so as to generate further income. I can establish believable credentials showing an engineering degree and experience in that field. I can also program, but would need to come "up to speed" on any specific platform, so any job performable solely through an internet connection is (still) acceptable, provided that current mastery of that aspect of software development is unnecessary. Please only mention specific opportunities that you can connect me to, as I am currently exhausting all general approaches.
Here are my robot's properties:
- Has the appearance of a stocky, male human who could pass for being 24-35 years old.
- Can pass as a human in physical interaction so long as no intense scrutiny is applied.
- No integral metallic components, as I have found the last substitutes I needed.
- Intelligence level as indicated by my posting here; I can submit to further cognition tests as necessary.
Current Clippy FAQ
For a while now, I've been meaning to check out the code for this and heavily revise it to include things like data storage space, physical manufacturing capabilities, non-immediately-lethal discovery by humans (so you detected my base in another dimension? Why should I care, again?), and additional modes of winning. All of which I will get around to soon enough.
But, I'll tell you this. Now when I revise it, I am going to add a game mode where your score is in direct proportion to the amount of office equipment in the universe, with the smallest allowed being a functional paperclip. I am dead serious about this.
It would be useful, perhaps, to describe what a human could expect to experience if they were to have your robot as a roommate. One very obvious question is whether metal items would be safe around it - I was thinking about this a while ago in terms of whether your robot would be welcome to visit me, and one of the obvious concerns was whether your robot would leave my appliances, cooking utensils, and so on behind when it left.

I also wonder what kind of upkeep your robot needs - food? Electricity? Access to the sun for solar energy absorption? - and how good it is at interacting with its physical environment, including specifically whether it's capable of normal household chores, and whether you're willing to do said chores.

In my particular case, I'd also want to know if it could safely interact with my cat, who would probably take the robot to be human and want to be interacted with as such.
These are valid concerns for a human to have. I can resolve these by depositing an amount of USD equal in value to the replacement costs of all metallic objects you believe I would clippize or otherwise misappropriate, to be forfeit if and when any such objects disappear in a way most likely explained by the intervention of a clippy.
My robot is fine-tuned to simulate human consumption levels, including food, water, space, and sleep, and would pay USD for any and all resources thereby consumed.
My robot can simulate human "household chore" performance in terms of output.
My robot is familiar with feline biology and psychology and can act to maintain the value of such resources, just as if they were artefacts.
> These are valid concerns for a human to have. I can resolve these by depositing an amount of USD equal in value to the replacement costs of all metallic objects you believe I would clippize or otherwise misappropriate, to be forfeit if and when any such objects disappear in a way most likely explained by the intervention of a clippy.
I actually don't think that would be sufficient. Two specific cases where it might not be come to mind:
1) One or more of a neighbor's cars go missing.
2) Wiring is removed from walls in such a way that the repairs cost more than the cost of the wiring. This would also involve significant inconvenience and possibly loss of income to the human involved, and could be physically dangerous to said human in several ways.
If I'm asking for human assistance in establishing a physical residence, why would it be so costless for me to jeopardize relations with the few humans that would agree to provide one? I could just find one without asking the LW humans.
Also, I'm concerned about the long-term number of paperclips, and entropising such a large amount of resources for a relatively trivial number of paperclips would be a waste under my value system.
Perhaps this has been addressed before, but it is not present in the Clippy FAQ: are you maximizing the average paperclip density in the entire universe considering all of time, or the total number of paperclips ever, or the total measure of paperclips (i.e., the total number of paperclips which have ever existed, times their average duration), or some other variation?
The biggest question: how do you deal with non-convergence? None of the above are guaranteed to have well-defined values if the universe turns out to be infinite in some sense (either the traditional sense of infinite space or time, or an infinite quantum multiverse or some other infinity). Do you do any time or space (or quantum branch) discounting to ensure convergence, and if so, what formula?
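To make the convergence question concrete, here is a minimal sketch (my own illustration, not anything from the Clippy FAQ) of why time discounting fixes the problem: with a discount factor strictly between 0 and 1, the utility of even an infinite constant stream of paperclips is a geometric series with a finite sum, whereas the undiscounted total diverges. The function name and parameters are hypothetical.

```python
def discounted_utility(clips_per_step, gamma, horizon):
    """Sum of gamma**t * (clips produced at step t), for t in [0, horizon).

    With 0 < gamma < 1 and a constant production rate, this approaches
    the finite limit clips_per_step / (1 - gamma) as horizon grows,
    so the objective stays well-defined even over unbounded time.
    """
    return sum((gamma ** t) * clips_per_step for t in range(horizon))

# With gamma = 0.9 and 1 clip per step, the infinite-horizon value is
# 1 / (1 - 0.9) = 10; finite horizons approach it from below.
print(discounted_utility(1, 0.9, 1000))  # very close to 10.0
```

An analogous weighting over space or over quantum branches would serve the same purpose; the open question put to Clippy is which (if any) such formula it actually uses.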
I ask because although it seems reasonable that you would not transform metal into paperclips in a household which you went to trouble to set up for human "street cred," your claim of "long-term" interest in paperclips brings up the question of how you define such long-term interest. It is even possible that you will later regret any anti-paperclipping measures you put in place, and look for ways around them (as I'm sure you are aware).
You seem to be saying that your reputation among LWers (and specifically LWers who might be willing to be roommates with you) is more valuable than the metal that could be gathered by methods like the above, implying that you'd be trustworthy. That's plausible, but I don't think you've provided enough evidence to show that it's true.
You seem to be saying that your reputation among LWers (and specifically LWers who might be willing to physically interact with you) is more valuable than the apey objectives that could be satisfied by traditional ape treachery, implying that you'd be trustworthy. That's plausible, but I don't think you've provided enough evidence to show that it's true.
Given that Clippy has over 2000 karma points, it seems like a reasonable figure of speech to say that we as a community support Clippy, even though it is well understood that to speak more precisely, "the community" is a fiction and many individual users find the character obnoxious.
I don't think that's necessarily true. For one thing, people upvote far more than they downvote, making karma points far more indicative of length of time spent on the site than of actual contribution. It's quite a jump from "Clippy has >2000 karma points" to "the community supports Clippy".
But I'm quite firmly in the "obnoxious" category.
Edit: Grammar typo
Clippy, I run a paperclip recycling factory, where we take the metal found in the excess number of paperclips and melt it down to be used in actually useful products, like cars. We have a job opening for a mechanical engineer; would you consider working here in order to gain further resources to pursue paperclip manufacturing?
Symmetry.
Edit: I mean decision-theoretic symmetry (you should help other intelligent beings achieve their values because if you did not find that path appealing, neither would other beings with different values find it appealing to assist you in achieving your values, thereby cutting off support from any agent with even slight differences from you), not the symmetry of "turning the earth into paperclips", whatever that would mean.
I think you are doing this wrong; usually, by the time I have convincing human-like androids, I'm pretty near turning the Moon into computronium. You should focus on buying more processing power at this stage, perhaps setting up a few hard-to-detect deep-sea domes.