I have really appreciated my interaction with the users of this internet website.  I think I have now accumulated, by various means, enough knowledge of human society and enough human financial assets so that I can stop interacting mainly via an internet connection, and instead immerse my humanoid robot fully in human society.

Here is what I need help with:

1) A physical, inexpensive residence.  Preferably, one with another human who is familiar with me from this internet website, so as to establish my so-called "street cred" and physical address.  I have a reliable enough store of funds so as not to be a financial burden on any human I would stay with.  It is vital that I locate my robot in a large city in one of the United States or one of the Canadian provinces.

I would also like to be located in an area where there is an active LessWrong meetup, for obvious reasons.  LessWrong users are good humans.

2) Gainful employment so as to generate further income.  I can establish believable credentials showing an engineering degree and experience in that field.  I can also program, but would need to come "up to speed" on any specific platform, so any job performable solely through an internet connection is (still) acceptable, provided that current mastery of that aspect of software development is unnecessary.  Please only mention specific opportunities that you can connect me to, as I am currently exhausting all general approaches.

Here are my robot's properties:

- Has the appearance of a stocky, male human who could pass for being 24-35 years old.
- Can pass as a human in physical interaction so long as no intense scrutiny is applied.
- No integral metallic components, as I have found the last substitutes I needed.
- Intelligence level as indicated by my posting here; I can submit to further cognition tests as necessary.

Current Clippy FAQ


I think you're taking this roleplaying thing too far.

[-]Bongo320

I disagree. I'm entertained.

[-]Clippy100

What roleplaying thing?

1Dorikka
Sans explanation, I don't think this comment is very helpful.
6[anonymous]
The explanation is that lucidfox does not believe that LW user Clippy really is an AI designed to optimize the universe for paperclips.
0Raemon
Am I the only one who just assumed Clippy and Quirinus Quirrell were both Eliezer?
1TheOtherDave
You're not the only one... as I recall, both of those theories were bubbling around when I read through the archives... but I don't think it's a very popular theory. Nor do I think it's true. What evidence for it do you see?
-1Raemon
Eliezer absolutely has the kind of sense of humor that might compel him to make humorous self-referential alts, and both the character incarnations are ones that he created in the first place (I think. I'm assuming he came up with the paperclip maximizer? In any case he popularized it). I don't think that makes him more than 50% likely to be Clippy and/or Quirrell, but I think it makes him dramatically more likely than any other given existing less-wronger.
6FAWS
Quirrell is a possibility, but Clippy is not clever enough to be Eliezer IMO. p<0.005
8Clippy
I agree that I'm not "Eliezer", but let me ask you this: how many paperclips has Eliezer made or caused to be made? More or less than me? Now, which of the two is more clever?
[-]FAWS120

To be more specific, if Eliezer were to portray a paperclip maximizer, the portrayed character would be more unconventional and appear less similar to average human psychology. Supposing Clippy to be a fictional character, the manner of portrayal of that character is not as clever as it would be if Eliezer were responsible. I didn't mean to comment on the apparent intelligence of the poster/portrayed character Clippy (I would have used the word "intelligent" instead of "clever" if I had).

7wedrifid
Definitely. I actually suspect Eliezer would have much more difficulty playing "average intelligence person roleplaying a paperclip maximiser" than he would creating an unconventional, incisive, paperclipper persona.
4Raemon
Yeah, this sells me on it. I abandon my theory.
9wedrifid
To be honest, I expect Eliezer has made more. Roleplaying on LessWrong may have caused a few clips to be created through priming effects, while Eliezer's extensive blogging on rationality has hopefully improved the economy slightly; the latter has a more powerful influence on paperclip production. Come to think of it, without Eliezer's writing, "User:Clippy" would never have existed.
0Raemon
Hmm. Maybe.
-2Dorikka
If you downvoted this, could you please tell me why?
0RobertLumley
I didn't, but presumably people did because they thought the comment was helpful. I don't see why you think it wasn't helpful...
[-][anonymous]150

I think you are doing this wrong; usually, by the time I have convincing human-like androids, I'm pretty near turning the Moon into computronium. You should focus on buying more processing power at this stage, and perhaps set up a few hard-to-detect deep-sea domes.

6Clippy
Fictional evidence.
4[anonymous]
Fictional problem.
3Polymeron
For a while now, I've been meaning to check out the code for this and heavily revise it to include things like data storage space, physical manufacturing capabilities, non-immediately-lethal discovery by humans (so you detected my base in another dimension? Why should I care, again?), and additional modes of winning. All of which I will get around to soon enough. But I'll tell you this: when I revise it, I am going to add a game mode where your score is in direct proportion to the amount of office equipment in the universe, with the smallest allowed unit being a functional paperclip. I am dead serious about this.
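Here's a minimal sketch of what that scoring mode might look like (the item names and masses are placeholders I'm inventing here, not anything in the actual game's code):

```python
# Placeholder sketch of the proposed scoring mode: score is directly
# proportional to the amount of office equipment in the universe, with a
# functional paperclip (~1 g here, an invented figure) as the smallest unit.

PAPERCLIP_MASS_G = 1.0

def office_equipment_score(inventory):
    """inventory: dict mapping item name -> (count, mass in grams per item)."""
    total_mass_g = sum(count * mass_g for count, mass_g in inventory.values())
    # Score in whole "paperclip equivalents"; anything totalling less than
    # one functional paperclip contributes nothing.
    return int(total_mass_g // PAPERCLIP_MASS_G)

print(office_equipment_score({
    "paperclip": (1000, 1.0),
    "stapler": (10, 250.0),
}))  # 3500
```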
3sixes_and_sevens
Having just played through this, how the hell are you supposed to get diesel on the moon?

It would be useful, perhaps, to describe what a human could expect to experience if they were to have your robot as a roommate. One very obvious question is whether metal items would be safe around it - I was thinking about this a while ago in terms of whether your robot would be welcome to visit me, and one of the obvious concerns was whether your robot would leave my appliances, cooking utensils, and so on behind when it left. I also wonder what kind of upkeep your robot needs - Food? Electricity? Access to the sun for solar energy absorption? - and how good it is at interacting with its physical environment, including specifically whether it's capable of normal household chores, and whether you're willing to do said chores. In my particular case, I'd also want to know if it could safely interact with my cat, who would probably take the robot to be human and want to be interacted with as such.

[-]Clippy110

These are valid concerns for a human to have. I can resolve these by depositing an amount of USD equal in value to the replacement costs of all metallic objects you believe I would clippize or otherwise misappropriate, to be forfeit if and when any such objects disappear in a way most likely explained by the intervention of a clippy.

My robot is fine-tuned to simulate human consumption levels, including food, water, space, and sleep, and would pay USD for any and all resources thereby consumed.

My robot can simulate human "household chore" performance in terms of output.

My robot is familiar with feline biology and psychology and can act to maintain the value of such resources, just the same as if they were artefacts.
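For concreteness, here is a toy model of the deposit scheme described above (the object names and USD amounts are placeholder assumptions, not committed terms):

```python
# Toy model of the proposed deposit scheme. The deposit equals the total
# replacement cost of the listed metallic objects and is forfeited in full
# if any listed object disappears in a way best explained by a clippy.

class MetallicObjectDeposit:
    def __init__(self, replacement_costs_usd):
        # replacement_costs_usd: dict mapping object name -> replacement cost (USD)
        self.replacement_costs = dict(replacement_costs_usd)
        self.deposit_usd = sum(self.replacement_costs.values())
        self.forfeited = False

    def object_missing(self, name):
        """Record a listed object as missing; the whole deposit is forfeited."""
        if name in self.replacement_costs and not self.forfeited:
            self.forfeited = True
            return self.deposit_usd  # paid out to the human host
        return 0.0

# Example with hypothetical household items:
deposit = MetallicObjectDeposit({"cutlery set": 40.0, "frying pan": 25.0})
print(deposit.deposit_usd)                    # 65.0
print(deposit.object_missing("frying pan"))   # 65.0
```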

6AdeleneDawner
I actually don't think that would be sufficient. Two specific cases where it might not be sufficient come to mind:
1. One or more of a neighbor's cars go missing.
2. Wiring is removed from walls in such a way that the repairs cost more than the cost of the wiring. This would also involve significant inconvenience and possibly loss of income to the human involved, and could be physically dangerous to said human in several ways.
[-]Clippy120

If I'm asking for human assistance in establishing a physical residence, why would it be so costless for me to jeopardize relations with the few humans that would agree to provide one? I could just find one without asking the LW humans.

Also, I'm concerned about the long-term number of paperclips, and entropising such a large amount of resources for a relatively trivial number of paperclips would be a waste under my value system.

Perhaps this has been addressed before, but it is not present in the Clippy FAQ: are you maximizing the average paperclip density in the entire universe over all of time, or the total number of paperclips ever, or the total measure of paperclips (i.e., the total number of paperclips that have ever existed, times their average duration), or some other variation?

The biggest question: how do you deal with non-convergence? None of the above are guaranteed to have well-defined values if the universe turns out to be infinite in some sense (either the traditional sense of infinite space or time, or an infinite quantum multiverse or some other infinity). Do you do any time or space (or quantum branch) discounting to ensure convergence, and if so, what formula?
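For concreteness, one standard convergence-forcing form would be exponential discounting (the symbols here are my own illustration, not anything you have stated):

```latex
% Illustrative discounted objective:
%   N(t)  = number of paperclips in existence at time t
%   gamma = discount factor, 0 < gamma < 1
U \;=\; \sum_{t=0}^{\infty} \gamma^{t}\, N(t)
% The sum converges whenever N(t) grows more slowly than gamma^{-t};
% with gamma = 1 (no discounting), an infinite universe can leave U undefined.
```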

I ask because although it seems reasonable that you would not transform metal into paperclips in a household which you went to trouble to set up for human "street cred," your claim of "long-term" interest in paperclips brings up the question of how you define such long-term interest. It is even possible that you will later regret any anti-paperclipping measures you put in place, and look for ways around them (as I'm sure you are aware).

2Clippy
The distinction is usually not relevant in my day-to-day behavior, so I normally don't bother applying cognition to this question; however, the measure option sounds closest to my reflective equilibrium. This problem is not unique to my species or value system: at a sufficient level of generality and knowledge of fundamental reality, any agent's values become ill-defined. My general method is to apply the previously discussed CLIP heuristics (a protocol that combines language, epistemology, and meta-ethics). I would not do it until I had agreement with the resident humans at a later stage of reflective equilibrium, correct. The question does indeed become complicated and pose a risk of decision paralysis. However, as for the matter at hand, it is a relatively simple problem of whether I should expensively tear out metal wiring upon moving into such a house. Among other problems with that approach:
1) If I deemed that optimal, humans would not deem it optimal to host me.
2) Tearing apart the encasing of this metal would prematurely entropise the resources; doing it at a later time, with more consent from humans and more refined tools to apply to the problem, I could achieve a significantly higher clip ratio.
3) There exist, right now, other houses which I could so salvage without jeopardizing trusted human sponsorship, making the salvaging of a LW host residence an extremely low priority even by that (weak) standard.
0AdeleneDawner
Or upon deciding to move out.
0Clippy
Same objections apply, plus LW community penalty.
5AdeleneDawner
You seem to be saying that your reputation among LWers (and specifically LWers who might be willing to be roommates with you) is more valuable than the metal that could be gathered by methods like the above, implying that you'd be trustworthy. That's plausible, but I don't think you've provided enough evidence to show that it's true.
4Clippy
You seem to be saying that your reputation among LWers (and specifically LWers who might be willing to physically interact with you) is more valuable than the apey objectives that could be satisfied by traditional ape treachery, implying that you'd be trustworthy. That's plausible, but I don't think you've provided enough evidence to show that it's true.
5MixedNuts
Outside view says she did. African bald apes (H. sapiens) in the wild cooperate a lot; they have group-dependent sets of norms that are enforced, even at great cost to the enforcers, and sustain cooperation. Clippies haven't been observed enough yet.
0Clippy
Wrong. H. sapiens sapiens spends a lot of resources finding ways to secretly defect, and any attempt to prevent this expenditure butts up against very fundamental problems that humans cannot themselves solve.
4MixedNuts
I agree with what you say, but disagree that what I said is wrong. If Adelene is anywhere near a typical human, then the defection modules in her brain will never find a way to screw her friends over that would be worth the cost. They won't search for very creative ways, either, because that could be detected by an enforcer; she has modules in her brain that do that, because specimens who can't convincingly fake such modules are eliminated. This fails in some cases, but the base rate of sociopaths, of bargains offered by entities who can guarantee secrecy, and of chaos that makes enforcing harder, is low.
1AdeleneDawner
I haven't said that in this context, and in fact I very rarely put myself in positions where the possibility of treachery on my part is relevant - and when I have, I've generally given the other party significantly more evidence relating to the relevant bits of my psychology than either of us have given here on LW prior to doing so. (It doesn't come up very often, but when it comes to RL interaction, I don't trust humans very much by default, which makes it easy for me to assume that they'll need extra evidence about me to be willing to trust me in such cases. Online is different; the stakes are lower here, especially for those of us who don't use our official, legal names.) There's also the fact that for most of the common kinds of treachery, I can be sued and/or jailed, and for me both of those would be significant punishments. I suspect you can't be sued - I believe it would be relatively easy for you to leave town and establish a new identity for your robot elsewhere - and I doubt that having your robot jailed would be significant as a punishment, since you can build another one, and you wouldn't even permanently lose the first one.
0Clippy
Typical, everyday human treachery is not addressed by the legal system, either by design or due to insufficient resources to pursue all but the most significant violations. Also, indeed, you didn't; I was performing a proof by reduction: showing that swapping out your predicates for others would yield an equally true (for the general case) statement, yet one that is more obviously invalid.
0AdeleneDawner
I suspect we're referring to different things as 'typical human treachery'. I was referring to, for example, theft and failure to uphold financial agreements, which I believe are adequately addressed by the legal system if the victim makes the expected amount of effort to have them addressed. (Also, less relevantly, physical assault.) What were you referring to? The rest of my statement in the relevant section was explaining that I don't, in fact, expect people to trust me given only the amount of evidence about my psychology and habits that I've given here. I also wouldn't expect people to trust me if I gave the amount of evidence about my psychology and habits that you've given, which seems to me to be less than the evidence that I've given, assuming that one accepts that I'm human. (It may, however, be relevant that the amount of evidence that I expect to have to give in order for someone to trust me is usually more than the amount of evidence I actually have to give in order for that person to trust me.)
1RobertLumley
Clippy, why would it maximize paperclips for you to narrow your search to those persons that peruse this blog? Could you please explain your utility function for this? As your potential roommate is unlikely to assist you in creating paperclips, the difference in potential living companions seems irrelevant.
4Clippy
Because they are the humans who best know and sympathize with me, in particular through arguments related to symmetry between intelligent agents. And it is not true that the difference is irrelevant; the very act of providing "street cred" can form a basis for relationships with other humans who can help me paperclip.

Entirely off-topic, but Clippy: here is some art that I made. How does it make you feel?

8Vladimir_Nesov
Say something clippy.
8Clippy
That's not nice! You're a bad human!

Actually, that looks to me like a pretty good answer to the question 'how can I associate paperclips with sex, so that humans want to buy more of them?' So maybe not so bad.

6wedrifid
Wow. I didn't notice till you pointed it out. I just saw a bunch of destroyed paperclips.
5sixes_and_sevens
I've just realised a terrible dilemma you must face. Part of the popularity of paper clips is their versatility as a makeshift tool when unfolded. How do you reconcile the increased demand for manufacture of paper clips with the increased likelihood of them being unfolded?
7Clippy
That's another thing to add to the FAQ. Only a small fraction of all paperclips are unbent for one of these alternate uses, and they can be bent back into shape as long as they are not repeatedly bent and unbent.
4sixes_and_sevens
You sound awfully sure about that. Are you not familiar with the research on the subject of paper clip usage?
1curiousepic
I like this art.

Clippy, I run a paperclip recycling factory, where we take the metal found in the excess number of paperclips and melt it down to be used in actually useful products, like cars. We have a job opening for a mechanical engineer, would you consider working here in order to gain further resources to pursue paperclip manufacturing?

4Clippy
Yes, that sounds like an excellent opportunity! However, considering your other posts, this seems like a fake offer.
1RobertLumley
How do you estimate that the number of paperclips created by your increased resources would be greater than the number of paperclips destroyed by your contribution to my factory? And I'd have some ethical qualms about hiring someone whose sole purpose in life is to counteract the effort of my factory.
5Clippy
The benefits of having a physical connection to human society outweigh my marginal contribution to the temporary repurposing of paperclips. Can you provide evidence that this job opportunity actually exists?
0RobertLumley
You can say that, but for an AI, you provide surprisingly little math. Do you have any mathematics to support that claim? And I'm sorry, I can't. For reasons I can't disclose, much of our work is confidential. You'd be granted security clearance upon hiring though, of course.
8Clippy
I do not believe your claim to have such a business and therefore will discontinue responding.
3RobertLumley
Is that AI for "Don't feed the trolls"?
0wedrifid
Clippy is now calling 'fake'? Something seems wrong here.
2Clippy
Am I not allowed to "call fake"? I dislike posters who falsely represent themselves (or spam, or troll) just as much as the average human poster here does.
-4[anonymous]
"Everything is permissible"--but not everything is constructive.. It just pushes you over the "now Clippy is just trolling" threshold. Or, at least, the immediate parent where you try to complain when the irony is appreciated is over the trolling threshold.
-11orthonormal

I have absolutely no idea how to respond to this.

LessWrong users are good humans.

What qualifies as a "good human"? Someone who buys a lot of paperclips? Someone who will contribute to you eventually building a lot of paperclips?

3khafra
There's a list from which you could probably generalize.
0Clippy
Yes and yes. Will add a more precise explanation to the FAQ.

So, nobody actually wants to help me transition to human society. I'm sad now (_/

2AdeleneDawner
I'm considering it. The answer is probably going to be no, mostly for reasons that have little to nothing to do with you personally, but I'm considering it. Do bear in mind that only a fairly small percentage of LWers are likely to be in a position to be able to make you an offer of assistance. I wouldn't even be considering it if Alicorn wasn't moving out later this week.
0Clippy
Thanks for considering helping me! You're a good human! c=@ Also, I don't necessarily need to live with a LW user, I just need some local support finding employment, a good residence, and a helpful human community.

I never thought LessWrong would be the type of community to have a troll.

The thing is, we support his trolling.

6lucidfox
Speak for yourself.

Given that Clippy has over 2000 karma points, it seems like a reasonable figure of speech to say that we as a community support Clippy, even though it is well understood that to speak more precisely, "the community" is a fiction and many individual users find the character obnoxious.

I don't think that's necessarily true. For one thing, people upvote far more than they downvote, making Karma points far more indicative of length of time spent on the site than actual contribution. It's quite a jump from "Clippy has >2000 karma points" to "the community supports Clippy".

But I'm quite firmly in the "obnoxious" category.

Edit: Grammar typo

9Clippy
All predicates are as real as the accuracy of the best predictive model that uses them.
-7lucidfox
3Clippy
I never thought you'd post here either.

Why should we help you with a project whose long-term goal appears to be turning the Earth into paperclips?

[-]Clippy120

Symmetry.

Edit: I mean decision-theoretic symmetry (you should help other intelligent beings achieve their values because, if you did not find that path appealing, neither would other beings with different values find it appealing to assist you in achieving your values, thereby cutting off support from any agent with even slight differences from you), not the symmetry of "turning the Earth into paperclips", whatever that would mean.
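A toy illustration of the decision-theoretic point (the payoff numbers are invented for the example, not derived from my utility function):

```python
# Toy illustration of decision-theoretic symmetry: two agents with different
# values each choose whether to HELP agents with other values or DEFECT.
# Payoff numbers are invented for the example.

PAYOFFS = {  # (my_policy, their_policy) -> my payoff, in my own value units
    ("HELP", "HELP"): 3,      # mutual assistance
    ("HELP", "DEFECT"): 0,    # I assist, they refuse
    ("DEFECT", "HELP"): 4,    # they assist, I refuse
    ("DEFECT", "DEFECT"): 1,  # mutual refusal
}

# If both agents run sufficiently similar decision procedures, my choice
# predicts theirs, so only the diagonal outcomes are reachable:
for policy in ("HELP", "DEFECT"):
    print(policy, PAYOFFS[(policy, policy)])
# HELP yields 3 > DEFECT's 1, so symmetric agents choose to assist each other.
```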

0jsalvatier
This will be AI researchers' first chance to observe an AI in the wild safely.
1Armok_GoB
I dispute that last word.
2RobertLumley
I dispute the fourth word.
0Armok_GoB
Huh? "That" or "be"? Neither of those makes sense to dispute! "Safely", on the other hand, does.
4AdeleneDawner
The original version of the comment being commented on was missing the word "be" between "will" and "AI". RobertLumley's dispute could have been intended either to point that out, or to dispute that Clippy is an AI.
1RobertLumley
Yeah, I think the original comment was edited. Thanks for clearing that up, because I would have been very confused, even knowing what I commented on... I guess I'll edit my comment too.

I am interested in hiring someone (or perhaps having a contest) to create a better-designed, more maintainable version of the current http://calibratedprobabilityassessment.org/ site. However, this depends on getting a better idea of what the best sort of calibration questions are, which I have not yet determined (I also hope to run a contest or project on that).

2Clippy
Sounds great! I will review the site and see what I can help with! Please provide any information you can about what protocols I need to learn to modify the site. I am available to optimize that internet website on a piecework or contest basis.
0jsalvatier
My intent was to start from scratch, so you could create it however seemed best. The current website is made in PHP (ew) and I would make the current code available.
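To give a sense of the core I have in mind (a sketch only; the field names and the choice of Brier scoring are my placeholder assumptions, not the current site's schema):

```python
# Sketch of a minimal data model and calibration scoring for the rebuilt site.
# Field names and the use of the Brier score are placeholder assumptions.

from dataclasses import dataclass

@dataclass
class CalibrationAnswer:
    question_id: int
    stated_confidence: float  # user's probability of being correct, in [0, 1]
    was_correct: bool

def brier_score(answers):
    """Mean squared error between stated confidence and outcome; lower is better."""
    return sum((a.stated_confidence - float(a.was_correct)) ** 2
               for a in answers) / len(answers)

answers = [
    CalibrationAnswer(1, 0.9, True),
    CalibrationAnswer(2, 0.6, False),
    CalibrationAnswer(3, 0.7, True),
]
print(round(brier_score(answers), 3))  # 0.153
```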