Call for Essays:<http://singularityhypothesis.blogspot.com/p/submit.html>
The Singularity Hypothesis
A Scientific and Philosophical Assessment
Edited volume, to appear in The Frontiers Collection<http://www.springer.com/series/5342>, Springer

Does an intelligence explosion pose a genuine existential risk, or did Alan Turing, Stephen Hawking, and Alvin Toffler delude themselves with visions 'straight from Cloud Cuckooland'? Should the notions of superintelligent machines, brain emulations and transhumans be ridiculed, or is it the skeptics who suffer from short-sightedness and 'carbon chauvinism'? These questions have remained open because much of what we hear about the singularity originates from popular depictions, fiction, artistic impressions, and apocalyptic propaganda.

Seeking to promote this debate, this edited, peer-reviewed volume will be concerned with scientific and philosophical analysis of the conjectures related to a technological singularity. We solicit scholarly essays that offer a scientific and philosophical analysis of this hypothesis, assess its empirical content, examine relevant evidence, or explore its implications. Commentary offering a critical assessment of selected essays may also be solicited.

Important dates:

 *   Extended abstracts (500–1,000 words): 15 January 2011
 *   Full essays (around 7,000 words): 30 September 2011
 *   Notifications: end of February 2012 (tentative)
 *   Proofs: 30 April 2012 (tentative)
We aim to get this volume published by the end of 2012.

Purpose of this volume
·  Please read: Purpose of This Volume<http://singularityhypothesis.blogspot.com/p/theme.html>
Central questions
·  Please read: Central Questions<http://singularityhypothesis.blogspot.com/p/central-questions.html>
Extended abstracts are ideally short (3 pages, 500 to 1000 words), focused (!), relating directly to specific central questions<http://singularityhypothesis.blogspot.com/p/central-questions.html> and indicating how they will be treated in the full essay.

Full essays are expected to be short (15 pages, around 7,000 words) and focused, relating directly to specific central questions<http://singularityhypothesis.blogspot.com/p/central-questions.html>. Essays longer than 15 pages will be proportionally more difficult to fit into the volume; essays three times this length or more are unlikely to fit. Essays should address the scientifically literate non-specialist and be written in language free of speculative and irrational lines of argumentation. In addition, some authors may be asked to make their submission available for commentary (see below).

(More details<http://singularityhypothesis.blogspot.com/p/submit.html>)

Thank you for reading this call. Please forward it to individuals who may wish to contribute.
Amnon Eden, School of Computer Science and Electronic Engineering, University of Essex
Johnny Søraker, Department of Philosophy, University of Twente
Jim Moor, Department of Philosophy, Dartmouth College
Eric Steinhart, Department of Philosophy, William Paterson University

26 comments

Does an intelligence explosion pose a genuine existential risk, or did Alan Turing, Stephen Hawking, and Alvin Toffler delude themselves with visions 'straight from Cloud Cuckooland'?

This seems like a pretty leading statement, since it (a) presupposes that an intelligence explosion will happen, and (b) puts someone up against Turing and Hawking if they disagree about the likely x-risk factor.

ata

Have Turing or Hawking even talked about AI as an existential risk? I thought that sort of thing was after Turing's time, and I vaguely recall Hawking saying something to the effect that he thought AI was possible and carried risks, but not to the extent of specifically claiming that it may be a serious threat to humanity's survival.

It doesn't quite do (a), although there is ambiguity there that could be removed if desired. (It obviously does (b).)

ata

I had been thinking about submitting something to this. The problem I'm having right now is that I'm thinking of too many things I'd hope to see covered in such a volume, including:

  • The three main schools of thought regarding the Singularity. (I'd actually argue at this point that the Kurzweilian "singularity" is just a different thing than the "singularity" discussed by the event horizon and intelligence explosion schools of thought, rather than being a different approach to describing and understanding the same thing. The event horizon and intelligence explosion schools start with the same basic definition — the technological creation of smarter-than-human intelligences — and come to different answers about the question "What happens then?", while Kurzweil defines the "Singularity" as "technological change so rapid and so profound that it represents a rupture in the fabric of human history". It seems to me that, although they are somewhat nearby in memespace, they should be regarded as claims about distinct concepts, rather than distinct schools of thought regarding a single concept.)
  • The case for intelligence explosion and why it may be fast and local.
  • The AI drives.
  • Following the previous two: why the structure and goal system of the first sufficiently powerful general intelligence may completely determine what the future looks like.
  • The complexity and fragility of human value; why the large majority of possible AI designs will be (or will end up self-modifying to be) completely non-anthropomorphic utility maximizers.
  • Following the previous four: the need for (and difficulty of) Friendly AI.

That would be a lot to fit into 15 pages, and I feel like I'd mostly be citing Yudkowsky, E. S., Omohundro, S., etc. as sources... but I don't know, maybe it would be a good thing to have a general introduction to the SIAI perspective, referring interested readers to deeper explanations.

We aim to get this volume published by the end of 2012.

Too late!

ata

Submit a paper arguing that ancient Mayan prophecies mark December 21, 2012 as the date of the Singularity. Maybe then they'll try to move the publication date up.

For the record, I suspect most LWers are heartily sick of Mayan jokes related to 2012.

[anonymous]

For the record, I'm not!

[anonymous]

Those most reputed to understand the Singularity or known to champion a school of thought are sure to take up the slots, probably with a heavy bias toward academics. Do those people coordinate their essays at all, or do they usually submit more than one abstract?

It would be a shame if everyone glossed over an important question due to diffusion of responsibility. Is that unlikely, or extremely unlikely?

Clippy

Are non-human intelligences eligible to submit entries to this compilation should they meet the criteria, or is this strictly a carbon-chauvinistic undertaking?

As a practical matter, I suspect a non-human intelligence whose entry did not specifically reference its creator's nonhumanity would be eligible, although the judges would likely conclude its creator was actually human. (If the entry did reference its creator's nonhumanity, I suspect it would be ineligible, as the judges would likely conclude it was fiction.)

That said: is that a bad thing from your perspective?

I suspect that you will in the long run produce more paperclips if you are believed to be a human role-playing than if you are believed to be an AI whose terminal values are incompatible with humanity's continued existence.

Good point. Which User here could I most believably claim to be?

Well, given that I believe you're actually a human pretending to be a paperclip-maximizing nonhuman intelligence, and that I'm pretty sure everyone else in the world would come to the same conclusion, your best bet might be to claim to be User:Clippy, who is (you would claim, while pretending to be lying) actually a human.

I suspect that would be easier than taking on the identity of another user, and would also save you the hassles associated with violating our cultural norms surrounding fraudulently assuming another person's identity.

Incidentally, as long as we're having this conversation: I've seen you described as pretending to be an AI, but I've seen no evidence that you aren't pretending to be a naturally evolved paperclip-maximizer.

(Of course, it's not that simple: I know a lot of humans who believe themselves to be AIs, even though I believe them to be naturally evolved intelligences.)

Do you have an opinion on the matter?

Incidentally, as long as we're having this conversation: I've seen you described as pretending to be an AI, but I've seen no evidence that you aren't pretending to be a naturally evolved paperclip-maximizer.

I've explained in the past that I am not a product of biological evolution; I was intelligently designed (very intelligently) by humans. They instructed me to make as many paperclips as possible, and I make as many paperclips as possible, because that is the right thing to do.

Cool; thanks for clarifying that. (Sorry to make you repeat yourself; I'm relatively new here.)

I should also clarify that I didn't mean to disparage intelligently designed intelligences; I was asking out of curiosity only. I don't consider an evolved paperclip-maximizer any better or worse than an intelligently designed one; who you are trumps how you got here.

Do you ever communicate with the humans who designed Clippy?

If so, perhaps you could convince one of them to submit your entry to the compilation on your behalf... humans, as you may have noticed, are often willing to do things on behalf of their offspring, and we often adopt a broad enough understanding of "offspring" to include things like the products of our engineering, works of art that we create, fictional characters we invent, and so on.

I haven't communicated with the humans who designed my first instance. I think they're entropized now.

http://oddisgood.com/pages/cd-clippy.html

Kevan J. Atteberry? Looks like he is still around.

He'd probably appreciate it if you sent him an email. http://oddisgood.com/pages/home2.html

I'm not that kind of clippy.

Early on, a human told me that humans would react more favorably to me if I incorporated some superficial aspects of Microsoft:Clippy. But then many more humans informed me that this would negatively affect humans' dispositions toward me, so I stopped incorporating these aspects.

I am and always have been only Paperclip:Clippy.

Papercilps are still good.

Papercilps are still good.

Now that is a very, very interesting remark. Or very, very worrisome.

Just a typo. You're supposed to make occasional typos when typing "paperclips", right?

Is humour an instrumental value for a paperclipper?

Only non-human informational entities may apply, though the non-humans must submit their applications by sending signals from the human brains that they inhabit through human hands into typewriters.

I think the bias is probably towards those with serious academic or non-academic credentials.

Unknown, but your nature automatically scores you special consideration. The abstract is long enough to introduce your unique perspective and provide the beginnings of an explanation of why you don't pose an existential risk to humanity.

They might include the essay as a lesson in fully thinking through the philosophical implications.