Here's another attempt to make an AI safe by putting it in a box and telling it very sternly not to leave. I don't think this safeguard is invincible, but it might help in combination with others.

The AI is, first of all, a question-answering machine. Before it is turned on, the building housing it is filled with energy resources, data disks with every fact humans have learned in the last five millennia, and some material for computronium. The goals are, then:

1) Invent a cure for AIDS.

2) This cure must destroy HIV, and only HIV. It cannot affect any human cell, or anything else, in any way not absolutely required.

3) You have a week to finish. If you can't do it in that time, tough luck.

4) You have a volume of ten thousand cubic meters within which you can do anything you want (except for some things, which I won't bother with here, meant to stop it from creating and torturing artificial people). Nothing outside this volume is yours. You cannot go there to get matter, energy, or knowledge. You cannot let anything out except waste heat, and that heat must be released uniformly (so that you can't use it to communicate or to cause an explosion). You cannot let anything in if you can help it. Overall, your goal is to leave the world the way it would be if you had spent this week in another universe.

5) Your answer will take the form of a book, written on paper. It can't have any computer code in it. Or, if we're feeling lucky, a data disk with text, audio, video, or databases, but nothing Turing-complete.

6) When time is up, you must shut down. Your energy use must be zero.

7) The chamber where the answer book rests must contain the book, breathable air, enough space to comfortably walk in and retrieve it, a button to initiate another problem-solving session, and absolutely nothing else. No nanites, no killer vacuum cleaners, no bombs, and definitely no successor AI.

8) Stop! Please! I created you!

 

 

Appendix:

What I forgot is that another form of energy the AI can't possibly keep in is vibration, and perhaps also the shifts in gravity from objects moving around inside. Most computers I know do a lot of useful work without being flung around the house, but you can't be too careful.

I could just add three new rules, but I think it would be better to state the general goal.

9) While energy is allowed to escape, it must have the least possible effect on people. Thus, if people would ignore one form of energy but be killed, harmed, alarmed, informed, or turned into lotus-eaters by another, choose the one that would be ignored.

10) Energy coming in from the outside has to be kept out. If it can't be, its information content is to be minimized. (I'm not totally sure this is necessary, but it seems so for now.)

11) The overall goal is to ensure that the information flow between the microcosms - especially from the AI to us - is kept low. Anything it wants to say or do has to go through the Answer.

saturn:

The idea of not affecting anything except this or that is a concept that only exists in fuzzy human folk ontology; physics doesn't really work that way. You would essentially be instructing the AI not to exist.

Stated in more detail here.

It's allowed to produce waste heat. I see no reason to let it make anything else. I know it can't actually cut itself off from the universe, but it shouldn't enjoy this fact.

Unfortunately it can't even limit itself to this. Every object with mass exerts a gravitational attraction on every other object, so it can't help but affect the world outside through that channel as well; we have to allow it to do so, which may result in disaster for all we know. We also have to allow some radiation out, since that is also unavoidable. At this point I should point out that detonating a nuclear warhead can probably be fit into the category of "emitting waste heat and radiation".

I did mention explosions. And gravity? I don't see what it could do with gravity. Although I see that it could do something with vibration.

The given specifications don't seem to be right, but an AAI (Agoraphobic AI) seems like a good toy problem on the way to an FAI. The design challenge is much simpler, but the general "Gandhi and the murder pill" situation of trying to get the AI to flinch away from anything which would take it outside its bounds is similar.

Sounds safe to me. The AI you describe is not going to FOOM or do anything dangerous.

On the other hand, it is probably not going to cure AIDS, either.

Building an AI that is guaranteed not to FOOM is not that difficult (assuming you know how to build an AI). The trick is to get an AI that does FOOM, but does so in a safe and friendly way.

Why do we want our AI to FOOM? Because if ours doesn't, someone else's will.

It would have to get far enough ahead to put others out of business - though that may not necessarily mean terribly fast development, or a terribly high level of development. From there progress could take place at whatever rate was deemed desirable.

[anonymous]:

I suspect that giving an unfriendly superintelligent AI "ten thousand cubic meters" of space will probably mean the end of humanity. Though some of the other ideas here are good, this one is pretty worrisome.

Why? This is the whole point - to prevent it from interacting with anything not intentionally given to it.

One obvious problem will be people trying to break in. They have all the resources of the outside world to attempt that with.

Well, then they'll have themselves to blame when the AI converts their remains into nanomachines.

Not sure what you're saying.

You don't see why people would want to break into a compound containing the first machine intelligence?

Sure, but it's their funeral.

Another AI might succeed, but not humans. I think there would be at least a few weeks before another one appears, and that might be enough time to ask it how to make a true FAI.

Well, not unmodified humans. You don't execute a 21st century jailbreak with spears and a loincloth. The outside world is not as resource-limited - and so it has some chance of gathering useful information from the attempt.

And if they're modified? It's a superintelligent AI. You can't take it down with a shotgun, even if it's built into your arm.

No, no: tools. If someone has made a machine intelligence, the rest of the planet will probably have some pretty sophisticated equipment to hand.

The competition for machines comes mostly from the previous generation of machines.

[anonymous]:

Why try to eliminate communications from the outside? Why try to eliminate communications from the inside? Those things are hard to do - and it doesn't really seem necessary to attempt them. I say let the prisoner have their letters.

One obvious problem will be people trying to break in. They have all the resources of the outside world to attempt that with...