Preface to a Proposal for a New Mode of Inquiry

Summary: The problem of AI has turned out to be much harder than originally thought. One hypothesis is that the obstacle is not a shortcoming of mathematics or theory, but a limitation in the philosophy of science. This article is a preview of a series of posts describing how, by making a minor revision to our understanding of the scientific method, further progress can be achieved by establishing AI as an empirical science.


The field of artificial intelligence has been around for more than fifty years. If one takes an optimistic view of things, it's possible to believe that a lot of progress has been made. A chess program defeated the top-ranked human grandmaster. Robotic cars drove autonomously across 132 miles of the Mojave Desert. And Google seems to have made great strides in machine translation, apparently by feeding massive quantities of data to a statistical learning algorithm.

But even as the field has advanced, the horizon has seemed to recede. In some sense the field's successes make its failures all the more conspicuous. The best chess programs are better than any human, but Go is still challenging for computers. Robotic cars can drive across the desert, but they're not ready to share the road with human drivers. And Google is pretty good at translating Spanish to English, but still produces howlers when translating Japanese to English. The failures indicate that, instead of being threads in a majestic general theory, the successes were just narrow, isolated solutions to problems that turned out to be easier than they originally appeared.

So what went wrong, and how to move forward? Most mainstream AI researchers are reluctant to provide clear answers to this question, so instead one must read between the lines in the literature. Every new paper in AI implicitly suggests that the research subfield of which it is a part will, if vigorously pursued, lead to dramatic progress towards intelligence. People who study reinforcement learning think the answer is to develop better versions of algorithms like Q-Learning and temporal difference (TD) learning. The researchers behind the IBM Blue Brain project think the answer is to conduct massive neural simulations. For some roboticists, the answer involves the idea of embodiment: since the purpose of the brain is to control the body, to understand intelligence one should build robots, put them in the real world, watch how they behave, notice the problems they encounter, and then try to solve those problems. Practitioners of computer vision believe that since the visual cortex takes up such a huge fraction of total brain volume, the best way to understand general intelligence is to first study vision.
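
For readers who haven't met the algorithms named above, the core of tabular Q-learning fits in a few lines. This is a generic textbook sketch (the two-state toy table below is invented purely for illustration), not code from any of the research programs under discussion:

```python
def q_learning_update(Q, state, action, reward, next_state,
                      alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a)).
    alpha is the learning rate, gamma the discount factor."""
    best_next = max(Q[next_state].values())
    td_error = reward + gamma * best_next - Q[state][action]
    Q[state][action] += alpha * td_error
    return Q

# Toy two-state world: taking "right" in state 0 yields reward 1.
Q = {s: {a: 0.0 for a in ("left", "right")} for s in (0, 1)}
q_learning_update(Q, state=0, action="right", reward=1.0, next_state=1)
# Q[0]["right"] is now 0.1: the estimate has moved toward the observed reward.
```

TD learning is the same idea applied to state values rather than state-action pairs; the shared ingredient is learning from the difference between successive predictions.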

Now, I have some sympathy for the views mentioned above. If I had been thinking seriously about AI in the 80s, I would probably have gotten excited about the idea of reinforcement learning. But reinforcement learning is now basically an old idea, as is embodiment (this tradition can be traced back to the seminal papers by Rodney Brooks in the early 90s), and computer vision is almost as old as AI itself. If these avenues really led to some kind of amazing result, it probably would already have been found.

So, dissatisfied with the ideas of my predecessors, I've taken some trouble to develop my own hypothesis regarding the question of how to move forward. And desperate times call for desperate measures: the long failure of AI to live up to its promises suggests that the obstacle is no small thing that can be solved merely by writing down a new algorithm or theorem. What I propose is nothing less than a complete reexamination of our answers to fundamental philosophical questions. What is a scientific theory? What is the real meaning of the scientific method (and why did it take so long for people to figure out the part about empirical verification)? How do we separate science from pseudoscience? What is Ockham's Razor really telling us? Why does physics work so amazingly, terrifyingly well, while fields like economics and nutrition stumble?

Now, my answers to these fundamental questions aren't going to be radical. It all adds up to normality. No one who is up-to-date on topics like information theory, machine learning, and Bayesian statistics will be shocked by what I have to say here. But my answers are slightly different from the traditional ones. And by starting from a slightly different philosophical origin, and following the logical path as it opened up in front of me, I've reached a clearing in the conceptual woods that is bright, beautiful, and silent.

Without getting too far ahead of myself, let me give you a bit of a preview of the ideas I'm going to discuss. One highly relevant issue is the role that other, more mature fields have had in shaping modern AI. One obvious influence comes from computer science, since presumably AI will eventually be built using software. But this fact appears irrelevant to me, and so the influence of computer science on AI seems like a disastrous historical accident. To suggest that computer science should be an important influence on AI is a bit like suggesting that woodworking should be an important influence on music, since most musical instruments are made out of wood. Another influence, one that should in principle be healthy but in practice isn't, comes from physics. Unfortunately, for the most part, AI researchers have imitated only the superficial appearance of physics - its use of sophisticated mathematics - while ignoring its essential trait, which is its obsession with reality. In my view, AI can and must become a hard, empirical science, in which researchers propose, test, refine, and often discard theories of empirical reality. But theories of AI will not work like theories of physics. We'll see that AI can be considered, in some sense, the epistemological converse of physics. Physics works by using complex deductive reasoning (calculus, differential equations, group theory, etc.) built on top of a minimalist inductive framework (the physical laws). Human intelligence, in contrast, is based on a complex inductive foundation, supplemented by minor deductive operations. In many ways, AI will come to resemble disciplines like botany, zoology, and cartography - fields in which the researchers' core methodological impulse is to go out into the world and write down what they see.

An important aspect of my proposal will be to expand the definitions of the words "scientific theory" and "scientific method". A scientific theory, to me, is a computational tool that can be used to produce reliable predictions, and a scientific method is a process of obtaining good scientific theories. Botany and zoology make reliable predictions, so they must have scientific theories. In contrast to physics, however, they depend far less on the use of controlled experiments. The analogy to human learning is strong: humans achieve the ability to make reliable predictions without conducting controlled experiments. Typically, though, experimental sciences are considered to be far harder, more rigorous, and more quantitative than observational sciences. But I will propose a generalized version of the scientific method, which includes human learning as a special case, and shows how to make observational sciences just as hard, rigorous, and quantitative as physics.

As a result of learning, humans achieve the ability to make fairly good predictions about some types of phenomena. It seems clear that a major component of that predictive power is the ability to transform raw sensory data into abstract perceptions. The photons fall on my eye in a certain pattern which I recognize as a doorknob, allowing me to predict that if I turn the knob, the door will open. So humans are amazingly talented at perception, and modestly good at prediction. Are there any other ingredients necessary for intelligence? My answer is: not really. In particular, in my view humans are terrible at planning. Our decision making algorithm is not much more than: invent a plan, try to predict what will happen based on that plan, and if the prediction seems good, implement the plan. All the "magic" really comes from the ability to make accurate predictions. So a major difference in my approach as opposed to traditional AI is that the emphasis is on prediction through learning and perception, as opposed to planning through logic and deduction.
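
The decision loop described above can be sketched directly. Here `predict_outcome` and `score` are hypothetical stand-ins for the learned predictive model and the preference function, which is exactly where, on this view, all the "magic" lives:

```python
def choose_plan(candidate_plans, predict_outcome, score, threshold):
    """Decision loop from the post: invent a plan, predict what will
    happen if it is executed, and commit to the first plan whose
    predicted outcome looks good enough. All the intelligence is
    hidden inside predict_outcome, not in the loop itself."""
    for plan in candidate_plans:
        predicted = predict_outcome(plan)
        if score(predicted) >= threshold:
            return plan
    return None  # no plan predicted to succeed

# Toy usage with invented stand-ins for the learned components.
plans = ["push door", "turn knob then push", "knock and wait"]
predict = {"push door": "door stays shut",
           "turn knob then push": "door opens",
           "knock and wait": "nothing happens"}.get
best = choose_plan(plans, predict,
                   score=lambda outcome: 1.0 if outcome == "door opens" else 0.0,
                   threshold=0.5)
# best == "turn knob then push"
```

The point of the sketch is how little work the planner itself does: swap in a better `predict_outcome` and the same trivial loop makes better decisions.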

As a final point, I want to note that my proposal is not analogous to or in conflict with theories of brain function like deep belief networks, neural Darwinism, symbol systems, or hierarchical temporal memories. My proposal is like an interface: it specifies the input and the output, but not the implementation. It embodies an immense and multifaceted Question, to which I have no real answer. But, crucially, the Question comes with a rigorous evaluation procedure that allows one to compare candidate answers. Finding those answers will be an awesome challenge, and I hope I can convince some of you to work with me on that challenge.
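
To make "rigorous evaluation procedure" concrete, here is one invented but representative form such a procedure could take: prequential scoring, in which each candidate theory must predict the next observation before seeing it, and pays log-loss on what actually happens. Lower total loss wins, and no controlled experiment is needed, only a stream of observations:

```python
import math

def prequential_log_loss(predictor, observations):
    """Score a predictive 'theory' on a binary observation stream.
    Before each observation the predictor outputs P(next == 1), and it
    pays -log2 of the probability it assigned to what actually happened."""
    total, history = 0.0, []
    for obs in observations:
        p_one = predictor(history)
        p_actual = p_one if obs == 1 else 1.0 - p_one
        total += -math.log2(max(p_actual, 1e-12))
        history.append(obs)
    return total

# Two rival "theories" of a mostly-ones stream.
uniform = lambda history: 0.5                                  # ignores the data
laplace = lambda history: (sum(history) + 1) / (len(history) + 2)  # learns from it

data = [1, 1, 1, 0, 1, 1, 1, 1]
# The theory that learns from observation accumulates lower loss.
assert prequential_log_loss(laplace, data) < prequential_log_loss(uniform, data)
```

This is only one possible instantiation of such an evaluation procedure, but it shows how candidate answers could be compared quantitatively without controlled experiments.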

I am going to post an outline of my proposal over the next couple of weeks. I expect most of you will disagree with most of it, but I hope we can at least identify concretely the points at which our views diverge. I am very interested in feedback and criticism, both regarding material issues (since we reason to argue), and on issues of style and presentation. The ideas are not fundamentally difficult; if you can't understand what I'm saying, I will accept at least three quarters of the blame.

Comments


Here's an issue of style and presentation: Would you mind editing your text (or your future texts), striving to remove self-reference and cheerleading ("fluff")?

A small number of uses of "I/my" and colorful language ("amazing, terrifying, bright, beautiful, silent, immense, multifaceted") is reasonable, but the discipline of focusing almost entirely on the ideas being discussed helps both you and your readers understand what the ideas actually are.

As far as I can tell, the content of your post is "I will be posting over the next couple of weeks.", and the rest is fluff. Since you did invest some time in writing this post, you must have believed there was more to it. The fluff has either confused you (into believing this post was substantial) or confused me (preventing me from seeing the substantial arguments).

I'm intrigued and looking forward to reading your articles. I suggest you change your title-writing algorithm, though. To my ears, "Preface to a Proposal for a New Mode of Inquiry" sounds like a softcover edition of a book co-authored by a committee of the five bastard stepchildren of Kant and Kafka.

You maybe should have mentioned the earlier discussion of your idea on the open thread, in which I believe I spotted some critical problems with where you're going: you seem to be endorsing a sort of "blank slate" model, in which humans have a really good reasoning engine and the stimuli humans receive after birth are sufficient to make all the right inferences.

However, all experimental evidence tells us (cf. Pinker's The Blank Slate) that humans make a significantly smaller set of inferences on our sense data than are logically possible under constraint of Occam's razor; there are grammatical errors that children never make in any language; there are expectations babies all have, at the same time, though none has gathered enough postnatal sense data to justify such inferences, etc.

I conclude that it is fruitless to attempt to find "general intelligence" by looking at what general algorithm would make the inferences humans do, given postnatal stimuli. My alternative suggestion is to identify human intelligence as a combination of general reasoning and pre-encoding of environment-specific knowledge that humans do not have to entirely relearn after birth, because the brain's wiring-up in the womb already filters out inference patterns that don't win.

That knowledge can come from the "accumulated wisdom" of evolutionary history, meaning you need to account for how that data was transformed into a human's present internal model.

ETA: Wow, I was sloppy when I wrote this; hope the point was able to shine through. Typos and missing words corrected. Should make more sense now.

Our decision making algorithm is not much more than: invent a plan, try to predict what will happen based on that plan, and if the prediction seems good, implement the plan. All the "magic" really comes from the ability to make accurate predictions.

You need to locate a reasonable hypothesis before there is any chance for it to be right. A lot of magic is hidden in the "invent a plan".

To suggest that computer science should be an important influence on AI is a bit like suggesting that woodworking should be an important influence on music, since most musical instruments are made out of wood.

"Computer science is no more about computers than astronomy is about telescopes." -- E. Dijkstra

Dijkstra did take a rather narrow view of computer science, though, or maybe he was being a bit tongue-in-cheek here.

I think actual computers should influence computer science; for instance, it's crucial for fast algorithms to be smart with respect to CPU cache usage, but many of the 'classical computer science' hash tables are quite bad in that area.
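
To make the cache point concrete: a chained table resolves collisions by chasing pointers to separately allocated nodes, while open addressing with linear probing scans adjacent slots of one flat array. A minimal sketch of the probe pattern (illustrative only; real implementations add resizing and deletion, and the locality benefit shows up in languages with flat arrays of records rather than in Python itself):

```python
class LinearProbingTable:
    """Open addressing with linear probing: a collision is resolved by
    stepping forward through adjacent slots of a single flat array, so a
    lookup touches consecutive memory. A chained table instead follows a
    pointer per collision to a separately allocated node, which tends to
    miss the cache. No resizing or deletion; sketch only."""
    _EMPTY = object()

    def __init__(self, capacity=16):
        self.slots = [self._EMPTY] * capacity

    def _probe(self, key):
        i = hash(key) % len(self.slots)
        while self.slots[i] is not self._EMPTY and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)  # step to the next adjacent slot
        return i

    def put(self, key, value):
        self.slots[self._probe(key)] = (key, value)

    def get(self, key, default=None):
        slot = self.slots[self._probe(key)]
        return default if slot is self._EMPTY else slot[1]
```

The textbook chained table is simpler to analyze, which may be why it dominates the classical presentations despite its poorer memory behavior.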

To suggest that computer science should be an important influence on AI is a bit like suggesting that woodworking should be an important influence on music, since most musical instruments are made out of wood.

It's been brought up in multiple comments already, but I also wanted to register my disapproval of this statement. The first four minutes of the first SICP video lecture contain the best description of computer science that I've ever heard, so I quote:

"The reason that we think computer science is about computers is pretty much the same reason that the Egyptians thought geometry was about surveying instruments, and that is when some field is just getting started and you don't really understand it very well, it's very easy to confuse the essence of what you're doing with the tools that you use...I think in the future, people will look back and say, "well yes, those primitives in the 20th century were fiddling around with these gadgets called 'computers,' but really what they were doing was starting to learn how to formalize intuitions about process: how to do things; starting to develop a way to talk precisely about 'how-to' knowledge, as opposed to geometry that talks about 'what is true.'" - Hal Abelson

That said, I'm looking forward to your upcoming posts.

Yet, OP has a point. In the course of getting a PhD in computer science, I had the requirement or opportunity to study computer hardware architecture, operating system design, compiler design, data structures, databases, graphics, and lots of different computer languages. And none of that stuff was ever relevant to AI - not one page of it. (Even the data structures and databases courses dealt only with data structures inappropriate for AI.) The courses I took in linguistics, neuroscience, mathematics, psychology, and even electrical engineering were all more useful.

Other than the specifically AI-oriented courses, I can recall only 2 computer science courses that turned out to be helpful for AI: Algorithm analysis, and computational complexity theory. And the AI courses always seemed out of place in the computer science department.

I would not recommend anyone interested in AI to major in computer science. Far too much time wasted on irrelevant subjects. It's difficult to say what they should major in - perhaps neuroscience, or math.

Er, have you given much thought to friendliness?

Anna Salamon once described the Singularity Institute's task as to "discover differential equations before anyone else has discovered algebra". The idea being that writing an AI that will behave predictably according to a set of rules you give it is much more difficult than building an AI that's smart enough to do dangerous stuff. It seems to me that if your ideas about AI are correct, you will be contributing to public knowledge of algebra.

I see that I am caught between a rock and a hard place. To people who think I'm wrong, I'm a crackpot who should be downvoted into oblivion. To people who think I might have something interesting and original to say, I'm helping to bring about the destruction of humanity.

To people who think I'm wrong: fine, who cares? Isn't the point of this site to be a forum where relatively well-informed discussions can take place about issues of mutual interest?

To people who think I'm bringing about doomsday: if my ideas are substantively right, it's going to take a long time before this stuff gets rolling. It will take a decade just to convince the mainstream scientific establishment. After that, things might speed up, but it's still going to be a long, hard slog. Did I mention I have only a good question, not an answer? Let's all take some deep breaths.

BTW, a potential bias you should be aware of in this situation is the human tendency to be irrationally inclined to go through with things once they said they're going to do them. (I believe Robert Cialdini's Influence: Science and Practice talks about this.) So you might want to consider self-observing and trying to detect if that bias is having any influence on your thought process. I (and, probably, all of the kind folks at SIAI--although of course I can't speak for them) will completely forgive you if you go back on your public statements on this. Speaking for myself individually, I'd see this as a demonstration of virtue.

And just to be a little silly, I'll use another technique from Influence on you: reciprocation. When I read that you didn't think computer science would be fundamental to the development of strong AI, I immediately thought "That can't be right". I had a very strong gut feeling that somehow, computer science must be fundamental to the development of strong AI, and I immediately started trying to find a reason for why it was. (It seems Vladimir Nesov's reaction was very similar to mine, and note that he didn't find much of a reason. My guess is his comment's high score is a result of many LW readers sharing his and my gut instinct.) However, I noticed that my mind had entered one of its failure modes (motivated continuation) and I thought to myself "Well, I don't have any solid argument now for why computer science must be fundamental, and there's no real reason for me to look for an argument in favor of that idea instead of an argument against it." So now I've publicly admitted that my gut instinct was unfounded and that my mind is broken; maybe using the Dark Technique of trying to get you to reciprocate will convince you to do the same. :P

To people who think I'm bringing about doomsday: if my ideas are substantively right, it's going to take a long time before this stuff gets rolling. It will take a decade just to convince the mainstream scientific establishment. After that, things might speed up, but it's still going to be a long, hard slog. Did I mention I have only a good question, not an answer? Let's all take some deep breaths.

I believe Eliezer is a member of the school of thought which holds that the intelligence explosion could potentially be triggered by nine geniuses working together in a basement.

I believe Eliezer is... nine geniuses working together in a basement.

By the nether gods... IT ALL MAKES SENSE NOW

But that is an absurd task, because if you don't understand algebra, you certainly won't be discovering differentiation. Attempting to "discover differential equations before anyone else has discovered algebra" doesn't mean you can skip over discovering algebra, it just means you also have to discover it in addition to discovering DE's.

It seems that a more reasonable approach would be a) work towards algebra while simultaneously b) researching and publicizing the potential dangers of unrestrained algebra use (Oops, the metaphor broke.)

But that is an absurd task, because if you don't understand algebra, you certainly won't be discovering differentiation. Attempting to "discover differential equations before anyone else has discovered algebra" doesn't mean you can skip over discovering algebra, it just means you also have to discover it in addition to discovering DE's.

To clarify: 'Anna Salamon once described the Singularity Institute's task as to "discover differential equations before anyone who isn't concerned with friendliness has discovered algebra".'

Okay, but what exactly is the suggestion here? That the OP should not publicize his work on AI? That the OP shouldn't even work on AI at all, and should dedicate his efforts to advocating friendly AI discussion and research instead? If a major current barrier to FAI is understanding how intelligence even works to begin with, then this preliminary work (if it is useful) is going to be a necessary component of both regular AGI and FAI. Is the only problem you see, then, that it's going to be made publicly available? Perhaps we should establish a private section of LW for Top Secret AI discussion?

I apologize for being snarky, but I can't help but find it absurd that we should be worrying about the effects of LW articles on unfriendly singularity, especially given that the hard takeoff model, to my knowledge, is still rather fuzzy. (Last I checked, Robin Hanson put probability of hard takeoff at less than 1%. Unfriendly singularity is so bad an outcome that research and discussion about hard takeoff is warranted, of course, but is it not a bit of an overreaction to suggest that this series of articles might be too dangerous to be made available to the public?)

Last I checked, Robin Hanson put probability of hard takeoff at less than 1%.

And among writers actually skilled at general rationality who do not have a very large personal vested interest in one particular answer (i.e., have already invested years of their lives in becoming AGI researchers), Robin Hanson is on one extreme end of the continuum of opinion on the subject.

Okay, but what exactly is the suggestion here? That the OP should not publicize his work on AI?

Seems like the sensible course of action to me! Do you really think Eliezer and other responsible AGI researchers have published all of their insights into AGI?

If the OP wishes to make a career in AGI research, he can do so responsibly by affiliating himself with SIAI, the Future of Humanity Institute or some other group with a responsible approach to AGI. They will probably share their insights with him only after a lengthy probationary period during which they vigorously check him for signs that he might do something irresponsible once they have taken him into their confidence. (ADDED. If it were me, I would look mainly for signs that the candidate might make a choice which tends to have a bad effect on the global situation, but a positive effect on his or her scientific reputation or on some other personal agenda that humans typically care about.) And they will probably share their insights with him only after he has made a commitment to stay with the group for life.

Seems like the sensible course of action to me! Do you really think Eliezer and other responsible AGI researchers have published all of their insights into AGI?

I don't buy that that's a good approach, though. This seems more like security through obscurity to me: keep all the work hidden, and hope that it's both a) on the right track and b) that no one else stumbles upon it. If, on the other hand, AI discussion did take place on LW, then that gives us a chance to frame the discussion and ensure that FAI is always a central concern.

People here are fond of saying "people are crazy, the world is mad," which is sadly true. But friendliness is too important an issue for SIAI and the community surrounding it to set itself up as stewards of humanity; every effort needs to be made to bring this issue to the forefront of mainstream AI research.

friendliness is too important an issue for SIAI and the community surrounding it to set itself up as stewards of humanity

I agree, which is why I wrote, "SIAI, the Future of Humanity Institute or some other group with a responsible approach to AGI". If for some reason, the OP does not wish to or is not able to join one of the existing responsible groups, he can start his own.

In security through obscurity, a group relies on a practice they have invented and kept secret when they could have chosen instead to adopt a practice that has the benefit of peer review and more testing against reality. Well, yeah, if there exists a practice that has already been tested extensively against reality and undergone extensive peer review, then the responsible AGI groups should adopt it -- but there is no practice like that for solving this particular problem. There are no good historical examples of the current situation with AGI, but the body of practice with the most direct applicability that I can think of right now is the situation during and after WW II in which the big military powers mounted vigorous systematic campaigns that lasted for decades to restrict the dissemination of certain kinds of scientific and technical knowledge. Let me note that in the U.S. this campaign included the requirement for decades that vendors of high-end computer hardware and machine tools obtain permission from the Commerce Department before exporting any products to the Soviets and their allies. Before WW II, other factors (like wealth and the will to continue to fight) besides scientific and technical knowledge dominated the list of factors that decided military outcomes.

Note that the current plan of SIAI for what the AGI should do after it is created is to be guided by an "extrapolation" that gives equal weight to the wishes or "volition" of every single human living at the time of the creation of the AGI, which IMHO goes a very long way toward alleviating any legitimate concerns of people who cannot join one of the responsible AGI groups.

And among writers actually skilled at general rationality who do not have a very large personal vested interest in one particular answer (i.e., have already invested years of their lives in becoming AGI researchers), Robin Hanson is on one extreme end of the continuum of opinion on the subject.

I didn't realize that. Have there been surveys to establish that Robin's view is extreme?

In discussions on Overcoming Bias during the last 3 years, before and after LW spun off of Overcoming Bias, most people whose opinions were backed by actual reasoning assigned a higher probability than Robin does to a hard take-off, given that a self-improving AGI is created.

In the spirit of impartial search for the truth, I will note that rwallace on LW advocates not worrying about unFriendly AI, but I think he has invested years becoming an AGI researcher. Katja Grace is another who thinks hard take-off very unlikely and has actual reasoning on her blog to that effect. She has not invested any time becoming an AGI researcher and has lived for a time at Benton Street as a Visiting Fellow and in the Washington, D.C., area where she traveled with the express purpose of learning from Robin Hanson.

All the full-time employees and volunteers of SIAI that I know of assign much more probability to hard take-off (given AGI) than Robin does. At a workshop following last year's Singularity Summit, every attendee expressed the wish that brain emulation would arrive before AGI. I get the definite impression that those wishes stem mainly from fears of hard takeoff, and not from optimism about brain emulation per se. In the spirit of impartial search for truth, I note that SIAI employees and volunteers probably chose the attendee list of this workshop.

All the full-time employees and volunteers of SIAI that I know of assign much more probability to hard take-off (given AGI) than Robin does.

I'm not convinced that "full-time employees and volunteers of SIAI" are representative of "writers actually skilled at general rationality who do not have a very large personal vested interest in one particular answer", even when weighted by level of rationality.

I'm under the vague impression that Daniel Dennett and Douglas Hofstadter are skeptical about hard take-off. Do you know whether that impression is correct?

ETA: . . . or is there a reason to exclude them from the relevant class of writers?

No, I know of no reason to exclude Douglas Hofstadter from the relevant class of writers though his writings on the topic that I have seen are IMO not very good. Dennett has shown abundant signs of high skill at general rationality, but I do not know if he has done the necessary reading to have an informed probability of hard take-off. But to get to your question, I do not know anything about Dennett's opinions about hard take-off. (But I'd rather talk of the magnitude of the (negative) expected utility of the bad effects of AGI research than about "hard take-off" specifically.)

Add Bill Joy to the list of people very worried about the possibility that AI research will destroy civilization. He wrote of it in an influential piece in Wired in 2000. (And Peter Thiel, if his donations to SIAI mean what I think they mean.)

Note that unlike those who have invested a lot of labor in SIAI, and consequently who stand to gain in prestige if SIAI or SIAI's area of interest gains in prestige or importance, Bill Joy has nothing personal to gain from holding the opinion he holds. Neither do I, BTW: I applied to become a visiting fellow at SIAI last year and was turned down in such a way that made it plain that the decision was probably permanent and probably would not be revisited next year. Then I volunteered to work at SIAI at no cost to SIAI and was again turned down. (ADDED: I should rephrase that: although SIAI is friendly and open and has loose affiliations with very many people, including myself, my discussions with SIAI have left me with the impression that I will probably not be working closely enough with SIAI at any point in the future for an increase in SIAI's prestige, or income for that matter, to rub off on me.) I would rather have not disclosed that in public, but I think it is important to give another example of a person who has no short-term personal stake in the matter who thinks that AGI research is really dangerous. Also, it makes people more likely to take seriously my opinion that AGI researchers should join a group like SIAI instead of publishing their results for all the world to see. (I am not an AGI researcher and am too old (49) to become one. Like math, it really is a young person's game.)

Let me get more specific on how dangerous I think AGI research is: I think a healthy person of, say, 18 years of age is more likely to be killed by AGI gone bad than by cancer or by war (not counting deaths caused by military research into AGI). (I owe this way of framing the issue to Eliezer, who expressed an even higher probability to me 2 years ago.)

Any other questions for me?

Please expand on your reasons for thinking AGI is a serious risk within the next 60 years or so.

Also, people who believe hard takeoff is plausible are more likely to want to work with SIAI, and people at SIAI will probably have heard the pro-hard-takeoff arguments more than the anti-hard-takeoff arguments. That said, <1% is as far as I can tell a clear outlier among those who have thought seriously about the issue.

When Robin visited Benton house and the 1% figure was brought up, he was skeptical that he had ever made such a claim. Do you know where that estimate came up (on OB or wherever)? I'm worried about ascribing incorrect probability estimates to people who are fully able to give new ones if we asked.

At a workshop following last year's Singularity Summit, every attendee expressed the wish that brain emulation would arrive before AGI. I get the definite impression that those wishes stem mainly from fears of hard takeoff, and not from optimism about brain emulation per se.

Are you sure this wasn't a worry at all due to the fact that even without hard take-off moderately smart unFriendly AI can do a lot of damage?

Are you sure this wasn't a worry at all due to the fact that even without hard take-off moderately smart unFriendly AI can do a lot of damage?

Well, the question prompting the discussion was whether a responsible AGI researcher should just publish his or her results (and let us, for the sake of this dialog, define a "result" as an idea that took a long time to identify, even though it might not pan out) for any old AGI researcher to see, or whether he or she should take care to control, as best he or she can, the dissemination of those results, so that the rate of dissemination to responsible researchers is optimized relative to the rate of dissemination to irresponsible ones. If an unFriendly AI can do a lot of damage without hard take-off, well, I humbly suggest he or she should take pains to control dissemination.

But to answer your question in case you are asking out of curiosity rather than to forward the discussion on "controlled dissemination": well, Eliezer certainly thinks hard take-off represents the majority of the negative expected utility, and if the other two attendees of the workshop that I have had long conversations with felt differently, I would have learned of that by now more likely than not. (I, too, believe that hard take-off represents the majority of the negative expected utility, even when utility is defined the "popular" way rather than the rather outré way I define it.)

For rational people skeptical about hard takeoff, consider the Interim Report from the Panel Chairs, AAAI Presidential Panel on Long-Term AI Futures. Most economists I've talked to are also quite skeptical, much more so than I. Dismissing such folks because they haven't read enough of your writings or attended your events seems a bit biased to me.

"The panel of experts was overall skeptical of the radical views expressed by futurists and science-fiction authors. Participants reviewed prior writings and thinking about the possibility of an “intelligence explosion” where computers one day begin designing computers that are more intelligent than themselves. They also reviewed efforts to develop principles for guiding the behavior of autonomous and semi-autonomous systems. Some of the prior and ongoing research on the latter can be viewed by people familiar with Isaac Asimov's Robot Series as formalization and study of behavioral controls akin to Asimov’s Laws of Robotics. There was overall skepticism about the prospect of an intelligence explosion as well as of a “coming singularity,” and also about the large-scale loss of control of intelligent systems."

Hi Robin!

If a professional philosopher or an economist gives his probability that AGI researchers will destroy the world, I think a curious inquirer should check for evidence that the philosopher or economist has actually learned the basics of the skills and domains of knowledge the AGI researchers are likely to use.

I am pretty sure that you have, but I do not know that, e.g., Daniel Dennett has, excellent rationalist though he is. All I was saying is that my interlocutor should check that before deciding how much weight to give Dennett's probability.

But in the above you explicitly choose to exclude AGI researchers. Now you also want to exclude those who haven't read a lot about AGI? Seems like you are trying to exclude as irrelevant everyone who isn't an AGI amateur like you.

(Last I checked, Robin Hanson put the probability of hard takeoff at less than 1%. Unfriendly singularity is so bad an outcome that research and discussion about hard takeoff is warranted, of course, but is it not a bit of an overreaction to suggest that this series of articles might be too dangerous to be made available to the public?)

Even if the probability of hard takeoff were only 0.1%, it would still be too high for me to want there to be public discussion of how one might build an AI.

http://www.nickbostrom.com/astronomical/waste.html

Because the lifespan of galaxies is measured in billions of years, whereas the time-scale of any delays that we could realistically affect would rather be measured in years or decades, the consideration of risk trumps the consideration of opportunity cost. For example, a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years.
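A quick sketch of the arithmetic behind the quote above. The lifespan figure below is an assumed round number for illustration, not taken from the source; the quote only says "billions of years":

```python
# If value accrues roughly linearly over the usable lifespan of the
# galaxies, then cutting existential risk by one percentage point gains
# 1% of the total expected value, which is equivalent (in expectation)
# to avoiding a delay equal to 1% of that lifespan.
GALAXY_LIFESPAN_YEARS = 1e9  # assumed stand-in for "billions of years"
RISK_REDUCTION = 0.01        # a single percentage point

equivalent_delay_years = RISK_REDUCTION * GALAXY_LIFESPAN_YEARS
print(f"1% risk reduction is worth a delay of {equivalent_delay_years:,.0f} years")
# prints "1% risk reduction is worth a delay of 10,000,000 years"
```

With any lifespan above a billion years, this reproduces the quote's "over 10 million years"; the mismatch between decade-scale delays and million-year-scale risk stakes is the whole argument.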

To suggest that computer science should be an important influence on AI is a bit like suggesting that woodworking should be an important influence on music, since most musical instruments are made out of wood.

Something of a jarring note in an otherwise interesting post (I'm at least curious to see the follow-up), in that you are a) reasoning by analogy and b) picking the wrong one: the usual story about music is that it begins with plucked strings and that the study of string resonance modes gave rise to the theories of tuning and harmony.

I think I understand better now.

Your proposal seems to involve throwing out "sophisticated mathematics" in favor of something else more practical, and probably more complex. You can't do that. Math always wins.

The problem with math is that it's too powerful: it describes everything, including everything you're not interested in. In theory, all you need to make an AI is a few Turing machines to simulate reality and Bayes' theorem to pick the right ones. In practice this AI would take an eternity to run. Turing machines live in a world of 0s and 1s, but we live in a world made of clouds and birds, and a machine that talks in binary about clouds and birds would be complicated and hard to find. For a practical AI, you need a model of computation that regards nouns, verbs and people as the building blocks of reality, and regards Turing machines as very weird examples of nouns. This model would perform worse than a Turing machine if presented with a freakish alternate universe with no concept of time or space, but otherwise it's fine. The hard part is compromising between simplicity and open-mindedness.
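The "Turing machines plus Bayes" recipe above can be sketched in miniature. Real Solomonoff-style induction enumerates all programs; this toy replaces them with three hand-written hypotheses (my own illustrative stand-ins, not anything from the comment) and a simplicity-flavored prior:

```python
# Toy sketch of "simulate reality with programs, pick with Bayes":
# each hypothesis is a deterministic predictor for a binary sequence,
# weighted by a prior (simpler "program" = higher prior), then updated
# on the observed data.
hypotheses = {
    "all_zeros": (0.50, lambda i: 0),      # simplest: constant 0
    "alternate": (0.30, lambda i: i % 2),  # 0, 1, 0, 1, ...
    "all_ones":  (0.20, lambda i: 1),      # constant 1
}

observed = [0, 1, 0, 1]  # the data our tiny "AI" sees

posteriors = {}
for name, (prior, predict) in hypotheses.items():
    # A deterministic hypothesis either fits every observed bit
    # (likelihood 1) or it doesn't (likelihood 0).
    fits = all(predict(i) == bit for i, bit in enumerate(observed))
    posteriors[name] = prior * fits

total = sum(posteriors.values())
posteriors = {name: p / total for name, p in posteriors.items()}
print(posteriors)
# prints {'all_zeros': 0.0, 'alternate': 1.0, 'all_ones': 0.0}
```

The intractability the comment points at is exactly what this hides: with all Turing machines instead of three lambdas, the loop never finishes, which is why the comment argues for primitives closer to clouds and birds than to 0s and 1s.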

The same applies to neural networks. In theory, the shape can be anything you like as long as it's big enough. (I'm leaving out a lot of details here, sorry.) Math is just the general framework that you build reality inside.

Empirical methods are upside down. You're starting with the gritty details, hoping that as everything piles up something more powerful than Bayesian inference will emerge. That won't happen. Instead you'll get a lousy, brittle copy of Bayesian inference that can't handle anything too different from what it was designed for... like a human.

(Edited for grammar)

Your proposal seems to involve throwing out "sophisticated mathematics"

I am not, of course, against mathematics per se. But the reason math is used in physics is because it describes reality. All too often in AI and computer vision, math seems to be used because it's impressive.

Obviously, in fields like physics math is very, very useful. In other cases, it's better to just go out and write down what you see. So cartographers make maps, zoologists write field guides, and linguists write dictionaries. Why, a priori, should we prefer one epistemological scheme to another?

"(and why did it take so long for people to figure out the part about empirical verification)?"

Most of the immediate progress after the advent of empiricism was about engineering more than science. I think the biggest hurdle wasn't lack of understanding of the importance of empirical verification, but lack of understanding of human biases.

Early scientists just assumed that they were either unbiased or that their biases wouldn't affect the data. They had no idea of the power of expectation and selection biases, placebo effects, etc. It wasn't until people realized this and started controlling for it that science took off.

'An important aspect of my proposal will be to expand the definitions of the words "scientific theory" and "scientific method"'

I have to admit that this idea makes me extremely wary, but that's probably because I'm used to statements like this coming from people with a harmful agenda (i.e. creationists). I'll try to keep an open mind when I read your future posts in this series.

I am unsure whether this is LW material. There are plenty of people with ideas about AI and it tends to generate more heat than light, from my experience. I'll reserve judgement though, since there is a need for a place to discuss things.

First I agree with the need to take AI in different directions.

However, I'm sceptical of the input/output view of intelligence. Humans aren't pure functions that always map the same input to the same output; their output depends on their history as well. So even if you have a system that corresponds to what a human does from time t to t+n, it may not correspond at times greater than t+n.

The way forward, for me, is to look at altering the software ecosystem. Currently the programs we write are static, rigid structures with limited awareness of the software around them. They are like this because it is easier for the human system administrator to deal with. We need to write software that looks at its computing environment and reasons about it to manage itself, and the (virtual) machines that enable this to be done in a controlled fashion.

So... what's your proposal?

I am going to post an outline of my proposal over the next couple of weeks. I expect most of you will disagree with most of it, but I hope we can at least identify concretely the points at which our views diverge. I am very interested in feedback and criticism, both on material issues (since we reason to argue) and on issues of style and presentation. The ideas are not fundamentally difficult; if you can't understand what I'm saying, I will accept at least three quarters of the blame.

Aw come on, just one little hint? Most posts have a tl;dr paragraph or a "related to" to help people understand.

To suggest that computer science should be an important influence on AI is a bit like suggesting that woodworking should be an important influence on music, since most musical instruments are made out of wood.

Computer science is probably not what you think it is. AI is included in it; but so is applied stuff like hacking. I think time (not watchmaking, just time) would make a better example.

Edited for trying/failing not to sound mean/weird.

Have you heard of the methodology proposed by cyberneticists and systems engineers and how is it similar or different from what you are proposing?

Edited for diplomacy/clarity.