RHollerith

Richard Hollerith. 15 miles north of San Francisco. hruvulum@gmail.com

My probability that AI research will end all human life is 0.92. It went up drastically when Eliezer started going public with his pessimistic assessment in April 2022. Until then, my confidence in MIRI (and the knowledge that MIRI had enough funding to employ many researchers) kept my probability down to about 0.4. (I am glad I found out about Eliezer's assessment.)

Currently I am willing to meet with almost anyone on the subject of AI extinction risk.

Last updated 26 Sep 2023.

Wiki Contributions

Comments

Just because it was not among the organizing principles of any of the literate societies before Jesus does not mean it is not part of the human mental architecture.

Answer by RHollerith · Apr 14, 2024

There is very little hope here IMO. The basic problem is that people have false confidence in measures to render a powerful AI safe (or in explanations of why the AI will turn out safe even if no one intervenes to make it safe). Although the warning shot might convince some people to switch from one source of false hope to a different source, it will not materially increase the number of people strongly committed to stopping AI research, all of whom have somehow come to doubt all of the many dozens of schemes published so far for rendering powerful AI safe (and the many explanations for why the AI will turn out safe even if we don't have a good plan for ensuring its safety).

I wouldn't be surprised to learn that Sean Carroll already did that!

Is it impossible that someday someone will derive the Born rule from Schrödinger's equation (plus perhaps some of the "background assumptions" relied on by the MWI)?

Being uncertain of the implications of a hypothesis has no bearing on the Kolmogorov complexity of that hypothesis.

Fire temperature can be computed from the fire's color.
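One way to make this concrete is Wien's displacement law for an ideal blackbody, sketched below. Note the caveats: a real flame is only approximately a blackbody (its color also depends on combustion chemistry), and the example wavelength is an illustrative assumption, not from the original comment.

```python
# Wien's displacement law: a blackbody's emission peaks at a wavelength
# inversely proportional to its temperature. Real fires are only
# approximate blackbodies, so this is a simplified model.

WIEN_B = 2.898e-3  # Wien's displacement constant, meter-kelvins

def blackbody_temp_from_peak(wavelength_m):
    """Temperature (K) of an ideal blackbody whose emission peaks at wavelength_m."""
    return WIEN_B / wavelength_m

# Illustrative example: emission peaking at 600 nm (orange light).
t = blackbody_temp_from_peak(600e-9)
print(round(t))  # 4830
```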

I'm tired of the worthless AI-generated art that writers here put in their posts and comments. Some might not be able to relate, but the way my brain works, I have to exert focus for a few seconds to suppress the effects of having seen the image before I can continue to engage with the writer's words. It is quite mentally effortful.

As a deep-learning novice, I found the post charming and informative.

The statement does not mention existential risk, but rather "the risk of extinction from AI".

Answer by RHollerith · Mar 15, 2024

Any computer program can be presented in the form of an equation. Specifically, you define a function named step such that step(s, input) = (s2, output), where s and s2 are "states", i.e., mathematical representations of the RAM, cache, and registers.

To run the computer program, you apply step to some starting state, yielding (s2, output), then you apply step to s2, yielding (s3, output2), then apply step to s3, and so on for billions of repetitions.
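The two paragraphs above can be sketched in a few lines of Python. This is a toy illustration under assumed names: the state is a small dict standing in for registers, and the step function and its fields (acc, pc) are hypothetical, not any real machine's architecture.

```python
# A tiny "computer" as a pure step function: step(s, input) -> (s2, output).
# The state s is a dict standing in for RAM/registers; the field names
# "acc" and "pc" are illustrative assumptions.

def step(s, inp):
    """One transition of the machine: consume one input, emit one output."""
    s2 = dict(s)                 # the next state (s2), derived from s
    s2["acc"] = s["acc"] + inp   # accumulate the input into a register
    s2["pc"] = s["pc"] + 1       # advance the program counter
    return s2, s2["acc"]         # output the accumulator's new value

# Run the program by applying step repeatedly, threading the state through:
# step(s, in1) -> (s2, out1), step(s2, in2) -> (s3, out2), and so on.
state = {"acc": 0, "pc": 0}
outputs = []
for inp in [1, 2, 3, 4]:
    state, out = step(state, inp)
    outputs.append(out)

print(outputs)       # [1, 3, 6, 10]
print(state["pc"])   # 4
```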

Another reply to your question asserts that equations cannot handle non-determinism. Untrue. To handle it, all we need to do is add another argument to step (call it rand) that describes the non-deterministic influences on the program. This is routinely done in formalisms for modelling causality, e.g., the structural equation models used in economics.
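The extra-argument trick can be sketched the same way. This is a hypothetical minimal example: the noise values are passed in explicitly as a sequence, just as a structural equation model treats the disturbance as an exogenous input rather than something hidden inside the function.

```python
# Non-determinism as an explicit argument: step(s, input, rand) -> (s2, output).
# The function itself stays a deterministic equation; all the "randomness"
# enters through rand, supplied from outside.

def step(s, inp, rand):
    """One transition; rand carries the non-deterministic influence."""
    s2 = dict(s)
    s2["acc"] = s["acc"] + inp + rand  # noise perturbs the update
    return s2, s2["acc"]

# Thread the state through, pairing each input with a noise value.
state = {"acc": 0}
for inp, rand in zip([1, 2, 3], [0, 1, -1]):
    state, out = step(state, inp, rand)

print(state["acc"])  # 6  (= 1+2+3 plus noise 0+1-1)
```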

So, in summary, your question has some implicit assumptions that would need to be made explicit before I can answer.
