If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
If continuing the discussion becomes impractical, that means you win at open threads; a celebratory top-level post on the topic is traditional.
I'm not sure if it is "worth saying," but a Google search for "Secret Bayesian Man" turned up nothing, so I wrote this:
ET Jaynes was a 1000-year-old vampire
Inspired Cog Sci, AI and Eliezer.
The Probabilities
Are something that we'll see,
Given that we have the same priors.
Secret Bayesian Man,
Secret Bayesian Man,
You update your beliefs,
Based on the evidence. (2x)
A grandmaster of Bayesian Statistics
He'll straighten out your biases and heuristics
You'll be Less Wrong than most
You'll take a "Rational Approach..."
Given that we have the same priors
Secret Bayesian Man
Secret Bayesian Man
You update your beliefs,
Based on the evidence. (2x)
Aumann states we must come to agreement
If we have common knowledge with no secrets
Our posteriors must be the same
Or one of us is to blame
Given that we have the same priors.
I deeply apologize.
I love the idea of the open thread. So many things I would like to discuss, but that I don't feel confident to actually make discussion posts on. Here's one:
On Accepting Compliments
Something I learned, and taught to all my students, is that when you are performing certain things (fire, hoops, bellydancing, whatever), people are going to be impressed, and are going to compliment you. Even though YOU know that you are nowhere near as good as Person X, or YOU know that you didn't have a good show, you ALWAYS accept their compliment. Doing otherwise is actually an insult to the person who just made an effort to express their appreciation to you. Anyway, you see new performers NOT following this advice all the time. And I know why. It's HARD to accept compliments, especially when you don't feel deserving of them. But you have to learn to do it anyway, because it's the right thing to do.
Same idea, said better by somebody else
This is one of those things that's probably a pet peeve of mine because I used to do it myself, but I figured I would share what I was told during my performance days. I've seen this phenomenon a bunch: a performer or presenter finishes, an audience member comes up and says something along the lines of "Great job," and the complimented party responds with something like:
"Oh I totally screwed up"
"No, I didn't really do anything"
"No, I thought it went awfully"
Invariably there are two things that drive this:
The presenter/performer is so caught up in their own self-examination that they are being hypercritical and sharing it with the complimenter.
The presenter/performer is concerned about the appearance of humility.
Both ignore a greater truth in the interaction: someone has said something nice to you, and you are immediately telling them they are wrong! Even if they don't directly perceive this, it can leave them with a bad taste in their mouth. So what do you do? Say "Thank you," that's it. Leave the self-examination stuff where it belongs: in your head. If you are concerned with your ego, accept and expand the compliment: "Thank you; I have to say the audience was really great, you guys asked really great questions."
It's a silly little thing, but it can have a big impact on how you are perceived.
I think this is applicable to all areas of life, not just performing. In fact, doing some googling, I found a Lifehack article on the subject. Some excerpts:
A compliment is, after all, a kind of gift, and turning down a gift insults the person giving it, suggesting that you don’t value them as highly as they value (soon to be “valued”) you. Alas, diminishing the impact of compliments is a pretty strong reflex for many of us. How can we undo what years of habitual practice has made almost unconscious?
Stop [...] making them work for it: Cut the long stream of “no, it was nothings” and “I just did what I had to dos” and let people give you the compliment. Putting it off until they’ve given it three or four times, each time more insistently, is selfish.
This link actually has sample dialogue, if that helps, but it is bellydance-centric.
A commonly claimed Great Filter issue is that Earth-like planets need a large moon to stabilize their axial tilt. However, recent research seems to indicate that this view is mistaken. In general, this seems to be part of a broader pattern where locations conducive to life look more and more common (for another recent example, see here), although I may have some confirmation bias here. Do these results force us to conclude that a substantial part of the Great Filter is in front of us?
No. The mainstream expectation has pretty much always been that locations conducive to life would be reasonably common; the results of the last couple of decades don't overturn that expectation, they reinforce it with hard data. The controversy has always been on the biological side: whether going from the proverbial warm little pond to a technological civilization is probable (in which case much of the Great Filter must be in front of us) or improbable (in which case we can't say anything about what's in front of us one way or the other). For what it's worth, I think the evidence is decisively in favor of the latter view.
Is it worth posting on LW a series of videos from the AI class that makes up a gentle introduction to the basics of game theory, for people who aren't in the class and aren't very good at math?
This is one of the videos from the series. There are also a few easy practical exercises and their solutions.
John Cheese from Cracked.com pulls out another few loops of bloodsoaked intestine and slaps them on the page as a ridiculously popular Internet humour piece: 9 YouTube Videos That Prove Anyone Can Get Sober. I hate watching video, and I sat down and watched the lot. Horrifying and compelling. I've been spending this afternoon reading the original thread. It's really bludgeoning home to me just how much we're robots made of meat and what a hard time the mind has trying to steer the elephant. Fighting akrasia is one thing - how do you fight an addiction with the power of your mind?
I was musing on the old joke about anti-Occamian priors or anti-induction: 'why are they sure it's a good idea? Well, it's never worked before.' Obviously this is a bad idea for our kind of universe, but what kind of universe does it work in?
Well, in what sort of universe would every failure of X to appear in a given time interval make X that much more likely? It sounds vaguely like the hope function, but actually sounds more like an urn of balls that you sample without replacement: every ball you pull (and discard) without finding X makes you a little more confident that the next one will be X. So, what kind of universe sees its possibilities shrinking every time?
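(To make the urn intuition concrete, here's a minimal sketch in Python; the numbers are made up for illustration. With exactly one X among N balls and sampling without replacement, every failed draw mechanically raises the probability that the next draw is X:)

    # Toy urn: one ball marked X among `total` balls, sampled without replacement.
    # Every non-X ball pulled and discarded makes X likelier on the next draw --
    # the anti-inductive pattern where repeated failure increases confidence.
    def p_next_is_x(total, failures):
        """P(next draw is X), given the first `failures` draws were all non-X."""
        return 1.0 / (total - failures)

    for failures in range(10):
        print(failures, round(p_next_is_x(10, failures), 3))
    # Prints 0 0.1, 1 0.111, ... 9 1.0: confidence rises with every failure.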
For some reason, entropy came to mind. Our universe moves from low to high entropy, and we use induction. If a universe moved in the opposite direction, from high to low entropy, would its minds use anti-induction? (Minds seem like they'd be possible, if odd; our minds require local lowering of entropy to operate in an environment of increasing entropy, so why not anti-minds which require local raising of entropy to operate in an environment of decreasing entropy - somewhat analogous to computers expending energy to erase bits, per Landauer's principle.)
I have no idea if this makes any sense. (To go back to the urn model, I was thinking of it as a sort of cellular-automaton mental model where every turn the plane shrinks: if you are predicting a glider as opposed to a huge Turing machine, then as every turn passes and the plane shrinks, the less you would expect to see the Turing machine survive and the more you would expect to see a glider show up. Or, if we were messing with geometry, it'd be as if we were given a heap of polygons with thousands of sides where every second a side was removed, and predicted a triangle: as the seconds pass, we don't see any triangles, but Real Soon Now... To put it another way, as entropy decreases, necessarily fewer and fewer arrangements show up; particular patterns get jettisoned as entropy shrinks, so having observed a particular pattern once, it's unlikely to sneak back in. If the whole universe freezes into one giant simple pattern, the anti-inductionist mind would be quite right to have expected all but one of its observations not to repeat. This is unlike our universe, where there seem to be ever more arrangements as things settle into thermal noise: if an arrangement shows up, we'll be seeing a lot of it around. Hence, we start with simple low-entropy predictions, with confidence decreasing as the predictions get more complex.)
Boxo suggested that anti-induction might be formalizable as the opposite of Solomonoff induction, but I couldn't see how that'd work: if it simply picks the opposite of a maximizing AIXI and minimizes its score, then it's the same thing but with an inverse utility function.
The other thing was putting a different probability distribution over programs, one that increases with length. But while uniform distributions over all the infinitely many integers are forbidden, and non-uniform decreasing distributions (like the speed prior or exponentials) are fine, it's not at all obvious what a non-uniform increasing distribution would look like - apparently it doesn't work to say 'infinite-length programs have p=0.5, then infinity-minus-1 have p=0.25, then infinity-minus-2 have p=0.125..., down to programs of length 1 or 0 with p=0'.
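(One way to state the obstruction, in my own formalization rather than anything from the thread: any prior over program lengths ℓ = 1, 2, 3, ... must normalize,

    \sum_{\ell=1}^{\infty} p(\ell) = 1,

which forces p(ℓ) → 0 as ℓ → ∞. But if p is non-decreasing and p(k) > 0 for some k, then p(ℓ) ≥ p(k) for every ℓ ≥ k, so the tail of the sum diverges; a non-decreasing p must therefore be identically zero, which is no distribution at all. There simply is no normalizable increasing prior over lengths, which is why the count-down-from-infinity trick can't be patched.)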
I was musing on the old joke about anti-Occamian priors or anti-induction: 'why are they sure it's a good idea? Well, it's never worked before.' Obviously this is a bad idea for our kind of universe, but what kind of universe does it work in?
How can they possibly know/think that 'it' has never worked before? That assumes reliability of memory/data storage devices.
I don't see how these anti-Occamians can ever conclude that data storage is reliable.
If they believe data storage is reliable, they can check whether or not it worked in the past. If it worked, then (by anti-induction) it is probably not reliable now. If it didn't work, then it didn't record correct information about the past. In neither case is the data storage reliable.
Cf. Eric Flint: I've always found the idea of bringing technology back in time very interesting. Specifically, I've always wondered what technology I could independently invent, and how early I could invent it. Of course, the thought experiment requires me to handwave away lots of concerns (like speaking the local language, not being killed as a heretic/outsider, and finding a patron).
Now, I'm not a scientist, but I think I could invent a steam engine if there were decent metallurgy already. Steam engine: fill a large enclosed container with water, heat the water to boiling, and let the steam go through a tube to turn a crank; voila, useful work. So, the 1000s in Europe, maybe?
I'd like to think that I could inspire someone like Descartes to invent calculus. But there's no way I could invent it on my own.
Anyone else ever had similar thoughts?
Of course; it's a common thought-experiment among geeks, ever since A Connecticut Yankee. There's even a shirt stuffed with technical info in case one ever goes back in time.
(FWIW, I think you'd do better with conceptual stuff like Descartes and gravity, which you can explain to the local savant and work on hammering out the details together; metallurgy is hard, and it's not like there weren't steam engines before the industrial revolution - they were just uselessly weak and expensive. Low cost of labor means machines are too expensive to be worth bothering with.)
Now, I'm not a scientist, but I think I could invent a steam engine if there were decent metallurgy already.
No way, unless perhaps you're an amateur craftsman with a dazzling variety of practical skills and an extraordinary talent for improvisation. And even if you managed to cobble together something that works, you likely wouldn't be able to put it to any profitable use in the given economic circumstances.
Funny comic from bouletcorp: Physics of a Pixelated World
I'm having a wall-banging philosophical disputation with someone over the word "scientism", but mostly over qualia and p-zombies, here. I'd appreciate help in working out whether any common ground can be found amid the spittle-flecked screaming at each other. (I would suggest diving in only if you have familiarity with the participants - assume we're all smart, particularly the guy who's wrong.)
I feel like I ought to admire your tenacity on that thread.
I don't, actually, but I feel like I ought to.
Anyway... no, I haven't a clue how you might go about resolving whatever commonality might exist there. Then again, I've never been able to successfully make contact across the qualia gulf, even with people whose use of the English language, and willingness to stick to a single topic of discussion, is better aligned with mine than yours and Lev's seem to be.
Heh. Am I making no sense either? I'm sticking with it because I've known Lev for decades and I'm a huge fan of his and I have little to no idea what the fuck he's on about with this crazy moon language. The p-zombie argument proves - proves - magic, rather than demonstrating that philosophers are easily convinced of rubbish? What?
(I have little doubt he's thinking the same of me.)
The thread is still going, by the way. Twenty days later. None of us know when to give up.
Well, you're making sense to me, but that's perhaps due to the fact that I basically would be saying the same things if I somehow found myself in that conversation. (Which could easily happen... hell, I've gotten into that conversation on LW.)
I think you would all benefit from drop-kicking about half the terms you're using, and unpacking them instead... it seems moderately clear that you don't agree on what a p-zombie is, for example. But I would be surprised if he agreed to that.
That said, I don't think he'd agree with your summary of his position.
Does he always have such eccentric syntax?
I'm curious what happened to SarahC. I enjoyed her presence, but I hadn't seen her recently, and I notice she's deleted her account (http://lesswrong.com/lw/3gu/the_best_textbooks_on_every_subject/3f00). Anyone know what happened?
I'm usually a lurker here, and I generally spend a little too much time on this site. I'm making a personal resolution: whenever I read an article here and find that I have nothing to say about it, I'll leave the site alone for the rest of the day. Under this policy, I expect that I will be spending less time here, and also that I will be contributing more.
For those interested in the Big Five: "The Big-Five Trait Taxonomy: History, Measurement, and Theoretical Perspectives".
There is an unfortunate equivocation in the word "theory" (compare "Theory of Evolution" to "Just War Theory"). Popper says that a theory can only be called scientific if it is falsifiable. Using that terminology, Freudian theory is pseudoscience, not a scientific theory. But many things that the vernacular calls theories are not falsifiable. (What would it mean to falsify utilitarian theory?)
Does that mean that we can't talk about moral theories? What word should we use instead? Because it seems like talking about moral theories is doing something productive.
For some context: I'm starting this post to separate off this conversation from a distinct conversation I'm having here.
I'm interested in conducting a simple, informal study requiring a moderate number of responses to be meaningful. Specifically, I want to look at some aspects of the "wisdom of the crowd". I'm new here, so I want to ask first: is LessWrong Discussion a good place to put things like this that ask people to take a quick survey in the name of satisfying my curiosity? Are there other websites where this is appropriate?