If it’s worth saying, but not worth its own post, here’s a place to put it. (You can also make a shortform post)

And, if you are new to LessWrong, here’s the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ.

The Open Thread sequence is here.

👋 Hey All, I'm basically new here, though I've been lurking for about two years. Although I've been interested in mathematical logic and epistemology as long as I can remember, and I had accidentally stumbled upon and bought Godel, Escher, Bach while ducking into a Barnes and Noble to use the bathroom, RAZ was what really tied everything together for me. I found that after someone recommended the HPMOR podcast to me, and then looked up all the references...

If you're reluctant to adjust your stock market exposure because selling ETFs would incur capital gains for you, there are ways to do it via options or futures instead. See https://www.cmegroup.com/education/whitepapers/hedging-with-e-mini-sp-500-future.html and https://www.optionseducation.org/strategies/all-strategies/synthetic-short-stock. (Disclaimer: I'm not a lawyer, accountant, or investment advisor, and the above is for general informational purposes only.)
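
If it helps to see why the combination behaves like a short stock position, here is a minimal Python sketch (my own made-up strike, premiums, and prices, purely illustrative and not investment advice) comparing the payoff of a synthetic short (long put plus short call at the same strike) with shorting the stock outright:

```python
# Toy payoff comparison; all numbers are hypothetical.

def synthetic_short_payoff(price_at_expiry, strike, put_premium, call_premium):
    put_payoff = max(strike - price_at_expiry, 0.0) - put_premium    # long put
    call_payoff = call_premium - max(price_at_expiry - strike, 0.0)  # short call
    return put_payoff + call_payoff

def short_stock_payoff(price_at_expiry, entry_price):
    return entry_price - price_at_expiry

strike = entry_price = 100.0
put_premium = call_premium = 5.0  # equal premiums keep the comparison clean

for price in (80.0, 100.0, 120.0):
    print(price,
          synthetic_short_payoff(price, strike, put_premium, call_premium),
          short_stock_payoff(price, entry_price))
# Both payoffs fall one-for-one as the price rises, which is the point of the hedge.
```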

Hello! I've been a lurker for a few weeks and promised myself I'd actually make an account after my university exams finished.

I'm 22, from England, and technically still an undergraduate student of English Language and Literature at the University of Oxford (though I did do an A-Level in philosophy). Since my final exams finished, I'm finding myself with much more time on my hands to learn new skills, and I was drawn to the rigour of discussion here. I took the fact that I was so intimidated by every single discussion point as a reason to jump in. Truthfully, I feel a bit of an outsider as an arts/humanities student on this forum, but also have a suspicion this feeling will prove unfounded.

I'm a newbie to almost every major topic of discussion at LW, but my main goals are: to absorb as much new knowledge as I can; to improve my writing style, which I dislike; and to define, and refine, my worldview based on hard thinking.

It's great to meet everyone!

Welcome! Come on in, the water's fine.

Mini-feature announcement I didn't have anywhere else to put: Links to Metaculus forecasting questions now have hover-previews that allow you to see the state of the prediction on Metaculus. See this link for an example.

Wow! Beautiful!

Hey. For the past 2 years, I have been working on LearnObit (https://learnobit.com).

When I first started learning to code, I used Anki and the old tried-and-tested Spaced Repetition algorithm, but it wasn't suitable for more complex, general topics like mathematics or physics. Because of that limitation, Spaced Repetition has always been widely overlooked despite its proven effectiveness in learning.

I wanted to unlock this effectiveness for a wider audience by building LearnObit, a tool that combines tree-structure note-taking tools with Spaced Repetition so that you can apply it to more than just simple fact pairs.
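
A minimal Python sketch of the general idea being described here, attaching spaced-repetition schedules to the nodes of a note tree; the scheduling rule below is a simplified SM-2-style update of my own, just an illustrative toy and not LearnObit's actual implementation (which isn't given here):

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Note:
    """A tree-structured note; any node can carry its own review schedule."""
    text: str
    children: list = field(default_factory=list)
    interval_days: int = 1      # current spaced-repetition interval
    ease: float = 2.5           # SM-2-style ease factor
    due: date = field(default_factory=date.today)

def review(note: Note, quality: int) -> None:
    """Update a node's schedule after a review graded 0-5 (simplified SM-2-like rule)."""
    if quality < 3:
        note.interval_days = 1                      # forgotten: start the interval over
    else:
        note.interval_days = round(note.interval_days * note.ease)
        note.ease = max(1.3, note.ease + 0.1 - (5 - quality) * 0.08)
    note.due = date.today() + timedelta(days=note.interval_days)

def due_notes(root: Note, today: date):
    """Walk the whole tree and yield every node that is due for review."""
    stack = [root]
    while stack:
        node = stack.pop()
        if node.due <= today:
            yield node
        stack.extend(node.children)
```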

And now it even has a feature to export your data to Anki, so you can leave with your data if you decide to stop using it! (Of course I don't want anyone to leave, but I couldn't help adding it, because people wouldn't start using it without this option in the first place...)

Anyway, I'll be here anytime if you have any questions for me. Thanks for your time.

I like LearnObit so far. We should talk sometime about possible improvements to the interface. Are you familiar with quantum.country? Similar goal, different methods, possible synergy. They demo what is (in my opinion) a very effective teaching technique, but provide no good method for people to create new material. I think with some minor tweaks LearnObit might be able to fill that gap.

There is a study reconstructing how the coronavirus spread in Austria (link to a German summary article on Spiegel Online). The researchers found not a single infection cluster in schools or kindergartens, and not a single case where a child was the source of infection in a family. The conclusions of the study include (a short version of the points listed in the article I am linking to):

  • people infect others before (they realize) they have symptoms,
  • onward transmission takes place within a few days of infection (3 to 5 days),
  • transmission happens when several people are in the same place for a longer period of time (about 15 minutes),
  • for most clusters, point exposures were found: one person is at the start of the chain, and more infections occur when that person is in contact with other people for a longer period (probably the 15 minutes from the previous point),
  • quarantine measures and barriers work,
  • there are currently no transmission chains that would provide evidence of transmission via public transport or shopping.

Hi everyone! The path that led me here started with my childhood desire not to die, then Charlie Munger's talk on the psychology of human misjudgement, then reading HPMOR. I'm currently reading the Sequences, and as of 5/17/2020 I've started the process of signing up for cryonics. Looking forward to the journey of becoming sane.

Howdy. I came across Ole Peters' "ergodicity economics" some time ago, and was interested to see what LW made of it. Apparently there is one set of skeptical journal-club meetup notes: https://www.lesswrong.com/posts/gptXmhJxFiEwuPN98/meetup-notes-ole-peters-on-ergodicity

I am not sure what to make of the criticisms in the Seattle meetup notes (they appear correct, but I am not sure they are relevant; see my comment there).

Not planning to write a proper post, but here is an example blog post of Peters' which I found illustrative and which demonstrates why I think the "ergodicity way of thinking" might have something to it: https://ergodicityeconomics.com/2020/02/26/democratic-domestic-product/ . In summary, looking at an aggregate ensemble quantity such as GDP per capita does not tell you much about what happens to individuals in the ensemble: the growth experienced by the typical individual in the population is not, in general, tied to growth in GDP per capita (which may be obvious to a numerate person, but not necessarily so, given the importance given to GDP in public discussion). And if one instead averages the exponential growth rates, one obtains a measure (the geometric mean income, which they dub "DDP") already known in the economics literature, but originally derived differently.
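
A toy Python sketch of the distinction (my own made-up incomes, not data from the post): GDP per capita is the arithmetic mean income, while averaging exponential growth rates corresponds to the geometric mean income, the quantity the post dubs "DDP".

```python
import math

incomes = [10_000, 20_000, 30_000, 40_000, 400_000]  # hypothetical, deliberately unequal

gdp_per_capita = sum(incomes) / len(incomes)                      # arithmetic mean
ddp = math.exp(sum(math.log(x) for x in incomes) / len(incomes))  # geometric mean

print(f"GDP per capita:       {gdp_per_capita:,.0f}")  # 100,000 -- pulled up by the top earner
print(f"DDP (geometric mean): {ddp:,.0f}")             # ~39,500 -- closer to the typical income
```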

But maybe this looks insightful to me because I am not very well-versed in the economics literature, so it would be nice to have some critical discussion about it.

I am not versed in economics literature, so I can't meet your need. But I have also encountered ergodicity economics, and thought it was interesting because it had good motivations.

I am skeptical for an entirely different reason; I encountered ergodic theory beforehand in the context of thermodynamics, where it has been harshly criticized. I instinctively feel like if we can do better in thermodynamics, we can employ the same math to do better in other areas.

Of course this isn't necessarily true: ergodic theory might cleave reality better when describing an economy than gas particles; there is probably a significant difference between the economics version and the thermodynamics version; the criticism of ergodic theory might be ass-wrong (I don't think it is, but I'm not qualified enough for strong confidence).

I was a regular at the South Bay meetups we had a few years ago, before our core members decided to do things other than host meetings in the South Bay. I'm back after using an alternative account when #NRx was more prominent among members here. I just want to say hello again to everyone on the new website (no clue how long it's been since we've had it), and I'm glad to be back and discussing things like Jaynes, Solomonoff induction, and how we can best prevent humanity's disasters, and indeed perhaps even cause golden ages. I'm a fan of Isaac Levi's work in formal epistemology, and I follow the author of the Incerto quite closely. Anyway, glad to be back here.

Hi, I'm new here. I read a bunch of the Sequences a few years ago (not sure how I got there).

Started reading Scott's and Zvi's posts on here recently after reading on their blogs for a while.

Welcome! I hope you will enjoy the things you find here!

[gjm]

In the space of four frames, today's SMBC comic touches on unfriendly AI, fun theory, and overfitting in machine learning.

[PP]

What is a reference image?

[gjm]

I assume it means an image used in the training process by which the robots learned to recognize things.

Hello! I'm more or less new here. I've been reading on and off for about a year, but I keep getting distracted by school. I'm a CS sophomore trying to gain control of my life and my mind, and trying to make friends who care about understanding things.

Hello, I wanted to post something, but when reading the guidelines I got a little confused. The source of my confusion is "Aim to explain, not persuade". English is not my native language, but when I googled "persuade" I found that it means "induce (someone) to do something through reasoning or argument". To me, discouraging that sounds a little ridiculous on a rationality forum.

Welcome!

The dictionary definition of "persuade" misses some of the connotations. Persuading someone often means "getting them to agree with you" and not "jointly arriving at what's true", which includes the possibility that others point out your mistakes and you change your mind. Explaining usually means something more like "laying out your reasoning and facts, which might lead someone to agree with you if they think your reasoning is good."

The key difference might be something like this: "persuading" is writing to get the reader to accept what is written regardless of whether it's true, while "explaining" wants the reader to accept the conclusion only if it's true. It's the idea of symmetric/asymmetric weapons in this post.

Sorry if that's still a bit unclear, I hope it helps.

Yeah, what Ruby said. I might also point to the difference between what a scientist and a politician do when giving a speech. A scientist attempts to explain the information they've discovered in order to help people understand some domain; a politician decides what they want you to do and then looks for information that gets you to do that thing, sometimes regardless of whether it's true. (Scientists don't always live up to the ideal, but it's a good ideal.)

[anonymous]

There's an interesting discussion here on Twitter about simplification and transparency of laws.

Robin Hanson reminded me of Leibniz, who wanted to design a legal system made up of only a few basic principles taken from ancient Roman law.

[anonymous]

I have a question about rationalization; dunno if someone can use it for a post.


Imagine you're a lawyer and someone comes to you saying he/she needs to be defended.

You don't know whether he/she is guilty or not. You can obviously guess, using all the data you can gather.

Your duty as a lawyer, if you accept the case, is to defend them no matter what. After all, everyone deserves due process.

Here's my dilemma: take A Rational Argument and the other posts from "Against Rationalization". It is said there that to present a rational argument, you have to gather up as much evidence as you can and only then choose the best candidate.

So there are a few paths involving laws and evidence:

A) Both law and evidence on your side

B) Law on your side

C) Evidence on your side

D) Neither law nor evidence

What should you do if you have only the law on your side? Would you accept the case knowing that there's a high probability your client is guilty?

It's part of the ethos of a good lawyer to accept guilty clients as well as clients who aren't guilty. The job of a lawyer also isn't to provide rational arguments but to provide arguments that serve the interests of their client.

[anonymous]

I know well what the ethos of lawyers is; my point was a different one. Lawyers can accept or decline cases, and their decision depends on two factors: defending someone because it's right, and being paid. I was trying to understand what a rational take on this matter would be, knowing that both ways are legitimate: maximizing profit, or choosing only safe cases.

[gjm]

What do you mean by "knowing that both ways are legit"? Only one way is legit: when someone comes to you needing defence and willing to pay your fees, you defend them.

(I think the actual system is a little different: a lawyer isn't expected to defend their client if they're sure the client is guilty; in that case they would ask them to find another lawyer, or something. But that isn't because those clients don't deserve defending, it's because they deserve defending better than someone who's sure they're guilty is likely to manage.)

[anonymous]

Hmm, not really; let's disambiguate. Here I am talking mainly about civil cases. There are ex officio lawyers appointed by the court, and freelance lawyers. Do freelance lawyers have a deontological duty to defend people? Yes. Do freelance lawyers have the right to choose their cases? Again, yes. How do you balance this? Lawyers have to point to valid reasons when they decline to defend someone. So both ways are legit: if someone comes to you, you can defend them or not. In that sense there is a margin to maximize profit.

[gjm]

It's true that lawyers aren't required to take every client who comes along, but I think the legal profession generally encourages them strongly to be willing to take unattractive cases. For instance, the ABA Model Code of Professional Responsibility has various things to say, of which I've excerpted the bits that seem to me most important (on both sides of the question):

A lawyer is under no obligation to act as adviser or advocate for every person who may wish to become his client; but in furtherance of the objective of the bar to make legal services fully available, a lawyer should not lightly decline proffered employment. The fulfillment of this objective requires acceptance by a lawyer of his share of tendered employment which may be unattractive both to him and the bar generally.
[...]
When a lawyer is appointed by a court or requested by a bar association to undertake representation of a person unable to obtain counsel, whether for financial or other reasons, he should not seek to be excused from undertaking the representation except for compelling reasons. Compelling reasons do not include such factors as the repugnance of the subject matter of the proceeding, the identity or position of a person involved in the case, the belief of the lawyer that the defendant in a criminal proceeding is guilty, or the belief of the lawyer regarding the merits of the civil case.

So they don't quite say that lawyers should never refuse to represent clients just because they think they're guilty. But they do say that lawyers should be willing to take "unattractive" cases, and that if a court assigns a lawyer to represent someone who can't afford to pay for his own lawyer then that lawyer shouldn't refuse just because they think the client is guilty.

So my earlier statement goes too far, but I think it's more right than wrong: in general lawyers aren't supposed to refuse to defend you just because they think you're probably guilty. Even though they are allowed to refuse to defend you.

[anonymous]

Upvoted because I agree. That’s what I was saying, they have to point to valid reasons.

It's not always rational to make a rational argument. Rationality, in the sense Eliezer defines it, has to do with your goals.

[anonymous]

Ok, thank you for your answers. I was searching for a new angle from which to look at my area of interest, but I've probably asked the wrong question.

I feel like there is no conflict here; in fact it is widely considered a deal-breaker for a client to be guilty and lie to their attorney about it. A client lying to a lawyer is one of the ethically accepted reasons to dump a client they have already agreed to serve. This isn't even pro-forma; in practice, lawyers don't blame one another for dumping clients that lie to them. Nor is it considered a black mark for future hiring with other law firms.

The important variables here are that the lawyer is constrained by the evidence, but they have a duty to their client. This is because lawyers are not fact finders; they are advocates. The American trial system employs the 'arguments are soldiers' approach specifically and deliberately, and then it has a lot of rules for setting a floor on how bad the arguments can be, and relies on nominally neutral third parties (a judge and/or jury) to assess them.

Consider that a lawyer can represent themselves, their family, or parties in whom they have a financial stake without conflict of interest. However it is considered a conflict of interest if they have a financial stake in the other party, or anything else that might compromise their commitment to advocacy of their client.

So at least in the American system, I put it to you that accepting the case with total certainty your client is guilty is both ethical and rational.

Is it ok to omit facts to your lawyer? I mean, is the lawyer entitled to know everything about the client?

Everything about the client *that is relevant to the case,* yes. Omitting relevant facts is grounds for terminating the relationship.

Does a predictable punchline have high or low entropy?

From False Laughter

You might say that a predictable punchline is too high-entropy to be funny

Since entropy is a measure of uncertainty a predictable punchline should be low entropy, no?

Yup, low. Although a high-entropy punchline probably wouldn't be funny either, for different reasons.
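
To make that concrete, here is a small Python sketch (my own toy distributions): Shannon entropy, H = -sum(p * log2 p), is low when one punchline is nearly certain and maximal when every punchline is equally likely.

```python
import math

def entropy(probs):
    """Shannon entropy in bits, skipping zero-probability outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

predictable = [0.97, 0.01, 0.01, 0.01]  # one punchline is almost certain
surprising  = [0.25, 0.25, 0.25, 0.25]  # any of four punchlines equally likely

print(entropy(predictable))  # ~0.24 bits: low entropy, low surprise
print(entropy(surprising))   # 2.0 bits: maximal entropy for four outcomes
```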

What's that SSC post where Scott talks about how he didn't think he terminally valued punishment, but then he was able to think of some sufficiently bad actions, and then he felt what seemed like a terminal value for the punishment of the bad actors?


I went through the titles of his posts over the last year and googled around a bit, but couldn't find it.

The Whole City is Center:

This story had a pretty big impact on me and made me try to generate examples of things that could happen such that I would really want the perpetrators to suffer, even more than consequentialism demanded. I may have turned some very nasty and imaginative parts of my brain, the ones that wrote the Broadcast interlude in Unsong, to imagining crimes perfectly calculated to enrage me. And in the end I did it. I broke my brain to the point where I can very much imagine certain things that would happen and make me want the perpetrator to suffer – not infinitely, but not zero either.

Four German medical professional associations demand an almost complete opening of schools and preschools due to supposedly low risk of coronavirus spread among children: https://www.spiegel.de/panorama/bildung/corona-krise-mediziner-fordern-komplette-schul-und-kita-oeffnung-a-4d1a0336-680d-4259-818e-7a263732f811

In a Twitter thread, Muge Cevik discusses evidence on coronavirus transmission dynamics:

https://twitter.com/mugecevik/status/1257392347010215947

I recently gave up on trying to keep track of where things are in the kitchen, since my husband also cooks and puts everything where he likes. So I just taped lists to the insides of the cabinet doors. Surprisingly efficient, except for the time when we literally lost a pan in the next drawer over. Since this worked, we now have a reminder of how to wash certain things in the washing machine taped above the machine, and a reminder of our sizes taped inside the wardrobe. I am thinking about labeling the shelves in the kid's wardrobe, but that might not be enforceable anyway.

Hello, I would like to ask a straightforward question: is there a logically valid way to call evolution science and creationism a pseudo-science? I am not a creationist; I just hope to strengthen my evolutionist position, and I would like to have it in a clear form. I must admit that because of a family member I visited a specific creationist site, and there I found the extraordinary claim that there is no such logically valid way. I would like to disprove this for myself and my family member by stating at least one, and then I want to stop visiting that site. Thanks in advance for your answer.

Maybe the answer is that creationist hypotheses don't make any falsifiable-but-confirmed predictions, but I am not sure.

And I would also like to ask a broader question: whether there is some rationally effective method of looking for the truth on the internet. Or, more simply stated, whom to believe.

Hey Peter, (I'm a mod here)

I think this is a reasonable question to have, but is something that LessWrong has kinda tried to move on from as a whole, so you may not find many people interested in answering it here. (i.e. LessWrong doesn't focus much on traditional skeptic/woo debates, instead trying to settle those questions so we can move on to more interesting problems)

Just wanted to let you know. I do think reading through many of the core sequences may help. (Unfortunately I don't currently know which sub-sections will be most relevant to you offhand)

Is there a logically valid way to call evolution science and creationism a pseudo-science?

I'm not sure what's meant by "logically valid" here. Formal logic does have a notion of validity, but it's not particularly useful for these kinds of questions. If I just came up with a definition of science and said that evolution matches that definition, and therefore evolution is a science, it would be a logically valid argument... even if my argument made no sense.

For instance, if I said that "scientific fields are those which involve eating cheese on the moon, evolution involves eating cheese on the moon, thus evolution is a scientific field", then technically, this is a logically valid argument. (See the link for an explanation of why.) It also makes no sense, but making sense is not a requirement for the technical meaning of "logical validity".
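
If it helps, the validity of that silly argument can even be machine-checked, because validity is purely about form. Here is a minimal Lean sketch (my own formalization, not something from the quoted page): the conclusion follows from the premises even though the premises are nonsense, i.e. the argument is valid but not sound.

```lean
-- Valid but unsound: absurd premises, yet the conclusion follows from them.
example (Field : Type) (EatsCheeseOnMoon Scientific : Field → Prop)
    (evolution : Field)
    (h1 : ∀ f, EatsCheeseOnMoon f → Scientific f)
    (h2 : EatsCheeseOnMoon evolution) :
    Scientific evolution :=
  h1 evolution h2
```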

So either the page has its own definition for what "logically valid" means, or it's just making a meaningless but impressive-sounding claim.

[TAG]

The question of what constitutes science is called the demarcation problem.

It doesn't have a generally accepted solution, but it doesn't follow from that that creationism is science. Pragmatically, science is what scientists say it is, and very few scientists are creationists.

This sounds like an XY problem. What you're looking for isn't logic, but epistemology.

Science is, and always has been, Natural Philosophy. Now, philosophy can be done well or badly. But Natural Philosophy has a great advantage over other philosophical branches: better data. This makes such a difference that scientists often don't think of themselves as philosophers anymore (but they are).

What makes the modern Scientific Method especially effective compared to older approaches is the rejection of unreliable epistemology. The whole culture of modern science is founded on that. Sure, we have better tools and more developed mathematics now, but the principles of epistemology we continue to use in science were known anciently. The difference is in what methods we don't accept as valid.

So is Creationism a philosophy of nature? Sure is! But they're making embarrassing, obvious mistakes in epistemology that would make any real scientist facepalm, and yet call themselves "scientific". That's why we call it pseudoscience.

If you're trying to win an argument with a creationist family member, presenting them evidence isn't going to help as long as their epistemology remains broken. They'll be unable to process it. Socratic questioning can lead your interlocutor to discover the contradictions their faulty methods must lead them to. Look up "street epistemology" for examples of the technique.

which more specifically would be something like "not using logical fallacies".

In the discourse that existed 20 years ago, better reasoning might have meant "not using logical fallacies". In this community it never meant that. In the beginning, in this community it meant "being a good Bayesian", which involves thinking in terms of probability and not in terms of logic. You could see following Tetlock's principles for superforecasters as an ideal for that kind of reasoning.
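
As a tiny illustration of what "thinking in terms of probability" means in practice (my own made-up numbers): you update degrees of belief with Bayes' rule instead of sorting arguments into valid and fallacious.

```python
# Bayes' rule with hypothetical numbers: evidence shifts belief, it doesn't "prove" anything.
prior_h = 0.30          # P(hypothesis)
p_e_given_h = 0.80      # P(evidence | hypothesis)
p_e_given_not_h = 0.20  # P(evidence | not hypothesis)

p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_h = p_e_given_h * prior_h / p_e

print(posterior_h)  # ~0.63: belief in the hypothesis roughly doubles, but stays uncertain
```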

Since then, we also have the Center for Applied Rationality, which focuses on neither probability nor logic. It has had a lot of influence on the current discourse that happens here. One aspect of it, as I see it, is that understanding other people's mental models and sharing your own mental models with other people is an important part of reasoning. Part of understanding models is finding cruxes.

I think the main problem with the whole idea of trying to be more rational is that the discussion usually focuses on only two dimensions: logic and evidence, while leaving out a crucial third one.

Which discussion are you talking about? If I asked a bunch of people who read LessWrong which dimensions our discussions focus on, I doubt a majority would answer logic and evidence.

If you like Yudkowskian fiction, Wertifloke = Eliezer Yudkowsky

The Waves Arisen https://wertifloke.wordpress.com/