I notice that when I write for a public audience, I usually present ideas in a modernist, skeptical, academic style; whereas the way I come up with ideas usually relies, in part, on epistemic modalities that such a style has difficulty conceptualizing or considers illegitimate, including:

  • Advanced introspection and self-therapy (including focusing and meditation)
  • Mathematical and/or analogical intuition applied everywhere with only spot checks (rather than rigorous proof) used for confirmation
  • Identity hacking, including virtue ethics, shadow-eating, and applied performativity theory
  • Altered states of mind, including psychotic and near-psychotic experiences
  • Advanced cynicism and conflict theory, including generalization from personal experience
  • Political radicalism and cultural criticism
  • Eastern mystical philosophy (esp. Taoism, Buddhism, Tantra)
  • Literal belief in self-fulfilling prophecies, illegible spiritual phenomena, etc., sometimes with decision-theoretic and/or naturalistic interpretations

This risks hiding where the knowledge actually came from. Someone could easily be misled into thinking they can do what I do, intellectually, just by being a skeptical academic.

I recall a conversation I had where someone (call them A) commented that some other person (call them B) had developed some ideas, then afterwards found academic sources agreeing with these ideas (or at least, seeming compatible), and cited these as sources in the blog post write-ups of these ideas. Person A believed that this was importantly bad in that it hides where the actual ideas came from and assigns credit for them to a system that did not actually produce them.

On the other hand, citing academics that agree with you is helpful to someone who is relying on academic peer-review as part of their epistemology. And, similarly, offering a rigorous proof is helpful for convincing someone of a mathematical principle they aren't already intuitively convinced of (in addition to constituting an extra check of this principle).

We can distinguish, then, the source of an idea from the presented epistemic justification of it. And the justificatory chain (to a skeptic) doesn't have to depend on the source. So there is a temptation to simply present the justificatory chain and hide the source (especially if the source is somehow embarrassing or delegitimized).

But this creates a distortion if people assume the justificatory chains are representative of the source. Information consumers may find themselves in an environment where claims are thrown around with various justifications, but where they would have quite a lot of difficulty coming up with and checking similar claims themselves.

And, a lot of the time, the source is important in the justification, because the source was the original reason for privileging the hypothesis. Many things can be partially rationally justified, but that partial justification is insufficient for credence unless one also knows something about the source. (The problems of skepticism in philosophy relate in part to this: "but you have the intuition too, don't you?" only works if the other person has the same intuition (and admits to it), and arguing without appeals to intuition is quite difficult.)

In addition, even if the idea is justified, the intuition itself is an artifact of value; knowing abstractly that "X" does not imply the actual ability to, in real situations, quickly derive the implications of "X". And so, sharing the source of the original intuition is helpful to consumers, if it can be shared. Very general sources are even more valuable, since they allow for generation of new intuitions on the fly.

Unfortunately, many such sources can't easily be shared. Some difficulties with doing so are essential and some are accidental. The essential difficulties have to do with the fact that teaching is hard; you can't assume the student already has the mental prerequisites to learn whatever you are trying to teach, as there is significant variation between different minds. The accidental difficulties have to do with social stigma, stylistic limitations, embarrassment, politics, privacy of others, etc.

Some methods for attempting to share such intuitions may result in text that seems personal and/or poetic, and is out of place in a skeptical academic context. This is in large part because such text isn't trying to justify itself by skeptical academic standards, yet is nevertheless attempting to communicate something.

Noticing this phenomenon has led me to appreciate the forewords and prefaces of books more. These sections often discuss more of the messiness of idea-development than the body of the book does. There may be a nice stylistic way of doing something similar for blog posts; perhaps an extended bibliography that includes free-form text.

I don't have a solution to this problem at the moment. However, I present this phenomenon as a problem, in the spirit of discussing problems before proposing solutions. I hope it is possible to reduce the accidental difficulties in sharing sources of knowledge, and to actually try on the essential difficulties, in a way that greatly increases the rate of interpersonal model-transfer.

Comments (40)

Lately I've been explicitly trying to trace the origins of the intuitions I use for various theoretical work, and writing up various key sources of background intuition. That was my main reason for writing a review of Design Principles of Biological Circuits, for instance. I do expect this will make it much easier to transfer my models to other people.

It sounds like many of the sources of your intuition are way more spiritual/political than most of mine, though. I have to admit I'd expect intuition-sources like mystic philosophy and conflict-y politics to systematically produce not-very-useful ideas, even in cases where the ideas are true. Specifically, I'd expect such intuition-sources to produce models without correct gears in them.

Instead of just gears vs. non-gears, it might be helpful to think of the size/complexity of the black boxes in question, with larger black boxes meaning less portability. Gears themselves are a black box but since we are rarely designing for environments at the extremes of steel's properties we don't have to think about it.

"... at the extremes of steels properties we don't have to think about it."

?

gjm:

If you're trying to understand literal gears then a simple model that says "the amount by which this one turns equals the amount by which that one turns, measured in teeth" (or something like that) is often sufficient even though it may break down badly if you try to operate your machine at a temperature of 3000 kelvin or to run it at a million RPM.
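To make that toy model concrete (a sketch, with symbols introduced here just for illustration): if meshed gears A and B have tooth counts $n_A$ and $n_B$ and rotate through angles $\theta_A$ and $\theta_B$, then "equal amounts, measured in teeth" is

$$n_A \frac{\theta_A}{2\pi} = n_B \frac{\theta_B}{2\pi} \quad\Rightarrow\quad \frac{\theta_B}{\theta_A} = \frac{n_A}{n_B}.$$

Temperature, speed, and material appear nowhere in the equation, which is exactly why it breaks down at 3000 kelvin or a million RPM.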

[EDITED to add:] I think you may have misparsed the end of romeostevensit's comment. Try it like this: "Gears themselves are a black box. But, since we are rarely designing for environments at the extremes of steel's properties, we don't have to think about it."

Thanks, pesky phone grammar.

I model part of this as: bounding the hypothesis space is underspecified and poorly understood. Since status attaches to confirmatory rather than exploratory research, there is pressure to present the exploratory phase as more like the confirmatory phase.

gjm:

First paragraph:

... the way I *come up with* ideas is ...

Third paragraph (after the bulleted list):

This risks hiding where the *knowledge* actually came from.

(Added emphasis mine.) It seems to me (and I guess I'm fairly typical of non-Berkeley rationalists in this) that it's 100% unproblematic to have your ideas come from Focusing, "near-psychotic experiences", Taoism, etc., but 100% problematic to claim that things that come from those are knowledge without some further evidence of a less-squishy kind.

An expert violin player who never read any academic papers about violin playing but has thousands of hours of practice doesn't only have correct ideas about violin playing, but can reasonably be said to have some knowledge. The violin player has metis. That metis can't easily be transferred.

The practices that jessica described also produce a kind of metis.

Metis is by its nature hard to transmit, and it's easy to search for some techne when making a claim that one believes because of metis.

gjm:

(I guess that when you wrote "piano" you meant "violin".) I agree: skills are acquired and preserved in a different way from factual knowledge, and there are mental skills as well as physical, and they may be highly relevant to figuring out what's true and what's false; e.g., if I present Terry Tao with some complicated proposition in (say) the theory of partial differential equations and give him 15 seconds to guess whether it's true or not then I bet he'll be right much more frequently than I would even if he doesn't do any explicit reasoning at all, because he's developed a good sense for what's true and what isn't.

But he would, I'm pretty sure, still classify his opinion as a hunch or guess or conjecture, and wouldn't call it knowledge.

I'd say the same about all varieties of mental metis (but cautiously, because maybe there are cases I've failed to imagine right). Practice (in various senses of that word) can give you very good hunches, but knowledge is a different thing and harder to come by.

One possible family of counterexamples: for things that are literally within yourself, it could well be possible to extend the range of things you are reliably aware of. Everyone can tell you, with amply justified confidence, whether or not they have toothache right now. Maybe there are ways to gain sufficient awareness of your internal workings that you have similar insight into whether your blood pressure is elevated, or whether you have higher than usual levels of cortisol in your bloodstream, etc. But I don't think this is the kind of thing jessicata is talking about here.

[EDITED to add:]

I wouldn't personally tend to call what-a-skilled-violin-player-has-but-can't-transfer-verbally "knowledge". I would be happy saying "she knows how to play the violin well", though. (Language is complicated.) I also wouldn't generally use the word "ideas". So (to whatever extent jessicata's language use is like mine, at least) the violin player may provide a useful analogy for what this post is about, but isn't an actual example of it, which is why I made the switch above from violinist to mathematician.

This whole discussion might be clearer with a modest selection of, say, 3-5 concrete examples of ideas jessicata has arrived at via epistemic modalities that modernist academic thinking doesn't care for, and might be tempted to justify via rigorous proof, academic sources, etc.; we could then consider what would happen to those cases, specifically, with a range of policies for what you say about where your ideas come from and why you believe them.

I think a person who has trained awareness of their own cortisol levels is likely to have some useful knowledge about cortisol.

They might have hundreds of experiences where they did X and then noticed their cortisol rising. If you talk with them about stress, they might have their own ontology that sorts activities into stressful and not-stressful based on whether or not they raise their own cortisol level. I do think that such an ontology provides fruitful knowledge.

A decade ago, plenty of psychologists ran around claiming that willpower is about how much glucose one has in one's blood. Professor Roy Baumeister wrote his book Willpower around that thesis. If Baumeister had worn a device that gave him 24/7 information about his glucose levels, I think he would have gained knowledge telling him that the thesis is wrong.

gjm:

Yes, I agree that there could be genuine knowledge to be had in such a case. But it seems to me that what it takes to make it genuine knowledge is exactly what the OP here is lamenting the demand for.

Suppose you practice some sort of (let's say) meditation, and after a while you become inwardly convinced that you are now aware at all times of the level of cortisol in your blood. You now try doing a bunch of things and see which ones lead to a "higher-cortisol experience". Do you have knowledge about what activities raise and lower cortisol levels yet? I say: no, because as yet you don't actually know that the thing you think is cortisol-awareness really is cortisol-awareness.

So now you test it. You hook up some sort of equipment that samples your blood and measures cortisol, and you do various things and record your estimates of your cortisol levels, and afterwards you compare them against what the machinery says. And lo, it turns out that you really have developed reliably accurate cortisol-awareness. Now do you have knowledge about what activities raise and lower cortisol levels? Yes, I think you do (with some caveats about just how thoroughly you've tested your cortisol-sense; it might turn out that it's usually good but systematically wrong in some way you didn't test).
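A minimal sketch of that comparison step, assuming you've logged paired readings during the test period (the names and numbers below are purely illustrative):

    # Sketch: quantify how well a trained "cortisol-sense" tracks measured cortisol.
    # Paired readings collected during the test period; the data is illustrative only.
    from statistics import correlation  # Pearson's r; Python 3.10+

    self_estimates = [12.0, 18.5, 9.0, 22.0, 15.5]  # subjective ratings, rescaled
    measured = [11.2, 19.1, 10.4, 20.8, 14.9]       # device readings (e.g. ug/dL)

    r = correlation(self_estimates, measured)
    print(f"r = {r:.2f}")  # consistently high r, across varied conditions,
                           # is evidence the sense is calibrated

Even a high correlation on such data only covers the conditions actually tested, which is the systematic-error caveat above.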

But this scientific evidence that your cortisol-sense really is a cortisol-sense is just what it takes to make appeals to that cortisol-sense no longer seem excessively subjective and unreliable and woo-y to hard-nosed rationalist types.

The specific examples jessicata gives in the OP seem to me to be ones where there isn't, as yet, that sort of rigorous systematic modernism-friendly science-style evidence that intuition reliably matches reality.

Any way you do science can turn out to be usually good but systematically wrong in some way you didn't test. Most placebo-blind studies are built on questionable assumptions about how blinding works.

According to their profile, jessicata works in "decision theory, social epistemology, strategy, naturalized agency, mathematical foundations, decentralized networking systems and applications, theory of mind, and functional programming languages".

In a field like theory of mind, there's no knowledge that's verified to standards that would satisfy a physicist. The knowledge you can have is less certain. Compared to the other knowledge sources that are available, improving your ability at self-introspection is a real help in building knowledge about the field.

gjm:

The whole apparatus of science is about reducing the opportunities for being systematically wrong in ways you didn't test. Sure, it doesn't always work, but if there's a better way I don't think the human race has found it yet.

If knowledge is much harder to come by in domain A than in domain B, you can either accept that you don't get to claim to know things as often in domain A, or else relax what you mean by "knowledge" when working in domain A. The latter feels better, because knowing things is nice, but I think the former is usually a better strategy. Otherwise there's too much temptation to start treating things you "know" only in the sense of (say) most people in the field having strong shared intuitions about them in the same way as you treat things you "know" in the sense of having solid experimental evidence despite repeated attempts at refutation.

(I guess that when you wrote "piano" you meant "violin".)

Yes, I changed the instrument and forgot to change all instances. I corrected it.

Definitely agree that e.g. focusing only yields very limited knowledge on its own (like, knowledge of the form "X resonated while Y didn't") even though it can yield more significant knowledge when additional checks are run.

I agree with the above comment.

"knowledge" and "ideas" are not the same.

The title of the post would be more appropriate as: "on hiding the source of ideas".

I'm not sure I agree that the intuition itself is an artifact of value. To use a concrete example: Kekulé conceived of the structure of benzene after having a dream where he saw an ouroboros. But what does that give us as a way of further investigation? Should we ask chemists to take melatonin for more vivid dreams?

I recall a conversation I had where someone (call them A) commented that some other person (call them B) had developed some ideas, then afterwards found academic sources agreeing with these ideas (or at least, seeming compatible), and cited these as sources in the blog post write-ups of these ideas. Person A believed that this was importantly bad in that it hides where the actual ideas came from and assigns credit for them to a system that did not actually produce them.

Person A is correct that this is importantly bad, but incorrect as to the reason. The reason this is bad is that it is indicative of bottom-line thinking. The problem isn't assigning credit to a system that didn't actually produce the ideas. The problem is selectively scanning for confirmatory evidence and discarding contradictory evidence because one is so wedded to one's intuition that one can't accept that the intuition might be wrong.

However, if Person B did the research, and surveyed all the evidence (including the evidence that disagreed with their intuition) and came to the conclusion that their intuition was correct, then I don't see what the problem is in saying, "I did a survey of the evidence for X, and I came to the conclusion that X is probably true." If anyone asks why you were investigating X in the first place, you can share that you had an intuition or a hunch. But at that point, the fact that you got the idea to study X from an intuition or a hunch no longer detracts from your evidence that X is true.

To use a concrete example: Kekulé conceived of the structure of benzene after having a dream where he saw an ouroboros.

The intuition is, then, crystallized in a representation of the structure of benzene, which chemists already know intuitively. If they had only abstract, non-intuitive knowledge of the form of benzene, they would have difficulty mapping such knowledge to e.g. spatial diagrams.

Intuitions can be more or less refined/crystallized, such that they can be more precisely specified, be more analytically tractable, be loadable into more different minds, etc.

Good teaching transfers intuitions, not just abstract knowledge.

Should we ask chemists to take melatonin for more vivid dreams?

If continued progress in chemistry requires insights like Kekulé's, then, yes, why not?

The reason this is bad is because it is indicative of bottom-line thinking.

  1. Couldn't it be the case that it is bad for both reasons? I don't think you've offered an argument that Person A's reasons don't apply.
  2. Couldn't it also be the case that the claim is already known through intuition, and proving it is the main problem? Of course checking against more things will produce higher confidence, but confidence can still exceed 99 percent before doing other checks. (Discarding disconfirming evidence is, of course, epistemically bad, because it's failing to update on observations; but it's also possible not to find such evidence in the course of searching.)

The intuition is, then, crystallized in the form of benzene, which chemists already know intuitively. If they had only abstract, non-intuitive knowledge of the form of benzene, they would have difficulty mapping such knowledge to e.g. spatial diagrams.

It seems to me that you are using the word “intuitively” in a very unusual way here. I would certainly not describe chemists’ knowledge of benzene’s form as “intuitive”… can you say more about what you mean by this term?

  • If you ask them to picture it in their mind, they can.
  • If you ask them to draw it, they can.
  • They can very quickly recognize that a diagram of benzene is, in fact, a diagram of benzene.
  • They can quickly answer questions like "does benzene contain a cycle?"

The generator for these is something like "they have a mental representation of the structure as a picture, prototype, graph, etc., which is hooked up to other parts of the mind and is available for quick use in mental procedures".
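As a toy illustration of that kind of hooked-up representation (a sketch: the graph below is just benzene's six-carbon ring skeleton, ignoring hydrogens and bond orders):

    # Benzene's carbon skeleton as an adjacency list: six carbons in a ring.
    benzene = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}

    def has_cycle(graph):
        """Detect a cycle in an undirected graph via depth-first search."""
        seen = set()
        def dfs(node, parent):
            seen.add(node)
            for nbr in graph[node]:
                if nbr not in seen:
                    if dfs(nbr, node):
                        return True
                elif nbr != parent:  # a visited neighbor other than our parent closes a cycle
                    return True
            return False
        return any(dfs(n, None) for n in graph if n not in seen)

    print(has_cycle(benzene))  # True

The point isn't the code; it's that once such a representation is loaded, queries like "does benzene contain a cycle?" are cheap.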

To elaborate, I will quote Guessing the Teacher's Password:

When I was older, and I began to read the Feynman Lectures on Physics, I ran across a gem called “the wave equation.” I could follow the equation’s derivation, but, looking back, I couldn’t see its truth at a glance. So I thought about the wave equation for three days, on and off, until I saw that it was embarrassingly obvious. And when I finally understood, I realized that the whole time I had accepted the honest assurance of physicists that light was waves, sound was waves, matter was waves, I had not had the vaguest idea of what the word “wave” meant to a physicist.

The "[seeing] that it was embarrassingly obvious" is only the case after having the intuition.

Alright, fair enough, this is certainly… something (that is, you have answered my question of “what do you mean by ‘intuition’”, though I am not sure what I’d call this thing you’re describing or even that it’s a single, monolithic phenomenon)… but it’s not at all what people usually mean when they talk about ‘intuition’.

This revelation makes your post very confusing and hard to parse! (What’s more, it seems like you actually use ‘intuition’ in your post in several different ways, making it even more confusing.) I will have to reread the post carefully, but I can say that I no longer have any clear idea what you are saying in it (whereas before I did—though, clearly, that impression was mistaken).

but it’s not at all what people usually mean when they talk about ‘intuition’.

In my own case I immediately and exactly match Jessica's use of "intuition", and I expect that is what most people usually mean when they talk about intuition; so, given a sample size of 3 here, I think this claim requires greater justification if it seems important to you.

intuition

  1. the ability to understand something instinctively, without the need for conscious reasoning.

"they have a mental representation of the structure as a picture, prototype, graph, etc., which is hooked up to other parts of the mind and is available for quick use in mental procedures".

I.e., they have the knowledge stored and accessible. The conscious reasoning has already occurred for understanding. Recalling it without (much) effort isn't intuition, even if it happens as a response to prior training/on a mostly subconscious level/instinctively.


To me, "intuition" is something that comes from somewhere - a gut feeling, an inspiration, that kind of thing. Intuitively sensing/feeling/knowing what to do in a situation. Quite an experience when "auto-pilot" takes over...

That all sounds like part of the same cluster of mental movements to me, i.e. all the stuff that isn't deliberative.

Talking about "intuition" I distinguish between:

  • Something that 'just comes to you' - what I would call an 'intuition'. This seems to fit with the original post's usage and the example of "the structure of benzene came in a dream".
  • Something done 'automatically/without thinking' but that has been learned, i.e. the example of a chemist being able to recognise and represent benzene is not an intuition. It is knowledge that originated from an intuition.

The issue comes with the usage of "intuitively" in the comments with the examples given. The difference between something learned and something spontaneous/organic that occurs.

e.g. The person that can pick up an instrument and play it without prior training is using intuition, an instinctive feel for how to make it work versus the person that's practised for years, conscious of their actions until they are so well trained they can play automatically.

Couldn't it also be the case that the claim is already known through intuition, and proving it is the main problem? Of course, checking against more things will produce higher confidence, but confidence can still exceed 99 percent without doing other checks

How can something have any confidence behind it, much less greater than 99%, without evidence? 99% confidence on the basis of intuition alone is religion, not rationality.

What's 15*5? (I'm sure you can answer this in <20 seconds)

How confident are you?

How rigorously did you check?

I was fairly confident of my answer, but I still used a calculator to double-check my math. Moreover, I have computed 15*5 in the past, and I was able to check against the memory of those computations to ensure that I had the correct answer. Finally, math is not a science. Confidence applies to scientific results, which rely on experimental and observational evidence about the world to support or oppose specific hypotheses. The answer to 15*5 is not a hypothesis. It is a fact, which can be proven to be correct in the context of a particular mathematical system.

A scientific hypothesis, like the structure of benzene, is not reliant upon logical proof in the same way that a mathematical result is. If I have a proof of the answer to 15 * 5, then I know it is correct, absent any errors in my proof. However, if I have a particular hypothesis regarding the structure of benzene, or the nature of gravity, the logical soundness of the argument in favor of my hypothesis offers no evidence as to the argument's correctness. Only evidence can entangle my logical argument with the state of the world, and allow me to use my logical argument to make predictions.

I agree 99% is probably too high in the case of benzene (and most other scientific hypotheses not derived from established ones). Although, Einstein's Arrogance is a counterpoint.

There are empirical claims it's possible to be very confident in without having checked, e.g. whether Australia or Asia has a larger land area, whether the average fly is larger or smaller than the average mouse, whether you're going to be hit by a car the next time you cross the street, etc.

(See also Blind Empiricism and The Sin of Underconfidence for other general counterpoints)

Einstein's Arrogance isn't as much of a counterpoint as you think it is. Yes, Einstein was arrogant, but we only remember his arrogance because he was right. Would we be holding Einstein's arrogance as such a good thing if Eddington's expedition had disconfirmed General Relativity? What if the orbital anomalies of Mercury had been explained by the presence of another planet even closer to the Sun? Without the numerous experiments confirming General Relativity, Einstein would be just another kook, with a set of papers that had interesting mathematics, perhaps, but whose hypotheses were refuted by observation.

As far as Blind Empiricism goes, I do find it telling that Japan did try the solutions that Eliezer proposed. However, due to factors that Eliezer did not consider, the Japanese government was not able to go as far with those solutions as Eliezer predicted, and as a result, the performance of the Japanese economy has remained laggardly. So perhaps Eliezer's confidence in his ability to figure out macroeconomics from first principles isn't as great as he thought it was, and more empiricism is required.

Finally, with regards to The Sin of Underconfidence, while I agree that underconfidence leads one to pass up opportunities that one might have otherwise taken, I would argue that overconfidence is much worse. As Eliezer also stated:

One of the chief pieces of advice I give to aspiring rationalists is “Don’t try to be clever.” And, “Listen to those quiet, nagging doubts.” If you don’t know, you don’t know what you don’t know, you don’t know how much you don’t know, and you don’t know how much you needed to know.

There is no second-order rationality. There is only a blind leap into what may or may not be a flaming lava pit. Once you know, it will be too late for blindness.

While it's certainly possible to be very confident in empirical claims without having checked, I don't think it's correct to do so. I am very confident that Australia has a smaller land mass than Asia, but the reason I am so confident is that I have repeatedly seen maps and atlases that show me that fact. If I did not, I would be as confident of my answer as I would be of my answer to the question, "Which has the greater landmass, the British Isles or the Japanese Home Islands?" Similarly, I have observed numerous flies and a few mice, and thus I can claim that the average fly is smaller than the average mouse. If I had not, I would be much less confident of my answer, much like I have little confidence in my answer to, "Which is larger? The average spider or the average fly?" Finally, I have absolutely no confidence in my intuitive answer to, "Am I going to get hit by a car when I cross the street?" This is why I look both ways before stepping out into the road, even when it's a "quiet" street. As someone who goes long-distance running, I have had enough unpleasant surprises there that I double and sometimes triple check before stepping out. Do you mean to suggest that you step out into roads without looking both ways?

(I'm not going to comment on Eliezer's counterpoints except to say I agree with you that he was wrong about macroecon; seems easier to just discuss the theory directly)

Do you mean to suggest that you step out into roads without looking both ways?

No. But if I predicted I'd get hit with >1% probability, I'd avoid roads much more than I currently do. Due to the usual VNM considerations.
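To spell out the expected-utility arithmetic (with symbols introduced just for illustration): if a crossing ends in being hit with probability $p$, with utility $u_{hit} \ll 0$ in that case and $u_{cross} > 0$ otherwise, then

$$E[u] = p \, u_{hit} + (1 - p) \, u_{cross}.$$

At $p > 0.01$, the $p \, u_{hit}$ term swamps any plausible $u_{cross}$ and crossing stops being worth it; at the actual, much smaller $p$, it doesn't.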

In a sense you haven't checked whether Australia has a lower land area than Asia. You have read atlases and have (a) visually inspected the areas and gained the sense that one area is larger than the other, (b) had a background sense that maps are pretty accurate (corresponding to actual land shapes), (c) done some kind of reasoning to infer from observations so far that Australia in fact has a lower land area than Asia.

Yes, this is a semantic issue of what counts as "checking", but that is exactly the issue at hand. Of course it's possible to check claims against memory, intuition, mental calculation, the Internet, etc, but every such check has only limited reliability.

Finally, I will note that in this discussion, you have been making a bunch of claims, such as "99% confidence on the basis of intuition alone is religion, not rationality", and "Only evidence can entangle my logical argument with the state of the world", that seem incredibly difficult to check (or even precisely define), such that I cannot possibly believe that you have checked them to the standard you are demanding.

Yes, this is a semantic issue of what counts as “checking”, but that is exactly the issue at hand. Of course it’s possible to check claims against memory, intuition, mental calculation, the Internet, etc, but every such check has only limited reliability.

That is correct, but as Isaac Asimov pointed out in The Relativity of Wrong, there is a big difference between saying, "Every such check has limited reliability," and "Checking is the same as not checking." If someone came to me tomorrow and said, "You're completely wrong, quanticle, in fact Australia has a larger land mass than Asia," I would be skeptical, and I would point out the massive preponderance of evidence in my favor. But if they managed to produce the extraordinary evidence required for me to update my beliefs, I would. However, they would have to actually produce that evidence. Simply saying, "I intuitively believe it to be true with high probability," is not evidence.

To go back to the original claim you took issue with:

Couldn’t it also be the case that the claim is already known through intuition

In this case I did mean "intuition" to include some checks, e.g. compatibility with memory, analogy with similar cases, etc. Brains already do checks when processing thoughts (because some thoughts register as surprising and some don't). But these checks are insufficient to convince a skeptical audience, is the point. Which is why "I intuitively believe this" is not an argument, even if it's Bayesian evidence to the intuition-haver. (And, trivially, intuitions could be Bayesian evidence, in cases where they are correlated with reality, e.g. due to mental architecture, and such correlations can be evaluated historically.)
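To make "intuitions as Bayesian evidence" concrete (with illustrative numbers): suppose the historical track record says an intuition of this kind fires with probability $P(I \mid H) = 0.8$ when the hypothesis is true and $P(I \mid \lnot H) = 0.2$ when it is false. Then the posterior odds are

$$\frac{P(H \mid I)}{P(\lnot H \mid I)} = \frac{P(I \mid H)}{P(I \mid \lnot H)} \cdot \frac{P(H)}{P(\lnot H)} = 4 \cdot \frac{P(H)}{P(\lnot H)},$$

a real update for the intuition-haver, but only as credible as the track record used to estimate those likelihoods; and that track record is precisely what a skeptical audience can't check.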

There seem to be some semantic disagreements here about what constitutes "evidence", "intuition", "checking", etc, which I'm not that enthusiastic about resolving in this discussion, but are worth noting anyway.

But these checks are insufficient to convince a skeptical audience, is the point.

Yes, I see that as a feature, whereas you seem to see it as somewhat of a bug. Given our propensity for self-deception and the limits of our brains, we should gather evidence even when our intuition is very strong, and we should be suspicious of others who have strong intuitions but don't seem to have any sort of analytical evidence to back their claims up.

I don't see any risk to hiding the origins of one's ideas, if one has experimental evidence confirming them. Similarly, I don't see the benefit of disclosing the sources of unconfirmed ideas. Where the idea comes from (a dream, an intuitive leap, an LSD trip, a reasoned inference from a literature review) is far less important than actually doing the work to confirm or disprove the idea.

Jessica’s very unusual use of the word ‘intuition’ is responsible for the confusion here, I think.

99% confidence on the basis of intuition[common_usage] alone is indeed religion (or whatever).

99% confidence on the basis of intuition[Jessica’s_usage] seems unproblematic.

I need to work things out carefully not to obey norms of communication (I have some posts on reasoning inside causal networks that I think literally only Ilya S. put in the effort to decode - weird flex, I know), but to protect me from myself.

But maybe I'm thinking of a different context than you, here. If I was writing about swing dancing or easy physics, I'd probably be a lot happier to trust my gut. But for new research, or in other cases where there's optimization pressure working against human intuition, I think it's better to check, and not much worth reading low-bandwidth text from someone who doesn't check.

I recall a conversation I had where someone (call them A) commented that some other person (call them B) had developed some ideas, then afterwards found academic sources agreeing with these ideas (or at least, seeming compatible), and cited these as sources in the blog post write-ups of these ideas.

FWIW I've heard enough people admit to this practice, and enough secondhand accounts which I consider reliable, that I think it's *extremely* common, and not just in blog posts. I've also had many different people comment on my work asking me to add academic citations (not asking for support of a specific point they thought needed justification; just asking for academic citations in general), so I can see where the temptation to do this would come from.

Oh man, so much this. I feel like many of the differences in how much different groups respect the writing of different authors come down to differences in what is an acceptable justification to provide for how you know something and how you came to know it. For example, one of the things I hated about working as a mathematician was that publications explicitly want you to cut out most of the explanation of how you figured something out and just stick to the minimum necessary description. There's something elegant about doing that, but it also works against helping people develop their mathematical intuitions such that they can follow after you or know why what you're doing might matter.

See also: 7 Great Examples of Scientific Discoveries Made in Dreams

(A pop-sci article... I'd love to see a more careful study of the origins of scientific discoveries.)

The title of the post would be more accurate as 'on hiding the source of ideas'.

Some phrases used in the past to let information consumers know that it wasn't all just working in the library/hard thinking/research that inspired them include:

  • "altered state of consciousness"
  • "I had an insight"
  • "It came to me when (insert acceptable close description - 'I was in the desert' is better than 'off my face at event X').

The details aren't really important if you leave some crumbs. Other people will find their own way to their own discoveries/ideas through various means.

If your ideas stand up to scrutiny - to become accepted knowledge - then what triggered them doesn't matter to the idea. But it would be good if there wasn't a need to hide so much of ourselves - an interesting post.