My name is Brent, and I'm probably insane.
I can perform various experimental tests to verify that I do not perform primate pack-bonding rituals correctly, which is about half of what we mean by "insane". This concerns me simply from a utilitarian perspective (separation from pack makes ego-depletion problems harder; it makes resources harder to come by; and it simply sucks to experience "from the inside"), but these are not the things that concern me most.
The thing that concerns me most is this:
What if the very tools that I use to make decisions are flawed?
I stumbled upon Bayesian techniques as a young child; I was lucky enough to do a lot of self-guided artificial intelligence "research" in junior high and high school, due to growing up in a time and place when computers were utterly mysterious, so no one could really tell me what I was "supposed" to be doing with them - so I started making simple video games, had no opponents to play them against due to the aforementioned failures to correctly perform pack-bonding rituals, decided to create my own, became dissatisfied with the quality of my opponents, and suddenly found myself chewing on Hofstadter and Wiener and Minsky.
I'm filling in that bit of detail to explain that I have been attempting to operate as a rational intelligence for quite some time, so I believe that I've become very familiar with the kinds of "bugs" that I will tend to exhibit.
I've spent a very long time attempting to correct for my cognitive biases, edit out tendencies to seek comfortable-but-misleading inputs, and otherwise "force" myself to be rational, and often the result is that my "will" cracks under the strain. My entire utility-table suddenly flips on its head and attempts to maximize my own self-destruction rather than allow me to continue torturing it with endlessly recursive, unsolvable problems that all tend to boil down to "you do not have sufficient social power, and humans are savage and cruel no matter how much you care about them."
Most of my energy is spent attempting to maintain positive, rational, long-term goals in the face of some kind of regedit-hack of my utility table itself, coming from somewhere in my subconscious that I can't seem to gain write-access to.
Clearly, the transhumanist solution would be to identify the underlying physical storage where the bug is occurring, and replace it with a less-malfunctioning piece of hardware.
Hopefully someday someone with more self-control, financial resources, and social resources than I will invent a method to do that, and I can get enough of a partial personectomy to create something viable with the remaining subroutines.
In the meantime, what is someone who wishes to be rational supposed to do, when the underlying hardware simply won't cooperate?
Chorus: Hi, Brent.
Being aware of the biases, yet unable to adapt your reasoning to compensate, seems contradictory. When you say "I know I only think X because of bias Y, so my actual belief should be Z", you seem to have already solved the problem in that instance, simply by swapping them out (in lambda-calculus notation: E[X:=Z], substituting Z for X throughout your belief E).
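The substitution E[X:=Z] can be sketched concretely. Below is a minimal illustration of my own (the tuple representation and function name are not from the post, and it deliberately ignores variable capture), treating a belief as a lambda term and swapping out the biased component:

```python
# A belief is a lambda term: a variable is a string, an application is
# ("app", function, argument), an abstraction is ("lam", var, body).

def substitute(expr, x, z):
    """Replace free occurrences of variable x in expr with z.

    Note: this sketch does not do capture-avoiding renaming; it is only
    meant to illustrate the E[X:=Z] notation from the comment above.
    """
    if isinstance(expr, str):                      # variable: swap if it matches
        return z if expr == x else expr
    tag = expr[0]
    if tag == "app":                               # application: recurse into both parts
        return ("app", substitute(expr[1], x, z), substitute(expr[2], x, z))
    if tag == "lam":                               # abstraction: x is bound here, stop
        var, body = expr[1], expr[2]
        if var == x:
            return expr
        return ("lam", var, substitute(body, x, z))
    raise ValueError("unknown term")

# "I believe X (because of bias Y)" becomes "I believe Z":
belief = ("app", "believes", "X")
print(substitute(belief, "X", "Z"))                # ('app', 'believes', 'Z')
```

The point of the analogy: once you can state "X is only there because of bias Y, and Z is what belongs there", the correction is a mechanical rewrite - which is exactly why the *unknown* unknowns discussed next are the hard part.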
The unknown unknowns are, in my opinion, the crux of the problem: those biases you did not (yet) recognize in specific situations, regardless of how well you have trained yourself to reflect on your own reasoning. Due to the nature of the problem, we wouldn't even be aware of how much progress we had made in recognizing biases, or how much is left to be done. (Comparing beliefs across reasoning agents would help: by Aumann's agreement theorem, we can in principle eliminate - or at least notice the existence of - biases that we do not share, but two agents with a shared bias would still converge on the same belief and thus remain oblivious to it*.)
What to do? Do the best with the hand you were dealt: if it turned out (as a cosmic joke) that Occam's Razor didn't hold for vetting ToEs after all, too bad. At least we would have done our very best.
* I'm not certain this is a formal result, but it should hold in the majority of cases. Comments welcome.