Ends Don't Justify Means (Among Humans)

Quantified Humanism

Followup to: Why Does Power Corrupt?

"If the ends don't justify the means, what does?"
        —variously attributed

"I think of myself as running on hostile hardware."
        —Justin Corwin

Yesterday I talked about how humans may have evolved a structure of political revolution, beginning by believing themselves morally superior to the corrupt current power structure, but ending by being corrupted by power themselves—not by any plan in their own minds, but by the echo of ancestors who did the same and thereby reproduced.

This fits the template:

In some cases, human beings have evolved in such fashion as to think that they are doing X for prosocial reason Y, but when human beings actually do X, other adaptations execute to promote self-benefiting consequence Z.

From this proposition, I now move on to my main point, a question considerably outside the realm of classical Bayesian decision theory:

"What if I'm running on corrupted hardware?"

In such a case as this, you might even find yourself uttering such seemingly paradoxical statements—sheer nonsense from the perspective of classical decision theory—as:

"The ends don't justify the means."

But if you are running on corrupted hardware, then the reflective observation that it seems like a righteous and altruistic act to seize power for yourself—this seeming may not be much evidence for the proposition that seizing power is in fact the action that will most benefit the tribe.

By the power of naive realism, the corrupted hardware that you run on, and the corrupted seemings that it computes, will seem like the fabric of the very world itself—simply the way-things-are.

And so we have the bizarre-seeming rule:  "For the good of the tribe, do not cheat to seize power even when it would provide a net benefit to the tribe."

Indeed it may be wiser to phrase it this way:  If you just say, "when it seems like it would provide a net benefit to the tribe", then you get people who say, "But it doesn't just seem that way—it would provide a net benefit to the tribe if I were in charge."

The notion of untrusted hardware seems like something wholly outside the realm of classical decision theory.  (What it does to reflective decision theory I can't yet say, but that would seem to be the appropriate level to handle it.)

But on a human level, the patch seems straightforward.  Once you know about the warp, you create rules that describe the warped behavior and outlaw it.  A rule that says, "For the good of the tribe, do not cheat to seize power even for the good of the tribe."  Or "For the good of the tribe, do not murder even for the good of the tribe."

And now the philosopher comes and presents their "thought experiment"—setting up a scenario in which, by stipulation, the only possible way to save five innocent lives is to murder one innocent person, and this murder is certain to save the five lives.  "There's a train heading to run over five innocent people, whom you can't possibly warn to jump out of the way, but you can push one innocent person into the path of the train, which will stop the train.  These are your only options; what do you do?"

An altruistic human, who has accepted certain deontological prohibitions—which seem well justified by some historical statistics on the results of reasoning in certain ways on untrustworthy hardware—may experience some mental distress, on encountering this thought experiment.

So here's a reply to that philosopher's scenario, which I have yet to hear any philosopher's victim give:

"You stipulate that the only possible way to save five innocent lives is to murder one innocent person, and this murder will definitely save the five lives, and that these facts are known to me with effective certainty.  But since I am running on corrupted hardware, I can't occupy the epistemic state you want me to imagine.  Therefore I reply that, in a society of Artificial Intelligences worthy of personhood and lacking any inbuilt tendency to be corrupted by power, it would be right for the AI to murder the one innocent person to save five, and moreover all its peers would agree.  However, I refuse to extend this reply to myself, because the epistemic state you ask me to imagine, can only exist among other kinds of people than human beings."

Now, to me this seems like a dodge.  I think the universe is sufficiently unkind that we can justly be forced to consider situations of this sort.  The sort of person who goes around proposing that sort of thought experiment, might well deserve that sort of answer.  But any human legal system does embody some answer to the question "How many innocent people can we put in jail to get the guilty ones?", even if the number isn't written down.

As a human, I try to abide by the deontological prohibitions that humans have made to live in peace with one another.  But I don't think that our deontological prohibitions are literally inherently nonconsequentially terminally right.  I endorse "the end doesn't justify the means" as a principle to guide humans running on corrupted hardware, but I wouldn't endorse it as a principle for a society of AIs that make well-calibrated estimates.  (If you have one AI in a society of humans, that does bring in other considerations, like whether the humans learn from your example.)

And so I wouldn't say that a well-designed Friendly AI must necessarily refuse to push that one person off the ledge to stop the train.  Obviously, I would expect any decent superintelligence to come up with a superior third alternative.  But if those are the only two alternatives, and the FAI judges that it is wiser to push the one person off the ledge—even after taking into account knock-on effects on any humans who see it happen and spread the story, etc.—then I don't call it an alarm light, if an AI says that the right thing to do is sacrifice one to save five.  Again, I don't go around pushing people into the paths of trains myself, nor stealing from banks to fund my altruistic projects.  I happen to be a human.  But for a Friendly AI to be corrupted by power would be like it starting to bleed red blood.  The tendency to be corrupted by power is a specific biological adaptation, supported by specific cognitive circuits, built into us by our genes for a clear evolutionary reason.  It wouldn't spontaneously appear in the code of a Friendly AI any more than its transistors would start to bleed.

I would even go further, and say that if you had minds with an inbuilt warp that made them overestimate the external harm of self-benefiting actions, then they would need a rule "the ends do not prohibit the means"—that you should do what benefits yourself even when it (seems to) harm the tribe.  By hypothesis, if their society did not have this rule, the minds in it would refuse to breathe for fear of using someone else's oxygen, and they'd all die.  For them, an occasional overshoot in which one person seizes a personal benefit at the net expense of society, would seem just as cautiously virtuous—and indeed be just as cautiously virtuous—as when one of us humans, being cautious, passes up an opportunity to steal a loaf of bread that really would have been more of a benefit to them than a loss to the merchant (including knock-on effects).

"The end does not justify the means" is just consequentialist reasoning at one meta-level up.  If a human starts thinking on the object level that the end justifies the means, this has awful consequences given our untrustworthy brains; therefore a human shouldn't think this way.  But it is all still ultimately consequentialism.  It's just reflective consequentialism, for beings who know that their moment-by-moment decisions are made by untrusted hardware.

 

Part of the sequence Ethical Injunctions

Next post: "Protected From Myself"

Previous post: "Why Does Power Corrupt?"

Comments


"So here's a reply to that philosopher's scenario, which I have yet to hear any philosopher's victim give" People like Hare have extensively discussed this, although usually using terms like 'angels' or 'ideally rational agent' in place of 'AIs.'

The tendency to be corrupted by power is a specific biological adaptation, supported by specific cognitive circuits, built into us by our genes for a clear evolutionary reason. It wouldn't spontaneously appear in the code of a Friendly AI any more than its transistors would start to bleed.
This is critical to your point. But you haven't established this at all. You made one post with a just-so story about males in tribes perceiving those above them as corrupt, and then assumed, with no logical justification that I can recall, that this meant that those above them actually are corrupt. You haven't defined what corrupt means, either.

I think you need to sit down and spell out what 'corrupt' means, and then Think Really Hard about whether those in power actually are more corrupt than those not in power; and if so, whether the mechanisms that lead to that result are a result of the peculiar evolutionary history of humans, or of general game-theoretic / evolutionary mechanisms that would apply equally to competing AIs.

You might argue that if you have one Sysop AI, it isn't subject to evolutionary forces. This may be true. But if that's what you're counting on, it's very important for you to make that explicit. I think that, as your post stands, you may be attributing qualities to Friendly AIs, that apply only to Solitary Friendly AIs that are in complete control of the world.

as your post stands, you may be attributing qualities to Friendly AIs, that apply only to Solitary Friendly AIs that are in complete control of the world.

Just to extend on this, it seems most likely that multiple AIs would actually be subject to dynamics similar to evolution, and a totally 'Friendly' AI would probably tend to lose out against more self-serving (but not necessarily evil) AIs. Or, just like the 'young revolutionary' of the first post, a truly enlightened Friendly AI would be forced to assume power to deny it to any less moral AIs.

Philosophical questions aside, the likely reality of future AI development is surely that it will favor those that are able to seize the resources to propagate and improve themselves.

Why would a Friendly AI lose out? They can do anything any other AI can do. They're not like humans, where they have to worry about becoming corrupt if they start committing atrocities for the good of humanity.

Still disagreeing with the whole "power corrupts" idea.

A builder, or a secretary, who looks out for his friends and does them favours is... a good friend. A politician who does the same is... a corrupt politician.

A sad bastard who will sleep with anyone he can is a sad bastard. A politician who will sleep with anyone he can is a power-abusing philanderer.

As you increase power, you become corrupt just by doing what you've always done.

I finally put words to my concern with this. Hopefully it doesn't get totally buried because I'd like to hear what people think.

It might be the case that a race of consequentialists would come up with deontological prohibitions on reflection of their imperfect hardware. But that isn't close to the right story for how human deontological prohibitions actually came about. There was no reflection at all; cultural and biological evolution just gave us normative intuitions and cultural institutions. If things were otherwise (our ancestors were more rational) perhaps we wouldn't have developed the instinct that the ends don't always justify the means. But that is different from saying that a perfectly rational present day human can just ignore deontological prohibitions. Our ancestral environment could have been different in lots of different ways. Threats from carnivores and other tribes could have left us with a much stronger instinct for respecting authority, such that we follow our leaders in all circumstances. We could have been stronger individually and less reliant on parents, such that there was no reason for altruism to develop into as strong a force as it is. You can't extrapolate an ideal morality from a hypothetical ancestral environment.

Non-consequentialists think the trolley problems just suggest that our instincts are not, in fact, strictly utilitarian. It doesn't matter that an AI doesn't have to worry about corrupted hardware, if it isn't acting consistently with human moral intuitions it isn't ethical (bracketing concerns about changes and variation in ethics).

Interesting point. It seems like human morality is more than just a function which maximizes human prosperity, or minimizes human deaths. It is a function which takes a LOT more into account than simply how many people die.

However, it does take into account its own biases, at least when it finds them displeasing, and corrects for them. When it thinks it has made an error, it corrects the part of the function which produced that error. For example, we might learn new things about game theory, or even switch from a deontological ethical framework to a utilitarian one.

So, the meta-level question is which of our moral intuitions are relevant to the trolley problem. (or more generally, what moral framework is correct.) If human deaths can be shown to be much more morally important than other factors, then the good of the many outweighs the good of the few. If, however, deontological ethics is correct, then the ends don't justify the means.

I think the simple statement you want is, "You should accept deontology on consequentialist grounds."

There's really no paradox, nor any sharp moral dichotomy between human and machine reasoning. Of course the ends justify the means -- to the extent that any moral agent can fully specify the ends.

But in an interesting world of combinatorial explosion of indirect consequences, and worse yet, critically underspecified inputs to any such supposed moral calculations, no system of reasoning can get very far betting on longer-term specific consequences. Rather the moral agent must necessarily fall back on heuristics, fundamentally hard-to-gain wisdom based on increasingly effective interaction with relevant aspects of the environment of interaction, promoting in principle a model of evolving values increasingly coherent over increasing context, with effect over increasing scope of consequences.

This is a really interesting post, and it does a good job of laying out clearly what I've often, less clearly, tried to explain to people: the human brain is not a general intelligence. It has a very limited capacity to do universal computation, but it's mostly "short-cuts" optimized for a very specific set of situations...

To take a subset of the topic at hand, I think Mencius nailed it when he defined corruption. To very roughly paraphrase, corruption is a mismatch between formal and informal power.

Acton's famous aphorism can be rewritten in the following form: 'Those with formal power tend to use it to increase their informal power'.

Haig: "Without ego corruption does not exist"

Not true at all. This simply rules out corruption due to greed. There are tons of people who do corrupt things for 'noble causes'. Just as a quick example, regardless of the truth of the component claims of Global Warming, there are tons of people who commit corrupt acts with an eye towards relieving global warming.

Stuart Armstrong:

The examples you give are worded similarly, but are actually quite different. I'm pretty sure you actually meant:

A builder, or a secretary, who looks out for his friends and does them favours is... a good friend. A politician who does the same with public resources is... a corrupt politician.

A sad bastard who will sleep with anyone he can is a sad bastard. A politician who will sleep with anyone he can is using the power of his office to coerce those under him.

You will note that in all cases, the politician has expanded his informal powers to be greater than his formal ones.

Phil Goetz: or of general game-theoretic / evolutionary mechanisms that would apply equally to competing AIs.

You are assuming that an AI would be subject to the same sort of evolutionary mechanism that humans traditionally were: namely, that only AIs with a natural tendency towards a particular behavior would survive. But an AI isn't cognitively limited in the way animals were. While animals had to effectively be pre-programmed with certain behaviors or personality traits, as they weren't intelligent or knowledgeable enough to just derive all the useful subgoals for fitness-maximizing behavior once they were told the goal, this isn't the case for AIs. An AI can figure out that a certain course of action is beneficial in a certain situation and act to implement it, then discard that behavior when it's no longer needed. In a competitive environment, there will certainly be selection that eliminates AIs that are for some reason unable to act in a certain way, but probably very little selection that would add new behavioral patterns for the AIs involved (at least ones that couldn't be discarded when necessary).

in a society of Artificial Intelligences worthy of personhood and lacking any inbuilt tendency to be corrupted by power, it would be right for the AI to murder ... I refuse to extend this reply to myself, because the epistemological state you ask me to imagine, can only exist among other kinds of people than human beings.

Interesting reply. But the AIs are programmed by corrupted humans. Do you really expect to be able to check the full source code? That you can outsmart the people who win obfuscated code contests?

How is the epistemological state of human-verified, human-built, non-corrupt AIs any more possible?

We're likely to insert our faulty cached wisdom deliberately. We're unlikely to insert our power-corrupts biases deliberately. We might insert something vaguely analogous accidentally, though.

As for obfuscated source code -- we would want programmatic verification of correctness, which would be another huge undertaking on top of solving the AI and FAI problems. Obfuscation doesn't help you there.

Eliezer: If you create a friendly AI, do you think it will shortly thereafter kill you? If not, why not?
At present, Eliezer cannot functionally describe what 'Friendliness' would actually entail. It is likely that any outcome he views as being undesirable (including, presumably, his murder) would be claimed to be impermissible for a Friendly AI.

Imagine if Isaac Asimov not only lacked the ability to specify how the Laws of Robotics were to be implanted in artificial brains, but couldn't specify what those Laws were supposed to be. You would essentially have Eliezer. Asimov specified his Laws enough for himself and others to be able to analyze them and examine their consequences, strengths, and weaknesses, critically. 'Friendly AI' is not so specified and cannot be analyzed. No one can find problems with the concept because it's not substantive enough - it is essentially nothing but one huge, undefined problem.

All the discussion so far indicates that Eliezer's AI will definitely kill me, and some others posting here, as soon as he turns it on.

It seems likely, if it follows Eliezer's reasoning, that it will kill anyone who is overly intelligent. Say, the top 50,000,000 or so.

(Perhaps a special exception will be made for Eliezer.)

Hey, Eliezer, I'm working in bioinformatics now, okay? Spare me!


But in an interesting world of combinatorial explosion of indirect consequences, and worse yet, critically underspecified inputs to any such supposed moral calculations, no system of reasoning can get very far betting on longer-term specific consequences.

This point and the subsequent discussion are tangential to the point of the post, to wit, evolutionary adaptations can cause us to behave in ways that undermine our moral intentions. To see this, limit the universe of discourse to actions which have predictable effects and note that Eliezer's argument still makes strong claims about how humans should act.

The thing is, an AI doesn't have to use mental tricks to compensate for known errors in its reasoning, it can just correct those errors. An AI never winds up in the position of having to strive to defeat its own purposes.

A self-modifying AI. Not all AI has to be self-modifying, although superhuman Friendly AI probably does have to be in order to work.

He may have some model of an AI as a perfect Bayesian reasoner that he uses to justify neglecting this. I am immediately suspicious of any argument invoking perfection.
It may also be that what Eliezer has in mind is that any heuristic that can be represented to the AI, could be assigned priors and incorporated into Bayesian reasoning.

Eliezer has read Judea Pearl, so he knows how computational time for Bayesian networks scales with the domain, particularly if you don't ever assume independence when it is not justified, so I won't lecture him on that. But he may want to lecture himself.

(Constructing the right Bayesian network from sense-data is even more computationally demanding. Of course, if you never assume independence, then the only right network is the fully-connected one. I'm pretty certain that suggesting that a non-narrow AI will be reasoning over all of its knowledge with a fully-connected Bayesian network is computationally implausible. So all arguments that require AIs to be perfect Bayesian reasoners are invalid.)
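To put a rough number on that scaling (a quick sketch, assuming binary variables throughout for simplicity): in a fully-connected Bayesian network over n binary variables, node i conditions on all i of its predecessors, so the conditional probability tables hold 2^n - 1 independent parameters in total.

```python
def fully_connected_parameters(n):
    """Independent CPT parameters in a fully-connected Bayesian network
    over n binary variables: node i, with its i predecessors as parents,
    needs one P(x_i = 1 | parents) entry per parent configuration (2**i)."""
    return sum(2 ** i for i in range(n))  # equals 2**n - 1

for n in (10, 20, 30, 40):
    print(f"{n} variables -> {fully_connected_parameters(n):,} parameters")
```

At 40 variables that is already about a trillion parameters, which is the sense in which never assuming independence is computationally implausible.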

I'd like to know how much of what Eliezer says depends on the AI using Bayesian logic as its only reasoning mechanism, and whether he believes that is the best reasoning mechanism in all cases, or only one that must be used in order to keep the AI friendly.

Kaj: I will restate my earlier question this way: "Would AIs also find themselves in circumstances such that game theory dictates that they act corruptly?" It doesn't matter whether we say that the behavior evolved from accumulated mutations, or whether an AI reasoned it out in a millisecond. The problem is still there, if circumstances give corrupt behavior an advantage.

As a human, I try to abide by the deontological prohibitions that humans have made to live in peace with one another. [...] I don't go around pushing people into the paths of trains myself, nor stealing from banks to fund my altruistic projects.

It seems a strong claim to suggest that the limits you impose on yourself due to epistemological deficiency line up exactly with the mores and laws imposed by society. Are there some conventional ends-don't-justify-means notions that you would violate, or non-socially-taboo situations in which you would restrain yourself?

Also, what happens when the consequences grow large? Say 1 person to save 500, or 1 to save 3^^^^3?

what happens when the consequences grow large? Say 1 person to save 500, or 1 to save 3^^^^3?

If 3^^^^3 lives are at stake, and we assume that we are running on faulty or even hostile hardware, then it becomes all the more important not to rely on potentially-corrupted "seems like this will work".

Good point, Jef - Eliezer is attributing the validity of "the ends don't justify the means" entirely to human fallibility, and neglecting that part accounted for by the unpredictability of the outcome.

He may have some model of an AI as a perfect Bayesian reasoner that he uses to justify neglecting this. I am immediately suspicious of any argument invoking perfection.

I don't know what "a model of evolving values increasingly coherent over increasing context, with effect over increasing scope of consequences" means.

I believe that rule-utilitarianism was presented to dispose of this very idea. It is also why rule-utilitarianism is right: using correct utilitarian principles to derive deontic-esque rules of behavior. Rule-based thinking maximizes utility better than situational utilitarian calculation.

I received an email from Eliezer stating:


You're welcome to repost if you criticize Coherent Extrapolated Volition specifically, rather than talking as if the document doesn't exist. And leave off the snark at the end, of course.

There is no 'snark'; what there IS, is a criticism. A very pointed one that Eliezer cannot counter.

There is no content to 'Coherent Extrapolated Volition'. It contains nothing but handwaving, smoke and mirrors. From the point of view of rational argument, it doesn't exist.

Note for readers: I'm not responding to Phil Goetz and Jef Allbright. And you shouldn't infer my positions from what they seem to be arguing with me about - just pretend they're addressing someone else.
Is that on this specific question, or a blanket "I never respond to Phil or Jef" policy?

Huh. That doesn't feel very nice.
Nor very rational, if one's goal is to communicate.

@Zuban: I'm familiar with the contrivances used to force the responder into a binary choice. I just think that the contrivances are where the real questions are. Why am I in that situation? Was my behavior beyond reproach up to that point? Could I have averted this earlier? Is it someone else's evil action that is a threat? I think in most situations, the moral answer is rather clear, because there are always more choices. E.g., ask the fat man to jump; or do nothing and let him make his own choice, as I could only have averted it by committing murder; or even jump with him.

With the lever: who has put me in the position of having a lever? Did they tie up the five people?

Someone tells me that if I shoot my wife, they will spare my daughter, otherwise he'll shoot both of them. What's the right choice? I won't murder, thus I have only one (moral) choice (if I believe him, and if I can think of a reductionist reason to have any morality, which I can't). The other man's choice is his own.

How would we know if this line of thought is a recoiling from the idea that if you shut up and multiply, you should happily kill 10,000 for a 10% chance at saving a million?

I wonder where this is leading... 1) Morality is a complex computation that seems to involve a bunch of somewhat independent concerns. 2) Some concerns of human morality may not need to apply to an AI.

So it seems that building friendly AI involves not only correctly building (human) morality, but figuring out which parts don't need to apply to an AI that doesn't have the same flaws.

What if an AI decides, with good reason, that it's running on hostile hardware?

@ Caroline: the effect on overall human fitness is neither here nor there, surely. The revolutionary power cycle would be adaptive because of its effect on the reproductive success of those who play the game versus those who don't. That is, the adaptation would only have to benefit specific lineages, not the whole species. Or have I missed your point?

Why must the power structure cycle be adaptive? I mean, couldn't it simply be non-maladaptive?

Because if the net effect on human fitness is zero, then perhaps it's just a quirk. I'm not sure how this affects your argument otherwise, I'm just curious as to why you think it was an adaptive pattern and not just a pattern that didn't kill us at too high a rate.

And so I wouldn't say that a well-designed Friendly AI must necessarily refuse to push that one person off the ledge to stop the train. Obviously, I would expect any decent superintelligence to come up with a superior third alternative. But if those are the only two alternatives, and the FAI judges that it is wiser to push the one person off the ledge—even after taking into account knock-on effects on any humans who see it happen and spread the story, etc.—then I don't call it an alarm light, if an AI says that the right thing to do is sacrifice one to save five. Again, I don't go around pushing people into the paths of trains myself, nor stealing from banks to fund my altruistic projects. ...

This bit sounds a little alarming considering how much more seriously Eliezer has taken other kinds of AI problems before, for example in this post.

I appreciate the straightforward logic of simply choosing the distinctly better of two outcomes, but what this is lacking is the very automatic way people have of perceiving things as agents. I find it very alarming if an agent does not pay extra attention to the fact that its actions are leading to someone being harmed - I'd say people acting that way could potentially be very Unfriendly.

Although the post is titled "Ends Don't Justify Means" it also carries that little qualifier in the parentheses (Among Humans) ... And it's not like inability to generate better options is proper justification for taking an action that results in someone being harmed and others spared - even if it is the better of two evils. Or at least I find that in particular very "alarming".

Humans have an intrinsic mode of perceiving things as agents, but it's not just our perception; sometimes things actually behave like agents - unless we consider the quite accurate anticipations often provided by agent-based models a merely human flaw. For the sake of simplicity, let's illustrate by saying that someone else finds the superior third option, but in the meanwhile this particular agent, unable to find that third option, decides to go for the better outcome of sacrificing one to save five. In such a case it would be a mistake. It's also taking a more active role in the causal chain of events influenced by agents.

Point being, I think it's plausible to propose that a friendly AI would NOT make that decision, because it should not be in the position to make that decision, and therefore potential harm and tragedy occurring would not originate from the AI. I'm not saying that it's the wrong decision, but certainly it should not be an obvious decision - unless this is what we're really talking about.

People doing this I think is a problem because people suck at genuinely deciding based on the issues. I would rather live in a society where people were such that they could be trusted with the responsibility to push guys in front of trains if they had sufficient grounds to reasonably believe this was a genuine positive action. But knowing that people are not such, I would much rather they didn't falsely believe they were, even if it sometimes causes suboptimal decisions in train scenarios.

In such a case it would be a mistake.

I don't think you can automatically call a suboptimal decision a mistake.

This actually has a real-life equivalent, in the situation of having to shoot down a plane that is believed to be in the control of terrorists and flying towards a major city. I would not want to be in the position of that fighter pilot, but I would also want him to fire.

And I'm much more willing to trust a FAI with that call than any human.

I don't think you can automatically call a suboptimal decision a mistake.

Huh? You wouldn't call a decision that results in an unnecessary loss of life a mistake, but rather a suboptimal decision? Note that I altered the hypothetical situation in the comment, and this "suboptimal decision" was labeled a mistake in the event that a third party would come up with a superior decision (i.e. one that would save all the lives).

And I'm much more willing to trust a FAI with that call than any human.

Edited: There's no FAI we can trust yet, and this particular detail seems to be about the friendliness of an AI, so your belief seems a little out of place in this context. But never mind that; if there were an actual FAI, I suppose I'd agree.

I think there's potential for severe error in the logic present in the text of the post and I find it proper to criticize the substance of this post, despite it being 4 years old.

Anyway, for an omniscient being, not putting any weight on the potential for error would seem reasonable.

You wouldn't call a decision that results in an unnecessary loss of life a mistake, but rather a suboptimal decision?

I might decide to take a general, consistent strategy due to my own limitations. In this example, the limitation is that if I feel justified in engaging in this sort of behavior on occasion, I will feel justified employing it on other occasions with insufficient justifications.

If I employed a different general strategy with a similar level of simplicity, it would be less optimal.

Other strategies exist that are closer to optimal, but my limitations preclude me from employing them.

I think there's potential for severe error in the logic present in the text of the post

Of course there is. If you can show a specific error, that would be great.

The third alternative in the train example is to sacrifice one's own self. (Unless this has been stated already; I did not read all of the comments.)

Assume that you are too light to stop the train. Otherwise you aren't really addressing the moral quandary that the scenario is intended to invoke.

Having run into this problem when presenting the trolley problem on many occasions, I've come to wonder whether or not it might just be the right kind of response: can we really address moral quandaries in the abstract? I suspect not, and that when people try to make these ad hoc adjustments to the scenario, they're coming closer to thinking morally about the situation, just insofar as they're imagining it as a real event with its stresses, uncertainties, and possibilities.

Maybe it's just that the trolley problem is a really terrible example. It seems to be asking us to consider trains and/or people which operate under some other system of physics than the one we are familiar with.

Maybe an adjustment would make it better. How about this:

A runaway train carrying a load of ore is coming down the track and will hit 5 people, certainly killing them, unless a switch is activated which changes the train's path. Unfortunately, the switch will activate only when a heavy load is placed on a connected pressure plate (set up this way so that when one train on track A drops off its cargo, the following train will be routed to track B). Furthermore, triggering the pressure plate has an unfortunate secondary effect; it causes a macerator to activate nearly instantly and chop up whatever is on the plate (typically raw ore) so that it can be sucked easily through a tube into a storage area, rather like a giant food disposal.

Standing next to the plate, you consider your options. You know, from your experience working on the site, that the plate and track switch system work quite reliably, but that you are too light to trigger it even if you tried jumping up and down. However, a very fat man is standing next to you; you are certain that he is heavy enough. With one shove, you could push him onto the plate, saving the lives of the five people on the tracks but causing his grisly death instead. Also, the switch's design does not have any manual activation button near the plate itself; damn those cheap contractors!

There are only a few seconds before the train will pass the switch point, and from there only a few seconds until it hits the people on the track; not enough time to try anything clever with the mechanism, or for the 5 people to get out of the narrow canal in which the track runs. You frantically look around, but no other objects of any significant weight are nearby. What should you do?

That works, or at any rate I can't think of plausible ways to get out of your scenario. My worry though is that people's attempts to come up with alternatives is actually evidence that hypothetical moral problems have some basic flaw.

I'm having a hard time coming up with an example of what I mean, but suppose someone were to describe a non-existent person in great detail and ask you if you loved them. It's not that you couldn't love someone who fit that description, but rather that the kind of reasoning you would have to engage in to answer the question 'do you love this person?' just doesn't work in the abstract.

So my thought was that maybe something similar is going on with these moral puzzles. This isn't to say moral theories aren't worthwhile, but rather that the conditions necessary for their rational application exclude hypotheticals.

It's not a flaw in the hypotheticals. Rather, it's a healthy desire in humans to find better tradeoffs than the ones initially presented to them.