We expect that post-singularity, resources will still be limited, in the form of the computation available until heat death.
Those resources do not necessarily need to be allocated fairly. In fact, I would guess that if they were allocated unfairly, the most likely beneficiaries would be those people who helped contribute to the creation of a friendly AI.
Now for some open questions:
What probability distribution of extra resources do you expect with respect to various possible contributions to the creation of friendly AI?
Would donating to the SIAI suffice to acquire these extra resources?
Comparison result: NOT EQUAL. (For multiple reasons, come to think of it: multiple possible results, parameterisation, and the comparison currently being ambiguously specified.)
My comment rather clearly assumed that, and further asserted that:
That is, there is a class of artificial intelligence algorithms which can be considered 'friendly', and within that class there are algorithms that would reward contributors and others that would not. This is in stark contrast to other behaviors an algorithm could output which would necessarily exclude it from the class 'friendly' - such as torturing or killing anyone I cared about.
(Just finished updating my reply, hopefully resolving some ambiguities present in its original form.)