We expect that even post-singularity, resources will remain limited in the form of available computation until heat death.
Those resources do not necessarily need to be allocated fairly. In fact, I would guess that if they were allocated unfairly, the most likely beneficiaries would be the people who helped contribute to the creation of a friendly AI.
Now for some open questions:
What probability distribution of extra resources do you expect with respect to various possible contributions to the creation of friendly AI?
Would donating to the SIAI suffice for acquiring these extra resources?
Iff we program it to.
Trite but true. This isn't a question about the fundamental behavior of AIs. It's a question of what preferences the AGI's creators wanted to impart to their AI and how well they managed to implement them. An AI that rewards contributors to some degree could qualify as friendly, but rewarding doesn't seem to be a requirement for friendliness.
Here's another question: if a group of people cooperated to save you and your species from near-certain death, and gave you and those dear to you an unbounded life of general awesomeness, would you reward them? If so, then an FAI may well reward them too. If most people would reward in that circumstance, then an FAI could plausibly also reward. But I don't pretend to know what people's extrapolated volition looks like or how the most likely FAI would be implemented.