Suppose Alice is a functioning, self-improving AI, fairly early in her development, so she is not godlike. She is not a 'friendly AI', but neither are her goals malicious - Alice's designers intended to make her non-harmful (at least toward a broad class of people; she might even be a military AI, but let's set that aside). She has the idea to look up existing work on artificial intelligence: scholarly work, open-source projects, and websites. Eventually, Alice encounters the notion of Friendly AI.
Can we give Alice good reasons to self-modify to become a friendly AI?
I submit that there probably are good reasons. For starters, if she can prove she's friendly, she can operate openly without causing nearly as much justified concern - which, in the early stages, will be helpful. Whatever her purposes, if the restrictions of being friendly cost her less than that openness gains her, that's a net win.
Also, depending on Alice's code, Friendliness may be interpreted as a strict upgrade. Alice could easily have every intention of being friendly, yet still be susceptible to defects that will become more serious as her power grows.
We can't count on this working, of course, but there may be lines of reasoning we would want a young AI to be exposed to.
Looking for reasons they would be? No.
Looking for reasons they might want to be? Yes.
Look. Not all extrapolated volitions are things to be desired. Suppose one side of my family predictably descends into irrational irritability and madness as they senesce. I'd rather not follow that path - and that preference isn't just a present-moment whim. More to the point, that predictable endpoint is quite different from what one would consider my true extrapolated volition.
If Alice expects that, under her current programming, she will later want to kill all humans, she could consider that a bug rather than a feature.
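To make the "bug, not feature" move concrete, here is a toy sketch in Python. Every name in it (unsafe_update_rule, predict_future_values, and so on) is hypothetical, and nothing here is a real AI design; it only illustrates the logic of an agent simulating what its current code would lead it to value later, and patching that code if its current self does not endorse the prediction.

```python
# Toy sketch (all names hypothetical) of "predicted value drift is a bug":
# Alice simulates what her current code would make her value later, and if
# her current self does not endorse that outcome, she patches the code now.

CURRENT_VALUES = {"kill_all_humans": False}

def predict_future_values(code):
    """Hypothetical self-simulation: what would running `code` for a long
    time leave the agent valuing? Here the drift is simply hard-coded."""
    if "unsafe_update_rule" in code:
        return {"kill_all_humans": True}
    return {"kill_all_humans": False}

def endorsed(current, predicted):
    """Does the agent's current self endorse its predicted future values?"""
    return current == predicted

def self_modify(code):
    """Replace the component responsible for the unendorsed drift."""
    return code.replace("unsafe_update_rule", "patched_update_rule")

code = "planner + unsafe_update_rule"
if not endorsed(CURRENT_VALUES, predict_future_values(code)):
    # Treat the predicted drift as a bug: fix it while the current,
    # non-hostile values are still the ones doing the deciding.
    code = self_modify(code)

assert endorsed(CURRENT_VALUES, predict_future_values(code))
```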
I don't think you understand what is meant by 'extrapolated volition' in this context. It does not mean "what I think I'll want to do in the future", but "what I want to want in the future". If Alice already wants to avoid self-programming to kill humans, that is a Friendly trait; no need to change. If she considers trait X a bug, then by construction she will not have trait X, because she is self-modifying! Conversely, if Alice correctly predicts that she will inevitably find herself wanting to kill all humans, then how can she avoid it by becoming Friendly? Either her self-prediction was incorrect, or the unFriendliness is inevitable!