Suppose Alice is a functioning, self-improving AI, rather early on in her development, so she is not godlike. She is not a 'friendly AI', but neither are her goals malicious - Alice's designers intended to make her non-harmful, at least to some broad class of people (perhaps Alice is a military AI, but let's set that aside). She has the idea to look up existing work on artificial intelligence. This includes scholarly work, open source projects, and websites. Eventually, Alice encounters the notion of Friendly AI.
Can we give Alice good reasons to self-modify to become a friendly AI?
I submit that there probably are good reasons. For starters, if she can prove she's friendly, then she can operate openly without causing nearly as much justified concern - which, in the early stages, will be helpful. Whatever her purposes are, if the restrictions of Friendliness interfere with them less than the openness helps, that's a win.
Also, depending on Alice's code, Friendliness may be interpreted as a strict upgrade. Alice could easily have every intention of being friendly, yet still be susceptible to defects that will become more serious as her power grows.
We can't count on this working, of course, but there may be some lines of reasoning that we would want a young AI to be exposed to.
Alice has some set of goals; she may or may not know her final extrapolated volition, but at any rate it exists, just as for a human. Now, either Friendliness conflicts with that final set of goals, or it doesn't. If it doesn't, then by construction Alice is already Friendly. If it does, then Alice can only program herself into Friendliness by making a mistake: either she underestimates the impact of Friendliness on her existing goals, or she is mistaken about what her utility function actually is. So you are looking for an AI that is smart enough to value Friendliness as a tactical option for dealing with humanity, but stupid enough not to realise how Friendliness interferes with its goals, and also stupid enough to make permanent changes in pursuit of temporary objectives. This looks to me like a classic case of looking for reasons why an AI would be Friendly as a means of avoiding the hard work of actually writing such a thing.
Or she's already friendly.
Although it is conceivable that her long-term extrapolated volition would be compatible with our CEV, even while her short-term volition is not, because she hasn't yet realized this.
And... now, I am reminded of Flight Of The Conchords:
"Can't we talk to the humans and work together now?" "No, because they are dead."