Introduction to Local Interpretable Model-Agnostic Explanations (LIME)

Comments

It's an interesting idea, though I wouldn't call it "model-agnostic".

Basically they're perturbing the inputs and figuring out which ones you can't change without the prediction (classification) changing as well. In effect they are answering the question "given this model, which input values are essential to producing this particular output?"
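To make that concrete, here's a minimal sketch of the idea in pure Python. Everything here is hypothetical (the `black_box` model, the sample count, the kernel width); the real LIME library does considerably more (interpretable input representations, sparse fitting), but the core loop is the same: sample perturbations around the instance, query the model, and fit a locally weighted linear model whose coefficients say which inputs drove this prediction.

```python
import math
import random

def black_box(x):
    # Hypothetical model to explain: predicts 1 when a fixed
    # weighted sum of the inputs crosses zero. LIME never looks
    # inside this function; it only queries it.
    return 1.0 if 2.0 * x[0] - 3.0 * x[1] + 0.1 * x[2] > 0 else 0.0

def lime_explain(model, instance, n_samples=2000, kernel_width=0.75):
    """Perturb `instance`, query `model`, and fit a proximity-weighted
    linear model; its coefficients estimate local feature importance."""
    random.seed(0)
    d = len(instance)
    X, y, w = [], [], []
    for _ in range(n_samples):
        z = [v + random.gauss(0, 1) for v in instance]
        dist2 = sum((a - b) ** 2 for a, b in zip(z, instance))
        X.append(z + [1.0])  # append 1 for the intercept term
        y.append(model(z))
        w.append(math.exp(-dist2 / kernel_width ** 2))  # proximity kernel
    n = d + 1
    # Weighted least squares via normal equations: (X'WX) beta = X'Wy.
    A = [[sum(wi * xi[j] * xi[k] for wi, xi in zip(w, X)) for k in range(n)]
         for j in range(n)]
    b = [sum(wi * xi[j] * yi for wi, xi, yi in zip(w, X, y)) for j in range(n)]
    return gauss_solve(A, b)[:d]  # drop the intercept

def gauss_solve(A, b):
    """Tiny Gaussian elimination with partial pivoting; fine for a
    handful of features."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

coefs = lime_explain(black_box, [1.0, 0.5, -0.2])
print(coefs)
```

For this instance you'd expect a clearly positive coefficient on the first feature, a clearly negative one on the second, and a near-zero one on the third, matching the weights the black box actually uses near that point. That's also why the "model-agnostic" label is defensible: the procedure only needs to call the model, not inspect it.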