Image classification models can depend on multiple semantic attributes of an image, so explaining a classifier's decision requires both discovering and visualizing these attributes. Here we present StylEx, a method that does this by training a generative model to explain the multiple attributes underlying classifier decisions. A natural source for such attributes is the StyleSpace of StyleGAN, which is known to contain semantically meaningful dimensions of the generated image. However, because standard GAN training does not depend on the classifier, the StyleSpace may fail to capture the attributes that matter for the classifier's decision, and its dimensions may encode irrelevant ones. To overcome this, we propose a StyleGAN training procedure that incorporates the classifier model, yielding a classifier-specific StyleSpace from which explanatory attributes are then selected. These attributes can be used to visualize the effect of changing multiple attributes per image, thus providing image-specific explanations. We apply StylEx to multiple domains, including animals, leaves, faces, and retinal images, and show how an image can be modified in different ways to change its classifier output. Our results show that the method finds attributes that align well with semantic concepts, generates meaningful image-specific explanations, and produces explanations that are human-interpretable, as measured in user studies.
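The core idea of perturbing StyleSpace coordinates and measuring their effect on the classifier can be sketched in a few lines. This is a minimal, hypothetical illustration only, not the authors' implementation: the linear "generator" and "classifier" below are toy stand-ins for a trained StyleGAN and a target classifier, and the attribute-selection step is reduced to ranking coordinates by the size of the logit shift they induce.

```python
# Hypothetical sketch of the StylEx idea (NOT the authors' code): perturb each
# StyleSpace coordinate and rank coordinates by how much they shift the
# classifier's output. G and w are toy stand-ins for a StyleGAN and classifier.
import numpy as np

rng = np.random.default_rng(0)

STYLE_DIM, IMG_DIM = 8, 16
G = rng.normal(size=(IMG_DIM, STYLE_DIM))  # toy "generator": style vector -> image
w = rng.normal(size=IMG_DIM)               # toy "classifier": linear logit on the image

def classifier_logit(style):
    """Classifier output for the image generated from a style vector."""
    return float(w @ (G @ style))

def attribute_influence(style, delta=1.0):
    """Shift each StyleSpace coordinate by delta; return the change in logit."""
    base = classifier_logit(style)
    effects = []
    for i in range(STYLE_DIM):
        shifted = style.copy()
        shifted[i] += delta
        effects.append(classifier_logit(shifted) - base)
    return np.array(effects)

style = rng.normal(size=STYLE_DIM)
effects = attribute_influence(style)
# Coordinates with the largest |effect| are candidate explanatory attributes;
# visualizing the images before/after each shift yields per-image explanations.
top = np.argsort(-np.abs(effects))
print("most influential StyleSpace coordinates:", top[:3])
```

In the paper's actual setting the generator is trained jointly with the classifier in the loop, so the StyleSpace itself becomes classifier-specific rather than being probed after the fact as in this sketch.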
Original language: American English
Title of host publication: Proceedings - 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 10
State: Published - 2021
Series: Proceedings of the IEEE International Conference on Computer Vision
Conference: 18th IEEE/CVF International Conference on Computer Vision, ICCV 2021 - Virtual, Online, Canada
Duration: 11 Oct 2021 → 17 Oct 2021
Bibliographical note (funding information): Work performed by authors at Google.
© 2021 IEEE