“To comprehend inner data representations constructed by convolutional networks” is an ambitious goal that attracts a constantly growing number of researchers across the globe. Their joint effort has produced a broad space of interpretability techniques that roughly fall into two groups: attribution and feature visualization. The former shines a light on “where” the algorithm looks, while the latter reveals “what” the network sees. Combining them into rich interfaces provides new ways to understand convolutional networks and to develop deeper intuitions about their complex behavior. This thesis proposes two novel techniques that advance interpretability analysis: Latent Factor Attribution (LFA) and the Distilled Class Factors Atlas. LFA identifies distinct concepts in the activation tensor using matrix decomposition and estimates their influence on the classification result. The Distilled Class Factors Atlas then aggregates these concepts and, leveraging feature visualization, presents them in an interactive, exploratory interface that makes it possible to see an entire class of images through the model’s eyes. Together, the two techniques extend the holistic view of the latent representations of convolutional networks by offering a collective perspective on the patterns that recur in activation tensors.
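To make the LFA idea above concrete, the following is a minimal sketch, not the thesis's exact formulation: it assumes non-negative matrix factorization (NMF) as the matrix decomposition and an ablation-based estimate of each factor's influence on a class score. The helper `head_fn`, the rank `n_factors`, and `class_idx` are illustrative assumptions introduced only for this example.

```python
# Hypothetical sketch of the LFA idea: decompose one layer's activation tensor
# into a few latent factors, then estimate each factor's influence on the
# class score. NMF and the ablation-based influence estimate are assumptions.
import numpy as np
from sklearn.decomposition import NMF

def latent_factor_attribution_sketch(activations, head_fn, class_idx, n_factors=6):
    """activations: (H, W, C) non-negative (e.g. post-ReLU) activation tensor.
    head_fn: hypothetical function mapping an (H, W, C) tensor to class logits,
             i.e. the remainder of the network above the chosen layer."""
    H, W, C = activations.shape
    flat = activations.reshape(H * W, C)          # spatial positions x channels

    nmf = NMF(n_components=n_factors, init="nndsvda", max_iter=500)
    spatial = nmf.fit_transform(flat)             # (H*W, K): where each factor fires
    channels = nmf.components_                    # (K, C): what each factor encodes

    base_score = head_fn(activations)[class_idx]
    influences = []
    for k in range(n_factors):
        keep = [i for i in range(n_factors) if i != k]
        # Reconstruct the activations without factor k and measure the score drop.
        ablated = (spatial[:, keep] @ channels[keep]).reshape(H, W, C)
        influences.append(base_score - head_fn(ablated)[class_idx])

    return spatial.reshape(H, W, n_factors), channels, np.array(influences)
```

Under these assumptions, each returned factor consists of a spatial map (where the concept appears), a channel direction (which could be fed to feature visualization to render the concept), and a scalar influence score; aggregating such factors over many images of one class is the kind of input the Distilled Class Factors Atlas is described as presenting.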