As machine learning moves from theoretical applications in academia to promising solutions to problems across industry and healthcare, effective interpretability strategies are critically important to adoption. However, model interpretability strategies can offer more than validation of a model's predictions. Learned models serve as a proxy for the data by capturing relationships between input features and target outcomes, yielding a representation that can itself be analysed. To that end, this work describes a fault analysis system that leverages learned models to characterize faults using SHapley Additive exPlanations (SHAP). In particular, the fault analysis system is designed for large structured datasets such as those available in telecommunications networks. The strategy works by forming a learned representation with tree-based models trained via gradient boosting. Once a problematic sample is selected for analysis, the computationally efficient implementation of the SHAP algorithm specialized for tree-based models (TreeSHAP) is employed to gauge feature contributions to the performance degradation observed in that sample. The strategy thus explains the degradation in a problematic sample through a model-based representation of how input characteristics contribute across contexts. An evaluation of the strategy is performed, demonstrating its reliability for structured communications data using a 4G LTE dataset.
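To make the pipeline concrete, the following is a minimal sketch of the steps described above: fit a gradient-boosted tree ensemble, select a problematic sample, and attribute its degradation to input features with the tree-specialized SHAP explainer. The feature names, synthetic data, and "worst predicted sample" selection rule are illustrative assumptions, not the paper's 4G LTE dataset or exact procedure.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Illustrative structured data: rows are samples, columns are network KPIs.
feature_names = ["rsrp", "sinr", "prb_utilization", "active_users", "handover_rate"]
X = rng.normal(size=(1000, len(feature_names)))
y = 2.0 * X[:, 1] - 1.5 * X[:, 2] + rng.normal(scale=0.1, size=1000)  # proxy performance target

# 1. Form the learned representation with a gradient-boosted tree model.
model = GradientBoostingRegressor(n_estimators=200, max_depth=3).fit(X, y)

# 2. Select a problematic sample, e.g. the one with the worst predicted performance.
worst_idx = int(np.argmin(model.predict(X)))

# 3. Apply the tree-specialized SHAP implementation (TreeSHAP) to attribute the
#    degradation to individual input features.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[worst_idx : worst_idx + 1])[0]

# 4. Rank features by how strongly they pushed the prediction toward degradation.
for name, value in sorted(zip(feature_names, contributions), key=lambda kv: kv[1]):
    print(f"{name:>18s}: {value:+.3f}")
```

The ranked, signed contributions are what the abstract refers to as the explanation of the degradation: features with large negative contributions are the candidate fault characteristics for that sample.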