Deep learning models excel when tested on images within their training distribution. However, introducing minor perturbations such as noise or blurring to the model's input image, thereby presenting it with out-of-distribution (OOD) data, can significantly reduce accuracy, limiting real-world applicability. While data augmentation enhances model robustness against OOD data, comprehensive studies of augmentation types and their impact on OOD robustness are lacking. A common belief holds that augmenting datasets to bias models towards shape-based features improves OOD robustness for convolutional neural networks trained on ImageNet. However, our evaluation of 39 augmentations challenges this belief, showing that an augmentation-induced increase in shape bias does not necessarily correlate with higher OOD robustness. Analyzing the results, we identify biases in ImageNet that can be mitigated through appropriate augmentation. Contrary to expectations, our evaluation reveals no inherent trade-off between in-domain accuracy and OOD robustness: strategic augmentation choices can enhance both simultaneously.

Model performance is influenced not only by perturbations but also by the image compression format. Efficient image compression algorithms play a crucial role in managing data storage costs. We propose a novel region-based lossy image compression method named PatchSVD, leveraging the Singular Value Decomposition (SVD) algorithm. Experimental results demonstrate that PatchSVD surpasses SVD-based image compression on three common image compression metrics. Furthermore, we compare the compression artifacts produced by PatchSVD, JPEG, and SVD-based compression, identifying scenarios where PatchSVD artifacts are preferable to both.