[1] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1125–1134, 2017.
[2] A. A. Efros and W. T. Freeman, “Image quilting for texture synthesis and transfer,” in Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, pp. 341–346, ACM, 2001.
[3] A. Hertzmann, C. E. Jacobs, N. Oliver, B. Curless, and D. H. Salesin, “Image analogies,” in Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, pp. 327–340, ACM, 2001.
[4] A. Buades, B. Coll, and J.-M. Morel, “A non-local algorithm for image denoising,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 60–65, IEEE, 2005.
[5] D. Eigen and R. Fergus, “Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 2650–2658, 2015.
[6] R. Zhang, P. Isola, and A. A. Efros, “Colorful image colorization,” in European Conference on Computer Vision, pp. 649–666, Springer, 2016.
[7] Y. Shih, S. Paris, F. Durand, and W. T. Freeman, “Data-driven hallucination of different times of day from a single outdoor photo,” ACM Transactions on Graphics, vol. 32, no. 6, p. 200, 2013.
[8] P.-Y. Laffont, Z. Ren, X. Tao, C. Qian, and J. Hays, “Transient attributes for high-level understanding and editing of outdoor scenes,” ACM Transactions on Graphics, vol. 33, no. 4, p. 149, 2014.
[9] S. Xie and Z. Tu, “Holistically-nested edge detection,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 1395–1403, 2015.
[10] T. Chen, M.-M. Cheng, P. Tan, A. Shamir, and S.-M. Hu, “Sketch2Photo: Internet image montage,” in ACM Transactions on Graphics, vol. 28, p. 124, ACM, 2009.
[11] L. A. Gatys, A. S. Ecker, and M. Bethge, “A neural algorithm of artistic style,” arXiv preprint arXiv:1508.06576, 2015.
[12] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in 2017 IEEE International Conference on Computer Vision (ICCV), Oct. 2017.
[13] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
[14] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems (Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, eds.), pp. 2672–2680, Curran Associates, Inc., 2014.
[15] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. The MIT Press, 2016.
[16] G. James, D. Witten, T. Hastie, and R. Tibshirani, An Introduction to Statistical Learning, vol. 112. Springer, 2013.
[17] A. Géron, Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems. O’Reilly Media, Inc., 2017.
[18] C. M. Bishop, Neural Networks for Pattern Recognition. Oxford University Press, 1995.
[19] S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd ed. Prentice Hall, 2004.
[20] C. Nwankpa, W. Ijomah, A. Gachagan, and S. Marshall, “Activation functions: Comparison of trends in practice and research for deep learning,” arXiv preprint arXiv:1811.03378, 2018.
[21] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning internal representations by error propagation,” tech. rep., University of California San Diego, La Jolla, Institute for Cognitive Science, 1985.
[22] P. Werbos, “Beyond regression: New tools for prediction and analysis in the behavioral sciences,” Ph.D. dissertation, Harvard University, 1974.
[23] D. H. Hubel and T. N. Wiesel, “Receptive fields and functional architecture of monkey striate cortex,” The Journal of Physiology, vol. 195, no. 1, pp. 215–243, 1968.
[24] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
[25] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
[26] C.-C. J. Kuo, “Understanding convolutional neural networks with a mathematical model,” Journal of Visual Communication and Image Representation, vol. 41, pp. 406–413, 2016.
[27] V. Nair and G. E. Hinton, “Rectified linear units improve restricted Boltzmann machines,” in Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 807–814, 2010.
[28] J. R. Gardner, P. Upchurch, M. J. Kusner, Y. Li, K. Q. Weinberger, K. Bala, and J. E. Hopcroft, “Deep manifold traversal: Changing labels with convolutional features,” arXiv preprint arXiv:1511.06421, 2015.
[29] C. Li and M. Wand, “Combining Markov random fields and convolutional neural networks for image synthesis,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2479–2486, 2016.
[30] A. Selim, M. Elgharib, and L. Doyle, “Painting style transfer for head portraits using convolutional neural networks,” ACM Transactions on Graphics, vol. 35, no. 4, p. 129, 2016.
[31] Y. Choi, M. Choi, M. Kim, J.-W. Ha, S. Kim, and J. Choo, “StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8789–8797, 2018.
[32] S. Zhu, R. Urtasun, S. Fidler, D. Lin, and C. Change Loy, “Be your own Prada: Fashion synthesis with structural coherence,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 1680–1688, 2017.
[33] L. A. Gatys, A. S. Ecker, and M. Bethge, “Image style transfer using convolutional neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2414–2423, 2016.
[34] H. Zhang and K. Dana, “Multi-style generative network for real-time transfer,” arXiv preprint arXiv:1703.06953, 2017.
[35] D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Instance normalization: The missing ingredient for fast stylization,” arXiv preprint arXiv:1607.08022, 2016.
[36] T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida, “Spectral normalization for generative adversarial networks,” arXiv preprint arXiv:1802.05957, 2018.
[37] S. Zhang and D. Yang, “Pet hair color transfer based on CycleGAN,” in 2018 5th International Conference on Systems and Informatics (ICSAI), pp. 998–1004, IEEE, 2018.
[38] R. Longman and R. Ptucha, “Embedded CycleGAN for shape-agnostic image-to-image translation,” in 2019 IEEE International Conference on Image Processing (ICIP), pp. 969–973, Sep. 2019.
[39] J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” in European Conference on Computer Vision, pp. 694–711, Springer, 2016.
[40] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
[41] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al., “Photo-realistic single image super-resolution using a generative adversarial network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4681–4690, 2017.
[42] C. Li and M. Wand, “Precomputed real-time texture synthesis with Markovian generative adversarial networks,” in European Conference on Computer Vision, pp. 702–716, Springer, 2016.
[43] M. E. Celebi, Q. Wen, S. Hwang, and G. Schaefer, “Color quantization of dermoscopy images using the k-means clustering algorithm,” in Color Medical Image Analysis, pp. 87–107, Springer, 2013.
[44] S. Lloyd, “Least squares quantization in PCM,” IEEE Transactions on Information Theory, vol. 28, no. 2, pp. 129–137, 1982.
[45] E. Forgy, “Cluster analysis of multivariate data: Efficiency versus interpretability of classification,” Biometrics, vol. 21, no. 3, pp. 768–769, 1965.
[46] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
[47] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville, “Improved training of Wasserstein GANs,” in Advances in Neural Information Processing Systems, pp. 5767–5777, 2017.