
Geometric Deep Learned Descriptors for 3D Shape Recognition


Luciano, Lorenzo (2018) Geometric Deep Learned Descriptors for 3D Shape Recognition. PhD thesis, Concordia University.

Luciano_PhD_F2018.pdf (Accepted Version, PDF, 7MB), available under the Spectrum Terms of Access license.

Abstract

The availability of large 3D shape benchmarks has sparked a flurry of research activity in the development of efficient techniques for 3D shape recognition, which is a fundamental problem in a variety of domains such as pattern recognition, computer vision, and geometry processing. A key element in virtually any shape recognition method is to represent a 3D shape by a concise, compact shape descriptor that facilitates the recognition task.
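As a toy illustration of such a descriptor, the distribution of graph-geodesic distances over a mesh can be summarized by its first few moments, in the spirit of the geodesic quantities used throughout the thesis. The sketch below approximates mesh geodesics by shortest paths on the edge graph (Floyd-Warshall) and is an illustrative stand-in under that assumption, not the thesis's exact construction:

```python
import numpy as np

def geodesic_moment_descriptor(adj, n_moments=4):
    """Descriptor from moments of the graph-geodesic distance distribution.

    `adj` is a dense (n, n) symmetric matrix of edge lengths (0 = no edge).
    Mesh geodesics are approximated by shortest paths on the edge graph and
    summarized by a few raw moments; this is a hedged sketch, not the
    thesis's geodesic moments.
    """
    n = adj.shape[0]
    D = np.where(adj > 0, adj, np.inf)          # missing edges -> infinite cost
    np.fill_diagonal(D, 0.0)
    for k in range(n):                          # Floyd-Warshall relaxation
        D = np.minimum(D, D[:, k:k+1] + D[k:k+1, :])
    mean_dist = D.mean(axis=1)                  # average geodesic per vertex
    return np.array([np.mean(mean_dist**m) for m in range(1, n_moments + 1)])
```

Because the descriptor depends only on pairwise geodesic distances, it is unchanged by rigid motions of the shape and, approximately, by isometric deformations.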

The recent trend in shape recognition is geared toward using deep neural networks to learn features at various levels of abstraction, and has been driven, in large part, by a combination of affordable computing hardware, open source software, and the availability of large-scale datasets. In this thesis, we propose deep learning approaches to 3D shape classification and retrieval. Our approaches inherit many useful properties from the geodesic distance, most notably its capture of the intrinsic geometric structure of 3D shapes and its invariance to isometric deformations. More specifically, we present an integrated framework for 3D shape classification that extracts discriminative geometric shape descriptors based on geodesic moments. Further, we introduce a geometric framework for unsupervised 3D shape retrieval using geodesic moments and stacked sparse autoencoders. The key idea is to learn deep shape representations in an unsupervised manner. Such discriminative shape descriptors can then be used to compute pairwise dissimilarities between shapes in a dataset and to retrieve the shapes most relevant to a given query. Experimental evaluation on three standard 3D shape benchmarks demonstrates the competitive performance of our approach in comparison with existing techniques.
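The retrieval step described above — compute pairwise dissimilarities between learned descriptors, then rank shapes by relevance to a query — can be sketched as follows. The function name, the Euclidean dissimilarity, and the array layout are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np

def retrieve(descriptors, query_idx, k=5):
    """Rank dataset shapes by dissimilarity to a query shape's descriptor.

    `descriptors` is an (n_shapes, d) array of learned shape descriptors
    (e.g. the output of a trained autoencoder's encoder). Euclidean
    distance stands in for whatever dissimilarity the method actually uses.
    """
    q = descriptors[query_idx]
    dists = np.linalg.norm(descriptors - q, axis=1)   # dissimilarity to query
    order = np.argsort(dists)                          # most similar first
    return [int(i) for i in order if i != query_idx][:k]
```

For example, with descriptors clustered by class, the retrieved set for a query should be dominated by shapes of the same class, which is exactly what retrieval benchmarks measure.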

We also introduce a deep similarity network fusion framework for 3D shape classification using a graph convolutional neural network, which is an efficient and scalable deep learning model for graph-structured data. The proposed approach coalesces the geometrical discriminative power of geodesic moments and similarity network fusion in an effort to design a simple, yet discriminative shape descriptor. This geometric shape descriptor is then fed into the graph convolutional neural network to learn a deep feature representation of a 3D shape. We validate our method on ModelNet shape benchmarks, demonstrating that the proposed framework yields significant performance gains compared to state-of-the-art approaches.
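A graph convolutional layer of the kind fed by such a descriptor propagates node features over the graph with a normalized adjacency. The sketch below uses the common Kipf/Welling-style propagation rule H' = ReLU(D^(-1/2)(A+I)D^(-1/2) H W) as one standard formulation; the thesis's architecture may differ in its details:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer on graph-structured data.

    A: (n, n) adjacency, e.g. a fused similarity network over shapes;
    H: (n, f) node features (e.g. geometric shape descriptors);
    W: (f, f') learned weight matrix.
    Implements ReLU(D^-1/2 (A+I) D^-1/2 H W), a standard propagation
    rule used here for illustration.
    """
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)
```

Stacking a few such layers and pooling the node features yields a deep representation of the graph, which a final classifier maps to shape categories.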

Divisions: Concordia University > Gina Cody School of Engineering and Computer Science > Concordia Institute for Information Systems Engineering
Item Type: Thesis (PhD)
Authors: Luciano, Lorenzo
Institution: Concordia University
Degree Name: PhD
Program: Information and Systems Engineering
Date: 1 July 2018
Thesis Supervisor(s): Ben Hamza, Abdessamad
Keywords: Geodesic moments; deep learning; Laplace-Beltrami; stacked autoencoders; shape classification; shape retrieval; geodesic distance; similarity network fusion; graph networks
ID Code: 984470
Deposited By: Lorenzo Luciano
Deposited On: 31 Oct 2018 17:49
Last Modified: 31 Oct 2018 17:49
