The availability of large 3D shape benchmarks has sparked a flurry of research activity in the development of efficient techniques for 3D shape recognition, which is a fundamental problem in a variety of domains such as pattern recognition, computer vision, and geometry processing. A key element of virtually any shape recognition method is to represent a 3D shape by a concise and compact descriptor aimed at facilitating the recognition task. The recent trend in shape recognition is geared toward using deep neural networks to learn features at various levels of abstraction, and has been driven, in large part, by a combination of affordable computing hardware, open-source software, and the availability of large-scale datasets.

In this thesis, we propose deep learning approaches to 3D shape classification and retrieval. Our approaches inherit many useful properties from the geodesic distance, most notably capturing the intrinsic geometric structure of 3D shapes and invariance to isometric deformations. More specifically, we present an integrated framework for 3D shape classification that extracts discriminative geometric shape descriptors using geodesic moments. Further, we introduce a geometric framework for unsupervised 3D shape retrieval using geodesic moments and stacked sparse autoencoders. The key idea is to learn deep shape representations in an unsupervised manner. Such discriminative shape descriptors can then be used to compute pairwise dissimilarities between shapes in a dataset and to retrieve the shapes most relevant to a given query. Experimental evaluation on three standard 3D shape benchmarks demonstrates the competitive performance of our approach in comparison with existing techniques.

We also introduce a deep similarity network fusion framework for 3D shape classification using a graph convolutional neural network, an efficient and scalable deep learning model for graph-structured data. The proposed approach coalesces the geometric discriminative power of geodesic moments and similarity network fusion in an effort to design a simple, yet discriminative shape descriptor. This geometric shape descriptor is then fed into the graph convolutional neural network to learn a deep feature representation of a 3D shape. We validate our method on the ModelNet shape benchmarks, demonstrating that the proposed framework yields significant performance gains compared to state-of-the-art approaches.