Over the past few years, considerable progress has been made in the area of machine learning. However, when learning machines such as support vector machines (SVMs) and neural networks are applied to massive sets of high-dimensional data, many challenging problems emerge, such as high computational cost and how to adapt the structure of a learning system. It is therefore important to develop new methods that combine computational efficiency with high accuracy, so that learning algorithms can be applied more widely to areas such as data mining, Optical Character Recognition (OCR) and bioinformatics. This thesis focuses on three problems: adapting the structure of a neural network learning system, speeding up SVM training, and speeding up SVM testing on huge data sets.

For the first problem, a local learning framework is proposed to automatically construct an ensemble of neural networks, each trained on a local subset of the data, so that the complexity and training time of the learning system are reduced and its generalization performance is enhanced.

For SVM training on very large data sets with thousands of classes and high-dimensional input vectors, block diagonal matrices are used to approximate the original kernel matrix, so that the original SVM optimization can be decomposed into hundreds of sub-problems, each of which can be solved efficiently. Theoretically, the run-time complexity of the proposed algorithm scales linearly with the size of the data set, the dimension of the input vectors and the number of classes.

For the last problem, a fast iterative algorithm is proposed to approximate the reduced set vectors simultaneously for general kernel types, so that the number of vectors in the decision function of each class is reduced considerably and the testing speed is increased significantly.

The main contributions of this thesis are effective solutions to the above three problems. The methods used to solve the last two problems are especially important in making support vector machines more competitive in tasks where both high accuracy and high classification speed are required. The proposed SVM algorithm trains much faster than existing solvers such as svm-light and libsvm when applied to a huge data set with thousands of classes: the total training time of an SVM with the radial basis function kernel on Hanwang's handwritten Chinese database (2,144,489 training samples, 542,122 testing samples, 3,755 classes and 392-dimensional input vectors) is 19 hours on a Pentium 4 machine. The proposed testing algorithm also achieves a promising classification speed of 16,000 patterns per second on the MNIST database. Besides efficient computation, state-of-the-art generalization performance has been achieved on several well-known public and commercial databases; in particular, very low error rates of 0.38%, 0.5% and 1.0% have been reached on the MNIST and Hanwang handwritten digit databases and the ETL9B handwritten Chinese database, respectively.
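To make the local learning idea concrete, the sketch below partitions the data and trains one small network per partition, routing each query to the model of its nearest region. The clustering method (k-means), the model class (a small MLP), and the nearest-centroid routing rule are illustrative assumptions, not the thesis framework itself.

```python
# A minimal sketch of a local-learning ensemble, assuming k-means partitioning
# and sklearn MLPs; the actual framework in the thesis may differ.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def train_local_ensemble(X, y, n_regions=8):
    km = KMeans(n_clusters=n_regions, n_init=10).fit(X)
    experts = []
    for r in range(n_regions):
        idx = km.labels_ == r
        # Each expert sees only its local subset (assumes every region
        # contains more than one class).
        net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300)
        net.fit(X[idx], y[idx])
        experts.append(net)
    return km, experts

def predict_local(km, experts, X):
    regions = km.predict(X)  # nearest-centroid routing to an expert
    return np.array([experts[r].predict(x[None])[0]
                     for r, x in zip(regions, X)])
```

Because each network trains on only a fraction of the data, both its size and its training time can stay small even when the full data set is large.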
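The key property of the block diagonal approximation is that the kernel matrix K is replaced by blkdiag(K_1, ..., K_B), so the SVM optimization decouples into one independent sub-problem per block. The following sketch illustrates that decoupling under assumed choices (a precomputed RBF kernel, sklearn's SVC as the per-block solver, and externally supplied block assignments); it is not the thesis's solver.

```python
# A minimal sketch of blockwise SVM training under a block diagonal kernel
# approximation. Block assignment, gamma, and C are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def train_blockwise_svm(X, y, block_ids, gamma=0.1, C=10.0):
    """Solve one small QP per block; together the sub-problems stand in
    for the full optimization (assumes >= 2 classes per block)."""
    models = {}
    for b in np.unique(block_ids):
        idx = block_ids == b
        Xb, yb = X[idx], y[idx]
        K = rbf_kernel(Xb, Xb, gamma=gamma)  # kernel restricted to one block
        clf = SVC(kernel="precomputed", C=C)
        clf.fit(K, yb)
        models[b] = (clf, Xb)  # keep Xb to compute test kernels later
    return models
```

Since each sub-problem involves only its own block's samples, the total cost grows with the sum of block sizes rather than with the square of the full data set, which is consistent with the linear scaling claimed in the abstract.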
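For the testing-speed problem, a reduced set replaces the full support vector expansion with far fewer vectors chosen to match it in feature space. The sketch below shows the classical fixed-point iteration for a single reduced set vector with an RBF kernel (in the style of Schölkopf et al.), which is simpler than the thesis's simultaneous, general-kernel algorithm; gamma, the initialization, and the stopping rule are assumptions.

```python
# A minimal sketch of a reduced-set fixed-point iteration for an RBF kernel:
# z <- sum_i w_i x_i / sum_i w_i with w_i = alpha_i * exp(-gamma*||x_i - z||^2).
# This finds one approximating vector; the thesis method refines many at once.
import numpy as np

def reduced_set_vector(X, alpha, gamma=0.1, iters=50, tol=1e-6):
    z = X[np.argmax(np.abs(alpha))]  # start from the heaviest support vector
    for _ in range(iters):
        w = alpha * np.exp(-gamma * np.sum((X - z) ** 2, axis=1))
        denom = w.sum()
        if abs(denom) < 1e-12:       # degenerate weights: stop iterating
            break
        z_new = w @ X / denom
        if np.linalg.norm(z_new - z) < tol:
            break
        z = z_new
    return z
```

Evaluating the decision function then requires one kernel evaluation per reduced set vector instead of one per support vector, which is what makes classification rates such as the reported 16,000 patterns per second attainable.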