Recent interest in second language acquisition has prompted studies of the relationship between linguistic indices and writing proficiency in English. This thesis investigates the influence of basic linguistic notions, introduced early in English grammar instruction, on automatic proficiency evaluation tasks. We discuss the predictive potential of verb features (tense, aspect, voice, and type and degree of embedding) and compare them to word-level n-grams (unigrams, bigrams, trigrams) for proficiency assessment. We conducted four experiments using standard language corpora that differ in the authors’ cultural backgrounds and in essay topic variety. Tense alone showed little variation across proficiency levels or languages of origin, making it a poor predictor for our corpora, but the combination of tense and aspect showed promise, especially for more natural and varied datasets. Overall, our experiments illustrated that verb features, when examined individually, form a baseline for writing proficiency prediction. Combinations of these verb features, which are not grammatically independent, perform better. Finally, we investigate how language homogeneity due to corpus design influences the performance of our features. We find that the majority of the essays we examined use present tense, indefinite aspect, and passive voice, which greatly limits the discriminative power of the tense, aspect, and voice features. Linguistic features must therefore be tested for their interoperability together with their effectiveness on the corpora used. We conclude that all corpus-based research should include an early validation step that investigates feature independence, feature interoperability, and feature value distribution in a reference corpus, in order to anticipate potentially spurious data sparsity effects.