When it comes to character motion, especially articulated character animation, most of the effort is spent on accurately capturing low-level and high-level action styles. Among the many techniques that have evolved over the years, motion capture (mocap) and keyframe animation are the two most popular choices. Both can capture the low-level and high-level action styles of a particular individual, but at great expense in human effort. In this thesis, we use performance data in video form to augment the character animation process, considerably reducing the human effort required for both style preservation and motion regeneration. The major contribution of this research is two new methods, one for high-level and one for low-level character animation, both based on learning from video data to augment the motion creation process. In the first, we take advantage of recent advances in action recognition to automatically recognize human actions in video. High-level action patterns are learned and captured with hidden Markov models (HMMs), which are then used to generate new action sequences exhibiting the same pattern. For low-level action style, we present a different approach that uses user-identified transition frames in a video to improve transition construction in the standard motion graph technique, producing smooth action sequences. Both methods have been implemented, and a number of results illustrating the concept and applicability of the proposed approaches are presented.
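To make the high-level idea concrete, the generation step can be sketched as sampling from a learned Markov chain over action labels. The states, transition probabilities, and function name below are illustrative assumptions for this sketch, not values or code from the thesis; a full HMM would additionally model observations, but the sequence-generation behavior is the same.

```python
import random

# Toy first-order Markov chain over high-level actions, standing in for
# the state dynamics an HMM might learn from actions recognized in video.
# All states and probabilities here are made up for illustration.
ACTIONS = ["walk", "run", "jump"]
TRANSITIONS = {
    "walk": [0.7, 0.2, 0.1],  # P(next action | current = walk)
    "run":  [0.3, 0.6, 0.1],  # P(next action | current = run)
    "jump": [0.5, 0.4, 0.1],  # P(next action | current = jump)
}

def generate_action_sequence(start, length, seed=None):
    """Sample an action sequence whose statistics follow the learned pattern."""
    rng = random.Random(seed)
    seq = [start]
    for _ in range(length - 1):
        nxt = rng.choices(ACTIONS, weights=TRANSITIONS[seq[-1]])[0]
        seq.append(nxt)
    return seq

print(generate_action_sequence("walk", 8, seed=0))
```

A generated sequence can then drive motion synthesis, with each high-level label expanded into an actual motion clip.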