Uncrewed Aerial Vehicles (UAVs) are playing an increasingly significant role in modern life. In recent decades, many commercial and scientific communities around the world have developed autonomous UAV techniques for a broad range of applications, such as forest fire monitoring, parcel delivery, disaster rescue, natural resource exploration, and surveillance. This brings numerous opportunities and challenges for UAVs to improve their capabilities in path planning, motion control, and fault-tolerant control (FTC). Meanwhile, owing to the powerful decision-making, adaptive learning, and pattern recognition capabilities of machine learning (ML) and deep reinforcement learning (DRL), these techniques have developed rapidly and achieved major successes in a variety of applications. However, relatively little research has applied ML and DRL to the motion control and real-time path planning of UAVs. This thesis focuses on the development of ML and DRL for the path planning, motion control, and FTC of UAVs. A number of contributions pertaining to state space definition, reward function design, and training method improvement are made in this thesis, which improve the effectiveness and efficiency of applying DRL to UAV motion control problems. In addition to the control problems, this thesis also presents real-time path planning contributions, including a relative state space definition and a human-pedestrian-inspired reward function, which provide a reliable and effective solution for real-time path planning in complex environments.