Living entities have an innate ability to replicate the behaviour of others. Since this mechanism helps overcome time, mobility, and resource constraints when learning new abilities, it is not surprising that the Imitation Learning framework has played a vital role in many AI systems. In the context of machine learning, Imitation Learning algorithms are used to infer optimal behaviour for a task from execution traces produced by an expert agent. This paradigm applies, in principle, to any setting where expert demonstrations are available. The first part of the present work develops an example of such a system, applying imitation learning principles to the problem of guiding visually impaired people across intersections. As an indirect method for transferring skills between intelligent agents, imitation learning techniques made it possible to capture the knowledge of sighted individuals in a solution that assists blind individuals with the task of crossing intersections. A system of this kind has the potential to change the lives of its users by improving their mobility and ability to explore. However, deploying such a system requires guaranteeing that a policy derived from machine learning methods performs consistently in familiar environments and reacts safely to the unknown. Hence, the second part of this work is devoted to developing a theoretical and experimental framework that improves the safety of the Imitation Learning process through interactivity and uncertainty estimation. Uncertainty-aware Interactive Imitation Learning algorithms will support the derivation of policies with safety guarantees, thus expanding the range of areas where AI systems can be applied.