This thesis studies the global convergence of model-based and model-free policy gradient and natural policy gradient algorithms for a class of linear quadratic deep structured teams. In such systems, agents are partitioned into a few sub-populations, where the agents in each sub-population are coupled in their dynamics and cost function through a set of linear regressions of the states and actions of all agents. Every agent observes its local state and the linear regressions of the states, referred to as deep states. For a sufficiently small risk factor and/or a sufficiently large population, we prove that model-based policy gradient methods converge globally to the optimal solution. For an arbitrary number of agents, we develop model-free policy gradient and natural policy gradient algorithms for the special case of a risk-neutral cost function. The proposed algorithms are scalable with respect to the number of agents because the dimension of their policy space is independent of the number of agents in each sub-population. Furthermore, the connection between the model-based and model-free methods is investigated for systems with unknown nonlinear terms having bounded Lipschitz constants. As an extension, we prove the existence of a near-optimal solution in the convex vicinity of initial controllers obtained from model-based LQR methods; these initial control strategies are derived by solving an algebraic Riccati equation (ARE) in which the nonlinear terms are neglected. Finally, we provide convergence guarantees to the optimal solution using a derivative-free policy gradient approach. Simulations confirm the validity of the analytical results.
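
As an illustration of the deep-state notion, a representative form (with notation chosen here for exposition only, not necessarily matching the thesis body) is a weighted average of the local states within each sub-population:
\[
\bar{x}^{s}_{t} \;=\; \frac{1}{|\mathcal{N}^{s}|} \sum_{i \in \mathcal{N}^{s}} \alpha^{i}\, x^{i}_{t}, \qquad s \in \{1,\dots,S\},
\]
where $\mathcal{N}^{s}$ denotes the set of agents in sub-population $s$, $x^{i}_{t}$ the local state of agent $i$ at time $t$, and $\alpha^{i}$ its influence weight in the linear regression.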