Artificial Intelligence (AI) is at the forefront of the Fourth Industrial Revolution, fundamentally transforming industries and societies through unprecedented automation and data-driven applications. The Fourth Industrial Revolution is characterized by a fusion of software and hardware advances that blurs the boundaries between the physical, digital, and biological spheres. These advances enable AI to process vast amounts of information, generate actionable insights, and perform complex tasks more quickly and, in many cases, more accurately than humans, leading to better-informed decisions and more efficient processes.

Despite its success and promising results in other domains, the adoption and integration of AI innovations in healthcare has been complex and slow, and few such innovations have been incorporated into daily practice. This thesis addresses technological, legal, and ethical issues that must be mitigated before AI-based systems can be fully adopted and trusted in clinical trials and workflows. We identify an opportunity to advance the state of the art of AI solutions and their adoption in healthcare through privacy-preserving aggregation algorithms and human-centered evaluations of transparency in clinical decision support systems. In particular, this dissertation explores advanced methodologies in Federated Learning (FL) for improving collaborative learning, data privacy, and decision-making across various domains. We improve the core FL aggregation algorithm to better handle learning from distributed, heterogeneous data sources, with a method named Precision-weighted Federated Learning. We evaluate it extensively on benchmark datasets in resource-constrained environments to measure its limits, and validate its utility with additional tests on clinical data, where it enhances the quality of clinical assessment analysis.
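The abstract names Precision-weighted Federated Learning without defining it. As one possible reading of the name, the sketch below contrasts the standard sample-count weighting of FedAvg with a precision-based (inverse-variance) weighting of client updates, so that noisier, more heterogeneous clients contribute less to the global model. The function names, the variance inputs, and the exact weighting scheme are illustrative assumptions, not the thesis's actual algorithm.

```python
import numpy as np

def fedavg(updates, sample_counts):
    """Standard FedAvg baseline: weight each client's model update
    by its local sample count."""
    weights = np.asarray(sample_counts, dtype=float)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

def precision_weighted_aggregate(updates, variances, eps=1e-8):
    """Hypothetical precision-weighted aggregation: weight each client's
    update by its precision (inverse variance), so clients with noisier
    local estimates are down-weighted. `variances` and `eps` are
    illustrative assumptions."""
    precisions = 1.0 / (np.asarray(variances, dtype=float) + eps)
    weights = precisions / precisions.sum()
    return sum(w * u for w, u in zip(weights, updates))

# Two clients whose local updates disagree:
updates = [np.array([1.0, 1.0]), np.array([3.0, 3.0])]
print(fedavg(updates, [1, 3]))                          # larger client dominates
print(precision_weighted_aggregate(updates, [1.0, 3.0]))  # noisier client down-weighted
```

Under this reading, the design choice is that statistical reliability, rather than raw data volume, decides each client's influence on the aggregate.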
Our research also aims to understand how to visualize AI model outputs to enhance transparency in clinical decision support systems. We conduct extensive evaluations of how visualizing AI uncertainty, together with users' personal traits, affects decision-making, supporting the design of AI outputs that clinicians can interpret. We first explore these effects in low-risk gaming scenarios, and then examine AI uncertainty representation in high-stakes clinical decision-making, particularly in Alzheimer's disease prognosis.

In summary, this dissertation presents significant advancements in FL and clinical decision support systems. We address some of the current limitations and challenges of adopting AI systems, and demonstrate improvements in collaborative learning, data privacy, and human-AI decision-making. These findings offer valuable insights for designing robust, efficient, and trustworthy AI and FL systems. We believe that user-centered design practices will eventually play a more prominent role in the development of AI tools and technologies, becoming the driving force behind moving innovations from the laboratory to the clinic.