This thesis aims to enhance the conversational capabilities of social robots using LLMs while ensuring reliable and consistent responses. The overall goal is to design a framework that not only improves interaction capabilities but also brings transparency, explainability, and interpretability to LLM-based social robotics. The methodology comprises four main stages: (1) leveraging the capabilities of LLMs; (2) incorporating multi-modal signals; (3) applying appropriate approaches to generate human-like explanations for the system's outputs; and (4) building models of user trust. The research will consider cognition-based, affection-based, and relation-based trust to model and evaluate the system's trustworthiness. Specifically, the thesis will develop approaches to monitor user trust during interactions and to decide on appropriate interventions, as sketched below.
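To make the intended monitoring loop concrete, the following is a minimal sketch of how the three trust dimensions might be aggregated and used to trigger an intervention. All names here (TrustEstimate, decide_intervention, the weights, the threshold, and the intervention labels) are illustrative assumptions, not the thesis's actual design.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class TrustEstimate:
    """Hypothetical per-interaction estimate of the three trust dimensions."""
    cognition: float  # perceived competence/reliability, in [0, 1]
    affection: float  # emotional comfort with the robot, in [0, 1]
    relation: float   # rapport built over repeated interactions, in [0, 1]

    def aggregate(self, w_cog: float = 0.4, w_aff: float = 0.3,
                  w_rel: float = 0.3) -> float:
        """Weighted combination of the three dimensions (weights are assumed)."""
        return (w_cog * self.cognition
                + w_aff * self.affection
                + w_rel * self.relation)


def decide_intervention(trust: TrustEstimate,
                        threshold: float = 0.5) -> Optional[str]:
    """Return an intervention label when aggregate trust drops below a threshold."""
    if trust.aggregate() >= threshold:
        return None  # trust is adequate; continue the interaction as planned
    # Target the weakest dimension (a simplifying assumption for illustration).
    weakest = min(
        ("explain_reasoning", trust.cognition),  # low cognition-based trust -> explain the output
        ("empathic_repair", trust.affection),    # low affection-based trust -> empathic response
        ("rapport_building", trust.relation),    # low relation-based trust -> personalization
        key=lambda item: item[1],
    )
    return weakest[0]


if __name__ == "__main__":
    estimate = TrustEstimate(cognition=0.2, affection=0.7, relation=0.6)
    print(decide_intervention(estimate))  # -> "explain_reasoning"
```

In practice, the component scores would come from multi-modal signals observed during the interaction, and the choice of weights, threshold, and interventions would itself be a subject of the research rather than fixed constants as above.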