Predictions made by machine learning models need to be reliable. Understanding whether a prediction is under-confident or falsely over-confident is critical for deploying machine learning, especially in real-world applications. However, learning models from data is inseparable from uncertainty, which arises both from the intrinsic, irreducible stochastic variability in the samples and from the lack of knowledge about the underlying model parameters that best explain the data. This research therefore aims to develop practical methods for identifying these sources of uncertainty during both optimization and decision making. The goal is to design interpretable and trustworthy techniques that overcome the opaque and overly confident predictions currently common in deep learning models.
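
As a purely illustrative aside, and not part of the proposed methods, the following minimal Python sketch shows the two sources of uncertainty named above on toy data: disagreement across a bootstrap ensemble of polynomial fits reflects lack of knowledge about the parameters (epistemic uncertainty), while the pooled residual variance gives a crude estimate of the irreducible sample noise (aleatoric uncertainty). The data, model class, and all names (e.g. fit_poly, n_members) are assumptions made only for this sketch.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy heteroscedastic data: the noise level (aleatoric uncertainty) grows with |x|.
    x = rng.uniform(-3, 3, size=200)
    y = np.sin(x) + rng.normal(0, 0.1 + 0.2 * np.abs(x), size=200)

    def fit_poly(x, y, deg=5):
        """Least-squares polynomial fit; returns coefficients and in-sample residual variance."""
        coefs = np.polyfit(x, y, deg)
        resid_var = np.var(y - np.polyval(coefs, x))
        return coefs, resid_var

    # Bootstrap ensemble: each member sees a resampled dataset, so disagreement
    # between members reflects uncertainty about the model parameters (epistemic).
    n_members = 20
    x_test = np.linspace(-3, 3, 50)
    member_means, member_noise_vars = [], []
    for _ in range(n_members):
        idx = rng.integers(0, len(x), size=len(x))
        coefs, resid_var = fit_poly(x[idx], y[idx])
        member_means.append(np.polyval(coefs, x_test))
        member_noise_vars.append(resid_var)

    member_means = np.stack(member_means)       # shape (n_members, n_test)

    # Decompose predictive variance into the two sources named in the text.
    aleatoric = np.mean(member_noise_vars)      # crude estimate of irreducible sample noise
    epistemic = np.var(member_means, axis=0)    # spread of ensemble predictions per test point
    total = aleatoric + epistemic

    print(f"aleatoric (noise) variance ~ {aleatoric:.3f}")
    print(f"epistemic variance, centre vs. edge: {epistemic[25]:.3f} vs. {epistemic[0]:.3f}")

In this sketch the epistemic term is largest near the edges of the data, where the ensemble members extrapolate differently, while the aleatoric term is a single pooled value; separating such terms per input, and doing so reliably for deep models, is exactly the kind of question the proposed research addresses.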