Łukasz Janisiów
Deep learning models have demonstrated a remarkable ability to detect subtle and complex patterns, often surpassing human capabilities. However, without proper interpretability tools, the valuable insights learned by these models remain hidden within their internal weights. This is why explainability techniques are essential: they translate a model's decision-making process into terms understandable to humans. By combining powerful models with meaningful explanations, we can transform machine learning from a purely predictive tool into one that enables scientific discovery, revealing patterns and relationships too complex for humans to detect on their own. My work will focus on the domain of drug discovery, as I believe that enhancing model interpretability will not only accelerate scientific progress in pharmacology but also significantly reduce dependence on costly and time-consuming laboratory experiments.