About Me
I am a Ph.D. candidate in Computer Engineering and a Research Assistant at Toronto Metropolitan University (TMU), Toronto, Ontario, Canada, affiliated with the Trustworthy AI Laboratory (TAILab). I received my B.Sc. in Computer Engineering (Software) from Shahid Beheshti University (2016) and my M.Sc. in Computer Engineering (Machine Learning, Algorithms, and Computation) from the University of Tehran (2019). My doctoral research at TMU advances both the theoretical and practical aspects of trustworthy AI and machine learning, and has led to several peer-reviewed publications.
My research focuses on confidence and uncertainty quantification in deep neural networks, with emphasis on conformal prediction and risk control (distribution-free uncertainty estimation), evidential and Bayesian methods, and information-theoretic techniques that provide statistically rigorous uncertainty quantification and enhance the reliability of AI systems. I am also keenly interested in uncertainty in Large Language Models (LLMs), out-of-distribution (OOD) detection, calibration, adversarial robustness, and interpretability in machine learning. Passionate about bridging rigorous, uncertainty-aware methodologies with real-world AI deployment, I am dedicated to developing more reliable, transparent, and robust AI solutions.
Research Highlights
- Model uncertainty quantification derived from conformal prediction (CP) sets
- OOD detection using information-theoretic and evidential class properties
- Spatially-Adaptive Conformal Prediction (SACP) for medical image segmentation
- Evidential Conformal Prediction (ECP) for efficient, adaptive prediction sets in deep classification
- LLM uncertainty quantification using conformal semantic entropy
- Adversarial robustness via diverse ensembles and noisy logits
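
As a small illustration of the split conformal prediction machinery underlying several of these directions, here is a minimal sketch on synthetic data. The toy model, score function, and variable names are illustrative assumptions for exposition, not code from any of the works above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a "model" that outputs softmax probabilities for 3 classes.
# In practice these would come from a trained deep classifier.
n_cal, n_classes = 500, 3
cal_probs = rng.dirichlet(np.ones(n_classes), size=n_cal)  # calibration softmax outputs
cal_labels = rng.integers(0, n_classes, size=n_cal)        # calibration labels

alpha = 0.1  # target miscoverage: sets should contain the true label ~90% of the time

# Nonconformity score: 1 minus the probability assigned to the true class.
scores = 1.0 - cal_probs[np.arange(n_cal), cal_labels]

# Conformal quantile with the finite-sample (n + 1) correction.
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
qhat = np.quantile(scores, q_level, method="higher")

def prediction_set(probs):
    """Return the conformal prediction set for one softmax vector."""
    return np.flatnonzero(1.0 - probs <= qhat)

test_probs = np.array([0.7, 0.2, 0.1])
print(prediction_set(test_probs))
```

The resulting sets are guaranteed, on average over calibration and test draws, to contain the true label with probability at least 1 − α, regardless of how well the underlying classifier is calibrated.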
