About
I am an Associate Professor working in artificial intelligence and the head of the Responsible AI Lab (RAIL). My work focuses on building reliable, fair, and interpretable AI systems.
Brief bio: I lead the Responsible AI Lab at De Montfort University (DMU), where I supervise research on fairness, interpretability, and trustworthy machine learning. I teach undergraduate and postgraduate courses in AI and mentor PhD students working on responsible and reliable AI systems.
Research
My research interests include Responsible AI, algorithmic fairness, interpretability, and trustworthy ML systems. See my publications and projects (links to be added).
PhD Projects
I am currently accepting PhD students. Interested students should contact ahmed.moustafa@dmu.ac.uk.
(1) Heterogeneous Data-Aware Federated Learning for Medical AI
This project develops strategies to handle non-independent and identically distributed (non-IID) data in federated learning for healthcare. Many current federated learning algorithms implicitly assume IID client datasets, which limits their effectiveness on real-world heterogeneous healthcare data sources. The research aims to enable privacy-preserving machine learning across diverse medical datasets.
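To illustrate the starting point this project challenges, here is a minimal sketch of FedAvg-style aggregation, where a server averages client model parameters weighted by local dataset size. The function and variable names are illustrative, not code from the project; under non-IID clients this plain weighted average is exactly what tends to degrade.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg-style aggregation: average client parameters weighted by
    local dataset size. Illustrative sketch only.

    client_weights: list of 1-D numpy arrays (flattened model parameters)
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))

# Two toy clients with different local models and dataset sizes.
w_hospital_a = np.array([1.0, 2.0])
w_hospital_b = np.array([3.0, 4.0])
global_w = fedavg([w_hospital_a, w_hospital_b], [100, 300])
print(global_w)  # -> [2.5 3.5], pulled toward the larger client
```

When client data distributions diverge, these per-client updates point in conflicting directions, so heterogeneity-aware strategies replace or regularize this simple average.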
(2) Promoting Fairness in AutoML Systems
This project develops dynamic strategies to mitigate unfairness in AutoML systems by addressing societal biases in training data and development decisions. The research focuses on optimizing accuracy-fairness trade-offs to determine when and how to apply fairness interventions across different ML models in automated machine learning pipelines.
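As a toy illustration of the accuracy-fairness trade-off mentioned above, the sketch below computes a demographic parity gap and folds it into a scalarized objective that an AutoML search could optimize. The metric choice, the function names, and the trade-off weight `lam` are assumptions for illustration, not the project's actual formulation.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups
    (a common, simple group-fairness metric)."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def score(accuracy, dp_gap, lam=0.5):
    """Hypothetical scalarized objective trading accuracy against unfairness;
    an AutoML pipeline could maximize this when comparing candidate models."""
    return accuracy - lam * dp_gap

y_pred = np.array([1, 1, 0, 1, 0, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
gap = demographic_parity_gap(y_pred, group)
print(gap, score(0.9, gap))  # a large gap penalizes an otherwise accurate model
```

Varying `lam` across pipeline stages is one simple way to model the "when and how to intervene" question the project studies.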
(3) Safe and Interpretable Reinforcement Learning for Personalized Healthcare
This project develops Safe RL frameworks with interpretable models (e.g., PIRL) and human-in-the-loop systems for personalized treatment optimization and drug discovery. By integrating multi-objective optimization, it addresses data sparsity, black-box decision-making, and clinical bias to enable trustworthy AI-driven healthcare solutions.
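One common Safe RL building block that could underpin such a framework is action masking: the agent's greedy choice is restricted to a clinically approved action set. This is a minimal hypothetical sketch, not the project's method; the unsafe-action set here stands in for constraints a clinician or safety layer would supply.

```python
import random

def safe_action(q_values, unsafe, epsilon=0.1):
    """Epsilon-greedy action selection restricted to safe actions.

    q_values: estimated value of each action
    unsafe:   set of action indices vetoed by a safety constraint
              (e.g., contraindicated treatments in a clinical setting)
    """
    allowed = [a for a in range(len(q_values)) if a not in unsafe]
    if random.random() < epsilon:
        return random.choice(allowed)          # explore within safe set only
    return max(allowed, key=lambda a: q_values[a])  # exploit best safe action

# Action 0 looks best by value but is vetoed, so the agent picks action 2.
print(safe_action([10.0, 1.0, 2.0], unsafe={0}, epsilon=0.0))  # -> 2
```

Masking guarantees the constraint is never violated during exploration, which is why it pairs naturally with human-in-the-loop oversight.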
(4) Multi-Agent Reinforcement Learning for Resilient Supply Chains
This project develops scalable MARL systems to optimize supply chain resilience against disruptions and market volatility. By integrating Multi-Objective RL to balance cost, availability, and robustness, it addresses coordination challenges, non-stationarity, and credit assignment while improving sample efficiency for large-scale networks.
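Two of the ingredients named above can be sketched in a few lines: weighted-sum scalarization, which collapses the cost/availability/robustness objectives into one reward, and difference rewards, a classic credit-assignment device that scores an agent by comparing the global reward with a counterfactual where its action is replaced. Names and weights are illustrative assumptions, not the project's design.

```python
def scalarize(objectives, weights):
    """Weighted-sum scalarization of a multi-objective reward vector,
    e.g. [negated cost, availability, robustness]."""
    return sum(w * o for w, o in zip(weights, objectives))

def difference_reward(global_reward, counterfactual_reward):
    """Difference reward for one agent: global reward minus the reward the
    system would have earned with that agent's action replaced by a default.
    Isolates the agent's own contribution for credit assignment."""
    return global_reward - counterfactual_reward

# A warehouse agent's scalarized team reward, then its individual credit.
r = scalarize([-120.0, 0.95, 0.8], [0.01, 2.0, 1.0])   # -> 1.5
credit = difference_reward(r, 1.2)                      # -> 0.3
print(r, credit)
```

In a multi-objective setting the weight vector itself becomes a search variable, which is one way to trace out the cost/resilience trade-off surface.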
(5) Human-in-the-Loop Reinforcement Learning for Proactive Cybersecurity Defence
This project combines AI-driven threat detection with human expertise through Human-in-the-Loop RL for proactive, accountable cyber defence. The system enables continuous learning from evolving attacks while ensuring transparency, bias mitigation through human oversight, and Safe RL principles that prevent harmful autonomous actions in cybersecurity operations.
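The oversight loop described above can be sketched as a gate between the agent's proposed action and execution: a human reviewer approves or vetoes, and every decision is logged for accountability. The function and action names are hypothetical illustrations, not the project's interface.

```python
def hitl_step(proposed_action, human_review, log):
    """Route an agent's proposed defensive action through human oversight.

    proposed_action: action suggested by the RL agent (e.g., "isolate_host")
    human_review:    callable returning True to approve, False to veto
    log:             audit trail of (action, approved) pairs for accountability
    """
    approved = human_review(proposed_action)
    log.append((proposed_action, approved))
    # A vetoed action falls back to a harmless no-op instead of executing.
    return proposed_action if approved else "no_op"

audit_log = []
print(hitl_step("isolate_host", lambda a: True, audit_log))   # -> isolate_host
print(hitl_step("wipe_server", lambda a: False, audit_log))   # -> no_op
```

Replacing the reviewer callable with a learned approval model, while keeping the audit log, is one route from full human oversight toward the continuous-learning setting the project targets.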
Contact
For collaborations or inquiries, email ahmed.moustafa@dmu.ac.uk.