MenaML Winter School — Doha, Qatar

I had the incredible opportunity to attend the MenaML Winter School in Doha, Qatar, from February 9 to 14. It was an enriching experience with a vibrant community, combining insightful lectures, hands-on coding sessions, mentorship sessions, and poster presentations across key areas of machine learning. I also had the privilege of presenting my research poster during the poster session.

The full program, along with the recordings and slides, can be found here.

Learnings from MenaML Winter School 2025

Below I summarize some of the most interesting lectures and key takeaways from the winter school:

1. Deep Learning Foundations

Sahra Ghalebikesabi (Google DeepMind) introduced the fundamentals of deep learning—from single hidden-layer neural networks to modern architectures such as CNNs and LSTMs. Key concepts included:

  • Backpropagation and gradient descent (resource).
  • Activation functions and optimizers like Adam and AdaGrad (overview).
  • Overfitting, regularization, and strategies for stable training.
  • Applications in computer vision through convolutional neural networks.
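To make the first two points concrete, here is a minimal sketch of backpropagation and gradient descent on a single-hidden-layer network, written from the general recipe rather than from the lecture (the data, network size, and learning rate are all illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: learn y = sin(x) on [-3, 3]
X = rng.uniform(-3, 3, size=(64, 1))
y = np.sin(X)

# Parameters of a 1-16-1 network with a tanh activation
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(2000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    loss = np.mean((pred - y) ** 2)

    # Backward pass: apply the chain rule layer by layer
    g_pred = 2 * (pred - y) / len(X)
    g_W2 = h.T @ g_pred
    g_b2 = g_pred.sum(0)
    g_h = g_pred @ W2.T
    g_z = g_h * (1 - h ** 2)      # tanh'(z) = 1 - tanh(z)^2
    g_W1 = X.T @ g_z
    g_b1 = g_z.sum(0)

    # Plain gradient-descent update
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2

print(f"final MSE: {loss:.4f}")
```

Optimizers like Adam and AdaGrad replace the plain update in the last two lines with per-parameter adaptive step sizes, but the backward pass stays the same.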

2. Diffusion Models & Generative AI

Two deep-dive sessions explored diffusion models, which are redefining generative AI:

  • Foundations: Safa Messaoud explained the forward process, which gradually adds noise, and the reverse process, which denoises step by step to generate realistic images, audio, and text. We also learned about conditional diffusion (scalar, text, and spatial conditions) and variational training with an ELBO loss.
  • Real-World Applications: Andrew El-Kadi (Google DeepMind) showcased cutting-edge uses—from photorealistic text-to-image generation to GenCast, a diffusion-based model for medium-range global weather forecasting. Evaluation metrics such as Fréchet Inception Distance (FID) and Inception Score (IS) were introduced for model assessment.
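The forward process described above has a convenient closed form: instead of adding noise one step at a time, x_t can be sampled directly from x_0. A minimal sketch in the DDPM style (the linear noise schedule and step count here are common defaults, not values from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # linear noise schedule (an assumption)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)       # cumulative signal-retention factor

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * eps

# Toy "image": a clean 1-D signal
x0 = np.sin(np.linspace(0, 2 * np.pi, 128))
x_mid = q_sample(x0, 250)     # partially noised
x_end = q_sample(x0, T - 1)   # almost pure Gaussian noise

print(f"signal coefficient at t=T: {np.sqrt(alpha_bar[-1]):.4f}")
```

By the final step the signal coefficient is close to zero, so x_T is essentially standard Gaussian noise; the learned reverse process starts from such noise and denoises back toward the data distribution.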

3. AI for Health

David Barrett & Alan Karthikesalingam (Google DeepMind) explored the opportunities and challenges of applying AI in healthcare. Key areas covered:

  • Applications: Medical question answering, clinical decision support, and automated radiology reporting using deep learning, self-supervised learning, diffusion models, and large language models.
  • Case Studies: Flamingo-CXR, a vision-language model fine-tuned to write chest X-ray reports, and Med-Gemini, a multimodal medical assistant for diagnostic dialogue.
  • Challenges: Key translational hurdles including reliability, interactivity, generalization, safety, trust, equity, and regulatory considerations before large-scale clinical deployment.

4. AI for Biology

Alex Graves (Google DeepMind) showed how generative AI can act as a "data explorer" for complex biological datasets. Key areas covered:

  • Applications: Protein structure prediction, inverse folding, and de novo antibody design using generative models that learn full joint probability distributions across genomics, proteomics, and evolutionary data.
  • Bayesian Flow Networks (BFNs): Introduction of ProtBFN and AbBFN, protein-focused variants that unify ideas from diffusion and autoregressive models to generate natural, diverse and novel protein sequences and antibodies.
  • Challenges: Noisy data, heterogeneous modalities, and evaluation without human feedback, along with the promise of generative modelling for discovery in life sciences.

My Research Poster

Research Poster