DEEP LEARNING
Course Description
Deep learning is a subfield of machine learning that focuses on training neural networks with multiple layers (hence the term “deep”). These networks learn to automatically discover complex patterns and representations from data. Unlike traditional machine learning, where feature engineering is crucial, deep learning models can learn relevant features directly from raw data. Applications of deep learning span various domains, including computer vision, natural language processing, speech recognition, and recommendation systems. By leveraging large datasets and powerful computational resources, deep learning has achieved remarkable breakthroughs in areas such as image classification, language translation, and autonomous driving.
Syllabus
Introduction:
- Introduction to Machine Learning Concepts,
- Must-Know Classical Methods,
- Model Fitness,
- Data Splitting,
- Performance Measures (both illustrated in the sketch after this list).
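As a minimal illustration of data splitting and a basic performance measure, the sketch below uses scikit-learn with a toy dataset; the dataset, the 60/20/20 split, and the logistic-regression model are illustrative choices, not prescribed by the course.

```python
# Minimal sketch: train/validation/test split and accuracy as a performance
# measure. The dataset and classifier here are illustrative placeholders.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# 60% train, 20% validation, 20% test (a common convention).
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```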
Essential Mathematics for Machine Learning:
- Linear Algebra,
- Random Vectors,
- PCA and SVD (see the sketch after this list),
- Optimization (SGD Family).
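A minimal sketch of PCA computed through the SVD, assuming NumPy and randomly generated toy data; the data shape and the number of components are arbitrary choices for illustration.

```python
# Minimal sketch: PCA computed via the SVD of the centered data matrix.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))           # 200 samples, 5 features (toy data)

Xc = X - X.mean(axis=0)                 # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 2                                   # number of principal components
scores = Xc @ Vt[:k].T                  # project onto the top-k components
explained_var = (S**2) / (len(X) - 1)   # eigenvalues of the sample covariance
print("explained variance ratio:", explained_var[:k] / explained_var.sum())
```

The squared singular values of the centered data matrix, scaled by 1/(n-1), are exactly the eigenvalues of the sample covariance, which is why the SVD route is numerically preferable to forming the covariance matrix explicitly.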
Shallow Linear Classification and Regression:
- Single-Layer Perceptron (SLP),
- Multilayer Perceptron (MLP),
- Error Backpropagation (EBP) Algorithm (see the sketch after this list),
- Most Important Theorems,
- Why Deep Rather Than Shallow.
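To make the EBP algorithm concrete, here is a minimal from-scratch sketch of a one-hidden-layer MLP trained on XOR with plain gradient descent; the layer width, learning rate, and step count are arbitrary illustrative choices.

```python
# Minimal sketch: a one-hidden-layer MLP trained with error backpropagation
# (EBP) on XOR, written from scratch in NumPy for clarity.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # backward pass: gradients of the mean squared error
    dp = (p - y) * p * (1 - p)
    dW2 = h.T @ dp; db2 = dp.sum(axis=0)
    dh = dp @ W2.T * h * (1 - h)
    dW1 = X.T @ dh; db1 = dh.sum(axis=0)
    # gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p, 2))  # approaches the XOR targets [0, 1, 1, 0]
```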
Regularization and Optimization:
- Rethinking the Bias-Variance Trade-off,
- L1, L2, and Combined L1+L2 (Elastic Net) Regularization (see the sketch after this list),
- Data Augmentation,
- Dropout,
- Early Stopping,
- Optimization Challenges,
- Normalization Techniques.
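A minimal sketch tying three of these tools together (L2 weight decay, dropout, and patience-based early stopping), assuming PyTorch; the random data, network sizes, and patience value are placeholders for illustration.

```python
# Minimal sketch: L2 regularization (weight decay), dropout, and early
# stopping in PyTorch. The data here is random and purely illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Dropout(p=0.5),           # dropout regularization
    nn.Linear(64, 2),
)
# weight_decay adds an L2 penalty on the parameters
opt = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-4)
loss_fn = nn.CrossEntropyLoss()

X, y = torch.randn(256, 20), torch.randint(0, 2, (256,))
Xv, yv = torch.randn(64, 20), torch.randint(0, 2, (64,))

best, patience, bad = float("inf"), 10, 0
for epoch in range(200):
    model.train()
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

    model.eval()                 # disables dropout for evaluation
    with torch.no_grad():
        val = loss_fn(model(Xv), yv).item()
    if val < best - 1e-4:
        best, bad = val, 0
    else:
        bad += 1
        if bad >= patience:      # early stopping on the validation loss
            break
```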
Convolutional Neural Networks (CNN):
- History,
- CNN Architecture,
- Different Types of Convolutions,
- Learning Tricks (Pooling, …),
- Foundational Architectures (from AlexNet to MobileNet and state-of-the-art models; see the sketch after this list).
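A minimal sketch of the standard CNN building blocks (convolution, ReLU, pooling, and a linear classifier head), assuming PyTorch; the layer sizes are illustrative and do not correspond to any of the named architectures.

```python
# Minimal sketch: a small CNN with the classic conv -> ReLU -> pool pattern.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # pooling halves the spatial size
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)                 # (N, 32, 8, 8) for 32x32 input
        return self.classifier(x.flatten(1))

out = TinyCNN()(torch.randn(4, 3, 32, 32))
print(out.shape)  # torch.Size([4, 10])
```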
Applications of CNNs in Computer Vision:
- Computer Vision Tasks,
- Deep CNNs in Computer Vision,
- Segmentation (Brief),
- Object Detection (Brief).
Sequence Modelling:
- RNN Family,
- Backpropagation Through Time,
- LSTM, GRU, and their variants,
- Introduction to Natural Language Processing (NLP),
- Word Embeddings (word2vec: CBoW, Skip-Gram),
- Attention,
- Self-Attention, Transformers, and their applications in NLP (see the sketch after this list),
- Vision Transformer and Applications.
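As a pointer for the attention material, here is a minimal NumPy sketch of scaled dot-product self-attention; the projection matrices and toy dimensions are arbitrary illustrative choices.

```python
# Minimal sketch: scaled dot-product self-attention, the core operation of
# the Transformer, implemented directly in NumPy.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_k) projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # scaled dot products
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # attention-weighted values

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                        # 5 tokens, d_model = 16
W = [rng.normal(scale=0.1, size=(16, 8)) for _ in range(3)]
print(self_attention(X, *W).shape)                  # (5, 8)
```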
Unsupervised Learning:
- Autoencoder (AE) and its variants (SAE, DAE, CAE, …),
- Variational Autoencoder (VAE) Mathematics and the ELBO (stated after this list),
- VAE Family (CVAE, HVAE, β-VAE, and VQ-VAE).
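For reference, the ELBO maximized by the VAE can be written in its standard form (a reconstruction term minus a KL regularizer toward the prior):

```latex
\log p_\theta(x) \;\ge\; \mathrm{ELBO}(\theta,\phi;x)
  = \mathbb{E}_{q_\phi(z \mid x)}\!\big[\log p_\theta(x \mid z)\big]
  - D_{\mathrm{KL}}\!\big(q_\phi(z \mid x) \,\Vert\, p(z)\big).
```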
Adversarial Learning:
- GAN Mathematics (the objective is stated after this list),
- Jensen–Shannon (JS) Divergence,
- GAN Training Difficulties (Vanishing Gradients and Mode Collapse),
- GAN Family (CGAN, DCGAN, CycleGAN, Wasserstein GAN, Progressive GAN, StyleGAN).
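For reference, the standard GAN minimax objective, together with its value at the optimal discriminator D*, which ties GAN training to the JS divergence and explains the vanishing-gradient difficulty when the data and generator distributions barely overlap:

```latex
\min_G \max_D \; V(D,G)
  = \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z}\!\big[\log\big(1 - D(G(z))\big)\big],
\qquad
V(D^{*},G) = 2\, D_{\mathrm{JS}}\!\big(p_{\mathrm{data}} \,\Vert\, p_{g}\big) - \log 4 .
```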
Diffusion Models
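As a brief pointer for this topic, the forward (noising) process of a DDPM-style diffusion model, in its standard form:

```latex
q(x_t \mid x_{t-1}) = \mathcal{N}\!\big(x_t;\ \sqrt{1-\beta_t}\, x_{t-1},\ \beta_t I\big),
\qquad
q(x_t \mid x_0) = \mathcal{N}\!\big(x_t;\ \sqrt{\bar{\alpha}_t}\, x_0,\ (1-\bar{\alpha}_t) I\big),
```

where \alpha_t = 1 - \beta_t and \bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s; the reverse (denoising) process is what the network learns.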
References:
- Deep Learning, I. Goodfellow, Y. Bengio, and A. Courville, MIT Press (2016).
- Probabilistic Machine Learning: An Introduction, K. Murphy, MIT Press (2022).
- Machine Learning: A Bayesian and Optimization Perspective, S. Theodoridis, Academic Press (2015).
- Mathematics for Machine Learning, M. Deisenroth, A. Faisal, and C. Ong, Cambridge University Press (2020).
- The Matrix Cookbook, K. B. Petersen and M. S. Pedersen, Technical University of Denmark (2012).
- Selected recent high-impact papers.