DEEP LEARNING
Course Description
Deep learning is a subfield of machine learning that focuses on training neural networks with multiple layers (hence the term “deep”). These networks learn to automatically discover complex patterns and representations from data. Unlike traditional machine learning, where feature engineering is crucial, deep learning models can learn relevant features directly from raw data. Applications of deep learning span various domains, including computer vision, natural language processing, speech recognition, and recommendation systems. By leveraging large datasets and powerful computational resources, deep learning has achieved remarkable breakthroughs in areas such as image classification, language translation, and autonomous driving.
Syllabus
- Introduction: Machine Learning concepts, their importance, applications, and examples.
- Rapid Survey: Essential mathematics for machine learning (Linear Algebra, Statistics and Probability)
- Shallow and Deep Neural Networks for Classification and Regression Tasks: Single-Layer Perceptron (SLP), Multilayer Perceptron (MLP), the Error Backpropagation (EBP) algorithm and its most important theorems, and Parameter Tuning Methodology (Optimization, Regularization, and Normalization) (see the sketch after this list).
- Convolutional Neural Networks (CNN): History, Foundations, Architecture, and Learning Tricks.
- Applications of CNNs in Computer Vision: Most important networks (AlexNet, GoogLeNet, VGGNet, ResNet, and state-of-the-art networks).
- Sequence Modelling: Vanilla RNN, LSTM, GRU, and their variants; Introduction to Natural Language Processing (NLP); Attention, Self-Attention, and Transformers, with applications in NLP, image understanding, and signal/image processing.
- Unsupervised Learning: Autoencoder (AE) and its variants (SAE, DAE, CAE, …); Variational Autoencoder (VAE) and its variants (CVAE, HVAE, VQ-VAE, …).
- Adversarial Learning: Generative Adversarial Networks (GAN); GAN variants (CGAN, DCGAN, CycleGAN, WGAN, Progressive GAN, StyleGAN).
- Diffusion Models
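
To make the syllabus item on shallow and deep networks concrete, the sketch below shows a minimal multilayer perceptron trained with error backpropagation (EBP) on a toy XOR problem, using plain NumPy. The 2-16-1 architecture, learning rate, iteration count, and toy data are illustrative assumptions for this sketch, not course material; the same forward/backward structure underlies the deeper architectures covered later in the syllabus.

```python
# Minimal MLP + error backpropagation (EBP) sketch on a toy XOR problem.
# Architecture and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a classic task a single-layer perceptron cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Parameters of a 2-16-1 MLP with a sigmoid output for binary classification.
W1 = rng.normal(scale=0.5, size=(2, 16))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass.
    h_pre = X @ W1 + b1          # hidden-layer pre-activations
    h = np.tanh(h_pre)           # hidden activations
    p = sigmoid(h @ W2 + b2)     # predicted probability of class 1

    # Binary cross-entropy loss, averaged over the four examples.
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

    # Backward pass (EBP): gradients of the loss w.r.t. each parameter.
    dlogits = (p - y) / len(X)           # gradient at the output pre-activation
    dW2 = h.T @ dlogits
    db2 = dlogits.sum(axis=0)
    dh = dlogits @ W2.T
    dh_pre = dh * (1.0 - h ** 2)         # tanh'(x) = 1 - tanh(x)^2
    dW1 = X.T @ dh_pre
    db1 = dh_pre.sum(axis=0)

    # Plain gradient-descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final loss:", loss)
print("predictions:", p.round(3).ravel())  # should approach [0, 1, 1, 0]
```
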
References:
- Deep Learning, I. Goodfellow, Y. Bengio, and A. Courville, MIT Press (2016).
- Probabilistic Machine Learning: An Introduction, K. Murphy, MIT Press (2022).
- Machine Learning: A Bayesian and Optimization Perspective, S. Theodoridis, Academic Press (2015).
- Mathematics for Machine Learning, M. Deisenroth, A. Faisal, and C. Ong, Cambridge University Press (2020).
- The Matrix Cookbook, K. B. Petersen and M. S. Pedersen, Technical University of Denmark (2012).
- Top recent papers from the literature.