Course Outline
Part 1 – Deep Learning and DNN Concepts
Introduction to AI, Machine Learning & Deep Learning
- History, basic concepts, and common applications of artificial intelligence, separating reality from the myths surrounding this domain
- Collective Intelligence: aggregating knowledge shared by multiple virtual agents
- Genetic algorithms: evolving a population of virtual agents through selection
- Conventional Machine Learning: definitions
- Types of tasks: supervised learning, unsupervised learning, reinforcement learning
- Types of actions: classification, regression, clustering, density estimation, dimensionality reduction
- Examples of Machine Learning algorithms: Linear Regression, Naive Bayes, Random Trees (see the scikit-learn sketch after this list)
- Machine Learning vs. Deep Learning: scenarios where Machine Learning remains the state of the art (e.g., Random Forests and XGBoost)
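As a quick illustration of the classical algorithms named above, here is a minimal scikit-learn sketch; the Iris dataset and all hyperparameters are illustrative assumptions, not part of the course material.

```python
# Minimal sketch: two classical ML algorithms compared with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Toy dataset (an assumption for illustration)
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (GaussianNB(), RandomForestClassifier(n_estimators=100)):
    model.fit(X_train, y_train)
    print(type(model).__name__, model.score(X_test, y_test))
```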
Basic Concepts of a Neural Network (Application: multi-layer perceptron)
- Review of mathematical foundations
- Definition of a neural network: classical architecture, activation functions
- Weighting of previous activations, network depth
- Definition of network learning: cost functions, back-propagation, stochastic gradient descent, maximum likelihood (see the training sketch after this list)
- Neural network modeling: modeling input and output data based on the problem type (regression, classification, etc.). The curse of dimensionality
- Distinction between multi-feature data and signals. Selection of a cost function based on the data
- Function approximation by a network of neurons: presentation and examples
- Distribution approximation by a network of neurons: presentation and examples
- Data Augmentation: techniques to balance a dataset
- Generalization of results from a neural network
- Initialization and regularization of a neural network: L1 / L2 regularization, Batch Normalization
- Optimization and convergence algorithms
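To tie these concepts together, the sketch below trains a one-hidden-layer perceptron with back-propagation and gradient descent in plain NumPy; the toy sine-regression task, layer sizes, and learning rate are assumptions made for illustration, and a real run would use mini-batch SGD rather than full-batch steps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: learn y = sin(x) (illustrative assumption)
X = rng.uniform(-3, 3, size=(256, 1))
y = np.sin(X)

# One hidden layer of 16 tanh units (sizes are assumptions)
W1 = rng.normal(0, 0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.05  # learning rate (assumption)

for epoch in range(2000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    y_hat = h @ W2 + b2
    # Mean-squared-error cost: gradient at the output
    grad_out = 2 * (y_hat - y) / len(X)
    # Back-propagation: apply the chain rule layer by layer
    dW2 = h.T @ grad_out; db2 = grad_out.sum(axis=0)
    dh = grad_out @ W2.T * (1 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dh; db1 = dh.sum(axis=0)
    # Gradient descent step (full-batch here for brevity)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final MSE:", float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)))
```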
Standard ML / DL Tools
Each tool is presented with a concise overview of its advantages, disadvantages, position in the ecosystem, and typical usage.
- Data management tools: Apache Spark, Apache Hadoop tools
- Machine Learning: NumPy, SciPy, scikit-learn
- High-level DL frameworks: PyTorch, Keras, Lasagne
- Low-level DL frameworks: Theano, Torch, Caffe, TensorFlow
Convolutional Neural Networks (CNN)
- Presentation of CNNs: fundamental principles and applications
- Basic operation of a CNN: convolutional layer, kernel usage
- Padding & stride, feature map generation, pooling layers. 1D, 2D, and 3D extensions (see the Keras sketch after this list)
- Presentation of different CNN architectures that have achieved state-of-the-art results in classification
- Images: LeNet, VGG Networks, Network in Network, Inception, ResNet. Presentation of innovations introduced by each architecture and their broader applications (e.g., 1x1 Convolution, residual connections)
- Use of attention models
- Application to common classification cases (text or image)
- CNNs for generation: super-resolution, pixel-to-pixel segmentation
- Key strategies for enhancing feature maps in image generation
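As a concrete reference for the operations above (convolution, padding, stride, pooling), here is a minimal CNN classifier sketched with Keras; the input shape, filter counts, and 10-class output are illustrative assumptions.

```python
# Minimal CNN sketch in Keras (all layer sizes are assumptions)
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),                  # e.g. grayscale images
    layers.Conv2D(32, kernel_size=3, padding="same", activation="relu"),
    layers.MaxPooling2D(pool_size=2),                 # pooling halves the feature maps
    layers.Conv2D(64, kernel_size=3, strides=2, activation="relu"),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),           # 10-class classifier
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```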
Recurrent Neural Networks (RNN)
- Presentation of RNNs: fundamental principles and applications
- Basic operation of an RNN: hidden activations, back-propagation through time, the unfolded version
- Evolution towards Gated Recurrent Units (GRU) and Long Short-Term Memory (LSTM) (see the sketch after this list)
- Presentation of the internal states introduced by these architectures and the improvements they bring
- Convergence and vanishing gradient problems
- Classical architectures: Time series prediction, classification, etc.
- RNN Encoder-Decoder architecture. Use of an attention model
- NLP applications: word / character encoding, translation
- Video applications: prediction of the next generated image in a video sequence
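For reference, here is a minimal gated-RNN classifier sketched with Keras; the vocabulary size, sequence length, and unit counts are illustrative assumptions.

```python
# Minimal LSTM text classifier sketch (all sizes are assumptions)
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(100,)),                    # sequences of 100 token ids
    layers.Embedding(input_dim=10000, output_dim=64),
    layers.LSTM(128),                              # gated recurrence over time
    layers.Dense(1, activation="sigmoid"),         # binary classification head
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```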
Generative Models: Variational AutoEncoder (VAE) and Generative Adversarial Networks (GAN)
- Presentation of generative models and their link with CNNs
- Auto-encoder: dimensionality reduction and limited generation
- Variational Auto-encoder: a generative model that approximates the distribution of a given input. Definition and use of the latent space. The reparameterization trick (sketched after this list). Applications and observed limitations
- Generative Adversarial Networks: Fundamentals
- Dual-network architecture (generator and discriminator) with alternating training and the available cost functions
- GAN convergence and encountered difficulties
- Improved convergence: Wasserstein GAN, BEGAN, Earth Mover's Distance
- Applications for the generation of images or photographs, text generation, super-resolution
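The reparameterization trick mentioned above fits in a few lines; the NumPy sketch below shows the idea only (in a real VAE it would be written in the DL framework so that gradients flow through mu and log_var).

```python
import numpy as np

def reparameterize(mu, log_var, rng=np.random.default_rng()):
    """Sample z ~ N(mu, sigma^2) as a deterministic function of the
    parameters plus independent noise, so the sampling step stays
    differentiable with respect to mu and log_var."""
    eps = rng.standard_normal(np.shape(mu))   # noise, independent of parameters
    return mu + np.exp(0.5 * log_var) * eps
```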
Deep Reinforcement Learning
- Presentation of reinforcement learning: controlling an agent within an environment defined by states and possible actions
- Use of a neural network to approximate the value function
- Deep Q-Learning: experience replay and application to video game control (a tabular precursor is sketched after this list)
- Optimization of the learning policy: on-policy vs. off-policy learning, actor-critic architectures, A3C
- Applications: control of a single video game or digital system
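As a stepping stone to Deep Q-Learning, here is a minimal tabular Q-learning sketch; in DQN the table below is replaced by a neural network. The environment, state/action counts, and hyperparameters are all hypothetical.

```python
import numpy as np

n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.99, 0.1   # learning rate, discount, exploration (assumptions)

def step(state, action):
    """Hypothetical environment transition; replace with a real simulator."""
    next_state = (state + action) % n_states
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

state = 0
for _ in range(10_000):
    # Epsilon-greedy action selection
    if np.random.rand() < eps:
        action = np.random.randint(n_actions)
    else:
        action = int(Q[state].argmax())
    next_state, reward = step(state, action)
    # Q-learning update (off-policy: bootstraps from the greedy next action)
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state
```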
Part 2 – Theano for Deep Learning
Theano Basics
- Introduction
- Installation and Configuration
Theano Functions
- inputs, outputs, updates, givens
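A minimal sketch of how these four arguments fit together in theano.function; the expressions themselves are arbitrary assumptions for illustration.

```python
import theano
import theano.tensor as T

x = T.dscalar("x")
a = T.dscalar("a")
state = theano.shared(0.0, name="state")     # shared variable, updated in place

# inputs/outputs: compile x**2 + state; updates: increment state on each call
f = theano.function(inputs=[x], outputs=x ** 2 + state,
                    updates=[(state, state + 1)])
print(f(3.0))   # 9.0 on the first call, 10.0 on the second

# givens: substitute the symbolic variable a with the shared state
g = theano.function(inputs=[x], outputs=x * a, givens={a: state})
```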
Training and Optimization of a neural network using Theano
- Neural Network Modeling
- Logistic Regression
- Hidden Layers
- Training a network
- Computing and Classification
- Optimization
- Log Loss
Testing the model
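The following sketch condenses the items above into one runnable example: a Theano logistic regression with a log-loss cost, gradient-descent updates, and a test step. It follows the classic Theano tutorial example; the random data and hyperparameters are assumptions.

```python
import numpy as np
import theano
import theano.tensor as T

rng = np.random.default_rng(0)
N, feats = 400, 784                                   # dataset size (assumption)
D = (rng.standard_normal((N, feats)),
     rng.integers(0, 2, N).astype(np.float64))

x = T.dmatrix("x")
y = T.dvector("y")
w = theano.shared(rng.standard_normal(feats), name="w")
b = theano.shared(0.0, name="b")

p_1 = 1 / (1 + T.exp(-T.dot(x, w) - b))               # P(y = 1 | x)
prediction = p_1 > 0.5
xent = -y * T.log(p_1) - (1 - y) * T.log(1 - p_1)     # log loss
cost = xent.mean() + 0.01 * (w ** 2).sum()            # plus L2 regularization
gw, gb = T.grad(cost, [w, b])

train = theano.function([x, y], cost,
                        updates=[(w, w - 0.1 * gw), (b, b - 0.1 * gb)])
predict = theano.function([x], prediction)

for _ in range(100):                                  # training loop
    train(D[0], D[1])
print("train accuracy:", float(np.mean(predict(D[0]) == D[1])))
```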
Part 3 – DNN using TensorFlow
TensorFlow Basics
- Creating, Initializing, Saving, and Restoring TensorFlow Variables (see the sketch after this list)
- Feeding, Reading and Preloading TensorFlow Data
- How to use TensorFlow infrastructure to train models at scale
- Visualizing and Evaluating models with TensorBoard
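A minimal sketch of the variable lifecycle with the TensorFlow 1.x API used in this course; the checkpoint path is an arbitrary assumption.

```python
import tensorflow as tf   # assumes the 1.x API

w = tf.Variable(tf.random_normal([2, 2]), name="w")
saver = tf.train.Saver()

# Create, initialize, and save
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    save_path = saver.save(sess, "/tmp/model.ckpt")   # path is an assumption

# Restore into a fresh session (no initializer needed after restore)
with tf.Session() as sess:
    saver.restore(sess, save_path)
    print(sess.run(w))
```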
TensorFlow Mechanics
- Prepare the Data
- Download
- Inputs and Placeholders
Build the Graph
- Inference
- Loss
- Training
Train the Model
- The Graph
- The Session
- Train Loop
Evaluate the Model
- Build the Eval Graph
- Eval Output
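The sketch below strings the mechanics above together for a toy linear model: placeholders for the inputs, an inference/loss/training graph, a session running the train loop, and a simple evaluation; the model and data are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf   # assumes the 1.x API

# Inputs and placeholders
x = tf.placeholder(tf.float32, shape=[None, 1])
y = tf.placeholder(tf.float32, shape=[None, 1])

# Inference: a linear model (assumption)
w = tf.Variable(tf.zeros([1, 1]))
b = tf.Variable(tf.zeros([1]))
y_hat = tf.matmul(x, w) + b

# Loss and training op
loss = tf.reduce_mean(tf.square(y_hat - y))
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

# Toy data following y = 2x + 1 (assumption)
data_x = np.random.rand(100, 1).astype(np.float32)
data_y = 2 * data_x + 1

# The session and the train loop
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(200):
        _, l = sess.run([train_op, loss], feed_dict={x: data_x, y: data_y})
    print("final loss:", l)   # evaluation on the training data
```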
The Perceptron
- Activation functions
- The perceptron learning algorithm (sketched after this list)
- Binary classification with the perceptron
- Document classification with the perceptron
- Limitations of the perceptron
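The perceptron learning rule above fits in a few lines of NumPy; the AND-gate data is a toy assumption chosen because it is linearly separable.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])               # AND of the two inputs (assumption)
w = np.zeros(2); b = 0.0; lr = 0.1

for epoch in range(10):
    for xi, target in zip(X, y):
        pred = int(w @ xi + b > 0)       # step activation function
        # Update only on mistakes: w <- w + lr * (target - pred) * x
        w += lr * (target - pred) * xi
        b += lr * (target - pred)

print([int(w @ xi + b > 0) for xi in X])  # matches y once converged
```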
From the Perceptron to Support Vector Machines
- Kernels and the kernel trick
- Maximum margin classification and support vectors
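To make the kernel trick tangible, the sketch below trains scikit-learn's SVC with an RBF kernel on concentric circles, a dataset no linear classifier can separate; the dataset and the C value are illustrative assumptions.

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Concentric circles: not linearly separable in the input space
X, y = make_circles(noise=0.1, factor=0.4, random_state=0)

# The RBF kernel implicitly maps to a space where a margin exists
clf = SVC(kernel="rbf", C=1.0).fit(X, y)
print("number of support vectors:", len(clf.support_vectors_))
print("train accuracy:", clf.score(X, y))
```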
Artificial Neural Networks
- Nonlinear decision boundaries
- Feedforward and feedback artificial neural networks
- Multilayer perceptrons
- Minimizing the cost function
- Forward propagation
- Back propagation
- Improving the way neural networks learn
Convolutional Neural Networks
- Goals
- Model Architecture
- Principles
- Code Organization
- Launching and Training the Model
- Evaluating a Model
Brief introductions to the following modules will be given, based on time availability:
TensorFlow - Advanced Usage
- Threading and Queues
- Distributed TensorFlow
- Writing Documentation and Sharing your Model
- Customizing Data Readers
- Manipulating TensorFlow Model Files
TensorFlow Serving
- Introduction
- Basic Serving Tutorial
- Advanced Serving Tutorial
- Serving Inception Model Tutorial
Requirements
A background in physics, mathematics, and programming, and involvement in image processing activities.
Participants should have a prior understanding of machine learning concepts and practical experience with Python programming and its libraries.
Testimonials (2)
The training was organized and well planned out, and I came out of it with systematized knowledge and a good overview of the topics we covered.
Magdalena - Samsung Electronics Polska Sp. z o.o.
Course - Deep Learning with TensorFlow 2
I really liked the end, where we took the time to play around with ChatGPT. The room was not set up the best for this: instead of one large table, a couple of small ones would have helped, so we could get into small groups and brainstorm.