Performance Tuning Deep Learning Models Master Class

A Step-by-Step Guide to Tuning Deep Learning Models

Rating: 4.20 (13 reviews)
Platform: Udemy
Language: English
Category: Other
Instructor: Mike West
Students: 221
Content: 5 hours
Last update: Jul 2021
Regular price: $54.99

What you will learn

How to accelerate learning through better-configured stochastic gradient descent, batch sizes, and loss functions.

How to accelerate learning through choosing better initial weights with greedy layer-wise pretraining and transfer learning.

How to reduce overfitting by updating the loss function using techniques such as weight regularization, weight constraints, and activation regularization.

How to combine the predictions from multiple models saved during a single training run with techniques such as horizontal ensembles and snapshot ensembles.
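To make that last point concrete, here is a minimal sketch of a horizontal voting ensemble in Keras. The toy data, architecture, and snapshot range below are illustrative assumptions of mine, not code from the course:

```python
# Minimal sketch of a horizontal voting ensemble, assuming TensorFlow/Keras.
# The toy data, architecture, and snapshot range are illustrative, not course code.
import numpy as np
from tensorflow import keras

# Toy binary-classification data standing in for a real dataset.
rng = np.random.default_rng(0)
X = rng.random((1000, 20)).astype("float32")
y = (X.sum(axis=1) > 10).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Save a snapshot of the full model at the end of every epoch of one run.
snapshots = keras.callbacks.ModelCheckpoint("snapshot_{epoch:02d}.keras")
model.fit(X, y, epochs=10, batch_size=32, callbacks=[snapshots], verbose=0)

# Reload the last five snapshots and average their predictions.
members = [keras.models.load_model(f"snapshot_{e:02d}.keras") for e in range(6, 11)]
avg_pred = np.mean([m.predict(X, verbose=0) for m in members], axis=0)
```

Averaging the snapshots from the last few epochs smooths out the epoch-to-epoch variance of a single final model, which is the intuition behind the horizontal and snapshot ensembles covered in the Optimal Predictions section.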

Description

** "Mike's courses are popular with many of our clients." - Josh Gordon, Developer Advocate, Google **

Great course to see the impacts of model inputs, such as the quantity of epochs, batch size, hidden layers, and nodes, on the accuracy of the results. - Kevin

Best course on neural network tuning I've taken thus far. - Jazon Samillano

Very nice explanation. - Mohammad

Welcome to Performance Tuning Deep Learning Models Master Class.

Deep learning neural networks have become easy to create. However, tuning these models for maximum performance remains something of a challenge for most modelers. This course will teach you how to get results as a machine learning practitioner. This is a step-by-step course in getting the most out of deep learning models on your own predictive modeling projects.

My name is Mike West and I'm a machine learning engineer in the applied space. I've worked with or consulted for over 50 companies and just finished a project with Microsoft. I've published over 50 courses, and this is my 53rd on Udemy. If you're interested in learning what the real world is really like, then you're in good hands.

This course was designed around three main activities for getting better results with deep learning models: better or faster learning, better generalization to new data, and better predictions when using final models.

Who is this course for? 

This course is for developers, machine learning engineers, and data scientists who want to enhance the performance of their deep learning models. It is an intermediate- to advanced-level course. It's highly recommended that the learner be proficient with Python, Keras, and machine learning.

What are you going to learn?

  • An introduction to the problem of overfitting and a tour of regularization techniques

  • Accelerate learning through better-configured stochastic gradient descent, batch sizes, loss functions, and learning rates, and avoid exploding gradients via gradient clipping (a sketch of this follows the list).

  • Reduce overfitting by updating the loss function using techniques such as weight regularization, weight constraints, and activation regularization.

  • Effectively apply dropout, the addition of noise, and early stopping (a second sketch follows the list).

  • Combine the predictions from multiple models, with a tour of ensemble learning techniques.

  • Diagnose poor model training, spot problems such as premature convergence, and accelerate the model training process.

  • Combine the predictions from multiple models saved during a single training run with techniques such as horizontal ensembles and snapshot ensembles.

  • Diagnose high variance in a final model and improve the average predictive skill.
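
As a first taste of the Keras style the case studies build up to, here is a minimal sketch of the learning-rate and gradient-clipping ideas from the second bullet above. The specific schedule and clipping values are illustrative assumptions, not course recommendations:

```python
# Minimal sketch, assuming TensorFlow/Keras: SGD with a decaying learning rate
# and gradient clipping by norm. The specific values are illustrative assumptions.
from tensorflow import keras

# Decay the learning rate from 0.01 by 4% every 1,000 optimizer steps.
schedule = keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.01, decay_steps=1_000, decay_rate=0.96)

# clipnorm rescales any gradient whose L2 norm exceeds 1.0, a common guard
# against exploding gradients.
opt = keras.optimizers.SGD(learning_rate=schedule, momentum=0.9, clipnorm=1.0)

# The optimizer is then passed to compile() as usual:
# model.compile(optimizer=opt, loss="mse")
```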

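And for the regularization bullets, a second minimal sketch combining L2 weight regularization, dropout, and early stopping; the penalty, rate, and patience values are again assumptions rather than course settings:

```python
# Minimal sketch, assuming TensorFlow/Keras: L2 weight regularization, dropout,
# and early stopping in one small model. Values are illustrative assumptions.
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(20,)),
    # Add an L2 penalty on this layer's weights to the loss.
    keras.layers.Dense(64, activation="relu",
                       kernel_regularizer=keras.regularizers.L2(1e-4)),
    # Randomly zero half of the activations during training.
    keras.layers.Dropout(0.5),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Stop once validation loss has not improved for 10 epochs, restoring the
# best weights seen so far.
early_stop = keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=10, restore_best_weights=True)
# model.fit(X_train, y_train, validation_split=0.2, epochs=200,
#           callbacks=[early_stop])
```
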
This course is a hands-on guide. It is a playbook and a workbook intended for you to learn by doing and then apply your new understanding to your own deep learning Keras models. To get the most out of the course, I recommend working through all the examples in each tutorial. If you watch this course like a movie, you'll get little out of it.

In the applied space, machine learning is programming, and programming is a hands-on sport.

Thank you for your interest in Performance Tuning Deep Learning Models Master Class.

Let's get started!


Content

Introduction

Introduction
Course Overview
Is This Course Right for You?
Course Structure
Neural Network Defined
Framework for Optimal Learning
Optimal Learning Techniques
Optimal Generalization Techniques
Optimal Prediction Techniques
Framework Application
Diagnostic Learning Curves
The Fit of the Model
Unrepresentative Dataset

Optimal Learning

Neural Networks Learn a Mapping Function
Error Surface
Features of the Error Surface
Non-Convex Error Surface
Deep Learning Neural Network Components: Part 1
Deep Learning Neural Network Components: Part 2
Neural Network Model Capacity
Anatomy of a Keras Model
Demo: Case Study on Model Capacity: Part 1
Demo: Case Study on Model Capacity: Part 2
Demo: Case Study on Model Capacity: Part 3
Gradient Precision with Batch Size
Demo: Case Study on Batch Size: Part 1
Demo: Case Study on Batch Size: Part 2
Demo: Case Study on Batch Size: Part 3
Loss Function Defined
Choosing a Loss Function
Demo: Case Study on Regression Loss Functions: Part 1
Demo: Case Study on Regression Loss Functions: Part 2
Demo: Case Study on Binary Classification Loss Functions: Part 1
Demo: Case Study on Binary Classification Loss Functions: Part 2
Demo: Case Study on Binary Classification Loss Functions: Part 3
Demo: Case Study on Multiclass Classification Loss Functions: Part 1
Demo: Case Study on Multiclass Classification Loss Functions: Part 2
Learning Rate Defined
Configuring the Learning Rate
Learning Rate Schedules and Adaptive Learning Rates
Defining Learning Rates in Keras
Demo: Case Study on Learning Rates: Part 1
Demo: Case Study on Learning Rates: Part 2
Demo: Case Study on Learning Rates: Part 3
Demo: Case Study on Learning Rates: Part 4
Data Scaling
Scaling the Input and Output Variables
Normalize and Standardize (Rescaling)
Demo: Case Study on Data Scaling: Part 1
Demo: Case Study on Data Scaling: Part 2
Demo: Case Study on Data Scaling: Part 3
Demo: Case Study on Data Scaling: Part 4
Activation Functions and Vanishing Gradients
Rectified Linear Activation Function Defined and Implemented in Python
When ReLU is the Appropriate Choice
Demo: Case Study on Vanishing Gradients: Part 1
Demo: Case Study on Vanishing Gradients: Part 2
Correct Exploding Gradients with Clipping
Gradient Clipping in Keras
Demo: Case Study on Exploding Gradients: Part 1
Demo: Case Study on Exploding Gradients: Part 2
Batch Normalization
Tips for Applying Batch Normalization
Demo: Case Study on Batch Normalization: Part 1
Demo: Case Study on Batch Normalization: Part 2
Greedy Layer-Wise Pretraining
Demo: Greedy Layer-Wise Pretraining Case Study: Part 1
Demo: Greedy Layer-Wise Pretraining Case Study: Part 2

Optimal Generalization

The Problem of Overfitting
Reduce Overfitting by Constraining Complexity
Regularization Approaches for Neural Networks
Penalize Large Weights via Regularization
How to Penalize Large Weights
Tips for Using Weight Regularization
Demo: Weight Regularization Case Study: Part 1
Demo: Weight Regularization Case Study: Part 2
Activity Regularization
Encouraging Smaller Activations
Tips for Activity Regularization
Activity Regularization in Keras
Demo: Activity Regularization Case Study
Forcing Small Weights
How to Use a Weight Constraint
Tips for Applying Weight Constraints
Weight Constraints in Keras
Demo: Weight Constraint Case Study
Dropout
Dropout Mechanics
Dropout Tips
Dropout in Keras
Demo: Dropout Case Study
Noise Regularization
How to add Noise
Noise Tips
Adding Noise in Keras
Demo: Noise Regularization Case Study

Optimal Predictions

Ensemble Learning
Ensemble Neural Network Models
Varying the Major Elements
Model Averaging Ensembles
Ensembles in Keras
Demo: Model Averaging Ensemble Case Study: Part 1
Demo: Model Averaging Ensemble Case Study: Part 2
Demo: Model Averaging Ensemble Case Study: Part 3
Weighted Average Ensembles
Demo: Weighted Average Ensemble Case Study: Part 1
Demo: Weighted Average Ensemble Case Study: Part 2
Demo: Weighted Average Ensemble Case Study: Part 3
Demo: Weighted Average Ensemble Case Study: Part 4
Resampling Ensembles
Demo: Resampling Ensemble Case Study: Part 1
Demo: Resampling Ensemble Case Study: Part 2
Demo: Resampling Ensemble Case Study: Part 3
Demo: Resampling Ensemble Case Study: Part 4
Horizontal Voting Ensembles
Demo: Horizontal Ensemble Case Study: Part 1
Demo: Horizontal Ensemble Case Study: Part 2
Congratulations

Reviews

Yaron
March 29, 2021
Covers hyperparameter tuning of deep learning networks, with 16 notebooks that show the Keras syntax for implementing each hyperparameter. There's also a chapter on ensembles with 4 notebooks. What's missing is a summary ("cheat sheet") that, for each hyperparameter, shows one or two lines of Keras code incorporating it. Which means that if you need a refresher, you need to go over the relevant course parts again.
Kevin
July 28, 2020
Great course to see the impacts of model inputs, such as the quantity of epochs, batch size, hidden layers, and nodes, on the accuracy of the results.

Udemy ID: 3051566
Course created: 4/25/2020
Course indexed: 7/24/2020