Platform: Udemy

Language: English

Category: Data Science

Autonomous Cars: The Complete Computer Vision Course 2021

Learn OpenCV 4, YOLO, road markings and pedestrians detection, and traffic sign classification for self-driving cars

Rating: 3.65 (13 reviews)

Content: 13 hours

Last Update: Jun 2021


What you will learn

YOLO

OpenCV

Detection with the grayscale image

Colour space techniques

RGB space

HSV space

Sharpening and blurring

Edge detection and gradient calculation

Sobel

Laplacian edge detector

Canny edge detection

Affine and Projective transformation

Image translation, rotation, and resizing

Hough transform

Masking the region of interest

Bitwise_and

KNN background subtractor

MOG background subtractor

MeanShift

Kalman filter

U-NET

SegNet

Encoder and Decoder

Pyramid Scene Parsing Network

DeepLabv3+

E-Net


Description

Autonomous Cars: Computer Vision and Deep Learning

The automotive industry is experiencing a paradigm shift from conventional, human-driven vehicles to self-driving, artificial-intelligence-powered vehicles. Self-driving vehicles offer a safe, efficient, and cost-effective solution that will dramatically redefine the future of human mobility. Self-driving cars are expected to save over half a million lives and generate economic opportunities in excess of $1 trillion by 2035. The automotive industry is on a billion-dollar quest to deploy the most technologically advanced vehicles on the road.

As the world advances towards a driverless future, the need for experienced engineers and researchers in this emerging field has never been greater.

The purpose of this course is to give students knowledge of key aspects of the design and development of self-driving vehicles. It provides practical experience with core self-driving concepts such as machine learning and computer vision, covering lane detection, traffic sign classification, vehicle/object detection, artificial intelligence, and deep learning. The course is aimed at students who want a fundamental understanding of self-driving vehicle control. Basic programming knowledge is recommended; the foundational topics are covered extensively in the early lectures, so the course has no other prerequisites and is open to any student with basic programming skills. Students who enroll in this self-driving car course will master driverless car technologies that are reshaping the future of transportation.

Tools and algorithms we'll cover include:

  • OpenCV.

  • Deep Learning and Artificial Neural Networks.

  • Convolutional Neural Networks.

  • YOLO.

  • HOG feature extraction.

  • Detection with the grayscale image.

  • Colour space techniques.

  • RGB space.

  • HSV space.

  • Sharpening and blurring.

  • Edge detection and gradient calculation.

  • Sobel.

  • Laplacian edge detector.

  • Canny edge detection.

  • Affine and Projective transformation.

  • Image translation, rotation, and resizing.

  • Hough transform.

  • Masking the region of interest.

  • Bitwise_and.

  • KNN background subtractor.

  • MOG background subtractor.

  • MeanShift.

  • Kalman filter.

  • U-NET.

  • SegNet.

  • Encoder and Decoder.

  • Pyramid Scene Parsing Network.

  • DeepLabv3+.

  • E-Net.

If you’re ready to take on a brand-new challenge and learn AI techniques that go beyond traditional supervised learning, unsupervised learning, and even standard deep learning, then this course is for you.

Moreover, the course is packed with practical exercises based on real-life examples, so you will not only learn the theory but also get hands-on practice building your own models. There are five big projects on self-driving car problems and one small practice project. These projects are listed below:

  • Road marking detection.

  • Road sign detection.

  • Pedestrian detection.

  • Frozen Lake environment.

  • Semantic segmentation.

  • Vehicle detection.

That is all. See you in class!


"If you can't implement it, you don't understand it"

  • Or as the great physicist Richard Feynman said: "What I cannot create, I do not understand".

  • My courses are the ONLY ones where you will learn how to implement deep REINFORCEMENT LEARNING algorithms from scratch.

  • Other courses will teach you how to plug your data into a library, but do you really need help with 3 lines of code?

  • After doing the same thing with 10 datasets, you realize you didn't learn 10 things. You learned 1 thing, and just repeated the same 3 lines of code 10 times...


Screenshots

(Course preview screenshots.)

Content

Introduction

Course structure

How To Make The Most Out Of This Course

What is ANN

What is Neuron

What is Multilayer Neural Network

What is Keras (optional, from the AI in Healthcare course)

Important Terms in this course

Important note about tools in this course

Introduction to Self-Driving Cars

Benefits of Self-Driving Cars

Building safe systems

Deep learning and computer vision approaches for Self-Driving Cars

LIDAR and computer vision for Self-Driving Cars

Activation function

What is activation function

What is Rectified Linear Unit function

What is Leaky ReLU function

What is tanh function

What is Softmax function

What is the Exponential Linear Unit function

What is Swish function

What is sigmoid function

Activation Function Implementation
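
To give a flavour of what the activation-function lectures implement, here is a minimal NumPy sketch of the functions listed above (ReLU, Leaky ReLU, ELU, tanh, sigmoid, swish, softmax); the exact implementations and parameter values used in the course may differ.

# Minimal NumPy sketch of common activation functions (illustrative only).
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def swish(x):
    return x * sigmoid(x)

def softmax(x):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(x - np.max(x))
    return e / e.sum()

z = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(z), leaky_relu(z), elu(z), np.tanh(z), sigmoid(z), swish(z), softmax(z), sep="\n")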

Basic Deep Learning Project

Introduction to the project

Importing Data and Libraries

Splitting the dataset into training set and test set

Visualizing data

Standardizing data

Building and compiling the model

Training the model

Predicting new, unseen data

Evaluating the model's performance

Saving and loading models

Summary of the project
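
For orientation, the workflow of this project section (split, standardize, build, compile, train, predict, evaluate, save/load) can be sketched in a few lines of Keras; the dataset, layer sizes, and file name below are illustrative placeholders rather than the ones used in the lectures.

# Minimal Keras sketch of the basic deep learning workflow (placeholder data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from tensorflow import keras

# Placeholder data: 1000 samples, 8 features, binary labels.
X = np.random.rand(1000, 8)
y = (X.sum(axis=1) > 4).astype(int)

# Split into training and test sets.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Standardize the features.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Build and compile a small fully connected model.
model = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Train, predict on unseen data, and evaluate.
model.fit(X_train, y_train, epochs=10, batch_size=32, validation_split=0.1)
preds = (model.predict(X_test) > 0.5).astype(int)
loss, acc = model.evaluate(X_test, y_test)

# Save and reload the model (placeholder file name).
model.save("basic_model.h5")
restored = keras.models.load_model("basic_model.h5")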

Computer vision for Self-driving Cars

Introduction

Computer vision Introduction

Challenges in Computer Vision

Requirement of Self-Driving Cars

Digital representation of an image

Converting images from RGB to grayscale

Detection with the grayscale image

QUICK FIX Video

Introduction to the Color space techniques

Introduction to RGB space

Introduction to HSV space

Introduction to Color space manipulation

Implementing Color space manipulation Part 1

Implementing Color space manipulation Part 2

Implementing Color space manipulation Part 3

Introduction to convolution

Introduction to Sharpening and blurring

Sharpening and blurring Implementation

Introduction to Edge detection and gradient calculation

Introduction to Sobel

Introduction to Laplacian edge detector

Canny edge detection

Application of image transformation

Introduction to Affine and Projective transformation

Image rotation Implementation

Image translation Implementation

Image resizing Implementation

Introduction to Perspective transformation

Perspective transformation Implementation

Cropping, dilating, and eroding an image Implementation

Masking regions of interest

Introduction to the Hough transform

The Hough transform Implementation

Summary of the section
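
The OpenCV building blocks covered in this section can be summarised in a short sketch like the one below; the image path, sharpening kernel, and corner points are placeholders, and the lecture code may differ.

# Minimal OpenCV sketch of the image-processing operations in this section.
import cv2
import numpy as np

img = cv2.imread("road.jpg")                        # placeholder image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)        # BGR -> grayscale
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)          # BGR -> HSV colour space

blurred = cv2.GaussianBlur(gray, (5, 5), 0)         # blurring
sharpen_kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])
sharpened = cv2.filter2D(img, -1, sharpen_kernel)   # sharpening via convolution

sobel_x = cv2.Sobel(blurred, cv2.CV_64F, 1, 0, ksize=3)  # horizontal gradient
laplacian = cv2.Laplacian(blurred, cv2.CV_64F)           # Laplacian edges
edges = cv2.Canny(blurred, 50, 150)                      # Canny edges

# Perspective (projective) transform between two sets of corner points.
src = np.float32([[100, 300], [500, 300], [0, 400], [600, 400]])
dst = np.float32([[0, 0], [400, 0], [0, 400], [400, 400]])
M = cv2.getPerspectiveTransform(src, dst)
birds_eye = cv2.warpPerspective(img, M, (400, 400))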

Detection of road markings by OpenCV

Introduction to the project

Loading the image using OpenCV and Converting the image into grayscale

Smoothing the image and Implementing Canny Edge detection

Masking the region of interest

Applying bitwise_and

Applying the Hough transform

Optimizing the detected road markings

Detecting road markings in a video

Summary of the section
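
Putting the steps of this project together, a minimal version of the road-marking pipeline might look like the sketch below (grayscale, blur, Canny, region-of-interest mask with bitwise_and, Hough transform); the file name, ROI polygon, and Hough parameters are placeholders.

# Minimal lane-detection sketch: edges -> ROI mask -> Hough line segments.
import cv2
import numpy as np

frame = cv2.imread("road_frame.jpg")                # placeholder path
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
edges = cv2.Canny(blur, 50, 150)

# Keep only a triangular region in front of the car.
h, w = edges.shape
roi = np.array([[(0, h), (w, h), (w // 2, int(h * 0.6))]], dtype=np.int32)
mask = np.zeros_like(edges)
cv2.fillPoly(mask, roi, 255)
masked = cv2.bitwise_and(edges, mask)

# Probabilistic Hough transform finds line segments in the masked edges.
lines = cv2.HoughLinesP(masked, 2, np.pi / 180, threshold=100,
                        minLineLength=40, maxLineGap=5)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 3)
cv2.imwrite("lanes.jpg", frame)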

Road Sign Detection

Introduction to convolutional neural networks

Convolution Layers

Pooling Layers

Introduction to the project

Loading data

Exploring images

Data Preparation

Training model

Model accuracy

Summary of the project
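
A minimal Keras sketch of the kind of convolution-and-pooling classifier built in this project is shown below; the image size, class count, and random placeholder data are assumptions, not the traffic-sign dataset used in the lectures.

# Minimal CNN sketch for traffic-sign classification (placeholder data).
import numpy as np
from tensorflow import keras

num_classes = 43                                     # placeholder class count
X_train = np.random.rand(256, 32, 32, 3).astype("float32")   # placeholder images
y_train = keras.utils.to_categorical(np.random.randint(0, num_classes, 256), num_classes)

model = keras.Sequential([
    keras.Input(shape=(32, 32, 3)),
    keras.layers.Conv2D(32, (3, 3), activation="relu"),   # convolution layer
    keras.layers.MaxPooling2D((2, 2)),                    # pooling layer
    keras.layers.Conv2D(64, (3, 3), activation="relu"),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=5, batch_size=32, validation_split=0.1)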

Detecting Pedestrian Project

Introduction to tracking objects

Background subtraction

MOG background subtractor

KNN background subtractor

Detecting pedestrians Introduction

MeanShift Introduction

Kalman filter

Implementing pedestrian detection Part 1

Implementing pedestrian detection Part 2

Implementing pedestrian detection Part 3

Implementing pedestrian detection Part 4

Summary of the section
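
As a rough illustration of the background-subtraction approach covered in this section, the sketch below uses OpenCV's built-in MOG2 and KNN subtractors on a video; "pedestrians.mp4" and the thresholds are placeholders, and the MeanShift and Kalman filter parts of the section are not shown here.

# Minimal background-subtraction sketch with MOG2 and KNN (placeholder video).
import cv2

cap = cv2.VideoCapture("pedestrians.mp4")
mog2 = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
knn = cv2.createBackgroundSubtractorKNN(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mog2 = mog2.apply(frame)      # foreground mask from the MOG2 model
    fg_knn = knn.apply(frame)        # foreground mask from the KNN model

    # Clean the MOG2 mask and draw boxes around large moving blobs.
    _, mask = cv2.threshold(fg_mog2, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("pedestrians", frame)
    if cv2.waitKey(30) & 0xFF == 27:  # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()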

Semantic Segmentation

Introduction to semantic segmentation

Semantic Segmentation Architecture

Different Semantic Segmentation Architectures

U-NET

SegNet

Encoder and Decoder

Pyramid Scene Parsing Network

DeepLabv3+

E-Net

Semantic segmentation Implementation Part 1

Semantic segmentation Implementation Part 2

Semantic segmentation Implementation Part 3

Semantic segmentation Implementation Part 4

Semantic segmentation Implementation Part 5

Summary of the section
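
The encoder-decoder idea behind U-NET and SegNet can be sketched in Keras as below; the input size and class count are placeholders, and a real U-NET additionally adds skip connections between matching encoder and decoder stages.

# Minimal encoder-decoder sketch for semantic segmentation (illustrative only).
from tensorflow import keras
from tensorflow.keras import layers

num_classes = 21                                   # placeholder class count
inputs = keras.Input(shape=(128, 128, 3))

# Encoder: convolutions + pooling shrink the spatial resolution.
x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
x = layers.MaxPooling2D(2)(x)
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = layers.MaxPooling2D(2)(x)

# Decoder: transposed convolutions restore the original resolution.
x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)

# Per-pixel softmax over the classes gives the segmentation map.
outputs = layers.Conv2D(num_classes, 1, activation="softmax")(x)

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()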

Vehicle Detection

Introduction

What makes YOLO different

The YOLO loss function

The YOLO architecture

YOLO Implementation Part 1

YOLO Implementation Part 2
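
For reference, running a pre-trained YOLO model through OpenCV's DNN module looks roughly like the sketch below; the yolov3.cfg / yolov3.weights file names, input size, and thresholds are placeholders that depend on the YOLO release used in the lectures.

# Minimal YOLO inference sketch via OpenCV's DNN module (placeholder files).
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
layer_names = net.getUnconnectedOutLayersNames()

img = cv2.imread("street.jpg")                       # placeholder image
h, w = img.shape[:2]
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(layer_names)

boxes, confidences = [], []
for out in outputs:
    for det in out:          # det = [cx, cy, bw, bh, objectness, class scores...]
        scores = det[5:]
        conf = float(scores[np.argmax(scores)] * det[4])
        if conf > 0.5:
            cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(conf)

# Non-maximum suppression removes overlapping duplicate boxes.
keep = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
for i in np.array(keep).flatten():
    x, y, bw, bh = boxes[i]
    cv2.rectangle(img, (x, y), (x + bw, y + bh), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", img)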

Thank you

Thank you


Reviews

Edward-Jun, 31 May 2021

Note that this review is a quarter way into the course, where you have a feel for how the instructor teaches. I will edit this review if experience changes when finished with the course completely.

EDIT: Have finished the course. Here are my thoughts. When he is writing code, he just tends to say the code he is writing aloud. For example when inputting values he will just input values without giving a reason why. It would be better if he explains why he's doing it and where it is coming from, and what the code actually does. There is of course the Q&A section to ask, but in my opinion, these sorts of things should already be handled in the video, and the Q&A should be left for more complex questions.

EDIT: I find that he explains his code in section 7-10, but only after he finishes writing it in the videos before, and the explanations I find are a bit lacking. Sometimes it feels like he isn't teaching and connecting with the viewer, and instead giving a presentation. Thus, some may find it hard to stay engaged with the content. Not to discredit his skills, he is an excellent engineer in the field, just that the teaching ability and engagement / communication could further be improved in how he structures his section videos and slides. For example, I had no idea what the objective of section 3 was for. What were we building a model for? It was not introduced clearly in the beginning.

EDIT: Instructor has since revised section 3 to be more clear. However, I still think my points still stand. Sometimes it feels as if the content goes like this "Do this, do this, do that, and there you go." I think this is a problem when you have an expert trying to demonstrate what they're doing to an audience with lesser knowledge. It may seem like common knowledge to them, but to the viewer and intended audience it often feels like stuff is coming out from nowhere. I would wish for the teaching to be a bit more engaging than simply typing and speaking out the code, as us reading through the code would basically do the same thing. Instead, I would like the video to be an aid of what individual code snippets actually do, and why we use certain values and inputs, while we have the code notebook open in another window.

Also, I wish we trained our models instead of using already pre-trained models to get our results. Even if accuracy suffered I would've liked it more if we trained our own yolo / enet models from a given dataset. I would not recommend this to beginners of coding / machine learning as you will be stuck in a lot of places and questioning a lot of stuff. However, if you have dabbled into the space and are familiar with concepts of machine learning and computer vision, I think it is another great resource to look into.

Also for the instructor: Please mute system notifications / phone notifications / discord pings when you are recording. Also, for the hard of hearing like me, I think proper captioned subtitles are a must, and even more if the instructor has an accent. Your English is very good, however, it is just the truth that sometimes accents can get in the way of coherency and understanding.

Khalid, 5 April 2021

I did not complete the course yet. However, when I used his project to apply for a job, I got an interview. After preparing for the interview with the understanding I gained from his project, I now have a job in computer vision. I really love it. Thank you so much for creating a great course.


Udemy ID: 3872152

Course created date: 2/24/2021

Course Indexed date: 4/17/2021

Course Submitted by: Bot
