Neural Networks in Python from Scratch: Learning by Doing

From intuitive examples to image recognition in 3 hours - Experience neuromorphic computing & machine learning hands-on

Rating: 4.60 (35 reviews)
Platform: Udemy
Language: English
Category: Data Science
Students: 536
Content: 3.5 hours
Last update: Jan 2024
Regular price: $74.99

What you will learn

Program neural networks for 3 different problems from scratch in plain Python

Start simple: Understand input layer, output layer, weights, error function, accuracy, training & testing at an intuitive example

Complicate the problem: Introduce hidden layers & activation functions for building more useful networks

Real-life application: Use this network for image recognition

Description

** The quickest way to understand (and program) neural networks using Python **


This course is for everyone who wants to learn how neural networks work through hands-on programming!

Everybody is talking about neural networks, but they are hard to understand without setting one up yourself. Luckily, the mathematics and programming skills (Python) required are basic, so we can program 3 neural networks in just over 3 hours. Do not waste your time! This course is optimized to give you the deepest insight into this fascinating topic in the shortest amount of time possible.

The focus is fully on learning by doing, and I only introduce new concepts once they are needed.


What you will learn

After a short introduction, the course is divided into three segments of about one hour each:

1) Set up the simplest neural network: Calculate the sum of two numbers.
You will learn about:

  • Neural network architecture

  • Weights, input & output layer

  • Training & test data

  • Accuracy & error function

  • Feed-forward & back-propagation

  • Gradient descent
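As a taste of what this first segment covers, here is a minimal sketch in plain Python (my own illustration, not the course's actual code): a network with two inputs, one linear output, and no hidden layer, trained by gradient descent on a squared-error function to add its inputs.

```python
import random

random.seed(0)

# Training data: random pairs of numbers; the target is their sum
training = [(random.random(), random.random()) for _ in range(200)]

w1, w2 = random.random(), random.random()   # randomly initialized weights
rate = 0.1                                   # learning rate

for epoch in range(100):
    for x1, x2 in training:
        target = x1 + x2
        out = w1 * x1 + w2 * x2              # feed-forward: linear output
        error = out - target                 # derivative of 0.5 * (out - target)**2
        w1 -= rate * error * x1              # gradient-descent update
        w2 -= rate * error * x2

# Both weights converge towards 1, i.e. the network has "learned" to add
print(round(w1, 3), round(w2, 3))
```

Because the problem is linear, no hidden layer or activation function is needed yet; that is exactly what segment 2 introduces.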

2) We modify this network: Determine the sign of the sum.
You will be introduced to:

  • Hidden layers

  • Activation function

  • Categorization
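To illustrate what a hidden layer and an activation function add, here is a hedged sketch (again my own, not the course material): a tiny network with a tanh hidden layer and a sigmoid output that learns to categorize points by the sign of their sum.

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical data: points in [-1, 1]^2, labelled 1 if x1 + x2 > 0, else 0
data = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(300)]
labels = [1.0 if x1 + x2 > 0 else 0.0 for x1, x2 in data]

H = 3                                          # number of hidden units
w_in = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(H)]
w_out = [random.uniform(-1, 1) for _ in range(H)]
rate = 0.5

def forward(x1, x2):
    hidden = [math.tanh(w[0] * x1 + w[1] * x2) for w in w_in]  # activation function
    return hidden, sigmoid(sum(wo * h for wo, h in zip(w_out, hidden)))

for epoch in range(200):
    for (x1, x2), t in zip(data, labels):
        hidden, out = forward(x1, x2)
        delta = out - t                        # gradient at the output
        grads_in = [delta * w_out[j] * (1.0 - hidden[j] ** 2) for j in range(H)]
        for j in range(H):                     # back-propagate through both layers
            w_out[j] -= rate * delta * hidden[j]
            w_in[j][0] -= rate * grads_in[j] * x1
            w_in[j][1] -= rate * grads_in[j] * x2

accuracy = sum((forward(x1, x2)[1] > 0.5) == (t > 0.5)
               for (x1, x2), t in zip(data, labels)) / len(data)
```

The key difference from the first sketch is that the hidden layer's tanh nonlinearity lets the network output a bounded, category-like signal instead of a raw linear value.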

3) Our network can be applied to all sorts of problems, like image recognition: Recognize hand-written digits!
After this cool and useful real-life application, I will give you an outlook:

  • How to improve the network

  • What other problems can be solved with neural networks?

  • How to use pre-trained networks without much effort
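For flavor, here is a toy stand-in for the image-recognition idea (hypothetical data, not the course's hand-written digit set): 3x5 binary pixel patterns for "0" and "1", flattened to 15-component inputs and classified by a single sigmoid neuron with the same gradient-descent recipe as above.

```python
import math
import random

random.seed(2)

# Hypothetical 3x5 pixel patterns standing in for digit images
ZERO = [1,1,1,
        1,0,1,
        1,0,1,
        1,0,1,
        1,1,1]
ONE  = [0,1,0,
        1,1,0,
        0,1,0,
        0,1,0,
        1,1,1]

def noisy(img):
    # Flip each pixel with 10% probability to generate varied samples
    return [p if random.random() > 0.1 else 1 - p for p in img]

data = [(noisy(ZERO), 0.0) for _ in range(100)] + [(noisy(ONE), 1.0) for _ in range(100)]
random.shuffle(data)

w = [0.0] * 15                               # one weight per pixel
b = 0.0
rate = 0.2

def classify(x):
    return 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

for epoch in range(50):
    for x, t in data:
        delta = classify(x) - t              # output error
        w = [wi - rate * delta * xi for wi, xi in zip(w, x)]
        b -= rate * delta

accuracy = sum((classify(x) > 0.5) == (t > 0.5) for x, t in data) / len(data)
```

The point the course makes is the same one this sketch illustrates: once an image is flattened into a vector of pixel values, the identical network code applies, only with more inputs and outputs.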


Why me?

My name is Börge Göbel and I am a postdoc in theoretical physics, a field where neural networks are used extensively.
I have refined my teaching skills as a tutor of Bachelor, Master and PhD students in theoretical physics and have other successful courses here on Udemy.


"Excellent course! In a simple and understandable way explained everything about the functioning of neural networks under the hood." - Srdan Markovic


I hope you are excited, and I warmly welcome you to the course!

Content

Introduction: Interpolation & Machine learning

Overview of the course
Template files for this course
Interpolation (or regression) - The fundamental principle of machine learning
Interpolation

Your first neural network: Sum of two numbers

Let's get started!
From interpolation to neural networks
What are neural networks?
[Project 1] Most simple neural network: Sum of two numbers
Prepare the training and testing data
Initialize the weights & Calculate the output
Accuracy & Error functions
Gradient of the error function
Training the neural network via gradient descent
Using the trained network on the test data
Neural networks

Modifying the problem: Sign of the sum of two numbers

[Project 2] Complete neural network: Sign of the sum of two numbers
Modify input, output & weights
Add an activation function to the neural network
Modify accuracy and error functions
Modify gradient of the error function
Training & Testing the modified neural network

Same code, different problem: Image recognition

[Project 3] Same neural network: Applied to recognize hand-written digits
Apply our neural network to the new problem: Number recognition
Improve the gradient function
Analysis of the trained neural network

Outlook & Goodbye

How to improve the network?
Outlook: Pretrained neural networks & Machine learning in Wolfram Mathematica
Goodbye!

[Resources]

[Installation] Python and Jupyter Notebook via Anaconda
Template files
Finalized Jupyter notebooks


Reviews

Amish
August 1, 2023
I just completed the Computational Physics course today and I got this course recommended to me just after and I immediately bought it because I knew it would be good ?
Jc
June 20, 2023
This course is an incredible introduction to neural networks. The teacher explained the topics very well; however, I felt a little bit confused by the constant change of terms, such as, testingIn, testingIndex, testingOut, etc. Overall, I liked and enjoyed this course, and I'm looking forward to learning with his other courses.
Srdan
November 12, 2022
Excellent course! In a simple and understandable way explained everything about the functioning of neural networks under the hood.
Tom
August 14, 2022
I am just on the beginning, but so far what can I say... Borge is pretty likeable guy! His "voice" and explanations are really nice to listen. He is really trying to explain everything so far into detail. UNFORTUNATELY, I have a huge problem with the naming convention of the variables / functions and so... I had the pretty same problem on my University in the past when we have been going through image recognition algorithms and I am starting to thinking, that this problem is not about the AI / Machine learning, but in the Physics (or me because of I am dumb :D)... But, really... Come on guys, testingIn, testingOut, trainingIn, trainingOut... Who can spot a difference? I am just simply lost in this one, trying to figure out things like "oh, in this vector, how many indexes are in there? Aaaaaah grrrrr that is testingIn, there are like 300, not training which has 700.... Aaaaah grrrrr, this is testingOut, why I am looking on trainingIn.... No, wait.... Its testingIn but trainingOut? WTF!!!" Please, I know that it can seems to be not important. But instead of "data[1,:]" just assign the data[1,:] into variable x and data[0,:] into variable a... So we can look on the equation and compare it to the algorithm we are writing... Think about the naming conventions, because sometimes less is more and vice versa. Calling the testingOut / trainingOut like testingData / realData or instead of Out / In using Data / Results can help a lot! Just compare: testingIn / trainingOut / trainingIn / testingOut testingData / realResults / realData / testingResults You can spot the difference on first sight without trying to blink on monitor trying to recognize the difference and writing some variable recognition tool for getting through that course. But, maybe it is just mine problem :) overall, so far amazing course!

Udemy ID: 4722462
Course created: 6/7/2022
Course indexed: 7/17/2022
Submitted by: Bot