Backpropagation from Scratch in Python

  25,247 views

Machine Learning Explained

1 day ago

In this video, we will learn how to code the backpropagation algorithm from scratch in Python (code provided!)
Excellent Backpropagation Tutorial: mattmazur.com/2015/03/17/a-st...
Credit for the videos used: losslandscape.com/videos/
Credit for the song used: Sunshine on Sand (available on YouTube Music)
Github link to the code: github.com/yacineMahdid/artif...
Previous video on backpropagation theory: • Learn Backpropagation ...
Previous video on forward propagation: • Understanding Forward ...
Previous video on perceptron: • Create a Perceptron fr...
Previous video on gradient descent: • Gradient Descent from ...
Here is some information from our beloved Wikipedia on what backpropagation is:
"
In machine learning, backpropagation (backprop, BP) is a widely used algorithm in training feedforward neural networks for supervised learning. Generalizations of backpropagation exist for other artificial neural networks (ANNs), and for functions generally - a class of algorithms referred to generically as "backpropagation". In fitting a neural network, backpropagation computes the gradient of the loss function with respect to the weights of the network for a single input-output example, and does so efficiently, unlike a naive direct computation of the gradient with respect to each weight individually. This efficiency makes it feasible to use gradient methods for training multilayer networks, updating weights to minimize loss; gradient descent, or variants such as stochastic gradient descent, are commonly used. The backpropagation algorithm works by computing the gradient of the loss function with respect to each weight by the chain rule, computing the gradient one layer at a time, iterating backward from the last layer to avoid redundant calculations of intermediate terms in the chain rule; this is an example of dynamic programming.
The term backpropagation strictly refers only to the algorithm for computing the gradient, not how the gradient is used; but the term is often used loosely to refer to the entire learning algorithm, including how the gradient is used, such as by stochastic gradient descent. Backpropagation generalizes the gradient computation in the delta rule, which is the single-layer version of backpropagation, and is in turn generalized by automatic differentiation, where backpropagation is a special case of reverse accumulation (or "reverse mode"). The term backpropagation and its general use in neural networks was announced in Rumelhart, Hinton & Williams (1986a), then elaborated and popularized in Rumelhart, Hinton & Williams (1986b), but the technique was independently rediscovered many times, and had many predecessors dating to the 1960s; see § History. A modern overview is given in the deep learning textbook by Goodfellow, Bengio & Courville (2016).
"
----
Join the Discord for general discussion: / discord
----
Follow Me Online Here:
Twitter: / codethiscodeth1
GitHub: github.com/yacineMahdid
LinkedIn: / yacine-mahdid-809425163
Instagram: / yacine_mahdid
___
Have a great week! 👋

Comments: 24
@surafelm.w4058 3 years ago
Hi, how do I use DataFrame inputs (X1, X2 and X3), each having shape (30, 10), with 1 output? I'd greatly appreciate your prompt response.
@machinelearningexplained 3 years ago
Hi there, sorry for the wait! Do you mean how to use the code presented above when you have a 2D input of size (30, 10) with one label?
@surafelm.w4058 3 years ago
Hi @machinelearningexplained, first, thank you a lot for the response. To make the question clear: I have 3 input neurons (P, T and S), each having 30 rows and 10 columns; 1 hidden layer with 4 neurons; and 1 output neuron (or more than 2 outputs) - i.e. a [3, 4, 1] network. Below are the sample datasets.

The 1st input (P) is:
P01 P02 P03 P04 P05 P06 P07 P08 P09 P10
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 5.77 5.77 5.31 5.18 0 0 0 0 0 0 10.86 10.86 9.99 9.75 0 0 0 0 4.71 4.66 6.88 6.88 6.33 6.17 0 0 0 4.77 9.18 9.07 9.61 9.61 8.84 8.62 0 6.58 7.12 9.30 14.90 14.72 3.96 3.96 0 0 29.14 10.68 11.55 15.09 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4.01 3.99 0 0 0 0 0 4.01 7.97 4.20 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

The 2nd input (T) is:
T01 T02 T03 T04 T05 T06 T07 T08 T09 T10
18.6 18.8 17.8 18 18.2 18.4 18.6 18.7 18.9 19 18.7 18.9 18 18.2 18.3 18.5 18.7 18.8 19 19.1 18.2 18.3 17.6 17.7 17.9 18 18.2 18.3 18.4 18.5 19.2 19.4 18.4 18.6 18.8 18.9 19.1 19.3 19.5 19.6 19.8 20 18.8 19.1 19.3 19.5 19.7 19.9 20.1 20.3 19.9 20.1 19 19.2 19.4 19.6 19.8 20 20.2 20.4 20.6 20.8 19.5 19.8 20 20.3 20.5 20.7 20.9 21.1 20.7 20.9 19.7 20 20.2 20.4 20.6 20.8 21 21.2 20.7 20.9 19.8 20 20.2 20.4 20.6 20.8 21 21.1 20.4 20.6 19.5 19.7 19.9 20.1 20.3 20.6 20.8 20.9 20.9 21.1 20 20.2 20.4 20.6 20.9 21.1 21.3 21.4 21.2 21.4 20.1 20.4 20.6 20.8 21.1 21.3 21.5 21.7 21.5 21.7 20.4 20.7 20.9 21.2 21.4 21.6 21.8 22 21.8 22 20.9 21.1 21.3 21.5 21.7 21.9 22.1 22.3 21.4 21.6 20.4 20.6 20.9 21.1 21.3 21.5 21.7 21.9 22.2 22.3 21.2 21.4 21.6 21.9 22.1 22.3 22.5 22.7 22.3 22.5 21.4 21.6 21.8 22 22.2 22.4 22.6 22.8 22.4 22.6 21.5 21.7 21.9 22.1 22.3 22.5 22.7 22.9 22.8 23 21.7 22 22.2 22.5 22.7 22.9 23.2 23.4 22.2 22.4 21.2 21.4 21.6 21.9 22.1 22.3 22.5 22.7 22.5 22.7 21.5 21.7 21.9 22.2 22.4 22.6 22.8 23 22.1 22.2 21 21.3 21.5 21.7 22 22.2 22.4 22.6 21.8 22 20.8 21 21.2 21.5 21.7 21.9 22.1 22.3 22.4 22.6 21.3 21.6 21.8 22.1 22.3 22.5 22.8 23 22.5 22.7 21.3 21.6 21.9 22.1 22.4 22.6 22.9 23.1 22.6 22.8 21.3 21.6 21.9 22.2 22.5 22.7 23 23.2 22.1 22.3 20.9 21.2 21.5 21.8 22 22.3 22.5 22.7 21.9 22.2 20.8 21 21.3 21.6 21.8 22.1 22.3 22.5 21.6 21.8 20.5 20.7 21 21.2 21.5 21.7 21.9 22.1 21.4 21.6 20.3 20.5 20.8 21 21.3 21.5 21.7 21.9

The 3rd input (S) is:
S01 S02 S03 S04 S05 S06 S07 S08 S09 S10
13.74 13.31 13.01 13.99 12.45 13.10 12.80 12.30 12.46 12.99
This last input is constant and repeats itself for the number of rows in the P and T datasets.

The actual (target) data is:
Actual
3.17 3.26 4.22 5.19 3.88 17.03 15.8 4.22 50.42 15.56 5.19 4.57 4.11 3.99 4.22 3.78 12 15.56 9.86 49.04 19.64 13.28 9.67 7.79 7.47 5.99 5.45 4.93 4.69 4.57

I hope this makes my previous question explicit. Once again, thank you in advance for your support. FYI, the expected output is the summation of P, T and S adjusted with the corresponding initialized weights and biases of the network, i.e., P + T + S = Output. Kindly.
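One possible way to arrange the data described above for a feedforward network, sketched under stated assumptions: the thread was not resolved on the page, the frames P, T, S and actual below are random placeholders standing in for the commenter's real data, and treating each of the 30 rows as one training sample is an assumption.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Placeholder frames standing in for the commenter's real data:
# P and T are (30, 10), S is one length-10 row, `actual` has 30 targets.
P = pd.DataFrame(rng.random((30, 10)))
T = pd.DataFrame(rng.random((30, 10)))
S = pd.DataFrame(rng.random((1, 10)))
actual = pd.Series(rng.random(30))

# One sample per row: repeat S for every row, then concatenate the
# three variables into 30 features per sample.
S_rep = np.tile(S.to_numpy().ravel(), (30, 1))      # (30, 10)
X = np.hstack([P.to_numpy(), T.to_numpy(), S_rep])  # design matrix, (30, 30)
y = actual.to_numpy().reshape(-1, 1)                # targets, (30, 1)

# With this layout the input layer needs 30 neurons (e.g. [30, 4, 1]);
# a [3, 4, 1] network would need each variable reduced to one number
# per sample first, for example a row average.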
@sumitbarua7358 2 years ago
Kindly update the GitHub link, it's not working!
@machinelearningexplained 2 years ago
It's right there: github.com/yacineMahdid/artificial-intelligence-and-machine-learning/tree/master/deep-learning-from-scratch-python Sorry for the inconvenience!
@ssshukla26 3 years ago
Great video! However, the GitHub link is not working.
@lucassanchezfellay4344 3 years ago
Yes!!! Yacine, could you please re-upload it? Thanks
@sumitbarua7358 2 years ago
Did you get the code link?
@lucassanchezfellay4344 2 years ago
@sumitbarua7358 Nope
@machinelearningexplained 2 years ago
Sorry guys, I've messed up the link because I had spaces in my folder structure lol! Here it is: github.com/yacineMahdid/artificial-intelligence-and-machine-learning/tree/master/deep-learning-from-scratch-python
@Singer_Sonali 4 years ago
Does the same work for all datasets?
@machinelearningexplained 3 years ago
Yes, this should work on most datasets if you use this type of neural network architecture (feed-forward). However, like I said in the video, there is a better and more general way to implement gradient descent by using a computational graph. This is what is used in neural network packages like PyTorch or TensorFlow. I'll make a video about this in the coming week.
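For the curious, the computational-graph idea mentioned in this reply can be sketched in a few lines of plain Python. This is a toy scalar version of reverse-mode differentiation, not the actual PyTorch or TensorFlow machinery:

class Value:
    """A scalar node in a computational graph (toy illustration)."""
    def __init__(self, data, parents=()):
        self.data, self.grad = data, 0.0
        self._parents = parents
        self._backward = lambda: None

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def backward():
            self.grad += out.grad   # d(a+b)/da = 1
            other.grad += out.grad  # d(a+b)/db = 1
        out._backward = backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def backward():
            self.grad += other.data * out.grad  # d(a*b)/da = b
            other.grad += self.data * out.grad  # d(a*b)/db = a
        out._backward = backward
        return out

    def backprop(self):
        # Walk the graph in reverse topological order, applying each
        # node's local chain-rule step exactly once.
        order, seen = [], set()
        def topo(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    topo(p)
                order.append(v)
        topo(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

x, y = Value(3.0), Value(2.0)
z = x * y + x  # z = 9.0
z.backprop()
print(x.grad, y.grad)  # dz/dx = y + 1 = 3.0, dz/dy = x = 3.0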
@Singer_Sonali 3 years ago
@machinelearningexplained Can you make a video on how to import an Excel dataset into a neural network? I'm also confused about how to choose the biases and weights from the Excel file.
@machinelearningexplained 3 years ago
@Singer_Sonali The weights and biases of the neural network are set randomly at first. As you train your neural network, this is where they get adjusted by gradient descent in order to fit your data! For importing an Excel dataset into a neural network, I would suggest taking a look at the Pandas library. It is a very convenient data science library for loading datasets and structuring them as DataFrames! But yes, I will make a video soon where I import an actual dataset and train my neural network on it!
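A rough sketch of both points made in this reply; the file name and the "label" column are made-up placeholders:

import numpy as np
import pandas as pd

# Load the spreadsheet into a DataFrame (pandas needs the openpyxl
# package installed to read .xlsx files).
df = pd.read_excel("my_dataset.xlsx")       # hypothetical file name
X = df.drop(columns=["label"]).to_numpy()   # features; "label" column is assumed
y = df["label"].to_numpy().reshape(-1, 1)   # targets

# Weights and biases are NOT read from the file: they start out random,
# and gradient descent adjusts them during training.
rng = np.random.default_rng(42)
W1 = rng.normal(scale=0.1, size=(X.shape[1], 4))  # input -> 4 hidden neurons
b1 = np.zeros((1, 4))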
@Singer_Sonali 3 years ago
@machinelearningexplained Thanks, I'll be waiting!
@MafazaSPutra 3 years ago
@Singer_Sonali Did you ever try saving the ready-to-feed Excel file as a .csv file? As far as I know, the best practice is to be at least comfortable playing around with .csv files.
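For reference, converting between the two formats is a one-liner each way in pandas (file names below are placeholders):

import pandas as pd

df = pd.read_excel("my_dataset.xlsx")     # hypothetical file name
df.to_csv("my_dataset.csv", index=False)  # save a .csv copy
df = pd.read_csv("my_dataset.csv")        # reload, ready to feed the network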
@MalichiMyers 9 months ago
hi
@machinelearningexplained 9 months ago
Hello 👋 Let me know if you have questions!
@MalichiMyers 9 months ago
@machinelearningexplained Do you know a way to use the MNIST dataset without using sklearn, PyTorch, or TensorFlow? If not, what are some datasets that you recommend?
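One way to do it with only NumPy and the standard library is to parse MNIST's raw IDX files directly. A sketch, assuming the public mirror below stays available:

import gzip
import struct
import urllib.request

import numpy as np

def load_idx(url):
    """Download a gzipped IDX file (MNIST's raw format) into an array."""
    with urllib.request.urlopen(url) as response:
        data = gzip.decompress(response.read())
    # IDX header: two zero bytes, a type code, the number of dimensions,
    # then one big-endian uint32 per dimension.
    ndim = data[3]
    dims = struct.unpack(">" + "I" * ndim, data[4:4 + 4 * ndim])
    return np.frombuffer(data, dtype=np.uint8, offset=4 + 4 * ndim).reshape(dims)

base = "https://storage.googleapis.com/cvdf-datasets/mnist/"  # public mirror
X_train = load_idx(base + "train-images-idx3-ubyte.gz") / 255.0  # (60000, 28, 28)
y_train = load_idx(base + "train-labels-idx1-ubyte.gz")          # (60000,)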
@CGMossa 2 years ago
There are no dislikes. Is this good or bad?
@randomdude79404 2 years ago
Dislikes are no longer shown to viewers by YouTube.
@machinelearningexplained 2 years ago
I can pull the stats from YouTube Studio:
- 90.1% liked
- 201 likes
- 22 dislikes
@CGMossa 2 years ago
@machinelearningexplained That's awesome. I guess I'll subscribe to you!
@machinelearningexplained 2 years ago
@CGMossa Nice! Let me know if there is a topic you want me to cover!