Tensors for Deep Learning - Broadcasting and Element-wise Operations with PyTorch

31,233 views

deeplizard

Days ago

Comments: 38
@ravihammond 6 years ago
I think it's very humble of you to include clips from other streamers/speakers/youtubers in your videos. Rather than ripping off someone's exact explanation and delivery, you've identified that this other person X has explained something best, and put X's explanation in your video directly instead of copying it.
@ShahxadAkram 2 years ago
This series is so underrated. I don't know why it has so few views and likes. It should be at the top.
@sytekd00d 6 years ago
Dude!! This is probably the best learning channel for anything Deep Learning with Python. The explanations with the visuals make things SO much easier to understand.
@deeplizard 6 years ago
Thank you sytekd00d. Really appreciate you letting me know!
@rubencg195 6 years ago
I love this channel! Keep up the good work. It would be great if you could continue explaining more advanced architectures after going through Deep NNs and Convolutional NNs. Maybe LSTMs, GANs, and many other interesting and useful tools.
@aiahmed608 2 years ago
I see the specialty here! The way you're combining the math with the way we write the code is what a comprehensive workflow means. Thank you so much for your efforts!
@ravitejavarma1307 5 years ago
I feel like you are a god... this channel literally saved me. I desperately needed someone who could explain PyTorch functionality to me, and this channel is the best of the best. Thank you so much; please post more videos...
@Brahma2012 5 years ago
Thank you for this exhaustive explanation of the important and critical concept of broadcasting. This really helps.
@BenjaminGolding 5 years ago
As someone who studied computer science: these are basic matrix transformations (scalar multiplications), and the video explains them really intuitively for people without any linear algebra knowledge.
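A minimal sketch of the scalar broadcasting mentioned above (the tensor values here are just an example, not from the video):

```python
import torch

t = torch.tensor([[1., 2.],
                  [3., 4.]])

# The scalar 2 is broadcast to every element of the 2x2 tensor:
doubled = t * 2
print(doubled)  # tensor([[2., 4.], [6., 8.]])
```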
@mdafjalhossain 6 years ago
Great work! I love your PyTorch videos.
@abdelhakaissat7041 4 years ago
Very well explained, thank you
@deeplizard 6 years ago
Check out the corresponding blog and other resources for this video at: deeplizard.com/learn/video/QscEWm0QTRY
@adarshkedia8074 5 years ago
Please add videos on GANs, autoencoders, etc. The videos are way too good and the explanation is perfect.
@deeplizard 5 years ago
Thanks, Adarsh! Will consider. In the meantime, we do have an overview of autoencoders in our Unsupervised Learning video and blog. Check it out! deeplizard.com/learn/video/lEfrr0Yr684
@李祥泰 6 years ago
Great work!! Nice, detailed explanation.
@deeplizard 6 years ago
Hey 李祥泰 - Thank you!
@biplobdas2560 5 years ago
What is that minus zero from t.neg() at 10:48? :)
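The -0. comes from IEEE-754 floating point, which has a signed zero: negating 0.0 produces -0.0, which still compares equal to 0.0. A quick check (the tensor values are hypothetical):

```python
import torch

t = torch.tensor([0., 1., -2.])

# Negation is element-wise; the first element prints as -0.
neg = t.neg()
print(neg)

# Signed zero is only a representation detail; -0.0 == 0.0 holds:
print(neg[0] == 0.0)  # tensor(True)
```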
@felipealco 5 years ago
7:35 I decided to try another case with t3 = torch.tensor([[2],[4]], dtype=torch.float32). I expected t1 + t3 to equal tensor([[3., 3.], [5., 5.]]), but instead it returned the same as t1 + t2.
@deeplizard 5 years ago
Your expectation is correct. t1 + t3 is indeed tensor([[3., 3.], [5., 5.]]). The result of t1 + t2 is slightly different: tensor([[3., 5.],[3., 5.]]). Maybe double check your variable assignments?
@felipealco 5 years ago
@@deeplizard yeah I had a "typo". I thought I had written t1 + t3, but I wrote t1 + t2. 😅️
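For reference, the two broadcasts discussed in this thread (shapes inferred from the comments above):

```python
import torch

t1 = torch.ones(2, 2)            # shape (2, 2)
t2 = torch.tensor([2., 4.])      # shape (2,)   -> broadcast down the rows
t3 = torch.tensor([[2.], [4.]])  # shape (2, 1) -> broadcast across the columns

print(t1 + t2)  # tensor([[3., 5.], [3., 5.]])
print(t1 + t3)  # tensor([[3., 3.], [5., 5.]])
```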
@minimatamou8369 5 years ago
Hi, thank you for your videos, they're really useful and I love them. Question though: is there a way to write our own element-wise function and ask PyTorch to apply it for us, like at 10:45 where the methods t.neg() and t.sqrt() are applied element-wise? Something like this:

t.func(lambda x: x*x)  # output would be the same as t * t

Or even including other tensors, like so:

t1 = torch.tensor([1, 1, 1])
t2 = torch.tensor([2, 2, 2])
add = lambda a, b: a + b
t1.func(add, t2)  # output would be the same as t1 + t2

Thanks.
@deeplizard 5 years ago
Hey Minimata - You should be able to do it by extending the Tensor class. However, I'm not sure how it will work with autograd. I'd try digging around in the docs. Maybe here: pytorch.org/docs/stable/notes/extending.html
@minimatamou8369 5 years ago
@@deeplizard Will check it out. Thanks !
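Two options worth knowing here, beyond extending the Tensor class: composing PyTorch's built-in ops already yields an element-wise (and autograd-friendly) function, and for arbitrary Python callables there is Tensor.apply_, which is in-place, CPU-only, and not tracked by autograd. A sketch:

```python
import torch

t = torch.tensor([1., 2., 3.])

# Composing built-in ops is element-wise and works with autograd:
square = lambda x: x * x
result = square(t)
print(result)  # tensor([1., 4., 9.])

# Tensor.apply_ runs an arbitrary Python callable per element,
# but it mutates the tensor, only works on CPU, and breaks autograd:
u = t.clone()
u.apply_(lambda x: x * x)
print(u)       # tensor([1., 4., 9.])
```

For performance, prefer composing built-in ops: apply_ falls back to a Python-level loop over elements.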
@xingchenzhao5331 4 years ago
It is truly a great tutorial!
@林盟政 5 years ago
Love this series XD
@stacksonchain9320 6 years ago
Thank you for introducing tensors; it's a topic many shy away from explaining, but it now seems very simple. A topic I still don't quite get is merge layers such as dot, more specifically the axes argument in Keras (not sure what the PyTorch equivalent is). Is it similar to the .cat function? Perhaps I should start using PyTorch; it seems more practical. Thanks again.
@deeplizard 6 years ago
Hey Carl - You are welcome! Appreciate your feedback.

PyTorch also has a dot function. Keras and PyTorch both compute a dot product. See: en.wikipedia.org/wiki/Dot_product

Likewise, Keras has a concatenate function. Check it out here: keras.io/layers/merge/ It does what the PyTorch one does.

I do like PyTorch. #intuitive Keras is cool though! #cool Glad tensors are now #simple
@spearchew 3 years ago
Great vid, subscribed.
@donfeto7636 2 years ago
love this 2
@toremama 4 years ago
Isn't the typing sound the smoothest and most awesome thing you've ever heard in your life?
@hassaanahmad3970 4 years ago
Hello, I have a question. I have a 100 x 768 matrix of test data and a 100 x 768 matrix of train data. I'm doing KNN, so I need to compute the Euclidean distance between the test and train data and map it into a 100 x 100 matrix. Now the trick is, I can't use any loops here, so I've got to do it completely through broadcasting. Any ideas how I might go about it?
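One loop-free approach (a sketch, with random stand-ins for the data): insert singleton dimensions so the subtraction broadcasts to shape (100, 100, 768), then reduce over the feature axis.

```python
import torch

test = torch.randn(100, 768)   # hypothetical test data
train = torch.randn(100, 768)  # hypothetical train data

# (100, 1, 768) - (1, 100, 768) broadcasts to (100, 100, 768):
diff = test[:, None, :] - train[None, :, :]
dists = diff.pow(2).sum(dim=2).sqrt()
print(dists.shape)  # torch.Size([100, 100])

# torch.cdist computes the same pairwise Euclidean distances:
assert torch.allclose(dists, torch.cdist(test, train), atol=1e-3)
```

Note the memory trade-off: the broadcasted intermediate holds 100 x 100 x 768 floats, so for larger matrices torch.cdist (or the a² + b² - 2ab expansion) is the better choice.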
@rizvanahmedrafsan 4 years ago
When I was trying out the element-wise comparison operations in a Jupyter Notebook, it showed me True/False instead of 1/0 as output. I wrote exactly the same code shown here. Can anyone please explain why that happened?
@deeplizard 4 years ago
Hey Rizvan - The difference you are seeing here is due to an update that was included in PyTorch version 1.2.0. Thank you for spotting this change. I've updated the text version of this video on the site.

Anytime a change like this occurs, you can track it down by searching the release notes on PyTorch's GitHub page. See here (look at the top of the breaking changes section): github.com/pytorch/pytorch/releases/tag/v1.2.0

See the comparison operation section here: deeplizard.com/learn/video/QscEWm0QTRY
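In other words: since PyTorch 1.2.0, comparison ops return torch.bool tensors, and casting recovers the older 1/0 style. A quick sketch (the tensor values are hypothetical):

```python
import torch

t = torch.tensor([1., -2., 3.])

mask = t > 0
print(mask.dtype)  # torch.bool (since PyTorch 1.2.0)
print(mask.int())  # tensor([1, 0, 1], dtype=torch.int32) -- the pre-1.2.0 style
```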
@antonmarshall5194 5 years ago
Great tutorial. How can I check whether every element in a tensor is True (not just truthy)? I already tried any(t.reshape(1, -1).numpy().squeeze()), but any() also returns True if any element is nonzero (truthy).
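One way (a sketch, not from the video): a truthiness test like .all() accepts any nonzero value, so compare against 1 first if the elements must be exactly True/1.

```python
import torch

t = torch.tensor([1, 1, 2])

# .all() is a truthiness test: any nonzero value passes it
print(t.all())         # tensor(True)  -- the 2 counts as truthy

# Requiring elements to be exactly 1/True needs an equality check first:
print((t == 1).all())  # tensor(False) -- 2 is truthy but not True
```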
@grombly 5 years ago
Kinda random, but can you link the audio file used during the coding segments? The intense vacuum noise lol
@deeplizard 5 years ago
🤣 I must know! Do you plan to run this on a loop while you code? Maybe white noise for sleeping? 🤣🤣🤣 It's an awesome sound! Link: freesound.org/people/swiftoid/sounds/119782/
@ruchiagrawal600 4 years ago
Can someone help me by explaining how the -1 in .reshape(1, -1) determines the shape of the tensor?
@deeplizard 4 years ago
The number of elements inside the tensor is fixed. The -1 tells reshape to calculate that dimension from the other dimensions and the element-count constraint. Suppose we have an array A of 12 elements and we do A.reshape(3, -1). Then A.shape would be (3, 4), since 3 x 4 = 12. Hope this helps 😄 Chris
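The rule above as a quick sketch:

```python
import torch

A = torch.arange(12)  # 12 elements, so -1 must resolve to 12 / (other dims)

print(A.reshape(3, -1).shape)  # torch.Size([3, 4])   because 12 / 3 = 4
print(A.reshape(1, -1).shape)  # torch.Size([1, 12])  because 12 / 1 = 12
print(A.reshape(-1).shape)     # torch.Size([12])     fully flattened
```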