I think it's very humble of you to include clips from other streamers/speakers/YouTubers in your videos. Rather than ripping off someone else's exact explanation and delivery, you've identified that person X explained something best and simply put X's explanation in your video directly.
@ShahxadAkram 2 years ago
This series is so underrated. I don't know why it has so few views and likes. It should be at the top.
@sytekd00d 6 years ago
Dude!! This is probably the best learning channel for anything Deep Learning with Python. The explanations with the visuals make things SO much easier to understand.
@deeplizard 6 years ago
Thank you sytekd00d. Really appreciate you letting me know!
@rubencg195 6 years ago
I love this channel! Keep up the good work. It would be great if you could continue with more advanced architectures after going through deep NNs and convolutional NNs. Maybe LSTMs, GANs, and many other interesting and useful tools.
@aiahmed608 2 years ago
I see what's special here! The way you combine the math with the code we're writing is what a comprehensive workflow means. Thank you so much for your efforts!
@ravitejavarma1307 5 years ago
I feel like you are a god... this channel literally saved me. I desperately needed someone who could explain PyTorch functionality to me, and this channel is the best of the best. Thank you so much, please post more videos...
@Brahma2012 5 years ago
Thank you for this exhaustive explanation of the important and critical concept of broadcasting. This really helps.
@BenjaminGolding 5 years ago
As someone who studied computer science: these are basic matrix transformations (scalar multiplications), and the video explains them really intuitively for people without any linear algebra background.
@mdafjalhossain 6 years ago
Great work! I love your PyTorch videos.
@abdelhakaissat7041 4 years ago
Very well explained, thank you!
@deeplizard 6 years ago
Check out the corresponding blog and other resources for this video at: deeplizard.com/learn/video/QscEWm0QTRY
@adarshkedia8074 5 years ago
Please add videos on GANs, autoencoders, etc. The videos are really good and the explanations are perfect.
@deeplizard 5 years ago
Thanks, Adarsh! Will consider. In the meantime, we do have an overview of autoencoders in our Unsupervised Learning video and blog. Check it out! deeplizard.com/learn/video/lEfrr0Yr684
@李祥泰 6 years ago
Great work!! Nice, detailed explanation.
@deeplizard 6 years ago
Hey 李祥泰 - Thank you!
@biplobdas2560 5 years ago
What is that minus zero from t.neg() at 10:48? :)
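For context, the minus zero at that point comes from IEEE-754 signed zero: negating a float 0.0 yields -0.0, which still compares equal to 0.0. A short sketch:

```python
import torch

t = torch.tensor([0., 1., 2.])
print(t.neg())  # the first element prints as -0.

# -0.0 is IEEE-754 signed zero; it compares equal to 0.0:
print(torch.tensor(-0.) == torch.tensor(0.))  # tensor(True)
```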
@felipealco 5 years ago
7:35 I decided to try another case with t3 = torch.tensor([[2],[4]], dtype=torch.float32). I expected t1 + t3 to equal tensor([[3., 3.], [5., 5.]]), but instead it returned the same as t1 + t2.
@deeplizard 5 years ago
Your expectation is correct. t1 + t3 is indeed tensor([[3., 3.], [5., 5.]]). The result of t1 + t2 is slightly different: tensor([[3., 5.],[3., 5.]]). Maybe double check your variable assignments?
@felipealco 5 years ago
@@deeplizard yeah I had a "typo". I thought I had written t1 + t3, but I wrote t1 + t2. 😅️
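The shapes in this exchange can be reproduced with a short sketch. Note that t1 and t2 are not shown in the thread; the values below are inferred from the results quoted above:

```python
import torch

# Inferred from the thread: t1 is a 2x2 tensor of ones and t2 is the
# 1-D tensor [2., 4.].
t1 = torch.ones(2, 2, dtype=torch.float32)
t2 = torch.tensor([2., 4.], dtype=torch.float32)
t3 = torch.tensor([[2.], [4.]], dtype=torch.float32)

# t2, shape (2,), broadcasts across rows: each row gets [2., 4.] added.
print(t1 + t2)  # tensor([[3., 5.], [3., 5.]])

# t3, shape (2, 1), broadcasts across columns: each column gets [2., 4.] added.
print(t1 + t3)  # tensor([[3., 3.], [5., 5.]])
```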
@minimatamou8369 5 years ago
Hi, thank you for your videos; they're really useful and I love them. Question though: is there a way to write our own element-wise function and ask PyTorch to apply it for us, like at 10:45 where the methods t.neg() and t.sqrt() are applied element-wise? Something like: t.func(lambda x: x*x), applied element-wise the way t.sqrt() is. Or even involving other tensors, like: t1 = torch.tensor([1, 1, 1]); t2 = torch.tensor([2, 2, 2]); add = lambda a, b: a + b; t1.func(add, t2), where the output would be the same as t1 + t2. Thanks.
@deeplizard 5 years ago
Hey Minimata - You should be able to do it by extending the Tensor class. However, I'm not sure how it will work with autograd. I'd try digging around in the docs. Maybe here: pytorch.org/docs/stable/notes/extending.html
@minimatamou8369 5 years ago
@deeplizard Will check it out. Thanks!
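A minimal sketch of two options here, assuming a recent PyTorch build: functions composed from torch operations are applied element-wise automatically (and stay autograd-friendly), while Tensor.apply_ takes an arbitrary Python callable but is CPU-only, in-place, and bypasses autograd:

```python
import torch

t = torch.tensor([1., 4., 9.])

# Any function built from torch operations is applied element-wise
# automatically, with no special machinery needed:
def square(x):
    return x * x

print(square(t))  # tensor([ 1., 16., 81.])

# For an arbitrary Python callable there is Tensor.apply_, which
# mutates a CPU tensor in place and does NOT work with autograd:
t2 = t.clone()
t2.apply_(lambda x: x * x)
print(t2)  # tensor([ 1., 16., 81.])
```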
@xingchenzhao5331 4 years ago
It is truly a great tutorial!
@林盟政 5 years ago
Love this series XD
@stacksonchain93206 жыл бұрын
thankyou for introducing tensors, its a topic many shy away from explaining but it now seems very simple. a topic i still dont quite get is merge layers such as dot, more specifically the axes argument in keras, (not sure what is the pytorch equivalent), is it similar to the .cat function? perhaps i should start using pytorch, it seems more practical. thanks again.
@deeplizard 6 years ago
Hey Carl - You are welcome! Appreciate your feedback. PyTorch also has a dot function. Keras and PyTorch both compute a dot product. See: en.wikipedia.org/wiki/Dot_product Likewise, Keras has a concatenate function. Check it out here: keras.io/layers/merge/ It does what the PyTorch one does. I do like PyTorch. #intuitive Keras is cool though! #cool Glad tensors are now #simple
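The dot product mentioned in this reply can be sketched in PyTorch like so:

```python
import torch

t1 = torch.tensor([1., 2., 3.])
t2 = torch.tensor([4., 5., 6.])

# Dot product: the sum of the element-wise products.
print(torch.dot(t1, t2))  # tensor(32.)

# Equivalent spelled out with element-wise ops:
print((t1 * t2).sum())    # tensor(32.)
```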
@spearchew 3 years ago
Great vid, subscribed.
@donfeto7636 2 years ago
Love this too.
@toremama 4 years ago
Isn't the typing sound the smoothest and most awesome thing you've ever heard in your life?
@hassaanahmad3970 4 years ago
Hello, I have a question. I have a 100 x 768 matrix of test data and a 100 x 768 matrix of train data. I'm doing KNN, so I need to compute the Euclidean distance between the test and train data and map it into a 100 x 100 matrix. The trick is, I can't use any loops here, so I've got to do it completely through broadcasting. Any ideas how I might go about it?
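One loop-free way to get the 100 x 100 distance matrix via broadcasting, sketched with random stand-ins for the real test/train matrices:

```python
import torch

test = torch.randn(100, 768)
train = torch.randn(100, 768)

# Insert singleton dims so broadcasting produces all pairs:
# (100, 1, 768) - (1, 100, 768) -> (100, 100, 768)
diff = test[:, None, :] - train[None, :, :]
dists = diff.pow(2).sum(dim=2).sqrt()  # shape (100, 100)

# Same result with PyTorch's built-in pairwise-distance helper:
dists2 = torch.cdist(test, train)
print(dists.shape)  # torch.Size([100, 100])
```

The broadcasting version materializes a (100, 100, 768) intermediate, so torch.cdist is the more memory-friendly choice at scale.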
@rizvanahmedrafsan 4 years ago
When I was trying out the element-wise comparison operations in a Jupyter Notebook, it showed me True/False instead of 1/0 as output. I wrote exactly the same code shown here. Can anyone please explain why that happened?
@deeplizard 4 years ago
Hey Rizvan - The difference you are seeing here is due to an update that was included in PyTorch version 1.2.0. Thank you for spotting this change. I've updated the text version of this video on the site. Anytime a change like this occurs, you can track it down by searching the release notes on PyTorch's Github page. See here (look at the top of the breaking changes section): github.com/pytorch/pytorch/releases/tag/v1.2.0 See the comparison operation section here: deeplizard.com/learn/video/QscEWm0QTRY
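The behavior described in this reply can be sketched like this (assuming PyTorch >= 1.2.0, where comparison ops return torch.bool tensors instead of 0/1 values):

```python
import torch

t = torch.tensor([1., 2., 3.])

# Element-wise comparison now yields a boolean tensor:
mask = t.gt(1)
print(mask)  # tensor([False,  True,  True])

# To reproduce the older 1/0 output shown in the video, cast the result:
print(mask.to(torch.uint8))  # tensor([0, 1, 1], dtype=torch.uint8)
```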
@antonmarshall5194 5 years ago
Great tutorial. How can I check whether every element in a tensor is True (not just truthy)? I already tried any(t.reshape(1, -1).numpy().squeeze()), but any() returns True as soon as any element is non-zero (truthy).
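For reference, a sketch of one way to approach this with PyTorch's built-in all() reduction. The all_true helper name and its dtype check are my additions, used here to rule out tensors that are merely truthy numeric values rather than literal booleans:

```python
import torch

t = torch.tensor([True, True, False])

# torch.Tensor.all reduces with logical AND over every element:
print(t.all())      # tensor(False)
print(t[:2].all())  # tensor(True)

# Hypothetical helper: require actual booleans, not just non-zero values.
def all_true(t):
    return t.dtype == torch.bool and bool(t.all())

print(all_true(torch.tensor([1, 2, 3])))     # False: truthy, but not bool
print(all_true(torch.tensor([True, True])))  # True
```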
@grombly5 жыл бұрын
Kinda random, but can you link the audio file during the coding segments? the intense vacuum noise lol
@deeplizard 5 years ago
🤣 I must know! Do you plan to run this on a loop while you code? Maybe white noise for sleeping? 🤣🤣🤣 It's an awesome sound! Link: freesound.org/people/swiftoid/sounds/119782/
@ruchiagrawal600 4 years ago
Can someone help me by explaining how the -1 in .reshape(1, -1) determines the shape of the tensor?
@deeplizard 4 years ago
The number of elements inside the tensor is fixed. The -1 tells the reshape function to calculate that dimension from the other dimensions and the element-count constraint. Suppose we have an array A of 12 elements and we call A.reshape(3, -1). Then A.shape would be (3, 4), since 3 x 4 = 12. Hope this helps 😄 Chris
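The explanation above can be sketched like this:

```python
import torch

a = torch.arange(12)  # 12 elements

# -1 tells reshape to infer that dimension from the element count:
print(a.reshape(3, -1).shape)  # torch.Size([3, 4])   (12 / 3 = 4)
print(a.reshape(1, -1).shape)  # torch.Size([1, 12])  (12 / 1 = 12)

# Only one dimension may be -1, and the known dimensions must divide
# the element count evenly; a.reshape(5, -1) raises a RuntimeError.
```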