Every time you code, I learn something new. Please never stop coding end-to-end in your videos. Thank you, you are amazing!
@priyankasagwekar3408 · 2 years ago
This video was really helpful: a one-hour bootcamp covering everything about ANNs with PyTorch, from loading datasets and defining the neural network architecture to optimizing the hyperparameters with Optuna.
@sambitmukherjee1713 · 2 years ago
Super cool Abhishek. Loved every section, especially the "poor man's early stopping"... ;-)
@mikhaeldito · 4 years ago
Thank you for sharing your knowledge. This is an amazing tutorial with no inaccessible jargon. 10/10, highly recommend.
@AnubhavChhabra · 3 years ago
Great explanation! Making lives easier one layer at a time :)
@ephi124 · 4 years ago
I am writing a research paper in this area. I can't wait!
@Phateau · 4 years ago
Really appreciate the effort you put in the video. This is world class. Thank you
@lokeshkumargmd · 4 years ago
This is the first time I am watching your videos. Very informative! Thanks for sharing 😇
@bhumikachawla9149 · 1 year ago
Great video, thank you!
@shaikrasool1316 · 4 years ago
Every time, something new. Thank you so much!
@neomatrix369 · 4 years ago
Love the video! Hyperparameter optimisation is one of my favourites and this video tops it all, so now I've got to do this on my model training! 🎉
@TheOraware · 3 years ago
Wonderful, mate. Much appreciated for sharing it.
@priyankasagwekar3408 · 2 years ago
For those looking to load the saved models and use them on the test dataset:
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH))
model.eval()
@yasserahmed2781 · 3 years ago
what a gem
@jeenakk7827 · 4 years ago
That was a very informative session. Is hyperparameter tuning covered in your book? I think I should buy a copy! Thanks.
@abhishekkrthakur · 4 years ago
Yes, it is, but if you just want hyperparameter optimization, watch my other video.
@RajnishKumarSingh · 4 years ago
Love the fun part👌
@AayushThokchom · 4 years ago
A general question: is HPO over-hyped? If an ensemble performs much better, should we invest time in HPO given that we have limited time? Thoughts?
@tiendat3602 · 3 years ago
Awesome. One question, though: how do you deal with overfitting and underfitting while building the end-to-end fine-tuned model?
@sindhujaj5907 · 2 years ago
Thanks for the amazing video! In this example, will the hidden size and dropout change for each hidden layer, or remain the same across all hidden layers?
@siddharthsinghbaghel441 · 2 years ago
Do you have any blogs? I like reading more than watching.
@kaspereinarson1061 · 2 years ago
Thanks for a great video! Just to be clear: you're using standard 5-fold CV, thus optimising for a set of hyperparameters that gives the best loss across (the mean of) all 5 folds. Wouldn't it be more suitable to split the training data into train/val and then optimize the hyperparameters individually for each fold (nested CV)?
@kannansingaravelu · 3 years ago
Hi Abhishek, just landed on this video. I am not sure whether you addressed this earlier, but I am curious to know why you prefer Torch over TensorFlow or Keras.
@kuberchaurasiya · 4 years ago
Great, waiting eagerly. Will you use (sklearn) pipelines?
@abhishekkrthakur · 4 years ago
pytorch
@MadrissS · 4 years ago
Hi Abhishek, very cool video as always. Don't you think we should reset the early_stopping_counter to 0 after a new best_loss is found (line 62 at 41:20 in the video)? Thanks!
@priyankasagwekar3408 · 2 years ago
I have 5 models saved, one for each fold, at the end of execution. If I am not wrong, they are essentially the same model saved 5 times. I was looking for a way to load the models and use them on the test dataset. The PyTorch documentation shows the following way:
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH))
model.eval()
Now, initialising the model object (step 1) is an issue in the absence of logs and knowledge of the exact architecture of the best model. You also need to set the Optuna sampler seed to reproduce the results.
@avinashmatani9980 · 4 years ago
Do you have any videos where I can learn the basics of what you did at the start? For example, at the start you created a class.
@stilgarfifrawi7155 · 3 years ago
Great video... but when can we get a mustache tutorial?
@oligibbons · 2 years ago
Why do you keep the same number of neurons in every layer? How would you change your approach for deep learning models of different shapes?
@hiteshvaidya3331 · 3 years ago
Why did you make the loss function static?
@HabiburRahamaniit · 4 years ago
Respected sir, I have a question: if we have a variable-length input dataset and a variable-length output dataset, how would we train or build a neural network model for that dataset?
@renatoviolin · 4 years ago
Maybe a recurrent neural network (RNN), which aims to solve exactly this problem of a different input size for each sample.
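To make the RNN suggestion concrete, a common recipe is to pad the batch and wrap it in `pack_padded_sequence` so the recurrent layer ignores the padding. A minimal sketch (the feature and hidden dimensions are arbitrary):

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence

# three sequences of different lengths, each step a 5-dim feature vector
seqs = [torch.randn(4, 5), torch.randn(2, 5), torch.randn(3, 5)]
lengths = torch.tensor([len(s) for s in seqs])

# pad to the longest sequence, then pack so the GRU skips the padding
padded = pad_sequence(seqs, batch_first=True)  # shape (3, 4, 5)
packed = pack_padded_sequence(padded, lengths, batch_first=True,
                              enforce_sorted=False)

rnn = nn.GRU(input_size=5, hidden_size=8, batch_first=True)
_, h_n = rnn(packed)  # h_n: final hidden state per sequence, (1, 3, 8)
print(h_n.shape)
```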
@RajnishKumarSingh · 4 years ago
Sir, what does the best trial value tell us after every trial? I have used it with LightGBM and it seems to work, but it doesn't do well on the test dataset. After every trial I calculated the accuracy; it gives me approximately 0.9942 for every trial, not identical, but the first two digits after the decimal are the same.
@valentinogolob9137 · 3 years ago
Shouldn't we set the early_stopping_counter to zero each time valid_loss is smaller than best_loss?
@marwaneelazzouzi2999 · 3 years ago
I think we should.
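For reference, the reset this thread is asking about is a single line inside the improvement branch. A minimal sketch of the patience loop in plain Python (the loss sequence is made up for illustration):

```python
def early_stopping_epochs(valid_losses, patience=3):
    """Return how many epochs actually run before patience runs out."""
    best_loss = float("inf")
    counter = 0
    epochs_run = 0
    for loss in valid_losses:
        epochs_run += 1
        if loss < best_loss:
            best_loss = loss
            counter = 0  # reset on every improvement: the line in question
        else:
            counter += 1
        if counter >= patience:
            break
    return epochs_run

# the improvement at epoch 5 resets the counter, so training survives past it
losses = [1.0, 0.9, 0.95, 0.93, 0.85, 0.9, 0.91, 0.92]
print(early_stopping_epochs(losses, patience=3))  # prints 8
```

Without the reset, the counter left over from epochs 3 and 4 would stop this run at epoch 6 despite the new best loss at epoch 5.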
@hasanmoni3928 · 4 years ago
How can I buy your book in Bangladesh?
@ankushjamthikar9780 · 2 years ago
What should I do if I want to tune the activation function as well? How and where should I include the line of code for it?
@jamesmiller2521 · 4 years ago
Where is your GM hoodie? 😤😁
@neomatrix369 · 4 years ago
Same question from me, why were you not wearing it ;) :P
@abhishekkrthakur · 4 years ago
next time :)
@MisalRaj · 4 years ago
👏👏
@neomatrix369 · 4 years ago
Any plans to make videos using other hyperparameter optimisation frameworks? I have a wishlist I can share if you like ;)
@abhishekkrthakur · 4 years ago
Check out my other video :) and send the list too, please.
@bjaniak102 · 4 years ago
What is Julian from Trailer Park Boys doing in your thumbnail though?
@abhishekkrthakur · 4 years ago
lol
@jonatan01i · 3 years ago
You could speed up evaluation if you put the prediction in a torch.no_grad() context.
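For illustration, the suggested change is just a context manager around the prediction step (model and shapes here are arbitrary):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
x = torch.randn(32, 10)

# no_grad() skips building the autograd graph, so inference is
# faster and uses less memory
with torch.no_grad():
    preds = model(x)

print(preds.requires_grad)  # False: no graph was recorded
```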
@mazharmumbaiwala9244 · 3 years ago
At 34:42, what's the use of the `forward` function?
@fredoliveira1223 · 3 years ago
It's a method of nn.Module. When you define a model, the forward function is where you define how the data should pass through the layers of your neural network to make a prediction.
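A minimal sketch of that pattern (`TinyNet` and its sizes are illustrative): `forward` holds the data flow, and calling the model dispatches to it through `nn.Module.__call__`:

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 16)
        self.fc2 = nn.Linear(16, 2)

    def forward(self, x):
        # forward() defines how data flows through the layers;
        # calling model(x) dispatches here via nn.Module.__call__
        x = torch.relu(self.fc1(x))
        return self.fc2(x)

model = TinyNet()
out = model(torch.randn(4, 10))  # __call__ runs forward under the hood
print(out.shape)
```

Calling `model(x)` rather than `model.forward(x)` matters because `__call__` also runs registered hooks before and after `forward`.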
@vasudhajoshi4766 · 2 years ago
Hello sir, I followed this tutorial to estimate the hyperparameters for my CNN model. When I freeze the initial layers of my model, I face an error on this line:
optimizer = getattr(optim, param['optimizer'])(filter(lambda p: p.requires_grad, model.parameters()), lr=param['learning_rate'])
where
param['optimizer'] = trial.suggest_categorical('optimizer', ['Adam', 'RMSprop'])
param['learning_rate'] = trial.suggest_loguniform('learning_rate', 1e-6, 1e-3)
The error is: IndexError: too many indices for tensor of dimension 1. Can you please explain why I am facing this error?
@Falconoo7383 · 2 years ago
I want the same for my CNN+LSTM model. If you resolve the error, can you please help me?
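The error above can't be diagnosed without the full model, but for reference, the parameter-filtering pattern in the question does work on an ordinary frozen model; a list comprehension is an equivalent, easier-to-inspect alternative to `filter`. A sketch with made-up layer sizes and a stand-in `param` dict:

```python
import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))

# freeze the first layer, as when fine-tuning a pretrained backbone
for p in model[0].parameters():
    p.requires_grad = False

param = {"optimizer": "Adam", "learning_rate": 1e-4}  # stand-in for trial output

# pass only the trainable parameters to the optimizer
optimizer = getattr(optim, param["optimizer"])(
    [p for p in model.parameters() if p.requires_grad],
    lr=param["learning_rate"],
)
print(len(optimizer.param_groups[0]["params"]))  # prints 2: last layer's weight and bias
```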
@rubenallaert9654 · 2 years ago
Hi, where can I find the code?
@abhishekkrthakur · 2 years ago
It's more of a code-along video.
@nikolabacic9790 · 4 years ago
Did not tune random seed smh
@Prasad-MachineLearningInTelugu · 4 years ago
🧚‍♀️🧚‍♀️🧚‍♀️🧚‍♀️🧚‍♀️
@Raghhuveer · 4 years ago
You said that this is just a dummy example; how would you use such methods in bigger problems, say training an R-CNN?