This is terrific. Any plans to do an update, since Canvas has changed a bit over the past couple of years?
@leu2304 · 7 days ago
Thank you so much. Excellent explanation!
@prince_JB · 7 days ago
Wow.. It looks realistic ❤
@charifahmad2133 · 9 days ago
Incredible
@andreashermle2716 · 11 days ago
Great tutorial, very well explained. Keep up the good work. Thank you.
@womanlikeme33 · 12 days ago
I appreciate your explanation and teaching. I heard my colleagues talk about this often and was curious to find more information, and this really simplifies it for a non-technical person like me. I am happy I watched this. Thank you
@mthandenimathabelacap5466 · 24 days ago
Clear explanation of the Ridge() model. Very intuitive. SUBSCRIBED.
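For anyone who wants to try the Ridge() model mentioned above, here is a minimal sketch assuming scikit-learn and synthetic data; the alpha value is an arbitrary illustrative choice, not one taken from the video.

```python
# Minimal Ridge regression sketch (assumes scikit-learn; data is synthetic).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                                   # 200 samples, 5 features
y = X @ np.array([1.5, -2.0, 0.0, 0.5, 3.0]) + rng.normal(scale=0.5, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = Ridge(alpha=1.0)        # alpha controls the strength of the L2 penalty
model.fit(X_train, y_train)
print("R^2 on test data:", model.score(X_test, y_test))
print("Coefficients:", model.coef_)
```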
@sudhirkulaye261 · 29 days ago
I recently completed the "Python, Data Science, & AI - Level 2" course on the CFA Institute website. I really enjoyed the course and learned a lot. I'm interested in continuing my education in this area. Do you have any suggestions for what I should study next?
@vtmint · 1 month ago
Hello professor, the lecture is very good. Could I have the slides, please? Thank you
@editors_2127 · 1 month ago
How do you use a map instead of a slicer for filtering?
@CatanTech · 1 month ago
2 mins deep and I have the concept already... Great Job Professor.
@VenujanSrithar · 1 month ago
Well explained. Can you do an example with a balanced dataset, and also explain the F1 score on an imbalanced dataset?
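As an illustration of the F1 point raised here, a small sketch (assuming scikit-learn and synthetic data, not taken from the video) comparing accuracy and F1 on an imbalanced dataset:

```python
# Accuracy vs. F1 on an imbalanced dataset (synthetic data, illustrative only).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

# ~95% negatives, ~5% positives: accuracy can look high while the positive class is poorly detected.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = clf.predict(X_test)

print("accuracy:", accuracy_score(y_test, pred))
print("F1 (positive class):", f1_score(y_test, pred))
```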
@GaneshEswar · 1 month ago
Amazing, well explained
@zeinabhajiabotorabi7699 · 1 month ago
👍👍
@HansScharler · 1 month ago
Are you planning any update to this course?
@sekharsamanta6266 · 1 month ago
Really helpful video for getting a quick sense of what to use when
@Mai.Data123 · 1 month ago
Thank you so much for this. As a non-IT person trying to learn Python, I finally understood this!
@MoneySoul · 1 month ago
Can you post the article link?
@MoneySoul · 1 month ago
wow, great video
@imransohail9611 · 1 month ago
Aoa. The provided link for the material is not working; please post a working link
@roshanshah8540 · 1 month ago
RDS and Aurora are 2 different things
@sarahsarpong2076 · 1 month ago
Mn
@mohammedobad2174 · 1 month ago
I think distance-based algorithms require scaling. Please double-check
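The point about scaling is right in general: distance-based methods such as k-NN compute distances directly on raw feature values, so features with large ranges dominate the metric. A minimal sketch, assuming scikit-learn and synthetic data with one deliberately exaggerated feature scale:

```python
# Feature scaling before a distance-based model (k-NN); data is synthetic.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X[:, 0] *= 1000.0   # exaggerate one feature's scale so it dominates raw distances

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

unscaled = KNeighborsClassifier().fit(X_train, y_train)
scaled = make_pipeline(StandardScaler(), KNeighborsClassifier()).fit(X_train, y_train)

print("k-NN without scaling:", unscaled.score(X_test, y_test))
print("k-NN with scaling:   ", scaled.score(X_test, y_test))
```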
@xXMo7aLXx · 2 months ago
May God reward you, Prof Ryan. Your explanation is very enjoyable and clear, and the way you explain the steps is amazing. Thank you
@GeorgeWilliams-v9d · 2 months ago
Harris Carol Harris Edward Jackson Jason
@chubbycheeks2731 · 2 months ago
Demonastery
@Alias.Nicht.Verfügbar · 2 months ago
the best explanation! finally understood, thanks!!
@mengistuaraya2934 · 2 months ago
Wow, it's a wonderful video, my brother. Thanks for describing the pros and cons
@deshbhakt992 · 2 months ago
Finally understand regularization after wasting my whole day. This is by far the best video on the topic. Thanks, sir
@tamerelkot7807 · 2 months ago
How can I download the CSV file of the data you have used?
@PekassAdams · 2 months ago
Hi Prof Ryan, please could you let me know how I can reach you? Thanks in advance...
@PekassAdams · 2 months ago
Amazing...
@mohammedshamil3976 · 2 months ago
Awesome... you are a gem, sir
@ShandukaniVhulondo-wn5eh · 2 months ago
Thank you, Prof., for a very well-articulated explanation of AWS SageMaker.
@wuyanchu · 2 months ago
Thx and god bless 😊
@rueldonato8186 · 3 months ago
I love your teaching, straightforward and clear. Also, the visual presentation is very easy to understand.
@faturismee · 3 months ago
Hi sir! I like your videos so much, but you're not uploading anymore? How's it going?
@amirshahmie · 3 months ago
You're the best prof!
@moleculardescriptor · 3 months ago
Something is not right in this lecture. If each subsequent tree is _the same_, as shown here, then after 10 steps the 0.1 learning rate will be nullified, i.e. equivalent to a scaling of 1.0! In other words, no regularization. Hence, the trees must be different, right?
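The intuition in this comment is correct: in gradient boosting, each new tree is fit to the residuals (negative gradients) of the current ensemble, so the trees differ from round to round, and the learning rate shrinks each tree's contribution rather than being cancelled out. A hand-rolled sketch of that loop for squared-error loss, assuming scikit-learn decision trees and synthetic data:

```python
# Hand-rolled gradient boosting for squared error: each tree fits the residuals
# of the current ensemble, and its contribution is shrunk by the learning rate.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=300)

learning_rate = 0.1
prediction = np.full_like(y, y.mean())    # start from a constant model
trees = []

for _ in range(50):
    residuals = y - prediction            # negative gradient of squared error
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    prediction += learning_rate * tree.predict(X)
    trees.append(tree)                    # each tree differs: the residuals change every round

print("training MSE:", np.mean((y - prediction) ** 2))
```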
@PrithviBathla-ub8oy · 3 months ago
3:23
@maheshmichael6955 · 3 months ago
Beautifully Explained :)
@rohithgowdax · 3 months ago
It was really helpful ❤
@TomekMachalewski · 3 months ago
I really liked that you provided a few examples for those more difficult distributions. What I would love to see is an explanation of where the probability formulas in the Poisson and Binomial distributions come from. I see that they are combinatorics formulas, but I'll have to check the e^(-mu) factor in the Poisson.
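On where the e^(-mu) factor comes from: the Poisson PMF is the limit of the Binomial PMF when n grows and p shrinks with np = mu held fixed. A sketch of that standard derivation (not taken from the video):

```latex
P_{\text{Binomial}}(k) = \binom{n}{k} p^k (1-p)^{n-k},
\qquad
P_{\text{Poisson}}(k) = \frac{\mu^k e^{-\mu}}{k!}.

% Setting p = \mu/n and letting n \to \infty:
\binom{n}{k}\left(\frac{\mu}{n}\right)^{k}\left(1-\frac{\mu}{n}\right)^{n-k}
= \underbrace{\frac{n(n-1)\cdots(n-k+1)}{n^{k}}}_{\to\,1}\,
  \frac{\mu^{k}}{k!}\,
  \underbrace{\left(1-\frac{\mu}{n}\right)^{n}}_{\to\, e^{-\mu}}\,
  \underbrace{\left(1-\frac{\mu}{n}\right)^{-k}}_{\to\,1}
\;\longrightarrow\; \frac{\mu^{k} e^{-\mu}}{k!}.

% The e^{-\mu} factor is exactly the limit of (1 - \mu/n)^n.
```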
@rex-qh9sy · 4 months ago
Standardization has nothing to do with the normal distribution
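A quick numeric check of that claim, assuming NumPy and a synthetic sample: standardization only subtracts the mean and divides by the standard deviation, so it gives mean 0 and standard deviation 1 for any distribution; it does not make the data normal.

```python
# Standardizing a clearly non-normal (exponential) sample: the result has
# mean ~0 and std ~1, but the distribution's shape (skewness) is unchanged.
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=10_000)   # right-skewed, definitely not normal

z = (x - x.mean()) / x.std()                  # standardization (z-scores)

print("mean:", round(z.mean(), 4), "std:", round(z.std(), 4))
skew = np.mean(z ** 3)                        # third standardized moment
print("skewness (still positive, still non-normal):", round(skew, 3))
```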
@kaydigitalacademy7240 · 4 months ago
Thank you
@mohanafathollahi7656 · 4 months ago
Very well explained, thank you
@Unstable_Diffusion89 · 4 months ago
So say you pass two training examples: the gradients get calculated for example 1 and the weights are adjusted. Then the same happens for example 2, but what if the net gradient adjustments are 0? That would be a computational redundancy. Is there any area of research or technique where you can prevent unnecessary gradient calculations?
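To make the scenario concrete, a minimal per-example SGD sketch for linear least squares (synthetic data, illustrative only) that skips the update when an example's gradient is effectively zero. In practice, redundant gradient work is usually reduced by mini-batching or by importance sampling of examples rather than by an explicit zero check.

```python
# Per-example SGD for linear least squares, skipping near-zero gradient updates.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=1000)

w = np.zeros(3)
lr = 0.01
skipped = 0

for epoch in range(5):
    for xi, yi in zip(X, y):
        error = xi @ w - yi
        grad = 2.0 * error * xi           # gradient of (xi.w - yi)^2 w.r.t. w
        if np.linalg.norm(grad) < 1e-8:   # example contributes ~nothing: skip the update
            skipped += 1
            continue
        w -= lr * grad

print("estimated weights:", np.round(w, 3))
print("updates skipped as redundant:", skipped)
```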
@BioniChaos · 4 months ago
A year later and ChatGPT is still generating great code! It can write quite complex code, and the context window is gradually increasing