I know this may be a silly question, but at 21:47 how do I know that the "item_id" is going to be "7"?
@Makima-qi3zg · 22 hours ago
great🧠
@dkierans · 1 day ago
Please tell me you revoked that key!
@fft4752 · 2 days ago
thank you for making these videos
@syedistiukraja2454 · 4 days ago
Are these videos AI-generated?
@sumankhatri2679 · 5 days ago
Thank you professor for this great mastercourse
@sumankhatri2679 · 5 days ago
Thank you professor for this nice course
@cdtavijit · 7 days ago
This was quite amazing. It covered quite a lot in that one hour, including various agentic workflows, reflection, and tool calling. Of course it was just a starter, but I kept taking notes to search for more in-depth materials or code on AutoGen.
@shubhamkumarsingh9493 · 7 days ago
00:04 Supervised learning is the process of fitting a straight line to your data using a linear regression model.
01:16 Using a linear regression model to estimate the price of a house based on its size.
02:20 Linear regression is a type of supervised learning model used for regression problems.
03:33 Classification involves a small number of possible outputs, while regression has infinitely many possible numbers as outputs.
04:45 Understanding data plots and notation in machine learning.
06:11 The input data used to train the model is called a training set.
07:41 The dataset has 47 rows, each representing a different training example.
09:13 An index i refers to a specific row in the table.
Crafted by Merlin AI
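The notes above can be sketched as a tiny linear-regression fit. This is just an illustration, not code from the course: the house sizes and prices below are made-up data, and `np.polyfit` is used here as a shortcut for the least-squares fit the lecture describes.

```python
import numpy as np

# Hypothetical training set: sizes (1000s of sqft) and prices ($1000s).
# m training examples; row i is one training example, as in the lecture notation.
x = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
y = np.array([300.0, 350.0, 500.0, 550.0, 600.0])
m = len(x)  # number of training examples

# Fit the model f(x) = w*x + b by ordinary least squares
w, b = np.polyfit(x, y, 1)

# Estimate the price of a 1250 sqft house
print(w * 1.25 + b)  # ~340, i.e. about $340k on this made-up data
```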
@SajjadAhmad-oi5oq · 8 days ago
You could make a great app that acts like a teacher: when a topic is not understandable, we send it a picture of the topic and it explains that picture to us, with further explanation.
@mentalhealthcore · 8 days ago
Excellent!
@ThrivewithLotus · 8 days ago
Thank you so much, dear Andrew. I watched all the sessions and they were really informative; now I just need to start deploying some models in practice :)
@_XY_ · 9 days ago
👏👏
@challengeyourmind3937 · 9 days ago
Your content is falling off
@MuhammadOmar-qx6nh · 9 days ago
59:00
@Andreas-gh6is · 9 days ago
This has become mostly a platform for startups advertising their products, right?
@cdtavijit · 7 days ago
While it seems like it, since they keep producing more and more videos with these new startups, I have found them very useful, as I can quickly go through a video to see if the product is useful.
@MuhammadOmar-qx6nh · 10 days ago
55:57
@MuhammadOmar-qx6nh · 10 days ago
44:00
@prvizpirizaditweb2324 · 10 days ago
why do we assume that the parameters for the layers are the same ?
@christopheprotat · 11 days ago
Awesome course. So much better and more educational than loads of random content from the web. I wasted time until I found this course. Now things look clearer, and I think I am ready to explore my use case. You made my day!
@AfnanKhan-ni6zc · 11 days ago
Leaky ReLU (Rectified Linear Unit) and ReLU are both popular activation functions in deep learning, but with a key difference in how they handle negative inputs.

**ReLU (Rectified Linear Unit):**
* **Function:** ReLU simply sets any negative input to zero and keeps positive inputs unchanged. Imagine a switch that turns on (outputs a value) only for positive inputs and stays off (outputs zero) for negative inputs.
* **Advantage:** ReLU is computationally efficient and helps prevent vanishing gradients, a problem that can hinder training in deep neural networks.
* **Disadvantage:** ReLU suffers from a problem called "dying ReLU." If a neuron consistently receives negative inputs and its weights aren't adjusted properly, it can become permanently inactive (stuck at zero output) and never learn.

**Leaky ReLU:**
* **Function:** Leaky ReLU addresses the "dying ReLU" problem. It acts like a regular ReLU for positive inputs, but for negative inputs it allows a small, non-zero gradient to flow through. This small gradient helps keep the neuron slightly active and allows it to potentially learn from negative data points.
* **Advantage:** Leaky ReLU overcomes the "dying ReLU" issue and can learn from both positive and negative data.
* **Disadvantage:** Leaky ReLU introduces an additional parameter, the slope for the negative input region. Choosing the optimal slope can require some experimentation. It's also slightly less computationally efficient than ReLU due to the extra calculation for the negative slope.
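The two activations described above fit in a few lines of NumPy. A minimal sketch, not code from the course; the 0.01 negative slope is just a common default, not a recommended value:

```python
import numpy as np

def relu(z):
    # Negative inputs are clamped to zero; positives pass through unchanged
    return np.maximum(0.0, z)

def leaky_relu(z, slope=0.01):
    # Negative inputs keep a small slope instead of being zeroed,
    # so the gradient never fully "dies"; slope is the extra tunable parameter
    return np.where(z > 0, z, slope * z)

z = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
print(relu(z))        # negatives become 0
print(leaky_relu(z))  # negatives become slope * z
```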
@MuhammadOmar-qx6nh · 11 days ago
32:40
@leodexter191 · 11 days ago
Indians assemble
@MuhammadOmar-qx6nh · 12 days ago
20:18
@MuhammadOmar-qx6nh · 13 days ago
11:39
@SimonJohnPhillips · 13 days ago
Your vids are really helping me, man, and I've been in digital marketing for 13 years and generated hundreds of millions! So good! I'm inspired to keep actually releasing stuff!
@tzaidi2349 · 13 days ago
I had no intent of watching up to this point, but I'm loving it. Thanks for your and your team's hard work!
@rostamdinyarisharifabad2808 · 15 days ago
where is the lab link??
@pradachan · 11 days ago
it's paid, though
@aryanbodke9889 · 15 days ago
cat meow, dog woof
@redline7298 · 15 days ago
Is this course still relevant today after a year has passed?
@rostamdinyarisharifabad2808 · 15 days ago
Where is the Lab file?! I couldn't find it!
@brunofilipeaguiar · 16 days ago
3:42 random Andrew Ng flex on chinese logographs writing skills
@MaramHasan-ii3ey · 16 days ago
This course is not running properly on the website just like RLHF, please help. I really want to take this course
@michaellai872 · 16 days ago
that's fire
@ShoyomboRaphael · 17 days ago
I'm so excited about this course. Andrew Ng is da GOAT!
@sandeepvk · 17 days ago
Andrew is doing a fab job. I have been following him for quite a while, and he piqued my interest long before ChatGPT. I am building my foundations in Python and C before I pick up AI; honestly, which area in AI one should pick is a hard question in itself.
@VenkatesanVenkat-fd4hg · 17 days ago
A very valuable course for me and for AI engineers, from a senior data scientist. Andrew always does a great job, and kudos to the tech teams.
@STRATEGICBUSINESSful · 17 days ago
How do I enroll?
@Deeplearningai · 17 days ago
Hi @STRATEGICBUSINESSful! You can enroll here: learn.deeplearning.ai/courses/introduction-to-on-device-ai
@datal3x · 19 days ago
Second/Third ? :)
@pnachtwey · 19 days ago
I have gradient descent with momentum optimizing the parameters of a differential equation. There are 5 parameters. It is impossible to know the shape of the ovals in 5 dimensions. It is also real, not perfect, data. Determining alpha and beta is tricky and often takes trial and error. I wish these videos would use the same symbols; there is no consistency. Just about any minimize routine is faster than gradient descent for a small number of parameters.
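For reference, the update the comment is describing, with alpha (step size) and beta (momentum) as the two hyperparameters that typically take trial and error, can be sketched as follows. This is a generic illustration on a toy quadratic objective, not the commenter's differential-equation problem:

```python
import numpy as np

def gd_momentum(grad, theta0, alpha=0.1, beta=0.9, steps=200):
    # v accumulates an exponentially weighted average of past gradients;
    # alpha and beta usually need tuning per problem (trial and error)
    theta = np.asarray(theta0, dtype=float)
    v = np.zeros_like(theta)
    for _ in range(steps):
        v = beta * v + (1 - beta) * grad(theta)
        theta = theta - alpha * v
    return theta

# Toy objective f(theta) = ||theta||^2, gradient 2*theta, minimum at the origin
theta = gd_momentum(lambda t: 2 * t, np.array([5.0, -3.0]))
print(theta)  # close to [0, 0]
```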
@blairt8101 · 20 days ago
explained extremely well!
@user-jw7yl9er4b · 20 days ago
Hi! To all those people who are searching for the optional lab codes: you can ask ChatGPT with detailed requirements, mentioning the course name etc., and it will give you all the codes. Since you have been able to come this far, you will be able to find the code too. Happy coding.
@tilkesh · 20 days ago
Thank you very much.
@aaen9417 · 21 days ago
Awesome! Thank you
@shujaa · 21 days ago
First 😂
@virendrasingha7425 · 20 days ago
congrats 🎉
@healthtechbro-88 · 21 days ago
The complete course can't be found for free
@MrJekyllDrHyde1 · 22 days ago
Why do we need multiple agents? Can't a crew of agents be replaced by ONE very long prompt used for a single "agent"? You could concatenate the goals of the agents into one prompt so that the GenAI does the steps sequentially.
@alejandro_ao · 22 days ago
This is awesome! Thank you so much to both of you for this 🔥
@kylev.8248 · 22 days ago
Bless you both. The miracles you provide are underrated. 🙏