Why is deep learning taking off? (C1W1L04)

119,320 views

DeepLearningAI

7 years ago

Take the Deep Learning Specialization: bit.ly/3bEJYFN
Check out all our courses: www.deeplearning.ai
Subscribe to The Batch, our weekly newsletter: www.deeplearning.ai/thebatch
Follow us:
Twitter: / deeplearningai_
Facebook: / deeplearninghq
Linkedin: / deeplearningai

Comments: 17
@normalchannel4747 4 years ago
Thank you, dear Andrew Ng. I truly love your courses :)
@behnamsadat 4 years ago
Around minute 1, the effect of the amount of data on different machine learning algorithms compared to deep learning is explained. I can't find a reference for this. Does anyone know the reference?
@kazenokize2 3 years ago
Thank you for this incredible series, from a humble Brazilian.
@anirbansaha1996 3 years ago
"If you don't understand what it means, don't worry about it." Finally, at 6:26, I am seeing Andrew Ng.
@kshitijdeansam8093 6 years ago
For text classification, what is the standard amount of data that is good for deep learning?
@saanvisharma2081 5 years ago
@@Horopter Exactly. And we do need high-end processors to perform text classification.
@rickjamesbtch9417 4 years ago
It depends on how powerful your system is, the size of the text, and the desired accuracy. One rule that fits all does not exist :) It would be great if it did!
@machinistnick2859 3 years ago
Thanks sir ji
@AmitKumar-hm4gx 5 years ago
At 07:04 Andrew states that even though the slope of the ReLU function is also 0 on the left, it seems to work better than the sigmoid. If the slopes of both functions are 0 on the left, what is the reasoning for ReLU being better?
@MrCmon113 5 years ago
Learning is faster because the gradient does not saturate for positive inputs, and units being effectively turned off for negative inputs leads to a sparse model. There are still problems with ReLU, but in practice it seems to work best.
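
Not from the video, just a minimal NumPy sketch (my own illustration) to make the reply above concrete: the sigmoid derivative shrinks toward zero as inputs grow large in magnitude, while the ReLU derivative stays at 1 for positive inputs and 0 for negative ones.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(z):
    s = sigmoid(z)
    return s * (1.0 - s)            # shrinks toward 0 for large |z| (vanishing gradient)

def relu_grad(z):
    return (z > 0).astype(float)    # 0 for negative inputs, 1 for positive inputs

z = np.array([-5.0, -1.0, 0.5, 1.0, 5.0])
print("sigmoid grad:", np.round(sigmoid_grad(z), 4))
print("relu grad:   ", relu_grad(z))
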
@dheerajkura5193 5 years ago
The explanation in the video is more inclined towards deep learning. Andrew mentioned that deep learning is applied when there is more data, but he didn't talk about the variety of the data. Does that mean that when the size of the data increases, the variety of the data will also increase?
@dheerajkura5193 5 years ago
@ting machineintellect More dimensions in the data. Say, to recognize an image there are a lot of dimensions: the machine has to understand the pixel color coding, which ranges from 0-255, and in that situation deep learning is useful. When it comes to a binary classification problem like predicting loan repayment probability, usually around 20-25 parameters (it's not limited to that) are considered, which decide the applicant's financial credibility, and the prediction is simply whether the customer will repay the loan or not. So, simply put: when the data is on that smaller scale, we can't directly apply deep learning, as that leads to overfitting; proper analysis of the data needs to be done before applying algorithms. Since Andrew is talking about deep learning, the examples he mentioned are things like speech recognition and image recognition, where classic machine learning models won't give better predictions for speech, image, or text-sequence tasks. Hope this explanation suits the question.
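
A minimal sketch of the small tabular case described above (the synthetic data, feature count, and use of scikit-learn are my own assumptions, not anything from the video): with roughly 25 features and a modest number of rows, a plain logistic regression is a reasonable baseline before reaching for a deep network.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a loan-repayment dataset: ~25 numeric features per applicant.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 25))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
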
@sandipansarkar9211 3 years ago
Nice explanation.
@lemyul 5 years ago
thanks dee
@user-qg6yl7gl8c 8 months ago
Where are the tests and quizzes??
@retelligence_wading 4 years ago
Korea Aerospace University, first!
@user-oh2zg9nx9e 8 months ago
The content is good, but the video quality is pretty bad.