**Advantages and Disadvantages of RMSE (Root Mean Square Error):**

1. **Advantages:**
   - **Time of Convergence:** RMSE tends to converge faster than Mean Squared Error (MSE). This is because RMSE puts more emphasis on large errors due to the square root operation, which can help the optimization algorithm converge more quickly.
   - **Robustness to Outliers:** RMSE is less sensitive to outliers compared to MSE. Outliers have a smaller impact on RMSE because the square root tends to reduce the influence of extreme values.
   - **Gradient Descent:** RMSE works well with gradient descent optimization algorithms. The square root in RMSE helps prevent the algorithm from getting stuck in regions with large errors.

2. **Disadvantages:**
   - **Interpretability:** The square root operation makes RMSE less intuitive to interpret compared to MSE. It might be harder to explain the significance of the error to non-technical stakeholders.
   - **Sensitivity to Large Errors:** RMSE can be sensitive to large errors, and the model may be penalized heavily for a few significant mistakes. In some cases, this sensitivity may not be desirable.
   - **Non-Negativity:** RMSE cannot handle negative prediction errors well, as the square root of a negative value is undefined. This can be a limitation in certain applications.

**Comparison with MSE (Mean Squared Error):**
- **Time of Convergence:** RMSE often converges faster than MSE due to its emphasis on larger errors.
- **Robustness to Outliers:** RMSE is more robust to outliers than MSE. Outliers have a reduced impact on RMSE because of the square root.
- **Gradient Descent:** Both RMSE and MSE work well with gradient descent, but RMSE may converge faster.
- **Shape of Gradient Descent:** The shape of the gradient descent curve for RMSE is smoother compared to MSE. The square root operation in RMSE can help prevent the algorithm from getting stuck in regions with large errors.

In simple terms, RMSE is like MSE with a square root, making it converge faster, less sensitive to outliers, and giving a smoother gradient descent curve. However, it may be less intuitive and sensitive to large errors. Choose between them based on the specific characteristics of your data and the goals of your model.
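To make the relationship between the two metrics concrete, here is a minimal sketch (the numbers are made up for illustration) showing that RMSE is simply the square root of MSE, which brings the error back to the same unit as the target:

```python
import numpy as np

# Hypothetical actuals and predictions (e.g. prices in dollars)
y_true = np.array([100.0, 150.0, 200.0, 250.0])
y_pred = np.array([110.0, 140.0, 195.0, 260.0])

mse = np.mean((y_true - y_pred) ** 2)  # units: dollars squared
rmse = np.sqrt(mse)                    # units: dollars, same as y_true

print(mse)   # 81.25
print(rmse)  # ~9.01
```

Because RMSE is a monotonic transform of MSE, both are minimized by the same model; the difference matters for interpretation and for the shape of the loss surface, not for which fit is "best".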
@aryanthombre2971 · 8 months ago
ChatGPT used*
@Analytix_AI · 2 months ago
Absolutely right, brother, he even forgot to remove the ** 😂 @@aryanthombre2971
@Bhupi_Journey · 1 year ago
RMSE advantages are:
1. Differentiable (because it is the same as MSE, only it is sqrt(MSE))
2. It has a local minimum and a global minimum (similar to point 1)
3. Robust to outliers (since it is the square root of a square)
4. It has the same unit (since it is the square root of a square)
@demon3769 · 1 year ago
I'm just saying, disadvantage: convergence takes more time?
@nishchalbasyal · 1 year ago
Sorry, but it is not robust to outliers, because it already takes the squared difference before taking the square root.
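A quick check with toy numbers (a hypothetical sketch, not from the video) supports this point: a single wild prediction still dominates RMSE, because the squaring happens before the square root is taken:

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root mean squared error: errors are squared first, rooted last
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

y_true  = np.array([10.0, 12.0, 11.0, 10.0])
clean   = np.array([10.5, 11.5, 11.0, 10.0])   # all predictions close
outlier = np.array([10.5, 11.5, 11.0, 60.0])   # one wild prediction

print(rmse(y_true, clean))    # ~0.35
print(rmse(y_true, outlier))  # ~25.0, the single outlier dominates
```

By contrast, a metric like MAE, which never squares the errors, would move far less in this scenario.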
@pragyantiwari3885 · 1 year ago
Thank you so much for helping us understand these metrics, and for focusing on this channel too... Please make videos on feature engineering and feature selection for these ML models, as it has become a very crucial part to learn.
@someshkumarmishra8279 · 7 months ago
Brother, thank you for this amazing series 🙏🙏🙏🙏
@celestialgamer360 · 4 months ago
Advantages of RMSE: RMSE combines the advantages of both MSE and MAE. Like MSE, it penalizes larger errors more, but the square root brings it back to the same unit as the actual values, making it more interpretable. RMSE is often preferred when you want to measure how spread out the errors are, especially when large errors are particularly undesirable.
@lakshyakumarpandey382 · 7 months ago
Advantages of RMSE (according to me, obviously): it would be robust to outliers, there is a parabolic curve, it is of course differentiable, and it has the same unit.
@ashishvinod2193 · 1 year ago
RMSE: when we have outliers in the data and compute MSE, then take RMSE (the square root of MSE), the error value also decreases because of the square root. The unit stays the same.
@dhruvilsheladiya6250 · 8 months ago
RMSE works by taking the root of the mean squared error. The error values from MSE can be in a very high range, and after applying RMSE the output comes back to the original scale. Suppose the error value is 15: MSE is 225, and after applying RMSE the output is 15 again, that's it. So the error stays small and close to the original scale, which is why RMSE is used.
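In code, the single-error example above (assuming an error of 15, as in the comment) looks like:

```python
import math

error = 15.0
mse = error ** 2        # 225.0: squaring blows the error up
rmse = math.sqrt(mse)   # 15.0: the root brings it back to the original scale

print(mse, rmse)  # 225.0 15.0
```

With more than one sample the round trip is not exact per error (the mean is taken between the square and the root), but the scale argument still holds.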
@rubayetalam8759 · 1 year ago
Why are the cost function and mean squared error called the same thing, when the cost function is 1/2m and MSE is 1/n, with m = n?
@SwapanRaj-ry3mm · 6 months ago
The RMSE graph is differentiable, it has the same unit, and it is mostly used in deep learning. Disadvantage: not robust to outliers.
@SaurabhSingh-sy1pe · 5 months ago
@1:47 it should be 1/2n in the cost function
@malixahmed · 4 months ago
exactly
@prasadtelalwar8394 · 1 year ago
Thank you
@DevrajSingh-mt3ww · 1 month ago
nice
@mdhasan_3134 · 1 year ago
I can't download the notes from GitHub, can anyone help please?
@rahulsinghkiddys · 1 year ago
Sir, I have bought your Data Science Hindi course, but it is not working. Not a single video has appeared so far. What is the problem?
@krishnaikhindi · 1 year ago
Please try contacting support; they will guide you.
@praveenhannehiya · 1 year ago
How do I buy the data science course?
@Bhupi_Journey · 1 year ago
My question is: why do we use MSE instead of RMSE in linear regression?
@ashishvinod2193 · 1 year ago
Suppose we have outliers in the data. When we fit linear regression, it tries to find the best-fit line, so the risk is that it takes the outlier points into account while doing so. If we then compute MSE, those errors get squared, so the model can't predict properly, accuracy decreases, and the cost becomes high, with major changes in the output value. With RMSE, we take the square root of those squared errors, shrinking them back down while minimizing them to find the best-fit line. The unit is also the same, because the squared error is square-rooted. I hope you got it 👍🏻
@rubayetalam8759 · 1 year ago
Hello sir, for the cost function we used 1/2n, but now for MSE we are using 1/n. Is that okay?
@arshim4782 · 1 year ago
That 1/2 is just there to cancel the 2 that appears in the numerator after differentiation of x^2. You can ignore it completely if you want. For an even more detailed explanation, check video no. 2 in this playlist.
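A small numeric sketch (toy data, purely illustrative) of that point: with J(θ) = 1/(2n) Σ(θx − y)², differentiating the square produces a factor of 2 that the 1/2 cancels, so the gradient is just 1/n Σ(θx − y)x. A finite-difference check confirms the cancellation:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])
n = len(x)

def cost(theta):
    # J(theta) = 1/(2n) * sum((theta*x - y)^2)
    return np.sum((theta * x - y) ** 2) / (2 * n)

def grad(theta):
    # The 2 from differentiating the square cancels the 1/2,
    # leaving 1/n * sum((theta*x - y) * x)
    return np.sum((theta * x - y) * x) / n

theta, eps = 0.5, 1e-6
numeric = (cost(theta + eps) - cost(theta - eps)) / (2 * eps)
print(abs(numeric - grad(theta)) < 1e-6)  # True: gradients agree
```

Dropping the 1/2 only rescales the cost and its gradient by a constant, so the minimizer is unchanged; at most it corresponds to a different effective learning rate in gradient descent.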