Why NEVER use pandas' get dummies for creating dummy variables | Machine Learning

5,009 views

Rachit Toshniwal

A day ago

Comments: 27
@akashkunwar 2 years ago
But I was taught that cleaning and encoding should be done before splitting, and that scaling should be performed. Is that wrong?
@joeyk2346 3 years ago
Hi Rachit - why not use get_dummies on the entire dataset (before splitting into train/test)? Wouldn't that solve the potential problem? Thanks!
@rachittoshniwal 3 years ago
No, if you use it before splitting, you're essentially looking at the entire dataset, which would lead to data leakage. Also, if a category appears only in the test set and you run get_dummies on the full data, you'd create a column for it beforehand, which is technically incorrect.
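To make the mismatch concrete, here is a minimal sketch (the column name and data are made up) of what happens when pd.get_dummies is applied separately to the train and test splits:

```python
import pandas as pd

# Hypothetical data: "maroon" appears only in the test split
X_train = pd.DataFrame({"color": ["red", "blue", "green", "red"]})
X_test = pd.DataFrame({"color": ["blue", "maroon"]})

train_dummies = pd.get_dummies(X_train)
test_dummies = pd.get_dummies(X_test)

print(train_dummies.columns.tolist())  # ['color_blue', 'color_green', 'color_red']
print(test_dummies.columns.tolist())   # ['color_blue', 'color_maroon']
# The column sets differ, so a model fitted on train_dummies cannot be
# applied directly to test_dummies without extra alignment work.
```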
@akashkunwar 2 years ago
@@rachittoshniwal But I was taught that cleaning and encoding should be done before splitting, and that scaling should be performed. Is that wrong?
@utkar1 2 years ago
@@akashkunwar Yes, look up data leakage.
@BiologyIsHot a year ago
@@utkar1 Standard one-hot encoding before splitting should not lead to data leakage. In fact, you generally *should* one-hot encode before splitting if you want to be efficient about it. That said, I don't use pd.get_dummies(), so I'm not sure if it does something weird.
@BiologyIsHot a year ago
@@rachittoshniwal Maybe it's a language thing, but I'm not sure what you mean by "unknown categories"; if you mean missing values, that is not data leakage. Data leakage is when a variable incorporates information about the distribution of the test split, for instance an encoding/processing step that involves min/max calculations or the mean of the entire dataset. Those statistics should be computed using only values from the training split. One-hot encoding doesn't incorporate information about the distribution of the categorical predictors in the test split. It sounds like you're describing a scenario where uncommon categorical labels appear in only one split. That is an entirely separate issue from data leakage, and if you really have so few observations relative to the number of categorical levels, you will never find a satisfactory solution. Encoding after splitting to somehow "avoid" that is *not* a good solution, nor is it an example of data leakage.
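To illustrate the kind of leakage the commenter describes (statistics estimated from the full dataset), here is a minimal sketch with toy data, using scikit-learn's StandardScaler as an example of such a preprocessing step fitted on the training split only:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X = np.random.rand(100, 3)             # toy numeric features
y = np.random.randint(0, 2, size=100)  # toy binary target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # mean/std estimated from the training split only
X_test_scaled = scaler.transform(X_test)        # the test split is only transformed, never fitted on
```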
@devpython8956 2 years ago
Hi Rachit Toshniwal, what a good point. To overcome that, after applying pd.get_dummies you have to align the dataframes. If you do that, you can run any machine learning model and it will be fine. Please see an example below:
X_train = pd.get_dummies(X_train)
X_valid = pd.get_dummies(X_valid)
X_test = pd.get_dummies(X_test)
X_train, X_valid = X_train.align(X_valid, join='left', axis=1)
X_train, X_test = X_train.align(X_test, join='left', axis=1)
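One caveat on the align approach above: with join='left', dummy columns that exist only in the training frame are introduced into the validation/test frames as NaN unless you pass fill_value=0 (or call fillna(0) afterwards), and categories seen only in the test split are silently dropped. A self-contained sketch with made-up data:

```python
import pandas as pd

# Toy splits: "maroon" appears only in the test split
X_train = pd.get_dummies(pd.DataFrame({"color": ["red", "blue", "green"]}))
X_test = pd.get_dummies(pd.DataFrame({"color": ["blue", "maroon"]}))

# join='left' keeps only the training columns; fill_value=0 avoids NaN in
# dummy columns that never occur in the test split. 'color_maroon' is dropped.
X_train, X_test = X_train.align(X_test, join="left", axis=1, fill_value=0)

print(X_test.columns.tolist())  # ['color_blue', 'color_green', 'color_red']
```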
@wtfashokjr 4 months ago
Why is pd.get_dummies not working for me?
@KA00_7 5 months ago
Learned something new today. Thank you so much!
@eleonoraocello610 2 years ago
Hi Rachit, so what should I do to encode categorical variables while avoiding the mismatch? I'm working on a large dataset (before splitting), and I've already missed some categories.
@rachittoshniwal 2 years ago
You can use scikit-learn's OneHotEncoder.
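A minimal sketch of that suggestion (made-up data; sparse_output requires scikit-learn 1.2+, older versions use sparse=False instead): fit the encoder on the training split only and let handle_unknown='ignore' deal with categories the encoder never saw:

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

X_train = pd.DataFrame({"color": ["red", "blue", "green", "red"]})
X_test = pd.DataFrame({"color": ["blue", "maroon"]})  # "maroon" is unseen during training

# handle_unknown='ignore' encodes unseen test categories as all-zero rows
# instead of raising an error, so train and test end up with identical columns.
encoder = OneHotEncoder(handle_unknown="ignore", sparse_output=False)
X_train_enc = encoder.fit_transform(X_train)  # fit on the training split only
X_test_enc = encoder.transform(X_test)

print(encoder.get_feature_names_out())  # ['color_blue' 'color_green' 'color_red']
print(X_test_enc)                        # the "maroon" row becomes [0. 0. 0.]
```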
@eleonoraocello610 2 years ago
@@rachittoshniwal Thanks!
@venkyvenky4715 2 months ago
But you can do get_dummies before train_test_split.
@atiaspire 4 years ago
I was doing it wrong the whole time. You saved me!
@rachittoshniwal 4 years ago
I'm glad I could help! :)
@Chiefempress 3 years ago
Me too... 🥺🥺
@jeweltilak767 2 years ago
Namaste and thank you. Your videos are very helpful.
@sandeshkharat2273 3 years ago
What should we use when a dataset column has many categorical values (like 200)?
@rachittoshniwal 3 years ago
Hi, I believe you could try grouping the "rare" categories into a new "other" category, e.g. by saying that any category covering less than 1% of the records gets put into "other".
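A rough sketch of that idea (the column name, data, and the 1% threshold are illustrative); in keeping with the rest of the thread, the threshold frequencies should ideally be computed on the training split only:

```python
import pandas as pd

# Hypothetical column with a long tail of rare categories
df = pd.DataFrame({"city": ["delhi"] * 500 + ["mumbai"] * 480
                           + ["shimla"] * 8 + ["manali"] * 7 + ["ooty"] * 5})

# Categories covering less than 1% of the rows get lumped into "other"
freq = df["city"].value_counts(normalize=True)
rare_categories = freq[freq < 0.01].index
df["city"] = df["city"].where(~df["city"].isin(rare_categories), other="other")

print(df["city"].value_counts())  # delhi 500, mumbai 480, other 20
```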
@sumanthpichika5295 3 years ago
Regarding Sandesh's question: it means we replace the categories that cover less than 1% of the records with a single new category named "other", so that we avoid ending up with too many categories.
@ArpitRawat 2 years ago
@Sandesh Check out the hashing method (feature hashing).
@Brain_quench 3 years ago
Thank you.
@BiologyIsHot a year ago
This video is wrong and I advise you all to ignore it.
@udayak6964 a year ago
What is wrong with it?