@tahabihaouline23333 жыл бұрын
Nice video. I just want to know how I can take this and get training data and testing data. An example would be really good.
@matriks_yang_bikin_bingung2 жыл бұрын
Hello Prof, how do you handle an imbalanced dataset in multilabel text classification?
@alexioannides33053 жыл бұрын
It would have been nice to demonstrate the impact these resampling methods have on the test metrics of some benchmark model (especially one that can use class weights in the loss function). In my experience, resampling can sometimes make a model perform worse and it can be better to use models with class-weighted loss functions.
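For comparison, here is a minimal sketch of the class-weighted alternative mentioned in the comment above, using scikit-learn's class_weight option on toy data (an assumption that scikit-learn is in use; this is not code from the video):

```python
# Sketch: class-weighted loss as an alternative to resampling (assumes scikit-learn).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Imbalanced toy data: roughly 90% class 0, 10% class 1
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

# class_weight='balanced' reweights the loss inversely to class frequency,
# so no rows are duplicated or dropped.
model = LogisticRegression(class_weight="balanced", max_iter=1000)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```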
@caioglech3 жыл бұрын
Great example. Perhaps you could make another video showing oversampling applied only to the training data. Lots of people (myself included) start out oversampling the whole dataset, which leads to data leakage and is a mistake.
@naveenkumarmangal96533 жыл бұрын
Thanks very much for this comment.
@xin26682 жыл бұрын
Really helpful comment, thank you
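To make the leakage point above concrete, here is a minimal sketch of the split-first workflow on toy data (an assumption that imbalanced-learn is installed; this is not the video's notebook):

```python
# Sketch: resample only the training split so test rows are never duplicated (assumes imbalanced-learn).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import RandomOverSampler

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)

# Split first, so the test set keeps its original distribution.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Oversample the training portion only.
ros = RandomOverSampler(random_state=0)
X_train_bal, y_train_bal = ros.fit_resample(X_train, y_train)
# Fit the model on (X_train_bal, y_train_bal) and evaluate on the untouched (X_test, y_test).
```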
@michellpayano50513 жыл бұрын
This is a clear and simple guide to get started, thanks for sharing! About your last question, I am curious what your answer would be: which approach do you prefer, based on your experience?
@DataProfessor3 жыл бұрын
Hi, I prefer undersampling
@michellpayano50513 жыл бұрын
@@DataProfessor Could you please share some of the reasons why?
@DataProfessor3 жыл бұрын
@@michellpayano5051 I prefer to use actual data and thus undersampling. Oversampling introduces artificial data upon balancing.
@michellpayano50513 жыл бұрын
@@DataProfessor I understand, thank you!!
@TinaHuang13 жыл бұрын
Ooo awesome tutorial! Love how clear it is
@DataProfessor3 жыл бұрын
Thank you! Cheers!
@eduardodimperio3 жыл бұрын
Why do undersampling instead of just slicing the dataset to take the same number of samples?
@aashishmalhotra2 жыл бұрын
Can you explain how logistic regression behaves with an imbalanced dataset?
@thinamG3 жыл бұрын
It's helpful for me and many more. Great tutorial, Chanin. Thank you so much for sharing with us.
@DataProfessor3 жыл бұрын
Happy to hear that! Thanks Thinam!
@rattaponinsawangwong54823 жыл бұрын
Oh, I seem to be the first one here. As a rookie data scientist, I have to deal with imbalanced datasets too. My question is: should we perform undersampling or oversampling within the cross-validation pipeline (say, k-fold CV), or should we do it before cross-validation?
@DataProfessor3 жыл бұрын
Hi, you can apply this prior to CV.
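For readers who prefer to keep the resampling inside each fold instead, imbalanced-learn provides a Pipeline that applies the sampler only to the training folds; a minimal sketch on toy data (an assumption that imbalanced-learn and scikit-learn are available; this is not the approach shown in the video):

```python
# Sketch: resampling inside cross-validation with imbalanced-learn's Pipeline.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from imblearn.pipeline import Pipeline
from imblearn.under_sampling import RandomUnderSampler

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)

# The sampler runs only on the training folds; each validation fold keeps its original distribution.
pipe = Pipeline([
    ("undersample", RandomUnderSampler(random_state=0)),
    ("model", LogisticRegression(max_iter=1000)),
])
scores = cross_val_score(pipe, X, y, cv=StratifiedKFold(n_splits=5), scoring="f1")
print(scores.mean())
```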
@sericthueksuban91513 жыл бұрын
I've been following your channel since the collab with Ken Jee without knowing your name. Now you're inspiring me to pursue data science even more! Thank you krub Ajarn Chanin! 🙏😂
@samuelbaba54063 жыл бұрын
Great job, Professor! Thank you so much for this clear video. By the way, do you think that after applying oversampling, for example, and training a model (like XGBoost) on the data, it would be interesting to use the Matthews correlation coefficient as a KPI to measure the model's performance? Or do you think it is not necessary? Thank you 🙏🏽
@DataProfessor3 жыл бұрын
Yes, definitely. MCC is a great way to measure the performance of classification models; a plus is that it is also more robust to imbalanced data than accuracy is.
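A minimal sketch of how MCC can expose a misleading accuracy score, using scikit-learn's metrics on a hypothetical always-majority classifier (illustration only, not the video's notebook):

```python
# Sketch: MCC vs. accuracy on an imbalanced test set (assumes scikit-learn).
from sklearn.metrics import accuracy_score, matthews_corrcoef

y_true = [0] * 90 + [1] * 10   # 90:10 imbalance
y_pred = [0] * 100             # a model that always predicts the majority class

print(accuracy_score(y_true, y_pred))     # 0.9 -- looks good but is misleading
print(matthews_corrcoef(y_true, y_pred))  # 0.0 -- shows nothing was actually learned
```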
@Ibraheem_ElAnsari3 жыл бұрын
Great tutorial, Prof! I could see how someone would use this on a test dataset; does it have other use cases? Thanks a lot!
@DataProfessor3 жыл бұрын
Hi, thanks for watching Ibraheem. Actually, we could use it in the training set in order to obtain a balanced model.
@kvdsagar3 жыл бұрын
Professor, can you share your contact details?
@amaransi49003 жыл бұрын
Thanks a lot. I am looking forward to your explanation of protein-ligand interactions through AI.
@muhammaddanial45493 жыл бұрын
Hey @amar, I am also working on machine-learning-based virtual screening and have almost completed ML models for VS. If you have any publications on it, I need some help. Thanks.
@amaransi49003 жыл бұрын
@@muhammaddanial4549 Hi, I am just at the beginning.
@Gopaliofficial......7 күн бұрын
Thanks for the video... really appreciate it.
@gunjankumar22673 жыл бұрын
Thanks for this quick guide to overcoming the imbalance issue. I'd like to know: before applying these oversampling or undersampling techniques, do I need to standardize my dataset, or can I go with the original form of the dataset?
@akbaraliotakhanov12213 жыл бұрын
I came here through the notification, thanks Professor. We will wait for new and interesting videos.
@DataProfessor3 жыл бұрын
Awesome, glad to hear and thanks for supporting the channel!
@sherifarafa903 жыл бұрын
I want to thank you for the Bioinformatics Project from Scratch. I managed to apply it to AChE and I plan to apply it to other targets. Thanks so much, and I'm waiting for more models 😁
@DataProfessor3 жыл бұрын
Fantastic! Glad to hear that.
@sherifarafa903 жыл бұрын
@@DataProfessor Can you do a tutorial on how to implement neural networks for drug discovery?
@muhammaddanial45493 жыл бұрын
@sherif Arafa Can I get the link to these from-scratch projects?
@muhammaddanial45493 жыл бұрын
I am also working on AChE and BChE.
@DataProfessor3 жыл бұрын
@@muhammaddanial4549 Awesome, sure the link is here kzbin.info/aero/PLtqF5YXg7GLlQJUv9XJ3RWdd5VYGwBHrP
@sanam68662 жыл бұрын
Should we calculate the molecular descriptors and then balance the data?
@Ghasforing23 жыл бұрын
Great tutorial as usual. Thanks for sharing, Professor!
@DataProfessor3 жыл бұрын
Glad you liked it!
@hubbiemid62093 жыл бұрын
In my data science course, we used the stratify parameter of train_test_split() from sklearn; how do the two approaches differ?
@DataProfessor3 жыл бұрын
That's a great question! Thanks for bringing it up. Stratification maintains the ratio of the classes so that the train/test splits have roughly the same class ratio (it does nothing about the class imbalance itself). On the other hand, data balancing will either bring up the minority class or bring down the majority class in order to make both the same size.
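A minimal sketch contrasting the two on toy data, assuming scikit-learn and imbalanced-learn (hypothetical data, not the video's dataset):

```python
# Sketch: stratification preserves the class ratio; resampling changes it.
from collections import Counter
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from imblearn.under_sampling import RandomUnderSampler

X, y = make_classification(n_samples=1000, weights=[0.8, 0.2], random_state=0)

# Stratified split: train and test keep roughly the same 80:20 ratio.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
print(Counter(y_tr), Counter(y_te))

# Undersampling: the majority class is reduced so both classes end up the same size.
X_bal, y_bal = RandomUnderSampler(random_state=0).fit_resample(X_tr, y_tr)
print(Counter(y_bal))
```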
@ifeanyiedward2789 Жыл бұрын
Thanks a lot. Very precise and easy to understand.
@Мага123-о2о3 жыл бұрын
Thanks for the lesson, professor! I'd like to ask one question if you don't mind. Should we always over/undersample to a 1:1 ratio? I guess in cases where the initial ratio of majority to minority classes is 99:1, it can cause some problems while modelling.
@DataProfessor3 жыл бұрын
Hi, the practice of addressing data balancing across a wide range of scenarios is a topic for research and experimentation. It might be worthwhile to check out published papers on the topic for various use cases. Please feel free to share what you find.
@Мага123-о2о3 жыл бұрын
Thank you for your response! I will definitely research this topic :D
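For anyone experimenting with ratios other than 1:1, imbalanced-learn's samplers accept a sampling_strategy argument; a minimal sketch on toy data (an assumption that imbalanced-learn is used, not something shown in the video):

```python
# Sketch: resampling to a target ratio other than 1:1 via sampling_strategy.
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import RandomOverSampler

X, y = make_classification(n_samples=2000, weights=[0.99, 0.01], random_state=0)
print(Counter(y))  # roughly 99:1

# sampling_strategy=0.5 requests a minority:majority ratio of 1:2 after resampling.
ros = RandomOverSampler(sampling_strategy=0.5, random_state=0)
X_res, y_res = ros.fit_resample(X, y)
print(Counter(y_res))
```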
@ahmedjamel421 Жыл бұрын
Great tutorial, Sir. When you split the data into X and Y and perform the resampling method, how can you concatenate them back together later?
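One common way to put resampled features and labels back into a single table, assuming both are pandas objects (a sketch with placeholder data, not the video's code):

```python
# Sketch: recombine resampled features and labels into one DataFrame (assumes pandas).
import pandas as pd

# Placeholder resampled outputs; in practice these come from your sampler's fit_resample.
X_res = pd.DataFrame({"feat1": [1.0, 2.0, 3.0], "feat2": [0.1, 0.2, 0.3]})
y_res = pd.Series([0, 1, 0], name="Y")

df_balanced = pd.concat([X_res, y_res], axis=1)
print(df_balanced)
```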
@ranahamed-h8s3 ай бұрын
Well, if my classes have 3,000 and 17,000 samples, which is better when using ML models? And after this, is there a possibility the data is still biased toward a specific target?
@allanmarzuki55343 жыл бұрын
What are the side effects if we use synthetic data to handle the imbalance when building models? And if we have a lot of data, should we oversample or undersample? Thank you, Prof.
@minicorefacility3 жыл бұрын
Thank you so, so much. This is something I have been looking for. I struggled with this step in R for many months. I understand that randomly sampling the overrepresented class to mix with the underrepresented class just one time, and then moving on to model development, would create a poor model. Thus, my questions are: 1. How many times should I randomly sample? 2. Does the distribution of the overrepresented and underrepresented samples affect how many times we have to sample? Could you please share your thoughts?
@mukeshkund44653 жыл бұрын
I think there are some scenarios where we can use these techniques differently. Can you tell us the different scenarios in which we would perform oversampling, undersampling, or random sampling?
@sebastiancastro41263 жыл бұрын
I think that in this case oversampling would be the right approach due to the low number of compounds. Is this correct?
@DataProfessor3 жыл бұрын
Both are valid approaches; it is subjective and depends on the practitioner. Personally, I like to use undersampling.
@nikhilwagle84662 жыл бұрын
@@DataProfessor Undersampling should only be done when the data is in the millions or thousands; otherwise the accuracy will be reduced.
@caiyu538 Жыл бұрын
I have a question. I have a lot of negative samples, meaning the data are unlabeled, and their number is much bigger than the labeled data. I must include them. In this situation, how do I handle this kind of imbalance?
@tahabihaouline23333 жыл бұрын
Nice video. I just want to know how I can take this and get training data and testing data.
@DataProfessor3 жыл бұрын
Hi, once the data is balanced, you can take the balanced data and split it into train and test sets using the train_test_split function.
@aashishmalhotra2 жыл бұрын
Awesome, you explained every line of code; very helpful for a novice in understanding the notebook (ipynb).
@budisantosa98927 ай бұрын
Do we not need to split the data into test and train sets before balancing?
@donrachelteo94513 жыл бұрын
Thanks, Data Professor; may I also ask whether this method is applicable to imbalanced datasets in a text classification model? Thanks.
@DataProfessor3 жыл бұрын
Yes, this is applicable to imbalanced classes for a classification model.
@donrachelteo94513 жыл бұрын
@@DataProfessor thanks for your reply professor 👍🏻
@negusuworkugebrmichael385610 ай бұрын
Thank you Prof. Very helpful
@joeyng73663 жыл бұрын
Hi professor, I am trying to do binary classification on advertising conversions using a Markov chain, but I'm not sure how I should implement it. Do you have any suggestions on this?
@เท่กองสมบูรณ์3 жыл бұрын
At which step should we fix the imbalance: before splitting the data, or after splitting, on the train set only?
@aryasarkar16923 жыл бұрын
Hi! I have a question: should we prefer undersampling or oversampling?
@DataProfessor3 жыл бұрын
Hi, both are valid approaches, and it depends on the practitioner. Personally, I like to use undersampling.
@anandodayil6081 Жыл бұрын
How do we know whether we should use oversampling or undersampling?
@robinsonflores64823 жыл бұрын
Great video. Thanks for sharing!!
@DataProfessor3 жыл бұрын
It’s my pleasure, thank you 😊
@muhammaddanial45493 жыл бұрын
Hello sir, I calculated the descriptors of about 10k ligands, used recursive feature elimination, and then built SVM and KNN models, but my accuracy is low (0.82 and 0.83). How can I improve the accuracy? (By low I mean that a paper published on the same enzyme reports an accuracy of 0.88.) I tried correlation and dropping the negative columns, but it's not working. I need your help, please.
@DataProfessor3 жыл бұрын
Hi, there's no sure path to achieving high model performance. Several factors come into play (descriptor type, feature selection approach, learning algorithm, parameter optimization, data splitting, etc.), and exploring them is part of the research. I would recommend trying to address the different factors mentioned above. Hope this helps.
@kl88013 жыл бұрын
Thanks for the video but where is the notebook?
@DataProfessor3 жыл бұрын
Thanks for the reminder, the link is now in the video description.
@farahilyana99642 жыл бұрын
Prof, thank you for the nice video. But I want to ask: how can I show the balanced data after applying SMOTE?
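In case it helps, a minimal sketch of one way to inspect the class counts before and after SMOTE on toy data (an assumption that imbalanced-learn is installed; not from the video):

```python
# Sketch: check class counts before and after SMOTE (assumes imbalanced-learn).
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
print("before:", Counter(y))

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print("after:", Counter(y_res))
```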
@kaustavdas65502 жыл бұрын
What do we do if there are more than 2 classes which are imbalanced?
@gamingdudes...75752 жыл бұрын
Hi, how should I save this in the form of a CSV file?
@DataProfessor2 жыл бұрын
You can use the to_csv function from pandas.
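For example, a minimal sketch assuming the balanced data sits in a pandas DataFrame (df_balanced is a placeholder name, not from the video):

```python
# Sketch: write the balanced DataFrame to a CSV file with pandas.
import pandas as pd

df_balanced = pd.DataFrame({"feature": [1, 2, 3, 4], "class": [0, 0, 1, 1]})
df_balanced.to_csv("balanced_data.csv", index=False)  # index=False drops the row index column
```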
@gamingdudes...75752 жыл бұрын
@@DataProfessor When I handle my dataset using undersampling, my accuracy decreases by 20 percent. What should I do?
@karmanyakumar82952 жыл бұрын
The data is missing; the link for the input is not working.
@jairovillamizar65883 жыл бұрын
You are a great professor!! Thanks a lot
@DataProfessor3 жыл бұрын
Thank you! 😃
@rafael_l03213 жыл бұрын
Thank you for the explanation! What is your opinion on creating decoys, that is, artificial data derived from the least represented class, for balancing? Do you know if this functionality is available in some library?
@KhadejaAl-nashad Жыл бұрын
How do I approach balancing among more than two categories? For example, diabetic retinopathy classification has 5 categories to balance.
@cozyfootball Жыл бұрын
Helpful, thx
@hasankuluk6859 ай бұрын
There is a flaw: we should apply the methods to the train set, not all of the data.
@debatradas15973 жыл бұрын
Thank you so much
@DataProfessor3 жыл бұрын
You're welcome!
@datasciencezj33033 жыл бұрын
One thing that hasn't been talked about: why is imbalance an issue?
@DataProfessor3 жыл бұрын
Yes, you're right. Here goes. Imagine we have a dataset consisting of 1000 samples: 800 belong to class A and 200 belong to class B. As class A has 4 times more samples than class B, there is a high possibility that the model will be biased towards class A. To avoid such a scenario, we can either perform undersampling, where 800 is reduced to 200, or perform oversampling, where 200 is resampled up to 800 samples. In both cases, the classes end up balanced.
@datasciencezj33033 жыл бұрын
@@DataProfessor Will it be "biased"? Only if you use accuracy as the measure.
@datasciencezj33033 жыл бұрын
Use AUC to measure it instead.
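Tying the 800/200 example above to code, a minimal sketch of random undersampling and oversampling with sklearn.utils.resample on a hypothetical DataFrame (not the video's data):

```python
# Sketch: random under- and oversampling for an 800/200 split (assumes pandas + scikit-learn).
import pandas as pd
from sklearn.utils import resample

df = pd.DataFrame({
    "feature": range(1000),
    "class": ["A"] * 800 + ["B"] * 200,   # 800 majority, 200 minority
})
majority = df[df["class"] == "A"]
minority = df[df["class"] == "B"]

# Undersampling: shrink the majority class to 200 rows (sampling without replacement).
under = pd.concat([resample(majority, replace=False, n_samples=200, random_state=42), minority])

# Oversampling: grow the minority class to 800 rows (sampling with replacement).
over = pd.concat([majority, resample(minority, replace=True, n_samples=800, random_state=42)])

print(under["class"].value_counts())
print(over["class"].value_counts())
```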
@incentivee13 күн бұрын
The code is not available on GitHub.
@juanmiranda40542 жыл бұрын
I love you, strange sir, my model took off!
@samdaniazad30432 жыл бұрын
Oversampling.