I spent hours trying to figure this stuff out by reading chapter after chapter of Python books. Then I came here, and everything I was trying to figure out was explained in 9 minutes. This was IMMENSELY helpful, thanks!
@dataschool8 жыл бұрын
Awesome!! That's so great to hear!
@mea979058 жыл бұрын
I like your concise and precise videos. I really appreciate your efforts.
@dataschool8 жыл бұрын
Thanks, I appreciate your comment!
@reubenwyoung5 жыл бұрын
Thanks so much for this! You helped me combine 629 files and remove 250k duplicate rows! You're the man! *Subscribed*
@dataschool5 жыл бұрын
Great to hear! 😄
@hongyeegan7334 жыл бұрын
Wow! You were already teaching data science in 2014, when it wasn't even popular yet! Btw, your videos are really good: you speak slowly and clearly, which makes them easy to understand and follow. Kudos to you!
@dataschool4 жыл бұрын
Thanks very much for your kind words!
@jordyleffers92444 жыл бұрын
lol, just when I thought you wouldn't cover the exact subject I was looking for, there came the bonus! Thanks!
@minaha92132 жыл бұрын
Just found your channel, watched this as my first of your videos, and pressed subscribe!!! Your explanation of the idea as a whole is remarkable 😃 Thanks a lot.
@dataschool2 жыл бұрын
Thank you!
@cablemaster88744 жыл бұрын
Really, your teaching method is very good and your videos are full of knowledge. Thanks, Data School!
@dataschool4 жыл бұрын
You're very welcome!
@rashayahya5 жыл бұрын
I always find what I need in your channel.. and more... Thank you
@dataschool5 жыл бұрын
Great to hear!
@emanueleco73634 жыл бұрын
You are the greatest teacher in the world
@shashwatpaul33304 жыл бұрын
I have watched a lot of your videos, and I must say that the way you explain is really good. Just to inform you, I am new to programming, let alone Python, and I want to learn something new from you, so let me give you a brief. I am working on a dataset to predict app ratings from the Google Play Store. There is an attribute named "Rating" which has a lot of null values. I want to replace those null values using a median based on another attribute named "Reviews". But first I want to split the "Reviews" attribute into multiple categories: the 1st category would be reviews less than 100,000, the 2nd category reviews between 100,001 and 1,000,000, the 3rd category reviews between 1,000,001 and 5,000,000, and the 4th category anything more than 5,000,000. Although I tried a lot, I failed to create multiple categories. I was able to create only 2 categories using the below command: gps['Reviews Group'] = [1 if x
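For anyone attempting something similar: pd.cut is the usual tool for bucketing a numeric column into labelled ranges, and a grouped median can then fill the nulls. This is a minimal sketch, not from the video; the DataFrame and column names (gps, Reviews, Rating) are taken from the comment above and the data is made up:

    import pandas as pd
    import numpy as np

    gps = pd.DataFrame({'Reviews': [50_000, 60_000, 400_000, 500_000],
                        'Rating': [4.1, np.nan, 4.5, np.nan]})

    # cut Reviews into the four requested ranges and label them 1 through 4
    bins = [0, 100_000, 1_000_000, 5_000_000, np.inf]
    gps['Reviews Group'] = pd.cut(gps['Reviews'], bins=bins, labels=[1, 2, 3, 4])

    # fill missing Ratings with the median Rating of each Reviews Group
    gps['Rating'] = gps['Rating'].fillna(
        gps.groupby('Reviews Group')['Rating'].transform('median'))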
@MrTheAnthonyBielecki7 жыл бұрын
Exactly what I needed! Why not set up a Patreon so we can show some love?
@dataschool7 жыл бұрын
Thanks for the suggestion! I am planning to set one up soon, and will let you know when it's live :)
@dataschool6 жыл бұрын
I just launched my Patreon campaign! I'd love to have your support: www.patreon.com/dataschool/overview
@ranveersharma16664 жыл бұрын
Love you brother. You are changing so many lives, thank you.... The best teacher award goes to Data School.
@dataschool4 жыл бұрын
Thanks very much for your kind words!
@supa.scoopa10 ай бұрын
THANK YOU for the keep tip, that's exactly what I was looking for!
@dataschool10 ай бұрын
Great to hear!
@dhananjaykansal80975 жыл бұрын
I hadn't found much on duplicates before this. Thanks so much, sir. I can't thank you enough.
@dataschool5 жыл бұрын
You're welcome!
@Kristina_Tsoy2 жыл бұрын
Kevin your videos are super helpful! thank you!!!
@dataschool2 жыл бұрын
You're very welcome!
@balajibhaskarraokondhekar18233 жыл бұрын
You have done a very good job helping people understand DataFrames and making them easy to work with, especially for people who work in Excel. Best wishes from me.
@dataschool3 жыл бұрын
Thanks!
@oeb55425 жыл бұрын
Very much appreciated efforts. Thanks a million for sharing your Python knowledge with us. It has been a wonderful journey with your precise explanations. Keep up the hard work! Warm regards.
@dataschool5 жыл бұрын
Thanks very much! 😄
@cradleofrelaxation64732 жыл бұрын
This is so helpful! Pandas has the best duplicates handling. Better than spreadsheets and SQL.
@dataschool2 жыл бұрын
Thanks!
@tushargoyaliit6 жыл бұрын
I'm from Punjab and studying at IIT, but even so, I only really got comfortable with pandas from your videos. Thanks! Please provide everything you've covered in text format, like a written tutorial.
@dataschool6 жыл бұрын
Is this what you are looking for? nbviewer.jupyter.org/github/justmarkham/pandas-videos/blob/master/pandas.ipynb
@Beny1236 жыл бұрын
Thank you! Here is a way to extract the non-duplicate rows: df = df.loc[~df.A.duplicated(keep='first')].reset_index(drop=True)
@dataschool6 жыл бұрын
Thanks for sharing!
@jessicafletcher06102 жыл бұрын
OMG I WANT TO THANK YOU SOOOO MUCH 😊 I'd been stuck on this problem for days, and the way you explain it makes it so much easier than how I learned it in class. I was so happy not to see that error message 😂 Thank you
@dataschool2 жыл бұрын
You're so very welcome! Glad I could help!
@cyl10404 жыл бұрын
I can now remove the duplicate data from my CSV file~~~ Thank you. However, I suggest adding a bit more to this video: I think you could show the resulting DataFrame after the deletion. Such as: >> new_data = df.drop_duplicates(keep='first') >> new_data.head(24898) If you added that, I think this video would be even more complete~~~
@randyle25117 жыл бұрын
I like the way you explain things... it's very clear and precise. My problem is a little more complex: I want to remove an entire row when it meets the following conditions. If a row in the Latitude column has the same value as the previous row (-1) AND the same row in the Longitude column has the same value as the previous row, THEN remove the entire duplicated row. Basically we have to compare two consecutive ROWS across both COLUMNS, and IF both conditions are met, then remove the entire row. Let's say there are 15 rows with the same values (i.e., if Lat[1,1] == Lat[0,1] & Lon[1,2] == Lon[0,2] then remove, else skip; # Lat = Col1, Long = Col2) in both the Latitude and Longitude columns - then remove them all except keep one. Hope you got my point... :-). Looking forward to seeing your code.
@dataschool7 жыл бұрын
Glad you like the videos! It's not immediately obvious to me how I would approach this problem, but I think that the 'shift' function from pandas might be useful. Good luck! Sorry that I can't provide any code.
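For later readers, here is a minimal sketch of the shift-based idea mentioned above: flag a row as a consecutive duplicate only when BOTH coordinates match the previous row. The column names Latitude and Longitude and the toy values are assumed from the question:

    import pandas as pd

    df = pd.DataFrame({'Latitude':  [10.0, 10.0, 10.0, 12.5, 12.5],
                       'Longitude': [20.0, 20.0, 21.0, 30.0, 30.0]})

    # a row repeats the previous one only if BOTH columns match row -1
    same_as_previous = (df['Latitude'].eq(df['Latitude'].shift()) &
                        df['Longitude'].eq(df['Longitude'].shift()))

    # drop the repeats, keeping the first row of each consecutive run
    df = df.loc[~same_as_previous]
    print(df)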
@anthonygonsalvis1213 жыл бұрын
Very methodical explanation
@dataschool3 жыл бұрын
Thanks!
@KaiZergTV2 жыл бұрын
Thank you so much, you made my day. Finally I found the line of code that I really needed to finish my task :) (code line 17)
@dataschool2 жыл бұрын
Glad I could help!
@chandrapatibhanuprakashap18622 жыл бұрын
It helped me a lot. Can you explain how we can get the count of each duplicated value?
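One possible answer, not from the video: group on all columns and count the group sizes. A small sketch with made-up data:

    import pandas as pd

    df = pd.DataFrame({'age': [20, 20, 20, 35],
                       'zip_code': ['55455', '55455', '55455', '90210']})

    # size() counts how many times each unique row appears
    counts = df.groupby(list(df.columns)).size().reset_index(name='count')
    print(counts[counts['count'] > 1])   # only the rows that occur more than once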
@imad_uddin3 жыл бұрын
Thanks a lot. It was a great help. Much appreciated!
@dataschool3 жыл бұрын
You're welcome!
@deki90to3 жыл бұрын
HOW DO YOU KNOW WHAT I NEED? YOU ARE MY FAVORITE TEACHER FROM NOW ON
@dataschool3 жыл бұрын
Ha! Thank you! 😊
@narbigogul57236 жыл бұрын
That's exactly what I was looking for, great explanation, thanks for sharing!
@dataschool6 жыл бұрын
You're welcome!
@harneetlamba95126 жыл бұрын
Hi, in the above video at 1:12, the pandas DataFrame is displayed in tabular form, with all the variables separated by vertical lines. But in the latest Jupyter Notebook, we just get a single line below the variable names. Can we get the same display as before with the new Jupyter version?
@dataschool6 жыл бұрын
There's probably a way, but it's probably not easy. I'm sorry!
@rajoptional4 жыл бұрын
Amazing and thanks bro , the right place for data queries
@dataschool4 жыл бұрын
Happy to help
@mariusnorheim6 жыл бұрын
How can I remove duplicate rows based on 2 column values? I want to drop a row only if both column values match another row. E.g. I have one column with Country = [USA, USA, Canada, USA] and an income column with values = [1000, 900, 900, 900]. I only want to drop the duplicate where both the country (USA) AND the income (900) repeat an earlier row, while if one row has Country = Canada with income = 900 and a second row has USA with income = 900, I want to keep them both. Answers appreciated! Your videos are really helpful for learning pandas. Keep up the good work!
@dataschool6 жыл бұрын
Sorry, I'm not quite clear on what the rules are for when a row should be kept and when it should be dropped. Perhaps you could think of this task in terms of filtering the DataFrame, rather than using the drop duplicates functionality?
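If the goal was simply to treat two rows as duplicates only when both columns match, the subset parameter from the video's bonus section should cover it. A sketch using the example data from the question:

    import pandas as pd

    df = pd.DataFrame({'Country': ['USA', 'USA', 'Canada', 'USA'],
                       'Income':  [1000, 900, 900, 900]})

    # rows are duplicates only when BOTH Country and Income repeat an earlier row,
    # so Canada/900 is kept while the second USA/900 is dropped
    print(df.drop_duplicates(subset=['Country', 'Income'], keep='first'))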
@mariusnorheim6 жыл бұрын
Thanks for the reply! I managed to improve my code to avoid the duplicates in the first place. Keep up your great work with the videos, really helpful for improving my skills!
@dataschool6 жыл бұрын
Great to hear! :)
@mahdibouaziz53534 жыл бұрын
You're amazing, we need more videos on your channel.
@dataschool4 жыл бұрын
I do my best! I've got 20+ hours of additional videos available to Data School Insiders at various levels: www.patreon.com/dataschool
@jeffhale7396 жыл бұрын
Great video, Kevin! Super useful!
@dataschool6 жыл бұрын
Thanks Jeff! :)
@rationalindian54523 жыл бұрын
Brilliant video.
@dataschool3 жыл бұрын
Thanks!
@prakmyl4 жыл бұрын
Awesome videos Kevin. Thanks a lot for the knowledge share.
@dataschool4 жыл бұрын
Thanks Prakash!
@omgthisana109 ай бұрын
Very well explained, ty!
@dataschool9 ай бұрын
You're very welcome!
@robind9996 жыл бұрын
simple and useful. thanks Kevin.
@dataschool6 жыл бұрын
You're welcome!
@alishbakhan10842 жыл бұрын
Thank you so much 💕 Your videos are really amazing... Can you tell me how to read a CSV (without a header on the first line) and set the first row with non-null values as the header?
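Not from the video, but one possible sketch: read with header=None, find the first fully non-null row, and promote it to the header. The file name 'data.csv' is just a placeholder:

    import pandas as pd

    df = pd.read_csv('data.csv', header=None)   # 'data.csv' is a placeholder path

    # index of the first row with no missing values
    # (note: idxmax() returns the first row if NO row is complete, so check your data)
    header_idx = df.notna().all(axis=1).idxmax()

    df.columns = df.loc[header_idx]              # promote that row to the header
    df = df.loc[header_idx + 1:].reset_index(drop=True)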
@halildurmaz78273 жыл бұрын
Clean and informative !
@dataschool3 жыл бұрын
Thanks!
@cafdo4 жыл бұрын
Great video. This helped me tremendously. How would you go about finding duplicates "case insensitive" with a certain field?
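One common approach (not covered in the video) is to normalize the case of the field before checking for duplicates. The column names here are made up:

    import pandas as pd

    df = pd.DataFrame({'name': ['Alice', 'alice', 'Bob'],
                       'city': ['NYC', 'NYC', 'LA']})

    # lowercase the field first, then flag every row in a duplicate group
    print(df[df['name'].str.lower().duplicated(keep=False)])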
@goldensleeves4 жыл бұрын
At the end are you saying that "age" + "zip code" must TOGETHER be duplicates? Or are you saying "age" duplicates and "zip code" duplicates must remove their individual duplicates from their respective columns? Thanks
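For anyone else wondering: passing a subset treats the combination of those columns as the duplicate key, so both values together must repeat. A quick toy check:

    import pandas as pd

    df = pd.DataFrame({'age': [20, 20, 30],
                       'zip_code': ['111', '222', '111']})

    # each column repeats a value, but no age/zip_code COMBINATION repeats,
    # so nothing is flagged as a duplicate
    print(df.duplicated(subset=['age', 'zip_code']).sum())   # 0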
@lindafl25283 жыл бұрын
Hello, thank you for the video. I'm wondering if you could make some tutorials about API requests?
@dataschool3 жыл бұрын
Thanks for your suggestion!
@MrMukulpandey2 жыл бұрын
Would love to have more videos like this.
@dataschool2 жыл бұрын
Thanks for your support!
@somantalha48882 жыл бұрын
beneficial videos. ❤
@dataschool2 жыл бұрын
Thanks!
@deltatv93356 жыл бұрын
Hey buddy, you are amazing, and you remind me of Sheldon Cooper (BBT) because of the way you talk, and also because both of you are super smart. :-) One request: please cover outliers sometime. Thanks.
@dataschool6 жыл бұрын
Ha! Many people have commented something similar :) And, thanks for your topic suggestion!
@engineeringlife2775 Жыл бұрын
Bonus Question 7:55
@benogidan7 жыл бұрын
cheers for this :) will definitely consider purchasing the package
@dataschool7 жыл бұрын
You're very welcome! The pandas library is open source, so it's free!
@benogidan7 жыл бұрын
Sorry, I meant the course on your website ;)
@dataschool7 жыл бұрын
Awesome! Let me know if you have any questions about the course. More information is here: www.dataschool.io/learn/
@ravinduabeygunasekara8336 жыл бұрын
Great video! Btw, how do you know all this stuff? Do you take classes or read books?
@dataschool6 жыл бұрын
Work experience, reading documentation, trying things out, teaching, reading tutorials, etc.
@jamesdoone35168 жыл бұрын
Really great job. Thank you very much!!
@anantgosai88843 жыл бұрын
That was so accurate, thanks a lot genius!
@dataschool3 жыл бұрын
You're very welcome!
@peekayji7 жыл бұрын
Great! Very well explained.
@dataschool7 жыл бұрын
Thanks!
@abdulazizalsuayri49087 жыл бұрын
full of useful info. Thanx man
@dataschool7 жыл бұрын
You're very welcome! :)
@jatinshetty4 жыл бұрын
Yo! You are a superb teacher!
@dataschool4 жыл бұрын
Thank you!
@JoshKelson6 жыл бұрын
Trying to figure out how to replace values above/below a threshold with the mean or median. If I find values that are skewing the data from a column, but don't want to exclude the whole row and drop the row, I just want to replace the value in one of the columns with a mean/median value. Can't figure out how to do this! IE: I want to replace all values in column 'age' that are above 130 (erroneous data), with the mean age of all the other values in 'age' column.
@dataschool6 жыл бұрын
I'm sorry, I don't know the code for this off-hand. However, this would be a great question to ask during one of my monthly live webcasts with Data School Insiders: www.patreon.com/dataschool (join at the "Classroom Crew" level to participate)
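For later readers, a minimal sketch of one way to do this with boolean indexing; the threshold of 130 and the 'age' column are taken from the question:

    import pandas as pd

    df = pd.DataFrame({'age': [25, 40, 150, 33, 200]})

    # mean of the plausible ages only, excluding the erroneous values
    mean_age = df.loc[df['age'] <= 130, 'age'].mean()

    # overwrite the out-of-range values with that mean
    df.loc[df['age'] > 130, 'age'] = mean_age
    print(df)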
@arpitmittal78654 жыл бұрын
Very useful videos. Can you please tell me how to find duplicates of just one specific row?
@dataschool4 жыл бұрын
Sorry, I don't fully understand. Good luck!
@da_ta6 жыл бұрын
Thanks for the tips and bonus ideas!
@dataschool6 жыл бұрын
You're welcome!
@syyamnoor97926 жыл бұрын
you are a hero...
@dataschool6 жыл бұрын
That's very kind of you! :)
@mansoormujawar12797 жыл бұрын
Because of your quality pandas series I started following you. Regarding duplicates: in my use case, instead of dropping duplicate rows, I would like to keep the 1st instance and just remove the other duplicate values from a specific column, so the shape stays the same after removing the duplicate values from that column. I'd really appreciate it if you have some time to answer this, thanks.
@dataschool7 жыл бұрын
Glad you like the series! I'm not sure I understand your question - perhaps the documentation for drop_duplicates will help? pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop_duplicates.html
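If the goal was to keep the first occurrence and blank out the later repeats within one column, while leaving the shape unchanged, mask() is one possible sketch (the column name is made up):

    import pandas as pd

    df = pd.DataFrame({'city': ['NYC', 'NYC', 'LA', 'LA', 'LA']})

    # replace every repeat after the first occurrence with NaN; the shape is unchanged
    df['city'] = df['city'].mask(df['city'].duplicated(keep='first'))
    print(df)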
@emilyyyjw5 жыл бұрын
Hi, I am wondering whether you could identify an issue that I am having while cleaning a dataset with the help of your tutorials. I will post the commands that I have used below:
df["is_duplicate"] = df.duplicated()  # make a new column marking whether each row is a duplicate or not
df.is_duplicate.value_counts()  # -> False 25804, True 1591
df.drop_duplicates(keep='first', inplace=True)  # attempt to drop all duplicates, other than the first instance
df.is_duplicate.value_counts()  # -> False 25804, True 728
I am struggling to identify why there are still some duplicates that are marked 'True'?
Kind regards,
@dataschool5 жыл бұрын
That's an excellent question! The problem is that by adding a new column called "is_duplicate", you actually reduce the number of rows which are duplicates of one another! Instead of adding that column, you should first check the number of duplicates with df.duplicated().sum(), then drop the duplicates, then check the number of duplicates again. Hope that helps!
@dandixon94668 жыл бұрын
Great work man!
@dataschool8 жыл бұрын
Thanks!
@ItsWithinYou3 жыл бұрын
If I have a DataFrame with a million rows and 15 columns, how do I figure out whether any columns in my DataFrame have mixed data types?
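One way to check, not from the video: map each value to its Python type and count the distinct types per column. A small sketch:

    import pandas as pd

    df = pd.DataFrame({'a': [1, 'two', 3.0], 'b': [1, 2, 3]})

    # number of distinct Python types in each column; more than one means mixed types
    type_counts = df.apply(lambda col: col.map(type).nunique())
    print(type_counts[type_counts > 1])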
@DimasAnggaFM5 жыл бұрын
great video!!
@dataschool5 жыл бұрын
Thanks!
@ayatbadayatbad76885 жыл бұрын
Thank you for this useful tutorial. Quick question: how do you check whether a value in column A is present in column B or not, not necessarily in the same row? It is like the same thing that the VLOOKUP function does in Excel. Many thanks for your feedback!
@dataschool5 жыл бұрын
I'm not sure I understand your question, I'm sorry!
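If the question was whether each value in column A appears anywhere in column B, isin() may be the sketch that was wanted; the column names are taken from the question:

    import pandas as pd

    df = pd.DataFrame({'A': ['x', 'y', 'z'],
                       'B': ['y', 'q', 'x']})

    # True where the value in column A appears anywhere in column B (any row)
    df['A_in_B'] = df['A'].isin(df['B'])
    print(df)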
@sujaysonar84255 жыл бұрын
Thanks for the video
@dataschool5 жыл бұрын
You're welcome!
@brianwaweru97644 жыл бұрын
Wait Kevin, keep='first' means the rows marked as duplicates are the ones towards the bottom, meaning they have a higher index. keep='last' means...?? Oh man, I'm getting mixed up. Could someone please explain it to me? Kevin, please?
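For anyone stuck on the same point, a tiny demonstration of the three options:

    import pandas as pd

    s = pd.Series([10, 10, 20])

    print(s.duplicated(keep='first').tolist())  # [False, True, False]: later copies flagged
    print(s.duplicated(keep='last').tolist())   # [True, False, False]: earlier copies flagged
    print(s.duplicated(keep=False).tolist())    # [True, True, False]: every copy flagged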
@killaboody78895 жыл бұрын
You are amazing. Thank you ever so much.
@dataschool5 жыл бұрын
You're very welcome!
@chandramohanbettadpura49935 жыл бұрын
I have some missing dates in my dataset and want to add the missing dates to the dataset. I used isnull() to track these dates, but I don't know how to add those dates into my dataset. Can you please help? Thanks
@dataschool5 жыл бұрын
You might be able to use fillna and specify a method: pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html
@mmarva35973 жыл бұрын
Thank you for this content! I have a question: how can we handle quasi-redundant values in different columns? (Imagine two different columns each containing 80% similar values.) Thanks a lot
@dataschool3 жыл бұрын
When you say "handle", what is your goal? If you want to identify close matches, you can do what is called "fuzzy matching". Here's an example: pbpython.com/record-linking.html Hope that helps!
@mmarva35973 жыл бұрын
@@dataschool Thanks a lot for the reply. Let me explain my question: I have two features named categories (milk, snack, pasta, oil, etc.) and categories_en (en:milk, en:snack, en:pasta). My goal is to keep only one feature, since both features share the same information. It was suggested that running a chi-square test would help me decide which feature to keep, but that seems silly to me :( (I have almost 2 million records)
@dataschool3 жыл бұрын
It probably doesn't matter which feature you keep, if they contain roughly the same information.
@zma3141253 жыл бұрын
Thank you!
@dataschool3 жыл бұрын
You're welcome!
@ajithtolroy54416 жыл бұрын
This is what I want, thanks for sharing :)
@dataschool6 жыл бұрын
Great!
@reazahmed70043 жыл бұрын
How do I access the IPython/Jupyter notebook link? It is not available in the GitHub repository.
@dataschool3 жыл бұрын
Is this what you were looking for? nbviewer.jupyter.org/github/justmarkham/pandas-videos/blob/master/pandas.ipynb
@antonyjoy54944 жыл бұрын
This covers the case of complete duplicates. So what should we do when we have to deal with incomplete (partial) duplicates? E.g. age, gender and occupation are the same but zip is different. Could you also make a video on that, please?
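The subset parameter from the video's bonus section handles exactly this: list only the columns that must match. A sketch with the columns named in the comment:

    import pandas as pd

    df = pd.DataFrame({'age': [25, 25], 'gender': ['F', 'F'],
                       'occupation': ['doctor', 'doctor'],
                       'zip_code': ['11111', '22222']})

    # rows count as duplicates when everything EXCEPT zip_code matches
    print(df.drop_duplicates(subset=['age', 'gender', 'occupation'], keep='first'))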
@oasisgod14213 жыл бұрын
Great video. But I'd just like to find the duplicates in one column, then go to another column and find its duplicates, then another, and keep only one row with certain information.
@moremirinplease3 жыл бұрын
i love you, sir.
@dataschool3 жыл бұрын
😊
@asadghnaim23323 жыл бұрын
When I use the parameter keep=False, I get a number of rows that is less than the keep='first' and keep='last' counts combined. What is the reason for that?
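A likely reason: for a group of n identical rows, keep='first' and keep='last' each flag n-1 rows, while keep=False flags all n, so adding the first and last counts double-counts the middle rows of any group larger than two. A tiny check:

    import pandas as pd

    s = pd.Series([5, 5, 5])   # one group of three identical values

    print(s.duplicated(keep='first').sum())  # 2
    print(s.duplicated(keep='last').sum())   # 2
    print(s.duplicated(keep=False).sum())    # 3, which is less than 2 + 2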
@VNTHOTA5 жыл бұрын
You could have used the sort_values option: users.loc[users.duplicated(keep=False)].sort_values(by='age')
@dataschool5 жыл бұрын
Thanks for your suggestion!
@zhaoqilong19948 жыл бұрын
Is there any simple tutorial on regular expressions in Python available?
@dataschool8 жыл бұрын
For learning regular expressions, I like these two resources: developers.google.com/edu/python/regular-expressions www.pythonlearn.com/html-270/book012.html
@sherlocksu11318 жыл бұрын
Hi, when you mention "inplace" in the video, I am happy that pandas has this parameter to experiment with, but a problem arises: should I remember all the methods that have the inplace parameter, and remember which methods affect the original DataFrame, in case I end up using a DataFrame that has already changed when doing a calculation? That is a hugh job, remembering which methods have an 'inplace' parameter and which do not, isn't it..... TOT
@sherlocksu11318 жыл бұрын
That is a huge
@dataschool8 жыл бұрын
The 'inplace' parameter is just for convenience. I do recommend trying to memorize when that parameter is available. But if you forget, that's fine, because you can always write code like this: ufo = ufo.drop('Colors Reported', axis=1) ...instead of this: ufo.drop('Colors Reported', axis=1, inplace=True)
@sherlocksu11318 жыл бұрын
Are all inplace arguments set to False by default? My problem is this: I worry that sometimes a method changes the original DataFrame (because it has an inplace parameter) and sometimes it does not, so I get confused about when the original DataFrame is affected, since the wrong judgement might lead to a bad conclusion.
@dataschool8 жыл бұрын
I think that 'inplace' is always False (by default) for all pandas functions.
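A quick demonstration of that default: without inplace=True (or assignment), the original DataFrame is untouched:

    import pandas as pd

    df = pd.DataFrame({'a': [1, 1, 2]})

    df.drop_duplicates()       # returns a new DataFrame; df itself is unchanged
    print(df.shape)            # (3, 1)

    df = df.drop_duplicates()  # assign the result back to keep the change
    print(df.shape)            # (2, 1)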
@SahibzadaIrfanUllahNaqshbandi7 жыл бұрын
Thanks for the good channel, I like it very much. I have a query: I am working on tweets, and I have to remove duplicate tweets as well as tweets which differ by at most one word. I can do the first part. Will you please guide me on how I can do the second part?? Thanks
@dataschool7 жыл бұрын
That's probably beyond the scope of what you can do with pandas. Perhaps you can take advantage of a fuzzy string matching library.
@SahibzadaIrfanUllahNaqshbandi7 жыл бұрын
Thanks...I will look into it.
@asifsohail59003 жыл бұрын
How can we efficiently find near-duplicates in a dataset?
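There is no single pandas answer, but one standard-library starting point is difflib, which scores string similarity between 0 and 1; you choose the threshold that counts as a "near duplicate". Dedicated fuzzy-matching libraries scale better for large datasets. A minimal sketch:

    from difflib import SequenceMatcher

    a = "pandas makes data cleaning easy"
    b = "pandas makes data cleaning very easy"

    # ratio() returns a similarity score between 0 and 1; near-duplicates score high
    print(SequenceMatcher(None, a, b).ratio())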
@srincrivel16 жыл бұрын
you're doing god's work son!
@dataschool6 жыл бұрын
Thanks!
@muralikrishnapolipallivenk25727 жыл бұрын
Hi, I am a big fan of your work, and I have learned a lot from the videos. Can you please help me with how I can do Excel-style VLOOKUPs in pandas?
@dataschool7 жыл бұрын
This might help: medium.com/importexcel/common-excel-task-in-python-vlookup-with-pandas-merge-c99d4e108988 Good luck!
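Judging by the URL, that article uses merge: a left merge behaves like VLOOKUP, looking up each key in a second table. The table and column names below are made up:

    import pandas as pd

    orders = pd.DataFrame({'product_id': [1, 2, 1]})
    products = pd.DataFrame({'product_id': [1, 2],
                             'name': ['apple', 'banana']})

    # look up each product_id from orders in the products table
    print(orders.merge(products, on='product_id', how='left'))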
@artistz18317 жыл бұрын
Hey Kevin, I am confused about the drop duplicates here: the number of duplicated age and zip_code rows is 14, but after you drop the duplicates the shape is 927. The total shape is 943, so shouldn't the correct shape be 943 - 14 = 929? Thanks a lot for your help!!!
@dataschool7 жыл бұрын
I disagree with your statement "the number of duplicated age and zipcode is 14"... could you explain how you came to that conclusion? Thanks!
@sagarbhadani19326 жыл бұрын
Hi, I need help. Suppose we have a transactions table where transactions can share items in the Item column. How do I write code to find which transactions contain at least one Coffee?
Transaction  Item
1            Tea
2            Cookies
2            Coffee
3            cookies
4            Bread
4            Cookies
4            Coffee
@dataschool6 жыл бұрын
I'm not sure off-hand, good luck!
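For later readers, one possible sketch using the data from the question: find the transaction ids that have a Coffee row, then pull back every row belonging to those transactions:

    import pandas as pd

    df = pd.DataFrame({'Transaction': [1, 2, 2, 3, 4, 4, 4],
                       'Item': ['Tea', 'Cookies', 'Coffee', 'cookies',
                                'Bread', 'Cookies', 'Coffee']})

    # transactions that contain coffee (case-insensitive), then all of their rows
    coffee_ids = df.loc[df['Item'].str.lower() == 'coffee', 'Transaction'].unique()
    print(df[df['Transaction'].isin(coffee_ids)])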
@grijeshmnit5 жыл бұрын
💯+ like. Thank you very much sir.
@dataschool5 жыл бұрын
Thank you!
@prakmyl4 жыл бұрын
I get an error when I run users.drop_duplicates(subset=['age','zip_code']).shape - the error is "'bool' object is not callable". I even get the same error if I run users.duplicated().sum()
@dataschool4 жыл бұрын
Remove the .shape, and see what the results look like. Also, compare your code against mine in this notebook: nbviewer.jupyter.org/github/justmarkham/pandas-videos/blob/master/pandas.ipynb
@krzysztofszeremeta11256 жыл бұрын
What is the best way to compare data from two files (with the same schema)?
@dataschool6 жыл бұрын
I don't know if there's one right way to do this... it depends on the details. Sorry I can't give you a better answer!
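One generic sketch for "which rows appear in only one of the two files" is an outer merge with indicator=True; the file names are placeholders:

    import pandas as pd

    df1 = pd.read_csv('file1.csv')   # placeholder file names
    df2 = pd.read_csv('file2.csv')

    # rows found in only one of the two files (merging on all shared columns)
    diff = df1.merge(df2, how='outer', indicator=True)
    print(diff[diff['_merge'] != 'both'])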
@oszi70585 жыл бұрын
You are amazing!
@dataschool5 жыл бұрын
Thank you!
@subuktageenshaikh20417 жыл бұрын
Hi, I have a doubt: how do I remove duplicates from rows which are text or sentences, like in the RCV1 data set?
@dataschool7 жыл бұрын
The same process showed in the video will work for text data, as long as the duplicates are exact matches. Does that answer your question?
@KimmoHintikka8 жыл бұрын
I had a weird error with this one. Setting the index column with index_col='user_id' does not work for me; it raises a KeyError: 'user_id'. Instead I had to run users = pd.read_table('bit.ly/movieusers', sep='|', header=None, names=user_cols) first and then users.set_index('user_id') for this tutorial to work.
@dataschool8 жыл бұрын
Interesting! I'm not sure why that would be. But thanks for mentioning the workaround!
@Animesh190075 жыл бұрын
How do I keep the rows that contain null values in any column and remove the complete rows?
@dataschool5 жыл бұрын
Does this help? kzbin.info/www/bejne/nHSwo4KVi9-Ygpo
@captainamericary87973 жыл бұрын
thank you ...!!!
@dataschool3 жыл бұрын
You're welcome!
@harshitagrwal9975 Жыл бұрын
The user IDs are not the same, so how can the rows be duplicates?
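Probably because duplicated() looks only at the columns, not the index, and user_id is the index in the video. A toy check:

    import pandas as pd

    df = pd.DataFrame({'age': [20, 20], 'zip_code': ['55455', '55455']},
                      index=[1, 2])   # different index labels, like different user ids

    # the index is ignored, so these two rows still count as duplicates
    print(df.duplicated().tolist())   # [False, True]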
@maheshaknur7 жыл бұрын
Thanks for this video :) How can we remove duplicates, delete columns, delete rows and insert new columns using a Python script?
@dataschool7 жыл бұрын
Glad you liked the video! This video shows how to remove rows or columns: kzbin.info/www/bejne/nZ-4fJ6JbptnjbM Does that help to answer your question?
@Drivebyeasy7 жыл бұрын
Hello, I want to understand the concept of resampling. Please help.
@dataschool7 жыл бұрын
I'm sorry, I don't have any resources to offer you. Good luck!
@Ishkatan2 жыл бұрын
Good lesson, but the datatype has to match. I found I had to process my pandas tables with .astype(str) before this worked.
@hiericzhu7 жыл бұрын
Hi, I have a question here. I want to mark only consecutive duplicate values. For example, with [1,1,1,0,2,3,2,4,2], my expected result is [True,True,True,False,False,False,False,...]. But duplicated(keep=False) returns [True,True,True,False,True,False,True,False,True]: the function treats the '2' in the 2,x,2,y,2,z,2 sequence as duplicated, but that is not what I want. How do I avoid that? I just want to mark the 1,1,1 as True. Thanks.
@dataschool7 жыл бұрын
How about just using code like this: df.columnname == 1 Does that help?
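For anyone with the same need, a shift-based sketch (not from the video) that flags only consecutive runs of equal values, using the example from the question:

    import pandas as pd

    s = pd.Series([1, 1, 1, 0, 2, 3, 2, 4, 2])

    # a value belongs to a consecutive run if it equals its neighbor on either side;
    # isolated repeats elsewhere in the Series are not flagged
    in_run = s.eq(s.shift()) | s.eq(s.shift(-1))
    print(in_run.tolist())   # [True, True, True, False, False, False, False, False, False]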
@sasa48406 жыл бұрын
Thanks. My question is: how can we sort by month names?
@dataschool6 жыл бұрын
This video might be helpful to you: kzbin.info/www/bejne/r3TKe3qpnJWLl5Y
@ashishacharya84278 жыл бұрын
How do I replace similar duplicate values with just one of the values? How would you solve that?
@dataschool8 жыл бұрын
I think the process would depend a lot on the particular details of the problem you are trying to solve.