With this code all 9 of my test cases passed, though it took me a little time to work out.

if __name__ == '__main__':
    n = int(input())
    arr = map(int, input().split())
    first_highest = -9999
    second_highest = -9999
    for number in arr:
        if number > first_highest:
            second_highest = first_highest
            first_highest = number
        elif (number > second_highest) and (first_highest != number):
            second_highest = number
    print(second_highest)
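An alternative sketch that deduplicates first and then indexes the second-largest distinct value (input handling omitted; it returns None when there is no second distinct number):

```python
def second_highest(nums):
    # Sort the distinct values in descending order; index 1 is the second highest.
    distinct = sorted(set(nums), reverse=True)
    return distinct[1] if len(distinct) > 1 else None

print(second_highest([2, 3, 6, 6, 5]))  # 5
```

The single-pass version in the comment above is O(n) versus O(n log n) here, but this one reads more directly.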
@BeingSam7 3 days ago

I've written calculator code; I ran a few test cases in Jupyter and it's giving correct results. Manish, please take a look and suggest if anything is wrong or how it could be made better. I googled and found clear_output because the output screen was getting cluttered. And once again, thanks a lot, Manish.

from loguru import logger
from IPython.display import clear_output

user_input = int(input("Please enter the first number "))
operation = input("Please enter one of the mathematical operations i.e. +, -, *, /, = ")
result = user_input
while operation != '=':
    if operation not in ['+', '-', '*', '/']:
        clear_output()
        operation = input('Please enter the correct mathematical operation i.e. +, -, *, / or press = for result ')
        continue
    user_input_next = int(input("Please enter the next number "))
    if operation == '+':
        result = result + user_input_next
    elif operation == '-':
        result = result - user_input_next
    elif operation == '*':
        result = result * user_input_next
    elif operation == '/':
        if user_input_next == 0:
            logger.info("Can't divide by zero (0)")
            continue
        result = result / user_input_next
    clear_output()
    operation = input("Please enter the next mathematical operation i.e. +, -, *, / or press = for result ")
logger.info(f"The result is {result}")
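One way to shrink the if/elif chain is an operator lookup table; a minimal sketch (the hard-coded list of operations below just stands in for the interactive input loop):

```python
import operator

# Map each symbol to its binary function; a dict lookup replaces the if/elif chain.
OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '/': operator.truediv}

result = 10
for symbol, nxt in [('+', 5), ('*', 2)]:  # stands in for user input
    result = OPS[symbol](result, nxt)
print(result)  # 30
```

The zero-division check and the clear_output calls from the original loop would still be needed around the lookup.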
@BeingSam7 3 days ago

Manish, sometimes we write "from xyz import abc", but sometimes we skip "from" and just write "import". How does Python know where to import from in each case, what is the difference, and when should we use which?
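Both forms resolve the module the same way (first the sys.modules cache, then the directories on sys.path: the script's folder, installed packages, the standard library); they differ only in which names get bound in your namespace. A small illustration:

```python
import math              # binds the module object; members accessed as math.sqrt
from math import sqrt    # binds only the name sqrt into the current namespace

print(math.sqrt(16))     # 4.0
print(sqrt(16))          # 4.0
```

Plain `import` keeps the origin visible at every call site; `from ... import` is convenient when one name is used many times, at the cost of hiding where it came from.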
@PragteeTathe 3 days ago

number = int(input("enter the number"))
if number % 2 == 0:
    print("number is even")
else:
    print("number is odd")
@BeingSam7 3 days ago

count = 0
for i in range(len(data.get('MAINDATA'))):
    for dict1 in data.get('MAINDATA')[i].get('HeaderFields'):
        if 'FieldTypeName' in dict1:
            count += 1
print(count)

output - 38

Manish, how much can this be optimized? Please let me know.
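One tightening pass: iterate directly instead of indexing by range, and let sum() do the counting (the tiny `data` sample below is hypothetical, just to make the snippet runnable):

```python
# Hypothetical sample with the same shape as the data in the comment above.
data = {'MAINDATA': [
    {'HeaderFields': [{'FieldTypeName': 'a'}, {'SomethingElse': 1}]},
    {'HeaderFields': [{'FieldTypeName': 'b'}]},
]}

# True counts as 1, so sum() tallies the matching fields in one expression.
count = sum('FieldTypeName' in field
            for item in data['MAINDATA']
            for field in item['HeaderFields'])
print(count)  # 2 for this sample
```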
@ajaypatil1881 3 days ago

Please make a playlist on SQL.
@mansigoyal4796 4 days ago
There is a correction - Snowflake schema takes more storage than star schema
@manish_kumar_1 4 days ago

I don't agree. Could you please tell me why you think so, or share any blog post that discusses this in detail?
@RohitSingh-we9mo 4 days ago

@Manish Kumar You had told us that coalesce reduces the number of partitions, but then why are partitions being combined dynamically here in dynamic coalescing? Could you explain this?
@uditkapadia7104 4 days ago

Does partitioning happen automatically?
@ajaypatil1881 4 days ago

Bhaiya, please make a playlist on SQL.
@shashank.rajput 5 days ago

A genuinely good person; he always gets straight to the point... Wow! ❤
@dineshpandey5008 5 days ago

I have watched so many videos on Spark but never found such an explanation... amazing... You have very deep knowledge of Spark... salute!
@gaurav_singh1017 5 days ago

Hello Manish, I am presently working as a Data Engineer at Hyland, and it is your theory and practical playlists that I've been following to get up and running with Spark. Your content is very useful not just to me but to many professionals out there. Please do keep up the good work; although some may be casually watching, there are plenty of us who are seriously following through for the long term. Your work is making a real difference, and I really value your commitment. Thank you for all that you do, and please continue the great work! Best regards, Gaurav.
@siddhantmishra6581 5 days ago

Hi Manish, thanks for the videos. I have one question: how is the DataFrame immutable if I can make changes to it and save them back into the same DataFrame? For example, df = df.dropDuplicates()
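`df = df.dropDuplicates()` never modifies the existing DataFrame; it builds a new one and rebinds the name `df` to it. Python's immutable strings behave the same way, which makes a runnable analogy (an analogy only, not Spark API):

```python
s = "spark"
t = s.upper()   # upper() returns a NEW string; nothing mutates s
print(s, t)     # spark SPARK  -- the original object is untouched

s = s.upper()   # this rebinds the NAME s to the new object
print(s)        # SPARK  -- the old "spark" object itself was never changed
```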
@abhijeetsinghbatra8143 5 days ago

x = {'A': 10, 'B': 20, 'C': 20}
absent = {'A': 2, 'B': 3}  # renamed from `abs`, which shadows the built-in abs()
tot = {}
total = 50
for key, value in x.items():
    if key in absent:
        days = total - absent[key]
        tot[key] = days * x[key]
    else:
        tot[key] = total * x[key]
print(tot)
@gudiatoka 5 days ago

Bhaiya, please add Structured Streaming, Delta tables, Unity Catalog / Hive metastore, Delta Live Tables, and Workflows. You teach really well, in a sequential manner; from everyone else's videos nothing becomes clear.
@PARESH_RANJAN_ROUT 5 days ago

If you can do it, Manish bhai, then I can do it too.
@L-Surya 5 days ago

Bro, it would be better if the session were in English.
@PARESH_RANJAN_ROUT 6 days ago
Great Bhai
@AKSHAY28ful 6 days ago
The best teacher❤
@akhiladevangamath1277 6 days ago

What's the years of experience they asked for in this interview?

@manish_kumar_1 6 days ago

3
@Anupriya-d6e 5 days ago

@manish_kumar_1 Could you please tell how someone with 2 years of experience should prepare?
@Learner_attitude 6 days ago

Hello Manish sir, very nice explanation. Thanks a lot for making these videos.
@welcomefoodies6901 6 days ago

Hi Manish bhaiya, here 4 actions were triggered: read, inferSchema, sum, show.
@abhishekrajput4012 7 days ago
Thank you Manish Bhaiya
@fashionate6527 7 days ago

🎉 Congratulations!
@SajidKhanWORLDWIDE305 7 days ago

Hi Manish bhai, one question: when we create the schema for the corrupt record at @11:00, I typed "corrupt_record" instead of "_corrupt_record". Because of this missing "_" prefix, the corrupt records were displayed to me in a different format. Can you please explain why the underscore mattered here, even though it was passed in quotes as a column name? Anyone else who knows the reason can pitch in too.

Schema created by you:

emp_schema = StructType([
    StructField("id", IntegerType(), True),
    StructField("name", StringType(), True),
    StructField("age", IntegerType(), True),
    StructField("salary", IntegerType(), True),
    StructField("address", StringType(), True),
    StructField("nominee", StringType(), True),
    StructField("_corrupt_record", StringType(), True)
])

Output:

+---+--------+---+------+------------+--------+-------------------------------------------+
|id |name    |age|salary|address     |nominee |_corrupt_record                            |
+---+--------+---+------+------------+--------+-------------------------------------------+
|1  |Manish  |26 |75000 |bihar       |nominee1|null                                       |
|2  |Nikita  |23 |100000|uttarpradesh|nominee2|null                                       |
|3  |Pritam  |22 |150000|Bangalore   |India   |3,Pritam,22,150000,Bangalore,India,nominee3|
|4  |Prantosh|17 |200000|Kolkata     |India   |4,Prantosh,17,200000,Kolkata,India,nominee4|
|5  |Vikash  |31 |300000|null        |nominee5|null                                       |
+---+--------+---+------+------------+--------+-------------------------------------------+

Schema created by me:

emp_schema = StructType([
    StructField("id", IntegerType(), True),
    StructField("name", StringType(), True),
    StructField("age", IntegerType(), True),
    StructField("salary", IntegerType(), True),
    StructField("address", StringType(), True),
    StructField("nominee", StringType(), True),
    StructField("corrupt_record", StringType(), True)
])

Output:

+---+--------+---+------+------------+--------+--------------+
|id |name    |age|salary|address     |nominee |corrupt_record|
+---+--------+---+------+------------+--------+--------------+
|1  |Manish  |26 |75000 |bihar       |nominee1|null          |
|2  |Nikita  |23 |100000|uttarpradesh|nominee2|null          |
|3  |Pritam  |22 |150000|Bangalore   |India   |nominee3      |
|4  |Prantosh|17 |200000|Kolkata     |India   |nominee4      |
|5  |Vikash  |31 |300000|null        |nominee5|null          |
+---+--------+---+------+------------+--------+--------------+

Enjoying this playlist ❤
Thanks, Sajid
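On why the underscore matters: in PERMISSIVE mode Spark routes rows it cannot parse into the schema column whose name matches the `columnNameOfCorruptRecord` setting, which defaults to "_corrupt_record". A column named "corrupt_record" is just an ordinary seventh data column, so each row's seventh CSV field (nominee3, nominee4, ...) lands there instead. Either keep the default name, or point the option at your custom name. A sketch of the reader configuration (the `spark` session, `emp_schema`, and file name are assumed from the video, not runnable on its own):

```python
df = (spark.read
      .format("csv")
      .option("header", "true")
      .option("mode", "PERMISSIVE")
      # Tell Spark to treat our custom column name as the corrupt-record sink.
      .option("columnNameOfCorruptRecord", "corrupt_record")
      .schema(emp_schema)
      .load("employee.csv"))
```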
@devmaharaj4640 7 days ago

Hello Manish sir, do you have any course that teaches end-to-end real-world projects? Kindly let me know.
@manish_kumar_1 7 days ago

No devmaharaj, I don't offer any course; I only upload videos on YouTube.
@sachin_gupt 7 days ago

Bhai, Databricks keeps showing "Your session has expired, please authenticate again." Any solution?
@trainsam22 8 days ago

I am from the USA. I am a Senior Manager of Data Engineering. You are an amazing teacher; keep it up.
@deveshkumar3745 8 days ago

Today I found this gem of a video. Awesome explanation of each question. Please keep making videos. By the way, I am also from Bihar and worked at ZS Associates for around 2 years 😊
@laxmanprajapati3945 8 days ago

lis = []
for key in data['MAINDATA']:
    for eles in key['HeaderFields']:
        lis.append(eles['FieldTypeName'])
print(len(lis))

answer: 38
@PARESH_RANJAN_ROUT 8 days ago

Thank you, bhai.
@laxmanprajapati3945 8 days ago

labour_cost = {"Mahesh": 500, "Ramesh": 400, "Mithilesh": 400,
               "Suresh": 300, "Jagmohan": 1000, "Rampyare": 800}
total_working_day = 50
labour_absent_day = {"Mahesh": 3, "Jagmohan": 7}
total_bill = {}
for key1, value1 in labour_cost.items():
    for key2, value2 in labour_absent_day.items():
        if key1 == key2:
            total_bill[key2] = (total_working_day - value2) * value1
            break
    else:
        total_bill[key1] = total_working_day * value1
print("Total Labour Cost:", sum(total_bill.values()))
print("Individual Cost:")
for key in total_bill:
    print(key, total_bill[key])
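The pairwise inner loop can be dropped entirely: `dict.get(name, 0)` supplies 0 absences for anyone missing from `labour_absent_day`, so a single comprehension covers both cases. A sketch with the same data:

```python
labour_cost = {"Mahesh": 500, "Ramesh": 400, "Mithilesh": 400,
               "Suresh": 300, "Jagmohan": 1000, "Rampyare": 800}
total_working_day = 50
labour_absent_day = {"Mahesh": 3, "Jagmohan": 7}

# dict.get defaults to 0 absences, so no membership test or inner loop is needed.
total_bill = {name: (total_working_day - labour_absent_day.get(name, 0)) * rate
              for name, rate in labour_cost.items()}

print("Total Labour Cost:", sum(total_bill.values()))  # 161500
```

This also turns the O(n*m) nested scan into a single O(n) pass with O(1) lookups.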