Web Scraping Multiple Pages with Python and Selenium + CSV File

26,478 views

Neroplus IT

1 day ago

Comments: 34
@ahmetturk5675 · 2 months ago
Hey! I was looking for something like this, thanks a lot!
@jawadkazim6610 · 3 years ago
You're so underrated, you should have millions of subscribers 😌
@734833 · 1 year ago
Finally, what I was searching for.
@manpreetkaur-dk5mm · 2 years ago
Thank you sir, this tutorial helped me so much.
@athakur33 · 3 months ago
Hi Alex, could you please make a tutorial on LinkedIn profile scraping that can scrape name, job title, location, education, and experience? It would be much appreciated.
@marinuss6004 · 3 years ago
Underrated content creator LX_schlee
@TheMilind222 · 2 years ago
You're amazing, love you sir.
@dawidlorek363 · 3 years ago
Fantastic tutorial! ❤
@informationtoallj · 1 year ago
Thanks again
@temblux2449 · 2 years ago
Sir, I have completed two of your courses, Web Scraping through APIs and Selenium. I'm a fan of your teaching; your way of teaching is easy and comprehensive. In Selenium I'm facing an issue with pagination: I haven't been able to build the pagination structure myself. I watched this video, but it's complicated for me because the method is different from the Udemy course. Sir, please help me with the pagination structure as it is taught in the Udemy Selenium course. Thanks 😊
@carachen4082 · 2 years ago
Thank you so much for the tutorial! I have one small question about writing the CSV file: I wrote exactly the same code as you, but the output is not cleanly split into columns like yours. In my file everything lands in one cell, even though I split with a semicolon. Could you advise why this happened? I'm using VS Code.
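A likely cause is that the spreadsheet app splits columns on commas by default, not on semicolons. A minimal sketch of one fix using Python's built-in csv module with its default comma delimiter (the column names and data here are hypothetical, not the ones from the video):

import csv

job_titles = ["Software Engineer", "Data Analyst"]   # hypothetical scraped lists
locations = ["Berlin", "Munich"]

with open("jobs.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)                      # default delimiter is a comma
    writer.writerow(["title", "location"])      # header row
    for title, location in zip(job_titles, locations):
        writer.writerow([title, location])      # each value lands in its own cell

Alternatively, keep the semicolons and set the delimiter explicitly when importing the file into the spreadsheet app.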
@amrasfoor6269 · 3 years ago
Thanks for this video. May I ask how to handle it when there is no id attribute for the search bar?
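A minimal sketch of alternatives when there is no id, using Selenium's other locator strategies (the URL and selectors are hypothetical):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com")  # hypothetical page

# Any of these locates a search bar without an id attribute:
search = driver.find_element(By.NAME, "q")                                   # by name attribute
# search = driver.find_element(By.CSS_SELECTOR, "input[type='search']")      # by CSS selector
# search = driver.find_element(By.XPATH, "//input[@placeholder='Search']")   # by XPath on any attribute

search.send_keys("python")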
@diegocamelomartinez5566 · 3 years ago
Hi :) I actually have a page where the lengths of the lists are not always the same. I'm scraping over 35 pages, and on page 12 there is an item that doesn't have one of the attributes (e.g. the description), which ends up messing everything up. Do you have any suggestions on how to approach that?
@NihumDe · 2 years ago
Maybe go with a "try:" and "except:" approach?
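A minimal sketch of that try/except idea, scraping each result card as a unit and inserting a placeholder when an attribute is missing, so all the lists stay the same length (the class names are hypothetical):

from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

rows = []
for card in driver.find_elements(By.CLASS_NAME, "job-card"):   # hypothetical container class; 'driver' is already running
    title = card.find_element(By.CLASS_NAME, "title").text
    try:
        description = card.find_element(By.CLASS_NAME, "description").text
    except NoSuchElementException:
        description = ""        # placeholder keeps rows aligned when the attribute is absent
    rows.append([title, description])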
@marcogelsomini7655 · 3 years ago
Wow, good stuff, thanks man. Is the writerow() function maybe more handy?
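For reference, writerow() writes one row per call and writerows() writes a whole list of rows in one call; a minimal sketch with hypothetical data:

import csv

rows = [["Software Engineer", "Berlin"], ["Data Analyst", "Munich"]]   # hypothetical scraped rows
with open("jobs.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["title", "location"])   # a single row
    writer.writerows(rows)                   # all remaining rows at once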
@praveenhosamani2274 · 3 years ago
Hey! Please make a tutorial on web scraping of store locators, such as car dealer information, two-wheelers, etc., using both BeautifulSoup and Selenium!
@justwowtv4998 · 2 years ago
Great, but what happens when you have a website with 1000 pages? Will you loop through all the pages one by one? That would take days...
@informationtoallj · 1 year ago
Thanks
@architmishra015 · 1 month ago
How and where are you writing the XPath? I can't find any such option in Inspect.
@sowmiyasivakumar2425 · 3 years ago
Hi, your demonstration was really helpful. I have a doubt: how do I click on those jobs (e.g. Software Engineer) and navigate to the other tab? I actually need to do it for all of the listed jobs. Is there a way to do that?
@neroplus-it · 3 years ago
There is a way to do that, but it was not covered in the video. If there is demand, I will plan to create a tutorial which explicitly shows how to navigate to a separate link and grab the information from there. I actually use BeautifulSoup for this, since it's quite a straightforward process. Thanks for your question and the feedback!
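A minimal sketch of that follow-the-link approach with requests and BeautifulSoup (the URL and CSS selectors are hypothetical, not the ones used in the video):

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

listing_url = "https://example.com/jobs"                     # hypothetical listing page
listing = BeautifulSoup(requests.get(listing_url).text, "html.parser")

for link in listing.select("a.job-title"):                   # hypothetical link selector
    detail_url = urljoin(listing_url, link["href"])          # handles relative links
    detail = BeautifulSoup(requests.get(detail_url).text, "html.parser")
    description = detail.select_one("div.description")       # hypothetical detail selector
    print(link.get_text(strip=True), description.get_text(strip=True) if description else "")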
@sowmiyasivakumar2425 · 3 years ago
@neroplus-it Yes, I'm looking forward to your tutorial. Thanks for your quick response.
@andrewmosola4013 · 3 years ago
@neroplus-it Thanks for the tutorial. I'm also wondering the same as @Sowmiya above. I hope there will be enough demand. Thanks.
@diegocamelomartinez5566 · 3 years ago
I would also be interested in seeing that.
@ProjectSkillsQMUL · 3 years ago
Thanks for the tutorial! I'm new to coding and web scraping. What code would need to be written to scrape every available page, until the next button can no longer be clicked, instead of just 3 pages? Also, I need to include a wait function for each page, since the pages are so JavaScript-heavy.
@marinuss6004 · 3 years ago
time.sleep(1) will wait for one second (remember to import time).
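A minimal sketch that combines both points, looping until the next button can no longer be found or clicked and pausing on each page for JavaScript-heavy sites (the XPath is hypothetical; 'driver' is an already running webdriver):

import time
from selenium.common.exceptions import NoSuchElementException, ElementClickInterceptedException
from selenium.webdriver.common.by import By

while True:
    time.sleep(2)   # crude wait for the page to render; WebDriverWait is a stricter alternative
    # ... scrape the current page here ...
    try:
        next_button = driver.find_element(By.XPATH, "//a[@aria-label='Next']")   # hypothetical XPath
        next_button.click()
    except (NoSuchElementException, ElementClickInterceptedException):
        break       # no clickable next button left, so this was the last page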
@friendsforever715 · 10 months ago
How do I import the Selenium library in Jupyter (Python)? I'm getting the error "No module named 'selenium'" once I run the script. Thanks.
@neroplus-it · 10 months ago
Run "pip install selenium" in your CMD, or "!pip install selenium" directly in your notebook, first.
@azadsaifi5919 · 2 years ago
Hey, on my laptop there is no option to search, only a filter option, so how can I search for an XPath or anything? Please let me know :)
@technoscopy · 3 years ago
Hello, can you scrape JSF-format-based websites please?
@Chariotzable · 2 years ago
Getting this error: "else: ^^^^ SyntaxError: expected 'except' or 'finally' block"
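That error usually means a try block is followed directly by else; Python requires at least one except (or a finally) before else. A minimal sketch of the valid structure (the XPath is hypothetical):

from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

try:
    next_button = driver.find_element(By.XPATH, "//a[@aria-label='Next']")   # hypothetical XPath
    next_button.click()
except NoSuchElementException:
    print("no next button found")
else:
    print("moved to the next page")   # 'else' is only valid after an 'except' or 'finally' block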
@dark_legions2227 · 1 year ago
Hi... Your content is awesome. I'm from India. With your help I'm scraping the IMDb Top 250 movies, but I want to scrape the data from all the pages. How do I scrape the data from the next page, since "next" is a button?
@deepak7317 · 1 year ago
Just locate the next button by its XPath, click it, and iterate the for loop up to your desired page.
@deepak7317 · 1 year ago
I have done this to scrape the sunglasses data from Flipkart for the first 120 glasses:

import time
from selenium.webdriver.common.by import By

# Empty lists to store the attributes
brand_name = []
prod_tags = []
price_v = []

time.sleep(3)
start = 0
end = 3
for page in range(start, end):
    # product names on the current page
    p_tags = driver.find_elements(By.XPATH, '//a[@class="IRpwTa"]')
    for i in p_tags[0:100]:
        prod_tags.append(i.text)

    # brand names on the current page
    brand_tags = driver.find_elements(By.XPATH, '//div[@class="_2WkVRV"]')
    for i in brand_tags[0:100]:
        brand_name.append(i.text)

    # click the next button once per page
    next_button = driver.find_element(By.XPATH, '/html/body/div/div/div[3]/div[1]/div[2]/div[12]/div/div/nav/a[11]')
    next_button.click()
    time.sleep(1)
Web Scraping to CSV | Multiple Pages Scraping with BeautifulSoup · 29:06
Scrapy for Beginners - A Complete How To Example Web Scraping Project · 23:22 · John Watson Rooney · 270K views
Indeed Jobs Web Scraping Save to CSV · 20:55 · John Watson Rooney · 91K views
Scraping Data from a Real Website | Web Scraping in Python · 25:23 · Alex The Analyst · 452K views
XPath Crash Course For Python Web Scraping · 30:07 · NeuralNine · 28K views
Scrape Amazon Data using Python (Step by Step Guide) · 24:14 · Darshil Parmar · 155K views