Best Web Scraping Combo? Use These In Your Projects

44,124 views

John Watson Rooney

1 day ago

Comments: 124
@y2kdeuce2 2 years ago
Hey JWR, JRW here. I've been "scraping" for 20 years now. Amazing how the tools have matured. Dead simple these days. That said, this video is a fantastic example of a cherry-picked site to demo these tools. Few real-world websites are this simple to parse using CSS. Please dedicate some time to digging through more challenging selectors. Thanks in advance - John
@JohnWatsonRooney 2 years ago
Hey! Cherry-picked examples are unfortunately a part of it; people simply won't watch a long video where I am trying to work stuff out. Also, this video was more of a demo of how different tools work but still end up at the same result. I'm not sure I agree about the CSS parsing part though; I don't often find an issue on sites that are mostly HTML and CSS with minimal JS.
@contrarypagan 2 years ago
@@JohnWatsonRooney I think if you did some master classes where you tackled some complex sites and worked through things, you would get a fair few views on those videos. The easy options you use are VERY useful for specific tips, but I would love to see you work through some really difficult situations as well. But your content is awesome, so believe me, I am not complaining! Thank you so much.
@AmodeusR 1 year ago
@@JohnWatsonRooney I guarantee there will be people who will watch a long video to see a professional trying to figure things out. Most true learners are just sick of the magical, smooth programming experience many videos show, when in reality, trying to do it ourselves, we just end up struggling a lot. That's even unhealthy for those starting out in the area, who end up thinking everything is always that simple because of the sheer amount of such cherry-picked content. Just make clear from the beginning that it is "advanced" content, an example of when things are not that simple, so people can relate to it and feel compelled to watch.
@arabymo 8 months ago
Putting your mind and thinking into the code! What a way to explain and learn. Thank you.
2 years ago
I'm starting out with Python and web scraping, and this video is amazing and taught me a lot of basic things! Thank you a lot for such a fantastic video.
@nuritas8424 2 years ago
The way you explain is so clean. Thanks a lot.
@drac.96 2 years ago
Nice to see a new intro, and the step-by-step explanation is really good.
@GelsYT 2 years ago
OH MY GOODNESS! THANK YOU! THIS MAKES IT SO MUCH EASIER TO COLLECT THE DATA AND PUT IT IN A DATA STRUCTURE LIKE A DICT. THANK YOU!
@JohnWatsonRooney 2 years ago
Thanks, I'm happy to help!
@Septumsempra8818 2 years ago
My business got funding!!! Thank you Mr Rooney.
@JohnWatsonRooney 2 years ago
Great! Thanks for watching!
@hrvojematosevic8769 2 years ago
Are you hiring? :)))
@Septumsempra8818 2 years ago
@@hrvojematosevic8769 Developers, yes.
@roshanyadav4459 2 years ago
Can I join you? I've worked with Scrapy, Playwright, BeautifulSoup, and Selenium. I am an intermediate programmer.
@hrvojematosevic8769 2 years ago
@@Septumsempra8818 It's a broad term T_T
@MrRementer 2 years ago
Yes, happy to see a new video!
@JohnWatsonRooney 2 years ago
Thanks, I hope you enjoyed it!
@MrRementer 2 years ago
@@JohnWatsonRooney I did! I've got a longer question regarding my Amazon scraping project, which I am currently doing with Selenium. Everything works fine, it's just quite slow. Is it okay to hit you up with a direct message/email?
@rics6035 2 years ago
John! Thanks so much for your amazing videos, they are super useful and interesting to watch!
@pypypy4228 1 year ago
I like this approach. Thank you!
@JKnight 1 year ago
That was fantastic. Corey Schafer tier content. Love to see it.
@JohnWatsonRooney 1 year ago
Thank you - he's the best, so I'm very happy to be included there!
@hayat_soft_skills 2 years ago
Love the content, and especially how you write code clean and neat. The best channel in my 5-year YouTube journey. May Allah give you more power; we are enjoying the best content. Thanks!
@Lahmeinthehouse 2 years ago
Nice video! What do you use for screen recording?
@JohnWatsonRooney 2 years ago
OBS! It's free.
@Lahmeinthehouse 2 years ago
Great, thanks! Also, do you have LinkedIn?
@OPPACHblu_channel 2 years ago
Thank you for sharing your experience, very interesting and helpful!👍
@RonWaller 1 year ago
John, thanks for your tutorials. I'm enjoying the web scraping and planning to dig into this more. Curious: this tool used CSS to get the data. Are there other tools to get "dynamic" data, or JS data? Just wondering, thanks.
@thepoorsultan5112 2 years ago
I have already been using the selectolax and httpx combo.
@UniquelyCritical 1 year ago
1000th like here at 5:12 AM CST. Thanks!!!
@JohnWatsonRooney 1 year ago
Thank you!!
@gabiie9839 2 years ago
Nice work John, web scraping lord.
@JohnWatsonRooney 2 years ago
Thanks for watching!
@lucasmoratoaraujo8433 1 year ago
Nice video! Thank you for sharing your knowledge with others!
@guillaumebignon6957 2 years ago
I've been using requests and BeautifulSoup up to now, so it's great to discover competitive alternatives. Would appending data to a JSON file instead of CSV also work?
@JohnWatsonRooney 2 years ago
Always good to see other options in case they fit your needs better. Yes, you can append to a JSON file; look at JSON Lines too, it might be better for you.
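The JSON Lines idea John mentions can be sketched like this (a minimal example with only the standard library; the record fields are made up for illustration):

```python
import json

def append_jsonl(path, records):
    # JSON Lines: one JSON object per line, so appending new scrape
    # results never requires re-reading or rewriting the existing file.
    with open(path, "a", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")

def read_jsonl(path):
    # Read the file back into a list of dicts, one per line.
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]
```

By contrast, appending to a single JSON array means loading and rewriting the whole file on every run, which is why JSON Lines tends to suit incremental scraping better.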
@mirkolantieri 2 years ago
Nice video John!
@JohnWatsonRooney 2 years ago
Thanks for watching!
@jasonbischer4568 1 year ago
Hey John, great video here. I wasn't getting the price in my results, it was just empty for some reason? I messed around and used the span code as well, but it just returned "no text found". Any ideas? Thanks for everything, your videos are great.
@karimbenamar362 6 months ago
Same here 😢 Any idea how to solve the issue?
@michakuczma4076 2 years ago
Great video John, thanks for that. One question comes to mind: why do you use dataclasses first and then transfer them to dictionaries? Why not use dictionaries from the beginning? What's the advantage of dataclasses here besides IDE hints? I don't have much experience with them, that's why I'm asking.
@JohnWatsonRooney 2 years ago
Thanks. In this case there wasn't much of a benefit; I am just in the habit of using them now. The benefit comes in the validation you can do with them when accepting data in and out of your program.
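The validation John is referring to can be sketched with a hypothetical `Product` dataclass (the field names and coercion rules here are invented for illustration, not from the video):

```python
from dataclasses import dataclass, asdict

@dataclass
class Product:
    name: str
    price: float

    def __post_init__(self):
        # Validate/coerce scraped values at the boundary: .text() calls
        # usually return strings like "£19.99", and bad rows should
        # fail loudly here rather than corrupt the output file.
        if isinstance(self.price, str):
            self.price = float(self.price.strip().lstrip("£$"))
        if self.price < 0:
            raise ValueError(f"negative price for {self.name!r}")

p = Product(name="Tent", price="£19.99")
row = asdict(p)  # plain dict, ready for csv.DictWriter or json
```

A plain dict would accept any keys and values silently; the dataclass gives you one place to normalize and reject data on the way in.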
@frynoodles1274 2 years ago
Hi John, I love your videos. What if view-source doesn't return all the HTML on the page that we want? Do we need to use a headless browser and wait for elements to load? Or is there a good requests library we can use instead? Thanks
@JohnWatsonRooney 2 years ago
If it doesn't, you have a few options: a headless browser is one, or seeing if there are AJAX requests you can use instead.
@7Trident3 2 years ago
No dickin' around, meat and potatoes! This should be the gold standard for how to make a programming vid.
@JohnWatsonRooney 2 years ago
Thanks, I appreciate it!
@philtoa334 2 years ago
Thanks.
@lucianocarvajal6698 1 year ago
And can httpx scrape info from dynamic/JavaScript web pages? Because what I see in the video is that it's being used on a normal HTML website.
@JohnWatsonRooney 1 year ago
You can, if you can find the backend API; otherwise you will need to render the page with browser automation like Playwright.
@digitalbangladesh6977 2 years ago
Hello sir, what are the downsides of Scrapy with respect to this project?
@JohnWatsonRooney 2 years ago
None really - I just sometimes feel like it's overkill for a small project and think of it more for larger scrapers and crawlers.
@rosaarzabala5189 2 years ago
Great combo! Thanks for your videos 🙌 almost didn't get the doge 👀
@JohnWatsonRooney 2 years ago
Thanks ;D
@bakasenpaidesu 2 years ago
Great video... Btw, you can use pandas to convert a dictionary to CSV.
@JohnWatsonRooney 2 years ago
Thanks - yes, it's much easier too, but pandas is a big library to pull in.
@jasonbischer4568 1 year ago
Also, a CSV file is not being created when I run the script? Any idea?
@gisleberge4363 2 years ago
Why did you "leave" requests and BeautifulSoup? Just curious about their downsides compared to the ones you recommend here.
@JohnWatsonRooney 2 years ago
HTTPX works just like requests, but it can be async when needed. Selectolax is faster and more focused (CSS selectors only) than BS4. I always say use what you prefer; after exploring different tools I have found that these two work the best for me!
@eternogigante685 11 months ago
Does this work on SPAs rendered by frameworks like React and such?
@tarik9563 2 years ago
Maybe a stupid question: what about scraping data that is only generated after a request + captcha?
@juliohernandezpabon 1 year ago
Thanks for the wonderful material. Maybe it's me, but right now the price is not saved. Thanks
@joaoalmirante4268 2 years ago
Hey, nice video. But the biggest problem these days with scraping is the amount of JS/non-HTML content that makes things a lot more difficult to get. But overall, thanks for sharing.
@arsalan0561 2 years ago
So I've been learning Scrapy basics and following your channel for quite a while. As per this video, is this the latest method to scrape pages? What about the old Scrapy start_urls and responses to get the whole page, and link extractors and follow_url to get to the next pages and so on? Do we still need to use them at some point, or could we replace them with this method altogether? And thanks for sharing new ways to scrape. Cheers.
@nztitirangi 1 year ago
Very cool. I kept getting timeouts, so I did this to solve it:
client = httpx.Client(timeout=None)
resp = client.get(url)
return HTMLParser(resp.text)
@dungphung2252 2 years ago
Hi, I just want to know if it works on all websites. Thanks
@pythonprogrammer2186 2 years ago
Very nice!
@jfk1337 2 years ago
Why no async?
@karlblau2 2 years ago
Great video
@rovshenhojayev1843 1 year ago
Can we get the link and image of that product with this lib?
@JohnWatsonRooney 1 year ago
Yes, it would work the same way.
@thorstenwidmer7275 1 year ago
May I ask which Lenovo this is?
@JohnWatsonRooney 1 year ago
It's an old X200. I've put an SSD and more RAM in it and it's a workable machine.
@bittumonkb866 1 year ago
What about sites that need a login?
@yawarvoice 1 year ago
Hi @John, I've been following you for a long time and watching all your scraping videos with Python. I have started to create a scraper, but the website is not allowing me access as it considers my script a bot. Though I have changed the user-agent to the latest Chrome, the website still recognizes me as a bot. My question is: which combo should I use for scraping slightly complex JS/AJAX/bot-aware websites? People say that Selenium is good for that purpose, but you say that Selenium is not a good option nowadays as it is slow. So what do you suggest? Which combo can fit many scenarios, if not all? Looking forward! Thanks.
@garymichalske2274 1 year ago
Thanks for the video, John. I was finally able to run my code successfully following the steps in this video. I was following the older videos for Selenium and Playwright but couldn't get the results you displayed. I think the HTML on the websites had changed since you recorded them. The only issue I ran into for this one is that my CSV file has a blank row between every exported line, so instead of 300 rows, I have 600. Any idea why?
@JohnWatsonRooney 1 year ago
Thanks - yes, unfortunately that is part of it; websites change, so my examples often expire. I try to show the methods as much as I can. As for your CSV, some of your data probably has a newline character at the end; try adding .strip() to each line to see which one it is!
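Alongside the .strip() check, another common cause of a blank row after every record on Windows is opening the file without `newline=""`, which the csv module's documentation specifically calls for. A minimal sketch (the field names are made up for illustration):

```python
import csv

def to_csv(path, rows):
    # The csv module writes its own \r\n line endings; without
    # newline="" Python's text layer adds another \r on Windows,
    # which shows up as a blank row between every record in Excel.
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["name", "price"])
        writer.writeheader()
        writer.writerows(rows)
```

If the row count exactly doubled, the `newline=""` fix is the more likely culprit than stray newlines in the data itself.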
@losefaithinhumanity8238 2 years ago
Hey man, I've been watching your content for the past couple of weeks and it's fire. A good content idea would be a beginner series where you go through the absolute basics; I'm proposing this because nearly all of the videos on the topic are very outdated. Cheers.
@azwan1992 1 year ago
I love you man.
@joshman844 1 year ago
Does this work in Google Colab?
@sirtoruk 1 year ago
It says venv/bin/activate doesn't exist; there's only a file called python and another one called python3 there :(
@karthikshaindia 2 years ago
Good one. However, requests-html might be a comfortable replacement for BS4, though it has to be decoded specifically.
@pranavanand24 2 years ago
Hey John, great video! I am a beginner at web scraping and VS Code in general. I saw that your import csv part got added automatically, I think? Can you please tell me how to do that? Is that some extension like Auto Import?
@JohnWatsonRooney 2 years ago
I'm gonna be honest with you... I think I typed it in but forgot and edited that part out... sorry
@pranavanand24 2 years ago
@@JohnWatsonRooney Oh, okay, got it. No problem. I ended up searching Google and came across the Auto Import extension, added it to my VS Code, and it is indeed able to add those import lines by itself.
@Relaxing_Sounds_Rain 2 years ago
Hi, thank you John. This works very well on Windows, but it doesn't work on Ubuntu Linux. Help me please.
@itzcallmepro4963 2 years ago
I got errors, searched, and found that the site I am trying to scrape uses Cloudflare protection. Is there any way to bypass that?
@JohnWatsonRooney 2 years ago
Try cloudscraper, it can have some good results.
@itzcallmepro4963 2 years ago
@@JohnWatsonRooney I already searched and have used it, thanks very much. But I have another problem now: I am scraping some data, one field being prices. There are sometimes two prices; both are always in the HTML, but sometimes only one is displayed on the page. I can't find any class or anything to differentiate between them to get the element that's actually shown on screen.
@MrPaynealex6 2 years ago
Any recommendations to avoid rate limiting aside from rotating proxies?
@jhonjuniordelaguilapinedo2746 1 year ago
I guess keep using the requests library, because the get function lets you set the headers and proxy you want.
@shivambajaj6228 2 years ago
I usually encounter error 429 scraping web pages. Is there any way I could get around that?
@rajkumargerard5474 2 years ago
Can we run this from Spyder or Jupyter? Also, could you please try to scrape a Tesco link? I had tried it and it was working fine for some time, but now, due to the restrictions, my code doesn't work.
@hossamgamal8661 2 years ago
Thanks for sharing such important information. I didn't know there were modules other than BeautifulSoup and requests. I have a question: can you make a video on how to use an authenticated proxy with Selenium? I used options.add_argument('--proxy-server=ip:port') but it doesn't work for me; it doesn't show the alert box where I should input the username and password.
@JohnWatsonRooney 2 years ago
I'm going to do some more Selenium vids and will try to cover this in those, but I'm not sure exactly why that doesn't work.
@hossamgamal8661 2 years ago
@@JohnWatsonRooney Thanks
@y2kdeuce2 2 years ago
@@JohnWatsonRooney How about a Firefox AWS Lambda function with a rotating proxy? :D
@mectoystv 2 years ago
Hello, in order to use web scraping, do you have to ask permission from the owner of the content of the web page?
@nztitirangi 1 year ago
If your scraper behaves similarly to a human then it's fine. If it totally smashes some poor sod's ecommerce page then no. If it's a well-built webapp then they're going to be throttling you anyway, IMHO.
@CreativeCorners 2 years ago
Respected sir, I need your help to get links to or download FB Marketplace images using the scraping tool. I did a lot of work but I am confused; even though I got links to all the listed items, I couldn't get the links to the images in the individual listing shown in the new tab. Please guide me.
@CreativeCorners 2 years ago
I'm still waiting for your favorable reply, please.
@tanekapace8080 2 years ago
I'm interested in a bot that can fill out online forms at multiple websites. Kindly respond if you could help me.
@yoanbello6891 2 years ago
Great video as always. I have this error scraping a site with Python Playwright: "... intercepts pointer events, retrying click action, attempt #57". It's a heavy JavaScript site and I am trying to click a button. Thanks
@artabra1019 2 years ago
Nice, the asdict method saves much time.
@disrael2101 2 years ago
Source code?
@scottmiller2591 2 years ago
This was a good example of how to get started, but I still had some questions: - In your opinion, why are httpx and selectolax better than requests and BeautifulSoup? - There are so many places where things can fail - status code != 200, the website sends you to an "I'm busy" page, etc. - that are missing here. If you are communicating with an unreliable website, this code may fail even in a hobby application, much less something that is scraping professionally. Is there anything in httpx/selectolax that helps with exception handling compared to requests/BS4?
@JohnWatsonRooney 2 years ago
Httpx has async ready for you when you need it, and selectolax is a much faster parser than BS4. It still comes down to preference: use what works for you! And yes, in this video I didn't flesh it out fully with error handling, retries, and the other parts that would make the script more complete for professional use. I didn't want to cover too much in one go, and I also wanted to reach as many people as possible.
@scottmiller2591 2 years ago
@@JohnWatsonRooney Thanks for your prompt and useful reply!
@nztitirangi 1 year ago
@@scottmiller2591
client = httpx.Client(timeout=None)
resp = client.get(url)
return HTMLParser(resp.text)
@ramarajesh9554 2 years ago
First
@nicolasalarcon58 2 years ago
Hey, thank you so much for your explanation! What happens when products have this structure? ... ... ... ... ... I can't get anything from this site; I tried everything, like html.css("div.Fractal-ProductCard__productcard--container"), html.css("div.productcard--container"), html.css("div.t:m|n:productcard|v:default"), and much more.
@nadavnesher8641 2 years ago
Hi John, thanks for the awesome video! I really like your clear explanations. I was trying to run your code on a Google Search page but ran into some difficulties. I was hoping you could please tell me what I'm doing wrong. The div class I'm trying to grab: (which represents a Google Search result). But what's returned is an empty list: []

def parse_queries(html):
    queries = html.css("div.MjjYud")
    print(queries)

I therefore cannot grab the nested "div", "h3", and "cite" classes which hold the information I need to populate my dataclass attributes (website address, website title, website text). For example: address --> title --> text --> (*) As you suggested, I also looked at the page source and did find this "MjjYud" div class. My code:

import httpx
from selectolax.parser import HTMLParser
from dataclasses import dataclass, asdict
import csv

@dataclass
class Query:
    website: str
    title: str
    information: str

def get_html():
    url = "www.google.com/search?q=data+science+courses"
    resp = httpx.get(url)
    html = HTMLParser(resp.text)
    return html

def parse_queries(html):
    queries = html.css("div.MjjYud")
    print(queries)
    results = []
    for item in queries:
        new_item = Query(
            website=item.css_first("cite.iUh30 qLRx3b tjvcx").text(),
            news_title=item.css_first("h3.LC20lb MBeuO DKV0Md").text(),
            textual_info=item.css_first("div.VwiC3b yXK7lf MUxGbd yDYNvb lyLwlc lEBKkf").text()  # the inside
        )
        results.append(asdict(new_item))
        print("new_item")
    return results

def to_csv(res):
    with open("results.csv", "a") as f:
        writer = csv.DictWriter(f, fieldnames=["website", "news_title", "textual_info"])
        writer.writerows(res)

def main():
    html = get_html()
    res = parse_queries(html)
    to_csv(res)

main()

Thank you very much for taking the time to read my comment 🙏🏼
@zelt7466 2 years ago
requests_html one love)
@mitchconnor8764 2 years ago
Great video