Hey JWR, JRW here. I've been "scraping" for 20 years now. Amazing how the tools have matured. Dead simple these days. That said, this video is a fantastic example of a cherry picked site to demo these tools. Few real world websites are this simple to parse using CSS. Please dedicate some time to digging through more challenging selectors. Thanks in advance - John
@JohnWatsonRooney2 жыл бұрын
Hey! Cherry picked examples are unfortunately a part of it; people simply won't watch a long video where I'm trying to work stuff out. Also, this video was more of a demo of how different tools work but end up at the same result. I'm not sure I agree about the CSS parsing part though. I don't often find an issue on sites that are mostly HTML and CSS with minimal JS
@contrarypagan2 жыл бұрын
@@JohnWatsonRooney I think if you did some master classes where you tackled some complex sites and worked through things that you would have a fair few views on those videos. The easy options you use are VERY useful for specific tips but I would love to see you work through some real difficult situations as well. But your content is awesome so believe me I am not complaining! Thank you so much.
@AmodeusR Жыл бұрын
@@JohnWatsonRooney I guarantee there are people who will watch a long video to see a professional figuring things out. Most true learners are just sick of the magic, smooth programming experience many videos show, when in reality, trying it ourselves, we just end up struggling a lot. And that's even unhealthy for those starting out in the field, thinking everything is always that simple just from the sheer amount of such cherry picked content. Just make clear from the beginning that it is "advanced" content, an example of when things are not that simple, so people can relate to it and feel compelled to watch it.
@arabymo8 ай бұрын
Putting your mind and thinking into the code! What a way to explain and learn. Thank you.
2 жыл бұрын
I'm just starting out with Python and web scraping, and this video is amazing and taught me a lot of basic things! Thank you a lot for such a fantastic video.
@nuritas84242 жыл бұрын
the way you explain is so clean. thanks a lot
@drac.962 жыл бұрын
Nice to see a new intro and the step by step explanation is really good
@GelsYT2 жыл бұрын
OH MY GOODNESS! THANK YOU! THIS IS SO MUCH EASIER TO COLLECT THE DATA AND CONSTRUCT IN A DATA STRUCTURE LIKE DICT THANK YOU!
@JohnWatsonRooney2 жыл бұрын
thanks I'm happy to help!
@Septumsempra88182 жыл бұрын
My business got funding!!! Thank you Mr Rooney.
@JohnWatsonRooney2 жыл бұрын
great! thanks for watching!
@hrvojematosevic87692 жыл бұрын
Are you hiring? :)))
@Septumsempra88182 жыл бұрын
@@hrvojematosevic8769 developers, Yes.
@roshanyadav44592 жыл бұрын
Can I join you? I've worked with Scrapy, Playwright, BeautifulSoup and Selenium. I am an intermediate programmer
@hrvojematosevic87692 жыл бұрын
@@Septumsempra8818 it's a broad term T_T
@MrRementer2 жыл бұрын
Yes, happy to see a new Video!
@JohnWatsonRooney2 жыл бұрын
thanks, i hope you enjoyed it!
@MrRementer2 жыл бұрын
@@JohnWatsonRooney I did! I've got a longer question regarding my Amazon scraping project, which I am currently doing with Selenium. Everything works fine, it's just quite slow. Is it okay to hit you up with a direct message/email?
@rics60352 жыл бұрын
John! Thanks so much for your amazing videos, they are super useful and interesting to watch!
@pypypy4228 Жыл бұрын
I like this approach. Thank you!
@JKnight Жыл бұрын
That was fantastic. Cory Schafer tier content. Love to see it.
@JohnWatsonRooney Жыл бұрын
Thank you - he’s the best so very happy to be included there!
@hayat_soft_skills2 жыл бұрын
Love the content, and especially how you write clean and neat code. The best channel in my 5 years on YouTube. May Allah give you more power; we are enjoying the best content. Thanks!
@Lahmeinthehouse2 жыл бұрын
Nice video! What do you use for screen recording ?
@JohnWatsonRooney2 жыл бұрын
OBS! It’s free
@Lahmeinthehouse2 жыл бұрын
Great, thanks! Also, do you have LinkedIn ?
@OPPACHblu_channel2 жыл бұрын
Thanks u for sharing experience, very interesting and helpful!👍
@RonWaller Жыл бұрын
John, thanks for your tutorials. I'm enjoying the web scraping and planning to dig into this more. Curious: this tool used CSS to get the data. Are there other tools to get "dynamic" data or JS data? Just wondering, thanks
@thepoorsultan51122 жыл бұрын
Already have been using selectolax and httpx combo
@UniquelyCritical Жыл бұрын
1000th like here at 5:12 AM CST. Thanks!!!
@JohnWatsonRooney Жыл бұрын
Thank you!!
@gabiie98392 жыл бұрын
nice work john, web scraping lord.
@JohnWatsonRooney2 жыл бұрын
thanks for watching!
@lucasmoratoaraujo8433 Жыл бұрын
Nice video! Thank you for sharing your knowledge with others!
@guillaumebignon69572 жыл бұрын
I've been using requests and BeautifulSoup up to now, so it's great to discover competitive alternatives. Would appending data to a JSON file instead of CSV also work?
@JohnWatsonRooney2 жыл бұрын
Always good to see other options in case they fit your needs better. Yes, you can append to a JSON file; look at JSON Lines too, it might be better for you
@mirkolantieri2 жыл бұрын
Nice video John!
@JohnWatsonRooney2 жыл бұрын
thanks for watching!
@jasonbischer4568 Жыл бұрын
Hey John, great video here. I wasn’t getting the price in my results, it was just empty for some reason? I messed around and used the span code as well but it just returned “no text found”. Any ideas? Thanks for everything, your videos are great
@karimbenamar3626 ай бұрын
same here😢 any idea how to solve the issue ?
@michakuczma40762 жыл бұрын
great video John, thanks for that. One question comes to mind: why do you use dataclasses first and then convert them to dictionaries? Why not use dictionaries from the beginning? What's the advantage of dataclasses here besides IDE hints? I don't have much experience with them, that's why I'm asking.
@JohnWatsonRooney2 жыл бұрын
Thanks. In this case there wasn't much of a benefit, I'm just in the habit of using them now. The benefit comes in the validation you can do with them when accepting data into and out of your program
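To illustrate that validation point, a minimal sketch (the Product fields here are illustrative, not necessarily the ones from the video): a dataclass can reject or coerce bad data the moment it is created, which a plain dict never will.

```python
from dataclasses import dataclass, asdict

@dataclass
class Product:
    title: str
    price: float

    def __post_init__(self):
        # Runs automatically after construction, so bad rows fail fast
        # instead of silently flowing into the CSV.
        if not self.title:
            raise ValueError("title must not be empty")
        self.price = float(self.price)  # coerce "49.95" -> 49.95
        if self.price < 0:
            raise ValueError("price must be non-negative")

item = Product(title="Fleece Jacket", price="49.95")
row = asdict(item)  # {'title': 'Fleece Jacket', 'price': 49.95}
```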
@frynoodles12742 жыл бұрын
Hi John, I love your videos. What if view-source doesn't return all the HTML on the page that we want? Do we need to use a headless browser and wait for elements to load? Or is there a good requests library we can use instead? Thanks
@JohnWatsonRooney2 жыл бұрын
if it doesn't you have a few options, headless browser is one, or seeing if there are AJAX requests you can use too
@7Trident32 жыл бұрын
No dickin around, meat and potatoes! This should be the gold standard on how to make a programming vid.
@JohnWatsonRooney2 жыл бұрын
Thanks I appreciate it!
@philtoa3342 жыл бұрын
Thanks.
@lucianocarvajal6698 Жыл бұрын
And can httpx scrape info from dynamic / JavaScript web pages? Because what I see in the video is that it is being used on a normal HTML website
@JohnWatsonRooney Жыл бұрын
you can, if you can find the backend API, or otherwise you will need to render the page with browser automation like playwright
@digitalbangladesh69772 жыл бұрын
hello sir.. what are the downsides of Scrapy with respect to this project?
@JohnWatsonRooney2 жыл бұрын
None really - I just sometimes feel like it’s overkill for a small project and think of it more for larger scrapers and crawlers
@rosaarzabala51892 жыл бұрын
Great combo! Thanks for ur videos 🙌 almost didn't get the doge 👀
@JohnWatsonRooney2 жыл бұрын
thanks ;D
@bakasenpaidesu2 жыл бұрын
Great video... Btw u can use pandas to convert dictionary to CSV.
@JohnWatsonRooney2 жыл бұрын
thanks - yes it's much easier too, but pandas is a big library to pull in
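For reference, the pandas route mentioned above is roughly this (the file name and fields are just examples):

```python
import pandas as pd

# A list of dicts, like the asdict() results from the video
results = [
    {"title": "Jacket", "price": 49.95},
    {"title": "Backpack", "price": 120.00},
]

# DataFrame handles the header row and quoting for you;
# index=False skips pandas' extra row-number column.
df = pd.DataFrame(results)
df.to_csv("results.csv", index=False)
```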
@jasonbischer4568 Жыл бұрын
Also, a csv file is not being created when I run the script? Any idea?
@gisleberge43632 жыл бұрын
Why did you "leave" requests and BeautifulSoup? Just curious about their downsides compared to the ones you recommend here.
@JohnWatsonRooney2 жыл бұрын
HTTPX works just like requests, but it can be async when needed. Selectolax is faster and more focused (CSS selectors only) than BS4. I always say use what you prefer; after exploring different tools I have found these two work best for me!
@eternogigante68511 ай бұрын
Does this work on SPAs rendered by frameworks like react and such?
@tarik95632 жыл бұрын
maybe a stupid question, what about scraping data that is only generated from a request + captcha?
@juliohernandezpabon Жыл бұрын
Thanks for the wonderful material. Maybe it's just me, but right now the price is not saved. Thanks
@joaoalmirante42682 жыл бұрын
hey, nice video. But the biggest problem these days with scraping is the amount of JS/non-HTML content that makes things very difficult to get. But overall, thanks for sharing
@arsalan05612 жыл бұрын
so I've been learning Scrapy basics and following your channel for quite a while. As per this video, is this the latest method to scrape pages? What about the old Scrapy start_urls and responses to get the whole page, and link extractors and follow_url to get to the next pages and such? I mean, do we still need to use them at some point, or could we replace them with this method altogether? And thanks for sharing new ways to scrape. Cheers
@nztitirangi Жыл бұрын
very cool. I kept getting timeouts so did this to solve it:
client = httpx.Client(timeout=None)
resp = client.get(url)
return HTMLParser(resp.text)
@dungphung22522 жыл бұрын
Hi, I just want to know if it works on all websites. Thanks
@pythonprogrammer21862 жыл бұрын
Very nice!
@jfk13372 жыл бұрын
Why no async?
@karlblau22 жыл бұрын
Great video
@rovshenhojayev1843 Жыл бұрын
can we add the link and image of that product with this lib?
@JohnWatsonRooney Жыл бұрын
Yes it would work the same way
@thorstenwidmer7275 Жыл бұрын
May I ask which Lenovo this is?
@JohnWatsonRooney Жыл бұрын
It’s an old x200. I’ve put an SSD and more RAM in it and it’s a workable machine
@bittumonkb866 Жыл бұрын
What about sites that need a login?
@yawarvoice Жыл бұрын
Hi @John, I've been following you for a long time and watching all your scraping videos with Python. I have started to create a scraper, but the website is not allowing me access as it considers my script a bot. Though I have changed the user-agent to the latest Chrome, the website still recognizes me as a bot. My question is: which combo should I use for scraping slightly complex JS/AJAX/bot-aware websites? People say Selenium is good for that purpose, but you say Selenium is not a good option nowadays as it is slow. So what do you suggest, which combo can fit most scenarios, if not all? Looking forward! Thanks.
@garymichalske2274 Жыл бұрын
Thanks for the video, John. I was finally able to run my code successfully following the steps in this video. I was following the older videos for selenium and playwright but couldn't get the results you displayed in the video. I think the html code on the websites had changed since you recorded the video. The only issue I ran into for this one is my csv file has a blank row between every exported line. So instead of 300 rows, I have 600. Any idea why?
@JohnWatsonRooney Жыл бұрын
thanks - yes, unfortunately that is part of it; websites change, so my examples often expire. I try to show the methods as much as I can. As for your CSV, some of your data probably has a newline character at the end; try adding .strip() to each line to see which one it is!
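One more likely culprit for the blank rows, especially on Windows: Python's csv docs say the output file should be opened with newline="", otherwise every row gets a doubled line ending. A sketch (file name and fields are just examples):

```python
import csv

rows = [
    {"title": "Jacket", "price": "49.95"},
    {"title": "Backpack", "price": "120.00"},
]

# newline="" stops the csv writer producing a blank line
# between rows on Windows.
with open("results.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["title", "price"])
    writer.writeheader()
    writer.writerows(rows)
```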
@losefaithinhumanity82382 жыл бұрын
Hey man I've been watching your content for the past couple of weeks and it's fire. A good content idea would be to create a beginner series where you go through the absolute basics, I'm proposing this because nearly all of the videos on the topic are very outdated. Cheers.
@azwan1992 Жыл бұрын
I love you man.
@joshman844 Жыл бұрын
does this work in google colab?
@sirtoruk Жыл бұрын
It says venv/bin/activate doesn't exist; there's only a file called python and another one called python3 there :(
@karthikshaindia2 жыл бұрын
Good one. However, requests-html might be a more comfortable replacement than BS4. This one has to be decoded specifically
@pranavanand242 жыл бұрын
Hey John, great video! I am a beginner at webscraping and vscode in general. I saw that your import csv part got added automatically, I think? Can you please tell me how to do that? Is that some extension like Auto Import?
@JohnWatsonRooney2 жыл бұрын
i'm gonna be honest with you... i think i typed it in but forgot and edited that part out... sorry
@pranavanand242 жыл бұрын
@@JohnWatsonRooney oh, okay, got it. No problem. I ended up searching Google and came to know about this extension auto import and included it in my vscode, which is indeed able to add those import lines by itself.
@Relaxing_Sounds_Rain2 жыл бұрын
Hi, thank you John. This works very well on Windows, but it does not work on Linux (Ubuntu). Help me please
@itzcallmepro49632 жыл бұрын
I got errors, and after searching I found that the site I am trying to scrape uses Cloudflare protection. Is there any way to bypass that?
@JohnWatsonRooney2 жыл бұрын
Try cloudscraper, it can have some good results
@itzcallmepro49632 жыл бұрын
@@JohnWatsonRooney I already searched and have used it, thanks very much. But I have another problem now: I am scraping some data, one field being prices. There are sometimes 2 prices. Both are always in the HTML, but sometimes only one is displayed on the page. I can't find any class or anything to differentiate between them, to get only the element that actually appears on screen
@MrPaynealex62 жыл бұрын
Any recommendations to avoid rate limiting aside from rotating proxies?
@jhonjuniordelaguilapinedo2746 Жыл бұрын
I guess keep using requests library, because the get function lets you put the header and the proxy you want.
@shivambajaj62282 жыл бұрын
I usually encounter error 429 when scraping web pages. Is there any way I could get around that?
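A 429 is the server asking you to slow down rather than something to bypass; the usual approach is to wait and retry, honoring the Retry-After header when the server sends one. A sketch, with the fetch argument standing in for a real httpx or requests call:

```python
import time

def get_with_backoff(fetch, url, max_retries=4):
    """fetch(url) must return an object with .status_code and .headers."""
    delay = 1.0
    for attempt in range(max_retries):
        resp = fetch(url)
        if resp.status_code != 429:
            return resp
        # Prefer the server's own hint, otherwise back off exponentially.
        wait = float(resp.headers.get("Retry-After", delay))
        time.sleep(wait)
        delay *= 2
    return resp  # still 429 after all retries; caller decides what to do
```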
@rajkumargerard54742 жыл бұрын
Can we run this from Spyder or Jupyter? Also, requesting you to please try and scrape a Tesco link. I had tried it and it was working fine for some time, but now, due to the restrictions, my code doesn't work.
@hossamgamal86612 жыл бұрын
Thanks for sharing such important information. I didn't know there were modules other than BeautifulSoup and requests. I have a question: can you make a video on how to use an authenticated proxy with Selenium? I have used options.add_argument('--proxy-server=ip:port') but it doesn't work for me. It doesn't show the alert box where I should input the username and password
@JohnWatsonRooney2 жыл бұрын
I'm going to do some more Selenium vids, I will try to cover this in those, but I'm not sure exactly why that doesn't work
@hossamgamal86612 жыл бұрын
@@JohnWatsonRooney Thanks
@y2kdeuce22 жыл бұрын
@@JohnWatsonRooney how about a Firefox AWS lambda function with a rotating proxy? :D
@mectoystv2 жыл бұрын
Hello, in order to do web scraping, must you ask permission from the owner of the content of the web page?
@nztitirangi Жыл бұрын
if your scraper behaves similarly to a human then it's fine. If it totally smashes some poor sod's ecommerce page then no. If it's a well built webapp then they're going to be throttling you anyway, IMHO
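In that spirit, a tiny polite-delay helper (the timings here are arbitrary) goes a long way for small jobs:

```python
import random
import time

def polite_sleep(base=1.0, jitter=0.5):
    """Sleep base seconds plus random jitter between requests,
    so the target site never sees a machine-gun burst of hits."""
    delay = base + random.uniform(0, jitter)
    time.sleep(delay)
    return delay

# Typical use, with `client` and `urls` from your own scraper:
# for url in urls:
#     resp = client.get(url)
#     polite_sleep()
```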
@CreativeCorners2 жыл бұрын
Respected Sir, I need your help to get links to, or download, FB Marketplace images using the scraping tool. I did a lot of work but I am confused: even though I got links to all listed items, I couldn't get the links to the images in the individual listing shown in the new tab. Please guide me
@CreativeCorners2 жыл бұрын
I'm still waiting for your favorable reply please
@tanekapace80802 жыл бұрын
I’m interested in a bot that can fill out online forms at multiple websites. Kindly respond if you could help me
@yoanbello68912 жыл бұрын
great video as always. I have this error scraping a site with Python Playwright: "… intercepts pointer events, retrying click action, attempt #57". It's a heavy JavaScript site and I am trying to click a button. Thanks
@artabra10192 жыл бұрын
Nice, the asdict method saves much time
@disrael21012 жыл бұрын
Source code
@scottmiller25912 жыл бұрын
This was a good example of how to get started, but I still had some questions:
- In your opinion, why are httpx and selectolax better than requests and BeautifulSoup?
- There are so many places where things can fail - status code != 200, the website sends you to an "I'm busy" page, etc. - that are missing here. If you are communicating with an unreliable website, this code may fail even in the hobby application, much less something that is scraping professionally. Is there anything in httpx/selectolax that helps with exception handling compared to requests/BS4?
@JohnWatsonRooney2 жыл бұрын
HTTPX has async ready for you when you need it, and selectolax is a much faster parser than BS4. It still comes down to preference: use what works for you! Yes, in this video I didn't flesh it out fully with error handling, retries and other parts that would make the script more complete for professional use. I didn't want to cover too much in one go, and I also wanted to reach as many people as possible
@scottmiller25912 жыл бұрын
@@JohnWatsonRooney Thanks for your prompt and useful reply!
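On the error-handling point, one possible shape for a more defensive fetch step, sketched with arbitrary retry limits and a stand-in get callable rather than any specific library:

```python
import time

class FetchError(Exception):
    """Raised when a page can't be fetched after all retries."""

def fetch_html(get, url, retries=3, backoff=0.5):
    """get(url) is your HTTP call (e.g. httpx.get). Returns the body text,
    or raises FetchError once the retries are exhausted."""
    last_error = None
    for attempt in range(retries):
        try:
            resp = get(url)
            if resp.status_code == 200:
                return resp.text
            # Covers 5xx pages, "I'm busy" redirects served as non-200, etc.
            last_error = FetchError(f"status {resp.status_code} for {url}")
        except Exception as exc:  # network errors, timeouts, ...
            last_error = FetchError(f"{exc} for {url}")
        time.sleep(backoff * (attempt + 1))  # simple linear backoff
    raise last_error
```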
Hey, thank you so much for your explanation! What happens when products have this structure? ... ... ... ... ... I can't get anything from this web page. I tried everything, like html.css("div.Fractal-ProductCard__productcard--container") or html.css("div.productcard--container") or html.css("div.t:m|n:productcard|v:default") and much more
@nadavnesher86412 жыл бұрын
Hi John,
Thanks for the awesome video! I really like your clear explanations. I was trying to run your code on a Google Search page but got into some difficulties. I was hoping you could please tell me what I'm doing wrong.
The div class I'm trying to grab: (which represents a Google Search result). But what's returned is an empty list: []

def parse_queries(html):
    queries = html.css("div.MjjYud")
    print(queries)

I therefore cannot grab the nested "div", "h3", and "cite" classes which hold the information I require to populate my dataclass attributes (website address, website title, website text). For example: address --> title --> text --> (*)
As you suggested, I also looked at the page source and did find this "MjjYud" div class.

My code:

import httpx
from selectolax.parser import HTMLParser
from dataclasses import dataclass, asdict
import csv

@dataclass
class Query:
    website: str
    title: str
    information: str

def get_html():
    url = "www.google.com/search?q=data+science+courses"
    resp = httpx.get(url)
    html = HTMLParser(resp.text)
    return html

def parse_queries(html):
    queries = html.css("div.MjjYud")
    print(queries)
    results = []
    for item in queries:
        new_item = Query(
            website=item.css_first("cite.iUh30 qLRx3b tjvcx").text(),
            news_title=item.css_first("h3.LC20lb MBeuO DKV0Md").text(),
            textual_info=item.css_first("div.VwiC3b yXK7lf MUxGbd yDYNvb lyLwlc lEBKkf").text()  # the inside
        )
        results.append(asdict(new_item))
    print("new_item")
    return results

def to_csv(res):
    with open("results.csv", "a") as f:
        writer = csv.DictWriter(f, fieldnames=["website", "news_title", "textual_info"])
        writer.writerows(res)

def main():
    html = get_html()
    res = parse_queries(html)
    to_csv(res)

main()

Thank you very much for taking the time to read my comment 🙏🏼