This video is great, John. I watch your videos with great excitement.
@elmzlan · 5 months ago
Please create a Course!!!!
@aimattant · 4 months ago
Great content, love this quick way. A few things: 1) Now I just need to figure out the Google Sheets step in the pipeline. Do you have a video on this? 2) Can you use cron scheduling with this to scrape every 20 minutes? 3) You are the best scraping-tutorial guy out there. I will bring some clients your way in the future.
@JohnWatsonRooney · 4 months ago
Thank you, very kind! I have an old video on Google Sheets; the Python package is called gsheets, but I haven't used it for a number of years so I'm not sure it still works. Yes to cron, I do this all the time. A video is coming soon on how to run code in the cloud on a cron job schedule!
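For the cron question above: a crontab entry like this sketch would run a scraper every 20 minutes. The interpreter and script paths are assumptions, adjust them for your own setup:

```shell
# Edit your crontab with `crontab -e`, then add a line like this.
# Runs every 20 minutes; paths to the venv and script are hypothetical.
*/20 * * * * /home/user/venv/bin/python /home/user/scraper/run_spider.py >> /home/user/scraper/cron.log 2>&1
```

The `>> ... 2>&1` part appends both stdout and stderr to a log file, which makes unattended failures much easier to diagnose.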
@aimattant · 4 months ago
@@JohnWatsonRooney Thanks. I tried the pipeline with Google Sheets; maybe I'm missing something. After the data is extracted to a CSV file it finishes, but no data is pushed to the Google Sheet. I'll keep working on it. Looking forward to that video on cron jobs.
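For anyone stuck on the same step: a minimal sketch of pushing CSV rows to a Google Sheet with the gspread package (a commonly used alternative to the gsheets package mentioned above). The sheet name and CSV path are assumptions, and it requires a service-account JSON set up per the gspread docs:

```python
import csv

def csv_to_rows(path):
    """Read a CSV file into a list of rows (each row a list of strings)."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.reader(f))

def push_to_sheet(rows, sheet_name="scraped-data"):
    """Append rows to the first worksheet of a Google Sheet.

    Assumes gspread is installed and a service-account credentials file
    exists (by default ~/.config/gspread/service_account.json).
    """
    import gspread  # deferred so csv_to_rows works without gspread installed
    gc = gspread.service_account()
    ws = gc.open(sheet_name).sheet1
    ws.append_rows(rows, value_input_option="RAW")

# Usage (hypothetical file/sheet names):
#   push_to_sheet(csv_to_rows("output.csv"), sheet_name="scraped-data")
```

If nothing appears in the sheet, the usual culprits are the service account not being shared on the spreadsheet, or the sheet name not matching exactly.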
@stevensilitonga · 5 months ago
When should I use Scrapy, and when should I use aiohttp + selectolax? Thanks!
@A_Warmachine · 5 months ago
Thanks! How can I reach you in person? I need help customising my code.
@heroe1486 · 5 months ago
Hi, first of all thanks for the video. Scrapy seems a bit like Django in the sense that you can choose to use all of its "magic" or ignore most of it to make things less black-boxy and more customizable. My question is: how much of Scrapy do you advise using? For example, here you're using follow_all, but in your "150k products" video you just used the more intuitive scrapy.Request with a simple loop, which would have been possible here as well.
@JohnWatsonRooney · 5 months ago
I usually lean toward creating my own requests with yield scrapy.Request, but they are both different ways of achieving the same thing, so it's up to you. Think of it as a request-response cycle, and how you choose to go about it is your decision. I use Scrapy more and more now and utilise lots of its magic!
@karthikbsk144 · 5 months ago
Great content. Can you please let me know how you set up Neovim and installed the packages? Any tutorials, please?
@einekleineente1 · 5 months ago
Great video. Any rough estimate of what the proxy costs for this job total up to?
@JohnWatsonRooney · 5 months ago
Depends on the price per go, but maybe $1.
@einekleineente1 · 5 months ago
@@JohnWatsonRooney Wow! That sounds very reasonable! I was worried it was more in the $10+ range...
@proxyscrape · 5 months ago
You can always try checking the average request size and calculating the estimated total usage :)
@arturdishunts3687 · 5 months ago
How do you bypass cloudflare?
@AllenGodswill-im3op · 5 months ago
This style will probably not work on Amazon.
@BhuvanShivakumar · 5 months ago
I watch your videos to learn how to scrape, and I'm doing a project to scrape a university website, but I'm unable to do it. The uni website has many hyperlinks, and when I try to extract them, the extracted link and the word embedded with the link end up in two different columns. Can you please make a video on scraping a uni website to extract all the data?
@bakasenpaidesu · 5 months ago
First?
@proxyscrape · 5 months ago
Second 🤗
@larenlarry5773 · 5 months ago
Hey John, I'm also a fellow nvim user. I realised there might be better Vim motions for navigating around your editor, and some nvim plugins are available to train us to use them (precognition.nvim & hardtime.nvim). Hope that helps!