I used to binge-watch Netflix; now I'm binge-watching all your videos. Thank you, Alex, for all your amazing videos!
@AlexTheAnalyst 1 year ago
Glad you like them!
@thepinner61 9 months ago
@@AlexTheAnalyst Thank you so much! Made my day
@ParalegalEagle-d7q 7 days ago
#Me2 kinda sorta ….🎉🎉🎉🎉😂😂😂😂😂
@VennisaOwusu-Barfi 9 months ago
I am pretty new to data analysis, and I was working on a project where I needed to scrape data from a website, so this tutorial has been so helpful! I spent hours trying to figure it out, and the other tutorials on YouTube don't explain anything or skip steps, so it's hard to learn and personalize it for your own project. This, however, was detailed and straight to the point! Thank you so much. You're a lifesaver!
@MichaelDavid-y3y 17 days ago
I remember watching this a few years ago when I was starting my journey; it is still the best tutorial I have watched. I am currently a senior engineer.
@shahrukhahmad4127 10 months ago
I tried learning web scraping at least 5 times and failed every time, but you made everything simple and approachable. Please, please, it's a request from my side: resume this playlist and teach basic to advanced scraping using Python. I can't learn it without you. Thank you in advance; I'm waiting for more of your videos in this playlist, Alex.
@franciscoflor6125 1 year ago
You are the best; your videos have really helped me a lot. But this Web Scraping series has been like you were reading my mind: I was thinking of doing a project on my own, and the only way to get the data is through web scraping. Waiting for the next video. One question I have is how to proceed if I want to extract the hockey team information from page 2, 3, etc.
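On the pagination question above: on the practice site used in this series, the page number is just a URL query parameter, so you can loop over it. A minimal sketch (assuming the parameter is called page_num, which is worth confirming by clicking page 2 in your browser and reading the address bar):

```python
# Build the URL for each page of hockey team results. The fetch and
# parse steps from the tutorial would then run once per URL.
base_url = "https://www.scrapethissite.com/pages/forms/?page_num={}"

def page_urls(last_page):
    """Return the URLs for pages 1 through last_page."""
    return [base_url.format(n) for n in range(1, last_page + 1)]

for url in page_urls(3):
    # Here you would fetch and parse each page, e.g.:
    #   page = requests.get(url)
    #   soup = BeautifulSoup(page.text, "html")
    #   rows.extend(soup.find_all("tr", class_="team"))
    print(url)
```

The row tag and class name in the comment are illustrative; check the page source for the actual markup before relying on them.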
@ErenKıraç-g5m 2 months ago
You don't need a separate find() call to get text; just try soup.find_all(arguments...)[x].text.strip(). You can write 0, 1, 2, 3, ... for x depending on which data you want. For example, at 10:15, with x=1 the text should be "Year", because index 1 is the second element (Python indexing starts at 0).
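To make that tip concrete, here is a tiny self-contained example (the HTML is made up, not the page from the video):

```python
from bs4 import BeautifulSoup

# Toy HTML standing in for the table headers in the tutorial.
html = """
<table>
  <tr>
    <th>Team Name</th>
    <th>Year</th>
    <th>Wins</th>
  </tr>
</table>
"""
soup = BeautifulSoup(html, "html.parser")

# find_all() returns a list, so indexing with [x] selects one match
# directly -- no separate find() call is needed before .text.
print(soup.find_all("th")[1].text.strip())  # index 1 = second match -> Year
```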
@ENTJ616 1 year ago
Mate, you are out of this world.
@nnamdiLdavid 10 months ago
Thanks for all you do, Alex. Could you be so kind as to continue this series, especially for advanced scraping, like scraping unstructured data, etc.?
@katcirce 1 month ago
Thank you for this! Awesome starting point for my nlp project!
@kaliportis 1 year ago
Hello, I commented on one of your previous videos enquiring about the offer you made in one of your "How to Build a Resume" videos concerning resume reviews. I completely understand if that is no longer the case, considering that video was 3 years ago, but if you are still reviewing resumes I would love to send mine to you. Have a nice day, and congratulations on hitting 500k.
@chu1452 1 year ago
As an Informatics Engineering graduate, this is easier for me to understand, since we learned HTML back then.
@jmc1849 6 months ago
Hi Alex (as if!) Thanks for all the content
@kajal648 7 months ago
Thank you so much, sir. I was caught up in a problem, but I was able to solve it after watching this video.
@meryemOuyouss2002 10 months ago
Thank you, I also finished this playlist.
@ShivaSunkaranam-qx3jf 5 months ago
If I type soup. Find('div'), nothing displays, but that tag is there in the page source.
@ArisingProgram 5 months ago
Hey Alex, I'm trying to grab text that is randomly generated on the Random Word Generator website for my hangman project. The problem is that the text I grab isn't in the HTML; it always shows up as "loading...". What new techniques can you teach us for grabbing this kind of data? Thanks!
@Kaura_Victor 5 months ago
Thanks, Alex!
@mxdigitalmediamarketplace 7 months ago
Hello, thank you for your tutorial; great info. What editor do you use?
@LavanyaGopal-py6jd 5 months ago
Hello, thank you so much for this wonderful tutorial. However, I have one doubt that needs clarifying: I tried this out with the same code and URL you used, but there seems to be a problem with this line -> print(Soup.find_all('p',class_="lead")). The output of this line is [], which isn't the paragraph from the website. How do I fix this? Also, I use IDLE for Python. Once again, your videos are awesome, and I hope you continue making more great coding content.
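For anyone else who sees [] from find_all: an empty list means nothing matched the tag name plus class you asked for, which usually points to a typo in the class name, a different page than expected, or content rendered by JavaScript. A toy example (made-up HTML, not the site from the video) showing the behaviour:

```python
from bs4 import BeautifulSoup

html = '<p class="lead">First paragraph</p><p class="intro">Second</p>'
soup = BeautifulSoup(html, "html.parser")

# A matching tag-and-class pair returns the element(s):
print(soup.find_all("p", class_="lead"))   # one <p> in the list

# Any mismatch -- here just a capitalization difference -- returns []:
print(soup.find_all("p", class_="Lead"))   # -> []
```

If you get [], it can also help to print(page.status_code) and a slice of page.text to confirm the downloaded HTML actually contains the class you are searching for.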
@DeltaXML_Ltd 1 year ago
Interesting video, keep it up!
@mohammed-hananothman5558 13 days ago
.find_all(...).text does not show the ' ' on my PC, even though you could see the escape character at work. Is there a setting I could use to show these characters so I can clean the text easily?
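A general Python tip related to the question above (not something shown in the video): print() renders escape characters invisibly, but repr() displays them, which makes hidden whitespace easy to spot before cleaning:

```python
# A string like the scraped text, with hidden whitespace around it.
text = "  Team Name\n"

print(text)                 # the spaces and newline are invisible here
print(repr(text))           # -> '  Team Name\n' (escapes are shown)
print(repr(text.strip()))   # -> 'Team Name' after cleaning
```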
@monsieurm2904 9 months ago
Where can we find the same notebook page you use throughout the video? :)
@Syrviuss 1 year ago
Does it only work with static pages, not ones like Amazon or other shops? There were some problems with the past tutorial when we tried Amazon web scraping using Python. How can we know the difference? Thanks for all your videos ;)
@geoffreycg5650 7 months ago
Is there a next video in the series?
@rockcaesarpaper291 1 year ago
@elphasluyuku4167 1 year ago
Hey guys, I am getting 'SSLCertVerificationError'. Can anyone kindly help me resolve this?
@vahidmehdizade5781 1 year ago
You can fix this with the two lines below. The error typically occurs when there is a problem verifying the remote server's SSL certificate during an HTTPS connection, i.e. when the certificate cannot be verified:
requests.packages.urllib3.disable_warnings()
page = requests.get(url, verify=False)
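Putting that workaround in context (a sketch, not from the video; the example URL is just the practice site used in this series): verify=False disables certificate checking entirely, so only use it on sites you trust.

```python
import requests

# Silence the InsecureRequestWarning that urllib3 emits when
# certificate verification is turned off.
requests.packages.urllib3.disable_warnings()

def get_insecure(url):
    """Fetch a page without verifying the server's SSL certificate.

    This trades away the protection HTTPS certificate checks provide,
    so treat it as a last resort for trusted sites only.
    """
    return requests.get(url, verify=False)

# Example (a network call, so commented out here):
# page = get_insecure("https://www.scrapethissite.com/pages/forms/")
# print(page.status_code)
```

A cleaner long-term fix is often updating the certifi package (pip install --upgrade certifi) so that verification succeeds instead of being skipped.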