This AI Agent can Scrape ANY WEBSITE!!!

38,565 views

Reda Marzouk

1 day ago

In this video, we'll create a Python script together that can scrape any website with only minor modifications.
_______ 👇 Links 👇 _______
🤝 Discord: / discord
💼 LinkedIn: / reda-marzouk-rpa
📸 Instagram: / redamarzouk.rpa
🤖 YouTube: / @redamarzouk
Website: www.automation-campus.com/
FireCrawl: www.firecrawl.dev/
Github repo: github.com/redamarzouk/Scrapi...
_______ 👇 Content👇 _______
Introduction to Web Scraping with AI - 0:00
Advantages Over Traditional Methods - 0:36
Overview of FireCrawl Library - 1:13
Setting Up FireCrawl Account and API Key - 1:24
Scraping with FireCrawl: Example and Explanation - 1:36
Universal Web Scraping Agent Workflow - 2:33
Setting Up the Project in VS Code - 3:52
Writing the Scrape Data Function - 5:41
Formatting and Saving Data - 6:58
Running the Code: First Example - 10:14
Handling Large Data and Foreign Languages - 13:17
Conclusion and Recap - 17:21
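
For readers skimming the description, here's a minimal sketch of the idea the video builds on: scrape a page to markdown with Firecrawl, then have an LLM structure it. This assumes the firecrawl-py and openai packages; method and key names may differ across library versions, and the URL and fields are placeholders.

```python
import json

from firecrawl import FirecrawlApp
from openai import OpenAI

app = FirecrawlApp(api_key="fc-...")  # Firecrawl API key from firecrawl.dev
client = OpenAI()                     # reads OPENAI_API_KEY from the environment

# 1) Scrape: Firecrawl fetches and renders the page, returning LLM-ready markdown.
page = app.scrape_url("https://example.com/listings")
markdown = page["markdown"]

# 2) Parse: ask the model to pull out the fields we care about as JSON.
fields = ["address", "price", "beds", "baths"]
resp = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},
    messages=[
        {"role": "system",
         "content": f"Extract {fields} from the page text and reply with a JSON "
                    "object containing a 'listings' array."},
        {"role": "user", "content": markdown},
    ],
)
print(json.loads(resp.choices[0].message.content))
```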

Comments: 112
@redamarzouk · 23 days ago
Hey everyone! 😊 I'm curious about your thoughts: was the explanation and flow of the video too fast, or was it clear and to the point?
@nikhai321 · 22 days ago
It was perfect!
@roblesrt · 22 days ago
It's clear and easy to follow. Thanks for sharing! Just subscribed & tweeted as well :)
@truehighs7845 · 22 days ago
Is there a free tier / community edition, by installing the Firecrawl repo locally and generating a local API?
@redamarzouk · 21 days ago
@@truehighs7845 Everything I've done in this video was free; they give you enough free credits at first, so you can use all of those and then switch to Jina AI, which is still free for now.
@telluricscout · 19 days ago
@@redamarzouk Pace is OK, but the text is too small.
@todordonev · 21 days ago
Web scraping as it is right now is here to stay, and AI will not replace it (it can only enhance it in certain scenarios).

First of all, the term "scraping" is tossed around everywhere and used vaguely. When you "scrape", all you do is move information from one place to another, for example getting a website's HTML into your computer's memory. Then comes "parsing", which is extracting different entities from that information, for example extracting the product price and title from the HTML we "scraped". These are separate actions; they are not interchangeable, one is not more important than the other, and one can't work without the other. Both come with their own challenges. What these kinds of videos promise to fix is the "parsing" part. It doesn't matter how advanced AI gets, there is only ONE way to "scrape" information, and that is to make a connection to the place the information is stored (whether it's an HTTP request, browser navigation, an RSS feed request, an FTP download, or a stream of data). It's just semi-automated in the background.

Now that we have the fundamentals, let me clearly state this: for the vast majority (99%) of cases, "web scraping with AI" is a waste of time, money, resources, and our environment.

Time: it's deceiving. While AI promises to extract information with a "simple prompt", you'll need to iterate over that prompt quite a few times to get a somewhat reliable data-parsing solution. In that time you could have built a simple Python script to extract the data required. More complicated scenarios will affect both the AI and the traditional route.

Money: you either use 3rd-party services for LLM inference or you self-host an LLM. Both solutions will, in the long term, be orders of magnitude more expensive than a traditional Python script.

Resources: a lot of people don't realize this, but running an LLM for cases where an LLM is not needed is extremely wasteful. I've run scrapers on old computers, Raspberry Pis, and serverless functions; that's a speck of dust in hardware requirements compared to running an LLM on an industrial-grade machine with powerful GPU(s).

Environment: given the resources needed, this affects our environment greatly, as new and more powerful hardware needs to be invented, manufactured, and run. For the people that don't know, AI inference machines (whether self-hosted or 3rd-party) are powerhouses, thus a lot of watt-hours wasted, fossil fuels burnt, etc.

Reliability: "parsing" information with AI is quite unreliable, mainly because of the nature of how LLMs work, but also because a lot more points of failure are introduced (information has to travel multiple times between services, LLM models change, you hit usage and/or budget limits, LLMs experience high loads and inference speed suffers or fails altogether, etc.).

Finally: most "AI extraction" is marketing BS letting you believe you'll achieve something that requires a human brain and workforce with just "a simple prompt". I've been doing web automation and data extraction for more than a decade for a living, and I've also started incorporating AI in some rare cases where traditional methods just don't cut it.

All that being said, for the last 1% of cases where it does make sense to use AI for data parsing, here's what I typically do (after the information is already scraped):

1. First I remove the vast majority of the HTML. If you need an article from a website, it's not going to be in the script, style, nav, or footer tags (you get the idea), so using a Python library (I love lxml) I remove all of those tags along with their content. Since we are just looking for an article, I also remove ALL of the HTML attributes, like classes (big one), ids, and so on. After that I remove all the parent/sibling cases where it looks like a useless staircase of tags. I've tried converting to markdown and parsing, and I've tried parsing with a screenshot, but this method is vastly superior because the important HTML elements are still present and LLMs have good general knowledge of HTML. This step makes each request at least 10 times cheaper and lets us use models with smaller context sizes.

2. I then manually copy the article content that I need and put it, along with the resulting string from step 1, into a JSON object plus prompts to extract an article from the given HTML. I do this at least 15 times. This is the step where the training data is created.

3. Then I fine-tune a GPT-3.5-Turbo model with that JSON data. After 10-ish minutes of fine-tuning and around $5-10, I have an "article extraction fine-tuned model" that will outperform any agentic solution in all areas (price, speed, accuracy, reliability). Then I just feed the model a new (unseen) piece of HTML that has passed step 1, and it reliably spits out the article for a fraction of a cent in a single step (no agents needed).

I have a few of those running in production for clients (for different datapoints), and they do very well, but it's important that a human goes over the results every now and again. Also, if there is an edge case and the fine-tune did not perform well, you just iterate and feed it more training data, and it just works.
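A minimal sketch of the first cleanup step described above, assuming lxml is installed (pip install lxml); the tag list and function name are illustrative, not from the original comment:

```python
from lxml import html

# Tags whose content is almost never part of the article itself.
NON_CONTENT_TAGS = ["script", "style", "nav", "header", "footer", "aside", "form", "iframe"]

def clean_html(raw_html: str) -> str:
    """Strip non-content tags and all attributes to shrink the LLM input."""
    tree = html.fromstring(raw_html)

    # Remove non-content tags along with their content.
    for tag in NON_CONTENT_TAGS:
        for node in tree.findall(f".//{tag}"):
            node.getparent().remove(node)

    # Drop every attribute: classes (big one), ids, inline styles, and so on.
    for node in tree.iter():
        if isinstance(node.tag, str):  # skip comments and processing instructions
            node.attrib.clear()

    return html.tostring(tree, encoding="unicode")
```

The staircase-flattening and fine-tuning steps are left out; this covers only the stripping that makes each request roughly 10x cheaper.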
@ilianos · 20 days ago
Thanks for taking the time to explain this! A very useful clarification!
@rafael_tg · 19 days ago
Thanks man. I'm specializing in web scraping in my career. Do you have a blog or similar where you share content about web scraping as a career?
@morespinach9832 · 19 days ago
Nonsense. Scraping has for 10 years included both fetching data and then structuring it in some format, XML or JSON. Then we can do whatever we want with that structured data. Introducing "parsing" as some distinct construct is inane. More importantly, the way scraping can work today is leagues better than what the likes of Apify used to do until two years ago, and yes, this uses LLMs. Expand your reading.
@morespinach9832 · 19 days ago
@@ilianos his "explanation" is stupid.
@morespinach9832 · 19 days ago
@@rafael_tg watch more sensible videos and comments.
@LucesLab · 23 days ago
Very helpful. Great job, and thanks for sharing.
@redamarzouk · 23 days ago
Really appreciate the kind words, thank you.
@paulham.2447 · 23 days ago
Good work! Nice presentation, nice code! 😃 It will help me a lot. Thanks, Reda.
@redamarzouk · 23 days ago
Appreciate the nice words, you're welcome!
@tal7atal7a66 · 22 days ago
Hey, you're doing great, Reda! Scrape that data for me, cousin, and fire up those agents 💪
@ginocote · 22 days ago
It's easy to do this with free Python libraries: read the HTML, convert it to markdown, even convert it to vectors for free with a transformer, etc.
@actorjohanmatsfredkarlsson2293 · 22 days ago
Exactly, I didn't really understand the point of Firecrawl in this solution. Does Firecrawl do anything better than a free Python library? Any suggestions on Python libraries, btw?
@morespinach9832 · 19 days ago
Have you used it on complex websites with iframes or many ads, or logins, or progressive JS-based loads, or infinite scrolls? Clearly not.
@redamarzouk · 18 days ago
Firecrawl has 5K stars on GitHub, Jina AI has 4K, and ScrapeGraph has 9K. Saying that you can just implement these tools easily is frankly disrespectful to the developers who created these libraries and made them open source for the rest of us.

In the example I covered, I didn't show the capability of filtering the markdown to keep only the main content of a page, nor did I show how to scrape using a search query. I've done scraping professionally for 7+ years now, and the number of problems you can encounter is immense, from websites blocking you, to table-looking elements that are in fact just a chaos of divs, to iframes...

About vectorizing your markdown: I once did that on my machine in a "chat with PDF" project, and with just 1,024 dimensions and 20 pages of PDF I had to wait long minutes to generate the vector store, which then has to be searched for every request, also locally (not everyone has the hardware for that).
@iokinpardoitxaso8836 · 22 days ago
Amazing video and great explanations. Many thanks.
@redamarzouk · 22 days ago
Appreciate it, thank you for the kind words!
@SJ-rp2bq · 21 days ago
In the US, a “bedroom” is a room with a closet, a window, and a door that can be closed.
@benoitcorvol7482 · 22 days ago
Damn, that was good man!
@redamarzouk · 22 days ago
Glad you liked it, my pleasure 🙏
@tirthb · 22 days ago
Thanks for the helpful content.
@redamarzouk · 21 days ago
You're most welcome!
@user-se9qv5pi1q · 22 days ago
You said that sometimes the model returns the response with different key names, but if you pass a Pydantic model to the OpenAI model as a function, you can expect an invariant object with exactly the keys that you need.
@user-se9qv5pi1q · 22 days ago
Also, Pydantic models can be scripted to have a nested structure, in contrast to JSON schemas.
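A minimal sketch of that approach, assuming the openai (v1) and pydantic (v2) packages; the model choice, field names, and input file are illustrative:

```python
from openai import OpenAI
from pydantic import BaseModel

class Listing(BaseModel):
    address: str
    price: float
    beds: int
    baths: int

client = OpenAI()  # reads OPENAI_API_KEY from the environment
page_markdown = open("page.md").read()  # scraped page content from an earlier step

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Extract the listing from the page text."},
        {"role": "user", "content": page_markdown},
    ],
    tools=[{
        "type": "function",
        "function": {
            "name": "save_listing",
            "description": "Save one extracted real-estate listing.",
            "parameters": Listing.model_json_schema(),  # Pydantic emits the JSON schema
        },
    }],
    tool_choice={"type": "function", "function": {"name": "save_listing"}},
)

args = completion.choices[0].message.tool_calls[0].function.arguments
listing = Listing.model_validate_json(args)  # validation fails loudly if keys drift
```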
@redamarzouk · 22 days ago
Correct. I actually used them while I was playing around with my code (alongside function calling). The issue I found is that I'd have to explain both the Pydantic schema and how I made it dynamic, because I want a universal web scraper that can use different fields every time we're scraping a different website. That ultimately would've made the video 30+ minutes long, so I opted for the easier, less performant way.
@tkp2843 · 22 days ago
Awesome videoooo!
@redamarzouk · 22 days ago
Appreciate it 🙏🙏
@Yassine-tm2tj · 22 days ago
In my experience, function calling is way better at extracting consistent JSON than just prompting. Anyway, God bless a son of my homeland.
@Chillingworth · 22 days ago
Good idea
@redamarzouk · 22 days ago
You're on point with this: using function calling is always better for JSON consistency. I actually used it when I was creating my original code. The issue is that I have a parameter, "fields", that can change depending on the type of website being scraped. So to account for that in my code, I either make the schema inside the function call generic (not so great) or I make it dynamic (I really didn't want to go there; it would have made the tutorial much more complicated). I also tried using Pydantic expressions, since Firecrawl has its own LLM extractor that can use them, but it didn't perform as well. But yeah, you're right, function calling is always better. May God protect you, dude.
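A minimal sketch of the dynamic-schema alternative mentioned above: build the function-calling schema at runtime from a plain fields list, so one script serves different websites. Names here are illustrative, not the video's actual code.

```python
def build_extraction_tool(fields: list[str]) -> dict:
    """Build a function-calling tool whose JSON schema matches the requested fields."""
    return {
        "type": "function",
        "function": {
            "name": "extract_records",
            "description": "Extract structured records from scraped page text.",
            "parameters": {
                "type": "object",
                "properties": {
                    "records": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "properties": {f: {"type": "string"} for f in fields},
                            "required": fields,
                        },
                    },
                },
                "required": ["records"],
            },
        },
    }

# Same script, different sites: only the field list changes.
real_estate_tool = build_extraction_tool(["address", "price", "beds", "baths"])
jobs_tool = build_extraction_tool(["title", "company", "salary", "location"])
```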
@Yassine-tm2tj · 22 days ago
@@redamarzouk You have a knack for this, bro. Keep up the good work. May God grant you success.
@YOGiiZA · 19 days ago
Helpful, thank you.
@redamarzouk · 19 days ago
Glad it helped!
@nabil-nc9sl · 21 days ago
God bless you, bro, mashallah.
@redamarzouk · 21 days ago
May God protect you.
@AmanShrivastava23 · 15 days ago
I'm curious: what do you do after structuring the data? Do you store it in a vector DB? If so, do you store the JSON as-is, or something else? And can it actually be completely universal? By that I mean, can it structure the data without us providing the fields it should structure it on? Can we make it so we upload a website and it understands the data and structures it accordingly?
@bls512 · 21 days ago
Neat overview. Curious about API costs associated with these demos. Try zooming into your code for viewers.
@morespinach9832 · 19 days ago
Watch on a big monitor, as most coders do.
@redamarzouk · 18 days ago
For just the demo you've seen, I spent $0.50; for creating the code and launching it 60+ times, I spent $3. I will zoom in next time.
@sharifulislam7441 · 1 day ago
Good technology to keep in the good books!
@shauntritton9541 · 21 days ago
Wow! The AI was even clever enough to convert square meters into square feet, no need to write a conversion function!
@karthickb1973 · 18 days ago
awesome bro
@redamarzouk · 18 days ago
Glad you liked it
@Chillingworth · 22 days ago
You could just ask GPT-4 once, per website, to generate the extraction code or the tags to look for, so that it doesn't always need to use AI for scraping, and you might get better results. Then, if that code fails, you fall back to regenerating it and cache it again.
@redamarzouk · 22 days ago
Creating a dedicated script for a website is the best way to get the exact data you want, you're right in that sense, and you can always fix it with GPT-4 as well. But let's say you're actively scraping 10 competitor websites where you only want their pricing updates and new offerings: does it make sense to maintain 10 different scripts rather than one script that can do the job with very minimal intervention? It depends on the use case, but there are times when customized scraping code isn't the best approach.
@Chillingworth · 22 days ago
@@redamarzouk I didn't mean it like that. I meant you would basically do the same thing as your technique, but you could use the AI once per domain, asking it what the CSS selectors are for the elements you're interested in. That way, when you're looking for updates, you don't even need to call the LLM unless extraction fails because the structure has changed. You don't even have to maintain multiple scripts; just keep a dictionary mapping the domain name to the CSS paths, and there you go. Of course, different pages may have different structures, but you could probably feed in the HTML from a few different pages of the site, tell GPT-4 the URLs and the markup, and have it figure out the URL pattern that matches the specific stuff to look for. You could even still do this with GPT-3.5-Turbo. Basically, the only idea I'm throwing out there is to ask the AI for the tag names and have your code simply extract the info using BeautifulSoup or something else that can grab info out of tags based on CSS query selectors. That way, you can cache that info and scrape faster after the initial run. It would only be a little more work but might be a lot better for some use cases. Just thought it was a cool idea.
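A minimal sketch of that caching idea, assuming requests and beautifulsoup4 are installed; ask_llm_for_selectors is a hypothetical helper standing in for the one-time GPT-4 call:

```python
from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup

def ask_llm_for_selectors(page_html: str) -> dict[str, str]:
    """Hypothetical one-time LLM call: given markup, return a field -> CSS selector map."""
    raise NotImplementedError  # e.g. prompt GPT-4 with the HTML and parse its JSON reply

# Cache of CSS selectors per domain, so repeat scrapes skip the LLM entirely.
SELECTOR_CACHE: dict[str, dict[str, str]] = {}

def scrape(url: str) -> dict[str, str] | None:
    domain = urlparse(url).netloc
    page_html = requests.get(url, timeout=30).text

    if domain not in SELECTOR_CACHE:
        SELECTOR_CACHE[domain] = ask_llm_for_selectors(page_html)

    soup = BeautifulSoup(page_html, "html.parser")
    result = {}
    for field, selector in SELECTOR_CACHE[domain].items():
        node = soup.select_one(selector)
        if node is None:
            return None  # structure changed: regenerate the selectors and retry
        result[field] = node.get_text(strip=True)
    return result
```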
@d.d.z. · 22 days ago
Thank you. I have a use case: can I use the tool to run queries against a database, save the results as your tutorial shows, and also print the result of every query to PDF?
@redamarzouk · 22 days ago
If you already have a database you want to run queries against, you don't need any scraping (unless you need to scrape websites to build that database). But yeah, it sounds like you can do that without any AI in the loop.
@titubhowmick9977 · 19 days ago
Very helpful. How do you work around the output limit of 4,096 tokens?
@redamarzouk · 18 days ago
Hello, if you're using the OpenAI API, you can add the parameter (max_tokens=xxxxxxxx) to your OpenAI client call, with a number that doesn't exceed the model's limit. (Note: the 128,000 figure for gpt-4o is its total context window; the per-call output cap is smaller, so for very large pages you'll typically split the input into chunks.)
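A minimal sketch of the usual workaround for the output cap: split the scraped markdown into chunks, cap each call's output, and merge the per-chunk results. The chunk size, the 'records' key, and the prompt wiring are illustrative assumptions.

```python
import json

from openai import OpenAI

def chunk_text(text: str, max_chars: int = 12_000) -> list[str]:
    """Naive character-based chunking; a token-based splitter would be tighter."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def extract_all(client: OpenAI, markdown: str, system_prompt: str) -> list[dict]:
    # system_prompt must ask for a JSON object (required by json_object mode).
    records = []
    for chunk in chunk_text(markdown):
        resp = client.chat.completions.create(
            model="gpt-4o",
            max_tokens=4096,  # per-call output cap, well under the context window
            response_format={"type": "json_object"},
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": chunk},
            ],
        )
        payload = json.loads(resp.choices[0].message.content)
        records.extend(payload.get("records", []))
    return records
```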
@nguyenstephen8479 · 4 days ago
Is it possible to scrape data based on a command the user enters? Let's say I want to search for a place to stay on Airbnb and I provide the necessary details of the place I'm looking for. Once I finish my command, I want the system/AI agent to automate the process of searching through multiple websites itself with the provided preferences, then scrape out the relevant data. How could I do that?
@nkofr · 18 days ago
Nice! Any idea how to self-host Firecrawl, e.g. with Docker? Also, can it be coupled with n8n? How?
@redamarzouk · 18 days ago
I've got to be honest, I didn't even try. I tried to self-host an agentic software tool before and my PC was going crazy; it couldn't take the load of Llama3-8B running on LM Studio, plus Docker, plus filming at the same time. I simply don't have the hardware for it. If you want to self-host, here is the link (it uses Docker): github.com/mendableai/firecrawl/blob/main/SELF_HOST.md
@nkofr · 18 days ago
@@redamarzouk Thanks. Does it make sense to use it with n8n? Or can n8n do the same without Firecrawl? (noob here)
@nkofr · 17 days ago
@@redamarzouk Or maybe with things like Flowise?
@JoaquinTorroba · 4 days ago
What other options are there besides Firecrawl? Thanks!
@JoaquinTorroba · 4 days ago
Just found it in the comments: "Firecrawl has 5K stars on GitHub, Jina AI has 4K, and ScrapeGraph has 9K."
@KCM25NJL · 22 days ago
Hmmmm, I mean... it's pretty good. BUT... and it's a pretty major but. For the sake of cost, I would much rather have a workflow that goes something like: URL -> get markdown -> use an LLM to build a BeautifulSoup script for that URL -> use that script for future hits on that site. Why? Because it's very unlikely that you'll write a script to hit a site only once. Perhaps a follow-up to your work would be something that does both: URL -> does the URL have a BS script? -> if yes, run that script and return the data -> if no, pass the markdown through the LLM and create a BS script -> run the BS script.
@MaliciousCode-gw5tq · 22 days ago
Agreed... tech salesmen keep popping up on YouTube...
@actorjohanmatsfredkarlsson2293 · 22 days ago
I've tried this. It's not as straightforward as the brute-force method. Even if LLM costs decrease enough, it might be more costly in the long run, given that the brute-force method is a set-and-forget, handles-all-tricks tool.
@jatinsongara4459 · 3 days ago
Can we use this for email and phone number extraction?
@ridabrahim7604 · 22 days ago
I just want to understand what Firecrawl's role is in all of this. It seems to me there's nothing special about it at all!!!
@simonren4890 · 22 days ago
Firecrawl is not open source!!!
@paulocacella · 22 days ago
You too nailed it. We need to refuse these false open-source projects that are in reality commercial endeavours. I use only FREE and OPEN code.
@redamarzouk · 22 days ago
Except it is. Refer to its repo; it shows how to run it locally: github.com/mendableai/firecrawl/blob/main/CONTRIBUTING.md
@paulocacella · 22 days ago
@@redamarzouk I'll take a look. Thanks.
@javosch · 21 days ago
But you are not using the open-source version, you are using their API... perhaps next time you could run it locally.
@everbliss7955 · 5 days ago
@@redamarzouk The open-source repo is still not ready for self-hosting.
@sammedia3d · 23 days ago
What is this "scraping" good for? I mean, what can you use it for? Sounds interesting though.
@rodrigoamora · 23 days ago
It's used to automate work that would otherwise have been done manually.
@redamarzouk · 23 days ago
Scraping in general is a huge industry. A lot of companies need to scrape data about their own products, for example to analyze customer reviews and detect trends in which products work better. (I know, why not use an API? You'd be surprised how few real-life businesses add APIs to their web apps.) Companies also scrape competitors' websites (no API possible in this case) to stay up to date with their pricing and align their products. Another use case is scraping for advertisers, because they have to analyze sentiment about a person or advertising agency before approaching them with a brand-deal offer. People also scrape (usually LinkedIn) for potential leads interested in a certain service (I receive a ton of emails a day because of that). I'm gonna stop here, but yeah, web scraping is quite important.
@ESmith · 22 days ago
Nefarious reasons: stealing content, creating SEO pages on competitor keywords, making bots for social media. Generally nothing of great value.
@PointlessMuffin · 21 days ago
Does it handle JavaScript, infinite scroll, and button-click navigation?
@morespinach9832 · 19 days ago
Yes, you can ask LLMs to do all that like a human would.
@zvickyhac · 18 days ago
Can I use Llama 3 / Phi-3 on a local PC?
@redamarzouk · 18 days ago
You theoretically can use them for data extraction, but you will need a large-context-window version of Llama 3 or Phi-3. I've seen a model where they extended the context length of Llama3-8B to 1M tokens. Keep in mind that your hardware needs to match the requirements.
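For anyone wanting to try: local servers such as LM Studio and Ollama expose an OpenAI-compatible API, so the same extraction code can be pointed at a local model. A minimal sketch, assuming LM Studio's default port; the base_url, model name, and input file are assumptions to adjust for your setup:

```python
from openai import OpenAI

# LM Studio (and similar local servers) speak the OpenAI chat-completions protocol.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

page_markdown = open("page.md").read()  # scraped markdown from an earlier step

resp = client.chat.completions.create(
    model="llama-3-8b-instruct",  # whatever model is loaded locally
    messages=[
        {"role": "system", "content": "Extract the price and title as JSON."},
        {"role": "user", "content": page_markdown},
    ],
)
print(resp.choices[0].message.content)
```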
@ilanlee3025 · 18 days ago
I'm just getting "An error occurred: name 'phone_fields' is not defined".
@Brodielegget · 22 days ago
Why do it this way if you can do this without coding? With Make.com, for example.
@redamarzouk · 22 days ago
Yeah, you can create the same process with no-code tools like Make or Zapier, or even with low-code tools like UiPath and Power Automate, but I just feel more in control of formatting my output and integrating my script with my other local processes when I use code. I still use no-code tools for other things.
@Byte-SizedTech-Trends · 21 days ago
Make and Zapier would get very pricey if this were automated at scale.
@user-xq4yj6ni8v · 22 days ago
Nice idea. Now wake me up when there are no credits involved (completely free).
@redamarzouk · 22 days ago
It's open source; this is how you can run it locally and contribute to the project: github.com/mendableai/firecrawl/blob/main/CONTRIBUTING.md
But honestly, as IT folks we've got to stop going after each other for wanting to charge for an app we've created. Granted, I'm not recommending this to my clients yet and $50/month is high, but if that's what they want to charge, it's really up to them.
@egimessito · 22 days ago
What about captchas?
@redamarzouk · 22 days ago
Websites don't like scrapers in general, so extensive scraping will need a VPN (one that can handle the volume of your scraping).
@egimessito · 22 days ago
@@redamarzouk Also, a VPN would not defend against captchas. They are there for a good reason, but it would be interesting to find a way around them to build tools for customers.
@santhoshkumar995 · 20 days ago
I get error code 429 when running the code: "You exceeded your current quota, ..."
@ilianos · 19 days ago
In case you haven't used your OpenAI API key in a while: they changed the way it works; you need to pay in advance to refill your quota.
@EddieGillies · 22 days ago
What about Angie's List 😢
@swinginsteel8519 · 22 days ago
"Beds" means the number of bedrooms.
@redamarzouk · 22 days ago
That makes more sense, thank you.
@stanpittner313 · 22 days ago
$50 monthly fee 🎉😂😅
@redamarzouk · 22 days ago
I actually filmed an hour of footage and wanted to go through the financials of this method and whether it makes sense, but I edited that part out to keep the video under 30 minutes. I agree $50 is high, and the markdown needs to be high quality so the token count, and therefore the LLM cost, stays low. BTW, I'm not sponsored in any way by Firecrawl; I was going to cover Jina AI or ScrapeGraphAI, which do the same thing, before deciding on Firecrawl.
@squiddymute · 22 days ago
Another API key to pay for? What's the point of this, really?
@paulocacella · 22 days ago
You nailed it. We need to refuse these false open-source projects that are in reality commercial endeavours. I use only FREE and OPEN code.
@IdPreferNot1 · 23 days ago
Looking forward to the day when all the effort wasted on web-scraping warfare, and the costs of its voodoo, are made irrelevant by a sufficiently powerful open-source model run locally. It's a BS industry that should be made obsolete.
@AI-Wire · 23 days ago
When agents become ubiquitous, it will no longer make economic sense for websites to block robots with captchas and anti-scraping tech.
@redamarzouk · 23 days ago
Totally agree with you, and we do have modified Llama3-8B models that can handle up to 1M tokens. With a state-of-the-art GPU you can run one on your machine. The problem is that the consistency of small models is not there yet. I see better results with Phi-3, but it simply doesn't have the context window to handle the markdown I've shown in this video. Hopefully we'll get there.