
Run LLM AI Model in Local machine with Zero effort (No Internet needed)⚡️

743 views

Execute Automation

1 day ago

In this video, we will discuss running a Large Language Model on our local machine with ZERO effort, with no internet needed once the model is fully downloaded (a minimal sketch appears after the list below).
Use LLMs like
→ LLAMA
→ LLAMA 2
→ MISTRAL 2
→ GPT 3.5
→ GPT 4 and more.
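The description doesn't name the specific tool used in the video, so as one hedged illustration of the "zero effort" idea, here is a minimal sketch using the GPT4All Python bindings; the model file name is illustrative and is downloaded automatically on first run, after which everything works offline.
```python
# Minimal local-LLM sketch using the GPT4All Python bindings (assumed tool,
# not confirmed by the video). The model file downloads once, then runs offline.
from gpt4all import GPT4All

# Illustrative model name - any GGUF model from the GPT4All catalogue works.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

with model.chat_session():
    reply = model.generate("Explain what a Large Language Model is.", max_tokens=200)
    print(reply)
```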
➜ Advanced courses available on Udemy with the latest discount coupon code EA_JUN_24
► [Advanced Framework development Course in Selenium] www.udemy.com/...
► [ E2E Testing with Cypress] www.udemy.com/...
#executeautomation #ai #artificialintelligence #llm #gpt #gpt4
For more articles and videos, please follow
► [ExecuteAutomation] executeautomat...
► [Twitter] @executeauto
► [Subscribe] @Execute Automation
► [Udemy] www.udemy.com/...
► [XUnit with Selenium] • XUnit with Selenium
► [Git Basics] • Git Basics - Everyday ...
► [SpringBoot for Testing] • Spring Boot for Automa...
Selenium and C#
******************
► [C# for automation testing] • C# for Automation Testing
► [Selenium with C#] • Introduction to Seleni...
► [BDD with Specflow] • BDD and Specflow
► [BDD with Selenium] • BDD with Selenium and ...
► [Selenium .NET Core] • Playlist
Selenium & Java
******************
► [Cucumber with Selenium] • Section 1 - Cucumber w...
► [Cucumber with Selenium] • Section 2 - Cucumber W...
► [Cucumber 4 Upgrade] • Section 3 - Upgrade to...
► [Selenium Grid] • Selenium Grid
► [Selenium framework development] • Selenium Framework Des...
► [Selenium 4] • Selenium 4
► [Selenium Grid with Docker] • Selenium Grid with Docker
CI/CD with Microsoft Technologies
*************************************
► [Azure DevOps Service] • Azure DevOps Service 2019
► [Automated Build deployment] • Automated Build+Deploy...
► [Build + Deploy + Test with Jenkins] • Build+Deploy+Test with...
Docker & Kubernetes
************************
► [Understanding ABC of Docker] • Understanding ABC of D...
► [Understanding Docker for Windows] • Understanding Docker f...
► [Selenium Grid with Docker] • Selenium Grid with Docker
► [Kubernetes for Testers] • Kubernetes for Testers
Mobile Testing
****************
► [Understanding Appium] • Introduction to Appium...
► [Appium with C#] • Introduction to Appium...
► [Appium with Java] • Setting stage ready fo...
► [Appium with C# (Advanced)] • Introduction to Appium...
► [Appium Framework development] • Introduction to appium...
► [Mobile Automation testing with Xamarin.UITesting] • Part 1 - Introduction ...
► [Android automation with Robotium] • Part1 - Introduction t...
► [Flutter app automation with Flutter Driver] • Part 1 - Introduction,...

Comments: 7
@kafkaesqued
@kafkaesqued 2 months ago
Can we consider the indexing process as equivalent to training?
@ExecuteAutomation
@ExecuteAutomation 2 months ago
That’s correct
@kishanlal676
@kishanlal676 2 months ago
Not exactly. The indexing process just takes all your PDF text and turns it into vectors (think of it as a numerical representation of the PDF content in a vector space) using an embedding model, in this case the SBert model. When we ask a question, our query is also turned into a vector. With both the query and the PDF content in vector form, it's easy to run a similarity search and find the top relevant results from the PDF. These similar results, along with our query (both in text form), get sent to the language model like Llama, so we get a spot-on answer without any unnecessary or irrelevant stuff. As you can see, we're not retraining our base model here, we're simply using it to extract accurate answers from the top results. This process is called "Retrieval-augmented generation (RAG)". You can read more about it online if you're curious.
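For readers who want to see the index → retrieve → generate flow described in this comment, here is a minimal sketch assuming the sentence-transformers library for the SBert embeddings; the chunk texts, model name, and the final LLM call are illustrative placeholders, not the video's exact setup.
```python
# Sketch of the RAG flow described above (assumed library and names).
from sentence_transformers import SentenceTransformer, util

# 1. Index: turn the PDF text chunks into vectors with an SBert model.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
chunks = ["chunk 1 of the PDF ...", "chunk 2 of the PDF ...", "chunk 3 of the PDF ..."]
chunk_vectors = embedder.encode(chunks, convert_to_tensor=True)

# 2. Retrieve: embed the question and find the most similar chunks.
question = "What does the document say about pricing?"
query_vector = embedder.encode(question, convert_to_tensor=True)
hits = util.semantic_search(query_vector, chunk_vectors, top_k=2)[0]
context = "\n".join(chunks[hit["corpus_id"]] for hit in hits)

# 3. Generate: send only the retrieved chunks plus the question to the LLM.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# answer = local_llm.generate(prompt)   # placeholder for a local Llama call
```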
@kafkaesqued
@kafkaesqued 2 months ago
@@kishanlal676 Cool, makes sense. In the current context, this indexing is essentially speeding up the retrieval process, just like Windows search indexing or even MongoDB indexes.
@kishanlal676
@kishanlal676 2 months ago
@@kafkaesqued Exactly. We cannot send/upload a whole PDF to the LLM: it may take a long time, or it could error out due to token limitations. So we use vector search to find the portion of the document where the answer may reside and provide only that portion of the PDF to the LLM to get the exact answer to our question.
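A rough illustration of the token-limit point in this reply: the PDF text is split into chunks up front so that only a few retrieved chunks, not the whole document, are ever sent to the model. The chunk size and overlap below are assumed values, not the video's settings.
```python
# Illustrative chunking helper (assumed sizes): each chunk must fit well inside
# the model's context window, so only the retrieved chunks go to the LLM.
def split_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    """Split raw PDF text into overlapping character chunks."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks
```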
@debduttachatterjee-ng7os
@debduttachatterjee-ng7os 2 months ago
Can we use this as an API to call from inside code?
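Most local LLM runners do expose an HTTP endpoint that can be called from code. As one hedged example only (the video does not confirm which tool it uses), Ollama serves a REST API on its default port 11434.
```python
# Illustrative call to a locally hosted model over HTTP, assuming Ollama's
# REST API on its default port (the tool used in the video is not confirmed).
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "Summarise RAG in one sentence.", "stream": False},
    timeout=120,
)
print(response.json()["response"])
```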