Run LLM AI Model in Local machine with Zero effort (No Internet needed)⚡️

  744 views

Execute Automation

1 day ago

In this video, we will discuss running a Large Language Model on our local machine with zero effort; once the model is fully downloaded, no internet connection is needed.
Use LLMs like
→ LLaMA
→ LLaMA 2
→ Mistral
→ GPT-3.5
→ GPT-4 and more.
➜ Advanced courses available on Udemy with the latest discount coupon code EA_JUN_24
► [Advanced Framework development Course in Selenium] www.udemy.com/...
► [ E2E Testing with Cypress] www.udemy.com/...
#executeautomation #ai #artificialintelligence #llm #gpt #gpt4
For more articles and videos, please follow
► [ExecuteAutomation] executeautomat...
► [Twitter] @executeauto
► [Subscribe] @Execute Automation
► [Udemy] www.udemy.com/...
► [XUnit with Selenium] • XUnit with Selenium
► [Git Basics] • Git Basics - Everyday ...
► [SpringBoot for Testing] • Spring Boot for Automa...
Selenium and C#
******************
► [C# for automation testing] • C# for Automation Testing
► [Selenium with C#] • Introduction to Seleni...
► [BDD with Specflow] • BDD and Specflow
► [BDD with Selenium] • BDD with Selenium and ...
► [Selenium .NET Core] • Playlist
Selenium &Java
******************
► [Cucumber with Selenium] • Section 1 - Cucumber w...
► [Cucumber with Selenium] • Section 2 - Cucumber W...
► [Cucumber 4 Upgrade] • Section 3 - Upgrade to...
► [Selenium Grid] • Selenium Grid
► [Selenium framework development] • Selenium Framework Des...
► [Selenium 4] • Selenium 4
► [Selenium Grid with Docker] • Selenium Grid with Docker
CI/CD with Microsoft Technologies
*************************************
► [Azure DevOps Service] • Azure DevOps Service 2019
► [Automated Build deployment] • Automated Build+Deploy...
► [Build + Deploy + Test with Jenkins] • Build+Deploy+Test with...
Docker & Kubernetes
************************
► [Understanding ABC of Docker] • Understanding ABC of D...
► [Understanding Docker for Windows] • Understanding Docker f...
► [Selenium Grid with Docker] • Selenium Grid with Docker
► [Kubernetes for Testers] • Kubernetes for Testers
Mobile Testing
****************
► [Understanding Appium] • Introduction to Appium...
► [Appium with C#] • Introduction to Appium...
► [Appium with Java] • Setting stage ready fo...
► [Appium with C# (Advanced)] • Introduction to Appium...
► [Appium Framework development] • Introduction to appium...
► [Mobile Automation testing with Xamarin.UITesting] • Part 1 - Introduction ...
► [Android automation with Robotium] • Part1 - Introduction t...
► [Flutter app automation with Flutter Driver] • Part 1 - Introduction,...

Comments: 7
@kafkaesqued
@kafkaesqued 2 months ago
Can we consider the indexing process as equivalent to training?
@ExecuteAutomation
@ExecuteAutomation 2 months ago
That’s correct
@kishanlal676
@kishanlal676 2 months ago
Not exactly. The indexing process just takes all your PDF text and turns it into vectors (think of them as numerical representations of the PDF content in a vector space) using an embedding model; in this case, it's the SBERT model. When we ask a question, our query is also turned into a vector. With both the query and the PDF content in vector form, it's easy for the system to perform a similarity search and find the top relevant results from the PDF. These similar results, along with our query (both in text form), get sent to a language model like Llama, so we get a spot-on answer without any unnecessary or irrelevant content. As you can see, we're not retraining our base model here; we're simply using it to extract accurate answers from the top results. This process is called Retrieval-Augmented Generation (RAG). You can read more about it online if you're curious.
@kafkaesqued
@kafkaesqued 2 months ago
@@kishanlal676 Cool, makes sense. In the current context, this indexing is just speeding up the retrieval process, much like Windows Search indexing or even MongoDB indexes.
@kishanlal676
@kishanlal676 2 months ago
@@kafkaesqued Exactly. We cannot send/upload a whole PDF to the LLM; it may take a long time, or it could error out due to token limits. So we use vector search to find the portion of the document where the answer may reside, and provide just that portion of the PDF to the LLM to get the exact answer to our question.
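The retrieval step described in this thread can be sketched in a few lines of plain Python. This is a toy illustration, not the video's actual code: the hand-made 3-dimensional vectors and sample sentences below stand in for what an embedding model like SBERT would produce over real PDF chunks.

```python
import math

# Hypothetical document chunks with hand-made embedding vectors
# (a real pipeline would compute these with an embedding model such as SBERT).
chunks = {
    "The invoice total for March is $420.": [0.9, 0.1, 0.0],
    "Payments are due within 30 days.":     [0.2, 0.8, 0.1],
    "Our office is open on weekdays only.": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, k=1):
    """Return the k chunks whose embeddings are most similar to the query."""
    ranked = sorted(chunks, key=lambda c: cosine(chunks[c], query_vec),
                    reverse=True)
    return ranked[:k]

# A query like "When is payment due?" would embed near the second chunk.
query_vec = [0.1, 0.9, 0.0]
top = retrieve(query_vec, k=1)
print(top[0])  # -> "Payments are due within 30 days."
# The retrieved text plus the question then form the prompt sent to the LLM.
```

Only the retrieved chunk (not the whole PDF) is placed in the prompt, which is exactly why RAG stays within the model's token limit.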
@debduttachatterjee-ng7os
@debduttachatterjee-ng7os 2 months ago
Can we use this as an API to call from inside code?
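In general, yes: local runtimes such as Ollama expose an HTTP API (by default at http://localhost:11434), so a locally downloaded model can be called from code. The sketch below assumes Ollama and a pulled `llama2` model; the prompt and function name are illustrative, not from the video.

```python
import json
import urllib.request

# Request body for Ollama's /api/generate endpoint.
payload = {
    "model": "llama2",            # name of a locally pulled model (assumed)
    "prompt": "Why is the sky blue?",
    "stream": False,              # one complete response instead of chunks
}

def ask_local_llm(payload, url="http://localhost:11434/api/generate"):
    """POST the prompt to the local model server and return its reply text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# print(ask_local_llm(payload))  # requires the local server to be running
```

Because everything runs on localhost, this stays fully offline once the model is downloaded.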