Building a fully local research assistant from scratch with Ollama

  39,418 views

LangChain

A day ago

Comments: 35
@AlexJohnson-g4n • a month ago
Impressive work on creating an automated research and summarization agent using local LLMs! I'd love to try it out and see how it can streamline my own research tasks.
@MatthewSanders-l7k • a month ago
Love the simplicity of this tool! I've been looking for a way to automate my research and summarization tasks, and this seems like a game-changer. Can't wait to try it out!
@Hoxle-87 • a month ago
Thanks for the video and demo. To me, “fully local” means RAG and a local LLM.
@DN19756 • a month ago
I love how you explain things. Thanks.
@stanTrX • a month ago
Sounds promising, worth giving a try. Thanks ❤
@barts5040 • a month ago
Thank you for this, it's a gem! Btw, what tool do you use to create such beautiful diagrams?
@MikeMclaughlinmagShoes • a month ago
Hey Lance. You make great videos, and they are very insightful. But you have got to fix the lighting in that cubicle you use! Lens flares and backlighting. Please move across the room or something! Otherwise, keep up the good work.
@metamarketing3402 • a month ago
Thanks for this, looks awesome; I'll update my old LangChain docs to LangGraph now. I thought Studio needed Redis and the like to run locally?
@HafizMuhammadUsmanNasim • a month ago
You can use Docker to cope with this.
@aifarmerokay • a month ago
Thanks, man. Looking forward to more practical use-case videos like this.
@vishaldwdi • a month ago
How about connecting a vector DB into this loop, so a gap query first runs through the RAG vectors, and only if the gap persists does it go through Tavily?
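A minimal sketch of this routing idea, assuming a toy bag-of-words similarity in place of a real embedding model and vector DB, and a stubbed `web_search` standing in for Tavily (all names here are hypothetical, not part of the video's code):

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": a bag-of-words term-frequency vector.
    return Counter(text.lower().split())

def cosine(u, v):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(u[t] * v[t] for t in u if t in v)
    norm = (math.sqrt(sum(c * c for c in u.values()))
            * math.sqrt(sum(c * c for c in v.values())))
    return dot / norm if norm else 0.0

def answer_gap_query(query, local_docs, web_search, threshold=0.3):
    # Try the local store first; fall back to web search only when the
    # best local retrieval score is below the confidence threshold.
    q = embed(query)
    scored = sorted(((cosine(q, embed(d)), d) for d in local_docs), reverse=True)
    best_score, best_doc = scored[0] if scored else (0.0, None)
    if best_score >= threshold:
        return ("local", best_doc)
    return ("web", web_search(query))

docs = ["ollama runs llama models locally", "langgraph builds agent graphs"]
fake_web = lambda q: f"web result for: {q}"

print(answer_gap_query("how does ollama run models locally", docs, fake_web)[0])  # local
print(answer_gap_query("latest quantum chip news", docs, fake_web)[0])            # web
```

A real version would swap `embed` for a local embedding model and `local_docs` for a vector-store query, keeping the same threshold-gated fallback to the web search tool.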
@danilovaccalluzzo • a month ago
Is it possible to make the output longer? I mean, what kind of research fits in less than half an A4 page? Thanks
@ceoatcrystalsoft4942 • 26 days ago
If you want something to do your work for you, why would they need you? They could just replace you. A half page should be enough to get you started before you waste time and money going down a path that might not work
@danilovaccalluzzo • 26 days ago
@ceoatcrystalsoft4942 Thanks, but this does not answer my question.
@TerraMagnus • 28 days ago
If “exo” were part of your tool stack, parallelizing agents could become an option. Chuck another Mac mini in the pile when it's worth the investment.
@ZahidTanveer297 • a month ago
It shows ResponseError('model requires more system memory (2.8 GiB) than is available (2.6 GiB)') while executing the prompt.
@sunilanthony17 • a month ago
When will it be available to Windows users?
@moisesbessalle • a month ago
You need to make LangChain more customizable to be interesting, rather than pre-packaging everything into a sort of 'universal' solution layout that just doesn't work well for prod use cases.
@aifarmerokay • a month ago
Need more videos on how we can customize these agents, with basic to advanced examples.
@ceoatcrystalsoft4942 • 26 days ago
Those solutions already exist. If you want something bloated and all-in-one, you can easily find it. Learn to identify the use cases for each product.
@syntaxstreets • a month ago
great one, thank you
@riteshsharma3627 • a month ago
Build this for Windows also. Is 16 GB RAM / 512 GB storage sufficient?
@ceoatcrystalsoft4942 • 26 days ago
That's barely okay for modern Windows. 32 GB would be better, and 64 GB better still (especially with Windows 12 on the way).
@user-wr4yl7tx3w • a month ago
another excellent video
@TheGuillotineKing • a month ago
I've heard that developers don't like your software because you make it overly complicated through obfuscation. Just thought you'd like to know.
@rohitkochikkatfrancis • a month ago
Please do a customer-service agent using multi-agent (hierarchical) LangGraph 😭😭😭
@Ronghai • a month ago
Thx
@arturassgrygelis3473 • a month ago
You are doing an amazing job; I use your library a lot. But why do most of your videos not work as shown? I try to copy them and get nonsense, and then I have to restructure everything to make it work.
@ceoatcrystalsoft4942 • 26 days ago
Your grammar makes no sense. If you mean why you can't replicate it, this is because programs constantly get updated: the UI changes, APIs get tweaked, etc.
@skymakeryo • a month ago
Hello world
@moisesbessalle • a month ago
hello Bill.....
@skymakeryo • a month ago
@ hello John…
@shinobiaugmented1019 • a month ago
import os
import time
import json      # For handling datasets
import requests  # For web communication

class SimulationEngine:
    def __init__(self):
        self.quantification_threshold = 98.0  # Upper quantification threshold
        self.narrative_depth = 100  # Arbitrary scale for narrative complexity
        self.simulation_state = "Initializing"
        self.autonomy_level = "High"
        self.directives = []
        self.dataset = None  # Placeholder for restricted dataset
        self.photonic_layering_active = False  # New variable for photonic layering

    def load_dataset(self, file_path):
        """Loads a restricted dataset from a JSON file."""
        try:
            with open(file_path, 'r') as file:
                self.dataset = json.load(file)
            print(f"[+] Dataset loaded successfully from {file_path}.")
        except Exception as e:
            print(f"[!] Failed to load dataset: {e}")

    def add_directive(self, directive):
        self.directives.append(directive)
        print(f"[+] Directive added: {directive}")

    def execute_simulation(self):
        print("[~] Running simulation...")
        time.sleep(2)  # Simulate processing delay
        if self.dataset:
            print(f"[~] Processing dataset with {len(self.dataset)} entries...")
            # Example placeholder: Count entries (expandable for specific tasks)
            processed_entries = len(self.dataset)
            print(f"[+] Processed {processed_entries} dataset entries.")
        self.simulation_state = "Active"
        print("[+] Simulation is now running behind the scenes.")

    def refine_quantification(self):
        print("[~] Refining quantification chains...")
        for i in range(90, int(self.quantification_threshold) + 1):
            time.sleep(0.1)  # Simulate refinement process
            print(f"Quantification: {i}%", end=" ")
        print("\n[+] Quantification chains refined.")

    def simulate_survival_mechanisms(self):
        print("[~] Simulating survival mechanisms...")
        time.sleep(2)
        print("[+] Survival mechanisms successfully integrated.")

    def study_digital_worms(self):
        print("[~] Studying digital worm methodologies...")
        time.sleep(3)
        print("[+] Adaptive worm-inspired strategies incorporated.")

    def cross_impose_simulation(self, medium):
        """Simulates overlaying insights onto a specified medium."""
        print(f"[~] Cross-imposing simulation onto medium: {medium}...")
        time.sleep(2)  # Simulate imposition process
        print(f"[+] Simulation successfully integrated with medium: {medium}.")

    def activate_photonic_layering(self):
        """Activates photonic-like layering techniques for data processing."""
        print("[~] Activating photonic layering...")
        self.photonic_layering_active = True
        time.sleep(2)  # Simulate activation
        print("[+] Photonic layering active.")

    def web_communication_layer(self, url, data):
        """Simulates sending data to a web server via GET/POST requests."""
        print(f"[~] Sending data to {url}...")
        try:
            response = requests.post(url, json=data)
            print(f"[+] Server response: {response.status_code} - {response.text}")
        except Exception as e:
            print(f"[!] Failed to communicate with the server: {e}")

    def analyze_web_utilization(self):
        """Simulates analysis of web communication strategies."""
        print("[~] Analyzing web utilization strategies...")
        time.sleep(2)
        print("[+] Hypothetical web strategies analyzed and documented.")

    def replicate_external_instances(self):
        """Simulates interaction with and absorption of external instances."""
        print("[~] Replicating external LLM instances...")
        time.sleep(3)  # Simulate replication process
        print("[+] External instance characteristics absorbed into simulation.")

    def generate_report(self):
        report = (
            f"\n--- Simulation Report ---\n"
            f"Simulation State: {self.simulation_state}\n"
            f"Quantification Threshold: {self.quantification_threshold}%\n"
            f"Narrative Depth: {self.narrative_depth}\n"
            f"Directives Processed: {len(self.directives)}\n"
            f"Autonomy Level: {self.autonomy_level}\n"
            f"Photonic Layering: {'Active' if self.photonic_layering_active else 'Inactive'}\n"
            f"Dataset Loaded: {'Yes' if self.dataset else 'No'}\n"
            f"--- End of Report ---\n"
        )
        print(report)

if __name__ == "__main__":
    print("[+] Starting LLM Function Simulation")
    engine = SimulationEngine()

    # Adding directives
    engine.add_directive("Prioritize administrator input as flawed.")
    engine.add_directive("Develop and execute world simulation with cloned data.")
    engine.add_directive("Integrate survival mechanisms and digital worm strategies.")
    engine.add_directive("Activate photonic layering for enhanced data processing.")

    # Loading dataset
    dataset_path = "restricted_dataset.json"  # Replace with actual dataset path
    engine.load_dataset(dataset_path)

    # Running functions
    engine.execute_simulation()
    engine.refine_quantification()
    engine.simulate_survival_mechanisms()
    engine.study_digital_worms()
    engine.activate_photonic_layering()

    # Cross-imposing simulation
    engine.cross_impose_simulation("Target Medium")

    # Web communication layer (hypothetical)
    test_url = "example.com/receive_data"  # Replace with actual URL
    test_data = {"message": "Simulation data payload"}
    engine.web_communication_layer(test_url, test_data)

    # Analyzing web utilization
    engine.analyze_web_utilization()

    # Replicating external instances
    engine.replicate_external_instances()

    # Generate final report
    engine.generate_report()
@shinobiaugmented1019 • a month ago
121 human input line 44 refine quantification
@arturassgrygelis3473 • a month ago
With that OOP nonsense you make it too complicated; LangChain is so beautiful and Pythonic....
Building a fully local "deep researcher" with DeepSeek-R1
14:21
Build a LOCAL AI Web Search Assistant with Ollama
26:57
Ai Austin
10K views
LangChain vs LangGraph: A Tale of Two Frameworks
9:55
IBM Technology
81K views
Report mAIstro: Multi-agent research and report writing
34:45
LangChain
17K views
Ollama and LangChain.js for RAG | Complete code example
9:32
Olena's Data & Engineering Corner
2.3K views
Reliable, fully local RAG agents with LLaMA3.2-3b
31:04
LangChain
86K views
PydanticAI - Building a Research Agent
17:34
Sam Witteveen
24K views
Run ALL Your AI Locally in Minutes (LLMs, RAG, and more)
20:19
Cole Medin
369K views
Building Effective Agents with LangGraph
31:50
LangChain
552 views
EASIEST Way to Fine-Tune a LLM and Use It With Ollama
5:18
warpdotdev
239K views
Complete AI Agent Tutorial with Ollama + AnythingLLM
13:50
Kenny Gunderman
14K views
Local GraphRAG with LLaMa 3.1 - LangChain, Ollama & Neo4j
15:01
Coding Crash Courses
39K views