Obsidian with Ollama

27,568 views

AIpreneur-J

1 day ago

Comments: 37
@mrashco
@mrashco 7 months ago
Awesome! I've been using Backyard AI for local LLMs. Obsidian is new to me (switched from Notion) and Ollama looks PERFECT for integrating notes and AI. Thanks for the great video!
@AIpreneur-J
@AIpreneur-J 5 months ago
Thanks for the support! As a developer and solopreneur, AI and Obsidian are my essential tools, so I will keep uploading about them!
@CiaoKizomba
@CiaoKizomba 8 days ago
It's unclear how you used the plug-in.
@radonryder
@radonryder 8 months ago
Excellent video! Going to try this out.
@AIpreneur-J
@AIpreneur-J 8 months ago
Thanks, and let me know your experience!
@JonDAlessandro
@JonDAlessandro 2 months ago
Why isn't there an ollama (local) option in my default models?
@erinray878
@erinray878 7 months ago
Thank you very much for this video! I just downloaded Obsidian a couple of days ago and was looking for free Copilot alternatives. Do you have any recommendations for the Whisper plugin (alternatives, or ways to use a local LLM like in this tutorial)? Thanks again!
@mikechen777
@mikechen777 A month ago
Amazing, thanks for sharing!
@manojnayakdotcom
@manojnayakdotcom 29 days ago
Can you give me the link to the Copilot plugin? I cannot find it.
@AIpreneur-J
@AIpreneur-J 19 days ago
Thank you! I will share more of this for y'all
@SimonCliffordconnect
@SimonCliffordconnect 2 months ago
I can't get this to work on a Windows machine. Please create an updated video using Llama, with Docker, set up on a Windows machine.
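Not covered in the video, but a rough sketch of a Docker-based setup on Windows, assuming Docker Desktop is installed (the image, volume, and port below follow Ollama's published Docker instructions; `phi3` is just the small model mentioned elsewhere in this thread):

```bash
# Run the Ollama container, persisting downloaded models in a named volume
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull a small model inside the container
docker exec -it ollama ollama pull phi3

# The Copilot plugin can then be pointed at http://localhost:11434
```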
@daedalusjones4228
@daedalusjones4228 4 months ago
Great video. Thank you, brother! I, too, installed Llama 3 on my machine, and the program/machine was so slow, it just seemed to freeze. It would EVENTUALLY eke out a response, but...no. So thanks for the intel about Phi3, especially!
@peterbizik224
@peterbizik224 7 months ago
Thank you for the video. This has been on my to-do list for a long time, but I'd rather run it on a homelab server instead of locally (not sure if that's possible). What are the optimal hardware requirements: more CPU or more memory? And what was the bottleneck; why was the response a bit slow locally?
@AIpreneur-J
@AIpreneur-J 7 months ago
A homelab server sounds really cool! Like I said, I'm using a 2020 MacBook Air M1, so I experienced slow performance when I used a bigger model like Llama 3. Phi was working great, though.
@envoy9b9
@envoy9b9 4 months ago
How do you get it to read PDFs?
@HiltonT69
@HiltonT69 8 months ago
What would be awesome is for this to be able to use an Ollama instance running in a container on another machine. That way I can use my container host for Ollama, with all its grunt, and keep the load off my smaller laptop.
@AIpreneur-J
@AIpreneur-J 8 months ago
That is an interesting idea! Thanks for the feedback; I will look into whether it's possible.
@tomw0w
@tomw0w 7 months ago
@@AIpreneur-J I have been experimenting with running Ollama in a Docker container in a Proxmox LXC. After configuring the Ollama base URL field in Obsidian Copilot with my server's URL, everything works like a charm.
@waynethomas2118
@waynethomas2118 23 days ago
I have Ollama running on a separate box. In Obsidian's Copilot plugin I just have to select Ollama, enter the http://URL:11434 of my Ollama box, and specify the model I already downloaded. Just tested it and it works great.
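A minimal sketch of that remote setup, assuming Ollama is installed on the other box and the IP address below is only an example (the environment variables are the ones documented by Ollama):

```bash
# On the Ollama box: listen on all interfaces and allow the Obsidian app origin
OLLAMA_HOST=0.0.0.0 OLLAMA_ORIGINS=app://obsidian.md* ollama serve

# In the Copilot plugin, set the Ollama base URL to the box's address, e.g.
#   http://192.168.1.50:11434
# and pick a model that has already been pulled on that box (ollama pull phi3)
```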
@siliconhawk
@siliconhawk 9 months ago
What are the hardware requirements to run models locally?
@TheGoodMorty
@TheGoodMorty 9 months ago
It can run CPU-only; it can even run on a Raspberry Pi. It's just going to be slow if you don't have a beefy GPU. Pick a smaller model and it should be alright. But unless you care about being able to customize the model in a few ways or having extra privacy with your chats, it'd probably just be easier to use an external LLM provider.
@coconut_bliss5539
@coconut_bliss5539 9 months ago
I'm running the Llama 3 8B model with Ollama on a basic M1 Mac with 16 GB RAM, and it's snappy. There is no strict cutoff for hardware requirements: if you want to run larger models with less RAM, Ollama can download quantized models which enable this (for a performance tradeoff). If you're on a PC with a GPU, you need 16 GB of VRAM to run Llama 3 8B natively; otherwise you'll need to use a quantized model.
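As an illustration of the quantization point: Ollama's model library exposes quantization levels as tags, so pulling a more heavily quantized build is just a different tag name (the exact tag below is an assumption based on the llama3 library listing and may change):

```bash
# The default llama3 pull is already 4-bit quantized; an explicit quantized tag
# trades some quality for a smaller memory footprint
ollama pull llama3:8b-instruct-q4_0
ollama run llama3:8b-instruct-q4_0
```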
@elgodric
@elgodric 9 months ago
Can this work with LM Studio?
@AIpreneur-J
@AIpreneur-J 9 months ago
Good question. I haven't played with LM Studio; I will and let you know!
@etherhealingvibes
@etherhealingvibes 9 months ago
Copilot needs integration with Groq AI, and text-to-speech integration inside the chat room.
@AIpreneur-J
@AIpreneur-J 9 months ago
That sounds like an interesting idea!
@etherhealingvibes
@etherhealingvibes 9 months ago
@@AIpreneur-J I will cover the costs, allowing us to remove WebUI and solely utilize Ollama or LM Studio for the backend. With LM Studio now featuring CLI command capabilities, it's even more beneficial, as it reduces the layers above Copilot. I conducted a test with LM Studio's new feature today, and the Copilot responses were noticeably faster on my low-end laptop. Additionally, we can incorporate Groq's fast responses and Edge neural voices, which are complimentary.
@Ludwig6583
@Ludwig6583 5 months ago
Thanks for the video. If it says `address is already in use`, run this exact command: `osascript -e 'tell app "Ollama" to quit'`
@reddezimen
@reddezimen 5 months ago
Says "osascript: command not found"
@nevilleattkins586
@nevilleattkins586 7 months ago
If you get an error about the port already being in use when you run the serve command, run `osascript -e 'tell app "Ollama" to quit'`
@reddezimen
@reddezimen 5 months ago
Says "osascript: command not found"
@IFTHENGEO
@IFTHENGEO 9 months ago
Awesome video, man! Just sent you a connection request on LinkedIn.
@AIpreneur-J
@AIpreneur-J 9 months ago
Thanks for the support, and I will check it out!
@VasanthKumar-rh5xr
@VasanthKumar-rh5xr 8 months ago
Good video. I get this message in the terminal while setting up the server in step 4:
>>> OLLAMA_ORIGINS=app://obsidian.md* ollama serve
The "OLLAMA_ORIGINS" variable in the context provided seems to be a custom configuration, and serving files with `ollama` would again follow standard Node.js practices: 1. To set an environment variable similar to "OLLAMA_ORIGINS", you could do so within your project's JavaScript file or use shell commands (again this is for conceptual purposes):
I can connect with you through other channels to work on this step.
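The `>>>` prompt in that paste suggests the command was typed inside an interactive `ollama run` session, so the model itself answered instead of the shell running the command. A minimal sketch of the same step run from a regular terminal prompt (the origin value is taken straight from the comment above):

```bash
# Quit any Ollama instance that is already serving, then run this at the
# shell prompt ($), not inside an `ollama run` chat session (>>>):
OLLAMA_ORIGINS=app://obsidian.md* ollama serve
```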
@rjt_y
@rjt_y 7 months ago
Can you please explain more? I can't get mine working.