Great video. Thank you, brother! I, too, installed Llama 3 on my machine, and the program/machine was so slow, it just seemed to freeze. It would EVENTUALLY eke out a response, but...no. So thanks for the intel about Phi3, especially!
@envoy9b9 · a month ago
How do you get it to read PDFs?
@campus_code_chronicles · a month ago
This is great. Looking forward to more content!
@the-writer-dev · a month ago
Thanks for the support! I will keep making content for y'all
@sean_vikoren · a month ago
Very nice... I am working toward this.
@the-writer-dev · a month ago
@@sean_vikoren I’m glad you like it! There will be another one to fine tune for platforms!
@OldGamr · a month ago
It does not maintain context across multiple chats; it actually tells you it doesn't, so you have to explain everything in each chat. The project files are forgotten constantly, as are custom instructions. I added a custom instruction to not provide snippets, only complete code, and I had to add that instruction to every single message or it would forget. I specified "no snippets, no placeholder code," and it often added placeholder code even when the request was in that very message. It also gives the same solutions to code errors even when those solutions have already failed. I spent most of my time trying to fix code errors that it said were fixed. If you're working on a more advanced project, AI is not going to work.
@UTJK. · a month ago
It will improve over time, we hope. I guess the best way to use it for programming is to feed it small, incremental tasks. I've seen AIs fail miserably when asked to build everything from a single prompt, while they work very well if you follow single steps. In that case, I suggest using the AI inside the code editor for the step-by-step process, and the browser-based version for explanations, debugging, or learning. That worked best for me.
@Ludwig6583 · 2 months ago
Thanks for the video. If it says the `address is already in use`, run this exact command: `osascript -e 'tell app "Ollama" to quit'`
@reddezimen · 2 months ago
says "osascript: command not found"
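`osascript` is macOS-only, which is the likely reason for the "command not found" error on Linux or Windows. A hedged sketch of platform-aware alternatives (assuming Ollama is running either as the macOS desktop app or as a plain `ollama serve` process; adjust to how your copy was installed):

```shell
# Pick a stop command based on the OS (sketch; assumes default Ollama installs)
stop_ollama_cmd() {
  case "$1" in
    Darwin) echo "osascript -e 'tell app \"Ollama\" to quit'" ;;  # macOS desktop app
    Linux)  echo "pkill -f 'ollama serve'" ;;                     # plain server process
    *)      echo "unsupported" ;;
  esac
}

stop_ollama_cmd "$(uname -s)"
```

On Linux installs managed by systemd, `sudo systemctl stop ollama` may be the right call instead; check how your copy was set up.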
@khairulakbar · 2 months ago
the music is too loud, man
@the-writer-dev · 2 months ago
I'll reflect that feedback in the following videos :)
@drendelous · 3 months ago
Subscribed, ty for covering it. I'm thinking about subscribing to ChatGPT though, as my notebook is so low-spec.
@erinray878 · 3 months ago
Thank you very much for this video! I just downloaded Obsidian a couple days ago, and was looking for free Copilot alternatives. Do you have any recommendations for the Whisper plugin? (Alternatives, or ways to use a local LLM like in this tutorial?) Thanks again!
@kumardigvijaymishra5945 · 4 months ago
What is GitHub?
@peterbizik224 · 4 months ago
Thank you for the video. This has been on my to-do list for a long time, but I'd rather run it on a homelab server instead of locally (not sure if that's possible). What are the optimal hardware requirements: more CPU or more memory? What was the bottleneck? Was it a bit slow to respond locally, and if so, why?
@the-writer-dev · 4 months ago
A homelab server sounds really cool! Like I said, I'm using a 2020 MacBook Air M1, so I experienced slow performance when I used a bigger model like Llama 3. Phi was working great though.
@mrashco · 4 months ago
Awesome! I've been using Backyard AI for local LLMs. Obsidian is new to me (switched from Notion) and Ollama looks PERFECT for integrating notes and AI. Thanks for the great video!
@the-writer-dev · 2 months ago
Thanks for the support! As a developer and solopreneur, AI and Obsidian are my essential tools so I will keep uploading about them!
@UncleDavid · 4 months ago
u seem like a really nice guy
@the-writer-dev · 4 months ago
Thanks!
@nevilleattkins586 · 4 months ago
If you get an error about the port already being in use when you try to run the serve command, then run `osascript -e 'tell app "Ollama" to quit'`
@reddezimen · 2 months ago
says "osascript: command not found"
@MODEST500 · 4 months ago
God bless bro, thanks, your video on Obsidian X is good
@the-writer-dev · 4 months ago
Thank you for the nice words! I will keep posting about Obsidian and other content for y'all!
@MODEST500 · 4 months ago
This is such a great plugin. I hope it doesn't get discontinued. I will definitely install this, God willing. Btw, imagine having ChatGPT's GPTs in Obsidian; that would be really good. I hope this supports Google Gemini 1.5 Pro.
@guruware8612 · 4 months ago
How many notes do people take on an average day that they need an application for this? This world grows more and more stupid every day. Maybe start training your brains again instead of AIs?
@NDIDIAHIAKWO · 5 months ago
Spot-on tutorial! What about the audio overview interactive function in NotebookLM as advertised in the recent Google I/O event? Has it been rolled out to users already?
@the-writer-dev · 5 months ago
Ohhh that is a nice one! I will cover that with some tweaks!
@RashadPrince · 5 months ago
Is that new feature out yet?
@the-writer-dev · 5 months ago
@@RashadPrince I don't think so, at least on my end
@radonryder · 5 months ago
Excellent video! Going to try this out.
@the-writer-dev · 5 months ago
Thanks and let me know your experience!
@VasanthKumar-rh5xr · 5 months ago
Good video. I get this message in the terminal while setting up the server in step 4:

>>> OLLAMA_ORIGINS=app://obsidian.md* ollama serve
The "OLLAMA_ORIGINS" variable in the context provided seems to be a custom configuration, and serving files with `ollama` would again follow standard Node.js practices: 1. To set an environment variable similar to "OLLAMA_ORIGINS", you could do so within your project's JavaScript file or use shell commands (again this is for conceptual purposes):

I can connect with you through other channels to work on this step.
@rjt_y · 4 months ago
Can you please explain more? I can't get mine working.
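The `>>>` prompt in that paste suggests the command was typed inside an `ollama run` chat session, so the model answered it as if it were a question. The variable has to be set in the terminal shell itself, before the server starts. A minimal sketch:

```shell
# Run this in your terminal shell, NOT inside an `ollama run` chat session.
# app://obsidian.md* allows the Obsidian desktop app to call the local API.
export OLLAMA_ORIGINS="app://obsidian.md*"

# Then, in the same shell, start the server:
# ollama serve
```

If `ollama serve` then complains the port is in use, quit the already-running Ollama instance first.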
@ascan4088 · 5 months ago
Great video, great work 👍 But please turn off the distracting and disturbing background music.
@the-writer-dev · 5 months ago
I've made sure of that for the following videos! Thanks for the feedback :)
@HiltonT69 · 5 months ago
What would be awesome is for this to be able to use an Ollama instance running in a container on another machine; that way I can use my container host for Ollama with all its grunt, and keep the load off my smaller laptop.
@the-writer-dev · 5 months ago
That is an interesting idea! Thanks for the feedback; I will look into this to see if it's possible.
@tomw0w · 4 months ago
@@the-writer-dev I have been experimenting with running Ollama in a Docker container inside a Proxmox LXC. After configuring the Ollama base URL field in Obsidian Copilot with my server's URL, everything works like a charm.
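For anyone wanting the same split (Ollama on a homelab box, Obsidian on a laptop), a rough sketch, assuming the official `ollama/ollama` Docker image and the plugin's Ollama base URL setting; the IP address is a placeholder for your own server:

```shell
# On the server (sketch; image name, port, and volume assume Docker defaults):
#   docker run -d --name ollama -p 11434:11434 \
#     -e OLLAMA_ORIGINS="app://obsidian.md*" \
#     -v ollama:/root/.ollama ollama/ollama

# On the laptop, the plugin's Ollama base URL then points at the server:
ollama_base_url() { echo "http://$1:11434"; }
ollama_base_url "192.168.1.50"   # placeholder IP for the homelab host
```

Port 11434 is Ollama's default; if your container maps a different host port, use that in the base URL instead.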
@dragonrain3343 · 5 months ago
I wanted to watch this video but had to stop seconds into it as the music is overbearing and distracting - pity
@harinimittal · 5 months ago
Can I use ChatGPT 4 instead of 3.5?
@the-writer-dev · 5 months ago
Yes you can. It requires your API key so...
@elgodric · 5 months ago
Can this work with LM Studio?
@the-writer-dev · 5 months ago
Good question, I haven't played with LM Studio. I will and let you know!
@Alex29196 · 5 months ago
Copilot needs integration with Groq AI, and text-to-speech integration inside the chat room.
@the-writer-dev · 5 months ago
That sounds like an interesting idea!
@Alex29196 · 5 months ago
@@the-writer-dev It would cover the costs, allowing us to remove web UIs and solely utilize Ollama or LM Studio for the backend. With LM Studio now featuring CLI command capabilities, it's even more beneficial, as it reduces the layers above Copilot. I conducted a test with LM Studio's new feature today, and the Copilot responses were noticeably faster on my low-end laptop. Additionally, we can incorporate Groq's fast responses and Edge neural voices, which are complimentary.
@siliconhawk · 5 months ago
What are the hardware requirements to run models locally?
@TheGoodMorty · 5 months ago
It can run CPU-only, it can even run on a Raspberry Pi, it's just going to be slow if you don't have a beefy GPU. Pick a smaller model and it should be alright. But unless you care about being able to customize the model in a few ways or having extra privacy with your chats, it'd probably just be easier to use an external LLM provider
@coconut_bliss5539 · 5 months ago
I'm running Llama3 8B model with Ollama on a basic M1 Mac with 16gb RAM - it's snappy. There is no strict cutoff for hardware requirements - if you want to run larger models with less RAM, Ollama can download quantized models which enable this (for a performance tradeoff). If you're on PC with GPU, you need 16GB of VRAM to run Llama3 8B natively. Otherwise you'll need to use a quantized model.
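A rough way to sanity-check those numbers: weight memory is roughly parameter count times bytes per weight (about 2 for fp16, about 0.5 for 4-bit quantization), before KV cache and runtime overhead. This is a rule of thumb, not an official spec:

```shell
# Approximate model weight footprint in GB: params × bytes-per-weight / 1e9
est_gb() { awk -v p="$1" -v b="$2" 'BEGIN { printf "%.1f\n", p * b / 1e9 }'; }

est_gb 8e9 2     # Llama3 8B at fp16, roughly 16.0 GB
est_gb 8e9 0.5   # same model 4-bit quantized, roughly 4.0 GB
```

That gap is why the quantized builds Ollama downloads by default fit comfortably on a 16 GB machine while full-precision weights do not.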
@IFTHENGEO · 5 months ago
Awesome video man! Just sent you connect on LinkedIn
@the-writer-dev · 5 months ago
Thanks for the support and I will check it out!
@dayuguo2011 · 6 months ago
Great video, a small suggestion: the background music is a bit loud, and the narration sound is a bit soft.
@the-writer-dev · 5 months ago
I did take that into consideration for the next one! Thanks a lot :)
@TheTca211 · 6 months ago
The music is OK, but it's at the same volume as you (maybe a little louder). You don't need to outright remove it, but it should be much lower in volume compared to you.
@TheTca211 · 6 months ago
Which I'm realizing now a lot of people are saying the same lol. It's a good video though.
@NLPprompter · 6 months ago
Obsidian Copilot with Ollama is the best: free AI, and secure because it's offline.
@nildesperandum2034 · 4 months ago
Is it free ?? How ??
@NLPprompter · 3 months ago
@@nildesperandum2034 I mean the Obsidian Copilot plugin + Ollama. It's the plugin; yes, it's free if you use a local Ollama model.
@DonStrenz · 6 months ago
I can't quite make out what you're saying. Your voice is being drowned out by the music.
@the-writer-dev · 6 months ago
I'll take this into account for the next videos, for sure!
@ksc888 · 6 months ago
Turn on subtitles...
@eightwheels · 6 months ago
The music is very distracting and makes it difficult to hear you. I strongly recommend you not use it for future videos.
@pnddesign · 6 months ago
Exact same remark
@the-writer-dev · 6 months ago
I will not!
@eightwheels · 6 months ago
@@the-writer-dev Unsubscribed. No problem.
@scatterbraincrafts · 6 months ago
@@eightwheels I think he means he will not use the music in his future videos...
@eightwheels · 6 months ago
@@scatterbraincrafts LOL Okay. I didn't read it that way, but I hope you're right.
@bear_jaws · 6 months ago
Thanks for this, I have been wanting to use Obsidian daily but find myself going back to Notion. I will try this out soon. @2:50 I thought I left my music on. Tried to pause my music, but then I realized it's your video. Need to turn down the music channel.
@the-writer-dev · 6 months ago
I will reduce the volume for the following videos! Hopefully it's useful to you!
@scrollop · 6 months ago
Can it only review the current note, or can you ask it to scan all of your notes?
@the-writer-dev · 6 months ago
I believe you can do vault review - QA mode. I will dig into it and let you know!
@abelmaestrogarcia6650 · 6 months ago
Did you try Cannoli?
@abelmaestrogarcia6650 · 6 months ago
I loved it, make more videos for more use cases
@the-writer-dev · 6 months ago
I will!!!
@robertf4209 · a year ago
I think you have a lot of great information to share. I am having a lot of difficulty understanding your voice for three reasons: 1) the background music overwhelms your speaking; please consider removing the music track, as it adds zero value but interferes with the voice signal; 2) the sound quality of the vocal track is pretty poor, and the dynamic range seems compressed; and 3) the pace is way too slow. I mention all these because I want to hear what you have to say, but there are barriers that make it difficult. Also, this is my experience and doesn't mean others have the same issues.
@the-writer-dev · a year ago
Thank you so much for your valuable feedback. Since I'm pretty new to making videos, I knew I needed to improve the quality. This is priceless to me!
@DrewMills · a year ago
Great video. TX for the useful info
@the-writer-dev · a year ago
Thank you for watching!! I will keep making them
@visi7891 · a year ago
yay walkthroughs. this was so organized and helpful. thanks
@the-writer-dev · a year ago
Thank you for your valuable comment. I will make high quality videos, stay tuned!!