Thank you so much!! I was about to start a brand new vault with two years of research because I was concerned that I had too many notes. This has saved my life! Thank you so much. I really appreciate this.
@samukarbrj · 4 months ago
I was thinking about building an API to read all my notes, but this already exists, haha. Thanks for the great video and explanation! This is GOLD!
@Lohithdhaksha · 8 months ago
Thank you Prakash for your videos. You have been providing immense value to the community.
@joroshinkai · 5 months ago
Your channel is a goldmine.
@aivy-aigeneratedmusic6370 · 8 months ago
By far the best AI use I have seen so far in Obsidian.
@dhruvirzala · 8 months ago
Thank you for this awesome and insightful video, man!
@romangladiator1106 · 8 months ago
Ollama is pronounced "Oh Lama".
@ultraspy5474 · 6 months ago
Thank you so much for creating this video. Every aspect has been helpful.
@CrusNB · 8 months ago
Pax, your video quality is getting better and better. What are you using for your video editing?
@beingpax · 8 months ago
Thanks ❤️. Right now I'm transitioning to Final Cut Pro because of how fast and reliable it is.
@simonhakansson8187 · 4 months ago
Cool and easy to use! Thanks for the tips.
@daedalusjones4228 · 3 months ago
Great video. Thank you, brother!
@Maisonier · 8 months ago
I'd love to know when they're going to add the Llama3 model. Also, could you make a video explaining the differences between Nomic and MXBai? How do they work, and which is best for each case? Is there a way to use LM Studio instead of Ollama? Liked and subscribed. Amazing video!
@aivy-aigeneratedmusic6370 · 8 months ago
It's already possible.
@ninadwrites · 8 months ago
I'm using llama3 for both embedding and generation, and it works much better that way than with the default embedding models.
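(For readers curious what "llama3 for both embedding and generation" boils down to, here is a minimal sketch against a local Ollama server on its default port. It assumes `ollama pull llama3` has already been run; the plugin's own internal calls may differ.)

```python
# Hedged sketch: ask a local Ollama server (default http://127.0.0.1:11434) to embed
# a note chunk and to generate text, both with llama3.
import json
import urllib.request

def ollama(path, payload):
    req = urllib.request.Request(
        "http://127.0.0.1:11434" + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Embedding: roughly what indexing a vault note amounts to.
vec = ollama("/api/embeddings", {"model": "llama3", "prompt": "Some note text"})["embedding"]

# Generation: roughly what answering a chat question amounts to.
out = ollama("/api/generate", {"model": "llama3", "prompt": "Say hi in one word.", "stream": False})["response"]

print(len(vec), out)
```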
@DesignDesigns · 8 months ago
Thank you for making the video on AI. I strongly believe that AI will dominate the PKM area.
@s-dv2nx · 3 months ago
Brother, you are great. I was struggling to install Fabric; after your video it didn't take much time.
@DaKingof · 8 months ago
Is there a plugin that can use AI to create and edit notes as well? This would be a big leap forward as far as the efficiency of note-taking goes!
@jichaelmorgan3796 · 6 months ago
My idea was to take an idea explored with an LLM, have the LLM summarize the conversation and conclusions (or you could write it in your own words), and then have the LLM automatically generate the nodes. The original conversation could be saved in a separate text file that doesn't bloat your vault, which can be linked to from the note. It shouldn't be too hard to automate all this, but I'm not a programmer, or else I would love to make it a reality.
@artbillcorner · 8 months ago
Love the tutorial. When I have my new PC I will surely get into it.
@mrt.1903 · 2 months ago
How did you make your Obsidian look like that? Truly amazing.
@JosueMUHIRWA · 3 months ago
❤ Great video. Does it update as you add new notes?
@arunkumaranand · 8 months ago
This is wonderful!
@LeonardoLobato · 2 months ago
Amazing!
@philippvanderheide7494 · 8 months ago
Thanks for showing this - nice video 👏
@om6418 · 8 months ago
I love that it is free and private. What do you think about other AI plugins like Smart Connections?
@beingpax · 8 months ago
The main differentiator is this: it supports open-source models, while Smart Connections uses the OpenAI APIs, which you have to pay for. Smart Connections also has a license model where you have to pay to access some features.
@om6418 · 8 months ago
@@beingpax Thanks for explaining 👍 Just one more question: if I add a note after indexing, do I have to build a new index? Does it recognize just the new note, or do I have to index the whole vault again? And where will the index be saved?
@beingpax · 8 months ago
Not 100% sure, but I think it will only index the new or updated notes.
@vijayragav1865 · 3 months ago
Smart Connections is free too, but the answers aren't reliable right now - I'd love to see it improve.
@martinb3483 · 8 months ago
Wow. That looks useful. Did you try it with languages other than English?
@MichaelHHeng · 5 months ago
Also from me: many thanks for the really helpful videos! I have another question: What kind of download manager can we see at 2:15? I've been looking for something like that for ages! Greetings from Germany!
@tcb133 · 4 months ago
So can I feed it a folder with PDFs as well as my own notes?
@joyebinger7869 · 2 months ago
Is it possible to use ChatGPT with this plugin without OpenAI receiving all the vault data?
@JosueMUHIRWA · 3 months ago
Why does it show the error "ollama call failed with status code 500"?
@vladimirnezadorov939 · 7 months ago
Thank you for the comprehensive plugin presentation! I'm curious about the process of parsing notes into vectors: does it take into account any linking or tagging, or does it parse only the text? What do you think?
@samtab9474 · 5 months ago
How could I connect it if my Ollama is inside WSL on my Windows machine?
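(A hedged sketch of one common approach: start Ollama inside WSL bound to all interfaces, e.g. `OLLAMA_HOST=0.0.0.0:11434 ollama serve`, then check from the Windows side that the endpoint the plugin will use is reachable. The WSL address below is a placeholder, not a real value.)

```python
# Hedged sketch: probe, from Windows, whether an Ollama server running inside WSL
# answers on port 11434. "172.20.50.123" is a placeholder; use the address that
# `hostname -I` prints inside your WSL distro (WSL2 localhost forwarding may also work).
import urllib.request

for host in ("127.0.0.1", "172.20.50.123"):
    try:
        with urllib.request.urlopen(f"http://{host}:11434", timeout=3) as resp:
            print(host, "->", resp.read().decode())  # a running server replies "Ollama is running"
            break
    except OSError as err:
        print(host, "-> not reachable:", err)
```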
@nguyenhuynghia999 · 7 months ago
Hello sir, I have encountered this: "Error: listen tcp 127.0.0.1:11434: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted." I have killed the PID with taskkill, but it didn't work; another PID appears. Could you help me? Do I have to install Python or anything else?
@micheleterrenzio2330 · 6 months ago
Have you found a way to get around this problem?
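(A hedged note on the error above: it usually just means an Ollama server is already listening on 127.0.0.1:11434 - often the Ollama desktop/tray app, which restarts its server when the process is killed - so a second `ollama serve` cannot bind the port. A quick check, assuming Python is available:)

```python
# Hedged sketch: see whether whatever holds port 11434 is already a working Ollama server.
# If it is, nothing needs fixing - point the plugin at it instead of running
# `ollama serve` a second time.
import urllib.request

try:
    with urllib.request.urlopen("http://127.0.0.1:11434", timeout=3) as resp:
        print(resp.read().decode())  # "Ollama is running" -> reuse this existing server
except OSError:
    print("Port 11434 is held by something that does not answer like Ollama;"
          " on Windows, `netstat -ano | findstr 11434` shows the owning PID.")
```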
@SanditaDattani · 8 months ago
I am getting the error message "Failed to initialize Smart Second Brain (Error: TypeError: Failed to fetch)" - any idea why?
@TheDamnBr0 · 7 months ago
Same problem here.
@mintgreen5614 · 4 months ago
Hi! Yours is the only good video I've found on this. For some reason the chat only writes 1-2 sentences and then ends abruptly, with no error message. Any idea why?
@rubenackermann6296 · 8 months ago
Somehow, when I attempt to obtain the sources from which the bot gathers its information, it only provides me with numbers, e.g. (Source: 2). Then, when I click on 2, a new note with 2 as the headline appears. I have the feeling it does not take information out of my vault. Where am I going wrong? Thank you in advance!
@etherhealingvibes · 8 months ago
It's just English and German, even though llama3 can talk in Spanish. Is there a way to change the assistant language?
@zaubermanninc4390 · 6 months ago
Sadly, it does not show up after installing. It must clash with some of my other plugins.
@LYT101 · 8 months ago
Thank you for the useful information. Thank you for your assistance.
@asafsbg · 8 months ago
Thanks for the great video! How is this model with languages other than English?
@glvite · 8 months ago
Hello Prakash, firstly, thank you for your videos, which are always very professional and comprehensive. I need your help. Following your guide on Linux, I installed the Obsidian plugin, configured it, and downloaded Ollama and Gemma. However, in the Smart Second Brain section of Obsidian, I am unable to configure the AI to function correctly. The setup section I see is much less complete and detailed than the one you show on macOS; in particular, I am missing the "start Ollama" command.
@beingpax · 8 months ago
When you open the Smart Second Brain pane, you have the option to set the Ollama origins to enable streaming responses. Follow those instructions and it will probably solve it. What's the exact problem you are having?
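(A minimal sketch of what that origins step can look like when Ollama is started from a script on macOS or Linux rather than from the menu-bar app. The exact OLLAMA_ORIGINS value is whatever the plugin's setup pane shows; the Obsidian app origin used here is an assumption.)

```python
# Hedged sketch: launch `ollama serve` with OLLAMA_ORIGINS set so requests coming
# from the Obsidian desktop app (origin assumed to be app://obsidian.md) are allowed,
# which is what enables streaming responses in the plugin.
import os
import subprocess

env = dict(os.environ, OLLAMA_ORIGINS="app://obsidian.md*")
subprocess.run(["ollama", "serve"], env=env)  # blocks while the server runs
```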
@antonhuber1654 · 7 months ago
What LLM should I use for a different language (German)?
@xfilesxfilesxfiles · 7 months ago
Difficult.
@Sebastian-oz1lj · 7 months ago
Can I run this in Polish by any chance? Currently only English is available; I wonder if there is any LLM that works with Polish.
@deserthorsedude · 8 months ago
If you have .pdf files in your vault, does this plugin index the PDF files? If so, does that significantly increase the indexing time? Also, does it use the information in the PDFs to answer questions?
@beingpax · 8 months ago
It doesn't support PDFs yet.
@mayankk2800 · 8 months ago
Next level...
@SoorajGopakumar · 8 months ago
What's your hardware spec? It seems to take forever to index as well as generate answers on my Ryzen 5 laptop.
@beingpax · 8 months ago
I'm on a MacBook Pro M2. Some local models take a very long time. I find the Mixtral model with mxbai works best.
@101RealTalker · 8 months ago
My vault is over 3 million words across 2k+ files, all geared towards one project. I was excited to use the Copilot plugin because it advertises "vault mode", but was quickly disappointed when its default reference amount was only 3 notes/files at a time (lol); it can only stretch to 10, and it warns that doing so will probably screw up the responses. I want to communicate with my vault as a whole for perfect macro context, but it seems my use case is still not possible with current AI? Smart Connections doesn't seem to be any better, or am I mistaken?
@ninadwrites · 8 months ago
This local solution might work for you. Try an embedding model like bge + llama3 for generation. It'll take a LONG while the first time, though.
@om6418 · 8 months ago
What specs (RAM/processor) do you need on your computer to run a local LLM like the new Llama 3 7B? I have a Mac mini with 8 GB.
@HiltonT69 · 7 months ago
8 GB will HAMMER your swap (the internal SSD, which I gather is only 256 GB) and end up destroying the SSD.
@om6418 · 7 months ago
@@HiltonT69 An interesting detail to this whole story :)
@aivy-aigeneratedmusic6370 · 8 months ago
Do you know where on the computer these models are stored? I have trouble finding the files... and they are huge, so after trying some models you might want to delete them.
@kerkieburgers · 7 months ago
Did you happen to learn how to delete the models?
@Martin-Urban · 1 month ago
It might also be useful to know where they are, to be able to transfer them to different computers.
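(A hedged pointer for the thread above: the Ollama CLI itself can list and delete models, and by default the model files live under ~/.ollama/models, or under the directory named by OLLAMA_MODELS if that variable is set. A small sketch, assuming a standard Ollama install:)

```python
# Hedged sketch: show installed Ollama models, print where the files live by default,
# and (commented out) remove one you no longer need.
import os
import pathlib
import subprocess

subprocess.run(["ollama", "list"])  # names and sizes of downloaded models

default_dir = pathlib.Path.home() / ".ollama" / "models"
print("Model files:", os.environ.get("OLLAMA_MODELS", default_dir))

# subprocess.run(["ollama", "rm", "mistral"])  # delete a model (example name)
```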
@uzeyiryariz3169 · 7 months ago
Why is my AI's response so slow?
@ListenJock32 · 8 months ago
❤❤❤❤❤
@FlorinBaci-sd7mk · 7 months ago
Thanks.
@SamuelWebster · 7 months ago
All these tests make me think that Smart Connections does a much better job of this...
@aivy-aigeneratedmusic6370 · 8 months ago
YT sucks hard and removed my comment about using Llama 3, which is possible to do!
@legendsarenagaming · 6 months ago
How is it, and what are the system requirements to run this stuff?
@robinmolsom813 · 3 months ago
o lelel ya me
@chjpiu · 8 months ago
From my point of view, the Copilot plugin is relatively better than this plugin.
@johnmichalek9802 · 4 months ago
Meta and privacy is an oxymoron. 😂
@pushingpandas6479 · 3 months ago
What's Obsidian?
@tiagocorreia7537 · 13 days ago
You need to be honest: not everyone has a powerful computer to run this locally. People like you seem more interested in gaining viewers and likes than in delivering real value. Shame on you.
@countermeasuresecurityengi9719 · 8 months ago
The LLM is pronounced "OH-LA-MA", c'mon man!!!!!