Ollama + Home Assistant Tutorial: The Easiest Way to Control Your Smart Home with AI

15,723 views

fixtSE

Comments: 69
@TheDuerden 14 days ago
I am going to give this a shot; nice video explaining everything. This was something I was thinking about doing myself, and hopefully I can get it working. You saved me a lot of work.
@fixtse. 11 days ago
There are newer models, like Llama 3.1 8B, that can give you better results. Remember to limit the number of devices you expose to Assist, and add more as you go.
@donnyf12 9 months ago
First, thanks for posting this. I am super interested in getting this to work. I gave it a try on my HP EliteDesk 800 G3 with an Intel quad-core i5-6500T and 16 GB of DDR4 RAM. I see that Ollama is using 100% of the CPU when I ask questions, but Assist times out waiting. I did have it come back saying it didn't know what one of my devices was, despite my giving it the actual friendly name. Also, when I asked 'Is gateway light on', after a long wait it replied 'I'm sorry, as an AI language model, I do not have access to real-time information about your home's lighting system. However, you can check if the gateway light is on by going to its corresponding switch or using a smart home app that controls it.' It did, however, respond correctly when I asked 'what is the state of "living room temp"', so I know it is capable of interacting. I had hoped that my computer would have enough power to do this, but it's starting to seem like that's not the case. Do you have any ideas on what I might tweak? I did add the max token setting as mentioned on your site, but it did not seem to matter.
@fixtse. 9 months ago
Hi, sorry, apparently I missed your comment. That might have something to do with the number of devices you expose to the model. Try starting with just a few, then increase the number and see how it impacts performance. This is not really a problem with the integration; it's a limitation that comes with using a very small model like Home 3B: it's easier for it to make mistakes if it has too much information. For real-time responses with a small model like this, you need an Nvidia GPU with at least 6 GB of memory (bigger models require more), and one from the 20-series or newer is better. There's no way around that so far. It can still be useful for things that are not time-sensitive, like summarizing calendars. The objective behind the project is to get the hardware requirements to run the model as low as possible so more people can access the tech, but that comes with some limitations. Bigger models have a better chance right now of doing a good job because of the larger set of skills they are trained on, but with enhanced training the small model can get better and better at Home Assistant-specific tasks.
@donnyf12 9 months ago
@@fixtse. Thanks for the response. I'm going to have major issues adding a GPU unless someone comes out with a USB or other type of external GPU.
@fixtse. 9 months ago
@@donnyf12 I read about this today; one day it might be possible: www.tweaktown.com/news/95872/nvidia-geforce-rtx-4090-doesnt-lose-much-performance-in-external-gpu-using-oculink/index.html. For most people, I think it would be easier to use a cloud service, but I guess that defeats the purpose.
@jonlangford5224 9 months ago
This came at the perfect time! Literally followed your video on setting up Ollama yesterday! Thank you for your awesome videos!
@fixtse. 9 months ago
🙌 I rushed this out as fast as I could; hope it works well for your setup 🎉
@alexgaffney6781 4 months ago
Can you make a video explaining how to configure the new version of the Llama Conversation integration with your model? It is now called Local LLM Conversation and has a new configuration page since Home Assistant 2024.6, and I'm not sure what the new options do.
@mice3d 9 months ago
Working well! I'm just running it in Docker on Windows, so I didn't need your script. Used this and FutureProofHomes' video to set up an RPi with the wake word, and also put Whisper with the small model on another machine in Docker; it's working well (still slow, but good enough!). Can't wait to see what the improvements are this year. Might have to build a better PC for this at some stage!
@fixtse. 9 months ago
🙌 Great! I have the same setup, running Whisper and Piper on a different PC with the Wyoming Docker images. I'm looking for a way to make them use the GPU so I can get faster response times. But since most people have really limited hardware, the effort right now is on lowering the hardware requirements so it can reach more people. These are exciting times.
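For anyone copying that setup, here is a minimal sketch of running Whisper and Piper as Wyoming services in Docker on a separate machine. The image names, arguments, ports and volume names below are taken from the commonly published Wyoming Docker examples and are assumptions, not the exact commands from the video:
# Speech-to-text (Whisper) on the usual Wyoming port 10300
docker run -d --name wyoming-whisper -p 10300:10300 -v whisper-data:/data rhasspy/wyoming-whisper --model small --language en
# Text-to-speech (Piper) on the usual Wyoming port 10200
docker run -d --name wyoming-piper -p 10200:10200 -v piper-data:/data rhasspy/wyoming-piper --voice en_US-lessac-medium
In Home Assistant you would then add the Wyoming Protocol integration twice, pointing at that machine's IP on port 10300 for speech-to-text and port 10200 for text-to-speech.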
@bramiz86 7 months ago
No matter what I do, I get 'Error: Could not find WSL IP address.'
@snopz 9 months ago
Going to try this tomorrow, because the method from the last video didn't work well for me: when I ask the model something, it can't see the devices even though I made sure they are exposed, and the model also asks itself questions and answers them within the same response to my question, which makes it take longer to respond.
@fixtse. 9 months ago
Hi, on my website I shared a promising prompt that teaches the model how to interpret your data better. Also, try limiting the number of devices you share with it at first. I really think we are going to see a bigger improvement when the training dataset gets enhanced with new examples, but that is going to take some time. Alex is working on improving the training techniques and the integration's code to ensure easier support as the project gets bigger.
@dylan_00 9 months ago
I'm getting "Unexpected error during intent recognition" when I try this. I can access the Ollama Web UI from my network; it's running in Docker on Ubuntu Server. Is there any reason Home Assistant can't connect or talk to it? I did have this working with LocalGPT earlier on another machine, so I know Home Assistant is capable. Thanks!
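Before digging into the integration, a quick way to confirm Home Assistant's host can reach Ollama at all (the IP below is a placeholder for the Ollama server; 11434 is Ollama's default API port):
# Run from the machine that hosts Home Assistant
curl http://192.168.x.x:11434/api/tags
A JSON list of installed models means the network path is fine and the problem is in the integration configuration; a timeout or "connection refused" points to the Docker port mapping or a firewall.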
@mice3d 9 months ago
Same here. I can chat with it no problem and access it from my other machine (it says "Ollama is running"). I rebooted and tried the IP address with the prefix in front of it, but no luck :(
@mice3d 9 months ago
Found the answer, I think: I redid all of the voice assistant setup in Home Assistant and also got rid of most of the intents, since I had a lot in there. Now it is working!
@dylan_00 9 months ago
@@mice3d I will have to give that a look and see if it fixes things for me; that may very well be my issue!
@gacekk87 6 months ago
Will this work in English only? What about prompts in other languages, and a Home Assistant instance set up in another language?
@Alpha7__ 9 months ago
When using a Home Assistant assistant, would it be possible to ask it general questions, like 'tell me a joke' or 'what year was xxxxx movie made', and have it use the model to answer?
@fixtse. 9 months ago
It is possible, but you need to take into consideration that the model only has access to the data it was trained on (and this is a small model) plus the context the integration provides. If you ask something it doesn't have the answer to, it will probably give you an inaccurate answer. So things like 'tell me a joke', yes; questions about dates and such, only if it has the information.
@ercanyilmaz8108 9 months ago
Thanks for sharing this video. I think it would be better if Ollama decided to control the devices by itself based on circumstances, because then you would be using the actual power of AI. Update: I ported my Home Assistant to Docker a short time ago. I can probably install Ollama in a Docker container on the same box where Home Assistant is located, but since I don't have much knowledge about Ollama's hardware requirements, I'm wondering how it will work performance-wise. 🙂
@fixtse. 9 months ago
Hey, thank you for watching. To use it for real-time user interaction you need a GPU; there's no way around that so far. For a small model like Home 3B, a 6 GB Nvidia GPU will do a decent job, and one from the 20-series onwards is even better.
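A rough way to confirm the model is actually running on the GPU rather than falling back to CPU (this assumes a Linux host with the NVIDIA drivers installed; it is not a step from the video):
# On the machine running Ollama, watch utilization while you send a prompt
watch -n 1 nvidia-smi
If GPU memory and utilization stay near zero while Ollama answers, it is running on the CPU and responses will be too slow for real-time Assist use.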
@rojoricardo 4 months ago
I did as described in your video, but now I can see, written in the Assist chat window, the commands Home Assistant should be receiving. I don't know what I did wrong; is anyone else getting these errors?
@SmartTechArabic 4 months ago
Thanks for the informative tutorial. I set up an Ollama server on a separate machine, and the local LLM works well through the Open WebUI. I set up the Ollama integration in Home Assistant and configured an Assist pipeline to use Ollama, but unfortunately, whenever I ask a question I get no response at all. What am I missing?
@fixtse. 4 months ago
Hi, you don't get any response? Not even an error? Have you checked the logs? Try to find something we can use to figure out what is happening.
@adimoandree4747 6 months ago
I can't find your model in the model list on Ollama anymore; is that correct? Keep up the awesome work! You are very smart.
@fixtse. 6 months ago
Hi, no, it's still there: ollama.com/fixt/home-3b-v3. Thank you for your comment, btw. Look out for this week's video on Saturday; you might find it useful.
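Based on that model page, pulling and smoke-testing the model on the Ollama host would look roughly like this (the exact name and tag may change as the model is updated, so treat it as an assumption):
ollama pull fixt/home-3b-v3
ollama run fixt/home-3b-v3 "Hello"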
@jlkz 3 months ago
I have mine set up and double-checked the expose settings. The AI can tell me the temperature and whether a light is on or off, but it cannot control anything. I tried the Ollama integration too, but it's the same thing. I'm not sure what I'm doing wrong and can't find any relevant information about this issue.
@fixtse. 3 months ago
Play around with the number of devices exposed to Assist and with their names. Small models have a hard time being accurate, as many users have found out, but it is possible to get good results; it just takes time to adjust your setup and prompt. Check the examples the integration shares with the AI and try to emulate their naming scheme. You can also check the dataset used to train the AI. Any AI is just a glorified text-completion algorithm, so seeing what is used to train it will help you adjust your entity names to perform better.
@tannisroot 8 months ago
I would love it if in the future you covered Ollama running on systems like Unraid and TrueNAS, and whether you can get GPU support on those. I have yet to test it myself, but there is very little info on GPU acceleration on Linux-based systems like that.
@RandomPedestrian07 6 months ago
I can confirm that running it in an Unraid Docker container uses the GPU, as long as you have the NVIDIA driver plugin. I've been using Ollama and Stable Diffusion in Docker for about a week now.
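For reference, the usual GPU-enabled Docker run for Ollama looks like the line below; it assumes the NVIDIA Container Toolkit (or Unraid's NVIDIA driver plugin) is already installed, and the container and volume names are just the common defaults:
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama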
@mythbuster74 8 months ago
Is there a larger model to use? I'm getting super mixed results with the 3B; it basically doesn't work, actually.
@rainemc 4 months ago
Summary of Ollama on Home Assistant: use an additional 170 W of power to get wrong answers 100% of the time.
@tomaszpankowski8903 9 months ago
Instead of WSL, I've installed Ollama in Docker on Windows. I can chat with it from Home Assistant, but it's unable to control anything. Any tips?
@fixtse. 9 months ago
Hi, check the number of devices you are sharing with the model. Start with a couple, verify that it works, then add more and see how it behaves.
@NeutrinoTek 9 months ago
Hi! Love the concept here, but I can't seem to get it to actually control anything. Ollama itself works fine, but when I set up the HA voice assistant, it doesn't actually execute the desired task. For instance, I ask it to turn on the TV lights and get the following response: Sure, I can do that for you. Switching on the TV Lights now. Failed to parse call from '{"service": "switch", "target_device": "switch.tv_light_center", "state": "on"}'! Any ideas what's going on here?
@fixtse. 9 months ago
Hi, sorry to hear that. The only advice I can give you right now is to check the number of devices you are exposing to Assist: if you expose too many, it might choose the wrong one. Also, giving your devices a name that indicates both the device and the area it is in seems to work better for contextual use. Start small, verify that it is working, then add more devices and see how it behaves.
@juan11perez 9 months ago
I also get the "Unexpected error during intent recognition" error.
@fixtse. 8 months ago
Hi, that error can arise for many reasons, but the most common one seems to be the Remote Request Timeout setting (go to the configuration options on the integration). Try increasing the value there and reducing the number of devices you expose to Assist: start with a couple and test how it behaves.
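One way to size that timeout is to time a generation against the Ollama API directly, outside Home Assistant. A sketch, assuming the default port and the Home 3B model linked elsewhere in the thread (substitute your own server IP and model name):
time curl http://192.168.x.x:11434/api/generate -d '{"model": "fixt/home-3b-v3", "prompt": "Turn on the kitchen light", "stream": false}'
If this regularly takes longer than the integration's Remote Request Timeout, raise the timeout or expose fewer devices so the prompt the model has to process is shorter.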
@hebe1792 7 months ago
Also, a couple of things to note: although I see the option to select Develop in the re-download options, it always defaults to 2.8 no matter what. Last thing, I tried to leave a comment in your website's comment section, but it does not work.
@fixtse. 7 months ago
You are right! Thank you for letting me know; I'll check what is happening with the comments on that article.
@zephirusvideos 9 months ago
Maybe I didn't understand correctly, but is it possible to have this running on a Linux machine and then use the HA integration with the model running on that Linux box?
@fixtse. 9 months ago
Yes, it's possible. If you are using the Docker image, you just need to expose the API port and point the integration to the IP address of the machine where you have Ollama running.
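A minimal sketch of what "expose the API port" means in practice on the Linux box (CPU-only example; the container and volume names are illustrative, not from the video):
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
The integration in Home Assistant would then be pointed at http://<linux-machine-ip>:11434.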
@zephirusvideos 9 months ago
@@fixtse. That was a fast reply! Should I first follow your video "Local AI with OLLAMA"? It explains how to install using Docker?
@fixtse. 9 months ago
@@zephirusvideos Yes, it uses Docker Compose to get rid of the configuration part 😁
@zephirusvideos 9 months ago
@@fixtse. Thank you. I've managed to install it, but it is failing: "Sorry, there was a problem talking to the backend: HTTPConnectionPool(host='192.168.x.xxx', port=11434): Read timed out. (read timeout=90)". Where can I find the port? Or is it correct?
@tomaszpankowski8903 9 months ago
@@zephirusvideos Check whether you see ports 3000 and 11434 when you run 'docker container ls'. If not, rebuild it with the new manifest. Check your firewall. Make sure the model name in the Home Assistant add-on settings is the same one used by Ollama.
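On the machine running the containers, these two checks cover most of that list (standard Docker and Ollama commands; port 11434 is Ollama's default as above):
# Confirm which ports each container actually publishes
docker container ls --format 'table {{.Names}}\t{{.Ports}}'
# List the installed model names exactly as Ollama knows them, to copy into the Home Assistant settings
curl http://localhost:11434/api/tags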
@TheMadRocker 8 months ago
So if I'm running Ubuntu in a Proxmox VM, I can skip most of the Windows steps, correct?
@fixtse. 8 months ago
Yes!!
@hotmonitor 6 months ago
Error: Could not find WSL IP address??
@fixtse. 4 months ago
Run this in a Windows terminal and let me know the output: bash.exe -c "ip a | grep eth0 | grep -oP '(?
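The command above is cut off by the page. A complete version that extracts the WSL eth0 IPv4 address could look like the following; this is a reconstruction rather than the original script, so treat the regex as an assumption:
bash.exe -c "ip a | grep eth0 | grep -oP '(?<=inet\s)\d+(\.\d+){3}'"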
@CyberDevilSec 5 months ago
Every time I follow the instructions in one of your videos, I get errors.
@fixtse. 4 months ago
Hi, let me know which command is failing for you; maybe there is something I need to update in the tutorial.
@jameskiely5518 4 months ago
@@fixtse. bash.exe : The term 'bash.exe' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
@jameskiely5518 4 months ago
@@fixtse. Transcript started, output file is C:\WSLPorts\WSLPorts.log Error: Could not find WSL IP address.
@fixtse. 4 months ago
@@jameskiely5518 Hi, try this line in a terminal on Windows: bash.exe -c "ip a | grep eth0 | grep -oP '(?
@jameskiely5518 4 months ago
@@fixtse. When I ran the command, it said I don't have any distros installed... which would be the best one?
@claudioguendelman 1 month ago
Can you do one on Linux? Windows sucks.
@fixtse. 1 month ago
Hey, the instructions for installing Docker on Linux are identical; just skip the WSL part. I believe the same, but for many people Linux is not really an option, which is why I made the tutorials Windows-oriented.