8 Best ChatGPT Plugins
11:38
Year ago
GPT-4 Explained in 100 Seconds
1:33
Comments
@garchafpv
@garchafpv 2 hours ago
y u modulating your voice.. or is it ai?
@bipinpandit6000
@bipinpandit6000 21 hours ago
I put the API key in the function but the code is not running, please help thanks
@cryptorich614
@cryptorich614 Day ago
thank you so much bro, you saved my life. I wonder if we can use this with Open WebUI, or do we need to create a GUI ourselves?
@OpenAITutor
@OpenAITutor 2 days ago
AI Austin, Good stuff!!
@MrWereldman
@MrWereldman 5 days ago
🎯 Key points for quick navigation:
00:00 *Memory in language models is used to keep tech users as the product. This tutorial provides a local AI solution for data privacy.*
00:13 *A local RAG agent using Ollama and vector databases stores conversations with 100% data privacy.*
00:40 *The AI uses retrieval augmented generation to find relevant data for queries.*
01:22 *Users can run open-source language models locally on their PCs, ensuring efficiency without cloud dependency.*
02:03 *Source code and tutorials are available in the creator's Discord server for early access.*
02:46 *Python 3.11 or 3.12 and Ollama are prerequisites for building this program.*
03:26 *Recommended Ollama models include Llama 3 and Mixtral for performance on varying hardware.*
04:08 *The program prompts the language model and prints responses using Ollama.*
05:01 *The program achieves conversational memory by storing text in a vector database.*
06:12 *Ollama's streaming response feature reduces latency significantly during interactions.*
07:52 *Vector embeddings represent conversational data numerically to determine context relevance.*
09:01 *Chroma DB is used as the vector database for storing embeddings.*
11:44 *PostgreSQL is employed for long-term conversation storage.*
13:07 *Postgres installation and creation of a superuser for database management are demonstrated.*
16:07 *Functionality for storing conversations in SQL is added, replacing the initial list method.*
17:28 *Multi-topic retrieval enhances context understanding by generating query-based embeddings.*
20:15 *Queries are transformed into Python lists for thorough context searches.*
24:23 *User experience is enhanced by adding a recall command and a visual loading bar.*
26:02 *Forget, memorize, and recall commands control database interactions and context usage.*
26:54 *Color-coding output improves readability; features and enhancements can be added by users.*
Made with HARPA AI
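For readers skimming the timestamps, here is a minimal sketch of the embed-and-store step described around 05:01-09:01, using the same ollama embeddings call and Chroma collection pattern that appears in the comments below; the function names are placeholders, not the video's exact code:

import ollama
import chromadb

client = chromadb.Client()  # in-memory vector DB
vector_db = client.get_or_create_collection(name='conversations')

def store_memory(convo_id: int, prompt: str, response: str) -> None:
    # embed one prompt/response pair and store it for later retrieval
    serialized = f'prompt: {prompt} response: {response}'
    embedding = ollama.embeddings(model='nomic-embed-text', prompt=serialized)['embedding']
    vector_db.add(ids=[str(convo_id)], embeddings=[embedding], documents=[serialized])

def recall_memory(query: str, n: int = 2) -> list[str]:
    # return the n stored exchanges most relevant to the query
    query_embedding = ollama.embeddings(model='nomic-embed-text', prompt=query)['embedding']
    results = vector_db.query(query_embeddings=[query_embedding], n_results=n)
    return results['documents'][0]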
@IAmMisterD
@IAmMisterD 5 days ago
could have included the code in your description 👎
@Ai_Austin
@Ai_Austin 5 days ago
i could, but i won't work for free. mcdonalds doesn't expect people to work for free, and it's weird that anyone expects someone who spent 8 years learning to code to give their work away for free. if you don't want to support the channel by watching the video, i have a PRO membership where my source code is released weeks before the videos. this is youtube, where they want me to create content that you watch to the end. if your time is too valuable to support my channel by watching the video, then you should be able to afford $25 a month to get code from a professional software engineer whom companies pay to write code 👍
@lepinecode4298
@lepinecode4298 6 days ago
It would also be nice to have the reverse: text to speech.
@lepinecode4298
@lepinecode4298 6 days ago
Only one line is shown.
@itsjoker1990
@itsjoker1990 10 days ago
lol i added a web frontend using flask with a few buttons, as well as textual commands for /recall, /memorize and /forget
@codeman99-dev
@codeman99-dev 11 days ago
Have you heard of Docker? A simple docker-compose would have saved you a lot of pain.
@Ai_Austin
@Ai_Austin 11 days ago
what pain would that have saved me as the instructor? i would have to explain a complex software tool designed to make software scalable and faster to deploy. it adds complexity during development, and users will only deploy this on one local pc. i would never use docker except for developing production software that i need to deploy on 100 servers fast. since this isn't one of those situations, i don't want to confuse people by telling them to do things more complex than i would do myself. docker reduces the complexity of deployment on production servers while increasing the complexity of development. this is 100% a development tutorial and does not require deployment, making docker practically valueless in this project. (sure, you could subjectively like the docker flow now that you have adopted it, but it was a learning curve that did add complexity to your local dev flow)
@dllatinux
@dllatinux 11 days ago
Hi! Maybe the Colab notebook that is posted is a bit out of date? I am having errors importing some packages. And of course, thanks for this video!
@user-fq8uw8gv5m
@user-fq8uw8gv5m 11 days ago
this world has gone kaka
@user-fq8uw8gv5m
@user-fq8uw8gv5m 11 days ago
what a non sense channel and creator, please get a job and a life, we don't need the nth tutorial with this background where you pretend to be on a sci-fi astroship, oh come on, you are pathetic
@darkmatter9583
@darkmatter9583 13 days ago
Can you share code please?
@Belus1234
@Belus1234 13 days ago
apparently the Llama 3 model has a limit. can someone tell me how to reduce the number of tokens used, just for Llama 3, or are there any other solutions? if so, please let me know.
@Ai_Austin
@Ai_Austin 12 days ago
you can create a function to check the total tokens in the convo before sending your request; if it exceeds the model's limit, remove messages until it doesn't, then send the prompt.
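A minimal sketch of that token check, assuming the convo is a list of {'role': ..., 'content': ...} dicts as in the tutorial; the limit value and the 4-characters-per-token estimate below are rough assumptions, not exact numbers for Llama 3:

MAX_TOKENS = 8192              # assumed context limit for your model
RESERVED_FOR_RESPONSE = 1024   # leave room for the model's reply

def estimate_tokens(message: dict) -> int:
    # crude approximation: ~4 characters per token
    return max(1, len(message['content']) // 4)

def trim_convo(convo: list[dict]) -> list[dict]:
    # drop the oldest non-system messages until the convo fits the budget
    budget = MAX_TOKENS - RESERVED_FOR_RESPONSE
    trimmed = list(convo)
    while sum(estimate_tokens(m) for m in trimmed) > budget and len(trimmed) > 2:
        trimmed.pop(1)  # keep the system prompt at index 0, drop the oldest turn
    return trimmed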
@TheNLK
@TheNLK 14 days ago
This works on .bin files, which have had their support dropped. Is there a solution around that?
@Pro-edit-No.01
@Pro-edit-No.01 15 days ago
i am a windows user, can you give me the link to the PyAudio binding for the PortAudio library? plzzzzz help me, i have to submit my ai project in 2 weeks. nice explanation ❤
@Avalon19511
@Avalon19511 16 days ago
How would you change the voice from male to female?
@VsnsmnGndmd
@VsnsmnGndmd 16 days ago
Hall Paul Thomas Brenda Walker Mary
@curiousfurious4162
@curiousfurious4162 16 days ago
can anyone give me the code, mine is not working.
@doramy9689
@doramy9689 16 days ago
this is criminally underrated
@September222036
@September222036 17 days ago
Bro, thanks so much for this, I created my first self-hosted agent in less than 48h. The only issue I have, if anyone can help, is that every time I exit or reset the convo, it starts over again and considers only the system messages defined in python, not the /memorized intel or past convos. Any tips to make it a continuous convo, or to recall actual context from the /memorized pool, would be appreciated <3
@SmartjinxKimani
@SmartjinxKimani 19 days ago
🎯 Key points for quick navigation:
00:00 *Introduction to building a voice-only interface for Google Gemini using Python.*
00:15 *The project involves combining various AI libraries, including an improved version of OpenAI's Whisper.*
00:28 *The assistant will use OpenAI's Text-to-Speech API for generating a human-like voice.*
01:09 *Pro channels on Discord will offer written tutorials and code blocks to accompany the videos.*
01:23 *Python version 3.11 is required for this project, and installation guidance is provided.*
01:36 *Instructions for installing essential Python packages for the voice assistant are outlined.*
03:27 *A simple program to interact with the Gemini API is created using the API key.*
06:45 *Configuration settings for Gemini's performance are adjusted, focusing on response randomness and output length.*
08:19 *Safety filters in Gemini can be turned off for less restricted interaction.*
09:59 *The OpenAI Text-to-Speech API setup requires account creation and credit management.*
14:33 *The program incorporates functions for both speech synthesis and transcription using Whisper.*
18:53 *A wake word detection system is implemented to activate the voice assistant.*
21:10 *The assistant is set to listen continuously, with a half-second delay between audio processing callbacks.*
Made with HARPA AI
@Bogdan-AI
@Bogdan-AI 20 days ago
Hi AI Austin! Bogdan AI here. I want to express my gratitude for your video. I set a goal this week to just think about a solution (Agent, GPT, system prompt) to help myself with AI. Last week I already created an Obsidian based knowledgebase that I can chat with (and Fabric patterns to fill it with content). After following along your crash course, I realized that its result can be a very good starting point for creating my own AI assistant. Do you agree, or would you recommend an even better way? In response to your question, I already extended it with persistent ChromaDB and now I am wondering about what is best: persistent or in-memory db. I don't have a lot of data but persistent seems to load faster. And of course, I see that it can be extended with additional tools (google search, add knowledge from files, real-time separate database tables for tracking weight and other parameters, etc.)
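On the persistent vs. in-memory question, a minimal sketch of the two ChromaDB client options (the storage path below is arbitrary, and this is not the video's exact code):

import chromadb

# in-memory (ephemeral): embeddings are rebuilt from Postgres on every start
ephemeral_client = chromadb.Client()

# persistent: embeddings are saved to disk and reloaded on start, so old
# conversations don't need to be re-embedded, at the cost of a second copy on disk
persistent_client = chromadb.PersistentClient(path='./chroma_storage')

collection = persistent_client.get_or_create_collection(name='conversations')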
@Johndoe-176
@Johndoe-176 20 days ago
stop typing clear, just press ctrl+l
@patflc
@patflc 24 days ago
hello Austin, everything was working so well until importing WhisperModel, then i got this error:
File "/home/linuxbrew/.linuxbrew/Cellar/python@3.12/3.12.5/lib/python3.12/site-packages/av/__init__.py", line 20, in <module> from av._core import time_base, library_versions
ModuleNotFoundError: No module named 'av._core'
I have been trying to solve it in a bunch of ways but still nothing T_T do you know how i could fix it? i'm working on windows with wsl and vscode. thx anyway, is there an alternative library for achieving these same functions for the program?
@Ai_Austin
@Ai_Austin 23 days ago
it is a wsl issue. running a linux operating system inside of a non-unix operating system adds a large layer of complexity and introduces bugs that do not exist on an actual linux operating system. my advice is to code on linux; if not, windows will suffice. but do not code on linux inside windows, unless you like having to hack wsl to run your program.
@Ronaldograxa
@Ronaldograxa 25 days ago
looove this! thanks a lot! Please how do you get the mouth to move while you talk on the video? I have been trying to learn this! thanks again for that
@NoahStacy-k4g
@NoahStacy-k4g 26 days ago
got this to run, but boy is llama3 INSANELY slow. like so slow it's useless.
@Ai_Austin
@Ai_Austin 26 days ago
it comes down 100% to your pc's hardware. it's like pulling out mom's old laptop and saying "this new advanced-graphics video game is slow". chatgpt is not inherently fast: closedai has dozens of billions of dollars worth of GPUs running in microsoft servers to make chatgpt fast. hardware limitations are real things when we get into running Ai locally
@mycount64
@mycount64 27 days ago
An AI YouTube video teaching how to teach the creation of an AI.
@sigil8784
@sigil8784 27 days ago
This is fantastic and is something I've been looking into doing for the past few days. Do you think it'd be possible to do an add-on/followup to this video on how the code could be integrated into Open WebUI?
@winxalex1
@winxalex1 28 days ago
/teach query answer. Read TeachableAgent python code in AutoGen. Use MongoDB so you don't need 2 DBs. Main problem of this approach is /update What is your name? I'm AI Texas. Solution Graph RAG.
@Izngd
@Izngd Month ago
middle finger ---- this made me LOL.
@loganwilliams4958
@loganwilliams4958 Month ago
Would it be possible to use this together with your unlimited memory technique and a RAG system? Or is the unlimited memory video going over a kind of rag system?
@loganwilliams4958
@loganwilliams4958 Month ago
Can we use this and a RAG system together? I’m a little new to AI
@loganwilliams4958
@loganwilliams4958 Month ago
Never mind I just realized that this is a RAG system🤦‍♂️. I guess what I’m really asking is if there is a way to give the ai access to word documents, graphs, and other text files. Or do I need to just copy and paste the documents into the prompt section and do the /memorize command?
@stevedavey9435
@stevedavey9435 Month ago
This guy's monotone is gross
@wallmemes
@wallmemes Month ago
how do i incorporate llama 405?
@antonpictures
@antonpictures Month ago
RAG is missing image embedding; text alone is not enough.
@sowbharnikadevi9331
@sowbharnikadevi9331 Month ago
can we get the openai api without paying credits?
@TopBassBoosters
@TopBassBoosters 4 days ago
Sadly no, that's what is holding me back from making my own chatbot😢
@ge0xploit
@ge0xploit Month ago
Amazing🔥🔥🔥🔥🔥🔥
@mohammedissam3651
@mohammedissam3651 Month ago
Is this AI just saving my chat history? Why, what is the application of using it, the purpose of its existence?! Could it solve problems? I don't chat on my system 😅 Can i teach it?
@omarfargally7012
@omarfargally7012 Month ago
Hi, this tutorial is amazing. I have one question though: in the video you made a function to set the chromadb database for the embeddings but you did not make a function to update that database and sync it with the postgresql server updates. That means that whatever conversations the user has before ending the session will NOT be used in recalling. As I said, the chromadb database is only set before the loop and is not updated during the conversation (even though the sql server is being updated). I made an update function just in case someone wants to do that:

def update_vector_db():
    conn = connect_db()
    vector_db = client.get_collection(name='conversations')
    with conn.cursor(row_factory=dict_row) as cursor:
        chroma_ids = vector_db.get()['ids']
        max_id = max(int(id) for id in chroma_ids) if chroma_ids else 0
        cursor.execute('SELECT * FROM conversations WHERE id > %s ORDER BY id', (max_id,))
        new_conversations = cursor.fetchall()
    conn.close()
    for convo in new_conversations:
        serialized_convo = f"prompt: {convo['prompt']} response: {convo['response']}"
        response = ollama.embeddings(model='nomic-embed-text', prompt=serialized_convo)
        embedding = response['embedding']
        vector_db.add(
            ids=[str(convo['id'])],
            embeddings=[embedding],
            documents=[serialized_convo]
        )
    print(f"Added {len(new_conversations)} new conversations to the vector database.")
@Ai_Austin
@Ai_Austin Month ago
absolutely an option, and worth it if you are having conversations that exceed the context limit. another thing you could add to improve it even further: every time you do a retrieval for embedded context, make sure that context is not already in the convo. it will add computational overhead, but it will improve output quality by not giving the model duplicated context in one convo
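A minimal sketch of that de-duplication check, assuming (as in the tutorial's pattern) that convo is the list of message dicts in the current conversation and retrieved is the list of serialized "prompt: ... response: ..." strings returned from the Chroma query; the names here are placeholders, not the video's code:

def filter_new_context(retrieved: list[str], convo: list[dict]) -> list[str]:
    # keep only retrieved snippets that are not already present in the convo
    existing = ' '.join(m['content'] for m in convo)
    return [snippet for snippet in retrieved if snippet not in existing]

# usage: only pass the model context it hasn't already seen this session
# new_context = filter_new_context(retrieved, convo)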
@i2c_jason
@i2c_jason Month ago
This is so valuable. I'm considering your subscription based on the quality of this video! I have a dumb beginner question, but could this RAG module plug into LangChain if I wanted to architect a more complex agentic workflow? I'll be parsing a lot of engineering requirements into various parameters that will make their way to various mathematical and geometrical outputs, so I'm envisioning your example as a great way to manage the main assistant, and then different flavors of it as expert agents in various disciplines within my LangChain graph. Not looking to do any fine-tuning of models, as we are a small team and I anticipate LLM capabilities will continue to grow, so I'd like to future-proof my design with a lot of multi-shot learning near my system's final outputs. -I2C_Jason
@FredyGonzales
@FredyGonzales Month ago
Excellent work, maestro, almost too good to be true. Thank you very much.
@stevetownley1
@stevetownley1 Month ago
Awesome videos! What spec box are you running these local LLM systems on?
@michaelandersen9491
@michaelandersen9491 Month ago
I really liked your vid. I work with this stuff a bunch, and this sparks all kinds of ideas. Thank you for sharing! p.s. Man you deliver that stuff fluff free :o awesome.👍
@robertohluna
@robertohluna Month ago
Perfect
@memegazer
@memegazer Month ago
Could you use this to fine-tune an LLM on your game's lore and then produce NPC output?
@mahdinahmed-zf5yw
@mahdinahmed-zf5yw Month ago
where is the time stamp?
@Ai_Austin
@Ai_Austin Month ago
00:00
@hassanlearns2290
@hassanlearns2290 Month ago
Error: ModuleNotFoundError: No module named 'distutils'. Any fix?
@obentti
@obentti Month ago
Has anyone actually implemented the logic in this video and managed to get it to work?