You don’t have to explicitly “allow” anything; the audio sent to and received from the Multimodal Live API is chunked and handled asynchronously.
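For reference, the receiving side with the google-genai SDK looks roughly like this. It is only a sketch: the model name, the v1alpha api_version option, and the send()/receive() method names are assumptions that vary between SDK versions.

import asyncio
from google import genai

def handle_audio_chunk(chunk: bytes) -> None:
    # Placeholder: queue the chunk for playback or forward it to the browser client.
    pass

async def main():
    client = genai.Client(api_key="YOUR_API_KEY", http_options={"api_version": "v1alpha"})
    config = {"response_modalities": ["AUDIO"]}
    # Audio flows as small chunks over the websocket session, in both directions.
    async with client.aio.live.connect(model="gemini-2.0-flash-exp", config=config) as session:
        await session.send(input="Hello there!", end_of_turn=True)
        async for response in session.receive():
            if response.data:  # one chunk of raw PCM audio bytes from the model
                handle_audio_chunk(response.data)

asyncio.run(main())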
@nicolassuarez2933 • 2 days ago
Outstanding! Can you add function calling, and show how? Thanks!
@kyuleo • 3 days ago
Great job! Would it be very complicated to achieve this with LangGraph?
@rollin_sap • 3 days ago
Could you check the Git repo? It's not working properly... the text config works okay-ish, but the audio config isn't working.
@yeyulab • 3 days ago
Could you open your browser's Inspect console and let me know what it prints? I was using Chrome; what browser are you using?
@RealLexable • 3 days ago
Isn't Gemini 2.0 Flash multimodal-capable from the beginning anyway? Or was your development for local purposes?
@yeyulab • 3 days ago
I decoupled the API usage from Google AI Studio for customization.
@yeyulab • 4 days ago
!! Please use the Twikit library mentioned in this video carefully; X may suspend accounts that frequently scrape the platform.
@epic_miner • 8 days ago
😮😮😮😮 Bro, you are the hidden champ on YouTube ❤❤❤❤❤
@lionmike247 • 9 days ago
Awesome video, man. Thanks for sharing! I had one visual question: how are you getting those curved borders on your Mac applications? Is that something you did under the hood in macOS, or is there an app for this?
@yeyulab • 9 days ago
I use video recording software called FocuSee.
@kmnNmk-je8jk • 9 days ago
baaaaaaaaaaaaaaaaaaaaaaad
@Jason-ju7df • 18 days ago
I want to apply this UI to Microsoft's AutoGen Magentic-One
@yeyulab • 18 days ago
Next one!
@杨慧-z1c • 25 days ago
Where is the problem?

2024-11-28 15:39:56,819 Error running application handler <panel.io.handlers.ScriptHandler object at 0x0000022F47AE30A0>: 'material' File 'base.py', line 118, in __init__: theme = self._themes[theme]()
Traceback (most recent call last):
  File "D:\anaconda3\envs\agent\lib\site-packages\panel\io\handlers.py", line 405, in run
    exec(self._code, module.__dict__)
  File "D:\crewaipro\research_crew\src\research_crew\main.py", line 8, in <module>
    from research_crew.crew import ResearchCrew
  File "D:\crewaipro\research_crew\src\research_crew\crew.py", line 11, in <module>
    chat_interface = pn.chat.ChatInterface()
  File "D:\anaconda3\envs\agent\lib\site-packages\panel\chat\interface.py", line 165, in __init__
    params["widgets"] = [ChatAreaInput(placeholder="Send a message")]
  File "D:\anaconda3\envs\agent\lib\site-packages\panel\widgets\base.py", line 115, in __init__
    super().__init__(**params)
  File "D:\anaconda3\envs\agent\lib\site-packages\panel\reactive.py", line 634, in __init__
    super().__init__(**params)
  File "D:\anaconda3\envs\agent\lib\site-packages\panel\reactive.py", line 124, in __init__
    super().__init__(**params)
  File "D:\anaconda3\envs\agent\lib\site-packages\panel\viewable.py", line 709, in __init__
    self._update_design()
  File "D:\anaconda3\envs\agent\lib\site-packages\panel\viewable.py", line 727, in _update_design
    self._design = self._instantiate_design(design, config.theme)
  File "D:\anaconda3\envs\agent\lib\site-packages\panel\viewable.py", line 718, in _instantiate_design
    return design(theme=theme)
  File "D:\anaconda3\envs\agent\lib\site-packages\panel\theme\base.py", line 118, in __init__
    theme = self._themes[theme]()
KeyError: 'material'
@JohnnyJohnson-g4e • 26 days ago
Can you integrate this with a Flows crew?
@Jason-ju7df • 27 days ago
Love the CrewAI videos!
@yeyulab • 26 days ago
More to come!
@KodandocomFaria • 1 month ago
Thank you so much for this tutorial. Do you know how to fine-tune this reasoning with PPO and human feedback (RLHF)? I saw an interface from Argilla one time, but I lost the link.
@yeyulab • 26 days ago
So the key is how to augment the 20k reasoning dataset with "feedback" content. The Argilla app is a good choice for this job, and once the dataset is annotated, use ArgillaTrainer to train with RLHF.
@m.a.7768 • 1 month ago
Great work! Thanks.
@yeyulab • 1 month ago
Thank you!
@uzzamakhamar6605 • 1 month ago
I understand the reason for switching from Streamlit to Panel. However, the problem I have is that CrewAI's _ask_human_input takes human input only once. If you have an agent that needs to interactively request some data, CrewAI does not support this. An agent can, however, call a Tool as many times as it wants to. So I have implemented your logic in an _ask_human_input_tool as a workaround. It works, but I have yet to implement the UI for this using Panel.
@yeyulab • 1 month ago
Wow, brilliant solution!
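For anyone who wants to try the same trick, here is a rough sketch of the idea (my guess at it, not @uzzamakhamar6605's actual code; the tool name and agent fields are made up for illustration):

from langchain.tools import tool
from crewai import Agent

@tool("Ask Human")
def ask_human(question: str) -> str:
    """Ask the human user a follow-up question and return the answer."""
    return input(f"{question}\n> ")

# An agent can call this tool as many times as it needs during a task.
analyst = Agent(
    role="Research Analyst",
    goal="Gather missing details from the user before answering",
    backstory="An analyst that asks clarifying questions whenever data is missing.",
    tools=[ask_human],
)

To hook it into Panel later, replace the input() call with something that awaits the chat widget instead of the terminal.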
@nilamdhatrak6346 • 1 month ago
Truly awesome! Many thanks!! Exactly what I was looking for!
@yeyulab • 1 month ago
Glad you like it!
@user-wr4yl7tx3w • 1 month ago
How about using LangChain for RAG? Can that work as well?
@yeyulab • 1 month ago
It surely can. The easiest way is to build a create_retrieval_chain and register its invoke() as a function of the agent.
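A minimal sketch of that idea, assuming a small FAISS index, OpenAI embeddings, and a gpt-4o-mini chat model (all illustrative choices; you need the langchain-openai and faiss-cpu packages installed):

from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_core.prompts import ChatPromptTemplate
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain

# Tiny in-memory vector store standing in for your real document index.
store = FAISS.from_texts(
    ["Panel is a Python dashboarding library.", "CrewAI orchestrates role-playing agents."],
    OpenAIEmbeddings(),
)
prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer using only this context:\n{context}"),
    ("human", "{input}"),
])
combine_docs = create_stuff_documents_chain(ChatOpenAI(model="gpt-4o-mini"), prompt)
rag_chain = create_retrieval_chain(store.as_retriever(), combine_docs)

def rag_lookup(query: str) -> str:
    # Register this wrapper as the agent's function/tool.
    return rag_chain.invoke({"input": query})["answer"]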
@GOGYPRO • 1 month ago
Thank you for sharing this!
@yeyulab • 1 month ago
Of course!
@mikew2883 • 1 month ago
Excellent! 👏
@yeyulab • 1 month ago
Glad you like it!
@ronengit • 1 month ago
Hi Yeyu, this is what I was looking for!! Thanks. Unfortunately, none of the agent callbacks are working for me. I am using Panel version 1.5.3 and CrewAI 0.70.1. I did add callbacks to the tasks, which provide some info through the chat, but... Help please.
@yeyulab • 1 month ago
Hey, the demo code was tested with older versions of these libraries: crewai==0.28.8 and panel==1.4.0. CrewAI must have changed some definitions, but the basic idea of the callbacks should still apply. The original code: github.com/yeyu2/KZbin_demos/blob/10156ef4256aeb0e6ca88bf3191ebfe39f507956/crewai_panel.py
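For anyone adapting it to a newer CrewAI, the general wiring looks roughly like this (a simplified sketch, not the exact repo code; the agent/task fields are placeholders, and parameter names may differ between versions):

import panel as pn
from crewai import Agent, Task

pn.extension()
chat_interface = pn.chat.ChatInterface()

def task_callback(output):
    # Push each finished task's result into the Panel chat window.
    chat_interface.send(str(output), user="Assistant", respond=False)

writer = Agent(
    role="Writer",
    goal="Write the final report",
    backstory="A concise technical writer.",
)
report_task = Task(
    description="Summarize the research findings.",
    expected_output="A short summary.",
    agent=writer,
    callback=task_callback,  # fires when the task completes
)
# Add writer and report_task to your Crew as usual, then serve chat_interface with Panel.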
@ViktorIwan • 1 month ago
Will it support PDF Upload / RAG ?
@yeyulab • 1 month ago
Swarm is a higher-layer agent orchestration framework, so the implementation of features like RAG is left to developers. You can create a RAG agent with a "function" that processes the user input and responds with document retrieval.
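A minimal sketch of that pattern with OpenAI Swarm; the retrieve_docs body is a placeholder where a real vector-store lookup would go:

from swarm import Swarm, Agent

def retrieve_docs(query: str) -> str:
    # Placeholder retrieval: swap in an actual vector-store or keyword search here.
    if "panel" in query.lower():
        return "Panel is a Python dashboarding library."
    return "No matching document found."

rag_agent = Agent(
    name="RAG Agent",
    instructions="Answer questions by calling retrieve_docs and citing what it returns.",
    functions=[retrieve_docs],
)

client = Swarm()
response = client.run(
    agent=rag_agent,
    messages=[{"role": "user", "content": "What is Panel?"}],
)
print(response.messages[-1]["content"])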
@user-wr4yl7tx3w • 2 months ago
Better than LangGraph?
@yeyulab • 2 months ago
I would say Swarm just gathers multiple chatbots (with very few system-prompting techniques); it's not for complex agent workflows like what LangGraph or AutoGen does.
@Idiot123009 • 2 months ago
What about a query that doesn't need to call the function?
@yeyulab • 1 month ago
The model will decide on its own and may respond to you without making any calls.
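For example, with the OpenAI-style chat completions API (the model name and the get_revenue tool schema below are just illustrative):

from openai import OpenAI

client = OpenAI()
tools = [{
    "type": "function",
    "function": {
        "name": "get_revenue",
        "description": "Get revenue for a company and year.",
        "parameters": {
            "type": "object",
            "properties": {
                "company": {"type": "string"},
                "year": {"type": "integer"},
            },
            "required": ["company", "year"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Tell me a joke."}],  # no function needed here
    tools=tools,
    tool_choice="auto",  # the model decides whether to call a tool
)
msg = resp.choices[0].message
if msg.tool_calls:
    print("Model chose to call:", msg.tool_calls[0].function.name)
else:
    print("Plain answer:", msg.content)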
@zmjerry • 2 months ago
What do you think about the OpenAI Swarm framework vs Agency Swarm? Is this still the more promising one?
@yeyulab • 2 months ago
To me, Swarm frameworks focus more on complex and dynamic orchestration between agents, while AgentKit is more of a quick improvement on a model's reasoning process, adding sub-blocks and steps supported by external information. If you are building human-interaction projects, you should definitely choose Swarm. For a comparison of OpenAI Swarm and Agency Swarm, the author of Agency Swarm has made a good video: kzbin.info/www/bejne/rF6ymIqdftKIpsU
@NavitaThakur-nd4il • 2 months ago
Thanks for the code. I tried it, but it does not return any response after I enter the question.
@yeyulab • 2 months ago
My bad, I forgot to attach the versions of these libraries; there are probably conflicts if you install the latest ones. You'd better install crewai==0.28.8 and panel==1.4.0, the versions under which my code ran successfully.
@blackswann9555 • 2 months ago
Awesome video!
@yeyulab • 2 months ago
Glad you enjoyed it, thanks!
@jlcasesES • 2 months ago
🎯 Key points for quick navigation:

00:00 *📘 Introduction to Llama 3.2 and on-device applications*
- Presentation of the lightweight Llama 3.2 models, especially the 1B and 3B ones.
- Comparison with the medium-sized models, which excel at complex visual understanding.
- Multilingual capabilities and text generation of the small models for on-device applications.

02:31 *🔧 Function calling: theory and tools*
- Explanation of the function-calling capability in language models.
- Introduction to the Instructor library and Pydantic for defining and validating functions.
- Advantages of implementing function calls with non-fine-tuned base models.

04:14 *💻 Practical example: a finance function with Llama 3.2*
- Definition of the dummy function "get_revenue" using Pydantic models.
- Installation of the required packages, such as "openai", "pydantic", and "instructor".
- Configuration of the API and preparation of the environment to run the finance function.

06:16 *🛠️ Validation and execution of generated functions*
- Running the Llama 3.2 1B model to generate structured function calls.
- Validation of the JSON output, which includes the function name and its arguments.
- Integration and execution of the function based on the arguments provided by the model.

08:39 *🏠 Building a smart-home system*
- Definition of multiple functions such as "set_temperature" and "toggle_lights" to control devices.
- Use of the Llama 3.2 3B model with Instructor to process user commands.
- Interactive demonstration of commands and responses in a smart-home environment.

12:14 *✅ Conclusions and benefits of automation with Llama 3.2*
- Importance of connecting real APIs and hardware interfaces to automate actions.
- Economic advantages of using open-source models like Llama 3.2.
- Invitation to subscribe to the channel and upcoming tutorials to keep innovating.

Made with HARPA AI
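A rough sketch of the Instructor + Pydantic pattern summarized above; the endpoint URL, model tag, and the get_revenue schema are illustrative assumptions rather than the exact code from the video:

import instructor
from openai import OpenAI
from pydantic import BaseModel, Field

# Point the OpenAI client at any OpenAI-compatible server hosting Llama 3.2
# (an Ollama-style local endpoint is assumed here).
client = instructor.from_openai(
    OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed"),
    mode=instructor.Mode.JSON,
)

class GetRevenue(BaseModel):
    """Arguments for the dummy get_revenue call."""
    company: str = Field(description="Company name")
    year: int = Field(description="Fiscal year")

call = client.chat.completions.create(
    model="llama3.2:1b",
    response_model=GetRevenue,  # Instructor validates the output against this schema
    messages=[{"role": "user", "content": "What was Apple's revenue in 2023?"}],
)

def get_revenue(company: str, year: int) -> float:
    return 123.4  # dummy implementation

print(get_revenue(call.company, call.year))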
@AsimovAcademy • 2 months ago
Thank you so much for this video! That's exactly what I was looking for :)
@yeyulab • 2 months ago
Glad it was helpful! Thanks!
@ramannegi3729 • 3 months ago
I want to deploy this to a website. How can we do that?
@mahatmakawa8420 • 3 months ago
Hi, can you try function calling with the latest versions of vLLM and pyautogen? The method you proposed is not working anymore. Thanks.
@yeyulab • 3 months ago
Ok, let me try it
@IdPreferNot1 • 3 months ago
Excellent video covering what I imagine llama-index (smart guys) have purposefully developed as a more robust agent system that bets on the stronger performance of the next generation of foundation models... exactly what Altman and other model builders suggest. Looking forward to more on this framework in the future.
@gokusaiyan1128 • 3 months ago
Love your series. Can you make a video on using AutoGen to generate code and then compile it, so we get a final executable from it? For example, I want it to write a function that takes a web URL and an example request to that URL, then analyzes the request to figure out the API routes and writes a program in a compiled language that can make requests to that web URL and execute it.
@successahead5598 • 3 months ago
Can you help with getting CrewAI working with Django?
@raomotorsports • 4 months ago
THANK YOU 🙏🏻 Really appreciate the way you explain each step of the process! Made it super easy to understand how it works. You are the man!
@bwilliams060 • 4 months ago
Hi Yeyu! I have yet to see a video where Chainlit has been paired up with CrewAI. Is there a reason for this? Is it more like Streamlit or Panel as far as long-running multi-agent frameworks go?
@leane-q9y • 4 months ago
Great video! Does anyone know how to close the browser window that opens after the server is launched in the code? I am able to stop the server but not close the window with it...
@florentromanet5439 • 4 months ago
Thanks 🙏
@yeyulab • 4 months ago
You’re welcome, hope it helps!
@jermesastudio • 4 months ago
Thank you for this tutorial. It was helpful for my work.
@yeyulab • 4 months ago
Thanks, glad to see it helped
@mikew2883 • 4 months ago
Awesome tutorial! 👏
@yeyulab • 4 months ago
Thanks
@NathanVirushabadoss • 4 months ago
Can you please upload the video on AutoGen with WebSockets?
@لغزالحياة-ظ5ك • 4 months ago
Thank you for the video 🎉🎉
@mikew2883 • 4 months ago
Good stuff! 👍
@gokusaiyan1128 • 5 months ago
Can you do one video on AutoGen with a UI? No, I don't want AutoGen Studio.
@MrMoonsilver • 5 months ago
Man!! Finally someone did it, thank you very much for sharing this with us!
@drlordbasil • 5 months ago
Thanks for this, I don't use autogen often, but I love the builds that are possible.
@AdibaHaque-f2j • 5 months ago
Amazing video! Very to-the-point and helpful. I am encountering an issue and was hoping you could assist me. In your code, you reset input_future each time to receive new input from the user. However, when I attempt to implement this, I only receive responses from the agents the first time I provide input. After that, there is no further agent engagement or conversation, and the terminal indicates that no input is awaited. How can I maintain ongoing user engagement after each termination? For instance, if I have three agents engaging in the chat, I would like the flow to be:
User input → User proxy → Agent 1 → Agent 2
then again: User input → User proxy → Agent 1 → Agent 2
and so on. Thank you!
@沃滋基 • 5 months ago
Thank you very much for sharing the code! It is very useful to me. Additionally, I would like to ask if it is possible to initiate a new round of conversation. Specifically, when one round of conversation has ended and I attempt to ask a second question, it enters the logic that prints 'There is currently no input being awaited.'
@AnandKumar-fn4sd • 6 months ago
Hello, which versions of Python, Panel, and CrewAI are you using? I am consistently getting an error for the basic chat (the first example): AttributeError: module 'panel' has no attribute 'chat'.