AI Pioneer Shows The Power of AI AGENTS - "The Future Is Agentic"

633,556 views

Matthew Berman

1 day ago

Comments: 685
@MarkLewis00
@MarkLewis00 13 күн бұрын
The future is agentic indeed! I would love to integrate Pinecone with Composio & Autogen.
@e-vd
@e-vd 8 ай бұрын
I really like how you feature your sources in your videos. This "open source" journalism has real merit, and it separates authentic journalism from fake news. Keep it up! Thanks for sharing all this interesting info on AI and agents.
@philipduttonlescorlett
@philipduttonlescorlett 2 ай бұрын
I completely agree with this sentiment. In a world dominated by populism, especially in politics and mainstream media, it's refreshing to see scientifically evidence-based content like this on YouTube. We need much more of this kind of journalism that prioritizes facts and critical thinking.
@stray2748
@stray2748 8 ай бұрын
LLM AI + "self-dialogue" via reflection = "Agent". Multiple "Agents" come together. The user asks them to solve a problem. The "Agents" all start collaborating with one another to generate a solution. So awesome!
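A minimal sketch of that collaboration loop, assuming a hypothetical call_llm() helper in place of a real chat-model API (it returns placeholder text here so the script runs as-is):

```python
# Minimal two-agent collaboration sketch.
# call_llm() is a hypothetical stand-in for a real chat-model API call;
# swap in your provider's client to make it do real work.

def call_llm(system: str, prompt: str) -> str:
    # Placeholder so the sketch runs offline; a real version would hit an LLM API.
    return f"[{system}] response to: {prompt[:60]}..."

def solve_with_dialogue(task: str, rounds: int = 3) -> str:
    worker_sys = "You are a solver. Propose a solution to the task."
    critic_sys = "You are a critic. Point out flaws and suggest concrete fixes."

    draft = call_llm(worker_sys, task)
    for _ in range(rounds):
        critique = call_llm(critic_sys, f"Task: {task}\nDraft: {draft}")
        draft = call_llm(worker_sys,
                         f"Task: {task}\nPrevious draft: {draft}\nCritique: {critique}\nRevise.")
    return draft

if __name__ == "__main__":
    print(solve_with_dialogue("Write a function that merges two sorted lists."))
```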
@ihbrzmkqushzavojtr72mw5pqf6
@ihbrzmkqushzavojtr72mw5pqf6 8 ай бұрын
Is self-dialogue the same as Q*?
@stray2748
@stray2748 8 ай бұрын
@@ihbrzmkqushzavojtr72mw5pqf6 I think it's the linchpin they discovered to be a catalyst for AGI - albeit with self-dialogue + multimodality being trained from the ground up in Q* (something ChatGPT did not have in its training). Transformers were built on mimicking the human neuron (the Rosenblatt perceptron); okay, now following human nature, let's train it from the ground up with multimodal data and self-dialogue (like humans possess).
@Korodarn
@Korodarn 8 ай бұрын
@@ihbrzmkqushzavojtr72mw5pqf6 Not exactly. Q* is pre-thought, before inference is complete. The difference is that with planning, if someone asks you a question like "how many words are in your response?", you can think about it and come to a conclusion, like saying "One". But if you don't have pre-thought, you're doing simple word prediction every time, and the only way to get that outcome is if something akin to key/value pairs passed into the LLM at some point gives it the idea to try that in one shot. Even if it has a chance to iterate, it'll probably never reach that response without forethought.
@Existidor.Serial137
@Existidor.Serial137 8 ай бұрын
Give it a couple more AI models, like world simulators, and a little bit of time... and then something similar to what we refer to as consciousness may emerge from all those interactions.
@defaultHandle1110
@defaultHandle1110 8 ай бұрын
They’re coming for you Neo.
@janchiskitchen2720
@janchiskitchen2720 8 ай бұрын
The old saying comes to mind: Think twice , say once. Perfectly applicable to AI where LLM checks its own answer before outputting it. Another excellent video.
@ajohnsonlllll
@ajohnsonlllll 6 ай бұрын
The saying is "measure twice, cut once." Why is it a surprise to anyone that allowing more buffer time increases the intelligence of the output?
@virtualalias
@virtualalias 8 ай бұрын
I like the idea of replacing a single 120b (for instance) with a cluster of intelligently chosen 7b fine-tuned models if for no other reason than the hardware limitations lift drastically. With a competently configured "swarm," you could run one or two 7b sized models in parallel, adversarially, or cooperatively, each one contributing to a singular task/workspace/etc. They could even be guided by a master/conductor AI tuned for orchestrating its swarm.
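One rough way such a "conductor" could be wired up; run_model() and the model names are hypothetical placeholders, and a real version would wrap locally loaded, fine-tuned 7B checkpoints:

```python
import concurrent.futures

# Hypothetical stand-in for running a small local model; in practice this would
# wrap a llama.cpp / transformers pipeline per fine-tuned 7B checkpoint.
def run_model(model_name: str, prompt: str) -> str:
    return f"{model_name} draft for: {prompt[:40]}"

SWARM = ["coder-7b", "reviewer-7b", "tester-7b"]  # assumed fine-tuned specialists

def conductor(task: str) -> str:
    # Fan the task out to the swarm in parallel (the hardware point: several 7Bs
    # fit where one 120B would not).
    with concurrent.futures.ThreadPoolExecutor() as pool:
        drafts = list(pool.map(lambda m: run_model(m, task), SWARM))
    # The conductor model merges/adjudicates the specialists' contributions.
    return run_model("conductor-7b", "Combine these into one answer:\n" + "\n".join(drafts))

if __name__ == "__main__":
    print(conductor("Implement and test a URL shortener."))
```

Whether the swarm runs in parallel, adversarially, or cooperatively is just a matter of what the conductor does with the drafts it collects.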
@kliersheed
@kliersheed 8 ай бұрын
ehem, skynet. :D but i agree
@blindmonkey5886
@blindmonkey5886 Ай бұрын
Sounds like all the people on planet earth guided by God. So maybe we're the AI-agents ... I wonder what the task is? To end up in a heavenly state forever? Sounds good enough. Let's run the program and see what happens, lol.
@BTFranklin
@BTFranklin 8 ай бұрын
I really appreciate your rational and well-considered insights on these topics, particularly your focus on follow-on implications. I follow several AI News creators, and your voice stands out in that specific respect.
@samhiatt
@samhiatt 8 ай бұрын
Matthew is really good, isn't he? I want to know how he's able to keep up with all the news while also producing videos so regularly.
@richardgordon
@richardgordon 8 ай бұрын
Your commentary "dumbing things down" for people like me was very helpful in understanding all this stuff. Good video!
@Chuck_Hooks
@Chuck_Hooks 8 ай бұрын
Exponentially self-improving agents. Love how incremental improvement over a period of years is so over.
@andrewferguson6901
@andrewferguson6901 8 ай бұрын
I'm expecting deep mind to at any point just pop off with an ai that plays the game of making an ai
@aoeu256
@aoeu256 8 ай бұрын
When did the information age end and the AI age begin haha. I still think, we need to figure out how to make self-replicating robots (that replicate themselves half-size each generation) by making them out of lego-blocks, and then have the lego-blocks be cast from a mold that the robot itself makes. Once hardware(robots) improves the capabilities of software can improve.
@wrOngplan3t
@wrOngplan3t 8 ай бұрын
@@aoeu256 Oh come on now, you know how that'll end. Admit it, you've watched Futurama :D
@efexzium
@efexzium 8 ай бұрын
Not sure if. I love that
@paulsaulpaul
@paulsaulpaul 8 ай бұрын
It may refine the quality of results, but it won't teach itself anything new or have any "ah hah!" moments like a human thinker. There will be an upper limit to any exponential growth due to eventual lack of entropy (there's a limit to how many ways a set of information can be organized). Spam in a can is a homogenous mixture of meat scraps left over from slaughtering pigs. It's the ground up form of the parts that humans don't want to see in a butcher's meat display. LLMs produce the spam from the pork chops of human creativity. These agents will produce a better looking can with better marketing speak on the label. Might have a nicer color and smell to it. But it's still spam that will never be displayed next to real cuts of meat. Despite how much the marketers want you to think it's as good as or superior to the real thing.
@luciengrondin5802
@luciengrondin5802 8 ай бұрын
The iterating part of the process seems more important to me than the "agentic" one. If we compare current LLMs to DeepMind's AlphaZero method, it's clear that LLMs currently only do the equivalent of AlphaZero's evaluation function. They don't do the equivalent of the Monte Carlo tree search. That's what reasoning needs: the ability to explore the tree of possibilities, with the NN being used to guide that exploration.
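A toy illustration of the gap being described: a best-first search layered over candidate continuations, where propose() and score() are hypothetical stand-ins for the LLM and an evaluation function guiding the exploration (roughly the roles of AlphaZero's policy and value networks):

```python
import heapq

# Hypothetical stand-ins: propose() would sample candidate next steps from an LLM,
# and score() would be a learned or heuristic value function over partial solutions.
def propose(state: str, k: int = 3):
    return [f"{state} -> step{i}" for i in range(k)]

def score(state: str) -> float:
    return float(len(state))  # placeholder: treat "more developed" plans as better

def best_first_search(start: str, depth: int = 3, beam: int = 2) -> str:
    frontier = [(-score(start), start)]
    best = start
    for _ in range(depth):
        next_frontier = []
        # Expand only the most promising states (the NN-guided part of the search).
        for _, state in heapq.nsmallest(beam, frontier):
            for cand in propose(state):
                heapq.heappush(next_frontier, (-score(cand), cand))
                if score(cand) > score(best):
                    best = cand
        frontier = next_frontier
    return best

if __name__ == "__main__":
    print(best_first_search("plan:"))
```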
@joelashworth7463
@joelashworth7463 8 ай бұрын
What gets interesting about agentic is: what if certain agents have access to different "experiences", meaning their context window starts with "hidden" priorities, objectives, and examples of what the final state should look like? Since context windows are limited right now, this is an exciting area. Of course, the other part of agentic vs. iterative is that since a model isn't really "thinking", it needs some form of stimulus that will disrupt the previous answer - so you either have to use self-reflection or an external critic. If the external critic uses a different model (fine-tune or LoRA) and is given a different objective, you should be able to "stimulate" the model into giving radically different end products.
@garybarrett4881
@garybarrett4881 8 ай бұрын
Agents? You know this is how the matrix begins, right?
@ranjancse26
@ranjancse26 7 ай бұрын
We live in a matrix for sure 😄
@skyless7304
@skyless7304 7 ай бұрын
😂
@friendlyword
@friendlyword 6 ай бұрын
Ha, nice. Well done. They need to make a nervous laughter emoji. I remember looking for one when I first read that China named its State surveillance AI, “SkyNet”
@ayanbandyopadhyay767
@ayanbandyopadhyay767 5 ай бұрын
Well well Mr And...rew...son
@gamesshuffler-v8n
@gamesshuffler-v8n 4 ай бұрын
A reference to the iconic movie The Matrix (1999)! Yes, I'm familiar with that famous line. In the movie, the character Morpheus explains to Neo that they are living in a simulated reality created by intelligent machines, and that agents are programs designed to eliminate any potential threats to this system.
@SuperMemoVideo
@SuperMemoVideo 8 ай бұрын
As I come from neuroscience, I insist this must be the right track. The brain also uses "agents", which are more likely to be called "concepts" or "concept maps". These are specialized portions of the network doing simple jobs such as recognizing a face, or recognizing the face of a specific person. Tiny cost per concept, huge power of the intellect when working in concert and improved dynamically.
@josesalvador7747
@josesalvador7747 4 ай бұрын
I would call a "concept" a "feature pattern". An "agent" is more of an active orchestrator that will identify a "context" (a bunch of occurring "feature patterns", or "state") and select a plan (also called a "policy", which is basically a sequence of actions) that allows it to reach another context/state while maximizing "rewards".
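A toy sketch of how those terms map to code (the policies, contexts, and reward numbers below are all made up for illustration):

```python
# "Context" = a set of detected feature patterns, "policy" = a named sequence of
# actions, and the agent picks the policy with the best expected reward for the
# current context. All values are placeholders.

POLICIES = {
    "escalate":  (["notify_human", "pause"],      {"outage": 10, "normal": -5}),
    "auto_fix":  (["restart_service", "verify"],  {"outage": 6,  "normal": 1}),
    "do_nothing": ([],                            {"outage": -10, "normal": 2}),
}

def identify_context(feature_patterns: set) -> str:
    return "outage" if "error_spike" in feature_patterns else "normal"

def select_policy(context: str) -> str:
    # Choose the policy with the highest expected reward in this context.
    return max(POLICIES, key=lambda name: POLICIES[name][1][context])

def act(feature_patterns: set):
    context = identify_context(feature_patterns)
    policy = select_policy(context)
    actions, _ = POLICIES[policy]
    return context, policy, actions

if __name__ == "__main__":
    print(act({"error_spike", "high_latency"}))  # -> ('outage', 'escalate', [...])
    print(act({"steady_traffic"}))               # -> ('normal', 'do_nothing', [])
```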
@DinoMuratovic-it9vl
@DinoMuratovic-it9vl Ай бұрын
thanks for explaining!
@AINEET
@AINEET 8 ай бұрын
You upload on the least expected random times of the day and I'm all for it
@matthew_berman
@matthew_berman 8 ай бұрын
LOL. Keeping you on your toes!
@holdthetruthhostage
@holdthetruthhostage 8 ай бұрын
Haha 😂
@existentialquest1509
@existentialquest1509 8 ай бұрын
i totally agree - was trying to make this case for years - but i guess technology has now evolved to the point where we can see this as a reality
@8691669
@8691669 8 ай бұрын
Matthew, I've watched many of your videos, and I want to thank you for sharing so much knowledge and news. This latest one was exceptionally good. At times, I've been hesitant to use agents because they seemed too complex, and didn't work on my laptop when I tried. However, this video has convinced me that I've been wasting time by not diving deeper into it. Thanks again, and remember, you now have a friend in Madrid whenever you're around.
@StefRush
@StefRush 8 ай бұрын
I'm glad we all seem to be on the same page but I think it would help to use a different word when thinking about the implementation of "Agents". What I think was a breakthrough for me was replacing the word "Agent" with "Frame of mind" or something along those lines when prompting an "Agent" for a task in a series of steps where the "Frame of mind" changes for each step until the task is complete. Not trying to say anything different than what has been said thus far but only help us humans see that this is how we think about a task. As humans we change our "Frame of mind" so fast we often don't realize we are doing it when working on a task. For a LLMs your "Frame of mind" is a new LLM prompt on the same or different LLM. Thanks Matthew Berman you get all the credit for getting into this LLM rabbit hole. I'm also working on a LLM project I hope to share soon. 😎🤯😅
@kliersheed
@kliersheed 8 ай бұрын
Agens = actor = a compartmentalized entity doing something. I think the word fits perfectly. It's like transistors are simulating our neurons and the agent is simulating the individual compartments in our brain. A "frame of mind" would be a fitting expression for the supervising AI keeping the agents in check and organizing them to solve the perceived problem. It's like the "me", as in consciousness, ruling the processes in the brain. A frame always has to contain something, and IMO it's hard to say what an agent contains, as it's already really specialized and works WITHIN a frame (rather than being a frame). Even if you speak of frames as relation systems, the agent is WITHIN one, not one itself. Just my thoughts on the terms ^^
@UthacalthingTymbrimi
@UthacalthingTymbrimi 3 ай бұрын
I really like the analogy, however I think that the term "frame of mind" tends to lend itself to a single-thread, serialized approach to solving a problem or completing a task - like the approach a single human would need to take. The very nature of agents lends itself to parallel execution of various aspects of the task at hand, either to complete more quickly, or to provide many variants of a solution that can be compared, selecting the best result. For example, "write me some code to do [x]". You could have a thousand agents write variations of the code, then have a bunch of agents to debug, plus an army of reviewing agents evaluating each candidate (in either a cooperative or adversarial fashion, or both). This approach would be orders of magnitude more powerful than serial, iterative execution of task steps. For me at least, this is more akin to the concept of a collaborative team of people, each with their own role to play in the overall objective, rather than a single entity changing its frame of mind to perform each aspect of a task, one step at a time until completion.
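A rough sketch of that parallel variant-plus-review idea; write_variant() and review() are hypothetical stand-ins for model calls (a real reviewer would run tests or score against a rubric):

```python
import concurrent.futures

# Hypothetical stand-ins: write_variant() drafts a candidate solution,
# review() returns a numeric quality score for it.
def write_variant(task: str, seed: int) -> str:
    return f"# candidate {seed}\n# code for: {task}"

def review(candidate: str) -> float:
    return float(len(candidate) % 7)  # placeholder score

def parallel_solve(task: str, n_variants: int = 8) -> str:
    # Generate many variants at once instead of iterating serially on one draft,
    # then let reviewers pick the best one.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        candidates = list(pool.map(lambda s: write_variant(task, s), range(n_variants)))
        scores = list(pool.map(review, candidates))
    best_idx = max(range(n_variants), key=lambda i: scores[i])
    return candidates[best_idx]

if __name__ == "__main__":
    print(parallel_solve("write me some code to do [x]"))
```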
@carlkim2577
@carlkim2577 8 ай бұрын
This is one of the best vids you've made. Good commentary along with the presentation!
@jonatasdp
@jonatasdp 8 ай бұрын
Very good Matthew! Thanks for sharing. I built my simple agent and I see it improving a lot after a few interactions.
@youriwatson
@youriwatson 8 ай бұрын
Great point about combining Groq's inference speed with agents!
@ayanbandyopadhyay767
@ayanbandyopadhyay767 5 ай бұрын
Agree fully
@JacquesvanWyk
@JacquesvanWyk 8 ай бұрын
I have been thinking about agents for months without knowing what I was thinking of, until I found videos like CrewAI and swarm-agent, and my mind is blown. I am all in for this and trying to learn as much as I can, because this is for sure the future. Thanks for all your uploads.
@EliyahuGreitzer
@EliyahuGreitzer 8 ай бұрын
Thanks!
@sma1015
@sma1015 6 ай бұрын
Thanks for sharing. As much as I love Andrew Ng, his voice always puts me to sleep. Its like a lullaby. Thanks for elaborating on these updates. It kept me engaged.
@JohnSmith762A11B
@JohnSmith762A11B 8 ай бұрын
Excellent video. Helped clear away a lot of fog and hype to reveal the amazing capabilities even relatively simple agentic workflows can provide.👍
@MicahBratt
@MicahBratt 2 ай бұрын
There’s a pattern found in nature that I think could be a great framework for building AI systems. I call it the helix framework because it's found in the body’s processes relating to the DNA. You have a static data structure with modules of rules / instructions / recipes. You have the building blocks or ingredients. And you have the builder, compiler, parser, creator etc the active agent that retrieves the right “instructions” from the dictionary and builds drawing from resources in the building blocks.
@mintakan003
@mintakan003 8 ай бұрын
Andrew Ng is actually one of the more conservative of the AI folks, so when he's enthusiastic about something, he has a pretty good basis for doing so. He's very practical. As for this video, good point on Groq; we need a revolution in inference hardware. Another point to consider is the criterion for deciding when something is "good" or "bad" when doing iterative refinement. I suspect the quality of agentic workflows will also depend on the quality of this specification, as is the case for all optimization algorithms.
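A minimal sketch of where that specification lives in an iterative-refinement loop; generate(), revise(), and evaluate() are hypothetical stand-ins, with evaluate() being the "good or bad" criterion that does the real work:

```python
# The refinement loop is only as good as its accept/reject criterion.
# evaluate() is the part you have to specify carefully; the other two
# functions are placeholders for LLM calls.

def generate(task: str) -> str:
    return f"draft answer for {task}"

def revise(task: str, draft: str, feedback: str) -> str:
    return draft + " (revised)"

def evaluate(task: str, draft: str) -> tuple[bool, str]:
    # The "criteria for when something is good or bad" lives here:
    # e.g. unit tests pass, rubric score above a threshold, length limits, etc.
    good = draft.endswith("(revised) (revised)")
    return good, "ok" if good else "needs another pass"

def refine(task: str, max_iters: int = 5) -> str:
    draft = generate(task)
    for _ in range(max_iters):
        good, feedback = evaluate(task, draft)
        if good:
            break
        draft = revise(task, draft, feedback)
    return draft

if __name__ == "__main__":
    print(refine("summarize this paper"))
```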
@notclagnew
@notclagnew 8 ай бұрын
Glad I saw this, your additional explanations were incredibly helpful and woven into the main talk in a non-intrusive way. Subscribed.
@ronald2327
@ronald2327 8 ай бұрын
All of your videos are very informative and I like that you keep the coding bugs in rather than skipping ahead, and you demonstrate solving those issues as you go. I’ve been experimenting with ollama, LM studio, and CrewAI, with some really cool results. I’ve come to realize I’m going to need a much more expensive PC. 😂
@agenticmark
@agenticmark 8 ай бұрын
Something you guys never talk about - the INSANE cost of building and running these agents. It limits developers just as much as compute limits AI companies. The reason agentic systems work is they remove the context problem. LLMs get off track and confused easily. But if you open multiple tabs and keep each copy of the LLM "focused" it gets better results" - so when you do the same with agents, each agent outperforms a single agent who has to juggle all the context. We get better results with GPT 3.5 using this method than you would get in a browser with GPT4. Basically, you are "narrowing" the expertise of the model. And you can select multiple models and have them responsible for different things. Think Mixtral but instead of a gating model, the agent code handles the gating.
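A rough sketch of "the agent code handles the gating": plain routing logic sends each subtask to a narrowly scoped model/prompt pair; call_llm() and the model names are hypothetical placeholders:

```python
# The router keeps each model's context narrow instead of one jack-of-all-trades
# conversation; the gate itself can be dumb and cheap.

def call_llm(model: str, system: str, prompt: str) -> str:
    return f"[{model}] ({system[:25]}...) -> {prompt[:40]}"

SPECIALISTS = {
    "sql":    ("cheap-api-model", "You only write and explain SQL."),
    "python": ("cheap-api-model", "You only write Python with tests."),
    "prose":  ("small-local-7b",  "You only write short, plain-English summaries."),
}

def route(subtask: str) -> str:
    # Rule-based gate; this could itself be a tiny classifier model.
    if "query" in subtask or "table" in subtask:
        return "sql"
    if "function" in subtask or "script" in subtask:
        return "python"
    return "prose"

def run(subtask: str) -> str:
    model, system = SPECIALISTS[route(subtask)]
    return call_llm(model, system, subtask)

if __name__ == "__main__":
    for t in ["write a query over the orders table",
              "write a function to dedupe a list",
              "summarize the results"]:
        print(run(t))
```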
@DaveEtchells
@DaveEtchells 8 ай бұрын
I’m really intrigued by your multi-tab workflow, it sounds super powerful, but I’m not sure how it works in practice. Do you have the different tabs working on different sub-tasks or performing different roles (kind of a manual agentic workflow, but with human oversight of each of the zero-shot workers), or are they working in parallel on the same task, or … ? IANAP, but I need to have ChatGPT (my current platform, or it could be Claude or whatever) do some fairly complex tasks like parsing web pages and PDFs to navigate a very large dataset and use reasoning to identify significantly-relevant data, download and assemble it into a knowledge database that I’ll then want to use as test input for another AI system. Ideally I’d use one of the no-code/low-code agent dev tools to automate the whole thing but as I said IANAP, and just multi-tabbing it could get me a long way there. It sounds like whatever you’re doing is exactly what I need to - and likely a boatload of others as well: I do wish someone would do a video on it. Meanwhile, would you be willing to share a brief description of an example use case and what you’d have the various tabs doing for it? (I hope @matthew_berman sees this and makes a vid on the topic: Your comment is possibly the most important I’ve ever encountered on YT, at least in terms of what it could do for my work and personal life.) Thanks for the note!
@japneetsingh5015
@japneetsingh5015 8 ай бұрын
You don't always need state-of-the-art models like GPT, Gemini, Claude, etc.; many open-source 7B models work just as well for most companies.
@DefaultFlame
@DefaultFlame 8 ай бұрын
@@japneetsingh5015 Yeah, llama, mistral, mixtral, the list goes on. If you want something even more lightweight than 7B, stablelm-zephyr is a 3B that is surprisingly capable. Orca-mini is good too and comes in 3B, 7B, 13B, and 70B versions so you can pick whichever you want based on your hardware.
@TomM-p3o
@TomM-p3o 8 ай бұрын
What you're saying is: attention is all you need 😁 I do agree that mixing goals will confuse models, as it would people. People, however, have already learned processes to compartmentalize tasks. We might have to teach agents to do that, apart from constructing them to minimize this confusion.
@DefaultFlame
@DefaultFlame 8 ай бұрын
@@TomM-p3o The whole point of multiple agents with different "jobs," personalities, or even different models powering them, is that we can cheat. The point of multiple agents is that we don't **need** to teach a single agent or model those learned processes, we can just connect several that each do each part, each agent taking on the role of different parts of a single functional brain.
@narindermahil6670
@narindermahil6670 8 ай бұрын
I appreciate the way you explained every step, very informative. Great video.
@NasrinHashemian
@NasrinHashemian 8 ай бұрын
Matthew, your videos are really informative. Many thanks to you for sharing such knowledge and update. This latest one was exceptionally good.
@RasoulGhaderi
@RasoulGhaderi 8 ай бұрын
I love this video. In the long run Advances in A.I surely can be debated for the good of AI Agents, though most will argue that only a few will benefit especially to their pockets, at the end, interesting to see what the future holds.
@YousefMilanian
@YousefMilanian 8 ай бұрын
I also agree that it will be interesting, take a look at the benefits of the computing age millions of people were made for life simply because they made the right decisions at the time thereby creating lifetime wealth.
@RasoulGhaderi
@RasoulGhaderi 8 ай бұрын
I wasn't born into lifetime wealth handed over, but I am definitely on my way to creating one, $715k in profits in one year is surely a start in the right path for me and my dream. Others had luck born in wealth, I have a brain that works.
@ShahramHesabani
@ShahramHesabani 8 ай бұрын
I can say for sure you had money laying around and was handed over to you from family to be able to achieve such.
@RasoulGhaderi
@RasoulGhaderi 8 ай бұрын
It may interest you to know that no such thing happened, I did a lot of research on how the rich get richer and this led me to meet, Linda Alice parisi . Having someone specialized in a particular field do a job does wonders you know. I gave her 100 grand at first
@TestMyHomeChannel
@TestMyHomeChannel 8 ай бұрын
I loved this video. Your selection was great and your comments were right to the point and very useful. I like that you test things yourself and provide links to the topics that are discussed previously.
@federico-bi2w
@federico-bi2w 8 ай бұрын
...ok, I can see it's right... having done a lot of "by hand" iterations... I mean, I am not using agents yet... but think about it: with GPT you ask something... you test... you adjust... you give it back... and the result is better... and in this process, if you ask questions on the same topic but from different angles, it becomes better... so an agent is basically doing this by itself! Great video! Thank you :D
@BJM1896
@BJM1896 6 ай бұрын
Firstly, thank you Matthew for all that you do. You are really putting out excellent content and helping us to be on the cutting edge or as close to it as possible regarding AI. I would like to hear more about getting agents to behave when using them collaboratively. Sometimes it is difficult to get them to do what you want them to do that is true. Recently I had one agent tell its supervisory agent to “tell the human that it’s not necessary to do that“. tell the human. Imagine that.
@GregoryBohus
@GregoryBohus 8 ай бұрын
Is it possible for say Gemini to iterate itself if you prompt it correctly in your first prompt? Or do you need to build an application to do such? Can you use the web interface to do such?
@weishenmejames
@weishenmejames 8 ай бұрын
Nice share with valuable commentary throughout, you've got yourself a new subscriber!
@2945antonio
@2945antonio 7 ай бұрын
A dumb question - since applying the agentic reasoning design patterns improves the answers produced by the AI (GPT-2, 3, 4, etc.), why not build these agents into the GPT itself so that the need for iterations is minimized? Is it a commercial reason, i.e. to pay for the upgrade?
@CharlesVanNoland
@CharlesVanNoland 8 ай бұрын
As long as we're relying on backpropagation to fit a network to pre-designated inputs/outputs, we're not going to have the sort of AI that will change the world overnight. The future of machine intelligence is definitely agentic, but we're not going to have robotic agents cleaning our house, cooking our food, fixing our house, constructing buildings, etc... unless we have an online learning algorithm that can run on portable hardware. Backpropagation, gradient descent, automatic differentiation, and the like, isn't how we're going to get there. We need a more brain-like algorithm. Throwing gobs and gobs of compute at backprop training progressively larger networks isn't how we're going to get where we're going. It's like everyone saw that backprop can do some cool stuff and then totally forgot about brains being the only example of what we're actually trying to achieve. They're totally ignoring that brains abstract and learn without any backpropagation. Backprop is the expensive brute force way to make a computer "learn". I feel like we're living in a Wright Brothers age right now where everyone believes that the internal combustion powered vehicle is the only way humans will ever move around the earth, except it's backpropagation that everyone has resigned to being the only way we'll ever make computers learn, when there's no living sentient creatures that even rely on backpropagation to exhibit vastly more complex behaviors than what we can manage with it. A honeybee only has one million neurons, and in spite of ChatGPT being, ostensibly, one trillion parameters, all it can do is generate text. We don't even know how to make a trillion parameter network that can behave with the complexity of an insect. That should be a huge big fat hint to anyone actually paying attention that backprop is going to end up looking very stupid by comparison to whatever does actually end up being used to control thinking machines - and the people who are fully invested in (and defending) backprop are most certainly going to be the last ones who figure out the last piece of the puzzle. When you have people like Yann LeCunn pursuing things like I-JEPA, and Geoffrey Hinton putting out whitepapers for algorithms like Forward-Forward, and Carmack saying things like "I wouldn't bother with an algorithm that can't do online learning at ~30hz", that should be a clue to everyone dreaming that backprop will get us where we're going that they're on the wrong track.
@sup3a
@sup3a 8 ай бұрын
Maybe. Though it's fun to hear what people said when Wright brothers and such tried to crack flying: this is not how birds fly, this is inefficient etc. We "brute forced" flying by just blasting shit ton of energy into the problem. Maybe we can do the same with intelligence
@bilderzucht
@bilderzucht 8 ай бұрын
Learning within a single individual brain may happen without any backpropagation. But couldn't the whole evolutionary process, running through billions of brains and arriving at a setup with different brain regions, be seen as some sort of backpropagation?
@vicipi4907
@vicipi4907 8 ай бұрын
I think the idea is to get it to an advanced enough stage where it is competent and reliable - so much so that it expedites research into something that looks more like the human brain's process as a replacement. We might even get it to a point where it self-improves; there is no reason to think it won't find a different approach that doesn't involve backpropagation. Either way, we can't deny it has great potential to make AI advancement significantly faster.
@intrestingness
@intrestingness 8 ай бұрын
Progress is rarely linear, and innovation follows the line of what's usable now, not the end game. That's why we had the "stupid" internal combustion engine for over 100 years, melting our planet 😢
@Mattje8
@Mattje8 8 ай бұрын
This assumes the goal of AI is to mimic a brain. It probably isn’t, mostly because it (probably) can’t, at least using existing compute approaches and current physics. If consciousness involves quantum effects as Penrose puts forward, current physics isn’t there yet. Or maybe it’s neither quantum nor algorithmic but involves interactions we can’t properly categorise today, which may or may not be deterministic. All of which is to say that I basically agree with you that all of the current approaches are building fantastic tools, but certainly nothing approaching sentience.
@AC-go1tp
@AC-go1tp 8 ай бұрын
Great video and valuable clarifications of AN's insights. It will be also great if you are able to make a video that capture all these concepts and notions using CrewAI and/or Autogen. Thank you Matt!
@AnOnymous-f8m
@AnOnymous-f8m 4 ай бұрын
The main discriminating factor between an agent program and an LLM is that an agent has a goal in mind; it has an action to take in the form of a response, or calling an entire function in some other program (e.g. make a payment). The LLM, on the other hand, is the "suggesting entity" for the agent: it provides the reasoning and understanding ability. Agent + LLM = JARVIS.
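A minimal sketch of that split, with llm_suggest_action() and make_payment() as hypothetical stubs rather than any real provider's function-calling API:

```python
# The LLM is the "suggesting entity"; the agent owns the goal and executes actions.

def make_payment(amount: float, payee: str) -> str:
    return f"paid {amount} to {payee}"  # in reality: call a payments API

def respond(text: str) -> str:
    return f"reply: {text}"

TOOLS = {"make_payment": make_payment, "respond": respond}

def llm_suggest_action(goal: str, history: list) -> dict:
    # Stand-in for the LLM's reasoning step; a real model would return a
    # structured tool call chosen from TOOLS based on the goal and history.
    if not history:
        return {"tool": "make_payment", "args": {"amount": 20.0, "payee": "electric co"}}
    return {"tool": "respond", "args": {"text": "goal complete"}}

def agent(goal: str, max_steps: int = 5) -> list:
    history = []
    for _ in range(max_steps):
        suggestion = llm_suggest_action(goal, history)
        result = TOOLS[suggestion["tool"]](**suggestion["args"])
        history.append((suggestion["tool"], result))
        if suggestion["tool"] == "respond":
            break
    return history

if __name__ == "__main__":
    print(agent("pay the electricity bill and confirm"))
```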
@saadatkhan9583
@saadatkhan9583 8 ай бұрын
Matthew, everything that Prof. Ng referenced, you have already covered and analyzed. Much credit to you.
@rafaelvesga860
@rafaelvesga860 8 ай бұрын
Your input is quite valuable. Thanks!
@Dale-p2l
@Dale-p2l 8 ай бұрын
Thank you so much for all your videos. You are gold. Please never stop!
@horacioariash
@horacioariash 8 ай бұрын
🎯 Key Takeaways for quick navigation: 00:00 *Agentic workflows can improve the performance of large language models.* 03:08 *Design patterns for using agents in AI are presented, such as reflection, tool use, and multi-agent collaboration.* 20:46 *Fast token generation is important for iterating effectively in agentic workflows.* Made with HARPA AI
@bobnothing4921
@bobnothing4921 8 ай бұрын
I am looking for something like Autogen/GPT Pilot 2, but that is designed for programming for iOS, such as Swift/Xcode. Is there something along those lines?
@RaitisPetrovs-nb9kz
@RaitisPetrovs-nb9kz 8 ай бұрын
I think the real breakthrough will come when we have user-friendly UI and agents based on computer vision, allowing them to be trained on existing software from the user's perspective. For example, I could train an AI agent on how to edit pictures or videos, or how to use a management application, etc. One approach could be to develop a dedicated OS for AI agents, but that would require all the apps to be rewritten to work with the AI agent as a priority. However, I'm not sure if that's feasible, as people may not adopt such a system rapidly. The fastest way forward might be to let the AI agent perform the exact task workflows that I would perform from the UI. This approach would enable the AI to work with existing software without requiring significant changes to the applications themselves.
@YEYSHONAN
@YEYSHONAN 7 ай бұрын
Thank you for translating Dr. Ng's speech into normal human language. I met Dr. Ng in Tokyo and asked him one of the dumbest questions at the press club in February. It was one of the hardest, most mind-boggling presentations I've encountered, even though I'm an ex-engineer. Liked and subscribed!
@freddy29228
@freddy29228 7 ай бұрын
Matthew, do you have a forum for people to learn about AI?
@matthew_berman
@matthew_berman 7 ай бұрын
My discord!
@johnh3ss
@johnh3ss 8 ай бұрын
What gets really interesting is that you could hook agentic workflows into an iterative distillation pipeline. 1) Create a bunch of tasks to accomplish 2) Use an agentic workflow to accomplish the tasks at a competence level way above what your model can normally do with one-shot inference 3) Feed that as training data to either fine tune a model, or if you have the compute, train a model from scratch 4) Repeat at step 2 with the new model. In theory you could build a training workflow that endlessly improves itself.
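A rough sketch of that pipeline, with agentic_solve(), quality(), and finetune() as hypothetical placeholders for the agent run, the data filter, and an actual training job:

```python
# Iterative distillation: use an agentic workflow to generate above-baseline
# answers, filter them, train on them, then repeat with the improved model.

def agentic_solve(model: str, task: str) -> str:
    return f"high-quality answer to '{task}' (via {model} + reflection/tools)"

def quality(task: str, answer: str) -> float:
    return 1.0  # placeholder filter; real pipelines score or test the outputs

def finetune(base_model: str, dataset: list) -> str:
    return base_model + "+distilled"  # placeholder for a real training run

def distillation_loop(model: str, tasks: list, generations: int = 2) -> str:
    for _ in range(generations):
        # 1-2) run the agentic workflow to get answers above the model's one-shot level
        dataset = [(t, agentic_solve(model, t)) for t in tasks]
        # 3) keep only answers that pass the quality bar, then train on them
        dataset = [(t, a) for t, a in dataset if quality(t, a) >= 0.8]
        model = finetune(model, dataset)
        # 4) repeat with the new model
    return model

if __name__ == "__main__":
    print(distillation_loop("base-model", ["task A", "task B"]))
```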
@autohmae
@autohmae 8 ай бұрын
Let's also remember this is what open source tools were already doing over a year ago, but often these got stuck in loops. I'm really interested in revisiting them.
@gotoHuman
@gotoHuman 8 ай бұрын
Or don't start the pipeline with a bunch of tasks, but rather let it be triggered from the outside when a task appears, e. g. in form of a customer support ticket
@DanTierney1
@DanTierney1 4 ай бұрын
Wow, Matthew! My mind is blown! Subscribed and Liked!🎉 I need to learn about AI Agents. You have a beginner video about this?
@timh8490
@timh8490 8 ай бұрын
Wow, I’ve been a big believer in agentic workflows since I saw your first video on chatdev and later on autogen. It’s really validating to hear someone of this stature thinking along the same lines
@boy_with_thorn
@boy_with_thorn 7 ай бұрын
I'm confused about how to implement these ideas. What exactly do I have to do to utilize this "agentic workflow" pattern when I'm chatting with GPT-4 to generate code, for example? How do we automate this iterative process? Do you think we could call this process an "automated prompt engineering" system?
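One answer, as a sketch: the pattern is just a loop in your own script around a couple of prompts. chat() below is a hypothetical stand-in for whatever chat-completion API you use, and the "critique" here is simply whether the generated code executes - so yes, in that sense it is automated prompt refinement:

```python
# Self-debugging code generation loop: generate, try to run, feed errors back.

def chat(prompt: str) -> str:
    # Placeholder LLM; a real call would return generated Python for the request.
    return "def add(a, b):\n    return a + b"

def generate_code(task: str, previous: str = "", error: str = "") -> str:
    prompt = f"Task: {task}\n"
    if previous:
        prompt += f"Previous attempt:\n{previous}\nIt failed with:\n{error}\nFix it.\n"
    return chat(prompt)

def run_ok(code: str) -> tuple[bool, str]:
    try:
        exec(compile(code, "<candidate>", "exec"), {})
        return True, ""
    except Exception as e:  # feed the error back instead of giving up
        return False, repr(e)

def agentic_codegen(task: str, max_iters: int = 4) -> str:
    code, error = "", ""
    for _ in range(max_iters):
        code = generate_code(task, code, error)
        ok, error = run_ok(code)
        if ok:
            break
    return code

if __name__ == "__main__":
    print(agentic_codegen("write an add(a, b) function"))
```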
@zaurenstoates7306
@zaurenstoates7306 8 ай бұрын
Decentralized, highly specialized agents running on lower parameter count models (7b-70b) working together to accomplish tasks is where I think opportunity lies. I was mining ETH back when it was POW with my gaming rig to earn some money on the side. I did the calculations once and the entire eth computation available was a couple hundred exaflops. With more and more devices being manufactured for AI calculation (phones, GPUs, etc) the available computing will only increase
@rakoczipiroska5632
@rakoczipiroska5632 8 ай бұрын
Thank you for your great job. If things go like this, maybe there won't be a requirement for a startup accelerator to include a professional programmer among the founders? Will it be enough if someone is a hobbyist programmer but a professional prompt engineer?
@anonymeforliberty4387
@anonymeforliberty4387 8 ай бұрын
i bet you are still gonna need a prompt engineer and programmer, but alone he will do the work of a team
@danshd.9316
@danshd.9316 8 ай бұрын
Thank you - just finished. It's great that you explained it for those who may not be as techie as Ng expected.
@Dom-I-NATE
@Dom-I-NATE 8 ай бұрын
You are probably my number one source for bleeding-edge info and explanations on AI and AI agents - keep it up and great job, Matthew! You were one of the fleeting influences in my learning AI, and basically learning Python for that matter. Now that I can use AI as a personal tutor for free, anyone can learn anything - way better than being in a classroom, because an AI tutor beats a human one.
@baatimama2494
@baatimama2494 4 ай бұрын
Zero-shot can be converted automatically to multi-agent: LLMs can themselves generate multiple agents and solve a problem. We have to automate this process of going from zero-shot to multi-agent with auto-scalable agentic workflows.
@cedricharris-v2r
@cedricharris-v2r 3 ай бұрын
Nice. Could you simplify your comment
@animalrave7167
@animalrave7167 8 ай бұрын
Love your breakdowns! Adding context and background info into the mix. Very useful.
@d.d.z.
@d.d.z. 8 ай бұрын
Thank you. Great analysis
@Sandheip
@Sandheip 3 ай бұрын
Absolutely inspiring! SmythOS is at the forefront of AI innovation, proving that the future is indeed agentic. Excited to see what's next! #SmythOS #FutureOfAI
@bradwuzhere
@bradwuzhere 3 ай бұрын
I've been doing this. I call it model hopping: I'll give each model a new task along with the output from the last task, e.g. outline, research, draft, etc.
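Model hopping as a small script instead of manual copy/paste; call_llm() and the stage model names are hypothetical placeholders:

```python
# Each stage gets a fresh, focused prompt containing only the previous stage's output.

def call_llm(model: str, prompt: str) -> str:
    return f"<{model} output for: {prompt[:50]}...>"

PIPELINE = [
    ("outliner-model", "Produce a bullet outline for: {input}"),
    ("research-model", "List facts and sources needed for this outline:\n{input}"),
    ("drafting-model", "Write a first draft using this outline and research:\n{input}"),
]

def model_hop(topic: str) -> str:
    result = topic
    for model, template in PIPELINE:
        result = call_llm(model, template.format(input=result))
    return result

if __name__ == "__main__":
    print(model_hop("the state of agentic AI workflows"))
```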
@rupertllavore1731
@rupertllavore1731 8 ай бұрын
@MatthewBerman, what do you recommend I pick to get more synergistic value as I prepare for the near future? I'm already using ChatGPT Plus and Perplexity Pro, but because of this video I might need to drop one so I can add AgentGPT. So what do you recommend: Perplexity Pro + AgentGPT, or ChatGPT Plus + AgentGPT? Your advice would truly be appreciated.
@jimbig3997
@jimbig3997 8 ай бұрын
What other agentic workflow software is out there? I only know of CrewAI and it's difficult to use in some ways.
@gotoHuman
@gotoHuman 8 ай бұрын
LangGraph, AutoGen, Flowise,.. there are lots popping up. We'll try to make gotoHuman integrate with most of them..
@jbavar32
@jbavar32 8 ай бұрын
I've been using AI for a couple of years now for a creative workflow. (I don't do code) and I've often said Ai is like having the most brilliant collaborator on the planet but it has a slight drinking problem. My question is how does one create an agent so that one LLM can pass its result to other LLM's? In other words how do you engage several LLMs each working on the same problem? It looks like you would need a special code or a custom API.
@TubelatorAI
@TubelatorAI 8 ай бұрын
0:00 1. Introduction 🌟 Overview of Dr. Andrew Ng's talk on the power of AI agents.
0:32 2. Dr. Andrew Ng's Background 🧠 Insight into Dr. Andrew Ng's impressive credentials and contributions to AI.
1:12 3. Sequoia's Influence 🚀 Exploring Sequoia's significant presence in the tech industry and its successful portfolio.
2:02 4. Non-Agentic vs. Agentic Workflow 🔄 Comparison between traditional non-agentic and innovative agentic workflows for AI agents.
3:01 5. Power of Agents 💪 Understanding the strength of AI agents in collaborative iterative tasks.
4:06 6. Improved Results with Agentic Workflows 📈 Highlighting the remarkable outcomes achieved through agentic AI workflows.
5:19 7. Zero Shot Performance 🎯 Comparison of zero shot performance in large language models.
5:46 8. Agentic Workflows 🤖 Exploring the impact of agentic workflows on model performance.
6:19 9. Power of Agents 💥 Unveiling the significant impact of agentic workflows and agents in AI applications.
6:48 10. Design Patterns Overview 🌟 Insight into the various design patterns observed in agents technology.
7:11 11. Reflection Tool 🔄 Explanation and significance of the reflection tool in optimizing language model outputs.
7:58 12. Tool Use Empowerment 🔧 Empowering language models with custom tools and functionalities.
8:27 13. Planning & Collaboration 🤝 Discussing planning and multi-agent collaboration in AI applications.
9:55 14. The Power of Self-Reflection 🤔 Exploring the concept of self-reflection and the feedback loop in AI agents.
10:32 15. Automating Coding with Agents 🤖 Using agents to automate coding processes and enhance performance.
11:41 16. Evolution to Multi-Agent Systems 🔄 Transition from a single code agent to multi-agent systems for improved workflows.
12:22 17. Utilizing LM-Based Systems 🛠 Leveraging LM-based systems and tools for various tasks and productivity.
14:12 18. Exciting Potential of Planning Algorithms 📈 Exploring the impact and capabilities of planning algorithms in AI agents.
15:04 19. AI Agents in Action 🤖 Exploring the capabilities of AI agents in decision-making.
15:28 20. Reliability and Iteration 🔄 Discussing the reliability and iterative nature of AI agents.
16:25 21. Personal AI Assistants 🧑‍💼 Utilizing research agents for personal work tasks.
16:45 22. Multi-Agent Collaboration 🤝 Exploring the benefits of agents collaborating in tasks.
17:55 23. Optimizing Agent Performance 🚀 Enhancing performance through multiple specialized agents.
18:21 24. Design Patterns for AI 🎨 Summarizing key design patterns for effective AI usage.
18:43 25. Agentic Workflows Impact 🌟 Impact of agentic workflows on AI advancements.
20:25 26. The Power of Agents - Leveraging high inference speed for agent workflows.
21:06 27. Importance of Fast Token Generation - Discussing the significance of quick token generation in agentic workflows.
21:31 28. Advancements in Agent Architecture - Exploring agentic reasoning and architectural enhancements.
21:56 29. Journey to AGI - The path to Artificial General Intelligence through agent workflows.
22:18 30. Future Possibilities - Implications of using agentic workflows with current models.
22:59 31. Enhancing Model Performance - Improving model output through reflection and iteration with agents.
23:38 32. Conclusion & Call to Action - Excitement for agents, inference speed, and encouraging engagement.
Generated with Tubelator AI Chrome Extension!
@hansenmarc
@hansenmarc 8 ай бұрын
My favorite turnaround of all time. Thanks for sharing your versions.
@kritikusi-666
@kritikusi-666 8 ай бұрын
why don't you just link to the original video?
@EstrogenSingularity
@EstrogenSingularity 8 ай бұрын
Can someone drop a link for the Andrew Ng video
@matthew_berman
@matthew_berman 8 ай бұрын
I'll drop it in the description, sorry about that.
@TheStandard_io
@TheStandard_io 8 ай бұрын
Yeah, Sequoia Capital also misled everyone by not doing actual due diligence on FTX. When everyone heard that they invested, no one else did Due Diligence because they assumed Sequoia did. And they did not go to court or get any punishment
@GraveUypo
@GraveUypo 6 ай бұрын
That's how I've always used it. From the first time I used ChatGPT, my prompt included a "main agent" and a second agent to analyze the solutions of the first one and propose viable alternatives or a "different perspective". Nowadays I work with three agents, and I even give them different personalities to get even more contrasting perspectives.
@justinnkim
@justinnkim 3 ай бұрын
Can you point me in a good direction, so I can learn how to do this better? It seems that prompt creation from my point of view is nothing more than trial and error.
@FullEvent5678
@FullEvent5678 5 ай бұрын
as the non-technical cofounder of our startup, these videos with your explanations are helping me a lot. Thank you!
@greatworksalliance6042
@greatworksalliance6042 8 ай бұрын
I'm considering delving into this space and am curious what your preference is, @Matthew Berman, between Autogen, CrewAI, and whatever else is most comparable in the current market. What are your current rankings of them, and the optimal current use cases? Might make for a good upcoming video?
@nuclebros8001
@nuclebros8001 4 ай бұрын
I’m finding this out now. I’ve had it build me an entire business model step by step and how to approach it. Just keep asking it. Dig dig dig. This makes months of research happen in minutes.
@asafzilberberg6648
@asafzilberberg6648 3 ай бұрын
Great. Thank you for sharing this.
@NOYFB982
@NOYFB982 8 ай бұрын
With a limited context window, this hits an asymptotic wall very quickly. Keep in mind, I'm not saying the approach is not a big improvement; it is. However, my extensive experience is that it is not able to go nearly far enough. LLMs are still not fully capable of high-performing work; they can still only do the basics (or high-level information recall). Perhaps with a larger context window, this would actually be useful.
@StuartJ
@StuartJ 8 ай бұрын
Maybe this is what Grok 1.5 is doing behind the scenes to get a better score relative to GPT-4.
@dhruvbaliyan6470
@dhruvbaliyan6470 8 ай бұрын
Funny realizing I thought of this over a month ago, and was thinking of creating a virtual environment where multiple agents work together, each specially fine-tuned for its use case. So my brain is as intelligent as this person's.
@chanceobondo3843
@chanceobondo3843 Ай бұрын
Your thoughts on Groq are exactly what I was thinking. LPUs make using agents work like magic - very, very fast to work on tasks and respond.
@biskero
@biskero 8 ай бұрын
In a multi-agent and multi-LLM scenario, it's key to understand which LLM to assign to each agent. I found that there is not enough information about each LLM to make that decision. Maybe the answer is to train each LLM on a specific topic.
@stepanfilonov
@stepanfilonov 8 ай бұрын
It's possible without training: a simple group of base GPT-3.5 instances already outperforms a single GPT-4. It's more about orchestration.
@biskero
@biskero 8 ай бұрын
@@stepanfilonov interesting, so it's a matter of agents supporting it? Still LLM should have more information about their specific training.
@MrSlwek
@MrSlwek 7 ай бұрын
Are there any resources or videos that show step-by-step how to create collaborative agents?
@AshishDeshpande13
@AshishDeshpande13 2 ай бұрын
What exactly is meant by the inference speed here?(check out 20:05). I am a little confused. Is it the time required for the LLM to output the data from the moment we hit enter after writing the prompt or is it the rate at which it can output words after it's used the time for thinking or is it the speed at which it can understand the text prompt we have written? Someone please help me understand it! I would really appreciate it, thanks in advance!
@GetzAI
@GetzAI 8 ай бұрын
What is the inference cost? That is the big thing.
@therealsergio
@therealsergio 2 ай бұрын
The power of agents is that more focused smaller prompts (for each agent) perform better than a single aggregate monolith prompt tasked with the entire reasoning workflow. Divide and conquer AGI into agents. Create a society of mind.
@jakeparker918
@jakeparker918 8 ай бұрын
Awesome video. Yeah, this is why I voted for speed in the poll you did; this is what I was talking about.
@bjarnehouengaard285
@bjarnehouengaard285 3 ай бұрын
Hi Matthew. Nice video! Thanks. I have a question: I work in a relatively big organization, and we are facing AI development on an unknown scale. Is it worth the effort to work toward LLM agents now, rather than wait for GPT-5? Or do the agents just get better with GPT-5?
@chrisconn5649
@chrisconn5649 8 ай бұрын
You see improved results from agents, I see a 300% increase in token usage. Where is the pay off?
@ivanocj
@ivanocj 8 ай бұрын
run it locally...
@hindugoat2302
@hindugoat2302 8 ай бұрын
we are talking about exponentially self improving asians
@chrisconn5649
@chrisconn5649 8 ай бұрын
@@ivanocj I'm running locally. Performance woes: 60 t/s on simple queries.
@darianmays5933
@darianmays5933 8 ай бұрын
Can you explain what tokens are about ?
@hindugoat2302
@hindugoat2302 7 ай бұрын
@@darianmays5933 if you have to ask, you will never know... we are talking about exponentially self improving asians
@Lukas-ye4wz
@Lukas-ye4wz 8 ай бұрын
Did you know that this is actually how our mind/brain works as well? We have different parts (physical and psychological) that fulfill different roles. That is why we can experience inner conflict. One part of us wants this. Another part wants this. IFS teaches about this.
@MelvinSidd-ub4rb
@MelvinSidd-ub4rb 8 ай бұрын
Isn't an agentic workflow just what LangChain is doing? Just making one LLM output to another LLM and getting a different output?
@LouSpironello
@LouSpironello 6 ай бұрын
Great info and great insights!! My daily go-to. Thank you.
@humanoptimized
@humanoptimized 8 ай бұрын
Any recommendations on agentic workflow models online that are worth working with? Got any tutorials?
@markgonzales
@markgonzales 4 ай бұрын
In the coding benchmark graph, I noticed the gap between zero-shot and with agents decreased significantly between GPT-3.5 and 4. It seems like this trend would continue as new versions release, until the gap is eliminated altogether. Aren't agents going to become obsolete as LLMs absorb this extra step?
@229Mike
@229Mike 8 ай бұрын
Are there ways to add/use/find custom GPTs currently that may include (2) tools to access? I see 3.5 was tested within this video - which I am assuming means someone pastes in a template of these 4 steps (however, I don't think tool agents are a thing yet)... Someone created a GPT, but I cannot tell what that fellow did in theirs specifically. I think this is about using different LLMs via Chat, Grok, Mixtral via agents later on?
@LaurenceSinclair
@LaurenceSinclair 6 ай бұрын
After watching this. I gave 4o about 500 words of a story and asked it to do a rewrite. 1. Understand the task. 2. What research is needed? 3. Analyze the story for genre, language, creative temperature, sentence structure, writing style, and authors in similar genres. 4. create a brain map for the story, including adjacent outside-the-box ideas. 5. use the brain map to create a rubric of ideas and required tasks, including review, revision, and line editing. 6. Use the rubric to generate Agents who will use their expertise to carry out each task. Agents will refer to each other for the expertise they are missing. 7. Write a first draft. 8. submit the first draft to the Review Agent. Return it to the other agents for revision. 9. When revising use new, original, unique, fresh language. Change the order of events. Write new dialog based on the character's psychology (background+physical attractiveness+current need) 10. Write final draft.
@CA-sz6vw
@CA-sz6vw 6 ай бұрын
How was it?
@LaurenceSinclair
@LaurenceSinclair 6 ай бұрын
@@CA-sz6vw I like the results. I also applied it to a GPT based on "Whole Brain" theory. It breaks down the task and assigns it to 8 specialized agents, and they work together to give you an answer. For fiction writing, it improved a lot. It got slightly better when I told it to do another round of research after tasks were assigned. It's worth playing with. Even if you just start your prompt with "Using an agentic workflow... do this."
@MichaelScharf
@MichaelScharf 8 ай бұрын
What is the link to the reviewed video?
@matthew_berman
@matthew_berman 8 ай бұрын
I put it in the description.
@Charles-Darwin
@Charles-Darwin 8 ай бұрын
Andrew Ng segment: kzbin.info/www/bejne/qZLPaGt3eNl6isUsi=MjdeVJ8u_h_gMo4G Andrej Karpathy segment: kzbin.info/www/bejne/mWTFXn13iNSDn5Isi=VPoWzzOk0jFz19uc It's better to view the original as they're more based
@XrealtorAIDanny-qt3mv
@XrealtorAIDanny-qt3mv 8 ай бұрын
I am not a programmer. So, when you say "have multiple agents talk amongst themselves", would I ask, say, ChatGPT to start the conversation and tell it to initiate that request within that chat? Or are you manually copying results and putting them into the other agent, copying back and forth, which seems dumb? Or, again, can the two agents duel it out inside, in my example, ChatGPT? Great agentic video and review. Thanks.
@u2b83
@u2b83 7 ай бұрын
I've long suspected that iteration is the key to spectacular results; it's like an ODE solver iterating on a differential equation until it stumbles into a basin of attraction. You could probably do "agents" with just one GPT and loop through different roles. Then again, maybe multiple agents are a crutch for small context windows lol. However, keep in mind that GPT-4 already gives you an iterative solution by running the model as many times as there are tokens.
@vivekparmar7576
@vivekparmar7576 8 ай бұрын
Could you (or someone) please elaborate on the comment at around 23:18 about Grok running Mixtral. I thought both were LLMs. How does Grok ‘run’ Mixtral?
@kormannn1
@kormannn1 8 ай бұрын
Groq, not Grok - they are two different things. Groq builds LPU inference hardware that can serve open models like Mixtral; Grok is xAI's LLM.
@nukadog1969
@nukadog1969 8 ай бұрын
You mention tool use and computer vision; I'm sure you have already seen Intel's Model Zoo and similar repositories for tools? With a coding LLM and tool libraries, you can essentially turn out new tool AIs and validate them quickly.
@cleo1488
@cleo1488 8 ай бұрын
So gpt-5 is LLM with Q* and Agents?
@Edoras5916
@Edoras5916 8 ай бұрын
Can Autogen or CrewAI or any other back-end AI use FlutterFlow or JupyterLab as a means of looking at the final product of the code we want to write?
@RichardHollway
@RichardHollway 8 ай бұрын
Great videos, thank you! I have a question about this agentic framework that perhaps you can answer... it seems like the iteration process inherent in the likes of Autogen & CrewAI will be built into the next LLM models (ChatGPT-5, Claude 4, etc.) - does that make Autogen redundant at that point? Or am I missing something? Thanks.
@homewardboundphotos
@homewardboundphotos 8 ай бұрын
So agentic systems will use gpt 4 to create synthetic training data for gpt 5