AI Pioneer Shows The Power of AI AGENTS - "The Future Is Agentic"

637,346 views

Matthew Berman

1 day ago

Comments: 685
@e-vd 10 months ago
I really like how you feature your sources in your videos. This "open source" journalism has real merit, and it separates authentic journalism from fake news. Keep it up! Thanks for sharing all this interesting info on AI and agents.
@philipduttonlescorlett 3 months ago
I completely agree with this sentiment. In a world dominated by populism, especially in politics and mainstream media, it's refreshing to see scientifically grounded, evidence-based content like this on YouTube. We need much more of this kind of journalism that prioritizes facts and critical thinking.
@stray2748 9 months ago
LLM + "self-dialogue" via reflection = "agent". Multiple agents are brought together, a user asks them to solve a problem, and the agents all start collaborating with one another to generate a solution. So awesome!
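A minimal sketch of that reflect-and-revise loop, with a hypothetical `call_llm()` helper standing in for whatever chat-completion API you actually use:

```python
# Minimal reflection ("self-dialogue") agent loop.
# call_llm() is a hypothetical stand-in for any chat-completion API.

def call_llm(messages: list[dict]) -> str:
    raise NotImplementedError("wire this to your LLM provider")

def reflect_and_revise(task: str, max_rounds: int = 3) -> str:
    draft = call_llm([{"role": "user", "content": task}])
    for _ in range(max_rounds):
        # The model critiques its own previous answer...
        critique = call_llm([
            {"role": "system", "content": "You are a strict reviewer."},
            {"role": "user", "content": f"Task: {task}\n\nDraft:\n{draft}\n\n"
                                        "List concrete problems, or reply DONE."},
        ])
        if critique.strip() == "DONE":
            break
        # ...then revises the draft using that critique.
        draft = call_llm([
            {"role": "user", "content": f"Task: {task}\n\nDraft:\n{draft}\n\n"
                                        f"Critique:\n{critique}\n\n"
                                        "Rewrite the draft, fixing every issue."},
        ])
    return draft
```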
@ihbrzmkqushzavojtr72mw5pqf6 9 months ago
Is self-dialogue the same as Q*?
@stray2748 9 months ago
@ihbrzmkqushzavojtr72mw5pqf6 I think it's the linchpin they discovered to be a catalyst for AGI - albeit with self-dialogue + multimodality being trained from the ground up in Q* (something ChatGPT did not have in its training). Transformers were built on mimicking the human neuron (the Rosenblatt perceptron); okay, now following human nature, let's train it from the ground up with multimodal data and self-dialogue (like humans possess).
@Korodarn 9 months ago
@ihbrzmkqushzavojtr72mw5pqf6 Not exactly. Q* is pre-thought, before inference is complete. The difference is that with planning, if someone asks you a question like "how many words are in your response?", you can think about it and come to a conclusion, like answering "One". But if you don't have pre-thought, you're doing simple word prediction every time, and the only way to get that outcome is if something akin to key/value pairs passed into the LLM at some point gives it the idea to try that in one shot. Even if it has a chance to iterate, it'll probably never reach that response without forethought.
@Existidor.Serial137 9 months ago
Give it a couple more AI models, like world simulators, and a little bit of time... and then something similar to what we refer to as consciousness may emerge from all those interactions.
@defaultHandle1110 9 months ago
They’re coming for you Neo.
@MarkLewis00 1 month ago
The future is agentic indeed! I would love to integrate Pinecone with Composio and AutoGen.
@janchiskitchen2720 10 months ago
The old saying comes to mind: think twice, say once. Perfectly applicable to AI, where the LLM checks its own answer before outputting it. Another excellent video.
@ajohnsonlllll 7 months ago
The saying is "measure twice, cut once." Why is it a surprise to anyone that allowing more time to think increases the intelligence of the output?
@virtualalias 10 months ago
I like the idea of replacing a single 120B model (for instance) with a cluster of intelligently chosen, fine-tuned 7B models, if for no other reason than that the hardware limitations lift drastically. With a competently configured "swarm," you could run one or two 7B-sized models in parallel, adversarially, or cooperatively, each one contributing to a singular task/workspace/etc. They could even be guided by a master/conductor AI tuned for orchestrating its swarm.
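A rough sketch of that conductor-and-swarm idea, assuming a hypothetical `call_model()` wrapper around a local inference server; the specialist model names are illustrative, not prescriptions:

```python
# Sketch of a "conductor" agent dispatching sub-tasks to a swarm of small,
# specialized 7B models. Model names and call_model() are hypothetical.

SWARM = {
    "code":    "codellama-7b-instruct",
    "math":    "mistral-7b-math",
    "writing": "mistral-7b-instruct",
}

def call_model(model: str, prompt: str) -> str:
    raise NotImplementedError("wire this to your local inference server")

def conductor(task: str) -> str:
    # 1. A planner model breaks the task into labeled sub-tasks.
    plan = call_model(SWARM["writing"],
                      "Split this task into sub-tasks, one per line, "
                      f"prefixed with code:/math:/writing: -- {task}")
    results = []
    # 2. Each sub-task is routed to the matching specialist.
    for line in plan.splitlines():
        if ":" not in line:
            continue
        label, subtask = line.split(":", 1)
        model = SWARM.get(label.strip().lower(), SWARM["writing"])
        results.append(call_model(model, subtask.strip()))
    # 3. The conductor merges the partial results into one answer.
    return call_model(SWARM["writing"],
                      f"Combine these partial results into a single answer for the task '{task}':\n"
                      + "\n---\n".join(results))
```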
@kliersheed 9 months ago
Ahem, Skynet. :D But I agree.
@blindmonkey5886 2 months ago
Sounds like all the people on planet earth guided by God. So maybe we're the AI-agents ... I wonder what the task is? To end up in a heavenly state forever? Sounds good enough. Let's run the program and see what happens, lol.
@BTFranklin 10 months ago
I really appreciate your rational and well-considered insights on these topics, particularly your focus on follow-on implications. I follow several AI News creators, and your voice stands out in that specific respect.
@samhiatt 10 months ago
Matthew is really good, isn't he? I want to know how he's able to keep up with all the news while also producing videos so regularly.
@Chuck_Hooks 10 months ago
Exponentially self-improving agents. Love how incremental improvement over a period of years is so over.
@andrewferguson6901 10 months ago
I'm expecting DeepMind to just pop off at any point with an AI that plays the game of making an AI.
@aoeu256 10 months ago
When did the information age end and the AI age begin, haha. I still think we need to figure out how to make self-replicating robots (that replicate themselves half-size each generation) by making them out of Lego blocks, and then have the Lego blocks be cast from a mold that the robot itself makes. Once hardware (robots) improves, the capabilities of software can improve.
@wrOngplan3t 10 months ago
@aoeu256 Oh come on now, you know how that'll end. Admit it, you've watched Futurama :D
@efexzium 10 months ago
Not sure if... I love that.
@paulsaulpaul 10 months ago
It may refine the quality of results, but it won't teach itself anything new or have any "ah hah!" moments like a human thinker. There will be an upper limit to any exponential growth due to eventual lack of entropy (there's a limit to how many ways a set of information can be organized). Spam in a can is a homogenous mixture of meat scraps left over from slaughtering pigs. It's the ground up form of the parts that humans don't want to see in a butcher's meat display. LLMs produce the spam from the pork chops of human creativity. These agents will produce a better looking can with better marketing speak on the label. Might have a nicer color and smell to it. But it's still spam that will never be displayed next to real cuts of meat. Despite how much the marketers want you to think it's as good as or superior to the real thing.
@richardgordon 9 months ago
Your commentary, "dumbing things down" for people like me, was very helpful for understanding all this stuff. Good video!
@existentialquest1509 9 months ago
I totally agree - I was trying to make this case for years - but I guess technology has now evolved to the point where we can see this as a reality.
@luciengrondin5802 10 months ago
The iterating part of the process seems more important to me than the "agentic" one. If we compare current LLMs to DeepMind's AlphaZero method, it's clear that LLMs currently only do the equivalent of AlphaZero's evaluation function. They don't do the equivalent of the Monte-Carlo tree search. That's what reasoning needs: the ability to explore the tree of possibilities, with the NN being used to guide that exploration.
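A simplified best-first search in that spirit - the LLM plays the evaluation-function role while the outer loop does the exploration. `propose()` and `score()` are hypothetical LLM wrappers, and this is a sketch of the idea, not full MCTS:

```python
# Best-first search over partial solutions, with an LLM as the evaluator.
import heapq

def propose(state: str, k: int = 3) -> list[str]:
    """Ask the LLM for k candidate next steps extending `state` (hypothetical)."""
    raise NotImplementedError

def score(state: str) -> float:
    """Ask the LLM to rate how promising `state` is, 0.0-1.0 (hypothetical)."""
    raise NotImplementedError

def guided_search(problem: str, max_expansions: int = 50) -> str:
    frontier = [(-score(problem), problem)]          # max-heap via negated scores
    best_state, best_score = problem, score(problem)
    for _ in range(max_expansions):
        if not frontier:
            break
        neg_s, state = heapq.heappop(frontier)       # expand most promising node
        if -neg_s > best_score:
            best_state, best_score = state, -neg_s
        for child in propose(state):
            heapq.heappush(frontier, (-score(child), child))
    return best_state
```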
@joelashworth7463 10 months ago
What gets interesting about agentic is: what if certain agents have access to different 'experiences' - meaning their context window starts with 'hidden' priorities, objectives, and examples of what the final state should look like? Since context windows are limited right now, this is an exciting area. Of course, the other part of agentic vs. iterative is that since a model isn't really 'thinking', it needs some form of stimulus that will disrupt the previous answer - so you either have to use self-reflection or an external critic. If the external critic uses a different model (fine-tune or LoRA) and is given a different objective, you should be able to 'stimulate' the model into giving radically different end products.
@SuperMemoVideo 9 months ago
As I come from neuroscience, I insist it must be the right track. The brain also uses "agents", which are more likely to be called "concepts" or "concept maps". These are specialized portions of the network doing simple jobs such as recognizing a face, or recognizing the face of a specific person. Tiny cost per concept, huge power of the intellect when working in concert and improved dynamically.
@josesalvador7747 5 months ago
I would call a "concept" a "feature pattern". An "agent" is more of an active orchestrator that will identify a "context" (a bunch of occurring "feature patterns", or "state") and select a plan (also called a "policy", which is basically a sequence of actions) that will allow it to reach another context/state while maximizing "rewards".
@DinoMuratovic-it9vl 2 months ago
Thanks for explaining!
@8691669 10 months ago
Matthew, I've watched many of your videos, and I want to thank you for sharing so much knowledge and news. This latest one was exceptionally good. At times, I've been hesitant to use agents because they seemed too complex, and didn't work on my laptop when I tried. However, this video has convinced me that I've been wasting time by not diving deeper into it. Thanks again, and remember, you now have a friend in Madrid whenever you're around.
@AINEET 10 months ago
You upload at the least expected, random times of the day and I'm all for it.
@matthew_berman 10 months ago
LOL. Keeping you on your toes!
@holdthetruthhostage 9 months ago
Haha 😂
@mintakan003 10 months ago
Andrew Ng is actually one of the more conservative of the AI folks, so when he's enthusiastic about something, he has a pretty good basis for it. He's very practical. As for this video, good point on Groq; we need a revolution in inference hardware. Another point to consider is the criteria for specifying when something is "good" or "bad" when doing iterative refinement. I suspect the quality of agentic workflows will also depend on the quality of this specification, as in the case of all optimization algorithms.
@garybarrett4881 9 months ago
Agents? You know this is how the matrix begins, right?
@ranjancse26 8 months ago
We live in a matrix for sure 😄
@skyless7304 8 months ago
😂
@friendlyword 8 months ago
Ha, nice. Well done. They need to make a nervous laughter emoji. I remember looking for one when I first read that China named its State surveillance AI, “SkyNet”
@ayanbandyopadhyay767 6 months ago
Well well Mr And...rew...son
@gamesshuffler-v8n 5 months ago
A reference to the iconic movie The Matrix (1999)! Yes, I'm familiar with that famous line. In the movie, the character Morpheus explains to Neo that they are living in a simulated reality created by intelligent machines, and that agents are programs designed to eliminate any potential threats to this system.
@EliyahuGreitzer 9 months ago
Thanks!
@JacquesvanWyk 9 months ago
I have been thinking about agents for months without knowing what I was thinking of, until I found videos on things like CrewAI and swarm agents, and my mind is blown. I am all in for this and trying to learn as much as I can, because this is for sure the future. Thanks for all your uploads.
@carlkim2577 10 months ago
This is one of the best vids you've made. Good commentary along with the presentation!
@StefRush 10 months ago
I'm glad we all seem to be on the same page, but I think it would help to use a different word when thinking about the implementation of "agents". What was a breakthrough for me was replacing the word "agent" with "frame of mind", or something along those lines, when prompting an "agent" for a task in a series of steps, where the "frame of mind" changes for each step until the task is complete. I'm not trying to say anything different from what has been said thus far, only to help us humans see that this is how we think about a task. As humans we change our "frame of mind" so fast we often don't realize we are doing it while working on a task. For an LLM, your "frame of mind" is a new LLM prompt on the same or a different LLM. Thanks Matthew Berman, you get all the credit for getting me into this LLM rabbit hole. I'm also working on an LLM project I hope to share soon. 😎🤯😅
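One way to read that "frame of mind" idea in code: the same model, re-prompted with a different system prompt at each step. A minimal sketch with a hypothetical `call_llm()` wrapper and illustrative frame names:

```python
# "Frame of mind" switching: one model, a new system prompt per step.
FRAMES = [
    ("planner",  "You break tasks into a short ordered list of steps."),
    ("worker",   "You carry out exactly one step, thoroughly."),
    ("reviewer", "You check the work against the original task and point out gaps."),
    ("editor",   "You produce the final, polished answer."),
]

def call_llm(system: str, user: str) -> str:
    raise NotImplementedError("hypothetical chat-completion wrapper")

def run_with_frames(task: str) -> str:
    context = task
    for name, system_prompt in FRAMES:
        # Each pass hands the accumulated context to a new "frame of mind".
        context = call_llm(system_prompt,
                           f"Original task: {task}\n\nWork so far:\n{context}")
    return context
```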
@kliersheed 9 months ago
Agens = actor = a compartmentalized entity doing something. I think the word fits perfectly. It's like transistors are simulating our neurons and the agent is simulating the individual compartments in our brain. A "frame of mind" would be a fitting expression for the supervising AI keeping the agents in check and organizing them to solve the perceived problem. It's like the "me", as in consciousness, ruling the processes in the brain. A frame always has to contain something, and IMO it's hard to say what an agent contains, as it's already really specialized and works WITHIN a frame (rather than being one). Even if you speak of frames as relation systems, the agent is WITHIN one, not one itself. Just my thoughts on the terms ^^
@UthacalthingTymbrimi 4 months ago
I really like the analogy, however I think that the term "frame of mind" tends to lend itself to a single-thread, serialized approach to solving a problem or completing a task - like the approach a single human would need to take. The very nature of agents lends itself to parallel execution of various aspects of the task at hand, either to complete more quickly, or to provide many variants of a solution that can be compared, selecting the best result. For example, "write me some code to do [x]". You could have a thousand agents write variations of the code, then have a bunch of agents to debug, plus an army of reviewing agents evaluating each candidate (in either a cooperative or adversarial fashion, or both). This approach would be orders of magnitude more powerful than serial, iterative execution of task steps. For me at least, this is more akin to the concept of a collaborative team of people, each with their own role to play in the overall objective, rather than a single entity changing its frame of mind to perform each aspect of a task, one step at a time until completion.
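A sketch of that parallel variant: many writer agents draft candidates concurrently and reviewer agents pick the best one. `generate_code()` and `review_score()` are hypothetical LLM wrappers:

```python
# Parallel candidate generation plus review, instead of serial iteration.
from concurrent.futures import ThreadPoolExecutor

def generate_code(task: str, seed: int) -> str:
    raise NotImplementedError   # one "writer" agent, varied by seed/temperature

def review_score(task: str, candidate: str) -> float:
    raise NotImplementedError   # one "reviewer" agent, returns 0.0-1.0

def parallel_solve(task: str, n_candidates: int = 8) -> str:
    with ThreadPoolExecutor(max_workers=n_candidates) as pool:
        # Writers draft variations concurrently...
        candidates = list(pool.map(lambda s: generate_code(task, s),
                                   range(n_candidates)))
        # ...then reviewers score each candidate concurrently.
        scores = list(pool.map(lambda c: review_score(task, c), candidates))
    best = max(range(n_candidates), key=lambda i: scores[i])
    return candidates[best]
```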
@jonatasdp 9 months ago
Very good Matthew! Thanks for sharing. I built my simple agent and I see it improving a lot after a few interactions.
@youriwatson 10 months ago
Great point about combining Groq's inference speed with agents!
@ayanbandyopadhyay767 6 months ago
Agree fully
@MicahBratt 3 months ago
There's a pattern found in nature that I think could be a great framework for building AI systems. I call it the helix framework because it's found in the body's processes relating to DNA. You have a static data structure with modules of rules/instructions/recipes. You have the building blocks, or ingredients. And you have the builder (compiler, parser, creator, etc.) - the active agent that retrieves the right "instructions" from the dictionary and builds, drawing on resources in the building blocks.
@agenticmark 10 months ago
Something you guys never talk about: the INSANE cost of building and running these agents. It limits developers just as much as compute limits AI companies. The reason agentic systems work is that they remove the context problem. LLMs get off track and confused easily, but if you open multiple tabs and keep each copy of the LLM "focused", it gets better results - so when you do the same with agents, each agent outperforms a single agent that has to juggle all the context. We get better results with GPT-3.5 using this method than you would get in a browser with GPT-4. Basically, you are "narrowing" the expertise of the model. And you can select multiple models and have them responsible for different things. Think Mixtral, but instead of a gating model, the agent code handles the gating.
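A sketch of "the agent code handles the gating": a cheap routing call decides which narrowly scoped expert (model plus system prompt) answers each request. `chat()` is a hypothetical wrapper and the expert definitions are purely illustrative:

```python
# Agent-code gating: route each request to a narrow expert instead of one
# monolithic prompt that juggles all the context.
EXPERTS = {
    "sql":      ("gpt-3.5-turbo", "You only write and explain SQL."),
    "frontend": ("gpt-3.5-turbo", "You only write React/TypeScript UI code."),
    "devops":   ("gpt-3.5-turbo", "You only write CI/CD and infra configs."),
}

def chat(model: str, system: str, user: str) -> str:
    raise NotImplementedError("hypothetical chat-completion wrapper")

def route(request: str) -> str:
    # A cheap classification call decides which expert gets the request.
    label = chat("gpt-3.5-turbo",
                 "Answer with exactly one word: sql, frontend, or devops.",
                 request).strip().lower()
    model, system = EXPERTS.get(label, EXPERTS["frontend"])
    return chat(model, system, request)
```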
@DaveEtchells 10 months ago
I’m really intrigued by your multi-tab workflow, it sounds super powerful, but I’m not sure how it works in practice. Do you have the different tabs working on different sub-tasks or performing different roles (kind of a manual agentic workflow, but with human oversight of each of the zero-shot workers), or are they working in parallel on the same task, or … ? IANAP, but I need to have ChatGPT (my current platform, or it could be Claude or whatever) do some fairly complex tasks like parsing web pages and PDFs to navigate a very large dataset and use reasoning to identify significantly-relevant data, download and assemble it into a knowledge database that I’ll then want to use as test input for another AI system. Ideally I’d use one of the no-code/low-code agent dev tools to automate the whole thing but as I said IANAP, and just multi-tabbing it could get me a long way there. It sounds like whatever you’re doing is exactly what I need to - and likely a boatload of others as well: I do wish someone would do a video on it. Meanwhile, would you be willing to share a brief description of an example use case and what you’d have the various tabs doing for it? (I hope @matthew_berman sees this and makes a vid on the topic: Your comment is possibly the most important I’ve ever encountered on YT, at least in terms of what it could do for my work and personal life.) Thanks for the note!
@japneetsingh5015 10 months ago
You don't always need state-of-the-art models like that - GPT, Gemini, Claude, etc. Many open-source 7B models work just as well for most companies.
@DefaultFlame 10 months ago
@@japneetsingh5015 Yeah, llama, mistral, mixtral, the list goes on. If you want something even more lightweight than 7B, stablelm-zephyr is a 3B that is surprisingly capable. Orca-mini is good too and comes in 3B, 7B, 13B, and 70B versions so you can pick whichever you want based on your hardware.
@TomM-p3o 10 months ago
What you're saying is: attention is all you need 😁 I do agree that mixing goals will confuse models, as it would people. People, however, have already learned processes to compartmentalise tasks. We might have to teach agents to do that, apart from constructing them to minimize this confusion.
@DefaultFlame 10 months ago
@TomM-p3o The whole point of multiple agents with different "jobs," personalities, or even different models powering them, is that we can cheat. We don't **need** to teach a single agent or model those learned processes; we can just connect several, each handling its own part and taking on the role of a different part of a single functional brain.
@AnOnymous-f8m 5 months ago
The main discriminating factor between an agent program and an LLM is that an agent has a goal in mind; it has an action to take, in the form of a response or a call to an entire function in some other program (e.g. make a payment). The LLM, on the other hand, is the 'suggesting entity' for the agent: it provides the reasoning and understanding ability. Agent + LLM = JARVIS.
@sma1015 7 months ago
Thanks for sharing. As much as I love Andrew Ng, his voice always puts me to sleep. It's like a lullaby. Thanks for elaborating on these updates; it kept me engaged.
@ronald2327 10 months ago
All of your videos are very informative and I like that you keep the coding bugs in rather than skipping ahead, and you demonstrate solving those issues as you go. I’ve been experimenting with ollama, LM studio, and CrewAI, with some really cool results. I’ve come to realize I’m going to need a much more expensive PC. 😂
@JohnSmith762A11B 10 months ago
Excellent video. Helped clear away a lot of fog and hype to reveal the amazing capabilities even relatively simple agentic workflows can provide.👍
@federico-bi2w 10 months ago
...OK, I can see it's right, having done a lot of "by hand" iterations... I mean, I am not using agents yet... but think about it with GPT: you ask something... you test... you adjust... you give it back... and the result is better. And in this process, if you ask questions on the same topic but from different angles, it gets better... so an agent is basically doing this by itself! Great video! Thank you :D
@notclagnew 9 months ago
Glad I saw this, your additional explanations were incredibly helpful and woven into the main talk in a non-intrusive way. Subscribed.
@bradwuzhere 4 months ago
I've been doing this. I call it model hopping. I'll give each model a new task with the info from the last task, e.g. outline, research, draft, etc.
@nuclebros8001 5 months ago
I'm finding this out now. I've had it build me an entire business model step by step, along with how to approach it. Just keep asking it. Dig, dig, dig. This makes months of research happen in minutes.
@johnh3ss 10 months ago
What gets really interesting is that you could hook agentic workflows into an iterative distillation pipeline:
1) Create a bunch of tasks to accomplish.
2) Use an agentic workflow to accomplish the tasks at a competence level way above what your model can normally do with one-shot inference.
3) Feed that as training data to either fine-tune a model or, if you have the compute, train a model from scratch.
4) Repeat from step 2 with the new model.
In theory you could build a training workflow that endlessly improves itself.
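A sketch of that loop, with hypothetical `agentic_solve()`, `judge()`, and `fine_tune()` stand-ins for the agent workflow, a quality filter, and a fine-tuning job:

```python
# Iterative distillation: agentic inference -> training data -> new model -> repeat.
def agentic_solve(model: str, task: str) -> str:
    raise NotImplementedError   # multi-step agent workflow (reflect, tools, etc.)

def judge(task: str, answer: str) -> bool:
    raise NotImplementedError   # filter: only keep high-quality solutions

def fine_tune(base_model: str, examples: list[dict]) -> str:
    raise NotImplementedError   # returns the identifier of the new model

def distillation_loop(model: str, tasks: list[str], generations: int = 3) -> str:
    for _ in range(generations):
        dataset = []
        for task in tasks:
            answer = agentic_solve(model, task)      # step 2: agentic inference
            if judge(task, answer):
                dataset.append({"prompt": task, "completion": answer})
        model = fine_tune(model, dataset)            # steps 3-4: train, repeat
    return model
```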
@autohmae 10 months ago
Let's also remember this is what open source tools were already doing over a year ago, but often these got stuck in loops. I'm really interested in revisiting them.
@gotoHuman 9 months ago
Or don't start the pipeline with a bunch of tasks, but rather let it be triggered from the outside when a task appears, e.g. in the form of a customer support ticket.
@chanceobondo3843 2 months ago
Your thoughts on Groq are exactly what I was thinking. LPUs make using agents work like magic - very, very fast to work on tasks and respond.
@BJM1896 7 months ago
Firstly, thank you Matthew for all that you do. You are really putting out excellent content and helping us stay on the cutting edge, or as close to it as possible, regarding AI. I would like to hear more about getting agents to behave when using them collaboratively. Sometimes it is difficult to get them to do what you want them to do, that is true. Recently I had one agent tell its supervisory agent to "tell the human that it's not necessary to do that". Tell the human. Imagine that.
@saadatkhan9583 10 months ago
Matthew, everything that Prof. Ng referenced, you have already covered and analyzed. Much credit to you.
@rafaelvesga860 9 months ago
Your input is quite valuable. Thanks!
@GraveUypo 7 months ago
That's how I've always used it. From the first time I used ChatGPT, my prompt included a "main agent" and a second agent to analyze the solutions of the first one and propose viable alternatives or a "different perspective". Nowadays I work with three agents, and I even give them different personalities to get even more contrasting perspectives.
@justinnkim 5 months ago
Can you point me in a good direction, so I can learn how to do this better? It seems that prompt creation from my point of view is nothing more than trial and error.
@YEYSHONAN 8 months ago
Thank you for translating Dr. Ng's speech into normal human language. I met Dr. Ng in Tokyo and asked him one of the dumbest questions at the press club in February. It was one of the hardest and most mind-boggling presentations I've encountered, even though I'm an ex-engineer. Liked and subscribed!
@RaitisPetrovs-nb9kz 10 months ago
I think the real breakthrough will come when we have user-friendly UI and agents based on computer vision, allowing them to be trained on existing software from the user's perspective. For example, I could train an AI agent on how to edit pictures or videos, or how to use a management application, etc. One approach could be to develop a dedicated OS for AI agents, but that would require all the apps to be rewritten to work with the AI agent as a priority. However, I'm not sure if that's feasible, as people may not adopt such a system rapidly. The fastest way forward might be to let the AI agent perform the exact task workflows that I would perform from the UI. This approach would enable the AI to work with existing software without requiring significant changes to the applications themselves.
@narindermahil6670 9 months ago
I appreciate the way you explained every step, very informative. Great video.
@timh8490 10 months ago
Wow, I’ve been a big believer in agentic workflows since I saw your first video on chatdev and later on autogen. It’s really validating to hear someone of this stature thinking along the same lines
@zaurenstoates7306 10 months ago
Decentralized, highly specialized agents running on lower parameter count models (7b-70b) working together to accomplish tasks is where I think opportunity lies. I was mining ETH back when it was POW with my gaming rig to earn some money on the side. I did the calculations once and the entire eth computation available was a couple hundred exaflops. With more and more devices being manufactured for AI calculation (phones, GPUs, etc) the available computing will only increase
@TestMyHomeChannel 10 months ago
I loved this video. Your selection was great and your comments were right to the point and very useful. I like that you test things yourself and provide links to the topics that are discussed previously.
@FullEvent5678 7 months ago
As the non-technical cofounder of our startup, these videos with your explanations are helping me a lot. Thank you!
@baatimama2494 5 months ago
Zero-shot can be converted automatically into multi-agent: LLMs can themselves generate multiple agents and solve a problem. We have to automate this process of turning zero-shot into multi-agent with auto-scalable agentic workflows.
@cedricharris-v2r 4 months ago
Nice. Could you simplify your comment?
@Gregzenegair 8 months ago
Actually, I was thinking about a "self-talking" LLM to improve results, correct itself, and create some thoughts on a subject, as we all do in our heads. Thinking is about talking to yourself and exploring thoughts, finding new problems and solving them one after another.
@chrisconn5649 9 months ago
You see improved results from agents, I see a 300% increase in token usage. Where is the pay off?
@ivanocj 9 months ago
run it locally...
@hindugoat2302 9 months ago
we are talking about exponentially self improving asians
@chrisconn5649 9 months ago
@ivanocj I'm running local. Performance woes. 60 t/s on simple queries.
@darianmays5933 9 months ago
Can you explain what tokens are about?
@hindugoat2302 9 months ago
@darianmays5933 If you have to ask, you will never know... we are talking about exponentially self improving asians
@NasrinHashemian 9 months ago
Matthew, your videos are really informative. Many thanks for sharing such knowledge and updates. This latest one was exceptionally good.
@RasoulGhaderi 9 months ago
I love this video. In the long run Advances in A.I surely can be debated for the good of AI Agents, though most will argue that only a few will benefit especially to their pockets, at the end, interesting to see what the future holds.
@YousefMilanian 9 months ago
I also agree that it will be interesting, take a look at the benefits of the computing age millions of people were made for life simply because they made the right decisions at the time thereby creating lifetime wealth.
@RasoulGhaderi 9 months ago
I wasn't born into lifetime wealth handed over, but I am definitely on my way to creating one, $715k in profits in one year is surely a start in the right path for me and my dream. Others had luck born in wealth, I have a brain that works.
@ShahramHesabani 9 months ago
I can say for sure you had money laying around and was handed over to you from family to be able to achieve such.
@RasoulGhaderi 9 months ago
It may interest you to know that no such thing happened, I did a lot of research on how the rich get richer and this led me to meet, Linda Alice parisi . Having someone specialized in a particular field do a job does wonders you know. I gave her 100 grand at first
@TubelatorAI 10 months ago
0:00 1. Introduction 🌟 Overview of Dr. Andrew Ng's talk on the power of AI agents.
0:32 2. Dr. Andrew Ng's Background 🧠 Insight into Dr. Andrew Ng's impressive credentials and contributions to AI.
1:12 3. Sequoia's Influence 🚀 Exploring Sequoia's significant presence in the tech industry and its successful portfolio.
2:02 4. Non-Agentic vs. Agentic Workflow 🔄 Comparison between traditional non-agentic and innovative agentic workflows for AI agents.
3:01 5. Power of Agents 💪 Understanding the strength of AI agents in collaborative iterative tasks.
4:06 6. Improved Results with Agentic Workflows 📈 Highlighting the remarkable outcomes achieved through agentic AI workflows.
5:19 7. Zero Shot Performance 🎯 Comparison of zero shot performance in large language models.
5:46 8. Agentic Workflows 🤖 Exploring the impact of agentic workflows on model performance.
6:19 9. Power of Agents 💥 Unveiling the significant impact of agentic workflows and agents in AI applications.
6:48 10. Design Patterns Overview 🌟 Insight into the various design patterns observed in agents technology.
7:11 11. Reflection Tool 🔄 Explanation and significance of the reflection tool in optimizing language model outputs.
7:58 12. Tool Use Empowerment 🔧 Empowering language models with custom tools and functionalities.
8:27 13. Planning & Collaboration 🤝 Discussing planning and multi-agent collaboration in AI applications.
9:55 14. The Power of Self-Reflection 🤔 Exploring the concept of self-reflection and feedback loop in AI agents.
10:32 15. Automating Coding with Agents 🤖 Using agents to automate coding processes and enhance performance.
11:41 16. Evolution to Multi-Agent Systems 🔄 Transition from single code agent to multi-agent systems for improved workflows.
12:22 17. Utilizing LM-Based Systems 🛠 Leveraging LM-based systems and tools for various tasks and productivity.
14:12 18. Exciting Potential of Planning Algorithms 📈 Exploring the impact and capabilities of planning algorithms in AI agents.
15:04 19. AI Agents in Action 🤖 Exploring the capabilities of AI agents in decision-making.
15:28 20. Reliability and Iteration 🔄 Discussing the reliability and iterative nature of AI agents.
16:25 21. Personal AI Assistants 🧑‍💼 Utilizing research agents for personal work tasks.
16:45 22. Multi-Agent Collaboration 🤝 Exploring the benefits of agents collaborating in tasks.
17:55 23. Optimizing Agent Performance 🚀 Enhancing performance through multiple specialized agents.
18:21 24. Design Patterns for AI 🎨 Summarizing key design patterns for effective AI usage.
18:43 25. Agentic Workflows Impact 🌟 Impact of agentic workflows on AI advancements.
20:25 26. The Power of Agents - Leveraging hyper inference speed for agent workflows.
21:06 27. Importance of Fast Token Generation - Discussing the significance of quick token generation in agentic workflows.
21:31 28. Advancements in Agent Architecture - Exploring agentic reasoning and architectural enhancements.
21:56 29. Journey to AGI - The path to Artificial General Intelligence through agent workflows.
22:18 30. Future Possibilities - Implications of using agentic workflows with current models.
22:59 31. Enhancing Model Performance - Improving model output through reflection and iteration with agents.
23:38 32. Conclusion & Call to Action - Excitement for agents, inference speed, and encouraging engagement.
Generated with Tubelator AI Chrome Extension!
@maverick9300 7 months ago
I built this into my GPT instructions a while ago and the output instantly became 10x the quality. The first step is to ask the AI to restate your request.
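That restate-first step as a tiny two-call sketch, with a hypothetical `call_llm()` wrapper:

```python
# Restate-then-answer: the model first restates the request, then answers
# against its own restatement.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("hypothetical chat-completion wrapper")

def restate_then_answer(request: str) -> str:
    restated = call_llm(
        "Restate the following request in your own words, listing any "
        f"implicit requirements:\n{request}")
    # The restatement becomes part of the context for the real answer.
    return call_llm(
        f"Request: {request}\n\nYour understanding: {restated}\n\n"
        "Now produce the answer, making sure it matches that understanding.")
```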
@2945antonio 9 months ago
A dumb question: since applying the agentic reasoning design patterns improves the answers produced by AI (GPT-2, 3, 4, etc.), why not build these agents into the GPT itself so that the need for iterations is minimized? Is it a commercial reason, i.e. to make you pay for the upgrade?
@weishenmejames 9 months ago
Nice share with valuable commentary throughout, you've got yourself a new subscriber!
@LaurenceSinclair 7 months ago
After watching this, I gave 4o about 500 words of a story and asked it to do a rewrite:
1. Understand the task.
2. What research is needed?
3. Analyze the story for genre, language, creative temperature, sentence structure, writing style, and authors in similar genres.
4. Create a brain map for the story, including adjacent outside-the-box ideas.
5. Use the brain map to create a rubric of ideas and required tasks, including review, revision, and line editing.
6. Use the rubric to generate agents who will use their expertise to carry out each task. Agents will refer to each other for the expertise they are missing.
7. Write a first draft.
8. Submit the first draft to the review agent. Return it to the other agents for revision.
9. When revising, use new, original, unique, fresh language. Change the order of events. Write new dialog based on the character's psychology (background + physical attractiveness + current need).
10. Write the final draft.
@CA-sz6vw 7 months ago
How was it?
@LaurenceSinclair 7 months ago
@CA-sz6vw I like the results. I also applied it to a GPT based on "whole brain" theory. It breaks down the task and assigns it to 8 specialized agents, and they work together to give you an answer. For fiction writing, it improved a lot. It got slightly better when I told it to do another round of research after tasks were assigned. It's worth playing with, even if you just start your prompt with "Using an agentic workflow... do this."
@CharlesVanNoland 10 months ago
As long as we're relying on backpropagation to fit a network to pre-designated inputs/outputs, we're not going to have the sort of AI that will change the world overnight. The future of machine intelligence is definitely agentic, but we're not going to have robotic agents cleaning our house, cooking our food, fixing our house, constructing buildings, etc., unless we have an online learning algorithm that can run on portable hardware. Backpropagation, gradient descent, automatic differentiation, and the like aren't how we're going to get there. We need a more brain-like algorithm. Throwing gobs and gobs of compute at backprop-training progressively larger networks isn't how we're going to get where we're going.
It's like everyone saw that backprop can do some cool stuff and then totally forgot about brains being the only example of what we're actually trying to achieve. They're totally ignoring that brains abstract and learn without any backpropagation. Backprop is the expensive brute-force way to make a computer "learn". I feel like we're living in a Wright Brothers age right now, where everyone believes that the internal-combustion-powered vehicle is the only way humans will ever move around the earth - except it's backpropagation that everyone has resigned themselves to as the only way we'll ever make computers learn, when there are no living sentient creatures that even rely on backpropagation to exhibit vastly more complex behaviors than what we can manage with it.
A honeybee only has one million neurons, and in spite of ChatGPT being, ostensibly, one trillion parameters, all it can do is generate text. We don't even know how to make a trillion-parameter network that can behave with the complexity of an insect. That should be a huge big fat hint to anyone actually paying attention that backprop is going to end up looking very stupid by comparison to whatever does actually end up being used to control thinking machines - and the people who are fully invested in (and defending) backprop are most certainly going to be the last ones who figure out the last piece of the puzzle.
When you have people like Yann LeCun pursuing things like I-JEPA, Geoffrey Hinton putting out whitepapers for algorithms like Forward-Forward, and Carmack saying things like "I wouldn't bother with an algorithm that can't do online learning at ~30hz", that should be a clue to everyone dreaming that backprop will get us where we're going that they're on the wrong track.
@sup3a 9 months ago
Maybe. Though it's fun to hear what people said when the Wright brothers and others tried to crack flying: "this is not how birds fly", "this is inefficient", etc. We "brute forced" flying by just blasting a shit ton of energy into the problem. Maybe we can do the same with intelligence.
@bilderzucht 9 months ago
Learning within a single individual brain may happen without any backpropagation. But couldn't the whole evolutionary process - running through billions of brains and arriving at a setup with different brain regions - be seen as some sort of backpropagation?
@vicipi4907 9 months ago
I think the idea is to get it to an advanced enough stage where it is competent and reliable - so much so that it expedites the research into something that looks more like the human brain's process as a replacement. We might even get it to a point where it self-improves; there is no reason to think it won't find a different approach that doesn't involve backpropagation. Either way, we can't deny it has great potential and application to make AI advancement significantly faster.
@intrestingness 9 months ago
Progress is rarely linear, and innovation follows the path of what is optimistically usable now, not the end game. That's why we had the 'stupid' internal combustion engine melting our planet for over 100 years 😢
@Mattje8 9 months ago
This assumes the goal of AI is to mimic a brain. It probably isn’t, mostly because it (probably) can’t, at least using existing compute approaches and current physics. If consciousness involves quantum effects as Penrose puts forward, current physics isn’t there yet. Or maybe it’s neither quantum nor algorithmic but involves interactions we can’t properly categorise today, which may or may not be deterministic. All of which is to say that I basically agree with you that all of the current approaches are building fantastic tools, but certainly nothing approaching sentience.
@GregoryBohus 9 months ago
Is it possible for, say, Gemini to iterate on itself if you prompt it correctly in your first prompt? Or do you need to build an application to do that? Can you use the web interface to do it?
@Sandheip 5 months ago
Absolutely inspiring! SmythOS is at the forefront of AI innovation, proving that the future is indeed agentic. Excited to see what's next! #SmythOS #FutureOfAI
@therealsergio 4 months ago
The power of agents is that more focused smaller prompts (for each agent) perform better than a single aggregate monolith prompt tasked with the entire reasoning workflow. Divide and conquer AGI into agents. Create a society of mind.
@mykdoingthings 9 months ago
GPT 3.5 cognitive performance going from 48% to 95%+ by just changing how we interact with the same exact model is WILD! Are we learning that "team work makes the dream work" is true even for AI? I wonder what other common human sayings will cause the next architectural breakthrough in the field🤔 Thank you Matthew for this walkthrough, first time I learn about agentic workflow, Andrew Ng is amazing but you made it even more accessible 🙏
@binoite1 8 months ago
This reminds me of giving humans tools to perform certain tasks, helping us with quality, precision, and efficient use of time. By training AI for specific tasks and giving it appropriate tools, we can harness AI's computing output more efficiently. Awesome development!
@horacioariash 9 months ago
🎯 Key Takeaways for quick navigation:
00:00 Agentic workflows can improve the performance of large language models.
03:08 Design patterns for using agents in AI are presented, such as reflection, tool use, and multi-agent collaboration.
20:46 Fast token generation is important for iterating effectively in agentic workflows.
Made with HARPA AI
@danshd.9316 9 months ago
Thank you, just finished. It's great that you explained it for those of us who may not be as techie as Ng expected.
@Lukas-ye4wz 9 months ago
Did you know that this is actually how our mind/brain works as well? We have different parts (physical and psychological) that fulfill different roles. That is why we can experience inner conflict: one part of us wants this, another part wants that. IFS teaches about this.
@TheStandard_io 10 months ago
Yeah, Sequoia Capital also misled everyone by not doing actual due diligence on FTX. When everyone heard that they invested, no one else did Due Diligence because they assumed Sequoia did. And they did not go to court or get any punishment
@Dom-I-NATE 10 months ago
You are probably my number one source for bleeding-edge info and explanations on AI and AI agents - keep it up, and great job, Matthew! You were one of the fleeting influences in my learning AI, and basically learning Python for that matter. Now that I can use AI as a personal tutor for free, anyone can learn anything - way better than being in a classroom, because having an AI tutor beats a human one.
@NOYFB982 10 months ago
With a limited context window, this hits an asymptotic wall very quickly. Keep in mind, I'm not saying the approach is not a big improvement; it is. However, my extensive experience is that it is not able to go nearly far enough. LLMs are still not fully capable of high-performing work. They can still only do the basics (or high-level information recall). Perhaps with a large context window, this would actually be useful.
@u2b83 9 months ago
I've long suspected that iteration is the key to spectacular results; it's like an ODE solver iterating on a differential equation until it stumbles into a basin of attraction. You could probably do "agents" with just one GPT and loop through different roles. Then again, maybe multiple agents are a crutch for small context windows lol. However, keep in mind that GPT-4 already gives you an iterative solution by running the model as many times as there are tokens.
@kritikusi-666 10 months ago
why don't you just link to the original video?
@baumulrich 9 months ago
Whether we know it or not, that is how most of us work: we evaluate the prompt, then do a first pass, then re-evaluate, then edit, then do more, re-evaluate, check against the prompt, edit, do more work, etc., etc.
@freddy29228 8 months ago
Matthew, do you have a forum for people to learn about AI?
@matthew_berman 8 months ago
My discord!
@evanoslick4228 10 months ago
It makes sense to use agents. They can be parallelized and specifically trained where needed.
@gregkendall3559 9 months ago
You can actually tell a GPT to break itself into multiple separate personalities. Give them each a goal. One can write code, then the next reviews it, and you have the one chatbot work it all out without resorting to a convoluted separate-agents system. Tell them to talk to each other to get a task done. Name them - Bob, Joe - and tell it to preface their discussion with their names as each one talks. I tried it and the results were very promising.
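A sketch of that single-chatbot, named-personalities trick as one prompt, with a hypothetical `call_llm()` wrapper:

```python
# One model, multiple role-played personalities in a single prompt.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("hypothetical chat-completion wrapper")

def personas_solve(task: str) -> str:
    prompt = (
        "Simulate two personalities working together.\n"
        "Bob: writes the code. Joe: reviews it and requests fixes.\n"
        "Prefix every line of dialogue with 'Bob:' or 'Joe:'.\n"
        "Keep going until Joe says APPROVED, then print the final code.\n\n"
        f"Task: {task}"
    )
    transcript = call_llm(prompt)
    # Everything after Joe's approval is treated as the final deliverable.
    return transcript.split("APPROVED", 1)[-1].strip()
```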
@JakehainHain 3 months ago
I use the free subscriptions of LLMs as agents. Now I know why they work so well for me.
@Dale-p2l 10 months ago
Thank you so much for all your videos. You are gold. Please never stop!
@jbavar32 10 months ago
I've been using AI for a couple of years now for a creative workflow (I don't do code), and I've often said AI is like having the most brilliant collaborator on the planet, but it has a slight drinking problem. My question is: how does one create an agent so that one LLM can pass its result to other LLMs? In other words, how do you engage several LLMs, each working on the same problem? It looks like you would need some special code or a custom API.
@GetzAI 10 months ago
What is the inference cost? That is the big thing.
@unorevers7160 5 months ago
Instead of generating agents by assigning GPT specific roles, it would be way more productive to have a specialized LLM created for a very specific topic. Then you could have agents for specific tasks and one agent that orchestrates those agents. Also, an idea I have regarding response time: if you cut a big task into multiple smaller ones, you could also think about something like a cache. If two requests are seen as identical by an agent, it could fall back to a static past response.
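The caching idea as a minimal sketch - keyed on the agent role plus a normalized prompt, with a hypothetical `call_llm()` wrapper:

```python
# Response cache for agent requests: identical requests reuse a stored answer
# instead of burning another LLM call.
import hashlib

_CACHE: dict[str, str] = {}

def call_llm(prompt: str) -> str:
    raise NotImplementedError("hypothetical chat-completion wrapper")

def cached_call(agent_role: str, prompt: str) -> str:
    # Key on the role plus a normalized prompt so "identical" requests collide.
    key = hashlib.sha256(
        f"{agent_role}|{prompt.strip().lower()}".encode()).hexdigest()
    if key not in _CACHE:
        _CACHE[key] = call_llm(prompt)
    return _CACHE[key]
```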
@dhruvbaliyan6470 9 months ago
Funny, I realized this over a month ago and was thinking of creating a virtual environment where multiple agents work together, each especially fine-tuned for its use case. So my brain is as intelligent as this person's.
@markgonzales 5 months ago
In the coding benchmark graph I noticed the gap between zero-shot and with-agents decreased significantly between GPT-3.5 and 4. It seems like this trend would continue as new versions release until the gap is eliminated altogether. Aren't agents going to become obsolete as LLMs absorb this extra step?
@bobnothing4921 10 months ago
I am looking for something like Autogen/GPT Pilot 2, but that is designed for programming for iOS, such as Swift/Xcode. Is there something along those lines?
@alabamacajun7791 9 months ago
I had to skip the sound effects and PowerPoint-like intro to 2:00. The content was excellent. Agents are going to be our gophers of the cyber-intelligence "AI" world.
@marshallodom1388 10 months ago
I convinced my chat AI that our new mutually conceived idea of "think before you speak" is extremely helpful for both of us.
@konstantinlozev2272 9 months ago
If you spend a few exchanges brainstorming different approaches with GPT-4 first, and only then give it the task, it is superb. I can see a pair of agents brainstorming in the future instead.
@gekalfat 8 months ago
This is very interesting... Basically, what is needed is to assign to AI agents the roles humans filled pre-AI, with outlined SOPs, and fine-tune these models with resources relevant to each role...
@samuelmarndi 5 months ago
You can ask an LLM to help you like an agent. So agents can be used inside the LLM; you just need to ask the right question or prompt. Speak to the LLM just like you would speak to an agent, and the answers and guidance produced will be very similar.
@boy_with_thorn 8 months ago
I'm confused about how to implement these ideas. What exactly do I have to do to utilize this "agentic workflow" pattern when I'm chatting with GPT-4 to generate code, for example? How do we automate this iterative process? Do you think we could call this process an "automated prompt engineering" system?
@matt6288joyce 9 months ago
As is often the case, education in England is years behind private-sector organisations. I'd love to understand how AI can be utilised to help run a school, and how a senior leader like myself could learn to introduce this infrastructure into the functions of a school at large. I'm sure it can be utilised to help with lesson planning, but I'm thinking more in terms of organisation-scale processes.
@Павел-н5ц8о 9 months ago
It seems the conversation about the people involved in GPT chat training leaves out a few cornerstone points. Information, analyzed along with its statistics, can easily be checked and its source purity qualified, sorting the listed public sources by whether they are prudent or not. The set of information that passes through the analysis and is judged unproven gives the opportunity to analyze why it is unproven, misleading, or simply fake. So the discussion was not really about agents, or about an independent and open press, but about "mediators" who try to place unconfident and fake information instead. P.S. Obviously, this video was placed by one of the aforementioned "mediators" for certain deceptive purposes and could serve as an example of what the conversation was genuinely about.
@animalrave7167 9 months ago
Love your breakdowns! Adding context and background info into the mix. Very useful.
@jakeparker918 10 months ago
Awesome video. Yeah, this is why I voted for speed in the poll you did; this is what I was talking about.
@jmherrera00 5 months ago
This reminds me of John Rowan's subpersonalities - there is no single "I" in us. I guess it's the same with agentic flow. We are an agentic flow ourselves lol
@asafzilberberg6648 5 months ago
Great. Thank you for sharing this.
@biskero 10 months ago
In a multi-agent, multi-LLM scenario it is key to understand which LLM to assign to each agent. I found that there is not enough information about each LLM to make that decision. Maybe the answer is to train each LLM on a specific topic.
@stepanfilonov 10 months ago
It's possible without training - a simple group of base GPT-3.5 agents already outperforms a single GPT-4; it's more about orchestration.
@biskero 10 months ago
@stepanfilonov Interesting, so it's a matter of the agents supporting it? Still, LLMs should come with more information about their specific training.
@StuartJ 10 months ago
Maybe this is what Grok 1.5 is doing behind the scenes to get a better score than GPT-4.
@gotoHuman 9 months ago
I think there should be more emphasis on the insane efficiency gains achievable when agents are enabled to take actions in connected apps and systems
@EstrogenSingularity 10 months ago
Can someone drop a link for the Andrew Ng video?
@matthew_berman 10 months ago
I'll drop it in the description, sorry about that.
@AshishDeshpande13 3 months ago
What exactly is meant by inference speed here? (See 20:05.) I am a little confused. Is it the time required for the LLM to start outputting data from the moment we hit enter after writing the prompt, or the rate at which it can output words once it has finished "thinking", or the speed at which it can understand the text prompt we have written? Someone please help me understand it! I would really appreciate it, thanks in advance!
@homewardboundphotos 10 months ago
So agentic systems will use GPT-4 to create synthetic training data for GPT-5.
@bztube888 8 months ago
He also taught Ilya Sutskever (co-founder of OpenAI). Dr. Andrew Ng is a big deal.
@YorkyPoo_UAV 9 months ago
I just started learning how to set up AI last month, but this is what I thought multi-agents or a crew was.
@vincent_hall 7 months ago
I'm so happy to hear this word, "finicky". I haven't heard it in ages.