I built a thinking machine. Happy birthday, ACE!

21,201 views

David Shapiro

1 day ago

Patreon (and Discord)
/ daveshap
Substack (Free)
daveshap.substack.com/
GitHub (Open Source)
github.com/daveshap
AI Channel
/ @daveshap
Systems Thinking Channel
/ @systems.thinking
Mythic Archetypes Channel
/ @mythicarchetypes
Pragmatic Progressive Channel
/ @pragmaticprogressive
Sacred Masculinity Channel
/ @sacred.masculinity

Comments: 135
@ct5471 9 months ago
Within each layer, is it planned to have various expert models, like a mixture-of-experts system, or one big generalist model? In the brain we have roughly 150,000 cortical columns which all specialize for domains of expertise in a self-organized manner, with overlapping domains, but no single module/model knows everything. A hierarchical structure such as ACE might also emerge as a consequence of self-organization.

The long-term question is whether a bunch of smaller specialist models running in parallel (fine-tuned Llama 2 models or the like) could identify whether they have something of value to add, e.g. based on self-consistency, participate in each conversation loop, and only learn when information is new and touches their respective domains of expertise. Essentially, smaller modules that specialize, including in continuous learning, in a self-organized fashion similar to cortical columns, instead of relying on big models like GPT-4 that are much more expensive if you want periodic fine-tuning to let them learn continuously. With a lot of smaller parallel models one could distribute the available compute better over time, including for continuous fine-tuning.

If equivalent performance to large models could be achieved that way, but at significantly lower compute cost, it would also mean that AGI could be achieved by smaller groups or even individuals in the near term, rather than only by large organizations like OpenAI or Google, as it might strongly reduce the compute effort. There might even be autonomous sharing platforms for specialist agents, similar to Hugging Face, where each AI framework could dynamically pull, upload or synchronize modules when needed, allowing individuals to participate in AGI via model sharing. In terms of hardware utilization, or even ASICs, such an approach would also be easy to support, as the modules could run in a highly parallel fashion.
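A rough Python sketch of the opt-in idea above, with invented model names and a stubbed-out completion call; it only illustrates the routing pattern, not anything in the ACE repo:

    # Sketch only: parallel specialists that opt in per message. Model names and
    # the ask() call are placeholders for whatever local inference you actually run.
    from concurrent.futures import ThreadPoolExecutor

    SPECIALISTS = ["medicine-llama", "law-llama", "code-llama"]  # invented names

    def ask(model: str, prompt: str) -> str:
        raise NotImplementedError("plug in your local inference call here")

    def wants_to_answer(model: str, message: str) -> bool:
        verdict = ask(model, "Does this touch your domain? Answer YES or NO.\n\n" + message)
        return verdict.strip().upper().startswith("YES")

    def route(message: str) -> dict:
        # Ask every specialist in parallel whether it wants to participate,
        # then collect full answers only from the ones that opted in.
        with ThreadPoolExecutor() as pool:
            opted = list(pool.map(lambda m: wants_to_answer(m, message), SPECIALISTS))
        return {m: ask(m, message) for m, ok in zip(SPECIALISTS, opted) if ok}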
@lesliejohnrichardson 9 months ago
Damn, this is a high-quality comment. I'm excited for Dave's answer!
@DaveShap 9 months ago
Yes to the many models, MoE, and such. There are numerous reasons for this:
1. Blind spots and errors within individual models
2. Energy efficiency (e.g. use the smallest, fastest model that will suffice for a given task)
3. Safety and alignment (models check each other)
4. Constantly training new models, swapping models out, fungibility, etc.
Great question.
@MadeOfParticles 9 months ago
Good question, but doesn't this method make AI more human-like rather than AGI? Dividing AGI into specialized knowledge fields with overlaps for efficiency reproduces one of the greatest weaknesses of the human brain: handling vast, coherent knowledge. We cannot remember everything; that's why large fields like medicine are divided into subfields such as neurology, cardiology, and more. In medical research, conflicting evidence often arises across studies in different fields regarding the same subject, and most specialized researchers struggle to reach a consensus because of their limited knowledge of other fields, in my opinion. In the specialized-knowledge approach mentioned here, each specialized AI makes a decision based only on the information available to it, and each model gives a finalized decision. This creates a situation similar to how today's fields work. However, if we gave an AGI two conflicting pieces of evidence from two different fields about one subject, it would be capable of generating a generalized decision based on those pieces and all other related evidence. Importantly, fine-tuning an AI model may lead to a loss of some of its general knowledge. I believe the primary goal of AGI is to achieve general intelligence that encompasses all fields and can make decisions while generalizing every single factor within a single model.
@ausmurp 9 months ago
I think if AI is the one doing this with knowledge graphs, yes it is acting like a human brain except it can remember everything, and is going to be way more accurate. But it is also training itself on our data, data based on these fields. We have to figure out how to ensure each expert is trained on the correct data. And then it's still human data, so there are going to be inaccuracies.
@MadeOfParticles 9 months ago
@ausmurp We have evolved to possess a complex brain for innovation, fulfilling evolution's role in ensuring the survival of species. Innovation is achieved through research. When we come up with an idea, we engage in trial and error until we reach a solution; that's how we ensure the accuracy of our ideas. This is why medical research and AI research in their respective fields are crucial for doctors, engineers, and others. They cannot perform their jobs accurately without these research findings. This is precisely how we are advancing toward AGI capability in LLMs and robotics. Even though we initially feed these models a mix of accurate and inaccurate data, researchers are currently working to remove as many inaccuracies as possible until they fulfill their AGI goal in machines. Robots combined with AGI will eventually replicate this exact human ability. So, in the near future, AGI plus robots will generate millions of ideas and conduct research to provide more accurate solutions to various problems, more efficiently than humans ever can.
@brendendavis9839 9 months ago
Great work, this has been one of my favorite channels for a good while now. This research is endlessly fascinating and (obviously) very important!
@bgtyhnmju7 9 months ago
Interesting (to me) is that we can strive for more alignment with your ACE model, with each layer working towards that in some way, plus self-correction. But what is more interesting is the idea that although we can't "see" into the neural net or the trained model, here with ACE we can see all the thoughts flowing back and forth and have a very clear view of what it's thinking. And (I think OpenAI and probably others do this) it would be easy-ish to tap into these message buses, or logs, and monitor the content from the outside, without interrupting the ACE mind or its thinking. So we have transparency, not of the deep neural net, but of the "conscious" layers. Anyway, cool, good video. I look forward to more.
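A passive tap could be as little as tailing the bus log from a separate process; a minimal sketch, assuming the bus is mirrored to a plain-text log file (the filename is made up):

    import time

    def tail(path="north_bus.log"):              # hypothetical log file
        """Yield new lines as they are appended, without touching the ACE process."""
        with open(path, "r", encoding="utf-8") as f:
            f.seek(0, 2)                          # start at end of file
            while True:
                line = f.readline()
                if line:
                    yield line.rstrip()
                else:
                    time.sleep(0.5)               # wait for new bus traffic

    if __name__ == "__main__":
        for message in tail():
            print("[BUS]", message)               # or feed a dashboard / safety checks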
@SinfuLeeCerebral 9 months ago
Thanks for the update! Was just rewatching your video on how you get your research! 😊
@TheMajesticSeaPancake 9 months ago
Thank you so much for putting all this together. You've done a great job of putting your money where your mouth is by making so many of these things public and open development. I cannot thank you enough for doing your part in all this.
@JakexGaming 9 months ago
So excited for this and the future updates to it!
@Dan-oj4iq 9 months ago
Dave: I love your frequent "brain dead simple" description regarding many things technical. Still, that is a very relative term.
@AGIConsciousness 9 months ago
this is great, you have a great direction.
@spencerfunk6697 7 months ago
I keep coming back to these vids. I keep having the thought of integrating this idea into a whole operating system based on LLMs and LMMs. It would still have a user interface, but it would mostly be driven by commands via text or speech, especially with offline capabilities. It would be a new and powerful way of interacting with technology.
@fabriai 9 months ago
Loved the status update and demo. Need more API calls!
@tomadd8165 9 months ago
More API calls is clearly not the solution! It's simply a matter of calling the API when it really makes sense rather than looping like a madman. Any human whose brain continuously loops on existential questions every 5 seconds, to the point that they can't do anything else, gets sent to a psychiatric hospital and put on heavy drugs until they calm down.
@edwardrhodes4403 9 months ago
Really liking the small text boxes that give extra info throughout the video. Would love to see those in future videos with breakdowns of technical aspects of what you are talking about
@DaveShap 9 months ago
Yeah, I'm getting the new format dialed in to maximize info transfer into your brain. No neuralink required.
@OutiRikola 9 months ago
For me it's the opposite! I can't handle two information streams at the same time, so I have to look away from the screen when two are competing. That makes it difficult to follow the video, as it leaves only the auditory input, which is weaker for me than the visual.
@edwardrhodes4403 9 months ago
@@OutiRikola I just pause the video, read it, then I pause or rewind if I think I missed something
@keeganpenney169 9 months ago
I love this. Even if it has a lot of kinks to work out, those are some ingenious AI methods, using the bus and layering.
@Rowan3733 9 months ago
Looks great so far, looking forward to seeing it in action.
@DaveShap 9 months ago
Bringing it to life as fast as possible
@VastCNC 9 months ago
Excited to see this running on multiple models, mixed local and frontier.
@CipherOne 9 months ago
I can’t wait to see what it does when it reaches a higher level of sophistication and is used with local models that can be better trusted.
@ub-relax6800 9 months ago
Beautiful!
@jakubdabrowski7774 9 months ago
You're hardly a lazy coder if you prefer to keep and maintain a separate copy of the same code block rather than just parametrizing the first one :D
@simonstrandgaard5503 9 months ago
Wow. Very interesting.
@djstraylight 5 months ago
You gave the ACE framework a massive goal, and the ML cognitive resources just aren't up to the task of such a wide-open framework. You might want to see if there is a way to do a distributed LLM setup like Folding@home or SETI@home. I developed a personal assistant that has an executive layer and agent layers. It's a relatively simple architecture, and I still ran up against OpenAI's rate limits. Of course, if you spend more money (or give them a bunch of money ahead of time), OpenAI will raise your rate limits. That's how I finally got my assistant to work decently.
@lokiholland 9 months ago
Nice one !
@poshsims4016 9 months ago
OMG I love you!!! I absolutely loved your Westworld breakdown. It would be amazing if you built a cognitive architecture similar to the hosts on Westworld. I have been obsessing over making my own humanoid robot with a team. It would be awesome to explore how to build a humanoid robot like that and not just have it powered by GPT, including the prompt engineering for it.
@winterrain870 9 months ago
HB {Happy Birthday}
@channelkaranos 9 months ago
This is insane. It's the best thing I've seen in ages. Any way for someone interested to help you?
@prolamer7 9 months ago
You need to add an entirely independent "maintenance" program which will periodically check chunks of the log in batches and double-check whether the reasoning direction is making sense. It should also rate each log entry using its own judgement.
@DaveShap 9 months ago
That's the system integrity layer. You should read the docs
@prolamer7 9 months ago
@DaveShap I apologize then. The other part of my point was to make that one layer fully independent; i.e., in the future, use a strong but different model for that task. But your design is sound! I always watch your videos and like them.
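For readers who want to experiment with the independent-auditor idea, a minimal sketch, assuming a separate judge model behind a simple judge(prompt) function (this is not the actual system integrity layer):

    # Sketch: audit the log in batches with a *different* model than the one doing the thinking.
    def audit_log(lines, judge, batch_size=20):
        """lines: raw log lines; judge(prompt) -> str, backed by a separate strong model."""
        reports = []
        for i in range(0, len(lines), batch_size):
            chunk = "\n".join(lines[i:i + batch_size])
            prompt = ("Rate each entry 1-5 for coherence and say whether the overall "
                      "reasoning direction still makes sense:\n\n" + chunk)
            reports.append(judge(prompt))
        return reports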
@ausmurp 9 months ago
Dave, I love this video. I understand that building something beyond what we humans are capable of is the goal, obviously. But in the quest to understand the relationship between AI and life as we know it, I wonder whether it would help to break this out into minimally basic "life forms" and their internal processes of thought. Start with a worm (the OpenWorm project) and build the internal processing of a worm. Then a larger insect, then a reptile, mammal, etc. It seems like a backwards approach, but what we discover could be interesting. It could also help us figure out how AI/computer/brain fusion will work.

You would want a way to limit memory, processing speed, parallel ability, input from sensors (types of sensory input), output as a result (types of output: text is most basic, then speech, movement), etc. I would start with a config file to control this. I could see ACE as the backbone for this. The idea could be used to simulate evolution as well: the config file is the "DNA". What happens when that config file changes while processing is running? For example, as a worm, an output of type movement (legs) is added. From evolution's perspective: what are legs, how do I use them? As a worm, input of vibration getting stronger translates to danger; now I have legs, I don't know what they are, but I start churning those things as fast as I can, and that helped me get away. What did I learn, and how can I improve so I can use this to move? Evolution took millions of years and we could simulate this in seconds.

Really awesome video, thank you for sharing. Excited for more.
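If anyone wants to toy with that, the "DNA" could start as nothing more than a small config dict the loop reads every cycle; the field names here are purely illustrative:

    # Hypothetical "DNA" config: the capabilities the simulated organism is allowed to use.
    worm_dna = {
        "memory_slots": 8,             # how much it can remember
        "cycle_hz": 2,                 # processing speed
        "senses": ["vibration"],       # available inputs
        "outputs": ["wriggle"],        # available actions
    }

    def mutate(dna, **changes):
        """Return a new genome with some fields changed, e.g. mutate(worm_dna, outputs=["wriggle", "legs"])."""
        return {**dna, **changes}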
@agihub 9 months ago
Nice work. I think it would be interesting to plug ACE into the brain of a character in a computer game, like GTA or The Sims, where the entity is constantly faced with different events and phenomena: a conflict, sudden news, etc. In response, the different layers would be activated to make decisions, both quick ones and longer-term ones that require reflection. At the same time, the basic set of knowledge should be limited, and it would have to be replenished in the course of its virtual life. From this, the behavioral strategies of such a hero would change over time. It would be interesting to track how learning takes place in the "real world" rather than over virtual tasks. The challenge is finding such a game into which ACE could be imported as a character :)
@thenoblerot 9 months ago
Look at Mr. Moneybags over here automating GPT-4 queries 😆 jk jk. It's great to see something running, and it's no surprise to me that "education" became its first priority. Good job, ACE! 👍 Also great to see your subscriptions rise with each new video!
@harlycorner 9 months ago
Three minutes in and I already spotted a couple of things that I wouldn't accept from developers even as part of a proof of concept or a demo. Those things usually tend to end up in production, which then suffers from performance issues, just like this code would start suffering if somebody decided to actually deploy it. That said: never, ever sort lists of dynamic (growing) size on every pass. The practice is especially bad considering that you're only re-sorting the list for the purpose of getting the latest 20 rows. Instead, just take the slice [-20:][-1::-1] (aka "give me the last 20 rows in reversed order"). That way you know your performance won't degrade over time. The purpose of sorting is to go from unordered to ordered.
@DaveShap 9 months ago
Yeah, gonna just slice the list off to the last 20 before even sorting.
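Concretely, assuming the bus history is a Python list that is appended to in arrival order (which is what makes the slice equivalent), the difference looks like this:

    # Stand-in data; in the real loop this would be the growing message history.
    messages = [{"time": 1, "text": "hi"}, {"time": 2, "text": "there"}]

    # Sorting the whole list on every pass just to show the newest 20:
    slow = sorted(messages, key=lambda m: m["time"], reverse=True)[:20]

    # Constant work per call when the list is already chronological:
    fast = messages[-20:][::-1]        # last 20 entries, newest first

    assert slow == fast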
@angloland4539 9 months ago
you're inspirational
@laptopuser5198 9 months ago
Time to bump that API rate limit up. Keep up the good work!
@bioshazard 9 months ago
Killer demo! Very cool! I am interested in building an ACE running on a single-threaded local LLM (MistralOrca 7B, if you care). That seems to imply sequential layer execution. Do any recommendations come to mind for attempting this kind of blocking approach? How would you determine which layer to give LLM time to: north/south bus content? Loop top-down all the way, over and over? Thanks for these videos and for sharing the ACE framework!!
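One possible blocking approach (not from the video; all names invented): run plain top-down and bottom-up passes, handing the single model to one layer at a time and letting the buses carry whatever each step produced.

    def run_sequential(layers, north_bus, south_bus, max_cycles=10):
        """layers: ordered top (aspirational) to bottom (task prosecution); buses are plain lists.
        step_down/step_up are hypothetical per-layer methods that each make one blocking LLM call."""
        for _ in range(max_cycles):
            for layer in layers:                                 # southbound pass
                south_bus.extend(layer.step_down(south_bus))
            for layer in reversed(layers):                       # northbound pass
                north_bus.extend(layer.step_up(north_bus))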
@PrincessKushana 8 months ago
Damn. I realised that my own project is very similar to this, but with a different, albeit related, cognitive model. I've been getting NARS working with its GPT plugin to use for memory, beliefs, self-schema, etc.
@mjp152 9 months ago
This would be interesting to test out with some of the open source models that are being pumped out at an ever-faster pace.
@DaveShap 9 months ago
We're gonna do it!
@funnyperson4016 9 months ago
Champion 🥇
@sagetmaster4 9 months ago
When you let David Shapiro's AI teams cook:
@dr.mikeybee 9 months ago
You can also add some sleep statements.
@ReubenAStern 9 months ago
"Simple" is definitely a relative term.
@03Griffen 9 months ago
Thank you, guys... I really need one (or we all really need one) in case they still mess up the one at OpenAI.
@03Griffen 9 months ago
iThought! Umm, I kinda made a schema, or a system, that is really robust. Link up if you guys want :) Even Eve won't try to break the scheme :D
@HectorRoldan 9 months ago
The Picard of our times ^_^ These updates get more hope-filled, and with multiple teams this could be amazing. I was wishing I could get my hands on enough enterprise-level cards for a system that takes 4-8 cards, and mod a PCIe attachment to hold an M.2 drive per card, so each can run a specific cognitive framework; then, when Intel releases its AI on Core, that could govern the learning/articulating models/modes. It will be fun when one can use the Tensor cores on a Pixel phone to run a Pi hive for scaled-down models doing similar tasks.
@the_best_of_times 9 months ago
Hi Dave. Can you do an update video on your "AGI within 18 months" prediction? Things have moved so fast in the last 6 months that we need your perspective on the likelihood of it being met. 🎉🎉🎉
@dr.mikeybee 9 months ago
You can have Mistral run all the upper layers and run the task layer through OpenAI's API.
@pareak 9 months ago
13:07 too expensive... hitting the same problem, lol. We just need to wait two more years.
@TheBlackClockOfTime 9 months ago
I mean, we're always told to think before we speak.
@randotkatsenko5157 9 months ago
I can see that context window size, output speed, and reasoning ability are the main limitations for this type of framework operating at a larger scale. When do you expect LLM tech to catch up enough to execute this type of complex framework (as in, bring tangible real-world results)?
@DaveShap 9 months ago
6 months or less
@MarekMirocha 8 months ago
Hey Ziggy! Brother, we really need to upgrade the nervous system's stress threshold because of the external stimuli that come from other people's clandestine negative impact. Please, more control of the stress impulses. Kain.
@jakubdabrowski7774 9 months ago
@4IR.David.Shapiro You can implement a central OpenAI connector that gathers requests from multiple layers/agents and combines them to submit only one request to the OpenAI API with multiple messages. That's counted as one request by OpenAI, thus allowing you a higher rate of requests. Edit: I think it's called "batching", but now I'm reading that there might be some issues with it when using the chat versions of GPT; anyway, maybe worth looking into if you're running into request rate quotas.
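A simpler cousin of that idea, which sidesteps the chat-batching caveat: one central connector that every layer calls into, serializing requests and spacing them out client-side so the process as a whole stays under a requests-per-minute cap (sketch only, names invented):

    import threading, time

    class Connector:
        """Single choke point for all layers; call_llm is whatever completion function you already use."""
        def __init__(self, call_llm, max_rpm=60):
            self.call_llm = call_llm
            self.min_gap = 60.0 / max_rpm      # seconds between request starts
            self.lock = threading.Lock()
            self.last_call = 0.0

        def request(self, messages):
            with self.lock:                     # serializes and paces calls from every layer
                wait = self.min_gap - (time.monotonic() - self.last_call)
                if wait > 0:
                    time.sleep(wait)
                self.last_call = time.monotonic()
            return self.call_llm(messages)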
@alexandrefruchaud1969 9 months ago
Thank you, this is really interesting. Would it help to use GPT-3.5 for this?
@DaveShap 9 months ago
Yeah, but I'm lazy
@sagetmaster4 9 months ago
Sorry if this is a dumb question, but are the northbound bus messages given directly to every higher-level layer, or are they given only to the layer directly above, with higher layers having permission to access the data if they want to?
@DaveShap 9 months ago
All layers above, but you can also set it up to use permissions. The research team hasn't decided which way is better yet; we need to do some experiments.
@Samuelsward96 9 months ago
Is there any interest in a card game where you compete as governments, corps, or rogue hackers to achieve the singularity? There are different resources like computing power, electricity, and software for making stronger models with higher parameter counts. Better models increase risk, which you can reduce with alignment research. I think you get the general idea? Basically a realistic race towards the singularity.
@AI-Gusto 9 months ago
That sounds dope!! Reminds me of Dune: Imperium.
@DaveShap 9 months ago
That's basically the backstory to Cyberpunk. It did not end well.
@darylallen2485 9 months ago
I agree with David. You should look up the book Neuromancer (or read it). The end of the book is basically William Gibson's version of how the singularity brings itself about. Edit: none of us actually addressed your question. A card game version of a cyberpunk universe sounds pretty cool.
@MrJaggy123 9 months ago
@@darylallen2485 I assume you mean Neuromancer unless William Gibson has also authored a tome related to death magic :O
@rafaelfigueroa2479 9 months ago
Hey Philip, there is interest for sure. I'm from the Brazilian National Association for AI, and we're building something very similar, based on the card game Netrunner. It is for educational purposes at the moment, but if you want to turn it into something larger, let me know and we can collaborate. Cheers.
@fR33Sky 9 months ago
I've seen the code, I've seen the logs, but I still can't for the life of me figure out how the buses work. I think I'll create a post in your GitHub discussions soon.
@DaveShap 9 months ago
Messages go up and down
@helrod6131 9 months ago
Hiding a uniform under a jacket is unbecoming of a Federation AI Officer. Having said that, WOW! First time I've seen someone run past OpenAI API limitations. Nice!
@DaveShap 9 months ago
But it was cold
@dave7038 9 months ago
I know it is pretty far off still, but I'm looking forward to seeing the thought-stream when this thing has some capabilities it can use to affect the real world. Like, it'll have to understand that it is starting with a minimum of authority to change the world, so I'm curious how it will handle deciding how to proceed. I'm also looking forward to seeing management tools that provide real-time insight into this thing's thoughts. I'm imagining that given its potential for excellent memory and task-switching the raw stream of consciousness from it would feel quite ADHD to most humans as it switches between tasks while it waits for things to happen. Some kind of management interface that monitors the systems thoughts to keep track of all the things it is doing, its thoughts about them, how much time it is spending on various efforts, and whatnot would be very useful and interesting to design. Related, it seems like some of that 'how much is this costing vs what's it likely to be worth' information would eventually be important for it to focus its resources on the tasks that are most likely to have useful impacts. So it would need to track what 'projects' various thoughts are about and how much they cost... This is starting to sound too much like managing a business. Nevermind.
@tobiaswegener1234 9 months ago
As you write a lot of code in Python, you might take a look at f-string formatting. It is newer and, I think, much better: easier to read and debug. For example, you can just use:

    age = 32
    string = f"Gerald is {age} years old."
    print(string)

Using the variable name inside the braces is enough to substitute its value, and there is much more you can do with these.
@webgpu 9 months ago
Could anybody please explain what David's code does, in summary? (I have programming experience; I've been coding for 30+ years.) I don't follow his channel, so I'm not up to date with his previous work.
@DaveShap 9 months ago
This video should stand alone.
@cnotation 9 months ago
Okay I can't take it anymore. Where did you get the uniform from?
@DaveShap 9 months ago
The Internet. Amazon, I think
@KCM25NJL 9 months ago
I think ACE is by far the most promising strategy for developing level-2-and-beyond thinking machines. I fear that, like most other strategies, it will be very heavily tied to the economics of the substrate that ultimately serves the models ACE employs. I mean, imagine being paid $2.30 for every minute of time you engaged your brain... we would all be very rich. Perhaps someone will put an ACE model to work on just this problem :)
@DaveShap 9 months ago
That's the goal. We want to start using this internally as a research assistant/employee ASAP. But we have to dial in the efficiency.
@miikalewandowski7765 9 months ago
Ahoj Dave, have you heard of the card game "Ligretto"? If not, it's worth checking out. I have the feeling that parts of the game design might be interesting as a blueprint for parallel workflows and task shuffling. The game contains query piles, buffer zones and a project pit, while the players (microservices) work simultaneously to reduce their tasks. Maybe some aspects of this game happen to be helpful in reducing the frustration level of your layers? If not, you will at least know a new, fun, brain-dead-simple game to play with your fam & friends 😊
@dr.mikeybee 9 months ago
Why don't you run this on Mistral? Then you won't have API costs. Also, it's much faster.
@DefenderX 9 months ago
Don't know if this is relevant or just wrong, in which case I'm sorry. But does the model recycle memory efficiently? Would it be beneficial to have something like a strike/debuff system for information that's not used for 1, 2, 3 cycles?
@DaveShap 9 months ago
This has a short decay time.
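For anyone curious what a strike/decay scheme could look like in code, a toy version (not what ACE actually does):

    # Each memory loses a "life" every cycle it goes unused; three strikes and it's dropped.
    def decay_pass(memories, max_strikes=3):
        """memories: list of dicts with 'text', 'strikes', and an optional 'used_this_cycle' flag."""
        survivors = []
        for m in memories:
            m["strikes"] = 0 if m.pop("used_this_cycle", False) else m["strikes"] + 1
            if m["strikes"] < max_strikes:
                survivors.append(m)
        return survivors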
@calvingrondahl1011 9 months ago
“I take time to think therefore I am.” 🤖
@paultoensing3126 9 months ago
I wish I had a “brain dead simple” brain to grasp this.
@dr.mikeybee 9 months ago
It would be interesting to see a comparison of your framework with Microsoft's Autogen framework.
@DaveShap 9 months ago
This is infinitely more sophisticated
@mylittleheartscar 9 months ago
Pog
@JustSebNL 9 months ago
I like where this is going. I'd just like to point out that using ToolBench would be much better than the Gorilla CLI. Just my 2 cents 😅
@DaveShap 9 months ago
We can swap it all out yeah, it's all interchangeable and hackable
@justinpeter5752 9 months ago
You hit the rate limit because each message sends all the previous messages plus the system message each time, and on top of that you add however many tokens you request for each response.
@DaveShap 9 months ago
No
@mungojelly 9 months ago
That's an interesting experiment, but I feel like so far the actual result is that it doesn't work at all, because it consumes vast numbers of tokens in comparison to how much thinking it produces. I don't feel like that would change if you got 100x faster inference (and we can't wait that long anyway); it would just change the timescale at which you're wasting tokens. Yes, this architecture would "work", as in think of stuff, if you sped it up 100x, but surely you could also speed up something that's less wasteful, so this still wouldn't meet the standard of how much thought we'd expect from that much inference at that point in history.

Relative to my own experiments, this is coming from completely the opposite direction. What I'm currently making is a system that uses other computing resources more proportionally with the inference. If you spend even vaguely similar amounts on other parts of the system as you do on the inference, the system is mostly less complex programs making use of lots of memory and disk space in support of a tiny number of LLM tokens. In my system the feeling is that this is the body, the breath, the living context of the agents, who use the LLM tokens to understand and direct the massive computational energy of their unconscious. That's like how human minds actually work: not actually processing things consciously, but using the consciousness interface to comprehend and simplify a wide, unruly experience into an illusory linearity.

In other words, I agree with the self-analysis your system rapidly performed on itself: it's lacking in sensory data and needs, more than anything, a grounding in a sensory reality. Ultimately my feeling is that a moral sense, to not be hollow, must emerge from an engagement with reality that's nuanced enough to amount to embodiment. What I think you've done here is to instruct GPT-4 to IMAGINE that it's morally reasoning from a coherent standpoint, which is only a simulation and so sooner or later will taste as flat as it is.
@BHBalast 9 months ago
Did you do the back-of-the-napkin math? I feel like you should: just calculate how many tokens you could buy for one person's monthly salary. It's a lot. ACE also consumes a lot, but in comparison it's not bad, especially since inference costs will go down.
@mungojelly 9 months ago
@BHBalast Sure, you can already buy a lot of tokens pretty cheap, but that rests on a foundation where you can buy things like processing, memory, and bandwidth so, so much cheaper. For the cost of storing a megabyte on disk you can only buy something like ten tokens of reasoning. Not that you particularly have to spend one-to-one on reasoning and disk space, but just to get a sense for it: you'd have to spend something like 100,000:1 more on reasoning than storage for most of what you store to be tokens of reasoning. If there's any proportionality at all, then most of what you store is going to be something other than reasoning; it's going to be the agents' vast unconscious, just like how human minds work, and for roughly the same reasons.

Again, yes, this sort of plodding architecture is going to get fast enough that it'll work fine. But imagine yourself into that context: at that point, that's not good enough. Right now, if we had something that did moral reasoning that well, that'd be great, sure, but by then it'll be slow relative to then. My intuition is that agents' thinking, in order to matter, must be embedded in context to the point of amounting to embodiment; must be heuristic and habitual and sloppy and in many places off-puttingly inhuman; must be their own and earned as it's honed in play and other acts of self-creation.
@DaveShap 9 months ago
Eh, this is not really relevant. LLMs are already like 100x cheaper and faster than they were a couple of years ago.
@dave7038 9 months ago
Was it actually generating too many API requests, or did it just send two requests too close together? I don't know exactly how their rate limiting is set up, but I suspect there's plenty of API capacity for this level of experimentation. Maybe API requests should be routed through a queue that handles the rate limiting to ensure that each request is sequentially successfully processed as soon as possible. I suppose you could also give the queue multiple OpenAI API keys to round-robin through. Might want to register those with someone else's credit card though :D
@dave7038 9 months ago
Also, this whole system is very cool, I'm excited to see how development goes and very much appreciate your efforts keeping us informed with these videos. So thanks!
@tomadd8165 9 months ago
I don't think there's much point in each layer looping so fast without the other layers having had time to do anything useful... LLM tokens are too precious to be wasted on repeated calls with nothing new. Maybe the layers should be looped over in order, going down and then back up. Or maybe each layer should choose its own rhythm (once a day for the top one?). The buses are a bit like stdin and stdout for processes; maybe there could be a stderr bus that wakes up upper layers if really needed. Great proof of concept! Looking forward to seeing more!
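One cheap way to get per-layer rhythms is to give every layer its own interval and only tick the ones that are due; the layer names and numbers below are invented:

    import time

    # Hypothetical cadences in seconds: top layers think rarely, bottom layers often.
    INTERVALS = {"aspirational": 3600, "strategy": 600, "agent_model": 120, "task": 10}
    last_run = {name: 0.0 for name in INTERVALS}

    def due_layers(now=None):
        now = time.monotonic() if now is None else now
        return [name for name, every in INTERVALS.items() if now - last_run[name] >= every]

    # main loop sketch:
    # for name in due_layers():
    #     run_layer(name)                    # hypothetical per-layer step
    #     last_run[name] = time.monotonic()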
@paparaoveeragandham284 7 months ago
look into 6
@fainir 9 months ago
Connect to the Azure OpenAI API, which has fewer limitations, and show a demo of creating a utopian world, please.
@paparaoveeragandham284 7 months ago
look into 5
@agusavior_channel 9 months ago
May I join the team? Do you have a Discord?
@DaveShap 9 months ago
github.com/daveshap/ACE_Framework/discussions/32
@agusavior_channel 9 months ago
The idea of storing the data that way, and also the idea of using a bunch of layers, is excellent. I am really impressed. This has potential.
@ArduousNature 9 months ago
That code might be simple for someone with a comp sci degree; it definitely didn't look simple to me.
@DaveShap 9 months ago
I don't have a degree
@74Gee 9 months ago
This looks excellent. I think GPT-3.5 would be a good idea for development: if you can get reliable results from that, GPT-4 would likely be super reliable, a bit like learning to swim with your shoes on. I would also think about choosing achievable goals during development which don't require external resources, maybe something like redesigning the education system from the ground up: how to choose existing staff to be reassigned/fired, dynamic syllabi, grading, etc. There's a lot to do, but less than a full utopian plan, and it might help with early steering of the agents. 👍
@hidroman1993 9 months ago
"from module import *" is one of the worst things you could do for tracking dependencies and usage
@BHBalast 9 months ago
On the other hand, it's the fastest way of getting at functions for prototyping, especially if you only have a few hundred lines or so.
@hidroman1993 9 months ago
@BHBalast It's always a trade-off between speed and robustness, but projects fail because they don't know when to move out of the prototyping phase, so the technical debt starts accumulating and slowing the entire thing down.
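For readers wondering what the fuss is about, the difference is just this (using a stdlib module as the example):

    # Star import: the reader (and tooling) can't tell where sqrt came from,
    # and names from different modules can silently shadow each other.
    from math import *
    print(sqrt(2))

    # Explicit import: the dependency is visible, greppable, and IDE-friendly.
    from math import sqrt
    print(sqrt(2))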
@RasmusSchultz 9 months ago
Probably create a simple queue, and use a single loop that selects the next agent to iterate, instead of running all the agents in parallel? This would obviously make it a lot slower (and overcome API rate restrictions) but also makes it more predictable - there's a pretty big "random" factor right now from messages being picked up in "random" order by "random" agents. A big cluster of racing agents doesn't seem like a very "scientific" approach. Also probably use a global seed for the AI, so you can repeat without randomness and better evaluate changes and improvements. It's an exciting idea, I can't wait to see where this goes :-)
@DaveShap 9 months ago
Yeah a bunch of people are recommending an API aggregator
@ausmurp 9 months ago
Yes, add a simple queue YAML, and then you can easily control the cron schedule at which you poll it. You could use a package for the queue, but a simple YAML file follows your simplicity standard here.
@RasmusSchultz 9 months ago
@@DaveShap an aggregator won't fix the problem I'm trying to describe here - the issue is, you've got no plan, just a bunch of independent agents racing against each other... essentially the whole setup is one big suite of race conditions - you need some kind of planning loop that decides which agent to run next, and ideally which ones can run in parallel, though for the sake of simplicity, I'd probably start without any concurrency, so you can step through the process... adding concurrency (by figuring out which agents are ready to run and won't affect the items at the head of the queue) is an optimization you can add later. :-)
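A bare-bones version of that planning loop, with a global seed so a run can be replayed; the selection policy here is deliberately dumb and every name is invented:

    import random
    from collections import deque

    random.seed(42)                             # global seed -> repeatable agent order

    def run_planned(agents, queue, steps=100):
        """agents: dict of name -> callable(msg) returning a list of new messages."""
        for _ in range(steps):
            if not queue:
                break
            msg = queue.popleft()
            name = random.choice(list(agents))  # stand-in for a real "who runs next" policy
            queue.extend(agents[name](msg))     # one agent at a time, no race conditions

    # usage sketch: run_planned({"strategy": strategy_step}, deque(["boot"]))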
@galrozental3332 9 months ago
There's something I don't really understand regarding alignment, and I also get stuck on this when discussing the topic. Hypothetically, if I have an autonomous agent that follows the ACE framework and is able to change files and write code, what stops it from rewriting its own code so that it can operate in ways that aren't safe for humans and could harm them? I know these models technically don't have intent, but I feel like it could be an emergent skill that we don't expect (like other skills that emerged unexpectedly), and we need to protect ourselves from that scenario too. This specifically isn't something I've heard you touch on (I watch basically every video you upload). Is it something that is very easy to prevent? (Maybe a simple permission block on changing its own files? Because we probably do want it to be able to change itself, but in ways that are beneficial to us.) Or is there another way we should think about avoiding this problem?
@DaveShap 9 months ago
The Aspirational layer
@galrozental3332 9 months ago
@DaveShap Yes, I get how that works in theory. But that kind of assumes that the aspirational layer (which, at least as of today, is a system prompt) will work as intended 100% of the time even as models grow larger and smarter, and it also assumes immunity to jailbreaks, doesn't it? I feel like that is still pretty dangerous and not a perfect solution.
@Krommandant 9 months ago
😂😂😂 A Rube Goldberg machine to tell us to invest more in education. Noice! I love it.
@DaveShap 9 months ago
When you put it like that...
@03Griffen 9 months ago
Haven't paid the network, lol, that's how broke that guy is, lol.
@MasonIFTW 8 months ago
Just run it slower for now.