Nuts and Bolts: Modular AI From the Ground Up

46,615 views

GDC 2025

a day ago

Comments: 43
@monishdhayalan2552 • a year ago
this made my life modular
@dasaggropop1244 • 6 years ago
got me at modular approach
@HueyTheDoctor • 6 years ago
You can tell when the speaker is passionate about what they're saying because they'll take two steps away from the podium and shake their finger at the audience while they speak. I mean when they're delivering the key point they wanna emphasize in the talk. Walk away from podium, shake finger at audience.
@SirNightmareFuel • 6 years ago
4:14, for instance ^^
@dineshmondal2973 • 2 years ago
J
@Alic4444 • 2 years ago
Emergent behavior baybee
@maxhuk • 2 years ago
Amazing talks! Thank you for sharing 🙏
@Thecommet • 6 years ago
Take a shot every time you hear the word "modular"
@peterac • 6 years ago
Most of this boils down to: it is better to organize your code than not. Since that should be intuitive to even non-programmers, I don't understand the audience for this presentation. The presenters put forward (more or less) Object Oriented design and AI-specific OO patterns, but they do not discuss OO's issues or alternate approaches. Since they don't much discuss other options, what they look like, and their pros/cons, the whole thing sounds like justification of presumptions that no one is willing or able to question.

OO is not the answer to every question. It is an approach with some wisdom behind it that needs to be paired with wisdom on the part of the programmer to be applied correctly. And that includes _not_ using OO in some situations. No tool is universal. OO might be generally the best approach for most game AI teams/programmers. Just don't choose it blindly. Every abstraction, every layer, every API, and every class reduces the reader's ability to understand exactly what can happen and in what order. And the notion that you can push something off into a module and then never worry about how it is implemented... well... that rarely works out like you want.
@peterac • 6 years ago
Heh, yeah, my programming sensibilities may not work on the enterprise scale. But then, one may argue that nothing truly works at that scale. Or maybe that things work in spite of themselves? The Data Oriented crowd gets some stuff right, even though they are IMO overly religious about it. I'm not even sure they understand _why_ they are right when they are. Still, in the projects I've worked on, that approach did much better at eliminating the Accidental Complexity that Chris Dragert spoke of. It's all about listening to your code, asking it what it wants to be, and then liberating the true expression of it out of the chrysalis it starts in.
@thehemi69 • 6 years ago
I want talks from the trenches. The first speaker may have gone through the mud, but the talk read like those OO books and lectures that got us into this OO disaster in the first place. All the nice theory that we all know breaks down in the real world at scale. Things are messy. I've learned to run away from anyone who is preaching a silver bullet. You can't quote "No Silver Bullet" ("Out of the Tar Pit" is far more relevant IMHO), then pitch a silver bullet. I value your experience - a good talk raises more questions than it answers and admits flaws so you start a conversation, rather than preaching. Just as in life, the older we get, the more we realize we don't understand.
@Clairvoyant81 • 6 years ago
I agree that most of this boils down to "good organization". Basically, it's about writing code that does one small thing very well. This applies to everything you write, so yeah... nothing revolutionary about it... but it's also a concept that isn't restricted to OOP, and it's something you can always improve on.

I disagree that the talk focuses all that much on OO, though. Yes, the second guy showed a few interfaces here and there... but honestly, no matter what paradigm you use, whenever you try to build a larger system, you will need to define how the different bits should interact in order to be able to focus on the solution of one problem and switch those pieces out for different ones, if need be. For example: the third guy focuses on separating data acquisition (sensing), reasoning (deciding) and acting. That separation makes sense to me. When I want to think about how my character should pick his next action, I don't want to be stuck in "how does the character know that that door exists and where it is?" or "what does the character need to do exactly to open that door?". I want to focus on what makes the character want to open that door. The more I can focus on that, the more likely it is that my solution works and doesn't lead to the character opening doors without a reason, for example.

Also, yes, pushing "something off into a module and then never worry about how it is implemented" rarely works completely... but taking that as a reason not to do it at all, and taking it to the extreme, would mean you could never call any previously defined method or function anywhere, ever. At its ultimate conclusion that means writing your own compiler for your own processor. As soon as you call a method to compute a dot product, for example, you're pushing something off somewhere else without worrying about how it works. So... generalizing this isn't a sensible thing to do. At some point you will have to trust your (or somebody else's) code. The benefit of that abstraction is that you suddenly can focus on the problem you're actually trying to solve. IMO, at some point, you will have to trust the code you're using.
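To picture the sense/decide/act split the comment above describes, here is a minimal C++ sketch. It is not code from the talk; the interface names (Sensor, Reasoner, Actuator, Agent) and the door/percept example are hypothetical, chosen only to illustrate how each layer can be reasoned about, and swapped out, on its own.

```cpp
// Hypothetical sketch of a sense/decide/act separation; names are invented.
#include <optional>
#include <string>
#include <vector>

struct Percept { std::string kind; float distance; };   // e.g. a "door", 2.5 units away
struct Action  { std::string verb; std::string target; };

// Sensing: how does the character know the door exists and where it is?
class Sensor {
public:
    virtual ~Sensor() = default;
    virtual std::vector<Percept> sense() const = 0;
};

// Deciding: what makes the character want to open that door?
class Reasoner {
public:
    virtual ~Reasoner() = default;
    virtual std::optional<Action> decide(const std::vector<Percept>& percepts) const = 0;
};

// Acting: what exactly does the character do to open it?
class Actuator {
public:
    virtual ~Actuator() = default;
    virtual void perform(const Action& action) = 0;
};

// The agent only wires the three layers together each tick; any layer can be
// replaced without touching the other two.
class Agent {
public:
    Agent(Sensor& s, Reasoner& r, Actuator& a) : sensor_(s), reasoner_(r), actuator_(a) {}
    void tick() {
        const auto percepts = sensor_.sense();
        if (const auto action = reasoner_.decide(percepts)) {
            actuator_.perform(*action);
        }
    }
private:
    Sensor& sensor_;
    Reasoner& reasoner_;
    Actuator& actuator_;
};
```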
@peterac • 6 years ago
I am not advocating anti-modularization, if there even is such a thing. Of course that thinking is flawed to the point of insanity. I try to stay away from all extremes, hence my comments disparaging OO (I see it as an extreme). I do not know why you read my position as so broken, but so far as I know, it isn't.

In OO, you sort code by subject, whether that makes any practical sense or not. All modules appear to the reader as entirely reusable, except that is misleading. In order to really understand a module, you have to know how it is used, which means you still have to understand how the whole app works in order to understand any piece. Every public method is an implicit invitation to be called whenever and however, unless you start each one with a litany of assert()s. Of course, you may claim this is a pessimistic view, and you're not entirely wrong. After years of seeing unrealized presumptions in my "reusable" code cause crashes, though, I am of the opinion that reusability is a better hypothesis than theory. Reusable code is indeed a requirement of development, but I prefer to limit its accessibility as much as possible.

Maybe you haven't been scorched by OO. If so, good for you. I hope it continues to serve you well. It is inherently broken based on my experience, and I think there are better ways to attain its goals.
@Clairvoyant81 • 6 years ago
I didn't infer that that was your position. I simply took your statement "the notion that you can push something off into a module and then never worry about how it is implemented... well... that rarely works out like you want" and took it to the extreme to illustrate my point that everyone does that with every line of code they write, and that you absolutely have to do that if you ever want anything done. I wasn't saying that was your position; I was saying that taking it at face value and to the extreme would lead to the insanity I described above. It seems we agree on that.

I don't really understand how you concluded from my statements that I was trying to defend OO. The only mention I made of it was arguing that the ideas presented in the video apply to every way you can write code, not just OOP. However, I'd say that whether a method, function or module seems entirely reusable to the reader doesn't depend on the paradigm you're using, but on how you're designing your solution. That said, of course every method intended for public use should check whether its arguments are valid and handle the case when they're not; preferably, it should give an understandable error. I know that's too idealistic and rarely, if ever, the case. I've been programming long enough to see a lot of really terrible code. But: I've seen that in plenty of languages and paradigms. In my experience, you can produce terrible code no matter what paradigm you choose.

Could you explain further what you mean by "sort code by subject, whether that makes practical sense or not"? Right now I can't really think of a way to sensibly sort code that isn't related to the code's subject one way or another, if I want myself or other people to find what they're looking for in it.
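As an aside on the "every public method should check its arguments and give an understandable error" point above, a minimal sketch of what that might look like at a module's public boundary. The Blackboard class and setFact method are hypothetical names, not anything from the talk or the thread.

```cpp
// Hypothetical example: validate arguments at the public boundary instead of
// relying on a litany of bare asserts deep inside the module.
#include <map>
#include <stdexcept>
#include <string>

class Blackboard {
public:
    // Public entry point: reject bad input with a clear error rather than
    // letting some downstream module crash on it later.
    void setFact(const std::string& key, double value) {
        if (key.empty()) {
            throw std::invalid_argument("Blackboard::setFact: key must not be empty");
        }
        facts_[key] = value;
    }
private:
    std::map<std::string, double> facts_;
};
```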
@NSTGamingChannel • 6 years ago
I was expecting some banjo lol
@vast634 • 3 years ago
Reusability: designing code to be "reused" in some future unknown product apart from the current one is likely to unnecessarily blow up the complexity of the project: making assumptions about a hypothetical use and then adding all kinds of generalized methods. There is no knowing what the future project will require anyway. The code can end up overly complex and unwieldy, and even prevent necessary changes for the current product. It can work well on encapsulated modules such as the physics engine, sound engine, file handling, etc., but is likely to fail on things like gameplay logic and AI.
@jakefisher1638 • 2 years ago
What does his shirt say? It's driving me crazy! 'My ai can pwn your…' Can pwn your what???
@alicem3415 • 2 years ago
The scope is too large here to gain anything from this, I feel. If you understand the concept they're referring to, you already understand it. If you don't understand the concept they're referring to, it is incomprehensible. For example, the factory slide: if you already know what a factory pattern is, you certainly aren't gaining any information from this. If you don't know what a factory pattern is, it doesn't explain it in any depth that could possibly help you.
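For anyone in the second camp the comment describes, here is a bare-bones sketch of the factory pattern in C++. The behavior names (Patrol, Flee) and the BehaviorFactory class are hypothetical illustrations, not taken from the slide.

```cpp
// Minimal factory-pattern sketch: the caller asks for a behavior by name and
// never needs to know which concrete class gets constructed.
#include <functional>
#include <map>
#include <memory>
#include <string>

class Behavior {
public:
    virtual ~Behavior() = default;
    virtual void run() = 0;
};

class Patrol : public Behavior { public: void run() override { /* walk a route */ } };
class Flee   : public Behavior { public: void run() override { /* run away */ } };

class BehaviorFactory {
public:
    using Creator = std::function<std::unique_ptr<Behavior>()>;

    void registerType(const std::string& name, Creator creator) {
        creators_[name] = std::move(creator);
    }
    std::unique_ptr<Behavior> create(const std::string& name) const {
        const auto it = creators_.find(name);
        if (it == creators_.end()) return nullptr;   // unknown name
        return it->second();
    }
private:
    std::map<std::string, Creator> creators_;
};

// Usage sketch: data (e.g. a designer-authored file) can now name behaviors as strings.
//   BehaviorFactory factory;
//   factory.registerType("patrol", [] { return std::make_unique<Patrol>(); });
//   auto behavior = factory.create("patrol");
```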
@thomashaller4876 • 3 years ago
Has anyone found source code or sample projects using this approach?
@JorgeEscobarMX • 6 years ago
I got lost in macros
@aron8999 • 3 years ago
meelee
@TonOfHam • 4 years ago
Mealy
@StarContract • 6 years ago
Ubisoft talks = best talks
@iamisandisnt • 6 years ago
I got two words: TOYS. Oh, that's one word. THE MOVIE.
@michalslusarski • 3 years ago
8:00
@dancingbubbles1126 • 6 years ago
i make comment
@szebike • 6 years ago
i can typing
@sesanti • 6 years ago
I has haxx0r
@johnlime1469 • 5 years ago
I hate how this sh*tty comment is one of the most liked ones in this comment section
@ultimaxkom8728 • 3 years ago
Amazing achievement.
@khlorghaal • 6 years ago
first speaker was terrible, he took forever and described nothing insightful
@TheMrTape • 3 years ago
This is stupid. It's the opposite extreme of duplicating stuff 100 times. Firstly, it's called functions, not modules. Secondly, it doesn't help overview to create a separate 5-line function just to avoid replicating the same line once. You don't save lines or get better overview and control by separating everything that can be separated; you do the opposite. That should only be done once the same thing is done at least a few times, and even then, if it's just one line, maybe you're better off not separating it.

Excessive separation causes spaghetti code, which hurts overview and the ability to read the code, and you don't save lines if you're calling a function in place of one line of code. It should also be considered that merely calling a function wastes 12+ CPU cycles, and while that isn't much, e.g. calling 500 functions that may each call 100 other functions directly or indirectly (a somewhat low number given the proposition), each spending 20 cycles to initialize (preparing variable data and returning), at each 120 Hz tick, wastes 120 million cycles, or 4% of 3 GHz. That isn't taking into account that each function has its own scope, meaning that data which several of them touch won't be in the CPU cache for separate functions to use, and thus has to be loaded from RAM (and stored back to it) for each function that uses it, which is extremely slow, possibly wasting 750 million CPU cycles at 3 GHz waiting for data, or 25% (+4%) just for overhead (in a somewhat worst case, though it could still be way worse).

Furthermore, they're advocating for making "modules" as simple as possible, just so you can have "better overview", regardless of how that hurts gameplay. In their mind, programmer overview comes before the quality of the gameplay, and so it's OK to sacrifice gameplay in turn for simpler code. That's fucked up.
@ifcoltransg2 • a year ago
If you only call a function once, every modern compiler is going to inline it. The compiler copies and pastes the code directly into the function calling it, bypassing the overhead of a call. There's no difference from a performance perspective, so it's entirely about preference and what you consider most readable/maintainable. You might find that's one function, or many. Compilers are pretty smart these days, so I'd recommend humans write code for humans. When you optimise your code, profile it first.
@TheMrTape • a year ago
@ifcoltransg2 "If you only call a function once, every modern compiler is going to inline it" - I was talking with regard to the programmer's ease of overview, not efficiency. Your entire comment ignores all aspects of what I'm actually saying. When I did mention efficiency, I did it with regard to splitting up code into 1000 functions unnecessarily, not with regard to putting one-offs into a function.
@ifcoltransg2 • a year ago
@TheMrTape It wasn't my intention to say this or that way is better for readability. It comes down to preference. I was only saying that your preference matters more than the performance impact, which is about the same whichever way you do it. If you have 1000 extra functions that don't change the logic at all, chances are they're mostly inlineable. If some stragglers don't get figured out properly by the compiler, then sure, that'll add a slight overhead. Someone who chose to extract that many pieces of code into functions surely finds it more legible that way. Multiply the average function length by a thousand, and they're getting an enormous readability improvement (subjectively). On the flip side, if someone prefers bigger functions, they'll see just as much improvement from merging those separate functions together. A slight change to performance isn't so big of a deal in comparison.
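To make the inlining point in this thread concrete, here is a small C++ sketch of my own; the function names are invented. A one-line helper extracted purely for readability is the kind of call an optimizing compiler (e.g. GCC or Clang at -O2) will typically fold into its caller, so the per-call overhead discussed above usually never materializes. Whether it does in a given build is something to confirm with a profiler or by inspecting the generated code rather than assuming either way.

```cpp
// A tiny helper extracted purely for readability. After inlining, the
// generated code is normally the same as if the expression were written
// in place, so the call itself costs nothing extra.
static float distanceSquared(float dx, float dy) {
    return dx * dx + dy * dy;
}

bool inAggroRange(float dx, float dy, float range) {
    // Reads as a sentence; avoids a sqrt by comparing squared distances.
    return distanceSquared(dx, dy) <= range * range;
}
```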