NEAT FlapPyBi/o

41,617 views

Michael Iuzzolino


Comments: 57
@jerryjohnthomas4908 3 years ago
This is very awesome; the very fact that this is from 4 years ago is surreal.
@brennanmcconnell5884 8 years ago
This is a very interesting concept. I'd love to learn more!
@sankalpbhamare3759 5 years ago
I believe this video to be interesting: kzbin.info/www/bejne/emG7dZ-arKiNaqc
@VibhavBobade 7 years ago
I love you guys for implementing NEAT in Python. I am trying to do the same as MARI/O did, but the Lua code was so confusing to me. You have given such a good explanation of NEAT; couldn't have done better than this.
@Pyroseza 6 years ago
awesome work! love seeing this kind of implementation
@jabrilsdev 7 years ago
This is a great presentation guys thank you! My only wish is that you guys added some performance graphs. I plan to do a similar project by the end of the year using a self-built engine. Thanks again for the presentation!
@mliuzzolino 7 years ago
Thank you! Agreed - we also wished we could have added some analysis elements, such as performance graphs. Some network visualizations would have also been fantastic (especially for parameter tuning). Will be very interested to see what your results are if you do a similar project - please let us know!
@jabrilsdev 7 years ago
Are you guys working on any machine learning since then? Also, you said "we were ultimately not able to fully solve random pipes using NEAT"; how did you define "fully solving random pipes"? From the demo that I watched, I think you guys did stellar! I am assuming that your objective was to get your NN to never lose?
@mliuzzolino 7 years ago
This was just for a semester project in a grad ML course at CU Boulder. Since then we've moved on from NEAT, but my interests and research directions have swung me back around towards ML and neural networks. (Playing around with DCNs and comp-vis right now, actually!) Thank you for your praise of our meager algorithm, haha. And you're correct: our ideal would be for the NN to play indefinitely without losing in the context of completely random pipes. 'Deterministic' pipes were relatively easy to solve, and the scores reached into the hundreds or thousands when we let it run. But the dream of completely stochastic pipes was never achieved. In my opinion, it did not achieve this because of 1) our fitness function and 2) sub-optimal GA and network parameters. There's also the question of scaling the NN's input data, as correctly scaling the input feed for SVMs and NNs is critical for performance. A lot of cool stuff to play with and think about, surely! Love your channels, by the way! Seeing your implementation of agents in Unity is really cool. I've been wanting to play with Unity and use it as a testbed for developing deep RL agents for a while, but haven't taken the leap yet. Have you explored deep reinforcement learning, and have you seen this: kzbin.info/www/bejne/jnu0goCKZraEn80 Super amazing stuff!
@jabrilsdev 7 years ago
Hey, thanks for the compliment, & yeah, that makes sense as for your objective. All of the culprits you've described are exactly what gets me excited to begin working on some research projects here shortly. I've just been writing my libraries in C# & doing minor dabbling, & am already super afraid of encoding input data (as you've mentioned), initializing (weights, bias & activation), & finding a worthy crossover. For which I must give you guys super credit: your crossover solution was very creative & for sure a source of inspiration! As for deep RL, no I have not yet; to be honest, I am placing my wager on some form of NEAT being the real solution to deep RL, because initializing is such a B, I tell you. It's still an open research area, so figuring out how to initialize your deep NN might take you some time, & in my personal opinion, this is the not-so-fun part. But when I convert my science channel to be a bit more CS/ML focused I will be able to get back to you with a more confident answer. :D & yeah, I just came across UEBS a few days ago myself! I even ran a little test demo the other day twitter.com/SEFDStuff/status/857196937438789632 but my takeaway is that sending data to the GPU for more computing power is currently such a pain as well haha. Super hats off to BrilliantGameStudios for their commitment & innovation. 👍
@2000jmartins 5 years ago
@@mliuzzolino I'm pretty sure a fixed topology would have done it if you had just added the bird's y velocity to the inputs. If you think about it, it actually is a really important thing to factor in: in the same position, a bird that is going down and about to hit the lower pipe is supposed to jump, but if it is already going up, it's probably a bad idea to jump and hit the top pipe. Also, you could reduce the complexity of the networks by not giving them inputs like the bird's x position, which will always be the same; in reality, you just need to give the x and y position of the very next pipe, because when you're playing you're not paying that much attention to the next few pipes, and the bird should have enough space to correct its height in between pipes. And the distance covered should be enough of a fitness function: just reduce them all equally and then square them to speed up the process. I know it's an old project and I don't mean to be annoying or anything, just wanted to leave this here.
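The fitness shaping suggested at the end of that comment ("reduce them all equally and then square them") can be sketched as follows; the example scores below are illustrative, not from the project:

```python
def shaped_fitness(distances):
    """Shift every bird's distance down by the population minimum,
    then square, widening the gap between good and bad birds."""
    baseline = min(distances)
    return [(d - baseline) ** 2 for d in distances]

# For raw distances [120, 150, 300] this yields [0, 900, 32400].
scores = shaped_fitness([120.0, 150.0, 300.0])
```

Squaring after the uniform shift concentrates selection pressure on the best performers, which is the speed-up the comment is after.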
@anuraganand6053 3 years ago
I was also trying to use NEAT for a Pac-Man-like game with random points, but it wouldn't converge for random points.
@neilslater8223 7 years ago
Probably the major competitor to NEAT here would be reinforcement learning approach, such as Q-learning. I'd be interested to know how well Q-learning does in comparison, given the same input, output and a small neural network to calculate the action values. You might even be able to get rid of the network and just have a rough table of state/action values, if you approximate the positions. Here's an example: kzbin.info/www/bejne/h5jKq52Em6d-ors - this appears to solve the random pipes problem effectively with a single bird after only a few dozen attempts. And here's a NEAT implementation that does similarly well: kzbin.info/www/bejne/gmfFk3mdn9CkgZI . . . not sure what differences in the code make it work. It might be something minor. I think NEAT and RL are on a par for this problem. I feel that RL copes with larger-scale problems better, and maybe NEAT has some problems it does better than RL on. Maybe they can even be combined (NEAT changes the neural network structure, RL learns in the lifetime of the agent)
@mliuzzolino 7 years ago
Hey Neil, sorry I missed the approval of your comment - I'm not even sure why it flagged your comment as needing my approval; presumably it was the YouTube links. Thanks for those - I'll check them out. I agree that an RL approach is a very reasonable, and likely superior, approach for this problem. For this particular toy problem, you could probably replace the NN with a lookup table; however, for 'real-world' problems, I don't think it would be wise. In my very puerile knowledge of RL, it seems that most advances have been made by moving away from biased, insufficient lookup tables to HMMs to neural networks in terms of addressing the credit assignment problem in the decision-making models of RL. That being said, I haven't taken a serious look at RL yet and could be incorrect. (Definitely planning on fixing my lack of knowledge in RL very soon.) What I do know is that GAs like NEAT, simulated annealing, RL, etc. are approaches to solving an unsupervised learning problem. The problem with GAs is that they don't use any structural knowledge of the problem and have to learn everything from scratch, whereas RL models operate within some reasonably well-defined space, giving them a considerable leg up on solving the problem. In the long run, it's difficult to say whether or not this will be a pitfall for RL (it doesn't seem like it, but who knows). I am also quite interested in this idea of combining genetic algorithms with RL. Another significant issue with genetic algorithms, in my opinion, is the 'fitness function.' Heuristics and stabs in the dark are used to construct it, as it's really just an abstraction that doesn't truly exist, even in the evolutionary-biology analog. It seems like a more reasonable approach would be to have a fitness function derived from a model or ensemble of models that capture the dynamics of the environment you're operating in - but maybe this is moving into the land of RL.
My background is in neuroscience and molecular/cell biology, and I had keen interests in evolutionary bio. There is something to be said about the raw power of the evolutionary process. That we exist, and with the intellectual capabilities we have, is a proof of concept that physical processes can and do give rise to "general intelligence." Then, I wonder, why can it not simply be simulated given sufficient algorithms, computational power, etc.? I believe it can, and I think genetic algorithms, in some capacity, must be leveraged to achieve the sort of 'human'-esque intelligence some camps of the AI community seek to replicate. But then again, maybe we can approximate it well enough with our heuristics and intuitions, although I'm highly skeptical about that. I forget to check YouTube often. If you'd want to chat and share some ideas, papers, etc., feel free to email me at mliuzzolino at gmail.com!
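The table-based alternative Neil sketches (approximate the positions, keep a rough table of state/action values) might look like this minimal sketch; the state encoding, action names, and hyperparameters here are illustrative guesses, not taken from any of the linked videos:

```python
import random
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.99          # learning rate, discount factor
ACTIONS = ("flap", "glide")
Q = defaultdict(float)            # (state, action) -> estimated value

def discretize(dx, dy, bucket=20):
    """Coarsen the bird-to-pipe offsets into grid cells so the
    table stays small, as the comment suggests."""
    return (dx // bucket, dy // bucket)

def choose_action(state, epsilon=0.1):
    """Epsilon-greedy policy over the Q-table."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard one-step Q-learning backup."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

Unlike NEAT, this learns within a single bird's lifetime rather than across generations, which is why the linked demo can succeed "after only a few dozen attempts."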
@omagadbrohellna 7 years ago
You helped me to get the concept, ty so much
@magzhanabdibayev3818 7 years ago
Great work, guys!
@skaiyeung7645 7 years ago
Thank you for the great presentation about NEAT. What's the difference between the two implementations in the source code?
@mliuzzolino 7 years ago
No big differences. Brennan and I were both very interested in trying to fully understand what was going on with NEAT, so we tackled the implementations independently to optimize our understanding.
@peterbonnema8913 6 years ago
A point about speciation: given a population, your definition of compatibility, and a fixed threshold for when two genomes are compatible, there are probably multiple ways you could divide up your population into species. In particular, the number of species might even differ between different ways of dividing up your population. This fact influences the diversity of your population. I can imagine that maximizing the number of species every generation (perhaps limited by an artificial cap) would increase diversity. Did you guys account for this? Does the original paper on NEAT account for this? Or any follow-up papers, perhaps? Is my guess even correct?
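For context, the original NEAT paper assigns genomes greedily: each genome joins the first existing species whose representative it is compatible with, otherwise it founds a new one, so the resulting division (and the species count) does depend on iteration order, as the comment suspects. A minimal sketch, with `distance` standing in for the compatibility measure:

```python
def speciate(genomes, distance, threshold):
    """Greedy speciation in the style of the original NEAT paper:
    each genome joins the first species whose representative it is
    compatible with, otherwise it starts a new species."""
    species = []  # list of lists; element 0 of each is the representative
    for g in genomes:
        for s in species:
            if distance(g, s[0]) <= threshold:
                s.append(g)
                break
        else:
            species.append([g])
    return species
```

Reordering `genomes` can change both the grouping and the number of species this produces, which is exactly the degree of freedom the question points at.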
@petermonev352 3 years ago
I know that you guys probably won't respond, but I've got to take a chance. 13:00 Just to clarify: after crossover and repopulation, do we apply a chance to mutate to every single genome (even the newly crossed-over ones)? The champion would be excluded, obviously, but would any other genome be excluded from this process? Also, is there a 50-50 chance for a weight OR a structural mutation to occur, or do we just apply all the random chances of both, regardless of whether either had a successful "gamble"? Would there be any parameter for how much a weight is adjusted? Thanks in advance, but since this video was made back in 2016 I doubt the creators still remember that this video exists.
@mliuzzolino 3 years ago
Hi Peter. We applied a chance to mutate to every genome aside from the champion. See Brennan's code: github.com/Brennan-M/FlappyBird_NEAT_GeneticAlgorithm/blob/60a4fdef5ec28456048efb807cd776a991067fbb/NeuralEvolution/species.py#L107-L118 Regarding weight vs. structural mutation, we first randomly mutated the weight, then we randomly mutated the structure; so both have the chance to happen, each with the probabilities described in the video. That code segment is here: github.com/Brennan-M/FlappyBird_NEAT_GeneticAlgorithm/blob/60a4fdef5ec28456048efb807cd776a991067fbb/NeuralEvolution/network.py#L166 The weight adjustment amount, given a mutation, is uniformly sampled from either [-0.1, +0.1] or [-2, +2] (see the previous code link for more details). There are likely many foolish things that we did in this project, as this was our very first exposure to ML (and my first exposure to programming; I came from a neuroscience/biology background). I'd take our suggestions with a grain of salt and experiment with different variations! For example, if we were to do this again, I likely wouldn't allow for both a weight and structural change on the same update step. Good luck! -Michael
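As a rough illustration of the order described in that reply (weight perturbation first, then structural mutation, each gated by its own probability), here is a sketch; the genome representation, probabilities, and the `add_node`/`add_connection` operators are simplified placeholders, not the project's actual code:

```python
import random

def add_node(genome):
    """Placeholder structural operator (real NEAT would split an
    existing connection and assign innovation numbers)."""
    genome.setdefault("nodes", []).append("hidden")

def add_connection(genome):
    """Placeholder structural operator (real NEAT would link two
    previously unconnected nodes)."""
    genome["connections"].append({"weight": random.uniform(-1, 1)})

def mutate(genome, p_weight=0.8, p_node=0.03, p_conn=0.05):
    """Weight mutation first, then structural mutation; both can
    fire on the same genome in one pass (which the reply notes the
    authors would avoid in hindsight)."""
    if random.random() < p_weight:
        for conn in genome["connections"]:
            # Small or large uniform perturbation, as in the reply.
            step = random.choice([0.1, 2.0])
            conn["weight"] += random.uniform(-step, step)
    if random.random() < p_node:
        add_node(genome)
    if random.random() < p_conn:
        add_connection(genome)
```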
@petermonev352 3 years ago
@@mliuzzolino Holy Shit! Thank you so much! I didn't expect you guys to respond at all, but you did within a day! Thank you!!!!!
@petermonev352 3 years ago
@@mliuzzolino Oh, and one more question. When you init the population for the first time, do you start off with absolutely no connections and then run the probability to add a random node/connection, or do you start off with all of the input nodes directly connected to all of the output nodes with random weights?
@fisslewine1222 7 years ago
I read some of the GA and TWEANN material, but how do you set up your target goal? Is there a way to train specific networks before they merge, so that one focuses on speed of movement while the other focuses on evading objects?
@mliuzzolino 7 years ago
The target (aka the objective function, or in GAs, the fitness function) is a bit difficult to set up. We arrived at ours heuristically; that is, you generally will handcraft the function for your problem space. In one of our experiments, we reasoned that fitness is proportional to distance traveled and inversely proportional to the number of flaps (i.e., energy expenditure). Some heuristics may exist that generally apply and might bootstrap the problem, but I'm not aware of them, as I haven't played around enough in this problem space.
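That reasoning (proportional to distance, inversely proportional to flaps) reduces to something like the following sketch; the +1 guard and the absence of scaling constants are illustrative choices, not the project's exact function:

```python
def fitness(distance, flaps):
    """Reward distance traveled, penalize energy spent flapping.
    The +1 avoids division by zero for a bird that never flaps."""
    return distance / (flaps + 1)
```

A bird covering 100 units with 4 flaps scores 20, while one covering the same distance with 9 flaps scores 10, so economical gliding is rewarded.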
@wli2718 5 years ago
I downloaded the source code, but it doesn't run. I'm getting this error: TypeError: main() missing 1 required positional argument: 'neural_network'
@cutmecutme 4 years ago
Interesting! Well done.
@MrTomaskarella 7 years ago
Wonderful video! I would be grateful for subtitles. Some of you speak pretty fast.
@pulkitsharma828 7 years ago
Can you publish the code base to replicate Flappy Bird? And how do you calculate/analyse inputs like the pipes? Are you using computer vision, or does a Python program just provide this data?
@mliuzzolino 7 years ago
Hi Pulkit. All code is available here: github.com/michael-iuzzolino/FlapPyBio The code for the game can be found in the FlapPyBird folder. We are not using CNNs/RNNs/Q-learning or anything like that for comp-vis; we are simply taking the x, y locations of the pipes as inputs to the network, and normalizing those inputs, of course.
@pulkitsharma828 7 years ago
Ok, cool. So if I want to implement this on another game (say football) for which I do not have coordinates, do you think image processing is the best way to get input data?
@mliuzzolino 7 years ago
In that case, if you wanted to use NEAT, the way I would first approach it is as follows. If each frame of the football game is 256 x 256 pixels, for example, then the input layer of the neural network should have 65536 input nodes; create a 1-1 mapping between the pixels of the frame and the input layer. Feedforward the network for the frame, generate an output, (throw the football, move left, move right, move up, move down, etc.), and process this for every frame of the game. Make the fitness function a function of yardage, touchdowns, etc. Choose the top neural networks, mutate / crossover, and go again. There are many ways to optimize the above approach, as I'm sure you can already see how computationally complex it is. For some context, the NN for FlappyBird has ~8 input nodes and it still takes a long time to train. As others have stated in the GA community, for tasks like this your best bet is probably to utilize something other than genetic algorithms. I'd point you in the direction of deep reinforcement learning. However, if your aim is simply to understand GA's, then I'd go with a simpler task than a football game - there are so many factors and a lot of complexity; especially if you have to approach it from an image processing aspect rather than having direct access to the 'sprites.'
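The 1-to-1 pixel mapping described above amounts to flattening the frame into the network's input vector; a minimal sketch (grayscale intensities in the 0-255 range are an assumption):

```python
def frame_to_inputs(frame):
    """Flatten a 2-D grid of pixel intensities into a network
    input vector, scaling 0-255 values into [0, 1]."""
    return [px / 255.0 for row in frame for px in row]
```

A 256x256 frame yields 256 * 256 = 65536 inputs, matching the node count mentioned above, which is why the reply warns about computational cost.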
@pulkitsharma828 7 years ago
Thanks Michael. I have started with the GA on smaller games. But since the ultimate goal is football, I have to try something with image processing as well. I am thinking of "condensing" the pixels down to an appropriate input size to counter the high pixel count.
@mliuzzolino 7 years ago
You're welcome. That's awesome and sounds fun! Yes, condensing the pixels is a good idea to reduce the dimensionality; maybe something along the lines of a max-pooling layer of a CNN. When you finish the project, please let me know how it goes! Best of luck. :)
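The pixel-condensing idea can be done without any CNN library; here is a sketch of a max-pooling-style downsample (the block size `k` is an illustrative choice):

```python
def max_pool(frame, k=2):
    """Downsample a 2-D grid by taking the max over k-by-k blocks,
    similar in spirit to a CNN max-pooling layer. Assumes the
    frame dimensions are divisible by k."""
    h, w = len(frame), len(frame[0])
    return [
        [max(frame[i + di][j + dj] for di in range(k) for dj in range(k))
         for j in range(0, w, k)]
        for i in range(0, h, k)
    ]
```

Each pass with k=2 quarters the input count, so two passes shrink a 256x256 frame to 64x64 (4096 inputs).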
@laurentnicolas7 4 years ago
I loved it.
@yousorooo 7 years ago
Is there a chance of overfitting due to the chance of the neural net learning the underlying algorithm used by the pseudorandom number generator?
@EikeSchwass 6 years ago
I would assume very low
@maxwellhunt3732 5 years ago
Overfitting doesn't really make sense in the context of reinforcement learning, since there is no training data to memorize. The goal of reinforcement learning is simply to optimize the fitness function, and if it does that, the goal is accomplished. Also, it's way easier for NEAT to learn how to play Flappy Bird than to learn the complex algorithm behind the random number generator.
@emilterman6924 7 years ago
Very nice explanation. But I can't understand why you're normalizing the inputs like that. Why, for input1 and input2, are the normalized inputs normIn1 and normIn2 calculated this way?
normIn1 = input1 / (input1 + input2)
normIn2 = input2 / (input1 + input2)
It doesn't make any sense to me... Can you please explain that? Wouldn't it be more logical to normalize them like this:
normIn1 = input1 / maxInput1Val
@mliuzzolino 7 years ago
I haven't looked at this code since last year, but I am guessing you are 100% correct. When we did this project, I was only 4 months or so into learning to code - I came into CS from a neuroscience background. In other words, it is highly likely I made many stupid, trivial mistakes. :P
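For anyone comparing the two schemes discussed above: the sum-based version couples the two inputs (they always add to 1, so one determines the other), while max-scaling keeps them independent. A sketch, with the maximum values as illustrative parameters:

```python
def normalize_by_sum(a, b):
    """The project's scheme: divide each input by the pair's sum.
    The outputs always add to 1, so they are not independent."""
    total = a + b
    return a / total, b / total

def normalize_by_max(a, b, a_max, b_max):
    """The commenter's suggestion: scale each input by its own
    known maximum, keeping the two signals independent."""
    return a / a_max, b / b_max
```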
@jeroenkodde2438 4 years ago
Thanks for the explanation. Maybe you should consider passing the y-component of the bird as an input. I have seen it improve performance in other videos.
@rotabssab5568 7 years ago
Wanted to try it, but wow, you used a lot of external modules. I was unable to install them because I got errors. I thought this was a bit more vanilla. ^^
@mliuzzolino 7 years ago
The module that is likely causing a problem is Pygame and whatever its dependencies are. We also had some friction getting it going in the beginning, and it takes some digging on StackOverflow, but it's not so bad.
@rotabssab5568 7 years ago
Ok, I'll try it again later. Almost forgot to thank you for this wonderful presentation :)
@mr.hitham1171 7 years ago
Is that a fan? Great video, but there is some background noise.
@zyberal3226 4 years ago
interesting
@mkDaniel 5 years ago
The letterboxing doesn't look good on a 16:9 screen.
@airforcedrone7066 7 years ago
this is a g8 video lol vry cool topic that BLOWS MY MIND! hahahaha anyway i play the trombone haha hope to talk to you soon
@bitp 7 years ago
Hi, just finished my NEAT lib in C#: github.com/BitPhinix/BitNeat Feedback is needed ^^
@emilterman6924 7 years ago
If I'm not wrong, your NEAT doesn't support recurrent cycles; I checked how you calculate the result.
@bitp 7 years ago
Thanks for the feedback! It doesn't. I will try to implement it in the next few days.
@mariokohler4916 5 years ago
20th century research, 18th century microphone :-(