Hello everybody! I have created a Discord Channel for everybody wanting to learn ML-Agents. It's a place where we can help each other out, ask questions, share ideas, and so on. You can join here: discord.gg/wDPWsQT
@Diego0wnz4 жыл бұрын
This is exactly what I need! I've been trying to make a machine learning project for months, but all the other tutorials were outdated or incomplete. Finally someone who explains the concepts beyond the preset tutorials from Unity itself. Thanks so much, man! PS: I still have a few questions, would you mind helping me over mail/voice chat? I think YouTube comments are a bit messy for that.
@sovo944 жыл бұрын
This is one of the best tutorials I have ever come across. Thank you for the hard work.
@MotoerevoKlassikVespa4 жыл бұрын
It's amazing to see how much effort you put into your videos. Keep it up 👍🏻
@OProgramadorReal4 жыл бұрын
What an incredible job, man. Thank you so much for this video and repository!
@mateusgoncalvesmachado13614 жыл бұрын
Awesome Video Sebastian!! I was struggling really hard to find any runnable example for it!
@SebastianSchuchmannAI4 жыл бұрын
Note for everybody watching: Two days ago ML-Agents Release 2 was released. Don't worry, the latest release just contained bug fixes, meaning you can still follow the tutorial without doing anything differently. The naming may be a bit confusing because Release 2 sounds like a big thing but it isn't, they just changed their naming scheme. I would always recommend using the latest release version! Enjoy! :)
@sayvillegames4 жыл бұрын
Hi, I have a question: I'm working on an ML agent and I don't know where to put the shoot function in C#. Please help me.
@animeabsolute71304 жыл бұрын
Thank you, it's useful.
@dannydechesseo13224 жыл бұрын
Thank you so so so much I was trying to find tutorials on this!
@harrivayrynen4 жыл бұрын
Thank you. We really need 1.x ML-Agents material. Older tutorials are often too outdated for the current version and very hard to use with today's releases.
@robertbalassan2 жыл бұрын
Thank you for this ML-Agents playlist, brother. This is what I have been desperately looking for. It's not like I don't understand the documentation, but I am too lazy to read it unless there is an unsolvable problem to solve, so I would rather watch someone go through the full process as you did in this playlist.
@augustolf4 жыл бұрын
Congratulations Sebastian, excellent video! I'm waiting for another video showing more about hyperparameters.
@superLuckyMari4 жыл бұрын
Man, you're saving my life with these videos! You have no idea! Please keep up the great work!
@SebastianSchuchmannAI4 жыл бұрын
Thank you! :)
@ioda0064 жыл бұрын
This is so cool. I'm learning ML separately from games, but this is making want to try Unity!
@RealNikolaus2 жыл бұрын
Very well done video. So much effort was put into this.
@ximecreature4 жыл бұрын
Extremely good video ! Exactly what I was looking for. Keep it up, this is great work !
@kyleme96974 жыл бұрын
It's going to be a hell of a ride... keep it up, Sebastian!!
@kunaljadhav78803 жыл бұрын
Keep up the good work Sebastian!
@mahditarabelsi25354 жыл бұрын
What an incredible job. Keep it up, man.
@Andy-rq6rq4 жыл бұрын
amazing production quality for a small youtuber
@jorgebarroso24964 жыл бұрын
If your folder under TrainerConfig is called results instead of summaries, use this command to open the results in TensorBoard (access through localhost:6006): tensorboard --logdir results --port 6006
@tattwadarshiguru35074 жыл бұрын
Dude you are awesome. May God bless you with great success.
@SebastianSchuchmannAI4 жыл бұрын
Thank you! :)
@BenjaminK1234 жыл бұрын
Man, this is really hard to learn but very cool. Thank you for creating this video :)
@manzoorhussain51613 жыл бұрын
Goodness me! this is what I was looking for.
@isaiahhizer87964 жыл бұрын
Hallo, I get this error in the Unity editor: "Couldn't connect to trainer on port 5004 using API version 1.0.0. Will perform inference instead." I run the mlagents-learn trainer_config.yaml --run-id="JumperAI_1" line, the Unity logo pops up and tells me to press Play in the editor, I press Play, but because of this error (I think) it doesn't train and simply times out. I am using Windows 10. I searched the internet for a solution, turned off Windows Defender, and tried it in pyvenv, but nothing has helped.
@darcking994 жыл бұрын
Same problem here
@alexsteed30914 жыл бұрын
This error occurs when mlagents-learn is not running. It is basically saying, "no training available, (on port 5004) I will use inference (brain) instead".
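For anyone hitting the port 5004 message above, a minimal sketch of the expected order of operations, assuming the TrainerConfig folder and run ID from the tutorial:

```
# 1. In a terminal, from the folder containing trainer_config.yaml:
mlagents-learn trainer_config.yaml --run-id=JumperAI_1

# 2. Wait for the Unity logo and the line
#    "Start training by pressing the Play button in the Unity Editor."
# 3. Only then press Play in the Editor, before the trainer times out.
```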
@shivmahabeer94504 жыл бұрын
Dude! Thank you, this is saving me for my honours project!!
@santiagoyeomans4 жыл бұрын
The channel I was looking for for so long... Liked, subscribed, and notifications turned on!!
@owengillett58064 жыл бұрын
Great stuff. Nice work !
@RqtiOfficial4 жыл бұрын
I'm a little confused; I can't find how to make my jumping script derive from Agent instead of MonoBehaviour. I added the ML-Agents package with the Package Manager.
@topipihko6114 жыл бұрын
Excellent series! Chilled pace and you explain all the details clearly. + Puns (y) Have you tried creating an agent that has both continuous and discrete actions? E.g. angle of a cannon + a shoot action. That would be a great topic for the next episode! :)
@alexandrecolautoneto73744 жыл бұрын
Keep up the good work, it helps a lot.
@ahmedahres5303 жыл бұрын
Hello, my agent does not jump during training. Everything else is working properly, however it seems like it does not do anything except stay idle and receive negative rewards. When I set actionsOut[0] = 1; in Heuristic() instead of actionsOut[0] = 0; the agent keeps jumping instead. It seems like the agent just makes the same decision over and over again. Has anybody experienced the same? Thank you
@gradient49283 жыл бұрын
Ran into this issue as well
@ThiemenDoppenberg3 жыл бұрын
The AI car is just jumping around all the time and I cannot get it to just drive on the road and only jump when the raycast hits the cars :/
@realsoftgames71744 жыл бұрын
Why is it your cars jump so smoothly, yet mine just continuously jump? I've trained for about an hour with the same results; the highest score I got was 11.
@berkertopaloglu9114 жыл бұрын
I got the same issue. Have you figured it out yet?
@rizasandhi4 жыл бұрын
same. Highest score I got was 10.
@chorusgamez7554 жыл бұрын
same!!! have you figured it out yet?
@realsoftgames71744 жыл бұрын
@@chorusgamez755 No, I haven't, sorry. Have you?
@protondeveloper4 жыл бұрын
@@realsoftgames7174 No
@HamadAKSUMS8 ай бұрын
WOW crazy man Thanks
@berkertopaloglu9114 жыл бұрын
I did it the same way as you, but when I work with multiple environments the high score gets stuck at 8 and never increases.
@electrocreative43034 жыл бұрын
That's a lot of effort from you. Thanks so much for explaining :)
@carlosmosquera79462 жыл бұрын
I'm sure it is a stupid question, but what's the difference compared to just using a raycast and an if statement to jump when the other car gets close?
@keyhaven81516 ай бұрын
I have always had a question about ML-Agents: agents select actions randomly at the beginning of training. Can we incorporate human intervention into the training process to make them train faster? Is there a corresponding method in ML-Agents? Looking forward to your answer.
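Not from the video, but for reference: ML-Agents does ship an imitation-learning path for exactly this. You record your own play with a Demonstration Recorder component and feed the resulting .demo file to behavioral cloning and/or GAIL as extra training signals. A rough sketch of the relevant config section, assuming the newer behaviors-style YAML and a hypothetical demo path; key names have shifted between releases, so check the docs for your version:

```yaml
behaviors:
  Jumper:
    trainer_type: ppo
    # ... usual PPO hyperparameters ...
    behavioral_cloning:
      demo_path: Demos/JumperDemo.demo   # hypothetical path to your recorded demo
      strength: 0.5
    reward_signals:
      extrinsic:
        gamma: 0.99
        strength: 1.0
      gail:
        strength: 0.1
        demo_path: Demos/JumperDemo.demo
```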
@mahdicheikhrouhou22864 жыл бұрын
Good Video ! Thanks it helped me a lot !
@mohammadsoubra35324 жыл бұрын
When attempting this I keep getting "Couldn't connect to trainer on port 5004 using API version 1.0.0. Will perform inference instead." when playing. I even started fresh by cloning your repo again and just hitting Play without starting anything, and it still gave me that error. I'm not sure how to fix it.
@lukewg Жыл бұрын
When I add the Ray Perception Sensor I can't see it at all in the Scene view. Does anybody know how to fix this problem?
@whoami824314 жыл бұрын
For some reason, the player does not jump even after adding the Decision Requester. Not sure why this is happening. Any ideas? I have my project installed in the same folder where I have ML-Agents.
@ashitmehta50004 жыл бұрын
I ran into the same issue for HOURS... Then I noticed that I didn't change the branch size to 2. Try checking all the components values in each script.
@FrostiFrostiz4 жыл бұрын
It does jump, but you still have to press the space bar. Try removing the Decision Requester and pressing space while playing, then compare with the Decision Requester attached ;). The AI has not yet learned to jump, so even if he doesn't say it in the video, he is pressing the space bar!
@ShinichiKudoQatnip3 жыл бұрын
TensorBoard summaries show "no data found"?
@maxfun6797 Жыл бұрын
How many behavior parameters can we have, and how can we get the actions from these specific parameter components?
@maloxi14724 жыл бұрын
Excellent ! Liked, subscribed, belled... you name it !
@adamjurik54424 жыл бұрын
Hey, I did exactly what you said to do in the tutorial but the decision requester script doesn't seem to work for some reason. The script is exactly the same I have checked and tried to do this multiple times. Although, great tutorial. EDIT: After a night's sleep I managed to try it out again and when training, everything worked. Weird but thanks :)
@Making_dragons3 жыл бұрын
Hey, why did the Decision Requester just work out of the blue? Mine doesn't work.
@JasonPesadelo4 жыл бұрын
Hi Sebastian, do you know how to continue training from a previously saved model? Thanks!
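Not the author, but a sketch of how resuming usually works, assuming a release recent enough to have the --resume flag (older versions used --load instead):

```
# Picks up the checkpoints stored under the same run ID and keeps training:
mlagents-learn trainer_config.yaml --run-id=JumperAI_1 --resume
```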
@FaVeritas4 жыл бұрын
Thank you for this series. I've attempted to get started with ML in Unity about 3 times and gave up each time. (Mostly due to having to use Python along side Unity). I'm hoping to apply ML to bot players in my 2D shooter game!
@herbschilling22154 жыл бұрын
Really helpful ! Thanks!
@resmike44443 жыл бұрын
Can you make a video for ML-Agents 2.1.0??? There are some changes there!
@_indrahan4 жыл бұрын
Thank you so much for this informative video. I've added the Decision Requester component, but the car doesn't jump; does anyone know what's causing this?
@miguelmascarenhas6134 жыл бұрын
I have the same issue. Were you able to solve it?
@FrostiFrostiz4 жыл бұрын
It does jump, but you still have to press the space bar. Try removing the Decision Requester and pressing space while playing, then compare with the Decision Requester attached ;). The AI has not yet learned to jump, so even if he doesn't say it in the video, he is pressing the space bar!
@BramOuwerkerk4 жыл бұрын
I have a problem where it basically sets the reward to 0.1 instead of adding 0.1. Can anyone help me?
@SebastianSchuchmannAI4 жыл бұрын
Did you use SetReward () instead of AddReward() ?
@BramOuwerkerk4 жыл бұрын
@@SebastianSchuchmannAI No, but I built a similar game to yours and there it worked.
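For anyone else landing in this thread, a minimal sketch of the difference, assuming the calls sit somewhere inside your Agent subclass (e.g. where an obstacle is passed):

```csharp
// AddReward accumulates on top of whatever the agent has already earned this episode:
AddReward(0.1f);   // 0.1, 0.2, 0.3, ... as more obstacles are cleared

// SetReward overwrites the episode's current reward instead:
SetReward(0.1f);   // stays at 0.1 no matter how many obstacles are cleared
```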
@jorgebarroso24964 жыл бұрын
Is there a way to save an NN model whenever the agent reaches a new high score, so the best ones are kept?
@lucaalfino21059 ай бұрын
Hi again! I have made a hide-and-seek environment with a single seeker agent and a single hider agent. What I would like to do is use the Ray Perception Sensor 3D to give rewards depending on whether the hider is in the seeker's view or not. What should I use for this, as resources on the subject are rather scarce? Also, are the tags used in any way? For example, if the seeker's sensor hits the hider (tag 2), then the seeker gains reward and, correspondingly, the hider loses reward.
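Not the author, and the Ray Perception Sensor's hits are not trivial to read back in C#, so one common workaround is to keep the sensor purely as an observation and cast your own ray for the reward logic. A rough sketch under those assumptions (the seeker's Agent holds a reference to the hider, which is tagged "hider"):

```csharp
// Inside the seeker's Agent subclass, e.g. called from OnActionReceived.
void RewardIfHiderVisible(Transform hider, Agent hiderAgent)
{
    Vector3 toHider = hider.position - transform.position;
    if (Physics.Raycast(transform.position, toHider.normalized, out RaycastHit hit, 50f))
    {
        // Line of sight only counts if the first thing the ray hits is the hider.
        if (hit.collider.CompareTag("hider"))
        {
            AddReward(0.01f);              // seeker is rewarded for seeing the hider
            hiderAgent.AddReward(-0.01f);  // hider is penalized for being seen
        }
    }
}
```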
@ashitmehta50004 жыл бұрын
Can someone tell me how to start TensorBoard in the latest update (Release 6)? I am going nuts over this. Edit: run the following command instead: tensorboard --logdir results
@wiktor34534 жыл бұрын
tensorboard --logdir=results works for me
@jorgebarroso24964 жыл бұрын
I tried some training configs and the one that fit best was the default one with 5.0e7 steps instead of 500000. Also, my AI just stopped jumping when it reached 139; does anyone know why?
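For reference, a sketch of where that number lives, assuming the older (pre-Release 3) trainer_config.yaml format used in the video and a behavior named Jumper; anything you don't override falls back to the default: section:

```yaml
Jumper:
    trainer: ppo
    max_steps: 5.0e7    # the shipped default is 5.0e5 (500000)
```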
@hansdietrich834 жыл бұрын
This is so awesome. Could you try to make a tutorial on an agent with legs learning to walk?
@BramOuwerkerk4 жыл бұрын
I have a question: where in all of this is the Raycast Sensor used?
@Armadous Жыл бұрын
Can you do a video on how to work with the curriculum feature?
@DeoIgnition4 жыл бұрын
I keep getting these two errors when I try to train:
"File "g:\ml-agents\ml-agents\mlagents\trainers\trainer_controller.py", line 175, in _create_trainer_and_manager
    trainer = self.trainers[brain_name]
KeyError: 'Jumper'"
and
"File "c:\program files\python37\lib\site-packages\tensorflow_core\python\summary\writer\writer.py", line 127, in add_summary
    for value in summary.value:
AttributeError: 'str' object has no attribute 'value'"
@WilliamThyer4 жыл бұрын
Great video!
@jastrtva95053 жыл бұрын
When I change the script from MonoBehaviour to Agent it gives a compiler error. What's wrong?
@ThiemenDoppenberg3 жыл бұрын
Import ML-Agents into the project via the Package Manager. You have to choose a version. I don't know what version is recommended for this tutorial, but for ML-Agents Unity Release 12 it is 1.7.2, I believe.
@jastrtva95053 жыл бұрын
@@ThiemenDoppenberg I have imported ML-Agents; I can see the files in the project.
@ThiemenDoppenberg3 жыл бұрын
@@jastrtva9505 what version do you have of it?
@ThiemenDoppenberg3 жыл бұрын
@@jastrtva9505 You should go to the Package Manager, then the Advanced tab -> Preview packages. Then go to 'In Project' and check the version. Set it to preview 1.7.2 and click Update. This worked for me with this tutorial in 2019.4.
@jastrtva95053 жыл бұрын
@@ThiemenDoppenberg I tried it but it still does not work.
@DeJMan4 жыл бұрын
Everything worked except TensorBoard. It just says no data.
@fetisistnalbant4 жыл бұрын
Yeah, I can't see it either.
@jorgebarroso24964 жыл бұрын
Same
@ricciogiancarlo4 жыл бұрын
Is it possible to give some basic knowledge to the ML-Agents? Like, for example, stopping at a red traffic light?
@your_local_reptile67004 жыл бұрын
Where did the Behavior Parameters script come from? :s
@miguelmascarenhas6134 жыл бұрын
Awesome video
@Draco984 жыл бұрын
It fails to create the process when I write "mlagents-learn trainer_config.yaml --run-id="JumperAI_1"". Can someone help?
@SebastianSchuchmannAI4 жыл бұрын
Remote debugging is always tough, but I will try my best. First make sure you have Python 3.6.1 or higher installed (check via "python --version"). Then make sure the ML-Agents package is installed; just run the command "mlagents-learn --help" to verify that it works. Next, make sure you are really located in the TrainerConfig directory, because the trainer_config.yaml file is located there. You can try the "dir" command on Windows or "ls" on Mac/Linux and check that it prints "trainer_config.yaml". If nothing helps, I would advise reinstalling Python/ML-Agents following the instructions here: github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md Hope you find a solution.
@KevinLeekley4 жыл бұрын
Hmm, I think I am having the same issue. I'm running Python 3.6.8, the mlagents-learn help command does work, and I am definitely in the TrainerConfig dir, but I get an error when I run that command: "Trainer configurations not found. Make sure your YAML file has a section for behaviors. mlagents.trainers.exception.TrainerConfigError: Trainer configurations not found. Make sure your YAML file has a section for behaviors."
@SebastianSchuchmannAI4 жыл бұрын
So on a new Windows machine I had the same error and it took me some time to fix it. First I uninstalled all versions of Python I had. Next I made sure to remove the Python folder in Program Files; this one caused me a lot of trouble. Before reinstalling Python, make sure the command "mlagents-learn" raises the "command not found" error. In my case, even after uninstalling Python it was stuck on "fails to create process", and only after removing the Python folder in Program Files did it stop doing that. Then I installed a fresh version of Python 3.6.5, making sure to check the "Add to PATH" box when installing, and after doing "pip3 install mlagents" it worked. Hope this helps. Every machine is different.
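For anyone still seeing the "Make sure your YAML file has a section for behaviors" error above even with Python set up correctly: it usually means a newer ml-agents Python package is reading the old-style trainer_config.yaml. A minimal sketch of the newer format it expects, assuming a behavior named Jumper (port your old hyperparameters across):

```yaml
behaviors:
  Jumper:
    trainer_type: ppo
    max_steps: 500000
    summary_freq: 10000
```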
@itaybroder28974 жыл бұрын
Amazing vid
@Markste-in4 жыл бұрын
How do you move the code that easily? What is the key binding for moving a selection? I need this xD Thanks!
@albertoferrer88034 жыл бұрын
Congratulations! How do you not have more subscribers? Nice video. I have a question: what do you think is the best way for my agent to detect the nearest other agent with a sensor and then attack? (I'm making symmetric agents in one environment, like a battle royale game.) I assumed sensors detect when the best time to attack is, but for a first pass they just attack. I'm trying to make a custom "nearest agent" sensor, but I think this is already covered by the Ray Perception Sensor 3D. What do you think would be the best solution? Now I'm subscribed :p
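Not the author, but one simple alternative to writing a custom sensor: compute the nearest opponent yourself and feed its relative position to the policy as observations, then let training decide when attacking pays off. A rough sketch under those assumptions (opponents is a hypothetical list of the other agents; needs using System.Collections.Generic, Unity.MLAgents and Unity.MLAgents.Sensors):

```csharp
// Inside your Agent subclass. "opponents" is a hypothetical list you keep filled
// with the transforms of the other agents in the arena.
public List<Transform> opponents;

public override void CollectObservations(VectorSensor sensor)
{
    Transform nearest = null;
    float bestSqrDist = float.MaxValue;
    foreach (Transform opponent in opponents)
    {
        float sqrDist = (opponent.position - transform.position).sqrMagnitude;
        if (sqrDist < bestSqrDist) { bestSqrDist = sqrDist; nearest = opponent; }
    }

    // 4 extra observations: relative position (3) + distance (1) of the closest opponent.
    Vector3 relative = nearest != null ? nearest.position - transform.position : Vector3.zero;
    sensor.AddObservation(relative);
    sensor.AddObservation(nearest != null ? Mathf.Sqrt(bestSqrDist) : 0f);
}
```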
@TheStrokeForge4 жыл бұрын
I love it!!!
@chorusgamez7554 жыл бұрын
Hey guys, can anyone help me? When I run this the cars just jump endlessly; it's really annoying, they don't improve at all. If I run a Unity example AI, such as the ball-balancing one, it trains fine (also, the rewards for my AI don't get printed to the command prompt). Edit: I fixed that, but it still isn't learning anything after 10 hours.
@arthanant8634 Жыл бұрын
Hey I have the same problem. Did you find a solution for the AI not learning?
@ferdinandospagnolo76644 жыл бұрын
Why is it better to do the final training on the built version of unity rather than the editor?
@SebastianSchuchmannAI4 жыл бұрын
It's simply faster. The Editor has a lot of overhead.
@서재민-c7p4 жыл бұрын
Hi, your video is so good! But I have a question: how can I find the location of the application that we build? I mean, can you explain what "--env=../Build/build.app" means?
@nan15124 жыл бұрын
When you build, can't you specify the build path in Unity?
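To make the --env question above concrete: the flag points mlagents-learn at a standalone build instead of the Editor, so the path is simply wherever you chose to build to. A sketch, assuming a Build folder one level above TrainerConfig (hence the ../):

```
# macOS: point at the .app bundle produced via Build Settings
mlagents-learn trainer_config.yaml --run-id=JumperAI_2 --env=../Build/build.app

# Windows: point at the .exe instead
mlagents-learn trainer_config.yaml --run-id=JumperAI_2 --env=../Build/build.exe
```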
@DeJMan4 жыл бұрын
I wrote an extra rule(?) so that if the car jumps and it didn't jump over a car (an unnecessary jump), it loses 0.1 reward. This made the car jump only when it needed to after training, and not all the time.
@SebastianSchuchmannAI4 жыл бұрын
Nice, great Idea!
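A minimal sketch of the penalty idea described above, assuming the Jumper agent from the video plus a hypothetical jumped flag and some way to tell whether a car was actually cleared (that glue code is not in the repo):

```csharp
// Called when the agent lands again.
void OnLanded(bool clearedACar)
{
    if (jumped && !clearedACar)
    {
        AddReward(-0.1f);   // small penalty for a jump that wasn't needed
    }
    jumped = false;
}
```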
@bigedwerd4 жыл бұрын
Inspiring stuff
@etto44254 жыл бұрын
Hey Sebastian! I watched your tutorial and had loads of fun replicating it. The only problem is that my AI isn't learning! It just jumps over and over with no sign of improvement, even after 34 minutes! Do you have any solution? Thanks!
@arthanant8634 Жыл бұрын
A little late to the party but did you find a solution?
@etto4425 Жыл бұрын
@@arthanant8634 i deleted unity a few months ago… guess you’re gonna have to keep looking 😅
@arthanant8634 Жыл бұрын
@@etto4425 damn okay lol
@DeJMan4 жыл бұрын
I don't understand the purpose of the Heuristic function.
@SebastianSchuchmannAI4 жыл бұрын
It is usually used for testing your agents via human input or some hard-coded logic. Most of the classic AI in games is implemented in a heuristic fashion.
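A minimal sketch of what that looks like for the jumper, assuming the discrete action setup from the video (1 = jump, 0 = do nothing) and the Release 1-era float[] signature; newer releases pass an ActionBuffers instead:

```csharp
// Lets you drive the agent yourself (or with hard-coded logic) while testing:
public override void Heuristic(float[] actionsOut)
{
    actionsOut[0] = Input.GetKey(KeyCode.Space) ? 1f : 0f;
}
```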
@mrstal52384 жыл бұрын
Thank you
@user-rl1ij5qo8r4 жыл бұрын
Some help? ML-Agents stops at self._traceback = tf_stack.extract_stack()
@thomasjardanedeoliveirabou31754 жыл бұрын
Is anyone else getting bugs with the file? My car is just FLYING; I had to adjust the mass, and the high score is HUGE.
@SebastianSchuchmannAI4 жыл бұрын
Hey, maybe my physics settings got lost or something. I set the gravity in the physics settings quite high, to around 80, if I recall correctly.
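If those settings did not come through with the repo, you can set the value either under Edit > Project Settings > Physics or from code; a small sketch of the latter, using the roughly -80 mentioned above:

```csharp
// Run once at startup, e.g. in Awake() on any object in the scene:
Physics.gravity = new Vector3(0f, -80f, 0f);
```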
@РегинаСафина-т2и4 жыл бұрын
You're cool. Thanks for your channel!
@nischalada81084 жыл бұрын
This is insane
@applepie72824 жыл бұрын
thanks man
@ygreaterr4 жыл бұрын
that car is do be jumping doe
@lisastrazzella713 жыл бұрын
So here in this video you don't write the code, you only explain it? I don't know if this is already the beginning or not :/ Many thanks in advance :)
@Timotheeee14 жыл бұрын
when I train the cars they just jump endlessly and never improve
@chorusgamez7554 жыл бұрын
same! did you figure it out?
@Fuzzhead934 жыл бұрын
This is so cool! I'm curious how you could train different intelligences of AI for different difficulties and also curious about what the performance impact is of using ML agents in a shipped game, on mobile or pc
@alexandrecolautoneto73744 жыл бұрын
Nicuruuuu
@mateusgoncalvesmachado13614 жыл бұрын
Sebastian, how can I create my own trainer? I have PyTorch models waiting to be used with Unity's ML-Agents haha :)
@ElPataEPerro Жыл бұрын
😀
@monkeyrobotsinc.98754 жыл бұрын
Outro music is too ghetto. I'm not in the hood. I'm trying to learn, not rob and kill someone.