Comments
@MyungjinPark-y5f 4 months ago
This is really helpful for me. Thanks a lot.
@MarioLau-o2i 1 year ago
AttributeError: 'BehaviorSpec' object has no attribute 'observation_shapes'. How do I solve it?
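This error typically means the Python mlagents-envs package is newer than the tutorial: later releases replaced `BehaviorSpec.observation_shapes` with `observation_specs`. A small compatibility helper can paper over both APIs; this is a sketch (the helper name is mine, not from the video, and the exact release where the rename happened should be checked against the ML-Agents changelog):

```python
def get_observation_shapes(behavior_spec):
    """Return a list of observation shapes from a BehaviorSpec,
    regardless of the mlagents-envs release in use.

    Newer releases expose `observation_specs` (a list of ObservationSpec
    objects, each carrying a .shape); older releases exposed the shapes
    directly as `observation_shapes`.
    """
    if hasattr(behavior_spec, "observation_specs"):  # newer API
        return [spec.shape for spec in behavior_spec.observation_specs]
    return list(behavior_spec.observation_shapes)    # older API
```

Call it as `get_observation_shapes(env.behavior_specs[behavior_name])` in place of the direct attribute access.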
@fahadmirza8497 1 year ago
Can I also implement a Deep Q-Network for training the agents?
@sexyscorch7548 1 year ago
The game window shows up after env=..., but the command prompt won't let me type any more code, and the game that pops up doesn't give me a wait button like it did for you.
@AlMgAgape 1 year ago
Can I use a Deep Q-Network for training the agents? For example, to create an AI car agent for a racing game?
@FernandoSevilla-q9u 1 year ago
Awesome!! Really great content. I've learned a lot, thanks for sharing.
@cpthorfe 1 year ago
Thank you so much for this. Fascinated by machine learning and unity-ml seems like a great place to explore. You've saved me a lot of headaches already and I can't wait to dig through the rest of your videos.
@jandelisidro8994 2 years ago
Help me: the console says it can't connect on port 5004, and the command shell says "AttributeError: 'str' object has no attribute '_key'". Please help, I've been stuck on this for 4 days straight and can't find a solution.
@codyquist4908 2 years ago
"After a bit of crying..." So on point... This tutorial is superb! Thank you so much for the time and effort you have put into making this video, it is immensely helpful!
@ItsThe675 2 years ago
You didn't explain selling at market value. Selling at market doesn't sell at the last traded price; it sells within X% of where the price is, which could be higher or lower than the price when you executed the market sell/buy. For such a long video I was really hoping you would get into everything: the order book, ADL, index prices, what everything on the screen means, and how it all comes into play together.
@noti88 2 years ago
Great run-through. Appreciate you touching on Auto-Deposit Margin, was wondering about that.
@김평-c6n 2 years ago
This is the most awesome lecture I've ever seen. Thanks to you I could handle ML-Agents 2.0. I especially had a hard time with the "Action Buffers and Heuristic" parts. I'm looking forward to the next lecture on "Discrete Action". Thank you, thank you again for your painstaking effort and a great lecture!
@guesswho1705 2 years ago
Awesome explanation, wonderful video, thank you
@Joedon 2 years ago
Hi, this is a great tutorial but I'm really confused about one thing. I'm running into an issue where my agent won't train properly. The game is very similar to this one but slightly different: I have an arena with 4 walls and a character. Both the agent and an apple spawn randomly. The entire goal of the game is for the agent to run into the apple and collect it (just like this game). I have practically the same observations & code as you, but it doesn't seem to learn, and instead just insists on running in a random direction instead of learning to use the coordinates it's given for the apple. I thought "Observable Attribute Handling" might be the culprit, but I see you have it on Ignore as well. The only thing different about my game is that it doesn't reset after the agent picks up the apple; everything's handled from a controller script which resets the game every 10 seconds, no matter the state. I've tried it with and without that and it doesn't seem to make a difference, however. The full list of observations I'm passing (7 total):
- Apple count within radius (of the entire map, for testing purposes)
- Nearest apple Vector3 (there's only one)
- Agent's own Vector3
Inputs are exactly the same as yours (2 continuous), except using a controller.Move instead of a rigidbody. Code:
- Agent movement: www.toptal.com/developers/hastebin/ocodezijon.csharp
- Spawner: www.toptal.com/developers/hastebin/usuzakehoc.csharp
Any ideas? Here's a GIF of it: gyazo.com/29c37778c005b6fafd49262cbd1b45ac , this is after 1000+ generations. I ran it overnight in a similar setting with 5 million steps and it still wasn't working. I suppose the only real difference is that the chance of a random collision with the apple to begin with is much smaller due to the gameplay space? Trying it with a large apple, but the optimal strategy seems to stay the same: running in a random direction and not using any of the observation data: gyazo.com/5c1cf2abb3d0d52dc4dc8a06fd6bea48 Thanks, and sorry for the long comment.
@leozinho2r 2 years ago
Hi! Thanks so much for this tutorial, so so helpful! I've been applying it to my own custom environment. I've got everything working the way I want it except that for some reason when I reset after the first episode, I cannot move my player anymore, and the same happens when the agent is training... Any idea why that might be happening? Thanks in advance 🙏🏽
@rachelstclair9897 2 years ago
Hi, thank you! You might want to make sure that in Behavior Type you select 'Heuristic' and check during play that you can move the character. If you can't move it, then you need to check that your behavior heuristics are correct in the C# script.
@leozinho2r 2 years ago
@@rachelstclair9897 Yeah it's weird because it moves fine during the first episode (both when I manually control it in 'heuristic' mode and when the agent starts training in 'default' mode), but as soon as it resets it stops responding to actions (in both manual 'heuristic' mode and agent training 'default' mode)... I've checked my script but can't find what's going wrong :(
@leozinho2r 2 years ago
Hey, no need to reply to this I figured it out! I wasn't resetting my variable collisionInfo and apparently that was the problem 😅 Works now! 🙏🏽
@indrajitpal63 2 years ago
This was so informative! Thanks a ton for making this 🤩. I just wish I had come across this a few months earlier so I wouldn't have had to struggle to figure stuff out on my own; it would have saved a ton of time & energy. Thanks once again! 🎉
@jaechangko6574 2 years ago
Hi, quick question. How can I deal with the "Academy is inaccessible due to its protection level" C# error when I inherit the class?
@rachelstclair9897 2 years ago
Hi, it sounds like you have a version mismatch issue. This tutorial uses an older version of ML-Agents.
@jaechangko6574 2 years ago
@@rachelstclair9897 Awwwww I see. Thank you
@newcooldiscoveries5711 2 years ago
A truly enjoyable tutorial. Good length for the depth of information. Really well presented with all the needed details articulated clearly and concisely in a pleasant and friendly manner. You have a gift for teaching. Thanks!
@DoYouEvenStream 3 years ago
Very informative video! Thank you.
@brendantierney3787 3 years ago
Awesome video, thank you !
@brendantierney3787 3 years ago
Bit Corn lol!
@juanzamoracr 3 years ago
I tried this and had to change a couple of lines in the agent.cs: I believe it's OnActionReceived(ActionBuffers actionBuffers) and Heuristic(in ActionBuffers actionsOut) instead of the float[] versions.
@rachelstclair9897 3 years ago
Hello Juan! This is an older tutorial for the previous ML-Agents version. The updated tutorial for version 2.0 can be found here: kzbin.info/www/bejne/gILXY6GbZtychaM I'm glad you figured out the changes on your own!
@juanzamoracr 3 years ago
@@rachelstclair9897 Still good! Cool that you have a new video.
@howardb.728 3 years ago
Very well presented - natural and honest. Thank you for taking the time to help the community. It is appreciated.
@juleswombat5309 3 years ago
That was really cool, and rather inspiring on the application of machine learning to protein analysis. Makes me think. I do however have a couple of queries:
a) It sort of feels as though train_data still has the label indicator values left in the final column (1000), so the neural net may be seeing that last feature and possibly overfitting against it. When I remove that last column in val_data, I get a validation error of around 6%, not that different from logistic regression, and then a ROC AUC of 0.69, because of an excess of false negatives (0s).
b) I thought in PyTorch we were supposed to call optimizer.zero_grad() before doing the loss.backward() calls? However, when I do put this call in, I could not get my network to consistently train and improve.
Anyhow, I found this really interesting stuff.
@rachelstclair9897 3 years ago
Hello Jules! Please see the notebook here: colab.research.google.com/drive/1NjMbSSbooId82Mu1XU0lcyUw19VGHfvN?usp=sharing The training data is pulled in with X and Y labels, so they are separate. X is the amino acid string, and Y is the label, bind or not. As for the optimizer, setting the gradients to zero helps memory and slightly improves performance. I updated the Colab notebook to include it for you! You can also see the official docs here: pytorch.org/docs/stable/generated/torch.optim.Optimizer.zero_grad.html
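For anyone puzzled by query (b): PyTorch adds each `loss.backward()` result into the parameters' `.grad` buffers, so without `optimizer.zero_grad()` every step applies the sum of all past gradients. The standard loop order is `optimizer.zero_grad()`, then `loss.backward()`, then `optimizer.step()`. Here is a pure-Python sketch of the accumulation effect; the one-weight model, learning rate, and step counts are made up for illustration, no PyTorch needed:

```python
def sgd_steps(zero_grad, steps=50, lr=0.1):
    """Fit pred = w * x to the target y = 2 * x with squared error.

    zero_grad=True resets the gradient buffer every step (the role of
    optimizer.zero_grad()); zero_grad=False lets gradients pile up, so
    each update is driven by stale gradients as well as the current one.
    """
    w, grad = 0.0, 0.0
    x, y = 1.0, 2.0
    for _ in range(steps):
        if zero_grad:
            grad = 0.0                 # optimizer.zero_grad()
        grad += 2.0 * (w * x - y) * x  # loss.backward() *adds* to .grad
        w -= lr * grad                 # optimizer.step()
    return w
```

With the reset, `w` settles near the true value 2.0; without it, the stale accumulated gradients make the weight oscillate around the target instead of settling.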
@howardb.728 3 years ago
One of the friendliest and most honest walk-through tutes on this rapidly changing and, at times, frustrating ml-agents environment. Enjoyed it so much!!! well done and thank you.
@fauzannrifai 1 year ago
agree mate
@juleswombat5309 3 years ago
That's kinda useful and informative. The ML-Agents examples tend to use multiple parallel environments for faster training. Will this build-and-execute process work against a Unity project environment which has multiple copies of the environment/agent instances for faster training? I presume it would recognise that?
@rachelstclair9897 3 years ago
Hello Jules, yes you can absolutely use multiple training areas in the same build execution. I didn't show that here to keep it simple.
@gregorymaynard4443 3 years ago
This is really really cool. I have been meaning to try adding some learned AI to a few unity projects and watching this is a huge motivator! It seems like there is a lot to know, and not really having any idea how much learning I have yet to do makes it intimidating - at least for me. I didn't want to start with something that was outdated, so I was happy when I saw this v2.0 video. I appreciate it!
@melodicwuitartree 3 years ago
Thank you so much for this. I appreciate you and this so much. This helps set up a whole foundation that I'll need. I also love how intuitive and relatable you make the whole process and how clear you make things on parameters that need to be set. Computer science is a male dominated space and it's refreshing to see more women becoming more included and involved in the CS community. You've made this whole ML-Agent experience that much more accessible and I'm very grateful for that.
@rachelstclair9897 3 years ago
Thank you so much David! I really appreciate your feedback. It's comments like this that make my time worthwhile for the videos. Please show us what you come up with! I love to see what other people are doing after this jumping off point!
@lafanzo1919 3 years ago
I get you, but your screen is too small; can't really see, ma'am.
@rachelstclair9897 3 years ago
Thanks for the feedback! I'll try to address this in an updated video. Is it the text that is too small?
@CosmicSeeker69 3 years ago
That was going great, Rachel, until we got to the bit about the trading password. If this is a beginners' guide you needed to have included that in the instructions (because that's where I clicked away). Personally, I was looking for up-to-date info on funding with fiat; I'm not going to set up on another exchange just to buy BTC when I can fund directly. I hope this helps you structure future content.
@rachelstclair9897 3 years ago
Hello! Thank you for your feedback. Can you mark the time in the video you are referring to? You said the password instructions were not clear. I'd like to clarify that for you. I also understand the fiat portion is a big part of it. I'm going to sort these issues out in an updated video!
@rainbowsunshinekitty3953 3 years ago
She told us in the description what she was covering. Excellent video 😊!
@momomo2298 3 years ago
Thank you so much! Helps a lot!
@lovebitcoin813 3 years ago
Thanks babe
@JimFreedom2007 3 years ago
This video is BLANK but does have audio.
@rachelstclair9897 3 years ago
Hi Jim Freedom! Thanks for your comment. The video seems to be working for myself and others. Perhaps make sure your browser is up to date and you don't have specific ad-blockers that may block the content?
@WentRumble 3 years ago
This is the most useful guide on KuCoin I have seen on YouTube, even though I lost my entire investment.
@rachelstclair9897 3 years ago
Thank you Went Rumble! Hopefully you learned some useful strategies on risk aversion in the process! Good luck to you.
@TwodiVix 3 years ago
I have a problem with the "float[]" thing. I copied the code from GitHub and it gives this message: "Assets\agent.cs(69,26): error CS0115: 'RollerAgent.Heuristic(float[])': no suitable method found to override".
@andy1858 3 years ago
Hey... do you know the answer now? :c
@donglinwang5874 3 years ago
Hi Rachel. Thank you so much for the tutorial. When I clicked the Play button for the first time, the ball did not seem to respond to my keyboard input. The console shows me the following message: "Couldn't connect to trainer on port 5004 using API version 1.0.0. Will perform inference instead." Is there a way for me to fix this?
@donglinwang5874 3 years ago
I have managed to fix the problem. For some reason, I did not add the "Decision Requester" component to the sphere object. The warning message is to be expected, since Rachel also ran into the same message in the video.
@rachelstclair9897 3 years ago
Hi Donglin, thanks for your comment and your fix! Be careful with this warning because it can turn into an error. It's usually negligible, but in some instances you might run into versioning issues. If you are having trouble training and have exhausted every other possible option, I'd recommend updating your communicator package. More info on that can be found on the ML-Agents GitHub.
@ThiwankaJayasiri 3 years ago
Great stuff, super helpful to crack the code!
@beardordie5308 3 years ago
Version compatibility issues are exactly what has kept me away from playing more with ML-Agents to date.
@alexstankovic2947 3 years ago
Oh my, oh my, so precise and to the point. Great work and keep it coming! 🤩
@veimars1567 3 years ago
Thanks to your video I made it more than halfway :) but I got stuck at minute 21:00 :( Thanks anyway :,)
@rachelstclair9897 3 years ago
Thank you for your comments! A bit has changed since the new ML-Agents package update. I'll be posting a new video on the most recent version soon!
@veimars1567 3 years ago
@@rachelstclair9897 Thank you very much I will be looking forward to it :,)
@893uma3 3 years ago
Is it possible to use ML-Agents to create strategic AI that targets specific individuals, such as detecting the habits of players and exploiting their weaknesses in 3D action games?
@rachelstclair9897 3 years ago
Hi, I'm sure this is possible if you put your mind to it. You might want to look more at using tags on agents and objects. You can also look into side channels in mlagents-envs if you need some more advanced methods. Good luck!
@zezooalzezoo5200 3 years ago
Thanks, can I use it for enemy AI?
@rachelstclair9897 3 years ago
Absolutely! If you get it working, make sure to drop a link so we can see the results!
@zezooalzezoo5200 3 years ago
@@rachelstclair9897 I will, of course. It's still in progress; I need to learn more about AI behavior & logic.
@mingukkim13 3 years ago
Thanks for the tutorial video! I have a question about the code in expected_sarsa.py, line 41. I changed this code:
    action = env.behavior_specs[behavior_name].create_random_action(len(decision_steps))
to
    action = env.behavior_specs[behavior_name].action_spec.random_action(len(decision_steps))
since there were some updates in base_env.py. But this code produces the error below and I can't find a solution:
    IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
Is there any solution?
@raknak7667 3 years ago
I have the same issue, please can someone help us?
@rachelstclair9897 3 years ago
Hi guys, since the new ML-Agents update, some of the code in my files is no longer supported. I'm working on releasing some new videos with version 2.0. You can't use `.create_random_action` in the newer versions; instead you can use `.action_spec.random_action(1)`, where the 1 indicates the number of agents to sample actions for. Hope this helps!
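To spell out the API change (names checked against mlagents-envs releases around 0.24 to 0.27, but treat the version boundary as approximate): older releases had `behavior_spec.create_random_action(n_agents)` returning a plain array, while newer ones moved sampling to `behavior_spec.action_spec.random_action(n_agents)`, which returns an ActionTuple. The IndexError above usually comes from indexing the ActionTuple as if it were an array; read its `.continuous` or `.discrete` fields instead. A hedged compatibility sketch (the function name is mine):

```python
def sample_random_actions(behavior_spec, n_agents):
    """Draw random actions for n_agents across mlagents-envs versions.

    Newer API: behavior_spec.action_spec.random_action(n) -> ActionTuple;
    index its .continuous / .discrete numpy arrays, not the tuple itself.
    Older API: behavior_spec.create_random_action(n) -> numpy array.
    """
    if hasattr(behavior_spec, "action_spec"):              # newer API
        return behavior_spec.action_spec.random_action(n_agents)
    return behavior_spec.create_random_action(n_agents)   # older API
```

Either way, the result is what you pass to `env.set_actions(behavior_name, action)` before stepping the environment.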
@SteliosStavroulakis 3 years ago
It doesn't seem to work, because of the mlagents version I guess. I have the same Unity version (2019.4) and mlagents package (1.5.0), but I'm wondering if the reason some functions do not exist (for example is_action_continuous()) is the Python mlagents version? I installed the Python mlagents and mlagents-envs packages at version 0.27.0.
@rachelstclair9897 3 years ago
Hello Stelios. I am using mlagents-envs version 0.23.0 and the same versions as the ones you listed above. You can skip is_action_continuous(); I'm pretty sure they took it out in later versions. There is probably a workaround, but if you print out the entire spec, it should give you more info. Sorry, it's hard to keep up with the awesome dev team in charge of ML-Agents!!
@SteliosStavroulakis 3 years ago
@@rachelstclair9897 Thank you for the swift response! Your tutorial was the only one that helped me setup a unity environment from scratch and train ml-agents with it using python (OpenAI Gym). I just can't thank you enough, truly. It finally works! 🙏😊
@SteliosStavroulakis 3 years ago
[update] I couldn't follow along with mlagents/mlagents-envs version 0.23.0, but it works perfectly with version 0.20.0 👌
@haraldgundersen7303 3 years ago
This was great. Glad I stumbled upon your channel. Looking forward to follow into the future 👍😊😎
@rachelstclair9897 3 years ago
drive.google.com/file/d/1NjMbSSbooId82Mu1XU0lcyUw19VGHfvN/view?usp=sharing
@harrivayrynen 3 years ago
Thank you for the ml-agents video. For me, reinforcement learning with ML-Agents & Unity seems hard; it's difficult to get a clear idea of how to make my own working projects. There are a couple of books on ML-Agents, but they all seem quite old. The best source so far is YouTube.
@beardordie5308 3 years ago
FYI, the volume is WAY too quiet. Thanks for the video.
@rachelstclair9897 3 years ago
Sorry about that
@mykewl5406 3 years ago
I am so grateful that this video exists! Thank you!
@mingukkim13 4 years ago
I followed every step exactly, and when I tested this using the 'Heuristic Only' behavior type, it worked. But when I changed it to 'Default', entered the command "mlagents-learn config/Roller_Ball_tutorial_config.yaml --run-id=RollerBallTest --force", and pressed the Play button, the RollerAgent does not move. Is there any solution?
@rachelstclair9897 4 years ago
Hi, did you set up the yaml file correctly? And do you have all the requirements installed correctly?
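For reference, the trainer config the command expects is a YAML file keyed by the agent's Behavior Name. A minimal sketch in the post-1.0 config format (the behavior name and the hyperparameter values here are illustrative placeholders, not the exact ones from the video):

```yaml
behaviors:
  RollerBall:                 # must match the Behavior Name on the agent
    trainer_type: ppo
    hyperparameters:
      batch_size: 64
      buffer_size: 2048
      learning_rate: 3.0e-4
    network_settings:
      hidden_units: 128
      num_layers: 2
    max_steps: 500000
    time_horizon: 64
    summary_freq: 10000
```

A silent agent during training is often a config problem (behavior name mismatch, or a file in the old pre-release flat format), so this is a good first thing to check.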
@mingukkim13 4 years ago
@@rachelstclair9897 Yes, and I solved this problem. I'll just leave this comment for those who get the same error as me: I downgraded the ML-Agents version from 1.7.2 to 1.5.0. I think the reason was a mismatch between the Python API and the Unity API. Anyway, thank you for your kind comment!