AttributeError: 'BehaviorSpec' object has no attribute 'observation_shapes'. How do I solve it?
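[A likely cause, assuming a newer mlagents-envs release (roughly 0.26 and later) where BehaviorSpec.observation_shapes was renamed: each behavior spec now carries a list of observation_specs, and each of those has a .shape. A minimal sketch of the updated access pattern:

from mlagents_envs.environment import UnityEnvironment

env = UnityEnvironment(file_name=None)  # attach to the running Unity editor
env.reset()

behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

# Old API: shapes = spec.observation_shapes
shapes = [obs.shape for obs in spec.observation_specs]  # newer API
print(behavior_name, shapes)

env.close()]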
@fahadmirza8497 a year ago
Can I also implement a Deep Q Network for training the agents?
@sexyscorch7548 a year ago
The screen with the game shows up after env=...., but the command prompt doesn't let me write any more code, and the popped-up game doesn't give me a wait button like it did for you.
@AlMgAgape a year ago
Can I use a Deep Q Network for training the agents? For example, to create an AI car agent for a racing game.
@FernandoSevilla-q9u a year ago
Awesome!! Really great content. I've learned a lot, thanks for sharing.
@cpthorfe a year ago
Thank you so much for this. Fascinated by machine learning and unity-ml seems like a great place to explore. You've saved me a lot of headaches already and I can't wait to dig through the rest of your videos.
@jandelisidro8994 2 years ago
Help me: the console said it can't connect on port 5004, and the command shell said "AttributeError: 'str' object has no attribute '_key'". Please help, I've been stuck on this for 4 days straight and can't find a solution.
@codyquist4908 2 years ago
"After a bit of crying..." So on point... This tutorial is superb! Thank you so much for the time and effort you have put into making this video, it is immensely helpful!
@ItsThe675 2 years ago
You didn't explain selling at market value. Selling at market doesn't sell at the last traded price; it sells within X% of where it's at, which could be higher or lower than the price was when you executed the market sell/buy. For such a long video I was really hoping you were going to get into everything: the order book, ADL, index prices, what everything on the screen means, and how it all comes into play together.
@noti88 2 years ago
Great run-through. Appreciate you touching on Auto-Deposit Margin; I was wondering about that.
@김평-c6n 2 years ago
This is the most awesome lecture I've ever seen. Thanks to you I could handle ML-Agents 2.0. I especially had a hard time with the Action Buffers and Heuristic parts. I'm looking forward to the next lecture on discrete actions. Thank you, thank you again for your painstaking effort and a great lecture!
@guesswho1705 2 years ago
Awesome explanation, wonderful video, thank you
@Joedon 2 years ago
Hi, this is a great tutorial, but I'm really confused about one thing. I'm running into an issue where my agent won't train properly. The game is very similar to this one but slightly different: I have an arena with 4 walls and a character. Both the agent and an apple spawn randomly, and the entire goal of the game is for the agent to run into the apple and collect it (just like this game). I have practically the same observations and code as you, but it doesn't seem to learn; it just insists on running in a random direction instead of learning to use the coordinates it's given for the apple. I thought "Observable Attribute Handling" might be the culprit, but I see you have it on Ignore as well. The only thing different about my game is that it doesn't reset after the agent picks up the apple; everything is handled from a controller script which resets the game every 10 seconds, no matter the state. I've tried it with and without that, and it doesn't seem to make a difference.
The full list of observations I'm passing (7 total):
- Apple count within a radius (of the entire map, for testing purposes)
- Nearest apple Vector3 (there's only one)
- The agent's own Vector3
Inputs are exactly the same as yours (2 continuous), except using controller.Move instead of a rigidbody.
Code:
- Agent movement: www.toptal.com/developers/hastebin/ocodezijon.csharp
- Spawner: www.toptal.com/developers/hastebin/usuzakehoc.csharp
Any ideas? Here's a GIF of it: gyazo.com/29c37778c005b6fafd49262cbd1b45ac ; this is after 1000+ generations. I ran it overnight in a similar setting with 5 million steps and it still wasn't working. I suppose the only real difference is that the chance of a random collision with the apple is much smaller to begin with, due to the gameplay space? Trying it with a large apple, the optimal strategy still seems to be running in a random direction and not using any of the observation data: gyazo.com/5c1cf2abb3d0d52dc4dc8a06fd6bea48 . Thanks, and sorry for the long comment.
@leozinho2r 2 years ago
Hi! Thanks so much for this tutorial, so so helpful! I've been applying it to my own custom environment. I've got everything working the way I want it except that for some reason when I reset after the first episode, I cannot move my player anymore, and the same happens when the agent is training... Any idea why that might be happening? Thanks in advance 🙏🏽
@rachelstclair9897 2 years ago
Hi, thank you! You might want to make sure that in behavior type you select 'Heuristic' and check during play that you can move the character. If you can't move it, then you need to check that your behavior heuristics are correct in the C# script.
@leozinho2r 2 years ago
@@rachelstclair9897 Yeah it's weird because it moves fine during the first episode (both when I manually control it in 'heuristic' mode and when the agent starts training in 'default' mode), but as soon as it resets it stops responding to actions (in both manual 'heuristic' mode and agent training 'default' mode)... I've checked my script but can't find what's going wrong :(
@leozinho2r 2 years ago
Hey, no need to reply to this I figured it out! I wasn't resetting my variable collisionInfo and apparently that was the problem 😅 Works now! 🙏🏽
@indrajitpal63 2 years ago
This was so informative! Thanks a ton for making this 🤩. I just wish I had come across this a few months earlier so I wouldn't have had to struggle to figure stuff out on my own; it would have saved a ton of time and energy. Thanks once again! 🎉
@jaechangko6574 2 years ago
Hi, quick question. How can I deal with the C# error "'Academy' is inaccessible due to its protection level" when I inherit the class?
@rachelstclair9897 2 years ago
Hi, it sounds like you have a version compatibility issue. This tutorial is using an older version of ML-Agents.
@jaechangko6574 2 years ago
@@rachelstclair9897 Awwwww I see. Thank you
@newcooldiscoveries5711 2 years ago
A truly enjoyable tutorial. Good length for the depth of information. Really well presented with all the needed details articulated clearly and concisely in a pleasant and friendly manner. You have a gift for teaching. Thanks!
@DoYouEvenStream 3 years ago
Very informative video! Thank you.
@brendantierney3787 3 years ago
Awesome video, thank you !
@brendantierney3787 3 years ago
Bit Corn lol!
@juanzamoracr 3 years ago
I tried this and had to change a couple of lines in agent.cs: I believe it's OnActionReceived(ActionBuffers actionBuffers) and Heuristic(in ActionBuffers actionsOut) instead of the float[] versions.
@rachelstclair9897 3 years ago
Hello Juan! This is an older tutorial on the previous ML-Agents version. The updated one for version 2.0 can be found here: kzbin.info/www/bejne/gILXY6GbZtychaM I'm glad you figured out the changes on your own!
@juanzamoracr 3 years ago
@@rachelstclair9897 Still good! Cool that you have a new video.
@howardb.728 3 years ago
Very well presented - natural and honest. Thank you for taking the time to help the community. It is appreciated.
@juleswombat5309 3 years ago
That was really cool, and rather inspiring on the application of machine learning to protein analysis. Makes me think. I do however have a couple of queries:
a) It sort of feels as though train_data still has the label indicator values left in the final column (1000), so the neural net may be seeing that last feature and possibly overfitting against it. When I remove that last column in val_data, I get a validation error of around 6%, not that different from logistic regression, and then a ROC_AUC of 0.69, because of an excess of false negatives (0s).
b) I thought in PyTorch we were supposed to call optimizer.zero_grad() before doing the loss.backward() calls? However, when I do put this call in, I could not get my network to consistently train and improve.
Anyhow, I found this really interesting stuff.
@rachelstclair9897 3 years ago
Hello Jules! Please see the notebook here: colab.research.google.com/drive/1NjMbSSbooId82Mu1XU0lcyUw19VGHfvN?usp=sharing The training data is pulled in with X and Y labels, so they are separate: X is the amino acid string, and Y is the label, bind or not. As for the optimizer, zero_grad() clears the gradients accumulated on the previous step so they don't carry over between batches; it can also slightly help memory and performance. I updated the Colab notebook to include it for you! You can also see the official docs here: pytorch.org/docs/stable/generated/torch.optim.Optimizer.zero_grad.html
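[For reference, a generic sketch of the call order being discussed; model, loader, and loss_fn are placeholders here, not the notebook's actual names:

import torch
from torch import nn

def train_epoch(model: nn.Module, loader, optimizer: torch.optim.Optimizer,
                loss_fn=nn.BCEWithLogitsLoss()):
    for x, y in loader:
        optimizer.zero_grad()    # clear gradients left over from the last batch
        pred = model(x)          # forward pass
        loss = loss_fn(pred, y)
        loss.backward()          # compute fresh gradients
        optimizer.step()         # apply the parameter update]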
@howardb.728 3 years ago
One of the friendliest and most honest walk-through tutorials on this rapidly changing and, at times, frustrating ML-Agents environment. Enjoyed it so much!!! Well done and thank you.
@fauzannrifai a year ago
agree mate
@juleswombat5309 3 years ago
That's kind of useful and informative. The ML-Agents examples tend to use multiple parallel environments for faster training. Will this build-and-execute process work against a Unity project environment which has multiple copies of the environment/agent instances, for faster training? I presume that it would recognise that?
@rachelstclair9897 3 years ago
Hello Jules, yes, you can absolutely use multiple training areas in the same build execution. I didn't show that here to keep it simple.
@gregorymaynard4443 3 years ago
This is really really cool. I have been meaning to try adding some learned AI to a few unity projects and watching this is a huge motivator! It seems like there is a lot to know, and not really having any idea how much learning I have yet to do makes it intimidating - at least for me. I didn't want to start with something that was outdated, so I was happy when I saw this v2.0 video. I appreciate it!
@melodicwuitartree 3 years ago
Thank you so much for this. I appreciate you and this so much. This helps set up a whole foundation that I'll need. I also love how intuitive and relatable you make the whole process and how clear you make things on parameters that need to be set. Computer science is a male dominated space and it's refreshing to see more women becoming more included and involved in the CS community. You've made this whole ML-Agent experience that much more accessible and I'm very grateful for that.
@rachelstclair9897 3 years ago
Thank you so much David! I really appreciate your feedback. It's comments like this that make the time I put into the videos worthwhile. Please show us what you come up with! I love to see what other people are doing after this jumping-off point!
@lafanzo1919 3 years ago
I get you, but your screen is too small; I can't really see, ma'am.
@rachelstclair9897 3 years ago
Thanks for the feedback! I'll try to address this in an updated video. Is it the text that is too small?
@CosmicSeeker69 3 years ago
That was going great, Rachel, until we got to the bit about the trading password. If this is a beginner's guide, you needed to have included that in the instructions (because that's where I clicked away). Personally, I was looking for up-to-date info on funding with fiat; I'm not going to set up on another exchange just to buy BTC when I can fund directly. I hope this helps you structure future content.
@rachelstclair9897 3 years ago
Hello! Thank you for your feedback. Can you mark the time in the video you are referring to? You said the password instructions were not clear. I'd like to clarify that for you. I also understand the fiat portion is a big part of it. I'm going to sort these issues out in an updated video!
@rainbowsunshinekitty3953 3 years ago
She told us in the description what she was covering. Excellent video 😊!
@momomo2298 3 years ago
Thank you so much! It helps a lot!
@lovebitcoin813 3 years ago
Thanks babe
@JimFreedom2007 3 years ago
This video is BLANK but does have audio.
@rachelstclair9897 3 years ago
Hi Jim Freedom! Thanks for your comment. The video seems to be working for myself and others. Perhaps make sure your browser is up to date and you don't have specific ad-blockers that may block the content?
@WentRumble 3 years ago
This is the most useful guide on KuCoin I have seen on YouTube, even though I lost my entire investment.
@rachelstclair9897 3 years ago
Thank you Went Rumble! Hopefully you learned some useful strategies on risk aversion in the process! Good luck to you.
@TwodiVix 3 years ago
I have a problem with the "float[]" thing. I copied the code from GitHub and it gives this message: "Assets\agent.cs(69,26): error CS0115: 'RollerAgent.Heuristic(float[])': no suitable method found to override".
@andy1858 3 years ago
Hey... do you know how to fix this now? :c
@donglinwang5874 3 years ago
Hi Rachel. Thank you so much for the tutorial. When I clicked the Play button for the first time, the ball did not seem to respond to my keyboard input. The console shows me the following message: "Couldn't connect to trainer on port 5004 using API version 1.0.0. Will perform inference instead." Is there a way for me to fix this?
@donglinwang5874 3 years ago
I have managed to fix the problem. For some reason, I did not add the "Decision Requester" component to the sphere object. The warning message itself is to be expected, since Rachel also ran into the same message in the video.
@rachelstclair9897 3 years ago
Hi Donglin, thanks for your comment and your fix! Be careful with this warning because it can turn into an error. It's usually negligible, but in some instances you might run into versioning issues. If you are having trouble training and have exhausted every other possible option, I'd recommend updating your comms package. More info on that can be found on the ML-Agents GitHub.
@ThiwankaJayasiri 3 years ago
Great stuff, super helpful to crack the code!
@beardordie5308 3 years ago
Version compatibility issues are exactly what has kept me away from playing more with ML-Agents to date.
@alexstankovic2947 3 years ago
Oh my, oh my, so precise and to the point. Great work and keep it coming! 🤩
@veimars1567 3 years ago
Thanks to your video I got more than halfway through :) but I got stuck at minute 21:00 :( but thanks anyway :,)
@rachelstclair9897 3 years ago
Thank you for your comments! A bit has changed since the new ML-Agents package update. I'll be posting a new video on the most recent version soon!
@veimars1567 3 years ago
@@rachelstclair9897 Thank you very much I will be looking forward to it :,)
@893uma3 3 years ago
Is it possible to use ML-Agents to create strategic AI that targets specific individuals, such as detecting the habits of players and exploiting their weaknesses in 3D action games?
@rachelstclair9897 3 years ago
Hi, I'm sure this is possible if you put your mind to it. You might want to look more at using tags on agents and objects. You can also look into side channels in mlagents-envs if you need some more advanced methods. Good luck!
@zezooalzezoo5200 3 years ago
Thanks, can I use it for enemy AI?
@rachelstclair9897 3 years ago
Absolutely! If you get it working, make sure to drop a link so we can see the results!
@zezooalzezoo5200 3 years ago
@@rachelstclair9897 I will, of course. It's still in progress; I need to learn more about AI behavior and logic.
@mingukkim13 3 years ago
Thanks for the tutorial video! I have a question about the code in expected_sarsa.py, line 41. I changed this code:
action = env.behavior_specs[behavior_name].create_random_action(len(decision_steps))
to:
action = env.behavior_specs[behavior_name].action_spec.random_action(len(decision_steps))
since there have been some updates in base_env.py. But this code raises the error below and I can't find a solution:
IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
Is there any solution?
@raknak7667 3 years ago
I have the same issue, please can someone help us?
@rachelstclair9897 3 years ago
Hi guys, since the new ML-Agents update, there is some code in my files that is no longer supported. I'm working on releasing some new videos with version 2.0. You can't use `.create_random_action` in the newer versions; instead you can use `.random_action(n)`, where n is the number of agents to generate actions for. Hope this helps!
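[A minimal sketch of the newer flow, assuming mlagents-envs 0.26 or later: random_action takes the number of agents and returns an ActionTuple, which is what env.set_actions expects. If downstream code still indexes the action as a plain NumPy array, that may be what raises the IndexError quoted above; the per-agent arrays live under .continuous and .discrete:

from mlagents_envs.environment import UnityEnvironment

env = UnityEnvironment(file_name=None)   # attach to the running editor
env.reset()
behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

decision_steps, terminal_steps = env.get_steps(behavior_name)
action = spec.action_spec.random_action(len(decision_steps))  # argument = number of agents
print(action.continuous)                 # per-agent continuous actions, if any
env.set_actions(behavior_name, action)
env.step()
env.close()]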
@SteliosStavroulakis 3 years ago
It doesn't seem to work, because of the mlagents version I guess. I have the same Unity version (2019.4) and mlagents package (1.5.0), but I'm wondering if the reason some functions don't exist (for example is_action_continuous()) is the Python mlagents version? I installed the Python mlagents and mlagents-envs packages at version 0.27.0.
@rachelstclair9897 3 years ago
Hello Stelios. I am using mlagents-envs version 0.23.0 along with the same versions you listed above. You can skip is_action_continuous(); I'm pretty sure they took it out in later versions. There is probably a workaround, but if you print out the entire spec, it should give you more info. Sorry, it's hard to keep up with the awesome dev team in charge of ML-Agents!!
@SteliosStavroulakis 3 years ago
@@rachelstclair9897 Thank you for the swift response! Your tutorial was the only one that helped me set up a Unity environment from scratch and train ML-Agents with it using Python (OpenAI Gym). I just can't thank you enough, truly. It finally works! 🙏😊
@SteliosStavroulakis 3 years ago
[update] - I couldn't follow along with mlagents/mlagents-envs version 0.23.0 but it works perfectly with version 0.20.0 👌
@haraldgundersen7303 3 years ago
This was great. Glad I stumbled upon your channel. Looking forward to following you into the future 👍😊😎
Thank you for the ML-Agents video. For me, reinforcement learning with ML-Agents and Unity seems hard: it's difficult to get a clear idea of how to make my own working projects. There are a couple of books on ML-Agents, but they all seem quite old. The best source so far is KZbin.
@beardordie5308 3 years ago
FYI volume is WAY too quiet. Thanks for the video.
@rachelstclair9897 3 years ago
Sorry about that
@mykewl5406 3 years ago
I am so grateful that this video exists! Thank you!
@mingukkim13 4 years ago
I followed every step exactly, and when I tested this using the 'Heuristic Only' behavior type, it worked. But when I changed it to 'Default', entered the command "mlagents-learn config/Roller_Ball_tutorial_config.yaml --run-id=RollerBallTest --force", and pressed the Play button, the RollerAgent does not move. Is there any solution?
@rachelstclair9897 4 years ago
Hi, did you set up the yaml file correctly? And do you have all the requirements installed correctly?
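[For comparison, a minimal trainer config sketch in the newer (ML-Agents Release 10+) YAML layout; the behavior name ("RollerBall" here is an assumption) must match the Behavior Name set on the agent's Behavior Parameters component, or the trainer may fall back to default settings:

behaviors:
  RollerBall:                 # assumed name; must match the Behavior Parameters component
    trainer_type: ppo
    hyperparameters:
      batch_size: 64
      buffer_size: 12000
      learning_rate: 3.0e-4
    network_settings:
      hidden_units: 128
      num_layers: 2
    reward_signals:
      extrinsic:
        gamma: 0.99
        strength: 1.0
    max_steps: 500000
    time_horizon: 64
    summary_freq: 10000]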
@mingukkim13 4 years ago
@@rachelstclair9897 Yes, and I solved this problem. I'll just leave this comment for those who get the same error as me: I downgraded the ML-Agents version from 1.7.2 to 1.5.0. I think the reason was a mismatch between the Python API and the Unity API. Anyway, thank you for your kind comment!