I have always had a question about ML-Agents: agents select actions randomly at the beginning of training. Can we incorporate human intervention into the training process to make them train faster? Is there a corresponding method in ML-Agents? Looking forward to your answer.
@neuro_corgi 2 years ago
Actually, ML-Agents can be used for imitation learning, and imitation learning is a form of supervised learning, so you can guide training with recorded human demonstrations.
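For example, you can record human play with the Demonstration Recorder component in Unity to produce a .demo file, then point GAIL and/or behavioral cloning at it in the trainer config. This is only a rough sketch: the behavior name and demo path below are placeholders, and the exact keys vary between ML-Agents releases.

```yaml
behaviors:
  MyBehavior:                # placeholder; must match the Behavior Name on the agent
    trainer_type: ppo
    hyperparameters:
      batch_size: 512
      buffer_size: 10240
      learning_rate: 3.0e-4
    network_settings:
      hidden_units: 128
      num_layers: 2
    reward_signals:
      extrinsic:
        gamma: 0.99
        strength: 1.0
      gail:                  # learned reward that pushes the policy toward the demos
        strength: 0.5
        demo_path: Demos/ExpertDemo.demo   # placeholder path to the recorded .demo file
    behavioral_cloning:      # directly imitates the demonstrated actions early in training
      demo_path: Demos/ExpertDemo.demo
      strength: 0.5
    max_steps: 500000
```

Training then starts the usual way, e.g. mlagents-learn trainer_config.yaml --run-id=imitation_run, and the agent no longer has to rely purely on random exploration at the start.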
@TylerGreen 4 years ago
Really nice stuff! I plan on using ML for a future vid (probably in 2 vids) and have been watching your vids. They're super helpful!
@ProjectWander 4 years ago
This is amazing. I just started building some trees in Behavior Designer, but without a TON of code it's always going to look super robotic and end up doing the same predictable stuff every time. This looks like quite the solution.
@Rizzmaster9001 6 months ago
How did it work out for you?
@ProjectWander 5 months ago
@@Rizzmaster9001 I ended the project, but it could have worked. It takes a LOT of training.
@choco2482 2 years ago
Is it possible to export my ML model for use in other languages?
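In case it helps: recent ML-Agents versions export the trained policy as an ONNX file, and ONNX Runtime has bindings for C#, C++, Java, Python, and JavaScript, so the model itself is not tied to one language. Below is a hedged Python sketch; the file path is a placeholder, and the input/output tensor names differ between releases, so they are listed rather than hard-coded.

```python
# Sketch: loading an ML-Agents policy exported as ONNX outside Unity.
# Assumes onnxruntime is installed and all model inputs are float32.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("results/my_run/MyBehavior.onnx")  # placeholder path

# Inspect what the exported graph actually expects and produces.
for inp in session.get_inputs():
    print("input:", inp.name, inp.shape, inp.type)
for out in session.get_outputs():
    print("output:", out.name, out.shape)

# Feed dummy zero observations, replacing dynamic batch dimensions with 1.
feeds = {}
for inp in session.get_inputs():
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    feeds[inp.name] = np.zeros(shape, dtype=np.float32)

outputs = session.run(None, feeds)
print("raw policy outputs:", [o.shape for o in outputs])
```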
@lucutes2936 3 months ago
Any alternatives?
@anonymosranger4759 4 years ago
Turns out he is a PyTorch guy. Where is the TensorFlow squad? (I am TensorFlow!)
@juleswombat5309 3 years ago
But doesn't ML-Agents use TensorFlow anyway? I presumed all the use of TensorBoard implied a TensorFlow backend. The stack is pretty much hidden behind the ML-Agents hyperparameters and config files anyway. As a novice in deep ML, I started with Keras, and hence now TensorFlow 2.0. But I am seeing a lot of the new deep RL algorithm examples being developed and explained through PyTorch-based networks, with easier control over which sub-networks are frozen. I perceive that academia has some preference for PyTorch.
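On the "which sub-networks are frozen" point: in PyTorch that usually just means switching off requires_grad for a module's parameters, and TensorBoard itself is backend-agnostic, since PyTorch ships its own SummaryWriter; newer ML-Agents releases actually moved the trainers from TensorFlow to PyTorch. A minimal sketch with made-up layer sizes and paths:

```python
# Minimal sketch of freezing a sub-network in PyTorch and logging to
# TensorBoard. The model, sizes, and paths here are invented for illustration.
import torch
import torch.nn as nn
from torch.utils.tensorboard import SummaryWriter

model = nn.Sequential(
    nn.Linear(8, 64),  # "encoder" we want to freeze
    nn.ReLU(),
    nn.Linear(64, 2),  # "head" we keep training
)

# Freeze the encoder: its parameters stop receiving gradients.
for param in model[0].parameters():
    param.requires_grad_(False)

# Only hand the still-trainable parameters to the optimizer.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

writer = SummaryWriter("runs/freeze_demo")  # TensorBoard works the same with PyTorch
x, y = torch.randn(32, 8), torch.randn(32, 2)
for step in range(10):
    loss = nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    writer.add_scalar("loss", loss.item(), step)
writer.close()
```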