Seems like adding a facing reward would help stabilize the rotation.
@LolKiller_UA Жыл бұрын
I had the same thought!
@KushagraPratap Жыл бұрын
but he's Asian, so add a not-facing punishment instead
@memoryman15 Жыл бұрын
I was gonna say the same thing, he should have made it hover correctly before asking it to move from point to point.
@prophangas Жыл бұрын
Or a negative reward for every spin
@elmatichos Жыл бұрын
Or maybe a directional speed through the point? So more purposeful thruster orientation gets rewarded
@redpug5042 Жыл бұрын
you should also have a negative reward for high angular velocities, that way it has a reason to be more still
@nullumamare8660 Жыл бұрын
Also, allow more actions than just "turn on and off the thrust of these 4 rockets". If the AI could aim the rockets (like when you paddle backwards in a canoe to turn it), it would have better control over its rotation.
@thicctapeman9997 Жыл бұрын
Yeah, and maybe add a time reward so it needs to learn how to improve speed; that might cause it to do more "Iron Man"-like flying
@redpug5042 Жыл бұрын
@@nullumamare8660 well i think it does have the ability to move each limb. It might be able to manage thrust, but i'm pretty sure it's only using limb movements.
@Jashtvorak Жыл бұрын
Wrists need to have thrust vectoring as well as the whole arm 🙂
@tomsterbg8130 Жыл бұрын
@@nullumamare8660 This sounds like a good idea, and I think it'd be amazing if the thrust could be throttled instead of just on and off. However, the more complex a model is, the bigger the brain and the more time and resources it needs. You saw how good and stable the drone was, and that's because it has the same inputs but only 4 outputs for the engines, while the Iron Man has 4 engine outputs plus rotation for each limb.
@jaceg810 Жыл бұрын
Theory on why it flies so slow: its original training was based on hovering around one point, so when it gets a new destination, it still assumes it should arrive there without momentum to better stay at that spot. Then it got a little training with randomly moving spots; having momentum there is bad too, since it's actually way more probable that you need to turn around than that you need to keep going. This, along with little time-based punishment, results in a slower arrival.
@ianbryant Жыл бұрын
Yeah I would try training it with a list of like 6 points that it has to hit in order. As soon as it hits the first point, remove that point and add a new random point to the end of the list.
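Roughly what I mean, sketched in Python (the 6-point window and the `random_point` helper are just placeholders, not the actual project code):

```python
import random
from collections import deque

def random_point(bounds=20.0):
    # hypothetical course volume: a cube of side 2*bounds centered at the origin
    return tuple(random.uniform(-bounds, bounds) for _ in range(3))

# start with a rolling window of 6 upcoming targets
targets = deque(random_point() for _ in range(6))

def on_target_reached():
    """Call this when the agent touches the first point in the list."""
    targets.popleft()               # consume the point that was just hit
    targets.append(random_point())  # keep the lookahead window at 6 points

# the first few targets could also be fed to the agent as observations,
# so it can plan its momentum through them rather than stopping at each one
```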
@morgan0 Жыл бұрын
Yeah, also use line paths instead of dots to hit. As is, the space in between would give it lower reward, so even a model that takes into account future/total reward would not like the space in between.
@Delta1nToo Жыл бұрын
Additionally, I think it would benefit from having its senses limited so that it only knows where the target is by looking at it. If it's gonna fly like Iron Man, it must also have the same senses as Iron Man.
@gageparker Жыл бұрын
@@Delta1nToo Yeah, I think that may help the spinning as well. Should probably have some penalty points in there for too much spinning.
@Jlewismedia Жыл бұрын
Yep, AI doesn't have a sense of time (unless you give it one); as long as it's completing its goals, it doesn't care if it takes 1000 years.
@danieltoomey1653 Жыл бұрын
give it access to the next 2 points so it can find a vector between them, also give it incentive to be faster
@GAcaley321 Жыл бұрын
Agreed it needs to be able to see beyond one point to “fly” a course.
@BigGleemRecords Жыл бұрын
Lastly, give it rewards for not spinning, and negative incentives every time it spins
@bryanwoods3373 Жыл бұрын
Spinning is only a problem because we think it is. Part of what makes these AI learning experiments interesting is how the system finds solutions without our preconceived limitations. Fixing other factors and improving the flight system could very well fix the rotation problem. Or the AI could rotate in a straight line like a bullet.
@BigGleemRecords Жыл бұрын
@@bryanwoods3373 That's easy to understand, but in all practicality, if we were going to implement this in reality, we wouldn't want to spin; we would want to fly straight. As a simulation of Iron Man flying, it should fly like him as well as look cool doing it. If the AI mastered its control, it could easily go much quicker and more precisely by just flying straight. It needs positive and negative flight-control incentives, a clear path, as well as a timer to reach its potential.
@bryanwoods3373 Жыл бұрын
@BigGleemRecords The video isn't about implementing this into reality. If we were, we'd be using more robust systems that would have more control systems and likely build on human testing or include a human analog as part of the reward system. The spinning is the last thing you want to focus on since fixing everything else will address it.
@p529. Жыл бұрын
To combat the agent being slow and rotating, you could add 2 other negative rewards: every full rotation deducts points, which would likely reduce the spinning to a minimum, and then also give it, say, 30 seconds to complete a course but deduct points for each second spent too. The agent might learn that the quicker it goes, the fewer points it gets deducted. I think revisiting this with these 2 additional criteria would be pretty interesting.
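Something like this per-step shaping, as a rough Python sketch (the coefficients are invented and would need tuning; the real project is in Unity, so this is just the idea):

```python
import math

def step_reward(dist_to_target, yaw_delta_rad, dt,
                k_dist=1.0, k_spin=0.5, k_time=0.1):
    # closeness to the current target is still the main signal
    reward = k_dist / (1.0 + dist_to_target)
    # roughly k_spin points lost per full rotation accumulated this step
    reward -= k_spin * abs(yaw_delta_rad) / (2.0 * math.pi)
    # every second of episode time costs a little, so faster runs score higher
    reward -= k_time * dt
    return reward
```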
@Drunken_Hamster Жыл бұрын
I think part of the reason it has such a hard time is because it doesn't quite have the detailed control vectors that Iron Man does. If you watch the hovering and flight scenes in the first movie, you'll see he has little compressed air nozzles, jet redirectors, and control surfaces on the boots to help stabilize. He also obviously has flaps on his back, and in later iterations of the armor he has backpack-style thrusters so his COG can be below the thrust point. If the game simulates air drag then add the flaps and stuff, too, but the minimum I think you need to add are the micro thrusters, back jets, and elbow/knee joints.
@rndmbnjmn Жыл бұрын
I was looking for this comment, even the clips in this video show control surfaces helping to stabilize Tony's flight.
@flyinggoatman Жыл бұрын
Can we just admire how a few years ago AI struggled to play a 2D game and now this. It's really remarkable.
@michaeln7381 Жыл бұрын
You should've added more or fewer points depending on how much time they take to get to the target; that's what would fix the flight.
@nemonomen3340 Жыл бұрын
That's a good solution, but I'd also reward it for facing toward the target to keep it from spinning.
@michaeln7381 Жыл бұрын
@@nemonomen3340 with those 2 things it should learn to fly perfectly… or spin at the right angle but that would be slower so that won’t happen.
@bumpybumpybumpybumpy Жыл бұрын
I'd love to see you tackle AI in a preexisting game. I dunno, throw half life at it and see what sticks.
@MrAmalasan Жыл бұрын
Your reward function could be modified to get what you want. Add in score for time, add in penalty for excessive rotations/spinning
@grimcity Жыл бұрын
This is my first time viewing your work, and I'm struck both by how incredibly cool this is and your f'ing hilarious sense of humor. I'm always the last to know, I guess. Really fantastic work, fam.
@ArtamisBot Жыл бұрын
I would make the reward relative to the forward direction toward each node to promote a flying posture and stop the spinning. If you added the next node as an input as well, it might be a bit better at handling its own momentum out of each node.
@lombas3185 Жыл бұрын
* proceeds to float in place facing the point without moving at all *
@CharthuliusWheezer Жыл бұрын
Another thing that you could add to this would be random perturbations like throwing blocks at the agents so that they learn to recover from instability like the drone had at the end. Would you be willing to release the source files for the project and then do a compilation of different people's attempts at improving the result? I think the learning the actual Iron man style of flying might be possible but if you don't want to do all the work on that it could be fun to see what the community comes up with.
@reendevelops Жыл бұрын
Another banger. Always love the way you use memes to make it funny!
@nogoodgod4915 Жыл бұрын
After getting so many good suggestions on improving the ai, you have to make a part two now. And make it more of a challenge.
@SUED145 Жыл бұрын
it must have control over the propulsion force
@fodderfella Жыл бұрын
Seems like the rotation is so that it can use its angular momentum (like a gyroscope) as a stabilization method.
@geterdone4936 Жыл бұрын
But he looks like a sped kid on a tricycle
@Flippin_Tables_Like_Jesus Жыл бұрын
Let it know where the next goal point is going to be after the one it's currently at disappears and add a reward for getting to the next goal faster. That way it'll learn to keep the momentum between goals instead of learning to slow down before hitting goals so it doesn't overshoot them and get punished.
@McShavey Жыл бұрын
LOL every time I watch your videos I laugh at the editing. Excellent.
@WyrdNexus_ Жыл бұрын
Maybe use radians instead of vectors for rotation? To make this effective you'll need three reward mechanics: facing, distance to point, and time.
1. Hover: (-score distance from pointA)
2. Hover: (-score distance from pointA) and Face (-score angle offset from direction to pointB)
3. Hover Time: (+score time on pointA) and Face (+score time [very close] to direction to pointB)
4. Race: Time (-score duration from start to pointA) and Face (+score time [very close] to direction to pointA)
5. Race: Time (-score duration from start to pointA then B then C) and Face (+score time to next destination point)
6. Add more and more points until you get to around 10 in one course, and train them on that for several days.
7. Hover & Race: Distance (from next point), Face (offset from next point), Time (+score for time on point). Move a single point randomly every n seconds. Once they touch the point, set the facing direction target randomly until the point moves again. Now every time the point moves, they get a high score for immediately facing the point, getting there as quickly as possible, then staying there as long as possible while picking a new facing direction.
8. Bring it all together, and make another long course of 10 points or so, but remove all the rewards except completion time.
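As a rough Python illustration of one of the middle stages (hover on point A while facing point B), with invented weights and names:

```python
import numpy as np

def hover_and_face_reward(pos, fwd, point_a, point_b, on_point_time,
                          w_dist=1.0, w_face=0.5, w_hold=0.1):
    """Illustrative 'hover on A while facing B' shaping. Inputs are assumed to be
    numpy arrays, with fwd a unit forward vector; weights are guesses to tune."""
    dist = float(np.linalg.norm(point_a - pos))
    to_b = point_b - pos
    to_b = to_b / (np.linalg.norm(to_b) + 1e-8)
    facing = float(np.dot(fwd, to_b))        # 1 = looking straight at B, -1 = away
    return -w_dist * dist + w_face * facing + w_hold * on_point_time
```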
@xreylabs Жыл бұрын
Bro, you just validated a theory I've had for a long time. I'm not sure if I'm saying this right, so please bear with me. All kinetic movement is always multi-layered. There is probabilistic correctness at every axis. Therefore it is necessary for every joint to learn to work together. You need a series of cooperating routines that all independently learn and get rewarded by a higher system. For drones it would look like a computer flying with a full flight controller managing the power at every motor, with an operator that verbalizes instructions. Love your work here (subscribed!)
@raeraeraeraeraerae Жыл бұрын
glad your brain cells and hairs grew back :)
@geterdone4936 Жыл бұрын
You should add a style reward so he’s not spinning around like a neurodivergent fish and you should also add a speed reward so it’s not taking seven years to tickle the next ball
@MrAndroGaming Жыл бұрын
I think adding the wrist rotation joint in the hands and ankle joints in the feet would help stabilization a lotttt if trained enough! (As a bonus give it variable thrust... The ability to control how much thrust to output from each of the boosters independently and individually... But it will require a lotttt of training too)
@eldadyamin Жыл бұрын
Amazing work! I suggest adding another training step - fastest route. Eventually, the model will fly as intended. Good luck!
@SimplyElectronicsOfficial Жыл бұрын
you should have continued the first training session until it stopped spinning
@tejasmishra7832 Жыл бұрын
I guess the rotation was due to the movement allowed in the x and y directions... it was trying to move in a 3D space with just 2D controls and couldn't comprehend that it can turn too.
@oouziii4679 Жыл бұрын
Amazing, this is the kind of model I wanna make. Great video
@zettabitepragmara4031 Жыл бұрын
ayo new gonkee vid? time to watch instead of woman
Жыл бұрын
W comment
@starfleetau Жыл бұрын
As others have said, part of it is the physical points you're taking into account. You're wanting 6 degrees of freedom, which means you need to take all 6 degrees properly into account. If you don't want to use heading etc. because of the discontinuous numbers (though that could have been corrected for in programming), you need to look at it in terms of relative velocities and program it to do its best at keeping certain velocities as low as possible. That's how things like the PIDs in an Arducopter work for navigation: you take into account your standard X, Y, Z velocities, but then you also have horizontal (yaw) rotation velocity, pitch rotation velocity and roll rotation velocity. If you don't want these to be in radians or degrees, have them in m/s or g's; that gives you the ability to ask the AI to fine-tune the model. The AI also really needs to be allowed to change its positioning more; it seemed in every one of those that it ended up basically becoming a rigid body, which kind of nullifies half the point of the test, and that appears to be because each thruster is always giving out a constant thrust. It's hard to tell without seeing the full code base in Unity. Great project nevertheless, love these vids.
@ohctascooby2 Жыл бұрын
So Stark’s space version of his suit has a booster set of thrusters mounted high in his back. You need to include those and make them the primary lift thruster. That allows you to use the arms and legs to fine tune the location. You also (likely) need more dexterity in the arms and legs.
@h0ckeyman136 Жыл бұрын
I love the low attention span shade and for that, a sub
@benjaminlines6387 Жыл бұрын
Finally! Really like your videos
@tomsterbg8130 Жыл бұрын
I think the AI would really benefit if you reduced the reward the longer it takes to achieve its goal, timed by a max timeout of 20 seconds and ending early when the AI reaches the goal. This would remove inefficiencies in the more complex models, such as the Iron Man spinning, because it would realize that spinning makes it slow. Also, if you can't make a system that changes the reward to make the AI better, or it just caps out at a certain performance, it might need a bigger brain with more inputs and rethought outputs. Sometimes the brain is just the limit, or the AI is blind to something, for example the strength of gravity. There's a reason humans have g-force-detecting fluid pockets in the head, plus eyes and ears and skin nerves and all the other things that allow us to perceive the world. The more info you give it, the better, but it'll require more resources, so it's a balance. A bigger brain is also better, but then again the calculation is more expensive.
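A minimal sketch of that time-decayed reward idea in Python (the 20-second timeout and the linear decay are assumptions, not from the video):

```python
MAX_EPISODE_SECONDS = 20.0   # assumed timeout, not from the video

def goal_reward(time_taken, base_reward=1.0):
    # reward shrinks linearly the longer the agent dawdles:
    # reaching the goal at t=0 is worth base_reward, at the timeout almost nothing
    time_taken = min(time_taken, MAX_EPISODE_SECONDS)
    return base_reward * (1.0 - time_taken / MAX_EPISODE_SECONDS)

def should_end_episode(time_taken, reached_goal):
    # end early on success, otherwise cut the episode off at the timeout
    return reached_goal or time_taken >= MAX_EPISODE_SECONDS
```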
@lolcat69 Жыл бұрын
If you do something like this in the future, you could add a reward for facing forwards. It's going to take a bit longer to train, but damn, it is going to be more stable.
@Krixsix Жыл бұрын
fr this video is one of the best YT vids of all time
@EigenA Жыл бұрын
Great job, a lot of room to continue developing your algorithm, but I love the initiative and the results are fun to watch.
@michaelganzer3684 Жыл бұрын
This might be a good demonstration on how the heater element of my old baking oven works. Gets the job done, but only readjusts when falling under or climbing over certain temperature thresholds.
@Dicklesberg Жыл бұрын
Would be really cool if you made this into a series of a few videos, where you take ideas from the comments and other ideas you come up with to improve the flying. Basically, I think the "meta answer" is to think through all the properties you would want to see in a perfect flight, and then build all of those things into the reward function. Other commenters have mentioned penalties for taking too long, penalties for excessive rotations, etc. One approach would be to compute the total "energy" used by all the rotors, and make the reward function "gets to the destination in the least amount of time using the least amount of total energy". This would probably have the side benefit of reducing the extra rotation, since that is mostly a "waste" of energy.
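A quick Python sketch of what that time-plus-energy reward could look like (the weights, the thrust-history format, and the step size are all made-up placeholders):

```python
def episode_reward(reached, time_taken, thrust_history,
                   w_time=0.05, w_energy=0.01, dt=0.02):
    """'Get there in the least time using the least energy' as a single score.
    thrust_history is assumed to be a list of per-step lists of thruster outputs."""
    # crude energy proxy: sum of squared thruster outputs over the episode
    energy = sum(sum(t * t for t in step) for step in thrust_history) * dt
    reward = 10.0 if reached else 0.0
    return reward - w_time * time_taken - w_energy * energy
```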
@Crazyclay78YT Жыл бұрын
I think it's less about wasting energy and more about wasting time
@AlexJoneses Жыл бұрын
The spinning is because of stability: the way it's spinning kind of makes it so it just stabilizes itself with a constant output rather than needing to vary every single thruster individually. To avoid this, when making the path rewards, add a reward for not rotating, or for facing the target.
@fardouk Жыл бұрын
For the third step (on the map), change your rewards from "how close it is to the point" (it knows how to do that) to "how fast it completes the path" and run some generations. It's like when you learn a track in TrackMania: first you try to finish it, then, once you can do that, you try to set a new personal record.
@ChipboardDev Жыл бұрын
MLAPI is a blast, love it. This inspired me to (hopefully) do my next AI experiment soon.
@ephedrales Жыл бұрын
- You should add a sensor for the position of the second-next objective, the one right after the current one; this way the model will be able to assess whether it needs to slow down or keep momentum after meeting the objective.
- During training, instead of placing random points, make it follow the discrete points of a random spline or Bézier curve, looping around. It will emphasize keeping momentum. You can even switch between both, curve and pure random, for a more resilient system.
- Maybe add a speed reward too.
- You could feed a few neurons from the output back in as inputs; this way you create some kind of short-term memory for the model. Having some memory isn't such a bad thing, is it? Just imagine steering a car with no short-term memory.
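For the spline/Bézier suggestion, a small Python sketch of how the training targets could be sampled along a random cubic Bézier curve (the bounds and point count are arbitrary):

```python
import random

def bezier_point(p0, p1, p2, p3, t):
    # standard cubic Bezier evaluated per coordinate
    u = 1.0 - t
    return tuple(u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

def random_curve_targets(n_points=10, bounds=20.0):
    # pick four random control points and sample the curve between them,
    # so consecutive targets naturally reward carrying momentum
    ctrl = [tuple(random.uniform(-bounds, bounds) for _ in range(3)) for _ in range(4)]
    return [bezier_point(*ctrl, t=i / (n_points - 1)) for i in range(n_points)]
```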
@cookesam6 Жыл бұрын
I like these explanations bro. This is really decent content, thanks for putting in the effort with your videos
@jag8926 Жыл бұрын
The rotation is to account for the lack of degrees of rotation in the limbs. It can't even lean forward to fly because it isn't able to position the hand thrusters under the center of gravity. It's also missing critical mechanics such as being able to throttle the thrust. Sure, it could fly eventually with full power all the time, but any small mistake it makes in the learning process is amplified because the momentum is already moving in that direction and it has no way to stop it, only reposition the hand to start working the momentum back towards the reward. This is another reason for the rotation.
@simonqwadjke-t5g Жыл бұрын
I see a good number of people suggesting to score the rotation. I instead propose a "nausea" value that would increase exponentially with more or faster rotation; this nausea value would then add noise and randomness into the limb movements. Short and simple: more/faster rotation decreases dexterity, just like in real life.
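A toy Python version of that "nausea" mechanic, just to illustrate the idea (the growth/decay constants are invented):

```python
import math
import random

class Nausea:
    """Sketch: sustained or fast spinning degrades dexterity by adding noise
    to the limb actions. All constants are made up for illustration."""
    def __init__(self, growth=0.5, decay=0.2, max_noise=0.3):
        self.value = 0.0
        self.growth, self.decay, self.max_noise = growth, decay, max_noise

    def update(self, angular_speed, dt):
        # grows roughly exponentially with spin rate, recovers slowly when still
        self.value += (math.expm1(self.growth * angular_speed) - self.decay * self.value) * dt
        self.value = max(self.value, 0.0)

    def corrupt(self, actions):
        # the dizzier the agent, the noisier its limb commands
        noise = min(self.value, 1.0) * self.max_noise
        return [a + random.gauss(0.0, noise) for a in actions]
```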
@SaltyMcSaltyPants Жыл бұрын
You could try adding a time based reward (a 0 second score should be considered bad as well). A stability based reward could also help with training in the beginning 🤔
@comproprasad6438 Жыл бұрын
Saw that you had some parameters related to velocity, which I think depend on the direction. I haven't done much machine learning or 3D animation programming myself, but I think you need to train it on 2 random points and optimize for speed instead of velocity, plus the time taken to reach the destination.
@dipereira0123 Жыл бұрын
Dude, for real, you should have a premium version of your channel with the walkthrough. This is the kind of content that some people like me can only dream of.
@Dwaggiemorph Жыл бұрын
I would like to see a continuation of the Iron Man with a few changes. First, it should only be rewarded when facing the target; hitting it with its back should count against the reward. Secondly, make the head able to turn towards the reward, with limited human movements ofc. Thirdly, a speed reward, the faster the better, to a limit ofc. Honestly, I would love to do this kind of AI learning myself but I can't stay focused on anything for more than 5 min... :P So thank you for doing what I want to do but can't. It might not be by our hands, but we still get to see the result of the labour.
@elyassaci9781 Жыл бұрын
Man ur so fkn funny, keep it up. First time I saw u and it won't be the last
@Seanpence04 Жыл бұрын
6:07 Quaternions are actually continuous. Quaternions are a type of mathematical object that extend the concept of complex numbers to four dimensions. They can be thought of as a combination of a scalar (real number) and a vector in three-dimensional space. Like the real and complex numbers, quaternions satisfy most of the usual algebraic properties, such as associativity, distributivity, and the existence of inverses (except for zero); unlike them, quaternion multiplication is not commutative, so they form a division ring rather than a field. One of the key properties of continuous objects is that small changes in the input lead to small changes in the output. Quaternions also have this property, which is known as continuity. Specifically, quaternion multiplication and addition are continuous, meaning that small changes in the inputs lead to small changes in the output. This property of continuity makes quaternions useful in a variety of applications, including 3D computer graphics, robotics, and quantum mechanics.
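A tiny Python demo of the point: a yaw angle wrapped to [-180°, 180°) jumps at the boundary, while the corresponding quaternion components change smoothly:

```python
import math

def yaw_to_quaternion(yaw):
    # rotation about the vertical axis only, as (w, x, y, z)
    return (math.cos(yaw / 2), 0.0, 0.0, math.sin(yaw / 2))

for deg in (179.0, 180.0, 181.0):
    yaw = math.radians(deg)
    wrapped = (yaw + math.pi) % (2 * math.pi) - math.pi   # typical [-180, 180) wrap
    print(f"{deg:6.1f}  wrapped Euler: {math.degrees(wrapped):7.1f}  "
          f"quaternion: {tuple(round(c, 3) for c in yaw_to_quaternion(yaw))}")
# the wrapped angle jumps from +179 to -180 between rows, while the
# quaternion components change only slightly from one row to the next
```

(Quaternions do have the quirk that q and -q represent the same rotation, but that's a separate issue from continuity.)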
@LrdBxRck Жыл бұрын
1) Add thrusters to the back
2) Give each thruster a throttle output (say 1-10) instead of just on/off
3) Add a negative reward to limit spinning
@rogerayman4499 Жыл бұрын
like my boy Pontypants used to say "Epik ballerina simulator 2k", awesome btw
@Beatsbasteln Жыл бұрын
i can see a future in which instagram- and tiktok content creators just rip off that scene of your ironman spinning around slowly through your obstacle course as a background video for their voiceover content
@HansPeter-gx9ew Жыл бұрын
What I learned from the video is that quaternions are not good to use for training, thank you :D Btw, a negative reward for overall spinning velocity would help minimize the quirky movement.
@matteoventura7465 Жыл бұрын
I read that some machine learning setups deliberately try to fail the object by telling it to stay off the ground; that way it can learn more strictly and efficiently to avoid collision with a certain surface.
@DaveShap Жыл бұрын
Negative reward for time taken to complete the course could result in better flight mechanics aka preservation of momentum
@astrovation3281 Жыл бұрын
I love how there are so many comments from people who know how this works, but imo it's fun to watch this
@Speculiar Жыл бұрын
I had to go back and watch this three times. Hilarious!
@terristen Жыл бұрын
I've done similar trainings in Unity. It's been a while, but if I can remember things from more than 2 weeks past...

Negatively incentivize your rotational velocity to reduce the reinforcement of rotation as a stabilizer. You might also apply a reinforcement using the dot product of the global up vector and the model's up vector; that should incentivize a standing orientation without eliminating more horizontal flight. I don't know what all your inputs were, but make sure you are capturing delta values of important things, i.e. delta rotational velocity. Without these, it's hard to optimize for those things because they become second-order and thus the model has to figure them out on its own.

As for the quadcopter, I did one of those and used a PID to control thrust on each motor to optimize for level flight. Once I had that, I was going to train it to navigate to targets but never got around to it. FUADHD
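Here's a minimal sketch of those two shaping terms in Python, the up-vector dot product plus an angular-velocity penalty (coefficients are guesses, and inputs are assumed numpy arrays):

```python
import numpy as np

def stability_shaping(model_up, angular_velocity, k_upright=0.5, k_spin=0.1):
    """Reward alignment of the model's up vector with world up, penalize spin rate.
    model_up is assumed to be a unit vector; coefficients are placeholders."""
    world_up = np.array([0.0, 1.0, 0.0])
    upright = float(np.dot(model_up, world_up))      # 1 when standing, -1 when inverted
    spin = float(np.linalg.norm(angular_velocity))   # rad/s magnitude
    return k_upright * upright - k_spin * spin
```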
@Crazyclay78YT Жыл бұрын
You should still do that if you can, I'd love to see that
@a-fletcher Жыл бұрын
I feel like if you added additional information like g-forces from spinning and added a penalty for spinning too much, plus maybe a bonus for facing the right way like the drone, it could improve the stability. Especially for the Iron Man, as it was just doing a lazy spin-2-win technique 😂😂. Super cool video though, loved it.
@GameGasmTrailer Жыл бұрын
I wonder what force could've been added to the suit to balance the spinning momentum. Maybe two angled thrusters on the back.
@skilledthecat876 Жыл бұрын
Two things that would help greatly. 1. Train Iron Man to keep his eye on the ball. 2. Give his thrusters a variable throttle. He's overpowered right now, and that acceleration needs somewhere to go. That's why he's spinning so much. Spinning is a logical and efficient way to "blow off steam".
@micky2be Жыл бұрын
Really enjoyed your explanation and video format.
@carbonyte Жыл бұрын
To fly like in the movies, the suit would need to create some kind of uplift, which it obviously does not. So one foot needs to keep pointing down to hover, but that is absolutely unstable. Therefore the AI uses spin to stabilize the position. It is really clever, and I think the only smart way to stay stable in the air. And it shows that it would be almost impossible for a human to fly in such a suit. It would need some movable thrusters attached to the hips to perform some kind of VTOL movements. Very interesting video. Thank you a lot for this hard work!
@shanjoogamer3609 Жыл бұрын
If that were Tony, He'd be puking in his helmet halfway through the course.
@notkotten Жыл бұрын
bro did this without even activating windows what a legend
@reptileassassin7660 Жыл бұрын
Retrain with time taken between points and add penalty for collision with stage. The network will learn that spinning makes it hard to change velocity and will correct itself. It’ll zip around like you want it to.
@AllisterVinris Жыл бұрын
I think it would be better to fix its rotation habit, and also reward speed to help it learn more optimal methods of flight. I would suggest giving it control over the strength of the propellers so that it can fine-tune them, but maybe that would complicate the learning process too much. Worth a shot maybe.
@marshmallow_fellow Жыл бұрын
You could multiply the reward by a second reward for looking at the goal it's moving to (or looking at a separate goal if the task is to stay in one place); that could solve the spinning-around issue. Taking the time to reach the goal into account would help the speed a lot too.
@Galerak1 Жыл бұрын
I couldn't help imagining that this was Tony Stark's TRUE first test flight. Tony throwing up in the suit and Jarvis continually assuring him that 'this is fine' and that he'll 'have it under control momentarily' 😂
@vvhitevvabbit6479 Жыл бұрын
There are multiple reasons why it might be spinning to solve the problem: the model has a mobility constraint that you are unaware of; if Unity physics are true to life, it could be taking advantage of gyroscopic stability; or your reward function needs to be adjusted to punish these kinds of unstable movements. In reality, a human being wouldn't be able to navigate this way, so this should be considered in your reward function. The more consideration you give to real life in your NN model, the more likely you'll achieve life-like results.
@joelmulder Жыл бұрын
It spins because spinning balances out any misaligned thrust in all directions.
@clarysshow Жыл бұрын
11:23 dude you're so true. Your words are similar to mine, we share the same knowledge, as great minds think alike Mr. Gonkee
@FreedomAirguns Жыл бұрын
It "sucks" (your words) at flying because it's like a rigidbody in the vacuum of space, without any perception of the environment or the feedback required to "feel"/interact with the forces involved during flight in a MEDIUM, which can be approximated with a simplified version of the Navier-Stokes equations for the simulation of fluids/gases.

But you can even subdivide your space/environment into smaller bounding volumes, where each smaller volume can have many different atmospheric forces to apply to the parts of the model/triangles that overlap it, with a boolean operation as flag/trigger (in the box/not in the box). This can be used to approximate atmospheric forces without the extreme computational overhead of advanced fluid simulations. They can be based on fixed maps or dynamic maps, which can be generated with shaders, or even simple videos/gifs, where each pixel/RGBA subcomponent could represent a force (0 to 255 per 4 hypothetical forces). You can also overlay many such force fields.

You can consider the normals as reference points/sources on which the modifiers/forces have to be applied to the model, where each bounding volume influences the vector/vectors involved, simulating pressure/drag/temperature (or whatever is needed), altering the path of the rigidbodies involved. With a feedback mechanism, the neural network then has perception and can act accordingly. The result will give you natural flight, like what you would have on Earth with an ATMOSPHERE, not just gravity. Just remember that the speed at which each part enters a bounding volume acts as a modifier/multiplier when interacting with the forces contained in it. That's all it needs.
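A very rough Python sketch of the bounding-volume force-field idea, with a coarse grid lookup standing in for the per-volume atmospheric forces (grid size, resolution and the drag constant are all invented):

```python
import numpy as np

class ForceField:
    """The space is divided into a coarse grid and each cell stores one force
    vector (wind/drag/updraft). Values here are random placeholders; they could
    just as well come from a texture or video, as suggested above."""
    def __init__(self, size=50.0, cells=10):
        self.size, self.cells = size, cells
        self.grid = np.random.uniform(-1.0, 1.0, (cells, cells, cells, 3))

    def force_at(self, position):
        # map a world position to a grid cell ("which box am I in?")
        scaled = (np.asarray(position) + self.size / 2.0) / self.size * self.cells
        idx = np.clip(scaled.astype(int), 0, self.cells - 1)
        return self.grid[tuple(idx)]

    def apply(self, position, velocity, drag=0.05):
        # force felt by a body part: local field plus simple speed-dependent drag
        v = np.asarray(velocity)
        return self.force_at(position) - drag * v * np.linalg.norm(v)
```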
@glennhuman6936 Жыл бұрын
You probably should have added a learning phase where it learns to recover from an uncontrolled fall
@FraudFord Жыл бұрын
100k!!! i rlly wish you get 100k subs very soon
@EfimBroStudio Жыл бұрын
You should have added a rule where the AI moves towards the target facing forward. I think then the model would stop spinning. Maybe. More likely...
@wtechboy18 Жыл бұрын
You might add a couple additional values: one for how much of a discrepancy there is between the "facing" vector and the vector to the target, and another for the speed of the figure. I don't think you should necessarily make it only count speed along the "towards" vector, but maybe just speed within 90 degrees or so of it? Between the two of those, you'd encourage the AI to point at the next target *and* get there faster. If you REALLY wanted to get fancy with things like flight lines, you could have it calculate a Bézier curve between the past, current, and next points, and assign it more reward for following that curve more closely, but that might be a little too much math for this kind of project.
@nehpets216 Жыл бұрын
So, this guy's suit had no stabilizers (small, weaker max-thrust jets on the chest/legs that allow hovering), which would have made staying in one spot easier and allowed for small adjustments closer to the center of mass, while the arms and legs go for the larger movements.
@VJArt_ Жыл бұрын
I feel like good adjustments would be a reward for looking forward like other people said, a bigger radius on the ball goals so there's follow-through momentum without it having to constantly correct itself, as well as spreading out the balls so there's more time for momentum to build up.
@TheMarshedMallow Жыл бұрын
It would help make it look a little less derpy if the AI had control over thruster power, maybe with a small reward for keeping the power low? (That way it can level itself out and increase the thrusters on straightaways.) Also, I agree with some of the other comments: adding a reward for facing the target would help stabilize rotation, or maybe, if you want them to fly horizontally, having a reward for the limbs facing away from the target.
@tachrayonic2982 Жыл бұрын
Scoring it based on the time taken to reach the point would theoretically encourage it to pick up the pace. Having 2 or more points, and only measuring the time at the final point, might keep it from over-committing to any single point. You should have the next 2 or 3 points as inputs for the AI, so it has the opportunity to plan its path beyond the next point.

As others have said, giving more points for facing towards the path could help with spinning, although this should be lenient to factor in the body's inability to turn its head (particularly upwards). Penalizing angular velocity may be a better option, but this has its own problems, since you do want it to turn at times. I think ideally you'd want to have the head look towards an additional target, which may or may not be along the path. You could then penalize failing to do this, and use this to have additional control over the flight pattern.

To make the points more lenient, you could increase the radius they detect for. To keep it from being counted early, the point doesn't count as being hit until the body starts moving away from it. Bonus points should be awarded for being close to the point when this happens, but it doesn't gain any benefit from turning around to hit a missed/overshot point.
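A small Python sketch of that "counts when moving away" checkpoint rule (the radius and bonus scale are made-up numbers):

```python
import numpy as np

def check_point_pass(pos, vel, point, radius=3.0, max_bonus=1.0):
    """The point only 'counts' once the body starts moving away from it,
    with a bonus for how close it got. Inputs are assumed numpy-compatible."""
    to_point = np.asarray(point) - np.asarray(pos)
    dist = float(np.linalg.norm(to_point))
    moving_away = float(np.dot(np.asarray(vel), to_point)) < 0.0
    if dist < radius and moving_away:
        return True, max_bonus * (1.0 - dist / radius)   # hit; closer = bigger bonus
    return False, 0.0
```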
@Future_Guy Жыл бұрын
Direction of flight (increase) + Posing (increase) + Self rotations (decrease)
@EmergencyTemporalShift Жыл бұрын
What might improve things is to have it target the point two ahead of the closest point; that way it always wants to move forward instead of matching the points exactly.
@GnJoe941 Жыл бұрын
Tony: JARVIS I think there is something wrong with my suit... JARVIS: It's working fine Sir..
@kurikokaleidoscope Жыл бұрын
Fabulous channel and style NEW SUBSCRIBER FROM JAPAN ❤
@misterwiggles8771 Жыл бұрын
Why not add a function or something to make it stabilize after every checkpoint. When I see it constantly spinning around it makes me feel like it's because there's constant motion, and the jets are always powered on, so it HAS to keep spinning to continue (I don't know shit about programming). But like, what would happen if you started it from a standing upright position, then had it pause between every checkpoint? Would it give you the seamless realism you're looking for? Or something (I'm also high)?
@misterwiggles8771 Жыл бұрын
lol at 17:12 *he just a lil confused but he got the spirit*
@JanBadertscher Жыл бұрын
i guess simulating drag and lift would make the model converge towards flying in a more horizontal pose and maybe even suppress that spinning, but i understand this would exceed the idea of this video...
@simethigsomethingidfk Жыл бұрын
Part of the problem is probably that it was trained to stay in one place first. So it learned how to stabilize itself by spinning. It might be better if it was trained how to move first, and add a reward for speed
@noahr4752 Жыл бұрын
Quaternions are continuous and used a lot in spacecraft dynamics. Oftentimes you convert quaternions back to Euler angles though, because it's hard to visually understand what a quaternion is doing from its output alone
@ronaldocala8447 Жыл бұрын
Adding a time reward so it gets to the checkpoint faster might help, so it doesn't take a lifetime to get there, plus a negative reward for each complete rotation it does
@linecraftman3907 Жыл бұрын
The model is probably spinning as a simple way to stabilize itself in the upright direction, like a gyroscope
@anthonylosego Жыл бұрын
Your second alteration should have been to add a reward for not spinning. When it's rotated 180°, it's cancelling any forward momentum for stability. Then move on to target seeking.
@tdog_ Жыл бұрын
You could make it better if you increase the reward for facing in the correct direction and decrease it for spinning. You could also increase the reward for getting to a point in the most efficient way possible, and encourage it to use its legs for power and hands for stabilization.
@_dense Жыл бұрын
you didn't give it enough leg control so it had to spin to stay balanced.
@cam_by_art Жыл бұрын
Adding my 2 cents in: making the points bigger, more like gates than points, would incentivize the AI to not be so focused on minor adjustments and more on just getting through as fast as possible. Along with the other good suggestions from people in the comments, I could see this AI speeeeeding through the course.
@biomechannibal8888 Жыл бұрын
It's spinning because it's trying its best to gyroscopically stabilize, since it lacks the balance properties of the pilot's inner ear. Though the suit must have had its own stabilizing tech in order to fly remotely.
@David-gk2ml Жыл бұрын
" sometimes you gotta learn to run before you can learn to walk" Ironman