
Robot Dog Learns to Walk - Bittle Reinforcement Learning p.3

  55,098 views

sentdex

1 day ago

Comments: 104
@Skyentific 2 years ago
I really hope to see part 4 of this series. Maybe with minimum power/energy usage, instead of minimum changes in direction. I think this will solve everything! :)
@sandroalcantara6025 1 year ago
You really build excellent robots. Thanks :D
@Kram1032 2 years ago
One thing that's probably a good idea is to go for efficiency. Generally speaking, that means punishing energy usage. Lower-energy gaits tend to also be more robust, afaik.
@sentdex 2 years ago
How might you calculate energy use?
@DavePetrillo 2 years ago
@@sentdex The energy expended by the servos in a move is the torque on that joint times the angular displacement they moved in that step. Torque should be available from the simulator, and displacement you chose as your model output. Also worth noting that you haven't put any limitations on the system to reflect the real limitations of the actuators or the hardware. You can command the simulation to go a certain displacement, but the robot likely cannot complete the move as commanded when it's loaded. When the robot flips up into the air, it's not an abuse of physics; the physics are probably accurate. The inaccuracy is that your actuators are infinitely powerful. You may or may not find that the gaits on the real Bittle don't look like the simulation until you add some rate and torque limits and re-train.
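The torque-times-displacement idea above can be sketched in a few lines. This is an illustrative sketch, not sentdex's actual code; the names (torques, deltas, energy_coef) and the coefficient value are assumptions.

```python
def step_energy(torques, deltas):
    """Approximate mechanical work done in one sim step.

    torques: per-joint torque readings from the simulator (N*m)
    deltas:  per-joint angular displacement commanded that step (rad)
    """
    # Work per joint ~ |torque * angular displacement|, summed over joints.
    return sum(abs(t * d) for t, d in zip(torques, deltas))


def energy_penalized_reward(forward_progress, torques, deltas, energy_coef=0.01):
    # Subtract a scaled energy term from the usual forward-progress reward.
    return forward_progress - energy_coef * step_energy(torques, deltas)
```

The coefficient would need tuning so the energy term doesn't swamp the progress reward.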
@mark_plays_games-p9c 2 years ago
@@sentdex Maybe you could punish for using too much torque (force)?
@DjSapsan 2 years ago
Just wanted to suggest this. Minimize motor usage.
@sentdex 2 years ago
@@DavePetrillo While torque exists in the simulator, the whole project is aimed at getting this to work in real life too, on this quadruped. I see no way to acquire the torque of the physical Bittle's servos. It's a 0 or 1 switch situation when it comes to torque on these little servos.
@Nerdy135 2 years ago
My NNFS book came in a few days ago and I gotta say I love it. Thank you for all you do here on this channel.
@thundersepp 2 months ago
I think now would be a perfect time for a part 4 😁
@mr.witter4543 2 years ago
This is awesome! I also have a Petoi Bittle and I'm planning to start practicing reinforcement learning on it as well, so these videos are a life-saver. Right now I'm using ROS with a raspi mounted on mine for teleoperation and other cool features, but I'd like to train it to make sense of its surroundings and eventually just act on its own (for now, walking and flipping would be a start). I see a couple of comments about calculating energy usage and how you could do it on Bittle. One thing that comes to mind is to simulate the current draw from the battery to each of the motors, basically simulating the battery on the robot as well. So for example, if motor 8 moves by 0.2, the battery discharges 0.5%. These are made-up numbers, as I didn't look into the exact draw of the servo motors. The second change you could make, which would benefit the simulated battery, is to have a delayed reward: instead of getting a reward after each action, you get a reward after either making it a certain distance or fully depleting the battery, and this would reset the simulation. This would encourage Bittle not only to move in a certain direction, but also to conserve energy with its movements. This also alleviates your worry of over-constraining the robot, and since you have the resources for training, it should work just as well with Proximal Policy Optimization.
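The simulated-battery-with-delayed-reward idea above could look roughly like this. The discharge rate is a made-up placeholder, as the commenter notes, and the class and names are illustrative, not part of any real Bittle API.

```python
DRAIN_PER_RAD = 0.5  # % battery per radian of total joint movement (assumed value)


class BatteryEpisode:
    """Episode that ends when the simulated battery depletes; reward is delayed."""

    def __init__(self, charge=100.0):
        self.charge = charge
        self.distance = 0.0

    def step(self, joint_deltas, forward_progress):
        """Apply one action's battery drain; return True while the episode is alive."""
        self.charge -= DRAIN_PER_RAD * sum(abs(d) for d in joint_deltas)
        self.distance += forward_progress
        return self.charge > 0.0

    def final_reward(self):
        # Delayed reward: total distance covered, granted only at episode end,
        # so the agent is pushed to travel far on a fixed energy budget.
        return self.distance
```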
@petriseppanen4975 1 year ago
After a couple of days of swearing and testing a Dobot robot model in Omniverse RL. My model has two servos (three, exactly, but the third is not used in this case) that work together at the same time. Now I'm starting to understand why. I like the discrete delta explanation. Huge thanks to you.
@Tolstoievsky 2 years ago
I think the coolest part was when it stumbled into hopping... especially since that exists in nature anyway (frogs, pigeons, etc). No other movement looked even remotely natural.
@baserockbathead 2 years ago
Wouldn't it be better to punish for per-servo change of direction instead of total? That's probably why your upper legs aren't working, right? As it would seek to prevent as much servo movement as possible?
@sdfgeoff 1 year ago
I've been playing with a similar problem: getting a (sprawling rather than mammalian) type of quadrupedal robot to walk using RL. Your videos gave me some great ideas, so big thanks for uploading. One very simple change that gave a big gain was to use the centroid of the feet positions as the "robot position" rather than the center of mass or body coordinates. This avoids the problem of the robot making big initial gains simply by "leaning" forwards, and avoids the tendency to end up with hunched-forward poses such as the ones seen at 16:29, 18:17, etc.
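The feet-centroid trick described above is easy to sketch. Names are illustrative; foot positions are assumed to be (x, y, z) tuples read from the simulator.

```python
def feet_centroid(foot_positions):
    """Mean position of the feet, used as the 'robot position'."""
    n = len(foot_positions)
    return tuple(sum(p[i] for p in foot_positions) / n for i in range(3))


def progress_reward(prev_feet, curr_feet, forward_axis=0):
    # Reward forward motion of the feet centroid, so leaning the body
    # forward without actually moving the feet earns nothing.
    return (feet_centroid(curr_feet)[forward_axis]
            - feet_centroid(prev_feet)[forward_axis])
```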
@Ou_dembele 2 years ago
Happy new year
@Veptis 2 years ago
If reinforcement learning models biological evolution, it's crazy to believe we developed walking, running, or things like the human ear.
@serta5727 2 years ago
I also looked a little into the NEAT Python library. It is really cool.
@Voloskaya 2 years ago
I think you should really consider giving IsaacGym another try. You get 2 or 3 orders of magnitude speedup for training, which allows you to have a much faster feedback loop and add things like randomization to prevent your model from being stuck in a local minimum. I was doing the same thing as you, training Bittle with RL, and that's how I found your videos. I am able to train a reasonably good gait in about 10 minutes on a single 1070 GPU with IsaacGym (about a million steps). Not sure how IsaacGym handles cameras, however, or if it can at all.
@datgatto3911 2 years ago
I think IsaacGym is a good choice too. I trained A1 with a reward function that makes it move forward, then used impedance control instead of just joint torques. Got fast and correct gaits after about 10 seconds of training 216 robots on a small GPU.
@datgatto3911 2 years ago
Btw, you can also set the camera with gym.create_viewer(sim, gymapi.CameraProperties()), then gym.viewer_camera_look_at(viewer, None, cam_pos, cam_target). Or scroll the mouse while running.
@swannschilling474 2 years ago
I am so happy about this series!! It's just great!! 😊
@RoseMaster3000 2 years ago
The reason it jumps is that jumping is the ideal adaptation to the environment it is being trained in (a perfectly smooth plane). If you want the model to evolve a "realistic gait", you will need to train it in an environment where the ground plane is bumpy or something. Just my take on it, love your vids!
@jeffmofo5013 2 years ago
Look up Two Minute Papers. They have showcased many papers that capture walking gaits with neural networks.
@rlew12 2 years ago
Have you considered creating an energy-usage model for the Bittle that calculates the center of mass and moment of inertia of the body and each joint? You could get a rough calculation of the energy usage by looking at the acceleration of the joints, and even have a more natural penalty for jumpy movements, since there'd be a higher energy usage for moving in the upwards Z direction which you wouldn't get back in the downwards motion. This would also punish jittery movements, since it would punish rapid back-and-forth acceleration. I know this would be pretty involved to make, but it might give you more realistic constraints than "no side-to-side sway", "no roll", "no bobbing up and down", etc. It might even translate into a more natural running gait, where you might want something more like a leap in between steps. Great video and looking forward to the next! Happy new year!
@sentdex 2 years ago
I haven't personally come up with an actual formula for overall energy usage, but yeah, maybe that's the holy grail. Inertia and joint acceleration could help, but joint acceleration is also a function of whether or not you're combating gravity, body weight, and inertia. I am not yet clear on how to calculate this well.
@aadarshkumar2257 2 years ago
I am here from your Neural Networks from Scratch course. My only question for you: how did you teach yourself machine learning and deep learning? I think many people must be wondering about this. And a very deep request from the bottom of my heart: please do a video on your journey, from how you got interested in programming to growing this channel and teaching people in the most simple and easy-to-understand way. Your NN from Scratch course is insane and some of the most valuable deep-learning content I have found.
@Stinosko 2 years ago
What a nice New Year gift! 🎁
@Ihsees91 2 years ago
Cool video! Good mix between information and showcasing your models. I didn't know about discrete delta PPO yet; this alone was worth subscribing to you in the past. Regarding the stiff upper legs: maybe that has something to do with move_punish_div? Having no movement in those joints will punish the agents less, after all. When I had to deal with jittering, I tried limiting the overall "joint-movement sum" per step. If an agent requested more than the allowed value, I scaled the movement of every joint linearly. This works because jittering takes a longer "path" to get to the same target position, thus forcing the agent to reduce jitter. I've had decent results with this method (albeit on a different continuous-control problem, not a walker). Of course, this will give you another hyperparameter to tune. Maybe do the same calculations on an already-working model, and see what kind of value is needed for a stable gait?
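The joint-movement-budget idea above can be sketched directly: if the requested total movement exceeds a cap, scale every joint's delta linearly. The budget value is a tunable hyperparameter, as the comment says; the names here are placeholders.

```python
MAX_MOVEMENT_SUM = 1.0  # radians of total joint movement allowed per step (assumed)


def limit_joint_deltas(deltas, budget=MAX_MOVEMENT_SUM):
    """Linearly scale all joint deltas so their absolute sum stays within budget."""
    total = sum(abs(d) for d in deltas)
    if total <= budget:
        return list(deltas)
    scale = budget / total
    # Jittery back-and-forth motion sums to a longer path, so it gets
    # scaled down harder than a smooth direct move to the same target.
    return [d * scale for d in deltas]
```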
@EtherealIntellect 2 years ago
Might be early for it, but adding latency/delay to the sensors can apparently sometimes help produce a more natural gait, since the AI has to predict and adapt. Of course, humans have at least 100-200 ms of latency we have to predict for constantly.
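Sensor latency like the comment describes can be simulated with a small delay buffer: the policy only ever sees observations from a few steps ago. A toy sketch, with illustrative names:

```python
from collections import deque


class DelayedObservations:
    """Feed the agent observations that are delay_steps old."""

    def __init__(self, delay_steps, initial_obs):
        # Pre-fill so the first few reads return the initial observation.
        self.buffer = deque([initial_obs] * (delay_steps + 1),
                            maxlen=delay_steps + 1)

    def push(self, obs):
        # Store the newest observation, hand back the oldest buffered one.
        self.buffer.append(obs)
        return self.buffer[0]
```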
@BenjaminScottfromFrance 1 year ago
I'm loving these videos! A few months ago I had an idea of doing exactly what you are doing: running a simulation of an existing robot to try and teach it to walk through reinforcement learning, rather than doing it with a real robot. It's impressive to see how far you're getting. My knowledge of AI, machine learning and, frankly, programming is not even close to yours! When I thought of the idea, I thought of doing it in different steps, for example:
- Use the IMU, and the first task is for the robot to stand up and stay horizontal;
- Walk and, still using the IMU, stay as horizontal as possible.
Is it possible to start training a model to do something specific to begin with, and then train it to evolve more complex behaviours, like the examples I gave above? Thanks, and keep up the good work!
@ethanblackthorn3533 2 years ago
Thank you for the amazing videos! They're very motivational.
@lennartlut 2 years ago
Thank you for this great video. Well done!
@t3chm0nkey 2 years ago
You could try something like this:
hipCost = (1 - Abs(Dot(thighDir, gravityDir))) * hipToFootDis * costDelta;
kneeCost = (1 - Abs(Dot(calfDir, gravityDir))) * kneeToFootDis * costDelta;
sumCost = hipCost + kneeCost;
This takes into account moving against gravity and leverage (the hipToFootDis is the Euclidean distance, not the length of the appendage). You could also mess around with per-joint cost, i.e. the knee costs more than the hip to move.
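The cost sketch above renders into runnable Python as follows. Direction arguments are assumed to be unit 3-vectors; all names mirror the comment and are illustrative, not part of any Bittle API.

```python
def dot(a, b):
    """Dot product of two 3-vectors given as tuples."""
    return sum(x * y for x, y in zip(a, b))


def joint_cost(limb_dir, gravity_dir, joint_to_foot_dist, cost_delta):
    # A limb aligned with gravity has zero cost; a horizontal limb with a
    # long lever arm (joint-to-foot distance) costs the most to hold/move.
    return (1.0 - abs(dot(limb_dir, gravity_dir))) * joint_to_foot_dist * cost_delta


def leg_cost(thigh_dir, calf_dir, gravity_dir, hip_to_foot, knee_to_foot, cost_delta):
    hip_cost = joint_cost(thigh_dir, gravity_dir, hip_to_foot, cost_delta)
    knee_cost = joint_cost(calf_dir, gravity_dir, knee_to_foot, cost_delta)
    return hip_cost + knee_cost
```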
@rumdabbadoh 2 years ago
Just an idea: since you punish any movement of the joints to counteract the jitter, that would also punish using all the joints on each leg, right? I saw a lot of your walkers were using only one of the lower joints in each leg. What do you think? Anyway, thanks for creating all your content; your videos have been a big part of my career choices and development over the last couple of years! Cheers!
@Skyentific 2 years ago
I completely agree! I think this is really the case. It would be nice to implement minimizing power usage, instead of minimizing the number of direction changes. Minimizing power usage would be more complicated, but not crazy complicated. It would also favour walking on mostly straight legs.
@sdfgeoff 1 year ago
One reward I'm currently trying is to reward joint velocity while punishing joint acceleration. The idea is that this will favour all joints moving at a high constant velocity, and both numbers are fairly trivial to calculate.
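The velocity-reward/acceleration-penalty idea above is trivial to write down from per-step joint deltas. The coefficients here are invented for illustration, not taken from the commenter's setup.

```python
def smoothness_reward(prev_deltas, curr_deltas, vel_coef=0.1, acc_coef=0.2):
    """Reward joint speed, punish changes in joint speed.

    Deltas are per-joint angular displacements over one step, so the
    per-step delta is a velocity proxy and delta-of-deltas approximates
    acceleration. Constant-speed motion scores best; reversals score worst.
    """
    velocity = sum(abs(d) for d in curr_deltas)
    acceleration = sum(abs(c - p) for c, p in zip(curr_deltas, prev_deltas))
    return vel_coef * velocity - acc_coef * acceleration
```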
@codetestdummy 9 months ago
Is it best (not sure if unavoidable) to consider each servo independently? Looking at the "tippy-toe" solution the model arrived at, we can imagine that it would be very difficult for an actual dog to achieve or maintain that kind of posture (I completely understand how a local reward valley caused this). We know that in real life both joints on each leg would necessarily move in some kind of biomechanical proportion. I also imagine that what a real dog's brain is trying to do is figure out "where" it wants to put its paw (for balance or some impulse), and the motor cortex works out the IK subconsciously, rather than thinking about "how much" to move its leg muscles. But since the goal is just a forward gait, is it possible to have a reward function or constraint on the servo solution that follows the "natural" movement of the leg? I.e. (edit:) the next movement is a feasible IK solution based on the current posture of the leg?
@simonrichter3950 2 years ago
It looks like it freezes the upper leg joint once you activate the movement punishment :/ Maybe the punishment should not be the same for every joint.
@PatrickHoodDaniel 2 years ago
Could you treat each leg as a planar 2D mechanism instead of using the rotation of the individual servos, and use inverse kinematics (the planar 2D version) as the input? This way, it's like inputting a 2D image into the model rather than the positions of each servo, and the problem is just the position of the foot in that plane. Just a thought.
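The planar-IK idea above is a standard two-link problem: command each foot's (x, z) position in its leg plane and solve for the two joint angles. A sketch, with placeholder link lengths (not measured from a real Bittle):

```python
import math


def planar_ik(x, z, l1=0.05, l2=0.05):
    """Two-link planar inverse kinematics.

    Returns (hip_angle, knee_angle) in radians for a foot target at (x, z)
    relative to the hip, or None if the target is out of reach.
    l1, l2 are the upper- and lower-leg link lengths (placeholder values).
    """
    d2 = x * x + z * z
    cos_knee = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(cos_knee) > 1.0 + 1e-9:
        return None  # target unreachable with these link lengths
    cos_knee = max(-1.0, min(1.0, cos_knee))  # clamp float rounding
    knee = math.acos(cos_knee)
    hip = math.atan2(z, x) - math.atan2(l2 * math.sin(knee),
                                        l1 + l2 * math.cos(knee))
    return hip, knee
```

With this, the policy could output foot targets instead of raw servo angles, and infeasible poses are rejected up front.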
@wktodd 2 years ago
Happy New Year. The robot is only moving part of each limb. Is that something you've restricted the model to, or is it simply not learning to use the shoulder/hip joints?
@Stinosko 2 years ago
I wonder if it would be possible to predict the position of the main body. For the position of an object in 3D you need three things: the position of the center of the main body, and two unit vectors that define the 3D plane the main body currently lies in (a vector from the center to the front, like (1, 0, 0), and a vector 90° from it, for example the up vector (0, 0, 1)). That's 9 values the robot has to keep track of in order to know the 3D position of its main body, and only 6 values for stability. Then you can make observations with the current model where you try to predict the position of the main body based on the movement the dog is going to make, so it learns to predict its behaviour in 3D, and maybe you can use those predictions in a future model. Then you can try to make a model that minimises the wobbling of the main body in the 3D plane while keeping the walking direction forward. In a university project where I had to fit a 3D plane to points, I used the sympy library. Maybe you can use it as well, although if I remember correctly it was fairly slow 😊
@Kram1032 2 years ago
I don't know what sorts of things the Bittle can measure, but @sentdex, learning to predict the future (i.e. "given the current state and the next planned input, predict what the next state will look like", possibly with a bit more history than a single time step) can also help a lot in achieving better, more robust walks. If you do it right, it can even learn how to deal with different kinds of ground. For example, if it can keep track of resistance to movement and tries to predict its evolution, it might be able to detect subtle changes and adapt its gait accordingly. Variations of this might be called "predictive coding", "world models", or "surprisal learning"; try looking those up. Very cool stuff.
@Stinosko 2 years ago
I don't have a lot of time to tinker with the Bittle myself, so I hope someone else can use these ideas and test them out 😊
@Kram1032 2 years ago
@@Stinosko I don't *have* a Bittle; I hoped sentdex would see this, haha.
@Stinosko 2 years ago
I ordered a Bittle but it hasn't arrived yet 😉
@unknowinglyanonymous9215 2 years ago
Yo! Happy new year, thanks for the nice content.
@iliya-malecki 2 years ago
Well, if a robot moves "fluently", its IMU is mainly going to output sinusoidal and half-sinusoidal 2D shapes, so I think you should try punishing for deviations from those shapes. I would define that additional cost as the sum of squares (SS) from a curve you can precompute, leaving the delta of the distance travelled to plug into the formula as the simulation runs. Obviously, the distance should be a little smarter than just an L2 norm, but you get the point.
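The sum-of-squares-from-a-precomputed-curve cost above could be sketched like this. The template's amplitude and period are invented values, and comparing a joint trajectory to a fixed sinusoid is the simplest possible version of the idea.

```python
import math


def sinusoid_template(n_steps, amplitude=0.3, period=20):
    """Precompute a reference sinusoidal trajectory (assumed parameters)."""
    return [amplitude * math.sin(2 * math.pi * t / period) for t in range(n_steps)]


def shape_penalty(trajectory, template):
    # Sum of squared deviations (SS) of the observed joint trajectory
    # from the precomputed reference curve.
    return sum((a - b) ** 2 for a, b in zip(trajectory, template))
```

A fancier version would fit phase and amplitude before comparing, since a perfectly sinusoidal gait that is merely phase-shifted shouldn't be punished.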
@keldfrslev3935 1 year ago
I understand that it's cool teaching the robot to walk from scratch, but wouldn't it be much better to start with the hand-coded walk and have it improve on that?
@cryptojanne1134 2 years ago
Amazing. Fun project!
@xakslim 2 years ago
Oh my God. You created a Half-Life monster 😀
@judedavis92 2 years ago
Loved the video. Any news on the NNFS video series?
@HungrySoutherner 2 years ago
Have you considered, instead of trying to learn the gait, learning the optimization functions for the kinematics of the legs with respect to the body? It seems this approach is always going to produce odd walking behaviors. Since the kinematics of the joints per leg are known, learning how to optimize those and using the learned kinematics functions would make this thing walk more smoothly.
@jacobdavidcunningham1440 2 years ago
16:54 lol 20:15 aerodynamic boi
@josgraha 2 years ago
Seems like there should be a punishment for power consumption, if that could be thrown in somehow (total movements of the servos times the kg/cm applied).
@usercurious 2 years ago
Cool that the robot learned to fly instead of walking 😃
@cvspvr 1 year ago
You could probably get the torque and speed from Isaac to calculate the true motor output power. Also, look into SHAC; it's supposed to be even more godly than PPO. Also, maybe you should kill the Bittle as soon as it flips over or its knees touch the ground.
@giusepperandazzo5357 2 years ago
Hi there, I'm back 2 months later, after collecting a lot of experience with RL. Rewatching the video, a cool thing that you could actually try: 1) Program your robot to walk normally without using any AI algorithm, hand-coding it. 2) Use the RL algos to IMITATE your previously programmed controller. I mean, instead of starting from zero, put into the system something that already works properly.
@MrJosephtrapani 2 years ago
🖖 Amazing work! Happy new year. Would it be practical to set a bounding box moving in the y direction at the speed and height you would like, and punish if the body doesn't stay within it? Not sure how it would generalise once this crutch is removed, but it may help pass design intent to the model more quickly. Thank you for the constant inspiration.
@ihateorangecat 2 years ago
My very first comment on YouTube. I'll check again next year and let's see how much I will have developed.
@22vortex22 2 years ago
Would making it navigate non-flat terrain promote more standard gaits?
@sentdex 2 years ago
I'm not sure about more standard, since the way you or a dog walks on ice or gravel or downhill is different from how you would walk on flat concrete, but it's something that I would like to also be capable of doing as time goes on.
@giusepperandazzo5357 2 years ago
Hi there, thanks for your videos. I'm specialized in industrial robotics and I love them. Currently, I'm trying to apply RL to the stock market with modest success (I'm at the beginning). What bothers me is not understanding in depth what really happens inside these neural networks. Your videos helped me with deep neural networks, but I should better understand the math behind LSTMs and reinforcement learning algos. Do you provide those explanations in your book? Thanks in advance, Giuseppe
@fctrend6170 2 years ago
Nice, I can supply robot dog hardware.
@KennTollens 2 years ago
What would happen if you punished for crouching?
@ienesree6808 2 years ago
Hello Sentdex, I want to start programming AI, but I don't know how to start. I tried several times to start with PyTorch but always stopped because I didn't understand it. I have basic knowledge of Python, but I still only understand a small part. Do you have any tips for me?
@monisprabu1174 2 years ago
I don't have a clue how to use Nvidia Omniverse or what that thing is. It would be great if you uploaded a tutorial on it. BTW, great video!!! Very interesting!
@iwoaugustynski9265 2 years ago
What about a reward for similar movement of pairs of legs? Or directly linking the legs in pairs?
@daniellaucht5560 2 years ago
Would it be possible for you to provide a newer branch of your great Bittle solution? At the moment I get this error with Isaac Sim 1.2: [carb.python] Failed to reload python module pxr.Tf. Error: module pxr.Tf._tf not in sys.modules. This happens when I run TD3-Bittle-16-1.py via ./python.sh. Maybe this module is no longer supported.
@aadarshkumar2257 2 years ago
Will the Neural Networks from Scratch in Python series continue? When will the next video in that series come? Please clarify!
@ashutoshmishra5901 6 months ago
Where are the other 2 parts?
@Build_the_Future 5 months ago
How are you able to update the joint positions? I saw in part 1 that you had to constantly rewrite the file. Did you find a better way?
@1UniverseGames 2 years ago
Happy new year, sir. I have a question: how can I plug or integrate a PyTorch-based GNN model into PySpark or a Spark cluster?
@pedroalvarado2515 2 years ago
Sentdex, do you think it is possible to make an AI gym environment of planet surfaces for robots to explore? This robot reminds me of a prototype idea for Martian walking probes.
@sentdex 2 years ago
Sure. I am not sure about texturing the ground surface at the moment, but I'd imagine it's fairly simple to make the terrain more interesting.
@tcgvsocg1458 2 years ago
Can you do a Python tutorial?
@trongnguyenphanminh5615 2 years ago
What software did you use in this video?
@UNTBC 2 years ago
Punish for efficiency: faster legs use more power with less effective work.
@sentdex 2 years ago
This is effectively the direction-change punishment. It can't be purely about speed, since fast legs might also be... well... moving us forward fast :D If you have a proposal for an actual calculation, I'd be happy to consider applying and testing it.
@SmartKeyboard2011 1 month ago
Why don't you put this code onto the real robot dog?
@bytblaster 1 year ago
Can we find the code somewhere?
@amangautam1779 2 years ago
How did you learn all this stuff???!!!!
@hendazzler 2 years ago
more like _brittle_ reinforcement learning, amirite
@phillipotey9736 2 years ago
Give a punishment for low height, or maybe for deviating from a specific height if you want one. You're awesome.
@danielniels22 2 years ago
Have you ever made a video where you talk about yourself? I still wonder how you went from a law major to doing all this without a computer science background.
@sentdex 2 years ago
You can check out the podcast I did with Sanyam here: kzbin.info/www/bejne/b4SQk4uoj96MkMU
@danielniels22 2 years ago
@@sentdex Yeah, cool!
@jameshodds8800 2 years ago
Sentdex, I'm looking at starting OpenCV on a Raspberry Pi 4B. I have no computer knowledge, and I would like to know which of your videos would be best to start from. What I would like to do with OpenCV is body detection: grab an image from the Pi camera and email it to myself. Sorry for this comment on your YouTube page, but I don't know how to find your email address from YouTube. James. Please delete if required.
@hansoloo98 2 years ago
If you have no computer knowledge, your goals are quite tough. Do you have any mentors who can help when you get stuck? That is critical, I think.
@tsunamio7750 2 years ago
Your poor things are scared of using their god-damned thighs. They are only using the front part of their legs! That just can't be smooth! Somewhere in your restrictions, you should let the hip joints be free to move, because here it's not jittery, it's stiff as steel.
@tj_1260 2 years ago
Lopfs
@zbigniewloboda3393 2 years ago
2:35 Forgive my frank response. It looks like you have no methodology for operating your creation. I don't have one either, but I hope you stop where you are and start developing one. Thank you.