Build a Chrome Dino Game AI Model with Python | AI Learns to Play Dino Game

39,983 views

Nicholas Renotte

1 day ago

Comments: 122
@aymanaslam7267 2 years ago
Thanks for listening to our feedback, Nick! I think you should mention that you're building a custom RL environment in this tutorial, since a lot of people who have been waiting for a tutorial on exactly that might miss it. Thanks again for all the content!
@NicholasRenotte 2 years ago
Yeah, I need to call it out more!
@Hassibayub 2 years ago
Great tutorial, with clear explanations and presentation. Nick consistently exceeds expectations by providing top-class tutorials. Thanks a lot, NICK. 👍❤
@NicholasRenotte 2 years ago
Thanks so much @Muhammad!!
@zaplavs6944 1 year ago
Nicholas Renotte, hi. Thank you so much for this video; I've learned a lot. If you're interested, I've improved your code a bit. 1. I removed the letter-based (OCR) detection of the end of the game, since it cost too many FPS, and instead detect the end of the game by the change in color (pixel value) at a fixed location. My FPS grew to 25. 2. I enlarged the dinosaur's field of view, because otherwise it doesn't have time to catch the moment to jump. 3. I changed the reward: -1 for any action and +2 for inaction. Thanks to this, I got rid of random actions and got more deliberate behaviour. (Translated from Russian into English; mistakes are possible.)
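For readers who want to try the same changes, here is a minimal sketch of pixel-based game-over detection and the reshaped reward, assuming the tutorial's mss-based environment; the capture region, threshold, and no-op action index are illustrative assumptions, not the commenter's actual values:

```python
import numpy as np
from mss import mss

cap = mss()
# Illustrative region: a patch that darkens when the "GAME OVER" banner appears
done_location = {'top': 405, 'left': 630, 'width': 10, 'height': 10}

def get_done():
    """Detect game over from pixel intensity instead of OCR (far cheaper per frame)."""
    patch = np.array(cap.grab(done_location))[:, :, :3]
    return patch.mean() < 100  # illustrative darkness threshold

def shaped_reward(action):
    """-1 for any action, +2 for inaction, per the comment above
    (assuming action 2 is the tutorial's no-op)."""
    return 2 if action == 2 else -1
```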
@buzzchop5520 1 year ago
Yo, I'm just getting started working on this. NICE!
@raadonyt 1 year ago
Can you please share that reward code of yours?
@tetragrammat0n 8 months ago
I have applied these changes but the FPS remains at 1.
@fustigate8933 2 years ago
Been waiting for this one!🔥
@NicholasRenotte 2 years ago
🙌
@Thomas-uq6gl 2 years ago
Hey man, I really enjoyed this video. I'd be interested in some multi-agent RL next, where models play in the same environment at the same time, against or with each other.
@NicholasRenotte 2 years ago
Definitely, though I'll need to use a different framework for that; I don't think SB3 has support for multi-agent training yet. I took a look at the Unity SDK and apparently it can handle it. I might take a deeper dive when I'm back.
@Quantiflyer 1 year ago
@NicholasRenotte I know I'm very late to reply, but the Unity SDK is incredibly easy to set up and doesn't need any extra modifications to work with multiple agents. Compared to the Unity SDK, OpenAI Gym is very difficult. The only downside of the Unity SDK is (of course) that it only works with Unity.
@SimonLam2024 2 years ago
As always, thank you for the great tutorial.
@NicholasRenotte 2 years ago
Thanks for checking it out Simon!!
@nikvod1330 2 years ago
Hey Nick! Love you ^_^ That is so cool. What's next? What about a game in which you need to control the mouse and buttons?
@NicholasRenotte 2 years ago
Hmmm, yeah that would be cool!!
@Quantiflyer 1 year ago
Use the keyboard and pyautogui modules for that.
@frankz61 1 year ago
Changing the "cv2.waitKey(1)" to "cv2.waitKey(0)" will fix the render() function freezing issue.
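For reference, a hedged sketch of the kind of render method being discussed (the frame shape follows the tutorial's (1, 83, 100) observations; the function shown here is a simplified stand-in, not the video's exact code):

```python
import cv2
import numpy as np

def render(frame):
    """Display one observation frame in an OpenCV window."""
    cv2.imshow('Game', np.squeeze(frame))  # drop the channel axis: (1, 83, 100) -> (83, 100)
    # cv2.waitKey(1) polls for ~1 ms so frames keep streaming; cv2.waitKey(0)
    # blocks until a key is pressed, which is the swap suggested above.
    if cv2.waitKey(1) & 0xFF == ord('q'):
        cv2.destroyAllWindows()
```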
@vialomur__vialomur5682 1 year ago
Thanks a lot, I always wanted a custom environment :)
@gonzalobaezcamargo2210 2 years ago
Great stuff again! You are creating so much content that I'm struggling to keep up, but please keep going!
@meetvardoriya2550 2 years ago
We used to see the JavaScript hack for a Dino bot, and here's Nick automating it with custom logic using RL 😍. Always been a fan of your content, Nick, and congrats on your 80k subs 💥💯
@NicholasRenotte 2 years ago
Hahaha, someone told me you can remove the 'die' line in the JS. Now I'm like, I should've just done that LOL
@prasenlonikar9753 2 years ago
Thanks for creating this video. Can you please create a video on evolutionary AI playing the Dino game?
@pushpakbakal4457 2 years ago
Hey Nicholas, I'm stuck at the initial stage, installing pydirectinput. I keep getting this error: module 'ctypes' has no attribute 'windll'. Please look into this if possible.
@gmancz 1 year ago
If you use Mac or unix-like systems, you should use PyAutoGUI instead.
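As a rough sketch of the swap (PyAutoGUI's press() and click() roughly mirror pydirectinput's API, though it synthesizes higher-level OS events rather than DirectInput ones; the coordinates below are placeholders):

```python
import pyautogui

pyautogui.FAILSAFE = False     # otherwise flinging the mouse to a screen corner aborts the script
pyautogui.press('space')       # jump, in place of pydirectinput.press('space')
pyautogui.click(x=150, y=150)  # focus/restart click, in place of pydirectinput.click(...)
```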
@ege1217 6 months ago
1:17:34 I get an error here; how can I fix it? AssertionError: Your environment must inherit from the gymnasium.Env class.
@reinierkr.bz2hs 3 months ago
I got the same issue. It's a version issue, and I fixed it for myself; check my code and it will solve your problem. This is how I created an env for the Flappy Bird game:

```python
import cv2
import numpy as np
import pydirectinput
import time
from mss import mss
import gymnasium as gym
from gymnasium.spaces import Discrete, Box
import pytesseract


class WebGame(gym.Env):
    def __init__(self):
        super().__init__()
        # Setup spaces: 2 actions, 0 = do nothing, 1 = click
        self.action_space = Discrete(2)
        self.observation_space = Box(low=0, high=255, shape=(1, 83, 100), dtype=np.uint8)
        # Capture game frames
        self.cap = mss()
        self.game_location = {'left': -1488, 'top': -1297, 'width': 424, 'height': 662}
        self.done_location = {'left': -1230, 'top': -999, 'width': 94, 'height': 23}
        self.np_random = None  # Variable to store the random generator

    def step(self, action):
        if action == 1:  # Action 1 means "click"
            pydirectinput.click(x=-1519, y=-79)  # Adjust to the desired click position
        # Action 0 is "do nothing", so we skip the click
        done, done_cap = self.get_done()
        observation = self.get_observation()
        reward = 1  # Simple reward structure
        terminated = done  # If the game is done, the episode is terminated
        truncated = False  # No truncation logic, so it's False
        info = {}
        return observation, reward, terminated, truncated, info

    def reset(self, seed=None, options=None):
        # Set the seed if provided
        if seed is not None:
            self.np_random, _ = gym.utils.seeding.np_random(seed)
        time.sleep(1)
        # Click to start the game (adjust coordinates to the correct start position)
        pydirectinput.click(x=-1519, y=-79)
        return self.get_observation(), {}  # Observation plus an optional info dict

    def render(self):
        cv2.imshow('Game', self.current_frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            self.close()

    def close(self):
        cv2.destroyAllWindows()

    def get_observation(self):
        raw = np.array(self.cap.grab(self.game_location))[:, :, :3].astype(np.uint8)
        gray = cv2.cvtColor(raw, cv2.COLOR_BGR2GRAY)
        resized = cv2.resize(gray, (100, 83))
        channel = np.reshape(resized, (1, 83, 100))
        return channel

    def get_done(self):
        done_cap = np.array(self.cap.grab(self.done_location))
        done_strings = ['=LUB', 'si ']
        done = False
        res = pytesseract.image_to_string(done_cap)[:4]
        if res in done_strings:
            done = True
        return done, done_cap
```
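If you adapt this class, it may be worth validating it against the gymnasium API before training. A minimal sketch using gymnasium's bundled checker, assuming the WebGame class above:

```python
from gymnasium.utils.env_checker import check_env

env = WebGame()
check_env(env)  # raises if reset()/step() return values or the spaces violate the gymnasium API
```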
@peralser 2 years ago
Nick, you always do the best!!! Thanks!!
@thewatersavior 2 years ago
Just awesome, thank you!
@neo564 2 years ago
Can you make a video about how we can retrain a previously trained model?
@brandencastle3526 1 year ago
Is there any way to use an image to signal a reset? Here we used the "game over" text, but I wanted to see if you could use an image instead.
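One hedged way to do that is OpenCV template matching against a saved crop of the game-over screen; a minimal sketch (the template filename and match threshold are illustrative):

```python
import cv2

template = cv2.imread('game_over.png', cv2.IMREAD_GRAYSCALE)  # hypothetical saved crop

def image_done(screen_gray):
    """Return True when the game-over image appears in a grayscale screen grab."""
    res = cv2.matchTemplate(screen_gray, template, cv2.TM_CCOEFF_NORMED)
    return res.max() >= 0.8  # illustrative confidence threshold
```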
@ai.egoizm2.059 1 year ago
Cool! That's what I need. Thanks!
@ashkankiafard8566 2 years ago
Hi Nick! Thank you so much, this was exactly what I was looking for! One question though: how can I know which algorithm and policy from Stable Baselines I should use for each different game? Will I understand if I just read the docs? Keep up the great work! Love your tutorials!
@NicholasRenotte 2 years ago
Yep, go through the docs and read papers! Normally I'll try out a range of algos; for hyperparameter optimization I'll use Optuna!
@ashkankiafard8566 2 years ago
@NicholasRenotte Thanks a lot!
@gumbo64 2 years ago
This is timed perfectly for me. In my project I'm using Python Selenium to play Flash games (via Ruffle), which should keep all the inputs, screenshots, etc. contained. I was wondering whether pydirectinput would be faster, since speed is of course very important for training. Either way, this vid will help a lot with the environment and image processing, so thank you!
@NicholasRenotte 2 years ago
It's meant to be faster, but for whatever reason I couldn't get the gym environment past 2 FPS. I tried a ton of optimizations but couldn't find the specific reason why; I'm going to dig into it more if I ever do more games.
@wiegehts6539 2 years ago
@NicholasRenotte Please figure it out!
@jumbopopcorn8979 2 years ago
Hello! I tried doing this with Geometry Dash, a similar game where you have to jump over obstacles. I trained it for 100,000 steps, and it's about as bad as pressing random buttons. The get_observation() function had enough information to see the objects to jump over, but it feels like my model didn't learn anything. Looking a little further, I found that the reward didn't seem related to survival time at all; it felt like it was picking random numbers, even though I used the same reward as you. Any help would be appreciated!
@Alex-ln7ds 2 years ago
Hi Nicholas, thanks for the content (amazing as usual 😄). What PC specs do you have? Like GPU/CPU/RAM?
@NicholasRenotte 2 years ago
Cheers @Alex, it's a Ryzen 7 3700X, 2070 Super and 32 GB of DDR4 (I think). Stay tuned for Sunday's vid; I'm going to do a deep dive into my hardware!
@Alex-ln7ds 2 years ago
@NicholasRenotte Yeah, I watched the video already! Thanks for answering. And additional thanks for the content; you can see how much effort you put into your videos! 🙏
@dr.mikeybee 2 years ago
This looks like a fun one, but it only runs on Windows. I think if you'd used pyautogui instead, this would also run on Linux and macOS.
@NicholasRenotte 2 years ago
Ohhhh, I didn't realise direct input would cause issues. It should be easy enough to swap out for pyautogui; thanks for the heads up, @Michael!
@m12772m 2 years ago
Bro, please cover Disco Diffusion! We need a video for that! If you could handle multiple initial images with multiple prompts, maintained in a Google Sheet... at the moment it's one image with one prompt. That would be awesome.
@omarismail4734 2 years ago
Thank you, Nick, for this video! May I ask what screencast recorder you are using for your videos? It seems you can zoom and pan while screencasting. Or is this done in post-production?
@NicholasRenotte 2 years ago
It's live, Omar; I use an Apple trackpad to zoom! I show the setup in the battlestation breakdown!
@Adam-vy1ri 7 months ago
Is there a basic, from-scratch, no-experience ML/DL/RL course that you have done? I'm struggling to find a day-zero guide on how to get started with all of this. I want to make my own custom env that loads mazes, games, maps, etc. so agents can RL their way around.
@novis1177 2 years ago
Nice video! But have you ever considered building the agent on your own rather than just using Stable Baselines? Because I think Stable Baselines is quite limited.
@NicholasRenotte 2 years ago
I have but it's quite involved and would probably end up being a monster tutorial. Haven't had the time to dedicate to it yet, but maybe soon!
@sanjoggaihre4178 8 months ago
While running DQN I got an error in Anaconda\Lib\site-packages\torch\_dynamo\config.py:279, in is_fbcode(): the line "return not hasattr(torch.version, "git_version")" raises AttributeError: module 'torch' has no attribute 'version'. How can I resolve it?
@papercraftsanddrawing 5 months ago
Bro, the monitor wallpaper in the first scene is the same as mine. :D
@solosoul2041 2 years ago
Nick! How can I get a model summary from a .pth file?
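A hedged sketch of one way to inspect it with plain PyTorch; note that a .pth file can hold either a whole pickled module or just a state dict (and SB3 models are .zip files loaded with DQN.load instead):

```python
import torch

obj = torch.load('model.pth', map_location='cpu')  # hypothetical path
if isinstance(obj, torch.nn.Module):
    print(obj)  # layer-by-layer summary
else:
    # A state dict: print parameter names and shapes instead
    for name, tensor in obj.items():
        print(name, tuple(tensor.shape))
```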
@toppaine4008 1 year ago
Hi, I have a question about the get_observation function: why do you resize, and how does it affect the code? (I'm a beginner, so this might seem like a dumb question.)
@gr33nben40 9 months ago
I think it's to minimize the size of the captured data; the smaller the observation, the faster it'll run.
@TheRamsey582 1 year ago
pydirectinput only works on Windows. Will this work on a Mac with pyautogui? I haven't attempted it yet, but I'm planning to.
@brhoom.h 1 year ago
Hello, thank you so much for this video. I'm wondering if I could download your 88k-step trained model and train it further so it reaches 20k or more? Can I do that or not, and how?
@samibenhssan3121 2 years ago
You are a marvellous guy! But I think it's time to branch out a bit from ML into data engineering / MLOps.
@NicholasRenotte 2 years ago
Cheers, yeah we'll probably get into some of that stuff later this year!
@sbedi5702 2 years ago
Hey, this is a great video! I have a question: when installing pydirectinput on my Mac I get the error "AttributeError: module 'ctypes' has no attribute 'windll'". What should I do?
@neo564 2 years ago
It might be that pydirectinput is Windows-only.
@hunterstewart3172 2 years ago
You can use pyautogui instead.
@PedroHenrique-ci8fy 1 year ago
Anyone else getting the error "tuple has no attribute 'shape'" when trying to make the model play from the 88k-step checkpoint available in the GitHub repository?
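That error usually means gymnasium's reset() return value, an (observation, info) tuple, is being passed straight to the model, which expects just the observation array. A minimal sketch of the play loop with the tuple unpacked, assuming an SB3 model and a gymnasium-style env already constructed:

```python
obs, info = env.reset()  # gymnasium returns a tuple; old gym returned obs alone
done = False
while not done:
    action, _states = model.predict(obs)
    obs, reward, terminated, truncated, info = env.step(int(action))
    done = terminated or truncated
```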
@mdiianni 1 year ago
Is it possible to combine the training steps with actual human interaction/training? Imagine a common scenario where a teacher provides some instructions and the pupils follow them to get a first hint; after a few more lessons most students are getting the basics right, and from there it's just a matter of training hard (lots of epochs or callbacks, starting from 88,000 steps rather than from 0), if that makes sense?
@captainlennyjapan27 2 years ago
Hello Nicholas! As part of my Master's project, I want to compare two human voice audio files. I want to compare two spectrograms and look for similarities and differences. Do you have any advice on where to get started with this? Of course I'll be doing my own research, but I thought I'd ask my favorite data science YouTuber for his wisdom! ;)
@GamingExpert0321 2 years ago
Hey, I have a quick question and am really hoping for your reply. Because you are using reinforcement learning, here's what I think will happen: no matter how many times we train it (even a trillion times), it will still have a fairly high probability of game over on the first few cacti, because cactus positions are random and the AI doesn't care about cactus position; it only cares about when it jumped in the past and when it has to jump this time to maximize the reward. But since the cacti are random, I think it will never be perfect. Am I wrong? Please answer.
@abhinavtiwari8481 1 year ago
Yes, you are wrong here, because it does not record when and where it jumped in the past; it sees what the situation around it was when it jumped and got rewarded (that's the whole purpose of using screenshots here), so that when the AI encounters the same situation again, it will jump. It doesn't memorize the time and place to jump; that would just be recording, not learning.
@lorenzoiotti 2 years ago
Hi, when I load a model back it works properly, but if I try to resume training from the saved model it starts training from scratch. Does anyone know what I'm doing wrong?
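A hedged sketch of resuming training with SB3's DQN (the checkpoint path is a placeholder): loading must go through DQN.load rather than constructing a fresh model, and reset_num_timesteps=False keeps the step counter and logs continuing from the checkpoint instead of restarting at 0:

```python
from stable_baselines3 import DQN

env = WebGame()  # the custom environment from the tutorial
model = DQN.load('train/best_model_88000', env=env)  # hypothetical checkpoint path
model.learn(total_timesteps=100000, reset_num_timesteps=False)
```

One caveat: schedules such as the exploration rate depend on how the run is configured, so a resumed run can behave slightly differently from one uninterrupted run.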
@jacobmargraf4564 2 years ago
Great video! Is there a way to continue training from my latest model, or do I have to start the training all over again if I want it to learn more?
@MAAZ_Music 2 years ago
Use the latest model.
@captainlennyjapan27 2 years ago
It’s here!!!!!
@NicholasRenotte 2 years ago
IT ISSS!! Thanks for checking it out, Leonard!
@SaiyanRGB 10 months ago
Can this be applied to 3D games in Unreal Engine?
@giochelavaipiatti 2 years ago
Great tutorial. I just have a question: after about 30,000 training steps I still see no progress in how the AI plays. Is that normal, or might there be a bug in my code?
@NicholasRenotte 2 years ago
Nope, that's normal. Keep training! It takes a while. You could also try dropping the learning_starts parameter so it starts training the DQN from the get-go!
@giochelavaipiatti 2 years ago
@NicholasRenotte Thank you!
@NoahRyu 2 years ago
How can I add negative reward functions? I've searched all over the internet but can't find anything similar for this code...
@NoahRyu 2 years ago
Never mind, I found out how to do it.
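For anyone else looking: a minimal sketch of adding a negative (penalty) reward inside the env's step() method, assuming the tutorial's old-gym-style environment; the action map and reward values below are illustrative assumptions, not Nick's exact code:

```python
import pydirectinput

# Drop-in sketch of a step() method for the tutorial's WebGame class
def step(self, action):
    action_map = {0: 'space', 1: 'down', 2: 'no_op'}  # assumed tutorial action set
    if action != 2:
        pydirectinput.press(action_map[action])
    done, _ = self.get_done()
    observation = self.get_observation()
    # Negative reward on death, small positive reward for surviving the frame
    reward = -10 if done else 1
    return observation, reward, done, {}
```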
@arjunprasaath5538 2 years ago
Hey Nick, awesome work! A quick question: you got an FPS value of 2, which is very low. Any thoughts or ideas on improving it?
@Froparadu 1 year ago
Hey Arjun. I know this might be late, but I started my RL journey last week and have watched Sentdex's RL series (SB3). I haven't finished watching this video, but in the Sentdex video he was rendering the Lunar Lander UI to show the training process visually, which drastically reduced training speed since the UI had to render. If the training process were headless (without visuals), it would have been much faster. Extrapolating that to this video: the game can't be run headless unless you have the game's internal code in your custom environment, and I assume Nicholas is capturing frames and feeding them to the DQN network. There are a bunch of other factors that may affect the training process as well (and my reasoning might not apply in Nicholas' case). Hope this helps! EDIT: I saw that Nicholas is using pytesseract to predict whether the game is over and to mark the episode as "done". That seems like a very expensive operation, since get_done() runs every frame to check whether the game is over. Devising another way to check that will drastically speed up training.
@arjunprasaath5538 1 year ago
@Froparadu Thanks a lot man, your feedback helps me validate my thought process.
@xiaojinyusaudiobookswebnov4951 1 year ago
@Froparadu That makes a lot of sense, thank you.
@musa_b 2 years ago
Hey Nick, is it possible to use TensorFlow for RL?
@NicholasRenotte 2 years ago
It is! I might eventually do it as part of my DL basics / from-scratch series (e.g. face detector, iris tracking, etc.).
@musa_b 2 years ago
That would be really great, brother!!
@musa_b 2 years ago
Thanks for always helping us!!!
@SalahDev-wz8ob 2 years ago
Yo Nick, well done! I was wondering how you might solve the 1 FPS issue? Thank you, dude.
@raadonyt 1 year ago
Did you delete the file from GitHub? I have trained my model to 60,000 steps but there's literally no progress at all. Can you please share the source code? I have to present it tomorrow.
@YasinShafiei86 2 years ago
Can you please make a Video Classification tutorial?
@affanrizwan3672 2 years ago
Hey Nick, I am having trouble downloading your model; the system says a virus was detected.
@luklucky9516 2 years ago
Hi, I really like your content, but I suck at programming, mainly because I only started last year and don't have a course or anything like that to follow. Do you have a suggestion for how a beginner can learn reinforcement learning? I quickly lose track of what the code you write actually does.
@aankitdas6566 2 years ago
Getting a module 'tensorflow' has no attribute 'io' error while trying to run the model. Any fixes, anyone? (I have TensorBoard installed.)
@NicholasRenotte 2 years ago
Might be a TensorBoard issue; try running through this: stackoverflow.com/questions/60146023/attributeerror-module-tensorflow-has-no-attribute-io
@sspmetal 2 years ago
I would like to create an AI that trades binary options. Do you have any suggestions?
@rlhugh 2 years ago
Hi. What do you mean by 'binary options'?
@sspmetal 2 years ago
@rlhugh I mean binary options trading. It's like forex, but you have to predict the direction and the expiry time.
@BodyBard 11 months ago
Hey, did it work out for you? I've also made some good agents on futures and would like to share experiences.
@FatDwarfMan 1 year ago
Can someone help me with Jupyter? Like, how do I set everything up?
@ozzy1987mr 1 year ago
I can't find good, detailed material on this topic in Spanish... your videos help me a lot.
@JacobSean-iy3tl 9 months ago
Hey, I would like to see you try this with FIFA 😁
@sunidhigarg673 2 years ago
Please do more stuff like this. Is there any Discord channel?
@jinxionglu5008 2 years ago
Great content!
@seandepagnier 2 years ago
I am disappointed with the results. I think a single IF statement on a particular pixel of the image would give far better performance.
@erfanbayat3974 1 year ago
You are the GOAT.
@MatheusMorett 1 year ago
Thanks!!!!
@skaiyeung7183 2 years ago
Nice.
@owolabitunjow9041 2 years ago
Nice video!
@ege1217 6 months ago
45:46
@ApexArtistX 1 year ago
And DXcam is faster than MSS.
@fruitpnchsmuraiG 1 year ago
Hey, I'm getting a lot of errors since gym has now shifted to gymnasium. How do I fix that? Did you get the code running?
@ApexArtistX 1 year ago
@fruitpnchsmuraiG What error message?
@ApexArtistX 1 year ago
@fruitpnchsmuraiG gym's step() returns 4 values and gymnasium's returns 5, if I remember correctly, so you need some changes in your code.
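Concretely, a sketch of the API difference between the two libraries:

```python
# Old gym API: step() returns 4 values
obs, reward, done, info = env.step(action)

# gymnasium API: done is split into terminated (episode ended naturally)
# and truncated (episode cut off, e.g. by a time limit)
obs, reward, terminated, truncated, info = env.step(action)
done = terminated or truncated  # recover the old-style flag

# reset() changed too: gym returned obs alone, gymnasium returns (obs, info)
obs, info = env.reset()
```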
@dewapramana3859 2 years ago
Nice
@jasonreviews 2 years ago
It's easier than that: just remove the die feature with JavaScript. You don't need AI, lol.
@NicholasRenotte 2 years ago
😂 damn, I didn't even think of that!
@wriverapaniagua 2 years ago
Excellent!!!!!
@sunidhigarg673 2 years ago
is*
@musa_b 2 years ago
Hahh 🥴🥴! This one was left.
@NicholasRenotte 2 years ago
🙏🙌🙏
@JurgenAlan 2 years ago
Please check your LinkedIn