This Isn't Just A Chatbot (OpenAI Should Be Scared...)

123,701 views

Theo - t3.gg

4 months ago

I heard Nvidia was doing some chatbot stuff, but Chat With RTX ended up being much more interesting than I expected. Retrieval-augmented generation (RAG) is a fascinating new technique and I'm curious how we'll see it adopted over time. Compared to ChatGPT and Ollama, this is very different.
"insert statement about Tensorflow for SEO reasons here"
Sources:
www.nvidia.com/en-us/ai-on-rt...
github.com/NVIDIA/trt-llm-rag...
github.com/ollama/ollama
Check out my Twitch, Twitter, Discord and more at t3.gg
S/O Ph4se0n3 for the awesome edit 🙏

Comments: 251
@yowwn8614
@yowwn8614 4 ай бұрын
That start with Linus made my heart drop.
@H-Root
@H-Root 4 ай бұрын
Same here 😂 When I saw him I thought I'd clicked the wrong notification for a split second
@berkeozkir
@berkeozkir 4 ай бұрын
Same! Wasn't expecting that.
@danielfernandes1010
@danielfernandes1010 4 ай бұрын
Wow somehow I missed it lol
@MelroyvandenBerg
@MelroyvandenBerg 4 ай бұрын
hahhaha.. there we go Linus.
@blacktear5197
@blacktear5197 3 ай бұрын
Hahaha, when will you all realize that open source is one of the biggest scams by multinationals to get you to work in exchange for a t-shirt
@MrSofazocker
@MrSofazocker 4 ай бұрын
ONLY addition: it HAS to support markdown. Imagine just setting this to your Obsidian vault's folder path and boom, you can chat with your second brain 🤯
@Sanchuniathon384
@Sanchuniathon384 4 ай бұрын
I need this yesterday
@personinousapraham3082
@personinousapraham3082 4 ай бұрын
The way these models work means it almost definitely already does to some extent, since there's a decent amount of markdown in the training data for the models used for both generation and document embedding (which makes the search part work). At that point it's mostly just a matter of prompt tuning
@z_0968
@z_0968 4 ай бұрын
What if I told you that already exists? - There is an Ollama plugin to chat with your notes; it's more single-note specific, but it's free. - There is also a plugin called Smart Connections, but you need an OpenAI API key for that. This one works across all your notes: it creates embeddings (vector representations of your notes), and then you can chat with all your notes.
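A minimal sketch of the vault-indexing idea described in this thread, assuming the sentence-transformers package for local embeddings and a hypothetical vault path (neither is tied to the plugins named above):

```python
# Rough sketch: index an Obsidian vault's markdown notes for local semantic search.
from pathlib import Path
from sentence_transformers import SentenceTransformer

VAULT = Path.home() / "ObsidianVault"            # hypothetical vault location
model = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model

notes = [(p, p.read_text(encoding="utf-8")) for p in VAULT.rglob("*.md")]
vectors = model.encode([text for _, text in notes])  # one embedding per note

# `vectors` can now go into any vector store; a question embedded the same way
# is matched against them to find the notes to hand to the model as context.
print(f"indexed {len(notes)} notes")
```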
@cabaucom376
@cabaucom376 4 ай бұрын
So I'm working on this… anyone who sees this comment, what are some must-haves for a local-first AI knowledge vault?
@carvierdotdev
@carvierdotdev 4 ай бұрын
❤ oh man, that's actually very good thinking..
@hugazo
@hugazo 4 ай бұрын
Finally one use for my 4080 that doesn't involve crying trying to play cities skylines 2
@RT-.
@RT-. 4 ай бұрын
Wut? It lags on that game?
@Rock48100
@Rock48100 4 ай бұрын
@@RT-. Yeah that game is beyond a mess
@hugazo
@hugazo 4 ай бұрын
The game is broken @RT-.
@GetUrFunnyUp
@GetUrFunnyUp 3 ай бұрын
Isn't that a CPU-bound game?
@oo--7714
@oo--7714 3 ай бұрын
​@@GetUrFunnyUpyes
@YomiTosh
@YomiTosh 4 ай бұрын
Hey Theo, just wanted to point out a few inconsistencies. RAG doesn't train a model; it indexes the text files in a vector database and uses embedding similarity to look up relevant text. So the model, such as Llama 2 or Mistral, is unchanged, but it is able to add context and make the retrieved text more conversational. There are loads of great AI/RAG projects other than Ollama out in the git seas too, many not quite as simple or easy to use though. Thanks for all the great videos. Already subscribed ;)
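A tiny illustration of the point above (the model itself is unchanged; retrieval only adds context to the prompt). The snippets are made up:

```python
# RAG at its simplest: paste retrieved text into the prompt of an unchanged model.
retrieved = [
    "Transcript: Theo says he reaches for React on most projects.",
    "Transcript: Theo explains what retrieval-augmented generation is.",
]
question = "What is Theo's favorite library?"

prompt = (
    "Answer using only the context below.\n\nContext:\n"
    + "\n".join(retrieved)
    + f"\n\nQuestion: {question}"
)
# `prompt` is what gets sent to the same Llama 2 / Mistral weights as always.
print(prompt)
```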
@cryptogenik
@cryptogenik 4 ай бұрын
Came here to say this, and also - your bias is showing :P
@rendezone
@rendezone 4 ай бұрын
Shamelessly mentioning here the product/startup I work for: Qdrant is an open-source vector database which excels at RAG setups. I created the JS SDK :P which offers fully-typed REST and gRPC clients
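For reference, a minimal sketch of the same idea with Qdrant's Python client (the comment mentions the JS SDK; this assumes the qdrant-client package's in-process mode, and the toy 3-dimensional vectors are stand-ins for real embeddings):

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient(":memory:")  # in-process instance, no server needed
client.create_collection(
    collection_name="docs",
    vectors_config=VectorParams(size=3, distance=Distance.COSINE),
)
client.upsert(
    collection_name="docs",
    points=[PointStruct(id=1, vector=[0.1, 0.9, 0.0], payload={"text": "RAG indexes documents"})],
)
hits = client.search(collection_name="docs", query_vector=[0.1, 0.8, 0.1], limit=1)
print(hits[0].payload["text"])  # the chunk that would be fed to the LLM
```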
@lukeweston1234
@lukeweston1234 4 ай бұрын
@@rendezone What does it offer that PGVector for instance does not?
@martinkrueger937
@martinkrueger937 3 ай бұрын
@YomiTosh by any chance do you know which RAG system/framework gives the best performance?
@rusyaidimusa2309
@rusyaidimusa2309 4 ай бұрын
I would consider giving a shoutout to the llama.cpp project that serves as the backend engine for many of the open source programs like Ollama, and to the many, many talented engineers who brought support to so many different system configurations. The open source scene has been on fire since Llama dropped and running models locally has never been easier.
@MrLenell16
@MrLenell16 4 ай бұрын
It's not training the model, just doing RAG. Retrieval is basically querying for relevant docs based on semantic similarity, roughly like doing a SQL query with a vector in the WHERE clause.
@poipoi300
@poipoi300 4 ай бұрын
Yep thanks for that, was about to comment something similar. Words are coordinates, yo.
@Imp0ssibleBG
@Imp0ssibleBG 4 ай бұрын
Is it done by searching for the nearest/closest embedding?
@LookRainy
@LookRainy 4 ай бұрын
@Imp0ssibleBG Pretty much. But there are other approaches for doing RAG
@hunterkauffman9400
@hunterkauffman9400 4 ай бұрын
The simplest technique is cosine similarity between the query embedding and each document chunk's embedding.
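The math in that comment, as a runnable sketch with toy vectors standing in for real embeddings:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

chunks = ["Theo talks about React", "Chat With RTX bundles a 7B model"]
chunk_vecs = np.array([[0.9, 0.1, 0.0], [0.1, 0.8, 0.3]])  # pretend embeddings
query_vec = np.array([0.2, 0.7, 0.4])                      # embedded question

scores = [cosine(query_vec, v) for v in chunk_vecs]
best = chunks[int(np.argmax(scores))]  # the chunk handed to the LLM as context
print(best, scores)
```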
@joreilly
@joreilly 4 ай бұрын
But is this the best example of 'kind of' training your own model? There's a race right now to train models on private or proprietary data. RAG seems to be the most practical solution so far, even though it's not perfect. Or am I wrong about this?
@tuckerbeauchamp8192
@tuckerbeauchamp8192 4 ай бұрын
Oh man, was not ready for that intro. I love LTT and your channel, that was a great little combination
@ofadiman
@ofadiman 4 ай бұрын
Finally 🎉 I couldn't wait any longer for ray tracing support in my chat bot GUI
@Sindoku
@Sindoku 4 ай бұрын
I’ll definitely be checking this out this weekend when I don’t have to work. This looks bad ass!
@medalikhaled
@medalikhaled 4 ай бұрын
They are not directly using Svelte; they are using an OSS project called Gradio for the UI, which uses Svelte under the hood.
@DaniDipp
@DaniDipp 4 ай бұрын
This is huge for my wiki. I can just give it a directory of markdown files. 🤯
@hugazo
@hugazo 4 ай бұрын
Better search in docs; I would add my frameworks'/libraries' documentation as well
@SenorRobinHood
@SenorRobinHood 4 ай бұрын
Would this work on the codebase for a library? For example, inputting a freshly downloaded WordPress directory and then also digesting the WordPress developer docs to make it your private Q&A tutor for a platform you're trying to learn?
@Al-Storm
@Al-Storm 3 ай бұрын
Yes
@aloufin
@aloufin 4 ай бұрын
Yes, your explanation of RAG was very nice and easy to understand
@Petyr25
@Petyr25 4 ай бұрын
Feedback: Superb video, more AI stuff from you would be great. Specially with open source stuff with our own data.
@adam_k99
@adam_k99 4 ай бұрын
Good stuff! Could you make a video on how well it performs as a coding assistant?
@Falkov
@Falkov 3 ай бұрын
Good stuff..liked and subbed.
@creatortray
@creatortray 4 ай бұрын
I love it! I find this stuff fascinating
@arnaudlelong2342
@arnaudlelong2342 4 ай бұрын
You and Prime need to get with this soon
@niteshbaskaran2262
@niteshbaskaran2262 4 ай бұрын
If all of the Python docs were fed to an LLM, would you query that LLM or still refer to the original docs?
@user-pc8vn6ym7r
@user-pc8vn6ym7r 4 ай бұрын
I have to admit, that is the MOST creative L&S I've ever seen on here. And I normally swear at the screen in response. Maybe.
@entropywilldestroyusall1323
@entropywilldestroyusall1323 4 ай бұрын
Great vid, Adam.
@arianj2863
@arianj2863 4 ай бұрын
Could you make a small RAG project :-)? Or do you know a channel that is like the Theo of open-source LLMs?
@DNA912
@DNA912 4 ай бұрын
While watching this I started to realize how much use I would have for this at work. The project I'm on has a huge amount of documentation, but everything is just a brain dump, and it has happened many times that we've found something "new" in the docs that we had completely missed before. Imma work on making an AI on that dataset asap. I love experimenting with AIs locally, it's so fun and it feels so much cooler and better than the cloud ones
@Al-Storm
@Al-Storm 3 ай бұрын
WSL2 works surprisingly well. I've been using it on one of my machines for SD, llama, and mixtral.
@jzeltman
@jzeltman 4 ай бұрын
Would love to see more AI content. Great look into this new release from NVIDIA
@brett_rose
@brett_rose 3 ай бұрын
I did something similar with Pinecone. I parsed a huge chunk of wiki data into a Pinecone DB. I then used one model prompt which would return multiple pieces of data based on the prompt. That model would then decide which pieces of outside information were the most related to the prompt. It would then send the original prompt along with the external data to a new model prompt, which would provide the response to the user.
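A sketch of that two-stage flow with hypothetical stand-ins for the Pinecone query and the model calls (the exact prompts and APIs aren't shown in the comment):

```python
def retrieve_candidates(prompt: str, k: int = 3) -> list[str]:
    # stand-in for a Pinecone similarity query returning k wiki snippets
    return ["snippet A", "snippet B", "snippet C"][:k]

def ask_model(prompt: str) -> str:
    # stand-in for an LLM call; any chat-completion API would slot in here
    return "snippet B"

def answer(user_prompt: str) -> str:
    candidates = retrieve_candidates(user_prompt)
    # stage 1: let the model decide which retrieved pieces actually matter
    selection_prompt = (
        "Which of these snippets are relevant to the question?\n"
        + "\n".join(candidates)
        + f"\n\nQuestion: {user_prompt}"
    )
    relevant = ask_model(selection_prompt)
    # stage 2: answer the original prompt with the selected context attached
    return ask_model(f"Context:\n{relevant}\n\nQuestion: {user_prompt}")

print(answer("Who founded the city?"))
```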
@unowenwasholo
@unowenwasholo 4 ай бұрын
This is like ControlNet for LLMs. Dope.
@nothingtoseehere5760
@nothingtoseehere5760 3 ай бұрын
NOT DEEP ENOUGH! MOAR PLZ!
@sarjannarwan6896
@sarjannarwan6896 4 ай бұрын
RAG setups are cool; they can use vector databases to map queries to data.
@tasmto
@tasmto 4 ай бұрын
Ok... this is really really cool!
@GeorgeG-is6ov
@GeorgeG-is6ov 3 ай бұрын
If you don't want to use it because it's so large, get Ollama and you can run models from your command prompt. I recommend watching a tutorial on it, and there are models as small as 1.8 GB (for example, Phi-2, which is small yet very powerful)
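A minimal sketch of talking to a small model through Ollama's local HTTP endpoint, assuming Ollama is running on its default port and a small model (here "phi") has already been pulled:

```python
import json
import urllib.request

payload = {"model": "phi", "prompt": "Summarize RAG in one sentence.", "stream": False}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",      # Ollama's default local API
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])  # the model's completion
```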
@TrimutiusToo
@TrimutiusToo 4 ай бұрын
Deeper, go even deeper!!!
@sadshed4585
@sadshed4585 4 ай бұрын
What is the VRAM requirement for RAG, or their version of it?
@aloufin
@aloufin 3 ай бұрын
I keep thinking about this video.... RAG is showing up on my Twitter timeline everywhere... I would have had to spend HOURS trying to understand it... let alone realize I could download NVIDIA's demo and run it on my GPU..... Your videos make it easy to understand huge swathes of new AI tech... not to mention they actually show working tech demos.
@SkyyySi
@SkyyySi 4 ай бұрын
The app isn't made with Svelte, but with Gradio. Gradio is a Python library for creating web UIs for ML applications. Gradio, however, uses Svelte and Tailwind internally.
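For context, a minimal Gradio sketch of the kind of UI wrapper being described; the answer function is a placeholder for whatever RAG pipeline sits behind it:

```python
import gradio as gr

def ask(question: str) -> str:
    return f"(model answer for: {question})"  # stand-in for retrieval + LLM call

# launches a local web UI with a text box in and a text box out
gr.Interface(fn=ask, inputs="text", outputs="text", title="Chat with my docs").launch()
```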
@eointolster
@eointolster 3 ай бұрын
What's the advantage over PrivateGPT, which has been out for months, where you can choose your own model and it is tiny in comparison?
@Tymon0000
@Tymon0000 4 ай бұрын
This will be huge when the AI is capable of parsing a whole project and multiple docs.
@pencilcheck
@pencilcheck 4 ай бұрын
ahhh, imagine parsing and generating tests that makes sense based on prompts ::OOOO
@MightyDantheman
@MightyDantheman 3 ай бұрын
This is pretty cool, though I'm still waiting for the day that I can use at least GPT-4 level AI locally *(and ideally either for free or a one-time payment for one single version).* Sadly, I doubt this will ever happen outside of opensource projects, which tend to not be as good due to less funding and resources. But I still appreciate any effort put towards that future.
@riftsassassin8954
@riftsassassin8954 3 ай бұрын
Definitely on team AI deep dive!
@bugged1212
@bugged1212 4 ай бұрын
Already been using this, I have also wired up ollama to serve multiple requests and I run a business off it now.
@juanmacias5922
@juanmacias5922 4 ай бұрын
Damn, that's impressive.
@azeek
@azeek 4 ай бұрын
Brooo what an opening ❤❤😂
@RisingPhoenix96
@RisingPhoenix96 4 ай бұрын
2:19 The YouTube algorithm recommends your videos to me frequently. Is there any real benefit I get from subscribing if I'm going to watch your videos and see all your community posts anyway?
@Gocunt
@Gocunt 3 ай бұрын
what if i don't subscribe to anyone because i just don't want to be subscribed to a bunch of random channels? ai doesn't understand
@E-Juice
@E-Juice 3 ай бұрын
I'll make 2 points here: 1) questioning the accuracy of this system, 2) why Windows.
1. What's interesting about downloading YouTube video transcripts and using those files at 7:40 is that Nvidia's setup is MOST LIKELY using their own ASR (Automatic Speech Recognition) model, either Canary or Parakeet, which I've tested and found to be good but still not as accurate as OpenAI's Whisper ASR model. So without knowing what specific model is used to transcribe the YouTube videos, we don't know how exact those transcriptions are, and that affects how well this RAG setup can answer questions using that data. I would recommend using Whisper-Large-v3 to transcribe the YouTube videos yourself, or just uploading actual documents and notes and testing those rather than transcribing YouTube videos.
2. You don't recommend using WSL, but you didn't elaborate. What is the best alternative? Installing Linux locally or using a cloud workstation? Don't say Mac, because Macs don't come with Nvidia GPUs.
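A minimal sketch of the transcription route suggested in point 1, assuming the open-source whisper package and ffmpeg are installed; the audio file name is hypothetical:

```python
import whisper

model = whisper.load_model("large-v3")        # smaller checkpoints like "base" also work
result = model.transcribe("video_audio.mp3")  # hypothetical audio extracted from a video

with open("transcript.txt", "w", encoding="utf-8") as f:
    f.write(result["text"])                   # feed this file to the RAG index instead
```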
@banalMinuta
@banalMinuta 3 ай бұрын
Why would you not recommend running Ollama on WSL right now?
@pixma140
@pixma140 4 ай бұрын
Nice thing Nvidia did there :) Do you want to share the two YouTube playlists in the comments or description maybe? :D
@TheGoodMorty
@TheGoodMorty 4 ай бұрын
Ollama just released the Windows preview
@lancemarchetti8673
@lancemarchetti8673 4 ай бұрын
Crazy... it's hard to keep up. And now there's Groq.. which is ridiculously fast.
@schtormm
@schtormm 4 ай бұрын
8:17 for some reason the Dutch public news broadcaster also uses svelte sometimes lmao
@red9090
@red9090 4 ай бұрын
Theo just dropped a 3 min subscribe pitch.
@hohohotreipatlajele2044
@hohohotreipatlajele2044 4 ай бұрын
I've tried it, but it's a bit strange and slow, and I couldn't figure out how to start it again after shutting it down
@Fire.Blast.
@Fire.Blast. 4 ай бұрын
1:36 "as you can see, it's pretty fast" yes, instant even it would seem
@chaks2432
@chaks2432 3 ай бұрын
This kind of stuff would be a lifesaver if it manages to work as an AI-powered chatbot for documentation for proprietary frameworks and stuff. I'm working at a startup and we're building our own framework from scratch, so having Chat with RTX work as an AI documentation assistant would be great
@banalMinuta
@banalMinuta 3 ай бұрын
I would definitely appreciate more AI content as somebody just getting into web development. It seems pretty apparent that AI is going to unleash a new category of tools, and mastering them will most likely be paramount to one's success
@user-tk5ir1hg7l
@user-tk5ir1hg7l 3 ай бұрын
50% faster inference with Nvidia GPUs on TensorRT is no joke. I hope they expand this and let you fine-tune and add models
@chriss3154
@chriss3154 3 ай бұрын
A comparison against privateGPT and/or localGPT would've been awesome
@blenderpanzi
@blenderpanzi 4 ай бұрын
Point it to a playlist of Jonathan Blow videos, then tell it JavaScript is the best language and ask when Jai will have LSP support. Can an LLM get an aneurysm?
@PRIMARYATIAS
@PRIMARYATIAS 4 ай бұрын
Could be good for learning. I could point it to some programming books I have so I can "chat" with them 😂
@sozno4222
@sozno4222 3 ай бұрын
The models you can run on Chat with RTX are a bit inadequate right now, but it shows promise
@zyxwvutsrqponmlkh
@zyxwvutsrqponmlkh 4 ай бұрын
I wonder how large a single text can be for this to work. Can I throw whole books at it? What about my state's entire legal code?
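The usual answer to the book-sized-document question is that RAG pipelines split long texts into overlapping chunks before embedding, so document size mostly costs indexing time rather than blowing the model's context window. A minimal sketch of that chunking step:

```python
def chunk(text: str, size: int = 1000, overlap: int = 200) -> list[str]:
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # overlap keeps sentences from being cut in half
    return chunks

book = "statute text " * 10_000  # stand-in for a whole book or legal code
print(len(chunk(book)), "chunks to embed")
```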
@setasan
@setasan 4 ай бұрын
That would be great for my hundreds of pages of OneNote files.
@MobCat_
@MobCat_ 4 ай бұрын
I wanna point that folder at my current project I'm working on, or a massive archive of Python code lol...
@ThePawel36
@ThePawel36 3 ай бұрын
I wonder if you could train your language model to play a game on your behalf, such as Cyberpunk, for example. It seems feasible, as some local language models are equipped with vision capabilities. It would be fascinating to witness the first YouTuber attempting this.
@exapsy
@exapsy 3 ай бұрын
00:03 and I already saw Linus dropping not just something, but a graphics card. Instant like! xD
@jackg_
@jackg_ 4 ай бұрын
Here's hoping more and more AI things move to have local options. Sure, not everyone can run these locally (I am typing this on an iMac from 2015), BUT it is super promising.
@christianremboldt1557
@christianremboldt1557 4 ай бұрын
Never thought an AI would convince me to subscribe to someone
@cintron3d
@cintron3d 3 ай бұрын
Fine fine I'll subscribe.
@bhaskaruprety230
@bhaskaruprety230 4 ай бұрын
Please make a video on SLM
@nathanfife2890
@nathanfife2890 3 ай бұрын
I'm curious how good it is at code
@johnbarros1
@johnbarros1 4 ай бұрын
Hey I hit subscribe! Gimme more Ai!
@kyleleblancvlogs3820
@kyleleblancvlogs3820 4 ай бұрын
Finally, when someone says "When did I say that?! Huh?" I can go: "In this video, on this date, at this timestamp. Checkmate."
@hairy7653
@hairy7653 3 ай бұрын
The YouTube option isn't showing up in my Chat with RTX
@omanimedia
@omanimedia Ай бұрын
Same issue bro, I've also been searching the whole internet but couldn't find even one useful tutorial 🥲 If you figure it out then please let me know.
@FarishKashefinejad
@FarishKashefinejad 4 ай бұрын
Theo Tech Tips
@d4rkg
@d4rkg 3 ай бұрын
I never expected to see Freddie Mercury talking about AI
@MultiMojo
@MultiMojo 4 ай бұрын
Keep in mind that Chat with RTX bundles a 7B parameter model, which will consume GPU memory during use. Inference is going to be painfully slow if you're running a weaker GPU, and responses from this model aren't going to be on par with GPT-4/Claude. If you're looking to chat with your own documents, paying for an OpenAI API key with a LangChain RAG implementation is the more efficient way to go.
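A minimal sketch of that hosted route using the OpenAI Python client directly (LangChain wraps essentially the same two calls); the model names and the placeholder context are assumptions:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
question = "What does the design doc say about caching?"

# embed the question; documents are embedded the same way at index time
q_vec = client.embeddings.create(model="text-embedding-3-small", input=question).data[0].embedding

context = "top chunks found with q_vec in your vector store"  # placeholder
answer = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"}],
)
print(answer.choices[0].message.content)
```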
@dubya85
@dubya85 3 ай бұрын
It will not work on anything less than 8 GB 30- or 40-series RTX cards. 12 GB minimum for the larger AI model
@minimal2224
@minimal2224 3 ай бұрын
Woah woah woah it’s only fast because of your laptop hardware lol M1 - M4 chip?
@anime.x_ror
@anime.x_ror 4 ай бұрын
I missed the Nvidia exe for the Nvidia Chat zip. Could someone share it with me? )
@__greg__
@__greg__ 4 ай бұрын
Nice
@TheD3adlysin
@TheD3adlysin 4 ай бұрын
This is a good video. Just wish you could run this on Linux....with an AMD card.....heh
@amodo80
@amodo80 3 ай бұрын
You can do RAG with Ollama when you run ollama-webui
@undefined6512
@undefined6512 3 ай бұрын
What's Ollama's last name?
@jacobgoldenart
@jacobgoldenart 4 ай бұрын
I'm confused here. There are a ton of vector databases that you can install and run on a Mac, no external GPU needed, like ChromaDB or Faiss. Then just use something like LlamaIndex or LangChain to chunk your documents and create embeddings using something like OpenAI's ada-002. Then insert them into Chroma and start doing RAG on your documents. You certainly don't need an Nvidia GPU.
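A minimal sketch of that Mac-friendly route with Chroma, relying on its built-in default embedding function so no GPU or external API is needed:

```python
import chromadb

client = chromadb.Client()                # in-memory instance
col = client.create_collection("my_docs")
col.add(
    documents=["RAG retrieves relevant chunks", "Theo reviewed Chat With RTX"],
    ids=["doc1", "doc2"],
)
print(col.query(query_texts=["what did Theo review?"], n_results=1)["documents"])
```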
@kennypitts4829
@kennypitts4829 Ай бұрын
Lawyer: AI, find precedent to get my client off the hook for drunk in public. AI: Beep, bop... Bort - Say it was diabetes related.
@jaylenjames364
@jaylenjames364 4 ай бұрын
I can get a little more specific on a topic with Chat with RTX compared to ChatGPT.
@Endelin
@Endelin 4 ай бұрын
PrivateGPT is doing something similar to this.
@bblatnick1
@bblatnick1 3 ай бұрын
Love the AI content.
@Hunger53
@Hunger53 4 ай бұрын
The main downside of this program is that it only parses one file at a time, even if you have multiple files with data. Kinda meh if you need to do comparisons or use one file as context to process the second.
@user-oo2wb8tf7i
@user-oo2wb8tf7i 4 ай бұрын
You need CrewAI
@vitorwindberg4212
@vitorwindberg4212 4 ай бұрын
I agree. It would be awesome if, for example, for the "What is Theo's favorite library?" question, the model could use the data from all the different videos at once and conclude it's React, instead of relying on the single video that it deemed the most relevant for that question.
@sauer.voussoir
@sauer.voussoir 4 ай бұрын
Nice of you to point it out; I didn't even realize it did it that way. Hopefully it gets updated along the way.
@FamilyManMoving
@FamilyManMoving 3 ай бұрын
This is a simple demonstration cobbled together from open source. It's not meant to be an actual system. What you want is already available; it just requires a little more work on your end, and models that can handle the actual data. Context length is a real issue with a lot of open source models. There is only so much RAG can do if a model limits context to 2048 tokens, for instance. I've had models start hallucinating when they get close to the limit. The good news is those hallucinations are so off the topic that it's obvious when they occur.
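One way to stay under a model's context limit before it starts hallucinating is to count tokens and trim the retrieved context to a budget. A sketch using tiktoken's GPT-style tokenizer purely for illustration (local models tokenize differently):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def fit_to_budget(context: str, budget: int = 2048) -> str:
    tokens = enc.encode(context)
    return enc.decode(tokens[:budget]) if len(tokens) > budget else context

long_context = "retrieved chunk text " * 2000
print(len(enc.encode(fit_to_budget(long_context))))  # <= 2048
```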
@dubya85
@dubya85 3 ай бұрын
It can look at everything in a folder at once
@Readraid_
@Readraid_ 4 ай бұрын
The 'Nvidia just dropped' Linus clip is CRAZY
@jazilzaim
@jazilzaim 4 ай бұрын
This is going to give another moat to Windows devices over Mac and Linux devices
@alireza-bonab
@alireza-bonab 4 ай бұрын
👏💚
@edgarasben
@edgarasben 4 ай бұрын
More AI content yes, pls:)
@hogwrangler3283
@hogwrangler3283 4 ай бұрын
Bruh, if you ever hear that they've ported it over to AMD graphics cards (7800 XT enjoyer here), then please make a video on it asap; it's very relevant for someone like me. It took like 2 months to transcribe my favourite game design videos on YouTube because I didn't find that kind of literature other than the really academic stuff, like about possible chess permutations or probabilities in poker.. It's over a thousand pages of raw .txt with no formatting and extra whitespace to help with readability (yet). It's just sitting there on the disk.. waiting for a moment like this to come by.
@MickenCZProfi
@MickenCZProfi 3 ай бұрын
I think very soon. I recently pushed AMD on their GitHub to support the 7800 XT with ROCm, and less than a week ago they satisfied my request and it is now supported. Should be able to run stuff on it a lot better now.
@jerry27syd
@jerry27syd 4 ай бұрын
Can now convert the mining farm to run on this
@scottiedoesno
@scottiedoesno 4 ай бұрын
MOAR AI. The solution to having a job with AI is knowing how to use it
@seanmartinflix
@seanmartinflix 2 ай бұрын
Okay, this is something I'm trying to learn. For the record, I'm not a programmer, just an enthusiast trying to learn stuff, kind of a dummy compared to you all probably. But what do you guys think of Pinocchio? I've gotten Llama to run through it, but it doesn't always run, and there aren't very many good tutorials on Pinocchio. I would love for it to be covered on this channel, just what you think of it, and other insights if possible. Anything I can really find on it is very basic and only gets you so far. Weird comment, I know. Like the channel, always some great insights.
@jenny-DD
@jenny-DD 3 ай бұрын
Good, now I can just get the point of YouTube videos by putting them into Nvidia's tool; it cuts out all the fluff
@nazarshvets7501
@nazarshvets7501 4 ай бұрын
WHERE ARE TIMESTAMPS????
@patricknelson
@patricknelson 3 ай бұрын
Maybe "trt-llm-rag-windows" implies there will be a Linux or macOS version someday. 🤔
@smallbluemachine
@smallbluemachine 3 ай бұрын
So, you're gonna need an Nvidia RTX-capable card (30xx is fine) and it'll need at least 7GB vram, so maybe your laptop might struggle. Really interesting app. Super simple to install and run. It's quite the killer app for a demo 😅 some start-ups and their investors somewhere are gonna be sweating bullets. 😂
@j0hannes5
@j0hannes5 4 ай бұрын
What if you've got an AMD card?
@YISP7
@YISP7 3 ай бұрын
Then you can't chat with RTX😅
@dubya85
@dubya85 3 ай бұрын
Ask AMD what they're going to do for you