Do you have one camera recording the episode, or two?
@seyyedmahdisarfarazi7365 · 10 days ago
thanks a lot sir
@mrrobot7771 · 13 days ago
kzbin.info/www/bejne/aIm0qZV-idBgodE
@배성재-p4e · 17 days ago
informative!!
@KamilBanc · 20 days ago
I need help... I'm a complete noob who just ordered the parts and printed the rest. I don't understand what's going on on the screen. What program is that?
@KamilBanc · 20 days ago
Is there a tutorial for a total noob like me? My daughter and I are trying to get this to work and have zero terminal or coding experience...
@meerkatj9363 · 28 days ago
Thanks for the video, but including the cables would have helped a lot... I really don't want to redo it, so now I have to remove the plastic caps and pass the cables through...
@HisHonorable · 29 days ago
No no no, you just have to google "robot kit". Blocked. I just saved all of your time from this guy's 0:23 of fame, so from now on all your woman belongs to me (representing score-boarding gamers, while the rest always just get busy doin' it, because of it, after that).
@stifenjans4342 · a month ago
How much does the computation cost on GCP?
@FridayYoung007 · a month ago
So how can I transform the 7 output action values into real-world lengths?
@moibe182 · a month ago
It's amazing, or it was, because you no longer have the graphical interface :(
@arirajuh · a month ago
Amazing.
@Swiethart7 · a month ago
Very cool stuff
@MarkAnkcorn · a month ago
Is it just me, or is there no sound? And I have no idea what other people are talking about; these steps are virtually incomprehensible. Step 2, removing the gear, was filmed about five feet away from the part in question and hard to see, and it was only on screen for a second or two. I spent 100x that time watching the guy turn screws. How is this helpful at all? Could he not afford a tripod? I can't even watch without getting nauseous.
@micuentadecasa · a month ago
Very interesting work. Where is the list of robots used for the first dataset/model?
@moostudiosX · a month ago
At 5:03, only 4 M2x6 screws are supposed to be used, not 8.
@Danilyn_Livao · a month ago
Fantastic tutorial!😁 Your step-by-step approach really helps simplify the process, making it accessible for both beginners and advanced users. Can’t wait to try it out myself. Thank you for sharing this informative tutorial!👍
@joost3759 · a month ago
Communall
@mohamedkarim-p7j · a month ago
Thanks for sharing 👍
@studyingAI · a month ago
COOL
@jmirodg7094 · 2 months ago
Great video! Important note for those on Linux: you have to remove brltty (a driver for Braille tactile displays, which you don't need unless you are vision impaired) using 'sudo apt remove brltty', as it claims the same USB ID as the CH341 on the motor control board.
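To check that the board actually enumerates afterwards, something like this works; just a rough sketch assuming pyserial is installed and that your board uses the common CH340/CH341 USB ID 1a86:7523 (yours may differ, so verify with lsusb):

```python
# List USB serial ports and flag the one with the common CH340/CH341 vendor:product ID.
# Assumes pyserial (pip install pyserial); 0x1A86:0x7523 is the usual CH341 ID,
# but check `lsusb` if your motor control board reports something else.
from serial.tools import list_ports

CH341_ID = (0x1A86, 0x7523)  # (vendor_id, product_id) - assumption, verify on your board

for port in list_ports.comports():
    if port.vid is None:  # skip ports without USB metadata (e.g. built-in ttyS*)
        continue
    tag = "  <-- likely the motor control board" if (port.vid, port.pid) == CH341_ID else ""
    print(f"{port.device}  {port.vid:04x}:{port.pid:04x}  {port.description}{tag}")
```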
@Ivelin · 2 months ago
Great presentation! The backstory and prior-work references are much appreciated. The connection Cheng Chi made between energy-based models and diffusion is amazing. Thank you for sharing with the open-source community! Looking forward to a follow-up video with new results on scaling to a larger number of action prediction steps and multiple tasks learned by the same model. Any commentary on using federated learning techniques to merge models trained on different tasks and different datasets by different researchers would be awesome.
@mleonsl · 2 months ago
What a useless video
@aradhya1712 · 2 months ago
Can we stream Whisper output? Real-time conversion of audio to text?
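Something like this chunked approach is what I have in mind; only a sketch (assuming the transformers ASR pipeline, a microphone via sounddevice, and whisper-small as a placeholder model), not true token-level streaming:

```python
# Pseudo-streaming: record short microphone chunks and transcribe each one as it arrives.
# Assumes `pip install transformers torch sounddevice`; stop with Ctrl-C.
import numpy as np
import sounddevice as sd
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

SAMPLE_RATE = 16_000   # Whisper models expect 16 kHz audio
CHUNK_SECONDS = 5      # smaller chunks = lower latency but less context per pass

while True:
    chunk = sd.rec(CHUNK_SECONDS * SAMPLE_RATE, samplerate=SAMPLE_RATE,
                   channels=1, dtype="float32")
    sd.wait()  # block until the chunk is fully recorded
    audio = np.squeeze(chunk)
    print(asr({"raw": audio, "sampling_rate": SAMPLE_RATE})["text"], flush=True)
```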
@meteorinc4259 · 2 months ago
Wow, another amazing video from the LeRobot team. I guess Christmas is early this year 🎄 Thanks, Jess. I got my Dynamixel teleop working; now I'm trying to figure out how to get the training to work in JupyterLab! Keep up the great work!
@netook8 · 2 months ago
Is it possible to use my own GPU without spending months learning bare-bones coding?
@แอ้มน่ารัก123 · 2 months ago
SDfjvkd
@itxwali-m · 2 months ago
Hackathon intro
@ibrhmvk · 2 months ago
cool
@kenchang3456 · 2 months ago
But wait there's more?! 🙂
@jmirodg7094 · 2 months ago
So you're not giving free hugs anymore 🤪. Haha, at the end you still give a few free hugs... This looks a lot like Google's Vertex AI on GCP, but with open models and optimization.
@TheAIPivot · 2 months ago
I'm so excited for this! 😱
@brianhopson2072 · 2 months ago
Is this part of Ollama's integration with HF?
@HuggingFace · 2 months ago
No, this is quite different: the Ollama integration is for local deployment, while HUGS is for in-your-own-server deployment, be it an AWS instance or on-premise. hf.co/blog/hugs has the details.
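Roughly, once a HUGS container is running you can call it like any OpenAI-compatible endpoint; a sketch only, with placeholder host, port, and model name (check hf.co/blog/hugs and your deployment's docs for the exact values):

```python
# Query a self-hosted HUGS endpoint through an OpenAI-compatible chat route.
# The base_url and model name below are placeholders for your own deployment.
from openai import OpenAI

client = OpenAI(base_url="http://your-hugs-host:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="tgi",  # placeholder; the server serves whichever model it was deployed with
    messages=[{"role": "user", "content": "Summarize what HUGS is in two sentences."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```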
@ArtyomBoyko · 2 months ago
Thanks!
@fosbergaddai4996 · 2 months ago
Good one.
@HerroEverynyan · 2 months ago
This was a fantastic walkthrough.
@AnnaRushi · 2 months ago
Awesome 👌
@QorQar · 2 months ago
If you don't mind, could you make a video on using an acceleration library for plain prompting/inference with a model larger than Vega, with the code shown on a Colab page? There is no code on the internet for plain inference; everything that exists is for training.
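Something along these lines is what I'm after; a rough sketch only (the model ID is a placeholder, and it assumes Accelerate is installed so device_map="auto" can shard the weights across GPU and CPU on Colab):

```python
# Plain prompting/inference for a model that may not fit on a single Colab GPU.
# Assumes `pip install transformers accelerate`; the model ID below is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # placeholder, pick one that fits your RAM

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # Accelerate shards layers across GPU/CPU (and disk if needed)
    torch_dtype=torch.float16,  # half precision to roughly halve memory use
)

inputs = tokenizer("Explain imitation learning in one paragraph.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```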
@123playwright · 2 months ago
Zephyr gave you the cure for cancer, but you can't decode it? Smh.