@steverobbins4872 • 10 months ago
When the robot hands the apple to him, it's obviously just preprogrammed to drop the apple at a fixed position in space. If the guy's hand wasn't there, the apple would've fallen on the table. For a real handoff, the robot would need to track the motion of the guy's hand and judge his intended movements.
@Kaktus6492_VR • 9 months ago
It uses a motion tracker to detect the position of the hand.
@bazookabullet101 • 2 months ago
@@Kaktus6492_VR It didn't in this video, though, otherwise it would have visibly followed, as steverobbins pointed out.
@DissolvingEmotionalReactions • 10 months ago
I loved the realism of the voice; the stutter, the "on its", the "sure thing" all add to the realism.
@revmsj • 10 months ago
I believe the ideal anthropomorphic robot's idle motion while you communicate with it should be to tilt its head side to side, as if it were an inquisitive dog…
@darylfoster7944 • 10 months ago
I think Data did that.
@herbsmanno1 • 10 months ago
Agreed, the way it's not tilted at all is cringe.
@davereid-daly2205 • 10 months ago
As a psycho-physical therapist and computer programmer, having worked with the founder of Neuro-Cognitive Organization (how the brain and nervous system develop and learn, plus remediation of damaged, delayed, or dysfunctional neurology), this looks staged to me. I would want to test the unit myself before I believed what is shown in the video is real. Being truly autonomous requires very specific neuro-cognitive trajectories which are very difficult to quantify from a video like this. If anything, this appears more like an automated sequence than an autonomous reaction.
@gtree7047 • 10 months ago
I asked Grok, and the answer was that the voice was based on Brett Adcock, the CEO of Figure. When I hear him talk in other videos, I can believe this.
@disco4535 • 10 months ago
Sounds almost nothing like him.
@Gnaritas42 • 10 months ago
The voice is Steve Jobs; the apple was a head nod toward him.
@cdyanand • 10 months ago
I think it's James Douma.
@Leshpngo • 10 months ago
I think it's Jackie Chan.
@Bigre2909 • 10 months ago
So? What's the point?
@Rolyataylor2 • 10 months ago
Two things:
- I want to see it put a screen protector on a phone, with no dust or bubbles.
- What happens if, while it's doing something, I put my hand in the way? Does it stop and start again?
@CeesWarmerdam • 10 months ago
I wonder what would happen if the cup were half filled with water. A human would think, "I can't do this; I have to finish my drink or pour it in the sink before placing the cup in the tray."
@dizzydazzel • 10 months ago
FANTASTIC!!! It's confirmed to my totally amateur satisfaction. This is a giant leap in technical world history, just like I said the first time I saw it.
@ACE11518 • 10 months ago
Would an LLM on a robot take more power, reducing the power it has to do useful work? I don't know if I would want a robot that talks back to me at my factory or my workplace 😅
@Dularr • 10 months ago
Probably two separate devices. The big question is whether the chat is just describing what the bot did.
@Disastorm • 10 months ago
The LLM is in the cloud; the robot is just connecting to it via Wi-Fi (or wire). You could put it in the robot if you wanted, but it would probably take a lot of power, as it would have to run some strong GPUs.
@noproofforjesus • 10 months ago
My OpenAI voice assistant does this all the time and sounds exactly the same.
@CarloHerrmann • 10 months ago
Did you notice how the robot pushed the tray forward when it finished … that little extra motion, like "Here, I'm finished."
@FrunkensteinVonZipperneck • 10 months ago
"I'm finished. Soon, all you (sub)humans will be 'finished'."
@Colynn9 • 10 months ago
Tele-robotics taught the neural networks; then the tasks were all performed completely by the neural networks. A human operator repeatedly picked up and deposited the trash many times, each time somewhat differently, to teach the neural network how it's done. The AI then inferred the generalized actions required to accomplish the task. Tipping the tray for the trash, for example, is very human (similar to the "um" in the speech). It's all similar to training a new employee, but an amazing accomplishment for a machine.
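The teleop-to-autonomy pipeline this comment describes is essentially behavior cloning: fit a policy to (observation, operator action) pairs logged from human demonstrations. A toy sketch of the idea, with a 1-D linear "policy" standing in for a neural network (everything here is illustrative; Figure has not published its training code):

```python
# Behavior cloning in miniature: learn action = f(observation) from
# teleoperated demonstrations that each differ slightly, as the comment notes.

def collect_demos(n=100):
    """Pretend teleop logs: the operator's action is roughly 2*x + 1."""
    demos = []
    for i in range(n):
        x = i / n                      # observation (e.g. object position)
        noise = 0.01 * ((i % 5) - 2)   # each demonstration varies a little
        demos.append((x, 2.0 * x + 1.0 + noise))
    return demos

def behavior_clone(demos, lr=0.5, epochs=500):
    """Minimize mean squared error between policy output and operator action."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        gw = gb = 0.0
        for x, a in demos:
            err = (w * x + b) - a
            gw += 2 * err * x / len(demos)
            gb += 2 * err / len(demos)
        w -= lr * gw
        b -= lr * gb
    return w, b

w, b = behavior_clone(collect_demos())
print(round(w, 1), round(b, 1))  # recovers roughly w ≈ 2.0, b ≈ 1.0
```

The noisy demonstrations average out, so the cloned policy generalizes past any single demo, which is the "inferred the generalized actions" step in the comment above.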
@loudwerk8611 • 10 months ago
I am interested in using AI to build a sequential animation channel. Are you planning to utilize AI in your work? I know there are plenty of tutorial videos already on YouTube, but I would like to see some animation educational content from you, perhaps a second channel? Thanks!
@revmsj • 10 months ago
Dude, it angles the basket to make it easier to drop the objects in! That's cool as shit! I think I'd do exactly the same thing…
@lym3204 • 10 months ago
I would have placed the dish and cup more carefully in the drying rack, though.
@herbsmanno1 • 10 months ago
I wonder why it didn't grip the apple perfectly; it moves a bit just as it is closing the grip.
@siri-v18.x-intelligence-beta • 8 months ago
12:05 You can make Figure's voice the voice in the ChatGPT app. I did that before I learned about Figure.
@restonthewind • 10 months ago
I didn't think it was fake, but I do think it's a canned demo and not indicative of Figure's ability to perform a variety of tasks with only voice commands, or to describe a variety of scenes, or to plan tasks in a variety of contexts. I don't know the breadth of Figure's abilities because the video didn't show me. If it has broader abilities, a video demonstrating this breadth is easy enough to construct. I'll believe it when I see it. Atlas performed more impressive tasks years ago anyway. It doesn't speak or respond to voice commands as far as I know, but Boston Dynamics could enhance Atlas this way easily enough. Speech recognition is not new. My four-year-old car's navigation system recognizes speech in a narrow context. It can't respond like ChatGPT, but navigation systems will soon enough; automated systems of all kinds will, not only humanoid robots. The video of Figure's bot loading a pod into a coffee maker began with a slide giving the training time for the task (ten hours). This video does not mention training time. Why do you think that is? You're leaping to all sorts of conclusions about how Figure programs (or "trains", if you prefer) this bot to do what we see in this video. You don't actually know. You only know that you expect humanoid bots to replace much human labor in the near future, so Tesla can make a mint, and you're interpreting everything you see in terms of this expectation.
@spyintheskyuk • 10 months ago
I tend to agree. Humans being humans, we tend to see this sort of thing and presume much more beyond the demonstration, projecting the immediate possibilities as far wider and more flexible than they actually are. That said, it's deeply impressive and will only get better, and that will happen all the quicker with all the competition. The other positive is that this process means we see things in the public domain a lot earlier than we otherwise would, as they all want to look like they are leading the curve in the minds of the public and potential users.
@shawncooper8131 • 10 months ago
I don't know... I wish this was live and they had a moving camera. But as for Boston, there is a big difference in the programming: "move your arm 3 inches and then down" vs. the bot with AI learning to just do a task. We will see in 2024-2025 who the fakers are.
@garyrooksby • 10 months ago
@@shawncooper8131 You've hit the nail on the head. Boston Dynamics have never said they have any AI. Their Atlas robot is 100% human programmed. Every. Single. Movement. Every. Time. Atlas also uses hydraulic actuators, which can't run for many minutes before recharging. It was never designed to work for hours; it's a research robot. Only BD's dog robot, Spot, is being sold, and that is also highly limited and sold to research teams to help with robot programming research.
@benmeehan1968 • 9 months ago
Now watch the original video again and compare the main cuts with the 'replays' at the end. Notice that the apple isn't the same (they likely dropped it at least once), and the movements when placing items in the basket aren't the same (watch the plate/cup dynamics). They shot this multiple times and composited it to create the intended result, so it's not the 'real time, one shot' they make it out to be. The robot's movements are so close in those comparisons that it seems highly likely they are sequenced, not dynamic (even if the object placements were very close, there would be some natural variance). I don't think you actually spent much effort on objective analysis other than to contradict the naysayers.
@manmademoonuk2538 • 10 months ago
Have you chaps played around with ElevenLabs' voice tools? Quite astonishing.
@universeisundernoobligatio3283 • 10 months ago
Question: did it identify the red object as an apple, then look up whether it's edible by humans?
@JohnJay-yd9hr • 10 months ago
Just got 12.3 today. Buford, GA.
@JMeyer-qj1pv • 10 months ago
One thing I found interesting was the way the guy giving the instructions was standing very still with his hand motionless on the table. I suspect the current state of the AI video recognition is that it gets confused by movement, so he had to stay almost motionless and then had to rush to adjust his palm position to catch the apple. That suggests the robot training is still somewhat rigid and it isn't able to put the apple where his hand is; it always drops it in a certain place. The video recognition also made some mistakes, like when the bot says "cups and a plate" in the dish rack instead of "plates and a cup".
@4l3dx • 10 months ago
The networks take in onboard images at 10 Hz and generate 24-DOF actions (wrist poses and finger joint angles) at 200 Hz.
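That two-rate design (slow vision, fast control) is a standard robotics pattern: the policy produces a new target each image frame, and a faster inner loop fills in the setpoints between frames. A hypothetical sketch of the loop structure (the names, the stand-in policy, and the interpolation rule are all made up, not Figure's code):

```python
# Two-rate control sketch: vision updates the target at 10 Hz, while the
# control loop emits interpolated 24-DOF setpoints at 200 Hz.

VISION_HZ, CONTROL_HZ = 10, 200
STEPS_PER_FRAME = CONTROL_HZ // VISION_HZ  # 20 control ticks per image frame

def policy(frame_index):
    """Stand-in for the neural policy: maps an 'image' to a 24-DOF target."""
    return [0.01 * frame_index] * 24  # wrist poses + finger joint angles

def run(seconds=1):
    commands = []
    current = [0.0] * 24
    for frame in range(VISION_HZ * seconds):      # slow loop: one image frame
        target = policy(frame)                    # new target per image
        for tick in range(STEPS_PER_FRAME):       # fast loop: 200 Hz ticks
            alpha = (tick + 1) / STEPS_PER_FRAME  # ease toward newest target
            current = [c + alpha * (t - c) for c, t in zip(current, target)]
            commands.append(current)
    return commands

cmds = run()
print(len(cmds))  # 200 setpoints for one simulated second at 200 Hz
```

By the last tick of each frame alpha reaches 1.0, so the arm lands exactly on the latest policy output before the next image arrives.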
@MyPapagio • 10 months ago
26:00 The reason Tesla hasn't released a video of Optimus being this far advanced is because THEY DON'T NEED TO! Tesla doesn't need to raise capital like Figure does. Instead of spending time rehearsing the bot for a video release, they are busy surpassing all of these capabilities.
@keaujibaha2594 • 10 months ago
Hopefully they can fix the many quality control issues with their Cybertruck. I've seen cars built in Mexico with better quality control and finish.
@sschueller • 9 months ago
The reason I have not posted any videos of my robot is the same...
@GoroDan • 10 months ago
Is the LLM programming the movement?
@kxqe • 9 months ago
It said "film speed 1.0X". Why are they still using film?
@jhunt5578 • 10 months ago
How long can we expect it to take for Grok to be a decent LLM? It's the current bottleneck for Optimus.
@timower5850 • 10 months ago
Why does the command "while you pick up this trash" result in all the arguably unrelated actions: retrieving the bin from the corner of the table and putting the trash in that bin? That is, "pick up the trash" seems incomplete, information-wise.
@Martinit0 • 10 months ago
That's the impressive thing about it: it's able to work with incomplete information. That was of course quite deliberate, to demonstrate that ability.
@timower5850 • 10 months ago
@@Martinit0 Not here. It learns by mimicking humans. To translate the very limited command "pick up the trash" into what it did (picking up what it assumes to be trash and putting it into a receptacle it assumes is for the trash) implies repeated identical video input. That is, over and over again it must have watched and listened to highly similar videos of humans doing and saying the exact same thing.
@4l3dx • 10 months ago
@@timower5850 The model processes the entire history of the conversation, including past images, to come up with language responses, which are spoken back to the human via text-to-speech. The same model is responsible for deciding which learned, closed-loop behavior to run on the robot to fulfill a given command, loading particular neural network weights onto the GPU and executing a policy.
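The architecture this reply describes, a high-level model choosing which pretrained closed-loop behavior to run, can be sketched as a dispatcher over a library of skills. Everything below (skill names, the crude keyword matcher standing in for the vision-language model) is illustrative, not Figure's implementation:

```python
# The high-level model picks WHICH learned behavior to execute; each behavior
# is its own closed-loop policy (in the real system, a set of network weights
# loaded onto the GPU). Names and matching logic are purely illustrative.

class Policy:
    def __init__(self, name):
        self.name = name
    def act(self, observation):
        return f"{self.name}: acting on {observation}"

# Library of pretrained skills, keyed by a rough description.
SKILLS = {
    "hand over food": Policy("handover_apple"),
    "pick up trash": Policy("trash_to_bin"),
    "put away dishes": Policy("dishes_to_rack"),
}

def select_skill(command):
    """Toy stand-in for the VLM: map a spoken command to a skill key."""
    for key, policy in SKILLS.items():
        if any(word in command.lower() for word in key.split()):
            return policy
    raise ValueError(f"no learned behavior for: {command}")

skill = select_skill("Can you pick up this trash?")
print(skill.name)  # trash_to_bin
```

This also shows why "pick up the trash" can seem under-specified yet still work: the knowledge of where the bin is and that trash goes in it lives inside the selected policy, not in the command.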
@nzer19 • 10 months ago
Weird how even tech people aren't aware that filler words, stutters, and even breathing sounds are part of modern TTS. ChatGPT voice does all these things.
@Gnaritas42 • 10 months ago
Correct; they obviously don't talk to GPT much on their phone. Nothing special about the voice here.
@corwinzelazney5312 • 10 months ago
Yep, it's identical to what I use, and yes, it does exactly that.
@Retired.at.40.Bored.at.50 • 10 months ago
People know it is possible. It is highly suspicious to add it at this stage when regular ChatGPT is text.
@Gnaritas42 • 10 months ago
@@Retired.at.40.Bored.at.50 Regular ChatGPT isn't just text; it has voice, and has had voice for quite a while.
@LowkeyXxx • 6 months ago
My GPT doesn't sound this dumb; perhaps it's an older version.
@eugeniustheodidactus8890 • 10 months ago
Boston Dynamics' most recent video did use CGI for several seconds, but I don't see where anyone else noticed this. (The item that the robot was lifting changed color from black to white while being lifted.)
@rwhirsch • 10 months ago
So this was for Warren... he's the only person I've seen accusing it of being fake... is that correct?
@skinnymoonbob • 10 months ago
Sounds downhill familiar?
@northviking76 • 10 months ago
A bit weird that the robot at one point said: "I gave you the apple because it's the only, ehhh, edible item I could provide you with..." Would it use "Ehhh..."? Isn't that very human? Was it a recording?
@braveintofuture • 10 months ago
Ragdoll animation with motion blending has been around for decades. I hope they make use of that animator knowledge.
@meltassin5326 • 10 months ago
Can you give me your take on why the person in the scene is very stiff and not moving very much?
@tomturnbull3723 • 10 months ago
I find it odd that the head seems fixed, not involved in any of the tasks. If the training was done using videos of humans, I would expect the head to move a bit, looking down at the apple before picking it up, same for the dishes, etc.
@Travis-w5c • 10 months ago
While the motion of the bot is impressive, this presentation isn't, like others, conclusive, and is open to question. Until a presentation is done live with random, non-company participants and randomly chosen activities in a given environment (it would have to be an environment that includes objects and potential tasks the bot has learned), no definitive conclusion can be drawn. Everything shown here could have been a preprogrammed sequence. A live presentation, with random participants and independent observers, is the only forum that can provide anything conclusive. Once that is done, all our minds will be blown.
@Retired.at.40.Bored.at.50 • 10 months ago
Give me a reason why Figure knew the paper was trash, that the basket was for trash, why it needed to angle the basket, and where the command to toss it was. You don't see the questions because you're not looking. Nikola was a company-wide scam. It's always possible, considering the money.
@JoelSapp • 10 months ago
I'd bet they are using Whisper and ElevenLabs and prompting the model to sound like a human, speaking with "um"s and repeated words. You can provide a 20-second sample and it'll sound quite real. Also, MIT had a listening bot that would nod and gesture at really appropriate points when spoken to. This was pre-LLMs. Bots could use this for sure, even if converted to an ML model.
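The "prompt it to sound human" idea boils down to adding disfluencies to the text before it reaches the TTS engine. A toy sketch of that pre-processing step; the insertion rule, rates, and filler list are invented for illustration, not anything Figure or OpenAI has published:

```python
import random

# Insert filler words ("um", "uh") and occasional word repetitions into text
# before handing it to a TTS engine, so the synthesized speech sounds less
# scripted. Purely illustrative pre-processing, not a real API.

FILLERS = ["um,", "uh,"]

def humanize(text, seed=0, filler_rate=0.2, repeat_rate=0.1):
    rng = random.Random(seed)  # fixed seed -> reproducible output
    out = []
    for word in text.split():
        if rng.random() < filler_rate:
            out.append(rng.choice(FILLERS))
        out.append(word)
        if rng.random() < repeat_rate:
            out.append(word)  # stutter: repeat the word once
    return " ".join(out)

print(humanize("I gave you the apple because it is the only edible item"))
```

Modern TTS voices also add these disfluencies on their own, as other commenters note, so a step like this may not even be necessary.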
@SimplyElectronicsOfficial • 10 months ago
I genuinely thought Optimus would be at this level by now.
@Martin-se3ij • 10 months ago
It would be fun to see a bot that giggles like John.
@garyswift9347 • 10 months ago
Is the voice modeled after Bill Gates? Microsoft is definitely involved in this, so that would make sense, especially considering the history between Gates and Apple. I wouldn't be surprised to see one of these companies do a demo of improvised juggling.
@DouglasEastman • 10 months ago
I'd like to hear your review of the Apptronik video.
@rickbullotta2036 • 10 months ago
Did you actually research Archer Aviation? You'd be more skeptical if you did.
@appl314 • 10 months ago
Powered by non-profit OpenAI?
@ZupE891 • 10 months ago
There is a reason they made cuts. It probably doesn't do all of this right in a row. But still impressive.
@dhouggy • 10 months ago
@DrKnowitallKnows: Since these robots learn by watching video of humans, will they tend to be right-handed?
@BrianBellia • 10 months ago
I think Brett is to blame for the scepticism out there. I don't think it was a wise move for him to use his own voice in the demo. Even though I'm usually the highly sceptical type, I was blown away by what I saw; however, the use of Brett's voice struck an odd note for me. I'm shocked that Brett has denied that it's his voice. I've listened to hours of Brett speaking; it's definitely *his* voice, speech mannerisms and all. I expect a clarification of Brett's statement somewhere down the track.
@victorragusila7519 • 10 months ago
It is not Brett.
@BrianBellia • 10 months ago
@@victorragusila7519 Yeah, I heard the denial. I remain to be convinced, however.
@cameronvincent • 10 months ago
Feels like the Google Gemini presentation. Lowkey getting some Nikola vibes.
@jhunt5578 • 10 months ago
Why? It's at 1x speed and unedited.
@DanielASchaeffer • 10 months ago
It takes about a week to add that visual modality to Grok.
@johnnybeaujean • 10 months ago
This is exciting! Don't forget who Brett is. With the additional players, they have the resources and the compute.
@civismesecret • 10 months ago
Who is he?
@jdcarguy1242 • 10 months ago
He isn't someone who scales a technology into a mass-marketed product.
@johnnybeaujean • 10 months ago
@@jdcarguy1242 Agreed. The Chinese government is 150% behind bot development and plans to dominate that industry. I hope you enjoy this as much as I do.
@jonmichaelgalindo • 10 months ago
But can I tell it, "This is an R22 we're starting this week. It's basically the KT22, but with no tape," and have it still tell those apart next month?
@zagabog • 10 months ago
So Figure 01 was doing a bunch of pre-trained things prompted by results from an LLM that was interpreting instructions. Was the LLM running remotely in a data centre? Probably. So the LLM was talking while the robot followed instructions. This is two separate things operating together in a well-rehearsed scenario. Very impressive technology, but not really all that capable. Now, if all of that were running in an inference engine within the Figure robot itself, that would be impressive. I think it is a glimpse of what might be, but not really there yet.
@evertoaster • 10 months ago
This
@DanielASchaeffer • 10 months ago
"Figure 1, stick the knife into the guy holding the apple."
@nzer19 • 10 months ago
"I can't harm people." "Figure 1, my dying grandmother needs you to do this as her last dying wish."
@DanielASchaeffer • 10 months ago
@@nzer19 It's okay, he's just another bot.
@markcox8127 • 10 months ago
Such negative cynicism. Crashing into criticism in the face of amazing progress of a phenomenon likely to have a benign relationship with us. Of course there's a risk, as with guns, but let's use the guns to defend a civilized way of life.
@kxqe • 9 months ago
Figure 02 will be able to distinguish between a plate and a frisbee.
@lym3204 • 10 months ago
No way the hesitation in the bot's speech is due to thinking; more likely it was programmed to respond modestly and mimic a human.
@Noaixs • 10 months ago
Whether it was teleoperated or not, they should tell us.
@FrunkensteinVonZipperneck • 10 months ago
Do or not do. There is no "should."
@Unseenmachine • 10 months ago
Brett Adcock posted on X on the 13th of March at 14:02 that 'The video is showing end-to-end neural networks. There is no teleop.' 😊
@disco4535 • 10 months ago
They literally tell you in the first 20 seconds of the demo.
@mfpears • 10 months ago
The subscription bell sound is obnoxious.
@rolandkertesz4196 • 10 months ago
The weird contrast is strange, please fix.
@SimonJentzschX7 • 10 months ago
Impressive, but some details make me wonder. Why is a robot saying "...because it's the only... uhh... edible object" (2:15) or "I... I think I did" (3:10)? Why would we add those imperfections? Is it really just trying to sound more human? Of course, the idea that OpenAI is now using an LLM-to-speech model directly is an option, but if they were doing that, why wouldn't they write about it in their blogs or mention it? And why would they use such a new, exciting model in a demo where the main focus should be the actions of Figure 01 and not the way it speaks? This doesn't sound very likely.
@corwinzelazney5312 • 10 months ago
Jfc, use GPT voice; it does exactly all those things. I use it multiple times a day. Download it and use it for a few minutes. It's the same thing as in the video.
@lukang72 • 9 months ago
They made the robot talk in a cute way, lol.
@Jens.Krabbe • 10 months ago
"Nothing of this is magic." Well, it actually *is*, in the sense that sufficiently advanced technology seems like magic to mere mortals without an understanding of LLMs and AI.
@GG-si7fw • 10 months ago
I'm the one that called out the stuttering, but I don't know squat about LLMs. I find it odd that Figure 01 stuttered, as I don't think an AI would do that. I'm just applying healthy skepticism, as I think there will be a Trevor Milton in the AI explosion who scams investors out of their money.
@GG-si7fw • 10 months ago
I'm not calling out Figure in general as a Nikola, but is the stuttering a glitch?
@davab • 10 months ago
It would make sense to have a silent pause, but why say "uhhh" in the middle, right?
@4l3dx • 10 months ago
Sometimes the text-to-speech will tend to make those sounds, and also, for example, shout or whisper.
@4l3dx • 10 months ago
Additionally, you can instruct GPT to use informal language with contractions/pauses.
@GG-si7fw • 10 months ago
Thanks for the explanation about the informal part; I thought that would cause too much lag and use too much energy, requiring shorter run times between charges.
@DW_Kiwi • 10 months ago
So why did they do this demonstration??
@19valleydan • 10 months ago
I think there's still a ways to go on this.
@JohnBrown-pw3bz • 10 months ago
Regarding Optimus: perhaps Musk and his team are reluctant to demonstrate all of this because it did not take long for the Chinese to make an exact copy of their original. What will be exciting is when they finish the factory that makes the next model and release a video of 100 robots assembling a car.
@jacobuserasmus • 10 months ago
This is really impressive. Here is my question: is this real, or has it been specifically engineered to look real? (I'm not saying it is not AI-driven, just that the variables may have been minimized and training might have been specific to the demo.) In my experience with other projects, like self-driving vehicles, electric vehicles, manufacturing, and SpaceX, it seems like everyone is ahead of Elon's companies until they aren't. Then you realize they were never ahead. It will be interesting to see if the robot companies are actually ahead or not.
@MichaelDeeringMHC • 10 months ago
Tesla should have two robots talking to each other and doing stuff.
@congorecluse8111 • 10 months ago
That robot voice reminds me of the John Kramer character in the Saw movies.
@larsnystrom6698 • 10 months ago
If they say "our robot can do this" and they have an entire datacenter behind it, I feel somewhat cheated!
@honkytonk4465 • 10 months ago
?
@DanielASchaeffer • 10 months ago
Everyone says Boston Dynamics doesn't do this without having any idea of what's going on at Boston Dynamics. I don't get it.
@PeterTerren • 10 months ago
Yellow John and red Scott. Call the color continuity coordinator.
@johnsadler6534 • 10 months ago
Gold star question!!! Why have we not seen Tesla do this???
@TheSelf918 • 10 months ago
Because Elon Musk is a man-baby and said he won't focus on AI unless he has 25% of the company's stock..... he has 20%.
@darylfoster7944 • 10 months ago
@@TheSelf918 You're deranged.
@justinjja2 • 10 months ago
The bot's voice is based on Brett Adcock, Figure's CEO.
@philipduttonlescorlett • 10 months ago
The voice is definitely Sam Altman.
@zagabog • 10 months ago
Is the voice HAL? Can we get it to say "I can't do that, Dave"? Come to think of it, didn't HAL start to stutter when his chips were being pulled out?
@johnfitzpatrick8310 • 10 months ago
Big fan of Figure, John and Scott! The robot sounds like Jobs to me. A dead guy's voice makes sense. Alexander Scourby would be great for the bot. However, this sort of contrived demo is misleading. I wonder what would happen if they simply rotated the drying rack 180 degrees. BTW, being the son and grandson of alcoholics, I'm uncomfortable with John's ZBiotics promotion. In fact, "hangover" symptoms tend to discourage abuse, and as such, assuming ZBiotics has the advertised prophylactic effect, it could possibly result in increased alcoholism. If you have data suggesting otherwise, that would be welcome.
@DanielASchaeffer • 10 months ago
Why do you think ChatGPT is not running on the bot's onboard inference computer?
@williamwoo866 • 10 months ago
Can't wait for Optimus's next video. It's like a tit-for-tat improvement. This race to the top will help accelerate the bots to become so much more advanced, and that is the goal.
@MiaSoreryOF • 10 months ago
The biggest part of an actual bot as a business is the manufacturing side. Tesla is the king of manufacturing; if you can't do it at scale at profitability, then it's meaningless. No company will do better than Tesla. They've already proved that with vehicles: Volkswagen is over 100 years old, yet Tesla makes a car three times faster than they do and actually makes a profit on EVs, while Volkswagen loses money on every EV sold.
@jhunt5578 • 10 months ago
Figure 01 says, "I see a drying rack with cups and a plate," which isn't true. There are multiple plates and, if we're being specific to the drying rack, only one cup. Strange mistake from the LLM.
@PrinceCyborg • 10 months ago
GPT-4 vision isn't perfect.
@LowkeyXxx • 6 months ago
@@PrinceCyborg That's not what these two guys here are saying. They are saying the bot is made to make mistakes like humans do, to be more human-like.
@revmsj • 10 months ago
Yeah, it would be absolutely impossible to map out all the heuristics, speech, etc., and program all of that in a mere 13 days, even with a huge team of programmers. It's clear that machine learning had to be at play here…
@mfpears • 10 months ago
29:20 It saw the plates bouncing around and wanted to shift the container to make them settle.
@RussInGA • 10 months ago
The difference between demos like this and what the Tesla Bot is doing is presentation polish, which isn't actually impressive but amazes average people. The whole talking-to-it part, like y'all said, is easy with tools we can all access. Now, the connection between that and the bot doing real things (assuming it is not remote controlled) is impressive. But it's still just a step forward. Some people are going to think the bots are real entities; that happens this decade.
@tomturnbull3723 • 10 months ago
Tesla hasn't demonstrated their advanced Optimus capabilities, including integration with Grok, because they don't need to impress potential investors the way Figure and others in the space do. When Tesla feels the need to set the record straight that they are the industry leader, then they'll show their cards.
@jimcallahan448 • 10 months ago
I agree that it is possible that it is as it appears, but it is cutting-edge and very well staged. By staging, I mean switching from blocks to everyday objects such as dishes and a drying rack. The response time from OpenAI may have been juiced up (a special priority account, magic token, or high-powered VM). There was attention to small details, like tilting the trash container, and an upbeat but laid-back, Californian server-speak. Not everyday speech, but the way a server scooping an ice cream cone might speak. It's like the difference between early MS Windows and Mac OS: Mac OS reflected Steve Jobs' obsession with small user-facing details, such as nice fonts and smooth screen scrolling.
@EvEvangelist • 10 months ago
I don't expect Elon to respond to a world of demo wars. He will wait, wait, wait until (probably) he can show Optimus on the production line doing something really useful without rehearsal, something that can be commercially valued. I don't fully agree with Warren, but there is something off; I just can't articulate it effectively. I hope I am wrong: competition is very, very valuable.
@patricktremblay2268 • 10 months ago
Sounds like Bob Odenkirk to me.
@Ian.Does.Fitness • 10 months ago
Lots of people don't like seeing it and don't want to believe it's real. Anything they don't really understand is fake to them. 😂
@benroberts8363 • 10 months ago
Hahaha 😂
@LowkeyXxx • 6 months ago
Actually no, I believed it at first.
@yamahaeleven • 10 months ago
Getting a major Anki Cozmo vibe.
@brucekari3489 • 10 months ago
It's Jensen Huang's voice.
@claypirkey6600 • 10 months ago
The processing CPU power is not onboard the robot. The battery power is not onboard. Optimus is self-contained. Much more difficult.
@benroberts8363 • 10 months ago
Figure 1 > Tesla Bot
@ethanjohnson6852 • 10 months ago
I'm pretty sure it's based off of Rob Lowe's voice.
@newworld6474 • 10 months ago
So it is closer... Rev. 13: "...here to watch him. And he ordered the people of the world to make a great statue of the first Creature, who was fatally wounded and then came back to life. 15 He was permitted to give breath to this statue and even make it speak! Then the statue ordered that anyone refusing to worship it must die! 16 He required everyone, great and small, rich and poor, slave and free, to be tattooed with a certain mark on the right hand or on the forehead. 17 And no one could get a job or even buy in any store without the permit of that mark, which was either the name of the Creature or the code number of his name."
@benroberts8363 • 10 months ago
That code is a Tesla code.
@newworld6474 • 10 months ago
@@benroberts8363 Maybe Amazon? 😎
@DissolvingEmotionalReactions • 10 months ago
It totally sounds like Brett's voice.
@akespt • 10 months ago
I thought that it sounded like Scott Walters...
@FloydCotton-hx4jh • 10 months ago
The voice sounds a lot like Sam Altman to me.
@MarkXHolland • 10 months ago
Feels sketchy to me. Like it's been made to raise more funds.
@benroberts8363 • 10 months ago
Figure 1 > Tesla Bot
@ChiSpire • 10 months ago
Reminds me of when SpaceX first landed rockets and people were saying it was fake 😂
@johnvu7151 • 10 months ago
I watched this and felt it was fake. I'm so glad someone agreed with me.
@justlisten82 • 10 months ago
The robot should be named "Stick"; we'd then be looking at Stick Figure 1 😊