Clips from Andrej Karpathy's talk at ICML (June 2019). Outline:
0:00 - Sensors
0:29 - Single-task learning challenges
4:35 - Multi-task neural network architecture
11:50 - Loss function considerations
14:34 - Training dynamics
18:25 - Team workflow
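The "Multi-task neural network architecture" and "Loss function considerations" segments center on combining many per-task losses into a single training objective. A minimal plain-Python sketch of that idea, assuming hand-tuned static weights (the task names and numbers are illustrative, not from the talk):

```python
# Combining per-task losses into one scalar objective for a multi-task
# network. Task names and weight values are illustrative assumptions.

def combine_losses(task_losses, task_weights):
    """Weighted sum of per-task losses; both dicts must share the same keys."""
    assert task_losses.keys() == task_weights.keys()
    return sum(task_weights[t] * task_losses[t] for t in task_losses)

losses = {"lane_lines": 0.8, "traffic_signs": 1.2, "moving_objects": 0.5}
weights = {"lane_lines": 1.0, "traffic_signs": 0.5, "moving_objects": 2.0}
total = combine_losses(losses, weights)  # 0.8 + 0.6 + 1.0 = 2.4
```

As the talk discusses, choosing these weights by hand for dozens of tasks is exactly what makes multi-task training tricky.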
@robosergTV (4 years ago)
is this the full version?
@ghostlv4030 (3 years ago)
What a presentation! Andrej Karpathy's talks are always full of insights while being enjoyable.
@safekidda46 (4 years ago)
I get the feeling Andrej is giving academia a kick up the arse to start working on some of these problems.
@HeavyDist (4 years ago)
Quoting Elon Musk about Andrej: "You wanted neural net cars? This is THE guy." We are so fortunate to have him working on this problem.
@hchattaway (3 years ago)
I enjoy listening to insanely smart people... I don't mind that he talks fast; it helps keep me focused.
@johnlucich5026 (9 months ago)
ANDREJ, KEEP UP ALL YOUR GOOD WORK WHEREVER YOU GO
@akarmdit2267 (4 years ago)
Indeed, humanity at its finest. Kudos!
@bobi461993 (4 years ago)
I was actually in the audience. 😄
@MasterofPlay7 (4 years ago)
He explains nothing but the challenges; trade secrets, I guess...
@pks. (4 years ago)
I remember the explanation with the different images of cars in cs231n (but that was cats). This guy is THE best at explaining computer vision.
@RoryFrenn (3 years ago)
Excellent presentation, it gave me good intuition on MTL
@FrenchingAround (4 years ago)
Have they tried using a depth map and putting a priority value on tasks and sub-tasks according to the distance of the task relative to the car? If a road sign is far away, it would get fewer resources allocated than the task used to detect a car cutting in.
@liuculiu8366 (4 years ago)
I guess the distance to the road sign can only be obtained if the road sign detection network is working properly, which requires an importance weight in advance.
@FrenchingAround (4 years ago)
@@liuculiu8366 If the road sign is far away, the weight will be low (thanks to the depth map). You don't need to know about the road sign until the car needs to act upon it, just like with your own brain: you're not super focused on a stop sign 500 ft ahead. You could have a small resource allocation to detect that there is "a sign" (not knowing what it is yet) far away, and put it on hold until you actually have to act upon it. I guess the question is how resource-intensive the depth map calculation is.
@liuculiu8366 (4 years ago)
@@FrenchingAround As far as I know, Tesla cars have no laser radars, so the depth map can only be obtained by stereo vision (assumed to be not very accurate). I am not very familiar with 3D techniques, but maybe some algorithm is also required to obtain the position of road signs from a not-very-accurate depth map?
@QuintinMassey (2 years ago)
So, all you really get is depth, and being able to tell it is a sign from depth alone might require some intermediate image processing. You might not need a detector per se, but you still have to recognize it is a sign in the scenario you posed. Same goes for the car.
@FrenchingAround (2 years ago)
@@QuintinMassey indeed
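A tiny sketch of the idea discussed in this thread: scale a detection task's priority by a depth estimate, so a sign 500 ft away gets fewer resources than a car cutting in close by. The falloff function and the 50 m "acting distance" are assumptions for illustration, not anything Tesla has described:

```python
# Down-weight far-away detections using an estimated distance.
# act_distance_m and the 1/distance falloff are illustrative assumptions.

def task_priority(distance_m, act_distance_m=50.0, min_priority=0.1):
    """Full priority inside the acting distance, decaying with range beyond it."""
    if distance_m <= act_distance_m:
        return 1.0
    return max(min_priority, act_distance_m / distance_m)

task_priority(20.0)    # car cutting in nearby -> 1.0
task_priority(152.4)   # sign ~500 ft away -> ~0.33
```

As the replies point out, the catch is that the distance itself has to come from somewhere (a depth network), so the scheduling problem just moves one level down.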
@volotat (4 years ago)
May we expect more lectures on this channel? I've been missing them a lot!
@lexfridman (4 years ago)
Full lectures will be on the main channel: kzbin.info This is a channel for shorter clips: kzbin.info Please subscribe to both if you're interested in both clips and longer lectures.
@jijie133 (a year ago)
Great video!
@vinson2233 (3 years ago)
This is a really good video. I wonder how they solved the team workflow problems in the end.
@HappyLeoul (4 years ago)
“Cars upside down” lol
@pw7225 (4 years ago)
Allocating Karpathity is tricky.
@alankirkham5598 (3 years ago)
Andrej, I have two questions: what contrast ratios are the current cameras and software capable of, and is the neural network utilizing photogrammetry or just photo data sets?
@kalebakeitshokile1366 (3 years ago)
Y’all have to stop uploading this guy at 1.5x
@jonclement (a year ago)
Didn't he recently say that going this multi-network route was a mistake, and that having one extra-large neural network was the way to go?
@JD-kf2ki (3 years ago)
I guess they just added the flipped car recently; otherwise...
@adsk2050 (2 years ago)
How is model versioning done, exactly? Seems pretty complicated. Like, are the weights updated and stored in a git repository, or is there some other witchcraft involved? How can models be non-reproducible? If you are storing weights in git, then you can always go back and get those weights, right?
@Splish_Splash (a year ago)
Weights aren't the problem, since you can actually change the structure of the model (a simple example: delete or add some layers, or change the loss function, activation function, or data sampling), but I am also curious how they can't reproduce some of the results if they are using git.
@dailygrowth7967 (4 months ago)
I think he is referring to the process of fine-tuning fine-tuned models for multiple iterations. After a while, this becomes non-reproducible.
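One common way to handle the versioning question raised in this thread is to keep large weight files outside git and commit only a fingerprint: hash the architecture config together with the weights, so any checkpoint can be identified and compared later. A hedged sketch (the config fields are made up, not Tesla's):

```python
# Fingerprint a (config, weights) pair instead of committing raw weight
# blobs to git. Config fields below are illustrative assumptions.
import hashlib
import json

def model_fingerprint(config, weights_bytes):
    """Short, stable id for an (architecture config, weights) pair."""
    h = hashlib.sha256()
    h.update(json.dumps(config, sort_keys=True).encode())  # canonical config
    h.update(weights_bytes)                                # raw weight bytes
    return h.hexdigest()[:12]

cfg = {"backbone": "resnet50", "heads": ["signs", "lanes"], "lr": 1e-3}
fp = model_fingerprint(cfg, b"\x00\x01\x02")
```

Even with perfect weight tracking, non-determinism in GPU kernels, data sampling, and chained fine-tuning can still make *retraining* non-reproducible, which may be part of what the talk alludes to.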
@prabdeepsingh7721 (4 years ago)
Hey, it's badmephisto!
@m_sedziwoj (4 years ago)
In the first minute, I got this feeling: maybe we need a different approach? I think people make the assumption that the road is a constant width, and if we don't see road, then something must be there. As a driver, many times I see something and don't know what it is, but most of the time that's not so important; what matters is whether I should drive over it or around it.
@nishatmahmud2991 (2 years ago)
Please help me: which is best, TensorFlow or PyTorch?
@Splish_Splash (a year ago)
torch
@Neonb88 (a year ago)
PyTorch for research and small networks, TensorFlow for distributed training.
@SolidSnake013Duds (4 years ago)
Hmm, I see some of the same material and images from the February presentation.
@gkeers (4 years ago)
Ultimately, instead of using intuition/judgement to decide things like the hierarchy of the tasks, they should be optimised via some ML process too. More like the way a brain must do it.
@debayandas1128 (4 years ago)
That is not a good problem statement. That is the issue: everyone wants to do what you stated, but defining the statement is difficult.
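One concrete attempt at what the parent comment suggests is to learn the task weights rather than hand-tune them, e.g. the homoscedastic-uncertainty weighting of Kendall et al. (2018): each task t gets a learnable log-variance s_t, and the combined loss is sum_t exp(-s_t)*L_t + s_t. A plain-Python illustration of just the formula (no autograd; values illustrative):

```python
# Uncertainty-based task weighting (Kendall et al., 2018), formula only.
# In a real trainer the log-variances would be learnable parameters.
import math

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combined loss: exp(-s) * L + s, summed over tasks."""
    return sum(math.exp(-s) * L + s for L, s in zip(task_losses, log_vars))

# With all log-variances at 0, this reduces to a plain sum of the losses.
uncertainty_weighted_loss([1.0, 2.0], [0.0, 0.0])  # -> 3.0
```

The `+ s` term penalizes the network for "solving" a task by just inflating its uncertainty, which is what makes the weighting learnable at all.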
@markmd9 (4 years ago)
Forget about identifying all the different types of objects on the road; focus on identifying the drivable road surface and treating everything else as obstacles.
@piyushpatel9830 (3 years ago)
On the surface, what you say seems to make sense. But in order to coexist with human drivers and not be so slow that it clogs traffic in realistic conditions, the driving needs to be what the industry calls "naturalistic", which means the machine needs to model behaviors; e.g., a pedestrian's behavior is different from a car's. See, for example, what Mobileye does (they have videos on their website). Companies in this space have complex behavioral models that they use for "path planning" so that the cars drive naturalistically while being safe, and this requires awareness of what is in the scene (not merely the presence of things, but identification) to make intelligent decisions. You also have to follow traffic rules, even if you could drive without causing crashes while breaking them, because they are enforced locally and you don't want to be fined. To follow the rules, you have to be able to detect signs, etc.