Ah, interesting… I had wondered about the distinction between NLU and NLP and now it makes sense! Cheers!
@robertokalinovskyy7347 a year ago
Great lecture!
@420_gunna 9 months ago
Great lecture! :)
@p4r7h-v 7 months ago
thanks!!
@mshonle a year ago
Can using dropout during inference be another way to set the temperature and perform sampling? E.g., if training had a 10% dropout rate, why not apply a similar random dropout during inference? The neurons which get zeroed out could depend on some distribution, such as selecting neurons evenly or favoring the earlier layers or targeting attention heads at specific layers. One might expect the token distributions would be more varied than what beam search alone could find.
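To make the suggestion concrete, here is a minimal PyTorch sketch of inference-time (Monte Carlo) dropout. The model interface, the 10% rate, and the uniform treatment of all dropout layers are assumptions for illustration; targeting specific layers or attention heads, as the comment suggests, would mean filtering which modules get re-enabled.

```python
import torch

# A rough sketch of MC dropout at inference, assuming a PyTorch language
# model trained with ~10% dropout whose forward pass returns logits of
# shape (batch, seq_len, vocab_size). All names here are illustrative.
def sample_next_token_with_mc_dropout(model, input_ids, num_samples=5):
    model.eval()  # keep layers like LayerNorm/BatchNorm in eval mode
    # Re-enable only the dropout modules, so each forward pass zeroes
    # out a different random subset of activations.
    for module in model.modules():
        if isinstance(module, torch.nn.Dropout):
            module.train()

    samples = []
    with torch.no_grad():
        for _ in range(num_samples):
            logits = model(input_ids)              # stochastic due to dropout
            probs = torch.softmax(logits[:, -1, :], dim=-1)
            samples.append(torch.multinomial(probs, num_samples=1))
    # Likely more varied than repeated deterministic decoding of one model.
    return samples
```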
@l501l501l a year ago
Hi there, based on the schedule on your official course website, maybe this should be lecture 10, and "Prompting, Reinforcement Learning from Human Feedback" by Jesse Mu should be lecture 11?
@banruo-tz7tx 4 months ago
Yeah, it seems there is a mistake.
@yagneshm.bhadiyadra4359 2 months ago
There is a mistake; Josh mentions this in lecture 9.
@JQ0004 a year ago
The TA seems to have attended Ng's classes a lot; he imitates "ok cool" a lot. 😀
@annawilson3824 6 months ago
1:02:05
@yagneshbhadiyadra7938 a month ago
This is lecture 10, not 11.
@sudhanvasavyasachi2525 15 days ago
The content is getting more abstract as I progress, maybe because all of these are quite recent developments.