Stanford CS224N NLP with Deep Learning | 2023 | Lecture 11 - Natural Language Generation

18,203 views

Stanford Online

8 months ago

For more information about Stanford's Artificial Intelligence professional and graduate programs visit: stanford.io/ai
This lecture covers:
1. What is NLG?
2. A review: neural NLG model and training algorithm
3. Decoding from NLG models
4. Training NLG models
5. Evaluating NLG Systems
6. Ethical Considerations
What is natural language generation?
Natural language generation is one side of natural language processing: NLP = Natural Language Understanding (NLU) + Natural Language Generation (NLG). NLG focuses on systems that produce fluent, coherent, and useful language output for human consumption. Deep learning is powering next-generation NLG systems!
To learn more about this course visit: online.stanford.edu/courses/c...
To follow along with the course schedule and syllabus visit: web.stanford.edu/class/cs224n/
Xiang Lisa Li
xiangli1999.github.io/
Professor Christopher Manning
Thomas M. Siebel Professor in Machine Learning, Professor of Linguistics and of Computer Science
Director, Stanford Artificial Intelligence Laboratory (SAIL)
#naturallanguageprocessing #deeplearning

Comments: 7
@420_gunna
@420_gunna 1 month ago
Great lecture! :)
@uraskarg710
@uraskarg710 7 months ago
Great lecture! Thanks!
@mshonle
@mshonle 8 months ago
Ah, interesting… I had wondered about the distinction between NLU and NLP and now it makes sense! Cheers!
@robertokalinovskyy7347
@robertokalinovskyy7347 5 months ago
Great lecture!
@l501l501l
@l501l501l 7 months ago
Hi there, based on the schedule on your official course website, maybe this should be Lecture 10, and "Prompting, Reinforcement Learning from Human Feedback" by Jesse Mu should be Lecture 11?
@mshonle
@mshonle 8 months ago
Can using dropout during inference be another way to set the temperature and perform sampling? E.g., if training had a 10% dropout rate, why not apply a similar random dropout during inference? The neurons which get zeroed out could depend on some distribution, such as selecting neurons evenly or favoring the earlier layers or targeting attention heads at specific layers. One might expect the token distributions would be more varied than what beam search alone could find.
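A minimal sketch of the idea in this comment, assuming a hypothetical PyTorch-style language model whose forward pass returns logits of shape (1, seq_len, vocab_size): if the Dropout modules are left stochastic at inference time, every forward pass draws a fresh dropout mask, so repeated decoding steps yield varied token distributions without changing the softmax temperature.

```python
import torch
import torch.nn as nn

def enable_mc_dropout(model: nn.Module, p: float = 0.1) -> None:
    """Put only the Dropout modules into training mode so they stay stochastic
    at inference time; all other layers remain in eval mode."""
    model.eval()
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.p = p
            module.train()

@torch.no_grad()
def sample_next_token(model: nn.Module, input_ids: torch.Tensor) -> int:
    """One stochastic decoding step: the logits differ across calls because a
    new dropout mask is drawn on every forward pass."""
    logits = model(input_ids)            # assumed shape: (1, seq_len, vocab_size)
    probs = torch.softmax(logits[0, -1], dim=-1)
    return torch.multinomial(probs, num_samples=1).item()
```

Whether this matches the effect of temperature scaling is an empirical question; it injects noise into the activations rather than flattening the output distribution directly.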
@JQ0004
@JQ0004 7 months ago
The TA seems to attend Ng's class a lot. Seems to imitate "ok cool" a lot. 😀