Martin Wattenberg: Models within models - how do LLMs represent the world?

497 views

Berkeley EECS

1 day ago

You can find more information, including the course syllabus and suggested readings, at rdi.berkeley.ed...
Martin Wattenberg, Professor, Harvard University

Comments: 1
@samwight · 2 months ago
There's a weird personification of language models that nobody applies to image generation models. It's odd to assume that these things "think", that they "do" things, when they're fundamentally no different from image generation models: they generate text that sometimes happens to be right. It's just that we've trained these models to *pretend* to be intelligent.