Michael I. Jordan: An Alternative View on AI: Collaborative Learning, Incentives, and Social Welfare

8,865 views

Stanford Data Science


Michael I. Jordan is the Pehong Chen Distinguished Professor in the Department of Electrical Engineering and Computer Science and the Department of Statistics at the University of California, Berkeley. He received his master's degree in Mathematics from Arizona State University and earned his PhD in Cognitive Science in 1985 from the University of California, San Diego. He was a professor at MIT from 1988 to 1998. His research interests bridge the computational, statistical, cognitive, biological, and social sciences.
Abstract:
Artificial intelligence (AI) has focused on a paradigm in which intelligence inheres in a single, autonomous agent. Social issues are entirely secondary in this paradigm. Indeed, the overall design of deployed AI systems is often naive---a centralized entity provides services to passive agents and reaps the rewards. Such a framing need not be the dominant paradigm for information technology. In a broader framing, agents are active, they are cooperative, their data is valuable, and they wish to obtain value from their participation in learning-based systems. Intelligence inheres as much in the overall system as it does in individual agents, be they humans or computers. This is a perspective familiar in economics, and a first goal in this line of work is to bring economics into contact with the computing and data sciences. The long-term goal is two-fold---to provide a broader conceptual foundation for emerging real-world AI systems, and to upend received wisdom in the computational, economic, and inferential disciplines.

Comments: 3

@tigranishkhanov9521, 7 months ago:
I always thought that ML is statistics + geometry done on the computer in high-dimensional spaces. Geometry comes in to help with high dimensionality: instead of learning distributions exactly, which is very hard in high dimensions, we learn separating surfaces of relatively simple geometry (like hyperplanes) as approximations.
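A minimal sketch of the commenter's point: rather than estimating full class-conditional distributions (hard in high dimensions), one can learn a simple geometric separator, here a hyperplane fit with the classic perceptron rule. The synthetic data and all variable names are illustrative assumptions, not anything from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 200                          # a moderately high-dimensional space

# Two linearly separable classes: labels given by the sign of a random
# linear function, so a separating hyperplane exists by construction.
w_true = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = np.sign(X @ w_true)                 # labels in {-1, +1}

# Perceptron: nudge the hyperplane's normal vector on each mistake.
w = np.zeros(d)
for _ in range(20):                     # a few passes over the data
    for xi, yi in zip(X, y):
        if yi * (xi @ w) <= 0:          # misclassified (or on the boundary)
            w += yi * xi                # rotate the hyperplane toward xi

accuracy = np.mean(np.sign(X @ w) == y)
```

Note that the learner never models the two class distributions themselves; it only searches the (comparatively tiny) family of hyperplanes, which is exactly the geometric shortcut the comment describes.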