Stanford Seminar - Recent progress in verifying neural networks, Zico Kolter

  11,238 views

Stanford Online

1 day ago

Comments: 10
@tkmnus2023 · 1 year ago
Can this be applied to automated ML such as AutoML and AutoKeras?
@Handelsbilanzdefizit · 2 years ago
After several years in AI research, I came to the conclusion that AI is mostly fooling technology. For a dog, a mirror is sufficient to fool him: he barks at the mirror and sees another dog that looks like him and behaves like him. For humans we need complex algorithms for the illusion of intelligence: chatbots capable of passing the Turing test. But nevertheless, it's just a sophisticated mirror. With some real applications, indeed, but mostly it's grandstanding by the big players in the tech industry.
@armin3057 · 2 years ago
So your point is what, exactly? That we will never get to a real synthetic intelligence?
@Handelsbilanzdefizit · 2 years ago
@@armin3057 It's an economic problem, not a technical one. Nowadays tech companies are also media companies (search engines, social media, news and video platforms, ...) that push their own PR, and they are less regulated by cartel offices. Investors without deep knowledge, but capable of moving billions, put their money mainly into overhyped technologies. That's the risk when you concentrate money in a few hands: everything depends on their good or bad decisions. Musk will invent brain chips and send people to Mars and so on. Such crazy stuff. A vicious circle to the ground. Investments in public education would be more fruitful. But over some generations, a market economy becomes unstable and doesn't tend toward an equilibrium of high standards.
@dreamcatcher8307 · 2 years ago
It seems you have a good command of AI. The more you learn, the more you see the bitter truth, but also a better way to improve things. Share some knowledge.
@Handelsbilanzdefizit · 2 years ago
@@dreamcatcher8307 As I said, it's like looking in a mirror. It reflects the training data we put in. Prejudice, vanities, ... it's all encoded in myriads of parameters, so that we can interpolate and generalize the shit we feed in. With AI we can classify all sorts of bullshit. We can distinguish rabbit shit from propaganda shit, Putin shit from Biden shit from Zelenskyy shit. The weights store templates and filters for every possible kind of shit, how it looks, how it smells, ... and shit comes out. Because we placed a pile of shit in front of the mirror, it returns shit, and people are getting shat on with artificially generated content. Even with AI like AlphaGo that outperforms human players, it leads to an initial-value problem: the hope that when we start with shit, the shit will reach a new quality just by training. Someday AI will produce greater shit than we do ourselves. The entire AI business is shitty. The only way to real AI is sensory exploration, like an animal or a baby, not feeding it pre-selected shit.
@bayesianlee6447 · 2 years ago
Interesting fact: there are some theories that humans don't have consciousness, just a biochemical algorithm. As you say, we fool ourselves that we are divine, but we are just like other animals, which also have a neuron-synapse information processor.
@dani.afiiq_ · 2 years ago
Thanks
@lykhoaqs · 2 years ago
hello✌✌✌✌