George G. Lendaris
25:02
7 months ago
Daniel S. Levine
27:33
7 months ago
Nicola Fabiano
24:43
1 year ago
David Brown
32:03
1 year ago
Terrence Sejnowski
1:02:13
1 year ago
Paul Werbos
28:06
1 year ago
Impacts of the NSF Career Award Panel
1:05:23
Explainable AI Panel
1:15:37
1 year ago
Learning with No Data Collection
1:00:15
Comments
@RiteshSinghArya 27 days ago
Shunichi Amari is a legend.
@Mentaculus42 7 months ago
If only things (the politics of funding different approaches) had gone differently in the 1960s; I wonder how much would be different today. I still reflect very positively on the number of classes I took from Professor Widrow, but I am rather conflicted about TOMEX.

Widrow has a nice overview article, "The Hebbian-LMS Learning Algorithm," that is freely available. There is also a book on the topic; look at the article first. ADALINE (Adaptive Linear Neuron, later Adaptive Linear Element) and MADALINE are among the earliest research in that area.

Having watched the complete video, something that seems "interesting" is Widrow's somewhat revisionist history of the funding kerfuffle that occurred in the 1960s. He has basically downplayed the whole symbolic AI (à la McCarthy et al.) vs. neural networks funding conflict. ARPA (AKA DARPA, the Defense Advanced Research Projects Agency) chose symbolic reasoning over neural nets in the 1960s. Widrow at Stanford was at the forefront of neural networks (machine learning) in the early 1960s, but he seemed so traumatized by the intellectual and funding disputes that he basically reinvented himself. This has been mentioned in other histories that are available. He was so traumatized by the battle between the two camps (symbolic vs. neural net) that he literally dissuaded prospective PhD students from entering the field that would become machine learning, since there was no funding. Now, in his 90s, he seems very happy to reattach himself to the contemporary successes of the area.

Even though I was working on a project (late 1980s) associated with Widrow and Stanford that could have used his insights about neural networks, he never brought them up. When I offered a neurological motivation for an image-processing algorithm, he questioned me about how I came up with the idea, but nothing more. It took a number of years before my research uncovered Widrow's connection to early NNs; I am a little disappointed that he didn't provide more support. He also completely neglected to mention that he started a company called TOMEX (for Tomographic Exploration), which I believe was sold to Schlumberger. It was involved with fossil-fuel drilling, which I guess is not considered particularly environmentally correct these days (but who knows, it may have paid for those Andy Warhol prints).

47:05 The house he is referring to is faculty housing on Stanford property. I remember being there while he was wondering how to keep his swimming pool from slipping down the side of the hill. I suggested some solutions; since the pool is still there, I guess they worked. I believe the local name for that area was the "Faculty Ghetto," a local joke, as that house is probably worth around $10,000,000 now. My belief is that the house is part of the "Residential Ground Lease" program and will revert to Stanford at some future date. One of his daughters mentioned that when she was attending Stanford (free tuition), the joke was that when someone asked where she came from, she would reply "Stanford" and get a quizzical look.

1:20:02 The generator is still running at exactly the same frequency as the network. What is described only works for an isolated single generator; changing the frequency of a whole network of interconnected generators is a lot more complicated. The point I am trying to make is not about frequency control, but rather a reasonably safe generalization: most things are a lot more complicated than portrayed.

1:54:52 "Surely You're Joking, Mr. Feynman!"

2:03:16 The latest "training cost" (whatever that precisely means for the latest AI models) is now over $100,000,000. Eric Schmidt just did an interview (2024) and said an iteration can cost as much as $400,000,000, with a lot of that being electricity.

2:29:54 Are ML models (transformer neural networks, etc.) "intelligent"?
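For readers following the pointer to the Hebbian-LMS article, here is a minimal sketch of the classic Widrow-Hoff LMS rule underlying ADALINE, which the comment names but does not spell out. The toy system-identification setup, step size, and variable names are illustrative assumptions, not taken from the talk or the article.

```python
# Minimal LMS (Widrow-Hoff) sketch: adapt weights w toward an unknown
# linear system w_true using the instantaneous-error update w <- w + mu*e*x.
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([0.5, -1.0, 2.0])        # unknown system to identify (toy)
w = np.zeros(3)                            # adaptive weights
mu = 0.05                                  # LMS step size (illustrative)

for _ in range(2000):
    x = rng.normal(size=3)                 # input sample
    d = w_true @ x + 0.01 * rng.normal()   # desired (noisy) response
    e = d - w @ x                          # instantaneous error
    w += mu * e * x                        # Widrow-Hoff update

print("learned weights:", np.round(w, 3))  # converges near w_true
```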
@MDNQ-ud1ty 11 months ago
What if, instead of full training or completely random weights, one takes a middle ground? Do some training to get a partial representation, then compress the weights down significantly and use them to initialize the network before training the linear layer. This should make the linear layer easier to train and more accurate.
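A minimal sketch of this middle-ground idea, under stated assumptions: a tiny two-layer network on toy data, a few gradient steps as the "partial training," and a truncated SVD standing in for "compress the weights significantly" (the SVD choice, data, and all names are my illustrative assumptions, not the commenter's specification).

```python
# Sketch: partially train hidden weights, compress them (low-rank SVD),
# freeze the compressed weights, then fit only the linear readout.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                 # toy inputs
y = np.tanh(X @ rng.normal(size=(20, 1)))      # toy targets

# 1) Partial training: a few crude gradient steps on the hidden layer,
#    alternating with a least-squares readout (illustrative scheme).
W = rng.normal(scale=0.1, size=(20, 64))
for _ in range(20):
    H = np.tanh(X @ W)
    beta = np.linalg.lstsq(H, y, rcond=None)[0]
    err = H @ beta - y
    grad = X.T @ ((err @ beta.T) * (1 - H**2)) / len(X)
    W -= 0.5 * grad

# 2) Compress the partially trained weights: keep top-k singular directions.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
k = 8
W_init = (U[:, :k] * s[:k]) @ Vt[:k, :]        # low-rank initialization

# 3) Freeze the compressed hidden weights; train only the linear readout.
H = np.tanh(X @ W_init)
beta = np.linalg.lstsq(H, y, rcond=None)[0]
print("readout MSE:", float(np.mean((H @ beta - y) ** 2)))
```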
@CLAUDIOGALLICCHIO 5 months ago
Definitely an interesting and relevant alternative to explore. Hybrid models are a great line of research for the near future.
@jondor654 1 year ago
Thank you for this essential presentation.
@szuh1 1 year ago
Good to see you in real video, Bernie! Remember three decades ago when we were together at INNS, teaching the UCLA Engineering short course "Introduction to Artificial Neural Networks," organized by Prof. Bill Goodin, UCLA?
@andyr8812 1 year ago
F the EU!!!! The very idea that those dumbheads want to regulate everything is a clear sign of fascism. The EU was not supposed to be more than a trading organization, and it crossed that red line a long time ago with its authoritarian infiltration of every member country. The EU, being a fascist organization, will certainly program the AIs to follow its own tyrannical political agenda.
@ngc-ho1xd 1 year ago
I love Jay McClelland.
@AtticusDenzil 2 years ago
At 55:00 to 59:00 — damn, that's so true and insightful.
@yuqin7580 2 years ago
mark
@GrantCastillou 2 years ago
It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The leading group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.
@chrisdougherty5683 3 years ago
As an engineer who is also interested in the history of technology, I found this an incredibly good talk/interview. I immediately found and bought a used copy of the author's 1985 book, hoping to dive deeper into his explanation of LMS-based adaptive filtering (not my area of engineering). Thanks for sharing!