AMMI Course "Geometric Deep Learning" - Lecture 1 (Introduction) - Michael Bronstein

54,448 views

Michael Bronstein


Comments: 51
@TheAIEpiphany · 2 years ago
Bravo Michael! I really love that you put things into a historical context - that helps us create a map (a graph :) ) of how concepts connect and evolve, and by introducing this structure into our mental models it becomes easier to explore this vast space of knowledge.
@marfix19 · 3 years ago
This is pure coincidence: I'm currently interested in this topic and this amazing course popped up. Thank you very much, Prof. Michael, for opening these resources to the public. I might try to get in touch with you or your colleagues to discuss some ideas. Regards! M Saval
3 years ago
This is truly amazing. I finished my bachelor's in mathematics with a thesis in differential geometry, and I just started a master's degree in Artificial Intelligence research. I had seen some articles on geometric deep learning, but nothing as complete as this. I think this beautiful field fits my interests perfectly, and I think I'll orient my research career in this direction. Thank you very much for this.
@petergoodall6258 · 3 years ago
Oh wow! This ties together so many areas I've been interested in over the years, with concrete, intuitive applications.
@NoNTr1v1aL · 3 years ago
Amazing lecture series!
@gowtham236 · 3 years ago
This will keep me busy for the next few weeks!!
@maximeg3659 · 3 years ago
Thanks for uploading this!
@edsoncasimiro · 3 years ago
Hi dear Professor Michael Bronstein, congratulations on the great job you and your team are doing in the field of AI. I'm going into my junior year at university and kind of fell in love with geometric deep learning. Hopefully these lessons and the paper will help me understand more about it. Thanks for sharing, all the best.
@vinciardovangoughci7775 · 3 years ago
Thanks so much for doing this and putting it online for free. Generative models + gauges fuel my dreams.
@jordanfernandes581 · 3 years ago
I just started reading your book "Numerical geometry ..." today out of curiosity, and this shows up on YouTube. I'm looking forward to learning something new 🙂
@fredxu9826 · 3 years ago
What a good time to be alive! I’m going to enjoy this playlist.
@bernardogalvao4448 · 3 years ago
Same
@97ciaociaociao · 3 years ago
I'm barely holding on to my papers rn
@Alejandro-hh5ub · 1 year ago
The portrait on the left @5:35 is Pierre de Fermat, but it says Desargues 😅
@madhavpr · 2 years ago
This is fantastic!! It's great to have access to such amazing content online. What are the prerequisites for understanding the material? I know basic signal processing, linear algebra, and vector calculus, and I work (mostly) on deep learning. I'm learning differential geometry (of curves and surfaces in R^3) and abstract algebra on my own. Is my background sufficient? I feel a little overwhelmed.
@MichaelBronsteinGDL · 2 years ago
Should be sufficient.
@abrilgonzalez7892 · 1 month ago
Thank you Michael. Is there any chance we could access a certification or exam to validate the knowledge and maybe put that on our resume? I would really appreciate that!
@fredxu9826 · 3 years ago
Today I got the book that Dr. Bronstein suggested, "The Road to Reality" by Roger Penrose... wow, I wish I had come across this book way earlier. If I had had it early in my undergraduate years, I would have had much more fun and motivation to study physics and mathematics. This is just amazing.
@samm9840 · 3 years ago
I had seen your previous ICLR presentation on the same topic and was still not clear about the invariance and equivariance ideas! Now I've finally got hold of the concept of inductive biases (geometric priors) that must be built into model architectures: 1. images - shift invariance and equivariance; 2. graphs - permutation invariance and equivariance; 3. sequences/language - ?? For any other task we may encounter, we need to identify under which transformations the resulting function should be invariant or equivariant. Thank you very much, Sir, for generously putting it all out there for the public good.
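As a concrete illustration of the two priors named above, here is a minimal NumPy sketch (my own, not from the lecture; all names and shapes are made up): sum aggregation over nodes is permutation-invariant, and circular convolution is shift-equivariant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Graphs: sum aggregation over node features is permutation-INVARIANT.
X = rng.normal(size=(5, 3))        # 5 node feature vectors
perm = rng.permutation(5)
assert np.allclose(X.sum(axis=0), X[perm].sum(axis=0))

# Images: circular convolution is shift-EQUIVARIANT, i.e. convolving
# a shifted signal equals shifting the convolved signal.
def circ_conv(x, w):
    # circular convolution via the FFT convolution theorem
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(w, len(x))))

x = rng.normal(size=16)            # a 1-D "image"
w = rng.normal(size=4)             # a filter
assert np.allclose(circ_conv(np.roll(x, 3), w),
                   np.roll(circ_conv(x, w), 3))
print("both checks passed")
```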
@evenaicantfigurethisout · 3 years ago
23:41 I don't understand why we can simply permute the nodes of the caffeine molecule willy-nilly like that. The binding energy depends on what the neighboring atoms are, the number of bonds, and also the type of bonds. How can all of this information be preserved if we permute at will like this? For example, the permuted vectors here show all the yellows next to each other, while in the actual molecule there are no neighboring yellows at all!
@MichaelBronsteinGDL · 3 years ago
Molecular fingerprints are permutation-invariant, but they are built from permutation-equivariant aggregation. The way it works is a sequence of locally permutation-invariant aggregators (each corresponding to one GNN layer) that are permutation-equivariant, followed by a permutation-invariant pooling. So the graph structure is taken into account. We explain this in lectures 5-6.
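A toy NumPy sketch of that pattern (an equivariant aggregation layer followed by invariant pooling; the shapes, weights, and function names here are illustrative assumptions, not the course's code):

```python
import numpy as np

rng = np.random.default_rng(1)

def gnn_layer(A, X, W):
    # Permutation-equivariant layer: every node sums the multiset of its
    # neighbours' features (A @ X), then a shared map W is applied.
    return np.tanh(A @ X @ W)

def fingerprint(A, X, W):
    # Permutation-invariant readout: equivariant layer, then sum-pooling.
    return gnn_layer(A, X, W).sum(axis=0)

n, d = 6, 4
A = np.triu(rng.integers(0, 2, size=(n, n)), 1)
A = A + A.T                                  # symmetric adjacency, no self-loops
X = rng.normal(size=(n, d))                  # node features
W = rng.normal(size=(d, d))                  # shared weights

P = np.eye(n)[rng.permutation(n)]            # random permutation matrix
# Relabelling the graph (P A P^T, P X) leaves the fingerprint unchanged:
# the graph structure is used, but the node ordering is not.
assert np.allclose(fingerprint(P @ A @ P.T, P @ X, W),
                   fingerprint(A, X, W))
```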
@Chaosdude341 · 3 years ago
Thank you for uploading this.
@xinformatics · 3 years ago
05:08 Desargues looks strikingly similar to Pierre de Fermat. I think one of the labels is wrong.
@MichaelBronsteinGDL · 3 years ago
I think you are right
@droidcrackye5238 · 3 years ago
Great work, thanks
@Dr.Nagah.salem1 · 3 years ago
Oh my God, thank you very much for your effort
@jobiquirobi123 · 3 years ago
Thank you!
@rock_it_with_asher · 2 years ago
28:32 - A moment of revelation! wow!🤯
@444haluk · 2 years ago
32:45 That approach is too naive. If I say "I hate nachos", it doesn't mean that I have a connection with every nacho past, present, and future and that I hate every single one of them individually. No! I just hate nachos. After a minute of thinking you realize that what you need is hypergraphs in almost every situation.
@randalllionelkharkrang4047 · 2 years ago
I didn't understand most of the things mentioned here. Hopefully the later lectures will provide more insight.
@sumitlahiri4973 · 3 years ago
Awesome video!
@krishnaaditya2086 · 3 years ago
Awesome, thanks!
@channagirijagadish1201 · 2 years ago
Excellent lecture. Thanks, I appreciate it.
@mingmingtan8790 · 2 years ago
Hi, I can't access the slides. When I click on the link, it says "This URL has been blocked by Bitly's systems as potentially harmful."
@ifeomaveronicanwabufo3183 · 2 years ago
The resources, including the slides, can be found here: geometricdeeplearning.com/lectures/
@justinpennington509 · 3 years ago
Hi Professor Bronstein, what is the practical way of handling graph networks of different sizes? With a picture it's easy to maintain a consistent resolution and pixel count, but with graphs and subgraphs you could have any number of nodes. Is it typical to just pick a maximum N one would expect in practice and leave the unfilled nodes as 0 in the feature vector and adjacency matrix? If the sizes of these matrices are variable, how does that affect the weights of the net itself?
@MichaelBronsteinGDL · 3 years ago
The way graph functions are constructed in GNNs is by aggregating the multiset of neighbour features. This operation is done for every node of the graph. This way the GNN does not depend on the number of nodes, the number of neighbours, or their order.
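A minimal sketch of why this makes the weights independent of graph size (illustrative NumPy assuming a simple sum aggregator, not the course's actual code): the only learned object is the shared per-node map `W`, which never changes shape as the graph grows, so no padding to a maximum N is needed.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
W = rng.normal(size=(d, d))                # the learned weights: fixed (d, d)

def gnn_layer(A, X, W):
    # Aggregate each node's neighbour features, then apply the same map W.
    # Nothing here depends on how many nodes or neighbours there are.
    return np.tanh(A @ X @ W)

for n in (3, 7, 50):                       # graphs of very different sizes
    A = np.triu(rng.integers(0, 2, size=(n, n)), 1)
    A = A + A.T                            # symmetric adjacency, no self-loops
    X = rng.normal(size=(n, d))
    print(n, gnn_layer(A, X, W).shape)     # (n, d) -- same W throughout
```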
@vaap · 2 years ago
banger course
@mlworks · 3 years ago
Is there any book that accompanies the geometric deep learning course presented here?
@marijansmetko6619 · 3 years ago
This is basically the textbook: arxiv.org/abs/2104.13478
@sowmyakrishnan240 · 2 years ago
Thank you Dr. Bronstein for the extraordinary introductory lecture. Really excited to go through the rest of the lectures in this series! I have two questions based on the introduction:
1) When discussing the MNIST example you mentioned that images are high-dimensional. I could not understand that point, as images such as the MNIST dataset are considered 2-dimensional in other DL/CNN courses. Can you elaborate on how the higher dimensions emerge, or how to visualize them for cases such as MNIST?
2) In the case of molecules, even though the order of nodes can vary, the neighborhood of each node stays the same under non-reactive conditions (when bond formation/breakage is not expected). In such cases, does permutation invariance only refer to the order in which nodes are traversed in the graph (like variations in atom numbering between IUPAC names of molecules)? Does permutation invariance take into account changes in node neighborhood?
I apologize for the naive questions, professor. Thank you once again for the initiative to digitize these lectures for the benefit of students and researchers.
@Hyb1scus · 2 years ago
I don't think I can answer your first question in detail, but in an MNIST picture there are as many dimensions as there are pixels. It is the analysis of those pixels, individually or bundled through a convolution, that enables the program to determine the displayed number.
@MichaelBronsteinGDL · 2 years ago
Each pixel is treated as a coordinate of a vector, so even a 32x32 MNIST image is ~1K-dimensional
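In code form (a trivial sketch; MNIST's native 28x28 resolution assumed here):

```python
import numpy as np

img = np.zeros((28, 28))       # one grayscale MNIST digit
x = img.reshape(-1)            # the same image viewed as a single vector
print(x.shape)                 # (784,) -- one dimension per pixel;
                               # padded to 32x32 it becomes 1024 (~1K)
```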
@syedakbari845 · 2 years ago
The link to the lecture slides is not working; is there any way to still access them?
@ifeomaveronicanwabufo3183 · 2 years ago
The resources, including the slides, can be found here: geometricdeeplearning.com/lectures/
@akshaysarbhukan6701 · 3 years ago
Amazing lecture. However, I was not able to follow the mathematical part. Can someone suggest prerequisites for this lecture series?
@JohnSmith-ut5th · 2 years ago
The very fact that the human brain is captivated and fascinated by manifolds is enough to prove that the brain does not use the concept of manifolds in any manner. I'm going to tell you a secret I happen to know: the brain is purely a sparse L1-norm processor. It has no notion of "distance" except in the form of pattern matching. You're welcome... so now you can throw this entire video and all related research in the garbage, unless your goal is to make something better than the human brain.
@AtticusDenzil · 2 years ago
Polish accent
@MichaelBronsteinGDL · 2 years ago
Russian accent