MIT 6.S191 (2023): Deep Learning New Frontiers

85,341 views

Alexander Amini

1 day ago

MIT Introduction to Deep Learning 6.S191: Lecture 7
Deep Learning Limitations and New Frontiers
Lecturer: Ava Amini
2023 Edition
For all lectures, slides, and lab materials: introtodeeplear...​
Lecture Outline - coming soon!
Subscribe to stay up to date with new deep learning lectures at MIT, or follow us @MITDeepLearning on Twitter and Instagram to stay fully-connected!!

Comments: 65
@manohariisc 1 year ago
This is wonderful. The speakers in this series are very generous in sharing their insights and knowledge. And they are immensely talented in the art of exposition. Many thanks.
@fencerlacroix2512 1 year ago
In Lecture 7 of MIT's Introduction to Deep Learning course, the instructor discusses the limitations of deep learning and explores some new frontiers in the field. One of the limitations of deep learning is that it often requires a large amount of labeled data to train a model. This can be difficult to obtain in certain domains, and it can also be expensive and time-consuming to annotate the data. Another limitation is that deep learning models can be difficult to interpret, which can make it challenging to understand why a model is making a particular prediction or decision.

To address these limitations, researchers are exploring new frontiers in the field of deep learning. One area of focus is unsupervised learning, which involves training models on unlabeled data. This can be useful in domains where labeled data is scarce or expensive to obtain. Another area of focus is explainable AI, which aims to make deep learning models more transparent and interpretable. This can help to build trust in the models and ensure that they are making decisions that align with ethical and legal standards.

The instructor also discusses some new frontiers in deep learning research, including reinforcement learning and generative models. Reinforcement learning involves training models to make decisions based on rewards and punishments, and it has been used to develop autonomous agents that can play games and navigate complex environments. Generative models involve training models to generate new data that is similar to a given dataset. This has applications in fields such as art, music, and natural language processing.

Overall, the lecture provides a broad overview of the limitations of deep learning and the new frontiers that researchers are exploring in the field. By understanding these limitations and exploring new approaches to deep learning, researchers can continue to push the boundaries of what is possible with this powerful technology.
@xmohd2011 1 year ago
Is this generated using AI? It looks like it, tbh.
@eddiejennings5262 1 year ago
Thank you all again. I have personally followed and have many times recommended this series to friends and colleagues. I look forward to and will follow up on materials on encoding structure and prior knowledge during learning and extrapolation.
@fencerlacroix2512 1 year ago
Ayo, just dropped in to say thanks to the MIT crew for putting together this dope lecture on deep learning limitations and new frontiers! Ava Amini / Alex Amini really killed it with the presentation, and I learned a lot from this video. Keep doing your thing, MIT! You're setting the standard for educational content and helping us all stay ahead of the curve. Peace out!
@biniyam106 1 year ago
this looks ai generated
@fencerlacroix2512 1 year ago
@@biniyam106 damn right!! it is, imagine typing all that. but this is human generated
@AAmini 1 year ago
😂
@Phliee 1 year ago
For those who wonder how diffusion models are trained, here's what I figured out (correct me if wrong): First, noise the image for t time steps with some noising equations, so that for each time step you have a ground-truth noise. Then train your network a bit like a recurrent net, which has t time steps but one set of weights. For each time step, input the noised image from the previous time step, and the network predicts what the noise is like in this image. The difference between the predicted noise and the real noise is your loss function. After training is done, the network already "knows" what the noise looks like at each time step, so given a completely random noise image from scratch, it's possible to subtract the noise time step by time step until it's completely denoised. So iterate over t time steps, predict the noise at each time step, use a denoising equation someone else has already figured out to get a cleaner image as the next input, and finally create a new image.
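To make the description above concrete, here is a minimal PyTorch sketch of that idea, assuming a hypothetical `noise_predictor` network and a simple linear noise schedule (my own illustration, not necessarily the exact formulation used in the lecture):

```python
import torch
import torch.nn.functional as F

T = 1000                                    # number of diffusion time steps
betas = torch.linspace(1e-4, 0.02, T)       # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, 0)  # cumulative signal-retention factors

def training_step(noise_predictor, x0, optimizer):
    """One training step: noise a clean image batch x0 and learn to predict that noise."""
    t = torch.randint(0, T, (x0.shape[0],))             # a random time step per image
    eps = torch.randn_like(x0)                           # the ground-truth noise
    a_bar = alphas_bar[t].view(-1, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps   # forward-noised image
    eps_hat = noise_predictor(x_t, t)                     # network predicts the added noise
    loss = F.mse_loss(eps_hat, eps)                        # predicted noise vs. real noise
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def sample(noise_predictor, shape):
    """Reverse process: start from pure noise and subtract predicted noise step by step."""
    x = torch.randn(shape)                                 # completely random noise image
    for t in reversed(range(T)):
        eps_hat = noise_predictor(x, torch.full((shape[0],), t))
        alpha = 1.0 - betas[t]
        # DDPM-style update: remove the predicted noise, then (except at t = 0)
        # add back a small amount of fresh noise before the next step.
        x = (x - (1 - alpha) / (1 - alphas_bar[t]).sqrt() * eps_hat) / alpha.sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)
    return x
```

Here `training_step` implements the "predicted noise vs. real noise" loss described in the comment, and `sample` is the step-by-step denoising loop that turns pure noise into a new image.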
@connorkapooh2002 1 year ago
whooooaaaaa that makes complete sense too! denoising the image at different levels has really made that click for me, thank you very much!
@vikrambhutani 1 year ago
Great insights into GCNs in deep learning, well done.
@bluealchemist6776 1 year ago
MIT, thank you for the incredible knowledge shared!
@gapsongg 1 year ago
Thank you
@SphereofTime 7 months ago
44:00 Forward Noising
@user-wr4yl7tx3w 1 year ago
Really enjoyed the presentation. Well structured and organized.
@nsteblay 1 year ago
Thanks for making these lectures available. Well worth the time of anyone wanting to understand the current state of ML / AI. Funny - a Mrs. Davis commercial popped up in the middle of me watching this lecture. The result of an applied AI algorithm? Who knows!
@alexis91459 1 year ago
Just awesome, can't wait for lecture 8
@savantofillusions 1 year ago
Hey, Alexander and Ava - I'm a savant who has some valuable data for training datasets. I draw perfectly sideways without knowing what I'm constructing in the illustration and don't know until I see it rotated 90 degrees. I have potential eye tracking, finger tracking and coordinate data, as well as the math of the pixels as the drawing is made. I am technically a "demon of science" and I break a few psychophysical "laws" as I do art, like Maxwell's demon opens a trap door (without really knowing what it's doing). The demon has to just get really lucky in order to break "nature", if you ask me. There's no other way to truly break the 2nd law of thermodynamics. That aside, I just seem to be very lucky like this when I do a profoundly automatic drawing humans cannot visualize until seeing it in its best-fit perspective for viewing the contained objects. I am looking for a home for my work. I'm trying to get Adobe to set up a lab for me in Cambridge. Fingers crossed. I don't want Elon's help, lol. I'm open to other ideas if staff at the schools have them and want to get the drop on it. I'm ready to move there from Richmond, Va. I'm working on my own lab software, but can't make it what it should be with what I've got right now. It's a really good candidate for an NSF PAC grant project for someone qualified (other than me).
@user-wr4yl7tx3w 1 year ago
If I understand correctly, that means, if you want to generate random images of dogs, you need a diffusion model trained on dogs. It's not like you can train on 100 different classes of animals, and get random images of those animals. Just want to clarify that.
@AAmini 1 year ago
You can do either! In the case where you train on all 100 classes of animals, you will also need a way to tell the model that you now want to generate a new image *specifically* of a dog. This is called "conditioning" -- check out conditional diffusion models for more details.
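As a rough sketch of where a conditioning signal can enter (continuing the toy PyTorch example above; `ConditionalNoisePredictor` is a made-up name, the time-step input is omitted for brevity, and real conditional diffusion models typically use a U-Net with cross-attention or adaptive normalization instead of this simple concatenation):

```python
import torch
import torch.nn as nn

class ConditionalNoisePredictor(nn.Module):
    """Toy noise predictor that also receives a class label (e.g. the index for "dog")."""
    def __init__(self, num_classes=100, embed_dim=16, channels=3):
        super().__init__()
        self.class_embed = nn.Embedding(num_classes, embed_dim)   # one embedding per class
        self.net = nn.Sequential(
            nn.Conv2d(channels + embed_dim, 64, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1),                 # output has the shape of the noise
        )

    def forward(self, x_t, class_label):
        # Broadcast the class embedding over the spatial dimensions and concatenate it
        # to the noised image as extra input channels, so denoising becomes class-aware.
        b, _, h, w = x_t.shape
        cond = self.class_embed(class_label).view(b, -1, 1, 1).expand(b, -1, h, w)
        return self.net(torch.cat([x_t, cond], dim=1))

# At sampling time, passing the "dog" class index steers generation toward dogs, e.g.:
#   eps_hat = model(x_t, torch.full((x_t.shape[0],), dog_class_index))
```

Classifier-free guidance builds on the same idea by also training with the label randomly dropped, but the concatenation above is enough to show where the conditioning signal enters the model.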
@theawesomeharris 1 year ago
This lecture is so informative, I'm fully blown away!
@krajanna 1 year ago
Nice lecture with updated syllabus. Superb.
@loremipsum9071 1 year ago
Really love the way Ava explains the materials ❤
@johnpaily 7 months ago
Deep learning calls to go beyond mind and five sensory organs to connect to the mind of the heart and beyond to the INNER SPACE.
@pavalep 1 year ago
Thanks, Ava, for this great lecture!!!
@yuqiwang3296 1 year ago
can't wait for the whole course😍
@SphereofTime 7 months ago
43:00 Diffusion model rather than
@michaelbuckers 8 months ago
The reason people often see incorrectly generated hands is that human vision is extremely good at recognizing hands. Errors like that are everywhere in equal amounts; it's just that your brain exaggerates them greatly when it comes to hands.
@johnpaily 7 months ago
It then exposes the black hole singularity and exposes the parallel world
@mPajuhaan 1 year ago
This was thoughtfully structured and meticulously organized👌
@edmonda.9748 1 year ago
Quick question to all DL enthusiasts: I read somewhere (though I forgot where!) that there is a new architecture that can capture long-distance relationships and far-apart features. Let me explain: CNNs can capture features in images which are adjacent or very close to each other, so that the sliding filter can capture them as a bigger-scale feature. This new DL architecture can capture features that are in the same image but very far apart from each other, much farther than the size of the filter. Does anybody know about this architecture? Hope I made some sense! 😂
@nikteshy9131 1 year ago
Thank you 🙏💕 😊
@johnpaily 7 months ago
Cross Link the mind of the body with the mind of the heart and explore the INNER SPACE
@codingWorld709 1 year ago
Love you Sir and Ma'am ❤❤❤❤
@andrewgoodrich3530 10 months ago
Those bubbles are linked to the stock market and finance. The more hype the more money you can get as a sector, as a company.
@johnpaily 7 months ago
In which direction is the time flow studied, vertical or horizontal? Do you consider the overall time direction?
@johnpaily 7 months ago
The greatest intellectual of the last century, Max Planck, said "A conscious and intelligent mind is the matrix of matter". Einstein went on to call to look deep into nature and search for the mind of God. We need to look deep into life and unravel consciousness and the root of creativity from the atomic level. This would be a stepping stone to deep learning. It can unravel the truth of nature and life and lead humanity from darkness to light.
@saulsaitowitz6023 1 year ago
When a diffusion model produces a generated image, how close is that to some image that was in the training data? Like was there a turtle swimming in the ocean in the training data (54:40) that the model just recreated? Or is the output brand new?
@bigbud369 7 months ago
51:37 - is the ultimate "noise case" actually the cosmic microwave background (CMB)?
@RedTooNotBlue 11 months ago
Thank you for this awesome content! Would be good to see some code examples alongside the models talked about, nonetheless awesome stuff guys!
@dennissdigitaldump8619 1 year ago
AI plus researcher data should be included. It should never run via diffusion only; it also needs result data fed back.
@RajabNatshah 1 year ago
Thank you :)
@SphereofTime 7 months ago
35:00
@liftingmysoul 1 year ago
Where could we get the shirt if we are watching online? Thanks!
@erkinalp 1 year ago
Did you get any help from AI while updating the syllabus?
@marcosrmgalvao 1 year ago
Can a diffusion model create any image out of noise?
@misesliberty 8 months ago
"Garbage in, garbage out." How about LLMs? Aren't they a good counterexample?
@user-wr4yl7tx3w 1 year ago
Are all unfolded proteins the same? That is, can a folded protein that performs a certain biological function come from any unfolded protein?
@AAmini 1 year ago
The unfolded state would have to have the same amino acid sequence as the folded state. But the unfolded protein could occupy a number of different conformations (i.e., initial states) before folding to the final folded state. And there may also be slight variations on the folded state.
@AbhishekVerma-kj9hd 11 months ago
What does it mean to spatially share the parameters of each filter?
@weiyicho8209 1 year ago
This is really an amazing course. I got one question: it seems like I couldn't install capsa in Google Colab in lab 3. Is there any way to solve this problem?
@ZK-iu5gl 1 year ago
I suggest creating a group to discuss the topics.
@jennifergo2024 11 months ago
Thanks for sharing
@rohanchess8332 1 year ago
Hey, is there any way to buy MIT Deep Learning T-shirts? I really liked them!
@SphereofTime 1 year ago
6:19
@sonamshrishmagar6035 1 year ago
Alex, could we get timestamps?
@ayushhhhhh 1 year ago
I need that shirt 🙇
@fsaudm 10 months ago
Same here!! Really, @Alexander Amini, any way of getting one?? 🙏 🙏
@ojasvisingh786 1 year ago
👏👏
@raiso9759 1 year ago
Thank you
@wobblynl1742 1 year ago
Not me watching this and being low-key jelly of the lab prizes and t-shirts 🫠
@acerhigh09 1 year ago
How can overhype be "very dangerous"?