Unsupervised Learning explained

111,299 views

deeplizard

Comments: 66
@deeplizard 6 years ago
Machine Learning / Deep Learning Tutorials for Programmers playlist: kzbin.info/aero/PLZbbT5o_s2xq7LwI2y8_QtvuXZedL6tQU Keras Machine Learning / Deep Learning Tutorial playlist: kzbin.info/aero/PLZbbT5o_s2xrwRnXk_yCPtnqqo4_u2YGL
@Otonium 4 years ago
Yes, please go deeper into autoencoders someday.
@deepaksingh9318 6 years ago
Yes, please do a video on autoencoders as well. And just to let you know, yours are the best videos I have found so far. Best and easiest for understanding the concepts.
@Waleed-qv8eg 6 years ago
I really love this playlist. It gives you a clear understanding of Machine Learning and Deep Learning! I have a comment: in Python, [1, 2, ...] is called a list, but a tuple looks like this: (1, 2, ...). [(1, 2), (1, 15)] is a list of tuples, and [[1, 2], [3, 4]] is a list of lists! Thank you so much!
@xiaomichina5884 4 years ago
After listening to such a sweet voice, my brain's neurons are predicting how beautiful you are... Here is the prediction result: Train => (listen to sweet voice); Validation => 0.86; Testing => your reply;
@solanofurlan443 4 years ago
First things first, this is the best series on KZbin about ML out there. But I want to know why you keep saying 'tuples' at 1:47 when referring to the list of height and weight samples. Is it a convention to call samples tuples, or should the data actually be in tuple format?
@deeplizard 4 years ago
Thank you :) For the example mentioned, each sample had two features: height and weight. A tuple, just being a finite ordered list of elements, would be one appropriate way to store such a sample. A list, an array, or any other type of data structure would be fine as well. Whatever you choose to store your data in, you'll likely need to process it to be in a particular format anyway before you send it to your model. Examples of such processing are shown in the Keras series.
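A minimal sketch of the storage options mentioned in this reply, assuming NumPy is available; the height/weight numbers and variable names are made up for illustration:

```python
import numpy as np

# Each sample has two features: height (inches) and weight (lbs).
samples_as_tuples = [(65, 150), (70, 180), (62, 130)]   # list of tuples
samples_as_lists  = [[65, 150], [70, 180], [62, 130]]   # list of lists

# Either structure can be converted to the 2D array of shape
# (num_samples, num_features) that a Keras model ultimately expects.
X = np.array(samples_as_tuples, dtype=float)
print(X.shape)  # (3, 2)
```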
@jrod238 5 years ago
Thank you for speaking clearly.
@viniciusneto6824 5 years ago
Hi! All the previous videos have been great so far. Thank you! But for this particular one, I felt you were talking mostly about autoencoders and not unsupervised learning in general, as you did when covering supervised learning in the last video. Is there a more in-depth video in any playlist? Anyway, thanks a lot! I appreciate your work!
@deeplizard 5 years ago
Hi Vinícius - You're welcome! In general, unsupervised learning only means that we train our model with *unlabeled* data. This seems a bit abstract and hard to conceptualize in its own right, especially after only being exposed to supervised learning techniques. To illustrate how we can train models without labels, we explore the common unsupervised learning techniques of autoencoders and clustering. You may also find it helpful to study the corresponding blog for this video: deeplizard.com/learn/video/lEfrr0Yr684 It has mostly the same content as the video but in written format. The top of the blog focuses on unsupervised learning in general before jumping into examples.
@technicallyluke9993 2 years ago
finally understood :)
@martinmartin6300 4 years ago
It might be worth mentioning that you can still validate unsupervised learning with accuracies. You can, for example, use a labeled validation set as a benchmark for the unsupervised learner. For example, suppose a speaker recognition task. You can come up with a labeled data set for this purpose. Then you let the learner train from scratch on the data as it goes. Afterwards, you can validate against this labeled data set. The assumption is that the learner will do similarly well for a new set of speakers. Note that it does not make sense to apply supervised learning in the first place, as the set of speakers might very well change from run to run.
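A minimal sketch of the validation idea described above, assuming scikit-learn and synthetic blob data standing in for real speaker features; the choice of KMeans and the Adjusted Rand Index is illustrative, not something prescribed in the video:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for speaker features; y is kept only for the benchmark.
X, y = make_blobs(n_samples=700, centers=4, n_features=16, random_state=0)
X_train, X_val, _, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Train the unsupervised learner on unlabeled data only.
model = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_train)

# Validate against the labeled benchmark. The Adjusted Rand Index compares the
# predicted cluster assignments to the true labels without requiring the
# clusters to carry names, which is what we need for an unlabeled learner.
score = adjusted_rand_score(y_val, model.predict(X_val))
print(f"Adjusted Rand Index on labeled validation set: {score:.2f}")
```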
@afdanv 3 years ago
{ "question": "One common application of autoencoders is:", "choices": [ "Denoising data", "Detrending data", "Predicting labeled data", "Reducing Inputs" ], "answer": "Denoising data", "creator": "Alex D", "creationDate": "2021-10-01T11:50:50.123Z" }
@abdulhameedmalik4299 2 months ago
Best video madam
@muji_dipto 3 years ago
{ "question": "Which of the following is a use case of Autoencoders?", "choices": [ "Denoise data or images in the inputs", "Convert categorical data to numeric data", "Reduce overfitting ", "Denoise data or images in the outputs" ], "answer": "Denoise data or images in the inputs", "creator": "AresThor", "creationDate": "2021-06-29T12:36:39.046Z" }
@luistorres7661 2 years ago
{ "question": "What is the main difference between supervised and unsupervised learning?", "choices": [ "The input data is not labeled", "The input data must be reconstructed", "The data is clustered by its structure", "The loss function is a logarithm" ], "answer": "The input data is not labeled", "creator": "Luis Torres", "creationDate": "2022-01-05T14:11:32.851Z" }
@arjunbakshi810 4 years ago
Would love to learn about autoencoding too
@jonyejin a year ago
Unsupervised learning: doing tasks without correct labels, i.e., learning good feature representations. For example, clustering: mapping the input to a good representation so that the data can be clustered nicely. Autoencoder: learning a good vectorized representation so that noise can be removed from noisy data.
@maheshbabu-oe5vh 3 years ago
Hello, you actually sing so sweetly in all of your lecture videos. Kindly sing the autoencoders too. Eagerly awaiting listening to that.
@aliiabedii 2 years ago
thank you
@lancemarchetti8673 a year ago
Brilliant!
@eleccafe98 a year ago
Perfect
@longmai9343 4 years ago
There is no such thing as unsupervised learning. There are only clustering, semi-supervised learning, and supervised learning.
@tymothylim6550 3 years ago
Thank you very much for this video! I really enjoyed learning about this and the auto-encoder was new for me! Very interesting and helpful for me!
@AmakanAgoniElisha 4 years ago
Hi, thanks for the video. Is it necessary to perform feature selection or extraction if you intend to perform unsupervised learning?
@justchill99902 5 years ago
Great explanation. You always pull it off so incredibly. Question - You said "accuracy" is not the metric to judge the performance of a clustering algorithm. So how do we judge its performance?
@MyFlabbergast a year ago
I hope a full-fledged tutorial on autoencoders (concept + code) did get added later on.
@LatpateShubhamManikrao 2 years ago
That was some clear explanation there!
@mabasadailycode1781 2 years ago
Thank you, great video 🕺
@gustavomartinez6892 6 years ago
Very good information!!!! Very simple. Great job, genius, excellent.
@thenkprajapati 5 years ago
Please create a video on Autoencoders. If you already have, please share the link.
@deeplizard 5 years ago
Thanks for the recommendation, Naresh!
@edmonda.9748 5 years ago
Autoencoders, autoencoders, autoencoders, ....please
@uchihashisui4597 3 years ago
New question proposed for the related quiz, kudos for these amazing courses!! { "question": "Accuracy is typically a metric used in the unsupervised learning process", "choices": [ "False", "True", " ", " " ], "answer": "False", "creator": "Hivemind", "creationDate": "2021-08-17T22:53:53.216Z" }
@Qornv 6 years ago
Thank you again for these videos
@deeplizard 6 years ago
You're welcome, member! Thanks for watching!
@barney3142 6 years ago
(In theory) Can I train a model unsupervised with a lot of data and later on give labels (manually) to the groups and use the trained model for classification?
@deeplizard 6 years ago
Hey Barnabás - Perhaps, but the types of unsupervised models we used in this video would not work well for a classification task. For the scenario you described, it sounds like semi-supervised learning would be more appropriate. This topic is covered here: kzbin.info/www/bejne/mF7cmX6LfrOVbdE
@Waleed-qv8eg 6 years ago
Hello again. Do you think we could use autoencoders as an early step when making a model for image detection, for example, so that we get images with no noise and the training set is clean before the prediction step? What I mean is: are autoencoders a good start for training an image detection model? Is this right? Thanks
@deeplizard 6 years ago
Hey الانترنت لحياة أسهل - The autoencoder will first need to be trained on non-noisy images so that it can learn the important features of the data. Then, with what it has learned from training, it can accept noisy images and denoise them based on its knowledge of the images it was originally trained on. If you passed the model noisy images to begin with, it wouldn't have prior knowledge of the "important" features of the images, so it wouldn't be able to distinguish between these features and noise. You'd have to train it first on clear images so the model could learn what features are important. Does this help clarify?
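A minimal Keras sketch of the workflow described in this reply (train on clean images first, then pass noisy images through the trained model), assuming MNIST as the clean image set; the layer sizes and noise level are illustrative assumptions, not details from the video:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Clean images the autoencoder is trained on first.
(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# Autoencoder: compress the input to a small code, then reconstruct it.
inputs = keras.Input(shape=(784,))
encoded = layers.Dense(64, activation="relu")(inputs)
decoded = layers.Dense(784, activation="sigmoid")(encoded)
autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# Train to reconstruct clean images so the model learns their important features.
autoencoder.fit(x_train, x_train, epochs=5, batch_size=256, validation_split=0.1)

# Later, pass noisy images through the trained model to denoise them.
noisy = np.clip(x_test + 0.3 * np.random.normal(size=x_test.shape), 0.0, 1.0)
denoised = autoencoder.predict(noisy)
```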
@Waleed-qv8eg 6 years ago
deeplizard Thank you, I got it. What if there were a function to remove noise first, to make the images clear, and then we passed them to the model we want to build for whatever detection task we want? Sorry, I'm asking a lot, but the reason is I'm interested in image processing in the field of machine learning! Have a great day!
@deeplizard 6 years ago
No problem, الانترنت لحياة أسهل. I'm not aware of a function myself that will do this from a neural network standpoint since the ones I'm aware of, like autoencoders for example, will need to be trained first to recognize what is important versus what is considered noise.
@gerelbatbatgerel1187 5 years ago
ty
@canmetan670 6 years ago
For a 5 minute video, this was a great explanation. Thanks.
@deeplizard 6 years ago
Thanks, Can!
@levtunik997 4 years ago
I couldn't understand from the video how 2 clusters without labels help us solve something. BTW, great video - short and concise.
@sgrouge 6 years ago
Very clear. Thanks
@rodom.8753 5 years ago
It is the same learning, but in an organized auto-coding (pre-coded) way. AI is not smarter, it just records more variable input.
@primodernious 5 years ago
I think I know how we're supposed to make an artificial brain. We need not layers but separate single-layered networks, each specialized in a different type of data, then an output network that separates the different types of pretrained input data into another set of outputs, by allowing this network to self-classify one type of data against another, so that the output goes into another network designed, for example, to synthesize speech. We need to think of input and output as a hierarchy, like a pyramid.

We also need to take the output of a neural-network-based speech synthesizer and feed it into a sound-recognizer network as primary input, together with ordinary sound input, so that the larger middle network can hear itself or see that what it sees is the same as what it already recognized. I ran someone's neural-network-based chatbot and got the impression it was not just thinking, but had some reasoning skills too. That made me think the network was able to separate input data from previous input data, and that would mean a neural network could just as well separate different types of input data as well as similar ones.

It's the structure of the wiring that is the secret of an artificial brain model, not how many nodes and layers you have. If we want a robot structure, we need to feed inputs of different types into separate networks that are then fed into a larger network, which is then fed into individual smaller networks again to create output. The reason we need smaller networks feeding into a larger one is that the larger one would act as the brain circuit of the self, while the smaller ones behave like lesser but more specific brains. I assume that the sound in your ear goes into your brain lobes first and is then wired from network to network, higher and higher up the hierarchy, until it reaches the top of the pyramid and is then split into other specialized networks designed to do specific tasks. In the human body, some networks would drive muscles and touch sensing, and others vocal-cord speech synthesis.
@AnimilesYT 4 years ago
Could an autoencoder also be used to heavily compress video footage so that we can get low bitrates while still getting good image quality? Maybe it could get one or two normally compressed frames per second and use those images as a reference for what the other images are supposed to look like, but this is just pure speculation and I have no clue whether this could add any value to the network.
@thespam8385 4 years ago
{ "question": "Autoencoders output:", "choices": [ "A reconstruction of the input", "An encrypted form of the data", "An estimation of the input's label", "A feature map" ], "answer": "A reconstruction of the input", "creator": "Chris", "creationDate": "2019-12-12T04:06:11.601Z" }
@deeplizard 4 years ago
Thanks, Chris! Just added your question to deeplizard.com
@qusayhamad7243 3 years ago
thank you very much for this clear and helpful explanation.
@raajanand2 3 years ago
Could you post a link to this presentation?
@Christian-mn8dh 5 years ago
So a GAN is an autoencoder?
@photographymaniac2529 4 years ago
You nailed it, ma'am 👏👏👏
@rohitjagannath5331 6 years ago
Great videos so far, presented in a concise way. Can I know when exactly you will be coming up with a video series on autoencoders (unsupervised learning)?
@deeplizard 6 years ago
Thanks, rohit! I currently don't have an exact time frame for the coverage of autoencoders, but it is definitely on my list!
@rohitjagannath5331 6 years ago
deeplizard I guess you could touch on GANs - Generative Adversarial Networks - as well. I'm really looking forward to those videos. My research on the latter starts in a few days... But great work from your end. Appreciate it.
@MinhVu-fo6hd 5 years ago
I love your voice. You have such a beautiful voice. Cheers.
@MRGCProductions20996 4 years ago
you are insane
@Nandu369 6 years ago
will there be a video on autoencoders??
@deeplizard 6 years ago
Hey ch - We have autoencoders on our list of potential topics to cover in future videos!
@KshitizKamal 4 years ago
It's terrible.