You *already* learned a language (English, Hindi, Spanish, whatever). Now you read a new post that has maybe 2% new vocabulary in it, but 98% is just recombinations of the language you *already* learned. That's the analogy. The real trick is coming up with huge amounts of clean, properly labelled, organized and useful data, and then extracting *features* from it. That was (apparently) done by the Model1 training on ..., and even then the truly *hard* part is having proper data at hand: millions of images of it.
@ahmedmohammed2837 1 year ago
I would like to appreciate your presentation, but I need to include the actual class along with the predicted class.
@alish3096 3 years ago
How do I change the dataset?
@hayaquraan6367 2 years ago
Please help, I got this error: ValueError: could not broadcast input array from shape (28,7,7,512) into shape (32,7,7,512). How do I solve it?
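For context, this broadcast error typically appears when the final batch from the generator is smaller than batch_size but the code still assigns it into a full-size slice of the pre-allocated array. A minimal sketch of one common workaround, assuming a feature-extraction loop like the one in the video (the variable names here are illustrative, not the video's exact code):

```python
import numpy as np

def extract_features(generator, conv_base, sample_count):
    # Pre-allocate arrays for the VGG16 feature maps and one-hot labels (2 classes assumed).
    features = np.zeros((sample_count, 7, 7, 512), dtype=np.float32)
    labels = np.zeros((sample_count, 2), dtype=np.float32)
    i = 0
    for inputs_batch, labels_batch in generator:
        features_batch = conv_base.predict(inputs_batch)
        n = len(inputs_batch)            # actual batch size; the last batch may be smaller
        n = min(n, sample_count - i)     # never write past the pre-allocated arrays
        features[i:i + n] = features_batch[:n]
        labels[i:i + n] = labels_batch[:n]
        i += n
        if i >= sample_count:
            break
    return features, labels
```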
@masbro1901 2 years ago
Hi sir, when I try to extract features, I get this error: MemoryError: Unable to allocate 9.68 GiB for an array with shape (51809, 7, 7, 512) and data type float64. How do I solve this? Many thanks.
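For context, np.zeros allocates float64 by default, and 51809 × 7 × 7 × 512 values at 8 bytes each is roughly 9.7 GiB. A minimal sketch (assuming the same pre-allocation pattern as the video) of how switching to float32 halves the requirement:

```python
import numpy as np

shape = (51809, 7, 7, 512)

# Default dtype is float64 (8 bytes per value) -> ~9.68 GiB for this shape.
print(np.prod(shape) * 8 / 2**30)

# float32 is plenty of precision for extracted features and needs ~4.84 GiB.
features = np.zeros(shape, dtype=np.float32)
```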
@switches_slips_turnouts 3 years ago
Awesome! (data science master's student)
@NormalizedNerd 3 years ago
WOW
@ansharora3248 3 years ago
Great explanation!
@kayvengoh2565 4 years ago
Sorry, I am new to the machine learning (ML) and deep learning (DL) world. I have a question: is this transfer learning video considered a DL application? I learned that Keras Applications are DL models made available alongside pre-trained weights. I really need your help with this confusion. I am very sorry for the inconvenience.
@NormalizedNerd 4 years ago
First of all, the Keras library has a module named applications. Using this module, we can download pre-trained models. Transfer learning is the name of the technique where we use pre-trained deep learning models. I hope your confusion is cleared up now. If not, please comment below.
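For example, a minimal sketch of pulling a pre-trained model out of keras.applications for transfer learning (VGG16 here, to match the video; the exact import path can vary between Keras and TensorFlow versions):

```python
from tensorflow.keras.applications import VGG16

# Download the convolutional base pre-trained on ImageNet.
# include_top=False drops the original ImageNet classifier so the learned
# features can be reused for a new task.
conv_base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
conv_base.summary()
```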
@cadycanny6738 1 year ago
I have a question: how does deep learning differ from transfer learning?
@switches_slips_turnouts 3 years ago
How do I create an image classification model when the classes are highly imbalanced?
@NormalizedNerd 3 years ago
You'll need image augmentation (on the smaller classes).
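A minimal sketch of image augmentation with Keras' ImageDataGenerator (the transformation values and directory path are illustrative); generating extra augmented samples for the smaller classes helps balance the training data:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Random transformations create extra variations of the under-represented classes.
augmenter = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=30,
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.2,
    horizontal_flip=True,
)

# flow_from_directory yields augmented batches on the fly (illustrative path).
train_generator = augmenter.flow_from_directory(
    "data/train",
    target_size=(224, 224),
    batch_size=32,
    class_mode="categorical",
)
```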
@MinhNguyen-rc8ws 2 years ago
Thank you, very helpful!
@heshamkhaled5823 4 years ago
I am trying your code to extract features from a small image dataset that I have, but when I use the function (extract_features) and print the features, I find they have a lot of 0.0 values. Is this normal?
@NormalizedNerd 4 years ago
Exactly 0 values for a lot of features is a little suspicious. Start the training process and see how the accuracy looks. Make sure you are printing the features, not the labels.
@heshamkhaled5823 4 years ago
@@NormalizedNerd I tried to train it by adding a dense layer with a softmax function, but it still gives me a lot of zeros in my features when I remove the dense layer to extract the features again. I am new to deep learning, so I don't know what this means or what the problem could be. It is just the same as your code, and I made sure I am printing the features, so if you can help me with any advice I will be very grateful.
@heshamkhaled5823 4 years ago
To make it clear, I want to use the features I extract with VGG16 in my LSTM. When I use the ImageNet weights without further training I get a lot of zeros in the features, and when I train it I think the number of zeros increases.
@heshamkhaled5823 4 years ago
@@NormalizedNerd I am really sorry for being annoying with three comments. When I print train_features[0][0][0] I get a big list of numbers like this: 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.889548, 0., 0., 0.0689625, 0., 0., 0.10323834, 0., 0., 0.12979957, 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.13665244, 0., 0.44390571, 0., 0., 0.0437303, 0., 0., 0., 0., 0., 1.03148663, 0., 0., 0., 0., 0.13453105, 0. This is an example of what I mean in my earlier comment. Again, sorry for being annoying.
@NormalizedNerd 4 years ago
@@heshamkhaled5823 The default input shape of VGG is 224×224. If you are working with images of a smaller dimension, then change the default shape accordingly. Other than that, I would suggest you use the features you are getting now and see how your model performs on the validation set.
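A minimal sketch of setting a custom input shape (the 150×150 size is just an example):

```python
from tensorflow.keras.applications import VGG16

# With include_top=False, VGG16 accepts a custom input_shape
# (both spatial dimensions must be at least 32).
conv_base = VGG16(weights="imagenet", include_top=False, input_shape=(150, 150, 3))
print(conv_base.output_shape)  # (None, 4, 4, 512) for 150x150 inputs
```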
@heshamkhaled5823 4 years ago
I am really sorry for asking you again, but I need your help. I tried to print the labels from the extract_features function and it looked like this: [0., 0., 0., 0., 0., 0.], [1., 1., 1., 1., 1., 1.], [0., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 0., 0.], [1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1.], [0., 0., 0., 0., 0., 0.], [1., 1., 1., 1., 1., 1.]. I have 2 classes, cat and dog. As you can see, the 0 labels are not consecutive: there is a 0 label, then a 1 label, then a 0 label, in a random order; the cat labels are not all together with the dogs coming after them. So does this mean that the cat features are not consecutive in the features array, and the array holds the images in random order? And if that is true, what should I do to put all the cat features and labels first and then the dogs?
@NormalizedNerd 4 years ago
First of all, the dimensions of the labels matrix are wrong. It should be (sample_count, 2), since you are working with 2 classes. I guess you haven't changed this line: "labels = np.zeros(shape=(sample_count,6))". Just change it to "labels = np.zeros(shape=(sample_count,2))" and it should work like a charm. Don't worry about the order of classes, because the features match the correct labels (thanks to datagen.flow_from_directory).
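For reference, a sketch of how that allocation fits together (illustrative values and paths; with class_mode='categorical' the generator yields one-hot labels, so the second dimension of the labels array must equal the number of classes):

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

sample_count, batch_size = 2000, 32   # illustrative values

# One-hot labels for 2 classes (cat vs dog), hence the second dimension of 2.
labels = np.zeros(shape=(sample_count, 2))
features = np.zeros(shape=(sample_count, 7, 7, 512))

datagen = ImageDataGenerator(rescale=1.0 / 255)
# flow_from_directory pairs every image batch with its matching labels,
# so the (shuffled) order of cats and dogs does not matter.
generator = datagen.flow_from_directory(
    "data/train",
    target_size=(224, 224),
    batch_size=batch_size,
    class_mode="categorical",
)
```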
@muhammadzubairbaloch3224 4 years ago
Sir, I would like you to make more lectures on NLP. Thanks, I like your work.
@NormalizedNerd 4 years ago
Thank you. More NLP videos are coming. Stay tuned!
@muhammadzubairbaloch3224 4 years ago
@@NormalizedNerd I am really happy to hear that. Thanks, sir.
@furqanafridi5453 2 years ago
I am facing this error, kindly help:

Found 0 images belonging to 0 classes.
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Input In [102], in ()
     24         break
     25     return features, labels
---> 27 train_features, train_labels = extract_features(train_dir, 30)  # Agree with our small dataset size
     28 validation_features, validation_labels = extract_features(validation_dir, 2)

Input In [102], in extract_features(directory, sample_count)
     17 i = 0
     18 for inputs_batch, labels_batch in generator:
---> 19     features_batch = conBase.predict(inputs_batch)
     20     features[i * batch_size: (i + 1) * batch_size] = features_batch
     21     labels[i * batch_size: (i + 1) * batch_size] = labels_batch
@pechaaloler 4 years ago
Hey, I really like this video! I would like to build a model to identify textures in images; what kind of approach would you recommend?
@NormalizedNerd 4 years ago
Please give this a read: arxiv.org/ftp/arxiv/papers/1904/1904.06554.pdf
@pechaaloler 4 years ago
@@NormalizedNerd Thanks for the reply. I did read that paper; do you think extracting features like you showed in the video would also work? I was wondering why they didn't even try an approach with CNNs.
@faheemanjum8225 3 years ago
Sir, I need help evaluating the classification report and confusion matrix.
@abhijeetanand3443 1 year ago
Can you name a few image features that are extracted by the convolutional base? Someone asked me which features my model is working with.
@galk32 4 years ago
Great tutorial!
@NormalizedNerd 4 years ago
Glad you think so!
@tjvlogs9 3 years ago
Excellent
@himanshukharwar3848 4 years ago
What exactly are these feature maps? Like, from the first layer, what outputs do we get?
@NormalizedNerd 4 years ago
A feature map is what we get after performing the convolution operation. In this case, after the 1st conv layer we get 64 feature maps, each with dimension (224×224).
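You can check this directly by inspecting the layer; a minimal sketch (the layer name follows the standard Keras VGG16 naming):

```python
from tensorflow.keras.applications import VGG16

conv_base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# The first conv layer outputs 64 feature maps of size 224x224
# ('same' padding keeps the spatial dimensions unchanged).
first_conv = conv_base.get_layer("block1_conv1")
print(first_conv.output_shape)  # (None, 224, 224, 64)
```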
@arshkatyal2807 4 years ago
Which architecture do you consider best for feature extraction on sparse data (i.e., images of line graphs)?
@NormalizedNerd 4 years ago
Can you explain what you are referring to as sparse data here?
@arshkatyal2807 4 years ago
@@NormalizedNerd I have made the line graphs with matplotlib and saved them as images into a folder. Now I am applying transfer learning to those images. As it is a line graph, most of the pixels in the image are white and very few pixels actually hold data. Which architecture would you recommend for such data?
@NormalizedNerd 4 years ago
@@arshkatyal2807 It's really hard to learn from such a sparse dataset, but here are some things you might try: 1) increase the width of the lines while making the plots (this method is most likely to work, because we can train a model to classify hand-drawn doodles); 2) build your own CNN instead of using transfer learning.
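A tiny sketch of suggestion 1, thickening the plotted lines before saving the image (the data and linewidth are arbitrary):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 200)
y = np.sin(x)

# Thicker, darker lines leave more informative (non-white) pixels for the CNN.
fig, ax = plt.subplots(figsize=(2.24, 2.24), dpi=100)   # roughly a 224x224 pixel image
ax.plot(x, y, color="black", linewidth=6)
ax.axis("off")
fig.savefig("line_graph.png")
plt.close(fig)
```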
@arshkatyal2807 4 years ago
@@NormalizedNerd Thanks for the advice
@arshkatyal2807 4 years ago
What if I use an architecture with less depth, like VGG or MobileNet? Also, I have made the lines black and quite thick.
@abhishekkumarpandey1862 4 years ago
Subscribed!
@abusufiun2343 3 years ago
Can you please give me the code?
@NormalizedNerd 3 years ago
You can find the notebook in the video description.