You are definitely the best hands-on tutor online, Sreeni.
@DigitalSreeni3 жыл бұрын
Thank you Gautam. I am glad you find my videos useful.
@Dustyinc3 жыл бұрын
Your lectures are amazing and so easy to follow. Thank you so much for all your work!
@windiasugiarto60414 жыл бұрын
Thank you very much for the tutorial. I learnt a lot from your videos. I hope you would do tutorials on semantic segmentation using HRNet model one day. God bless you, Sreeni...
@DigitalSreeni4 жыл бұрын
I know of HRNet and looked into it a few weeks ago, but most of the code I explored was written in PyTorch, which is not yet a focus for my channel. I am hoping someone puts together Keras-based code so I can cover it in a video. Putting it together from scratch may be time consuming, and I am not convinced that time would be worthwhile.
@pallavi_44883 жыл бұрын
I am also a biomedical engineer; your tutorials are the best.
@DigitalSreeni3 жыл бұрын
Thanks
@deepalisharma13273 жыл бұрын
Hi Sreeni, I have recently discovered your channel and found it extremely useful. It would be really helpful if you could create a video on how to create mask images for datasets with more than two classes (non-binary).
@bijulijin8123 жыл бұрын
How do we get the mask image? Do we need to create it ourselves, or should it be provided by the dataset creator?
@carpelev2 жыл бұрын
Hi Sreeni, thanks a lot for the video! It is very clear and explains the thought process very well. I was trying to re-implement it and have two questions for you: 1) In your video at 20:16 you have a negative loss value; why is that? I have a similar problem (regardless of whether I'm using jaccard or bce etc.). Any suggestions on how to resolve this issue? 2) Could you please provide some detail on why you do not freeze the encoder weights? If I understand correctly, we would like to initialize the pretrained encoder and only train the decoder, but sm does not freeze the weights by default and you did not do it either. I tried both, but I think because of question (1) I still don't get proper results. Thanks a lot!
@chaosdesigner1234 жыл бұрын
I'd be very happy if you can share your code for augmentation
@marcusbranch21003 жыл бұрын
It would be really awesome. Do you already know how to do this data augmentation?
@KushalBansal-v5d5 ай бұрын
Can I use this segmentation technique for crack and damage segmentation on walls or concrete?
@ruthikasiddineni38732 жыл бұрын
Sir, I'm getting this error while fitting the model: ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type numpy.ndarray). Please provide an answer.
Thanks a lot for the really good content, I am learning a lot from your videos daily. I have one question regarding image size. I have high-res microscopic images (2048 x 2048) and I want to do cell segmentation. Do I need to crop these images into smaller patches to train this model? If yes, do I need to do the same patching operation during inference as well, or can I use the high-res 2048 x 2048 images and start training directly? If I can train the model with high-res images, how does the model deal with the change of dimension (is the original model architecture not suitable for high-res input images, or am I misunderstanding something)?
@ownhaus4 жыл бұрын
Thanks for the tutorial. 3D UNet would be very interesting for an upcoming video, since I work with 3D localization microscopy data
@danishMalik_4 жыл бұрын
Please add some videos regarding instance segmentation and how to make its datasets.
@rachelbj38402 жыл бұрын
Thanks Digital Sreeni !!!
@piyushkumarprajapati99724 жыл бұрын
I have a dataset with bounding boxes of the cells (each box encloses a cell). Can you please suggest how to proceed with it?
@Thetejano19874 жыл бұрын
Hey, quick question: do you know the difference between using JaccardLoss vs bce_jaccard_loss? I'm using segmentation_models.pytorch but it doesn't have a bce_jaccard_loss.
@johnnysmith68762 жыл бұрын
Beware negative losses! Suffered from that initially.
@DigitalSreeni2 жыл бұрын
What is wrong with negative losses? Maybe I am missing something here. Loss is just a scalar value that can be positive or negative, and it gets minimized during the training process. Of course, it becomes an issue if you only consider the magnitude of the loss.
@johnnysmith68762 жыл бұрын
@@DigitalSreeni Makes sense Prof. Had assumed losses always have to be positive. Thanks for the clarification and greater thanks for sharing these videos. You’re doing an amazing job! Thank you.
@alirezasoltani30494 жыл бұрын
In many articles on segmentation in the field of remote sensing, it is mentioned that the input to the networks is patches, for example 24 by 24 or 50 by 50, etc. However, I do not understand how a network that is trained on 50 by 50 patches can segment high-resolution satellite images, for example 8,000 by 8,000 pixels. Also, does a patch contain only one feature, such as a building or a road?
@jharris304 жыл бұрын
Another great video, thanks! QUESTION: Do you prefer this method or pre-trained CNN with VGG16 & RF as in video 159b? Thanks!
@DigitalSreeni4 жыл бұрын
I always prefer Random Forest over deep neural networks. I only use neural networks if traditional approaches fail to solve the problem. Based on my experience, VGG16 + RF is very robust and works for most use cases. It only fails in situations where you have a very busy background and are trying to segment objects that are hard to distinguish against that background.
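For anyone curious how the VGG16 + RF approach can look in code, here is a rough sketch (layer choice, image shapes, and the images/masks arrays are assumptions, not the exact script from video 159b): features from an early VGG16 block serve as per-pixel descriptors for a Random Forest.

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Model
from sklearn.ensemble import RandomForestClassifier

# Assumed inputs: images of shape (N, 256, 256, 3) and integer masks of shape (N, 256, 256)
vgg = VGG16(weights="imagenet", include_top=False, input_shape=(256, 256, 3))

# block1_conv2 keeps the full input resolution, so every pixel gets a 64-dim feature vector
feature_model = Model(vgg.input, vgg.get_layer("block1_conv2").output)

features = feature_model.predict(images)            # (N, 256, 256, 64)
X = features.reshape(-1, features.shape[-1])        # one row of features per pixel
y = masks.reshape(-1)                               # one label per pixel

rf = RandomForestClassifier(n_estimators=50, n_jobs=-1)
rf.fit(X, y)

# Segment a new image the same way: extract features, predict per pixel, reshape back
new_features = feature_model.predict(new_image[np.newaxis, ...])
segmented = rf.predict(new_features.reshape(-1, 64)).reshape(256, 256)
```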
@bikkikumarsha3 жыл бұрын
Can we convert the final model to TfLite format?
@bijoyalala56853 жыл бұрын
Hello Sreeni Sir. Thank you for your wonderful tutorial. I have some questions regarding this issue. I have my own customized image dataset of needle tips, and I use this segmentation model for semantic segmentation. For 2000 'needle tip' images I set batch size = 8 and epochs = 10, and the predicted images came out okay. Then I increased the dataset to 4000 images; I kept the same batch size and epochs, but the predictions no longer look okay. Can you please tell me whether there is any relation between increasing the number of data samples and the batch size? What should the optimal batch size be for a given number of images?
@montsegur71733 жыл бұрын
Hi, thanks for your vids, super helpful! I am playing with the segmentation-models library and the dataset you used in your 73-78 videos. At the beginning I was using only Unet with some heavy backbones, like resnets or vgg, and the results were fine. Now I have switched to playing with PSPNet (with the same dataset) and no matter which backbone I choose, I always get around 0.1614 accuracy, and I just wonder: is it because PSPNet is that bad for bio-datasets, or am I doing something wrong? I am aware that results should actually be worse, but such a low and repeating accuracy is kind of worrying for me. Should it be this way?
@laohu15143 жыл бұрын
Same for me: I'm getting worse results with PSPNet and FPN on an industrial inspection dataset, while Unet and Linknet are fine; not sure what is going on. Another thing I find strange is that the IOU score for Unet and Linknet sometimes exceeds 100.
@successlucky7619 Жыл бұрын
Hi Dr Sreeni, I must confess that following your teachings has made me see that I can continue in this field! Thank you for the effort, time, and resources you put into making these videos. Two years later this is still evergreen. While following your videos I ran into an issue that I've tried to resolve without success. It is with the segmentation library: the error I get when I try to import it is AttributeError: module 'keras.utils' has no attribute 'generic_utils'. I've searched Stack Overflow and tried the suggested solution of downgrading the Keras version, but it still isn't working. Please kindly assist in resolving this issue, as I'd love to explore this library. Thank you so much.
@DigitalSreeni Жыл бұрын
Troubleshooting is an important skill that you need to develop. In your case, the error says that keras.utils does not have an attribute 'generic_utils'. I quickly checked it on Colab, got the same error, and the traceback pointed to the specific file contributing to the error. This is the path to the file on Colab: /usr/local/lib/python3.10/dist-packages/efficientnet/__init__.py. You need to identify this file in your specific location if you are testing this on your local system. In this file, search for generic_utils. Here is the line giving us the error (line 71): keras.utils.generic_utils.get_custom_objects().update(custom_objects). In the newer versions of tensorflow.keras, get_custom_objects() is available directly under keras.utils. This means you just need to delete the 'generic_utils' part from the above line, making it simply: keras.utils.get_custom_objects().update(custom_objects). Restart the Colab runtime and run the cell again (or just run the code again if you are working locally).
How did I figure this out?
- I installed the segmentation models library and imported it to see the error.
- I paid attention to the error and realized that a specific line in a specific file was the cause.
- I then experimented with the line giving the error. First I ran "from tensorflow.keras import utils", which worked fine. Then I tried utils.generic_utils, which gave me the same error. But I don't really care about that specific module; I actually care about the get_custom_objects method. So I tried accessing it directly from utils, and it worked fine: utils.get_custom_objects.
- So I edited the __init__.py file by removing the generic_utils part, and everything worked fine.
What would I have done if utils.get_custom_objects had not worked? Searched the Keras documentation for "get_custom_objects" to find where exactly it got moved to, and updated the code accordingly. This entire thing took just about 5 to 10 minutes. Please consider such error messages to be opportunities to dig deeper and learn more about troubleshooting. Good luck, and I hope this response helped you.
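For quick reference, the one-line edit described in that reply, in the file the traceback points to (path and line number as reported on Colab):

```python
# /usr/local/lib/python3.10/dist-packages/efficientnet/__init__.py, around line 71

# Before (fails on newer tf.keras, which no longer exposes generic_utils):
keras.utils.generic_utils.get_custom_objects().update(custom_objects)

# After (get_custom_objects now lives directly under keras.utils):
keras.utils.get_custom_objects().update(custom_objects)
```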
@Fan-vk9gx8 ай бұрын
@DigitalSreeni I had the exact same issue, and it bugged me for a while. I was just surfing online aimlessly and found your answer. Your response helped me a lot, not only for this case, but also as a path I can follow for troubleshooting. Thank you soooo much!
@umairsabir66864 жыл бұрын
Thanks for this wonderful video Mr. Sreeni.
@DigitalSreeni4 жыл бұрын
My pleasure 😊
@umairsabir66864 жыл бұрын
@@DigitalSreeni I want to clear up one more doubt. In one of your previous tutorials you presented autoencoders using transfer learning, where you took the encoder architecture, built the decoder architecture, and trained it. Can I say that we are doing a similar thing in semantic segmentation here, since both the encoder and decoder architectures are available through the backbone models and we do not need to explicitly define our decoder? We can just retrain the decoder part or the whole architecture?
@Hmmm01357 ай бұрын
Hey everyone, I trained my model and it shows good results when predicting segmentation on an image. But during training it gives a negative loss and an IoU of more than 1. Can anyone please tell me what I am doing wrong?
@ztabatabaei26122 ай бұрын
Hi.. thanks for all of your perfect videos. May I have the code regarding the augmentation in this video? Thanks
@ruqayyahessa103 Жыл бұрын
Thanks so much for your good explanation. Could you please explain how I can feed the segmented output (the result segmented by U-Net with ResNet34 as encoder) to a pretrained EfficientNet classifier to make a binary classification of whether the input has disease or not?
@PriyankaJain-dg8rm Жыл бұрын
Can you please let me know where to get the exact dataset? The link provided, when visited, only lets me download a .tif file. Is there anything I am getting wrong?
@konkoboaxel88873 жыл бұрын
Thank you for the tutorial, it's very well explained. I tested the training on Google Colab, but when importing the model on my PC an error occurs at prediction_image = prediction.reshape(mask.shape): "mask is not defined". Any help from you?
@ahpacific3 жыл бұрын
Hi @DigitalSreeni, thank you for the video. Does the preprocessing that you use take care of one-hot encoding your masks, or do you do that yourself? If you do it yourself, can you cover how? Thank you.
@DigitalSreeni3 жыл бұрын
I covered one-hot encoding (categorical) in many of my videos. Please check out my videos on multiclass segmentation.
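The backbone preprocessing function only normalizes the input images, not the masks, so the one-hot step is done separately. A minimal sketch, assuming integer class ids 0..n_classes-1 in the mask array:

```python
from tensorflow.keras.utils import to_categorical

n_classes = 4                                                # assumed number of classes
# masks: integer-labelled array of shape (N, H, W)
masks_onehot = to_categorical(masks, num_classes=n_classes)  # -> (N, H, W, n_classes)
```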
@marcusbranch21003 жыл бұрын
Awesome video, good job and thanks for sharing this with us, Sreeni. Can you tell me how I can do data augmentation on the fly in this case, without needing to create two new folders/paths of images and masks
@marcusbranch21003 жыл бұрын
And feed it directly to the network
@djdekabaruah34574 жыл бұрын
Very useful tutorial. Could you please add the code for augmentation?
@xtraeone59472 жыл бұрын
Do I need to change anything else if I'm using VGG16 as the backbone architecture?
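For illustration only (a sketch, not the video's exact code): with the segmentation_models library, swapping the backbone usually just means changing the backbone name and using its matching preprocessing function; x_train and x_val are assumed to be prepared as elsewhere in the tutorial.

```python
import segmentation_models as sm
sm.set_framework('tf.keras')

BACKBONE = 'vgg16'
preprocess_input = sm.get_preprocessing(BACKBONE)   # preprocessing must match the backbone

x_train = preprocess_input(x_train)
x_val = preprocess_input(x_val)

model = sm.Unet(BACKBONE, encoder_weights='imagenet')
model.compile('Adam', loss=sm.losses.bce_jaccard_loss, metrics=[sm.metrics.iou_score])
```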
@talha_anwar3 жыл бұрын
I think the data split should be done before augmentation to avoid data leakage.
@samardarooei79403 жыл бұрын
Hi, thanks a lot for the nice video, but what is the difference between the backbone and the weights?
@nor4eto999 Жыл бұрын
Hello, my original images are DICOMs. How can I read them so that the script still works properly?
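One possible way to bring DICOM slices into a pipeline like this (a hedged sketch: pydicom is assumed to be installed, the file name and target size are placeholders, and intensity rescaling details vary by modality):

```python
import numpy as np
import cv2
import pydicom

ds = pydicom.dcmread("slice_0001.dcm")        # placeholder file name
img = ds.pixel_array.astype(np.float32)       # raw pixels as a NumPy array

# Scale to 0-255 and replicate to 3 channels so the slice matches the RGB input
# expected by ImageNet-pretrained backbones
img = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
img = cv2.resize(img, (256, 256))
img = np.stack([img] * 3, axis=-1)
```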
@sayedhamdi44722 жыл бұрын
Thanks sir for this wonderful explanation, but the dataset link is not working for me. I want to get to the images and masks; please help me.
@vivek-159-icd Жыл бұрын
A very informative video, Thank you
@talha_anwar3 жыл бұрын
During random augmentation, suppose the image gets rotated by 30 degrees and the mask by 40, because the transforms are random. How do you handle this?
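On the rotation question: augmentation libraries normally sample one set of random parameters and apply it to both targets, so the image and its mask cannot receive different rotations. A sketch using Albumentations (an assumption; the augmentation script used for the video may differ):

```python
import albumentations as A

transform = A.Compose([
    A.Rotate(limit=30, p=1.0),
    A.HorizontalFlip(p=0.5),
])

# One random draw of parameters is applied to the image and the mask together
augmented = transform(image=image, mask=mask)
aug_image, aug_mask = augmented["image"], augmented["mask"]
```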
@evgeniynekrasov11462 жыл бұрын
Thanks for the nice video, but can SIZE_X and SIZE_Y of the input picture and mask be different? For example 240x216? Thanks!
@DigitalSreeni2 жыл бұрын
Your inputs can be of any size, as long as the image and mask sizes match.
@tapansharma4603 жыл бұрын
Sir, please make us more familiar with 3D image processing, as you did for the BraTS dataset. I am working in the neuroimaging domain on brain aneurysm detection and classification.
@imageprocessing96452 жыл бұрын
Thanks a lot for this great tutorial. How can we evaluate the test dataset with this code, for example the test accuracy?
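As a general note (not specific to this video's code), a compiled Keras model can be scored on a held-out set with model.evaluate, assuming x_test and y_test were prepared exactly like the training data:

```python
# Returns the compiled loss and metrics (here assumed to be the IoU score) on the test set
loss, iou = model.evaluate(x_test, y_test, batch_size=8, verbose=1)
print(f"Test loss: {loss:.4f}, test IoU: {iou:.4f}")
```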
@bhargavireddy6043 жыл бұрын
Hi Sreeni Sir, will you please share the google colab link for this tutorial.
@diegostaubfelipe43103 жыл бұрын
Congratulations on your channel, it is really useful and very well organized. Is image preprocessing (preprocess_input(x_train)) only used at training time, and not necessary at inference?
@DigitalSreeni3 жыл бұрын
Preprocessing needs to be done to both training and testing data exactly the same way.
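In code, that usually means applying the same backbone-specific function to every split (a short sketch; the backbone name is an assumption):

```python
import segmentation_models as sm

preprocess_input = sm.get_preprocessing('resnet34')

x_train = preprocess_input(x_train)
x_test = preprocess_input(x_test)   # identical preprocessing at inference time
```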
@rishikhajuria30293 жыл бұрын
For semantic segmentation I am getting the error: expected sigmoid to have 4 dimensions, but got array with shape (800, 1). How do I reshape it?
@visheshbreja33413 жыл бұрын
Hello sir, thank you for such an interesting tutorial. I am stuck at a point where I have 50 classes to predict. I don't know how to map my 50 classes for the model to learn, and the corresponding color map for each class. Any kind of help will be appreciated. Thank you in advance.
@NS-te8jx2 жыл бұрын
Could you share all the slides for your various videos? That would help me to revise. I see only code on the GitHub.
@djdekabaruah34574 жыл бұрын
Hello Sir, what was the accuracy of the model built for this tutorial? I tried the same method with my images (around 55) and got an accuracy of around 62-63% (tried resnet, vgg, efficient net). Segmentation output was not very good. Any suggestion/methods to improve the results?
@DigitalSreeni4 жыл бұрын
The tutorial is about using the segmentation models library for semantic segmentation. The library contains many models and it is hard to say which one works best for your images. The whole point of the video is that if you plan on writing code for one of the standard models, it may not be worth the time rewriting it, since you can use the library. This does not mean the standard models are going to give you the best accuracy; that depends on many factors, including the amount and quality of your labels. Also, accuracy is not a good metric for semantic segmentation; I hope you will look into IoU and other metrics. In summary, please use a subset of your data to test various models from this library. Then pick the best one and see if it performs well on your entire dataset. If the accuracy (or other metric) does not meet your goal, you will have to put together your own network, for example replacing the encoder with EfficientNet. For that you need to have the required knowledge.
@djdekabaruah34574 жыл бұрын
@@DigitalSreeni Thank you Sir, I understand your comments. I was thinking of one more approach: will the model performance improve if we increase the number of epochs (though it is very time consuming)?
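Since IoU comes up repeatedly in this thread, here is a small self-contained way to check it on predictions outside of training (a sketch, not the library's own implementation):

```python
import numpy as np

def binary_iou(y_true, y_pred, threshold=0.5, eps=1e-7):
    """IoU between a ground-truth binary mask and a predicted probability map."""
    y_pred = y_pred > threshold
    y_true = y_true > 0.5
    intersection = np.logical_and(y_true, y_pred).sum()
    union = np.logical_or(y_true, y_pred).sum()
    return float((intersection + eps) / (union + eps))
```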
@jzjMacwolfz3 жыл бұрын
Thank you for the video! I am really grateful, I learned quite a lot! Sorry to ask a rookie question: are the contents of the "Label" folder just the images inverted, or how may I create those?
@shreyasbharadwaj7243 жыл бұрын
The contents of the "label" folder are segmented versions of the images in the "images" folder. That is, every pixel of each image in the "images" folder is assigned to a particular label; if the images have biological cells, the labels might be 1 and 0 for "this pixel is part of a cell" and "this pixel is not part of a cell." This assignment may be done manually or the use of some other algorithm.
@rohinigaikar41173 жыл бұрын
Hi, thank you for this video. Will you prepare a video about multi-label segmentation of medical images? I want to know how to create the training data, as it is different from binary segmentation. Thanks
@DigitalSreeni3 жыл бұрын
Yes, please stay tuned. I am planning on U-net based videos for binary, multiclass and even 3D images.
@gulshanmohiddinshaik72243 жыл бұрын
Thank you Sir, you explained it very well
@samarafroz98524 жыл бұрын
Wow, this is the best tutorial, sir
@AlgoTribes4 жыл бұрын
Hey Sreeni, if you don't mind, coming up with semantic analysis for text data would be of great help. BTW, your content is, more often than not, just awesome!!
@AbdullahJirjees2 жыл бұрын
Thank you for this video, but there is an important part I wish you had shown in this video: how did you create the labeled images?
@dimane76312 жыл бұрын
This is the response I got from Mr. DigitalSreeni: "If you do not have labeled data then you need to label it yourself. I covered a few videos on this topic and you may find this to be useful: kzbin.info/www/bejne/gaDTnqakeJ16jas"
@upasana26573 жыл бұрын
Thank you, Mr Sreeni
@منةالرحمن4 жыл бұрын
Please, what is the number or the name of the notebook in the GitHub directory you published? I can't find the exact code.
@DigitalSreeni4 жыл бұрын
I forgot to upload it; it is there now. The number should be 177. I uploaded multiple files with the same number, all supporting content for this tutorial.
@منةالرحمن4 жыл бұрын
@@DigitalSreeni thank you soo much you're the best
@rezadarooei2483 жыл бұрын
Thanks a lot for your nice video, that was awesome, but I have a question: why is your loss negative? I think you need to normalize your images and masks, is that correct?
@DigitalSreeni3 жыл бұрын
Loss can be negative; it depends on the loss function. For example, suppose you want to use accuracy as a loss function: you want to maximize accuracy, but a loss function gets minimized. So you multiply your loss by -1 to make it negative. Now the loss gets minimized (as it is a negative number and -90 is smaller than -80) while the accuracy gets maximized (80 to 90 and so on).
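A tiny illustration of that sign flip (a sketch, not the exact loss used in the video): the Dice coefficient is something to maximize, so its negative is minimized as a loss and naturally sits between -1 and 0.

```python
import tensorflow.keras.backend as K

def dice_coef(y_true, y_pred, smooth=1e-6):
    intersection = K.sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (K.sum(y_true) + K.sum(y_pred) + smooth)

def neg_dice_loss(y_true, y_pred):
    # Minimizing the negative coefficient maximizes the overlap; values lie in [-1, 0]
    return -dice_coef(y_true, y_pred)
```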
@vikashkumar-cr7ee2 жыл бұрын
Dear Sreeni, would it be possible to load the Electron Microscopy Dataset directly into Google Colab without downloading it to a local drive or Google Drive?
@DigitalSreeni2 жыл бұрын
Here is a tutorial on how to load Kaggle data directly into Colab. A similar approach can be followed for other data sets: kzbin.info/www/bejne/r3a7nHiLprBoaLM
@vikashkumar-cr7ee2 жыл бұрын
@@DigitalSreeni I have gone through your Kaggle dataset download tutorial and followed a similar approach to download the mitochondria dataset, but it didn't work. I request you to write code here for the same dataset that can be downloaded directly in Google Colab. Many thanks in advance.
@indirakar50952 жыл бұрын
I have some CT image data but I don't know how to do the masking. Any idea ?
@DigitalSreeni2 жыл бұрын
You can try annotation tools like Label Studio or Labelme.
@indirakar50952 жыл бұрын
@@DigitalSreeni thank you so much. I will try with this
@danicalifornia944033 жыл бұрын
First of all, thank you so much for the great lecture series. So you augmented 2000 images but only used 1000 images for training?
@danicalifornia944033 жыл бұрын
I used your code in Colab and ran into this issue: during the sanity check, I found that the images and masks are not matching. Can you give me some advice on this problem?
@DigitalSreeni3 жыл бұрын
Yes, I can. I will record a tips-and-tricks video on the topic soon. In summary, load the file names first for both images and masks, sort them, and then load the files. This ensures that the names are sorted the same way for images and masks.
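A minimal version of that sorting pattern (directory names and the name-matching check are assumptions):

```python
import os
import cv2

image_dir, mask_dir = "images/", "masks/"
image_names = sorted(os.listdir(image_dir))
mask_names = sorted(os.listdir(mask_dir))

images = [cv2.imread(os.path.join(image_dir, f), 1) for f in image_names]
masks = [cv2.imread(os.path.join(mask_dir, f), 0) for f in mask_names]

# Sanity check: corresponding file names should match (ignoring extensions)
for img_name, msk_name in zip(image_names, mask_names):
    assert os.path.splitext(img_name)[0] == os.path.splitext(msk_name)[0], (img_name, msk_name)
```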
@asmabenbrahem63 жыл бұрын
Hello sir, thank you very much for this tutorial, it is very helpful. I tried the segmentation models repository and trained U-Net on Colab with pretrained ImageNet weights, using the Jaccard loss as the loss function, but training is very slow (one epoch took 15 min). The training loss is going down (0.6 on epoch 1 -> 0.17 on epoch 4) but the val_loss is not (it is stuck at 0.8); also the IoU score is 0.82 for the training set and 0.2 for the validation set. Can you help me with this?
@DigitalSreeni3 жыл бұрын
Did you enable GPU on colab?
@asmabenbrahem63 жыл бұрын
@@DigitalSreeni That was the problem, I forgot to enable the GPU, that was stupid. Anyway, thank you sir for these nice tutorials, they are very helpful. You are amazing, keep up the good work. May god bless you.
@biswassarkarinusa32303 жыл бұрын
Hello sir, I was trying to segment the exudates in retinal fundus images for detection of diabetic retinopathy, but I ran into a bug in the model.fit section. I got the following error: ValueError: Error when checking target: expected sigmoid to have shape (None, None, 1) but got array with shape (2848, 4288, 3). I tried reshaping/resizing the training images but could not fix it. Can you give some idea regarding that? Thank you.
@samk45843 жыл бұрын
Did you find a solution please?
@biswassarkarinusa32303 жыл бұрын
@@samk4584 No brother, I am still stuck with that issue -_-
@samk45843 жыл бұрын
@@biswassarkarinusa3230 I am working on the same subject; trying to segment those lesions is hard. Good luck!
@biswassarkarinusa32303 жыл бұрын
@@samk4584 Thank you. Are you facing the same issues? If you have any other solutions please let me know. Thank you.
@rishikhajuria30293 жыл бұрын
@@biswassarkarinusa3230 I also have a similar problem with the sigmoid shape. Did you find a solution?
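Not a confirmed diagnosis of the error discussed above, but as a general note: a sigmoid output expects targets shaped (batch, H, W, 1), so masks are usually read as a single channel, resized, and given explicit channel and batch dimensions, roughly like this (SIZE_X and SIZE_Y as used in the tutorial; the file name is a placeholder):

```python
import cv2
import numpy as np

mask = cv2.imread("mask.png", 0)             # single channel instead of RGB
mask = cv2.resize(mask, (SIZE_X, SIZE_Y))    # resize to the model's input size
mask = (mask > 0).astype(np.float32)         # binary 0/1
mask = np.expand_dims(mask, axis=-1)         # (H, W, 1)
masks = np.expand_dims(mask, axis=0)         # (1, H, W, 1) with the batch dimension
```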
@ananyabhattacharjee42173 жыл бұрын
This piece didn't work for me. I am not able to fit the model at the end
@dentonlister3 жыл бұрын
I've followed your code exactly but am getting the error: AttributeError: 'Unet' object has no attribute 'compile'. The docs don't seem to mention compiling at all, or the search bar on the site isn't working for me. Could you help me?
@DigitalSreeni3 жыл бұрын
Looks like you may have a file in the same directory called unet.py. So when you import unet, it may be importing your file rather than the one from the library. Just rename your local unet.py to something else. This is what I can think of with the limited information about your system.
@dentonlister3 жыл бұрын
@@DigitalSreeni I don't have any files called unet.py. Is there anything else you can think of?
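In that situation, a quick generic check is to confirm which module Python actually imported (a diagnostic sketch, not specific to this setup):

```python
import segmentation_models as sm

model = sm.Unet('resnet34', encoder_weights='imagenet')
print(type(model))    # should be a tf.keras Model, which always has .compile
print(sm.__file__)    # should point into site-packages, not into your project folder
```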
@YeasirArefinTusher-f9n5 ай бұрын
I have trained a model using this approach and it's working fine. But the problem is that the model doesn't seem to be TFLite compatible. When converting the model to TFLite, the input and output shapes get changed, due to the model architecture I think.
@dattijomakama97034 жыл бұрын
Thanks a lot for this awesome tutorial. I like your channel.
@letslovestraydogs46482 жыл бұрын
Hi, thanks for the video, but when I run the code the IoU goes above one and the loss goes negative. I even normalized with (x_train / 255.0) but the code still doesn't work. I'm looking forward to your help, thanks
@lijinp34302 жыл бұрын
I am having the same problem. Have you resolved it?
@letslovestraydogs46482 жыл бұрын
@@lijinp3430 Not yet, segmentation is just so hard
@giulsdrll3 жыл бұрын
The iou_score should be a number between 0 and 1, according to its definition and to the segmentation models library documentation. Unfortunately, I obtain iou_score values bigger than 1, such as 16 or 3, and no error occurs in the code. Can anyone help me understand what I am doing wrong, please? Is IoU expressed as a percentage? It doesn't seem like that in the documentation... For completeness, in the model.compile function I use bce_jaccard_loss as the loss function and iou_score as a metric. (I use Colab) @DigitalSreeni Thank you for the useful content and your plain explanations
@lijinp34302 жыл бұрын
I am having the same issue. Did you resolve it?
@giulsdrll2 жыл бұрын
@@lijinp3430 Yes. The problem was in the data normalization. You need to check that the values of your images are normalized (between 0 and 1) before putting them into the model; otherwise the metrics will give you problems. To be sure, check the data normalization immediately before model training.
@lijinp34302 жыл бұрын
@@giulsdrll I have normalized by dividing by 255, but the problem is that the score becomes 0.025 or some 0.0-something value. It never reaches 0.2 or above.
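For reference, the normalization check suggested above is just a couple of lines placed right before model.fit (a sketch; x_train is assumed to be the raw uint8 image array):

```python
import numpy as np

x_train = x_train.astype(np.float32) / 255.0
print(x_train.min(), x_train.max())   # expect roughly 0.0 and 1.0 right before model.fit
```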
@jinchuntew27383 жыл бұрын
Hi Sir, I am running the code on Google Colab and I faced an error in model.fit. The error message is "TypeError: Input 'y' of 'Mul' Op has type uint8 that does not match type float32 of argument 'x'." May I know how to resolve this issue on Google Colab?
@DigitalSreeni3 жыл бұрын
Just convert your data into float32 and see if that helps:
x = x.astype(np.float32)
y = y.astype(np.float32)
@jinchuntew27383 жыл бұрын
@@DigitalSreeni Thank you for your reply. I tried converting to float32. It was then able to run the first epoch before the same error appeared (originally the error occurred before the first epoch). I am still not able to fit the model due to the same error.
@jinchuntew27383 жыл бұрын
Also, at 22:45 of the video there seems to be a similar error, as there is a red underline below validation_data, which is similar to my problem.
@satyasismishra34892 жыл бұрын
Sir, good evening. Can I run this program in Anaconda / Jupyter? Please provide an answer.
@DigitalSreeni2 жыл бұрын
The IDE that you use does not matter.
@mechanicalloop12314 жыл бұрын
Can you please do a tutorial on Reinforcement Learning too?
@lalitsingh51504 жыл бұрын
Sir, I have breast thermograms for segmentation... how do I generate a mask?
@NehadHirmiz4 жыл бұрын
I would recommend using Label Studio: github.com/heartexlabs/label-studio. It is a fantastic tool for data annotation.
@wg2 Жыл бұрын
Literal goldmine, could have solved a lot of my problems 1 year ago 🤦♂.
@khaledbenaggoune85984 жыл бұрын
Thanks a lot. Could you please explain attention for Conv1D and Conv2D in your future videos?
@khondokermirazulmumenin82013 жыл бұрын
Thank you for your tutorial, it is well explained.
@DigitalSreeni3 жыл бұрын
Glad it was helpful!
@nobinmathew28612 жыл бұрын
Is this unsupervised or supervised learning ?
@unamattina60232 жыл бұрын
I cannot download the dataset; do you have an available link?
@DigitalSreeni2 жыл бұрын
www.epfl.ch/labs/cvlab/data/data-em/
@anishjain36634 жыл бұрын
Sir, really big thanks, but let's say I have a dataset in MS COCO format, so first I need to create masks. What should the mask array values be? I have 273 unique classes here. Please, sir, can you explain how to do multiclass image segmentation? I'm kind of confused.
@DigitalSreeni4 жыл бұрын
COCO format is for instance segmentation (objects). If you would like to use it for semantic segmentation, you will have to find a way to convert the COCO annotations into pixel-level labels. Also, I don't understand having 273 unique classes for semantic segmentation; I have a feeling you are looking for object detection and not semantic segmentation.
@anishjain36634 жыл бұрын
@@DigitalSreeni Sir, it's a food dataset with 273 unique categories of food items and about 20,000 images.
@rishikhajuria30293 жыл бұрын
I am also getting an error in the fit function, just as you were getting, sir, underlined in red. How should I proceed, sir?
@DigitalSreeni3 жыл бұрын
Looks like some syntax error, remove the last comma.
@منةالرحمن4 жыл бұрын
Hello, please, I tried the data augmentation code and got this error: IndexError: list index out of range, on the line (mask = masks[number]). The code doesn't generate more than 20 augmented images before it stops with this error!
@منةالرحمن4 жыл бұрын
i solved it thanks .... ^_^
@DigitalSreeni4 жыл бұрын
The pleasure you get in solving your own issues is incredible. I learn a lot during troubleshooting.
@talha_anwar3 жыл бұрын
If we have two or three classes, for example mitochondria and nucleus in one image, do we still treat it as a 2D image or a 3D image?
@geponen4 жыл бұрын
Are those masks 3 channel or 1 channel?
@kibruyesfamesele30872 жыл бұрын
I am happy with your tutorials and I want to apply this to plant disease detection with four classes (folders of diseases) and 6000 images. I got the error "got multiple values for argument 'batch_size'" on validation. Please help me with it.
@kaushalyasivayogaraj58623 жыл бұрын
Sir, your videos are really good and very helpful for learning. Can you please make videos on few-shot learning for semantic segmentation?
@suganyasambasivam83592 жыл бұрын
Thanks a lot sir, it was very helpful. Can we do segmentation without using ground truth? Please clarify my doubt, sir.
@fardinsaboori87703 жыл бұрын
Thanks a lot for this great tutorial, can you please share the dataset(pictures) with us?
@DigitalSreeni3 жыл бұрын
The link to dataset is given in the description of the video.
@fardinsaboori87703 жыл бұрын
@@DigitalSreeni thanks a lot
@fardinsaboori87703 жыл бұрын
@@DigitalSreeni Hello, I have been trying to sign up and download the dataset from the website, but the website has technical issues and I can't receive it. Can you please upload the dataset to your GitHub account so we can download it from there?
@gloryprecious11333 жыл бұрын
Nice explanation and very informative, sir. Kindly upload a video on 3D volumetric segmentation.
@DigitalSreeni3 жыл бұрын
Will try, thanks.
@dardar99133 жыл бұрын
Is anyone able to access the dataset?
@AA-qe9hm3 жыл бұрын
I get an AttributeError when I try to import segmentation_models.
@DigitalSreeni3 жыл бұрын
Please read their documentation; you need to have the minimum required versions of Keras and TensorFlow.
@iamkrty5222 жыл бұрын
I can find neither the code nor the dataset anywhere.
@DigitalSreeni2 жыл бұрын
The code is on my GitHub, link provided in the description. An alternate link to the dataset is: www.epfl.ch/labs/cvlab/data/data-em/
@arindamkashyap63083 жыл бұрын
Sir, can you make a video on image segmentation for dental data?
@hamidt-sarraf30693 жыл бұрын
Hi sir, I did the training, but for prediction I get the following error: cannot reshape array of size 1048576 into shape (1024,1024,3). SIZE_X and SIZE_Y = 1024. It performs the prediction, but when I apply (#View and Save segmented image) pred = prediction.reshape(mask.shape), I get that error. The mask shape and the test_image shape are both (1024, 1024, 3), arrays of uint8. When I apply test_img = np.expand_dims(test_img, axis=0), the test image becomes (1, 1024, 1024, 3), uint8, and after prediction I get (1, 1024, 1024, 1), float32. Thank you for your amazing tutorials.
@rohitgupta20043 жыл бұрын
Thank you sir for the tutorial, it's very well explained. Can you please add some videos on the EfficientNet architecture with a dataset?
@djdekabaruah34574 жыл бұрын
Hello sir, very good tutorial. Could you please share the code for data augmentation?
@DigitalSreeni4 жыл бұрын
It is on my github. Just look for 177. github.com/bnsreenu/python_for_microscopists
@djdekabaruah34574 жыл бұрын
@@DigitalSreeni thank you very much
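For readers who cannot reach the GitHub script, the common Keras pattern for paired image/mask augmentation is two generators sharing a seed; this is only an illustrative sketch and may differ from the actual 177 script.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

seed = 24
aug_args = dict(rotation_range=30, horizontal_flip=True, fill_mode="reflect")

img_gen = ImageDataGenerator(**aug_args)
msk_gen = ImageDataGenerator(**aug_args)

# Assumed inputs: images of shape (N, H, W, 3) and masks of shape (N, H, W, 1).
# The shared seed keeps every random transform identical for each image/mask pair.
img_flow = img_gen.flow(images, seed=seed, batch_size=8)
msk_flow = msk_gen.flow(masks, seed=seed, batch_size=8)

train_generator = zip(img_flow, msk_flow)   # yields (augmented_images, augmented_masks)
# model.fit(train_generator, steps_per_epoch=len(images) // 8, epochs=25)
```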
@ccr4igg Жыл бұрын
Thank you so much sir
@DigitalSreeni Жыл бұрын
Most welcome
@venkatesanr94554 жыл бұрын
Thanks a lot for your informative video. Actually, I am a beginner in image segmentation but I have some knowledge of image processing. I have started to work on prostate cancer detection but am stuck finding a medium or large dataset. Can anyone point to some sources/links for biomedical datasets suitable for ML approaches? That would be helpful. Thanks
@anishjain36634 жыл бұрын
You may find data on Kaggle.
@venkatesanr94554 жыл бұрын
@@anishjain3663 I believe the images are not available on Kaggle. If they are, kindly refer me to them.