If you want to work with TensorFlow 2.x, you will get an error while loading the Segmentation Models library. Please follow these steps to fix the issue: kzbin.info/www/bejne/qaqti6t6qbGooNU
@vassilistanislav3 жыл бұрын
Do you know of any other software, such as Apeer, for wound imaging? Is Apeer only useful for microscopic imaging, or can we also do labeling of chronic wounds? Kindly let me know.
@mahyasadatebrahimi31642 жыл бұрын
@@vassilistanislav Try using ilastik for this purpose. It might help you.
@aishashahnawaz98983 жыл бұрын
Thank you for the great and explicit lessons. Your way of explaining is amazing and can be easily understood by beginners as well. Thank you very much for your efforts!
@DigitalSreeni3 жыл бұрын
I was a beginner once and I know the pain, so my explanation comes out of my empathy towards beginners.
@yjoliiyki7063 жыл бұрын
Thank you for the video. I think that instead of semantic segmentation, using a 3D U-Net to generate an instance segmentation would be even more interesting!
@caiyu5383 жыл бұрын
Great lectures. I have been following your series and have learned a lot. Excellent teacher. I am doing a 3D U-Net segmentation, and your tutorial is very helpful.
@DigitalSreeni3 жыл бұрын
Cool, thanks!
@kavithashagadevan76983 жыл бұрын
This is great. Thank you very much for your wonderful videos
@abbasagha9661 Жыл бұрын
Thanks!
@DigitalSreeni Жыл бұрын
Thank you very much.
@venkatesanr94553 жыл бұрын
Thanks for your efforts and for sharing your knowledge.
@farhanshadiquearronno74533 жыл бұрын
Your tutorials and explanations are on point.
@rajeevgupta40588 ай бұрын
Hi sir, first of all, thanks for posting these videos; they are really helpful. I have a pressing doubt, though. In a previous video (159b), you taught how to slice blocks out of the VGG16 model using SM 3D, but that was all for 2D images. Now, VGG16 is built with 2D kernels, so that makes sense. However, here you are feeding a 3D shape to sm.Unet with the VGG16 backbone, and it works. I looked at SM's code, and there they declare input_shape as (None, None, 3). Additionally, the Unet function in the models folder simply passes the args list to the Model factory from the Classification 3D library, which then picks the model directly from Keras. What I want to ask is: how can I get sliced blocks of 3D conv layers with VGG16 weights, as we did for 2D in the 159b video? It should be possible, since they have a 3D U-Net built upon VGG16 without the hassle of slice-by-slice methods to create 3D blocks (even though there are papers on that technique). Lots of thanks. Hoping for a quick response from you. :)
@caiyu5382 жыл бұрын
Thumbs up! Thank you, Dr. Sreeni, for your excellent tutorials.
@DigitalSreeni2 жыл бұрын
My pleasure 😊
@hartree.y2 жыл бұрын
Marvellous work! Thank you.
@qaw542 жыл бұрын
Hi, could you suggest what tweak to make if the image cube dimensions are not equal, e.g., z not equal to x and y? Thank you.
@ChristianRichardson-i5f Жыл бұрын
I'm a bit confused about why we use patchify. Don't we need our images to have certain dimensions to break them into the specified patch sizes? This requires me to resize my images to fit the patch dimensions, but I thought the purpose was that we don't need to resize anything or have images with the same dimensions.
@vassilistanislav3 жыл бұрын
Is there a way to create a tutorial for 3D reconstruction using multiple 2D images, if such a tutorial is possible?
@mbq21510 ай бұрын
Hi @DigitalSreeni, can you tell me whether the backbone model only works for symmetric volumes and sub-volumes in this case? I have a volume (128x128x51) and it's throwing an error. Please help me. Thank you.
@王松晨 Жыл бұрын
Hello, I want to ask: I used the APEER website to open your tif file, but there is no 2D-to-3D option in the lower right corner. Is this a paid feature?
@neginpirannanekaran1236 Жыл бұрын
Thanks for the nice video. Just one question: you are using train_test_split, which randomly picks slices. This seems to defeat the whole purpose of using a 3D U-Net, which is meant to learn from the geometry of the third dimension (image slices are not independent; they have a spatial relationship).
@georgevonfloydmann17977 ай бұрын
Can I use this workflow if I want to segment liver tumors? I have a dataset of NIfTI files of different depths; their dimensions vary. How can I divide every NIfTI file into equal patches?
@srinivasanvenkatramanan1713 жыл бұрын
I have image data of size (240, 240, 155, 4), where height = 240, width = 240, depth (slices) = 155, channels = 4. How can we use patchify for this?
@user-maomao-tsai8 ай бұрын
Excuse me, Sir, can we still use APEER on arivis Cloud to view multi-channel segmentations in 3D, like in this task?
@faheem51913 жыл бұрын
How can I handle a different number of slices per volume in a dataset such as VerSe2020?
@rajeshwarsehdev23183 жыл бұрын
What about loading batches of images, say 125 images? Below are the steps I am trying to perform. Data -> imgHeight = 256, width = 256 & channels = 196. 1. Stored images into NumPy and resized; the result is (125, 128, 128, 192). But how would I use patchify and restructure here? While trying to preprocess these, Google Colab memory keeps crashing.
@matthewavaylon1962 жыл бұрын
Pip installing those packages changes the TensorFlow version to 2.x instead of keeping the 1.x defined at first.
@nohinlab3 жыл бұрын
Thank you for sharing your knowledge. I have XCT scan images of a cylindrical part manufactured using an additive manufacturing process. The parts have porosity defects that I want to segment. Can you please tell me whether I should remove the background before labeling my images in APEER? (In your case the part is cubic, but mine are cylindrical.)
@BernardoSaab Жыл бұрын
Thank you for the great presentation! Would you recommend using a 3D U-Net for abdominal image segmentation? And if so, is there a strong reason to use this architecture over a 2D U-Net?
@MariemMakni-jg6un6 ай бұрын
Thank you so much this is really helpful!! Bless you ^^
@dhaferalhajim Жыл бұрын
Thanks a lot... I have 3D CT scan medical image data and I want to segment it with a 3D U-Net. I don't understand how I can get the mask image as input?
@houdahassouane636 Жыл бұрын
In order to train your model, you need to provide masks too, and then test it on unseen data without labels.
@kavithashagadevan76983 жыл бұрын
Thank you for this wonderful video. How would I be able to view my segmented multi-channel image in 3D in Apeer? I am unable to see any button to obtain the 3D view.
@DigitalSreeni3 жыл бұрын
If you do not see a button for 3D view that means it does not recognize your data as 3D. Please verify if the image indeed has 3 channels and if they are in the right order. You can open the image in imageJ to see if it recognizes the z direction correctly.
@kavithashagadevan76983 жыл бұрын
@@DigitalSreeni Thank you for your advice
@hamadyounis18402 жыл бұрын
How can I apply this to my dataset? My data shape is 190. When I run it, I get: IndexError: index 255 is out of bounds for axis 1 with size 1.
@talllankywhiteboy3 жыл бұрын
Really enjoyed the video and have been trying to use the code, but the step at 23:36 is a huge waste of resources. Tripling the size of my already fairly large images basically eats up all the RAM Colab offers. Really would have liked to see a more efficient approach.
@houdahassouane4018 Жыл бұрын
Hello sir. Indeed, the RAM provided by Colab is insufficient. I'm facing the same problem; did you find a way around it?
@talllankywhiteboy Жыл бұрын
@@houdahassouane4018 I sadly never managed to actually solve the issue of needing the three channels. One strategy I used to make things better, though, was to convert the data type of the arrays to be as small as possible. For me, that meant initializing some of my arrays with a uint8 datatype. Example: train_lbls = np.zeros(train_dims, dtype='uint8')
@clueless15502 жыл бұрын
If I don't want to use the concept of patchify, Can I use the whole volume as input to the 3D CNN?
@DigitalSreeni2 жыл бұрын
If your system memory can handle working with entire 3D volume then you do not need patchify. 3D CNN itself has no limitation, it relies on your system memory to access data.
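For a quick sanity check, here is a rough back-of-the-envelope estimate (the volume and patch sizes below are assumptions for illustration, not values from the video):

import numpy as np

# Hypothetical 512 x 512 x 512 volume with 3 channels stored as float32
whole_volume_bytes = 512 * 512 * 512 * 3 * np.dtype('float32').itemsize
print(whole_volume_bytes / 1024**3)   # ~1.5 GB for the raw input alone; training activations need far more

# A single 64 x 64 x 64 patch with 3 channels is far smaller
patch_bytes = 64 * 64 * 64 * 3 * np.dtype('float32').itemsize
print(patch_bytes / 1024**2)          # ~3 MB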
@linachato58173 жыл бұрын
Great video, thank you so much for the great explanation! I just have a few questions: did you use the weight values from VGG16 and start training your U-Net from those weights, or did you use the VGG16 layers in the encoder instead of the usual U-Net layers? Also, is the U-Net model that you downloaded a 3D U-Net? For which application was the downloaded model trained, and which dataset was used to train it?
@DigitalSreeni3 жыл бұрын
I loaded imagenet-trained weights for VGG16 to start the training, as explained at 30:15 in the video. And yes, the U-Net is 3D, from the segmentation-models-3D library. Not sure what you mean by which application the model got trained on; I was using a tomography image of sandstone, commonly used in oil and gas exploration.
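For reference, a minimal sketch of how such a model can be put together with segmentation-models-3D (the patch shape and class count are assumptions, and the argument names are assumed to mirror the 2D segmentation_models API; check them against your installed version):

import segmentation_models_3D as sm

# 64x64x64 patches with 3 channels, since the imagenet-pretrained VGG16 encoder expects 3 channels
model = sm.Unet(
    'vgg16',                       # encoder backbone
    input_shape=(64, 64, 64, 3),
    encoder_weights='imagenet',    # start training from imagenet-trained weights
    classes=4,                     # number of segmentation classes (assumed)
    activation='softmax'
)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])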
@umairsabir35193 жыл бұрын
Can we please have a tutorial on YOLOv3 or v4 using Keras?
@shabinaa64078 ай бұрын
Can you share some videos specifically on how to make multi-label mask data ready for semantic segmentation? Specifically, I have 2 binary images (with pixel values of 0 and 1). Now, how do I prepare the mask data and how do I label it without any tools (since we already have binary images)?
@kalhormh28833 жыл бұрын
How can we convert the EX.DCM file format to tiff? There is no such option in the Cirrus software. Please help!
@DigitalSreeni3 жыл бұрын
I am not familiar with the DCM format; it sounds like some sort of DICOM file format. You need to look for libraries that can read these files.
@kalhormh28833 жыл бұрын
@@DigitalSreeni Thanks for the quick response. DCM is the file format for the Zeiss OCT device. I've been trying different ways to open it, like Python or other software, but with no success. After some searching, I noticed this is not just a regular DICOM format that you can handle easily; it's locked by the company and is not public. I've sent emails to the Zeiss team as well to see if there is any way to unlock and convert these files, and am now waiting for a response.
@monaallaam8652 Жыл бұрын
Can we find a nice tutorial/code like this in PyTorch?
@wrtxubaid9114 Жыл бұрын
Can we use .nii files with this?
@matthewavaylon1962 жыл бұрын
Have you tried training from scratch? Any recommendations for doing so?
@FDXMSAIF2 жыл бұрын
How do you create the mask image set? Please help.
@DigitalSreeni2 жыл бұрын
Check this playlist.
@syedsajid78233 жыл бұрын
You are amazing. My question is: suppose we are using the BraTS dataset for segmentation purposes, how would the following statement change: encoder_weights='imagenet'?
@mehnaztabassum18782 жыл бұрын
I appreciate your effort! In my case, my training images (3D, T1 modality) are in NIfTI (.nii.gz) format. How can I convert them into a .tif stack? Please help me in this regard.
@DigitalSreeni2 жыл бұрын
Please check this playlist about BraTS2020 data segmentation. Your questions may be answered. kzbin.info/aero/PLZsOBAyNTZwYgF8O1bTdV-lBdN55wLHDr
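If the question is only about the format conversion, a rough sketch using nibabel and tifffile might look like this (the file names and axis order are assumptions; check what your downstream code expects):

import nibabel as nib
import numpy as np
import tifffile

# Load the NIfTI volume and move the slice axis first, i.e. (slices, height, width)
nii = nib.load('subject01_t1.nii.gz')
volume = np.asarray(nii.get_fdata())
volume = np.transpose(volume, (2, 0, 1))

# Save as a multi-page tiff stack that ImageJ/APEER can open
tifffile.imwrite('subject01_t1.tif', volume.astype(np.float32))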
@mehnaztabassum18782 жыл бұрын
@@DigitalSreeni Thanks for the reply. I will definitely follow your advice.
@sophiez7952 Жыл бұрын
Must the three dimensions be the same, like 64*64*64? If the last dimension is smaller, is it still okay?
@georgevonfloydmann17977 ай бұрын
Hello, did you find the answer to your question? I have a similar concern.
@carlotarivera97543 жыл бұрын
Hello, thanks for the video. I have a question: I have a DICOM file of a cerebral angiography. I opened it in ImageJ and there are 384 images. Could I segment them and convert them into 3D with your tutorial? If not, how could I?
@DigitalSreeni3 жыл бұрын
Yes, you can follow this method to segment your 3D dataset. Please convert your DICOM image into a 3D tiff stack, for example using ImageJ. This will give you a volume you can work with. You can use www.apeer.com for image annotation if you do not have an existing solution.
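If you prefer a Python route over ImageJ, a rough sketch for a standard DICOM series could look like this (the file paths and sorting tag are assumptions; proprietary vendor variants may not open this way):

import glob
import numpy as np
import pydicom
import tifffile

# Read every slice in the series and sort by InstanceNumber so the stack is in order
slices = [pydicom.dcmread(f) for f in glob.glob('angio_series/*.dcm')]
slices.sort(key=lambda ds: int(ds.InstanceNumber))

# Stack the 2D slices into a 3D volume and save as a multi-page tiff stack
volume = np.stack([ds.pixel_array for ds in slices], axis=0)
tifffile.imwrite('angio_volume.tif', volume)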
@carlotarivera97543 жыл бұрын
@@DigitalSreeni Thanks. I have a question: I don't know how to convert to 3D from ImageJ, but I can with 3D Slicer. Do you know if a lot of information is lost from an AVM with that software?
@DigitalSreeni3 жыл бұрын
DICOM is a tricky format and I do not have a lot of knowledge about all types of DICOM. Normally, you should be able to open the image in ImageJ using one of the plugins and then save the opened image as a tiff stack. You just need to find the right plugin that can handle DICOM files.
@carlotarivera97543 жыл бұрын
@@DigitalSreeni thaaaanks :D
@johannesschmidt86113 жыл бұрын
What are the 3D models trained on? What datasets were used?
@surajneelakantan66253 жыл бұрын
Hello sir, this is a wonderful video. Can I use this for n-dimensional NumPy array data stored in .npy files (which are data from 3D images)? The masks are also in .npy files, which are basically 0 for regions of no interest and 1 for regions of interest.
@pandian1537 Жыл бұрын
Hi brother, could you give me any idea or method for how to get patches from a 3D MRI image?
@DigitalSreeni Жыл бұрын
You can use the Patchify library or, of course, write your own code.
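A minimal sketch with patchify (the volume shape, cropping, and patch size here are assumptions; adjust to your data):

import numpy as np
from patchify import patchify

volume = np.load('mri_volume.npy')        # hypothetical pre-loaded MRI volume, e.g. (155, 240, 240)
volume = volume[:128, :192, :192]         # crop so every axis is divisible by the patch size

# Non-overlapping 64x64x64 patches (step equal to patch size)
patches = patchify(volume, (64, 64, 64), step=64)
print(patches.shape)                      # (2, 3, 3, 64, 64, 64)
patches = patches.reshape(-1, 64, 64, 64) # flatten to a simple list of patches for training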
@abderrahimhaddadi40233 жыл бұрын
Hello doctor, can you give me some guidelines/tips & tricks and resources to read to achieve better results/metrics for semantic segmentation of medical images? Thanks a lot for all the videos ^^!
@cim54103 жыл бұрын
Your video is very helpful to me. I would like to ask whether you have published any papers?
@DigitalSreeni3 жыл бұрын
I am not into academic research; my day job is in marketing, so there are no opportunities to publish papers. I do have a few patents related to machine learning. Of course, you will find many of my previous publications online; just do a Google search for my name, Sreenivas Bhattiprolu :)
@rijotom88393 жыл бұрын
wonderful presentation
@karthikp72913 жыл бұрын
Sir, I have 140 3D NIfTI files, and I need to extract patches on the fly and use a data generator. In your case, you have loaded one volume; how do I scale this to my case and load data such that it does not run out of memory? Can we work on this together and make a flexible pipeline for everyone to use?
@karthikp72913 жыл бұрын
Right now, I save every patch for all patients and then use a dataloader to load the data. But this is not very flexible if I want to change the patch size during fine tuning.
@saifeddinebarkia71863 жыл бұрын
@@karthikp7291 I ran out of memory trying to run the 3D U-Net on the BraTS2020 dataset, despite using batch_size = 1 and having 6 GB of memory. I think 3D segmentation needs a lot of memory :/
@karthikp72913 жыл бұрын
@@saifeddinebarkia7186 Yes, 3D segmentation requires a lot of memory. You need to create patches of m x n x t size and then train.
@houdahassouane636 Жыл бұрын
@@saifeddinebarkia7186 Hello sir, did you find a way to train your model? My Colab crashes and the RAM isn't enough. I'd like to know whether Colab Pro would be of great help or not? Thanks in advance.
@saadiaazeroual88573 жыл бұрын
Hello Mr. Sreeni, thank you for this video! I have one question: I want to know how to choose the best model to segment multiple organs. Are they all free and open access like U-Net? Please answer me, I am very confused! Thank you.
@DigitalSreeni3 жыл бұрын
If you want to put together your own code, all useful Python libraries are free. Also, you will find a lot of useful code in the public domain. If you don't want to write your own code, you can try www.apeer.com, where you can annotate, train, and segment your images; it is free. I am sure you will find other online and offline platforms that offer these services.
@saadiaazeroual88573 жыл бұрын
Thank you a lot for this information!
@kibetwalter85282 жыл бұрын
Can you combine the 3D U-Net with a GNN/GCN at the base layer?
@kibetwalter85282 жыл бұрын
Like in this paper: "A joint 3D UNet-Graph Neural Network-based method for Airway Segmentation from chest CTs".
@rameshwarsingh58593 жыл бұрын
Thank you Sreeni Sir
@kibetwalter85282 жыл бұрын
You are just the best
@nouhinchannel3 жыл бұрын
Hello, can you please show us how you annotated your dataset?
@DigitalSreeni3 жыл бұрын
I used www.apeer.com. I annotated the images and downloaded the masks. You can watch the video on how to do the annotation on APEER. Of course, there are many other annotation tools out there, but for my purposes APEER is the easiest. Disclaimer: APEER is developed by my team at work. It is free, so you can check whether it fits your needs.
@sophiez7952 Жыл бұрын
Thank you for your great work!
@gabrielmonacoribeirodasilv86432 жыл бұрын
Please do some videos on working with this pore model using the PoreSpy and OpenPNM libraries.
@nouhamejri16983 жыл бұрын
Good job. You can find the BraTS dataset on Kaggle.
@DigitalSreeni3 жыл бұрын
I cannot find the dataset on Kaggle; can you please provide the direct link? Everyone refers to going to www.smir.ch/ and making a request, which I tried, but I never heard back.
@nouhamejri16983 жыл бұрын
@@DigitalSreeni This is the link: www.kaggle.com/awsaf49/brats20-dataset-training-validation. Sorry for being late.
@Xiaoxiaoxiaomao3 жыл бұрын
@@DigitalSreeni I have sent you the link to the BraTS dataset. Please have a look at your email. Thanks.
@talha_anwar3 жыл бұрын
Much-needed tutorial. If the z-axis of the data is different in every image, then what should we do?
@DigitalSreeni3 жыл бұрын
What do you mean by an axis being different in different datasets? If you mean that the z-axis scale is different, then it doesn't matter much for training. In fact, it may help generalize the model a bit. You care about scale when you segment images and report object measurement parameters. Until then, a pixel or a voxel is measured in pixels or voxels, not in real units.
@talha_anwar3 жыл бұрын
@@DigitalSreeni Some images have shape (512,512,110), some have (512,512,103), (512,512,117), etc. Do I need to bring them to one common size?
@ahhhhhhhh69472 жыл бұрын
Amazing explanation
@mohamedomar-rp3kz2 жыл бұрын
I'm not able to open Apeer for the first time! Could you help?
@DigitalSreeni2 жыл бұрын
Please post it on the APEER discord server: discord.gg/xffrNwm78e
@mohamedomar-rp3kz2 жыл бұрын
@@DigitalSreeni I did, and I have not gotten a reply yet!
@a96yonan3 жыл бұрын
Can this work with 2D images too?
@elnaz82023 жыл бұрын
Very nice, thank you. I want to use different datasets where the image size is (512,512); how can I use them without errors?
@DigitalSreeni3 жыл бұрын
Crop them to a smaller size, as you may not be able to fit 512x512x512 volumes in memory.
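For example, a simple crop with NumPy slicing (the sizes below are just an illustration):

import tifffile

volume = tifffile.imread('large_stack.tif')   # hypothetical 512x512x512 stack
sub_volume = volume[:256, :256, :256]          # keep a 256x256x256 region of interest
print(sub_volume.shape)                        # (256, 256, 256)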
@anitakhanna27663 жыл бұрын
Superbly explained, sir. Sir, can I use the same procedure for a 2-class problem also?
@DigitalSreeni3 жыл бұрын
Yes, of course.
@anitadhawan03083 жыл бұрын
@@DigitalSreeni Thanks a lot, sir. Sir, what should I do if I want to train the model with different patient volumes one by one? A single .tif with a huge number of images (adding the volumes together), given all at once, makes the system crash, and if we want the model to learn, we have to provide many test volumes. Please suggest something, I am stuck.
@salmahayani27103 жыл бұрын
Hello, first of all, thanks for this useful video. I want to ask something about the Dice coefficient loss. I'm doing semantic segmentation on 3D CT scans (LUNA16 database) using a 3D U-Net, and I have a problem: my Dice loss gets stuck at 50% and doesn't decrease anymore for both training and validation. Do you have any idea what the problem could be? Waiting for your answer :)
@houdahassouane4018 Жыл бұрын
Salam Salma, I hope you're doing well. We're doing semantic segmentation on 3D XCT images too, but we have a problem with large data and insufficient RAM; it crashes at the training stage (we're using a 3D U-Net too). Did you face the same problem, and if so, how did you manage it? Even 32 GB won't do the job :/
@salmahayani2710 Жыл бұрын
@@houdahassouane4018 Hi Houda, as a first solution I used the TorchIO library, which helped me load data on the fly during training, so you don't need to load all the data into RAM; with the same library you can even do data augmentation on the fly. The second solution was to switch to a machine with an NVIDIA GPU. I hope this helps you deal with the problem.
@houdahassouane4018 Жыл бұрын
@@salmahayani2710 Thanks a lot for replying, I really appreciate it. I'll try it. I have another question, if you don't mind: did you use his Colab? If so, didn't you have a problem with the output label and ground truth not showing anything, just a plain purple screen?
@salmahayani2710 Жыл бұрын
@@houdahassouane4018 Do you mean Google Colab?
@johnyang54402 жыл бұрын
Thank you for the fantastic video. I am setting up 3D cardiac muscle cell segmentation. I don't know whether this will also work for that. I hope it will.
@gabrielcerono3063 жыл бұрын
Amazing work!!
@olubukolaishola48403 жыл бұрын
Thank you 🙏🏿
@DigitalSreeni3 жыл бұрын
You are so welcome
@moumitamoitra18293 жыл бұрын
Could you please make a video on the classification of 3D images using deep learning?
@moumitamoitra18293 жыл бұрын
I want to learn 3D image-based classification using different pretrained deep CNN models. Please help us.
@dev8343 жыл бұрын
It would be nice if you put out a video about reading, preprocessing, and segmenting color images.
@AlexanderFIOsman3 жыл бұрын
Very nice! Thanks. It would be great if you could make videos on domain adaptation using transfer learning (VGG16, Inception, etc.) and GANs.
@gurinderjeetkaur80873 жыл бұрын
Please create a video on a 3D U-Net for the BraTS dataset too, if possible.
@ritikaagarwal1123 жыл бұрын
Thank you sir for sharing the knowledge. Any plans to cover UNet++ architecture in upcoming lectures?
@Suman-zm7wx3 жыл бұрын
Sir, could you please provide a video on "Super Resolution" using SRGANs or any other algorithm, because I need an explanation from you 😇
@DigitalSreeni3 жыл бұрын
Sure.
@Suman-zm7wx3 жыл бұрын
@@DigitalSreeni thank you sir, much appreciated ❤
@manishnarnaware65072 жыл бұрын
Dear Sir, can you help me with my project?
@manishnarnaware65072 жыл бұрын
I can pay you for that
@DigitalSreeni2 жыл бұрын
Sorry, I have no time to help with individual projects. I wish I had the time, but I have a full-time job that requires my full attention. I am sure you will find some freelancers if you are willing to pay.
@cirobrosa Жыл бұрын
Keep it up!
@morniang38453 жыл бұрын
Thank you
@МатвейБрюшков3 жыл бұрын
Please create a video about semantic segmentation of satellite images (buildings, roads, forests, rivers) using U-Net.
@DigitalSreeni3 жыл бұрын
What dataset do you recommend for semantic segmentation of satellite images?
@МатвейБрюшков3 жыл бұрын
@@DigitalSreeni For example, this one: www.kaggle.com/humansintheloop/semantic-segmentation-of-aerial-imagery
@DigitalSreeni3 жыл бұрын
@@МатвейБрюшков Thanks. This looks like a small dataset but fun to work with. I will try to record a video.
@МатвейБрюшков3 жыл бұрын
@@DigitalSreeni Can you please give me a link to the code that divides large training images into 256x256 parts? Thanks.
@МатвейБрюшков3 жыл бұрын
@@DigitalSreeni Oh, I forgot to send the link to the other datasets: zenodo.org/record/1154821#.YImFqaGEaUk