Comments
@Dmat1937 14 days ago
Hi, I'm using approach 3 (verdict: good) mentioned in this video, but now I have a question. Is it advisable to use the threshold function? Or is it no longer necessary since the images are divided by channel (or color), and why? I would really appreciate your answer.
@kruneuro 10 days ago
Hello, I don't think the threshold function would be appropriate here, though it depends on your goals. Thresholding flattens any image, color or otherwise, into a binary of black vs white. The approaches detailed in this video are aimed at measuring differences in the color of an image (original or deconvoluted), where the color spectrum is preserved. Thresholding is more appropriate for tasks like cell counting, stained-area measurement, and/or creating stark boundaries between stained vs unstained elements in an image. If you are doing any of those tasks, then thresholding after color deconvolution is advised. The color deconvolution can isolate one stain from multiple others, then thresholding can allow for cell counting or area measurement of just that particular stain.
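To make the black-vs-white flattening concrete, here is a minimal sketch (the 128 cutoff is a hypothetical example; in ImageJ you choose the cutoff interactively or via an auto-threshold method):

```python
def apply_threshold(gray_value, cutoff=128):
    """Flatten an 8-bit gray value (0-255) into a binary decision.

    The cutoff of 128 is an arbitrary illustration, not ImageJ's default.
    """
    return 255 if gray_value >= cutoff else 0

# Every pixel becomes either pure black (0) or pure white (255)
print(apply_threshold(40), apply_threshold(200))  # 0 255
```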
@carmenma7300 22 days ago
OMG, this is really helpful! Thank you for sharing ~
@zahraahmadi2234 a month ago
Hello, I have a question; would you please help me? I want to count the number of grafts inside the plates with Fiji (filament detector). How can I do that? (Can I have your email address? I want to send photos of the analysis.)
@kruneuro a month ago
Hi there, I am unsure how well I can assist you (I've never used the filament detector), but I can try. We should converse over email, with some picture examples you can provide. I don't post my email in the YouTube comments, as I don't want to be harassed by bots, but you can find it via the website link in this video's description above.
@zahraahmadi2234 a month ago
@@kruneuro thank you so much
@mariainesmercado4370 2 months ago
Hi, thanks for these amazing tutorials!! I have to calculate different intensities of coloration with phloroglucinol (a magenta stain specific for lignin). I think approach 3 fits: custom color deconvolution, then the Measure tool on RGB channel images. I have three questions, please: 1. If we only want to measure one color's intensity, can we just click through without selecting any other custom color for the other two options? 2. To express greater color intensity as a positive value, since the scale goes from 0 (black) to 255 (white), should we use the formula 255 - value, or express it as a percentage with (255 - value)/255 * 100? 3. And last, is it correct to use whole figures where different elements are stained? For example, photographs always taken at the same magnification, which therefore always cover the same number of square micrometers of surface.
@kruneuro 2 months ago
So, I tried playing around with the color deconvolution widget on a sample phloroglucinol stain I found online. After using the color picker to find "ideal" staining values, and then trying the "user defined" setting on that PG stain as well as a Masson trichrome stain (from my video)... I'm not pleased with how the program handles the user-defined values. The outcome pictures just look really wrong, and aren't separating out the stains correctly. Maybe it's due to the high potential for error in the user picking out the *exact* correct and ideal shade of the stain color. In any case, you may have better luck just using the deconvolution's RGB setting and then averaging your results from the R + B channels. Alternatively, the Azan-Mallory setting isolates magenta a bit more cleanly. That said, try out the various other deconvolution settings to see which one best separates your stain from the background tissue coloring. For representing the color values, and moreover the staining intensity, the second option (percentage of the inverse) is the most intuitive for viewers/readers. For your last question: picture sizing standardization is always best. As far as what you decide to show - if you're asking whether or not you should show the images that are output by color deconvolution, that seems like a good idea but may not be required for your publication... unless it's just as an example. Showing the separated channels may be more visually important if there are very subtle differences that aren't easily seen from the original, non-deconvoluted photos.
@mariainesmercado4370 24 days ago
@@kruneuro Thanks!!!
@janithchathuranga1301 2 months ago
It does not show the color histogram option in Analyze. Where can I find it?
@kruneuro 2 months ago
I suspect you might have the regular version of ImageJ without the various add-ons. I recommend getting this version, called FIJI. fiji.sc/
@mellanieferreira3491 3 months ago
Hi again! Could you help me with the unit of measurement that I should use to describe my results? I used 8-bit images and the mean gray value. Is it "pixels/micrometers²"? Or "pixels²"? Thanks again
@kruneuro 3 months ago
If you're just describing color and are not specifying any measurement of area, the answer is technically "neither". The mean gray value is on a 0-255 scale (0 at black/full color, 255 at white). So, if you're making a bar graph of the mean gray values for each condition, perhaps you can make a more interpretable metric like "Percent of maximum staining". This would be (255 - [value]) / 255 * 100. If you were quantifying staining in fluorescent microscopy photos, then the equation would be simpler: [value] / 255 * 100. To the other point in your question, although you're not using the pixels-per-um metric in such a bar graph, it's important to note it in your paper's methods. An example might be: "Quantification was performed by selecting 200 um x 200 um areas within each image. Mean gray values were calculated within these selection boxes."
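As a sketch, that conversion in script form (using 255 as the 8-bit maximum, which matches the worked examples elsewhere in this thread; the function names are mine, not ImageJ's):

```python
def percent_max_staining(mean_gray):
    """Brightfield: darker staining (lower gray value) should read as
    a larger percentage, so invert against the 8-bit maximum of 255."""
    return (255 - mean_gray) / 255 * 100

def percent_fluorescence(mean_gray):
    """Fluorescence: brighter signal already means more staining."""
    return mean_gray / 255 * 100

print(round(percent_max_staining(40), 1))  # 84.3
```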
@mellanieferreira3491 3 months ago
@@kruneuro Hi! I measured the mean gray value of different regions of the picture with the same area of measurement. I analysed the histochemistry reaction in the tissues. Then, I compared the mean gray intensity between the groups (treatment vs control), and described whether there was an increase or decrease in the histochemistry reaction compared to the control group. In my graphs I called it "optical density". I am still drafting the article for submission, so how do you believe I should correct this information? Thank you so much, you're really helping me A LOT 🙏🏻
@kruneuro 3 months ago
@@mellanieferreira3491 Optical density is a metric I'm still not fully knowledgeable in, but I know this approach is not an optical density measurement. I'd suggest something similar to what I mentioned in the prior comment, or even just a non-percentage version where you provide the averaged mean intensity of subjects in each treatment. In that case, the y-axis label could be "Mean intensity averaged across subjects".
@lovethanosike3247 4 months ago
This was really informative and helpful. Thank you for the video. How would you remove the agarose from the sections? E.g., with OCT-embedded tissues/sections, one would wash with PBS. Is it okay to carry out the immunostaining protocol with the agarose still attached to the sections? If not, is there a way to remove the agarose attached to the sections without damaging the tissue?
@kruneuro 4 months ago
The removal of agarose from sections is something I also wonder about. I can typically peel an agarose ring off of a tissue section by using one brush to hold the section down and another to pull the agarose away. However, I don't recommend this for tissue sections that are at all fragile. I've definitely applied IHC procedures to tissue with agarose attached without problems. The only main reason to get rid of it is if it occludes your view of certain things in the tissue. Even then, that's only a problem if the agarose folds on top of or under the tissue, and further if it takes on a stain - which is effectively only relevant if using a chromogen like DAB. At some point this summer, I'll be investigating the use of gelatin as a lower-cost embedding medium substitute. I'll post a video about it if it works.
@lovethanosike3247 4 months ago
@@kruneuro Thank you so much for your input. I am looking forward to that video and I have my notifications turned on.
@isamaracarvalho7551 5 months ago
Hello, for a few weeks now my laboratory colleague and I have been looking for step-by-step instructions for quantifying the intensity of red in images of C. elegans stained with Oil Red O. In the images, we have the stained animals and bacterial remains (which are the food source for the nematodes) that also end up staining red. Do you believe that this methodology is efficient for quantifying the red coloration of only the animals, and then using it to compare the treatments with the control group?
@kruneuro 5 months ago
I think the approaches in this video can work for your case (some were indeed tested with Oil Red O). However, the stains might have more utility if there were some way to semi-selectively label the bacterial fragments. That way, you could subtract the bacteria-preferring stain from the Oil Red O stain.
@isamaracarvalho7551 5 months ago
@@kruneuro Hello again, sorry for so many questions, but we are confused about how to finish our analyses. What is the best way to graphically represent the data obtained using the analysis described in method 3?
@kruneuro 5 months ago
@@isamaracarvalho7551 No worries. I think the best way to represent the data is via bar graph (or "column chart", as it is sometimes called in Excel). Note that the output of ImageJ in the "Mean" column is the "mean intensity value" for the selection that you made. But you should average those mean intensity values so that you get an average across your samples. So, for your graph, the bars represent your averaged "mean intensity" values coming from your multiple selections. Insert custom error bars representing the standard error of the mean (SEM). You can calculate SEM by using the stdev function on the cells containing your "raw means", then dividing by the square root of the number of values. If the comparisons are not complicated, t-tests should be fine.
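The averaging and SEM steps above can be sketched as follows (the intensity values are hypothetical per-selection means, not real data):

```python
from math import sqrt
from statistics import mean, stdev

def mean_and_sem(values):
    """Average the per-selection 'Mean' values from ImageJ's Results
    table, and compute SEM = stdev / sqrt(n) for the error bars."""
    return mean(values), stdev(values) / sqrt(len(values))

treated = [120.4, 115.9, 131.2, 118.7]  # hypothetical mean intensities
avg, sem = mean_and_sem(treated)
print(round(avg, 2), round(sem, 2))  # 121.55 3.35
```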
@isamaracarvalho7551 5 months ago
@@kruneuro If I understood the explanation in the video correctly, values closer to 0 indicate a purer red, while values closer to 255 indicate lighter colors. If you put it that way on the bar graph, wouldn't it be confusing because it's the other way around?
@kruneuro 5 months ago
@@isamaracarvalho7551 That's true. You can correct this counterintuitive scale by subtracting the value from 255 and having that result serve as the new value: 255 - 40 = 215. And if you are worried that having the data represented on a 0-to-255 scale might be unclear to readers, you can convert it to a percentage: 215/255 = 84.3%. You can call this value "Percentage of maximum possible staining intensity", or something similar.
@myfirstblackdress 5 months ago
Hi! Thank you for such a helpful video. I'm trying to compare nuclei intensity. I have taken Approach 3 of custom color deconvolution, then the Measure tool on RGB channel images. I first did that with my control images and measured the intensity of a couple of nuclei. Upon doing that, I've realized that the mean values for my darker nuclei are lower than for my lighter nuclei. With the values the other way round, I'm unsure as to how to express my data. Would you be able to advise me on what to do? Thanks!
@myfirstblackdress 5 months ago
Also, I'm trying to form a threshold based on my control images. Would it make sense to take 3 different 20x magnification images of the control, and in those images find the color intensity of 100 or so nuclei with the same ROI? Following that, for one image, find the average intensity from the color intensities of the 100 nuclei, and do the same thing for the other 2 images. With those averages, find an overall average. This value will be my threshold, and anything that goes above or below it will be positively or negatively stained. Sorry if this is all over the place, but would this method be appropriate?
@kruneuro 5 months ago
@@myfirstblackdress So, for the weird inverse value problem, I think you can normalize it into a more understandable metric (for readers of your work) by some manner of inverse calculation. One that comes to mind is: 255-value. If you wanted to do the same but in percentage, then it would be (255-value)/255 * 100. For your second question, I think I understand - you are trying to take large separate control datasets and repeatedly average them to get a consistent value. To clarify, you are saying that you are determining your own personal threshold, and you are *not* referring to the threshold function & values in ImageJ, correct? If that's so, I think your approach makes sense. You can then rate how much things differ (positive or negative) from your determined threshold as raw difference, percent change, or standard deviations - not sure which to choose, but those are all options.
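A sketch of that control-derived threshold scheme with made-up gray values (here I treat "darker than the threshold", i.e. a lower gray value, as positively stained):

```python
from statistics import mean

def control_threshold(nuclei_by_image):
    """Average nuclei intensities within each control image, then
    average those per-image means into a single threshold value."""
    return mean(mean(img) for img in nuclei_by_image)

def classify(gray_value, threshold):
    # Darker than the control threshold = positively stained
    return "positive" if gray_value < threshold else "negative"

controls = [[100, 110, 95], [105, 120], [90, 100, 110]]  # hypothetical
t = control_threshold(controls)
print(classify(80, t), classify(150, t))  # positive negative
```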
@myfirstblackdress 5 months ago
@@kruneuro Thanks for your quick reply! I will apply the inverse calculation, that sounds great! Yes- I am trying to determine my own personal threshold. Okay I will determine which to choose! Thanks so much again!
@manualbreathing1stform 5 months ago
TinkerCAD is owned by Autodesk, which also owns Fusion 360. Both are probably the best CAD software on the market, but Fusion 360 is quite expensive for a license lol.
@kruneuro 5 months ago
Good to know! I'm a proponent for trying to utilize cheaper or sometimes free alternatives (GIMP vs Photoshop, JASP vs SPSS, etc.), so I'll be sticking with TinkerCAD as my requirements are relatively simple - even for small parts used in my lab work. Worth noting is that I do still think those who make these alternatives should receive compensation for their work. The mainstream versions of these tools are just typically prohibitively expensive for labs & students who don't have cash flowing from their cabinets. >_>
@am13134 6 months ago
Hi, I’m trying to measure the staining intensity in one region compared between control and subject groups. Since there is only blue staining, is there any issue with just converting the image to RGB and then using Ctrl M and recording the mean to use in analysis (with the lower intensities being a higher number and darker intensities being a lower number)?
@kruneuro 6 months ago
That *should* work. But one issue I could imagine running into is if the blue has slightly different reddish or yellowish staining in it between samples, or as a result of how the images are captured. I was also considering how "fainter" stains, closer to white, would have more of the other colors involved (red + green). But the Measure tool may just simplify that anyway; I think it reads things as if they were on a grayscale spectrum, rather than as a mix of colors. That said, you could ensure this by changing the images to grayscale (essentially "flattening" the color variety). I recommend giving both a try: see what the difference is when using Measure on the color images between treatment groups, and then re-test with the same images in grayscale. In theory there shouldn't be much difference between the values that Measure yields in these two situations, but it's worth a look.
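For what it's worth, the difference between the two situations comes down to which RGB-to-gray formula is applied; a sketch of the two common ones (to my knowledge, ImageJ defaults to the plain average, with the perceptual weighting available as an option):

```python
def gray_unweighted(r, g, b):
    # Plain average of the three channels
    return (r + g + b) / 3

def gray_weighted(r, g, b):
    # Perceptually weighted conversion (standard luma weights)
    return 0.299 * r + 0.587 * g + 0.114 * b

# A strongly blue pixel: the two formulas disagree noticeably
print(gray_unweighted(50, 50, 200), round(gray_weighted(50, 50, 200), 1))
```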
@am13134 6 months ago
@@kruneuro Thank you so much for your reply. I did try converting to greyscale and measuring and it gives the same exact values (as measuring the average mean on RGB image). Initially I was going to just use the blue histogram from the RGB images but then watched your video and realised that would’ve been a big mistake. Once again, thank you immensely for these videos and also your comment. You’re having a big impact on students (undergrad here). Keep up the amazing work ❤️!
@kruneuro 6 months ago
@@am13134 Glad to have helped! These videos are definitely aimed for students (undergrad & grad) and some post-docs as well. I know it can be hard to find the right info, so I strive to keep posting videos when time permits. Thanks much!
@sabinestolker8490 6 months ago
Hi! I'm looking for something similar to this wheel insert to make washing steps of my smaller tissue more efficient, but I can't find this anywhere. Would you want to share where you got this netwell insert?
@kruneuro 6 months ago
I made variants of this via 3D printing, which you can find on my research website: sites.google.com/view/kru-neuro-lab/home Otherwise if you want to buy that specific type, they're Prep-Eze by Pelco (Ted Pella company). But they're kinda pricey for what they are. If 3D printing my designs, I recommend setting a 75% infill, as there are otherwise micropores that cause leaking. Plus you'll still have to buy the net mesh and glue it on, but sheets of nylon mesh are cheap - mine's from McMaster-Carr.
@danko6582 7 months ago
Nice video. I started using TC because I'm in my 50s and didn't fancy learning a complex CAD software. It's totally free, and easy enough for both young kids and older folks. It was just after I bought my first 3D printer and I couldn't find the obscure photography related parts I needed as downloadable online designs. The learning curve was real steep. There's not much to learn to get fairly proficient. My earliest designs were already pretty good and did the job. Eventually I got to be well known (I'm @ZDP189) and now I draw for relaxation.
@paylbajaj 7 months ago
Thank you for the video. Can you please help me with the brain region for the following rat brain coordinates from bregma: 4 mm lateral, 2 mm anterior, and 4 mm deep?
@kruneuro 7 months ago
At a glance, it looks to be the primary somatosensory cortex, "oral dysgranular zone". It's a more internal layer of that cortex, close to the external capsule.
@paylbajaj 7 months ago
Thank you. Can you please make a video on how to identify a brain region from its coordinates? That would be highly appreciated.
@kruneuro 7 months ago
@@paylbajaj I think that's covered to some degree in this video, at 21:40 and beyond. If this video is insufficient, let me know what other information you would need. Is it that the acronyms in the diagram are hard to follow and may not give the full name?
@guitardude32787 8 months ago
Might want to edit that email address in the tab. Delete my comment when you do 🙈👍
@kruneuro 8 months ago
Actually, no worries. I have that email linked on my research website, and am open to people emailing me questions there. Thanks for keeping an eye out though!
@guitardude32787 8 months ago
Good thing! Thanks for this tutorial, it's got me going in the right direction @@kruneuro
@apoorvasondh 8 months ago
Hi, how do we measure thickness using the software?
@kruneuro 8 months ago
Hello! If you are talking about 2D microscope images, I don't think there's a good way to measure thickness in that manner. That said, I suppose you could figure it out via staining intensity. This would require the stain to be homogeneously distributed across the depth of the tissue. Thus, thicker tissue would be darker, and those differences in staining intensity/darkness could be measured as a proxy for thickness. Otherwise, if you're talking about a 3D image, via confocal or the like, and it's derived from an image stack, the thickness/Z-axis info should be displayed within the title bar if everything is calibrated correctly.
@apoorvasondh 8 months ago
@@kruneuro Sir, I have pictures similar to those shown in this video. So, do I need to follow the steps as per the video for the staining intensity and then measure the thickness?
@kruneuro 8 months ago
@@apoorvasondh You can follow the steps I show here, but you'll have to be clear that you're only measuring staining intensity when you report your methodology to others. I don't think we can guarantee that staining intensity is always a good proxy for thickness.
@apoorvasondh 8 months ago
@@kruneuro Okay sir, noted. Thank you for the guidance.
@pranalishinde1083 8 months ago
Hi, useful video, sir. I just want to know more about the formula used for calculating coordinates according to weight.
@kruneuro 8 months ago
Hi again - I will post what I responded to you with in my email, so that others can view if they have the same question. I will also start a new question on ResearchGate, since I did not find quite the right one after some searching. "Hello Pranali, This is a tricky question to answer - I haven't heard enough talk about a consistent formula among colleagues that did the surgeries. I think it is because the rat skull doesn't elongate evenly across the whole front-to-back span. That said, I'll try to recall what I had done when I did these surgeries, now 7+ years ago. The Paxinos & Watson atlases are modeled after rats that are around 300 grams. For regions anterior to or near bregma, I *think* we added +0.5 mm to the AP coordinate for every 100 grams above the starting 270 gram weight. But if your coordinates are more posterior, perhaps closer to the interaural line, I cannot be sure how much to suggest adding, if any. I decided to look on ResearchGate in case this question was asked & answered there. I only found it for mice (www.researchgate.net/post/Do-I-need-to-correct-for-body-weight-on-AP-coordinate-in-mouse-brain-surgery), but I gleaned a few insights: 1. In this case, we're using weight as a correlate for age - particularly for male rodents. Researchers should be aware if their rodent models have increased fat or muscle gain (either from genes or from experimental diet) that skews that weight upwards from what it should be for a given age. 2. The thread mentions how the brain stops growing in size by adulthood. This is commonly established in many animals. But, I want to add that the skull can still grow a bit in some cases, and this is what ultimately adds error to targeting the brain coordinates. Since I've never worked with mice, I am unsure how well the mouse info tracks with rat situations. I think we should be cautious in assuming it still applies. 
I also found this thread: www.researchgate.net/post/What-are-the-stereotaxic-coordinates-for-the-lateral-ventricle-in-the-Sprague-Dawley-rats/2 The most helpful comment on page 2 was this: "... the brain/cranium growth is enough to change the coordinates of LV (and other areas). The atlas was designed from rats weighing 270 +/- 20 g, although can be utilized for rats between 250 and 350 g with minimal variation (less than 0.1 mm). Thus, you must utilize rats in this weigh range, or [otherwise] validate the LV coordinates when using mature animals." Another comment recommended using the Waxholm Space atlas of Papp et al. 2014, which they linked via here: scalablebrainatlas.incf.org/ . I don't have time to compare the two atlases, so you may have to investigate on your own. You can continue browsing through the "Questions" portion of ResearchGate to see if another post more directly addresses your question. I scrolled pretty far, and there are still many more questions that came up in the search I used (stereotaxic surgery rat weight), but the vast majority were not quite relevant enough. There are likely more things buried deeper in that search."
@kruneuro 8 months ago
OK, started a question thread on RG: www.researchgate.net/post/How_do_you_adjust_AP_coordinates_by_body_weight_for_stereotaxic_rat_brain_surgeries
@kruneuro 8 months ago
I have updates based on answers I received. See the thread again: www.researchgate.net/post/How_do_you_adjust_AP_coordinates_by_body_weight_for_stereotaxic_rat_brain_surgeries Contrary to what I was taught (warned about), this age-related drift in coordinates may not be an issue after all.
@FuzzyJohn 8 months ago
If an object is above or below the work plane, just select the object and press D on the keyboard. Also, instead of moving the cutout box in little increments inside the big box, use the Alignment tool. It is very powerful and easy to use.
@kruneuro 8 months ago
Thanks much for this! Students watching this video, take note!
@javeentharka513 9 months ago
Hey Kevin, thank you for this, this is amazing! Sorry for another question, but can I use this to compare the intensity of a color vs a blank? (I have a dye I use, and I need to see how intense the color is. Can I use this to quantify the blue I get, and how blue it is compared to another blue, etc.?)
@kruneuro 9 months ago
Hello! I did post a newer video on this channel: kzbin.info/www/bejne/hWGvfpuJlK-nfdUsi=gZYk_dInTXcpL11K Take a look at that, and see if it might help you. I know I covered specific color staining intensity approaches there, but comment again if you're needing more pointers.
@javeentharka513 6 months ago
@@kruneuro Hey Kevin, thank you again for this video and your contribution towards science; this is greatly appreciated. Can I please know how to use ImageJ to get the difference in color intensity of two normal images? I have two images of teeth, both taken with identical camera settings (DSLR, macro lens) and the same lighting (the image-to-image difference is less than 2%, and a lot of work has been done to ensure the accuracy of the two). One image is of a normal tooth with active caries; the second image is stained with the dye we use. I need a way to quantify the blue versus the control images of teeth, which do not have the dye. Can you please suggest a way to help with this? Thank you kindly!
@kruneuro 6 months ago
​@@javeentharka513Hi there, I think the portion of my other video (kzbin.info/www/bejne/hWGvfpuJlK-nfdUsi=YbYk81-ctG6o8K5y) gives the particular method you could use. Specifically, the portion from 13:11 to 17:00 will cover what you need (<- but don't click on these time stamp links, as they'll just re-direct to the video on **this** page). The difference for you is to use the blue channel images that are isolated from the RGB option of the color deconvolution approach, rather than the red ones I show in the video. From there, you select areas of interest, use the Measure tool, acquire the mean values, and compare those mean values of the distinct images.
@seyedasaadkarimi7769 10 months ago
This was a really helpful and informative video for selecting antibodies (and the rationale behind their functions). Thank you @kruneuro
@kruneuro 10 months ago
Thanks, glad to help!
@nouramohamednabawy7606 11 months ago
Thank you, it's very helpful. Is Approach 3 good for PAS and mercury bromophenol blue stains? And what about Masson trichrome stain? Could you clarify those, please? Thanks
@kruneuro 10 months ago
Hello! In general, I recommend Approach 3 for image analysis of many sorts of multi-color stains. I have used it on Masson trichrome images, so it works for that. I am less familiar with the other two stains. However, you'll notice that the color deconvolution function has presets for some of these stains. So instead of using the RGB preset, you can use the Masson Trichrome preset. I saw one for PAS... or at least H PAS (I don't know the difference; I haven't used either). There may be one for the mercury bromophenol. But if there isn't, I recommend either using RGB or using another stain preset that has similar color split-ups as MB stains.
@nouramohamednabawy7606 10 months ago
Thank you for your reply! So you recommend using Approach 2 for stains that don't have a preset in color deconvolution?
@kruneuro 10 months ago
@@nouramohamednabawy7606 I'm not a fan of approach 2. Instead, I think there's a custom setting in the color deconvolution tool. If you use that, you can enter the specific "ideal" color values for each stained element. Let's imagine if there were no setting for masson trichrome. What you should be able to do is use the color sampling tool (should look like a dropper icon) to find out the specific R,G, and B values of each part of the stain. So, you would find the R,G,B values for an ideal red/pink portion of the stain, R,G,B values for the ideal purple portion, and the same again for an ideal blue portion. After recording these values manually (in a notebook or excel), you should be able to enter them manually in the color deconvolution dialog's custom setting.
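As background, this is roughly how color deconvolution code turns such a sampled "ideal" stain color into the vector it uses internally (the optical-density normalization of Ruifrok & Johnston; the FIJI dialog may do this conversion for you, so treat this sketch as explanatory rather than a required manual step):

```python
import math

def stain_vector(r, g, b, background=255.0):
    """Convert a sampled stain color (0-255 R, G, B) into a unit-length
    optical-density vector: OD = log10(background / transmitted intensity)."""
    od = [math.log10(background / max(channel, 1)) for channel in (r, g, b)]
    norm = math.sqrt(sum(v * v for v in od))
    return [v / norm for v in od]

# A magenta-ish sample: strong absorption in the green channel
vec = stain_vector(200, 50, 120)
print(round(sum(v * v for v in vec), 6))  # 1.0 (unit length)
```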
@nouramohamednabawy7606 10 months ago
@@kruneuro Thank you very much! I'll try this method and let you know the result! Thanks alot!
@chandankadur a year ago
Thank you for this video. It is very helpful.
@kruneuro a year ago
You're welcome! I am planning to make a follow-up to this for troubleshooting poor-quality sections. Hopefully I'll get to that in the next month.
@HaLe-jx4rf a year ago
Dear Professor Urstadt, thank you very much for spending your precious time making invaluable and informative lectures for poor students like us 🥰 I wish all the best upon you and your family, Professor Urstadt 🤩 I hope to see more lectures from you, Professor. I have just found your collection of immunohistochemistry lectures, and they are deeply helpful for a beginner in immunology like me, Professor Urstadt. You are the best. Have a great week there, Sir!!!
@kruneuro a year ago
Thanks for the appreciation!
@mikerak985 a year ago
Great presentation. Thanks.
@caglarozdemir7384 a year ago
Great video, thanks. I was wondering if this would apply to fresh brains. I tried using 4% agarose for a whole hemisphere and it was a disaster.
@kruneuro a year ago
Thank you! Although I don't have hands-on experience with fresh brain sectioning, there are some considerations I can note: 1. Make sure the blade is advancing extremely slowly. 2. Try increasing the vibration speed to ensure the blade cuts the tissue rather than pushing it. 3. Angle the blade so that it is not too flat/level. It should have some slant. There may be an ideal slant that is better for fresh tissue compared to fixed tissue; I use between 20-30 degrees for fixed tissue, though my tissue is not ultra-fixed like a lot of labs do (<1 week in 4% PFA). 4. Try varying the concentration of agarose, likely to something lower. The agarose stiffness might need to more closely match that of the brain tissue. 5. Ensure that the brain chunk is not too tall when affixed to the vibratome. There's more likelihood of tilting or pulling if it is too high. This is probably even more true for fresh tissue. My guess is that the height of the chunk should not be greater than the width of the base/flat end that is attached to the chuck. The slimmer (less high) the chunk the better the cuts for fragile whole chunks. 6. What thickness are you sectioning it at? IIRC from people who did this to acquire live slabs for electrophysiology, they sectioned slabs at around 200 um and not thinner than that. They may even have isolated parts of the brain out beforehand, like manually cutting out things outside the hippocampus by hand with a razor blade. But if you are trying to do some other application where the tissue does not have to be still alive, and you are staining it in a way that needs to avoid fixation, I recommend not using a vibratome and instead a cryostat. Let me know your application so I can better advise.
@caglarozdemir7384 a year ago
@@kruneuro Thank you very much for the detailed answer! I am also trying to get 200 um sections. I want to try Golgi-Cox staining on sectioned tissue, and fixing it reduces neuronal staining and increases glial staining, which is why I try to section fresh hemispheres. I always glue the mid-sagittal plane to the plate (I never did it with chunks as you did in the video). A couple of days ago I tried filling the chamber with cold buffer and ice and kept the temperature below +5 C, but that didn't help. Before that I tried 4% agarose and failed again. I also tried a sliding microtome with dry ice, and that didn't work either. Since we don't have a cryostat in our lab, I prefer to reserve renting one as a last resort for now. As for whether I need living tissue or not, I really don't know. In the usual procedure you place the fresh hemisphere into the impregnation solution and section it a couple of weeks later. However, doing transcardiac perfusion to fix the brain and then placing it in the solution also works, although with worse staining as I mentioned. So since fixation does not entirely stop the staining, I guess I don't really need living tissue? Idk.
@kruneuro
@kruneuro Жыл бұрын
@@caglarozdemir7384 So I started reading into Golgi-Cox staining due to my unfamiliarity with it. I was using this document: www.ncbi.nlm.nih.gov/pmc/articles/PMC4814522/ I have a few more ideas and some follow-ups to your points:
1. It looks like keeping the tissue non-fixed is a must for better staining quality, so I don't advise changing things up on that front.
2. Are you making sure that the base of the tissue is adhering properly to the vibratome? There must be no agarose between the brain and where it attaches to the vibratome.
3. The fact that you're doing sagittal cuts addresses my prior concerns about brain chunk height vs. base width; it should be fine to section a whole hemisphere sagittally without pre-cutting it into slimmer slabs. The height issue is mainly for coronal sectioning.
4. Your cold buffer idea was good - it would make sense for it to stiffen the brain up, though it's unfortunate that it didn't quite do the job.
5. The linked document notes that a 60 Hz blade frequency and an advancement speed of up to 15 mm/s are best. I feel like that advancement speed is still a bit too fast.
6. I also noticed that the authors use "tissue protectant", which is the recipe you will find online for "cryoprotectant" and is more accurately called anti-freeze. My guess is that even though the tissue is not being frozen or stored in a freezer, the substantial amount of sugar and ethylene glycol might stiffen up the tissue after it has time to absorb it. I feel like that last detail might be what helps.
@ouafaeslife123
@ouafaeslife123 Жыл бұрын
Hi, thanks for the video. I have a question: can you tell me which parts of the brain the coordinates x,y,z = (3, 20, 36) and x,y,z = (5, -85, -5) correspond to? I'm really wondering about that. Thank you.
@kruneuro
@kruneuro Жыл бұрын
Hi there, These look like human brain coordinates, and unfortunately I don't have experience with those nor do I have a human brain atlas. For a direct answer, you may want to contact someone who does MRI research.
@TaylorSkibicky
@TaylorSkibicky Жыл бұрын
What blades are those? I'm learning how to use this vibratome now, and I'm finding it hard to make sure the blade is straight so my cuts are not off. I don't think the blades I am using are right for the blade holder.
@kruneuro
@kruneuro Жыл бұрын
The ones I use in the video and have used previously are Feather double-edge razor blades. Previously I broke those in half by bending them, but as this produces a slight curve, I've since cut through the connecting metal with scissors. Usually a whole, unbroken blade in there will not allow enough clearance if the tissue chunk is tall; otherwise, I imagine there is some potential for the blade to bend during cutting if the tissue or embedding medium is at all resistant. I know there are other blades out there that look like they *might* work, but they may have issues on a vibratome. For instance, longer disposable blades are more suited for cryostats, and the thick metal wedge blades are for cryotomes or microtomes (and they're far too heavy for a vibratome). If you're still having trouble, you can send pictures via email. My contact info is on my website, which you can find in my channel info.
@RaquelTips
@RaquelTips Жыл бұрын
Hi! Thanks for your complete and useful video! I have a question. My version of ImageJ does not show Color Histogram, only Histogram. I need the gray-value distribution for the greens, and the frequency. Do you know how I can get those values? Thanks!
@kruneuro
@kruneuro Жыл бұрын
Hello! It may be the case that you are using the prior version of ImageJ that lacks the various plug-ins. The newer version is FIJI, or ImageJ2 with add-ons. imagej.net/software/fiji/ Give that a try and let me know if you still have issues.
@RaquelTips
@RaquelTips Жыл бұрын
@@kruneuro thank you!
@natybernal8838
@natybernal8838 Жыл бұрын
Thank you for this video. It really helped me to understand
@PetraKraus-gu6wj
@PetraKraus-gu6wj Жыл бұрын
Thank you!
@santhoshisahani6648
@santhoshisahani6648 Жыл бұрын
Hi, I would like to contact you regarding fibrosis quantification. May I get your email ID, or may I request that you please upload a video on fibrosis quantification? Thank you.
@DoctorV_
@DoctorV_ Жыл бұрын
This is really fantastic; thank you for taking the time to put these up. Have you published Approach 3 anywhere? I'd like to cite you if possible. Additionally, I'm trying to apply the threshold tool to separate true Oil Red O stain from noise generated by the colour deconvolution. Have you tried this before?
@kruneuro
@kruneuro Жыл бұрын
I appreciate the potential citation. Unfortunately, I haven't published work using that specific approach - I only have a lot of older data from which I extracted RGB values, and I have no desire to go back through and deconvolute all of those images! I think your approach for noise reduction sounds reasonable. I know that deconvolution can leave in a bit of background, so applying the threshold tool in as consistent of a manner as possible is a good idea.
@mariainesmercado4370
@mariainesmercado4370 2 ай бұрын
You have to publish these analyses so we can cite the references!!!
@mariabelenolivares7745
@mariabelenolivares7745 Жыл бұрын
Thanks a lot for this useful tip! Where can I find the next steps to selectively quantify the blue part of the image?
@kruneuro
@kruneuro Жыл бұрын
I recently posted another video on my channel that should help you: kzbin.info/www/bejne/hWGvfpuJlK-nfdU (Title starts with "Comparing color intensities..." if the link doesn't work.)
@erikamariadebilio3190
@erikamariadebilio3190 Жыл бұрын
So useful for my exam!! THANK YOU
@kruneuro
@kruneuro Жыл бұрын
Glad this was helpful!
@diyarajesh9707
@diyarajesh9707 Жыл бұрын
Dear Kevin, Thank you so much for this video! I am a third-year medical student in the UK doing a lab-based dissertation for a semester. I am only 19 and have NO idea what I'm doing. My professor has just told me that we will be doing TSA on mouse brain slices, and "confused" is an understatement. Moreover, I have absolutely zero clue how to write a 10,000-word dissertation on something I don't even know how to pronounce. This video really helped explain a lot of what I'm about to do tomorrow. Keep going! Regards, A very confused and sleepy student :)
@kruneuro
@kruneuro Жыл бұрын
Thanks for the support! Do note that I also have a live demo video that accompanies this, aiming to visualize the nuance & logistics of actually doing the technique. Good luck, and feel free to ping me back if you have follow-up Qs.
@mellanieferreira3491
@mellanieferreira3491 Жыл бұрын
Thank you again for these amazing tutorials and explanations. You have helped me A LOT. Seriously. I was struggling with these analysis before finding your videos. I took notes of everything and watched it many times. Thanks a million!!
@kruneuro
@kruneuro Жыл бұрын
Glad I could help!
@kruneuro
@kruneuro Жыл бұрын
Hi all - If you have questions about isolating specific colors in ***brightfield*** images, check my video description above for a link to a new, more helpful video. Viewers who work with fluorescent microscopy will still probably find this video helpful.
@bhawnapandey4375
@bhawnapandey4375 Жыл бұрын
Hi Kevin, I am trying to count bubbles in my image. So, can the particle-counting process be applied to brightfield images, i.e., unlabelled ones?
@kruneuro
@kruneuro Жыл бұрын
Hello - yes, it should still be applicable. Staining isn't required as long as your targets (the bubbles) contrast with the background. Even if the image is in color, I think regular thresholding will convert it to grayscale first; if it doesn't, change the image type to 8-bit first. Then, check the "include holes" option before analyzing particles, and avoid the Process -> Binary -> Watershed function, as it will fragment the bubbles. Hopefully this is helpful.
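For readers comfortable with scripting, the threshold-then-count workflow above can also be sketched outside of ImageJ. Below is a rough, hypothetical Python/NumPy stand-in for thresholding plus Analyze Particles (the function name and toy image are my own, not part of ImageJ):

```python
import numpy as np

def count_bright_blobs(gray, threshold=128, min_area=5):
    """Count connected bright regions (e.g. bubbles) in a grayscale
    image: a rough stand-in for ImageJ's threshold -> Analyze Particles
    workflow. Uses 4-connected flood fill; blobs under `min_area`
    pixels are ignored as noise."""
    mask = gray > threshold                      # binarize, like thresholding
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                # flood-fill this component and measure its area
                stack, area = [(i, j)], 0
                seen[i, j] = True
                while stack:
                    y, x = stack.pop()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if area >= min_area:
                    count += 1
    return count

# toy image: two bright "bubbles" on a dark background
img = np.zeros((20, 20), dtype=np.uint8)
img[2:6, 2:6] = 200
img[10:15, 10:16] = 220
print(count_bright_blobs(img))  # -> 2
```

A real bubble image would need a tuned threshold (e.g. Otsu's method) and possibly inverted logic if the bubbles are darker than the background.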
@oliviakang5657
@oliviakang5657 Жыл бұрын
Thank you very much for your tutorial. I have a question regarding the calculation for an RGB brightfield image. I am using Oil Red O staining; could you briefly walk me through how to get the mean overall intensity for the area of interest after you get the red, green, and blue mean values? Is it to use the average of the sum of the 3 channels from the color histogram, even though the stain is red in this case against a white background? That would mean I'd get a higher mean (R+G+B) for specimens with a lower signal, but a lower mean value for those with red signal for the fat? Thank you again!
@kruneuro
@kruneuro Жыл бұрын
Hi Olivia, I've been meaning to do another video tutorial on this. It seems that my approach for just obtaining the mean color values may be problematic for doing comparisons between brightfield images, due to how all three colors contribute to white. I think of how a good red stain could be compared to a poor/faded red stain, and it's not as straightforward as it seems. Indeed, a faded stain might have higher R,G, and/or B values due to being closer to white - whereas intuition would otherwise make us think that the R value should be higher (but that's not how it works!). I'm going to test a few things in ImageJ, put up a video this week, and then link it here in a reply once I'm done. You'll also note that another user or two asked about the Oil Red issue, so I think my video will focus on that as an example.
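The "faded stain can score higher" point above is easy to demonstrate numerically. Here is a toy NumPy check with made-up pixel values; the `redness` formula is just one illustrative way to discount the white contribution, not an established standard:

```python
import numpy as np

# Why a *fainter* red stain can show a *higher* mean red value:
# in RGB, white is (255, 255, 255), so fading toward white raises
# all three channels, including red. Toy pixels (hypothetical values):
strong_red = np.array([180, 20, 20])    # saturated Oil Red O pixel
faded_red  = np.array([230, 170, 170])  # washed-out pixel, closer to white

print(strong_red[0] < faded_red[0])  # -> True: the faded pixel has MORE red

# One illustrative "redness" score subtracts the other channels' average,
# so white (equal channels) scores near zero:
redness = lambda p: p[0] - (p[1] + p[2]) / 2
print(redness(strong_red) > redness(faded_red))  # -> True
```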
@kruneuro
@kruneuro Жыл бұрын
Hi again - I've now made and posted that new video tutorial with detailed explanations on how the color systems work. Please see here: kzbin.info/www/bejne/hWGvfpuJlK-nfdU
@blauhimmelsky
@blauhimmelsky Жыл бұрын
How many slices do you put into each netted well?
@kruneuro
@kruneuro Жыл бұрын
This is a good question without a simple answer. It can vary depending on the following factors:
1. antibody concentration in solution,
2. section thickness,
3. amount of antigen present in each section (more ubiquitous antigens pull more antibody out of solution),
4. level of solution,
5. speed/rpm/shaking force applied to the netted well(s),
6. how much the sections clump or attach to each other,
7. width/height of the sections relative to the wells, and whether they fold easily or even too excessively, and
8. addition of reagents that aid antibody penetration into tissue (usually Triton X-100, Tween-20, or other surfactants & tissue permeabilizers).
There might even be other factors I'm not aware of. But to give you at least some estimate: I've done well with 3-4 formaldehyde-fixed adult rat brain sections (50 um thick) per netted well, half full with solution, set to shake just enough to move the sections. This is using the netted wells from Ted Pella depicted in the video, and the 6-well insert + basin holds ~8 mL when filled to about half height before sections are added. The antibody concentration is usually 1:2000 (0.5 ug/mL) for antigens that aren't too ubiquitous; DeltaFosB primary antibodies work well enough at this amount, whereas something found everywhere, like a GABA receptor antibody, would probably need to be between 1:100 and 1:500. I haven't tried varying the secondary and tyramide dilutions accordingly (I really should at some point), but when I use either of those at 1:300-1:500 it usually suffices to label the target appropriately. One consideration is that I hemisect my brain tissue, as each hemisphere can be considered a backup if the target is expressed equally in both hemispheres. As such, I see no need to stain intact (both-hemisphere) sections when I can just use half-sections; the former would eat up more antibody from solution and give extra info that I honestly wouldn't spend much time analyzing.
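As a side note on the dilution arithmetic above (e.g. 1:2000 in an ~8 mL basin), the stock volume for any 1:N dilution is a one-liner. This is a hypothetical helper of my own, not from any particular protocol, and it assumes the stock is used neat:

```python
def stock_volume_ul(dilution_factor, final_volume_ml):
    """Microliters of antibody stock needed for a 1:N dilution,
    e.g. stock_volume_ul(2000, 8.0) for 1:2000 in 8 mL of buffer.
    (With a 1 mg/mL stock, 1:2000 works out to 0.5 ug/mL.)"""
    return final_volume_ml * 1000.0 / dilution_factor

print(stock_volume_ul(2000, 8.0))  # -> 4.0 (uL of stock into 8 mL)
print(stock_volume_ul(300, 8.0))   # ~26.7 uL for a 1:300 secondary
```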
@martinsaad4891
@martinsaad4891 Жыл бұрын
Thanks a lot for this video. I would like to ask: if I want to measure the length of this part, how can I do it?
@kruneuro
@kruneuro Жыл бұрын
Could you clarify what you mean by "take the length" and which part you are referring to?
@mellanieferreira3491
@mellanieferreira3491 Жыл бұрын
Hi Kevin, thank you for sharing this lecture; it helps a lot. I have a question about my research analysis: I need to quantify the intensity of purple in a picture (which is the extracellular matrix), but I'm working with a metachromatic stain (toluidine blue), so the nuclei around my sample are very dark blue. Will I have to measure at a higher magnification to select only small pieces of matrix, so I don't get interference from the blue nuclei? And in that case, should I look at the blue or red values? Or is it better to convert to 8-bit and measure the mean gray value? Sorry for so many questions, but I am really confused about which method best represents my data, and your experience would help me a lot 🙏
@kruneuro
@kruneuro Жыл бұрын
No worries about the questions. I wish I had been able to answer them more promptly, but this semester had a rough workload.
I think that it can be difficult to filter out similar colors, so I usually advise isolating the specific areas of interest (the extracellular matrix in your case) via higher magnification. BUT, if going back and snapping more microscope pics is not feasible, then I think color deconvolution can separate the blue and the purple. Depending on the settings, it could isolate the image into blue and red or blue and purple components; I'm less certain it can do the latter, though the latter is more of what you want in your case.
I was testing out some things with an H&E stain, which is somewhat similar to what you are working with, even if not identical. Color deconvolution produced one image of only blue nuclei, one image of green haze (to be ignored), and one image of pink matrix-type substance. Worth noting is that the pink image still did not have "holes" where the nuclei were; it was still pink in those areas. In this regard, I have concerns about trying to assess matrix staining efficacy by selecting a broad area - including the cell nuclei - if the influence of the nuclear stain cannot be excised from your analysis.
I guess a follow-up question you could answer is: when you use your matrix stain alone, are there holes where nuclei exist? If so, I think the higher-magnification option will be a better fit, as color deconvolution doesn't seem to carefully excise the influence of the nuclear staining from the matrix stain channel; only the reverse seems to work (no matrix staining in the nuclear staining channel).
@kruneuro
@kruneuro Жыл бұрын
One additional thing I did find from further playing around is that the nuclei can be subtracted out of the image. After performing the above color deconvolution, I took the nuclear stain image, converted it to 8-bit grayscale via Image -> Type (might need to toggle to RGB and then 8-bit to get it to work), and then I inverted it via Edit -> Invert. I then took the EM stain image and converted it to an RGB image type. Finally, I went to Process -> Image Calculator, and then put the EM stain first, function Add, and the inverted nuclear stain next. The end result was that the EM stain now has white holes where the nuclei were, without otherwise affecting other parts of the image color or intensity-wise.
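The invert-then-add trick above works because, in 8-bit images, the "Add" operation saturates at 255 (white). Below is a rough NumPy re-implementation with hypothetical array names, just to show the arithmetic; this is not ImageJ's actual code:

```python
import numpy as np

def blank_out_nuclei(stain_img, nuclei_img):
    """Whiten nuclear regions in a stain image: invert the nuclear
    channel (dark nuclei become bright), then add it to the stain
    image with saturation at 255, mimicking ImageJ's Edit -> Invert
    followed by Process -> Image Calculator with 'Add'.
    Both inputs are 8-bit grayscale arrays (0 = black, 255 = white)."""
    inverted = 255 - nuclei_img.astype(np.int16)      # dark nuclei -> bright
    combined = stain_img.astype(np.int16) + inverted  # sums may exceed 255
    return np.clip(combined, 0, 255).astype(np.uint8) # saturate, like 'Add'

# toy 2x2 example: a dark nucleus in the top-left pixel only
stain = np.full((2, 2), 100, dtype=np.uint8)           # uniform matrix stain
nuclei = np.array([[30, 255], [255, 255]], dtype=np.uint8)
out = blank_out_nuclei(stain, nuclei)
print(out)  # top-left becomes 255 (white hole); other pixels stay at 100
```

The `astype(np.int16)` step matters: adding two `uint8` arrays directly would wrap around past 255 instead of saturating.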
@mellanieferreira3491
@mellanieferreira3491 Жыл бұрын
@@kruneuro Thank you SO much. I will try all your suggestions and see what works best for my samples. You've helped me a lot, really. Happy holidays!
@kruneuro
@kruneuro Жыл бұрын
@@mellanieferreira3491 Hi again - I've created a new video that may be helpful for your continued image analysis. Please see here: kzbin.info/www/bejne/hWGvfpuJlK-nfdU
@navnath6188
@navnath6188 Жыл бұрын
how to calculate total very value of Xray image in this software thanks
@kruneuro
@kruneuro Жыл бұрын
Did you mean total gray value? If so, I suspect you want the mean gray value. If you want to do it for the whole image, you make sure to either select the entire image or otherwise select nothing, and then use the Measure function, which will give the average gray value in the Mean column. Let me know if this is not what you were looking for.
@TaniaRodeznoAntunes
@TaniaRodeznoAntunes Жыл бұрын
Hello! I was wondering what kind of netted inserts you were using? I am new to free-floating IHC and am trying to figure out a good protocol. Thank you!
@kruneuro
@kruneuro Жыл бұрын
In these videos I am using Ted Pella's Prep-Eze. However, I recall these small ones being rather expensive - $100 each. Thus, I replicated the design for 3D printing, and those designs can be found on my website link in the video description. You'll still need to buy the net from any sort of industrial supply company (I got mine from McMasterCarr - some sort of nylon mesh net) and then cut+superglue it on, but it should work the same for far less of a cost.
@TaniaRodeznoAntunes
@TaniaRodeznoAntunes Жыл бұрын
@@kruneuro Thank you very much!
@medicynn
@medicynn Жыл бұрын
Hey, how'd you get the color histogram?
@kruneuro
@kruneuro Жыл бұрын
The option should be under the Analyze menu. If it is not, you may have a different version of ImageJ. I recommend the "FIJI" version (a.k.a. "distribution") that has various plug-ins installed and calibrated. imagej.net/imaging/
@ayahhamdan1649
@ayahhamdan1649 Жыл бұрын
Hi! Thanks for this video. Do you have any tutorials on how to use imageJ to count co-localization of c-fos cells and another red-reporting cell?
@kruneuro
@kruneuro Жыл бұрын
Hello there! I do not yet have my own protocol for colocalization, even though it is a measurement of much interest to many. From what I recall reading about it, it is definitely a bit more complicated, involving color thresholding. A brief search & viewing makes me think this might get you part of the way there: kzbin.info/www/bejne/qIPce6F8pJpqrrc . Unfortunately, it seems the video focuses on percent overlap and correlations rather than discrete counts. For such counts, I think the thresholding needs to eliminate other single-color targets from the image, and then counts should be doable normally via the Analyze Particles tool (as the image should be binary black & white by then). Other videos exist on the subject but may not give straightforward & quick answers for ImageJ... so that's a video I should do at some point.
One alternative is to do regular thresholding and Analyze Particles on the individual green and red channels (after they are grayscaled) to figure out how many red and green cells there are, also taking note (in the same results) of how much area each occupies. Then, on the multi-color image, you can do color thresholding to isolate the yellow and take note of its area measurement. You can then divide the "yellow area" number by the "red area" number to get a fraction, and multiply that fraction by the total number of red cells. That roughly gives you "the number of red cells that are also yellow". The same procedure can be done for green.
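The area-fraction shortcut in that last paragraph is simple arithmetic. A sketch with a hypothetical function name and made-up numbers; note it assumes cells are roughly uniform in size, so that the area fraction approximates the count fraction:

```python
def estimate_double_labeled(overlap_area, channel_area, channel_count):
    """Rough estimate of cells in one channel that are also labeled in
    the other, via the area-fraction shortcut: (yellow area / red area)
    scaled by the red cell count. Areas in the same units (e.g. px^2)."""
    if channel_area == 0:
        return 0.0
    return (overlap_area / channel_area) * channel_count

# e.g. 300 px^2 of yellow overlap within 1200 px^2 of red, 80 red cells:
print(estimate_double_labeled(300, 1200, 80))  # -> 20.0 double-labeled cells
```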
@Mirabell97
@Mirabell97 Жыл бұрын
Thanks a lot for the tutorial - really helpful so far! I'm using Oil red O staining and would like to compare the intensity of red between images/strains. Is it fair to divide mean(red) by mean(mean(red,blue,green)) and compare the resulting values to determine the intensity of red while not considering overall brightness? Again, thanks a lot!
@kruneuro
@kruneuro Жыл бұрын
Absolutely! I had been thinking about this and intend to make a brief video on it. I work mostly with fluorescence (black background), so I needed to rethink how to subtract a white background in brightfield examples. Your example of dividing out the combined RGB means is what will do the trick, especially if applied to a full image rather than a selection. I have a separate video on color deconvolution if that is helpful for you as well.
@Mirabell97
@Mirabell97 Жыл бұрын
@@kruneuro Thanks a lot for reassuring me that that approach might work!
@kruneuro
@kruneuro Жыл бұрын
@@Mirabell97 I thought more about that equation you devised, and I am unsure if it's the best way to isolate red values minus white in order to compare red values between images. I'll need some time to work on that idea, as there might be multiple ways to approach it. Some methods might involve either the Color Threshold tool or the Subtract option as in my other video, but I haven't gotten them to work in the right way yet. I'll try to keep you posted.
@Mirabell97
@Mirabell97 Жыл бұрын
@@kruneuro that would be great! At least for the few images I've tested so far the values I get seem representative for the intensity of red I see - but since I don't really understand the theory behind it, I'd appreciate any feedback/advice!
@kruneuro
@kruneuro Жыл бұрын
@@Mirabell97 Hi again - I finally had time to investigate the weird color measurement issues we discussed. Please see my new video here, if you still have questions on your analysis approach: kzbin.info/www/bejne/hWGvfpuJlK-nfdU
@gengpan
@gengpan 2 жыл бұрын
Will try this
@gengpan
@gengpan 2 жыл бұрын
I have an Evans blue staining sample, in which BLUE is the positive signal. In this case, measuring gray is not accurate and actually gives an obviously wrong conclusion. How can I measure only blue in this case?
@kruneuro
@kruneuro 2 жыл бұрын
Hi again: For your method, you'll want to use the "color histogram" function rather than the simpler "measure" function. The color histogram is best applied for color-picture light/brightfield microscopy situations like yours, and the measure function is best applied for either monochrome brightfield or single-channel fluorescence photos. So, go to 33:40, perform the steps, and specifically look at the "blue" row of the Results window. When you compare specimens where one is positive (much Evans blue staining) and one is negative (little to no Evans blue), you'll acquire the "blue" means for both specimens to quantify the difference between them.
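For clarity on what the Color Histogram reports: it is essentially the per-channel mean over the selected pixels. Here is a NumPy sketch with a made-up 2x2 image (not ImageJ's actual code); it also shows why a white background inflates the blue mean, since white maxes out all three channels:

```python
import numpy as np

def channel_means(rgb):
    """Mean of each channel over all pixels, like the R/G/B rows of
    ImageJ's Color Histogram results (rgb is an HxWx3 uint8 array)."""
    return rgb.reshape(-1, 3).mean(axis=0)

# toy image: one pure-blue pixel plus three white background pixels
img = np.array([[[0, 0, 255], [255, 255, 255]],
                [[255, 255, 255], [255, 255, 255]]], dtype=np.uint8)
r_mean, g_mean, b_mean = channel_means(img)
print(b_mean)  # -> 255.0: the white pixels max out blue too,
               #    so "more white" can read as "more blue"
```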
@gengpan
@gengpan 2 жыл бұрын
@@kruneuro I tried. The issue I saw is that the negative group has a higher mean blue value, even though by eye there is no blue staining...
@kruneuro
@kruneuro 2 жыл бұрын
​@@gengpan Sorry about the delayed reply. That issue is odd: the blue value should be near or equal to zero if there is no blue in the image. However, other colors can contribute to the blue value, specifically whites/grays, yellows and purples. If such colors exist in the image, you can attempt color deconvolution to separate things out and re-attempt the intensity measurement or color histogram measurement. kzbin.info/www/bejne/rp2ZpGOgndRsaq8 You'd have to tweak the setting to your needs, but hopefully this resolves things.
@gengpan
@gengpan 2 жыл бұрын
@@kruneuro thank you
@mellanieferreira3491
@mellanieferreira3491 Жыл бұрын
@@kruneuro Hi. I had the same issue as above. I think it's because the positive stain is a dark blue and the negative is a light one, so whatever is near white ends up higher in the color histogram, right? How can we solve that? By calibrating optical density?
@gengpan
@gengpan 2 жыл бұрын
Why choose the red channel for the gray intensity analysis?
@kruneuro
@kruneuro 2 жыл бұрын
I had to take a quick look through the video to see which part you were referring to; it is ~18:45, correct? I select the red coloring in this specific example because this particular stain (a diluted cresyl violet stain) only produces fluorescence in the red emission channel. And although I could capture it as red, I just convert it to grayscale for better visibility on screen. Whether I keep it red or I keep it gray does not affect how the measuring process or results go - both a red-only image (sometimes called "indexed") and a gray-only image still have shades from 0 to 255. To your question again: you can select any fluorescence color channel to perform these measurements or convert to grayscale - I just chose red in this example.
@gengpan
@gengpan 2 жыл бұрын
Then how do I measure a single color's intensity, if a dimmer blue actually gives a very high blue-mean value in the "color histogram"?
@kruneuro
@kruneuro Жыл бұрын
Hi there - I know it has been a while since we corresponded, but I posted a new video that has different approaches to color analysis. See here, in case it is helpful: kzbin.info/www/bejne/hWGvfpuJlK-nfdU
@gengpan
@gengpan Жыл бұрын
@@kruneuro thank you.