Thank you very much for these videos, Professor Andrew. They have been extremely helpful in understanding image reconstruction in PET/CT.
@AndrewJReader 3 years ago
Thanks for your feedback, very glad to hear they are helpful! There are theory ones as well on my channel, in addition to programming / implementation.
@walidarabat7548 2 years ago
Thank you so much for this amazing demonstration. I hope to see many more videos using up-to-date deep learning network architectures.
@AndrewJReader 2 years ago
Many thanks Walid. I've now prepared a video on putting a CNN into iterative reconstruction: kzbin.info/www/bejne/eIm7faKMrqaHZrM
@李继涛-c3x 3 years ago
This is very useful; thank you especially for your video. I hope to see more videos from your channel in the future.
@AndrewJReader 3 years ago
Thanks for your feedback. I do hope to do more videos, including programming ones.
@alisejk 2 years ago
Dear Andrew! Thank you for these videos about the reconstruction techniques; they are very instructive and clear. I implemented the MLEM method in C++ and a question came up that I could not answer, and I am sure you can help me. In my simulation I model the measured counts using Poisson statistics. The beam intensity I0 is a parameter of the simulation, and it is attenuated by the phantom. As you discussed, the standard method is to calculate the line integrals as ln(I0/I_meas), but this causes two problems. 1) If I_meas == I0 then the line integral is zero, which has to be shifted by a small number when calculating the "Ratio" before backprojection. 2) With Poisson statistics, ln(I0/I_meas) can be negative (when noise makes I_meas > I0) and has to be clipped to zero, otherwise MLEM does not converge. These workarounds modify the original statistics that we are fitting to. Both problems can be avoided if we use the detected counts directly instead of fitting to ln(I0/I_meas); the only change is that we have to use the inverted "Ratio". My question is: why is using the direct counts not the "normal" approach? My basic calculation shows that fitting to the direct counts converges, but I did not investigate other aspects, e.g. speed of convergence. Best regards, Alex
@AndrewJReader 2 years ago
Dear Alex, Thanks for your comment and question. I assume you are doing a CT reconstruction of the linear attenuation coefficients, as you mention taking the log of the ratio of two datasets (presumably the data without the object (I0) and the data with the object present (I_meas)). That will allow creation of a dataset corresponding to line integrals through the object (subject to other corrections being made). Let's call that processed dataset m, which in this Python video for MLEM is just called the "sinogram" (the vector m in the equation shown). So I think it sounds like you will need to modify your implementation. For Poisson-noisy data you can still do the above, but it is not the best approach. It is possible to do a transmission (CT) version of MLEM - you can find some very old work that I did on that here: ieeexplore.ieee.org/document/1352238 That's just a starting point - there are better methods available (I am not an expert on transmission tomography / CT reconstruction). Hope that helps? Best regards, Andrew
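To make the pre-log step discussed above concrete, here is a minimal sketch in Python. It assumes I0 and I_meas are sinogram-shaped arrays of counts; the function name and the clipping threshold are illustrative, not taken from the video:

```python
import numpy as np

def counts_to_line_integrals(I_meas, I0, eps=1e-9):
    """Convert transmission counts to line-integral data m = ln(I0 / I_meas).

    Clipping negative values (which occur for noisy data when I_meas > I0)
    alters the Poisson statistics - exactly the issue raised in the comment above.
    """
    ratio = np.clip(I_meas, eps, None) / np.clip(I0, eps, None)
    m = -np.log(ratio)            # equals ln(I0 / I_meas)
    return np.clip(m, 0.0, None)  # clip negatives caused by noise
```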
@TheLazyEngineer 3 years ago
Thanks! I've been enjoying your lectures.
@AndrewJReader 3 years ago
Many thanks for your comment!
@TheLazyEngineer 3 years ago
@@AndrewJReader Can you explain how the matrix A is typically constructed? I understand it's an approximation to the Radon transform, and it needs to map a discretized function (pixel values in this case) to data points in the sinogram. That means A_ij represents the contribution of the j'th pixel to the i'th data point in the sinogram. So, how is this contribution typically modelled?
@AndrewJReader 3 years ago
@@TheLazyEngineer For the discrete Radon transform, the contribution a_ij is usually modelled as the intersection length of line of response i with pixel j, using methods like the Siddon algorithm. For modelling real PET data, the system matrix A is factorised into easy-to-model components, as shown in this video: kzbin.info/www/bejne/pn7bZpWohspor6c (4 minutes in)
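As a rough illustration only (not Siddon's exact parametric method), the intersection-length weights for one line of response can be approximated by sampling points densely along the line and accumulating path length per pixel. The endpoints p0 and p1 are assumed to be given in pixel-unit coordinates:

```python
import numpy as np

def lor_weights(p0, p1, image_shape, n_samples=2000):
    """Approximate a_ij for one LOR i: path length of the line from p0 to p1
    inside each pixel j, estimated by dense point sampling along the line."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    length = np.linalg.norm(p1 - p0)
    step = length / n_samples                      # path length per sample
    t = (np.arange(n_samples) + 0.5) / n_samples   # midpoints of sub-segments
    pts = p0[None, :] + t[:, None] * (p1 - p0)[None, :]
    rows = np.floor(pts[:, 0]).astype(int)
    cols = np.floor(pts[:, 1]).astype(int)
    weights = np.zeros(image_shape)
    inside = (rows >= 0) & (rows < image_shape[0]) & \
             (cols >= 0) & (cols < image_shape[1])
    np.add.at(weights, (rows[inside], cols[inside]), step)
    return weights  # weights[j] approximates the intersection length with pixel j
```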
@danish07delhi 2 years ago
Thanks a lot, very good tutorials and exercises for beginners to get started!
@AndrewJReader 2 years ago
Many thanks for the feedback, really appreciated
@danfernandez1269 2 years ago
Great video resource to start learning from. I was curious if you had any reference recommendations on how to incorporate images from different source activity or DECT.
@AndrewJReader 2 years ago
For different distributions, different ground truth objects (images) can be used. For example, in a more recent video I included a CT slice: kzbin.info/www/bejne/jZiqXod8bb2kf6M. Not sure if I have answered your question though?
@user.1903 1 year ago
This video helped me a lot to understand the implementation of the math equations used in PET recon, thank you very much Andrew. I have just a few questions: Would the code structure change for realistic 3D projection data matrices, rather than just one slice from the phantom? And how could we apply the attenuation, scatter, etc. corrections in the recon - would they be applied in the sensitivity matrix?
@AndrewJReader 1 year ago
That's great to hear, glad it helped! Yes, you would need a 3D projector that works for your particular scanner geometry. A 3D projector would operate on a 3D image, and project into a fully 3D set of sinograms. Attenuation and other corrections: perhaps best to take a look at this video: kzbin.info/www/bejne/pn7bZpWohspor6c and this one: kzbin.info/www/bejne/fGi1YoN5bLugqpI
@daniehlblinder2716 1 year ago
Hello Dr. Reader, Thank you very much for this very helpful video. I have a quick question about the way that the skimage library handles the backprojection. After reading the documentation of the library, it seems like the "iradon" function uses a filtered backprojection algorithm to create the correction factor from the sinogram ratios (m/Ax^k in your video). Would there be any way of handling this backprojection by determining the system matrix and then multiplying the ratio by its transpose, since the inverse of the projector is what we are looking for when using ML-EM? I attempted to compute the system matrix by applying the radon function from the skimage library to a zero matrix with a single element set to 1, but it appears as though the radon function uses a non-conventional algorithm, as the dimensions of my output matrices did not make sense. Of course this can easily be done with MATLAB, but would you happen to be aware of any other Python library that could be used to determine the system matrix? Best regards, Daniehl Blinder
@AndrewJReader 1 year ago
Hi Daniehl, Thanks for the feedback and questions. Yes, iradon will default to doing the inverse of the Radon transform by doing filtered backprojection. In this video I use iradon with no filter, and as a result it does backprojection only - and this is exactly what is needed by MLEM (i.e. the transpose of the forward model is needed, not the inverse of the forward model). To make your own system matrix, simply step a point source through every pixel position, and save the sinogram in each case as a column vector of the system matrix. I had been thinking of doing a video to show this. Dimensions of the forward projection may vary depending on whether a circular FOV is used or not. I avoid the circular FOV, and hence I have sqrt(2) times more bins in the radial direction for the 45 degree projection angle. Hope that makes sense.
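A sketch of the point-source approach described above, using skimage's radon as the forward projector. The sizes in the usage comment are hypothetical, and circle=False corresponds to the non-circular FOV mentioned; this is only practical for small 2D images:

```python
import numpy as np
from skimage.transform import radon

def build_system_matrix(image_shape, theta):
    """Forward-project a unit impulse at every pixel position and store each
    resulting sinogram as one column of the system matrix A."""
    n_pixels = int(np.prod(image_shape))
    impulse = np.zeros(image_shape)
    columns = []
    for j in range(n_pixels):
        impulse.flat[j] = 1.0
        sino = radon(impulse, theta=theta, circle=False)  # circle=False keeps the full FOV
        columns.append(sino.ravel())
        impulse.flat[j] = 0.0
    return np.stack(columns, axis=1)  # shape: (n_sinogram_bins, n_pixels)

# Example with hypothetical sizes: A for a 32x32 image over 60 angles
# theta = np.linspace(0.0, 180.0, 60, endpoint=False)
# A = build_system_matrix((32, 32), theta)
```

Even for a 32x32 image the matrix already has 1024 columns, which is why storing A explicitly quickly becomes impractical, as noted further down the thread.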
@daniehlblinder2716 1 year ago
Thank you very much for taking the time to help me with my issue. I was able to fill my system matrix appropriately by using a circular FOV, and now see that the issue with this method is the size of the matrix, which is why you used the iradon function. Thank you again for posting these videos, as they helped me and countless others get a good understanding of PET iterative reconstruction models.
@AndrewJReader 1 year ago
Makes sense - storing the system matrix is only really viable for 2D reconstruction (and hence fully 3D sinogram data would need rebinning). Thanks again for the feedback.
@learn2444 1 year ago
Thank you for this wonderful explanation and practice. I have a question: can we use FBP instead of BP in ML-EM?
@AndrewJReader 1 year ago
Thanks for your comment! It is possible to use FBP instead of BP in the iterative update - but it would no longer be seeking a Poisson maximum-likelihood estimate. It would likely accelerate the method towards a (not well-defined) reconstruction, yet could result in instabilities if not implemented carefully.
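For context, here is a minimal sketch of the standard MLEM update with skimage's radon/iradon, where iradon is used with no filter so that it acts as plain backprojection (A^T) rather than FBP. It is not the exact code from the video: a square image is assumed, the sinogram is assumed to have been produced by radon with the same theta and circle=False, and the filter_name argument follows the current scikit-image API (older releases use filter):

```python
import numpy as np
from skimage.transform import radon, iradon

def mlem(sinogram, theta, image_shape, n_iters=20, eps=1e-12):
    """MLEM sketch: x <- x / s * A^T( m / (A x) ), with A = radon and
    A^T = unfiltered iradon. The global scale applied by iradon cancels
    between the backprojected ratio and the sensitivity image."""
    x = np.ones(image_shape)
    # Sensitivity image s = A^T 1 (backprojection of a sinogram of ones)
    sens = iradon(np.ones_like(sinogram), theta=theta, filter_name=None,
                  circle=False, output_size=image_shape[0])
    for _ in range(n_iters):
        fp = radon(x, theta=theta, circle=False)           # A x
        ratio = sinogram / np.clip(fp, eps, None)          # m / (A x)
        bp = iradon(ratio, theta=theta, filter_name=None, circle=False,
                    output_size=image_shape[0])            # A^T ratio
        x = x * bp / np.clip(sens, eps, None)
    return x
```

Replacing the unfiltered iradon calls with filtered ones would correspond to the FBP-in-the-loop idea discussed above, which, as noted, no longer targets the Poisson maximum likelihood.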
@learn2444 1 year ago
@@AndrewJReader Oh ok, thank you Professor for this clear answer 🙏.
@王博赛 3 years ago
Hello Professor Andrew, what you show here is so clear and easy to understand! We currently use CASToR to reconstruct our small-animal PET data, which employs the image-based PSF method. But this approach does not address serious DOI effects (80 mm diameter, 0.78 mm LYSO pixels). Is it possible to reconstruct the image correctly using only the image-based PSF method in CASToR?
@AndrewJReader 3 years ago
Thanks for your comment. It is known that image-based PSF modelling is approximate, especially if a shift-invariant PSF is being used. More accurately, a resolution model is needed both in image space (e.g. for positron range) and in the detector space (e.g. for crystal effects). However, if an image-space-only method is used, it could be improved by making it spatially variant. Note that all models will be inaccurate to some degree of course, so the definition of "correct reconstruction" is open to interpretation - it just needs to be good enough for the task at hand (e.g. quantification to within an acceptable uncertainty in various regions of interest).
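As an illustration of what shift-invariant image-based PSF modelling means in practice, here is a hedged sketch that inserts a Gaussian blur into the image space of the MLEM forward and backprojection steps; sigma is an assumed parameter, and a spatially variant model would replace the single Gaussian with a position-dependent kernel:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.transform import radon, iradon

def mlem_image_psf(sinogram, theta, image_shape, sigma=1.0, n_iters=20, eps=1e-12):
    """MLEM with image-space resolution modelling: forward model A = R B
    (blur then project); its transpose is backproject then blur, since a
    Gaussian kernel is symmetric."""
    x = np.ones(image_shape)
    blur = lambda img: gaussian_filter(img, sigma)
    sens = blur(iradon(np.ones_like(sinogram), theta=theta, filter_name=None,
                       circle=False, output_size=image_shape[0]))
    for _ in range(n_iters):
        fp = radon(blur(x), theta=theta, circle=False)
        ratio = sinogram / np.clip(fp, eps, None)
        bp = blur(iradon(ratio, theta=theta, filter_name=None, circle=False,
                         output_size=image_shape[0]))
        x = x * bp / np.clip(sens, eps, None)
    return x
```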
@MatangPanchal 1 year ago
Thanks, Professor, for the great explanation. Can you please provide the source code for the same?
@AndrewJReader 1 year ago
Hi Matang, many thanks for the feedback. I do not store the source code in any repository. I had originally thought part of the learning process was to write the code. But perhaps at some point I could simply paste the code into the comments section for the video. May consider doing that.
@MatangPanchal 1 year ago
@@AndrewJReader Thank you for your response; I am just typing your code into my editor to try it. 😀
@MatangPanchal 1 year ago
@@AndrewJReader Sir, I tried the code and it works as expected, as shown in the tutorial with the Shepp-Logan phantom. In my case I have to do a 3D reconstruction from a stack of images (360 images) taken at different angles (1 degree apart). So, as per my understanding, as the sinogram I gave it "data[:,100:]", i.e. the 100th slice of the image data. But in my case it reconstructs well in the first iteration and then deteriorates in subsequent iterations (the whole slice becomes darker, and even the ratio sinogram appears totally black). Can you help me with this? If yes, where should I share further details?
@AndrewJReader 1 year ago
@@MatangPanchal you might have noise blowing up in the image with increasing iterations. Have you tried displaying the reconstruction iterates with a fixed min and max, e.g. by setting the display max value to be based on the first or second iteration? That would check whether it's a display issue arising from some hot pixel values blowing up.
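One way to check that suggestion, assuming a list named recons holds the per-iteration 2D reconstructions, is to display all iterates on a common grey scale so that a few hot pixels blowing up with iteration number do not rescale the whole image:

```python
import matplotlib.pyplot as plt

vmax = recons[1].max()  # fix the display maximum from an early iterate
for k, img in enumerate(recons):
    plt.figure()
    plt.imshow(img, cmap='gray', vmin=0, vmax=vmax)  # fixed min/max for all iterates
    plt.title(f'Iteration {k + 1}')
plt.show()
```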
@MatangPanchal 1 year ago
@@AndrewJReader No, I have not tried this yet, but I will try and let you know.
@saswatadas5489 9 months ago
Can you do it without iradon?
@AndrewJReader 9 months ago
Great question - yes, you can. One approach would be to use the deep image prior representation (or even just a pixel grid, in fact, if no early stopping or regularisation is sought) with a forward model only, and use an optimiser such as those used in AI (e.g. Adam), but it would be slow.
@saswatadas5489 9 months ago
@@AndrewJReader I mean unfiltered backprojection without iradon.
@AndrewJReader 9 months ago
But backprojection is iradon (when used with no filter). The transpose of the Radon transform can be found via iradon (no filter). If you mean to avoid using iradon as a function, then you can write your own version, for example as I did in this video: kzbin.info/www/bejne/eIm7faKMrqaHZrMsi=D6PoRXZRiT3Jm9z3 (see about 18 minutes into the video)
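A simple hand-rolled sketch of unfiltered backprojection (not the implementation from the linked video): smear each 1D projection across the image, rotate it into place, and sum over angles. The exact axis and angle-sign conventions depend on how the sinogram was generated, and one radial bin per image pixel is assumed here:

```python
import numpy as np
from scipy.ndimage import rotate

def backproject(sinogram, theta):
    """Unfiltered backprojection: sinogram has shape (n_bins, n_angles),
    with n_bins assumed equal to the reconstructed image width."""
    n_bins, n_angles = sinogram.shape
    bp = np.zeros((n_bins, n_bins))
    for i, angle in enumerate(theta):
        # Replicate the 1D projection along one axis ("smearing"), then
        # rotate the smeared image to the projection angle and accumulate.
        smear = np.tile(sinogram[:, i], (n_bins, 1))
        bp += rotate(smear, angle, reshape=False, order=1)
    return bp
```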
@ahmadhambaliismail3178 1 year ago
Hi, I would appreciate it if you could share how to reconstruct using data stored in a data file such as .csv or .xlsx.
@AndrewJReader 1 year ago
The data would need to be in the form of projections (or sinograms) for the algorithm shown in this video. If your data are in some kind of list-mode format, then you would need to bin them (histogram) into sinograms, or else forward and backproject according to each item of data for each iteration. Calculating the sensitivity image would be the challenging part if you do not use sinograms. In brief, there are many possibilities for data formats, and so this video assumes that the data have first been put into a standard projection data / sinogram format.
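As an illustration only (the file name and column names below are hypothetical, since real list-mode formats are scanner-specific), binning events that already carry a radial coordinate and a projection angle into a sinogram can be as simple as a 2D histogram:

```python
import numpy as np
import pandas as pd

# Hypothetical list-mode-style file with one row per event; real scanner
# data will use a different (often binary) format and different fields.
events = pd.read_csv('listmode_events.csv')
r = events['r'].to_numpy()      # radial coordinate of each event (assumed column)
phi = events['phi'].to_numpy()  # projection angle in degrees (assumed column)

n_r, n_phi = 128, 180           # assumed sinogram dimensions
sinogram, r_edges, phi_edges = np.histogram2d(
    r, phi,
    bins=[n_r, n_phi],
    range=[[r.min(), r.max()], [0.0, 180.0]])
# sinogram[i, j] = number of events in radial bin i and angle bin j
```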
@ahmadhambaliismail3178 1 year ago
@@AndrewJReader You mean converting a histogram to a sinogram?
@AndrewJReader 1 year ago
@@ahmadhambaliismail3178 I meant converting your data into a sinogram (which is a histogram of events for all the possible lines of response)
@ahmadhambaliismail3178 1 year ago
@@AndrewJReader Hi Andrew, I get your point. It is just that I am new to Python coding, and your video seems to be very helpful. I would appreciate it if you could make a video on this: reconstruction using data from a data file 😄
@AndrewJReader 1 year ago
@@ahmadhambaliismail3178 Data will be in different formats according to the scanner, and according to the way the data are stored. So it would be necessary to know all the specifics of the scanner geometry and data format. This is a unique problem for each scanner. Hence in this video I start from a standard reference format of the data, which is a sinogram. I have other videos about sinograms which might help. Thanks for your support!