In a conditional DDPM, the output can be controlled: we provide a class label, and the model generates an image of that class. Without conditioning, we can't control the output; the model samples randomly from the distribution of the entire dataset.
@freenrg888 28 days ago
Well. Your mic is definitely mediocre.
@MediocreGuy2023 25 days ago
@@freenrg888 Agreed
@MediocreGuy2023 a month ago
00:00 Sorry, I made a mistake in the title (CatBoost Classifier using Scikit-Learn) of this video in the Jupyter Notebook. Scikit-Learn doesn't have this algorithm. Sorry for the mistake.
@boleto7467 a month ago
Work on your audio
@MediocreGuy2023 a month ago
@@boleto7467 If there is a proper Audacity video on YouTube that deals with the keyboard issue, please share the link here. I couldn't find one that takes care of the keyboard sound.
@majidmohammadhosseinzadeh9542 2 months ago
Hi there! Great job, man. Your tutorials are amazing. Please keep going and upload more tutorials. If you don't mind, I have a suggestion for you. I believe it would be more beneficial if you explained each section or line while coding. For example, clarify the purpose of each section or each line. One more thing, your voice is not clear. The typing noise is louder than your voice. Thank you so much.
@MediocreGuy2023 2 months ago
@@majidmohammadhosseinzadeh9542 Really poor sound and video editing skills, unfortunately. But, thanks for watching.
@majidmohammadhosseinzadeh9542 2 months ago
You can't imagine how useful your tutorials are. If possible, could you please prepare a tutorial on ANN regression? What I really appreciate about your work is that you provide the complete code and procedure from start to finish, which is quite unique. I've been searching for a tutorial like this for a while but haven't been able to find one. It would be fantastic if you could create an ANN regression model tutorial.
@MediocreGuy2023 2 months ago
@@majidmohammadhosseinzadeh9542 Okay, hopefully.
@syedmuzammilahmed6872 2 months ago
What if we have tabular 1D data? Can you please explain how to implement a conditional DDPM on 1D data? Thanks.
@MediocreGuy2023 2 months ago
@@syedmuzammilahmed6872 I am only familiar with implementations using 2D data. Check out this repository: github.com/yandex-research/tab-ddpm
@syedmuzammilahmed6872 2 months ago
@@MediocreGuy2023 Actually, I have implemented a DDPM on 1D data, but now I want to add conditioning to it. So I'm searching for how to do conditioning in a 1D DDPM.
@MediocreGuy2023 2 months ago
@@syedmuzammilahmed6872 Doesn't your 1D data have labels?
@syedmuzammilahmed6872 2 months ago
@@MediocreGuy2023 It has labels (yes/no)
@MediocreGuy2023 2 months ago
@@syedmuzammilahmed6872 Doesn't nn.Embedding work the same way it works for MNIST? I guess you would have to change the embedding dimensions based on your dataset's requirements.
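For the 1D case discussed above, a minimal sketch of what nn.Embedding-based label conditioning could look like. The layer sizes, class name, and architecture here are illustrative assumptions, not the video's actual code, and a real DDPM would also condition on the diffusion timestep, which is omitted for brevity:

```python
import torch
import torch.nn as nn

class Conditional1DDenoiser(nn.Module):
    """Toy denoiser for 1D (tabular) data, conditioned on a binary label.

    The label is mapped to a dense vector with nn.Embedding -- the same
    mechanism used for the 10 MNIST classes, just with num_embeddings=2
    for yes/no labels.
    """
    def __init__(self, n_features: int, emb_dim: int = 16):
        super().__init__()
        self.label_emb = nn.Embedding(num_embeddings=2, embedding_dim=emb_dim)
        self.net = nn.Sequential(
            nn.Linear(n_features + emb_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_features),  # predict noise, same shape as input
        )

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        cond = self.label_emb(y)                     # (batch, emb_dim)
        return self.net(torch.cat([x, cond], dim=1))  # concat features + label

model = Conditional1DDenoiser(n_features=8)
x = torch.randn(4, 8)            # a batch of 4 tabular rows
y = torch.tensor([0, 1, 1, 0])   # yes/no labels as integer indices
noise_pred = model(x, y)
print(noise_pred.shape)          # torch.Size([4, 8])
```

The only dataset-specific knobs are `n_features` and `emb_dim`, matching the reply's point that the embedding dimensions need adjusting per dataset.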
@do_you_interested 2 months ago
Yo bro, that video is awesome. Can you make more videos like this!
@patrickcraig4608 2 months ago
Do you know of any way to do this for COCO-annotated data?
@MediocreGuy2023 2 months ago
@@patrickcraig4608 I have never worked with the COCO dataset, but I found out from the Internet that it is used for object detection and image segmentation tasks. Image segmentation is different from cropping small patches out of a large image.
@@SixuXiao-j9l Thanks for your comment. I do not have expertise in video and sound editing. Most of my videos have sound issues because I don't know how to effectively remove noise from them. To minimize noise, I had to decrease the volume too much.
@BELLAFaiza-p5z 4 months ago
good job
@BELLAFaiza-p5z 3 months ago
How can I use an input image size of 224 pixels? Help, please.
@MediocreGuy2023 3 months ago
@@BELLAFaiza-p5z The current code resizes the input to 32x32 pixels. So, 224x224 pixels will also get reduced to that size if you use this code. See the transforms.Compose() section.
@BELLAFaiza-p5z 3 months ago
@@MediocreGuy2023 The code works very well for me, but I want to use it to generate images of size 224*224 pixels. Is this possible?
@MediocreGuy2023 3 months ago
@@BELLAFaiza-p5z What is the input image size of your dataset?
@BELLAFaiza-p5z 3 months ago
@@MediocreGuy2023 It is 224*224 pixels, and it's a medical dataset. I want to generate a new dataset at the same size using this code. Can you help me, please?
@barshneyatalukdar1492 5 months ago
How do I save the images to another folder after getting the patches?
@MediocreGuy2023 5 months ago
Put this line within the for loop: plt.savefig(f'output_dir/{i+1}.jpg', dpi=300, bbox_inches='tight', pad_inches=0). Here, output_dir should be the folder where your images will be saved, indexed by i (the image number).
@barshneyatalukdar1492 5 months ago
@@MediocreGuy2023 I need each patch one by one as it is generated. I am already able to save the whole collection of patches as one image, but I want them separately, not joined. I inserted the line after ax1[R1, C1].axis('off').
@barshneyatalukdar1492 5 months ago
I need to save each image separately, one by one, if you can show how.
@MediocreGuy2023 5 months ago
@@barshneyatalukdar1492 Not like that. You need extra lines of code using a for loop.
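The extra for loop mentioned above could look like this. A sketch only: the folder name, file names, and the stand-in patches array are illustrative; in the real code the array would come from patchify rather than random numbers:

```python
import os
import numpy as np
import matplotlib
matplotlib.use('Agg')           # headless backend, no display needed
import matplotlib.pyplot as plt

# Stand-in for the patch array: 4 grayscale patches of 64x64 pixels.
patches = np.random.rand(4, 64, 64)

output_dir = 'patches_out'      # folder where individual patches will go
os.makedirs(output_dir, exist_ok=True)

# Save each patch as its own image file instead of one joined figure.
for i, patch in enumerate(patches):
    plt.imsave(os.path.join(output_dir, f'patch_{i+1}.png'), patch, cmap='gray')

print(sorted(os.listdir(output_dir)))
```

plt.imsave writes a single array straight to disk, so there is no subplot grid to crop away, which is why each patch comes out as its own separate file.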
Thank you for sharing, I really appreciate it. I am going to try training the model using a 2D latent space. Do you think this architecture will also work for the CelebA dataset?
@MediocreGuy2023 6 months ago
I don't think this structure is good enough for CelebA, as those images have a much higher resolution. Even if you resize them, I think a few additional layers would be required.
@MediocreGuy2023 6 months ago
36:13 In the scaling term, I accidentally wrote "beta_t" instead of "beta_t_square". I corrected it in the slide. Check out the GitHub address.
@ๅ้ฉde่่ 7 months ago
Can a PyTorch model identify handwritten numbers from 0-99 if the dataset is spliced together from MNIST digits?
@MediocreGuy2023 7 months ago
In that case, there are 100 classes.
@maomao-hc2zt 7 months ago
Can you help me design a CNN model? I already have a data set
@MediocreGuy2023 7 months ago
A CNN video is available on the channel. Take a look. I will be very busy for the next 2 weeks.
@maomao-hc2zt 7 months ago
@@MediocreGuy2023
@ๅ้ฉde่่ 7 months ago
Do you know how to concatenate datasets?
@MediocreGuy2023 7 months ago
Do you mean concatenating images?
@ๅ้ฉde่่ 7 months ago
@@MediocreGuy2023 yes yes
@ๅ้ฉde่่ 7 months ago
Can I add you as a friend? I come from China and am a beginner. I would like to ask you some questions. @@MediocreGuy2023
@MediocreGuy2023 7 months ago
@@ๅ้ฉde่่ In PyTorch, the "torch.cat" function is available; in the case of NumPy, it is "numpy.concatenate".
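For example, a quick sketch with NumPy (torch.cat behaves analogously, with dim in place of axis; the tiny arrays here are illustrative stand-ins for real datasets):

```python
import numpy as np

# Two "datasets" with the same number of features can be stacked row-wise.
a = np.arange(6).reshape(2, 3)       # 2 samples, 3 features
b = np.arange(6, 12).reshape(2, 3)   # 2 more samples, same features

combined = np.concatenate([a, b], axis=0)
print(combined.shape)                # (4, 3)

# The PyTorch equivalent, given tensors instead of arrays, would be:
#   combined = torch.cat([a, b], dim=0)
```

Concatenating along axis=0 appends samples; axis=1 would instead append features, so the axis choice depends on which way the datasets line up.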
@ๅ้ฉde่่ 7 months ago
@@MediocreGuy2023 No no, I have more questions.
@StephenWightTn 10 months ago
You mentioned around 16:50 that you weren't sure why the train loss was much higher than the test loss. The reason is the L1 term: the returned loss value for train includes the L1 term, while the returned loss value for test does not. If you want comparable values between train and test, you need to either include the L1 term in the test batch function or return only the classification loss from the train batch function. Otherwise, a good explanation!
@MediocreGuy2023 10 months ago
Thanks for the explanation.
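A sketch of the fix described above (the function and variable names are made up for illustration, not the notebook's actual code): optimize the sum of the classification loss and the L1 penalty during training, but log only the classification part for both splits, so the train and test curves become directly comparable.

```python
def batch_losses(cls_loss, l1_penalty, training):
    """Return (loss_to_optimize, loss_to_log).

    During training the optimizer should see cls_loss + l1_penalty,
    but the logged value excludes the L1 term so that the train and
    test curves measure the same quantity.
    """
    total = cls_loss + (l1_penalty if training else 0.0)
    return total, cls_loss

# Toy numbers: identical classification loss, but only train pays the L1 term.
opt_train, log_train = batch_losses(cls_loss=0.5, l1_penalty=0.25, training=True)
opt_test,  log_test  = batch_losses(cls_loss=0.5, l1_penalty=0.25, training=False)

print(opt_train, log_train)   # 0.75 0.5 -> optimizer still sees the penalty
print(opt_test,  log_test)    # 0.5 0.5  -> logged values now match
```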
@fishersmen 10 months ago
Thank you very much for all the time you have put into these lessons. I have found them more helpful than lectures by MIT professors.
@MediocreGuy2023 10 months ago
LOL. Are you serious?
@dancek..8370 10 months ago
This helped me, along with your GitHub code. Thanks!
@NuskaGirru 10 months ago
thank you! this was very helpful!
@imenelj7341 11 months ago
Hi again! I want to kindly ask if you could consider doing a video about 1) selecting the number of clusters by computing eigengap scores and plotting them as an eigengap plot, and 2) using the normalized mutual information (NMI) score and the Rand index to quantify the overlap between discovered and ground-truth clusters. Thanks in advance.
@MediocreGuy2023 11 months ago
I will try.
@MediocreGuy2023 11 months ago
Does this link help you? github.com/ciortanmadalina/high_noise_clustering/blob/master/spectral_clustering.ipynb
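As a rough sketch of the eigengap idea asked about above (an illustration under standard spectral-clustering assumptions, not code from the video or the linked notebook): compute the eigenvalues of the symmetric normalized graph Laplacian, sort them, and pick the number of clusters where the gap between consecutive small eigenvalues is largest.

```python
import numpy as np

def eigengap_k(affinity: np.ndarray, max_k: int = 10) -> int:
    """Suggest a cluster count via the eigengap heuristic.

    Builds the symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}
    and returns the position of the largest gap among its smallest
    eigenvalues, a common heuristic for the number of clusters.
    """
    d = affinity.sum(axis=1)                       # node degrees
    d_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    lap = np.eye(len(affinity)) - d_inv_sqrt @ affinity @ d_inv_sqrt
    eigvals = np.sort(np.linalg.eigvalsh(lap))[:max_k]
    gaps = np.diff(eigvals)                        # consecutive eigengaps
    return int(np.argmax(gaps)) + 1

# Block-structured affinity: two obvious clusters of 3 points each,
# weakly connected by 0.01 edges.
A = np.full((6, 6), 0.01)
A[:3, :3] = 1.0
A[3:, 3:] = 1.0
np.fill_diagonal(A, 0.0)   # no self-loops

print(eigengap_k(A))       # 2
```

Plotting `eigvals` against their index gives the eigengap plot mentioned in the request; the suggested k is where the curve jumps.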
@anneryan4051 11 months ago
Thank you! This is the best patchify example I've found.
@imenelj7341 a year ago
I'm grateful for your video. I'm presently exploring spectral clustering for data analysis for my Ph.D. dissertation in agriculture. Given my limited experience in this area, I'm curious whether you could kindly share the scripts used in your video, as well as more videos about how to identify cluster sizes, how to validate them, and how to characterize the identified clusters. Thanks in advance.
@MediocreGuy2023 a year ago
I have a nightmare schedule until November 2. But I will try to provide the script for this video either today or tomorrow, hopefully.
@@MediocreGuy2023 I wish you all the best for your studies and thank you so much for sharing this.
@mashfiqulhuqchowdhury6906 a year ago
This is a good channel, and things are clearly explained. Can I get the code from a GitHub repository?
@MediocreGuy2023 a year ago
I appreciate your comment. In my lab, we have distributed servers (you can notice the name JupyterHub). For this reason, I haven't used GitHub to store the code. But since you asked, I can upload just this code tonight, hopefully within the next 6-7 hours. I will mention you when it is available.
@MediocreGuy2023 a year ago
github.com/randomaccess2023/MG2023/tree/main/Video%2023 You can find the code here.
@mashfiqulhuqchowdhury6906 a year ago
@@MediocreGuy2023 Thank you very much. Honestly, this is truly helpful. I will watch all the videos you have uploaded. Thanks again.
@MediocreGuy2023 a year ago
At 06:19, I performed preprocessing but forgot to use the scaled features later. I have corrected this mistake in the code that I shared on GitHub. Check that out.
@MediocreGuy2023 a year ago
Within 10:11 - 10:43, I scaled the features but eventually forgot to use them later. It's better not to scale the features for this example. It seems unscaled features work better in calculating AIC.
26:05 ---> I made a mistake here: Train loss: {train_per_epoch_loss} should be the correct line, but I wrongly wrote Train loss: {test_per_epoch_loss}. Remember to correct this portion.