Recently discovered your channel and I'm loving the videos so far. I'm into deep learning as more of a hobby and I'm trying to leverage that to get into grad school (if I do, I want to turn this into my profession). This specific video is exactly what I was looking for, since my video card has just 8 GB of VRAM, and voila, you have that too. Awesome content, please do keep it up!
@HeatonResearch · 3 years ago
Thanks! Planning on more.
@bigdreams5554 · 3 years ago
Great video! I had to figure this out on my own a few weeks ago, so it's great to see you talk about it. I ended up needing a 100-500 MB paging file to get my setup to work.
@HeatonResearch · 3 years ago
Thanks, good setup.
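A minimal sketch (not from the thread) for checking whether system RAM and the paging file are actually the bottleneck while a job runs; it assumes psutil is installed (pip install psutil), and the 90% thresholds are arbitrary examples.

import psutil

def report_memory():
    vm = psutil.virtual_memory()    # physical RAM
    sw = psutil.swap_memory()       # paging file / swap
    print(f"RAM:  {vm.used / 2**30:.1f} / {vm.total / 2**30:.1f} GB ({vm.percent}%)")
    print(f"Swap: {sw.used / 2**30:.1f} / {sw.total / 2**30:.1f} GB ({sw.percent}%)")
    if vm.percent > 90 and sw.percent > 90:
        print("Both RAM and the paging file are nearly full; a larger paging file may help.")

report_memory()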
@luisluiscunha · 2 years ago
That noise!... I will really avoid going for anything that is not a "Gaming X" GPU from MSI. This video is precious just for that, thank you!
@jashaswimalyaacharjee9585 · 3 years ago
Sir, yesterday I was training one of my GAN models and a similar thing happened, and I used exactly this print("HERE !!!!!!!") strategy... I am laughing out loud right now!
@HeatonResearch · 3 years ago
Ah great! hah I am very non-creative with my logging :)
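For anyone who wants something one step up from print("HERE !!!!!!!"), here is a minimal sketch using Python's built-in logging module; the function name and message are just illustrative placeholders.

import logging

logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(message)s",
)
log = logging.getLogger(__name__)

def train_step(batch_idx):
    # Replaces print("HERE !!!!!!!") with a timestamped, filterable message.
    log.debug("reached train_step, batch=%d", batch_idx)
    # ... actual training code would go here ...

train_step(0)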
@chickenp7038 · 3 years ago
Wow, I actually do this all the time in my neural networks... hahaha, I can't believe other people do it too.
@lashlarue59 · 3 years ago
Jeff, have you tried using the version of WSL2 that supports Linux graphical programs to run CUDA programs under WSL2/Linux, for example the Nvidia CUDA demos?
@tugrultastekin9859 · 3 years ago
Great video, sir, but I have a question. I am in the middle of buying a computer to get started with deep learning. My GPU options are the RTX 3070 and the RTX 3060. As you know, they differ in VRAM. Do I need 12 GB of GPU RAM for computer vision or to train face detection models? Thank you in advance.
@MrLucaPug · 3 years ago
It would be nice to see the same experiment on the WSL2 Linux subsystem.
@0Zed0 · 3 years ago
I'm running a 16 GB machine and a GTX 1080, and all I had to do to get it to work was drop the DataLoader workers from 3 to 1. I've only tried with about 600 256x256 images though, so maybe that's the difference.
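For context, the knob mentioned above is the num_workers argument of a PyTorch DataLoader; each worker is a separate process with its own buffers, so fewer workers means less system RAM. A minimal sketch, with the dataset path and batch size as placeholders:

import torch
from torchvision import datasets, transforms

dataset = datasets.ImageFolder(
    "path/to/images",                      # placeholder path
    transform=transforms.Compose([
        transforms.Resize((256, 256)),
        transforms.ToTensor(),
    ]),
)

loader = torch.utils.data.DataLoader(
    dataset,
    batch_size=8,
    shuffle=True,
    num_workers=1,   # dropped from 3 to 1 to reduce host-memory pressure
    pin_memory=True,
)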
@RD-fb6ei · 3 years ago
Can you do some videos on the current-gen AMD GPUs? Some level of ROCm support is already present in TensorFlow. The RX 6800 supposedly has similar performance to a Titan V.
@kdlin1 · 3 years ago
Hi Jeff, isn't increasing the paging size addressing the low system RAM issue? Do you have any videos about addressing low GPU RAM issues in TensorFlow?
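For reference, a minimal sketch of the two usual TensorFlow options for a small GPU: let allocations grow on demand instead of grabbing all VRAM at startup, or cap TensorFlow at a fixed amount (the 4096 MB figure below is arbitrary). This has to run before anything touches the GPU.

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    # Option 1: allocate VRAM on demand instead of reserving it all up front.
    tf.config.experimental.set_memory_growth(gpus[0], True)

    # Option 2 (alternative, do not combine with option 1): hard-cap VRAM usage.
    # tf.config.set_logical_device_configuration(
    #     gpus[0],
    #     [tf.config.LogicalDeviceConfiguration(memory_limit=4096)],
    # )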
@jackkwok6803 · 2 years ago
Could you elaborate a little more on why StyleGAN2 is 'semi-supervised' learning rather than unsupervised learning? Thank you.
@HeatonResearch · 2 years ago
I really should have called it self-supervised. Meaning, it uses supervised machinery, a loss function, but since it's a GAN the loss function encompasses both the generator and the discriminator.
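A minimal sketch of what that means in code, using non-saturating logistic (softplus) GAN losses of the kind used in the StyleGAN2 code base; d_real and d_fake stand for raw discriminator scores, and all the surrounding training code is omitted.

import torch
import torch.nn.functional as F

def discriminator_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    # Push scores on real images up and scores on generated images down.
    return F.softplus(-d_real).mean() + F.softplus(d_fake).mean()

def generator_loss(d_fake: torch.Tensor) -> torch.Tensor:
    # The generator is trained against the same discriminator scores, so the
    # "labels" (real vs. generated) come from the data itself rather than annotations.
    return F.softplus(-d_fake).mean()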
@maxit1082 · 2 years ago
Hi. Do you know which of these CPUs will perform better for machine learning and data science tasks? Needless to say, I would use an Nvidia GPU like a 3070 alongside the CPU, but I want to choose an appropriate CPU for these types of tasks. These are my choices:
1. 5900X: $250 (used)
2. 13600KF: $400 (used)
3. 13700KF: $500 (new)
But as you know, there is another important factor, the GPU. If I choose the 5900X, I could spend the extra money on a better GPU. To summarize, the CPU and GPU combinations I can afford are these three options:
1. 5900X + 3080 Ti
2. 13600KF + 3070 Ti
3. 13700KF + 3060 / 3060 Ti
Which would be the better combination?
@abcdefg-zl7ew · 3 years ago
Thank you for the video, Jeff! I have a question: is there any way to convert my .pkl file to a .pt file? I need to do that to work with a different AI, but all the conversion methods I found online were only for StyleGAN2 and not for StyleGAN2-ADA PyTorch. They don't seem to work for me; I always get errors. Thanks in advance!
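A rough sketch of one possible approach, assuming the NVlabs stylegan2-ada-pytorch repo is on the Python path (it provides the dnnlib and legacy modules used below). The .pkl stores ordinary torch.nn.Module objects, so their weights can be re-saved with torch.save; file names are placeholders, and whatever tool consumes the .pt file still has to rebuild a matching architecture.

import torch
import dnnlib
import legacy

# Load the pickle the way the repo's own scripts do.
with dnnlib.util.open_url("network-snapshot.pkl") as f:   # placeholder file name
    data = legacy.load_network_pkl(f)

G = data["G_ema"]                           # exponential-moving-average generator
torch.save(G.state_dict(), "generator.pt")  # plain .pt weights file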
@gregarayamandelbrot · 3 years ago
Really appreciating your videos, Jeff! I'm a data scientist who recently built a computer and was poking around for some fun ML stuff to do with it. Found you just this week, consider my interest piqued! :) Curious if you have a moment for my query: I have an RTX 3070 with 8 GB of VRAM and 2x16 GB of RAM in my computer right now, and I was planning to sell my previous RTX 3060 with 12 GB of VRAM and 2x12 GB RAM sticks to recoup the cost of the 3070. However, would the 3060 be better for this specific situation because it has more VRAM? Also, since I have 4 slots for RAM, would adding the 2 smaller RAM sticks into the other open slots be helpful at all, or does this process lean more heavily on the GPU?
@subramanyak6187 · 3 years ago
Can you please teach me how to train using a GeForce MX110 with 2 GB of RAM? ;)
@HeatonResearch · 3 years ago
With 2 GB of RAM? In all seriousness, use Colab.
@WhyShubham. · 3 years ago
Sir, the Threadripper can take only 256 GB of RAM. Which CPU do you use for your PC?
@HeatonResearch · 3 years ago
I only have 128 GB on my Threadripper. When I need more, it is usually in the cloud running on a Xeon.
@investorfriends · 3 years ago
Will I get this issue if I use an RTX 3060 with 12 GB of memory, or Colab Pro?
@HeatonResearch · 3 years ago
Generally no, but I have seen the metrics issue in Colab on occasion; when I do, I just downgrade the metrics.
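A hedged sketch of what downgrading the metrics can look like when launching the official stylegan2-ada-pytorch train.py, which exposes a --metrics option; passing none skips metric evaluation, the most memory-hungry part of a training tick. The paths are placeholders, and the launch is shown via subprocess only because that is convenient from a notebook cell.

import subprocess

subprocess.run([
    "python", "train.py",
    "--outdir", "results",        # placeholder output directory
    "--data", "dataset.zip",      # placeholder dataset archive
    "--gpus", "1",
    "--metrics", "none",          # skip FID/other metrics to save memory and time
], check=True)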
@investorfriends · 3 years ago
@HeatonResearch Thanks a lot, I will buy the 3060 now.
@pasanperera8235 · 3 years ago
What's your monitor?
@HeatonResearch · 3 years ago
I really like it; it is an LG 32UN550-W 32-inch UHD (4K).
@pasanperera8235 · 3 years ago
@HeatonResearch Thank you ♥️
@javiercorona10 · 3 years ago
Hey, I have a Zotac 3070 GPU and I get this error. Does anyone know anything about it? ... warnings.warn(f'conv2d_gradfix not supported on PyTorch {torch.__version__}. Falling back to torch.nn.functional.conv2d().')
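That message is a warning rather than a crash: the code falls back to the stock torch.nn.functional.conv2d, which trains correctly but more slowly, because the custom conv2d_gradfix path is only enabled for the PyTorch versions the repo was written against (1.7.x at the time, an assumption worth checking against the repo's README). A quick sanity check of the local setup:

import torch

print("PyTorch version:", torch.__version__)
print("CUDA available: ", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))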