Hello, I recently discovered your channel. I think it's very good, you are very kind, and you explain things well. Congratulations, I want your channel to grow! Sorry if my English is bad, because I'm Brazilian.
@khangvutien2538 · 10 months ago
At 7:05, I see 256 Tensor cores in the spec sheet. Are they the same tensor processors as in a TPU? Maybe you can also explain TPUs? Note that I'm just starting to watch; maybe you explain this later in the video?
@DigitalSreeni · 10 months ago
The tensor cores in GPUs and TPUs involve tensor processing but they are different technologies designed for different purposes. GPUs are more general-purpose and versatile, suitable for a range of tasks like gaming, graphics rendering, and parallel computing workloads. TPUs are purpose-built for machine learning and are highly optimized for tensor operations.
@Ayzal.Y_Liebe · 7 months ago
Sir, what type of laptop should I get for deep learning? What do you recommend?
@anshagarwal9826 · 10 months ago
@DigitalSreeni Hi, can you explain why you divide the array size by the elapsed time to calculate FLOPS? How does that give the number of floating-point operations per second? From your calculation, what I understood is that you are estimating FLOPS as roughly how long it takes to build the newly calculated array.
@DigitalSreeni · 10 months ago
The calculation of FLOPS in my code is based on the time taken to perform a specific operation (e.g., DAXPY) on arrays of a given size. The rationale is that this estimates the rate at which floating-point operations are executed per second. In the DAXPY operation (Y = a * X + Y), each element of the arrays undergoes one multiplication and one addition, so an array of size N involves roughly 2N floating-point operations. Dividing that operation count by the elapsed time gives a rough measure of performance in floating-point operations per second. In reality, the actual number depends on the type of operations and, of course, the underlying hardware.
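The estimate described above can be sketched as follows. This is a minimal NumPy version; the array size and the `estimate_daxpy_gflops` helper name are my own illustration, not taken from the video:

```python
import time
import numpy as np

def estimate_daxpy_gflops(n=10_000_000, a=2.0):
    """Estimate throughput from a DAXPY (Y = a*X + Y) on arrays of size n."""
    x = np.random.rand(n)
    y = np.random.rand(n)
    start = time.perf_counter()
    y = a * x + y  # one multiply + one add per element -> ~2*n FLOPs
    elapsed = time.perf_counter() - start
    flops = 2 * n / elapsed  # total floating-point operations per second
    return flops / 1e9       # report in GFLOPS

print(f"Estimated throughput: {estimate_daxpy_gflops():.2f} GFLOPS")
```

Because the timing includes memory traffic and Python overhead, the result is a rough effective rate for this one operation, not the hardware's peak FLOPS.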
@anshagarwal9826 · 10 months ago
Thanks @DigitalSreeni, much appreciated 👍
@alihajikaram8004 · 11 months ago
Hi, I find your channel very informative; thanks for your great educational videos. Would you make a video about using Conv1D for time series? Could we use it for feature extraction?
@msaoc22 · 11 months ago
Thank you for the amazing video and the time you spend on us =)
@aaalexlit · 10 months ago
Awesome as always, thank you! Any chance of a follow-up that includes TPUs?
@scrambledeggsandcrispybaco2070 · 11 months ago
Hi DigitalSreeni, I have been using your tutorials as a guideline for segmentation using traditional machine learning. Apeer has changed a lot since your videos were made: when I export the file, it now gives masks for the different classes separately. What can I do? Thank you for all your knowledge, you are a life saver.
@hamidgholami2683 · 11 months ago
Hi sir, hope you're doing well. May I ask you to make some videos on instance segmentation? I mean a good explanation and also some projects based on it. I would be happy if you respond.
@zainulabideen_1 · 11 months ago
Amazing information, thanks ❤❤❤
@DigitalSreeni · 11 months ago
Glad it was helpful!
@vidyasvidhyalaya · 11 months ago
Sir, please upload a separate video on converting "195 - Image classification using XGBoost and VGG16 imagenet as feature extractor" into a local web application. Please don't skip my comment, sir. Awaiting the video!
@tektronix475 · 11 months ago
I got about a 5,000x speedup on the T4 GPU setup for a 10000x10000 matrix, which is disheartening and eye-popping at the same time.
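For reference, the CPU side of such a comparison can be timed like this. This is a NumPy-only sketch of my own (the `matmul_gflops` name and sizes are illustrative); for an n x n matrix multiply the operation count is roughly 2*n^3:

```python
import time
import numpy as np

def matmul_gflops(n=1000):
    """Time an n x n matrix multiply and report estimated GFLOPS."""
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    start = time.perf_counter()
    c = a @ b  # ~2*n**3 floating-point operations
    elapsed = time.perf_counter() - start
    return (2 * n**3 / elapsed) / 1e9

print(f"CPU matmul: {matmul_gflops():.1f} GFLOPS")
```

The GPU side of the comparison would use the same arithmetic with a GPU array library (e.g. CuPy), but note that GPU kernels launch asynchronously, so the timer must not be stopped until the device has been synchronized, or the measured speedup will be wildly overstated.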