your videos are very easy to grasp and are free, unlike a college or university. Thanks a lot!
@ComputerScienceLessons · 4 years ago
Thanks for saying so. :)KD
@AwesomeDooshi · 10 months ago
You helped me to understand a topic that I missed in class. Thank you so much namaste 🙏
@charles-y2z6c · 5 years ago
I loved the HBO series Silicon Valley. It made data compression fun. As a young programmer I was happy to pack two ASCII characters into one byte; it doubled the amount of text I could fit on my floppy disks.
@ComputerScienceLessons · 5 years ago
I haven't seen any of Silicon Valley yet. I'll have to get the Now TV free trial and binge my way through them. :)KD
@ameliaj4557 · 2 years ago
Thank you so much, yet again you have helped me with my revision!
@le_plankton · 3 years ago
Exactly the video I was looking for.
@vijayjangid8967 · 4 years ago
Best educational videos with great explanation and presentation ❤️
@ComputerScienceLessons · 4 years ago
Thanks so much :)KD
@ayubchowdhury3669 · a year ago
@@ComputerScienceLessons What does KD mean?
@ComputerScienceLessons · a year ago
@@ayubchowdhury3669 My name is Kevin Drumm :)KD
@zigaudrey · a year ago
Considering how simple it is, this may have been the birth of data compression methods! 4:00 Eliminate the 1s! 5:43 Fewer colours, smaller size. 6:10 One fat compressed file!
@ComputerScienceLessons · a year ago
Perhaps. A new, efficient, lossless data compression algorithm could make someone a lot of money.
@tails_the_god · 2 years ago
Thanks! I've been studying the Quake 2 source code; it uses PCX for the textures, which are compressed with run-length encoding. Your explanation was easy to understand! 😄
@ComputerScienceLessons · 2 years ago
Delighted to help. I love Quake. :)KD
@isimpdorasmonke6230 · 2 years ago
Thank you so much, this video helped me with my presentation.
@ComputerScienceLessons · 2 years ago
You're very welcome :)KD
@ahmedmuhammed6905 · 2 years ago
Very well done explanation, thanks sir.
@ComputerScienceLessons · 2 years ago
Thank you, and you're welcome :)KD
@baylee8366 · 4 years ago
Thank you, this is really useful!
@ComputerScienceLessons · 3 years ago
You're welcome :)KD
@ninad2740 · a year ago
Thanks a lot for this video.
@ComputerScienceLessons · a year ago
You're very welcome :)KD
@leonidahnyabuto3544 · 2 years ago
Thanks a lot
@ComputerScienceLessons · 2 years ago
You're very welcome :)KD
@mohammademaditaj9479 · 4 years ago
I wish there were more lessons in this playlist.
@ComputerScienceLessons · 4 years ago
Is there something in particular you need? Perhaps it's in a different playlist. :)KD
@yoganandaynr1672 · 28 days ago
Thank you
@ninjapirate123 · 10 months ago
What if a letter repeats after another letter? For example, the word BOOKKEEPER has an E that repeats after the letter P. Would that be B1O2K2E2P1E1R1 or B1O2K2E3P1R1?
@ComputerScienceLessons · 10 months ago
Interesting question. B1O2K2E2P1E1R1 is produced by the algorithm I described. However, as you can see, the result is negative compression; the algorithm has made things worse. In practice, you could adjust the algorithm in any way you like to suit the nature of the data you are trying to compress. :)KD
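The algorithm from the reply can be sketched in a few lines of Python. This is a minimal illustration of the character-count scheme described above, not the exact implementation from the video:

```python
def rle_encode(text):
    """Run-length encode: each run of identical characters becomes <char><count>."""
    if not text:
        return ""
    out = []
    run_char, run_len = text[0], 1
    for ch in text[1:]:
        if ch == run_char:
            run_len += 1
        else:
            out.append(f"{run_char}{run_len}")
            run_char, run_len = ch, 1
    out.append(f"{run_char}{run_len}")  # flush the final run
    return "".join(out)

print(rle_encode("BOOKKEEPER"))  # B1O2K2E2P1E1R1
```

Note the output is 14 characters for a 10-character input, which is exactly the negative compression the reply describes.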
@ginnusidhu5489 · 4 years ago
Suppose we want to compress the binary digits 1011100101 using RLE, and a digit can be repeated in a run at most 8 times. How can we calculate the compressed file size using the algorithm?
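One way to reason about this question, sketched in Python. The encoding scheme here is an assumption: since bits alternate, we store 1 bit for the starting value plus one 3-bit run length (1 to 8) per run, capping runs at 8 as the question specifies:

```python
def rle_runs(bits):
    """Split a bit string into (value, length) runs, capping each run at 8."""
    runs = []
    i = 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i] and j - i < 8:
            j += 1
        runs.append((bits[i], j - i))
        i = j
    return runs

def compressed_size_bits(bits):
    """1 bit for the starting value, then 3 bits per run length."""
    return 1 + 3 * len(rle_runs(bits))

print(rle_runs("1011100101"))          # seven short runs
print(compressed_size_bits("1011100101"))  # 22 bits vs 10 originally
```

For this input the runs are mostly length 1, so the "compressed" size (22 bits) is larger than the original 10 bits: another case of negative compression on data with few long runs.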
@AZN420_ · 2 years ago
How did you get the value 134? Please help, I am confused.
@ComputerScienceLessons · 2 years ago
Hi Aahan. If you count the number of values, you will see that there are 134 of them. Each of these values requires 4 bits of storage, so together they require 536 bits. kzbin.info/www/bejne/qaTRaJqgnNGdjrM
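The arithmetic in that reply, written out as a sketch (the count of 134 values and the 4-bit value size are taken from the video):

```python
num_values = 134     # run-length values counted in the video
bits_per_value = 4   # 4 bits can hold values 0-15

total_bits = num_values * bits_per_value
print(total_bits)  # 536
```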
@Petarek · 5 years ago
I think you are mixing up bits and bytes. 1 kilobyte is 1000 bytes (or 1024, but that's another discussion), and a byte is 8 bits.
@ComputerScienceLessons · 5 years ago
Which part of the video are you referring to, please? I hate to make mistakes.
@Petarek · 5 years ago
@@ComputerScienceLessons Around 4:51 and 5:49
@ComputerScienceLessons · 5 years ago
Thanks for that, I see where you are coming from. I have fixed the problem.
@Engeryu · 4 years ago
That's not really another discussion but a completely different thing. 1 kilobyte (kB) is 1000 bytes, not 1024; 1024 bytes is a kibibyte (KiB). The confusion comes from marketing: storage is sold in decimal kilobytes while Windows reports sizes in binary units, so a drive advertised at a given capacity appears smaller once installed. If you are speaking in kilobytes, convert in powers of 1000; only use 1024 when you actually mean kibibytes, otherwise you just spread the confusion. So you could simply say: 1 kilobyte equals 1000 bytes, and a byte is 8 bits. (Just in case: x B = x bytes and x b = x bits.)
@CR7ST1AN0. · a year ago
How do you get 134? I think it's supposed to be 124.
@CR7ST1AN0. · 8 months ago
hello me in the past
@zdechlinaaa9571 · 11 months ago
Niceu
@ComputerScienceLessons · 11 months ago
Thanks
@رأفتالهاشمي-ق1م · 4 years ago
How can I find 1) the compression ratio and 2) the entropy of the RLE coding of this string: AAATTTPQ155555?
@ComputerScienceLessons · 4 years ago
Calculating the compression ratio is easy: divide the number of characters after compression by the number of characters before compression and express this as a percentage.

Calculating the entropy is a little more involved. In science, entropy is a tendency towards disorder (things tend to fall apart), so you could say that entropy is a measure of unpredictability. When it comes to a data set, entropy is a measure of how much information it contains. When you compress a data set, you remove duplication (redundancy), which increases the entropy per symbol of what remains. This concept is becoming increasingly important in the field of machine learning.

Entropy depends on context, so to calculate the entropy of your data set AAATTTPQ155555, you have to assume the frequency of each character reflects its probability. You can then apply Claude Shannon's formula: information(x) = -log2( p(x) ), where p(x) is the probability of a particular character occurring in the data set; the entropy is the average of this over the whole set.

This video about entropy is definitely worth a watch: kzbin.info/www/bejne/g2bGkIV8gLueodE

There's also a great explanation and worked examples on this web page: machinelearningmastery.com/what-is-information-entropy/
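Both calculations from the reply can be sketched in Python. The compressed form A3T3P1Q11155 is an assumption based on the character-count pair scheme discussed in the video, and the entropy uses Shannon's formula with frequencies as probabilities:

```python
from collections import Counter
from math import log2

def shannon_entropy(s):
    """Average information per symbol: H = -sum over x of p(x) * log2(p(x))."""
    n = len(s)
    return -sum((c / n) * log2(c / n) for c in Counter(s).values())

original = "AAATTTPQ155555"
compressed = "A3T3P1Q11155"  # run-length pairs: A*3, T*3, P*1, Q*1, 1*1, 5*5

ratio = len(compressed) / len(original)
print(f"compression ratio: {ratio:.0%}")  # 12/14, about 86%
print(f"entropy: {shannon_entropy(original):.2f} bits/symbol")  # roughly 2.3
```

Note the entropy (about 2.3 bits/symbol) is below the 8 bits a plain character encoding would use, which is why there is room to compress at all.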
@guillediaz-varela4195 · a year ago
Great video, it's just that his voice enrages me hahah
@ComputerScienceLessons · a year ago
Sometimes, when I have a cold, my voice is deep and resonant. Most other times, it enrages me too. :)KD