@@SerranoAcademy Repetition (redundancy) is dual to variation -- music. Certainty is dual to uncertainty -- the Heisenberg certainty/uncertainty principle. Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics. Randomness (entropy) is dual to order (predictability) -- "Always two there are" -- Yoda.
@MrofficialC2 жыл бұрын
And low entropy is easier to rig
@lani02 жыл бұрын
You win
@carnivalwrestler5 жыл бұрын
Luis, you are such an incredibly gifted teacher and so meticulous in your explanations. Thank you for your hard work.
@sphengle5 жыл бұрын
This was exactly the baby step I needed to get me on my way with entropy. Far too many people try to explain it by going straight to the equation. There's no intuition in that. Brilliant explanation. I finally understand it.
@jankinsics4 жыл бұрын
Sean Walsh, I feel the same way.
@drakZes5 жыл бұрын
Great work. Compared to my textbook, you explained it 100 times better. Thank you.
@josephbolton809211 күн бұрын
an amazing teacher is an invaluable thing
@dyutinrobin5 ай бұрын
Thank you so much. This was the only video on YouTube that clarified all my doubts regarding the topic of entropy.
@elmoreglidingclub30303 жыл бұрын
Excellent! Great explanation. Enjoyable video (except YT’s endless, annoying ads). Thank you for composing and posting.
@poxyu_was_here7 жыл бұрын
Easy and Great explanation! Thank you very much, Luis
@Bvic35 жыл бұрын
At 13:44 it's not 0.000488 but 0.00006103515! There is a computation error. The entropy is correct, 1.75.
@SerranoAcademy5 жыл бұрын
Thank you for the correction! Yes, you're right.
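For anyone checking the arithmetic, here is a minimal Python sketch. It assumes the eight-letter sequence at 13:44 has letter probabilities 1/2 (four letters), 1/4 (two letters), and 1/8 (two letters), i.e. the 1.75-bit example; the letter names are just placeholders.

# Verify the corrected sequence probability and the 1.75-bit entropy
from math import log2

draw_probs = [1/2]*4 + [1/4]*2 + [1/8]*2   # assumed probabilities of the 8 draws
sequence_prob = 1.0
for p in draw_probs:
    sequence_prob *= p
print(sequence_prob)    # 6.103515625e-05, i.e. 0.00006103515..., not 0.000488

letter_probs = {'A': 1/2, 'B': 1/4, 'C': 1/8, 'D': 1/8}   # hypothetical letter names
entropy = -sum(p * log2(p) for p in letter_probs.values())
print(entropy)          # 1.75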
@sasthra31592 жыл бұрын
Great clarity. I had never gotten this idea of Shannon entropy before. Thank you. Great work!
@christinebraun96104 жыл бұрын
Great explanation. But I think what's still missing is an explanation of why we use log base 2... I didn't quite get that.
@olivercopleston4 жыл бұрын
In the last minute of the video, he explains that using log base 2 corresponds to the levels of a decision tree, i.e. the number of yes/no questions you'd have to ask to determine a value.
@sdsa007 Жыл бұрын
Wow! Awesome. I went through books and encyclopedias and biographies of Shannon trying to understand what you just explained so clearly! Thank you!
@jordyb486211 ай бұрын
I find sum(p*log(p^-1)) more intuitive. The inverse of p (i.e. 1/p) is the ratio of total samples to samples of this outcome. If you ask perfect questions, you'll ask log(1/p) questions to pin an outcome down. Entropy is then the sum of these values, each multiplied by its probability, which is how much it contributes to the total entropy.
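To make that reading concrete, here is a minimal Python sketch using an illustrative distribution {1/2, 1/4, 1/8, 1/8} (my own choice, not from the comment): each outcome takes about log2(1/p) perfect yes/no questions, and entropy is the probability-weighted average of those question counts.

# Entropy as the expected number of perfect yes/no questions
from math import log2

p = {'A': 1/2, 'B': 1/4, 'C': 1/8, 'D': 1/8}   # example distribution

questions = {x: log2(1/px) for x, px in p.items()}        # A:1, B:2, C:3, D:3 questions
entropy = sum(px * questions[x] for x, px in p.items())   # weighted average

print(questions)   # {'A': 1.0, 'B': 2.0, 'C': 3.0, 'D': 3.0}
print(entropy)     # 1.75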
@MatheusSilva-dragon5 жыл бұрын
Wow, thank you, man. I needed that information! There are many ways to teach the same stuff, and that number-of-questions approach is great! It's good to have more than one way to measure something!
@kleberloayza78395 жыл бұрын
Hi Luis, nice to meet you. I am reading the Deep Learning book by Ian Goodfellow, and I needed to watch your video to understand chapter 3.13, information theory. Thanks very much.
@hanaelkhalifa26303 жыл бұрын
Thank you for the excellent explanation of the entropy concept first, and then reaching the final equation step by step. It is a really good and simple approach.
@amperro3 жыл бұрын
I watched it straight through. Very good.
@haimmadmon35314 жыл бұрын
Very good explanation - hope to hear more of your videos
@SixStringTheory66 жыл бұрын
Wow... I wish more people could teach like you. This is so insightful.
@JohnsonChen-t9r5 жыл бұрын
It's very helpful for me to introduce the concept of entropy to students. Thank you for your clear presentation of entropy.
@rajudey16733 жыл бұрын
Really, you have given us outstanding information.
@erdalkaraca22135 жыл бұрын
So, after watching the video, the entropy for giving you a thumbs up and subscribing to your channel was 0 - i.e. great explanation!
@patriciof.calatayud98613 жыл бұрын
I think the Huffman compression that you use at the end of the video gets close to the entropy value but is not exactly the same.
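That gap is real: Huffman coding is optimal among whole-bit prefix codes, so its average code length equals the entropy only when every probability is a power of 1/2, and sits slightly above it otherwise. A rough Python sketch of the comparison, with two illustrative distributions (not taken from the video):

# Compare Huffman average code length with entropy
import heapq
from math import log2

def huffman_lengths(p):
    # Return the code length of each symbol under a Huffman code for distribution p.
    heap = [(prob, i, {sym: 0}) for i, (sym, prob) in enumerate(p.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        p1, _, d1 = heapq.heappop(heap)        # two least probable subtrees
        p2, _, d2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (p1 + p2, next_id, merged))
        next_id += 1
    return heap[0][2]

for p in [{'A': 1/2, 'B': 1/4, 'C': 1/8, 'D': 1/8},   # powers of 1/2: lengths match entropy
          {'A': 0.4, 'B': 0.3, 'C': 0.2, 'D': 0.1}]:  # otherwise: slightly above entropy
    lengths = huffman_lengths(p)
    avg_length = sum(p[s] * lengths[s] for s in p)
    entropy = -sum(px * log2(px) for px in p.values())
    print(round(avg_length, 3), round(entropy, 3))    # 1.75 1.75, then 1.9 1.846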
@clarakorfmacher73944 жыл бұрын
Great video! I really liked the intuitive approach. My professor's was way messier.
@karinasakurai98675 жыл бұрын
Brilliant lecture! I learn so much with this explanation. Thanks from Brazil :)
@meshackamimo19455 жыл бұрын
Hi. Thanks a million times for simplifying a very complicated topic. Kindly find time and post a simplified tutorial on MCMC (Markov chain Monte Carlo)... I am overwhelmed by your unique communication skills. God bless you.
@TheGenerationGapPodcast3 жыл бұрын
Help us smash Markov chain Monte Carlo
@paulstevenconyngham78806 жыл бұрын
this is a really great explanation, thanks so much for sharing mate!
@aymenalawadi78584 жыл бұрын
If Shannon were alive, he would enjoy seeing such a perfect explanation for his theory. Many thanks.
@scherwinn5 жыл бұрын
Very clever explanation of mighty ENTROPY.
@nijunicholas6315 жыл бұрын
Thanks... got the intuition behind entropy.
@aryamahima33 жыл бұрын
Thank you so much for such an easy explanation... respect from India...
@hanaizdihar43683 жыл бұрын
What a great explanation! And so i subscribed😊
@RenanCostaYT4 жыл бұрын
Great explanation, greetings from Brazil!
@kasraamanat54532 жыл бұрын
best, as always ❤️ thank you Luis❤️
@yikenicolezhang4 жыл бұрын
Thank you so much for explaining this concept!
@ravivar13 жыл бұрын
Thanks Luis!
@RobertLugg6 жыл бұрын
I have learned so much from your teaching. Thank you.
@ArvindDevaraj14 жыл бұрын
Mind blowing explanation
@scottsara1235 жыл бұрын
Easy and excellent explanation. Please do one for loss and cost functions as well (convexity).
@rejanebrito43664 жыл бұрын
In the third sequence, I can ask if it is a vowel or a consonant, but... if it is not a vowel I still have to ask at least 2 questions...
@sitharamubeen3 жыл бұрын
Fantastic explanation
@israilzarbaliev70243 жыл бұрын
Please, could you send a link to download this presentation?
@LAChinthaka4 жыл бұрын
Thanks for the great explanation.
@JabaDr6 жыл бұрын
Great video!! Thank You. Would be great to add some explanation for information gain (as for example used for feature selection)
@ihgnmah2 жыл бұрын
Don't we have to match the sequence we started the game with (RRRB)? If so, wouldn't 4 red balls give 1*1*1*0, because there isn't a blue ball in that bucket?
@mortezaaslanzadeh37353 жыл бұрын
Awesome explanation, you rock
@archerchian77613 жыл бұрын
Very clear, thank you!
@nassimbahri5 жыл бұрын
For the first time in my life I understand the real meaning of entropy.
@MrJhmw014 жыл бұрын
Although this is a good description of information entropy, the phase-change analogy used at the beginning doesn't describe thermodynamic entropy very well. The reason ice melting constitutes an increase in entropy in this case is that the ice is in an open thermodynamic system with its environment. Heat has been transferred from the room (a closed system) to the ice. It is this irreversible movement of heat from the room that constitutes the increase in entropy, since the average temperature of the room and the ice has decreased and will continue decreasing until it reaches a stable equilibrium. Indeed, we would not arrive at a gas if there were not sufficient potential energy in the room. While Boltzmann entropy is similar, the similarity lies in the fact that this transfer of heat, understood on a macro level, is a translation of the probability of this energy distribution on a micro level. Entropy is then a measure of the extent to which the particles are in a probable microstate.
@andrashorvath24112 жыл бұрын
Fantastic work.
@miguelfernandosilvacastron32795 жыл бұрын
Thank you. Nice, concise explanation.
@abhishek_sengupta3 жыл бұрын
Awesome explanation!!
@yhat3146 жыл бұрын
Lovely job Luis! Very very good!
@Talesofdahustle16 жыл бұрын
easy and thorough explanation, thank you !
@bakdiabderrahmane80095 жыл бұрын
good explanation thank you so much
@rajacspraman17914 жыл бұрын
Best Entropy explanation ever! Luis is darn good at explaining hard things simple.
@hyperbitcoinizationpod3 ай бұрын
And the entropy is the number of bits needed to convey the information.
@pratikjha27423 жыл бұрын
Really nice explanation.
@hoarong19936 жыл бұрын
You are awesome! I hope you will make more videos like this.
@zinahe4 жыл бұрын
Thank you very much for sharing. I found it very helpful.
@adityagupta-hm2vs2 жыл бұрын
amazing, thank you!
@ArNod0r2 жыл бұрын
genius explanation!
@johnsalazar39035 жыл бұрын
Is there a specific reason why we need to take the average of the logs for the entropy? What's wrong with just leaving it as the sum of logs?
@MinhLe-xk5rm4 жыл бұрын
Amazing explanation, thanks sir!
@jaskaransingh0304 Жыл бұрын
Thank you!
@adonis1006 жыл бұрын
Fascinating. Excellent Job
@kwameokrah76625 жыл бұрын
Great explanation! ... coming from a statistician ;)
@YugoGautomo5 жыл бұрын
Hi Luis, thanks for your explanation. I guess you're wrong at minutes 6:29 and 7:41. I think P(winning) for bucket 1 should be 0, since there were no blue balls in the bucket, and the expected outcome of the game should be R, R, R, B. Am I right?
@eltajbabazade11893 жыл бұрын
Thank you.
@abhisheks39654 жыл бұрын
Sir, how can you explain the number of questions approach when only two variables are present (for example AABBBB)?
@cantkeepitin5 жыл бұрын
Super cool, from the 1st second till the end!!
@smitomborah4675 жыл бұрын
thanks a lot, Luis. This was helpful.
@carolinnerabbi9654 жыл бұрын
Great explanation, thank you!
@riderblack64016 жыл бұрын
Thank you so much! Luis!
@nicholasteong24852 жыл бұрын
Great explanation. May I know what the function of entropy is in a decision tree? Are there any examples I can refer to? Anyway, thank you so much for the great work.
@debapriyaroy80856 жыл бұрын
excellent explanation.
@initdialog4 жыл бұрын
Have I missed it, or did you forget to explain why the logarithm is base 2 and what information gain actually is?
@muhammadsarimmehdi4 жыл бұрын
can you do a video on KL Divergence?
@maryamzolnoori16215 жыл бұрын
Awesome video, very well explained.
@marakhider70696 жыл бұрын
Nice explanations! Thank you.
@razanalolimat2 жыл бұрын
I didn't get why it was log2. Does anyone know why it is a 2?
@georgesmith30225 жыл бұрын
Why can I ask "is it A or B", and not "is it A or B or C or D"?
@trampflips1014 жыл бұрын
very well explained, thanks!
@rrksshah885 жыл бұрын
18:40!! I got the idea behind entropy!! Thank you Luis!!
@cryptos74883 жыл бұрын
Brilliant.
@kshitizomar67304 жыл бұрын
AMAZING AMAZING AMAZING AMAZING
@parizad505 жыл бұрын
I love it, such a good explanation.
@zd6764 жыл бұрын
Great video! However, I think the probability of getting the exact placement in the order (red, red, red, blue) is actually lower than 0.105, right? Because you still need to factor in the probability of this exact placement.
@MrDiego1634 жыл бұрын
So good!
@milleniumsalman19845 жыл бұрын
really good learning
@raz4786 жыл бұрын
Good and clear explanation
@mukeshjoshi10422 жыл бұрын
Respected sir, can you suggest any book to read more about Shannon entropy? Thank you.
@SerranoAcademy2 жыл бұрын
Hi Mukesh! Here's a good one: people.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf
@vman0494 жыл бұрын
Your game show example is actually confusing because you conflate the number of ways four balls of one or two colors can be rearranged with the total number of ways that you can draw four balls of one or two colors with replacement. The latter directly specifies the probability of winning the game show.

First off, if there is only one color, both the number of ways to rearrange the four balls and the total number of ways to draw four balls with replacement is 1. If there are two colors, then the number of ways to rearrange the balls depends on the number of balls of each color, while the number of ways to draw four balls with replacement is always 16 (= 2 x 2 x 2 x 2). Assuming there is one red ball, the number of ways to arrange the four balls is 4. Assuming there are two red balls, the number of ways to arrange the four balls is 6. (How you get these numbers in general can be a video of its own on combinatorics, but for this simple problem, you can either (1) enumerate the possibilities by hand or (2) realize that these are just the total number of ways to arrange four balls in four slots - 4! - divided by the number of ways to arrange balls of the same color - 1! times 3! in the first case and 2! times 2! in the second. For (2), we divide because the order of the balls of the same color doesn't matter, since the only thing distinguishing balls is color.)

To determine the probability of drawing with replacement the exact same sequence of four balls twice (i.e. winning the game show), we can create a 16 x 16 matrix (for the two-color case; for the one-color case, the matrix is 1 x 1). Rows correspond to outcomes of event 1 (a sequence of four draws with replacement) and columns correspond to outcomes of event 2. Since events 1 and 2 are independent, the probability of both event 1 and event 2 happening is P(event 1, event 2) = P(event 1) x P(event 2). We can calculate the probabilities of each event based on the fractions of balls of each color within the container. Then, the probability P(event 1 = outcome z, event 2 = outcome z) (i.e. both events result in the same outcome) is the sum of the probabilities along the diagonal of this matrix.

Note that the game does not stipulate that the outcome of event 2 must match the one sole outcome for each container given in the video. It just so happens that the outcomes for each of the containers matched the distribution of the balls in the container, but this need not be the case. There are many more outcomes that can satisfy this criterion. This is ultimately where the confusion stems from.

Working out the sum, we find that the probability of winning in the case in which the four balls are all the same color is 1, since there is only one possible outcome, which occurs with probability 1. When there are three red balls and one blue ball, the diagonal sums to 0.153. When there are two red balls and two blue balls, the diagonal sums to 0.0625. Even though the first and the last of these three numbers match those given at 7:30, the reasons for these are completely different than those given in the video, which is why the second result DOES NOT match.

While entropy does increase as the distribution of balls of a given color within a container becomes more uniform, it is not equal to the probability of outcomes 1 and 2 matching. In other words, the point I'm trying to make is that while entropy and the probability of winning your game may be correlated (in this case of a finite, discrete sample), they ARE NOT equal.
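For what it's worth, both readings can be checked by brute force. A small Python sketch for the three-red/one-blue container, enumerating all 16 length-4 draws with replacement, gives 0.105 for "reproduce the fixed arrangement R, R, R, B" and about 0.153 for "two independent draws match each other" (the diagonal sum described above):

# Compare the two interpretations for P(red) = 3/4, P(blue) = 1/4
from itertools import product

p = {'R': 3/4, 'B': 1/4}
prob = {}
for seq in product('RB', repeat=4):      # all 16 possible draws of 4 balls
    q = 1.0
    for ball in seq:
        q *= p[ball]
    prob[seq] = q

# Interpretation 1: one draw reproduces the fixed arrangement R, R, R, B
print(prob[('R', 'R', 'R', 'B')])          # 0.10546875 ~ 0.105

# Interpretation 2: two independent draws give the same sequence (diagonal sum)
print(sum(q * q for q in prob.values()))   # 0.152587890625 ~ 0.153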
@amneuro82552 жыл бұрын
The probability of a blue ball is 0. In the table you multiplied by 1 (1x1x1x1). Why didn't you multiply by 0?
@seyedmansourbeigi91266 жыл бұрын
Great, Luis. How can one get a degree from Udacity?
@pauldacus45905 жыл бұрын
OK, I think I am lost here... at 9:55 the table has a P(winning) column with 1, 0.105, and 0.0625 as the values. Then in the next column you have "-log2(P(winning))", which is the same as log2(1/P(winning)). The first row is right, but the 2nd row isn't: -log2(0.105) is 3.25, not 0.81, and -log2(0.0625) is 4, not 1. Your final column has the average, and entropy is the sum of -p(x)log2(p(x)). But I think it's just that the middle column is kind of confusing: it computes the probability of a sequence, then sums the logs and gives the entropy, but that entropy seems unrelated to anything else in the table.
@matejtacer75302 жыл бұрын
I agree there is a mistake there; however, my numbers are different from yours. Since the formula for entropy is -sum(pi*log2(pi)), the second row should have been 1.433 and the third should be 2.
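A quick numerical check of the numbers being discussed, under the assumption that the table's last column is the per-ball average of -log2(P(winning)): that average comes out to 0, 0.81 and 1, which is exactly -sum(p*log2(p)) for each bucket, while -log2(P(winning)) itself is 0, 3.25 and 4 as noted above.

# Reconcile P(winning), -log2(P(winning)) and the per-ball entropy for the three buckets
from math import log2

buckets = [{'R': 4, 'B': 0}, {'R': 3, 'B': 1}, {'R': 2, 'B': 2}]   # ball counts

for counts in buckets:
    total = sum(counts.values())
    probs = {c: n / total for c, n in counts.items()}
    arrangement = ['R'] * counts['R'] + ['B'] * counts['B']
    p_win = 1.0
    for ball in arrangement:        # draw the bucket's own arrangement, with replacement
        p_win *= probs[ball]
    bits = -log2(p_win)
    entropy = -sum(q * log2(q) for q in probs.values() if q > 0)
    print(round(p_win, 4), round(bits, 2), round(bits / total, 2), round(entropy, 2))

# Output: 1.0 0.0 0.0 0.0
#         0.1055 3.25 0.81 0.81
#         0.0625 4.0 1.0 1.0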