Shannon Entropy and Information Gain

204,928 views

Serrano.Academy

Comments: 333
@freemanguess8634 6 years ago
With great knowledge comes low entropy
@SerranoAcademy 6 years ago
Hahaaa, love it!!!
@fantomraja9137 4 years ago
lol
@hyperduality2838 4 years ago
@@SerranoAcademy Repetition (redundancy) is dual to variation -- music. Certainty is dual to uncertainty -- the Heisenberg certainty/uncertainty principle. Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics. Randomness (entropy) is dual to order (predictability) -- "Always two there are" -- Yoda.
@MrofficialC 2 years ago
And low entropy is easier to rig
@lani0 2 years ago
You win
@carnivalwrestler 5 years ago
Luis, you are such an incredibly gifted teacher and so meticulous in your explanations. Thank you for your hard work.
@sphengle 5 years ago
This was exactly the baby step I needed to get me on my way with entropy. Far too many people try to explain it by going straight to the equation. There's no intuition in that. Brilliant explanation. I finally understand it.
@jankinsics 4 years ago
Sean Walsh feel the same way.
@drakZes 5 years ago
Great work. Compared to my textbook you explained it 100 times better, Thank you.
@josephbolton8092 11 days ago
an amazing teacher is an invaluable thing
@dyutinrobin 5 months ago
Thank you so much. This was the only video on YouTube that clarified all my doubts regarding the topic of entropy.
@elmoreglidingclub3030 3 years ago
Excellent! Great explanation. Enjoyable video (except YT’s endless, annoying ads). Thank you for composing and posting.
@poxyu_was_here 7 years ago
Easy and Great explanation! Thank you very much, Luis
@Bvic3 5 years ago
At 13:44 it's not 0.000488 but 0.00006103515! There is a computation error. The entropy is correct, 1.75.
@SerranoAcademy 5 years ago
Thank you for the correction! Yes, you're right.
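A quick arithmetic check of the corrected value (a sketch; it assumes the 13:44 example multiplies one probability per letter of an eight-letter sequence with probabilities 1/2, 1/2, 1/2, 1/2, 1/4, 1/4, 1/8, 1/8, which is an assumption, not a quote from the video):

```python
# Sketch: verify the corrected product of probabilities and the entropy of 1.75,
# assuming eight draws with probabilities 1/2 (x4), 1/4 (x2), 1/8 (x2).
import math

probs = [0.5] * 4 + [0.25] * 2 + [0.125] * 2

product_of_probs = math.prod(probs)
print(product_of_probs)                           # 6.103515625e-05, i.e. 0.00006103...
print(-math.log2(product_of_probs) / len(probs))  # 1.75 bits per letter (the entropy)
```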
@sasthra3159 2 years ago
Great clarity. Have never got this idea about the Shannon Entropy. Thank you. Great work!
@christinebraun9610 4 years ago
Great explanation. But I think what's still missing is an explanation of why we use log base 2... I didn't quite get that.
@olivercopleston 4 years ago
In the last minute of the video, he explains that using Log base 2 corresponds to the level of a decision tree, which is the number of questions you'd have to ask to determine a value.
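A quick worked instance of that reading (a simple illustration, not taken verbatim from the video):

$$ \log_2 2 = 1, \qquad \log_2 4 = 2, \qquad \log_2 8 = 3, $$

so each doubling of the number of equally likely options costs exactly one more yes/no question, which is why the logarithm is taken in base 2.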
@sdsa007 1 year ago
Wow! Awesome. It would take books, encyclopedias, and biographies of Shannon to understand what you just explained so clearly! Thank you!
@jordyb4862 11 months ago
I find sum(p*log(p^-1)) more intuitive. Inverse p (i.e. 1/P) is the ratio of total samples to this sample. If you ask perfect questions you'll ask log(1/p) questions. Entropy is then the sum of these values, each multiplied by the probability of each, which is how much it contributes to the total entropy.
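For reference, the two ways of writing the formula are equivalent:

$$ H = \sum_i p_i \log_2\frac{1}{p_i} = -\sum_i p_i \log_2 p_i, $$

where $\log_2(1/p_i)$ can be read as the number of yes/no questions spent on an outcome of probability $p_i$, and the sum weights each outcome by how often it occurs.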
@MatheusSilva-dragon 5 years ago
Wow, thank you, man. I needed that information! There are many ways to teach the same stuff! That number-of-questions idea is great! It's good to have more than one way to measure something!
@kleberloayza7839 5 years ago
Hi Luis, nice to meet you. I am reading the Deep Learning book by Ian Goodfellow, and I needed to watch your video to understand chapter 3.13, Information Theory. Thanks very much.
@hanaelkhalifa2630 3 years ago
Thank you for the excellent explanation of the entropy concept first, then reaching the final equation step by step. It is a really good and simple approach.
@amperro 3 years ago
I watched it straight through. Very good.
@haimmadmon3531 4 years ago
Very good explanation - hope to hear more of your videos
@SixStringTheory6 6 years ago
Wow... I wish more people could teach like you; this is so insightful.
@JohnsonChen-t9r 5 years ago
It's very helpful for me to introduce the concept of entropy to students. Thank you for your clear presentation of entropy.
@rajudey1673 3 years ago
Really, you have given us outstanding information.
@erdalkaraca2213 5 years ago
So, after watching the video, the entropy for giving you a thumbs up and subscribing to your channel was 0 - i.e. great explanation!
@patriciof.calatayud9861 3 years ago
I think the Huffman compression that you use at the end of the video gets near the entropy value but is not exactly the same.
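To illustrate that point, here is a minimal sketch (not the video's own code; the helper name `huffman_lengths` is made up) comparing a Huffman code's average length with the Shannon entropy. The two coincide only when every probability is a power of 1/2:

```python
# Compare Huffman average code length with Shannon entropy for two distributions.
import heapq, math

def huffman_lengths(probs):
    # Heap items: (subtree probability, tie-break id, symbols in the subtree).
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    counter = len(probs)                  # unique tie-break ids for merged nodes
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for sym in s1 + s2:               # each merge adds one bit to these symbols
            lengths[sym] += 1
        heapq.heappush(heap, (p1 + p2, counter, s1 + s2))
        counter += 1
    return lengths

for probs in ([0.5, 0.25, 0.125, 0.125], [0.4, 0.3, 0.2, 0.1]):
    lengths = huffman_lengths(probs)
    avg = sum(p * l for p, l in zip(probs, lengths))
    entropy = -sum(p * math.log2(p) for p in probs)
    print(probs, "average code length:", round(avg, 3), "entropy:", round(entropy, 3))
```

For the first distribution both numbers are 1.75; for the second the Huffman average (1.9) sits slightly above the entropy (about 1.846), which is the gap the comment is describing.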
@clarakorfmacher7394 4 years ago
Great video! I really liked the intuitive approach. My professor's was waaaay messier.
@karinasakurai9867 5 years ago
Brilliant lecture! I learn so much with this explanation. Thanks from Brazil :)
@meshackamimo1945 5 years ago
Hi. Thanks a million times for simplifying a very complicated topic. Kindly find time and post a simplified tutorial on MCMC (Markov chain Monte Carlo)... I am overwhelmed by your unique communication skills. God bless you.
@TheGenerationGapPodcast 3 years ago
Help us smash Markov chain Monte Carlo
@paulstevenconyngham7880 6 years ago
this is a really great explanation, thanks so much for sharing mate!
@aymenalawadi7858 4 years ago
If Shannon were alive, he would enjoy seeing such a perfect explanation for his theory. Many thanks.
@scherwinn 5 years ago
Very clever explanation of mighty ENTROPY.
@nijunicholas631 5 years ago
Thanks..Got the intuition behind Entropy
@aryamahima3 3 years ago
Thank you so much for such an easy explanation... respect from India...
@hanaizdihar4368 3 years ago
What a great explanation! And so I subscribed 😊
@RenanCostaYT 4 years ago
Great explanation, greetings from Brazil!
@kasraamanat5453 2 years ago
best, as always ❤️ thank you Luis❤️
@yikenicolezhang 4 years ago
Thank you so much for explaining this concept!
@ravivar1 3 years ago
Thanks Luis!
@RobertLugg 6 years ago
I have learned so much from your teaching. Thank you.
@ArvindDevaraj1 4 years ago
Mind blowing explanation
@scottsara123 5 years ago
Easy and excellent explanation. Please do one for loss and cost functions as well (convex).
@rejanebrito4366 4 years ago
In the third sequence, I can ask if it is a vowel or a consonant, but... if it is not a vowel I still have to ask at least 2 questions...
@sitharamubeen 3 years ago
Fantastic explanation
@israilzarbaliev7024 3 years ago
Please could you send a link to download this presentation?
@LAChinthaka 4 years ago
Thanks for the great explanation.
@JabaDr 6 years ago
Great video!! Thank You. Would be great to add some explanation for information gain (as for example used for feature selection)
@ihgnmah 2 years ago
Don't we have to match the sequence that we started the game (RRRB)? If so, 4 red balls would give 1*1*1*0 because there isn't a blue ball in that bucket?
@mortezaaslanzadeh3735 3 years ago
Awesome explanation, you rock
@archerchian7761 3 years ago
Very clear, thank you!
@nassimbahri 5 years ago
For the first time in my life I understand the real meaning of entropy.
@MrJhmw01 4 years ago
Although this is a good description of information entropy, the phase-change analogy used at the beginning doesn't describe thermodynamic entropy very well. The reason ice melting constitutes an increase in entropy in this case is that the ice is in an open thermodynamic system with its environment: heat has been transferred from the room to the ice. It is this irreversible movement of heat from the room that constitutes the increase in entropy, since the temperature difference between the room and the ice decreases and will keep decreasing until a stable equilibrium is reached. Indeed, we would not arrive at a gas if there were not sufficient thermal energy in the room.

While Boltzmann entropy is similar, the similarity lies in the fact that this transfer of heat, understood on a macro level, is a translation of the probability of the energy distribution on a micro level. Entropy is then a measure of the extent to which the particles are in a probable microstate.
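For reference, the micro-level statement alluded to here is usually written with Boltzmann's formula (standard statistical mechanics, not something shown in the video):

$$ S = k_B \ln W, $$

where $W$ is the number of microstates compatible with the macrostate, so more probable (higher-$W$) macrostates carry higher entropy.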
@andrashorvath2411 2 years ago
Fantastic work.
@miguelfernandosilvacastron3279 5 years ago
Thank you. Nice, concise explanation.
@abhishek_sengupta 3 years ago
Awesome explanation!!
@yhat314 6 years ago
Lovely job Luis! Very very good!
@Talesofdahustle1 6 years ago
easy and thorough explanation, thank you !
@bakdiabderrahmane8009 5 years ago
Good explanation, thank you so much.
@rajacspraman1791 4 years ago
Best entropy explanation ever! Luis is darn good at explaining hard things simply.
@hyperbitcoinizationpod 3 months ago
And the entropy is the average number of bits needed to convey the information.
@pratikjha2742 3 years ago
Really nice explanation.
@hoarong1993 6 years ago
You are awesome! I hope you will make more videos like this.
@zinahe 4 years ago
Thank you very much for sharing. I found it very helpful.
@adityagupta-hm2vs 2 years ago
amazing, thank you!
@ArNod0r 2 years ago
genius explanation!
@johnsalazar3903 5 years ago
Is there a specific reason why we need to take the average of the logs for the entropy? What's wrong with just leaving it as the sum of logs?
@MinhLe-xk5rm 4 years ago
Amazing explanation, thanks sir!
@jaskaransingh0304 1 year ago
Thank you!
@adonis100 6 years ago
Fascinating. Excellent Job
@kwameokrah7662 5 years ago
Great explanation! ... coming from a statistician ;)
@YugoGautomo 5 years ago
Hi Luis, thanks for your explanation. I think you're wrong at minutes 6:29 and 7:41. I think P(winning) for bucket 1 should be 0, since there were no blue balls in the bucket, and the expected outcome of the game should be R, R, R, B. Am I right?
@eltajbabazade1189 3 years ago
Thank you.
@abhisheks3965 4 years ago
Sir, how can you explain the number of questions approach when only two variables are present (for example AABBBB)?
@cantkeepitin 5 years ago
Super cool, from the 1st second till the end!!
@smitomborah467 5 years ago
thanks a lot, Luis. This was helpful.
@carolinnerabbi965 4 years ago
Great explanation, thank you!
@riderblack6401 6 years ago
Thank you so much! Luis!
@nicholasteong2485 2 years ago
Great explanation. May I know what the function of entropy is in a decision tree? Are there any examples I can refer to? Anyway, thank you so much for the great work.
@debapriyaroy8085 6 years ago
excellent explanation.
@initdialog 4 years ago
Have I missed it, or did you forget to explain why the logarithm is base 2 and what information gain actually is?
@muhammadsarimmehdi 4 years ago
can you do a video on KL Divergence?
@maryamzolnoori1621 5 years ago
Awesome video, very well explained.
@marakhider7069 6 years ago
Nice explanations! Thank you.
@razanalolimat 2 years ago
I didn't get why it was log2, does anyone know why it is a 2?
@georgesmith3022 5 years ago
Why can I ask "is it A or B?" and not "is it A or B or C or D?"
@trampflips101 4 years ago
very well explained, thanks!
@rrksshah88 5 years ago
18:40 !! I got the idea behind Entropy !! Thank you Luis !!
@cryptos7488 3 years ago
Brilliant.
@kshitizomar6730 4 years ago
AMAZING AMAZING AMAZING AMAZING
@parizad50 5 years ago
I love it, so good explanation
@zd676 4 years ago
Great video! However, I think the probability of getting the exact placement in the order (red, red, red, blue) is actually lower than 0.105, right? Because you still need to factor in the probability of this exact placement.
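For reference (assuming the 0.105 at that point refers to the bucket with three red balls and one blue, and to drawing the specific ordered sequence red, red, red, blue with replacement):

$$ \left(\tfrac{3}{4}\right)^3 \times \tfrac{1}{4} = \tfrac{27}{256} \approx 0.105, $$

so under that assumption the figure already corresponds to one particular ordering of the four draws.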
@MrDiego163 4 years ago
So good!
@milleniumsalman1984 5 years ago
really good learning
@raz478 6 years ago
Good and clear explanation
@mukeshjoshi1042 2 years ago
Respected sir, can you suggest any book to read more about Shannon entropy? Thank you.
@SerranoAcademy 2 years ago
Hi Mukesh! Here's a good one people.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf
@vman049 4 years ago
Your game show example is actually confusing because you conflate the number of ways four balls of one or two colors can be rearranged with the total number of ways that you can draw four balls of one or two colors with replacement. The latter directly specifies the probability of winning the game show.

First off, if there is only one color, both the number of ways to rearrange the four balls and the total number of ways to draw four balls with replacement is 1. If there are two colors, then the number of ways to rearrange the balls depends on the number of balls of each color, while the number of ways to draw four balls with replacement is always 16 (= 2 x 2 x 2 x 2). Assuming there is one red ball, the number of ways to arrange the four balls is 4. Assuming there are two red balls, the number of ways to arrange the four balls is 6. (How you get these numbers in general can be a video of its own on combinatorics, but for this simple problem, you can either (1) enumerate the possibilities by hand or (2) realize that these are just the total number of ways to arrange four balls in four slots - 4! - divided by the number of ways to arrange balls of the same color - 1! times 3! in the first case and 2! times 2! in the second. For (2), we divide because the order of the balls of the same color doesn't matter, since the only thing distinguishing balls is color.)

To determine the probability of drawing with replacement the exact same sequence of four balls twice (i.e. winning the game show), we can create a 16 x 16 matrix (for the two-color case; for the one-color case, the matrix is 1 x 1). Rows correspond to outcomes of event 1 (a sequence of four draws with replacement) and columns correspond to outcomes of event 2. Since events 1 and 2 are independent, the probability of both event 1 and event 2 happening is P(event 1, event 2) = P(event 1) x P(event 2). We can calculate the probabilities of each event based on the fractions of balls of each color within the container. Then, the probability P(event 1 = outcome z, event 2 = outcome z) (i.e. both events result in the same outcome) is the sum of the probabilities along the diagonal of this matrix.

Note that the game does not stipulate that the outcome of event 2 must match the one sole outcome for each container given in the video. It just so happens that the outcomes for each of the containers matched the distribution of the balls in the container, but this need not be the case. There are many more outcomes that can satisfy this criterion. This is ultimately from where the confusion stems.

Working out the sum, we find that the probability of winning in the case in which the four balls are all the same color is 1, since there is only one possible outcome, which occurs with probability 1. When there are three red balls and one blue ball, the diagonal sums to 0.153. When there are two red balls and two blue balls, the diagonal sums to 0.0625. Even though the first and the last of these three numbers match those given at 7:30, the reasons for these are completely different than those given in the video, which is why the second result DOES NOT match.

While entropy does increase as the distribution of balls of a given color within a container becomes more uniform, it is not equal to the probability of outcomes 1 and 2 matching. In other words, the point I'm trying to make is that while entropy and the probability of winning your game may be correlated (in this case of a finite, discrete sample), they ARE NOT equal.
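A small sketch of the diagonal-sum computation described above (illustrative Python following the comment's setup of two independent sequences of four draws with replacement; the helper name `p_match` is made up for the example):

```python
# Probability that two independent sequences of four draws (with replacement)
# from a two-color bucket coincide, summed over all 16 possible sequences.
from itertools import product

def p_match(p_red):
    p = {"R": p_red, "B": 1.0 - p_red}
    total = 0.0
    for seq in product("RB", repeat=4):   # all 16 possible four-draw sequences
        p_seq = 1.0
        for ball in seq:
            p_seq *= p[ball]
        total += p_seq ** 2               # both independent events land on this sequence
    return total

print(p_match(1.0))    # 1.0      -> four red balls
print(p_match(0.75))   # ~0.1526  -> three red, one blue
print(p_match(0.5))    # 0.0625   -> two red, two blue
```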
@amneuro8255 2 years ago
The probability of a blue ball is 0. In the table you multiplied by 1 (1x1x1x1). Why didn't you multiply by 0?
@seyedmansourbeigi9126 6 years ago
Luis, great. How do I get a degree from Udacity?
@pauldacus4590 5 years ago
OK, I think I am lost here... at 9:55 in the table you have the P(winning) column with 1, .105, and .0625 as the values. Then in the next column, you have "-log2(P(winning))", which is actually the same as log2(1/P(winning)). So the first row is right, but the 2nd row isn't: -log2(.105) is 3.25, not .81, and -log2(.0625) is 4, not 1. Your final column has the average, and entropy is the sum of -p(x)log2(p(x)). But I think it's just that the middle column is kinda confusing. It computes the probability of a sequence, then sums the logs, and gives the entropy, but the entropy is unrelated to anything else in the table.
@matejtacer7530 2 years ago
I agree there is a mistake there; however, my numbers are different from yours. Since the formula for entropy is -sum(pi*log2(pi)), the second row should have been 1.433, and the third should be 2.
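One way to reconcile these numbers (a sketch; it assumes the table's middle column actually shows the per-ball average $-\tfrac{1}{4}\log_2 P(\text{winning})$ rather than $-\log_2 P(\text{winning})$ itself):

$$ -\tfrac{1}{4}\log_2(0.105) \approx 0.81, \qquad -\tfrac{1}{4}\log_2(0.0625) = 1, $$

which match the per-bucket entropies $-(0.75\log_2 0.75 + 0.25\log_2 0.25) \approx 0.81$ and $-(0.5\log_2 0.5 + 0.5\log_2 0.5) = 1$.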
@RameshKumar-is8no 6 years ago
awesome (y) very nice explanation :)
@seyedmansourbeigi9126 6 years ago
Luis Great.
@hydropascal 4 years ago
Thank you very much~
@thelastone1643 5 years ago
You are amazing..