"a bit" "a bit more" after years living with the pack of geniuses, he had slowly become one
@laurenpinschannels2 жыл бұрын
ah yes I recognize this sense of genius. it's the same one people use when I say that doors can be opened. "thanks genius" I am so helpful
@comradepeter878 ай бұрын
"he had slowly become *one* "
@dudeman04015 ай бұрын
@@laurenpinschannels it's a double entendre: one coin can convey one bit ("a bit") of information, while two coins can convey two bits of information ("a bit" + "a bit more").
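For anyone who wants the pun in numbers, a quick sketch (Python, assuming fair and independent coins):

```python
import math

def surprisal_bits(p):
    # Self-information of an outcome with probability p, in bits.
    return -math.log2(p)

print(surprisal_bits(1/2))        # one fair coin landing heads: 1.0 bit ("a bit")
print(surprisal_bits(1/2 * 1/2))  # two fair coins landing a specific way: 2.0 bits ("a bit more")
```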
@Ziferten2 жыл бұрын
EE chiming in: you stopped as soon as you got to the good part! Shannon channel capacity, equalization, error correction, and modulation are my jam. I'd love to see more communications theory on Computerphile!
@Mark-dc1su2 жыл бұрын
If anyone wants an extremely accessible intro to these ideas, Ashby's Introduction to Cybernetics is the gold standard.
@hellowill2 жыл бұрын
Yeah feels like this video was a very simple starter
@travelthetropics61902 жыл бұрын
Greetings EE! those are the first topics on our "communications theory" subject back at Uni.
@OnionKnight5412 жыл бұрын
Hey! What channel is that stuff on ? I'm still a bit confused by IT
@mokovec2 жыл бұрын
Look at the older videos on this channel - Prof. Brailsford has already covered a lot of the details and history.
@louisnemzer68012 жыл бұрын
This is the best unscripted math joke I can remember!
"How surprised are you?" "A bit." One bit?
@JavierSalcedoC2 жыл бұрын
_Flips 2 coins_ "And now, how surprised are you?" "A bit more" *exactly*
@068LAICEPS2 жыл бұрын
I noticed it during the video, but now, after reading it here, I'm laughing.
@roninpawn2 жыл бұрын
Nice. This explanation ties so elegantly to the hierarchy of text compression. I've been told many times that it's mathematically provable that there is no more efficient method... This relatively simple explanation leaves me feeling like I understand HOW it is mathematically provable.
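A rough sketch of where that provable bound shows up (Python; this only estimates the character-frequency entropy of a string, which lower-bounds any code that encodes characters independently; real compressors and real English involve context, so this is just an illustration):

```python
import math
from collections import Counter

def entropy_bits_per_char(text):
    # Shannon entropy of the character frequencies, in bits per character.
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

text = "this sentence is just an example string"   # any text works here
h = entropy_bits_per_char(text)
print(f"{h:.3f} bits/char, so >= {math.ceil(h * len(text))} bits for any symbol-by-symbol code")
```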
@gaptastic2 жыл бұрын
I'm not gonna lie, I didn't think this video was going to be interesting, but man, it's making me think about other applications. Thank you!
@Double-Negative2 жыл бұрын
The reason we use the logarithm is that it turns multiplication into addition. The chance of two independent events X and Y both happening is P(X)*P(Y), so if
entropy(X) = -log(P(X))
then
entropy(X and Y) = -log(P(X)*P(Y)) = -log(P(X)) - log(P(Y)) = entropy(X) + entropy(Y)
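A tiny numeric check of that additivity (Python; the coin and die probabilities are just example values, assumed independent):

```python
import math

def info_bits(p):
    # surprisal: -log2 of the probability
    return -math.log2(p)

p_coin, p_die = 1/2, 1/6            # a fair coin flip and a fair die roll, assumed independent
joint = p_coin * p_die              # independence: probabilities multiply
print(info_bits(joint))                          # ~3.585 bits
print(info_bits(p_coin) + info_bits(p_die))      # ~3.585 bits -- the surprisals add
```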
@PetrSojnek2 жыл бұрын
Isn't that more a result of using the logarithm than the reason for using it? It feels like using the logarithm for better scaling was still the primary factor.
@entropie-36222 жыл бұрын
@@PetrSojnek There are lots and lots of choices for functions that model diminishing returns, but only the log functions turn multiplication into addition. Considering how often independent events show up in probability theory, it makes a lot of sense to use the log function for this specific property, and it yields all kinds of nice results that you would not see if you were to use another diminishing-returns model. If we go by the heuristic of it representing information, this property is fairly integral, because you would expect the total information for multiple independent events to come out as the sum of the information about the individual events.
@GustavoOliveira-gp6nr2 жыл бұрын
Exactly, the choice of the log function is more due to the addition property than to diminishing returns. It is also directly related to the number of binary digits needed to code a sequence of fair coin flips: one more digit on the sequence changes the sequence's probability by a factor of 2 while adding exactly one more bit of information, which matches the logarithm formula.
@temperedwell62952 жыл бұрын
The reason for using the logarithm to base 2 is that there are 2^N different words of length N formed from the alphabet {H,T}; i.e., length of word = log_2(number of words). The reason for the minus sign is so that N comes out as a positive measure of the amount of information.
@LostTheGame62 жыл бұрын
The way I like to reach that conclusion is to describe a population where everyone plays once. In the case of the coin flip, if a million people play, you need, on average, to give the names of the 500k people who got tails (or heads); otherwise your description is incomplete. In the case of the lottery, you can just say "no one won", or just give the name of the winner. So you can clearly see how much more information is needed in the first case.
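Putting rough numbers on that description (Python; a sketch that assumes exactly half the players got tails and at most one lottery winner, just to show the scale):

```python
import math

n = 1_000_000

def log2_choose(n, k):
    # log2 of the binomial coefficient C(n, k), via lgamma to avoid huge integers
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)) / math.log(2)

# Coin flips: naming exactly which half of a million players got tails
print(log2_choose(n, n // 2))   # ~999,990 bits -- about a megabit of description

# Lottery: "no one won" or the index of the single winner among a million players
print(math.log2(n + 1))         # ~19.9 bits
```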
@MrKohlenstoff2 жыл бұрын
That's a nice explanation!
@sanferrera2 жыл бұрын
Very nice, indeed!
@NathanY0ung2 жыл бұрын
This makes me think of it as the ability to guess correctly. For a coin flip, which carries more information, it's harder to guess the outcome than it is to guess whether someone won the lottery.
@guesswhomofo4 ай бұрын
That's exactly what he meant in the video with his "surprisingness" analogy: easy to guess = unsurprising.
@elimgarak35972 жыл бұрын
I believe Popper made this connection between probability and information a bit earlier in his Logik der Forschung (1934; Shannon's famous paper came only in 1948). That's why he says we ought to search for "bold" theories, that is, theories with low probability and thus more content. Except, at first, he used a simpler formula: Content(H) = 1 - P(H), where H is a scientific hypothesis. Philosophers' role in the history of logic and computer science is a bit underrated and obscured imo (see, for example, Russell's type theory). Btw, excellent explanation. Please bring this guy on more often.
@yash11522 жыл бұрын
thanks a lot for bringing philosophy up in here 😇
@Rudxain2 жыл бұрын
This reminds me of quantum superposition
@drskelebone2 жыл бұрын
Either I missed a note, there's a note upcoming, or there is no note stating that these are log_2 logarithms, not natural or common logarithms @5:08. "Upcoming" is the winner, giving me -log_2(1/3) = log_2(3) ≈ 1.585 bits of information.
@RickGladwin3 ай бұрын
That caught me as well, coming from physics, where log with no subscript is assumed base 10, and ln for the natural log with base e. I’m doing more computer science than physics these days, so I’ll be on the lookout for the base 2 assumption! It makes sense, for binary information.
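A small sketch of the base conventions being discussed (Python; the 1-in-3 event probability is taken from the comment above, and only the unit changes with the base):

```python
import math

p = 1/3   # the 1-in-3 event discussed above

print(-math.log2(p))               # ~1.585 bits      (base-2 log)
print(-math.log(p))                # ~1.099 nats      (natural log)
print(-math.log10(p))              # ~0.477 hartleys  (base-10 log)
print(-math.log(p) / math.log(2))  # nats converted to bits: 1.585 again
```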
@travelthetropics61902 жыл бұрын
This and the Nyquist-Shannon sampling theorem are two of the building blocks of communication as we know it today. So we can say even this video is brought to us by those two :D
@agma2 жыл бұрын
The bit puns totally got me 🤣
@Jader77772 жыл бұрын
Coffee machine right next to computer speaks louder than any theory in this video.
@CristobalRuiz2 жыл бұрын
Been seeing lots of documentary videos about Shannon lately. Thanks for sharing.
@scitortubeyou2 жыл бұрын
"million-to-one chances happen nine times out of ten" - Terry Pratchett
@-eurosplitsofficalclanchan60572 жыл бұрын
how does that work?
@AntonoirJacques2 жыл бұрын
@@-eurosplitsofficalclanchan6057 By being a joke?
@IceMetalPunk2 жыл бұрын
"Thinking your one-in-a-million chance event is a miracle is underestimating the sheer number of things.... that there are...." -Tim Minchin
@davidsmind2 жыл бұрын
Given enough time and iterations million to one chances happen 100% of the time
@hhurtta2 жыл бұрын
@@-eurosplitsofficalclanchan6057 Terry Pratchett knew human behavior and reasoning really well. We tend to exaggerate a lot, we have trouble comprehending large numbers, and we are usually very bad at calculating probabilities. Hence we often say "one-in-a-million chance" when the real chance is much better than that. On the other hand, one-in-a-million events do occur much more often than we intuitively expect when you iterate enough, like brute-force guessing a 5-letter password (about 1 in 12 million per guess).
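A quick check of that password aside (Python; assuming a 5-letter, lowercase-only password chosen uniformly at random):

```python
import math

outcomes = 26 ** 5            # lowercase-only, 5 letters, chosen uniformly (an assumption)
print(outcomes)               # 11,881,376 -- the "about 1 in 12 million"
print(math.log2(outcomes))    # ~23.5 bits of information in such a password
```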
@Juurus2 жыл бұрын
I like how there's almost every source of caffeine on the same computer desk.
@adzmarsh2 жыл бұрын
I listened to it all. I hit the like button. I did not understand it. I loved it
@nathanbrader75912 жыл бұрын
3:41 "So 1 in 2 is an odds of 2, 1 in 10 is an odds of 10" That's not right: If the probability is 1 in x then the odds is (1/x)/(1-(1/x)). So, 1 in 2 is an odds of 1 and 1 in 10 is an odds of 1/9.
@patrolin2 жыл бұрын
yes, probability 1/10 = odds 1:9
@BergenVestHK2 жыл бұрын
Depends on the system, I guess. Where I am from, we would say that the odds are 10, when the probability is 1/10. I know you could also call it "one-to-nine" (1:9), but that's not in common use here. Odds of 10 would be correct here.
@nathanbrader75912 жыл бұрын
@@BergenVestHK Interesting. Where are you from?
@BergenVestHK2 жыл бұрын
@@nathanbrader7591 I'm from Norway. I just googled "odds systems", and found that there are supposedly three main types of odds: "fractional (British) odds, decimal (European) odds, and moneyline (American) odds". I must say, that seeing as Computerphile is UK based, I do agree with you. I am a little surprised that they didn't use the fractional system in this video. However, I see that Tim, the talker in this video, previously studied in Luxembourg and the Netherlands, so perhaps he imported the European decimal odds systems from there. :-)
@nathanbrader75912 жыл бұрын
@@BergenVestHK Thanks for this. That explains his usage which I take to be intentionally informal for an audience perhaps more familiar with gambling lingo. I'd expect (hope) that with a more formal discussion, the term "odds" would be reserved for the fractional form as it is used in statistics.
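For anyone following the thread, a small sketch of the two conventions side by side (Python; "fractional" here means odds in favour, and "decimal" is the European-style payout figure the video seems to be using):

```python
def odds_in_favour(p):
    # fractional odds in favour: p / (1 - p); probability 1/2 -> 1 (i.e. 1:1)
    return p / (1 - p)

def decimal_odds(p):
    # decimal (European-style) odds: 1 / p; probability 1/10 -> 10
    return 1 / p

for p in (1/2, 1/10):
    print(p, odds_in_favour(p), decimal_odds(p))
# 0.5 -> fractional 1.0 (evens),      decimal 2.0
# 0.1 -> fractional ~0.111 (1 to 9),  decimal 10.0
```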
@clearz36002 жыл бұрын
Alice and Bob are sitting at a bar when Alice pulls out a coin, flips it, and says "heads or tails?" Bob calls out heads while looking on in anticipation. Alice reveals the coin to indeed be heads and asks how surprised he is. "A bit," proclaims Bob.
@gdclemo2 жыл бұрын
You really need to cover arithmetic coding, as this makes the relationship between Shannon entropy and compression limits much more obvious. I'm guessing this will be in a followup video?
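A minimal sketch of the interval-narrowing idea behind arithmetic coding (Python; not a real codec, and the three-symbol source is made up; it just shows that the final interval width is the product of the symbol probabilities, so naming a point inside it takes about -log2(width) bits, matching the summed surprisal):

```python
import math

probs = {'a': 0.5, 'b': 0.25, 'c': 0.25}   # a made-up toy source
message = "aabac"

low, high = 0.0, 1.0
for sym in message:
    width = high - low
    start = 0.0
    for s, p in probs.items():
        if s == sym:
            # shrink to the sub-interval assigned to this symbol
            low, high = low + start * width, low + (start + p) * width
            break
        start += p

bits_needed = -math.log2(high - low)                       # bits to name a point in the interval
entropy_sum = sum(-math.log2(probs[s]) for s in message)   # sum of per-symbol surprisals
print(bits_needed, entropy_sum)                            # both ~7.0 for this message
```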
@DeanHorak2 жыл бұрын
Greenbar! Haven’t seen that kind of paper used in years.
@CarlJohnson-jj9ic2 жыл бұрын
Boolean algebra is awesome!!! Person(Flip(2), Coin(Heads,Tails)) = Event(Choice1, Choice2) == (H+T)^2 == (H+T)(H+T) == H^2 + 2HT + T^2 (notice coefficient orderings), where the constant coefficient is the frequency of the outcome and the exponent (order) is the number of times that identity is present in the outcome. This preserves many of the algebraic axioms, which are largely present in expanding operations.

If you try to separate out the objects and states from agents, using denomination of any one of the elements, you can start to combine relationships and quantities with standard algebra words and positional-notation polynomial equations (I like abstraction being used as the second quadrant, like exponents are in the first, to resolve differences of range in reduction operations from derivatives and such) to develop rich descriptions of the real world, and thus we may characterize geometrically the natural paths of systems and their components.

These become extraordinarily useful when you consider quantum states and number generators, which basically describe the probability of events in a world space, allowing one to rationally derive the required relationships elsewhere (events or agents involved) by starting with a probability based on seemingly disjoint phenomena, i.e. coincidence; and if we employ a sophisticated field ordering, we can look at velocities of gravity to discern what the future will bring. Boolean algebra is awesome! Right up there with the placeholder-value string system using classification of identities.
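The (H+T)^2 part can be checked directly (Python; a small sketch counting the outcomes of two fair coin flips, order ignored):

```python
from collections import Counter
from itertools import product

# Two fair coin flips: ignoring order, the outcome counts match (H+T)^2 = H^2 + 2HT + T^2
counts = Counter("".join(sorted(flips)) for flips in product("HT", repeat=2))
print(counts)   # Counter({'HT': 2, 'HH': 1, 'TT': 1})
```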
@PopescuAlexandruCristian6 ай бұрын
Not sure, but for arithmetic encoding we should get better results than the entropy because we have "fractional bits" there, right?
@Lokesh-ct8vt2 жыл бұрын
Question: is this entropy in any way related to the thermodynamic one?
@temperedwell62952 жыл бұрын
I am no expert, so please correct me if I am wrong. As I understand it, entropy was first introduced by Carnot, Clausius, and Kelvin as a macroscopic quantity whose differential, integrated against temperature, gives energy (dQ = T dS). Boltzmann was the first to relate the macroscopic quantities of thermodynamics, i.e., heat and entropy, to what is happening on the molecular level. He discovered that entropy is related to the number of microstates associated with a macrostate, and as such is a measure of the disorder of the system of molecules. Nyquist, Hartley, and Shannon extended Boltzmann's work by replacing statistics on microsystems of molecules with statistics on messages formed from a finite set of symbols.
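For reference, the standard formulas behind that connection (a sketch, not from the video; in the equal-probability case the two entropies differ only by a constant factor):

```latex
% Boltzmann entropy for W equally likely microstates:
S = k_B \ln W
% Shannon entropy of a distribution p_1, \dots, p_W:
H = -\sum_{i=1}^{W} p_i \log_2 p_i
% In the equiprobable case p_i = 1/W the two differ only by a constant factor:
H = \log_2 W, \qquad S = (k_B \ln 2)\, H
```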
@danielbrockerttravel11 ай бұрын
Related but not identical because the thermodynamic one still hasn't been worked out and because Shannon never defined meaning. I strongly suspect that solving those two will allow for a unification.
@tlrndk123 Жыл бұрын
the comments in this video are surprisingly informative
@liminal272 ай бұрын
This is unbelievably cool
@sean_vikoren2 жыл бұрын
I find my best intuition of Shannon Entropy flows from Chaos Math. Plus I get to stare at clouds while pretending to work.
@oussamalaouadi85212 жыл бұрын
I guess information theory is - historically - a subset of communications theory which is a subset of EE.
@sean_vikoren2 жыл бұрын
Nice try. Alert! Electrical Engineer in building, get him!
@eastasiansarewhitesbutduet98252 жыл бұрын
Not really. Well, EE is a subset of Physics.
@oussamalaouadi85212 жыл бұрын
@@eastasiansarewhitesbutduet9825 Yes, EE is a subset of Physics. Information theory was coined solving EE problems (transmission of information, communication channel characterisation and capacity, the minimum compression limit, theoretical models for transmission, etc.), and Shannon himself was an EE. Despite the extended use of information theory in many fields such as computer science, statistics, and physics, it's historically an EE thing.
@nHans2 жыл бұрын
@@oussamalaouadi8521 Dude! Engineering is nobody's subset! It's an independent and a highly rewarding profession-and it predates science by several millennia. Engineering *_uses_* science. It also uses modern management, finance, economics, market research, law, insurance, math, computing and other fields. That doesn't make it a "subset" of any of those fields.
@laurenpinschannels2 жыл бұрын
if you don't specify what base of log you mean, it's base NaN
@user-fd9rx8dh9b Жыл бұрын
Hey, I wrote an article using information theory, I was hoping I could share it and receive some feedback?
@Mark-dc1su2 жыл бұрын
I'm reading Ashby at the moment and we recently covered entropy. He was very heavy-handed about making sure we understood that the measure of entropy is only applicable when the states are Markovian, i.e., that the state the system is currently in is only influenced by the state immediately preceding it. Does this still hold?
@connormcmk2 жыл бұрын
You can relax the Markovian assumption if you know more about your environment. You can still compute the entropy of a POMDP; it just requires guesses at the underlying generative models + your confidence in those models.
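A small sketch of what entropy looks like when states do depend on the previous state (Python; the two-state Markov chain and its transition probabilities are made up for illustration; the entropy rate weights each state's uncertainty by how often the chain visits it):

```python
import math

def h2(p):
    # binary entropy in bits
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Hypothetical two-state chain: P(stay in A) = 0.9, P(stay in B) = 0.6
a_stay, b_stay = 0.9, 0.6
to_a, to_b = 1 - b_stay, 1 - a_stay        # P(B->A) and P(A->B)
pi_a = to_a / (to_a + to_b)                # stationary distribution of the chain
pi_b = 1 - pi_a

entropy_rate = pi_a * h2(a_stay) + pi_b * h2(b_stay)
print(pi_a, pi_b)       # 0.8, 0.2
print(entropy_rate)     # ~0.57 bits per step, vs 1 bit per step for i.i.d. fair coin flips
```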
@TheNitramlxl2 жыл бұрын
A coffee machine on the desk 🤯this is end level stuff
@YouPlague2 жыл бұрын
I already knew everything he talked about, but boy this was such a nice concise way of presenting it to laymen!
@MrVontar2 жыл бұрын
Stanford has a page about the entropy of the English language; it's interesting as well.
@stephencarlsbad4 ай бұрын
General relativity perspective: You're only surprised by a 1 in a million outcome relative to your usual experience of a 1 in 3 outcome.
@pedro_82402 жыл бұрын
6:58 in absolute terms, no, not really, but when you start taking into consideration the chances of just randomly getting your hands on a winning ticket, without actively looking for a ticket, any ticket, that's a whole other story.
@David-id6jw2 жыл бұрын
How much information/entropy is needed to encode the position of an electron in quantum theory (either before or after measurement)? What about the rest of its properties? More generally, how much information is necessary to describe any given object? And what impact does that information have on the rest of the universe?
@ANSIcode2 жыл бұрын
Surely, you don't expect to get an answer to that here in a YouTube comment? Maybe start with the wiki article on "Quantum Information"...
@johnhammer86682 жыл бұрын
how can a bit be floating point
@DrewNorthup2 жыл бұрын
The DFB penny is a great touch
@elixpo2 жыл бұрын
This explanation was really awesome
@sdutta89 ай бұрын
We claim Shannon as a communication theorist, rather than a computer theorist, but concede with Shakespeare: what’s in a name.
@068LAICEPS2 жыл бұрын
Information Theory and Claude Shannon 😍
@TheFuktastic2 жыл бұрын
Beautiful explanation!
@assepa2 жыл бұрын
Nice workplace setup, having a coffee machine next to your screen 😀
@TheArrogantMonk2 жыл бұрын
Extremely clever bit on such a fascinating subject!
@filipo41142 жыл бұрын
1:54 - "A bit more." - "That's right - one bit more" ;D
@megablademe49308 күн бұрын
One way to think about "surprise" and information is that if I tell you I bought a lottery ticket, you'll just assume I lost, so very little information is transmitted on average when I tell you the result, since you already assumed I lost. However, if I tell you I flipped a coin, you would have no idea which way it landed, so by telling you the result I transmit a lot of information.
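A sketch of that "on average" point (Python; the 1-in-10-million lottery probability is made up for illustration):

```python
import math

def entropy_bits(p):
    # expected information (entropy) of a yes/no outcome with probability p
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(entropy_bits(0.5))    # fair coin: 1.0 bit on average
print(entropy_bits(1e-7))   # 1-in-10-million lottery: ~2.5e-06 bits on average -- almost nothing
```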
@Veptis2 жыл бұрын
Variance as the deviation from the expected value is the interesting concept of statistics; entropy as the amount of information is the interesting concept of information theory. But I feel like they kind of do the same thing.
@atrus38232 ай бұрын
The other nice thing about log is that when the probability is 1, the information is zero.
@jimjackson4256 Жыл бұрын
Actually I wouldn’t be surprised at any combination of heads and tails.If it was purely random why would any combination be surprising?
@juliennapoli2 жыл бұрын
Can we imagine a binary lottery where you bet on a 16-bit sequence of 0s and 1s?
@abiabi67332 жыл бұрын
Wait, so this is based on probability?
@danielg92752 жыл бұрын
It is indeed
@Wyvernnnn2 жыл бұрын
The formula log(1/p(n)) was explained as if it were arbitrary; it's not.
@OffTheWeb2 жыл бұрын
experiment with it yourself.
@arinc92 жыл бұрын
I didn't understand much because of my bad math, but this was fun to watch.
@dixztube2 жыл бұрын
I got the tails-tails one on a guess, and now I understand the allure of gambling and casinos; it's fun psychologically.
@desmondbrown55082 жыл бұрын
What is the known compression minimum size for things like RAW text or RAW image files? I'm very curious. I wish they'd have given some examples of known quantities of common file types.
@damicapra942 жыл бұрын
It's not really the file type but rather the file contents that determine its ideal minimum size. At the end of the day, files are simply a collection of bits, whether they represent text, images, video or something else.
@Madsy92 жыл бұрын
@@damicapra94 The content *and* the compressor and decompressor. Different file formats use different compression algorithms or different combinations of them. And lossy compression algorithms often care a great deal about the structure of the data (image, audio, ..).
@Andrewsarcus2 жыл бұрын
Explain TLA+
@CalvinHikes2 жыл бұрын
I'm just good enough at math to not play the lottery.
@sundareshvenugopal6575Ай бұрын
The seeming or apparent truth of something on its surface and the actual or real truth of it in its depths are two vastly different things. There is always more to the truth of a thing than meets the eye at first, even the truth concerning numbers and data compression. Only on the surface does it seem like massive lossless data compression is an impossibility; in truth, in its depths, massive lossless data compression is a hard reality. There is a superficial or shallow knowledge and understanding of the truth, and a deeper and more profound knowledge and understanding of the truth.
@blayral2 жыл бұрын
i said head for the first throw, tail-tail for the second. i'm 3 bits surprised...
@retropaganda84422 жыл бұрын
4:02 Surprise, the paper has changed! ;p
@pedropeixoto55322 жыл бұрын
It is really maddening when someone calls Shannon a Computer Scientist. It would be a terrible anachronism if Electrical Engineering didn't exist! He was really (a mathematician and) an Electrical Engineer and not only The father of Information Theory, but The father of Computer Engineering (as a subarea of Electronics Engineering), i.e., the first to systematize the analysis of logic circuits for implementing computers in his famous master's thesis, "A Symbolic Analysis of Relay and Switching Circuits", before gifting us with Information Theory. CS diverges from EE in the sense that EE cares about the computing "primitives". Quoting Brian Harvey: "Computer Science is not about computers and it is not a science [...] a more appropriate term would be 'Software Engineering'". Finally, I think CS is beautiful and has a father who is below no one: Turing.
@levmarcus81982 жыл бұрын
I want an espresso machine right on my desk.
@GordonjSmith12 жыл бұрын
I am not sure that the understanding of 'information theory' has been moved forward by this vlog, which is unusual for Computerphile. In 'digital terms' it might have been better to explain Claude Shannon's paper first, but from an 'Information professional's perspective' this was not an easy watch.
@sedrickalcantara95882 жыл бұрын
Shoutout to Thanos and Nebula in the thumbnail
@danielbrockerttravel11 ай бұрын
I cannot believe that philosophers, who always annoyingly go on about what stuff 'really means', never thought to try to update Shannon's theory to include meaning. Shannon very purposefully excludes meaning from his analysis of information, which means it provides an incomplete picture. In order for information to be surprising, it has to say something about a system that a recipient doesn't know. This provides a clue as to what meaning is: a configuration of a system. If a system's configuration is already known, then no information about it will be surprising to the recipient. If the system configuration changes, then the amount of surprise the information contains will increase in proportion. In order for information to be informative there must be meanings to communicate, which means that meaning is ontologically prior to information. All of reality is composed of networks, and these networks exhibit patterns. In networks with enough variety of patterns to be codable, you create the preconditions for information.
@sanderbos42432 жыл бұрын
I loved this
@jamsenbanch Жыл бұрын
It makes me uncomfortable when people flip coins and don’t catch them
@hypothebai46342 жыл бұрын
So, Claude Shannon was a figure in communications electronics - not computer science. And, in fact, the main use of the Shannon Limit was in RF modulation (which is not part of computer science).
@liambarber90502 жыл бұрын
My surprisal was very high @4:58
@KX362 жыл бұрын
after all that you could have at least given us some lottery numbers at the end
@joey1994122 жыл бұрын
Amazing video, title should have been something else because I was expecting something mundane, not to have my mind blown and look at computation differently forever.
@sundareshvenugopal65757 ай бұрын
Claude Shannon's theory is not in the least bit true. It is at best a very supercilious view of compression coding. I have come up with scores of methods where 2^n bits of information can be losslessly coded in O(n) bits (order of n bits). So c*n bits of data, where c is a very, very small constant, can contain at least 2^n bits of information coded losslessly. Not only is massive lossless data compression a reality, but numbers many terabytes in size can be represented and manipulated within a few bytes, with all mathematical operations performed within those few bytes.
@user-js5tk2xz6v2 жыл бұрын
So there is one arbitrary equation, and I don't understand where it came from or what its purpose is. And at one point he said that 0.0000000X is the minimal amount of bits, but then he says he needs 1 bit for information about winning and 0 for losing, so it seems the minimal amount of bits to store information is always 1, so how can it be smaller than 1?
@shigotoh2 жыл бұрын
A value of 0.01 means that you can store on average 100 instances of such information in 1 bit. It is true that when storing only one piece of information it cannot use less than one bit.
@hhill54892 жыл бұрын
You typically take the ceiling of that function's output when thinking practically about it, or for computers. Essentially, the information contained was that minuscule number, but realistically you still need 1 bit to represent it. For an event that is guaranteed, i.e. probability 100% / 1.0, there is 0 information gained by observing it... therefore it takes zero bits to represent that sort of event.
@codegeek982 жыл бұрын
You only have fractional bits in _practice_ with amortization (or reliably if the draws are batched).
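A sketch of what those fractional bits buy you once draws are batched (Python; the 99%/1% outcome is an assumed example):

```python
import math

p_lose = 0.99   # an assumed, very lopsided outcome

# Entropy per draw is far below one bit:
h = -(p_lose * math.log2(p_lose) + (1 - p_lose) * math.log2(1 - p_lose))
print(h)          # ~0.081 bits per draw

# A batch of 100 independent draws carries ~8.1 bits in total on average,
# which a good code (e.g. an arithmetic coder) can approach -- even though
# any single draw stored on its own still costs a whole bit.
print(100 * h)    # ~8.1 bits
```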
@rmsgrey2 жыл бұрын
"We will talk about the lottery in one minute". Three minutes and 50 seconds later...
@GordonjSmith12 жыл бұрын
Let me add a 'thought experiment'. Some people spend money every week on the lottery; their chance of winning is very small. So what is the difference between a 'smart investment' strategy and an 'information-based' strategy? Answer: rational investors will consider their chances of winning and conclude that for every extra dollar they invest (say from one dollar to two dollars) their chance increases proportionally. An 'information-engaged' person will see that the chance of winning is entirely remote and that increasing the investment hardly improves the chances; they know that in order to 'win' they need to be 'in', but even the smallest amount spent is nearly as likely to win as the larger bets. No!! scream the 'numbers' people, but Yes!!! screams anyone who has considered the opposite case. The chance of winning is so small that paying for more lotto numbers really does not do much to improve the payback from entering; better to be 'just in' than 'in for a lot'...
@Maynard05042 жыл бұрын
I have the same coffee machine
@h0w13472 жыл бұрын
thanks
@hypothebai46342 жыл бұрын
The logs that Shannon originally used were natural logs (base e) for obvious reasons.
@filda20052 жыл бұрын
8:34 No one, really no one, has been rolling on the floor? LOL. And on top of that, the stone-cold face while delivering it. It's like the Visa card ad: you can't buy that with money.
@Thomas-J-Foolery2 жыл бұрын
“Expected amount of surprisal” seems like quite an oxymoron.
@eliavrad28452 жыл бұрын
The "reasonable intuition" about this formula is that, if there are two independent things, such as a coin flip and a lottery ticket, the information about them should be a sort of sum:
H(coinflip and lottery result) = H(coinflip result) + H(lottery result)
but the probabilities should multiply:
p(head and win lottery) = p(head) * p(win)
and the best way to get from multiplication to addition is a log:
log(p(head) * p(win)) = log(p(head)) + log(p(win))
@AntiWanted2 жыл бұрын
Nice
@TheCellarGuardian2 жыл бұрын
Great video! But terrible title... Of course it's important!
@anorak93832 жыл бұрын
Eighth
@atrus38232 жыл бұрын
This explains why they don't announce the losers!
@kofiamoako30982 жыл бұрын
So no jokes in the comments??
@CandyGramForMongo_2 жыл бұрын
Lies! I zip my zip files to save even more space!
@ThomasSirianniEsq Жыл бұрын
Wow. Reminds me how stupid I am
@stephencarlsbad4 ай бұрын
Sorry, but gauging math based on an arbitrary emotion such as surprise seems ridiculous. The level of surprise that a coin flipper senses is based on elements of their personality construct and some personalities are not the least bit surprised by anything. But surprise can also be a function of intellectual capacity. Although this isn't applicable to the human race since we don't have a high enough intellectual capacity for it to be a factor, imagine an alien race with an average IQ of 10 million... Their ability to see higher degrees of complexity and order where we would simply see chaos and randomness would be a factor in whether or not they experienced surprise simply because they could potentially know the outcome before the coin was tossed. So I don't like the concept of surprise being used to gauge math results.
@mcjgenius2 жыл бұрын
wow ty🦩
@karavanidet Жыл бұрын
Very difficult :)
@BAMBAMBAMBAMBAMval Жыл бұрын
A bit 😂
@artic02032 жыл бұрын
i solved AI join me now before we run out of time
@atsourno2 жыл бұрын
First 🤓
@Ellipsis1152 жыл бұрын
@@takotime NEEEEEEEEEEEEEEEEEERDS
@atsourno2 жыл бұрын
@Rubi ❤️
@zxuiji2 жыл бұрын
Hate to be pedantic, but a coin flip has more than two possible outcomes; there's the edge, after all. It's the reason why getting either side is not a flat 50%. Likewise with dice: they have edges and corners, which can also be outcomes; it's just made rather unlikely by air circulation and the lack of resistance versus the full drag of the landing zone. By full drag I mean the earth dragging it along while rotating, and by lack of resistance I mean that not enough air molecules slam into it through their own drag, thereby allowing it to just roll over/under the few that do.
@galliman1232 жыл бұрын
Except you just rule those out and skew the probability 🙃
@roninpawn2 жыл бұрын
There is no indication, whatsoever, that you "hate to be pedantic" about this. ;)
@zxuiji2 жыл бұрын
@@roninpawn ever heard of OCD, it's similar, I couldn't ignore the compulsion to correct the info
@zxuiji2 жыл бұрын
@@galliman123 except that gives erroneous results, the bane of experiments and utilization
@JansthcirlU2 жыл бұрын
Doing statistics is all about confidence intervals; the reason you're allowed to ignore those edge cases is that they only negligibly affect the odds of the events you are interested in.
@elijahromer65442 жыл бұрын
IN FIRST