"Learn stocks with this one weird trick. Quants HATE him."
@henryginn7490 · 9 hours ago
Nice clean little video here, it's got everything I could have asked for!
@malcolm7436 · 13 hours ago
Trippy.
@FF-ms6wq · 19 hours ago
The equation you write at 5:50 is in fact not correct as stated. The expectation you write must be taken *under the risk-neutral probability measure* (!) rather than the actual probability measure. This is a crucial point. So, to make it correct, you'd have to attach a subscript to the expectation: \E_{Q}, where Q is this risk-neutral measure (which is unique in the Black-Scholes model).
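For concreteness (hedged, in that the video's exact notation at 5:50 is being inferred here): the standard risk-neutral pricing formula for a European call with strike K and maturity T in the Black-Scholes model reads

```latex
C_0 \;=\; e^{-rT}\,\mathbb{E}_{Q}\!\left[(S_T - K)^{+}\right],
```

where Q is the unique measure under which the discounted price e^{-rt} S_t is a martingale.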
@FF-ms6wq · 19 hours ago
Very beautiful proof. I liked the combinatorial nature underlying the argument. Excellent exposition on this important and fascinating, most fundamental result in all of probability theory & statistics.
@FF-ms6wq · 20 hours ago
Amazing job! You really explain this well. Keep it up!
@FF-ms6wq · 20 hours ago
I'm a researcher in math. Great effort, good and clear explanation. One thing that perhaps could be added (it might clarify it more for some students, but perhaps could also confuse some others): given a rv (that is, a measurable function on \Omega), say X, we get a distribution *of X with respect to* some given probability measure P; if we were to consider another, different probability measure, say Q, then the rv X is (obviously!) still exactly the same as it was before, but its *distribution* (considered wrt Q now) is different from before (when it was under P). So, the concept of rv is in some sense more fundamental (or "primitive"/rudimentary) - indeed, it is defined simply by a measurability condition and has a priori nothing to do with any probability measure. Keep up the good work!
@victorrobert4600 · 1 day ago
Thank you !
@MihaiNicaMath · 1 day ago
Poll: What looks cooler in your opinion? Zooming in (beginning of video), or Zooming out (middle of video)? Like the comments below to vote.
@MihaiNicaMath · 1 day ago
Like this for: Zooming in (beginning of video) is cooler
@MihaiNicaMath · 1 day ago
Like this for: Zooming out (middle of video) is cooler
@Kram1032 · 12 hours ago
Zooming in looks like "going forward", zooming out like "going backwards", perceptually speaking. As a result, zooming in feels more mesmerizing whereas zooming out feels uncanny.
@pawebielinski4903 · 1 day ago
Lovely!
@ZB-ih7ev · 2 days ago
I loved it, thanks a lot
@DrSimulate · 2 days ago
Extremely insightful. Thanks a lot for sharing!
@ProductionsExoTic · 2 days ago
Question: If Delta draws from [0,1], does X - Delta not count {1,2,3,4,5} twice? Thus it's not exactly the same as nU? And if you simply use Delta draws from (0,1] or [0,1), then you exclude 0 or 6. Also, if you round up nU and you happened to draw 0, then you round to 0, which is not in the range of X. Does this matter for the proof at all?
@MihaiNicaMath · 2 days ago
@@ProductionsExoTic good question! It doesn't matter, because being exactly equal to 0 or exactly equal to 1 has probability zero of happening.
@alexanderdaum8053 · 2 days ago
I think we want U to be (0, 1] and Δ to be [0, 1). For X = ceil(nU) to hold, U should not include 0, as n·0 = 0 and X does not include 0. Then, for the same reason, our rounding error can never actually include 1, as when nU is an integer, ceil(nU) = nU.
@MihaiNicaMath · 2 days ago
@@alexanderdaum8053 It's completely the same: a (0,1] random variable, a (0,1) random variable, and a [0,1] random variable are indistinguishable from each other: they have a 0% chance of being different! This is the notion of "almost sure" equality which is used in probability.
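A quick empirical sketch of this point (my own code, not from the video): ceil(n·U) behaves exactly like a fair n-sided die, endpoint worries and all.

```python
import math, random
from collections import Counter

n, trials = 6, 600_000
counts = Counter(math.ceil(n * random.random()) for _ in range(trials))
print(sorted(counts.items()))  # each face 1..6 shows up ~100_000 times

# random.random() takes values in [0, 1); drawing exactly 0.0 (the only way
# to get ceil(n*U) = 0) has probability ~2^-53 per draw, so the "almost sure"
# equality described above is exactly what you observe in practice.
```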
@ItayTheItay · 5 days ago
Not sure if I "discovered" this or not, but I think I found the formula for the average maximum of an N-sided die with M advantage (M+1 rolls):

Sum(from X=1 to N): (X*Sum(from A=1 to M+1): (((X-1)/N)^(A-1) * (1/N) * (X/N)^(M+1-A))

Sorry for the HORRIBLE formatting of the comment; it's hard to write in plaintext because the formula I made up uses sigma notation INSIDE a sigma notation. So please, if someone who understands math can tell me if it's right / if there's a mistake in it, I'd be glad :)

P.S. I'm not sure if it can even qualify as a formula since it uses sigma notation inside of a sigma notation; either way it's O(n^2) complexity in computer terms, so it's not too bad for a computer to calculate. Also, I just think it's neat.

EDIT: found an easier way to calculate it, here's the formula:
N = amount of sides
M = amount of rolls
Sum(from x=1 to n): ((x/n)*(x^m * (x-1)^m))
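A small Monte Carlo sketch (my own, and only an estimator rather than a proof) for checking formulas like the one above numerically:

```python
import random

def avg_max_with_advantage(n, m, trials=200_000):
    """Estimate the mean of the max over (m+1) rolls of a fair n-sided die,
    i.e. rolling with m 'advantage' rerolls."""
    return sum(max(random.randint(1, n) for _ in range(m + 1))
               for _ in range(trials)) / trials

print(avg_max_with_advantage(6, 1))  # max of 2d6; the exact value is 161/36 = 4.4722...
```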
@Kaepsele337 · 5 days ago
Isn't P(N_maxers) dependent on the value of the maximum? E.g. if the max of 3 dice is 1, then the probability that all three dice have the maximum value of 1 is 100%. It's probably a higher-order term, but I think this isn't considered in your exact formula.
@MihaiNicaMath · 5 days ago
Do you mean P(N_Maxers=2)? (P(N_Maxers) on its own is a nonsensical notation.) It's true that if you want to calculate P(N_Maxers=2) and you start conditioning on the value of the maximum, it will get complicated; e.g. P(N_Maxers=2 and Max=x) depends on x. But the way I do it in the video is correct: you just think about assigning whether or not the dice are the same value without looking at their values (like in the birthday paradox)... in other words, you think of P(N_Maxers=2) on its own without looking at the value of the maximum.
@kevinwaugh5192 · 5 days ago
A formula that is exact is 1 + n - sum_{k=1}^{n} (k/n)^m. I found this by defining a matrix A such that pi' = A*pi takes the probability distribution of the max value of rolling m n-sided dice to the probability distribution of the max value of rolling m+1 n-sided dice. The eigendecomposition of A is easy to compute by hand, as it has a nice form. Using the eigendecomposition you can compute pi_m = Q*(diag(v)^(m-1))*(Q^-1)*(1/n, ..., 1/n) efficiently by exponentiating the eigenvalues. If you just want the expected value of the max, you can simplify further to the formula provided.
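A minimal sketch (my code, not the commenter's) checking this closed form against brute-force enumeration:

```python
from itertools import product

def brute_force(n, m):
    # average the max over all n^m equally likely outcomes
    return sum(max(r) for r in product(range(1, n + 1), repeat=m)) / n**m

def closed_form(n, m):
    return 1 + n - sum((k / n)**m for k in range(1, n + 1))

for n, m in [(6, 2), (6, 3), (20, 2)]:
    print(n, m, brute_force(n, m), closed_form(n, m))  # the pairs agree exactly
```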
@MihaiNicaMath · 5 days ago
Yup, this formula is mentioned at 7:47. The eigendecomposition idea is interesting... I think that would give the m goes to infinity, n fixed asymptotics (rather than the n goes to infinity, m fixed ones that I did in the video).
@dirksj · 5 days ago
I feel like this doesn't pass the simplest sniff test. As m approaches infinity, the expected max should approach n, as it becomes extremely likely you roll the highest value. This equation instead approaches n + 1/2 - infinity, which is very wrong.
@MihaiNicaMath · 5 days ago
The approximation holds as n goes to infinity with m fixed, as explained in the video :) I.e. the error in the approximation goes to zero when n goes to infinity and m is fixed. If you want m to go to infinity, start with the exact equation in terms of N_maxers and approximate it from there; it will be different from what I did here!
@1972C182 · 6 days ago
Hello Dr. Nica, if, per chance, you are looking for an interesting problem to solve and post to YT, may I suggest a puzzle I have long wondered about but have not been able to make progress on: the Strong Birthday Problem. Rather than asking how many people you need in a room to have a 50% chance that there is _some_ match (yeah, making all the uniform assumptions and 365 days), I am interested in how many people you need in a room to be 50% sure that _everyone_ has at least one match. Such a simple change, but other than simulation, I can't make progress on it. Thanks again.
@MihaiNicaMath · 5 days ago
This is interesting! I think I know how to do it recursively, but it's indeed a kind of tricky one. I added it to my potential videos list :)
@sheppa28 · 2 days ago
3064
@1972C182 · 2 days ago
@@sheppa28 :-). That is funny. OP here. You are absolutely correct that I phrased my question as my interest being in the number. And you have provided the number. Perfect Asperger's response. Love it. Of course, my real interest is in the probability distribution of the number of people sharing a birthday, as a function of the number of people and of the number of days in the year (as I might be interested in, say, people born in the same month). But, also of course, I do not actually want that distribution; I want to understand how it can be figured out. An asymptotic approximation would also be of interest. You have totally teased me now: can you provide your reference? As I am not a mathematician, I hope there is some derivation that uses elementary tools. (Ph.D. in statistics, which is far far from mathematics.) Thanks again for the funny answer.
@sheppa28 · 1 day ago
@@1972C182 Source: DasGupta, Anirban. "The matching, birthday and the strong birthday problem: a contemporary review." Journal of Statistical Planning and Inference 130.1-2 (2005): 377-389. (Available on Google Scholar.) Section 2.6 with Eq. 4 for the probability distribution and Theorem 3 for the asymptotic theory. Following the example after Theorem 3, the asymptotic formula can be set to 1/2 and solved for n using the -1 branch of the Lambert W function, yielding n_est = -m LambertW[-1, -(Log[2]/m)], which for m=365 yields n=3063.78.
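That asymptotic estimate is easy to reproduce; a sketch assuming SciPy's lambertw (using its k=-1 branch):

```python
import numpy as np
from scipy.special import lambertw

m = 365  # days in the year
# solve the asymptotic 50% "everyone shares a birthday" condition for n
n_est = -m * lambertw(-np.log(2) / m, k=-1)
print(n_est.real)  # ~3063.78, i.e. about 3064 people
```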
@demonwolfies · 6 days ago
Hi! I randomly came across your video and spent the whole day exploring this topic; I've found it fascinating! I think I'm starting to grasp the coin flip problems, but I'm still struggling with some aspects and would love your insight. Here's what I've been working on:

For coin flips, I think I understand the idea of focusing on the last few flips for overlaps. For example:
HTTTH = 2⁵ + 2⁴ + 2² + 2¹ = 54
I feel pretty confident about this one, though I'd like to know if my interpretation is correct. However, when I try to apply the same framework to other problems like dice rolls or words, I keep running into issues. Here are a few examples:

1. HTHTHHTTTH = 2¹⁰ + 2⁴ + 2² + 2¹ = 1046
I think this is right, but I'm unsure how to check if I handled the overlaps correctly (e.g., for the final TT).

2. Dice sequence: 62453621 = 6⁸ = 1,679,616
Here, I treated it the same as the coin flips but with a base of 6 instead of 2, since there are no repeating numbers. Is this the right approach?

3. Word: circumference = 26¹² = ~9.54e+16
For this, I'm unsure if repeating letters (e.g., c, r, e) should change the calculation or if I'm overthinking it.

I'm having trouble understanding how repeating elements (like letters or numbers) affect the total and whether my method for handling overlaps is correct. If anything I wrote here is wrong or unclear, I'd really appreciate your feedback; I've never worked on anything like this before but find it really interesting. Thank you so much for the video! (Reworded with AI for readability reasons.) (If anything I wrote is wrong, unclear, or confusing, please let me know!)
@demonwolfies · 4 days ago
After re-reading my original comment, I noticed a few mistakes in how I worded things, so I updated it for clarity! :)
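Since the comment above explicitly asks for a sanity check, here is a small simulation sketch (mine, and only an estimator) of the expected number of fair-coin flips until a given pattern first appears; comparing its output with the overlap sums above is a direct way to test them.

```python
import random

def avg_wait(pattern, trials=50_000):
    """Monte Carlo estimate of the expected number of fair-coin flips
    until `pattern` (a string over 'H'/'T') first appears."""
    total = 0
    for _ in range(trials):
        window, flips = "", 0
        while window != pattern:
            # keep only the last len(pattern) flips as a sliding window
            window = (window + random.choice("HT"))[-len(pattern):]
            flips += 1
        total += flips
    return total / trials

print(avg_wait("HTTTH"))  # compare against the power-of-2 overlap sum above
```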
@knowhereman7725 · 6 days ago
I think the exact solution to the average maximum should be exactly n+1 minus the average minimum, so n+1 - (1^m + 2^m + 3^m + ... + (n-1)^m + n^m)/(n^m), but I don't have an elegant way to prove why yet, so I'm going full Fermat, just laying out the claim.
@knowhereman7725 · 6 days ago
maybe this result is obvious, I'm not the best with probabilities
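For what it's worth, one short way to see the claim above (a sketch, not the proof from the video linked below): the map x ↦ n+1-x sends a fair n-sided die roll to a fair n-sided die roll and swaps maxima with minima.

```latex
\max_i X_i = n + 1 - \min_i\,(n + 1 - X_i),
\qquad (n + 1 - X_i)_{i=1}^m \ \text{again i.i.d. uniform on } \{1,\dots,n\},
\quad\Longrightarrow\quad
\mathbb{E}[\max_i X_i] = n + 1 - \mathbb{E}[\min_i X_i]
= n + 1 - \frac{1}{n^m}\sum_{k=1}^{n} k^m .
```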
@MihaiNicaMath · 6 days ago
Yes this is totally true! And in fact there is a detailed proof of it in my max-of-dice video kzbin.info/www/bejne/ZpWYmHawraljqcksi=LVs764aLriOM1x-7
@knowhereman7725 · 6 days ago
@@MihaiNicaMath oh crap I'll admit I probably accidentally skipped that part 😭 that's cool though! I was trying to find an equation for the exact result myself before watching this and I didn't realize you had already covered it in the previous video
@adw1z · 6 days ago
This is my attempt before watching the video:

X1, X2, …, Xm are IID from X ~ Unif{1,…,n}. For constant a taking values in {1,…,n}:

==> P(min(Xi) >= a) = P(X1 >= a, …, Xm >= a)
= Π[i = 1 to m] P(X >= a) by independence
= [ P(X = a) + P(X = a+1) + … + P(X = n) ]^m
= [ 1/n + 1/n + … + 1/n ]^m
= ((n - a + 1)/n)^m

==> P(min(Xi) <= a) = 1 - P(min(Xi) >= a+1) = 1 - ((n - a)/n)^m = 1 - (1 - a/n)^m (cumulative distribution F(a) of min Xi)

==> discrete pmf given by: P(min Xi = a) = F(a) - F(a-1) = 1 - (1 - a/n)^m - 1 + (1 - (a-1)/n)^m = (1 - (a-1)/n)^m - (1 - a/n)^m

==> E[min Xi] = Σ[a = 1 to n] a(1 - (a-1)/n)^m - a(1 - a/n)^m

It's horrible, but it actually turns out to be the same as your solution (probably just using binomial), but far less elegant.
@MihaiNicaMath · 6 days ago
Yes this is totally correct! The Darth Vader rule basically "un-does" all the manipulations you did, which is why the solution in the video is so slick. You can sort of skip the nasty steps.
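A tiny numeric sketch (mine) of the tail-sum "Darth Vader" rule the reply refers to: for an integer-valued X >= 0, E[X] = Σ_{a>=1} P(X >= a), which collapses the pmf computation above into one clean sum.

```python
def e_min_tail_sum(n, m):
    # E[min] = sum_{a=1}^{n} P(min >= a) = sum_{a=1}^{n} ((n-a+1)/n)^m
    return sum(((n - a + 1) / n)**m for a in range(1, n + 1))

def e_min_pmf(n, m):
    # same expectation, via the pmf derived in the comment above
    return sum(a * ((1 - (a - 1) / n)**m - (1 - a / n)**m)
               for a in range(1, n + 1))

print(e_min_tail_sum(6, 2), e_min_pmf(6, 2))  # both give 91/36 = 2.5277...
```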
@jjose3829 · 6 days ago
Hello Dr., could you please make a video explaining the intuition behind the concept of expected values? There are so many ways of explaining it, but each one doesn't always fit perfectly when it comes to different contexts.
@MihaiNicaMath · 6 days ago
Great question! I think the simplest explanation of what it really is is in my "what is a random variable?" video: kzbin.info/www/bejne/gYKrl4KQfbONac0
@Lost_Evanes · 7 days ago
Cool video, but the thing that catches the eye about the 2-term vs 3-term approximations is: if we fix N and let M get arbitrarily large, with only 2 terms E[max] approaches N, which sounds correct, since no matter the size N, if we throw the dice many times (M >> N) then surely we will get the biggest possible value sooner or later; but with the 3rd term E[max] goes to negative infinity. Maybe the positive O(N^2) term could fix this? Idk, but the concept of a better approximation which only works on the N ~ M interval seems strange... surely the simple (M * N)/(M+1) + 0.5 formula can't be the best for every pair (N, M)?
@MihaiNicaMath · 7 days ago
The expansion here holds when n -> infinity and m is fixed (this is baked into the "O" notation). If you want to make m also very large, then you can start at the exact formula using the N_maxers random variable and approximate from there. For example, if m is actually even larger than n (m >> n), then a good approximation is m/(m+1)*n (with no 1/2). This is because if m is very large, you should expect N_maxers to be very large, so 1/(1+N_maxers) is approximately 0. If you want m/n = constant, then you can approximate N_maxers by a Poisson random variable of rate m/n to get a pretty good approximation!
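A sketch (my code, reusing the exact closed form from elsewhere in this thread) that makes the regime dependence concrete: the two-term approximation mn/(m+1) + 1/2 improves as n grows with m fixed, and breaks down when m >> n.

```python
def exact_e_max(n, m):
    # exact expected max of m rolls of a fair n-sided die
    return 1 + n - sum((k / n)**m for k in range(1, n + 1))

def two_term(n, m):
    return m * n / (m + 1) + 0.5

for n, m in [(10, 3), (100, 3), (1000, 3)]:  # n -> infinity, m fixed: error -> 0
    print(n, m, exact_e_max(n, m) - two_term(n, m))
for n, m in [(6, 10), (6, 100)]:             # m >> n: the approximation falls apart
    print(n, m, exact_e_max(n, m) - two_term(n, m))
```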
@MathNerdGamer · 7 days ago
I wrote a comment on Matt Parker's video with my own attempt at proving this 2 years ago after watching it. I'll copy it here, since I think it works but want to know where I've erred if it doesn't.

For n,k >= 1 and m <= n, let D(n, k; m) := # ways that the max value of k rolls of n-sided dice is m. (m > n obviously gives 0, so that's not very interesting.) Equivalently, D(n, k; m) = #{ s in {1, . . ., n}^k | max(s) = m }. An interesting thing happens here, though: since we're taking m <= n, we can actually restrict the domain to {1, . . ., m}^k without losing any elements. This is because any set of k rolls where some roll > m occurs won't be counted. Therefore, D(n, k; m) = #{ s in {1, . . ., m}^k | max(s) = m }.

Now, let's enumerate all the possibilities. By definition, we have to roll at least one m, and nothing larger than m. Suppose m shows up as the first roll. Then the other (k-1) rolls can be anything between 1 and m, so this accounts for m^(k-1) elements. If m doesn't show up as the first roll, but is the second roll, that means we had (m-1) possible values for the first roll, 1 value for the second (this is where m is first rolled), and then m^(k-2) possibilities for the last k-2 rolls. This accounts for m^(k-2) * (m-1) elements. This process continues until the only case is where the final roll is m, leaving (m-1)^(k-1) possibilities for the first (k-1) rolls. This gives us the sum: Sum( m^(k-j) * (m-1)^(j-1), j = 1 to k ). I'll leave it as an exercise to the reader to show that this sum comes out to m^k - (m-1)^k. Therefore, D(n, k; m) = m^k - (m-1)^k. Since there are n^k possible sets of k rolls, the probability that the max value of k rolls of n-sided dice is m is (m^k - (m-1)^k)/n^k.

Now, we can compute the expected value. Let E(n, k) denote the expected value for the k-roll experiment with n-sided dice.

E(n, k) = Sum( m * (m^k - (m-1)^k)/n^k, m = 1 to n ) = (1 / n^k) * Sum( m * (m^k - (m-1)^k), m = 1 to n ) = (1 / n^k) * (n^(k+1) - Sum( m^k, m = 1 to n-1 )).

As before, I'll leave the final equality as an exercise to the reader. For k = 1, 2, 3, it's easy to simplify this down further using the well-known formulas for the sum of the first (n-1) integers, squares, and cubes (resp.):

E(n, 1) = (1/n) * (n^2 - [n * (n - 1)] / 2) = (n + 1) / 2 -> For a single roll of a D6, we get (6 + 1) / 2 = 3.5, as expected.

E(n, 2) = (1/n^2) * (n^3 - [n * (n - 1) * (2n - 1)] / 6) = (4n^2 + 3n - 1) / (6n) -> For 2 rolls of a D20, we get (1600 + 60 - 1)/(120) = 13.825, which agrees with the 2-rolls-of-a-D20 simulations in the video.

E(n, 3) = (1/n^3) * (n^4 - [n^2 * (n-1)^2] / 4) = (3n^2 + 2n - 1) / (4n) -> Matches the value computed in the video.

Taking limits as n -> infinity also works (using the ~ asymptotic notation): E(n, k) ~ kn/(k+1), and so E(n, k)/n -> k/(k+1) as n -> infinity. For the full expression of E(n, k)/n, we'd need to make use of Faulhaber's formula, which is a big complicated expression involving Bernoulli numbers. This means that your conjecture isn't quite true, but it is "true" in an asymptotic sense. Using Faulhaber's formula, E(n, k) = kn/(k+1) + 1/2 + o(1) as n -> infinity. [Note: Little-o notation: f(n) = o(g(n)) means f(n)/g(n) -> 0 as n -> infinity. In this case, f(n) = o(1) means f(n) -> 0.]

For rolling with disadvantage (formulas provided without justification; it's essentially the same as above):

d(n, k; m) := # ways that the min value of k rolls of n-sided dice is m = (n-m+1)^k - (n-m)^k

probability that the min value of k rolls of n-sided dice is m = ( (n-m+1)^k - (n-m)^k ) / n^k

e(n, k) := expected value of taking the min of k rolls of n-sided dice = (1/n^k) * Sum( m^k, m = 1 to n )

Using Faulhaber's formula for the sum of the first n kth powers, we find: e(n, k) = n/(k+1) + 1/2 + o(1) as n -> infinity, where that o(1) packs in a bunch of complicated expressions involving Bernoulli numbers.
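A compact check of the closed forms in this derivation (my sketch, not the original commenter's code):

```python
def expected_max(n, k):
    # E(n, k) = (1/n^k) * (n^(k+1) - sum_{j=1}^{n-1} j^k)
    return (n**(k + 1) - sum(j**k for j in range(1, n))) / n**k

def expected_min(n, k):
    # e(n, k) = (1/n^k) * sum_{j=1}^{n} j^k
    return sum(j**k for j in range(1, n + 1)) / n**k

print(expected_max(6, 1))                       # 3.5
print(expected_max(20, 2))                      # 13.825, as in the comment
print(expected_max(6, 2) + expected_min(6, 2))  # n + 1 = 7, the max/min symmetry
```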
@complainer406 · 7 days ago
Another way to get the expected maximum of 3 points being ¾ is to think of it as 1 minus the expected minimum, so 1 - S1, and since S1 is ¼ you get ¾.
@mikeflowerdew7877 · 7 days ago
This is one of the best descriptions of series approximations that I've seen on YT. It's great that you've given each term a qualitatively different explanation, which definitely has parallels with perturbation theory for me (while being simple enough for a general audience!). I particularly liked the graphs at the start, they really show what you gain at each step
@MihaiNicaMath · 7 days ago
Thank you! Indeed, these kinds of series things come up all the time in math/physics/perturbation theory and are often super useful :)
@romansapp5219 · 8 days ago
When you say “k-th largest” I think you mean to say “k-th smallest” 18:17
@MihaiNicaMath · 7 days ago
Yes this is true! I mean the "k-th from the left"
@mars_titan · 8 days ago
At 38:15, why is it multiplied by (1-2/n)? Shouldn't it be (1-1/n)? Because the first two dice have the same value, so there is only one unique value that the next die should be different from?
@MihaiNicaMath · 8 days ago
Yes I agree! Should be (1-1/n)*...*(1-(m-2)/n)
@mars_titan · 8 days ago
@@MihaiNicaMath Now would this change the conclusion that the 1/n^2 coefficient is 0?
@MihaiNicaMath · 8 days ago
@@mars_titan no I did that correctly on paper, just wrote it wrong in the animation!
@ckq · 9 days ago
1:30 yo, the 1/12n reminds me of the adjustment in Stirling's formula.
@yedidiapery · 9 days ago
Really cool video, man! Funnily enough, the bit that was most confusing to me was the part where you subtracted and added the 1/2. It took me a few good seconds to realize what happened there.
@MihaiNicaMath · 9 days ago
Haha, that's funny that the other parts were fine. I did add that, but only near the end of my process, so that you don't need to estimate P(nMaxers = 1) accurately and only need P(nMaxers = 2).
@shadeblackwolf1508 · 9 days ago
For this problem I'd probably have gone with descriptive shorthands, like s for sides and n for number, or d for dice.
@MihaiNicaMath · 9 days ago
Yes, I thought about this! But in the end I wanted to match the notation from Matt Parker's video so that people could go back and forth between them.
@Babakinha · 9 days ago
so cool >:3 nice animations btw
@ComputerNerd98234616 · 9 days ago
In the calculation of P(All Different) around 37:40, shouldn't the error term be of order O(m^3/n^2), which is not negligible when m^3 is comparable to n^2?
@ComputerNerd98234616 · 9 days ago
This error is actually more apparent later, when you state that the number of coincidences is O(1/n^2); clearly, as m gets larger, the probability of 2 or more coincidences must approach 1.
@MihaiNicaMath · 9 days ago
Yes this is true! The big O analysis I do is for m fixed and n growing large. I believe if you want to do it more carefully it's always powers of m/n. (So the next term would be m^2/n^2)
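A sketch (mine) making the powers-of-m/n claim concrete, using the birthday-style product for all m dice showing different values:

```python
import math

def p_all_different(n, m):
    # exact: P(all m rolls distinct) = prod_{j=1}^{m-1} (1 - j/n)
    p = 1.0
    for j in range(1, m):
        p *= 1 - j / n
    return p

def first_order(n, m):
    # keep only the first correction: 1 - C(m,2)/n
    return 1 - math.comb(m, 2) / n

for n, m in [(1000, 3), (1000, 10), (1000, 40)]:
    print(n, m, p_all_different(n, m), first_order(n, m))
# the omitted corrections involve higher powers of m/n, so the two columns
# visibly separate once m^2/n is no longer small
```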
@efkastner · 9 days ago
Yahtzee!
@Juttutin · 10 days ago
As a mathy person who struggles with probability beyond just crunching examples, I'm loving this! I'm at 28:07, and almost shivering with antici...pation to discover where that 1/12 is gonna emerge from.
@MihaiNicaMath · 10 days ago
Haha that's great! That's exactly where it all comes together full circle!
@ronaldking1054 · 10 days ago
With the subtraction of a uniform distribution, you introduced 0 as an outcome. The only way that would be true is if it never could happen, but it has an infinitely small probability rather than a 0 probability. You claimed an equal distribution that is not the same. It still requires an approximation. The delta at 1 can also tie the maximum as 6 - 1 = 5 - 0. This would be exceptionally rare, but not impossible. This also means that delta is ambiguous when you claim it to be a constant as two terms will have deltas and the delta for one is 1 and the delta for the other is 0. From a statistics standpoint, this is still a 0 for the probability, but it is not a 0 exactly. Yes, it is so small that it probably would be covered by the O(1/n^3), but one could argue with the equal sign.
@MihaiNicaMath · 10 days ago
Great thinking! There is a counterintuitive fact about probability I think you are confused about: For continuous random variables (like the uniform distribution from this video, but also the Gaussian distribution and many others), the probability of any individual outcome is literally 0% (not approximately). The probability of a uniform being exactly equal to 0 is 0%. So these kinds of exceptional cases don't contribute to the final distribution and don't matter (not even approximately!)
@ronaldking1054 · 10 days ago
@@MihaiNicaMath No, it is defined as 0 to make most of the mathematics make sense, but it is infinitesimally small. As m approaches infinity, you end up with things that start to break the math. This was the original problem, but this error could grow to something that can be seen. You're trying to claim that dx is 0. It is not. It's a small value that we cannot write directly. I do, however, feel confident that it is well within the bounds of your error no matter how big m is.
@MihaiNicaMath · 10 days ago
@@ronaldking1054 I like your passion! You are a bit mixed up on the use of infinitesimal here: here's a university lecture I gave for a 1st year probability course on this topic that you can watch if you want to learn kzbin.info/www/bejne/kGGTfaeVfM1ggJI
@ronaldking1054 · 10 days ago
@@MihaiNicaMath This is a multivariable limit with m being the number of rolls going to infinity with the value of delta approaching the value of 1 or 0. The problem is that I do not know which it is for the first part. Based on your representation it is 1. That, however, is squeezing a value to 0, and that value being squeezed is a divisor. I think that it shouldn't be a problem because the variables are independent. You did not however even argue that there was a term except it definitely exists.
@MihaiNicaMath · 10 days ago
@ronaldking1054 A couple clarifications: 1. m is fixed (n is the thing going to infinity in this analysis) 2. Delta is always uniform from (0,1), it's not changing or getting squeezed
@paulpagels7429 · 10 days ago
Excellent explanation, thank you!
@mrosskne · 11 days ago
Why are you pointing? We can see what's on the screen. You don't need to tell us what to look at.
@MihaiNicaMath · 10 days ago
Do you mean the green "laser pointer" or do you mean when I'm pointing with my fingers?
@umbraemilitos · 11 days ago
Could you do more on the general strategy of Ninja Proofs? I really like what you showed here.
@simonwillover4175 · 11 days ago
So we can add more terms, and they will have n^2, n^3, etc., up to n^m terms.
@MihaiNicaMath · 11 days ago
Yes! (Actually the exponents are negative i.e. n^-2 etc)
@simonwillover4175 · 11 days ago
18:56 correction: it's the "kth smallest", not the "kth largest", but we can just use the min-max trick from earlier. The kth largest would be (m+1-k)/(m+1).
@MihaiNicaMath · 11 days ago
Yes, this is true! I guess the example I did happened to be equal, but indeed I meant kth from the left.
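A one-line sketch (mine, assuming NumPy) confirming the order-statistics rule for uniforms, E[kth smallest of m] = k/(m+1):

```python
import numpy as np

m = 4
u = np.sort(np.random.rand(200_000, m), axis=1)  # each row: m sorted uniforms
print(u.mean(axis=0))  # ~ [0.2, 0.4, 0.6, 0.8] = k/(m+1); kth largest is (m+1-k)/(m+1)
```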
@justcarcrazy · 11 days ago
10:53 Why not 1+(5*rand)? It should be impossible to ever get zero with any n-sided die.
@MihaiNicaMath · 11 days ago
Yes indeed! How to fix it is explained later in the video
@ronaldking1054 · 10 days ago
What is the probability of [0,1] in your distribution compared to the probability of [0,1] in nU? They are not the same, and introducing the 0 makes more sense than changing the probability of an entire interval from [0,1] in the proof. I noticed that his maximum might all be 0's for delta for the 6's and the 5's could all have 1, but he claimed it would not change the maximum, and it is true, but his delta no longer is a constant. None of it really matters because it is probably encompassed in the O(1/n^3). He, however, would have to show that the E(delta = 1) = E(delta = 0).
@haywardhaunter2620 · 11 days ago
Does the 0 to 1 range of the uniform random variables include both 0 and 1? When you "round up", you use the ceiling notation. If your uniform random number happens to be 0 and you multiply that by the number of sides on a die, then you'd get 0, which is not a valid roll. Likewise, if you got a 1 but the delta that you subtract is also 1, you'd also get a 0. Rounding 0 up to 1 to make it a valid roll adds a minuscule bias, making a roll of 1 ever so slightly more likely than any other value. Since the goal is to improve an approximation that's already ignoring very small terms, perhaps this doesn't matter. But I'm not a mathematician, so maybe I'm wrong about how the endpoints of the uniform random variables affect the reasoning.
@MihaiNicaMath · 11 days ago
Great thought! The truth is that even though things go slightly wrong when it's exactly equal to 0 (exactly as you explained), it doesn't affect things, because there is a 0% chance that it will be exactly equal to 0. For example, the chance of being less than 0.00001 is 0.00001, and the chance of being exactly equal to 0 is less than 0.0000001 for any number of 0s (i.e. the chance is 0).
@R.B. · 11 days ago
Didn't you mean: _X_ ≈ _floor(nU + 1)_? _nU_ is going to be a value in [0, _n_), or is this why you say approximately equal? Edit: hmm, I should have just watched further.
@shpensive · 11 days ago
Really thoughtful explanation of fundamental ideas. The reason to use equations involving random variables is totally clear. It is clear that random variables are just functions, not themselves random, but that some probability over omega is *pushed through* the random variable function, leading to the possibility of calculating expected values, probability mass/density vs. values, and, one is led to presume, statistics in general. What a treat!
@statisticsmatt · 11 days ago
Great video! As always! On a side note, here's a link to a video that also proves the general formula for Matt Parker's max-of-dice conjecture. This video shows the exact probability distribution for any n and m, from which an exact mean value can be calculated. kzbin.info/www/bejne/aJTLhaeZord0qbcsi=ZiyYxadGX2axbuJZ
@FireStormOOO_ · 12 days ago
Great presentation, that was surprisingly easy to follow.