#037

Machine Learning Street Talk

Connor Tann is a physicist and senior data scientist working for a multinational energy company, where he co-founded and leads a data science team. He holds a first-class degree in experimental and theoretical physics from Cambridge University and a master's in particle astrophysics. He specializes in the application of machine learning models and Bayesian methods. Today we explore the history, practical utility, and unique capabilities of Bayesian methods. We also discuss the computational difficulties inherent in Bayesian methods, along with modern methods for approximate solutions such as Markov chain Monte Carlo. Finally, we discuss how Bayesian optimization in the context of AutoML may one day put data scientists like Connor out of work.
Panel: Dr. Keith Duggar, Alex Stenlake, Dr. Tim Scarfe
00:00:00 Duggar's philosophical ramblings on Bayesianism
00:05:10 Introduction
00:07:30 small datasets and prior scientific knowledge
00:10:37 Bayesian methods are probability theory
00:14:00 Bayesian methods demand hard computations
00:15:46 uncertainty can matter more than estimators
00:19:29 updating or combining knowledge is a key feature
00:25:39 Frequency or Reasonable Expectation as the Primary Concept
00:30:02 Gambling and coin flips
00:37:32 Rev. Thomas Bayes's pool table
00:40:37 ignorance priors are beautiful yet hard
00:43:49 connections between common distributions
00:49:13 A curious Universe, Benford's Law
00:55:17 choosing priors, a tale of two factories
01:02:19 integration, the computational Achilles heel
01:05:25 Bayesian social context in the ML community
01:10:24 frequentist methods as a first approximation
01:13:13 driven to Bayesian methods by small sample size
01:18:46 Bayesian optimization with AutoML, a job killer?
01:25:28 different approaches to hyper-parameter optimization
01:30:18 advice for aspiring Bayesians
01:33:59 who would Connor interview next?
Connor Tann: / connor-tann-a92906a1
/ connossor
Pod version: anchor.fm/machinelearningstre...

Comments: 31
@felixwhise4165 3 years ago
this podcast concept is so nice! Damn.
@amitkumarsingh406 3 years ago
ASAP Ferg for a ML podcast intro. Genius move
@SuperChooser123 3 years ago
Ride with the mob, Alhamdulillah
@oncedidactic 3 years ago
Hiphop intro song for applied bayes discussion episode, first thing, "let's talk about socrates and eikos" LOVE IT
@afafssaf925 3 years ago
I finished the episode. Thought you focused a bit too much on the philosophical advantages of Bayes. The more practical subjects you touched were discussed in a way that is a bit too disconnected from the practice nowadays (such as conjugacy, flat priors or certain versions of MCMC). And a lot of today's practices are not fully compatible with the supposed philosophical advantages (the idea that you should check your model and iterate on it goes against the pristine idea that "prior + likelihood = all that you can know about a data set"). Bring on Andrew Gelman (you talked briefly about him). He is very interesting to talk to. Very philosophical and practical at the same time.
@marekglowacki2607 3 years ago
Regarding hyperparameter optimization, there is a BOHB algorithm that very nicely combines Bayesian optimization with successive halving from bandit methods. Great episode.
@LiaAnggraini1 3 years ago
This is what I needed on Bayesian methods, especially for my master's. Thank you ❣️
@quebono100 3 years ago
Wow, this episode is so rich. Such useful content. I'm not a data scientist/ML engineer, but even for me it helps a lot.
@DavenH 3 years ago
Loving the host-specific intro edit. Looking forward to the episode as always.
@MachineLearningStreetTalk 3 years ago
Cheers Daven!! Duggar is a pro!
@nomenec 3 years ago
@@MachineLearningStreetTalk Umm ... I'm working on it ;-) and will improve. Keeping up with Tim is a seriously tall order. As for DavenH, thank you very much!
@matt.jordan 3 years ago
Seriously, so much to think about with this one. Goddamn, what a good podcast.
@afafssaf925 3 years ago
Badass intro music!
@abby5493 3 years ago
Very interesting! Great work! 😍
@Chr0nalis 3 years ago
Second channel that I clicked the bell icon on :)
@nomenec 3 years ago
Thank you for your support, Teymur! That certainly puts us in a tiny elite group and one step closer to the sub big leagues ;-)
@leafy-chan880 2 years ago
damn this is amazing, enjoyed the ride.
@compressionart4996 3 years ago
Regarding the chapter on gambling and coin flips: I had the same problem some time ago and needed a tool to determine whether two phenomena come from the same distribution, where the only parameter was the probability of heads. So I had to derive what you mentioned for calculating the true probability of heads, with the complex integration and the confidence measures, and I always wondered why this is never discussed. Is there any book that covers this kind of analysis?
@machinelearningdojowithtim2898 3 years ago
After Zeroth!! So technically second 😛😛🤘
@6710345 3 years ago
Is it possible that Bayesian methods could be applied to NLU problems? NLU problems are not truly solvable by statistical methods like deep NNs.
@sayanbhattacharya3233 11 days ago
Damn! This ❤
@SimonJackson13 2 years ago
Integrations? :D Are you sure the exactness of the integral is necessitated by the uncertainty?
@DavenH 3 years ago
Another great street talk, and as evidence I have a half page of new notes on the Bayes page in my ML wiki. Thanks so much. Not coming from a math or stats background, I didn't follow where these infinite, unsolvable integrals come from. "To conduct marginalization" -- correct me if I'm mistaken, but I believe this means summing over certain dimensions of a multivariate probability density function, arriving at a marginal distribution, so as to make predictions when you ONLY have the marginal variables. With discrete variables this summation seems almost trivial. So is it only a computational problem when performing this operation analytically, trying to find a closed-form solution?
@nomenec 3 years ago
DavenH, that's great that you were able to pull more notes for your wiki! Regarding unsolvable integrals, if one of us literally said "infinite" we were probably exaggerating with poetic license for effect. There probably are scenarios where one might want to integrate over infinite dimensionality, but we can understand the difficulty without going to that extreme ;-)
Where do these integrals arise? You are correct that they arise from marginalization to eliminate nuisance parameters, to compare models by integrating out all free parameters, to normalize distributions, to reduce dimensionality, to generate statistical summaries, and other purposes.
Why are they hard? Well, first off, naive integration, say by grid sampling, grows in complexity exponentially as K^N, where K is the number of samples per dimension and N is the number of dimensions. This is simply a consequence of the search volume growing exponentially. With complex functions K is often quite large in practice, hundreds or even thousands of samples, which makes for a nasty exponential growth, and definitely not a schoolbook 2^N growth, which is gentle by contrast. Of course we try to be much smarter than simple grid sampling and deploy a variety of tricks to reduce the required sample count in aggregate. Unfortunately, those tricks can rapidly break down in higher dimensions.
To get an intuition for why the tricks break down, think of sampling f(x) as gaining information about the function f in a small neighborhood around the point x. A good notion of a small neighborhood around x is the volume within a certain distance, r, of x. In other words, we learn information about f in a ball of radius r centered at x. Now the domain of x that spans that ball is a hypercube with sides of length 2r. So what portion of that domain is covered by the ball? Let's calculate. In one dimension a "ball" is just a line segment of radius r with "volume" (length) 2r, and the volume of a 1-d hypercube is likewise 2r, so a single sample covers 100% of the local domain around x. In two dimensions the volume of the ball (a disk) is pi*r^2 while the volume of the hypercube is 4r^2, so we cover pi/4, or about 79%, of the local domain. In 3-d it's (4/3)pi*r^3 / 8r^3, or about 52%. You can guess where this is going: the higher the dimension, the less a single sample tells us, even relative to the small local hypercube around the sample point! In fact, given that the volume of an n-ball of radius r is pi^(n/2)/Gamma[n/2+1] * r^n and the volume of an n-cube of side 2r is (2r)^n, the ratio n-ball/n-cube approaches zero *very* fast. At just 10 dimensions each point covers less than 1% of the volume.
To recap: as the dimensionality increases, our search volume grows exponentially and each sample tells us super-exponentially less. Ouch! In that sense one can roughly say that integration becomes "infinitely" hard, since the coverage per sample rapidly approaches zero ;-) Cheers!
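As a concrete check of the arithmetic above, here is a minimal Python sketch (an illustration added here, not code from the episode) that computes the n-ball to n-cube volume ratio, pi^(n/2)/Gamma(n/2+1)*r^n divided by (2r)^n, and the K^N sample count of naive grid integration; the dimensions and the value of K are arbitrary choices:

```python
import math

def ball_to_cube_ratio(n: int, r: float = 1.0) -> float:
    """Volume of an n-ball of radius r divided by the enclosing n-cube of side 2r."""
    ball = math.pi ** (n / 2) / math.gamma(n / 2 + 1) * r ** n
    cube = (2 * r) ** n
    return ball / cube

# Each sample "covers" a vanishing fraction of its local hypercube as the dimension grows.
for n in (1, 2, 3, 10, 20):
    print(f"dim={n:2d}  ball/cube ratio = {ball_to_cube_ratio(n):.6f}")

# Naive grid integration: K samples per dimension means K^N samples in total.
K = 100
for n in (2, 5, 10):
    print(f"dim={n:2d}  grid samples at K={K}: {float(K) ** n:.3e}")
```

For n = 1, 2, 3 this reproduces the 100%, ~79%, and ~52% figures above, and by n = 10 the ratio is already below 1%, while the grid sample count has grown to 10^20.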
@DavenH 3 years ago
@@nomenec Keith, thanks a million for taking the time to spell this out so unambiguously. I get it: the curse of dimensionality subsumes the problem rapidly. I had no idea that such Bayesian methods were used on problems with hundreds of dimensions, but that's a sampling bias on my end. Textbooks and blogs only ever seem to use at most 3 easily visualized dimensions. Cheers!
@dhruvpatel4948 3 years ago
Zeroth
@dontthinkastronaut 3 years ago
no mention of kolmogorov in the intro lul
@nomenec 3 years ago
Kolmogorov is for frequentists ;-). Bayesians rely on a more general concept of *conditional* probability, first axiomatized by Richard Cox (1,2,3,4). Cox's axioms allow one to derive conditional probability theory as a theorem; in a Kolmogorov framework one must instead assume/introduce conditional probability as another postulate/definition (5). Here are Cox's original postulates for those interested (though the paper is a great read even for just the first section):
Let b|a denote some measure of the reasonable credibility of the proposition b when the proposition a is known to be true, let . denote proposition conjunction, and let ~ denote proposition negation.
I) c.b|a = F(c|b.a, b|a), where F is some function of two variables
II) ~b|a = S(b|a), where S is some function of a single variable
From those he derives a generalized family of rules of conditional probability. If we further assume, by convention, that certainty is represented by the real number 1 and impossibility by 0 (and perhaps another convention as well, such as taking the lowest-order polynomial function), then we get the conventional laws of conditional probability.
(1) jimbeck.caltech.edu/summerlectures/references/ProbabilityFrequencyReasonableExpectation.pdf
(2) stats.stackexchange.com/questions/126056/do-bayesians-accept-kolmogorovs-axioms
(3) en.wikipedia.org/wiki/Cox%27s_theorem
(4) en.wikipedia.org/wiki/Probability_axioms
(5) en.wikipedia.org/wiki/Conditional_probability
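For readers who want to see those conventional laws in action, here is a minimal numeric sketch (a made-up toy joint distribution, not material from the episode) that checks the product rule P(b.c) = P(c|b)P(b) and Bayes' theorem on a small discrete table; it also shows the marginalization (summing out a variable) asked about earlier in the thread:

```python
import numpy as np

# Toy joint distribution P(B, C); the numbers are arbitrary and sum to 1.
# Rows index B, columns index C.
joint = np.array([[0.10, 0.30],
                  [0.20, 0.40]])

# Marginalization: sum out one variable to get the other's marginal.
p_b = joint.sum(axis=1)                       # P(B)
p_c = joint.sum(axis=0)                       # P(C)

# Conditional from the joint: P(C|B) = P(B, C) / P(B).
p_c_given_b = joint / p_b[:, None]

# Product rule: P(B, C) = P(C|B) * P(B).
assert np.allclose(joint, p_c_given_b * p_b[:, None])

# Bayes' theorem: P(B|C) = P(C|B) * P(B) / P(C).
p_b_given_c = p_c_given_b * p_b[:, None] / p_c[None, :]
assert np.allclose(p_b_given_c.sum(axis=0), 1.0)   # each column is a valid distribution over B

print("P(B) =", p_b, " P(C) =", p_c)
print("P(B|C):\n", p_b_given_c)
```

In the discrete case the marginal sums really are trivial, as noted above; the difficulty discussed earlier in the thread appears when the parameters are continuous and high-dimensional, so the sums become integrals over large spaces.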