Multi-Armed Bandits and A/B Testing

5,660 views

Jay Feng

A day ago

Today I'm talking to Sandeep, a PhD student studying Information and Decision Sciences at the University of Minnesota. We talk Multi-Armed Bandits, A/B Testing, and the key differences between the two.
Check out Sandeep's website: sandeepgangarapu.com/
Want to be featured in the next mock interview video? Apply here: airtable.com/shrdQrwKK7xxGLm6l
👉 Subscribe to my data science channel: bit.ly/2xYkyUM
Use the code "datasciencejay" and get 10% off data science interview prep 🔥 : www.interviewquery.com/pricin...
❓ Check out our data science courses: www.interviewquery.com/course...
🔑 Get professional coaching here: www.interviewquery.com/coachi...
🐦 Follow us on Twitter: / interview_query
More from Jay:
Read my personal blog: datastream.substack.com/
Follow me on Linkedin: / jay-feng-ab66b049
Find me on Twitter: / datasciencejay

Comments: 10
@CruiserPup
@CruiserPup 2 years ago
Wow, this was such a great convo! Thanks Sandeep for sharing your wisdom, going to be checking out your other work!
@tinawang1291
@tinawang1291 2 years ago
Learnt something today, thanks! I think for the last example of unlearnai, they will still need to test a few real people with a placebo to validate their model performance. With a proven working model, they can then test mainly with the real drug for side effects, etc.
@YaminiKurra
@YaminiKurra 2 years ago
Such a great talk, Sandy! So proud of you.
@ravennsiregar
@ravennsiregar 5 months ago
Hello Sandeep, thank you for the quick rundown. Would you mind telling us how to connect or discuss with you after this session? As a follow-up: I feel that the multi-armed bandit is a sort of optimisation problem, used under constraints where it is quite hard and ineffective to perform A/B testing. Do you agree with that notion? Let me know your thoughts.
@adhithyajoe1417
@adhithyajoe1417 2 years ago
Great content!!
@shankars4384
@shankars4384 8 months ago
This was a great video!
@sriharshamadala4656
@sriharshamadala4656 2 years ago
It's not often you hear a researcher give a high-level talk that regular folks can understand. Great talk, enjoyed it thoroughly. About that $20 though, what's the algo haha
@ravennsiregar
@ravennsiregar 5 months ago
At the moment it is often done using UCB (Upper Confidence Bound) to maximise the utility return. But the overall problem is that in a casino the reward is not simply one state; it is far more complex than the simple one-state bandit context. The casino example is a mere oversimplification.
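For readers who haven't seen UCB before, here is a minimal UCB1 sketch in Python (not from the video; the per-arm payout rates are made-up numbers for illustration). Each arm's score is its empirical mean reward plus an exploration bonus that shrinks as the arm is pulled more often, so pulls gradually concentrate on the best-looking arm:

import math
import random

# Hypothetical per-arm conversion rates (made-up numbers for the demo).
true_rates = [0.05, 0.04, 0.07]
counts = [0] * len(true_rates)    # pulls per arm
values = [0.0] * len(true_rates)  # running mean reward per arm

def pull(arm):
    # Simulate a one-state Bernoulli reward, as in the simplified casino example.
    return 1.0 if random.random() < true_rates[arm] else 0.0

for t in range(1, 10001):
    if t <= len(true_rates):
        arm = t - 1  # play each arm once before applying the UCB rule
    else:
        # UCB1 score: empirical mean + sqrt(2 ln t / n_a) exploration bonus.
        scores = [values[a] + math.sqrt(2 * math.log(t) / counts[a])
                  for a in range(len(true_rates))]
        arm = scores.index(max(scores))
    reward = pull(arm)
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

print(counts)  # most pulls should end up on the best arm (index 2)

An A/B test, by contrast, keeps the traffic allocation fixed until the experiment ends, which is the key difference the video discusses.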
@iancheung3587
@iancheung3587 2 years ago
What's Sandeep's full name / LinkedIn?
@radio-controlledcouk
@radio-controlledcouk 11 months ago
You can't use multi-armed bandits in online experimentation because they cause return-user bias. MABs can only be used once per user. The problem is that bandit machines have a fixed probability of payout, whilst a website user's probability of buying something increases over time. This means that if a user is switched into a new variation, that new variation is more likely to get credit for a sale. Flawed experiment!