Membership Inference Attacks against Machine Learning Models

  15,112 views

IEEE Symposium on Security and Privacy


A day ago

Comments: 13
@tjr3117
@tjr3117 3 years ago
Thank you for sharing this video. It's good to learn about privacy leakage in machine learning on sensitive data.
@valentin8482
@valentin8482 5 years ago
Outstandingly good explanation of the topic. Thanks a lot
@top_coder
@top_coder 7 years ago
Great talk! I learned a lot.
@simoncarter3541
@simoncarter3541 A year ago
What is the difference between the train and test data for the shadow models, given that they're labelled "in" and "out" respectively?
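The question above concerns the shadow-model step of the attack: records used to train a shadow model get the label "in", held-out records get "out", and an attack model learns to separate the two from the shadow model's confidence vectors. A minimal sketch of that labeling, using toy data and scikit-learn classifiers as illustrative stand-ins (not the paper's models or datasets):

```python
# Hedged sketch of the shadow-model labeling in a membership inference
# attack: "in" = record was in the shadow model's training set,
# "out" = record was held out. Toy data; model choices are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy records standing in for data the attacker can sample.
X = rng.normal(size=(400, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# First half trains the shadow model ("in"); second half is held out ("out").
X_in, y_in, X_out = X[:200], y[:200], X[200:]

shadow = RandomForestClassifier(random_state=0).fit(X_in, y_in)

# Attack training set: the shadow model's confidence vectors,
# labeled by known membership.
conf_in = shadow.predict_proba(X_in)    # membership label 1 ("in")
conf_out = shadow.predict_proba(X_out)  # membership label 0 ("out")
attack_X = np.vstack([conf_in, conf_out])
attack_y = np.hstack([np.ones(len(conf_in)), np.zeros(len(conf_out))])

# The attack model learns to tell members from non-members
# purely from the shadow model's output confidences.
attack = RandomForestClassifier(random_state=0).fit(attack_X, attack_y)
```

So the "train/test" split for a shadow model is not about evaluating the shadow model at all; it is how the attacker manufactures ground-truth membership labels for the attack model.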
@silentrandom
@silentrandom 4 years ago
Very well explained. Is there any talk on how privacy-preserving ML techniques, e.g. homomorphic encryption or MPC, counter such attacks?
@tjr3117
@tjr3117 3 years ago
In my opinion (I'm currently studying privacy-preserving methods), the best defense against this attack is differential privacy. PHE is too slow, so a lot of research is going into speeding up encryption with GPU parallelism.
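For readers unfamiliar with the differential privacy the comment above recommends: the core idea is to add noise calibrated to a query's sensitivity, so no single record's presence changes the output much. A minimal sketch of the classic Laplace mechanism (the epsilon and sensitivity values here are illustrative, not recommendations):

```python
# Minimal sketch of the Laplace mechanism: release a query answer
# with noise scaled to sensitivity / epsilon, so one record's
# presence or absence has a bounded effect on the output.
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Return true_value plus Laplace noise of scale sensitivity/epsilon."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(42)
ages = np.array([34, 29, 41, 52, 38])

# Counting query: adding or removing one person changes the count
# by at most 1, so the sensitivity is 1.
noisy_count = laplace_mechanism(len(ages), sensitivity=1.0, epsilon=0.5, rng=rng)
```

Training-time variants of this idea (e.g. DP-SGD, which clips and noises gradients) are what defend models themselves against membership inference.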
@sarbajit.g
@sarbajit.g 10 months ago
This attack rests on a lot of assumptions. If somebody knows a similar model architecture and can collect a similar dataset, why would they bother attacking the model at all? Also, launching an MIA just to learn that a particular person's data was used to train an ML model seems like a lot of work. Is there any real-world example of an MIA to date, or is it purely of academic interest?
@s4098429
@s4098429 A year ago
Terrible audio, very hard to understand.
@高天林-r2w
@高天林-r2w 5 years ago
Good paper, great talk
@Orqngesyck
@Orqngesyck 5 years ago
Great Talk
@ramizkarim9037
@ramizkarim9037 4 days ago
CAN'T UNDERSTAND THE ACCENT
@chaoliu4328
@chaoliu4328 3 years ago
very clear
@hamidfazli6936
@hamidfazli6936 2 years ago
Very clever!
Membership inference attacks from first principles
22:26
IEEE Symposium on Security and Privacy
4.8K views
SecureML: A System for Scalable Privacy-Preserving Machine Learning
18:36
IEEE Symposium on Security and Privacy
4.2K views
Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks
19:00
IEEE Symposium on Security and Privacy
3.9K views
Towards Evaluating the Robustness of Neural Networks
20:49
IEEE Symposium on Security and Privacy
8K views
On Evaluating Adversarial Robustness
50:32
CAMLIS
9K views
Overview of Model Inversion Attacks
18:05
Tanzim Mostafa
180 views
Differential Privacy - Simply Explained
6:59
Simply Explained
96K views
Audio Adversarial Examples: Targeted Attacks on Speech-to-Text
24:30
IEEE Symposium on Security and Privacy
5K views
NDSS 2018 - Trojaning Attack on Neural Networks
19:11
NDSS Symposium
2.3K views