Thank you for sharing this video. It's a good resource for studying privacy leakage in machine learning with sensitive data.
@valentin8482 5 years ago
Outstandingly good explanation of the topic. Thanks a lot
@top_coder 7 years ago
Great talk! I learned a lot.
@simoncarter3541 A year ago
What is the difference between the train and test data for the shadow models, given that they're labelled "in" and "out" respectively?
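For context, here is a minimal sketch of how the shadow-model attack dataset is typically assembled (the function name, the choice of a random-forest shadow model, and the use of prediction-probability vectors as features are illustrative assumptions, not details from the talk): each shadow model's own training split is labelled "in" (member) and a held-out split it never saw is labelled "out" (non-member), and those labelled confidence vectors are what the attack model is trained on.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def build_attack_dataset(shadow_splits):
    """shadow_splits: list of (X_in, y_in, X_out, y_out) tuples, one per
    shadow model; all splits are assumed to share the same class set."""
    feats, labels = [], []
    for X_in, y_in, X_out, y_out in shadow_splits:
        shadow = RandomForestClassifier(n_estimators=50).fit(X_in, y_in)
        # "in" records: the shadow model's own training data -> membership label 1
        feats.append(shadow.predict_proba(X_in))
        labels.append(np.ones(len(X_in)))
        # "out" records: held-out data the shadow model never saw -> label 0
        feats.append(shadow.predict_proba(X_out))
        labels.append(np.zeros(len(X_out)))
    return np.vstack(feats), np.concatenate(labels)

# An attack classifier (e.g. logistic regression) is then trained on these
# confidence vectors to distinguish "in" from "out" examples.
```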
@silentrandom 4 years ago
Very well explained. Is there any talk on how privacy-preserving ML techniques, e.g. homomorphic encryption or MPC, counter such attacks?
@tjr3117 3 years ago
I'm currently studying privacy-preserving methods, and in my opinion the best defence against this attack is differential privacy. Homomorphic encryption is too slow, so a lot of research is going into speeding up encryption with GPU-parallel operations.
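To illustrate the differential-privacy defence the commenter mentions, here is a minimal NumPy sketch of a single DP-SGD-style update (the function and parameter names are made up for illustration; this is not from the talk): each example's gradient is clipped to a norm bound and calibrated Gaussian noise is added before the update, which limits how much any single training record can influence the model and thereby weakens membership inference.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0, noise_multiplier=1.1, lr=0.1):
    """One differentially private gradient step.
    per_example_grads: array of shape (batch_size, num_params)."""
    # 1. Clip each example's gradient norm to bound its individual influence.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / (norms + 1e-12))
    # 2. Add Gaussian noise calibrated to the clipping bound.
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=clipped.shape[1])
    noisy_mean = (clipped.sum(axis=0) + noise) / len(clipped)
    # 3. Standard gradient step on the noised average.
    return params - lr * noisy_mean
```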
@sarbajit.g 10 months ago
This attack rests on a lot of assumptions. If somebody knows a similar model architecture and can collect similar kinds of datasets, why would they bother attacking the model at all? Also, launching an MIA just to learn that a particular person's data was used to train an ML model seems like an awful lot of work. Is there any real-world example of an MIA to date, or is it purely of academic interest?