All across the world, everyone is pedal-to-the-metal on machine intelligence, as though we're assembling the plane mid-flight. There's a lot about machine learning models that might surprise you, and that definitely surprises many ML and security engineers. For example, models can contain malware and still give accurate results. Did you know that you can gain administrative access to the ML repositories of household-name companies and have their engineers simply hand over their models, training sets, and more? As it stands today, ML is a great place for an attacker to operate, because these environments have access to your 'crown jewel' data by necessity. No lengthy or complicated pivoting and privilege-escalation work is needed. At the same time, tools for proactively assessing model safety, DFIR knowledge of ML constructs, and methods for analyzing models suspected to be malicious are all few and far between.
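To make that first claim concrete: a common reason a model file can carry malware yet still predict normally is that popular serialization formats (notably Python's pickle, which underlies many model checkpoint formats) execute code during deserialization. Below is a minimal sketch of that general mechanism, not the specific technique from the talk; the class name PoisonedArtifact and the echo payload are hypothetical placeholders.

import os
import pickle

class PoisonedArtifact:
    def __reduce__(self):
        # pickle records this as "call os.system('echo pwned') at load time".
        # A real payload could sit alongside legitimate model weights, so the
        # model still loads and scores normally after the code runs.
        return (os.system, ("echo pwned",))

blob = pickle.dumps(PoisonedArtifact())
pickle.loads(blob)  # executes the shell command merely by loading the bytes

Nothing here exploits a bug: running attacker-supplied callables is documented pickle behavior, which is why loading an untrusted model file can be equivalent to running an untrusted program.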
This presentation demonstrates how we distributed malware using undocumented, novel techniques to compromise some of the largest companies in the world, one of which we discovered entirely by accident! Additionally, we will show you how to write ML malware, how to distribute it, and how to loot the environments after gaining access. You'll learn both how we developed a technique to avoid detection and what you can expect to find post-compromise. Finally, we'll discuss some techniques and tools available for analyzing models, and we'll talk through threat hunting we've conducted to look for machine learning malware in the wild.
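On the analysis side, one low-tech starting point, sketched here with only the Python standard library and not necessarily the tooling covered in the talk, is to disassemble a suspect pickle-based model file's opcode stream and flag imports of modules a benign checkpoint has no reason to touch. The RISKY set and the scan_pickle helper are illustrative assumptions.

import io
import pickletools

# Modules a benign model checkpoint rarely needs to resolve at load time.
RISKY = {"os", "posix", "nt", "subprocess", "builtins", "socket"}

def scan_pickle(data: bytes) -> list[str]:
    """Heuristically list risky module.name references a pickle would resolve."""
    hits = []
    strings = []  # recent string pushes, used to resolve STACK_GLOBAL operands
    for opcode, arg, _pos in pickletools.genops(io.BytesIO(data)):
        if opcode.name == "GLOBAL":
            module, name = str(arg).split(" ", 1)  # arg is "module name"
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            module, name = strings[-2], strings[-1]  # operands come off the stack
        else:
            if isinstance(arg, str):
                strings.append(arg)
            continue
        if module.split(".")[0] in RISKY:
            hits.append(f"{module}.{name}")
    return hits

# Usage: scan_pickle(open("suspect_model.pkl", "rb").read()) -> e.g. ["posix.system"]

This is only a triage heuristic: it inspects the opcode stream without ever loading the pickle, so it is safe to run on untrusted files, but obfuscated payloads can evade simple string matching.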
All of this work will be released as open-source code. We hope not only to help you reproduce what we've done (so you can try out your own ideas and help secure your organization) but also to provide advice on mitigation and prevention.
By:
Adrian Wood | Security Engineer, Dropbox
Mary Walker | Security Engineer, Dropbox
Full Abstract & Presentation Materials:
www.blackhat.c...