Tuesday, October 1st, 4-5pm EST | Amy Lu, PhD student (UC Berkeley)
Existing protein machine learning representations typically model either the sequence or the structure distribution, with the other modality left implicit. The latent space of sequence-to-structure prediction models such as ESMFold represents the joint distribution of sequence and structure; however, we find these embeddings to exhibit massive activations, whereby some channels have values 3000x higher than others regardless of the input. Further, under continuous compression schemes, ESMFold embeddings can be reduced by a factor of 128x along the channel dimension and 8x along the length dimension, while retaining structure information at 2 Å-scale accuracy and performing competitively on protein function and localization benchmarks. Under discrete compression schemes, we construct a tokenized all-atom structure vocabulary that retains high reconstruction accuracy, thus introducing a tokenized representation of all-atom structure that can be obtained from sequence alone. We term this series of embeddings CHEAP (Compressed Hourglass Embedding Adaptations of Proteins), obtained via the HPCT (Hourglass Protein Compression Transformer) architecture. CHEAP is a compact representation of both protein structure and sequence; it sheds light on information content asymmetries between sequence and structure and democratizes representations captured by large models.
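A minimal sketch of the "massive activations" observation described above: given per-residue embeddings from a sequence-to-structure model (shape batch x length x channels), compare the magnitude of the largest channels against the typical channel. The embedding tensor here is a random stand-in; obtaining real ESMFold latents is model-specific and not shown, and the helper name is hypothetical.

```python
import torch

def massive_activation_ratio(embeddings: torch.Tensor, top_k: int = 5) -> float:
    """Ratio of the largest per-channel magnitudes to the median channel magnitude.

    Hypothetical diagnostic: a ratio in the thousands would indicate that a few
    channels dominate the activation scale regardless of input, as reported for
    ESMFold latents in the talk.
    """
    # Max absolute activation per channel, pooled over batch and sequence length.
    per_channel = embeddings.abs().amax(dim=(0, 1))   # shape: (channels,)
    top_vals, _ = per_channel.topk(top_k)             # magnitudes of the dominant channels
    median_val = per_channel.median()                  # typical channel magnitude
    return (top_vals.mean() / median_val).item()

# Toy usage with random data (real ESMFold embeddings would replace this tensor).
emb = torch.randn(2, 100, 1024)
print(f"top-channel / median-channel magnitude ratio: {massive_activation_ratio(emb):.1f}")
```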
Preprint: www.biorxiv.or...