New research from Meta AI reduces the latency of existing Vision Transformer models with no additional training. ToMe (Token Merging) combines similar tokens, reducing computation without losing information. The result is a 2-3x speedup for state-of-the-art models with minimal performance loss.
— AI at Meta (@AIatMeta) February 13, 2023
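The core idea — cutting the token count by merging the most similar tokens between transformer layers — can be illustrated with a toy sketch. This is not Meta's ToMe implementation (which uses an efficient bipartite soft-matching step); it is a simplified greedy stand-in, and the function name and averaging strategy here are illustrative assumptions.

```python
import numpy as np

def merge_similar_tokens(tokens: np.ndarray, r: int) -> np.ndarray:
    """Merge the r most cosine-similar token pairs by averaging.

    Toy illustration of token merging; ToMe proper uses bipartite
    soft matching rather than this greedy pairwise scheme.

    tokens: (N, D) array of token embeddings.
    Returns an array of shape (N - r, D).
    """
    # Cosine similarity between all token pairs (computed once, for simplicity).
    normed = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    sim = normed @ normed.T
    np.fill_diagonal(sim, -np.inf)  # a token never merges with itself

    merged = tokens.copy()
    alive = list(range(len(tokens)))  # indices of tokens not yet merged away
    for _ in range(r):
        # Find the most similar surviving pair and average it into one token.
        sub = sim[np.ix_(alive, alive)]
        i, j = np.unravel_index(np.argmax(sub), sub.shape)
        a, b = alive[i], alive[j]
        merged[a] = (merged[a] + merged[b]) / 2
        alive.remove(b)
    return merged[alive]
```

Applied between attention blocks, shrinking the sequence from N to N - r tokens per layer is what reduces the quadratic attention cost without retraining.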