Research from Meta AI reduces the latency of existing Vision Transformer (ViT) models with no additional training. Token Merging can cut inference time roughly in half, and we expect it to unlock wider use of large-scale ViT models in real-world applications. Read more: https://bit.ly/3ZJv61D
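The core idea behind Token Merging is to combine a model's most similar tokens between transformer layers, so later layers process fewer tokens. A minimal NumPy sketch of that idea is below; it assumes a ToMe-style bipartite split into alternating token sets, cosine similarity for matching, and simple averaging of merged pairs. The function name `merge_tokens` and parameter `r` (pairs merged per layer) are illustrative, not the library's API.

```python
import numpy as np

def merge_tokens(tokens: np.ndarray, r: int) -> np.ndarray:
    """Sketch: merge the r most similar token pairs by averaging.

    tokens: (n, d) array of token embeddings.
    Splits tokens into two alternating sets A and B, matches each
    token in A to its most similar token in B by cosine similarity,
    then merges the r highest-scoring pairs into their B partners.
    """
    # Normalize so dot products are cosine similarities.
    normed = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    a, b = normed[0::2], normed[1::2]        # alternating bipartite split
    scores = a @ b.T                         # (|A|, |B|) similarity matrix
    best_b = scores.argmax(axis=1)           # best B partner for each A token
    best_score = scores.max(axis=1)
    merge_idx = np.argsort(-best_score)[:r]  # A-indices of the r best pairs

    tok_a, tok_b = tokens[0::2], tokens[1::2]
    merged = tok_b.copy()
    # Average each chosen A token into its matched B token.
    for i in merge_idx:
        merged[best_b[i]] = (tok_a[i] + merged[best_b[i]]) / 2
    keep_a = np.delete(tok_a, merge_idx, axis=0)  # unmerged A tokens survive
    return np.concatenate([keep_a, merged], axis=0)

# Example: 8 tokens of dimension 4, merge 2 pairs -> 6 tokens remain.
x = np.random.default_rng(0).normal(size=(8, 4))
out = merge_tokens(x, r=2)
print(out.shape)  # (6, 4)
```

Each merge removes one token, so applying this at every layer shrinks the sequence progressively, which is where the inference-time savings come from.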
Meta AI Reduces Vision Transformer Latency with Token Merging