Along with the Meta Segment Anything Model 2 (SAM 2), we also released SA-V: a dataset containing ~51K videos and >600K masklet annotations.
— AI at Meta (@AIatMeta) July 30, 2024
We’re sharing this dataset with the hope that this work will help accelerate new computer vision research ➡️ https://t.co/PkgCns9qjz