SignCLIP: Connecting Text and Sign Language by Contrastive Learning
Citations
Jiang, Z., Sant Muniesa, G., Moryossef, A., Müller, M., Sennrich, R., & Ebling, S. (2024). SignCLIP: Connecting Text and Sign Language by Contrastive Learning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 9171–9193). https://aclanthology.org/2024.emnlp-main.518
Abstract
We present SignCLIP, which re-purposes CLIP (Contrastive Language-Image Pretraining) to project spoken language text and sign language videos, two classes of natural languages of distinct modalities, into the same space. SignCLIP is an efficient method of learning useful visual representations for sign language processing from large-scale, multilingual video-text pairs, without directly optimizing for a specific task or sign language, which is often of limited size. We pretrain SignCLIP on Spreadthesign, a prominent sign language dictionary.
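The core idea the abstract describes — projecting paired text and video into a shared space by contrastive learning — can be sketched as the symmetric InfoNCE objective used by CLIP-style models. The function below is an illustrative toy, not the paper's actual implementation: the embeddings, dimensions, and temperature are made up for the example, and the real model would use learned text and video encoders.

```python
import numpy as np

def clip_contrastive_loss(text_emb, video_emb, temperature=0.07):
    """Symmetric cross-entropy over a batch of paired embeddings.

    text_emb, video_emb: (N, D) arrays; row i of each is a matched pair.
    """
    # L2-normalize so the dot product is cosine similarity.
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    logits = t @ v.T / temperature  # (N, N) similarity matrix

    # Matched pairs sit on the diagonal; each row (and each column) is an
    # N-way classification problem, and the two directions are averaged.
    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

# Toy usage: three matched pairs with orthogonal 4-dim embeddings.
text = np.eye(3, 4)
loss_matched = clip_contrastive_loss(text, text)              # aligned pairs
loss_shuffled = clip_contrastive_loss(text, text[[1, 2, 0]])  # misaligned
```

Because the loss rewards high similarity only on the diagonal, aligned pairs yield a near-zero loss while shuffled (mismatched) pairs yield a large one — which is what pushes matching text and sign language videos toward the same region of the embedding space during pretraining.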