Abstract: Vision-language models (VLMs), particularly contrastive language-image pretraining (CLIP), have recently demonstrated great success across various vision tasks. However, their potential in ...