Assignment 4
Points:
ViT, CLIP, SigLIP
Use this notebook and the video below as references to answer the following questions.
Q1: What trade-offs come with smaller vs. larger patch sizes in ViT?
Q2: What inductive biases do CNNs have that ViTs lack? What are the consequences?
Q3: Why is positional encoding necessary in ViT, and how is it implemented?
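As a starting point for Q1, a back-of-the-envelope sketch of how patch size controls the token count (the function name and default sizes are illustrative, not taken from the notebook):

```python
def vit_tokens(image_size=224, patch_size=16):
    # Non-overlapping patches tile the image; each patch becomes one token
    per_side = image_size // patch_size
    return per_side * per_side

# 16x16 patches on a 224x224 image -> 14*14 = 196 tokens;
# 8x8 patches -> 28*28 = 784 tokens. Because self-attention cost grows
# quadratically in the token count, halving the patch size multiplies
# attention FLOPs by roughly 16x, in exchange for finer spatial detail.
```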
Q4: What are the two separate encoders in CLIP, and what is their purpose?
Q5: Explain CLIP’s contrastive loss. How does it align image and text representations?
Q6: How does CLIP enable zero-shot classification? What role do prompts like “a photo of a ___” play?
Q7: What is the main difference in loss function between CLIP and SigLIP?
Q8: What might be the impact of removing the softmax normalization across the batch in SigLIP?
Q9: What are potential advantages of SigLIP when deploying models in low-latency environments?
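For Q5, Q7, and Q8, a minimal NumPy sketch contrasting the two objectives may help: CLIP uses a symmetric softmax cross-entropy over the batch, while SigLIP scores each image-text pair independently with a sigmoid. Function names, the temperature, and SigLIP's scale `t` and bias `b` values here are illustrative assumptions, not the papers' exact training settings:

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature      # (B, B) cosine similarities
    labels = np.arange(len(img))            # matching pairs sit on the diagonal

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)              # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average the image->text and text->image directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

def siglip_sigmoid_loss(img_emb, txt_emb, t=10.0, b=-10.0):
    """Per-pair binary loss: +1 targets on the diagonal, -1 elsewhere."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = t * (img @ txt.T) + b
    z = 2.0 * np.eye(len(img)) - 1.0        # +1 for matches, -1 for non-matches
    # log(1 + exp(-z * logits)) = -log sigmoid(z * logits); no batch-wide softmax
    return np.mean(np.log1p(np.exp(-z * logits)))
```

Note that the softmax in `clip_contrastive_loss` couples every pair in the batch, whereas `siglip_sigmoid_loss` treats each pair as an independent binary decision, which is the property Q8 and Q9 ask you to reason about.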
Source: ViT, CLIP, SigLIP