ViT, CLIP, SigLIP

Use this notebook and the video below as references to answer the following questions.

Q1: What trade-offs come with smaller vs. larger patch sizes in ViT?

# Your answer or notes here
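
A quick back-of-the-envelope calculation you can run while forming your answer (the 224×224 input resolution is an assumption; the patch sizes are just examples):

# Sequence length for a few patch sizes (illustrative numbers, not a recommendation).
image_size = 224  # assumed square input resolution
for patch in (8, 16, 32):
    num_patches = (image_size // patch) ** 2   # tokens fed to the transformer
    attn_cost = num_patches ** 2               # self-attention is quadratic in tokens
    print(f"patch={patch:2d} -> {num_patches:4d} tokens, ~{attn_cost:,} attention pairs")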

Q2: What inductive biases do CNNs have that ViTs lack? What are the consequences?

# Your answer or notes here
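
A minimal sketch (assuming PyTorch) of one CNN inductive bias, translation equivariance: shifting the input simply shifts the convolution's output, with no learning required.

import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)

x = torch.randn(1, 1, 8, 8)
x_shifted = torch.roll(x, shifts=2, dims=-1)   # slide the image 2 pixels to the right

y = conv(x)
y_shifted = conv(x_shifted)

# Away from the borders, shifting the input just shifts the output: the same local
# filter is reused everywhere (locality + weight sharing). A plain ViT has no such
# built-in constraint and must learn spatial structure from data.
print(torch.allclose(torch.roll(y, shifts=2, dims=-1)[..., 3:7], y_shifted[..., 3:7], atol=1e-5))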

Q3: Why is positional encoding necessary in ViT, and how is it implemented?

# Your answer or notes here
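
A minimal sketch (PyTorch assumed; the hyperparameters are illustrative) of the learned positional embeddings that ViT adds to its patch tokens:

import torch
import torch.nn as nn

num_patches, embed_dim, batch = 196, 768, 2   # 224/16 = 14, so 14*14 = 196 patches

# One learnable vector per position, plus one for the [CLS] token, as in the original ViT.
pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))
cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))

patch_tokens = torch.randn(batch, num_patches, embed_dim)  # output of the patch-embedding projection
tokens = torch.cat([cls_token.expand(batch, -1, -1), patch_tokens], dim=1)
tokens = tokens + pos_embed                                # position information is simply added
print(tokens.shape)                                        # torch.Size([2, 197, 768])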

Q4: What are the two separate encoders in CLIP, and what is their purpose?

# Your answer or notes here
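
A skeletal sketch of the two-tower idea: separate encoders for each modality projecting into one shared, normalized embedding space. The modules and shapes below are toy placeholders, not CLIP's actual architectures.

import torch
import torch.nn as nn
import torch.nn.functional as F

embed_dim = 512

# Toy stand-ins for CLIP's image encoder (a ViT or ResNet in the paper)
# and text encoder (a transformer). Shapes here are placeholders.
image_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, embed_dim))
text_encoder = nn.Sequential(nn.EmbeddingBag(1000, embed_dim), nn.Linear(embed_dim, embed_dim))

images = torch.randn(4, 3, 32, 32)
token_ids = torch.randint(0, 1000, (4, 16))

# Both modalities land in the same space and are L2-normalized,
# so cosine similarity between any image/text pair is just a dot product.
img_emb = F.normalize(image_encoder(images), dim=-1)
txt_emb = F.normalize(text_encoder(token_ids), dim=-1)
print(img_emb.shape, txt_emb.shape)   # torch.Size([4, 512]) torch.Size([4, 512])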

Q5: Explain CLIP’s contrastive loss. How does it align image and text representations?

# Your answer or notes here
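
A minimal sketch of CLIP's symmetric contrastive (InfoNCE) loss over a batch of N matched image-text pairs; the embeddings here are random placeholders and the temperature is fixed for simplicity (CLIP learns it):

import torch
import torch.nn.functional as F

N, d = 8, 512
img_emb = F.normalize(torch.randn(N, d), dim=-1)
txt_emb = F.normalize(torch.randn(N, d), dim=-1)
temperature = 0.07

# N x N cosine-similarity matrix: entry (i, j) compares image i with text j.
logits = img_emb @ txt_emb.t() / temperature
targets = torch.arange(N)   # the matching pair sits on the diagonal

# Softmax cross-entropy in both directions: each image must pick out its own caption
# among all texts in the batch, and vice versa. This pulls matched pairs together
# and pushes the other N-1 pairings in the batch apart.
loss = (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2
print(loss.item())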

Q6: How does CLIP enable zero-shot classification? What role do prompts like “a photo of a ___” play?

# Your answer or notes here
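
A sketch of the zero-shot recipe: embed one prompt per class, then pick the class whose text embedding is most similar to the image embedding. The random tensors below are placeholders standing in for CLIP's pretrained encoders.

import torch
import torch.nn.functional as F

class_names = ["dog", "cat", "airplane"]
prompts = [f"a photo of a {name}" for name in class_names]   # the prompt template fills the blank

d = 512
# Placeholders standing in for encode_text(prompts) and encode_image(image).
text_embeddings = F.normalize(torch.randn(len(prompts), d), dim=-1)
image_embedding = F.normalize(torch.randn(1, d), dim=-1)

# No training on these classes: classification is just nearest text embedding.
similarities = (image_embedding @ text_embeddings.t()).squeeze(0)
prediction = class_names[similarities.argmax().item()]
print(prediction)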

Q7: What is the main difference in loss function between CLIP and SigLIP?

# Your answer or notes here
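
A sketch of SigLIP's pairwise sigmoid loss, for contrast with the softmax-based loss above (the temperature and bias are learnable in SigLIP; they are fixed here for illustration):

import torch
import torch.nn.functional as F

N, d = 8, 512
img_emb = F.normalize(torch.randn(N, d), dim=-1)
txt_emb = F.normalize(torch.randn(N, d), dim=-1)

t, b = 10.0, -10.0
logits = t * (img_emb @ txt_emb.t()) + b

# Every image-text pair is an independent binary decision (match / no match),
# so no softmax normalization across the whole batch is required.
labels = torch.eye(N)   # 1 on the diagonal (matched pairs), 0 elsewhere
loss = F.binary_cross_entropy_with_logits(logits, labels)
print(loss.item())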

Q8: What might be the impact of removing the softmax normalization across the batch in SigLIP?

# Your answer or notes here

Q9: What are potential advantages of SigLIP when deploying models in low-latency environments?

# Your answer or notes here