**Pooling mechanism**: Embeddings for each token are useful, but how do we roll those up into something more meaningful? To get a single vector representation of the entire document or query, a pooling mechanism is applied to the token-level embeddings.
### Pooling Mechanism
Pooling mechanisms are used to get an embedding that represents an entire document or query. How can we condense the token-level embeddings into a single vector? There are several common approaches:
#### Mean Pooling
Mean pooling involves averaging the embeddings of all tokens in the sequence. This method takes the mean of each dimension across all token embeddings, resulting in a single embedding vector that represents the average contextual information of the entire input.
This approach provides a smooth and balanced representation by considering all tokens equally. For example: |
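Here is a minimal sketch in PyTorch. The tensor shapes and variable names are illustrative assumptions rather than the output of any particular model; in practice the token embeddings would be the encoder's last hidden state and the attention mask would come from the tokenizer.

```python
import torch

# Illustrative tensors: batch of 2 sequences, 4 tokens each, hidden size 8.
token_embeddings = torch.randn(2, 4, 8)        # (batch, seq_len, hidden)
attention_mask = torch.tensor([[1, 1, 1, 0],   # 1 = real token, 0 = padding
                               [1, 1, 1, 1]])

# Mean pooling: average only over real (non-padding) tokens.
mask = attention_mask.unsqueeze(-1).float()    # (batch, seq_len, 1)
summed = (token_embeddings * mask).sum(dim=1)  # sum over the token axis
counts = mask.sum(dim=1).clamp(min=1e-9)       # number of real tokens per sequence
mean_pooled = summed / counts                  # (batch, hidden)
print(mean_pooled.shape)                       # torch.Size([2, 8])
```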
#### [CLS] Token Embedding
In models like BERT, a special [CLS] token is added at the beginning of the input sequence. The embedding of this [CLS] token, produced by the final layer of the model, is often used as a representation of the entire sequence, since the [CLS] position is trained to aggregate information from the whole input.
This approach provides a strong, contextually rich representation because of the [CLS] token's position and function.
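A minimal sketch, assuming a BERT-style encoder output where the [CLS] token sits at position 0 (shapes here are illustrative):

```python
import torch

# Illustrative encoder output: batch of 2 sequences, 4 tokens, hidden size 8.
last_hidden_state = torch.randn(2, 4, 8)    # (batch, seq_len, hidden)

# [CLS] pooling: take the embedding of the first token in each sequence.
cls_embedding = last_hidden_state[:, 0, :]  # (batch, hidden)
print(cls_embedding.shape)                  # torch.Size([2, 8])
```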
#### Max Pooling
Max pooling selects the maximum value from each dimension across all token embeddings. This method highlights the most significant features in each dimension, providing a single vector representation that emphasizes the most prominent aspects of the input.
This captures the most salient features and can be useful when the strongest signal in each dimension matters more than the average.
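A minimal sketch, again with illustrative shapes and an attention mask so that padding positions are never selected:

```python
import torch

token_embeddings = torch.randn(2, 4, 8)        # (batch, seq_len, hidden)
attention_mask = torch.tensor([[1, 1, 1, 0],   # 1 = real token, 0 = padding
                               [1, 1, 1, 1]])

# Max pooling: take the largest value in each dimension across real tokens.
# Padding positions are filled with -inf so they can never be the maximum.
mask = attention_mask.unsqueeze(-1).bool()
masked = token_embeddings.masked_fill(~mask, float("-inf"))
max_pooled = masked.max(dim=1).values          # (batch, hidden)
print(max_pooled.shape)                        # torch.Size([2, 8])
```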
In summary:
**Mean Pooling**: Averages all token embeddings to get a balanced representation.
**[CLS] Token Embedding**: Uses the embedding of the [CLS] token, which is designed to capture the overall context of the sequence.
**Max Pooling**: Selects the maximum value from each dimension to emphasize the most significant features.
These pooling mechanisms transform the token-level embeddings into a single vector that represents the entire input sequence, making it suitable for downstream tasks such as similarity comparisons and document retrieval. |
## Loss Functions
The training objective is to learn embeddings such that queries are close to their relevant documents in the vector space and far from irrelevant documents.
Common loss functions include the following; a minimal code sketch of all three appears after the list.
**Contrastive loss**: Minimizes the distance between positive (query, relevant document) pairs while pushing negative pairs apart, typically beyond a fixed margin. See also Geoffrey Hinton's paper on [Contrastive Divergence](http://www.cs.toronto.edu/~hinton/absps/nccd.pdf).
**Triplet loss**: Involves a triplet of (query, positive document, negative document) and aims to ensure that the query is closer to the positive document than to the negative document by a certain margin. This [paper on FaceNet](https://arxiv.org/abs/1503.03832) describes using triplets, and [this repository](https://github.com/davidsandberg/facenet) has code samples.
**Cosine similarity loss**: Maximizes the cosine similarity between the embeddings of positive pairs and minimizes it for negative pairs.
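To make these concrete, here is a minimal PyTorch sketch of all three losses. The embeddings are random placeholders standing in for encoder outputs, and the margin value is an arbitrary choice for illustration:

```python
import torch
import torch.nn.functional as F

# Placeholder embeddings (batch of 4, dimension 8); in practice these come
# from encoding a query, a relevant document, and an irrelevant document.
query    = torch.randn(4, 8)
positive = torch.randn(4, 8)
negative = torch.randn(4, 8)
margin = 1.0

# Contrastive loss (margin-based): pull positives together, push negatives
# apart until they are at least `margin` away.
pos_dist = F.pairwise_distance(query, positive)
neg_dist = F.pairwise_distance(query, negative)
contrastive = (pos_dist.pow(2) + (margin - neg_dist).clamp(min=0).pow(2)).mean()

# Triplet loss: the query should be closer to the positive than to the
# negative by at least the margin.
triplet = F.triplet_margin_loss(query, positive, negative, margin=margin)

# Cosine similarity loss: maximize cos(query, positive), penalize positive
# cosine similarity between query and negative.
cosine = (1 - F.cosine_similarity(query, positive)).mean() + \
         F.cosine_similarity(query, negative).clamp(min=0).mean()

print(contrastive.item(), triplet.item(), cosine.item())
```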
## Training Procedure |
The training process involves feeding pairs of queries and documents through the model, obtaining their embeddings, and then computing the loss based on the similarity or dissimilarity of these embeddings. |
**Input pairs**: Query and document pairs are fed into the model.
**Embedding generation**: The model generates embeddings for the query and document.
**Loss computation**: The embeddings are used to compute the loss (e.g., contrastive loss, triplet loss).
**Backpropagation**: The loss is backpropagated to update the model weights. A minimal training-loop sketch covering these four steps follows.
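The sketch below uses a toy encoder (an `nn.EmbeddingBag` that embeds and mean-pools token ids) as a stand-in for a real transformer; the names, shapes, and hyperparameters are assumptions for illustration, not a specific library's API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyEncoder(nn.Module):
    """Toy stand-in for a transformer encoder: token ids -> pooled vector."""
    def __init__(self, vocab_size=1000, hidden=64):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, hidden)  # embeds + mean-pools

    def forward(self, token_ids):
        return self.embed(token_ids)                      # (batch, hidden)

encoder = ToyEncoder()
optimizer = torch.optim.AdamW(encoder.parameters(), lr=1e-4)

for step in range(3):
    # 1. Input pairs: batches of (query, positive doc, negative doc) token ids.
    queries   = torch.randint(0, 1000, (8, 16))
    positives = torch.randint(0, 1000, (8, 16))
    negatives = torch.randint(0, 1000, (8, 16))

    # 2. Embedding generation: encode each side with the shared encoder.
    q_emb, p_emb, n_emb = encoder(queries), encoder(positives), encoder(negatives)

    # 3. Loss computation: here, a triplet loss over the three embeddings.
    loss = F.triplet_margin_loss(q_emb, p_emb, n_emb, margin=1.0)

    # 4. Backpropagation: update the encoder weights.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.4f}")
```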
## Embedding Extraction
After training, the model is often truncated to use only the layers up to the point where the desired embeddings are produced.
For instance: |
**Final layer embeddings**: In many cases, the embeddings from the final layer of the model are used.
**Intermediate layer embeddings**: Sometimes, embeddings from an intermediate layer are used if they are found to be more useful for the specific task. A short extraction sketch follows.
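As a sketch, the Hugging Face `transformers` library exposes every layer's output when `output_hidden_states=True` is set; the model name and the choice of layer 6 below are arbitrary examples, and the mean pooling is just one of the options discussed above:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

inputs = tokenizer("How do embedding models work?", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Final layer embeddings: the last hidden state, mean-pooled over tokens.
final_embedding = outputs.last_hidden_state.mean(dim=1)        # (1, hidden)

# Intermediate layer embeddings: hidden_states[0] is the input embedding
# layer and hidden_states[i] is the output of layer i; pick an earlier
# layer if it proves more useful for the task.
intermediate_embedding = outputs.hidden_states[6].mean(dim=1)  # (1, hidden)

print(final_embedding.shape, intermediate_embedding.shape)
```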
## Let's consider a real example |