SetFit with BAAI/bge-base-en-v1.5

This is a SetFit model for text classification. It uses BAAI/bge-base-en-v1.5 as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
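
The snippet below is a minimal sketch of this two-step recipe using SetFit's Trainer API; the dataset rows are illustrative placeholders rather than the actual training data, and only a couple of the real hyperparameters (listed under Training Details) are shown.

from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Illustrative few-shot examples; the real training data is not included here.
train_dataset = Dataset.from_dict({
    "text": [
        "this tool is a game changer",
        "it amazes me every time",
        "it refuses to do the work, useless",
        "erroneous results almost every time",
        "any idea why the update is not showing?",
        "i really enjoy this genre of joke",
    ],
    "label": ["peak", "peak", "pit", "pit", "neither", "neither"],
})

# Step 1: fine-tune the Sentence Transformer body with contrastive learning.
# Step 2: fit a classification head (LogisticRegression by default) on its embeddings.
model = SetFitModel.from_pretrained("BAAI/bge-base-en-v1.5", labels=["pit", "peak", "neither"])
trainer = Trainer(
    model=model,
    args=TrainingArguments(batch_size=32, num_epochs=3),
    train_dataset=train_dataset,
)
trainer.train()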

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: BAAI/bge-base-en-v1.5
  • Classification head: LogisticRegression
  • Number of Classes: 3 (pit, peak, neither)

Model Sources

  • Repository: https://github.com/huggingface/setfit
  • Paper: https://arxiv.org/abs/2209.11055

Model Labels

Label Examples
neither
  • 'it might sound strange, but in my opinion, sams intelligence intimidates him from expressing himself and creating personal art. for example, since product is a masterpiece in the sense, the bar is set very high, so he might even subconsciously be unable to put anything out less'
  • 'lately, i really enjoy the genre of joke that makes you say the punchline in your head.'
  • 'any idea in regard to the product product not being seen? i have 1 device with it, the rest are missing it. same wufb policies.'
pit
  • "brand or brand are behaving like lazy interns. when you need something useful from them like researching and consolidating a large bunch of information they'll just tell you to look it up yourself or right away refuse to do the work. useless."
  • 'the moment i found out what exactly product does i just uninstalled product and went back to 10'
  • "at least 80% of the product stuff posted here has produced erroneous results, and many have utilized ip theft/copyright infringement in informing the model. we're not going to spend community time on it at this point."
peak
  • "man, product/whatever is my new best friend. i like product but the integration of product into office and product is a lot of fun. i just spent the day feeding it my training presentation i'm preparing in my day job and it was very helpful. almost better than humans."
  • "excited to share my experience with product, an incredible language model by brand! from answering questions to creative writing, it's a powerful tool that amazes me every time."
  • 'product in product is a game changer!! here is a list of things it can do: it can answer your questions in natural language. it can summarize content to give you a brief overview it can adjust your pcs settings it can help troubleshoot issues. 1/2'

Evaluation

Metrics

Label: all
  • Accuracy: 0.8996
  • F1 (per class): 0.5217, 0.5143, 0.9478
  • Precision (per class): 0.4286, 0.4091, 0.9776
  • Recall (per class): 0.6667, 0.6923, 0.9198
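
The F1, precision, and recall values above are per-class scores reported alongside the overall accuracy. Below is a minimal sketch of how per-class metrics like these can be computed with scikit-learn; the evaluation texts, labels, and label order are illustrative assumptions, not the actual eval split.

from setfit import SetFitModel
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

model = SetFitModel.from_pretrained("jamiehudson/725_model_v2")

# Illustrative evaluation examples; the real eval split is not included here.
eval_texts = [
    "the integration is a lot of fun and very helpful",
    "it just refuses to do the work, useless",
    "any idea why the update is not showing?",
]
eval_labels = ["peak", "pit", "neither"]

preds = model.predict(eval_texts)
print("accuracy:", accuracy_score(eval_labels, preds))

# average=None returns one score per class, mirroring the per-class values above.
precision, recall, f1, _ = precision_recall_fscore_support(
    eval_labels, preds, labels=["pit", "peak", "neither"], average=None, zero_division=0
)
print("precision:", precision, "recall:", recall, "f1:", f1)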

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference:

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("jamiehudson/725_model_v2")
# Run inference
preds = model("product the way it shows the sources is so fucking cool, this new ai is amazing")
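
Calling the model returns hard labels; to inspect per-class probabilities instead, SetFitModel.predict_proba can be used, as in the brief sketch below (the column order follows the classification head's class ordering).

# One row per input text, one probability per class
probs = model.predict_proba(["this new ai is amazing"])
print(probs)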

Training Details

Training Set Metrics

Training set Min Median Max
Word count 5 29.1484 90

Label Training Sample Count
pit 44
peak 62
neither 150

Training Hyperparameters

  • batch_size: (32, 32)
  • num_epochs: (3, 3)
  • max_steps: -1
  • sampling_strategy: oversampling
  • body_learning_rate: (2e-05, 1e-05)
  • head_learning_rate: 0.01
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
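
These values mirror the fields of SetFit's TrainingArguments. The sketch below shows how the same configuration could be declared (assuming setfit >= 1.0); the loss and distance metric listed above are the library defaults and are therefore not passed explicitly.

from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(32, 32),              # (embedding phase, classifier phase)
    num_epochs=(3, 3),
    max_steps=-1,
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)
# loss defaults to CosineSimilarityLoss and distance_metric to cosine distance,
# matching the values listed above.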

Training Results

Epoch Step Training Loss Validation Loss
0.0000 1 0.2383 -
0.0119 50 0.2395 -
0.0237 100 0.2129 -
0.0356 150 0.1317 -
0.0474 200 0.0695 -
0.0593 250 0.01 -
0.0711 300 0.0063 -
0.0830 350 0.0028 -
0.0948 400 0.0026 -
0.1067 450 0.0021 -
0.1185 500 0.0018 -
0.1304 550 0.0016 -
0.1422 600 0.0014 -
0.1541 650 0.0015 -
0.1659 700 0.0013 -
0.1778 750 0.0012 -
0.1896 800 0.0012 -
0.2015 850 0.0012 -
0.2133 900 0.0011 -
0.2252 950 0.0011 -
0.2370 1000 0.0009 -
0.2489 1050 0.001 -
0.2607 1100 0.0009 -
0.2726 1150 0.0008 -
0.2844 1200 0.0008 -
0.2963 1250 0.0009 -
0.3081 1300 0.0008 -
0.3200 1350 0.0007 -
0.3318 1400 0.0007 -
0.3437 1450 0.0007 -
0.3555 1500 0.0006 -
0.3674 1550 0.0007 -
0.3792 1600 0.0007 -
0.3911 1650 0.0008 -
0.4029 1700 0.0006 -
0.4148 1750 0.0006 -
0.4266 1800 0.0006 -
0.4385 1850 0.0006 -
0.4503 1900 0.0006 -
0.4622 1950 0.0006 -
0.4740 2000 0.0006 -
0.4859 2050 0.0005 -
0.4977 2100 0.0006 -
0.5096 2150 0.0006 -
0.5215 2200 0.0005 -
0.5333 2250 0.0005 -
0.5452 2300 0.0005 -
0.5570 2350 0.0006 -
0.5689 2400 0.0005 -
0.5807 2450 0.0005 -
0.5926 2500 0.0006 -
0.6044 2550 0.0006 -
0.6163 2600 0.0005 -
0.6281 2650 0.0005 -
0.6400 2700 0.0005 -
0.6518 2750 0.0005 -
0.6637 2800 0.0005 -
0.6755 2850 0.0005 -
0.6874 2900 0.0005 -
0.6992 2950 0.0004 -
0.7111 3000 0.0004 -
0.7229 3050 0.0004 -
0.7348 3100 0.0005 -
0.7466 3150 0.0005 -
0.7585 3200 0.0005 -
0.7703 3250 0.0004 -
0.7822 3300 0.0004 -
0.7940 3350 0.0004 -
0.8059 3400 0.0004 -
0.8177 3450 0.0004 -
0.8296 3500 0.0004 -
0.8414 3550 0.0004 -
0.8533 3600 0.0004 -
0.8651 3650 0.0004 -
0.8770 3700 0.0004 -
0.8888 3750 0.0004 -
0.9007 3800 0.0004 -
0.9125 3850 0.0004 -
0.9244 3900 0.0005 -
0.9362 3950 0.0004 -
0.9481 4000 0.0004 -
0.9599 4050 0.0004 -
0.9718 4100 0.0004 -
0.9836 4150 0.0004 -
0.9955 4200 0.0004 -
0.0000 1 0.2717 -
0.0013 50 0.0686 -
0.0026 100 0.088 -
0.0000 1 0.1796 -
0.0013 50 0.0584 -
0.0026 100 0.1018 -
0.0039 150 0.128 -
0.0052 200 0.0761 -
0.0065 250 0.0216 -
0.0078 300 0.1652 -
0.0091 350 0.0384 -
0.0104 400 0.0062 -
0.0117 450 0.0442 -
0.0130 500 0.0452 -
0.0143 550 0.0081 -
0.0156 600 0.0205 -
0.0169 650 0.0125 -
0.0182 700 0.0012 -
0.0195 750 0.0011 -
0.0208 800 0.0315 -
0.0221 850 0.0009 -
0.0009 1 0.0006 -
0.0429 50 0.0008 -
0.0858 100 0.0005 -
0.1288 150 0.0015 -
0.1717 200 0.0013 -
0.2146 250 0.0237 -
0.2575 300 0.0304 -
0.3004 350 0.0005 -
0.3433 400 0.0013 -
0.3863 450 0.03 -
0.4292 500 0.0005 -
0.4721 550 0.0006 -
0.5150 600 0.0005 -
0.5579 650 0.0005 -
0.6009 700 0.0004 -
0.6438 750 0.0004 -
0.6867 800 0.0004 -
0.7296 850 0.0004 -
0.7725 900 0.0004 -
0.8155 950 0.0003 -
0.8584 1000 0.0004 -
0.9013 1050 0.0003 -
0.9442 1100 0.0004 -
0.9871 1150 0.0003 -
1.0300 1200 0.0003 -
1.0730 1250 0.0004 -
1.1159 1300 0.0003 -
1.1588 1350 0.0005 -
1.2017 1400 0.0003 -
1.2446 1450 0.0003 -
1.2876 1500 0.0003 -
1.3305 1550 0.0003 -
1.3734 1600 0.0003 -
1.4163 1650 0.0003 -
1.4592 1700 0.0003 -
1.5021 1750 0.0005 -
1.5451 1800 0.0003 -
1.5880 1850 0.0003 -
1.6309 1900 0.0003 -
1.6738 1950 0.0005 -
1.7167 2000 0.0003 -
1.7597 2050 0.0007 -
1.8026 2100 0.0003 -
1.8455 2150 0.0003 -
1.8884 2200 0.0003 -
1.9313 2250 0.0003 -
1.9742 2300 0.0003 -
2.0172 2350 0.0003 -
2.0601 2400 0.0003 -
2.1030 2450 0.0003 -
2.1459 2500 0.0003 -
2.1888 2550 0.0002 -
2.2318 2600 0.0003 -
2.2747 2650 0.0004 -
2.3176 2700 0.0002 -
2.3605 2750 0.0003 -
2.4034 2800 0.0002 -
2.4464 2850 0.0002 -
2.4893 2900 0.0002 -
2.5322 2950 0.0002 -
2.5751 3000 0.0002 -
2.6180 3050 0.0004 -
2.6609 3100 0.0004 -
2.7039 3150 0.0003 -
2.7468 3200 0.0003 -
2.7897 3250 0.0003 -
2.8326 3300 0.0003 -
2.8755 3350 0.0003 -
2.9185 3400 0.0003 -
2.9614 3450 0.0005 -

Framework Versions

  • Python: 3.10.12
  • SetFit: 1.0.3
  • Sentence Transformers: 2.5.1
  • Transformers: 4.38.1
  • PyTorch: 2.1.0+cu121
  • Datasets: 2.18.0
  • Tokenizers: 0.15.2
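
To approximate this environment, the listed versions can be pinned at install time (a sketch; the CUDA 12.1 build of PyTorch may need to be installed from the matching PyTorch index URL):

pip install setfit==1.0.3 sentence-transformers==2.5.1 transformers==4.38.1 torch==2.1.0 datasets==2.18.0 tokenizers==0.15.2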

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}