zenml/finetuned-snowflake-arctic-embed-m
This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-m. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: Snowflake/snowflake-arctic-embed-m
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
- Language: en
- License: apache-2.0
Model Sources
- Documentation: [Sentence Transformers Documentation](https://www.sbert.net)
- Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("zenml/finetuned-snowflake-arctic-embed-m")
# Run inference
sentences = [
'What is the expiration time for the GCP OAuth2 token in the ZenML configuration?',
'━━━━━┛\n\nConfiguration\n\n┏━━━━━━━━━━━━┯━━━━━━━━━━━━┓┃ PROPERTY │ VALUE ┃\n\n┠────────────┼────────────┨\n\n┃ project_id │ zenml-core ┃\n\n┠────────────┼────────────┨\n\n┃ token │ [HIDDEN] ┃\n\n┗━━━━━━━━━━━━┷━━━━━━━━━━━━┛\n\nNote the temporary nature of the Service Connector. It will expire and become unusable in 1 hour:\n\nzenml service-connector list --name gcp-oauth2-token\n\nExample Command Output\n\n┏━━━━━━━━┯━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━━┯━━━━━━━━━━━━┯━━━━━━━━┓\n\n┃ ACTIVE │ NAME │ ID │ TYPE │ RESOURCE TYPES │ RESOURCE NAME │ SHARED │ OWNER │ EXPIRES IN │ LABELS ┃\n\n┠────────┼──────────────────┼──────────────────────────────────────┼────────┼───────────────────────┼───────────────┼────────┼─────────┼────────────┼────────┨\n\n┃ │ gcp-oauth2-token │ ec4d7d85-c71c-476b-aa76-95bf772c90da │ 🔵 gcp │ 🔵 gcp-generic │ <multiple> │ ➖ │ default │ 59m35s │ ┃\n\n┃ │ │ │ │ 📦 gcs-bucket │ │ │ │ │ ┃\n\n┃ │ │ │ │ 🌀 kubernetes-cluster │ │ │ │ │ ┃\n\n┃ │ │ │ │ 🐳 docker-registry │ │ │ │ │ ┃\n\n┗━━━━━━━━┷━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━━┷━━━━━━━━━━━━┷━━━━━━━━┛\n\nAuto-configuration\n\nThe GCP Service Connector allows auto-discovering and fetching credentials and configuration set up by the GCP CLI on your local host.',
'Can you list the steps to set up a Docker registry on a Kubernetes cluster?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
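Because this model was trained with MatryoshkaLoss at dimensions 384, 256, 128, and 64 (see Training Details), its embeddings can also be truncated to one of those sizes at a small cost in retrieval quality. A minimal sketch using the `truncate_dim` argument of `SentenceTransformer` (available in sentence-transformers >= 2.7):

```python
from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns 256-dimensional embeddings
model = SentenceTransformer(
    "zenml/finetuned-snowflake-arctic-embed-m", truncate_dim=256
)

embeddings = model.encode(["What is a ZenML service connector?"])
print(embeddings.shape)
# (1, 256)
```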
Evaluation
Metrics
Information Retrieval
- Dataset: dim_384
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.2952 |
cosine_accuracy@3 | 0.5241 |
cosine_accuracy@5 | 0.5843 |
cosine_accuracy@10 | 0.6867 |
cosine_precision@1 | 0.2952 |
cosine_precision@3 | 0.1747 |
cosine_precision@5 | 0.1169 |
cosine_precision@10 | 0.0687 |
cosine_recall@1 | 0.2952 |
cosine_recall@3 | 0.5241 |
cosine_recall@5 | 0.5843 |
cosine_recall@10 | 0.6867 |
cosine_ndcg@10 | 0.4908 |
cosine_mrr@10 | 0.4284 |
cosine_map@100 | 0.4358 |
Information Retrieval
- Dataset: dim_256
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.259 |
cosine_accuracy@3 | 0.506 |
cosine_accuracy@5 | 0.5783 |
cosine_accuracy@10 | 0.6446 |
cosine_precision@1 | 0.259 |
cosine_precision@3 | 0.1687 |
cosine_precision@5 | 0.1157 |
cosine_precision@10 | 0.0645 |
cosine_recall@1 | 0.259 |
cosine_recall@3 | 0.506 |
cosine_recall@5 | 0.5783 |
cosine_recall@10 | 0.6446 |
cosine_ndcg@10 | 0.4548 |
cosine_mrr@10 | 0.3935 |
cosine_map@100 | 0.4034 |
Information Retrieval
- Dataset: dim_128
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.2711 |
cosine_accuracy@3 | 0.4699 |
cosine_accuracy@5 | 0.5663 |
cosine_accuracy@10 | 0.6145 |
cosine_precision@1 | 0.2711 |
cosine_precision@3 | 0.1566 |
cosine_precision@5 | 0.1133 |
cosine_precision@10 | 0.0614 |
cosine_recall@1 | 0.2711 |
cosine_recall@3 | 0.4699 |
cosine_recall@5 | 0.5663 |
cosine_recall@10 | 0.6145 |
cosine_ndcg@10 | 0.4443 |
cosine_mrr@10 | 0.3894 |
cosine_map@100 | 0.3989 |
Information Retrieval
- Dataset: dim_64
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.2169 |
cosine_accuracy@3 | 0.4217 |
cosine_accuracy@5 | 0.5181 |
cosine_accuracy@10 | 0.5843 |
cosine_precision@1 | 0.2169 |
cosine_precision@3 | 0.1406 |
cosine_precision@5 | 0.1036 |
cosine_precision@10 | 0.0584 |
cosine_recall@1 | 0.2169 |
cosine_recall@3 | 0.4217 |
cosine_recall@5 | 0.5181 |
cosine_recall@10 | 0.5843 |
cosine_ndcg@10 | 0.3964 |
cosine_mrr@10 | 0.3365 |
cosine_map@100 | 0.3466 |
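The four tables above come from sentence-transformers' InformationRetrievalEvaluator, run once per Matryoshka dimension. A minimal sketch of how such an evaluator is assembled; the query/corpus/relevance dictionaries below are illustrative placeholders, not the actual evaluation data:

```python
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Illustrative inputs (assumed): ids mapped to texts and to relevant doc ids
queries = {"q1": "What is the expiration time for the GCP OAuth2 token?"}
corpus = {"d1": "Note the temporary nature of the Service Connector. It expires in 1 hour."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dim_384",  # label prefixed to the reported metrics
)
results = evaluator(model)  # model: the SentenceTransformer loaded earlier
```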
Training Details
Training Dataset
Unnamed Dataset
- Size: 1,490 training samples
- Columns: positive, anchor, and negative
- Approximate statistics based on the first 1000 samples:
Column | Type | Min | Mean | Max |
---|---|---|---|---|
positive | string | 9 tokens | 21.02 tokens | 64 tokens |
anchor | string | 23 tokens | 375.16 tokens | 512 tokens |
negative | string | 10 tokens | 17.51 tokens | 31 tokens |
- Samples:
  Sample 1
  - positive: What details can you provide about the mlflow_training_pipeline runs listed in the ZenML documentation?
  - anchor (truncated documentation excerpt):
    mlflow_training_pipeline', ┃┃ │ │ │ 'zenml_pipeline_run_uuid': 'a5d4faae-ef70-48f2-9893-6e65d5e51e98', 'zenml_workspace': '10e060b3-2f7e-463d-9ec8-3a211ef4e1f6', 'epochs': '5', 'optimizer': 'Adam', 'lr': '0.005'} ┃
    ┃ tensorflow-mnist-model │ 2 │ Run #2 of the mlflow_training_pipeline. │ {'zenml_version': '0.34.0', 'zenml_run_name': 'mlflow_training_pipeline-2023_03_01-08_09_08_467212', 'zenml_pipeline_name': 'mlflow_training_pipeline', 'zenml_pipeline_run_uuid': '11858dcf-3e47-4b1a-82c5-6fa25ba4e037', 'zenml_workspace': '10e060b3-2f7e-463d-9ec8-3a211ef4e1f6', 'epochs': '5', 'optimizer': 'Adam', 'lr': '0.003'} ┃
    ┃ tensorflow-mnist-model │ 1 │ Run #1 of the mlflow_training_pipeline. │ {'zenml_version': '0.34.0', 'zenml_run_name': 'mlflow_training_pipeline-2023_03_01-08_08_52_398499', 'zenml_pipeline_name': 'mlflow_training_pipeline', 'zenml_pipeline_run_uuid': '29fb22c1-6e0b-4431-9e04-226226506d16', 'zenml_workspace': '10e060b3-2f7e-463d-9ec8-3a211ef4e1f6', 'epochs': '5', 'optimizer': 'Adam', 'lr': '0.001'} ┃
  - negative: Can you explain how to configure the TensorFlow settings for a different project?

  Sample 2
  - positive: How do you register a GCP Service Connector that uses account impersonation to access the zenml-bucket-sl GCS bucket?
  - anchor (truncated documentation excerpt):
    esource-id zenml-bucket-sl
    Example Command Output
    Error: Service connector 'gcp-empty-sa' verification failed: connector authorization failure: failed to fetch GCS bucket zenml-bucket-sl: 403 GET https://storage.googleapis.com/storage/v1/b/zenml-bucket-sl?projection=noAcl&prettyPrint=false: [email protected] does not have storage.buckets.get access to the Google Cloud Storage bucket. Permission 'storage.buckets.get' denied on resource (or it may not exist).
    Next, we'll register a GCP Service Connector that actually uses account impersonation to access the zenml-bucket-sl GCS bucket and verify that it can actually access the bucket:
    zenml service-connector register gcp-impersonate-sa --type gcp --auth-method impersonation --service_account_json=@[email protected] --project_id=zenml-core --target_principal=[email protected] --resource-type gcs-bucket --resource-id gs://zenml-bucket-sl
    Example Command Output
    Expanding argument value service_account_json to contents of file /home/stefan/aspyre/src/zenml/[email protected].
    Successfully registered service connector `gcp-impersonate-sa` with access to the following resources:
    ┏━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━┓
    ┃ RESOURCE TYPE │ RESOURCE NAMES       ┃
    ┠───────────────┼──────────────────────┨
    ┃ 📦 gcs-bucket │ gs://zenml-bucket-sl ┃
    ┗━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━┛
    External Account (GCP Workload Identity)
    Use GCP workload identity federation to authenticate to GCP services using AWS IAM credentials, Azure Active Directory credentials or generic OIDC tokens.
  - negative: What is the process for setting up a ZenML pipeline using AWS IAM credentials?

  Sample 3
  - positive: Can you explain how data validation helps in detecting data drift and model drift in ZenML pipelines?
  - anchor (truncated documentation excerpt):
    of your models at different stages of development. If you have pipelines that regularly ingest new data, you should use data validation to run regular data integrity checks to signal problems before they are propagated downstream.
    In continuous training pipelines, you should use data validation techniques to compare new training data against a data reference and to compare the performance of newly trained models against previous ones.
    When you have pipelines that automate batch inference or if you regularly collect data used as input in online inference, you should use data validation to run data drift analyses and detect training-serving skew, data drift and model drift.
    Data Validator Flavors
    Data Validators are optional stack components provided by integrations. The following table lists the currently available Data Validators and summarizes their features and the data types and model types that they can be used with in ZenML pipelines:
    Deepchecks: data quality, data drift, model drift, model performance; data types tabular: pandas.DataFrame, CV: torch.utils.data.dataloader.DataLoader; model types tabular: sklearn.base.ClassifierMixin, CV: torch.nn.Module; add Deepchecks data and model validation tests to your pipelines (flavor: deepchecks)
    Evidently: data quality, data drift, model drift, model performance; data types tabular: pandas.DataFrame; model types: N/A; use Evidently to generate a variety of data quality and data/model drift reports and visualizations (flavor: evidently)
    Great Expectations: data profiling, data quality; data types tabular: pandas.DataFrame; model types: N/A; perform data testing, documentation and profiling with Great Expectations (flavor: great_expectations)
    Whylogs/WhyLabs: data drift; data types tabular: pandas.DataFrame; model types: N/A; generate data profiles with whylogs and upload them to WhyLabs (flavor: whylogs)
    If you would like to see the available flavors of Data Validator, you can use the command:
    zenml data-validator flavor list
    How to use it
  - negative: What are the best practices for deploying web applications using Docker and Kubernetes?
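Triplets with this column layout can be loaded into a Hugging Face datasets.Dataset before training; a small sketch assuming the rows are already collected in Python lists (the example strings are shortened stand-ins for the samples above):

```python
from datasets import Dataset

# Shortened stand-ins for the (positive, anchor, negative) rows shown above
train_dataset = Dataset.from_dict({
    "positive": ["What details can you provide about the mlflow_training_pipeline runs?"],
    "anchor": ["Run #1 of the mlflow_training_pipeline. {'zenml_version': '0.34.0', ...}"],
    "negative": ["Can you explain how to configure the TensorFlow settings for a different project?"],
})
print(train_dataset.column_names)
# ['positive', 'anchor', 'negative']
```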
- Loss: MatryoshkaLoss with these parameters:
  { "loss": "TripletLoss", "matryoshka_dims": [384, 256, 128, 64], "matryoshka_weights": [1, 1, 1, 1], "n_dims_per_step": -1 }
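This corresponds to wrapping a TripletLoss in a MatryoshkaLoss so that the triplet objective is applied at every truncated embedding size; a sketch of that construction with the parameters above:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, TripletLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m")

# Triplet objective applied at each Matryoshka dimension, equally weighted
loss = MatryoshkaLoss(
    model,
    loss=TripletLoss(model),
    matryoshka_dims=[384, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1],
)
```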
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: epoch
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 16
- learning_rate: 2e-05
- num_train_epochs: 4
- lr_scheduler_type: cosine
- warmup_ratio: 0.1
- bf16: True
- tf32: True
- load_best_model_at_end: True
- optim: adamw_torch_fused
- batch_sampler: no_duplicates

A sketch of these mapped onto SentenceTransformerTrainingArguments follows this list.
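A sketch, not the exact training script: output_dir and save_strategy below are assumptions (save_strategy must match eval_strategy when load_best_model_at_end is True), and exact argument availability depends on the versions listed under Framework Versions:

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="finetuned-snowflake-arctic-embed-m",  # assumed placeholder
    num_train_epochs=4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    eval_strategy="epoch",
    save_strategy="epoch",  # assumed; must match eval_strategy for load_best_model_at_end
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # no duplicate texts within a batch
)
```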
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: epoch
- prediction_loss_only: True
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 16
- eval_accumulation_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 4
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: True
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: True
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: True
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
Training Logs
Epoch | Step | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_384_cosine_map@100 | dim_64_cosine_map@100 |
---|---|---|---|---|---|
0.6667 | 1 | 0.3884 | 0.4332 | 0.4464 | 0.3140 |
2.0 | 3 | 0.4064 | 0.4195 | 0.4431 | 0.3553 |
**2.6667** | **4** | **0.3989** | **0.4034** | **0.4358** | **0.3466** |
- The bold row denotes the saved checkpoint.
Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.1+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
TripletLoss
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}