kaixkhazaki committed
Commit 4cc88aa · verified · 1 Parent(s): 5e828dd

Update README.md

Files changed (1): README.md (+26 -1)

README.md CHANGED
@@ -12,6 +12,7 @@ metrics:
 model-index:
 - name: turkish-zeroshot-distilbert
   results: []
+pipeline_tag: zero-shot-classification
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -27,6 +28,30 @@ It achieves the following results on the evaluation set:
 - Precision: 0.7290
 - Recall: 0.7201
 
+## Usage
+
+```python
+# Use a pipeline as a high-level helper
+from transformers import pipeline
+
+pipe = pipeline("zero-shot-classification", model="kaixkhazaki/turkish-zeroshot")
+
+# Enter your text and the candidate labels to classify it against
+sequence = "Bu laptopun pil ömrü ne kadar dayanıyor?"  # "How long does this laptop's battery last?"
+candidate_labels = ["ürün özellikleri", "soru", "bilgi talebi", "laptop", "teknik destek"]
+# ("product features", "question", "information request", "laptop", "technical support")
+
+pipe(sequence, candidate_labels)
+# {'sequence': 'Bu laptopun pil ömrü ne kadar dayanıyor?',
+#  'labels': ['ürün özellikleri', 'laptop', 'soru', 'teknik destek', 'bilgi talebi'],
+#  'scores': [0.4050311744213104, 0.1970272809267044, 0.1365433931350708, 0.13210774958133698, 0.1292904019355774]}
+```
+
 ## Model description
 
 More information needed
@@ -185,4 +210,4 @@ The following hyperparameters were used during training:
 - Transformers 4.48.0.dev0
 - Pytorch 2.4.1+cu121
 - Datasets 3.1.0
-- Tokenizers 0.21.0
+- Tokenizers 0.21.0
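
Under the hood, the `zero-shot-classification` pipeline poses each candidate label as an NLI hypothesis against the input sequence and, in its default single-label mode, softmax-normalizes the per-label entailment logits, which is why the five scores in the README's example sum to 1. A minimal sketch of that final scoring step, with made-up logits standing in for real model forward passes:

```python
import math

def zero_shot_scores(entailment_logits):
    # Hypothetical helper: in the real pipeline, each logit comes from an
    # NLI forward pass on (sequence, hypothesis built from one label).
    # Single-label mode: softmax across labels so the scores sum to 1.
    m = max(entailment_logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in entailment_logits]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up entailment logits for five candidate labels
logits = [1.2, 0.5, 0.1, 0.07, 0.05]
scores = zero_shot_scores(logits)
```

With `multi_label=True` the pipeline instead normalizes entailment against contradiction for each label independently, so the scores no longer sum to 1.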