kumo24 committed (verified)
Commit f61ed36 · 1 Parent(s): 2445521

Update README.md

Files changed (1): README.md (+6, -8)
README.md CHANGED
@@ -7,10 +7,11 @@ metrics:
  library_name: transformers
  pipeline_tag: text-classification
  ---
- This MistralAI was fined-tuned on nuclear energy data from twitter/X. The classification accuracy obtained is 94%. \
- The number of labels is 3: {0: Negative, 1: Neutral, 2: Positive}
+ This MistralAI 7B was fined-tuned on nuclear energy data from twitter/X. The classification accuracy obtained is 94%. \
+ The number of labels is 3: {0: Negative, 1: Neutral, 2: Positive} \
+ Warning: You need sufficient GPU to run this model.
 
- This is an example to use it
+ This is an example to use it, it worked on 8 GB Nvidia-RTX 4060
  ```bash
  from transformers import AutoTokenizer
  from transformers import pipeline
@@ -29,11 +30,8 @@ if tokenizer.pad_token is None:
  model = AutoModelForSequenceClassification.from_pretrained(checkpoint,
  num_labels=3,
  id2label=id2label,
- label2id=label2id)
-
- device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- model.to(device)
-
 
  sentiment_task = pipeline("sentiment-analysis",
  model=model,
+ label2id=label2id,
+ device_map='auto')
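The hunks above only show fragments of the README example, so here is a minimal, self-contained sketch of what the updated code amounts to. It is an assumption-laden reading aid, not the model card's exact script: the repo id stored in `checkpoint` is not visible in the diff and is left as a placeholder, the label mapping is taken from the prose ({0: Negative, 1: Neutral, 2: Positive}), and the pad-token line on the model config is an extra safeguard not shown in the diff. `device_map='auto'` requires the `accelerate` package.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

# Placeholder: the diff does not show the actual repo id assigned to `checkpoint`.
checkpoint = "kumo24/<model-repo-id>"

# Label mapping taken from the README prose: {0: Negative, 1: Neutral, 2: Positive}.
id2label = {0: "negative", 1: "neutral", 2: "positive"}
label2id = {label: idx for idx, label in id2label.items()}

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
if tokenizer.pad_token is None:
    # Mistral ships without a pad token; reuse EOS so padded batches work.
    tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint,
    num_labels=3,
    id2label=id2label,
    label2id=label2id,
    device_map="auto",  # needs `accelerate`; replaces the old torch.device(...) / model.to(device) lines
)
# Not shown in the diff: keep the model config consistent with the tokenizer's pad token.
model.config.pad_token_id = tokenizer.pad_token_id

sentiment_task = pipeline(
    "sentiment-analysis",
    model=model,
    tokenizer=tokenizer,
)

print(sentiment_task("Nuclear power is a dependable low-carbon energy source."))
```

With `device_map='auto'`, Accelerate places the weights on whatever GPU(s) are available and falls back to CPU, which is presumably why the commit drops the explicit `torch.device(...)` and `model.to(device)` lines.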