kfkas committed
Commit 76ee57c • 1 Parent(s): a0b93e2

Upload README.md

Files changed (1):
1. README.md +60 -0
README.md ADDED
---
language:
- ko
tags:
- generated_from_keras_callback
model-index:
- name: RoBERTa-large-Detection-P2G
  results: []
---

# RoBERTa-large-Detection-P2G

This model detects G2P-converted (grapheme-to-phoneme) Korean text. It was trained by fine-tuning klue/roberta-large on 50,000 sentences from the National Institute of Korean Language (국립국어원) Newspaper Corpus 2021 that were converted with g2pK.
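
The kind of training pair this implies can be sketched with the g2pK package itself. A minimal example, assuming g2pK's published `G2p` interface; this is an illustration, not this repository's actual training script:

```python
# pip install g2pk
from g2pk import G2p

g2p = G2p()

# An ordinary Korean sentence...
original = "어제는 날씨가 맑았는데, 오늘은 흐리다."

# ...and its grapheme-to-phoneme (phonetic) spelling. Sentences like the
# converted form are what this model is trained to flag.
converted = g2p(original)
print(converted)  # e.g. 어제는 날씨가 말간는데, 오느른 흐리다.
```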

## Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import numpy as np

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_dir = "kfkas/RoBERTa-large-Detection-P2G"

tokenizer = AutoTokenizer.from_pretrained(model_dir)
# The checkpoint is a classifier (original vs. G2P-converted text), so it
# is loaded with a sequence-classification head, not a seq2seq head.
model = AutoModelForSequenceClassification.from_pretrained(model_dir)
model.to(device)  # keep model and inputs on the same device
model.eval()

# A G2P-converted (phonetically spelled) Korean sentence
text = "월드커 파나은행 대표티메 행우늬 이달러 이의의장 선물"
with torch.no_grad():
    x = tokenizer(text, padding="max_length", truncation=True,
                  return_tensors="pt", max_length=128)
    # pass the attention mask too, so padding tokens are ignored
    outputs = model(x["input_ids"].to(device),
                    attention_mask=x["attention_mask"].to(device))
    logits = outputs.logits.detach().cpu().numpy()
    y = np.argmax(logits)
print(y)
# 1 (detected as G2P-converted)
```
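
The same check can also go through the `pipeline` API. A short sketch, assuming the checkpoint exposes the sequence-classification head used above; the printed label and score are illustrative:

```python
from transformers import pipeline

# Text-classification pipeline over the same checkpoint; label names
# follow the model config (LABEL_0 / LABEL_1 unless id2label is set).
detector = pipeline("text-classification", model="kfkas/RoBERTa-large-Detection-P2G")
print(detector("월드커 파나은행 대표티메 행우늬 이달러 이의의장 선물"))
# e.g. [{'label': 'LABEL_1', 'score': 0.99}]
```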

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: None
- training_precision: float16

### Training results

More information needed

### Framework versions

- Transformers 4.22.1
- TensorFlow 2.10.0
- Datasets 2.5.1
- Tokenizers 0.12.1