Sandiago21 committed b89e29d (1 parent: 9e486be): Update README.md

Files changed (1): README.md (+22 -22)
README.md CHANGED
@@ -6,52 +6,52 @@ library_name: transformers
 pipeline_tag: conversational
 ---
 
-Model Card for Model ID
+## Model Card for Model ID
 
 Finetuned decapoda-research/llama-13b-hf on conversations.
 
 
-Model Details
+## Model Details
 
 
-Model Description
+### Model Description
 
 The decapoda-research/llama-13b-hf model was finetuned on conversations and question-answering prompts.
 
-Developed by: [More Information Needed]
-Shared by [optional]: [More Information Needed]
-Model type: Causal LM
-Language(s) (NLP): English, multilingual
-License: Research
-Finetuned from model [optional]: decapoda-research/llama-13b-hf
+**Developed by:** [More Information Needed]
+**Shared by:** [More Information Needed]
+**Model type:** Causal LM
+**Language(s) (NLP):** English, multilingual
+**License:** Research
+**Finetuned from model:** decapoda-research/llama-13b-hf
 
-Model Sources [optional]
+## Model Sources [optional]
 
-Repository: [More Information Needed]
-Paper [optional]: [More Information Needed]
-Demo [optional]: [More Information Needed]
+**Repository:** [More Information Needed]
+**Paper:** [More Information Needed]
+**Demo:** [More Information Needed]
 
-Uses
+## Uses
 
 The model can be used for prompt answering.
 
 
-Direct Use
+### Direct Use
 
 The model can be used for prompt answering.
 
 
-Downstream Use [optional]
+### Downstream Use
 
 Generating text and prompt answering.
 
 
-Recommendations
+## Recommendations
 
 Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
 
 
-How to Get Started with the Model
+## How to Get Started with the Model
 
 Use the code below to get started with the model.
 
@@ -67,19 +67,19 @@ model = LlamaForCausalLM.from_pretrained(MODEL_NAME, load_in_8bit=True, device_m
 model = PeftModel.from_pretrained(model, "Sandiago21/public-ai-model")
 ```
 
-Training Details
+## Training Details
 
 
-Training Data
+### Training Data
 
 The decapoda-research/llama-13b-hf model was finetuned on conversations and question-answering data.
 
 
-Training Procedure
+### Training Procedure
 
 The decapoda-research/llama-13b-hf model was further trained and finetuned on question-answering and prompt data.
 
 
-Model Architecture and Objective
+## Model Architecture and Objective
 
 The model is based on the decapoda-research/llama-13b-hf model, with adapters finetuned on top of the main model on conversations and question-answering data.
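The diff shows only the tail of the getting-started snippet (the rest is elided in the hunk header). A fuller sketch of the same loading pattern, assuming the standard `transformers` and `peft` APIs; the prompt template and generation settings below are illustrative assumptions, not taken from the commit:

```python
# Sketch: load the base LLaMA model in 8-bit, attach the finetuned PEFT
# adapter from the Hub, and answer a single prompt.
# Assumes `transformers`, `peft`, and `bitsandbytes` are installed.

MODEL_NAME = "decapoda-research/llama-13b-hf"  # base model named in the card
ADAPTER_NAME = "Sandiago21/public-ai-model"    # adapter repo named in the card


def build_prompt(question: str) -> str:
    """Hypothetical prompt template; the card does not specify one."""
    return f"### Question:\n{question}\n\n### Answer:\n"


def answer(question: str, max_new_tokens: int = 128) -> str:
    # Imports are deferred so the sketch can be inspected without the heavy
    # model dependencies installed.
    from transformers import LlamaForCausalLM, LlamaTokenizer
    from peft import PeftModel

    tokenizer = LlamaTokenizer.from_pretrained(MODEL_NAME)
    model = LlamaForCausalLM.from_pretrained(
        MODEL_NAME, load_in_8bit=True, device_map="auto"
    )
    # The 13B base weights stay frozen; only the small adapter weights are
    # downloaded and layered on top of them.
    model = PeftModel.from_pretrained(model, ADAPTER_NAME)

    inputs = tokenizer(build_prompt(question), return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

For example, `answer("What is the capital of Greece?")` loads both checkpoints and returns the decoded completion; in practice the model should be loaded once and reused across prompts rather than reloaded per call.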
 
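The architecture note ("adapters finetuned on top of the main model") can be made concrete with a dependency-free toy. This assumes LoRA-style low-rank adapters, the most common setup with `peft`; the card itself says only "adapters":

```python
# Toy illustration of low-rank adapter finetuning: the frozen base weight W is
# left untouched, and a small trained update B @ A (rank r) is added to its
# output. Pure Python, no deps; real adapters live inside each transformer layer.

def matmul(m, v):
    """Multiply matrix m (list of rows) by vector v."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in m]

def adapter_forward(W, A, B, x, alpha=1.0):
    """y = W x + alpha * B (A x): frozen base plus low-rank adapter update."""
    base = matmul(W, x)
    update = matmul(B, matmul(A, x))
    return [b + alpha * u for b, u in zip(base, update)]

# With a 2x2 base weight and a rank-1 adapter, only A (1x2) and B (2x1) are
# trained; for a 13B-parameter base model the trained fraction is a tiny
# sliver of the total, which is why the adapter repo stays small.
W = [[1.0, 0.0], [0.0, 1.0]]  # frozen base weights (identity, illustrative)
A = [[1.0, 1.0]]              # trained down-projection, rank 1
B = [[0.5], [0.5]]            # trained up-projection
```

Setting `alpha=0.0` recovers the base model's output exactly, which mirrors how a PEFT adapter can be detached without touching the underlying checkpoint.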