---
license: other
language:
- en
library_name: transformers
pipeline_tag: conversational
---

## Model Card for Sandiago21/public-ai-model

Finetuned decapoda-research/llama-13b-hf on conversation and question-answering data.


## Model Details


### Model Description

The decapoda-research/llama-13b-hf model was finetuned on conversation and question-answering prompts.

- **Developed by:** [More Information Needed]
- **Shared by:** [More Information Needed]
- **Model type:** Causal language model
- **Language(s) (NLP):** English
- **License:** Research (inherited from the base LLaMA license)
- **Finetuned from model:** decapoda-research/llama-13b-hf

## Model Sources

- **Repository:** [More Information Needed]
- **Paper:** [More Information Needed]
- **Demo:** [More Information Needed]

## Uses

The model can be used for answering conversational and question-answering prompts.


### Direct Use

The model can be used directly for prompt answering, without further finetuning.


### Downstream Use

Text generation and prompt answering within downstream conversational applications.


## Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.


## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import LlamaTokenizer, LlamaForCausalLM
from peft import PeftModel

MODEL_NAME = "decapoda-research/llama-13b-hf"

# Tokenizer for the base model; an EOS token is appended to encoded inputs
tokenizer = LlamaTokenizer.from_pretrained(MODEL_NAME, add_eos_token=True)
tokenizer.pad_token_id = 0  # pad with token id 0 (<unk>)

# Load the base model in 8-bit (requires bitsandbytes) and spread it
# across available devices (requires accelerate)
model = LlamaForCausalLM.from_pretrained(MODEL_NAME, load_in_8bit=True, device_map="auto")

# Apply the finetuned PEFT adapters on top of the base model
model = PeftModel.from_pretrained(model, "Sandiago21/public-ai-model")
```
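A minimal generation example might then look like the sketch below; the prompt and decoding settings are illustrative assumptions, since the card does not document the prompt template used during finetuning:

```python
import torch

prompt = "What is the capital of France?"  # hypothetical example prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Greedy decoding with a cap on new tokens; sampling settings are a choice,
# not something this card prescribes
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```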

## Training Details


### Training Data

The decapoda-research/llama-13b-hf model was finetuned on conversation and question-answering data.


### Training Procedure

The decapoda-research/llama-13b-hf model was further trained by finetuning PEFT adapters on question-answering and prompt data.
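The exact training configuration is not documented here. As a rough illustration, a PEFT adapter setup for this kind of finetuning might look like the following sketch; the LoRA hyperparameters and target modules are assumptions, not the values actually used:

```python
from transformers import LlamaForCausalLM
from peft import LoraConfig, get_peft_model

# Load the base model in full precision (quantized finetuning needs extra preparation)
base = LlamaForCausalLM.from_pretrained("decapoda-research/llama-13b-hf")

# Hypothetical LoRA configuration; the actual adapter settings are undocumented
lora_config = LoraConfig(
    r=8,                                  # assumed adapter rank
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
# ... train with a standard causal-LM objective on conversation / QA data ...
```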


## Model Architecture and Objective

The model is based on decapoda-research/llama-13b-hf, with PEFT adapters finetuned on top of the base model on conversation and question-answering data.
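
Since the adapters sit on top of the base weights, they can optionally be merged for adapter-free inference with PEFT's merge_and_unload. This is a standard PEFT operation (it assumes the base model was loaded in full precision), not a step the card itself prescribes:

```python
# Fold the adapter weights into the base model so it can be served
# as a plain LlamaForCausalLM without the peft dependency
merged_model = model.merge_and_unload()
merged_model.save_pretrained("./llama-13b-finetuned-merged")  # hypothetical path
```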