[Doc] Add Quick Start and Deployment

#1
by RandomTao - opened
Files changed (1)
  1. README.md +64 -0
README.md CHANGED
@@ -52,6 +52,70 @@ For now, the standalone decoder is open-sourced and fully functional without hav
This model is static, trained on an offline dataset. Future versions may be released to enhance its performance on specialized tasks.
**Quickstart**

The following code snippet uses `apply_chat_template` to show how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "tablegpt/TableGPT2-7B"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Hey, who are you?"
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
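
The quickstart above uses a plain chat prompt. Since TableGPT2 targets tabular tasks, you will usually want the table itself inside the user message. The sketch below simply serializes a small table as CSV text and embeds it in the prompt; this serialization is an illustrative assumption, not the canonical TableGPT2 prompt template, so check the [GitHub repository](https://github.com/tablegpt/tablegpt-agent) for the recommended format.

```python
# Hypothetical sketch: embedding a small table in the user prompt as CSV text.
# The exact prompt template TableGPT2 expects may differ; see the repository
# for the canonical format.
import csv
import io

rows = [
    ["name", "dept", "salary"],   # header row
    ["Alice", "Sales", 70000],
    ["Bob", "Eng", 85000],
]

# Serialize the table to CSV so it can be placed inside a chat message.
buf = io.StringIO()
csv.writer(buf).writerows(rows)
table_text = buf.getvalue()

# This `prompt` would replace the plain-text prompt in the quickstart snippet.
prompt = (
    "Given the following table:\n"
    f"{table_text}\n"
    "Which department has the higher average salary?"
)
```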

**Deployment**

For deployment, we recommend using vLLM.
* **Install vLLM**: You can install vLLM with the following command.
  ```bash
  pip install "vllm>=0.4.3"
  ```
* **Model deployment**: Use vLLM to serve the model. For example, the following command starts an OpenAI-compatible API server:
  ```bash
  python -m vllm.entrypoints.openai.api_server --served-model-name TableGPT2-7B --model path/to/weights
  ```
Then you can access the Chat API:

```bash
curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "TableGPT2-7B",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Hey, who are you?"}
        ]
    }'
```
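
As a client-side alternative to `curl`, the same request can be issued from Python. This is a minimal sketch using only the standard library; the endpoint and model name match the server command above, the `build_chat_request`/`chat` helpers are illustrative names, and the server must be running for `chat` to succeed.

```python
# Hypothetical client-side equivalent of the curl example, using only the
# Python standard library against the OpenAI-compatible endpoint.
import json
import urllib.request

def build_chat_request(prompt, base_url="http://localhost:8000/v1"):
    """Build the same JSON POST request the curl example sends."""
    payload = {
        "model": "TableGPT2-7B",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def chat(prompt):
    # Requires the vLLM server from the previous step to be running.
    with urllib.request.urlopen(build_chat_request(prompt)) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (with the server running):
# answer = chat("Hey, who are you?")
```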

**License**

The TableGPT2-7B license permits both research and commercial use, with further details available in the [GitHub repository](https://github.com/tablegpt/tablegpt-agent).