PrinceAyush committed · Commit b73a5c2 · 1 Parent(s): 9005831

Update README.md

Files changed (1): README.md (+22, -0)

README.md CHANGED
@@ -19,6 +19,28 @@ Optimization Techniques: During training, various optimization techniques were e
Evaluation and Iteration: Throughout the training process, periodic evaluations were conducted to assess the model's performance. Metrics such as accuracy, precision, and recall were used to gauge the model's understanding and action-generation capabilities, and the results guided further iterations and adjustments to the model.
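
The README does not show how these metrics were computed. Purely as an illustration, a minimal sketch could use scikit-learn's metric helpers on label-level predictions; the library choice and the `y_true`/`y_pred` values below are assumptions, not part of the original training setup:

```python
# Illustrative sketch only: the README does not specify how accuracy,
# precision, and recall were computed. Assumes scikit-learn and
# hypothetical label-level predictions.
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1]  # hypothetical reference labels
y_pred = [1, 0, 0, 1, 0, 1]  # hypothetical model predictions

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
```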
 
By following this training procedure, the model was successfully fine-tuned from the base LLaMA 7B model to understand human instructions and act on them. Training for roughly 2 hours on a single 40 GB A100 GPU kept the process efficient while balancing computational cost against model performance.

## How to Run

```python
from peft import PeftModel
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig

# Load the base LLaMA 7B tokenizer and model
# (replace "llama-model-7b" with the location of the base weights).
tokenizer = LlamaTokenizer.from_pretrained("llama-model-7b")
model = LlamaForCausalLM.from_pretrained("llama-model-7b", device_map="auto")

# Apply the fine-tuned adapter weights on top of the base model.
model = PeftModel.from_pretrained(model, "PrinceAyush/Bharat_GPT")

prompt = "Write a poem on sweet carrot"
text = """Below is an instruction that describes a task. Write a response that appropriately completes the request.\n### Instruction:\n{}.\n### Response:""".format(prompt)
inputs = tokenizer(text, return_tensors="pt")
input_ids = inputs["input_ids"].cuda()

generation_config = GenerationConfig(temperature=0.6, top_p=0.95, repetition_penalty=1.15)
print("Generating...")
generation_output = model.generate(
    input_ids=input_ids,
    generation_config=generation_config,
    return_dict_in_generate=True,
    output_scores=True,
    max_new_tokens=128,
)
for s in generation_output.sequences:
    print(tokenizer.decode(s))
```
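
The generation settings above (temperature 0.6, top_p 0.95, repetition_penalty 1.15) favor focused, low-repetition responses: lower temperature and nucleus sampling keep the output on topic, while the repetition penalty discourages the model from looping. Raise the temperature for more varied output, and increase max_new_tokens if responses are cut off.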

Note:

### Framework versions