---
language: ko
license: mit
metrics:
- perplexity
- accuracy
tags:
- korean
- qwen
- fine-tuned
datasets:
- kyujinpy/KOpen-platypus
---

# Qwen 2.5 3B Fine-tuned Model

This model is a fine-tuned version of Qwen 2.5 3B for recipe recommendation.

## Model Description

- Fine-tuned from: Qwen/Qwen2.5-3B
- Fine-tuning task: Instruction tuning
- Training data: kyujinpy/KOpen-platypus + recipe data
- Evaluation results: [Add your evaluation metrics]

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("INo0121/qwen2.5_3b_Finetunning_241012")
tokenizer = AutoTokenizer.from_pretrained("INo0121/qwen2.5_3b_Finetunning_241012")

# Example usage: tokenize a prompt, generate a completion, and decode it
input_text = "Your input text here"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Limitations and Biases

[Describe any known limitations or biases of your model]

## Training Details

- Training framework: Hugging Face Transformers
- Hyperparameters: [List your key hyperparameters]
- Training hardware: [Describe the hardware used]
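
Since the hyperparameters above are not listed, the following is a minimal sketch of what the fine-tuning setup could look like with the Hugging Face `Trainer` API. The hyperparameter values, the prompt formatting, and the dataset field names (assumed from the Open-Platypus schema) are illustrative assumptions, not the actual training configuration; the supplementary recipe data is not public, so only the KOpen-platypus portion is loaded here.

```python
# Hypothetical fine-tuning sketch; all hyperparameter values are assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "Qwen/Qwen2.5-3B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# The instruction dataset named in this card; the recipe data is omitted.
dataset = load_dataset("kyujinpy/KOpen-platypus", split="train")

def tokenize(example):
    # "instruction" / "output" field names assumed from the Open-Platypus schema.
    text = f"{example['instruction']}\n{example['output']}"
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="qwen2.5_3b_finetuned",
    per_device_train_batch_size=2,   # assumed value
    gradient_accumulation_steps=8,   # assumed value
    learning_rate=2e-5,              # assumed value
    num_train_epochs=3,              # assumed value
    bf16=True,
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    # mlm=False gives standard causal-LM labels (inputs shifted by one token).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```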