---
license: apache-2.0
language:
- ko
pipeline_tag: text-generation
tags:
- Mistral
---

### Base Model
- [alpindale/Mistral-7B-v0.2-hf](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf)

### Model Generation
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "AIdenU/Mistral-7B-v0.2-ko-Y24_v2.0",
    device_map="auto",
    torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained("AIdenU/Mistral-7B-v0.2-ko-Y24_v2.0", use_fast=True)

# Korean chat prompt:
# system: "You are an AI assistant that follows instructions very well."
# user: "Does even a worm wriggle if you step on it?"
prompt = [
    {'role': 'system', 'content': '당신은 지시를 매우 잘 따르는 인공지능 비서입니다.'},
    {'role': 'user', 'content': '지렁이도 밟으면 꿈틀하나요?'}
]

outputs = model.generate(
    **tokenizer(
        tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True),
        return_tensors='pt'
    ).to('cuda'),
    max_new_tokens=256,
    temperature=0.2,
    top_p=1,
    do_sample=True
)
print(tokenizer.decode(outputs[0]))
```
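`tokenizer.decode(outputs[0])` prints the prompt tokens along with the completion. A minimal sketch of decoding only the model's reply (a generic transformers pattern, not specific to this model; it reuses the `model`, `tokenizer`, and `prompt` objects from the example above):
```
# Keep a handle on the encoded inputs so the prompt length is known.
inputs = tokenizer(
    tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True),
    return_tensors='pt'
).to('cuda')
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    temperature=0.2,
    top_p=1,
    do_sample=True
)

# Slice off the prompt tokens and drop special tokens before decoding.
response = tokenizer.decode(outputs[0][inputs['input_ids'].shape[-1]:], skip_special_tokens=True)
print(response)
```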