---
license: apache-2.0
language:
- ko
pipeline_tag: text-generation
tags:
- SOLAR
- SOLAR-10.7B
---

### Model Generation

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("AIdenU/SOLAR-10b-ko-Y24_v0.1", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("AIdenU/SOLAR-10b-ko-Y24_v0.1", use_fast=True)

messages = [
    # System: "You are an AI assistant that follows instructions very well."
    {'role': 'system', 'content': '당신은 지시를 매우 잘 따르는 인공지능 비서입니다.'},
    # User: "Does even a worm wriggle when stepped on?"
    {'role': 'user', 'content': '지렁이도 밟으면 꿈틀하나요?'}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

outputs = model.generate(
    **tokenizer(prompt, return_tensors='pt').to(model.device),
    max_new_tokens=256,
    temperature=0.2,
    top_p=1,
    do_sample=True
)
print(tokenizer.decode(outputs[0]))
```