---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---

# CT-LLM-Base

[**🌐 Homepage**](https://chinese-tiny-llm.github.io) | [**🤗 MAP-CC**](https://huggingface.co/datasets/m-a-p/MAP-CC) | [**🤗 CHC-Bench**](https://huggingface.co/datasets/m-a-p/CHC-Bench) | [**🤗 CT-LLM**](https://huggingface.co/collections/m-a-p/chinese-tiny-llm-660d0133dff6856f94ce0fc6) | [**📖 arXiv**]() | [**GitHub**](https://github.com/Chinese-Tiny-LLM/Chinese-Tiny-LLM)

CT-LLM-Base is the first Chinese-centric large language model, both pre-trained and fine-tuned primarily on Chinese corpora. It offers significant insights into potential biases, Chinese language ability, and multilingual adaptability.

## Uses

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = ''  # path to the model: a local directory or a Hub repo id

# Load the tokenizer and model, letting transformers choose the device placement and dtype
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype='auto'
).eval()

# Tokenize a Chinese prompt ("A long, long time ago,") and move it to the model's device
input_text = "很久很久以前，"
inputs = tokenizer(input_text, return_tensors='pt').to(model.device)

# Generate up to 20 new tokens and decode the full sequence back to text
output_ids = model.generate(**inputs, max_new_tokens=20)
response = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(response)
```
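
Greedy decoding can become repetitive for longer open-ended continuations. As a variation on the snippet above, sampling can be enabled through `generate`'s standard arguments. A minimal sketch, reusing the `model`, `tokenizer`, and `inputs` objects from above; the `temperature` and `top_p` values are illustrative assumptions, not officially recommended settings:

```python
# Sampling-based continuation; temperature/top_p values are illustrative assumptions,
# not settings recommended by the CT-LLM authors.
sample_ids = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,    # sample instead of greedy decoding
    temperature=0.8,   # softens the next-token distribution (assumed value)
    top_p=0.9,         # nucleus sampling cutoff (assumed value)
)
print(tokenizer.decode(sample_ids[0], skip_special_tokens=True))
```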