---
license: apache-2.0
base_model: hon9kon9ize/CantoneseLLMChat-v0.5
tags:
- llama-factory
- full
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: open-lilm
  results: []
---

# open-lilm

Warning: Due to the nature of the training data, this model is highly likely to return violent, racist and discriminatory content. DO NOT USE IN A PRODUCTION ENVIRONMENT.

Inspired by [another project](https://github.com/alphrc/lilm), this is a finetuned model based on [CantoneseLLMChat-v0.5](https://huggingface.co/hon9kon9ize/CantoneseLLMChat-v0.5) that everybody can use without needing a Mac with 128GB of RAM.

Following the same principle, we filtered 377,595 post-and-reply pairs from the LIHKG forum out of the [LIHKG Dataset](https://huggingface.co/datasets/AlienKevin/LIHKG), keeping a pair only if:

- The reply is a direct reply to the original post, made by a user other than the post's author
- The total number of reactions (positive or negative) is greater than 20
- The post-and-reply pair is shorter than 2048 words

To avoid political complications, the dataset will not be made publicly available.

## Intended uses & limitations

Due to the nature of an anonymous online forum, the training data and the model are full of rude, violent, racist and discriminatory language. This model is only intended for research or entertainment purposes.

Comments on LIHKG also tend to be very short, so the model rarely generates more than a single line.

## How to use it?

You can run it on [Colab](https://colab.research.google.com/drive/1FgdwkkPcLzn_x1ohgzJCA1xZ4MTesC_8?usp=sharing) or anywhere else with the following code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

# allow TF32 in cuDNN kernels for speed
torch.backends.cudnn.allow_tf32 = True

model_name = "0xtaipoian/open-lilm"

# 4-bit NF4 quantization so the model fits on a smaller GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
    quantization_config=bnb_config,
    revision="main",  # "main" (full-parameter finetune, 1 epoch) or "qlora-merged" (QLoRA finetune, 3 epochs)
)


def chat(messages, temperature=0.9, max_new_tokens=200):
    input_ids = tokenizer.apply_chat_template(
        conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
    ).to("cuda:0")
    output_ids = model.generate(
        input_ids, max_new_tokens=max_new_tokens, temperature=temperature, do_sample=True
    )
    # print the fully rendered prompt for inspection
    chatml = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
    print(chatml)
    # decode only the newly generated tokens
    response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=False)
    return response


# Example post (in Cantonese): asks whether repeatedly switching the detector
# on and off in a double-slit experiment, with the screen ~190,000 light-years
# away, could exploit quantum entanglement to send 0/1 messages faster than light.
messages = [{"role": "user", "content": """
已知量子糾纏係超光速
如果用雙縫實驗
假設個screen係19幾萬光年遠
探測機係地球
係地球一開著探測機 , 19幾萬遠既screen就會即時sense到
會由wave 變粒子
重覆開機關機
係咪可以超光速咁傳0101訊息?
"""}]

# System prompt (in Cantonese): "You are a LIHKG user (a member of the popular
# Hong Kong forum LIHKG.com). Respond to the user's post the way LIHKG users do:
# with heavy profanity, internet slang, LIHKG emotes and so on, in a mostly very
# rude, sarcastic, ad hominem and malicious tone."
result = chat(
    [{"role": "system", "content": '你係一個連登仔(香港熱門論壇LIHKG.com嘅成員),你需要就用戶所輸入嘅文章作回應,連登仔會用大量粗口,網絡用語,連登表情等等,嚟回應用戶,而且回應大多非常口臭,尖酸刻薄,人身攻擊同惡意留言'}]
    + messages,
    max_new_tokens=500,
    temperature=0.95,
)
print(result)
```
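As noted in the comment on `revision`, two checkpoints are published: `main` (the 1-epoch full-parameter finetune) and `qlora-merged` (the 3-epoch QLoRA run, presumably with the adapter merged into the base weights, given the name). A minimal variation of the loading call above to pick the QLoRA checkpoint instead, reusing `bnb_config` and the imports from the previous block:

```python
# Same loading call as above, pinned to the "qlora-merged" Hub revision.
qlora_model = AutoModelForCausalLM.from_pretrained(
    "0xtaipoian/open-lilm",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
    quantization_config=bnb_config,  # same 4-bit NF4 config as above
    revision="qlora-merged",         # 3-epoch QLoRA finetune
)
```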
### Training Procedures

The model was trained for ~15 hours on a single NVIDIA H100 96GB HBM2e GPU with [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory). We used only one GPU because this was our first run on our brand-new H100 server and we are still testing different configurations. This run is published as the `main` revision.

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- num_epochs: 1.0

### QLoRA Training

To try out a different configuration, we trained another model using QLoRA for ~30 hours on a single NVIDIA H100 96GB HBM2e GPU with [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory). This run is published as the `qlora-merged` revision.

The following hyperparameters were used during training:

- learning_rate: 1e-04
- train_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- num_epochs: 3.0
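For reference, the `total_train_batch_size` values above follow from the other numbers: per-device batch size times gradient accumulation steps times the number of GPUs (one H100 in both runs). A quick sanity check:

```python
# total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_gpus
runs = {
    "full finetune": dict(train_batch_size=4, grad_accum_steps=16, num_gpus=1),
    "qlora": dict(train_batch_size=32, grad_accum_steps=4, num_gpus=1),
}
for name, cfg in runs.items():
    total = cfg["train_batch_size"] * cfg["grad_accum_steps"] * cfg["num_gpus"]
    print(f"{name}: total_train_batch_size = {total}")
# Prints 64 for the full finetune and 128 for QLoRA, matching the values listed above.
```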