Hoa 7B (BLOOM architecture)

Hoa is an autoregressive large language model (LLM) based on the BLOOM architecture. It was trained on part of the Common Crawl dataset in Vietnamese and English.
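
Since the checkpoint follows the BLOOM architecture, its Hugging Face config should identify it as such. Below is a minimal sketch to check this from the configuration alone; the expected "bloom" model_type is an assumption based on the description above, not something stated by this card.

from transformers import AutoConfig

# Download only the configuration and check which architecture family it declares.
config = AutoConfig.from_pretrained("vlsp-2023-vllm/hoa-7b")
print(config.model_type)  # assumed to be "bloom" for a BLOOM-based checkpoint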

Details will be available soon.

To contact us, email: [email protected] (Lê Anh Cường) | [email protected] (Hiếu) | [email protected] (Nguyễn Việt Cường)

How to use

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("vlsp-2023-vllm/hoa-7b")
model = AutoModelForCausalLM.from_pretrained("vlsp-2023-vllm/hoa-7b", low_cpu_mem_usage=True)

# Run on GPU if one is available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

# Vietnamese prompt: "The address of Ton Duc Thang University is at number"
prompt = "Địa chỉ trường Đại học Tôn Đức Thắng nằm ở số"
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(device)

# Generate a continuation; max_length caps the total sequence length
# (prompt plus generated tokens) and is set here to an illustrative value.
max_length = 100
gen_tokens = model.generate(input_ids, max_length=max_length, repetition_penalty=1.1)

print(tokenizer.batch_decode(gen_tokens)[0])
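
The snippet above loads the weights in full precision, which for a 7B-parameter model may exceed typical GPU memory. A common alternative is loading in half precision; the sketch below uses the standard torch_dtype argument of from_pretrained (not something this card prescribes) and assumes a CUDA device is available.

import torch
from transformers import AutoModelForCausalLM

# Load the checkpoint in float16 to roughly halve GPU memory use.
model = AutoModelForCausalLM.from_pretrained(
    "vlsp-2023-vllm/hoa-7b",
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
).to("cuda")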