4th inference in a row does not work for Falcon-7B in 8-bit or 4-bit
I built a tool that generates inferences in a loop with Falcon 7B in 16-bit, 8-bit, or 4-bit. Oddly, when I use 8-bit or 4-bit, Falcon generates the first three inferences quickly, then blocks and never returns the fourth. The problem persists even when I change the set of instructions I use to generate the inferences.
Here is the code I use to load the model and generate inferences:
import torch
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

def load_falcon(weight_encoding):
    model_path = "/falcon-7b-instruct"
    eight_bit = weight_encoding == "8-bit"
    four_bit = weight_encoding == "4-bit"
    # create the model, quantized according to the requested precision
    fmodel = AutoModelForCausalLM.from_pretrained(
        model_path,
        load_in_8bit=eight_bit,
        load_in_4bit=four_bit,
        trust_remote_code=True,
        torch_dtype=torch.float16,
        device_map="auto")
    fmodel.eval()
    # fmodel.to(device)  # not needed: device_map="auto" already places the weights
    tokenizer = AutoTokenizer.from_pretrained(model_path)
    gen_text = transformers.pipeline(
        model=fmodel,
        tokenizer=tokenizer,
        task='text-generation',
        return_full_text=False,
        max_length=5000,
        temperature=0.1,
        top_p=0.75,  # sample from the top tokens whose cumulative probability reaches 75%
        top_k=40,  # sample from the 40 most likely tokens
        repetition_penalty=1.9,  # without a penalty, output starts to repeat
        do_sample=True,
        num_return_sequences=1,
        eos_token_id=tokenizer.eos_token_id
    )
    return gen_text, tokenizer

def falcon_inference(instruction, gen_text):
    response = gen_text(instruction)
    texts = [seq['generated_text'] for seq in response]
    text = '\n'.join(texts)
    return text
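For reference, the loop that triggers the hang is essentially the following (the instruction list here is only a placeholder, not my real prompts):

# Minimal sketch of the generation loop; the 4th call never returns in 8-bit or 4-bit
gen_text, tokenizer = load_falcon("8-bit")
instructions = ["instruction 1", "instruction 2", "instruction 3", "instruction 4"]  # placeholder prompts
for i, instruction in enumerate(instructions, start=1):
    print(f"--- inference {i} ---")
    print(falcon_inference(instruction, gen_text))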
Configuration:
CUDA Version: 12.1
NVIDIA GeForce RTX 4090 (24 GB VRAM)
I have the latest versions of bitsandbytes (0.39.0), transformers (4.31.0.dev0) and accelerate (0.21.0.dev0).
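To double-check the environment, I print the versions and the detected GPU with a small snippet like this (not part of the tool, just a sanity check):

import torch, transformers, accelerate, bitsandbytes
print("transformers", transformers.__version__)
print("accelerate", accelerate.__version__)
print("bitsandbytes", bitsandbytes.__version__)
print("torch", torch.__version__, "CUDA", torch.version.cuda)
print(torch.cuda.get_device_name(0),
      torch.cuda.get_device_properties(0).total_memory // 2**30, "GB VRAM")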
Thank you for your help!
I am also having a similar issue where the 8-bit Falcon-7b-instruct model will not generate anything. Even the 'poem about Valencia' example from their article (https://huggingface.co/blog/falcon#inference) does not work. There is no error code, it just returns the prompt with no generated text. The regular Falcon-7b-instruct model works fine for me but is very slow.
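For reference, what I ran is roughly the blog example with the quantization flag added (the model id and prompt come from the post; load_in_8bit is my addition):

from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_id = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    load_in_8bit=True,   # my addition; without it the model generates, just very slowly
    device_map="auto",
)
generate = pipeline("text-generation", model=model, tokenizer=tokenizer)
out = generate("Write a poem about Valencia.", max_new_tokens=200, do_sample=True, top_k=10)
print(out[0]["generated_text"])  # in 8-bit this comes back as just the prompt, nothing generated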
My issue was with the 8-bit model loaded from locally saved files; when I load it from the Hub, it works fine. There must be some issue with how Hugging Face saves the 8-bit model. Roughly the following, with an illustrative local path (see the sketch below):
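from transformers import AutoModelForCausalLM

# Loading the 8-bit model straight from the Hub generates text as expected:
model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b-instruct",
    trust_remote_code=True,
    load_in_8bit=True,
    device_map="auto",
)

# Loading the same 8-bit setup from my locally saved copy only echoes the prompt back:
model = AutoModelForCausalLM.from_pretrained(
    "/path/to/local/falcon-7b-instruct",   # illustrative path to the locally saved files
    trust_remote_code=True,
    load_in_8bit=True,
    device_map="auto",
)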