
This Falcon-7B model has been fine-tuned to generate high-quality Midjourney prompts. It is a PEFT adapter model, shipping the configuration needed to load the base model and produce fine-tuned results for our custom dataset. Follow the steps below to use it for Midjourney prompt generation.
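The loading code below references a `repository_name` and a `bnb_config` that are not defined in this card. Here is a minimal setup sketch, assuming a placeholder for this adapter's Hub repo id and the 4-bit NF4 quantization commonly paired with QLoRA adapters (all specific values here are assumptions, not pinned by this card):

```python
import torch
from transformers import BitsAndBytesConfig

# Placeholder: replace with this adapter's Hugging Face Hub repo id (assumption)
repository_name = "your-username/falcon-7b-midjourney-adapter"

# Example 4-bit NF4 quantization config, typical for QLoRA adapters (assumed values)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
```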

```python
# Load the base Falcon-7B model with the adapter's PEFT configuration
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

config = PeftConfig.from_pretrained(repository_name)
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    return_dict=True,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)

tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
tokenizer.pad_token = tokenizer.eos_token

# Attach the fine-tuned adapter weights on top of the base model
model = PeftModel.from_pretrained(model, repository_name)

# Configure generation; do_sample is needed for temperature/top_p to take effect
generation_config = model.generation_config
generation_config.max_new_tokens = 200
generation_config.temperature = 0.7
generation_config.top_p = 0.7
generation_config.do_sample = True
generation_config.num_return_sequences = 1
generation_config.pad_token_id = tokenizer.eos_token_id
generation_config.eos_token_id = tokenizer.eos_token_id
```

```python
# Run inference and decode the output
device = "cuda:0"

prompt = """
<human>: midjourney prompt for a boy running in the snow
<assistant>:
""".strip()

encoding = tokenizer(prompt, return_tensors="pt").to(device)
with torch.inference_mode():
    outputs = model.generate(
        input_ids=encoding.input_ids,
        attention_mask=encoding.attention_mask,
        generation_config=generation_config,
    )

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
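Note that `decode` returns the prompt together with the completion. To keep only the generated Midjourney prompt, you can split on the `<assistant>:` tag from the template above; a small hypothetical helper:

```python
def extract_response(decoded: str) -> str:
    """Hypothetical helper: return only the text after the <assistant>: tag."""
    marker = "<assistant>:"
    return decoded.split(marker, 1)[-1].strip() if marker in decoded else decoded.strip()

print(extract_response(tokenizer.decode(outputs[0], skip_special_tokens=True)))
```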

Feel free to experiment with your own dataset for other creative use cases; a sketch of the expected example format follows.
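If you fine-tune on your own data, each training example should follow the same `<human>`/`<assistant>` template used in the prompt above; a minimal formatting sketch (the function name and example strings are hypothetical):

```python
def format_example(question: str, answer: str) -> str:
    # Hypothetical formatter mirroring the <human>/<assistant> template above
    return f"<human>: {question}\n<assistant>: {answer}"

print(format_example(
    "midjourney prompt for a castle in the clouds",
    "A majestic castle floating among the clouds, golden hour lighting, ultra detailed",
))
```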
