
Model description

This model is a fine-tuned version of postbot/distilgpt2-emailgen-V2 on the Phishing & Ham Kaggle dataset. It achieves the following results:

  • Train Loss: 0.810
  • Validation Loss: 0.242
  • Epoch: 3
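
For reference, here is a minimal sketch of how a comparable fine-tuning run could be reproduced with the Hugging Face Trainer API. The card does not state the training framework or hyperparameters, so the file name phishing_ham.csv, the "text" column, the batch size, and the learning rate below are illustrative assumptions; only the base checkpoint and the 3 epochs come from the results above.

from datasets import load_dataset
from transformers import (
    GPT2LMHeadModel,
    GPT2Tokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Assumed layout: a local CSV export of the Kaggle dataset with one email per row in a "text" column
dataset = load_dataset("csv", data_files="phishing_ham.csv")["train"].train_test_split(test_size=0.1)

tokenizer = GPT2Tokenizer.from_pretrained("postbot/distilgpt2-emailgen-V2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("postbot/distilgpt2-emailgen-V2")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset["train"].column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="distilgpt2-emailgen-phishing",
        num_train_epochs=3,              # matches the epochs reported above
        per_device_train_batch_size=8,   # assumed, not reported in the card
        learning_rate=5e-5,              # assumed, not reported in the card
    ),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.evaluate()  # reports the validation loss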

Warning

This phishing email generator has been created solely for educational purposes to enhance understanding and awareness of phishing techniques. It is strictly intended for legal and ethical use by researchers, cybersecurity professionals, and individuals interested in studying phishing attacks.

By accessing and utilizing this model, you agree to use it solely for educational and non-harmful purposes. We assume no liability for any misuse or unethical usage of this model.

Example usage:

from transformers import GPT2LMHeadModel, GPT2Tokenizer

# The fine-tuned model reuses the tokenizer of its base model
tokenizer = GPT2Tokenizer.from_pretrained("postbot/distilgpt2-emailgen-V2")
model = GPT2LMHeadModel.from_pretrained("loresiensis/distilgpt2-emailgen-phishing")

# Generate text from a short email opening
input_text = "Dear customer,"
input_ids = tokenizer.encode(input_text, return_tensors="pt")

output = model.generate(
    input_ids,
    max_length=100,
    temperature=0.7,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token; this avoids a generation warning
)

output_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_text)
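
Alternatively, the same model can be wrapped in a text-generation pipeline; the prompt and sampling settings below simply mirror the example above.

from transformers import pipeline

# Convenience wrapper around the same model/tokenizer pair as the example above
generator = pipeline(
    "text-generation",
    model="loresiensis/distilgpt2-emailgen-phishing",
    tokenizer="postbot/distilgpt2-emailgen-V2",
)
result = generator("Dear customer,", max_length=100, temperature=0.7, do_sample=True)
print(result[0]["generated_text"])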
Model details

  • Format: Safetensors
  • Model size: 88.2M params
  • Tensor types: F32, BOOL