
LLaMarketing: A Marketing Large Language Model

LLaMarketing is an 8B-parameter domain-specific large language model (LLM).
It was adapted to the marketing domain from LLaMA-3-8B through continued pretraining on a meticulously curated marketing corpus of more than 43B tokens.
LLaMarketing outperforms LLaMA-2 and LLaMA-3 on marketing-specific tasks. We are releasing this early checkpoint of the model to the AI community.


Model Description

LLaMarketing is a powerful tool that can aid in generating high-quality marketing content and conducting research in the field of marketing.
It's a great resource for anyone looking to stay ahead in the rapidly changing world of marketing.

While the model is designed to encode marketing knowledge, this checkpoint is not yet adapted to deliver knowledge appropriately, safely, or within professional actionable constraints.
We recommend against deploying LLaMarketing in real-world practice settings.

Model Details

  • Developed by: Marketeam
  • Model type: Causal decoder-only transformer language model
  • Model License: LLAMA 3 COMMUNITY LICENSE AGREEMENT
  • Continued pretraining from: LLaMA-3-8B
  • Context length: 3K tokens
  • Input & Output: Text-only
  • Language: English
  • Knowledge Cutoff: December 2023

Uses

LLaMarketing has been developed to support further research on LLMs for marketing applications.
The potential use cases for this model are diverse, ranging from marketing question answering to general marketing information queries and actions (function calls) on marketing platforms.

LLaMarketing is a foundation language model (FLM) without finetuning or instruction-tuning.
We recommend applying SFT or RLHF tuning for specific downstream tasks, or alternatively applying in-context learning with 1,000-1,500 tokens of examples added to the prompt, as sketched below.
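
A minimal sketch of the in-context-learning option, assuming the Transformers pipeline shown in the "How to use" section below; the few-shot examples, prompt wording, and generation settings here are illustrative assumptions, not part of the released model:

import transformers
import torch

# Few-shot examples prepended to the query. These placeholders should be
# replaced with ~1,000-1,500 tokens of high-quality marketing Q&A pairs.
shots = (
    "Q: What does CAC stand for?\n"
    "A: Customer Acquisition Cost: total sales and marketing spend divided "
    "by the number of new customers acquired.\n\n"
)
query = "Q: How should I segment an email list for a product launch?\nA:"

pipe = transformers.pipeline(
    "text-generation",
    model="marketeam/LLaMarketing",
    tokenizer="meta-llama/Meta-Llama-3-8B",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
out = pipe(shots + query, max_new_tokens=128, return_full_text=False)
print(out[0]["generated_text"])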

Training Details

Training Data

Marketing data from publicly available and internal sources such as:

  • Blogs
  • Books
  • Websites
  • Podcasts
  • Newsletters
  • Publications
  • Social Media
  • Ad-Campaigns
  • Landing Pages
  • Press Releases
  • Email-Campaigns
  • Brochures & Flyers
  • Product Descriptions
  • Testimonials & Reviews
  • ...

Roughly 10% of previously seen data was also mixed in, to avoid catastrophic forgetting; a sketch of such a replay mixture follows.
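
A minimal sketch of that kind of replay mixture, using the Hugging Face datasets library; the dataset names and the exact 90/10 split below are illustrative assumptions, not a description of our actual data pipeline:

from datasets import load_dataset, interleave_datasets

# Hypothetical dataset names: stand-ins for the curated marketing corpus
# and a sample of the base model's original pretraining distribution.
marketing = load_dataset("marketeam/marketing-corpus", split="train", streaming=True)
replay = load_dataset("allenai/c4", "en", split="train", streaming=True)

# Draw ~90% marketing data and ~10% previously seen data, so the model
# keeps revisiting its original distribution during continued pretraining.
mixed = interleave_datasets([marketing, replay], probabilities=[0.9, 0.1], seed=42)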

Training Procedure

Training ran on AWS SageMaker on a p4de.24xlarge instance with 4 NVIDIA A100 GPUs.
Total train time was roughly 250 hours, at a total training cost of roughly $10K.
This is an early checkpoint of the model that we are releasing to the community.

Training Hyperparameters

Param           Value
bf16            true
tf32            true
lr              1e-4
optim           AdamW
epochs          1
lr scheduler    constant
warmup ratio    0.03
max grad norm   0.3
context len     3072
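
For reference, these settings map onto Hugging Face TrainingArguments roughly as follows. This is a reconstruction from the table above, not the actual training script; the output directory and batch-size values are placeholders:

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="llamarketing-cpt",   # placeholder
    bf16=True,
    tf32=True,
    learning_rate=1e-4,
    optim="adamw_torch",
    num_train_epochs=1,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    max_grad_norm=0.3,
    per_device_train_batch_size=1,   # placeholder
)
# The 3,072-token context length is applied when tokenizing/packing the
# corpus, not via TrainingArguments.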

How to use

Using Transformers pipeline

import transformers
import torch

model_id = "marketeam/LLaMarketing"
tokenizer_id = "meta-llama/Meta-Llama-3-8B"  # base-model tokenizer
token = "hf-token"  # your Hugging Face access token

# Load the model in bfloat16 and shard it across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    token=token,
    device_map="auto",
)

pipeline("What are the key components of a digital marketing strategy?")

Using Transformers generate

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "marketeam/LLaMarketing"
tokenizer_id = "meta-llama/Meta-Llama-3-8B"  # base-model tokenizer
token = "hf_token"  # your Hugging Face access token
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(tokenizer_id, token=token)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, token=token).to(device)

message = "How do I calculate customer lifetime value?"
inputs = tokenizer(message, return_tensors="pt").to(device)
# Without max_new_tokens, generate() defaults to ~20 new tokens.
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
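
Because this checkpoint is a base model rather than an instruction-tuned one, outputs are free-form continuations of the prompt. Tuning standard generate arguments such as max_new_tokens, temperature, and repetition_penalty typically helps steer the output.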

Intended Usage

LLaMarketing is now available for further testing and assessment. Potential use cases include, but are not limited to:

  • Text Generation: This model can produce creative text formats in the marketing domain.
  • Knowledge Exploration: It can assist marketing researchers by generating valuable marketing information or answering questions about marketing-specific topics.
  • Natural Language Processing (NLP) Research: This model can form the basis for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field.

Contributors

Sahar Millis, Coby Benveniste, Nofar Sachs, Eran Mazur
