CosmeticVentureV2

This is a specialized model merge created with MergeKit, combining the French-language expertise of Mistral-7B with the business reasoning of Llama-3.1-8B through a SLERP fusion technique with variable interpolation values, optimized for the cosmetic industry.

About Me

I'm David Soeiro-Vuong, a third-year Computer Science student working as an apprentice at TW3 Partners, a company specializing in Generative AI. Passionate about artificial intelligence and language model optimization, I focus on creating efficient model merges that balance performance and capabilities.

🔗 Connect with me on LinkedIn

Project Overview

CosmeticVentureV2 is designed as a specialized LLM for the cosmetic industry, capable of providing expert business guidance for entrepreneurs and professionals in this sector. The model combines Mistral's excellent French language capabilities with Llama 3.1's sophisticated business understanding to create a powerful advisor for cosmetic business development.

Merge Details

Merge Method

This model uses SLERP (Spherical Linear Interpolation) with carefully calibrated parameters:

  • Attention Layers: Variable interpolation values [0, 0.5, 0.3, 0.7, 1] leveraging Llama-3.1's advanced instruction-following and business capabilities
  • MLP Layers: Variable interpolation values [1, 0.5, 0.7, 0.3, 0] maintaining Mistral's French language expertise and reasoning
  • Other Parameters: a constant 0.5 interpolation value for all remaining tensors, creating a balanced fusion
  • Format: bfloat16 precision for efficient memory usage
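To make the interpolation concrete, here is a minimal sketch of SLERP between two flattened weight tensors. This is an illustrative NumPy implementation of the standard spherical-interpolation formula, not MergeKit's internal code; the function name and the fallback threshold are my own choices.

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flat weight vectors.

    t=0 returns v0, t=1 returns v1; intermediate t values move along
    the great-circle arc between the two (normalized) directions.
    """
    v0n = v0 / (np.linalg.norm(v0) + eps)
    v1n = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    omega = np.arccos(dot)  # angle between the two weight directions
    if omega < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation
        return (1 - t) * v0 + t * v1
    so = np.sin(omega)
    return (np.sin((1 - t) * omega) / so) * v0 + (np.sin(t * omega) / so) * v1

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
mid = slerp(0.5, a, b)  # midpoint on the arc, norm preserved for unit inputs
```

Unlike plain linear interpolation, SLERP preserves the geometry of the weight vectors, which is why it tends to blend models more gracefully than naive averaging.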

Models Merged

The following models were included in the merge:

  • mistralai/Mistral-7B-v0.1 (base model)
  • meta-llama/Llama-3.1-8B

Configuration

slices:
  - sources:
      - model: mistralai/Mistral-7B-v0.1
        layer_range: [0, 32]
      - model: meta-llama/Llama-3.1-8B
        layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-v0.1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
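The five-element `t` lists above are gradient specifications: MergeKit spreads the anchor values across the layer stack and interpolates a per-layer `t` between them. The sketch below approximates that expansion with linear interpolation over 32 layers; the exact spacing MergeKit uses internally may differ slightly, so treat this as an illustration of the schedule's shape rather than its precise values.

```python
import numpy as np

NUM_LAYERS = 32
attn_anchors = [0, 0.5, 0.3, 0.7, 1]   # self_attn t values from the config
mlp_anchors = [1, 0.5, 0.7, 0.3, 0]    # mlp t values (the mirror schedule)

def expand_gradient(anchors, num_layers):
    """Spread anchor t values evenly across layers, interpolating between them."""
    anchor_pos = np.linspace(0, num_layers - 1, len(anchors))
    return np.interp(np.arange(num_layers), anchor_pos, anchors)

attn_t = expand_gradient(attn_anchors, NUM_LAYERS)  # 0.0 at layer 0 -> 1.0 at layer 31
mlp_t = expand_gradient(mlp_anchors, NUM_LAYERS)    # 1.0 at layer 0 -> 0.0 at layer 31
```

Note how the two schedules mirror each other: early attention layers stay close to the base model (`t≈0`) while early MLP layers lean toward the second model (`t≈1`), and the balance reverses toward the top of the stack.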

Model Capabilities

This specialized merge combines:

  • Mistral's excellent French language understanding and multilingual capabilities
  • Llama-3.1's advanced business reasoning and market knowledge
  • Domain adaptation for the cosmetic industry through strategic parameter fusion

The resulting model is optimized for tasks in the cosmetic business sector, such as:

  • Regulatory guidance for cosmetic products in various markets
  • Business planning and market analysis for cosmetic startups
  • Ingredient formulation and technical documentation assistance
  • Marketing strategy and brand positioning advice
  • Trend analysis and innovation forecasting for beauty products

Future Development

This model will be fine-tuned on a specialized dataset consisting of:

  • Regulatory documentation (EU regulations, BPF, ANSM guidelines)
  • Market research and industry analyses (FEBEA reports, segment studies)
  • Technical documentation (ingredient specifications, manufacturing processes)
  • Business resources (business plans, pricing strategies, case studies)
  • Industry trends (clean beauty, solid cosmetics, circular economy)
  • Marketing strategies (brand positioning, digital strategies)

The fine-tuning will be structured in three phases, starting with sector fundamentals before progressively enhancing business advisory capabilities.

Limitations

  • Limited domain-specific training beyond parameter merging at this stage
  • May require the planned fine-tuning to reach optimal performance for specialized cosmetic industry tasks
  • Maintains the general limitations of the underlying 7B and 8B parameter models

License

This model is released under the Apache 2.0 license, consistent with the underlying models' licenses.
