# Experiment21-7B

An experiment in testing and refining a training and evaluation pipeline for LLM research.

The goal is to evaluate the effectiveness of a new training/evaluation pipeline for LLMs, identifying potential optimizations in data engineering, architecture efficiency, and evaluation performance.

The experiment will explore adjustments to data preprocessing, model training algorithms, and evaluation metrics to test candidate improvements.

More details will be shared in future experiments.


license: apache-2.0

Model size: 7.24B params (FP16, Safetensors)
