MedIT One – 140M Checkpoint (Third Checkpoint After 6B Tokens)

Repository: MedITSolutionsKurman/medit-one

Model Type: Causal Language Model (OneForCausalLM)

Checkpoint: 140M parameters, third checkpoint after 6B tokens

Tokenizer: HuggingFaceTB/SmolLM2-1.7B-Instruct


Model Overview

The MedIT One model is an early checkpoint in the development of the One series, evaluated after 6 billion tokens of training. It is designed for natural language generation tasks and is implemented with a focus on high-performance causal language modeling. This checkpoint contains 140 million parameters and is built with PyTorch, with support for bfloat16 precision, making it suitable for GPU-accelerated inference.


Intended Use

  • Primary Applications: Natural language generation, research experiments, and prompt completion tasks.
  • Research: This model checkpoint is provided as an early checkpoint and can be used for studying model behaviors, especially regarding repetitive generation.
  • Prototyping: Developers and researchers can use this checkpoint to explore early results and understand the evolution of the MedIT One series.

Caution: As an early checkpoint, the model tends to exhibit repetitive generation. Users should set the repetition penalty (recommended value: 1.2) during inference to mitigate this behavior.
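
For example, the penalty can be kept in a reusable transformers GenerationConfig (a minimal sketch; max_new_tokens is illustrative, only repetition_penalty=1.2 is the recommendation from this card):

from transformers import GenerationConfig

# Reusable generation settings with the recommended repetition penalty.
gen_config = GenerationConfig(
    max_new_tokens=256,        # illustrative value
    repetition_penalty=1.2,    # recommended for this checkpoint
)
# Later: output = model.generate(**tokens, generation_config=gen_config)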


Installation

# From source (without CUDA acceleration)
git clone https://github.com/MedITSolutionsKurman/medit-one
cd medit-one
pip install -e .

# From source with CUDA acceleration
python install_cuda.py

# For training capabilities only
pip install -e ".[training]"

# For full installation with all features including CUDA acceleration
pip install -e ".[full]"
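
After installation, a quick sanity check can confirm that the package imports and whether CUDA acceleration is available (a minimal sketch; the one package name matches the import used in the usage example below):

import torch
from one.modeling_one import OneForCausalLM  # provided by the medit-one package

print("OneForCausalLM imported successfully")
print("CUDA available:", torch.cuda.is_available())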

How to Use

After installing the medit-one package from the repository, the model can be loaded and run with the following code snippet:

from time import time

import torch
from transformers import AutoTokenizer, TextStreamer

from one.modeling_one import OneForCausalLM

# Set the model checkpoint path
path = 'meditsolutions/medit-one-140M-6B-tokens-checkpoint'

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(path)
model = OneForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16)

device = 'cuda'
model.to(device)

text = 'The role of artificial intelligence'

# Tokenize the input text and move the tensors to the target device
tokens = tokenizer(text, return_tensors='pt').to(device)

start = time()

# Inference with recommended repetition penalty
with torch.autocast(device_type=device, dtype=torch.bfloat16):
    with torch.no_grad():
        model.eval()
        output = model.generate(
            **tokens,
            max_new_tokens=1024,
            streamer=TextStreamer(tokenizer),
            do_sample=False,  # greedy decoding; sampling is disabled
            repetition_penalty=1.2,
            use_cache=True,
            output_attentions=False,
            eos_token_id=model.config.eos_token_id if model.config.eos_token_id is not None else tokenizer.eos_token_id
        )

end = time()
# Throughput estimate (output[0] includes the prompt tokens as well)
tokens_per_sec = len(output[0]) / (end - start)
print(f'Time taken: {end - start:.2f} seconds, tokens per second: {tokens_per_sec:.2f}')

Note: When using this checkpoint, it is essential to apply a repetition penalty of 1.2 to help control the model’s tendency toward repetitive text generation.
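
If streaming output is not needed, the streamer argument can be omitted and the generated ids decoded directly (reusing output and tokenizer from the snippet above):

# Decode the full generated sequence (prompt plus continuation).
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)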


Model Details

  • Parameters: 140M (early checkpoint)
  • Training Tokens: Evaluated after 6B tokens
  • Precision: Supports bfloat16 for accelerated computation on compatible hardware
  • Architecture: Causal language model implemented in PyTorch, part of the MedIT One series
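
The parameter count and precision listed above can be checked directly on the loaded model (a minimal sketch, assuming model from the usage example):

# Count parameters and report the dtype of the loaded weights.
num_params = sum(p.numel() for p in model.parameters())
print(f"Parameters: {num_params / 1e6:.1f}M")
print(f"Weight dtype: {next(model.parameters()).dtype}")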

Limitations & Considerations

  • Repetition: This early checkpoint is known to produce repetitive outputs. Adjusting the repetition penalty (recommended: 1.2) is necessary to reduce this effect.
  • Early Checkpoint Status: As a checkpoint from an early stage of training, performance and fluency might be lower compared to later, more refined checkpoints.
  • Usage Recommendations: Best suited for research and experimental purposes rather than production deployment without further fine-tuning.

Training Data & Methodology

This checkpoint represents an intermediate stage of training, captured after 6B tokens. Users interested in the training process, dataset specifics, and additional checkpoints are encouraged to consult the repository documentation.


Citation

If you use the MedIT One model in your research or applications, please cite the repository:

@misc{medit-one,
  author = {MedITSolutionsKurman},
  title = {MedIT One},
  year = {202X},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/MedITSolutionsKurman/medit-one}},
}

Additional Information

For more details on installation, model training, and updates, please refer to the repository's README and documentation. Contributions and feedback are welcome from the community.
