|
---
license: apache-2.0
---
|
|
|
# HelixNet-LMoE |
|
|
|
HelixNet-LMoE is a simple LoRA-based Mixture-of-Experts version of the [HelixNet](https://huggingface.co/migtissera/HelixNet) 3-model system by [Migel Tissera](https://huggingface.co/migtissera). \

It is a 6bpw multi-LoRA exl2 model for use with ExLlamaV2.
|
|
|
For each HelixNet model, a separate LoRA adapter was extracted: |
|
* lora-actor |
|
* lora-critic |
|
* lora-regenerator |
|
|
|
These adapters are then loaded together with the base [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) model, quantized to 6bpw exl2, to give the combined LMoE model.
|
|
|
As HelixNet processes its inputs through the actor, critic and regenerator stages, the corresponding LoRA adapter is dynamically enabled as required.
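With ExLlamaV2 this amounts to passing the relevant adapter to each generation call. A minimal sketch of the idea, using the names from the full script under Example Usage below:

```python
# Only the actor adapter is active for this call; pass lora_critic or
# lora_regenerator instead to switch "expert" without reloading any weights.
response = base_model.generator.generate_simple(prompt, settings, max_new_tokens, loras=lora_actor)
```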
|
|
|
It is similar in approach to [Airoboros' LMoE implementation](https://github.com/jondurbin/airoboros/tree/main#lmoe). In this 6bpw-quantized instance, GPU memory requirements drop from 20GB for the three separate 6bpw models to 8GB for the single multi-LoRA model, since only one copy of the quantized base weights needs to be resident alongside three small adapters.
|
The LoRAs were extracted following the process described in [https://github.com/uukuguy/multi_loras](https://github.com/uukuguy/multi_loras), with a rank of 64 and an alpha of 128.
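The core idea of such an extraction is a truncated SVD of the weight deltas between each fine-tuned model and the base. Below is a simplified per-matrix sketch, not the exact `multi_loras` code: `w_base` and `w_tuned` stand for corresponding weight matrices from the base and fine-tuned models, and the standard PEFT-style `alpha / rank` scaling is assumed.

```python
import torch

RANK, ALPHA = 64, 128  # values used for the HelixNet-LMoE extraction

def extract_lora(w_base: torch.Tensor, w_tuned: torch.Tensor,
                 rank: int = RANK, alpha: int = ALPHA):
    """Approximate w_tuned - w_base with a rank-limited product B @ A.

    At inference a LoRA is applied as w_base + (alpha / rank) * (B @ A),
    so the factors are pre-scaled by rank / alpha to compensate.
    """
    delta = (w_tuned - w_base).float()
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    b = u[:, :rank] * s[:rank] * (rank / alpha)  # shape: out_features x rank
    a = vh[:rank, :]                             # shape: rank x in_features
    return a, b
```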
|
|
|
# Performance
|
Testing on an RTX-4090 to compare with the separate 6bpw exl2 models from [https://huggingface.co/LoneStriker?search_models=helixnet](https://huggingface.co/LoneStriker?search_models=helixnet) gives: |
|
|
|
**3 separate models:** 120 tokens/second, using 20GB of GPU memory \
**LMoE combined model:** 91 tokens/second, using 8GB of GPU memory
|
|
|
|
|
# Prompt Format
|
|
|
```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: What is the relationship between Earth's atmosphere, magnetic field and gravity?
ASSISTANT:
```
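The `critic` and `regenerator` stages (used in the example below) extend this format by appending the earlier outputs; the critic prompt ends at `CRITIQUE:` and the regenerator prompt at `REGENERATOR:`:

```
SYSTEM: {system prompt}
USER: {question}
RESPONSE: {actor response}
CRITIQUE: {critic response}
REGENERATOR:
```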
|
# Example Usage |
|
|
|
The following code example shows how to use HelixNet-LMoE. No special system-context messages are needed for the `critic` and the `regenerator`. \
|
At the **You:** prompt, enter a question such as _What is the relationship between Earth's atmosphere, magnetic field and gravity?_ |
|
|
|
```python
import time
import sys, os

# Allow running the script from within the exllamav2 examples directory
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from exllamav2 import (
    ExLlamaV2,
    ExLlamaV2Config,
    ExLlamaV2Cache,
    ExLlamaV2Tokenizer,
    ExLlamaV2Lora,
)

from exllamav2.generator import (
    ExLlamaV2BaseGenerator,
    ExLlamaV2Sampler,
)


class ModelClass:
    """Bundles a generator, tokenizer and model into one object."""
    def __init__(self, generator, tokenizer, model):
        self.generator = generator
        self.tokenizer = tokenizer
        self.model = model


DEBUG = bool(os.environ.get("DEBUG"))


def load_model(model_directory, max_seq_len=8192):
    """Load a model from a directory and return it wrapped in a ModelClass."""
    config = ExLlamaV2Config()
    config.model_dir = model_directory
    config.max_seq_len = max_seq_len
    config.prepare()

    model = ExLlamaV2(config)
    print("Loading model: " + model_directory)

    # A lazy cache lets load_autosplit() distribute layers across available GPUs
    cache = ExLlamaV2Cache(model, lazy=True, max_seq_len=max_seq_len)
    model.load_autosplit(cache)

    tokenizer = ExLlamaV2Tokenizer(config)
    generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
    generator.warmup()
    return ModelClass(generator=generator, tokenizer=tokenizer, model=model)


def generate_text(prompt, lora, settings, max_new_tokens):
    """Generate a completion for `prompt` with the given LoRA adapter enabled."""
    time_begin = time.time()
    response = base_model.generator.generate_simple(prompt, settings, max_new_tokens, loras=lora)
    response = response[len(prompt):]
    time_total = time.time() - time_begin
    tokens = base_model.tokenizer.encode(response)
    count = tokens.shape[-1]
    print(f"Response generated in {time_total:.2f} seconds, {count} tokens, "
          f"{count / time_total:.2f} tokens/second, character len: {len(response)}")
    return response


# Load the quantized base model once, then attach the three adapters to it
base_model = load_model("models/HelixNet-LMoE-6.0bpw-h6-exl2")
lora_actor = ExLlamaV2Lora.from_directory(base_model.model, "models/HelixNet-LMoE-6.0bpw-h6-exl2/lora-actor")
lora_critic = ExLlamaV2Lora.from_directory(base_model.model, "models/HelixNet-LMoE-6.0bpw-h6-exl2/lora-critic")
lora_regenerator = ExLlamaV2Lora.from_directory(base_model.model, "models/HelixNet-LMoE-6.0bpw-h6-exl2/lora-regenerator")

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.75
settings.top_k = 50
settings.top_p = 1.0
max_new_tokens = 2000

system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."

while True:
    user_input = input("You: ")

    # Stage 1: the actor produces an initial answer
    prompt_actor = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nASSISTANT: "
    if DEBUG: print(f"{prompt_actor}\n\n")
    print("ACTOR:")
    response_actor = generate_text(prompt_actor, lora_actor, settings, max_new_tokens)
    if DEBUG: print(f"{response_actor}\n\n")
    print("=" * 132)

    # Stage 2: the critic critiques the actor's answer
    prompt_critic = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nRESPONSE: {response_actor}\nCRITIQUE: "
    if DEBUG: print(f"{prompt_critic}\n\n")
    print("CRITIQUE:")
    response_critic = generate_text(prompt_critic, lora_critic, settings, max_new_tokens)
    if DEBUG: print(f"{response_critic}\n\n")
    print("=" * 132)

    # Stage 3: the regenerator rewrites the answer using the critique
    prompt_regenerator = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nRESPONSE: {response_actor}\nCRITIQUE: {response_critic}\nREGENERATOR: "
    if DEBUG: print(f"{prompt_regenerator}\n\n")
    print("REGENERATION:")
    response_regenerator = generate_text(prompt_regenerator, lora_regenerator, settings, max_new_tokens)
    print("=" * 132)

    conversation = f"SYSTEM: {system_prompt}\nUSER: {user_input}\nASSISTANT: {response_regenerator}"
    print(conversation)
```
|
|
|
# LLM Evaluation |
|
|
|
The merged version of each base+LoRA model has yet to be evaluated on the [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) to see how it compares to the equivalent full HelixNet model.
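For reference, an adapter could be merged back into the (unquantized) base model with PEFT before evaluation. A minimal sketch, assuming the extracted adapters are in standard PEFT format (the output directory name is illustrative):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Merge one of the extracted adapters (actor, critic or regenerator)
# into the full-precision base weights and save the result.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
merged = PeftModel.from_pretrained(base, "lora-actor").merge_and_unload()
merged.save_pretrained("HelixNet-LMoE-actor-merged")
```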
|
|
|
# HelixNet Details |
|
|
|
HelixNet is a deep learning architecture consisting of 3 x Mistral-7B LLMs: an `actor`, a `critic`, and a `regenerator`. The `actor` LLM produces an initial response to a given system-context and question. The `critic` then takes as input a tuple of (system-context, question, response) and provides a critique of the response. Its job is not to criticize, but to provide an intelligent critique so that the answer can be modified/regenerated to address the question better. Finally, the `regenerator` takes as input a tuple of (system-context, question, response, critique) and regenerates the answer.
|
|
|
HelixNet is inspired by the actor-critic architecture most prominent in reinforcement learning algorithms. The name derives from Helix, referring to the spiral structure of a DNA molecule, and symbolizes the intertwined nature of the three networks, working in tandem much like the strands of a DNA molecule.
|
|
|
HelixNet regenerates very pleasing and accurate responses, due to the entropy preservation of the regenerator. The regenerator was trained on a dataset of only 1,000 samples, similar to Meta's LIMA. The actor network was trained on about 250K very high-quality samples, and the critic network on a further 10K samples.
|
|
|
Full details on how HelixNet was trained and evaluated are available at [https://huggingface.co/migtissera/HelixNet](https://huggingface.co/migtissera/HelixNet).
|
|
|
The three separate 6bpw models for HelixNet are available at [https://huggingface.co/LoneStriker?search_models=helixnet](https://huggingface.co/LoneStriker?search_models=helixnet).
|
|
|
|