---
license: mit
language:
- en
metrics:
- bleurt
- bleu
- chrf
datasets:
- Raiff1982/coredata
- Raiff1982/pineco
base_model:
- tiiuae/falcon-40b
- mistralai/Mistral-7B-v0.3
tags:
- medical
- code
- text-generation
- transformers
pipeline_tag: text-generation
library_name: transformers
---
# Codette - Falcon & Mistral Merged Model

## 🧠 Overview
Codette is an advanced AI assistant designed to support users across cognitive, creative, and analytical tasks.
This model merges Falcon-40B and Mistral-7B to deliver high performance in text generation, medical diagnostics, and code reasoning.
## ⚡ Features
- ✅ Merges Falcon-40B & Mistral-7B for enhanced capabilities
- ✅ Supports text generation, medical analysis, and code synthesis (see the quick-start sketch after this list)
- ✅ Fine-tuned on domain-specific datasets (`Raiff1982/coredata`, `Raiff1982/pineco`)
- ✅ Optimized for research, enterprise AI, and advanced reasoning
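As a quick illustration of the code-synthesis feature, the high-level `pipeline` API is enough. This is a minimal sketch, assuming the `Raiff1982/Codette` repository shown in the Usage section below; the prompt and token budget are illustrative:

```python
from transformers import pipeline

# Quick-start: exercise the code-synthesis capability through the
# text-generation pipeline. trust_remote_code=True allows any custom
# modeling code shipped in the repo to run.
generator = pipeline("text-generation", model="Raiff1982/Codette", trust_remote_code=True)

result = generator(
    "Write a Python function that checks whether a number is prime.",
    max_new_tokens=128,
)
print(result[0]["generated_text"])
```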
## 📌 Model Details
- Base Models: Falcon-40B, Mistral-7B-v0.3
- Architecture: Transformer-based language model
- Use Cases: Text generation, code assistance, research, medical insights
- Training Datasets:
  - `Raiff1982/coredata`: medical and reasoning-focused samples
  - `Raiff1982/pineco`: mixed-domain creative and technical prompts
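To inspect the training data, both repositories can be pulled with the `datasets` library. A minimal sketch, assuming the datasets are stored in a format `load_dataset` can parse; the `train` split name is an assumption, so check each dataset card:

```python
from datasets import load_dataset

# Fetch the two fine-tuning datasets from the Hugging Face Hub.
# The "train" split is an assumption; adjust to the splits each repo exposes.
coredata = load_dataset("Raiff1982/coredata", split="train")
pineco = load_dataset("Raiff1982/pineco", split="train")

print(coredata)     # column names and row count
print(pineco[0])    # first mixed-domain sample
```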
## 🚀 Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Raiff1982/Codette"

# Load the tokenizer and model weights from the Hugging Face Hub.
# trust_remote_code=True allows the repo's custom modeling code to run.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

prompt = "How can AI improve medical diagnostics?"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate up to 200 tokens (prompt included) and decode the completion.
output = model.generate(**inputs, max_length=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
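Because one of the base models is a 40B-parameter Falcon, a full-precision load may not fit on a single GPU. Below is a sketch of a more memory-friendly setup using standard `transformers` options; `device_map="auto"` requires the `accelerate` package, and the half-precision dtype is an assumption about your hardware:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Raiff1982/Codette"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Shard the model across available devices in half precision.
# device_map="auto" needs the `accelerate` package installed.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    trust_remote_code=True,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "Summarize the main risks of AI-assisted medical diagnostics."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sampling-based generation; max_new_tokens counts only generated tokens,
# so the output budget stays independent of prompt length.
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```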