#64 · A Mixtral-8x7b-v3.1? · 5 comments · opened 4 months ago by chriss1245
#63 · Interview Invitation: Thoughts on Gen AI Evaluation and Report · opened 4 months ago by evatang
#62 · Unable to load model in 8-bit mode, or errors when using 4-bit · opened 5 months ago by franciscoliu
#61 · Align tokenizer with mistral-common · opened 6 months ago by Rocketknight1
#60 · Inference time on MMLU · opened 7 months ago by kbganesh
#59 · The model is returning different outputs with the same prompt · opened 7 months ago by Riteshv2910
#58 · Update README.md · opened 7 months ago by Criztov
#57 · Fail download Mistral · 2 comments · opened 8 months ago by wanted6
#56 · How to train every expert in Mixtral 8x7B on different domain data? · 1 comment · opened 8 months ago by chin-cyber
#55 · training every expert of mixtral-8x7b · opened 8 months ago by chin-cyber
#54 · What does each consolidated.0x.pt consist of? How to load model using them? · opened 8 months ago by Keely0419
#53 · Colab Notebook: Fine-tune Mixtral-8x7B (QLoRA) · opened 8 months ago by Ateeqq
#52 · How to fully fine-tune Mixtral 8x7B without using any adapters? · 1 comment · opened 9 months ago by cuongk14
#51 · HELP: How to host Mixtral model as an OpenAI-compatible server · opened 9 months ago by TurtleRuss
#50 · Update README.md · 1 comment · opened 9 months ago by Schmip
#47 · Is this the best open-source model for code summarization? · opened 9 months ago by songogeta31
#46 · Adding Evaluation Results · opened 10 months ago by leaderboard-pr-bot
#45 · Complex SQL Query Generator · opened 10 months ago by sanipanwala
#44 · Help: CUDA Out of Memory. Hardware Requirements. vLLM and FastChat · opened 10 months ago by zebfreeman
#43 · [AUTOMATED] Model Memory Requirements · opened 10 months ago by model-sizer-bot
#42 · Request failed during generation: Server error: CUDA out of memory · opened 10 months ago by grumpyp
#41 · Out of Memory issue on SageMaker ml.g5.12xlarge instance · 3 comments · opened 10 months ago by ChanakyaReddy
#39 · How to host Mistral online? · 8 comments · opened 11 months ago by joelfabregat
#38 · Request: DOI · opened 11 months ago by BARRY-SANOUSSA
#37 · Azure VM not launching Mixtral · 4 comments · opened 11 months ago by pierrerichard
#36 · Running inference on multi-GPU · 4 comments · opened 11 months ago by bweinstein123
#35 · Update README.md · opened 11 months ago by kingzzm
#34 · Out of memory issue · 4 comments · opened 11 months ago by kxgong
#33 · Use embeddings created with the API on my completions · opened 11 months ago by PirlogHF
#32 · training data · opened 11 months ago by whaleloops
#31 · Merci / Thanks · 1 comment · opened 12 months ago by Tigrou83
#30 · How to fine-tune Mixtral 8x7B? · 3 comments · opened 12 months ago by tzivi
#28 · remove old disclaimer · opened 12 months ago by LuckiestOne
#27 · train mixtral · 1 comment · opened about 1 year ago by iriven
#26 · KeyError: 'mixtral' · 5 comments · opened about 1 year ago by alvynabranches
#25 · WARNING:root:Some parameters are on the meta device because they were offloaded to the cpu. · 4 comments · opened about 1 year ago by kmukeshreddy
#24 · Question: Maximizing GPU Utilization for Inference · opened about 1 year ago by ric1732
#22 · Prompt format of mistralai/Mixtral-8x7B-v0.1 model · 1 comment · opened about 1 year ago by Pradeep1995
#21 · Delete model-00010-of-00019.safetensors · 1 comment · opened about 1 year ago by dynamicmortal
#18 · Wrong solution for 1+1= · 10 comments · opened about 1 year ago by yixliu1
#16 · Error while loading model · 5 comments · opened about 1 year ago by imjunaidafzal
#15 · Deployment failing on SageMaker · 14 comments · opened about 1 year ago by vibranium
#13 · Maybe remove the `+` signs in the demo code? · 4 comments · opened about 1 year ago by petergrubercom
#12 · FSDP Finetuning · 14 comments · opened about 1 year ago by cchristophe
#11 · 🚀 Torrent File for AI Model Download 🚀 · 1 comment · opened about 1 year ago by Nondzu
#10 · Fine-tuning toolkit for Mixtral 8x7B MoE model · 18 comments · opened about 1 year ago by hiyouga
#9 · SageMaker deployment config for sub-second real-time inference · opened about 1 year ago by vibranium
#8 · AutoModelForCausalLM does not seem to work for Mixtral · 8 comments · opened about 1 year ago by Mauceric
#6 · A question · 7 comments · opened about 1 year ago by Hoioi