Request: DOI
#17 opened about 9 hours ago by Revok8935

Json output format issue (1 reply)
#16 opened 1 day ago by rastegar

Using this model in LM Studio does not complete (stuck at PromptProcessing: 99.9517)
#15 opened 5 days ago by sheyenrath

Requirements?
#14 opened 5 days ago by sheyenrath

Local execution issue (2 replies)
#13 opened 6 days ago by binzhango

Local inference on Intel iGPU (1 reply)
#12 opened 9 days ago by luweigen

Inference Endpoints? (1 reply)
#11 opened 9 days ago by iamrobotbear

Input validation error: `inputs` tokens + `max_new_tokens` must be <= 4096. Given: 11588 `inputs` tokens and 2400 `max_new_tokens` (2 replies)
#9 opened 14 days ago by GollyJer

Supported languages? (3 replies)
#7 opened 14 days ago by ksol8

openai api capability server (1 reply)
#6 opened 14 days ago by devops724

Working with JSON template inputs / structured outputs
#5 opened 14 days ago by Truc95

Cool Model (1 reply)
#4 opened 15 days ago by binzhango

please create a quantized version, preferably using bitsandbytes! (4 replies)
#3 opened 15 days ago by ctranslate2-4you

Some edge case is not good (1 reply)
#2 opened 15 days ago by bash99

enable greedy
#1 opened 16 days ago by merve