Akarshan Biswas (qnixsynapse)
AI & ML interests: NLP, models, quantization
Recent Activity
Liked a model 3 days ago: Echo9Zulu/gemma-3-4b-it-int8_asym-ov
Liked a model 10 days ago: google/gemma-3-4b-it-qat-q4_0-gguf
Liked a model 10 days ago: stduhpf/google-gemma-3-4b-it-qat-q4_0-gguf-small
Organizations: None yet
qnixsynapse's activity
Tool calling support in Gemma 2 · 2 · #50 opened 4 months ago by qnixsynapse
[MODELS] Discussion · 34 · 715 · #372 opened about 1 year ago by victor
[TOOLS] Community Discussion · 3 · 27 · #455 opened 11 months ago by victor
Wrong number of tensors; expected 292, got 291 · 6 · #69 opened 9 months ago by KingBadger
[FEATURE] Tools · 78 · 69 · #470 opened 11 months ago by victor
Utterly based · 8 · 1 · #9 opened 9 months ago by llama-anon
Add IQ Quantization support with the help of imatrix and GPUs · 6 · 8 · #35 opened about 1 year ago by qnixsynapse
Suggestion: Host Gemma2 using keras_nlp instead of transformers library for the time being · 2 · #498 opened 10 months ago by qnixsynapse
The best 8B in the planet right now. PERIOD! · 2 · #22 opened 12 months ago by cyberneticos
How many active parameters does this model have? · 3 · #6 opened about 1 year ago by lewtun
7B or 8B? · 4 · #24 opened about 1 year ago by amgadhasan
Which model is responsible for naming of the thread? · 8 · #402 opened about 1 year ago by qnixsynapse
Number of parameters · 8 · #9 opened about 1 year ago by HugoLaurencon
Loading the model · 3 · #3 opened over 1 year ago by PyrroAiakid
Looking for GGUF format for this model · 1 · #14 opened over 1 year ago by barha
Help needed to load model · 4 · 19 · #13 opened over 1 year ago by sanjay-dev-ds-28
Running Llama-2-7B-32K-Instruct-GGML with llama.cpp? · 13 · #1 opened over 1 year ago by gsimard
How to convert model into GGML format? · 4 · 54 · #13 opened over 1 year ago by zbruceli
gguf files · #22 opened over 1 year ago by qnixsynapse
can't load model with llama.cpp commit 519c981f8b65ee6c87c2965539685ced0a17223b · 5 · #6 opened over 1 year ago by md2