---
base_model: BSC-LT/ALIA-40b
datasets:
- oscar-corpus/colossal-oscar-1.0
- HuggingFaceFW/fineweb-edu
- joelniklaus/eurlex_resources
- joelniklaus/legal-mc4
- projecte-aina/CATalog
- UFRGS/brwac
- community-datasets/hrwac
- danish-foundation-models/danish-gigaword
- HiTZ/euscrawl
- PleIAs/French-PD-Newspapers
- PleIAs/French-PD-Books
- AI-team-UoA/greek_legal_code
- HiTZ/latxa-corpus-v1.1
- allenai/peS2o
- pile-of-law/pile-of-law
- PORTULAN/parlamento-pt
- hoskinson-center/proof-pile
- togethercomputer/RedPajama-Data-1T
- bigcode/starcoderdata
- bjoernp/tagesschau-2018-2023
- EleutherAI/the_pile_deduplicated
language:
- bg
- ca
- code
- cs
- cy
- da
- de
- el
- en
- es
- et
- eu
- fi
- fr
- ga
- gl
- hr
- hu
- it
- lt
- lv
- mt
- nl
- nn
- "no"
- oc
- pl
- pt
- ro
- ru
- sh
- sk
- sl
- sr
- sv
- uk
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
Static quants of https://huggingface.co/BSC-LT/ALIA-40b

Weighted/imatrix quants are available at https://huggingface.co/mradermacher/ALIA-40b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including how to concatenate multi-part files.
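As a concrete starting point, here is a minimal sketch of loading one of these quants with llama-cpp-python. The repo id and the filename pattern are assumptions (this static-quant repo presumably lives at mradermacher/ALIA-40b-GGUF, mirroring the i1 repo linked above); pick whichever quant from the table below fits your RAM/VRAM.

```python
# Minimal sketch, not an official example: load a quant with llama-cpp-python
# (pip install llama-cpp-python). Repo id and filename pattern are assumptions.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/ALIA-40b-GGUF",  # assumed id of this static-quant repo
    filename="*Q4_K_M.gguf",               # glob pattern for the Q4_K_M quant
    n_ctx=4096,                             # context window; adjust to taste
)

out = llm("La inteligencia artificial es", max_tokens=64)
print(out["choices"][0]["text"])
```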
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
Link | Type | Size/GB | Notes |
---|---|---|---|
GGUF | Q2_K | 15.8 | |
GGUF | Q3_K_S | 18.3 | |
GGUF | Q3_K_M | 20.1 | lower quality |
GGUF | Q3_K_L | 21.7 | |
GGUF | IQ4_XS | 22.4 | |
GGUF | Q4_K_S | 23.5 | fast, recommended |
GGUF | Q4_K_M | 24.7 | fast, recommended |
GGUF | Q5_K_S | 28.2 | |
GGUF | Q5_K_M | 28.9 | |
GGUF | Q6_K | 33.3 | very good quality |
GGUF | Q8_0 | 43.1 | fast, best quality |
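If you only need a single file from the table, the sketch below downloads one quant with huggingface_hub. The repo id and exact filename are assumptions; check the repository's file listing for the real names.

```python
# Minimal sketch: fetch one quant from the table with huggingface_hub.
# Repo id and filename are assumptions; verify them against the repo's file list.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/ALIA-40b-GGUF",
    filename="ALIA-40b.Q4_K_S.gguf",  # ~23.5 GB according to the table above
)
print("Saved to", path)
```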
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):
And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, nethype GmbH, for letting me use its servers and for providing upgrades to my workstation, enabling me to do this work in my free time.