LLäMmlein 🐑

Project page: https://www.informatik.uni-wuerzburg.de/datascience/projects/nlp/llammlein/
This is a German TinyLlama 120M language model trained from scratch with the TinyLlama codebase on the German portion of RedPajama V2. Find more details on our project page and in our preprint!
In addition to the final model, we publish intermediate training checkpoints for our base models as separate branches of the model repository. They can be accessed via the drop-down menu labeled "main" in the top left corner of the "Files and versions" tab.
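Such a checkpoint can also be loaded programmatically, since `from_pretrained` accepts a git branch or tag via its `revision` argument. A minimal sketch, assuming a placeholder branch name (substitute one listed in the drop-down menu):

```python
from transformers import AutoModelForCausalLM

# Placeholder branch name; replace it with an actual checkpoint branch
# listed in the repository's "Files and versions" drop-down menu.
checkpoint_branch = "intermediate-checkpoint-branch"

# `revision` selects a specific branch, tag, or commit on the Hub.
model = AutoModelForCausalLM.from_pretrained(
    "LSX-UniWue/LLaMmlein_120M",
    revision=checkpoint_branch,
)
```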
The final model can be loaded with the 🤗 Transformers library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the released model and its tokenizer from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained("LSX-UniWue/LLaMmlein_120M")
tokenizer = AutoTokenizer.from_pretrained("LSX-UniWue/LLaMmlein_120M")
```
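Continuing from the snippet above, here is a minimal generation sketch; the German prompt and sampling settings are illustrative assumptions, not settings recommended by us:

```python
# Illustrative example only: prompt and sampling parameters are assumptions.
inputs = tokenizer("Die Würzburger Residenz ist", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,  # cap the length of the continuation
    do_sample=True,     # sample instead of greedy decoding
    top_p=0.9,          # nucleus sampling
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```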
We evaluated our model on the SuperGLEBer benchmark, a broad suite of German language understanding tasks.