---
license: apache-2.0
pipeline_tag: text-generation
language:
- en
tags:
- pretrained
inference:
  parameters:
    temperature: 0.7
---

# Mistral YARN 128k 11b

This is a mergekit merge of Nous Research's Yarn-Mistral-7b-128k Large Language Model (LLM), creating an 11-billion-parameter pretrained generative text model with a 128k-token context window.
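
A merge like this is typically produced with a mergekit passthrough config that stacks overlapping layer ranges of the base model. The exact layer ranges used for this model are not stated here; the config below is a hedged sketch of the general pattern (duplicating interleaved slices of a 32-layer 7B model to reach roughly 11B parameters), not the actual recipe:

```yaml
# Illustrative mergekit passthrough config (layer ranges are assumptions, not the actual recipe)
slices:
  - sources:
      - model: NousResearch/Yarn-Mistral-7b-128k
        layer_range: [0, 24]
  - sources:
      - model: NousResearch/Yarn-Mistral-7b-128k
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```

With overlapping slices such as these, the merged model has 48 transformer layers instead of 32, which is how a 7B base yields a model in the ~11B range.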