---
license: apache-2.0
datasets:
- motexture/cData
language:
- en
base_model:
- HuggingFaceTB/SmolLM2-360M-Instruct
pipeline_tag: text-generation
tags:
- smoll
- coding
- coder
- model
- small
---
# SmolLCoder-360M-Instruct
## Introduction
SmolLCoder-360M-Instruct is a coding assistant, a fine-tuned version of SmolLM2-360M-Instruct trained on the cData dataset.
## Quickstart
The following code snippet shows how to load the tokenizer and model and generate content using `apply_chat_template`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"motexture/SmolLCoder-360M-Instruct",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("motexture/SmolLCoder-360M-Instruct")
prompt = "Write a C++ program that demonstrates the concept of separate compilation and linkage using namespaces and header files. The program should consist of multiple source files, each containing a portion of the program's code, and a header file that contains the interface information for the program.\n\nThe program should define a namespace my_namespace that contains a class MyClass with a member function print() that takes an integer as an argument. The program should also define a function main() that uses an object of the MyClass class to print a message.\n\nThe program should be compiled and linked separately, with each source file being compiled individually and then linked together to form the final executable."
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=4096
)
# Strip the prompt tokens so only the newly generated text remains
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
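Alternatively, the model can be driven through the `pipeline` helper, which applies the chat template and decoding internally. This is a minimal sketch assuming a recent `transformers` release with chat-message support in the text-generation pipeline; the prompt below is illustrative, not from the model card.
```python
from transformers import pipeline

# Minimal sketch: the pipeline handles chat templating and decoding
# internally (assumes a recent transformers release).
pipe = pipeline(
    "text-generation",
    model="motexture/SmolLCoder-360M-Instruct",
    torch_dtype="auto",
    device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    # Illustrative prompt, not taken from the model card
    {"role": "user", "content": "Write a C++ function that reverses a string in place."}
]

result = pipe(messages, max_new_tokens=512)
# The last message in generated_text holds the assistant's reply
print(result[0]["generated_text"][-1]["content"])
```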
## License
[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Citation
```bibtex
@misc{allal2024SmolLM2,
title={SmolLM2 - with great data, comes great performance},
author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Lewis Tunstall and Agustín Piqueres and Andres Marafioti and Cyril Zakka and Leandro von Werra and Thomas Wolf},
year={2024},
}
```