# AMD-OLMo-1B-SFT-DPO

This is [amd/AMD-OLMo-1B-SFT-DPO](https://huggingface.co/amd/AMD-OLMo-1B-SFT-DPO) with ONNX weights, made compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the Transformers.js JavaScript library from NPM using:

```bash
npm i @huggingface/transformers
```
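Alternatively, if you are targeting the browser without a bundler, the library can be loaded directly from a CDN. This is a minimal sketch (the jsDelivr URL is one common option; pin an exact version in production):

```js
// Inside a <script type="module"> tag:
// load Transformers.js straight from a CDN, no build step required
// (sketch; pin a specific version such as @huggingface/transformers@3.x in production)
import { pipeline } from "https://cdn.jsdelivr.net/npm/@huggingface/transformers";
```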
**Example:** Text generation with `onnx-community/AMD-OLMo-1B-SFT-DPO`.
```js
import { pipeline } from "@huggingface/transformers";

// Create a text generation pipeline
const generator = await pipeline(
  "text-generation",
  "onnx-community/AMD-OLMo-1B-SFT-DPO",
  { dtype: "q4" },
);

// Define the list of messages
const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Tell me a joke." },
];

// Generate a response
const output = await generator(messages, { max_new_tokens: 128 });
console.log(output[0].generated_text.at(-1).content);
// "Why don't scientists trust atoms?\n\nBecause they make up everything!"
```
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
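As a rough sketch of that conversion (assuming a Python environment; the exact flags and output directory name are up to you), the Optimum CLI can export a Hub model to ONNX:

```bash
pip install "optimum[exporters]"

# Export the original PyTorch checkpoint to ONNX
optimum-cli export onnx --model amd/AMD-OLMo-1B-SFT-DPO AMD-OLMo-1B-SFT-DPO-onnx/
```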