---
license: creativeml-openrail-m
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
base_model:
- prithivMLmods/Triangulum-10B
pipeline_tag: text-generation
library_name: transformers
tags:
- triangulum_10b
- sft
- chain_of_thought
- ollama
- text-generation-inference
- llama_for_causal_lm
---
![Triangulum-10b.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/By0OJ1lMvP5ZvVvfEGvz5.png)

<pre align="center">
  __           .__                                .__                   
_/  |_ _______ |__|_____     ____    ____   __ __ |  |   __ __   _____  
\   __\\_  __ \|  |\__  \   /    \  / ___\ |  |  \|  |  |  |  \ /     \ 
 |  |   |  | \/|  | / __ \_|   |  \/ /_/  >|  |  /|  |__|  |  /|  Y Y  \
 |__|   |__|   |__|(____  /|___|  /\___  / |____/ |____/|____/ |__|_|  /
                        \/      \//_____/                            \/ 
</pre>

# **Triangulum 10B: Multilingual Large Language Models (LLMs)**

Triangulum 10B is a pretrained, instruction-tuned generative model designed for multilingual applications. It is trained on synthetic datasets built around long chains of thought, which enables it to perform complex reasoning tasks effectively.

# **Key Features**

- **Foundation Model**: Built on the LLaMA autoregressive transformer architecture, with optimizations for enhanced performance.

- **Instruction Tuning**: Includes supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) to align model outputs with human preferences for helpfulness and safety.

- **Multilingual Support**: Designed to handle multiple languages, ensuring broad applicability across diverse linguistic contexts.

# **Training Approach**

1. **Synthetic Datasets**: Utilizes long chain-of-thought synthetic data to enhance reasoning capabilities.
2. **Supervised Fine-Tuning (SFT)**: Aligns the model to specific tasks through curated datasets (a minimal sketch follows this list).
3. **Reinforcement Learning from Human Feedback (RLHF)**: Ensures the model adheres to human values and safety guidelines through iterative training.
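
The SFT stage can be sketched with the TRL library. This is a minimal outline under stated assumptions, not the actual training recipe: the dataset file, its `"text"` column, and all hyperparameters are illustrative.

```python
# Minimal SFT sketch using TRL (assumes `pip install trl datasets`;
# field names such as max_seq_length vary slightly across trl versions).
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical synthetic chain-of-thought dataset with a "text" column.
dataset = load_dataset("json", data_files="cot_synthetic.jsonl", split="train")

trainer = SFTTrainer(
    model="prithivMLmods/Triangulum-10B",
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="triangulum-10b-sft",
        max_seq_length=4096,
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        bf16=True,
        num_train_epochs=1,
    ),
)
trainer.train()
```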

# **How to use with transformers**

Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or the Auto classes with the `generate()` function.

Make sure to update your transformers installation via `pip install --upgrade transformers`.

```python
import torch
from transformers import pipeline

model_id = "prithivMLmods/Triangulum-10B"

# Load the model in bfloat16 and spread it across available devices.
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Chat-style input: a list of role/content messages.
messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipe(
    messages,
    max_new_tokens=256,
)

# The pipeline returns the full conversation; the last message is the model's reply.
print(outputs[0]["generated_text"][-1])
```
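
The same conversation can also be run through the Auto classes for finer control over generation. A minimal sketch, assuming the hosted tokenizer ships a chat template; the German prompt is simply an illustrative multilingual example:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/Triangulum-10B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful multilingual assistant."},
    {"role": "user", "content": "Erkläre in zwei Sätzen, was ein Transformer ist."},
]

# Render the conversation with the model's chat template and tokenize it.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```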
# **Use Cases**

- Multilingual content generation
- Question answering and dialogue systems
- Text summarization and analysis
- Translation and localization tasks (see the sketch below)
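
For instance, the translation use case can be exercised with the same `pipe` object created in the transformers example above; the prompt wording is illustrative:

```python
# Reuses `pipe` from the "How to use with transformers" section.
messages = [
    {"role": "system", "content": "You are a translation assistant."},
    {"role": "user", "content": "Translate to French: The model supports multilingual reasoning."},
]
outputs = pipe(messages, max_new_tokens=128)
print(outputs[0]["generated_text"][-1]["content"])
```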
  
# **Technical Details**

Triangulum 10B employs a state-of-the-art autoregressive architecture inspired by LLaMA. The optimized transformer framework ensures both efficiency and scalability, making it suitable for a variety of use cases.
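
To inspect the architecture programmatically, the configuration can be loaded without downloading the weights; the printed values come from the hosted `config.json`:

```python
from transformers import AutoConfig

# Loads only the configuration file, not the model weights.
config = AutoConfig.from_pretrained("prithivMLmods/Triangulum-10B")
print(config.model_type)           # architecture family, e.g. "llama"
print(config.num_hidden_layers)    # transformer depth
print(config.hidden_size)          # embedding dimension
print(config.num_attention_heads)  # attention heads per layer
```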