prithivMLmods committed
Commit d5473c2 · verified · 1 Parent(s): 7c1457e

Update README.md

Files changed (1)
  1. README.md +57 -1
README.md CHANGED
@@ -30,4 +30,60 @@ _/ |_ _______ |__|_____ ____ ____ __ __ | | __ __ _____
  | | | | \/| | / __ \_| | \/ /_/ >| | /| |__| | /| Y Y \
  |__| |__| |__|(____ /|___| /\___ / |____/ |____/|____/ |__|_| /
  \/ \//_____/ \/
- </pre>
+ </pre>
+
+ # **Triangulum 10B: Multilingual Large Language Models (LLMs)**
+
+ Triangulum 10B is a collection of pretrained and instruction-tuned generative models, designed for multilingual applications. These models are trained using synthetic datasets based on long chains of thought, enabling them to perform complex reasoning tasks effectively.
+
+ # **Key Features**
+
+ - **Foundation Model**: Built upon LLaMA's autoregressive language model, leveraging an optimized transformer architecture for enhanced performance.
+
+ - **Instruction Tuning**: Includes supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align model outputs with human preferences for helpfulness and safety.
+
+ - **Multilingual Support**: Designed to handle multiple languages, ensuring broad applicability across diverse linguistic contexts.
+
+ # **Training Approach**
+
+ 1. **Synthetic Datasets**: Utilizes long chain-of-thought synthetic data to enhance reasoning capabilities.
+ 2. **Supervised Fine-Tuning (SFT)**: Aligns the model to specific tasks through curated datasets.
+ 3. **Reinforcement Learning with Human Feedback (RLHF)**: Ensures the model adheres to human values and safety guidelines through iterative training processes.
+
+ # **How to use with transformers**
+
+ Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or the Auto classes together with the `generate()` function (a sketch using the Auto classes follows the pipeline example below).
+
+ Make sure to update your transformers installation via `pip install --upgrade transformers`.
+
+ ```python
+ import torch
+ from transformers import pipeline
+
+ model_id = "prithivMLmods/Triangulum-10B"
+ pipe = pipeline(
+     "text-generation",
+     model=model_id,
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+ )
+ messages = [
+     {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
+     {"role": "user", "content": "Who are you?"},
+ ]
+ outputs = pipe(
+     messages,
+     max_new_tokens=256,
+ )
+ print(outputs[0]["generated_text"][-1])
+ ```
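+
+ The same conversation can also be run with the Auto classes and `generate()`, as noted above. The following is a minimal sketch rather than an official recipe from this model card: it assumes the tokenizer ships a chat template (standard for LLaMA-style instruction-tuned checkpoints), and `max_new_tokens=256` is illustrative only.
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "prithivMLmods/Triangulum-10B"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+ )
+
+ messages = [
+     {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
+     {"role": "user", "content": "Who are you?"},
+ ]
+
+ # Render the chat with the model's template and tokenize it.
+ input_ids = tokenizer.apply_chat_template(
+     messages,
+     add_generation_prompt=True,
+     return_tensors="pt",
+ ).to(model.device)
+
+ # Generate the reply and decode only the newly produced tokens.
+ output_ids = model.generate(input_ids, max_new_tokens=256)
+ print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
+ ```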
+
+ # **Use Cases**
+
+ - Multilingual content generation (see the sketch after this list)
+ - Question answering and dialogue systems
+ - Text summarization and analysis
+ - Translation and localization tasks
+
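+ As an illustration of multilingual content generation, the `pipe` object created in the pipeline example above can simply be prompted in another language. This is a minimal sketch; the French prompt and the generation settings are illustrative and not taken from the model card.
+
+ ```python
+ # Assumes `pipe` from the pipeline example above is still in scope.
+ # System prompt: "You are a helpful assistant who always answers in French."
+ # User prompt:   "Summarize in two sentences what a language model is."
+ messages = [
+     {"role": "system", "content": "Tu es un assistant utile qui répond toujours en français."},
+     {"role": "user", "content": "Résume en deux phrases ce qu'est un modèle de langage."},
+ ]
+ outputs = pipe(messages, max_new_tokens=200)
+
+ # The pipeline returns the full chat; the last message is the assistant's reply.
+ print(outputs[0]["generated_text"][-1]["content"])
+ ```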
+
+ # **Technical Details**
+
+ Triangulum 10B employs an autoregressive, decoder-only transformer architecture inspired by LLaMA. The optimized transformer framework keeps inference efficient and scalable, making the model suitable for a variety of use cases.
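+
+ The concrete architectural hyperparameters (hidden size, number of layers, attention heads, vocabulary size) live in the repository's `config.json` and can be inspected without downloading the weights. A minimal sketch, assuming the repo ships a standard LLaMA-style configuration with the usual field names:
+
+ ```python
+ from transformers import AutoConfig
+
+ # Loads only config.json from the Hub, not the model weights.
+ config = AutoConfig.from_pretrained("prithivMLmods/Triangulum-10B")
+
+ # Typical LLaMA-style fields; the actual values come from the repository itself.
+ print(config.model_type)
+ print(config.hidden_size, config.num_hidden_layers, config.num_attention_heads)
+ print(config.vocab_size, config.max_position_embeddings)
+ ```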