Kaito Sugimoto

kaisugi

AI & ML interests

Japanese LLMs

Recent Activity

published a Space 2 days ago
kaisugi/NLP2025_title_search
updated a Space 2 days ago
kaisugi/NLP2025_title_search
upvoted a collection 10 days ago
TinySwallow

Organizations

Aizawa Laboratory at NII · Team Hatakeyama · Hugging Face Discord Community

kaisugi's activity

reacted to lianghsun's post with 👍 24 days ago
🖖 Let me introduce the work I've done over the past three months: Llama-3.2-Taiwan-3B and Llama-3.2-Taiwan-3B-Instruct, now open-sourced on 🤗 Hugging Face.

๐—น๐—ถ๐—ฎ๐—ป๐—ด๐—ต๐˜€๐˜‚๐—ป/๐—Ÿ๐—น๐—ฎ๐—บ๐—ฎ-๐Ÿฏ.๐Ÿฎ-๐—ง๐—ฎ๐—ถ๐˜„๐—ฎ๐—ป-๐Ÿฏ๐—•: This model is built on top of ๐—บ๐—ฒ๐˜๐—ฎ-๐—น๐—น๐—ฎ๐—บ๐—ฎ/๐—Ÿ๐—น๐—ฎ๐—บ๐—ฎ-๐Ÿฏ.๐Ÿฎ-๐Ÿฏ๐—• with continual pretraining. The training dataset consists of a mixture of Traditional Chinese and multilingual texts in specific proportions, including 20B tokens of Traditional Chinese text.

๐—น๐—ถ๐—ฎ๐—ป๐—ด๐—ต๐˜€๐˜‚๐—ป/๐—Ÿ๐—น๐—ฎ๐—บ๐—ฎ-๐Ÿฏ.๐Ÿฎ-๐—ง๐—ฎ๐—ถ๐˜„๐—ฎ๐—ป-๐Ÿฏ๐—•-๐—œ๐—ป๐˜€๐˜๐—ฟ๐˜‚๐—ฐ๐˜: This is a fine-tuned conversational model based on the foundation model.

This Llama-3.2-Taiwan open-source project is currently a one-person effort (yes, I did everything from text preparation onward, so exhausting!). If you're interested, feel free to join the Discord server for discussions.

๐Ÿ…ฑ๐Ÿ…ด๐Ÿ…ฝ๐Ÿ…ฒ๐Ÿ…ท๐Ÿ…ผ๐Ÿ…ฐ๐Ÿ†๐Ÿ…บ๐Ÿ…ธ๐Ÿ…ฝ๐Ÿ…ถ

The evaluation was conducted using ikala/tmmluplus, though the README page does not yet reflect the latest results. Performance is close to the previous versions, which suggests that further improvements may require adding more domain-specific knowledge to the training data.
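For readers curious what such an evaluation looks like in practice, here is a minimal, hedged sketch using the datasets and transformers libraries. The subject config name, split, and column layout (question, A–D options, answer letter) are assumptions about ikala/tmmluplus made for illustration, not details confirmed in the post.

```python
# Minimal zero-shot multiple-choice evaluation sketch.
# Assumptions: the "engineering_math" config, "test" split, and the
# question/A/B/C/D/answer columns are illustrative, not confirmed.
from datasets import load_dataset
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="lianghsun/Llama-3.2-Taiwan-3B-Instruct",
    device_map="auto",
)

ds = load_dataset("ikala/tmmluplus", "engineering_math", split="test")

correct = 0
for row in ds:
    prompt = (
        f"{row['question']}\n"
        f"A. {row['A']}\nB. {row['B']}\nC. {row['C']}\nD. {row['D']}\n"
        "請只回答選項字母："  # "Answer with the option letter only:"
    )
    out = generator(prompt, max_new_tokens=4, do_sample=False)[0]["generated_text"]
    # The pipeline echoes the prompt by default; take the first A-D letter after it.
    pred = next((c for c in out[len(prompt):] if c in "ABCD"), None)
    correct += int(pred == row["answer"])

print(f"accuracy: {correct / len(ds):.3f}")
```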

A CALL FOR SUPPORT

If anyone is willing to provide compute resources, it would be greatly appreciated and would help this project continue to grow. 💪

---
๐Ÿ”๏ธ Foundation model: lianghsun/Llama-3.2-Taiwan-3B
๐Ÿค– Instruction model: lianghsun/Llama-3.2-Taiwan-3B-Instruct
โšก GGUF: lianghsun/Llama-3.2-Taiwan-3B-Instruct-GGUF
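As a quick usage sketch for the Instruct checkpoint, assuming it ships a standard Llama-3.2-style chat template (this is illustrative, not an official snippet from the model card):

```python
# Illustrative chat usage of the Instruct checkpoint (assumed chat template).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lianghsun/Llama-3.2-Taiwan-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Example Traditional Chinese prompt: "Introduce Taipei's night markets."
messages = [{"role": "user", "content": "用繁體中文介紹一下台北的夜市。"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```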
upvoted 2 articles about 1 month ago

Navigating Korean LLM Research #2: Evaluation Tools

By amphora

Navigating Korean LLM Research #1: Models

By amphora