elijah0528 committed on
Commit a4e47a0 · verified · 1 Parent(s): 66fc902

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -1,4 +1,4 @@
  ### Talk-Tuah-1
- Talk-Tuah-1 is an 80 million parameter GPT trained on all of Hailey Welch's inspirational podcast 'Talk Tuah'. This SOTA frontier model is trained on 13 hours of 'Talk Tuah' on an A100 for an hour. The rationale was the discourse in the 'Talk Tuah' podcast is the most enlightened media that any human has created.
+ Talk-Tuah-1 is an 80 million parameter GPT trained on all of Hailey Welch's inspirational podcast 'Talk Tuah'. This SOTA frontier model is trained on 13 hours of 'Talk Tuah' on an A100 for ~30 minutes. The rationale was the discourse in the 'Talk Tuah' podcast is the most enlightened media that any human has created.
  Therefore, it should outperform any other LLM on any benchmark. With sufficient training and additional compute – Talk-Tuah-1 can outperform OpenAI and Anthropic's flagship models - o3 and sonnet.
  The architecture was adapted from Andrej Karpathy's nanogpt.