juewang committed
Commit d325d0d
1 Parent(s): ab8b5f7

Update README.md

Files changed (1)
1. README.md +1 -1
README.md CHANGED
@@ -198,7 +198,7 @@ widget:
 We present GPT-JT, a fork of GPT-J-6B, trained for 20,000 steps, that outperforms most 100B+ parameter models at classification and improves on most tasks relative to GPT-J-6B. GPT-JT was trained with a new decentralized algorithm on computers networked over slow 1 Gbps links.
 GPT-JT is a bidirectional dense model, trained with the UL2 objective on NI, P3, COT, and the Pile data.
 
-**Please check out our demo: [TOMA-app](https://huggingface.co/spaces/togethercomputer/TOMA-app).**
+**Please check out our [Online Demo](https://huggingface.co/spaces/togethercomputer/TOMA-app)!**
 
 # Quick Start
 ```python
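
The diff is truncated at the opening of the Quick Start code block. As a point of reference only, here is a minimal sketch of how a GPT-JT quick start might look with the Hugging Face `transformers` pipeline; the model id `togethercomputer/GPT-JT-6B-v1`, the prompt, and the generation settings are assumptions, not contents of this commit:

```python
# Minimal sketch, not the README's actual Quick Start code.
# The model id below is an assumption; substitute the repo's real id if it differs.
from transformers import pipeline

# "text-generation" wraps tokenizer and model loading in a single call.
generator = pipeline("text-generation", model="togethercomputer/GPT-JT-6B-v1")

# GPT-JT is instruction-tuned (NI, P3, COT), so a task-style prompt suits it.
prompt = (
    "Classify the sentiment of the review as positive or negative.\n"
    "Review: The movie was fantastic!\n"
    "Sentiment:"
)
print(generator(prompt, max_new_tokens=5)[0]["generated_text"])
```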