Update `huggingface_hub` version to avoid 404 Client Error
Resolves #2
Hello!
## Pull Request overview
- Update `huggingface_hub` from 0.0.16 to 0.13.4, as 0.0.16 is outdated and won't work anymore (see the pin sketch just below).
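Concretely, the bump itself is just the dependency pin in the template's `requirements.txt`; presumably something along these lines (exact line assumed, not copied from the diff):

```
huggingface_hub==0.13.4
```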
## Details
The `HfApi().login(username=..., password=...)` approach is very much outdated. Nowadays, `huggingface_hub` asks for an HF token for everything, i.e. one from https://huggingface.co/settings/tokens. So, RAFT should be updated accordingly, too.

Beyond that, I had to modify some other things: no more `HfFolder` to save a token, and `name` and `organization` are now combined into `repo_id`.
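For reference, a rough before/after sketch of the migration in user code; the username, repo name, and token below are placeholders, and the old calls are reconstructed from memory rather than copied from the template:

```python
from huggingface_hub import HfApi

# Old flow (huggingface_hub 0.0.16) -- no longer works on current releases:
#   token = HfApi().login(username="...", password="...")
#   HfFolder.save_token(token)
#   HfApi().create_repo(token, name="my-raft-submission", organization="...")

# New flow (huggingface_hub 0.13.4): authenticate with an access token from
# https://huggingface.co/settings/tokens, and address the repo with a single
# repo_id ("<namespace>/<repo_name>") instead of separate name/organization.
api = HfApi()
api.create_repo(
    repo_id="your-username/my-raft-submission",  # placeholder repo_id
    repo_type="dataset",
    private=True,
    token="hf_...",  # placeholder access token
)
```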
## Before this PR
See #2
## After this PR
Note that I'm using my local version of `raft-submission` here, rather than `cookiecutter git+https://huggingface.co/datasets/ought/raft-submission` like in #2.
```
> cookiecutter raft-submission
hf_hub_username [huggingface]: tomaarsen
hf_access_token [hf_access_token]: [redacted]
repo_name [my-raft-submission]: [redacted]
Cloning https://huggingface.co/datasets/tomaarsen/[redacted] into local empty directory.
From https://huggingface.co/datasets/tomaarsen/[redacted]
 * branch            main       -> FETCH_HEAD
Already up to date.
[main fbe2380] Add template files
 28 files changed, 29640 insertions(+)
 create mode 100644 .gitignore
 create mode 100644 LICENSE
 create mode 100644 README.md
 create mode 100644 [redacted].py
 create mode 100644 cli.py
 create mode 100644 data/ade_corpus_v2/predictions.csv
 create mode 100644 data/ade_corpus_v2/task.json
 create mode 100644 data/banking_77/predictions.csv
 create mode 100644 data/banking_77/task.json
 create mode 100644 data/neurips_impact_statement_risks/predictions.csv
 create mode 100644 data/neurips_impact_statement_risks/task.json
 create mode 100644 data/one_stop_english/predictions.csv
 create mode 100644 data/one_stop_english/task.json
 create mode 100644 data/overruling/predictions.csv
 create mode 100644 data/overruling/task.json
 create mode 100644 data/semiconductor_org_types/predictions.csv
 create mode 100644 data/semiconductor_org_types/task.json
 create mode 100644 data/systematic_review_inclusion/predictions.csv
 create mode 100644 data/systematic_review_inclusion/task.json
 create mode 100644 data/tai_safety_research/predictions.csv
 create mode 100644 data/tai_safety_research/task.json
 create mode 100644 data/terms_of_service/predictions.csv
 create mode 100644 data/terms_of_service/task.json
 create mode 100644 data/tweet_eval_hate/predictions.csv
 create mode 100644 data/tweet_eval_hate/task.json
 create mode 100644 data/twitter_complaints/predictions.csv
 create mode 100644 data/twitter_complaints/task.json
 create mode 100644 requirements.txt
Enumerating objects: 43, done.
Counting objects: 100% (43/43), done.
Delta compression using up to 24 threads
Compressing objects: 100% (41/41), done.
Writing objects: 100% (42/42), 101.36 KiB | 14.48 MiB/s, done.
Total 42 (delta 0), reused 0 (delta 0), pack-reused 0
To https://huggingface.co/datasets/tomaarsen/[redacted]
   d69d13e..fbe2380  main -> main
```
## Note
I must say that I haven't submitted anything with this, but my local folder and my private Dataset on the Hub both look correct.
I would like to ask @koaning if he could try out this PR.
- Tom Aarsen
For those who want to try this out, you can follow the steps from this gist:
https://gist.github.com/tomaarsen/24af24c544d4ce367429a834032a1771
- Tom Aarsen
It seems like this PR indeed works: LinkedIn link
Thanks a lot for the fix, @tomaarsen!