This model is a fine-tuned LLaMA (7B) model, released under a non-commercial license (see the LICENSE file). You should only use this model after having been granted access to the base LLaMA model by filling out this form.

This model is a semantic parser for WikiData. Refer to the following for more information:

GitHub repository: https://github.com/stanford-oval/wikidata-emnlp23

Paper: https://aclanthology.org/2023.emnlp-main.353/


This model is trained on the WikiWebQuestions dataset, the QALD-7 dataset, and the Stanford Alpaca dataset.
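A minimal usage sketch with the Hugging Face `transformers` library, assuming the checkpoint follows the standard causal-LM layout. The model path and prompt template below are hypothetical placeholders; consult the GitHub repository above for the exact prompt format used in training.

```python
def build_prompt(question: str) -> str:
    """Format a natural-language question for the semantic parser.

    The instruction wording here is illustrative only, not the
    official template used to fine-tune this model.
    """
    return (
        "Translate the following question into a SPARQL query over Wikidata.\n"
        f"Question: {question}\n"
        "SPARQL:"
    )


if __name__ == "__main__":
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Placeholder path: point this at the downloaded fine-tuned weights.
    model_id = "path/to/wikisp-llama-7b"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )

    prompt = build_prompt("Who is the president of France?")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    # Decode only the newly generated tokens (the predicted SPARQL).
    print(tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    ))
```

Greedy decoding (`do_sample=False`) is a reasonable default for semantic parsing, where the output must be a syntactically valid query rather than diverse text.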

Model size: 6.74B parameters (FP16, Safetensors format).