Does the quality of the results depend on the LLM model used?



The online Gradio app uses this LLM: "LongWriter-glm4-9b"
https://huggingface.co/THUDM/LongWriter-glm4-9b

But there is another one: "LongWriter-llama3.1-8b"
https://huggingface.co/THUDM/LongWriter-llama3.1-8b

So I wonder which of the two LLMs gives the better results.

I also wonder whether one can run the LongWriter LLMs on a free Google Colab instance.
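For the Colab question, here is a rough, untested sketch of what loading the 9B model on a free T4 (roughly 16 GB of VRAM) might look like. It assumes 4-bit quantization via bitsandbytes is needed to fit the weights in memory, and it uses the generic transformers chat-template API; the exact generation call shown on the model card (which relies on the model's remote code) may differ, so that snippet should take precedence.

```python
# Sketch only: loading LongWriter-glm4-9b on a free Colab T4 is an assumption, not a
# verified setup. A 9B model in fp16 needs ~18 GB, so 4-bit quantization is used here.
# pip install transformers accelerate bitsandbytes
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "THUDM/LongWriter-glm4-9b"

# 4-bit quantization; float16 compute because the T4 has no bfloat16 support.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
    trust_remote_code=True,
)

# Hypothetical prompt; long outputs will be slow on a free-tier GPU.
prompt = "Write a 5000-word essay on the history of the printing press."
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    output = model.generate(inputs, max_new_tokens=8192, do_sample=True, temperature=0.5)
print(tokenizer.decode(output[0][inputs.shape[1]:], skip_special_tokens=True))
```

Whether 4-bit quantization noticeably hurts the long-form output quality is another open question.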
