The most recent LLaMA-3-70B Instruct shows the best performance in zero-shot mode on Targeted Sentiment Analysis (TSA) 🔥 In particular, we experiment with sentence-level analysis, with sentences fetched from the Wiki articles that form the RuSentNE-2023 dataset.
The key takeaways from LLaMA-3-70B performance on the original (🇷🇺) texts and their English translations are as follows:
1. Outperforms ChatGPT-4 and all predecessors on non-English texts (🇷🇺)
2. Surpasses all ChatGPT-3.5 models / performs nearly as well as ChatGPT-4 on English texts 🥳
Benchmark: https://github.com/nicolay-r/RuSentNE-LLM-Benchmark
Model: meta-llama/Meta-Llama-3-70B-Instruct
Dataset: https://github.com/dialogue-evaluation/RuSentNE-evaluation
Related paper: Large Language Models in Targeted Sentiment Analysis (2404.12342)
Collection: https://huggingface.co/collections/nicolay-r/sentiment-analysis-665ba391e0eba729021ea101
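For readers curious what zero-shot TSA looks like in practice, here is a minimal sketch of how such an evaluation can be wired up. The prompt wording, label set, and helper names are my own illustrative assumptions, not the exact setup from the benchmark repository above.

```python
# Hypothetical sketch of a zero-shot targeted-sentiment setup:
# build an instruction asking the model for the attitude toward a
# named entity, then map the free-form reply onto three classes.
# The prompt template and label set are assumptions for illustration.

LABELS = ("positive", "negative", "neutral")

def build_prompt(sentence: str, entity: str) -> str:
    """Compose a zero-shot targeted-sentiment instruction for an LLM."""
    return (
        "What is the attitude of the sentence below toward "
        f'"{entity}"? Answer with one word: positive, negative, or neutral.\n'
        f"Sentence: {sentence}"
    )

def parse_label(reply: str) -> str:
    """Map a free-form model reply onto the three-class label set."""
    low = reply.lower()
    for label in LABELS:
        if label in low:
            return label
    return "neutral"  # fall back when the reply is off-format

# The prompt string would then be sent to the instruct model
# (e.g. via an inference endpoint) and the reply fed to parse_label.
```

The parsing step matters in zero-shot mode: instruct models often wrap the label in extra words ("The attitude is negative."), so a tolerant substring match keeps the evaluation from discarding otherwise correct answers.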