Function calling and translation abilities are weaker than Mixtral 8x7B
#11 opened by bingw5
In my tests, this model generally surpasses Mixtral 8x7B, especially in math and reasoning. However, it is weaker than Mixtral at function calling. I sent the same request body to both and expected the model to generate a proper JSON response representing a function call. This model responds with "I am not able to access the internet...", while Mixtral 8x7B succeeds every time. Translation from Chinese to English also seems slightly weaker than Mixtral 8x7B; this model fails to translate some terms and rare words.
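For context, below is a minimal sketch of the kind of function-calling request I mean. The endpoint URL, model name, and `get_weather` tool schema are illustrative assumptions in an OpenAI-compatible format, not the exact request body used in my test.

```python
# Sketch of a function-calling request; names and endpoint are assumptions.
import json
import urllib.request

payload = {
    "model": "mixtral-8x7b-instruct",  # hypothetical model name
    "messages": [
        {"role": "user", "content": "What is the weather in Shanghai today?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool
                "description": "Look up the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",  # assumed OpenAI-compatible endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read())

# A model that handles function calling well should return a tool_calls entry
# (a structured JSON function call) rather than a plain-text refusal such as
# "I am not able to access the internet...".
print(json.dumps(body["choices"][0]["message"], indent=2))
```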
In conclusion, this model is really amazing for most scenarios; you did a great job! I hope new versions can address these two weaknesses and make it even better.
Many thanks for your feedback.