How to Pair with Larger Models

#7
by windkkk - opened

This model is very popular. Congratulations to you, and thank you for your help. I have a question for you:

How should this model work with larger models? Could you provide a specific textual description of the process?
For example, should SmallThinker-3B be instructed to write out its thought processes (without writing down the answer) first, then let the larger model reference these thoughts to generate an answer?

Alternatively, could we not give any additional instructions and instead have SmallThinker-3B directly respond to the questions, followed by the larger model referencing those responses before generating the final answer?

I would appreciate it if you could outline two possible workflows or more specific prompts.

PowerInfer org

This is a very nice question. I believe one of the most straightforward approaches is to use the smaller model as a "draft model" for the larger model, i.e. speculative decoding: SmallThinker drafts the "easy" tokens and the larger model verifies them, which can directly improve inference speed. You can try this method with llama.cpp.
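To make the idea concrete, here is a toy sketch of the speculative-decoding loop (a simplification assuming greedy decoding; the deterministic functions `small_model` and `large_model` are illustrative stand-ins for SmallThinker-3B and the larger model, not real inference calls):

```python
def draft_tokens(small_model, prefix, k):
    """Let the small model greedily propose k draft tokens after `prefix`."""
    out = list(prefix)
    for _ in range(k):
        out.append(small_model(tuple(out)))
    return out[len(prefix):]

def speculative_step(large_model, small_model, prefix, k=4):
    """One speculative-decoding step: draft with the small model, verify
    with the large model, and keep the longest agreeing prefix plus one
    corrected (or bonus) token from the large model."""
    drafts = draft_tokens(small_model, prefix, k)
    accepted = []
    ctx = list(prefix)
    for t in drafts:
        target = large_model(tuple(ctx))
        if target == t:
            accepted.append(t)      # draft accepted for free
            ctx.append(t)
        else:
            accepted.append(target) # large model's correction; stop here
            return accepted
    accepted.append(large_model(tuple(ctx)))  # all drafts accepted: bonus token
    return accepted

# Toy models: the "correct" next token is len(context) % 5;
# the small model makes a mistake when the context has length 3.
large = lambda ctx: len(ctx) % 5
small = lambda ctx: 9 if len(ctx) == 3 else len(ctx) % 5

print(speculative_step(large, small, [0]))  # → [1, 2, 3]
```

The large model only needs to verify drafted tokens, which is what yields the speedup in real implementations such as llama.cpp's speculative example, where you pass the large model and a draft model together on the command line.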
Additionally, if we’re exploring how smaller and larger models can collaborate effectively, one possible method might be to package the smaller model’s response along with the original question and send them together to the larger model. This approach could allow the larger model to leverage the smaller model’s preliminary reasoning while refining and expanding upon it for the final output.
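A minimal sketch of that second workflow, packaging the smaller model's response with the original question (assumptions: `small_generate` and `large_generate` are placeholders for whatever inference calls you use, and the prompt wording is purely illustrative):

```python
def cascade_answer(question, small_generate, large_generate):
    """Get preliminary reasoning from the small model, then let the
    large model refine it into the final answer."""
    draft = small_generate(
        f"Question: {question}\n"
        "Think step by step, but do not state a final answer."
    )
    final = large_generate(
        "You are given a question and a smaller model's preliminary reasoning.\n"
        f"Question: {question}\n"
        f"Preliminary reasoning: {draft}\n"
        "Refine this reasoning and give the final answer."
    )
    return final
```

Whether the small model should withhold its answer (as in the first workflow from the question) or answer directly is just a change to the first prompt; the packaging step stays the same.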
So far, my focus has primarily been on using speculative decoding to accelerate the larger model’s inference process. I haven’t yet experimented with other methods of collaboration. Thank you for raising such an interesting question—it’s definitely worth exploring further.
