arxiv:2503.18908

FFN Fusion: Rethinking Sequential Computation in Large Language Models

Published on Mar 24
· Submitted by akhaliq on Mar 25

Abstract

We introduce FFN Fusion, an architectural optimization technique that reduces sequential computation in large language models by identifying and exploiting natural opportunities for parallelization. Our key insight is that sequences of Feed-Forward Network (FFN) layers, particularly those remaining after the removal of specific attention layers, can often be parallelized with minimal accuracy impact. We develop a principled methodology for identifying and fusing such sequences, transforming them into parallel operations that significantly reduce inference latency while preserving model behavior. Applying these techniques to Llama-3.1-405B-Instruct, we create Llama-Nemotron-Ultra-253B-Base (Ultra-253B-Base), an efficient and soon-to-be publicly available model that achieves a 1.71X speedup in inference latency and 35X lower per-token cost while maintaining strong performance across benchmarks. Through extensive experiments on models from 49B to 253B parameters, we demonstrate that FFN Fusion becomes increasingly effective at larger scales and can complement existing optimization techniques like quantization and pruning. Most intriguingly, we find that even full transformer blocks containing both attention and FFN layers can sometimes be parallelized, suggesting new directions for neural architecture design.
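
As a rough illustration of the idea in the abstract: with residual connections, a run of consecutive FFN blocks can be approximated by feeding the same input to every block and summing the outputs, which is equivalent to a single wider FFN whose projection matrices are concatenated. The sketch below is a minimal, hypothetical PyTorch version of that construction; the FFN class, the ReLU activation, and the fuse_ffns helper are simplifications for illustration (Llama-style models use gated SwiGLU FFNs) and are not the paper's actual code.

```python
import torch
import torch.nn as nn

class FFN(nn.Module):
    """Simplified (non-gated) transformer FFN; a stand-in for the
    SwiGLU FFNs used in Llama-style models."""
    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.up = nn.Linear(d_model, d_ff, bias=False)
        self.down = nn.Linear(d_ff, d_model, bias=False)

    def forward(self, x):
        return self.down(torch.relu(self.up(x)))

def fuse_ffns(ffns):
    """Fuse consecutive FFN blocks into one wider FFN.

    Because each block only adds its output to the residual stream,
    evaluating all blocks on the same input and summing their outputs
    is exactly one wider FFN whose up-projection weights are concatenated
    along the rows and whose down-projection weights are concatenated
    along the columns.
    """
    d_model = ffns[0].up.in_features
    d_ff_total = sum(f.up.out_features for f in ffns)
    fused = FFN(d_model, d_ff_total)
    with torch.no_grad():
        fused.up.weight.copy_(torch.cat([f.up.weight for f in ffns], dim=0))
        fused.down.weight.copy_(torch.cat([f.down.weight for f in ffns], dim=1))
    return fused

# Sequential computation with residuals vs. the fused parallel approximation.
ffns = [FFN(64, 256) for _ in range(3)]
x = torch.randn(2, 64)

y_seq = x
for f in ffns:
    y_seq = y_seq + f(y_seq)        # each FFN depends on the previous one's output

y_par = x + fuse_ffns(ffns)(x)      # one matmul pair, no sequential dependency
```

The two outputs coincide only to the extent that later blocks are insensitive to the contributions of earlier ones, which is why the abstract emphasizes a principled methodology for identifying which sequences can be fused with minimal accuracy impact rather than fusing indiscriminately.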

Community


Great work! The results look very impressive. I noticed that Section 6 (Block Parallelization) discusses an approach that is similar to what we explored in "CQIL: Inference Latency Optimization with Concurrent Computation of Quasi-Independent Layers". It's exciting to see the idea of parallelization being further developed.😄

35 times lower cost is insane. Looking forward to seeing the model on HF.

