Papers
arxiv:2410.00907

Addition is All You Need for Energy-efficient Language Models

Published on Oct 1
· Submitted by luohy on Oct 7
#1 Paper of the day
Authors:

Abstract

Large neural networks spend most computation on floating point tensor multiplications. In this work, we find that a floating point multiplier can be approximated by one integer adder with high precision. We propose the linear-complexity multiplication (L-Mul) algorithm that approximates floating point multiplication with integer addition operations. Compared to 8-bit floating point multiplication, the proposed method achieves higher precision while consuming significantly less bit-level computation. Since multiplying floating point numbers requires substantially more energy than integer addition, applying the L-Mul operation in tensor processing hardware can potentially cut the energy cost of element-wise floating point tensor multiplications by 95% and of dot products by 80%. We calculated the theoretical error expectation of L-Mul, and evaluated the algorithm on a wide range of textual, visual, and symbolic tasks, including natural language understanding, structural reasoning, mathematics, and commonsense question answering. Our numerical analysis experiments agree with the theoretical error estimation, which indicates that L-Mul with a 4-bit mantissa achieves precision comparable to float8_e4m3 multiplication, and L-Mul with a 3-bit mantissa outperforms float8_e5m2. Evaluation results on popular benchmarks show that directly applying L-Mul to the attention mechanism is almost lossless. We further show that replacing all floating point multiplications with 3-bit-mantissa L-Mul in a transformer model achieves precision equivalent to using float8_e4m3 as the accumulation precision in both fine-tuning and inference.

Community

Paper author Paper submitter

Implementing floating point multiplications with integer adders, improving computation efficiency, and reducing energy cost.



Great results, are you gonna release the code? @luohy


great stuff!

Also a typo on page 3, 2nd block of text: I believe "save size" should be "same size", right?

Nice idea! You could try combining it with the paper Scaling FP8 training to trillion-token LLMs to train models using far less energy than usual.


Great paper BitEnergy team! More power to energy efficient AI and edge computing ⚡
Here's a quick summary: https://soessentially.substack.com/p/gpu-gang-better-watch-out

Great work, thanks a lot! Here's my summary:

💥 L-Mul: Addition-Only Multiplication can slash computational costs by 80%!

BitEnergy AI researchers dropped a groundbreaking technique that could slash the energy use of transformer computations: their novel "linear-complexity multiplication" (L-Mul) algorithm approximates floating-point multiplication using energy-efficient integer additions instead of costly multiplications.

💡 Quick reminder on how floats are coded on 8 bits (FP8):
In the e4m3 FP8 standard, you encode a number as:
Sign (1 bit) | Exponent (4 bits) | Mantissa (3 bits)
Example: 0 (positive) | 1000 (8) | 101 (1/2 + 1/8 = 0.625)
Calculation: you add one to the mantissa fraction, then multiply by 2 raised to the power (exponent - bias), where the bias is 7 for e4m3:

โžก๏ธ You get (1 + 0.625) ร— 2^(8-7) = 3.25

Now back to the paper. Key insights:

โšก๏ธ Multiplication is extremely energy-intensive compared to addition. For 32-bit operations, multiplication (3.7 pJ) uses 37x more energy than addition (0.1 pJ)!

🧮 Traditional floating-point multiplication goes like this (noting xm the mantissa and xe the exponent): Mul(x,y) = (1 + xm) · 2^xe · (1 + ym) · 2^ye = (1 + xm + ym + xm · ym) · 2^(xe+ye)

💡 L-Mul cleverly approximates this as: L-Mul(x,y) = (1 + xm + ym + 2^-l(m)) · 2^(xe+ye), eliminating the costly xm · ym term

🔧 The l(m) term is adaptively set based on mantissa size for optimal accuracy

📊 Benchmarks on the Llama-3.1-8B-Instruct model show L-Mul preserves precision across various NLP tasks, with performance nearly identical to full BFloat16 precision

💬 Authors claim: "We can achieve the same model inference performance while reducing the energy cost of attention computations by 80%."

This breakthrough is still theoretical and would need implementation on dedicated hardware to confirm real-world gains, but it's a really exciting path for more sustainable AI! 🌱
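To make the two formulas above concrete, here is a small Python sketch (my own illustration, not the authors' code). It truncates mantissas to m bits and applies the L-Mul formula, assuming the paper's offset l(m) = m for m ≤ 3, 3 for m = 4, 4 for m > 4:

```python
import math

def l_of_m(m):
    # Assumed offset from the paper: l(m) = m if m <= 3, 3 if m = 4, 4 if m > 4
    if m <= 3:
        return m
    return 3 if m == 4 else 4

def l_mul(x, y, m=3):
    """Sketch of L-Mul for positive floats with m-bit mantissas.

    Models the arithmetic with Python floats; real hardware would operate
    directly on the bit patterns with integer adders.
    """
    fx, ex = math.frexp(x)            # x = fx * 2**ex, with fx in [0.5, 1)
    fy, ey = math.frexp(y)
    xm, xe = 2 * fx - 1, ex - 1       # rewrite as x = (1 + xm) * 2**xe
    ym, ye = 2 * fy - 1, ey - 1
    scale = 1 << m                    # truncate mantissas to m bits
    xm = math.floor(xm * scale) / scale
    ym = math.floor(ym * scale) / scale
    # L-Mul: drop the xm * ym cross term, add the constant 2**-l(m) instead
    return (1 + xm + ym + 2 ** -l_of_m(m)) * 2 ** (xe + ye)

print(l_mul(1.5, 1.75))   # 2.375, vs. the exact product 2.625
```

On this particular input pair the error is large because the dropped cross term xm · ym = 0.375 is replaced by 2^-3 = 0.125; the paper's error analysis concerns the expectation over typical mantissa values, where the approximation is much tighter.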

I got confused by the "no rounding needed" claim

Let's say x = 1.5, y = 1.75 then the result should be
(1 + x_m + y_m) × 2^(x_e + y_e) = (1 + 0.5 + 0.75) × 2^(0+0) = 2.25 × 2^0 = 1.125 × 2^1
Consider the uint add without rounding: x_m = (10...0)_2, y_m = (11...0)_2
x_m + y_m = (01...0)_2 with carry 1
the new mantissa (01...0)_2 represents 0.25
and the exponent is increased by the carry 1
Thus, the result without rounding will be
1.25 × 2^1

I think a right shift is required.
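The worked example above can be checked with integer mantissa arithmetic. A minimal sketch (my own check, not code from the paper), assuming 3-bit mantissas, both exponents zero, and looking only at the (1 + x_m + y_m) sum without the 2^-l(m) offset:

```python
M = 3                                  # mantissa bits
xm, ym = 0b100, 0b110                  # mantissa fractions 0.5 and 0.75

sig = (1 << M) + xm + ym               # fixed-point 1 + xm + ym = 18/8 = 2.25
exp = 0 + 0                            # xe + ye

# Reading only the low M bits as the new mantissa and feeding the carry into
# the exponent, with no shift, gives the wrong (1 + 0.25) * 2**1 = 2.5.
if sig >= (1 << (M + 1)):              # significand >= 2: overflow
    sig >>= 1                          # the right shift the comment asks for
    exp += 1

value = (sig / (1 << M)) * 2 ** exp
print(value)                           # 2.25, i.e. 1.125 * 2**1
```

Here the shifted-out bit happens to be 0, so no precision is lost; in general the right shift discards the lowest mantissa bit, which is where rounding enters.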



Models citing this paper 0

No model linking this paper


Datasets citing this paper 0

No dataset linking this paper


Spaces citing this paper 0

No Space linking this paper


Collections including this paper 19