Constant folding

In computer science, constant folding is a compiler optimization in which an arithmetic instruction that always produces the same result is replaced by that result, computed once at compile time. The optimization can only be performed when the instructions in question can be shown at compile time to always produce the same value. In some cases, constant folding is closely related to reduction in strength optimizations.

Constant folding is most easily implemented on a directed acyclic graph (DAG) intermediate representation, though it can be performed in almost any stage of compilation, even in a peephole optimizer. Basically, the compiler seeks any operation that has constant operands and no side effects, computes the result using the arithmetic of the target machine, and replaces the entire expression with instructions that load the result.
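The replacement step can be sketched over a toy expression tree. This is a minimal illustration, not taken from any particular compiler; the tuple-based node representation and the fold function are hypothetical.

```python
import operator

# Operators that are safe to fold: deterministic and free of side effects.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def fold(node):
    """Recursively replace constant-operand operations with their result.

    A node is either a leaf (a number, or a string naming a variable)
    or a tuple (op, left, right).
    """
    if not isinstance(node, tuple):
        return node  # leaf: constant or variable
    op, left, right = node
    left, right = fold(left), fold(right)
    # Fold only when both operands are compile-time constants.
    if isinstance(left, (int, float)) and isinstance(right, (int, float)):
        return OPS[op](left, right)
    return (op, left, right)

# (2 * 3) + x folds to 6 + x: the subtree with constant operands is
# replaced by its result, but the variable part is left alone.
print(fold(("+", ("*", 2, 3), "x")))  # ('+', 6, 'x')
```

A real compiler applies the same check to every node of its intermediate representation, typically rerunning until no more folds apply.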

Why constant folding is effective

In many circumstances, the compiler may emit instructions that can be simplified by constant folding. One common source is array index expressions. For instance, suppose we had an ${\displaystyle N\times M}$ array of integers stored in column-major order, with ${\displaystyle S_{int}}$ bytes per integer. Given the array's base address ${\displaystyle B}$, to select the element in the ${\displaystyle i}$th row and the ${\displaystyle j}$th column, the compiler generates instructions to calculate the address as follows:

${\displaystyle Address=B+i\times S_{int}+j\times N\times S_{int}}$

Suppose that, throughout the execution of a program, the row index ${\displaystyle i}$ or the column index ${\displaystyle j}$ (or both) is constant; then the expression can be simplified at compile time. For instance, if we choose a constant row ${\displaystyle i_{K}}$,

1. ${\displaystyle Address=B+i_{K}\times S_{int}+j\times N\times S_{int}}$
2. ${\displaystyle Address=B+X+j\times N\times S_{int}}$ for ${\displaystyle X=i_{K}\times S_{int}}$
3. ${\displaystyle Address=B+X+j\times Y}$ for ${\displaystyle Y=N\times S_{int}}$
4. ${\displaystyle Address=Z+j\times Y}$ for ${\displaystyle Z=B+X}$

Thus, three multiplication operations and two addition operations are replaced by a single multiplication and a single addition. Additionally, since data types generally occupy a power-of-two number of bytes, reduction in strength optimizations can generally improve this code further, for example by replacing the remaining multiplication with a shift.
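The folding above can be checked numerically. The concrete values below (base address, dimensions, element size, and the constant row) are hypothetical, chosen only to verify that the folded form computes the same address:

```python
# Hypothetical values: a column-major array with N = 10 rows,
# S_int = 4 bytes per integer, base address B = 1000, constant row i_K = 3.
B, N, S_int = 1000, 10, 4
i_K = 3

# Constants the compiler would precompute at compile time:
X = i_K * S_int  # step 2
Y = N * S_int    # step 3
Z = B + X        # step 4

# The folded expression must agree with the original for every column j.
for j in range(5):
    assert B + i_K * S_int + j * N * S_int == Z + j * Y

print(Z, Y)  # 1012 40
```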

Constant folding with associativity

Suppose that a computer program contains an expression of the form ${\displaystyle A\times X\times B}$, where A and B are constants and X is a variable. During parsing, the compiler applies the language's rules of associativity and interprets this expression as either ${\displaystyle (A\times X)\times B}$ or ${\displaystyle A\times (X\times B)}$. In either case, a naive implementation of constant folding cannot simplify it, since neither multiplication has two constant operands.

A less naive implementation of constant folding can fold this expression by rewriting it as ${\displaystyle (A\times B)\times X}$. When A, B and X are floating-point numbers, however, this rewriting can change the result, because floating-point multiplication is not associative: ${\displaystyle (A\times X)\times B}$ and ${\displaystyle (A\times B)\times X}$ are generally not exactly equal.
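The discrepancy is easy to demonstrate. The sketch below uses overflow to make the difference between the two groupings dramatic; in typical cases the two results differ only in the last few bits. The values are hypothetical:

```python
import math

# A * X * B with A and B constant. Reassociating to (A * B) * X changes
# the result here: A * X overflows to infinity, while A * B stays in range.
A, B = 1e308, 1e-2
X = 10.0

as_parsed = (A * X) * B  # A * X overflows: the result is inf
folded = (A * B) * X     # folding A * B first keeps every step finite

print(math.isinf(as_parsed), math.isfinite(folded))  # True True
```

Because of cases like this, compilers typically reassociate floating-point expressions only when the programmer explicitly permits it (for example, under a "fast math" option).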

Common pitfalls

In cross compilers, the host machine on which the compiler runs and the target machine for which it generates code often perform arithmetic to different precisions, or with different overflow and rounding behavior. In this case, it is critical that the compiler perform the folded arithmetic in the precision of the target machine; otherwise the folded constant may differ from the value the program would have computed at run time.
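As a sketch of what performing arithmetic in the target's precision means, the snippet below folds an addition the way a hypothetical 32-bit two's-complement target would, even though the host language (Python here) has arbitrary-precision integers:

```python
def fold_target_add(a, b, bits=32):
    """Add two constants the way a two's-complement target would.

    The result is masked to the target's word size and then
    reinterpreted as a signed integer, mimicking wraparound.
    """
    mask = (1 << bits) - 1
    r = (a + b) & mask
    if r >= 1 << (bits - 1):  # reinterpret as signed
        r -= 1 << bits
    return r

# 2**31 - 1 + 1 wraps to the most negative 32-bit integer on the target,
# even though the host could represent 2**31 exactly.
print(fold_target_add(2**31 - 1, 1))  # -2147483648
```

A cross compiler that naively folded this addition in host arithmetic would embed the constant 2147483648 in the generated code, which the target program could never have computed itself.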