Numerical analysis is the study of finding or approximating the solution to a mathematical problem using a computer. Differential equations and linear algebra are two of the largest fields of study within this area.
Accuracy and Error
Errors are introduced whenever a computer performs calculations involving real numbers, since a computer can only store a real number to a finite number of significant figures. These errors can accumulate and grow over the course of many calculations.
Some algorithms magnify these errors more than others. For example, suppose we want to calculate 2,000,001 + 1/3 - 2,000,000. For the purposes of this example, we will work in the usual base ten rather than in binary, and we will suppose that we are accurate to only eight significant figures. Now, how shall we do this calculation? First, we must do the division to find that 1/3 = 0.33333333 (to eight significant figures). Suppose that we next choose to add 2,000,001 and 0.33333333 to get 2,000,001.3 (again, to eight significant figures). Now we subtract 2,000,000 to get 1.3000000. This answer is accurate to only two significant figures!

On the other hand, if the algorithm were somehow set up to subtract the 2,000,000 before adding 0.33333333, the final answer would be 1.3333333, accurate to eight significant figures. Thus, algorithms are often constructed so as to avoid adding numbers of wildly different magnitude.
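The example above can be reproduced directly in Python using the standard-library decimal module, which lets us set the working precision to eight significant figures (the variable names here are illustrative, not from the text):

```python
from decimal import Decimal, getcontext

# Simulate base-ten arithmetic accurate to eight significant figures.
getcontext().prec = 8

third = Decimal(1) / Decimal(3)          # 0.33333333 to eight figures

# Poor ordering: add the small number to the large one first,
# losing digits of 1/3 in the rounding to eight figures.
bad = (Decimal(2_000_001) + third) - Decimal(2_000_000)

# Better ordering: cancel the large numbers first, then add 1/3.
good = (Decimal(2_000_001) - Decimal(2_000_000)) + third

print(bad)   # 1.3        (accurate to only two significant figures)
print(good)  # 1.3333333  (accurate to all eight significant figures)
```

Running this shows the first ordering returning 1.3 and the second 1.3333333, exactly as in the worked example: the order of operations, not the precision itself, determines how much accuracy survives.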