# Function (mathematics)


The mathematical concept of a function (also called a mapping or map) expresses dependence between two quantities, one of which is given (the independent variable, argument of the function, or its "input") and the other (the dependent variable, value of the function, or "output") is uniquely defined by the input.

## The functional concept

A function consists of a mathematical rule and two sets, called the domain and the range. The rule maps each object in the domain to a corresponding object in the range. The object in the domain is called the argument to the function and the object in the range is called the value or image of the function. It is worth noting that a function always maps each argument to a uniquely defined value, so a function cannot give two different values for the same argument.

Example. Let f be the function which takes a triangle (in the plane) and returns its area. The domain of this function is the set of triangles, and the range is the set of positive real numbers (we are not interested in negative areas). Functions are often written with their argument enclosed in parentheses, so if T is a triangle with area 32, this relation might be written

${\displaystyle f(T)=32\!}$

which would be read out as f of T equals 32.
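For illustration, the triangle-area function f can be sketched in Python. The representation of a triangle by three vertices and the name `triangle_area` are choices made here, not part of the mathematical definition.

```python
# Minimal sketch: a function maps each argument (a triangle) to a unique value (its area).
# A triangle is represented here by its three (x, y) vertices.

def triangle_area(vertices):
    """Area of a triangle given by three (x, y) vertices (shoelace formula)."""
    (x1, y1), (x2, y2), (x3, y3) = vertices
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

T = [(0, 0), (8, 0), (0, 8)]   # a right triangle with legs of length 8
print(triangle_area(T))        # f(T) = 32.0
```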

Often, functions describe relations between numbers, vectors or other mathematical objects. Examples of elementary functions include the sine function and the logarithmic function, which (denoting their argument as x) are written sin(x) and log(x) respectively. We shall define these functions and note some of their characteristics.

Example. Consider the equation ${\displaystyle e^{s}=x}$. The logarithm of x is defined as the number s which satisfies the equation.

We note that log(x) is defined only for positive x, but takes on all real values.
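A quick numerical check of this definition, sketched with Python's standard `math` module:

```python
import math

# log(x) is the unique s with e**s == x; it is defined only for x > 0.
x = 5.0
s = math.log(x)            # the natural logarithm
print(math.exp(s))         # recovers x (up to rounding)
print(math.log(0.001), math.log(1000.0))   # log takes on large negative and positive values
```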

Example. Consider a right-angled triangle in the plane. Given either of the non-right angles (which we call x), the sine of x equals the ratio between the side opposite the angle and the hypotenuse.

We see that sin(x) is defined for real numbers between, but not equal to either of, 0 and π/2 (or 0 and 90 degrees). These are the only angles which can appear in right-angled triangles. The function takes values between 0 and 1. It is possible, and desirable, to define the sine function for all real numbers, but this requires a more complex definition; see trigonometry.

One important concept in mathematics is function composition: if z is a function of y and y is a function of x, then z is a function of x. This can be described informally by saying that the composite function is obtained by using the output of the first function as the input of the second one. This feature of functions distinguishes them from other mathematical constructs, such as numbers or figures.
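Composition can be sketched directly in Python; the helper name `compose` is chosen here for illustration.

```python
# The composite function applies f first, then g to f's output.
def compose(g, f):
    """Return the composite function x -> g(f(x))."""
    return lambda x: g(f(x))

f = lambda x: x + 1        # y as a function of x
g = lambda y: y * y        # z as a function of y
z = compose(g, f)          # z as a function of x: (x + 1)**2
print(z(3))                # 16
```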

In most mathematical fields, the terms operator, operation, and transformation are synonymous with function. However, in some contexts they may have a more specialized meaning. In particular, they often apply to functions whose inputs and outputs are elements of the same set. For example, we speak of linear operators on a vector space, which are linear transformations from the vector space into itself.

## Properties of real-valued functions

Functions of one real variable may have several interesting properties that allow mathematicians to use practical techniques to analyse them. Many of these properties are also meaningful for complex functions and functions of several variables.

### Continuity

A function is said to be continuous at a point if its value there equals its limit at that point. This requirement can be written:

${\displaystyle \lim _{x\to 0}f(x_{0}+x)=f(x_{0})\qquad {\textrm {(continuity\ at\ the\ point}}\ x_{0}\in \mathbb {R} ).}$

Informally, a continuous function is one whose graph can be drawn with pen and paper without lifting the pen.
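The limit condition can be probed numerically. A sketch with one continuous and one discontinuous sample function (both chosen here for illustration):

```python
# f is continuous at x0 = 2: f(x0 + h) - f(x0) shrinks as h -> 0.
def f(x):
    return x * x

# A step function is discontinuous at 0: the gap does not shrink.
def step(x):
    return 0.0 if x < 0 else 1.0

x0 = 2.0
for h in (0.1, 0.01, 0.001):
    print(h, f(x0 + h) - f(x0))     # tends to 0

print(step(1e-9) - step(-1e-9))     # stays at 1.0: a jump at 0
```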

## History

### Birth and infancy of the idea

Some tables compiled by the ancient Babylonians may now be regarded as tables of functions, and some arguments of the ancient Greeks may be regarded as integration of functions. Thus, in ancient times some functions were used implicitly. However, they were not recognized as special cases of a general notion.

Further progress was made in the 14th century. Two "schools of natural philosophy", at Oxford (William Heytesbury, Richard Swineshead) and Paris (Nicole Oresme), trying to investigate natural phenomena mathematically, arrived at the idea that laws of nature should be formulated as functional relations between physical quantities. The concept of function was born, including a curve as the graph of a function of one variable, and a surface for a function of two variables. However, the new concept was not yet widely exploited, either in mathematics or in its applications. Linear functions were well understood, but nonlinear functions remained intractable, except for a few isolated cases.

The name "function" was assigned to the new concept later, in 1698, by Johann Bernoulli and Gottfried Leibniz, and published by Bernoulli in 1718.

### Power series

The sum of the geometric series

${\displaystyle 1+x+x^{2}+x^{3}+\dots ={\frac {1}{1-x}}}$

was calculated by Archimedes, but only for x = 1/4, since only this value was needed, and of course it was not written in this form, since algebraic notation appeared only in the 16th century. Remarkable formulas involving infinite sums were discovered (and repeatedly rediscovered) from the 14th to the 17th century: for the arctangent,

${\displaystyle \arctan x=x-{\frac {x^{3}}{3}}+{\frac {x^{5}}{5}}-\dots }$

(Madhava of Sangamagrama, around 1400; James Gregory, 1671); for the logarithm,

${\displaystyle \log(1+x)=x-{\frac {x^{2}}{2}}+{\frac {x^{3}}{3}}-\dots }$

(Nicholas Mercator, 1668); and many others (Isaac Barrow, Isaac Newton, Gottfried Leibniz, ...). Nonlinear functions, desperately needed for the study of motion (Johannes Kepler, Galileo Galilei) and geometry (Pierre Fermat, René Descartes), became tractable via such infinite sums, now called power series.[1]
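Partial sums of these two series can be checked numerically. A sketch comparing them to Python's built-in `math.atan` and `math.log`:

```python
import math

def arctan_series(x, n_terms=1000):
    """Partial sum of x - x**3/3 + x**5/5 - ..."""
    return sum((-1) ** k * x ** (2 * k + 1) / (2 * k + 1) for k in range(n_terms))

def log1p_series(x, n_terms=1000):
    """Partial sum of x - x**2/2 + x**3/3 - ..."""
    return sum((-1) ** (k + 1) * x ** k / k for k in range(1, n_terms + 1))

print(arctan_series(0.5), math.atan(0.5))   # agree to many digits
print(log1p_series(0.5), math.log(1.5))     # agree to many digits
```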

Newton understood by analysis the investigation of equations by means of infinite series. In other words, Newton's basic discovery was that everything had to be expanded in infinite series.

> These studies [on power series] stand in the same relation to algebra as the studies of decimal fractions to ordinary arithmetic. [2]

Power series became the de facto standard notion of a function: on one hand, all functions needed in applications were successfully expanded into power series; on the other hand, only functions expanded into power series were tractable in the theory. It was not unusual to claim a theorem for an arbitrary function and then, in the proof, to consider its expansion into a power series.

Example. In Newtonian mechanics, coordinates of moving bodies are functions of time. Consider, for example, the classical equation for a falling body: its height h at time t is

${\displaystyle h=f(t)=h_{0}-0.5gt^{2}}$

(here h0 is the initial height, and g is the acceleration due to gravity). Infinitely many corresponding values of t and h are embraced by a single function f.
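The falling-body function is easy to sketch in Python; the sample values h0 = 100 and g = 9.8 are chosen here for illustration.

```python
def height(t, h0=100.0, g=9.8):
    """Height at time t of a body falling from initial height h0: h = h0 - 0.5*g*t**2."""
    return h0 - 0.5 * g * t ** 2

# A single function embraces infinitely many corresponding values of t and h.
for t in (0.0, 1.0, 2.0):
    print(t, height(t))
```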

### Trigonometric series

*(Figure: a vibrating string; a function changes in time.)*

The instantaneous shape of a vibrating string is described by a function (the displacement y as a function of the coordinate x), and this function changes in time:

${\displaystyle y=f_{t}(x).}$

Infinitely many functions ft are embraced by a single function f of two variables,

${\displaystyle y=f(x,t).}$

After some speculations by Galileo and mathematical interpretation by Brook Taylor (1715/1717) and Johann Bernoulli (1727), the mathematical theory of the vibrating string was started by Jean d'Alembert (1746/1749). His approach is equivalent to a partial differential equation written out by Leonhard Euler in 1755,

${\displaystyle {\frac {\partial ^{2}}{\partial t^{2}}}f(x,t)={\frac {\partial ^{2}}{\partial x^{2}}}f(x,t),}$

now well-known as the one-dimensional wave equation. D'Alembert found a solution as the superposition of two waves, one traveling to the right, the other to the left:

${\displaystyle f(x,t)=\phi (x+t)+\psi (x-t).}$

The initial shape of the string is given by the function f0. It was a controversial question in the 18th century whether f0 must be expandable into a power series or not.

D'Alembert held the opinion that the de facto standard mentioned above still applied: f0 must be represented by a single equation. (He changed his opinion in 1780.)

The old standard was repudiated by Euler in 1744. He introduced "mixed" functions, given by different equations on two or more intervals. Moreover, he admitted functions that do not comply with any analytical law, whose graphs are traced by a free stroke of the hand.

Physically, the vibrating string may be thought of as an infinite collection of non-interacting harmonic oscillators (vibratory modes, harmonics). This idea, previously used by Euler in some special cases, was turned into a general method of solving the wave equation by Daniel Bernoulli (1755). To this end the initial function has to be expanded into a trigonometric series

${\displaystyle f_{0}(x)=c_{1}\sin x+c_{2}\sin 2x+c_{3}\sin 3x+\dots }$

It was unclear how large the class of functions that can be expanded in this way is. D. Bernoulli believed that a trigonometric series is as general as a power series. Both d'Alembert and Euler believed that a trigonometric series is less general than a power series. The truth was revealed only in the 19th century: in fact, a trigonometric series is more general than a power series!
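For a concrete initial shape, the coefficients of such a sine series can be approximated numerically. This sketch uses f0(x) = x(π - x) on [0, π], a choice made here for illustration; for this f0 the exact coefficients are known to be 8/(πn³) for odd n and 0 for even n.

```python
import math

def f0(x):
    """Sample initial shape of the string on [0, pi]."""
    return x * (math.pi - x)

def sine_coeff(n, steps=10000):
    """Approximate c_n = (2/pi) * integral_0^pi f0(x) sin(n x) dx by a midpoint sum."""
    dx = math.pi / steps
    return (2 / math.pi) * sum(
        f0((k + 0.5) * dx) * math.sin(n * (k + 0.5) * dx) * dx for k in range(steps)
    )

print(sine_coeff(1), 8 / math.pi)   # close to the exact value 8/pi
print(sine_coeff(2))                # close to 0
```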

Heat conduction is physically very different from the vibration of a string, but mathematically it is again about a function that changes in time, and it leads to another partial differential equation,

${\displaystyle {\frac {\partial }{\partial t}}f(x,t)={\frac {\partial ^{2}}{\partial x^{2}}}f(x,t),}$

now well-known as the one-dimensional heat equation. It was first investigated by Joseph Fourier (1807/1822); a general solution was found in the form

${\displaystyle f(x,t)=c_{1}e^{-t}\sin x+c_{2}e^{-4t}\sin 2x+c_{3}e^{-9t}\sin 3x+\dots }$
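Each term of this series can be spot-checked against the heat equation with finite differences. This is a numerical sketch; the sample point, mode number and step size are arbitrary choices made here.

```python
import math

def u(x, t, n=2):
    """One term of the series (with c_n = 1): exp(-n**2 * t) * sin(n * x)."""
    return math.exp(-n ** 2 * t) * math.sin(n * x)

# Finite-difference approximations of u_t and u_xx at a sample point.
x, t, h = 0.7, 0.3, 1e-4
u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h ** 2
print(u_t, u_xx)   # agree to several digits, as the heat equation demands
```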

## Special classes of function

• An injective function f has the property that if ${\displaystyle x_{1}\neq x_{2}}$ then ${\displaystyle f(x_{1})\neq f(x_{2})}$;
• A surjective function f has the property that for every y in the codomain there exists an x in the domain such that ${\displaystyle f(x)=y}$;
• A bijective function is one which is both surjective and injective.
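For functions on finite sets, these properties are easy to check mechanically. A sketch representing a function as a Python dict from domain elements to codomain elements:

```python
def is_injective(f):
    """No two arguments share a value."""
    return len(set(f.values())) == len(f)

def is_surjective(f, codomain):
    """Every element of the codomain is a value."""
    return set(f.values()) == set(codomain)

f = {1: 'a', 2: 'b', 3: 'c'}
print(is_injective(f))                      # True
print(is_surjective(f, {'a', 'b', 'c'}))    # True: f is bijective
print(is_injective({1: 'a', 2: 'a'}))       # False: two arguments share the value 'a'
```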

## Functions in set theory

In set theory, functions are regarded as a special class of relation. A relation between sets X and Y is a subset of the Cartesian product, ${\displaystyle R\subseteq X\times Y}$. We say that a relation R is functional if it satisfies the condition that every ${\displaystyle x\in X}$ occurs in exactly one pair ${\displaystyle (x,y)\in R}$. In this case R defines a function with domain X and codomain Y. We then define the value of the function at x to be that unique y. We thus identify a function with its graph.

## Associated sets

Let f : X → Y be a function with domain X and codomain Y. The image of a subset A of X is ${\displaystyle f[A]=\{f(x):x\in A\}}$; the image of f is the image of X under f. The pre-image of a subset B of Y is ${\displaystyle f^{-1}[B]=\{x\in X:f(x)\in B\}}$. The fibre of f over a point y in Y is the pre-image of the singleton {y}. The kernel of f is the equivalence relation on X for which the equivalence classes are the fibres of f.
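These sets are straightforward to compute for a finite function. A sketch using x -> x**2 on {-2, ..., 2} as the sample function:

```python
f = {-2: 4, -1: 1, 0: 0, 1: 1, 2: 4}   # x -> x**2 on {-2, ..., 2}, as a dict

def image(f, A):
    return {f[x] for x in A}

def preimage(f, B):
    return {x for x in f if f[x] in B}

def fibre(f, y):
    return preimage(f, {y})

print(image(f, {-1, 1, 2}))   # {1, 4}
print(preimage(f, {1}))       # {-1, 1}
print(fibre(f, 4))            # {-2, 2}
# The fibres {0}, {-1, 1}, {-2, 2} partition the domain: they are the kernel's classes.
```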

## Associated functions

If f is a function from a set X to a set Y, there are several functions associated with f.

If S is a subset of X, the restriction of f to S is the function from S to Y given by applying f only to elements of S. The restriction may have different properties from the original function. Consider the function ${\displaystyle f:x\mapsto x^{2}}$ from the real numbers R to R. The restriction of f to the positive real numbers is injective, whereas f is not.
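The example in this paragraph can be sampled numerically; this sketch checks injectivity over a small finite sample of each domain.

```python
def f(x):
    return x * x

domain = [-2, -1, 1, 2]                      # a sample from R
positives = [x for x in domain if x > 0]     # a sample from the positive reals

values_all = [f(x) for x in domain]
values_pos = [f(x) for x in positives]
print(len(set(values_all)) == len(values_all))   # False: f(-2) == f(2), so f is not injective
print(len(set(values_pos)) == len(values_pos))   # True: the restriction is injective
```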

The push-forward of f is the function ${\displaystyle f_{\vdash }}$ from the power set of X to that of Y which maps a subset A of X to its image in Y:

${\displaystyle f_{\vdash }(A)=\{f(x):x\in A\}.\,}$

An alternative notation for ${\displaystyle f_{\vdash }(A)}$ is ${\displaystyle f[A]}$ (note the square brackets).

The pull-back of f is the function ${\displaystyle f^{\dashv }}$ from the power set of Y to the power set of X which maps a subset B of Y to its pre-image in X:

${\displaystyle f^{\dashv }(B)=\{x\in X:f(x)\in B\}.\,}$

An alternative notation for ${\displaystyle f^{\dashv }(B)}$ is ${\displaystyle f^{-1}[B]}$ (note the square brackets). Pull-back is a generalized form of inverse, and makes sense whether or not f is an invertible function.
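Push-forward and pull-back can be sketched for a non-invertible sample function; f(x) = x mod 3 on {0, ..., 8} is a choice made here for illustration.

```python
X = range(9)                 # domain {0, ..., 8}
f = lambda x: x % 3          # not invertible: three arguments share each value

def push_forward(f, A):
    """Image of a subset A of the domain."""
    return {f(x) for x in A}

def pull_back(f, B, domain=X):
    """Pre-image of a subset B of the codomain."""
    return {x for x in domain if f(x) in B}

print(push_forward(f, {0, 1, 4}))   # {0, 1}
print(pull_back(f, {2}))            # {2, 5, 8}: a whole fibre, although f has no inverse
```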

## References

1. Arnol'd, Vladimir Igorevich (1990). *Huygens and Barrow, Newton and Hooke: Pioneers in Mathematical Analysis and Catastrophe Theory from Evolvents to Quasicrystals*. Birkhäuser, p. 35. ISBN 3764323833.
2. Newton, Isaac (written 1664–1671; published 1736). "The Method of Fluxions and Infinite Series with Its Application to the Geometry of Curve-lines", *Methodus fluxionum et serierum infinitorum*, English translation by John Colson, p. 2.