Energy (science)
Energy is a property of a system that produces action (makes things happen) or, in some cases, has the "potential" to make things happen. For example, energy can put vehicles into motion, it can change the temperature of objects, and it can transform matter from one form to another; for instance, energy will turn solid water (ice) at 0 °C into liquid water at 0 °C. Energy lights our cities, lets our planes fly, and runs machinery in factories. It warms and cools our bodies and homes, cooks our food, plays our recorded music, and gives us pictures on television.
Quantitatively, energy is a measurable physical quantity of a system and has the dimension M(L/T)^{2} (mass times length squared over time squared). The corresponding SI (metric) unit is the joule (which equals 1 kg·m^{2}/s^{2}); other units of measurement are ergs, calories, watt-hours, Btu, etc. All these units have the dimension M(L/T)^{2}, and if one finds a physical property of a system with this dimension, one is entitled to call that quantity a part of the energy of the system.
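Because all these units share the same dimension, converting between them amounts to multiplication by a fixed factor. The following Python sketch illustrates this with a small conversion table (the calorie here is the thermochemical calorie of 4.184 J; the Btu factor is approximate):

```python
# Conversion factors from common energy units to the joule.
# 1 cal = 4.184 J (thermochemical); 1 Btu is approximately 1055 J.
TO_JOULE = {
    "J": 1.0,
    "erg": 1.0e-7,
    "cal": 4.184,
    "kWh": 3.6e6,   # 1 kWh = 1000 W x 3600 s
    "Btu": 1055.0,  # approximate
}

def convert(value, from_unit, to_unit):
    """Convert an energy value between units, going through the joule."""
    return value * TO_JOULE[from_unit] / TO_JOULE[to_unit]

print(convert(1, "kWh", "J"))               # 3600000.0
print(round(convert(1000, "cal", "J"), 6))  # 4184.0
```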
It is difficult, or perhaps impossible, to give an all-embracing definition of energy, because energy exists in many forms, such as kinetic or mechanical energy, potential energy, thermal energy or heat,^{[1]} light, electrical energy, chemical energy, nuclear energy, etc. Indeed, it took scientists a long time to realize that the different manifestations of energy are really the same property, and that in all cases it may rightfully carry the same name (energy). From the middle of the 18th to the middle of the 19th century, scientists came to realize that the different forms of energy can be converted into each other, and moreover that no energy is lost in the conversion processes.
Let us look at the conventional coal-fired power plant as a practical example of the conversion of energy. Such a plant takes as input coal (carbon) and air (oxygen). These two raw materials combine, i.e., coal is burned, and combustion energy, a form of heat, is generated. The combustion energy is converted into electrical energy, which is transported to cities and factories through high-voltage power lines. It would be very nice, and would go a long way toward solving the energy crisis, if all of the combustion energy could be converted into electrical energy. Unfortunately, this is not the case; the laws of physics do not allow it. Thermodynamics dictates that the larger part of the combustion energy is turned into non-usable thermal energy, which in practice is carried off by cooling water. Although the cooling water heated by the electricity plant is of little practical use because of its relatively low temperature, it still contains thermal energy that (theoretically, not practically) could be used to perform work. At lower ambient temperatures a larger part of the thermal waste energy is converted into useful electrical energy, and in the hypothetical case of an ambient temperature of zero kelvin (−273.15 °C) all of the thermal energy in the warmed cooling water would be converted into electrical energy, which shows that thermal energy is indeed a form of energy. In any case, the thermal energy of the cooling water is important in the energy balance of the electricity plant:
 Combustion energy → electrical energy + thermal energy
Because energy is conserved, the combustion energy is equal to the sum of the electrical and the thermal energy.^{[2]}
The different manifestations of energy are discussed in more detail in the following sections of this article.
Energy in classical mechanics
To keep the discussion simple we will consider a point particle of mass m in one-dimensional space. That is, the position of m at time t is given by x(t). For more details and the extension to the three-dimensional case, see classical mechanics. Let us assume that a force F(x) is acting on the particle. As an example one may think of a mass in the gravitational field of the earth; the one-dimensional space in this example is a line perpendicular to the surface of the earth. Actually, the case considered is slightly more general, since F is taken to be a function of x, while the gravitational force near the surface of the earth does not depend on x: there F = −mg, where g is the gravitational acceleration, a quantity of approximate value 9.8 m/s², and the minus sign indicates that the force is directed downward. Further, by considering only forces F(x) that are functions of position, frictional (dissipative, non-conservative) forces, which are often functions of the velocity of the mass rather than of its position, are excluded.
Potential energy
In classical mechanics the potential energy of a system can be defined as the work the system can potentially perform. If work is done by the system, its potential energy decreases; if work is done on the system, its potential energy increases. As stated, the physical system that will be considered is the simplest one possible: a particle of mass m in a one-dimensional space with a force field F(x).
Imagine, as an example, the great scientist Galileo Galilei carrying a mass, say a cannon ball, up the stairs of a church tower. Doing this, Galileo has to work against the gravitational force, which pulls the cannon ball downward. The work ΔW performed by Galileo on the cannon ball (the system) is proportional to the gain in height Δx and the absolute value |F| of the force. The work ΔW is positive and the force is directed downward (F < 0), so we have
 ΔW = −F Δx > 0
for the work performed by Galileo on the system during his carrying it up the stairs over a height Δx. The corresponding gain ΔU in the potential energy of the cannon ball is the work done on it by Galileo,
 ΔU = U(x + Δx) − U(x) = ΔW = −F Δx,
which upon integration gives
 U(x) = −∫_{x_0}^{x} F(x′) dx′,
where we made the choice of zero of potential energy: U(x_{0}) = 0. In this example the obvious choice of x_{0} is the base of the tower, i.e., x_{0} is at street level. By the fundamental theorem of integral calculus, we have the important expression that relates the force F(x) and the potential energy U(x),
 F(x) = −dU(x)/dx.
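The relation between force and potential energy is easy to verify numerically. In the sketch below (a minimal check, with an illustrative mass of 2 kg and U(x) = mgx for the gravitational example) a central finite difference reproduces the constant downward force −mg:

```python
# Numerical check of F(x) = -dU/dx for the gravitational example
# U(x) = m*g*x, whose force is the constant -m*g (directed downward).
m, g = 2.0, 9.8        # illustrative mass (kg); gravitational acceleration (m/s^2)

def U(x):
    return m * g * x   # potential energy, zero chosen at x0 = 0 (street level)

def F(x, h=1e-6):
    # central finite difference approximation of -dU/dx
    return -(U(x + h) - U(x - h)) / (2 * h)

print(round(F(5.0), 6))   # -19.6, i.e. -m*g, independent of x
```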
Potential energy in three dimensions
The generalisation to three dimensions of the definition of potential energy U(r) is
 U(r) = −∫_{r_0}^{r} F(r′) · dr′,  so that  F(r) = −∇U(r),
where the gradient ∇ is the vector operator
 ∇ = (∂/∂x, ∂/∂y, ∂/∂z).
In order that this generalization can be made, or in other words, that a potential energy U(r) can be defined, it is necessary that the force field F(r) is conservative (non-dissipative). That is, F(r) must satisfy Euler's reciprocity relations,
 ∂F_{x}/∂y = ∂F_{y}/∂x,  ∂F_{y}/∂z = ∂F_{z}/∂y,  ∂F_{z}/∂x = ∂F_{x}/∂z,
which can be written more concisely by the use of the curl,
 ∇ × F(r) = 0.
Kinetic energy
Besides potential energy, classical mechanics knows another form of energy: kinetic energy. Suppose Galileo drops the mass from the top of the tower after arriving there. The mass will pick up speed (we neglect air resistance, which would brake the falling mass and generate some heat, friction with air being a dissipative force) and acquire the kinetic energy
 T = ½ m v²,
where the speed v of the particle is the absolute value of its velocity.
Equivalence of kinetic and potential energy
This dropping of the mass off the top of the church tower is a good example of conversion of energy: potential energy is converted into kinetic energy. In this process energy is conserved, that is, the sum of kinetic and potential energy is constant in time. Indeed, using F = −dU/dx,
 dE/dt = d/dt [½ m v² + U(x)] = m v a + (dU/dx) v = v [m a − F(x)],
where a is the acceleration of the mass. Invoke Newton's second law (see classical mechanics),
 F(x) = m a,
and it is proved that the time derivative of the total energy E ≡ T + U vanishes. That is, E is a conserved, time-independent, property of the cannon ball falling from the tower.
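For the falling cannon ball this conservation can be checked explicitly with the closed-form solution x(t) = h − ½gt², v(t) = −gt. A minimal Python sketch (tower height h = 20 m and m = 1 kg are illustrative values):

```python
import math

# Free fall from a tower: x(t) = h - g t^2 / 2, v(t) = -g t.
# The total energy E = T + U should be constant in time and equal to m*g*h.
m, g, h = 1.0, 9.8, 20.0   # illustrative mass, gravity, tower height

def energy(t):
    x = h - 0.5 * g * t**2             # position at time t
    v = -g * t                         # velocity at time t
    return 0.5 * m * v**2 + m * g * x  # kinetic plus potential energy

t_fall = math.sqrt(2 * h / g)          # time to reach the ground
energies = [energy(i * t_fall / 10) for i in range(11)]
print(energies)  # all equal to m*g*h = 196.0 (up to rounding)
```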
Collisions
Finally, one may wonder what happens when the particle, dropped by Galileo from the top of the tower, hits the ground. Here we have a collision of two bodies, the earth and the dropped particle. The collision can be elastic, in which case no energy is dissipated. If we take the mass of the earth to be infinite, the particle bounces up with the same kinetic energy that it had when it hit the earth. That is, its speed |v| remains the same, but the sign of v changes. The momentum mv of the particle changes by −2mv on collision, which seems to contradict the law of conservation of momentum. The latter conservation law holds when there are no outside forces acting on the physical system consisting of the earth and the dropped particle. Since it was assumed implicitly that no outside forces are present, we indeed expect conservation of momentum. To explain this apparent violation, note that the earth receives momentum of absolute value M|V| = 2m|v| from the collision, where M is the mass of the earth and V is the velocity of the earth gained by the collision. When M goes to infinity, V goes to zero. Hence, for infinite mass the earth absorbs momentum without changing velocity and without picking up kinetic energy. This is why the kinetic energy of the bouncing particle is conserved.
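This limit can be made quantitative with the standard one-dimensional elastic-collision formulas: as the mass M of the struck body grows, the particle's velocity tends to −v, while the struck body carries off the momentum 2mv with a vanishing kinetic energy. A small sketch (the numbers are illustrative):

```python
# Elastic head-on collision: a particle (mass m, velocity v) hits a body of
# mass M that is initially at rest.  Standard elastic-collision formulas.
m, v = 1.0, -10.0   # particle moving downward at 10 m/s

def after_collision(M):
    v_particle = (m - M) * v / (m + M)   # particle velocity after impact
    V_body = 2 * m * v / (m + M)         # struck body's velocity after impact
    return v_particle, V_body

for M in (1.0, 1.0e3, 1.0e24):
    vp, V = after_collision(M)
    print(M, vp, V)
# For M = 1e24 (earth-like) the particle bounces back with vp = +10 m/s,
# while V is of order 1e-23 m/s: the earth absorbs momentum 2m|v| without
# picking up any appreciable kinetic energy.
```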
A collision may also be inelastic: the particle may break up into pieces that fly off with kinetic energy, and the earth will absorb the remaining kinetic energy of the falling particle. This absorption takes the form of an increase in the internal energy of the earth, which in general implies some warming up of the earth. Of course, the law of energy conservation still holds: the kinetic energy of the broken particle pieces and the increase of the internal energy of the earth add up to the kinetic energy of the dropped particle.
As a final remark: most collisions are somewhere in between elastic and completely inelastic. The particle will bounce back to some height, losing some kinetic energy that is transferred to the earth as an increase of the earth's internal energy. The internal energy of the dropped particle may also increase somewhat by the collision; this must also be included in the energy balance.
Energy in thermodynamics
Energy from heat
 See also: Heat and Entropy (thermodynamics)
A thermodynamic system is a physical system with an extra property: temperature (T). When two thermodynamic systems of unequal temperature are in thermal contact, heat flows spontaneously from the warmer (higher-temperature) system to the colder (lower-temperature) system. This heat flow decreases the temperature of the warmer system and increases the temperature of the colder one. The heat flow is sustained until equilibrium is reached and the two systems have the same temperature; at equilibrium the spontaneous heat flow stops.
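This approach to a common temperature can be sketched with a toy relaxation model. In the Python fragment below the heat capacities C1, C2 and the rate constant k are purely illustrative numbers; in each step an amount of heat proportional to the temperature difference flows from the warmer to the colder system, and both temperatures converge to the weighted mean (C1·T1 + C2·T2)/(C1 + C2):

```python
# Toy model of two bodies in thermal contact: heat flows from warm to cold,
# proportional to the temperature difference, until the temperatures agree.
C1, C2 = 2.0, 1.0     # illustrative heat capacities in J/K
T1, T2 = 80.0, 20.0   # initial temperatures in degrees C
k = 0.05              # illustrative heat flow per step per degree of difference

for _ in range(2000):
    q = k * (T1 - T2)  # heat flowing from system 1 to system 2 in this step
    T1 -= q / C1
    T2 += q / C2

# both temperatures end at the equilibrium value (2*80 + 1*20)/3 = 60
print(round(T1, 6), round(T2, 6))
```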
By using a heat pump it is possible to transfer energy from a colder to a warmer system. This requires input of mechanical or electrical work. The energy transferred from the colder to the warmer system is also called heat.
Earlier in this article, energy was defined in a hand-waving manner as the capacity of a system to do work. Now the question arises whether an exchange of heat, which is an exchange of energy, can perform work. Or, in other words, can the energy content of a heat bath be utilized to perform work? It is clear that in any case two systems of different temperatures are needed, since otherwise heat will not flow. The first to recognize this clearly was William Thomson (Lord Kelvin).
A spontaneous heat flow is depicted in the figure on the right, where we see two heat baths with T_{1} > T_{2}. The circle in the middle designates a heat engine, a cyclic process in which heat is converted into work W. From the first law of thermodynamics it follows that after a number of full cycles of the engine, when no net energy is stored in the engine,
 Q_{1} = W + Q_{2}.   (1)
The second law of thermodynamics states that^{[3]}
 Q_{2}/T_{2} − Q_{1}/T_{1} ≥ 0.   (2)
If we take the heat engine in the drawing to be the idealized Carnot engine, which undergoes only reversible changes, the equality sign holds. To obtain the theoretical upper bound to the efficiency of the process, we assume this to be the case. Multiplication of Eq. (2) by T_{2} then gives
 Q_{2} = (T_{2}/T_{1}) Q_{1}.
Define the efficiency η by
 η ≡ W/Q_{1},
and it follows from Eqs. (1) and (2) that the efficiency is proportional to the temperature difference between the upper and the lower heat bath:^{[4]}
 η = (T_{1} − T_{2})/T_{1}.
The work W is a fraction η of the heat Q_{1} delivered by the upper heat bath. For instance, if T_{1} = 500 °C and T_{2} = 20 °C, then η = 480/(273.15 + 500) ≈ 0.62. That is, at most 62% of the heat delivered by the upper heat bath is converted into work; the remaining energy is lost to the lower heat bath.
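The numerical example can be reproduced in a couple of lines, converting the temperatures from Celsius to kelvin before taking the ratio:

```python
# Carnot (upper-bound) efficiency eta = (T1 - T2) / T1, temperatures in kelvin,
# reproducing the numbers used in the text.
def carnot_efficiency(t1_celsius, t2_celsius):
    T1 = t1_celsius + 273.15
    T2 = t2_celsius + 273.15
    return (T1 - T2) / T1

eta = carnot_efficiency(500.0, 20.0)
print(round(eta, 2))  # 0.62: at most 62% of Q1 can be converted into work
```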
One may wonder why the work W is related here to Q_{1}. The answer is that the setup of the figure is a model for many engines. Historically, the model was first introduced by Sadi Carnot for the steam engine. The upper heat bath models the steam boiler, which is held at a constant temperature T_{1} by burning fuel (in the days of the steam engine usually coal). During the cycle in the middle of the figure the steam drives a piston that performs work W. In this process the steam cools down, and it is cooled down even further in the condenser, becoming liquid water again. The condenser takes away the rest heat Q_{2}, which is not used any further but given off to the environment (in this scheme the lower heat bath of ambient temperature T_{2} models the environment). The condensed water is led back from the condenser to the steam boiler and heated again, completing the cycle. So, the heat flow between two reservoirs of unequal temperature (the steam boiler and the environment) generates work plus a rest energy Q_{2}. The fact that this rest energy appears has the consequence that only a fraction η of Q_{1}, the heat obtained from burning fuel, can be used to do work. Since the burning of fuel is the determining factor in the cost of operating the engine, the efficiency is expressed as a fraction of Q_{1}.
The same principle applies to combustion engines, for instance car engines, where the rest heat Q_{2} is given off to the environment through the car's radiator. The fact that only a fraction (about ¼) of the chemical energy stored in gasoline is converted into mechanical work (kinetic energy of the car) is not a design flaw, but a consequence of physical principles (the first and second laws of thermodynamics).^{[5]}
The three arrows in the figure can be reversed, in which case the figure depicts a heat pump, for instance a refrigerator or an air conditioner. Work is delivered to the system, usually by an electric motor, and heat Q_{2} is drawn from the lower-temperature bath (for instance, the inside of a refrigerator). The heat Q_{1} is transported to the higher-temperature heat bath (in the case of a refrigerator the air in the kitchen, in the case of an air conditioner the outside air). Here we see an illustration of the Clausius principle: it takes work W to extract the amount Q_{2} of heat from the low-temperature bath; W and Q_{2} together are converted into the heat Q_{1} that is transported to the high-temperature bath. Since a refrigerator gives off its heat to the kitchen, it cannot be used as an air conditioner. The work W done by its electric motor is converted into the net heat Q_{1} − Q_{2}. Overall, the refrigerator acts as an electric heater, converting electric energy W > 0 into the net heat Q_{1} − Q_{2} > 0 that is given off to the surroundings of the refrigerator. By the same reasoning it is clear why an air conditioner needs an outlet outside the house for its rest heat.
In practical applications, such as power plants and combustion engines, it is hard to achieve a large efficiency factor (T_{1} − T_{2})/T_{1}, because T_{2} is in practice always the ambient temperature, since it costs energy to obtain lower temperatures. The upper temperature T_{1} cannot be raised arbitrarily, as it is restricted by the burning process, the material of the burners, etc. Power plants have a typical efficiency of 38%.
Work
Besides being able to exchange heat, a thermodynamic system can also do work on another system or on its environment, which decreases its internal energy U. Conversely, another system, or the environment, can do work on the system, increasing U. Above we already assumed that the exchange of energy by work was possible for the Carnot engine. Work can be mechanical, electrical, magnetic, chemical, and so on.
The standard textbook example of mechanical work concerns a gas-filled cylinder with a piston on top. Let the pressure inside the cylinder be p, the surface area of the piston S, and the volume of the gas V. If the piston is pushed into the cylinder over a small distance Δx, an amount of work ΔW = FΔx is performed on the gas. By the definition of pressure the force F is equal to pS, so that ΔW = pSΔx = −pΔV, where ΔV = −SΔx is the (negative) change in volume, and where we assume that p is constant under the small displacement of the piston. The internal energy increases by ΔU = ΔW, so that
 ΔU = −p ΔV.
If the piston moves outward, the volume increases, the system performs work on its surroundings, costing it internal energy, and hence the sign in the equation covers this case as well.
The work performed on, or by, the system is of the form aΔb, where a does not depend on the size of the system (when we halve the volume of the system and its gas content, the pressure p stays the same); the quantity a is an intensive parameter. The quantity b is linear in the size of the system; it is an extensive parameter. This is a general form for all expressions for work: they always involve an intensive/extensive parameter pair. Another example is the polarisation P (a macroscopic dipole) of a dielectric in a static electric field E; the work done by the field is EΔP. When we add an amount Δn mol of substance to a system, we increase its internal energy by μΔn, where μ is the chemical potential of the substance. This addition of substance can be seen as "chemical work" performed on the system. Even heat exchange fits this pattern, ΔQ = TΔS, where the temperature T is an intensive and the entropy S an extensive parameter.
Chemical energy
A chemical reaction
 A → B
may be exothermic, in which case heat escapes from the reaction in the form of translational (external) energy of the molecules B, and often radiation. Or the reaction may be endothermic, in which case heat must be supplied in order to let the reaction proceed. Exothermic reactions form a source of chemical energy.
Very often chemical reactions proceed at constant (usually ambient) pressure p. The reaction heat Q is then equal to the change in enthalpy ΔH of the reactants. Indeed, according to the first law of thermodynamics, we have
 Q = U_{f} − U_{i} + p(V_{f} − V_{i}) = H_{f} − H_{i} ≡ ΔH.
Here U_{f} is the total internal energy of the final product molecules B and U_{i} that of the initial molecules A. Since the reaction occurs at constant pressure p, the work term is p(V_{f} − V_{i}); this term must be included in the energy balance of the first law. The thermodynamic state function "enthalpy" is by definition H ≡ U + pV. Note that an exothermic reaction is characterized by H_{f} < H_{i}, i.e., it has a negative reaction enthalpy ΔH ≡ H_{f} − H_{i} < 0. Correspondingly, an endothermic reaction has a positive reaction enthalpy.
In daily life the most important source of energy is the chemical energy obtained from the reaction called combustion. In this chemical reaction oxygen from the air reacts with a fuel, such as gasoline, coal, or natural gas, giving off heat. The fossil fuels contain carbon as the single most important element. Take graphite as an example:
 C(graphite) + O_{2}(g) → CO_{2}(g) ΔH = −393.6 kJ.
The second most important element contained in fossil fuels is hydrogen. The combustion reaction of gaseous hydrogen is
 2H_{2}(g) + O_{2}(g) → 2H_{2}O(l) ΔH = −571.6 kJ

Commercially available natural gas, LPG (liquefied petroleum gas), gasoline, kerosene, and diesel are mixtures of many hydrocarbons. The adjacent table presents some typical (approximate) values of their combustion enthalpies (often referred to as energy contents, heating values, calorific values, or heats of combustion):
These values are per kilogram. Ordinary gasoline has a density of 0.78 kg/L, so that 1 L of gasoline has an energy content of approximately 36 MJ.
It is of some interest to give a crude estimate of energy consumption in daily life, indicating orders of magnitude only. Assume, therefore, that it takes 0.1 L (= 0.078 kg) of gasoline to drive a mid-sized car one km (this corresponds to 23 miles to the gallon, or 10 km to the liter). This car consumes an energy of roughly 3.6 MJ per km, which is close to 1 kWh/km (kilowatt-hour per kilometer).^{[6]} Hence, driving the car over one kilometer costs roughly the same energy as running a 1000 W electric appliance for an hour. Of course, this does not take into account the energy loss in the generation of the electricity. A power plant running on fossil fuel typically has an efficiency of 38%. If we include this number, we see that driving a mid-sized car over one kilometer costs the same energy as running a 380 W electric apparatus for an hour.
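The arithmetic of this estimate is sketched below; the heating value of 46 MJ/kg is a typical (assumed) figure for gasoline, consistent with the 36 MJ per liter quoted above:

```python
# Order-of-magnitude estimate from the text: energy used by a mid-sized
# gasoline car per kilometer driven.
consumption = 0.1      # liters of gasoline per km
density = 0.78         # kg per liter
heating_value = 46.0   # MJ per kg, a typical (assumed) value for gasoline

energy_MJ = consumption * density * heating_value  # MJ per km
energy_kWh = energy_MJ / 3.6                       # 1 kWh = 3.6 MJ
print(round(energy_MJ, 1), round(energy_kWh, 2))   # ~3.6 MJ/km, ~1.0 kWh/km
```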
It may, parenthetically, be interesting to look at the energy consumption of an electric car. Most electric cars use roughly 0.20 kWh/km. Transmission and battery-charging losses are about 10%, so that an electric car has a net consumption of about 0.22 kWh/km. If we include in this number the 38% efficiency of power stations, we arrive at 0.58 kWh/km for an electric car, which may be compared to the 1 kWh/km of a gasoline car. Clearly, the gain in efficiency of an electric car over a gasoline car is due to the efficiency of a power station (38%) versus that of a car engine (25%). In both cases the less-than-100% efficiency is due to unavoidable heat losses.
Another example: suppose the members of a household drive on average 30,000 km (19,000 miles) per year in a mid-sized car. This costs 30 MWh/year. The average Western-world household uses 3.5 MWh of electricity per year. Including the energy loss at the power station, a household thus spends about three times as much energy on driving a car as on electricity.
Electrostatic energy
Consider two point charges q_{1} and q_{2}, a distance r_{12} apart. By Coulomb's law each particle acts on the other with a force that is inversely proportional to the square of their mutual distance,
 F = q_{1}q_{2}/(4πε_{0} r_{12}^{2}),
where ε_{0} is the vacuum permittivity. The forces on the two particles act along the line joining them. If the charges are of opposite sign, the forces are attractive; otherwise they are repulsive. As in classical mechanics, the work done by the force is minus the force times the distance. The work increases or decreases the potential energy of the system, so that the electrostatic energy of a system of two point charges is
 U(r_{12}) = q_{1}q_{2}/(4πε_{0} r_{12}) + C.
The constant C can be chosen freely, since its choice does not affect the electric field (minus the gradient of U), which is the physical quantity of concern. This freedom of choice is a form of gauge invariance. It is common to choose C = 0.
Consider next a system of N point charges. The potential energy of the system is additive, hence the electrostatic energy of a system of N point charges is
 U = Σ_{i=1}^{N} Σ_{j>i} q_{i}q_{j}/(4πε_{0} r_{ij}) = ½ Σ_{i=1}^{N} Σ_{j≠i} q_{i}q_{j}/(4πε_{0} r_{ij}),
where the condition on the summation over j excludes the (infinite) self-energy. In the second equation the factor ½ is introduced to avoid counting the same interaction twice. This energy is of great importance in molecular physics, because a molecule can be seen as a collection of point charges.
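A direct transcription of this sum is shown below; summing over distinct pairs (j > i) is equivalent to the ½-weighted sum over all i ≠ j. A minimal Python sketch:

```python
import itertools
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity in F/m

def coulomb_energy(charges, positions):
    """Electrostatic energy of N point charges: sum over distinct pairs."""
    U = 0.0
    for (q1, r1), (q2, r2) in itertools.combinations(zip(charges, positions), 2):
        U += q1 * q2 / (4 * math.pi * EPS0 * math.dist(r1, r2))
    return U

# two opposite unit charges 1 m apart: attraction gives a negative energy
print(coulomb_energy([1.0, -1.0], [(0, 0, 0), (1, 0, 0)]))  # about -8.99e9 J
```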
This expression allows us to introduce a static potential (scalar) field due to a static charge distribution,
 V(r) = Σ_{i=1}^{N} q_{i}/(4πε_{0} |r − r_{i}|).
It is the work required to bring a single positive unit charge from infinity (where V is zero) to r. Or in other words, V(r) is the voltage difference between r and infinity, or, briefly, the electric potential at the point r due to the charge distribution.
Electric energy
Consider a conducting wire of finite length with a static voltage difference V between its ends. The voltage difference is kept constant, for instance by a battery or an electric generator. An electric current (a flow of positive charges) will run from positive to negative voltage. This electric current transports energy at the rate
 P = i V.
Here P is power (energy/time, expressed in watt), i is (direct) current (charge/time, expressed in ampere) and V is voltage difference (expressed in volt).
The magnitude i of the current is determined by the apparatus (light bulbs, electric ovens, electric motors, etc.) that the wire runs through. All these take up power. The power can be in the form of heat generated per unit time: i^{2}R, where R is the resistance of (part of) the wire. If, for instance, the current runs through an electric heater, part of the energy is converted to heat, i.e., electric power is converted into an energy flow from the heater outward, warming up the surroundings. If the current runs through an electric motor, electric power is converted to mechanical power, i.e., electrical energy is converted to mechanical work. In contrast to the conversion of heat to work, this process is practically lossless; the only loss is by heating of the wires inside the motor, because of the resistance of the wires (again i^{2}R).
In practice, the energy carried by an electric current is measured in kWh (kilowatt-hour) instead of the regular SI unit of energy, the joule (J). Note that 1 kWh = 3600 kJ, since 1 W = 1 A·V (ampere times volt), 1 A = 1 coulomb/second (C/s), and 1 C·V = 1 J.
Equivalence of energy and mass
Einstein showed in his theory of special relativity that the energy of a free particle of (rest) mass m and speed v is equal to
 E = γmc²  with  γ ≡ 1/√(1 − v²/c²),
where c is the speed of light in vacuum: c = 299,792,458 m/s.
Using the Taylor series
 (1 − x)^{−1/2} = 1 + x/2 + 3x²/8 + ⋯  (valid for |x| < 1),
we find that the energy of the free particle becomes
 E = mc² + ½mv² + (3/8)mv⁴/c² + ⋯.
Recalling that the energy of a free particle in Newton's classical mechanics is the kinetic energy ½mv², we see that Einstein discovered two completely new and unexpected facts: (i) the classical kinetic energy is the limit for v << c (if v << c, neglect of the third and higher terms in the expansion is allowed), and (ii) even a non-moving particle (v = 0) has energy. The second fact has especially attracted much attention, and the corresponding expression is the physics formula that is by far the best known among the general public, namely E = mc².
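The quality of the classical approximation at everyday speeds is easily checked numerically. For v = 300 km/s (already enormous by terrestrial standards, yet only v/c ≈ 0.001) the relativistic kinetic energy (γ − 1)mc² exceeds ½mv² by less than one part in a million:

```python
import math

# Kinetic part of the relativistic energy, (gamma - 1) m c^2, compared with
# the classical 1/2 m v^2 for a speed much smaller than c.
c = 299792458.0   # speed of light in m/s
m = 1.0           # kg (illustrative)
v = 3.0e5         # m/s, i.e. v/c is about 0.001

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
relativistic = (gamma - 1.0) * m * c**2
classical = 0.5 * m * v**2

rel_diff = (relativistic - classical) / classical
print(rel_diff)   # about 7.5e-7, i.e. roughly (3/4)(v/c)^2, as the expansion predicts
```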
Often Einstein's result is interpreted as saying that mass depends on velocity, by defining the velocity-dependent mass m(v) ≡ γm. This point of view shows that mass is not a conserved quantity, contrary to what was postulated by the chemist John Dalton in the early nineteenth century. However, in contrast to mass, energy is conserved, provided we include the relativistic energies E in the balance. If we have a system of particles with interaction energy U and total mass M, then
 Mc² = Σ_{i} m_{i}c² + U.
This equation is universal, in principle it is operative for chemical reactions, as well as for nuclear reactions. Let us first consider an example of the latter, the reaction of tritium (T) and deuterium (D) giving the isotope ^{4}He and a neutron (n). This is the main reaction occurring in a hydrogen bomb explosion:
 D + T → ^{4}He + n + ΔU
Let us compute ΔU from a mass balance, where we use as unit of mass the unified atomic mass unit (u),
 m(D) = 2.014102 u,  m(T) = 3.016049 u,  m(⁴He) = 4.002603 u,  m(n) = 1.008665 u,
so that
 ΔM = (2.014102 + 3.016049) − (4.002603 + 1.008665) = 0.018883 u.
The left-hand side of the reaction equation has 0.01888 u more mass than the right-hand side. To get an idea of the order of magnitude, we note that the mass of an electron m_{e} is 5.485 799 110 × 10^{−4} u, so that ΔM is equal to 34.42 m_{e}, i.e., a little over the mass of 34 electrons. In the energy balance ΔM must appear as energy; noting that 1 u corresponds to 931.494013 MeV, we find that the energy that comes free in the reaction is ΔU = 17.59 MeV.
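The mass balance is reproduced below, with the atomic masses (in u) rounded to six decimals:

```python
# Mass balance of the D + T -> 4He + n reaction, in unified atomic mass
# units (u); 1 u corresponds to 931.494 MeV.
masses = {"D": 2.014102, "T": 3.016049, "He4": 4.002603, "n": 1.008665}

dM = (masses["D"] + masses["T"]) - (masses["He4"] + masses["n"])
dU = dM * 931.494                   # energy released, in MeV
print(round(dM, 5), round(dU, 2))   # 0.01888 u and 17.59 MeV
```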
We may contrast this nuclear reaction to a typical chemical reaction,
 H + H → H_{2} + 4.5 eV
The left-hand side (two free hydrogen atoms) has 4.5 eV more relativistic mass than the hydrogen molecule. The reaction energy 4.5 eV corresponds to 8.8 × 10^{−6} m_{e}, which is a completely unobservable loss of mass. The fact that the mass of a molecule is less than the sum of the masses of its constituent atoms is true, but the effect is so small that it is never included in, for instance, the translational or rotational energy of the molecule, where the molecular mass plays a role.
Energy in quantum mechanics
The energy of many (but not all) quantum mechanical systems is quantized, meaning that the energy of the system can take on only discrete values. The historic example of a quantum mechanical system with quantized energies is the one-dimensional harmonic oscillator. Its energies are
 E_{n} = (n + ½)hν,  n = 0, 1, 2, …,
where h is Planck's constant and ν is the fundamental frequency of the harmonic oscillator; hν has the dimension of energy. According to quantum mechanics it is impossible for the harmonic oscillator to have an energy equal to, e.g., 1.35 hν, because there is no integer n such that n + ½ = 1.35. Max Planck^{[7]} was forced to introduce this quantized energy expression in his study of black-body radiation, in which he assumed the walls of the black body to consist of thermally excited harmonic oscillators. This was the beginning of quantum theory.
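The allowed energies, in units of hν, form the ladder 0.5, 1.5, 2.5, …; a value such as 1.35 hν is not on the ladder:

```python
# Allowed energies of the one-dimensional harmonic oscillator,
# E_n = (n + 1/2) h nu, expressed in units of h*nu.
levels = [n + 0.5 for n in range(5)]
print(levels)          # [0.5, 1.5, 2.5, 3.5, 4.5]
print(1.35 in levels)  # False: 1.35 h*nu is not an allowed energy
print(2.5 in levels)   # True: n = 2 gives E = 2.5 h*nu
```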
As stated, not all energies are quantized; those of unbound systems (scattering systems) are not. A well-known example is the ionization of the hydrogen atom, i.e., the removal of the electron from the H atom. Once the electron has obtained an energy larger than the ionization potential (13.6 eV), it is a free electron (with a trajectory disturbed by the field of the H nucleus) that can have any, non-quantized, energy.
In modern quantum mechanics energy, like any observable physical quantity, is represented by a self-adjoint operator, usually designated by H in honor of William Rowan Hamilton. Most terms in the operator H are obtained from the corresponding classical Hamiltonian (the classical energy expressed in the momenta and positions of the particles constituting the system): the momenta are replaced by gradients (times −iħ) and the components of the position vectors are simply reinterpreted as multiplicative operators.
Example
Consider two charged particles of masses m_{1} and m_{2}, with position vectors r_{1} and r_{2}, and charges q_{1} and q_{2}. Classically the particles have kinetic energy and, as potential energy, the Coulomb (electrostatic) energy:
 E = ½m_{1}v_{1}·v_{1} + ½m_{2}v_{2}·v_{2} + q_{1}q_{2}/(4πε_{0}r_{12}),  r_{12} ≡ |r_{1} − r_{2}|.
This is converted into Hamilton form by defining the momenta of the particles,
 p_{1} = m_{1}v_{1},  p_{2} = m_{2}v_{2},
and writing the dot products as squares, p_{i}·p_{i} = p_{i}^{2}, so that
 H = p_{1}^{2}/(2m_{1}) + p_{2}^{2}/(2m_{2}) + q_{1}q_{2}/(4πε_{0}r_{12}).
Quantization means the substitution
 p_{j} → −iħ∇_{j},  j = 1, 2,
and the reinterpretation of r_{1} and r_{2} as multiplicative operators. Here ∇ is the gradient, a vector operator known from vector analysis. Hence the quantum mechanical energy operator (Hamilton operator) finally becomes the self-adjoint operator
 H = −ħ^{2}∇_{1}^{2}/(2m_{1}) − ħ^{2}∇_{2}^{2}/(2m_{2}) + q_{1}q_{2}/(4πε_{0}r_{12}).
(The proof that this operator is selfadjoint is omitted). The eigenvalues of this operator are the quantum mechanical energies of this system consisting of two charged particles. The lower energies (below the ionization threshold) are quantized (discrete), those above the ionization threshold are continuous.
Sometimes it is a matter of concern that operators do not commute, while the corresponding classical quantities always commute. Often one can then fall back on the Beltrami form of the Laplace operator for the kinetic energy. Further, there are quantum mechanical energy terms that do not have classical counterparts. Commonly these terms depend on electron or nuclear spin. Spin terms can either be introduced ad hoc (as in Wolfgang Pauli's theory of electron spin), or derived more rigorously from Paul Dirac's relativistic theory.
As was already discussed in the example, the energies E_{n} of a quantum mechanical system appear as eigenvalues of the eigenvalue equation
 H ψ_{n} = E_{n} ψ_{n},
which is the time-independent Schrödinger equation. Using n to label the eigenstates ψ_{n} may suggest that the eigenvalues are discrete, i.e., that n is an integer. However, this is not necessarily so; n may be a continuous label, in which case ψ_{n} is usually not normalizable and is referred to as a scattering state.
In quantum mechanical studies the eigenvalue problem of any observable may appear occasionally. However, the observable H (the energy) plays a very special and central role: it appears in the fundamental equation of quantum mechanics, Schrödinger's time-dependent equation,
 iħ ∂Ψ/∂t = H Ψ,
which describes the time evolution of the state function Ψ. This equation is the quantum mechanical counterpart of Newton's second law in classical mechanics and Maxwell's equations in electrodynamics.
Notes
 ↑ Strictly speaking there is a distinction between heat and thermal energy: an object possesses thermal energy, while heat is the transfer of thermal energy from one object to another. In practice, however, the words "heat" and "thermal energy" are often used interchangeably.
 ↑ This is somewhat simplified; in practice part of the combustion energy is lost to the hot combustion flue gases (carbon dioxide, nitrogen, water vapor, etc.) that leave the plant.
 ↑ The heat flow Q divided by T is the increase of entropy of a system into which Q flows (at constant temperature T). When the equality sign holds in Eq. (2), this statement says that no entropy is taken up or given off by the heat engine in a full cycle other than Q_{1}/T_{1} and Q_{2}/T_{2}; there are no entropy losses.
 ↑ If entropy losses do occur, which is the more usual case, then the left-hand side of Eq. (2) (the net entropy change) is greater than zero, and
 η < (T_{1} − T_{2})/T_{1}.
 ↑ To avoid misunderstanding: a car loses its mechanical energy mainly by friction with the air. Friction gives an energy loss per unit time proportional to the cube of the speed (v³) of the car. By Newton's first law, without friction a car would not need any mechanical energy once it had reached constant speed. The engine delivers the necessary mechanical power to overcome friction by converting chemical energy (with about 25% efficiency).
 ↑ David J.C. MacKay, writing for a UK readership, considers a car driving 12 km/l (20% more economical than the example in the text) and quotes 0.8 kWh/km (20% less energy per km). See: Sustainable energy without the hot air.
 ↑ M. Planck, Annalen der Physik, vol. 4, p. 553 (1901), Ueber das Gesetz der Energieverteilung im Normalspectrum (About the law of energy distribution in the normal spectrum, Ann. d. Phys. online)
Literature
 Introduction to thermodynamics: P.W. Atkins and Julio de Paula (2002). Atkins' Physical Chemistry, 7th Edition. Oxford University Press. ISBN 0198792859.
 Introduction to classical mechanics and electricity: (2004) Physics for Scientists and Engineers, with Modern Physics, 6th Edition. ThomsonBrooks/Cole. ISBN 0534409490.
 Quantum mechanics: Thomas Engel (2006). Quantum Chemistry and Spectroscopy. Pearson/Benjamin Cummings. ISBN 0805338438.