When we learn physics in school, it is difficult to see how a falling ball can be represented as a math problem whose solution describes the real world. The first method we learn is Newtonian mechanics, where everyone draws a force diagram: little arrows representing the forces that determine how a body will move.

In college, we may learn something called Lagrangian mechanics, which is a little different. Instead of modeling forces, we talk about energy. Energy is often easier to find: kinetic energy, for example, is just one half the mass times the velocity squared.

Lagrangian mechanics lets you solve complex problems more easily because you rarely need to work out the constraint forces explicitly.

But there is a trick to it. It requires identifying something called generalized coordinates, which can sometimes be challenging, though usually not too bad.

The machinery behind Lagrangian mechanics is a method known as the Calculus of Variations.

This involves two ideas: first, that the differential equations of motion governing a classical mechanical system can be deduced from a cost function (the Lagrangian) together with the fact that nature makes the integral of this cost function (the action) stationary, usually a minimum; second, that identifying the equations of motion requires making small, even infinitesimal, changes in our path through the solution space.
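Both ideas can be seen directly in code: discretize a path, then let an optimizer make small changes to it until the action integral is minimized. The sketch below uses a hypothetical example of my own choosing (a ball thrown straight up under gravity), not a system discussed in this article.

```python
# Sketch of the least-action idea: vary a discretized path until the
# action integral S = sum of (kinetic - potential) * dt is minimized.
import numpy as np
from scipy.optimize import minimize

g = 9.81          # gravitational acceleration, m/s^2
m = 1.0           # mass, kg (drops out of the final path)
T = 2.0           # flight time, s: thrown at t = 0, caught at t = T
N = 51            # number of grid points along the path
t = np.linspace(0.0, T, N)
dt = t[1] - t[0]

def action(q_interior):
    """Discrete action: sum over segments of (kinetic - potential) * dt."""
    q = np.concatenate(([0.0], q_interior, [0.0]))  # fixed endpoints
    v = np.diff(q) / dt                             # segment velocities
    kinetic = 0.5 * m * v**2
    potential = m * g * 0.5 * (q[:-1] + q[1:])      # midpoint heights
    return np.sum((kinetic - potential) * dt)

# Start from a deliberately wrong path (flat, zero height everywhere)
# and let the optimizer make "small changes" until S is minimal.
result = minimize(action, np.zeros(N - 2), method="BFGS")
q_opt = np.concatenate(([0.0], result.x, [0.0]))

# The Euler-Lagrange equation gives q(t) = g*t*(T - t)/2 for these
# boundary conditions, peaking at g*T^2/8 = 4.905 m; the minimizer
# should recover that parabola without ever being told about forces.
print(q_opt.max())
```

The point of the sketch is that no force diagram appears anywhere: the energies and the minimization are enough to reproduce the Newtonian trajectory.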

In classical mechanics, the cost function has to do with the energies along a solution curve. Specifically, the cost function (the Lagrangian) is defined on the tangent bundle, i.e. the set of all tangent spaces to the configuration manifold, which contains position–velocity pairs. In Euclidean geometry, by contrast, the “cost” is arc length, built from a metric defined on the manifold itself rather than on the tangent bundle. The solution of a problem in classical mechanics is the analogue of a geodesic: the curve that makes the integral of the Lagrangian stationary.
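The parallel between the two cost functions can be written out explicitly. These are the textbook statements, given here without proof. In mechanics, the action functional and its stationarity condition (the Euler–Lagrange equation) are

$$ S[q] = \int_{t_0}^{t_1} L(q, \dot q, t)\, dt, \qquad \delta S = 0 \;\Longrightarrow\; \frac{d}{dt}\frac{\partial L}{\partial \dot q} - \frac{\partial L}{\partial q} = 0, $$

while in geometry a geodesic makes the arc-length functional stationary:

$$ \ell[x] = \int \sqrt{g_{ij}\,\dot x^i \dot x^j}\; dt, \qquad \delta \ell = 0. $$

Both are instances of the same variational recipe; only the integrand changes.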

In other domains, such as infrared radiometry, it may be possible to construct a cost function based on the least-action principle. However, we must expand our notion of a cost function.

In the example below, the epsilon parameter (the emissivity) is a constant between 0 and 1 for each colored curve. Epsilon sets the scale of the effective blackbody curve; the value epsilon = 1 recovers the ideal blackbody itself.
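A minimal sketch of the gray-body idea: a constant emissivity simply scales Planck's blackbody curve without moving its peak. The temperature, wavelength range, and epsilon values below are illustrative choices of mine, not numbers taken from the figure.

```python
# Gray-body spectral radiance: epsilon times Planck's law.
import numpy as np

h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann constant, J/K

def planck(wavelength, T):
    """Blackbody spectral radiance B(lambda, T), in W / (m^2 sr m)."""
    return (2 * h * c**2 / wavelength**5
            / np.expm1(h * c / (wavelength * k * T)))

def gray_body(wavelength, T, epsilon):
    """Gray body: the blackbody curve scaled by a constant emissivity."""
    return epsilon * planck(wavelength, T)

T = 3000.0                               # illustrative radiator temperature, K
wl = np.linspace(0.1e-6, 5e-6, 20000)    # 100 nm to 5 um
for eps in (0.25, 0.5, 1.0):             # epsilon = 1 is the blackbody
    curve = gray_body(wl, T, eps)
    peak = wl[np.argmax(curve)]
    # Wien's displacement law puts the peak near 2.898e-3 / T metres,
    # and scaling by a constant epsilon does not move it.
    print(eps, peak)
```

Plotting these three curves against `wl` reproduces the family of colored curves described above, with epsilon = 1 as the envelope.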

In the absence of any experimental data on the thermal radiator, all values of epsilon are equally likely. One may think of the radiator as existing (from our perspective) in an undetermined state. By analogy to the Everett interpretation of coherent quantum states, unique versions of the radiator can be said to exist in separate realities. When an experimental observation of the radiator is made, a specific radiation curve is identified, and the precise nature of the radiator is then known in “our” reality. In quantum mechanics, this process is called decoherence. A macroscopic object is unlikely to be in a true quantum state, as such states require very low temperatures; however, the uncertainty in radiant flux can be interpreted using the Everett representation.

The modeling task, therefore, is to perform a controlled decoherence of a large number of possible coexistent states into a single macroscopic state.

This can be done using linear algebra, with a quadratic form serving as the cost function defined on a solution manifold. The matrix representing the form controls the mixing of the individual time streams (realities) [per the Everett interpretation].
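One way this mixing might look in code is sketched below. Everything here is an illustrative assumption of mine rather than the article's actual construction: the candidate emissivity states, the particular matrix `Q`, and the use of the standard constrained minimizer of a quadratic form (x = Q⁻¹1 / 1ᵀQ⁻¹1, a textbook Lagrange-multiplier result) to collapse the states into one.

```python
# Hypothetical sketch: a quadratic form x^T Q x as a cost on mixing
# weights x over candidate "realities" of the radiator.
import numpy as np

# Three hypothetical states of the radiator, each a possible emissivity.
candidate_eps = np.array([0.2, 0.5, 0.9])

# A symmetric positive-definite matrix defining the quadratic form;
# off-diagonal terms couple (mix) the individual states.
Q = np.array([[2.0, 0.3, 0.1],
              [0.3, 1.5, 0.2],
              [0.1, 0.2, 1.0]])

# Minimizing x^T Q x subject to sum(x) = 1 has the closed form
# x = Q^{-1} 1 / (1^T Q^{-1} 1), found with a Lagrange multiplier.
ones = np.ones(3)
w = np.linalg.solve(Q, ones)
w /= w.sum()

# The weighted combination is the single "decohered" macroscopic state.
effective_eps = w @ candidate_eps
print(w, effective_eps)
```

Changing the off-diagonal entries of `Q` changes how strongly the time streams mix, which is the role the text assigns to the matrix of the form.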

Note that the gray body curve is smooth, in contrast to the curve of the selective radiator. An example of a selective radiator is a metal such as copper, which, when reduced to powder and burned in an open flame, produces a specific set of emission peaks corresponding to electronic transitions between its atomic energy levels.

The following Jupyter Notebook and LaTeX write-up show a graphical implementation of these ideas in Python 🙂

And the possible conclusions are startling!