The Basic Superstition of Modern Science

The basic superstition underlying modern science, operative particularly since Descartes, is the idea that reality can be accurately and completely described mathematically.

The monotonous consistency of the evidence, in which every mathematical projection proves to be merely approximate the moment more accurate measuring apparatus is devised, should be sufficient to demonstrate that the idea is flawed. Beyond that, the very basis of the idea, when thought through, is absurd.

Looked at from a non-Cartesian perspective, without the theological assumptions that underpin Descartes’ absolute faith in mathematics, the notion that something invented largely as a game about 2700 years ago says anything fundamental about reality would be odd in itself. When you also consider that, with the exception of geometers, mathematicians have repeatedly insisted that mathematics has no empirical or ontological implications, i.e. that it says nothing about reality, the belief that it somehow must seems even odder. Even the exception of geometers was decisively undermined in the 19th century, when geometries with any arbitrary number of dimensions were found to be as usable an approximation of reality as Euclidean geometry. In fact, the most accurate geometrical approximations are most often found using a six-dimensional geometry rather than a three-dimensional Euclidean one.

Finally, the recasting of mathematics itself from an axiomatic foundation, that is, one based on simple notions that, while unprovable, are self-evident, to a foundation in the ZF formulation of set theory, something in no way self-evidently the case as far as reality goes, should have been the final nail in the coffin for the superstition.

Yet it persists as the common belief of the majority of scientific researchers, popularizers and teachers. Reality is judged somehow wanting as an approximation of mathematical ‘perfection’, rather than mathematics being judged an imperfect and radically incomplete approximation of reality.

The reason this assumption retains such a hold is twofold:

  1. It corresponds to the basic mythos of western society: the substantialization of number, as measure, into something somehow ‘real’.
  2. So many of science’s basic assumptions depend on it that even scientists who became aware of the issue, such as Bohr and Heisenberg, have been reluctant to make their conclusions clear, since those conclusions invalidate fundamental tenets of modern science such as the truth-claim of the repeatable experiment.

Avoiding admitting the falsity of such claims, though, doesn’t change the fact that they are inherently false. There is, however, a further belief that serves to prop up the notion that even if it’s completely irrational, somehow the superstition must be correct: the belief that the functioning of modern technology proves conclusively the correctness of modern science’s basic assumptions.

This last belief is at the heart of the notion of technology as “applied science”. Yet the reality is that technologists rarely use scientific results unless they happen to provide a convenient shortcut, and almost never use the scientific method. Further, it is not science that ‘discovers’ things, but technology itself. Something has to already be ‘uncovered’, revealed in some way, in order to become an object for science. Science itself, coming after this a priori revealing, which in modern times occurs for the most part via technology, merely attempts to account-for what has been revealed.

As a simple example, Galileo didn’t invent the telescope to test his theories about the orbits of Jupiter’s moons; rather the existence of the telescope revealed data concerning both that and what the moons were in the first place, and gave strong hints as to how they behaved. Only after having determined that and what Jupiter’s moons were could Galileo use those hints to come up with a hypothesis to account-for how they must behave in order to generate the appearances he observed via the telescope. It took significantly better technology, not available until the late 17th century, for Jesuit scholars to demonstrate that his hypothesis in fact correctly accounted for the apparent behaviour.

Currency and Mythos

If one accepts the posit that western civilization is and always has been essentially greco-roman, then the gods of the greco-roman pantheon are the primordial gods of our civilization, not the god of the abrahamic religions. The first supreme god of Hellenic civilization was Chaos. Chaos is unrepresentable, not by commandment, as with later gods such as Yahweh, but intrinsically. As infinite, constant exchange, nothing in Chaos can be fixed or determined; as the most primordial sublime, Chaos is “the night in which all cows are black”. Beyond being unrepresentable, Chaos was not even properly thinkable within the developing rationality of Greek culture.

The last god of the greco-roman pantheon, Janus, is a god the Romans claimed as exclusively their own, which is correct in a sense: he is the god of currency, ports, trade and the like. Yet Janus is also a transformation of Chaos, and simultaneously a substantialization, almost a conceptualization, though not as restrictive as a rational concept. There was also a Greek transformation of Chaos, Dionysus, who took the place at the table of the Olympians formerly occupied by Hestia, the goddess of home and hearth. As god of wine and currency (the temples were the Greek and Roman mints, among other things, and wine was a common early currency), Dionysus is related to Janus, but thought in a Greek rather than a Roman way.

The full substantialization of currency and its essence as the god arose via technology, one borrowed from another local civilization. That technology was the touchstone, and it had an importance in the Greek city-state model that it hadn’t had with the more hierarchical Hittites. It allowed a relative value to be fully determined, and gave rise to the notion of currency having “intrinsic” value and substance.

At the birth of Imperial Rome the god was itself exchanged for a man, Caesar, whose imprint replaced Janus on Roman currency. As the Roman empire began to wane, an isomorphic exchange of god for man, the mythos of Christianity, concealed the original exchange, and the empire temporarily reinstated the blessed, and therefore valuable, determination of its currency as it transformed into the Holy Roman empire.

The means of exchange was substituted for exchange itself, and this exchange was hidden behind a further exchange of the god for a man. This double concealment has allowed currency to retain its basic assumed mythos virtually unquestioned up to the modern age. However, technology, which allowed the substantialization to begin, has recently reverted the substantialization to such a degree that many in the west rarely touch any representation of currency. This lack of immediacy weakens the basic trust in the mythos.

“The gods of Greece and their supreme god, if they ever come, will return only transformed to a world whose overthrow is grounded in the land of the gods of ancient Greece.” (italics mine)

Martin Heidegger, Sojourns, translated by John Panteleimon Manoussakis (Albany: SUNY Press, 2005)

The notion that the overthrow of the ‘world’, the basic mythos that structures reality for man, the symbolic order in Zizek’s formulation, is grounded in the land of the gods of ancient Greece is at first sight peculiar. Heidegger is not saying that the overthrow, if it ever occurs, is grounded in the gods themselves but in the land from which they originated. This potential overthrow is what Heidegger terms “the last god”. The last god is not an entity even in the sense of the other greco-roman gods, because it is the essence of non-entity itself. It can only appear in any sense as its own passing-away, as its exposure as insubstantial. Currency is a measure, but unlike other measures, it doesn’t measure an entity but only the differencing of entities, which can never itself be an entity.

The possibility of another beginning of Western history must occur as the passing and passing-away of the last god, not merely of its presencing or appearing, though it must appear as a hint in order to pass-away. This passing occurs in a stillness that is likewise the most intense motion as absolute tension, a trembling that gathers coming-to-be and passing-away in the decision of a moment (Augenblick).

The passing of the last god occurs from utmost refusal, the most originary “not” of be-ing, out of which alone the hint of the last god can manifest. The last god is not an end but rather the beginning as it sways back into itself. The last god sways as not-granting, i.e., utmost refusal as the farthest going ahead. Be-ing is itself insubstantial: what determines a thing, insofar as that and what it is, is precisely (ex)change itself; a thing is always an (ex)change from something else. As such, like theo in its originary form, Be-ing is purely verbal.

Be-ing, as (ex)change, was usurped by its measure, currency, in the first beginning, which as such became the concealed last god of greco-roman history, Janus, a stand-in for Chaos, a concealing of Chaos itself concealed by the Christ figure as a stand-in for Janus. The idolatry of Janus is itself concealed by its transformation as Christianity.

Currency can only sway into its ownmost not-being through the utmost refusal to (ex)change via currency, thus exposing its lack of substantiality, its lack of even being a measure of (ex)change. By the not-granting of refusal, the last god passes, and another beginning becomes possible. This other beginning must, as a recoil of the first, be fundamentally a granting, the freeing claim that the essence of technology, properly responded to, brings us into the vicinity of.

Are we getting better? Is this really Progress?

There’s a basic assumption that we are getting better, as individuals, as a society, that is so embedded that even to question it is to be radically heretical. Yet as an assumption it’s never actually determined as justifiable, on the pretext that it’s self-evident.

Is it though? Do we ever really think about any instances that would demonstrate that self-evidence? I decided to look at ways in which we are self-evidently getting better to see what the overall sense or meaning of the assumption truly is.

So, what are we getting substantially, self-evidently better at? “Getting better” is itself too generic to be judged, so we need to specify some of the various ways in which we are getting better. Here are a few:

  1. We are definitely getting better at mass production; all the productivity statistics demonstrate that. The area of mass production that has shown the most radical increase is the mass production of corpses. The bureaucratization of mass-producing corpses is a minor change when compared to the way we have, in more recent times, totally automated it.
  2. While we are at least maintaining the level of inequality, ensuring that a tiny percentage of the world controls most of the wealth and power, this has been true for a long time. However we are getting significantly better at not seeing this as a problem, but rather as a moral imperative.
  3. We are getting better at starting wars for no real reason. In the not-so-distant past governments had to put significant effort into propaganda in order to drum up support for such wars, but now the effort required is pretty minimal. Everybody knows it’s merely propaganda, that the wars are unjustifiable, and the governments know that everyone knows. They also know that nobody cares sufficiently to do anything about it.
  4. We are getting better at believing in the substantiality and reality of something that is neither substantial nor real, putting our faith in it, even basing our reality on it. And no, I’m not talking about the Christian or Islamic or Jewish God, but the operative god, currency.
  5. We are getting much better at only producing affordable food and medicines that are actually deadly, thus preventing the elderly from becoming a burden on the wealthy.
  6. We are getting better at polluting available fresh water and restricting access to whatever’s left based on ability to pay for it.
  7. We are getting phenomenally better at pawning off the gambling debts of the wealthiest portion of the wealthiest countries onto the public of poorer countries, then holding them to ransom via the IMF and destroying what little economies they do have on whatever whim suits said IMF.
  8. We are getting better at stuffing more people into smaller spaces, spaces that would have been considered unliveable a hundred years ago, and claiming progress because we sell people new gadgets that only serve to ensure they’re available 24/7.

If this is progress, what must regress look like?

Of Comets and Cosmological Presumption

Ian Wright, the lead scientist on the Ptolemy instrument, describes the organics found on the comet as a “frozen primordial soup”, but concedes some colleagues might not agree. “Potentially, that is what we are talking about, but I’ll get pilloried for saying so,” he added.

“If you were to put these materials on the surface of a primitive body like Earth, and give them the right amount of heat and whatever else is required, conceivably, you could form life,” he said.

Rosetta probe studies released, revealing fullest picture of comet yet, The Guardian

Now, whatever Mr. Wright’s credentials, he should be pilloried for the last statement, as should the reporter who reported it without question. If you put anything, or even nothing, together with the right amount of heat (whatever “right” means) and whatever else is required, you can form anything whatsoever, since the latter is completely indefinite.

The compounds found by the Philae lander on the comet in question may be “organic” compounds, but organic in this sense simply means they contain carbon. Methane, one of the most common gases found on other bodies within the solar system, is itself considered an organic compound in this sense, although it’s one of the simplest compounds to be considered organic. The distinction between organic and inorganic carbon compounds itself, while it may be useful in organizing research in chemistry, is completely arbitrary.

The blurring of such lines leads to the equivocations and unwarranted assumptions found in the thinking of researchers involved in this kind of research, since in common terms organic means “arising from life”. The analogical trope underlying the thought process is quite obviously sperm and egg, a simple visual metaphor, yet Mr. Wright and Mr. Goessman appear to be completely oblivious to this underlying analogy guiding their thinking in a heuristic sense.

At the same time, they miss what should be an obvious inference from the active nature of the comet’s surface and what can be determined of its interior, which is that a comet is in some sense systemic, dynamic. This shouldn’t be a surprise, since materially comets are not largely distinct from asteroids, yet observationally they are very distinctive. The distinctive features and behaviour of comets can only be accounted for structurally and systemically in some way. This not only invalidates current material theories of comets, but makes the current theory of comet formation much more problematic. Simple accretion would, by itself, only generate a larger clump of ice, rock and dust. It cannot account for whatever systemic processes are producing the gas spewing from the sinkholes. Not unusually, the theories prove to be massive oversimplifications.

Instead, though, what we get from these researchers is a vast, unwarranted jump from a comet being a “dirty snowball” (or, for other theorists, a “snowy dirtball”), to a comet being some sort of interstellar sperm fertilizing unsuspecting planets without even so much as a by-your-leave.

Of course, while cosmology posits rocky planets such as this one, apparently now prime for fertilization, as having initially been in a liquid state (why cosmologists only go as far as liquid rather than gaseous is one of those mysteries of cosmological presumption), it then proceeds to treat such planets, once they have cooled and formed a crust, as if they were simple rocks in space with sufficient gravity to attract, post facto, enough gases to form an atmosphere. That the initial atmosphere is largely a function of the systemic nature of the planet, which remains far more active in every aspect than inactive, appears to escape notice. That the actual atmosphere on this planet is largely a function of the far more complex processes of life is even further beyond cosmology’s conceptual reach. In the case of planetary formation the theory is quite literally half-baked, while the theory of comet formation is insufficiently defrosted.

Thoughts on the Capabilities and Limitations of 3D Brain Imaging Technology and Neuroscience in General, in Terms of Understanding the Mind

3D Brain Imaging Technology, Helen Thompson, The Guardian, July 30 2015

In reference to this type of technological uncovering of aspects of the neurological system and what it can achieve in terms of assisting in understanding mental phenomena such as mental illness (something specifically posited in the article as a potential for the technology), we need to first understand what it does and does not reveal about the neurological system, and what it by definition cannot reveal about the relation between that system and any mental phenomena.

There’s a huge number of undemonstrated assumptions, including some with demonstrated problems, underlying this kind of research. Many of them are probably necessary in a heuristic sense, in order to provide any sort of starting point, while others are inevitably going to lead to paradoxical issues. The difficulty here, as in many areas, is that the distinction between a heuristic (something used as a guide but not fully assumed as actually true) and a presumption that a guiding notion is actually true is difficult to maintain in practice.

Predictably, the researchers will find that assumed structures have so little commonality between individuals that the notion of a “normal” brain becomes too problematic to be used as a baseline for analysing an “abnormal” brain. Just as predictably, specific messages assumed to take place between areas of the brain will be absent. How neuroscience copes with these challenges to its basic assumptions (so far, other research that has problematized these assumptions is largely ignored) will determine whether what is really only a proto-science at the moment falls into the common trap of the pseudo-scientific or matures appropriately into a proper science.

This also demonstrates the basic relation between technology and science. Technology is never simply applied science; rather, science comes along after technology reveals something and attempts to account-for what has been revealed. This basis in accounting-for is the reason natural science is always associated with mathematics in the most general sense. Technology can reveal certain things about the brain, but it’s impotent in terms of understanding even the most basic things about the mind from that perspective, as is the science that accounts-for what it does reveal.

One of the basic things that needs to be understood in order to form any relation between the neural system (or the body as a whole) and mental phenomena is the means by which they intra-act as a non-dual duality. Obviously they are not truly distinct entities, because they can never encounter each other as such. Yet as “things”, as unities grasped in a given set of data, they have no data points in common.

The mind treats the body as largely imaginary. In order to walk to the kitchen, say, I imagine myself doing it. But I don’t simply imagine it, or I’d still be sitting imagining. It’s difficult to think, though, of the specific difference between the mind triggering action, which requires that it somehow alter the state of the neurological system rather than merely reflecting it, and simply imagining that action.

As Plato determined, the ‘logistikon’ or faculty of reason is predicated on the ‘pharmakon’, which, despite both Foucault and Derrida, cannot be considered madness but, as the word implies, habit. To the neurological system, though, the mind would have to appear as mysterious (if it could experience mystery or the lack of it), since the mind makes demands on something it doesn’t encounter, demands that may or may not be met. Further, it receives demands from that same something it doesn’t encounter, demands it must at least attempt to execute.

In order to make any relation between consciousness, whether normal or abnormal (whatever that properly means) and the neurological system we can’t rely on technology and what it’s capable of uncovering. At best technology may show that certain assumptions are incorrect and that those assumptions are part of what makes the interaction between mind and body appear paradoxical.

What Does Currency Represent?

We initially think of currency as representative, as a signifier of something. But when we try to think of this something, we can’t find anything; currency appears to be a signifier of nothing, a representation of nothing.

More precisely, it represents nothing actual. Currency attempts to represent potentia itself. If I have a hundred dollars, I have the potential of issuing a demand for that amount’s worth of whatever I want. In this sense, currency attempts to represent what can only be actual in time future.

There’s a difficulty, though, as accumulation, or concentration, of currency occurs. I can easily issue a demand for a hundred dollars’ worth of anything available either locally or virtually. If I have a hundred trillion dollars, though, I can’t. Only a small percentage of the total amount of currency can be actualized at any given time, because with potential, any actualization destroys other potentials. Issuing more currency doesn’t inherently devalue it; what devalues it is any overall attempt to actualize more than the percentage that can be actualized at that time. Accumulation has a basic problem in that nothing can be accumulated in time future. The closest to accumulation that can occur is in time past. And even that can be accumulated only as personal and societal memory.
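To make the arithmetic of that claim concrete, here is a toy sketch in Python (every quantity invented for illustration; nothing here is drawn from an actual economy): a fixed actualization capacity per period, against which demands either succeed or instead devalue the currency.

    # Toy model (all numbers invented): only a fraction of the currency stock
    # can be actualized in a given period; demands beyond that capacity are
    # not met, they devalue the currency instead.
    stock = 100e12               # a hundred trillion held as 'potential'
    capacity = 0.02 * stock      # assumed deliverable fraction this period

    def actualize(demand):
        """Return (value actually delivered, implied devaluation factor)."""
        if demand <= capacity:
            return demand, 1.0
        # excess demand chases the same deliverable goods: prices scale up
        return capacity, demand / capacity

    print(actualize(100.0))      # a hundred dollars: met, no devaluation
    print(actualize(50e12))      # half the stock at once: devalued 25-fold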

The metaphysics of presence is necessary to maintain the illusion of accumulation itself in terms of potential. If things do not simply endure from the past, as that metaphysics posits, but recur from the future as self-same, then we have inverted the provenance of what presences from the future to the past. Simultaneously we have inverted the possibility of accumulation of what has presenced from the past to the future, as substantialized potentia, as currency.

Why ‘Computational Biology’ is a Scientific Blunder

If computational biology were to be a feasible approach to anything but the simplest of problems, a first step would involve determining what approaches are ontologically appropriate and internally consistent, i.e. commutative.
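Before the list, a minimal sketch of the kind of inconsistency at stake, assuming nothing beyond the IEEE-754 floats every such approach runs on (Python here): the addition underlying these methods is not even associative, so reordering ‘the same’ computation changes its result.

    # Floating-point addition is not associative: grouping changes the answer.
    vals = [1e16, 1.0, -1e16]
    left  = (vals[0] + vals[1]) + vals[2]   # the 1.0 is absorbed into 1e16 -> 0.0
    right = (vals[0] + vals[2]) + vals[1]   # cancellation happens first    -> 1.0
    print(left, right)                      # prints: 0.0 1.0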

The following is a brief list of approaches used in computational mathematics specifically.

Iterative method

Rate of convergence — the speed at which a convergent sequence approaches its limit

Order of accuracy — rate at which numerical solution of differential equation converges to exact solution

Series acceleration — methods to accelerate the speed of convergence of a series

Aitken’s delta-squared process — most useful for linearly converging sequences

Minimum polynomial extrapolation — for vector sequences

Richardson extrapolation

Shanks transformation — similar to Aitken’s delta-squared process, but applied to the partial sums

Van Wijngaarden transformation — for accelerating the convergence of an alternating series

Abramowitz and Stegun — book containing formulas and tables of many special functions

Digital Library of Mathematical Functions — successor of book by Abramowitz and Stegun

Curse of dimensionality

Local convergence and global convergence — whether you need a good initial guess to get convergence

Superconvergence

Discretization

Difference quotient
Computational complexity of mathematical operations

Smoothed analysis — measuring the expected performance of algorithms under slight random perturbations of worst-case inputs

Symbolic-numeric computation — combination of symbolic and numeric methods
Lattice QCD and Numerical Analysis
Collocation method — discretizes a continuous equation by requiring it only to hold at certain points

Level set method

Level set (data structures) — data structures for representing level sets

Sinc numerical methods — methods based on the sinc function, sinc(x) = sin(x) / x

ABS methods

Error

Error analysis (mathematics)

Approximation

Approximation error

Condition number

Discretization error

Floating point number

Guard digit — extra precision introduced during a computation to reduce round-off error

Truncation — rounding a floating-point number by discarding all digits after a certain digit

Round-off error

Numeric precision in Microsoft Excel

Arbitrary-precision arithmetic

Interval arithmetic — represent every number by two floating-point numbers guaranteed to have the unknown number between them

Interval contractor — maps interval to subinterval which still contains the unknown exact answer

Interval propagation — contracting interval domains without removing any value consistent with the constraints
Loss of significance

Numerical error

Numerical stability

Error propagation:

Propagation of uncertainty

Significance arithmetic

Residual (numerical analysis)

Relative change and difference — the relative difference between x and y is |x − y| / max(|x|, |y|)

Significant figures

False precision — giving more significant figures than appropriate

Truncation error — error committed by doing only a finite number of steps
Affine arithmetic

Elementary and special functions

Summation:

Kahan summation algorithm (a short sketch follows this group)

Pairwise summation — slightly worse than Kahan summation but cheaper

Binary splitting
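A minimal sketch of the Kahan algorithm heading this group, assuming plain Python floats: it carries a correction term for the low-order bits each addition would otherwise lose.

    def kahan_sum(xs):
        """Compensated summation: feed the rounding error of each addition
        back into the next one via a running correction term."""
        total = 0.0
        c = 0.0                      # compensation for lost low-order bits
        for x in xs:
            y = x - c
            t = total + y
            c = (t - total) - y      # algebraically zero; captures round-off
            total = t
        return total

    xs = [0.1] * 10_000_000
    print(sum(xs))        # naive sum drifts visibly from 1,000,000
    print(kahan_sum(xs))  # compensated sum stays at working precision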

Multiplication:

Multiplication algorithm — general discussion, simple methods

Karatsuba algorithm — the first algorithm which is faster than straightforward multiplication

Toom–Cook multiplication — generalization of Karatsuba multiplication

Schönhage–Strassen algorithm — based on Fourier transform, asymptotically very fast

Fürer’s algorithm — asymptotically slightly faster than Schönhage–Strassen

Division algorithm — for computing quotient and/or remainder of two numbers

Long division

Restoring division

Non-restoring division

SRT division

Newton–Raphson division: uses Newton’s method to find the reciprocal of D, and multiplies that reciprocal by N to find the final quotient Q.

Goldschmidt division

Exponentiation:

Exponentiation by squaring

Addition-chain exponentiation

Multiplicative inverse Algorithms: for computing a number’s multiplicative inverse (reciprocal).

Newton’s method
Polynomials:
Horner’s method (a short sketch follows this group)

Estrin’s scheme — modification of the Horner scheme with more possibilities for parallelization

Clenshaw algorithm

De Casteljau’s algorithm
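A minimal sketch of Horner’s method from the group above: a degree-n polynomial evaluated with n multiplications and n additions.

    def horner(coeffs, x):
        """Evaluate a polynomial with coefficients [a_n, ..., a_1, a_0]
        (highest degree first) by repeated multiply-and-add."""
        result = 0.0
        for a in coeffs:
            result = result * x + a
        return result

    # p(x) = 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3
    print(horner([2.0, -6.0, 2.0, -1.0], 3.0))  # 5.0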

Square roots and other roots:

Integer square root

nth root algorithm

Shifting nth root algorithm — similar to long division

hypot — the function √(x² + y²)

Alpha max plus beta min algorithm — approximates hypot(x,y)

Fast inverse square root — calculates 1 / √x using details of the IEEE floating-point system

Elementary functions (exponential, logarithm, trigonometric functions):

Trigonometric tables — different methods for generating them

CORDIC — shift-and-add algorithm using a table of arc tangents

BKM algorithm — shift-and-add algorithm using a table of logarithms and complex numbers

Gamma function:

Lanczos approximation

Spouge’s approximation — modification of Stirling’s approximation; easier to apply than Lanczos

AGM method — computes arithmetic–geometric mean; related methods compute special functions

FEE method (Fast E-function Evaluation) — fast summation of series like the power series for ex

Gal’s accurate tables — table of function values with unequal spacing to reduce round-off error

Spigot algorithm — algorithms that can compute individual digits of a real number

Approximations of π:

Liu Hui’s π algorithm — first algorithm that can compute π to arbitrary precision

Leibniz formula for π — alternating series with very slow convergence

Wallis product — infinite product converging slowly to π/2

Viète’s formula — more complicated infinite product which converges faster

Gauss–Legendre algorithm — iteration which converges quadratically to π, based on arithmetic–geometric mean

Borwein’s algorithm — iteration which converges quartically to 1/π, and other algorithms

Chudnovsky algorithm — fast algorithm that calculates a hypergeometric series

Bailey–Borwein–Plouffe formula — can be used to compute individual hexadecimal digits of π

Bellard’s formula — faster version of Bailey–Borwein–Plouffe formula
Numerical linear algebra

Numerical linear algebra — study of numerical algorithms for linear algebra problems
Types of matrices appearing in numerical analysis:
Sparse matrix

Band matrix

Bidiagonal matrix

Tridiagonal matrix

Pentadiagonal matrix

Skyline matrix

Circulant matrix

Triangular matrix

Diagonally dominant matrix

Block matrix — matrix composed of smaller matrices

Stieltjes matrix — symmetric positive definite with non-positive off-diagonal entries

Hilbert matrix — example of a matrix which is extremely ill-conditioned (and thus difficult to handle)

Wilkinson matrix — example of a symmetric tridiagonal matrix with pairs of nearly, but not exactly, equal eigenvalues

Convergent matrix – square matrix whose successive powers approach the zero matrix

Algorithms for matrix multiplication:

Strassen algorithm

Coppersmith–Winograd algorithm

Cannon’s algorithm — a distributed algorithm, especially suitable for processors laid out in a 2d grid

Freivalds’ algorithm — a randomized algorithm for checking the result of a multiplication

Matrix decompositions:

LU decomposition — lower triangular times upper triangular

QR decomposition — orthogonal matrix times triangular matrix

RRQR factorization — rank-revealing QR factorization, can be used to compute rank of a matrix

Polar decomposition — unitary matrix times positive-semidefinite Hermitian matrix

Decompositions by similarity:

Eigendecomposition — decomposition in terms of eigenvectors and eigenvalues

Jordan normal form — bidiagonal matrix of a certain form; generalizes the eigendecomposition

Weyr canonical form — permutation of Jordan normal form

Jordan–Chevalley decomposition — sum of commuting nilpotent matrix and diagonalizable matrix

Schur decomposition — similarity transform bringing the matrix to a triangular matrix

Singular value decomposition — unitary matrix times diagonal matrix times unitary matrix

Matrix splitting – expressing a given matrix as a sum or difference of matrices
Gaussian elimination

Row echelon form — matrix in which all entries below a nonzero entry are zero

Bareiss algorithm — variant which ensures that all entries remain integers if the initial matrix has integer entries

Tridiagonal matrix algorithm — simplified form of Gaussian elimination for tridiagonal matrices

LU decomposition — write a matrix as a product of an upper- and a lower-triangular matrix (a short sketch follows this group)

Crout matrix decomposition

LU reduction — a special parallelized version of a LU decomposition algorithm

Block LU decomposition

Cholesky decomposition — for solving a system with a positive definite matrix

Minimum degree algorithm

Symbolic Cholesky decomposition

Iterative refinement — procedure to turn an inaccurate solution into a more accurate one
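A minimal sketch of the LU decomposition flagged above, in Doolittle form and without pivoting (illustration only; practical codes pivot for stability):

    def lu_decompose(A):
        """Factor A into L (unit lower triangular) times U (upper
        triangular); Doolittle form, no pivoting, illustration only."""
        n = len(A)
        L = [[float(i == j) for j in range(n)] for i in range(n)]
        U = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i, n):
                U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
            for j in range(i + 1, n):
                L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
        return L, U

    L_, U_ = lu_decompose([[4.0, 3.0], [6.0, 3.0]])
    print(L_)  # [[1.0, 0.0], [1.5, 1.0]]
    print(U_)  # [[4.0, 3.0], [0.0, -1.5]]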

Direct methods for sparse matrices:

Frontal solver — used in finite element methods

Nested dissection — for symmetric matrices, based on graph partitioning

Levinson recursion — for Toeplitz matrices

SPIKE algorithm — hybrid parallel solver for narrow-banded matrices

Cyclic reduction — eliminate even or odd rows or columns, repeat
Iterative methods:
Jacobi method (a short sketch follows this group)

Gauss–Seidel method

Successive over-relaxation (SOR) — a technique to accelerate the Gauss–Seidel method

Symmetric successive overrelaxation (SSOR) — variant of SOR for symmetric matrices

Backfitting algorithm — iterative procedure used to fit a generalized additive model, often equivalent to Gauss–Seidel

Modified Richardson iteration

Conjugate gradient method (CG) — assumes that the matrix is positive definite

Derivation of the conjugate gradient method

Nonlinear conjugate gradient method — generalization for nonlinear optimization problems

Biconjugate gradient method (BiCG)

Biconjugate gradient stabilized method (BiCGSTAB) — variant of BiCG with better convergence

Conjugate residual method — similar to CG but only assumes that the matrix is symmetric

Generalized minimal residual method (GMRES) — based on the Arnoldi iteration

Chebyshev iteration — avoids inner products but needs bounds on the spectrum

Stone’s method (SIP – Strongly Implicit Procedure) — uses an incomplete LU decomposition

Kaczmarz method

Preconditioner

Incomplete Cholesky factorization — sparse approximation to the Cholesky factorization

Incomplete LU factorization — sparse approximation to the LU factorization

Uzawa iteration — for saddle point problems
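A minimal sketch of the Jacobi method heading this group, assuming a diagonally dominant matrix so the iteration converges:

    def jacobi(A, b, iters=50):
        """Jacobi iteration for Ax = b: solve each equation for its own
        unknown using only the previous iterate (needs diagonal dominance)."""
        n = len(b)
        x = [0.0] * n
        for _ in range(iters):
            x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        return x

    # 4x + y = 6, x + 3y = 7 has solution x = 1, y = 2
    print(jacobi([[4.0, 1.0], [1.0, 3.0]], [6.0, 7.0]))  # ~ [1.0, 2.0]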

Underdetermined and overdetermined systems (systems that have no or more than one solution):

Numerical computation of null space — find all solutions of an underdetermined system

Moore–Penrose pseudoinverse — for finding solution with smallest 2-norm (for underdetermined systems) or smallest residual

Sparse approximation — for finding the sparsest solution (i.e., the solution with as many zeros as possible)
Eigenvalue algorithm — a numerical algorithm for locating the eigenvalues of a matrix

Power iteration

Inverse iteration

Rayleigh quotient iteration

Arnoldi iteration — based on Krylov subspaces

Lanczos algorithm — Arnoldi, specialized for positive-definite matrices

Block Lanczos algorithm — for when matrix is over a finite field

QR algorithm

Jacobi eigenvalue algorithm — select a small submatrix which can be diagonalized exactly, and repeat

Jacobi rotation — the building block, almost a Givens rotation

Jacobi method for complex Hermitian matrices

Divide-and-conquer eigenvalue algorithm

Folded spectrum method

LOBPCG — Locally Optimal Block Preconditioned Conjugate Gradient Method

Eigenvalue perturbation — stability of eigenvalues under perturbations of the matrix

Other concepts and algorithms

Orthogonalization algorithms:

Gram–Schmidt process

Householder transformation

Householder operator — analogue of Householder transformation for general inner product spaces

Givens rotation

Krylov subspace

Block matrix pseudoinverse

Bidiagonalization

Cuthill–McKee algorithm — permutes rows/columns in sparse matrix to yield a narrow band matrix

In-place matrix transposition — computing the transpose of a matrix without using much additional storage

Pivot element — entry in a matrix on which the algorithm concentrates

Matrix-free methods — methods that only access the matrix by evaluating matrix-vector products

Interpolation and approximation

Interpolation — construct a function going through some given data points

Nearest-neighbor interpolation — takes the value of the nearest neighbor

Polynomial interpolation

Polynomial interpolation — interpolation by polynomials

Linear interpolation

Runge’s phenomenon

Vandermonde matrix

Chebyshev polynomials

Chebyshev nodes

Lebesgue constant (interpolation)

Different forms for the interpolant:

Newton polynomial

Divided differences

Neville’s algorithm — for evaluating the interpolant; based on the Newton form (a short sketch follows this group)

Lagrange polynomial

Bernstein polynomial — especially useful for approximation

Brahmagupta’s interpolation formula — seventh-century formula for quadratic interpolation
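A minimal sketch of Neville’s algorithm from the group above: the unique interpolating polynomial evaluated at a point by repeated linear interpolation.

    def neville(xs, ys, x):
        """Evaluate the polynomial through the points (xs[i], ys[i]) at x by
        building up interpolants of increasing degree (Neville's scheme)."""
        p = list(ys)
        n = len(xs)
        for level in range(1, n):
            for i in range(n - level):
                p[i] = ((x - xs[i + level]) * p[i] + (xs[i] - x) * p[i + 1]) \
                       / (xs[i] - xs[i + level])
        return p[0]

    # three points on y = x^2: the quadratic interpolant reproduces it
    print(neville([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5))  # 2.25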
Bilinear interpolation

Trilinear interpolation

Bicubic interpolation

Tricubic interpolation

Padua points — set of points in R2 with unique polynomial interpolant and minimal growth of Lebesgue constant

Hermite interpolation

Birkhoff interpolation

Abel–Goncharov interpolation
Spline interpolation — interpolation by piecewise polynomials

Spline (mathematics) — the piecewise polynomials used as interpolants

Perfect spline — polynomial spline of degree m whose mth derivative is ±1

Cubic Hermite spline

Centripetal Catmull–Rom spline — special case of cubic Hermite splines without self-intersections or cusps

Monotone cubic interpolation

Hermite spline

Bézier curve

De Casteljau’s algorithm

composite Bézier curve

Generalizations to more dimensions:

Bézier triangle — maps a triangle to R3

Bézier surface — maps a square to R3

B-spline

Box spline — multivariate generalization of B-splines

Truncated power function

De Boor’s algorithm — generalizes De Casteljau’s algorithm

Non-uniform rational B-spline (NURBS)

T-spline — can be thought of as a NURBS surface for which a row of control points is allowed to terminate

Kochanek–Bartels spline

Coons patch — type of manifold parametrization used to smoothly join other surfaces together

M-spline — a non-negative spline

I-spline — a monotone spline, defined in terms of M-splines

Smoothing spline — a spline fitted smoothly to noisy data

Blossom (functional) — a unique, affine, symmetric map associated to a polynomial or spline

Trigonometric interpolation — interpolation by trigonometric polynomials

Discrete Fourier transform — can be viewed as trigonometric interpolation at equidistant points

Relations between Fourier transforms and Fourier series

Fast Fourier transform (FFT) — a fast method for computing the discrete Fourier transform

Bluestein’s FFT algorithm

Bruun’s FFT algorithm

Cooley–Tukey FFT algorithm (a short sketch follows this group)

Split-radix FFT algorithm — variant of Cooley–Tukey that uses a blend of radices 2 and 4

Goertzel algorithm

Prime-factor FFT algorithm

Rader’s FFT algorithm

Bit-reversal permutation — particular permutation of vectors with 2^m entries used in many FFTs.

Butterfly diagram

Twiddle factor — the trigonometric constant coefficients that are multiplied by the data

Cyclotomic fast Fourier transform — for FFT over finite fields

Methods for computing discrete convolutions with finite impulse response filters using the FFT:

Overlap–add method

Overlap–save method

Sigma approximation

Dirichlet kernel — convolving any function with the Dirichlet kernel yields its trigonometric interpolant

Gibbs phenomenon
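A minimal sketch of the radix-2 Cooley–Tukey FFT flagged above, assuming the input length is a power of two (recursive for clarity; library FFTs are iterative and far faster):

    import cmath

    def fft(x):
        """Radix-2 Cooley-Tukey: transform even and odd halves recursively,
        then recombine with twiddle factors. len(x) must be a power of two."""
        n = len(x)
        if n == 1:
            return list(x)
        even, odd = fft(x[0::2]), fft(x[1::2])
        tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
        return [even[k] + tw[k] for k in range(n // 2)] + \
               [even[k] - tw[k] for k in range(n // 2)]

    # DFT of a step signal of length 8
    print([round(abs(v), 3) for v in fft([1, 1, 1, 1, 0, 0, 0, 0])])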

Other interpolants

Simple rational approximation

Polynomial and rational function modeling — comparison of polynomial and rational interpolation

Wavelet

Continuous wavelet

Transfer matrix


Inverse distance weighting

Radial basis function (RBF) — a function of the form ƒ(x) = φ(|x−x0|)

Polyharmonic spline — a commonly used radial basis function

Thin plate spline — a specific polyharmonic spline: r² log r

Hierarchical RBF

Subdivision surface — constructed by recursively subdividing a piecewise linear interpolant

Catmull–Clark subdivision surface

Doo–Sabin subdivision surface

Loop subdivision surface

Slerp (spherical linear interpolation) — interpolation between two points on a sphere

Generalized quaternion interpolation — generalizes slerp for interpolation between more than two quaternions

Irrational base discrete weighted transform

Nevanlinna–Pick interpolation — interpolation by analytic functions in the unit disc subject to a bound

Pick matrix — the Nevanlinna–Pick interpolation has a solution if this matrix is positive semi-definite

Multivariate interpolation — the function being interpolated depends on more than one variable

Barnes interpolation — method for two-dimensional functions using Gaussians common in meteorology

Coons surface — combination of linear interpolation and bilinear interpolation

Lanczos resampling — based on convolution with a sinc function

Natural neighbor interpolation

Nearest neighbor value interpolation

PDE surface

Transfinite interpolation — constructs function on planar domain given its values on the boundary

Trend surface analysis — based on low-order polynomials of spatial coordinates; uses scattered observations
Approximation theory:
Orders of approximation

Lebesgue’s lemma

Curve fitting

Vector field reconstruction

Modulus of continuity — measures smoothness of a function

Least squares (function approximation) — minimizes the error in the L2-norm

Minimax approximation algorithm — minimizes the maximum error over an interval (the L∞-norm)

Equioscillation theorem — characterizes the best approximation in the L∞-norm

Unisolvent point set — function from given function space is determined uniquely by values on such a set of points

Stone–Weierstrass theorem — continuous functions can be approximated uniformly by polynomials, or certain other function spaces
Approximation by polynomials:
Linear approximation

Bernstein polynomial — basis of polynomials useful for approximating a function

Bernstein’s constant — error when approximating |x| by a polynomial

Remez algorithm — for constructing the best polynomial approximation in the L∞-norm

Bernstein’s inequality (mathematical analysis) — bound on maximum of derivative of polynomial in unit disk

Mergelyan’s theorem — generalization of Stone–Weierstrass theorem for polynomials

Müntz–Szász theorem — variant of Stone–Weierstrass theorem for polynomials if some coefficients have to be zero

Bramble–Hilbert lemma — upper bound on Lp error of polynomial approximation in multiple dimensions

Discrete Chebyshev polynomials — polynomials orthogonal with respect to a discrete measure

Favard’s theorem — polynomials satisfying suitable 3-term recurrence relations are orthogonal polynomials

Approximation by Fourier series / trigonometric polynomials:

Jackson’s inequality — upper bound for best approximation by a trigonometric polynomial

Bernstein’s theorem (approximation theory) — a converse to Jackson’s inequality

Fejér’s theorem — Cesàro means of partial sums of Fourier series converge uniformly for continuous periodic functions

Erdős–Turán inequality — bounds distance between probability and Lebesgue measure in terms of Fourier coefficients

Different approximations:

Moving least squares

Padé approximant

Padé table — table of Padé approximants

Hartogs–Rosenthal theorem — continuous functions can be approximated uniformly by rational functions on a set of Lebesgue measure zero

Szász–Mirakyan operator — approximation by e^(−nx) x^k on a semi-infinite interval

Szász–Mirakjan–Kantorovich operator

Baskakov operator — generalize Bernstein polynomials, Szász–Mirakyan operators, and Lupas operators

Favard operator — approximation by sums of Gaussians

Surrogate model — application: replacing a function that is hard to evaluate by a simpler function

Constructive function theory — field that studies connection between degree of approximation and smoothness

Universal differential equation — differential–algebraic equation whose solutions can approximate any continuous function

Fekete problem — find N points on a sphere that minimize some kind of energy

Carleman’s condition — condition guaranteeing that a measure is uniquely determined by its moments

Krein’s condition — condition that exponential sums are dense in weighted L2 space

Lethargy theorem — about distance of points in a metric space from members of a sequence of subspaces

Wirtinger’s representation and projection theorem
Extrapolation:
Linear predictive analysis — linear extrapolation

Unisolvent functions — functions for which the interpolation problem has a unique solution

Regression analysis

Isotonic regression

Curve-fitting compaction

Interpolation (computer graphics)

Finding roots of nonlinear equations

See the Numerical linear algebra section above for linear equations

Root-finding algorithm — algorithms for solving the equation f(x) = 0
Bisection method — simple and robust; linear convergence

Lehmer–Schur algorithm — variant for complex functions
Newton’s method — based on linear approximation around the current iterate; quadratic convergence (a sketch comparing it with bisection closes this section)

Kantorovich theorem — gives a region around solution such that Newton’s method converges

Newton fractal — indicates which initial condition converges to which root under Newton iteration

Quasi-Newton method — uses an approximation of the Jacobian:

Broyden’s method — uses a rank-one update for the Jacobian

Symmetric rank-one — a symmetric (but not necessarily positive definite) rank-one update of the Jacobian

Davidon–Fletcher–Powell formula — update of the Jacobian in which the matrix remains positive definite

Broyden–Fletcher–Goldfarb–Shanno algorithm — rank-two update of the Jacobian in which the matrix remains positive definite

Limited-memory BFGS method — truncated, matrix-free variant of BFGS method suitable for large problems

Steffensen’s method — uses divided differences instead of the derivative

Secant method — based on linear interpolation at last two iterates

False position method — secant method with ideas from the bisection method

Muller’s method — based on quadratic interpolation at last three iterates

Sidi’s generalized secant method — higher-order variants of secant method

Inverse quadratic interpolation — similar to Muller’s method, but interpolates the inverse

Brent’s method — combines bisection method, secant method and inverse quadratic interpolation

Ridders’ method — fits a linear function times an exponential to last two iterates and their midpoint

Halley’s method — uses f, f′ and f″; achieves cubic convergence

Householder’s method — uses first d derivatives to achieve order d + 1; generalizes Newton’s and Halley’s method
Methods for polynomials:
Aberth method

Bairstow’s method

Durand–Kerner method

Graeffe’s method

Jenkins–Traub algorithm — fast, reliable, and widely used

Laguerre’s method

Splitting circle method

Analysis:

Wilkinson’s polynomial

Numerical continuation — tracking a root as one parameter in the equation changes

Piecewise linear continuation
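To close the section, a minimal sketch contrasting the two methods flagged above, bisection (robust, linear convergence) and Newton’s method (fast, but needing a derivative and a good start), both finding √2 as the positive root of x² − 2:

    def bisect(f, a, b, tol=1e-12):
        """Bisection: repeatedly halve a bracket [a, b] with f(a)*f(b) < 0.
        Always converges, but only linearly."""
        while b - a > tol:
            m = (a + b) / 2
            if f(a) * f(m) <= 0:
                b = m
            else:
                a = m
        return (a + b) / 2

    def newton(f, df, x, steps=8):
        """Newton's method: follow the tangent line at each iterate;
        quadratic convergence near a simple root."""
        for _ in range(steps):
            x -= f(x) / df(x)
        return x

    f = lambda x: x * x - 2.0
    print(bisect(f, 1.0, 2.0))              # ~ 1.4142135623730951
    print(newton(f, lambda x: 2 * x, 1.0))  # ~ 1.414213562373095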

Optimization

Mathematical optimization — algorithm for finding maxima or minima of a given function

Basic concepts

Active set

Candidate solution

Constraint (mathematics)

Constrained optimization — studies optimization problems with constraints

Binary constraint — a constraint that involves exactly two variables

Corner solution

Feasible region — contains all solutions that satisfy the constraints but may not be optimal

Global optimum and Local optimum

Maxima and minima

Slack variable

Continuous optimization

Discrete optimization
Algorithms for linear programming:
Simplex algorithm

Bland’s rule — rule to avoid cycling in the simplex method

Klee–Minty cube — perturbed (hyper)cube; simplex method has exponential complexity on such a domain

Criss-cross algorithm — similar to the simplex algorithm

Big M method — variation of simplex algorithm for problems with both “less than” and “greater than” constraints

Interior point method

Ellipsoid method

Karmarkar’s algorithm

Mehrotra predictor–corrector method

Column generation

k-approximation of k-hitting set — algorithm for specific LP problems (to find a weighted hitting set)

Linear complementarity problem
Decompositions:
Benders’ decomposition

Dantzig–Wolfe decomposition

Theory of two-level planning

Variable splitting

Basic solution (linear programming) — solution at vertex of feasible region

Fourier–Motzkin elimination

Hilbert basis (linear programming) — set of integer vectors in a convex cone which generate all integer vectors in the cone

LP-type problem

Linear inequality

Vertex enumeration problem — list all vertices of the feasible set

Convex optimization

Convex optimization

Quadratic programming

Linear least squares (mathematics)

Total least squares

Frank–Wolfe algorithm

Sequential minimal optimization — breaks up large QP problems into a series of smallest possible QP problems

Bilinear program

Basis pursuit — minimize L1-norm of vector subject to linear constraints

Basis pursuit denoising (BPDN) — regularized version of basis pursuit

In-crowd algorithm — algorithm for solving basis pursuit denoising

Linear matrix inequality

Conic optimization

Semidefinite programming

Second-order cone programming

Sum-of-squares optimization

Quadratic programming (see above)

Bregman method — row-action method for strictly convex optimization problems

Proximal gradient method — uses splitting of the objective function into a sum of possibly non-differentiable pieces

Subgradient method — extension of steepest descent for problems with a non-differentiable objective function

Biconvex optimization — generalization where objective function and constraint set can be biconvex

Nonlinear programming

Nonlinear programming — the most general optimization problem in the usual framework

Special cases of nonlinear programming:

See Linear programming and Convex optimization above

Geometric programming — problems involving signomials or posynomials

Signomial — similar to polynomials, but exponents need not be integers

Posynomial — a signomial with positive coefficients

Quadratically constrained quadratic program

Linear-fractional programming — objective is ratio of linear functions, constraints are linear

Fractional programming — objective is ratio of nonlinear functions, constraints are linear

Nonlinear complementarity problem (NCP) — find x such that x ≥ 0, f(x) ≥ 0 and x^T f(x) = 0

Least squares — the objective function is a sum of squares

Non-linear least squares

Gauss–Newton algorithm

BHHH algorithm — variant of Gauss–Newton in econometrics

Generalized Gauss–Newton method — for constrained nonlinear least-squares problems

Levenberg–Marquardt algorithm

Iteratively reweighted least squares (IRLS) — solves a weighted least-squares problem at every iteration

Partial least squares — statistical techniques similar to principal components analysis

Non-linear iterative partial least squares (NIPALS)

Mathematical programming with equilibrium constraints — constraints include variational inequalities or complementarities

Univariate optimization:

Golden section search (a short sketch follows this group)

Successive parabolic interpolation — based on quadratic interpolation through the last three iterates
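A minimal sketch of the golden section search noted above, assuming a unimodal objective on the starting bracket (re-evaluates f at each step for clarity rather than caching):

    import math

    def golden_section(f, a, b, tol=1e-8):
        """Shrink a bracket around the minimum of a unimodal f, keeping
        golden-ratio proportions so the bracket contracts geometrically."""
        invphi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi, about 0.618
        c = b - invphi * (b - a)
        d = a + invphi * (b - a)
        while b - a > tol:
            if f(c) < f(d):          # minimum lies in [a, d]
                b, d = d, c
                c = b - invphi * (b - a)
            else:                    # minimum lies in [c, b]
                a, c = c, d
                d = a + invphi * (b - a)
        return (a + b) / 2

    print(golden_section(lambda x: (x - 2.0) ** 2, 0.0, 5.0))  # ~ 2.0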
Guess value — the initial guess for a solution with which an algorithm starts

Line search

Backtracking line search

Wolfe conditions

Gradient method — method that uses the gradient as the search direction

Gradient descent (a short sketch follows this group)

Stochastic gradient descent

Landweber iteration — mainly used for ill-posed problems
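A minimal sketch of plain gradient descent from this group, using a fixed step size on a smooth convex function:

    def gradient_descent(grad, x, lr=0.1, steps=200):
        """Fixed-step gradient descent: repeatedly move against the
        gradient; converges for suitable lr on smooth convex functions."""
        for _ in range(steps):
            x = [xi - lr * gi for xi, gi in zip(x, grad(x))]
        return x

    # f(x, y) = (x - 1)^2 + (y + 2)^2, with gradient (2(x-1), 2(y+2))
    grad = lambda p: [2 * (p[0] - 1.0), 2 * (p[1] + 2.0)]
    print(gradient_descent(grad, [0.0, 0.0]))  # ~ [1.0, -2.0]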

Successive linear programming (SLP) — replace problem by a linear programming problem, solve that, and repeat

Sequential quadratic programming (SQP) — replace problem by a quadratic programming problem, solve that, and repeat

Newton’s method in optimization

See also under Newton algorithm in the section Finding roots of nonlinear equations

Nonlinear conjugate gradient method

Derivative-free methods

Coordinate descent — move in one of the coordinate directions

Adaptive coordinate descent — adapt coordinate directions to objective function

Random coordinate descent — randomized version

Nelder–Mead method

Pattern search (optimization)

Powell’s method — based on conjugate gradient descent

Rosenbrock methods — derivative-free method, similar to Nelder–Mead but with guaranteed convergence

Augmented Lagrangian method — replaces constrained problems by unconstrained problems with a term added to the objective function

Ternary search

Tabu search

Guided Local Search — modification of search algorithms which builds up penalties during a search

Reactive search optimization (RSO) — the algorithm adapts its parameters automatically

MM algorithm — majorize-minimization, a wide framework of methods

Least absolute deviations

Expectation–maximization algorithm

Ordered subset expectation maximization

Adaptive projected subgradient method

Nearest neighbor search

Space mapping — uses “coarse” (ideal or low-fidelity) and “fine” (practical or high-fidelity) models

Optimal control and infinite-dimensional optimization

Optimal control

Pontryagin’s minimum principle — infinite-dimensional version of Lagrange multipliers

Costate equations — equation for the “Lagrange multipliers” in Pontryagin’s minimum principle

Hamiltonian (control theory) — minimum principle says that this function should be minimized

Types of problems:

Linear-quadratic regulator — system dynamics is a linear differential equation, objective is quadratic

Linear-quadratic-Gaussian control (LQG) — system dynamics is a linear SDE with additive noise, objective is quadratic

Optimal projection equations — method for reducing dimension of LQG control problem

Algebraic Riccati equation — matrix equation occurring in many optimal control problems

Bang–bang control — control that switches abruptly between two states

Covector mapping principle

Differential dynamic programming — uses locally-quadratic models of the dynamics and cost functions

DNSS point — initial state for certain optimal control problems with multiple optimal solutions

Legendre–Clebsch condition — second-order condition for solution of optimal control problem

Pseudospectral optimal control

Bellman pseudospectral method — based on Bellman’s principle of optimality

Chebyshev pseudospectral method — uses Chebyshev polynomials (of the first kind)

Flat pseudospectral method — combines Ross–Fahroo pseudospectral method with differential flatness

Gauss pseudospectral method — uses collocation at the Legendre–Gauss points

Legendre pseudospectral method — uses Legendre polynomials

Pseudospectral knotting method — generalization of pseudospectral methods in optimal control

Ross–Fahroo pseudospectral method — class of pseudospectral method including Chebyshev, Legendre and knotting

Ross–Fahroo lemma — condition to make discretization and duality operations commute

Ross’ π lemma — there is a fundamental time constant within which a control solution must be computed for controllability and stability

Sethi model — optimal control problem modelling advertising

Infinite-dimensional optimization

Semi-infinite programming — infinite number of variables and finite number of constraints, or other way around

Shape optimization, Topology optimization — optimization over a set of regions

Topological derivative — derivative with respect to a change in the shape

Generalized semi-infinite programming — finite number of variables, infinite number of constraints
Approaches to deal with uncertainty:

Markov decision process

Partially observable Markov decision process

Probabilistic-based design optimization

Robust optimization

Wald’s maximin model

Scenario optimization — constraints are uncertain

Stochastic approximation

Stochastic optimization

Stochastic programming

Stochastic gradient descent
Random optimization algorithms:
Random search — choose a point randomly in ball around current iterate

Simulated annealing

Adaptive simulated annealing — variant in which the algorithm parameters are adjusted during the computation.

Great Deluge algorithm

Mean field annealing — deterministic variant of simulated annealing

Bayesian optimization — treats objective function as a random function and places a prior over it

Evolutionary algorithm

Differential evolution

Evolutionary programming

Genetic algorithm, Genetic programming

Genetic algorithms in economics

MCACEA (Multiple Coordinated Agents Coevolution Evolutionary Algorithm) — uses an evolutionary algorithm for every agent

Simultaneous perturbation stochastic approximation (SPSA)

Luus–Jaakola

Particle swarm optimization

Stochastic tunneling

Harmony search — mimics the improvisation process of musicians
Convex analysis — study of convex functions, i.e. functions f such that f(tx + (1 − t)y) ≤ tf(x) + (1 − t)f(y) for t ∈ [0,1]

Pseudoconvex function — function f such that ∇f · (y − x) ≥ 0 implies f(y) ≥ f(x)

Quasiconvex function — function f such that f(tx + (1 − t)y) ≤ max(f(x), f(y)) for t ∈ [0,1]

Subderivative

Geodesic convexity — convexity for functions defined on a Riemannian manifold

Duality (optimization)

Weak duality — dual solution gives a bound on the primal solution

Strong duality — primal and dual solutions are equivalent

Shadow price

Dual cone and polar cone

Duality gap — difference between primal and dual solution

Fenchel’s duality theorem — relates minimization problems with maximization problems of convex conjugates

Perturbation function — any function which relates to primal and dual problems

Slater’s condition — sufficient condition for strong duality to hold in a convex optimization problem

Total dual integrality — concept of duality for integer linear programming

Wolfe duality — for when objective function and constraints are differentiable

Farkas’ lemma

Karush–Kuhn–Tucker conditions (KKT) — first-order necessary conditions for a solution to be optimal; sufficient under convexity assumptions

Fritz John conditions — variant of KKT conditions

Lagrange multiplier

Lagrange multipliers on Banach spaces

Semi-continuity

Complementarity — study of problems with constraints of the form 〈u, v〉 = 0

Mixed complementarity problem

Mixed linear complementarity problem

Lemke’s algorithm — method for solving (mixed) linear complementarity problems

Danskin’s theorem — used in the analysis of minimax problems

Maximum theorem — the maximum and maximizer are continuous as function of parameters, under some conditions

Relaxation (approximation) — approximating a given problem by an easier problem by relaxing some constraints

Lagrangian relaxation

Linear programming relaxation — ignoring the integrality constraints in a linear programming problem

Self-concordant function

Reduced cost — cost for increasing a variable by a small amount

Hardness of approximation — computational complexity of getting an approximate solution

Geometric median — the point minimizing the sum of distances to a given set of points

Chebyshev center — the centre of the smallest ball containing a given set of points
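
The geometric median above admits a particularly compact iterative scheme, Weiszfeld's algorithm; the following sketch uses invented sample points and omits the safeguard needed when an iterate lands exactly on a data point.

    # Weiszfeld's iteration for the geometric median of a point set (sketch).
    # Sample points are invented; no safeguard for an iterate hitting a point.
    points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
    y = (1.0, 1.0)                               # initial guess
    for _ in range(100):
        nx = ny = den = 0.0
        for (px, py) in points:
            d = ((px - y[0]) ** 2 + (py - y[1]) ** 2) ** 0.5
            nx += px / d
            ny += py / d
            den += 1.0 / d
        y = (nx / den, ny / den)
    print(y)        # the point minimizing the sum of distances to the four points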

Iterated conditional modes — maximizing joint probability of Markov random field

Response surface methodology — used in the design of experiments

Automatic label placement

Compressed sensing — reconstruct a signal from knowledge that it is sparse or compressible

Cutting stock problem

Demand optimization

Destination dispatch — an optimization technique for dispatching elevators

Energy minimization

Entropy maximization

Highly optimized tolerance

Hyperparameter optimization

Inventory control problem

Newsvendor model

Extended newsvendor model

Assemble-to-order system

Linear programming decoding

Linear search problem — find a point on a line by moving along the line

Low-rank approximation — find best approximation, constraint is that rank of some matrix is smaller than a given number

Meta-optimization — optimization of the parameters in an optimization method

Multidisciplinary design optimization

Optimal computing budget allocation — maximize the overall simulation efficiency for finding an optimal decision

Paper bag problem

Process optimization

Recursive economics — individuals make a series of two-period optimization decisions over time.

Stigler diet

Space allocation problem

Stress majorization

Trajectory optimization

Transportation theory

Wing-shape optimization

Miscellaneous

Combinatorial optimization

Dynamic programming

Bellman equation

Hamilton–Jacobi–Bellman equation — continuous-time analogue of Bellman equation

Backward induction — solving dynamic programming problems by reasoning backwards in time (see the sketch after this list)

Optimal stopping — choosing the optimal time to take a particular action

Odds algorithm

Robbins’ problem
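
Backward induction and optimal stopping can be made concrete with a toy example: an offer uniform on 1..10 arrives in each of five rounds, and one must accept it or wait. The payoff structure is invented purely for illustration.

    # Backward induction for a toy optimal stopping problem: in each of 5 rounds
    # an offer uniform on 1..10 arrives; accept it or wait. Payoffs are invented.
    T = 5
    offers = range(1, 11)
    V = 0.0                            # value of having no rounds left
    thresholds = []
    for _ in range(T):                 # reason backwards in time
        thresholds.append(V)           # continue only when the offer is below V
        V = sum(max(o, V) for o in offers) / 10
    print(list(reversed(thresholds)))  # acceptance thresholds, first round first
    print(V)                           # expected payoff under the optimal rule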

Global optimization:

BRST algorithm

MCS algorithm

Multi-objective optimization — there are multiple conflicting objectives

Benson’s algorithm — for linear vector optimization problems

Bilevel optimization — studies problems in which one problem is embedded in another

Optimal substructure

Dykstra’s projection algorithm — finds a point in the intersection of two convex sets

Algorithmic concepts:

Barrier function

Penalty method

Trust region

Test functions for optimization:

Rosenbrock function — two-dimensional function with a banana-shaped valley

Himmelblau’s function — two-dimensional function with four local minima, defined by f(x, y) = (x^2 + y − 11)^2 + (x + y^2 − 7)^2

Rastrigin function — two-dimensional function with many local minima

Shekel function — multimodal and multidimensional
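
The Rosenbrock and Himmelblau functions above are easy to write down, and a crude random search of the kind listed earlier shows why they are used as tests; the step size and iteration count below are arbitrary.

    # The two test functions above, plus a crude hill-climbing random search
    # on the Rosenbrock function; step size and iteration count are arbitrary.
    import random

    def rosenbrock(x, y):              # banana-shaped valley, minimum at (1, 1)
        return (1 - x) ** 2 + 100 * (y - x * x) ** 2

    def himmelblau(x, y):              # four local minima, all with value 0
        return (x * x + y - 11) ** 2 + (x + y * y - 7) ** 2

    print(himmelblau(3, 2))            # one of its four minima, value 0
    best = (0.0, 0.0)
    for _ in range(100000):
        cand = (best[0] + random.gauss(0, 0.1), best[1] + random.gauss(0, 0.1))
        if rosenbrock(*cand) < rosenbrock(*best):
            best = cand
    print(best, rosenbrock(*best))     # creeps toward (1, 1); the valley is narrow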

Mathematical Optimization Society

Numerical quadrature (integration)

Numerical integration — the numerical evaluation of an integral

Rectangle method — first-order method, based on (piecewise) constant approximation

Trapezoidal rule — second-order method, based on (piecewise) linear approximation

Simpson’s rule — fourth-order method, based on (piecewise) quadratic approximation (see the code sketch after the Newton–Cotes entry)

Adaptive Simpson’s method

Boole’s rule — sixth-order method, based on the values at five equidistant points

Newton–Cotes formulas — generalizes the above methods
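
The composite versions of the trapezoidal and Simpson rules above take only a few lines; the integrand and interval below are arbitrary test choices (the exact value of the integral of e^x over [0, 1] is e − 1).

    # Composite trapezoidal and Simpson's rules; error orders visible directly.
    import math

    def trapezoid(f, a, b, n):
        h = (b - a) / n
        return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

    def simpson(f, a, b, n):           # n must be even
        h = (b - a) / n
        s = f(a) + f(b)
        s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
        s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
        return s * h / 3

    exact = math.e - 1
    print(exact - trapezoid(math.exp, 0, 1, 16))    # error shrinks like h^2
    print(exact - simpson(math.exp, 0, 1, 16))      # error shrinks like h^4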

Romberg’s method — Richardson extrapolation applied to trapezium rule

Gaussian quadrature — highest possible degree with given number of points

Chebyshev–Gauss quadrature — extension of Gaussian quadrature for integrals with weight (1 − x²)^(±1/2) on [−1, 1]

Gauss–Hermite quadrature — extension of Gaussian quadrature for integrals with weight exp(−x²) on (−∞, ∞)

Gauss–Jacobi quadrature — extension of Gaussian quadrature for integrals with weight (1 − x)^α (1 + x)^β on [−1, 1]

Gauss–Laguerre quadrature — extension of Gaussian quadrature for integrals with weight exp(−x) on [0, ∞)

Gauss–Kronrod quadrature formula — nested rule based on Gaussian quadrature

Gauss–Kronrod rules

Tanh-sinh quadrature — applies the trapezoidal rule after a double-exponential change of variables; works well with singularities at the end points

Clenshaw–Curtis quadrature — based on expanding the integrand in terms of Chebyshev polynomials

Adaptive quadrature — adapting the subintervals in which the integration interval is divided depending on the integrand

Monte Carlo integration — takes random samples of the integrand
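
For comparison, a Monte Carlo estimate of the same toy integral of e^x over [0, 1]; the error of such an estimate shrinks only like 1/sqrt(N), independent of dimension.

    # Monte Carlo estimate of the integral of e^x over [0, 1] by random sampling.
    import math, random

    N = 100000
    estimate = sum(math.exp(random.random()) for _ in range(N)) / N
    print(estimate, math.e - 1)        # estimate vs. exact value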

Quantized state systems method (QSS) — based on the idea of state quantization

Lebedev quadrature — uses a grid on a sphere with octahedral symmetry

Sparse grid

Coopmans approximation

Numerical differentiation — approximating the derivative of a function from sampled values

Numerical smoothing and differentiation

Adjoint state method — approximates gradient of a function in an optimization problem

Euler–Maclaurin formula

Numerical methods for ordinary differential equations — the numerical solution of ordinary differential equations (ODEs)

Euler method — the most basic method for solving an ODE (sketched in code below)

Explicit and implicit methods — implicit methods need to solve an equation at every step

Backward Euler method — implicit variant of the Euler method

Trapezoidal rule — second-order implicit method
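
The explicit and implicit Euler variants above, applied to y′ = −y with exact solution e^(−t); for this linear equation the implicit update can be solved in closed form, and the step size is an arbitrary choice.

    # Forward and backward Euler on y' = -y, y(0) = 1 (exact solution e^(-t)).
    import math

    h, T = 0.1, 1.0
    y_fwd = y_bwd = 1.0
    t = 0.0
    while t < T - 1e-12:
        y_fwd = y_fwd + h * (-y_fwd)   # explicit: y_{n+1} = y_n + h f(t, y_n)
        y_bwd = y_bwd / (1 + h)        # implicit: y_{n+1} = y_n + h f(t, y_{n+1})
        t += h
    print(y_fwd, y_bwd, math.exp(-1))  # both first-order accurate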

Runge–Kutta methods — one of the two main classes of methods for initial-value problems

Midpoint method — a second-order method with two stages

Heun’s method — either a second-order method with two stages, or a third-order method with three stages

Bogacki–Shampine method — a third-order method with four stages (FSAL) and an embedded second-order method

Cash–Karp method — a fifth-order method with six stages and an embedded fourth-order method

Dormand–Prince method — a fifth-order method with seven stages (FSAL) and an embedded fourth-order method

Runge–Kutta–Fehlberg method — a fifth-order method with six stages and an embedded fourth-order method

Gauss–Legendre method — family of A-stable methods with optimal order based on Gaussian quadrature

Butcher group — algebraic formalism involving rooted trees for analysing Runge–Kutta methods

List of Runge–Kutta methods

Linear multistep method — the other main class of methods for initial-value problems

Backward differentiation formula — implicit methods of order 2 to 6; especially suitable for stiff equations

Numerov’s method — fourth-order method for equations of the form y″ = f(t, y)

Predictor–corrector method — uses one method to approximate solution and another one to increase accuracy

General linear methods — a class of methods encapsulating linear multistep and Runge-Kutta methods

Bulirsch–Stoer algorithm — combines the midpoint method with Richardson extrapolation to attain arbitrary order

Exponential integrator — based on splitting ODE in a linear part, which is solved exactly, and a nonlinear part

Methods designed for the solution of ODEs from classical physics:

Newmark-beta method — based on the extended mean-value theorem

Verlet integration — a popular second-order method

Leapfrog integration — another name for Verlet integration

Beeman’s algorithm — a two-step method extending the Verlet method

Dynamic relaxation

Geometric integrator — a method that preserves some geometric structure of the equation

Symplectic integrator — a method for the solution of Hamilton’s equations that preserves the symplectic structure

Variational integrator — symplectic integrators derived using the underlying variational principle

Semi-implicit Euler method — variant of Euler method which is symplectic when applied to separable Hamiltonians (sketched below)

Energy drift — phenomenon that energy, which should be conserved, drifts away due to numerical errors
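
The practical point of symplectic methods, and of the energy-drift entry above, can be seen on a harmonic oscillator with Hamiltonian H = (p² + q²)/2; the step size and duration are arbitrary.

    # Plain Euler vs. semi-implicit (symplectic) Euler on a harmonic oscillator.
    q1 = q2 = 1.0
    p1 = p2 = 0.0
    h = 0.1
    for _ in range(10000):
        # plain (explicit) Euler
        q1, p1 = q1 + h * p1, p1 - h * q1
        # semi-implicit Euler: update p first, then q with the new p
        p2 = p2 - h * q2
        q2 = q2 + h * p2
    print((p1 * p1 + q1 * q1) / 2)     # grows without bound (energy drift)
    print((p2 * p2 + q2 * q2) / 2)     # stays near the initial value 0.5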

Other methods for initial value problems (IVPs):

Bi-directional delay line

Partial element equivalent circuit

Methods for solving two-point boundary value problems (BVPs):

Shooting method

Direct multiple shooting method — divides interval in several subintervals and applies the shooting method on each subinterval

Methods for solving differential-algebraic equations (DAEs), i.e., ODEs with constraints:

Constraint algorithm — for solving Newton’s equations with constraints

Pantelides algorithm — for reducing the index of a DAE

Methods for solving stochastic differential equations (SDEs):

Euler–Maruyama method — generalization of the Euler method for SDEs

Milstein method — a method with strong order one

Runge–Kutta method (SDE) — generalization of the family of Runge–Kutta methods for SDEs

Methods for solving integral equations:

Nyström method — replaces the integral with a quadrature rule

Analysis:

Truncation error (numerical integration) — local and global truncation errors, and their relationships

Lady Windermere’s Fan (mathematics) — telescopic identity relating local and global truncation errors

Stiff equation — roughly, an ODE for which unstable methods need a very short step size, but stable methods do not

L-stability — method is A-stable and stability function vanishes at infinity

Dynamic errors of numerical methods of ODE discretization — logarithm of stability function

Adaptive stepsize — automatically changing the step size when that seems advantageous

Numerical partial differential equations — the numerical solution of partial differential equations (PDEs)

Finite difference method — based on approximating differential operators with difference operators

Finite difference — the discrete analogue of a differential operator

Finite difference coefficient — table of coefficients of finite-difference approximations to derivatives

Discrete Laplace operator — finite-difference approximation of the Laplace operator

Eigenvalues and eigenvectors of the second derivative — includes eigenvalues of discrete Laplace operator

Kronecker sum of discrete Laplacians — used for Laplace operator in multiple dimensions

Discrete Poisson equation — discrete analogue of the Poisson equation using the discrete Laplace operator

Stencil (numerical analysis) — the geometric arrangements of grid points affected by a basic step of the algorithm

Compact stencil — stencil which only uses a few grid points, usually only the immediate and diagonal neighbours

Higher-order compact finite difference scheme

Non-compact stencil — any stencil that is not compact

Five-point stencil — two-dimensional stencil consisting of a point and its four immediate neighbours on a rectangular grid
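
A minimal sketch of the five-point stencil in use: Gauss–Seidel-style relaxation sweeps for the discrete Laplace equation on a square grid. Grid size, boundary data and sweep count are arbitrary.

    # Five-point stencil in action: in-place relaxation sweeps for the
    # discrete Laplace equation on a square grid with fixed boundary values.
    n = 20
    u = [[0.0] * n for _ in range(n)]
    for j in range(n):
        u[0][j] = 1.0                  # hot top edge; the other edges stay at 0

    for _ in range(500):               # repeated sweeps converge to the solution
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                # each interior point becomes the average of its four neighbours
                u[i][j] = (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1]) / 4
    print(u[n // 2][n // 2])           # settles to a value between 0 and 1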

Finite difference methods for heat equation and related PDEs:

FTCS scheme (forward-time central-space) — first-order explicit

Crank–Nicolson method — second-order implicit
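
A minimal FTCS sketch for the one-dimensional heat equation u_t = u_xx; the scheme is stable only when r = dt/dx² ≤ 1/2, and the grid and initial data below are illustrative.

    # FTCS (forward-time central-space) for the 1-D heat equation u_t = u_xx.
    n = 50
    dx = 1.0 / n
    dt = 0.4 * dx * dx                 # r = 0.4, inside the stability limit
    r = dt / (dx * dx)
    u = [0.0] * (n + 1)
    u[n // 2] = 1.0                    # initial heat spike in the middle
    for _ in range(1000):
        u = [0.0] + [u[i] + r * (u[i-1] - 2 * u[i] + u[i+1]) for i in range(1, n)] + [0.0]
    print(max(u))                      # the spike diffuses and decays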

Finite difference methods for hyperbolic PDEs like the wave equation:

Lax–Friedrichs method — first-order explicit

Lax–Wendroff method — second-order explicit

MacCormack method — second-order explicit

Upwind scheme

Upwind differencing scheme for convection — first-order scheme for convection–diffusion problems

Lax–Wendroff theorem — conservative scheme for hyperbolic system of conservation laws converges to the weak solution

Alternating direction implicit method (ADI) — update using the flow in x-direction and then using flow in y-direction

Finite difference methods for option pricing

Finite-difference time-domain method — a finite-difference method for electrodynamics

Finite element methods

Finite element method — based on a discretization of the space of solutions (a one-dimensional sketch follows below)

Finite element method in structural mechanics — a physical approach to finite element methods

Galerkin method — a finite element method in which the residual is orthogonal to the finite element space

Discontinuous Galerkin method — a Galerkin method in which the approximate solution is not continuous

Rayleigh–Ritz method — a finite element method based on variational principles

Spectral element method — high-order finite element methods

hp-FEM — variant in which both the size and the order of the elements are automatically adapted
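
A one-dimensional Galerkin finite element sketch for −u″ = 1 on (0, 1) with u(0) = u(1) = 0 and piecewise-linear hat functions; the exact solution is u(x) = x(1 − x)/2, and the element count is arbitrary.

    # 1-D Galerkin FEM sketch: -u'' = 1 on (0, 1), u(0) = u(1) = 0, with
    # piecewise-linear hat functions; exact solution u(x) = x(1 - x)/2.
    n = 10                             # number of elements
    h = 1.0 / n
    # tridiagonal stiffness matrix (2/h diagonal, -1/h off-diagonal), load h*1
    A = [[0.0] * (n - 1) for _ in range(n - 1)]
    b = [h] * (n - 1)
    for i in range(n - 1):
        A[i][i] = 2.0 / h
        if i > 0:
            A[i][i - 1] = A[i - 1][i] = -1.0 / h
    # solve A x = b by naive Gaussian elimination (fine at this size)
    for i in range(n - 2):
        m = A[i + 1][i] / A[i][i]
        for j in range(n - 1):
            A[i + 1][j] -= m * A[i][j]
        b[i + 1] -= m * b[i]
    x = [0.0] * (n - 1)
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n - 1))) / A[i][i]
    print(x[n // 2 - 1], 0.5 * (1 - 0.5) / 2)   # midpoint value vs. exact 0.125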

Examples of finite elements:

Bilinear quadrilateral element — also known as the Q4 element

Constant strain triangle element (CST) — also known as the T3 element

Barsoum elements

Direct stiffness method — a particular implementation of the finite element method, often used in structural analysis

Trefftz method

Finite element updating

Extended finite element method — puts functions tailored to the problem in the approximation space

Functionally graded elements — elements for describing functionally graded materials

Superelement — particular grouping of finite elements, employed as a single element

Interval finite element method — combination of finite elements with interval arithmetic

Discrete exterior calculus — discrete form of the exterior calculus of differential geometry

Modal analysis using FEM — solution of eigenvalue problems to find natural vibrations

Céa’s lemma — solution in the finite-element space is an almost best approximation in that space of the true solution

Patch test (finite elements) — simple test for the quality of a finite element

MAFELAP (MAthematics of Finite ELements and APplications) — international conference held at Brunel University

NAFEMS — not-for-profit organisation that sets and maintains standards in computer-aided engineering analysis

Multiphase topology optimisation — technique based on finite elements for determining optimal composition of a mixture

Applied element method — for simulation of cracks and structural collapse

Wood–Armer method — structural analysis method based on finite elements used to design reinforcement for concrete slabs

Isogeometric analysis — integrates finite elements into conventional NURBS-based CAD design tools

Stiffness matrix — finite-dimensional analogue of differential operator

Combination with meshfree methods:

Weakened weak form — form of a PDE that is weaker than the standard weak form

G space — functional space used in formulating the weakened weak form

Smoothed finite element method

List of finite element software packages

Other methods

Spectral method — based on the Fourier transformation

Pseudo-spectral method

Method of lines — reduces the PDE to a large system of ordinary differential equations

Boundary element method (BEM) — based on transforming the PDE to an integral equation on the boundary of the domain

Interval boundary element method — a version using interval arithmetic

Analytic element method — similar to the boundary element method, but the integral equation is evaluated analytically

Finite volume method — based on dividing the domain into many small domains; popular in computational fluid dynamics

Godunov’s scheme — first-order conservative scheme for fluid flow, based on piecewise constant approximation

MUSCL scheme — second-order variant of Godunov’s scheme

AUSM — advection upstream splitting method

Flux limiter — limits spatial derivatives (fluxes) in order to avoid spurious oscillations

Riemann solver — a solver for Riemann problems (a conservation law with piecewise constant data)

Properties of discretization schemes — finite volume methods can be conservative, bounded, etc.

Discrete element method — a method in which the elements can move freely relative to each other

Extended discrete element method — adds properties such as strain to each particle

Movable cellular automaton — combination of cellular automata with discrete elements

Meshfree methods — do not use a mesh, but use a particle view of the field

Discrete least squares meshless method — based on minimization of weighted summation of the squared residual

Diffuse element method

Finite pointset method — represents the continuum by a point cloud

Moving Particle Semi-implicit Method

Method of fundamental solutions (MFS) — represents solution as linear combination of fundamental solutions

Variants of MFS with source points on the physical boundary:

Boundary knot method (BKM)

Boundary particle method (BPM)

Regularized meshless method (RMM)

Singular boundary method (SBM)

Methods designed for problems from electromagnetics:

Finite-difference time-domain method — a finite-difference method

Rigorous coupled-wave analysis — semi-analytical Fourier-space method based on Floquet’s theorem

Transmission-line matrix method (TLM) — based on analogy between electromagnetic field and mesh of transmission lines

Uniform theory of diffraction — specifically designed for scattering problems

Particle-in-cell — used especially in fluid dynamics

Multiphase particle-in-cell method — considers solid particles as both numerical particles and fluid

High-resolution scheme

Shock capturing method

Vorticity confinement — for vortex-dominated flows in fluid dynamics, similar to shock capturing

Split-step method

Fast marching method

Orthogonal collocation

Lattice Boltzmann methods — for the solution of the Navier-Stokes equations

Roe solver — for the solution of the Euler equation

Relaxation (iterative method) — a method for solving elliptic PDEs by converting them to evolution equations

Broad classes of methods:

Mimetic methods — methods that respect in some sense the structure of the original problem

Multiphysics — models consisting of various submodels with different physics

Immersed boundary method — for simulating elastic structures immersed within fluids

Multisymplectic integrator — extension of symplectic integrators, which are for ODEs

Stretched grid method — for problems whose solution can be related to the behaviour of an elastic grid

Techniques for improving these methods

Multigrid method — uses a hierarchy of nested meshes to speed up the methods

Domain decomposition methods — divides the domain into a few subdomains and solves the PDE on these subdomains

Additive Schwarz method

Abstract additive Schwarz method — abstract version of additive Schwarz without reference to geometric information

Balancing domain decomposition method (BDD) — preconditioner for symmetric positive definite matrices

Balancing domain decomposition by constraints (BDDC) — further development of BDD

Finite element tearing and interconnect (FETI)

FETI-DP — further development of FETI

Fictitious domain method — preconditioner constructed with a structured mesh on a fictitious domain of simple shape

Mortar methods — meshes on the subdomains do not match

Neumann–Dirichlet method — combines Neumann problem on one subdomain with Dirichlet problem on other subdomain

Neumann–Neumann methods — domain decomposition methods that use Neumann problems on the subdomains

Poincaré–Steklov operator — maps tangential electric field onto the equivalent electric current

Schur complement method — early and basic method on subdomains that do not overlap

Schwarz alternating method — early and basic method on subdomains that overlap

Coarse space — variant of the problem which uses a discretization with fewer degrees of freedom

Adaptive mesh refinement — uses the computed solution to refine the mesh only where necessary

Fast multipole method — hierarchical method for evaluating particle-particle interactions

Perfectly matched layer — artificial absorbing layer for wave equations, used to implement absorbing boundary conditions

Grids and meshes

Grid classification / Types of mesh:

Polygon mesh — consists of polygons in 2D or 3D

Triangle mesh — consists of triangles in 2D or 3D

Triangulation (geometry) — subdivision of given region in triangles, or higher-dimensional analogue

Nonobtuse mesh — mesh in which all angles are less than or equal to 90°

Point set triangulation — triangle mesh in which every point of a given set is a vertex of a triangle

Polygon triangulation — triangle mesh inside a polygon

Delaunay triangulation — triangulation such that no vertex is inside the circumcircle of any triangle

Constrained Delaunay triangulation — generalization of the Delaunay triangulation that forces certain required segments into the triangulation

Pitteway triangulation — for any point, triangle containing it has nearest neighbour of the point as a vertex

Minimum-weight triangulation — triangulation of minimum total edge length

Kinetic triangulation — a triangulation that moves over time

Triangulated irregular network

Quasi-triangulation — subdivision into simplices, where vertices are not points but arbitrarily sloped line segments

Volume mesh — consists of three-dimensional shapes

Regular grid — consists of congruent parallelograms, or higher-dimensional analogue

Unstructured grid

Geodesic grid — isotropic grid on a sphere

Mesh generation

Image-based meshing — automatic procedure of generating meshes from 3D image data

Marching cubes — extracts a polygon mesh from a scalar field

Parallel mesh generation

Ruppert’s algorithm — creates quality Delaunay triangulations from piecewise linear data

Apollonian network — undirected graph formed by recursively subdividing a triangle

Barycentric subdivision — standard way of dividing arbitrary convex polygons into triangles, or the higher-dimensional analogue

Chew’s second algorithm — improves Delaunay triangulations by refining poor-quality triangles

Laplacian smoothing — improves polygonal meshes by moving the vertices

Jump-and-Walk algorithm — for finding triangle in a mesh containing a given point

Spatial twist continuum — dual representation of a mesh consisting of hexahedra

Pseudotriangle — simply connected region between any three mutually tangent convex sets

Simplicial complex — all vertices, line segments, triangles, tetrahedra, …, making up a mesh

Lax equivalence theorem — a consistent method is convergent if and only if it is stable

Courant–Friedrichs–Lewy condition — stability condition for hyperbolic PDEs

Von Neumann stability analysis — all Fourier components of the error should be stable

Numerical diffusion — diffusion introduced by the numerical method, in addition to that which is naturally present

False diffusion

Numerical resistivity — the same, with resistivity instead of diffusion

Weak formulation — a functional-analytic reformulation of the PDE necessary for some methods

Total variation diminishing — property of schemes that do not introduce spurious oscillations

Godunov’s theorem — linear monotone schemes can only be of first order

Motz’s problem — benchmark problem for singularity problems

Direct simulation Monte Carlo

Quasi-Monte Carlo method

Markov chain Monte Carlo

Metropolis–Hastings algorithm (sketched in code below)

Multiple-try Metropolis — modification which allows larger step sizes

Wang and Landau algorithm — extension of Metropolis Monte Carlo

Equation of State Calculations by Fast Computing Machines — 1953 article proposing the Metropolis Monte Carlo algorithm

Multicanonical ensemble — sampling technique that uses Metropolis–Hastings to compute integrals
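
A random-walk Metropolis–Hastings sketch, targeting a one-dimensional standard normal density known only up to its normalizing constant (which the algorithm never needs); the proposal width and sample count are arbitrary.

    # Random-walk Metropolis-Hastings targeting an unnormalized standard
    # normal density; MCMC never needs the normalizing constant.
    import math, random

    def unnorm(x):                     # unnormalized target density
        return math.exp(-x * x / 2)

    x = 0.0
    samples = []
    for _ in range(50000):
        prop = x + random.gauss(0, 1.0)                 # symmetric proposal
        if random.random() < unnorm(prop) / unnorm(x):  # Metropolis accept rule
            x = prop
        samples.append(x)              # a rejection repeats the old state
    print(sum(samples) / len(samples))                  # near 0
    print(sum(s * s for s in samples) / len(samples))   # near 1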

Gibbs sampling

Coupling from the past

Reversible-jump Markov chain Monte Carlo

Dynamic Monte Carlo method

Kinetic Monte Carlo

Gillespie algorithm

Particle filter

Auxiliary particle filter

Reverse Monte Carlo

Demon algorithm

Pseudo-random number sampling

Inverse transform sampling — general and straightforward method but computationally expensive

Rejection sampling — sample from a simpler distribution but reject some of the samples

Ziggurat algorithm — uses a pre-computed table covering the probability distribution with rectangular segments

For sampling from a normal distribution:

Box–Muller transform

Marsaglia polar method
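
The Box–Muller transform above in full: two independent uniforms become two independent standard normal deviates.

    # Box-Muller transform: two uniforms give two independent normal deviates.
    import math, random

    u1 = 1.0 - random.random()         # in (0, 1], so log(u1) is safe
    u2 = random.random()
    r = math.sqrt(-2 * math.log(u1))
    z1 = r * math.cos(2 * math.pi * u2)
    z2 = r * math.sin(2 * math.pi * u2)
    print(z1, z2)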

Convolution random number generator — generates a random variable as a sum of other random variables

Indexed search

Variance reduction techniques:

Antithetic variates (sketched after this list)

Control variates

Importance sampling

Stratified sampling

VEGAS algorithm

Low-discrepancy sequence

Constructions of low-discrepancy sequences
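
Antithetic variates, the first entry in this group, can be demonstrated on the same toy integral used earlier; pairing u with 1 − u yields negatively correlated samples and hence a smaller variance for the same mean.

    # Antithetic variates on the integral of e^x over [0, 1]: pairing u with
    # 1 - u gives negatively correlated samples and a smaller variance.
    import math, random

    N = 10000
    plain = [math.exp(random.random()) for _ in range(2 * N)]
    anti = []
    for _ in range(N):
        u = random.random()
        anti.append((math.exp(u) + math.exp(1 - u)) / 2)   # average the pair

    def mean(xs):
        return sum(xs) / len(xs)

    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    print(mean(plain), var(plain) / (2 * N))   # plain estimator and its variance
    print(mean(anti), var(anti) / N)           # same mean, smaller variance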

Event generator

Parallel tempering

Umbrella sampling — improves sampling in physical systems with significant energy barriers

Hybrid Monte Carlo

Ensemble Kalman filter — recursive filter suitable for problems with a large number of variables

Transition path sampling

Walk-on-spheres method — to generate exit-points of Brownian motion from bounded domains

Ensemble forecasting — produce multiple numerical predictions from slightly differing initial conditions or parameters

Bond fluctuation model — for simulating the conformation and dynamics of polymer systems

Iterated filtering

Metropolis light transport

Monte Carlo localization — estimates the position and orientation of a robot

Monte Carlo methods for electron transport

Monte Carlo method for photon transport

Monte Carlo methods in finance

Monte Carlo methods for option pricing

Quasi-Monte Carlo methods in finance

Monte Carlo molecular modeling

Path integral molecular dynamics — incorporates Feynman path integrals

Quantum Monte Carlo

Diffusion Monte Carlo — uses a Green function to solve the Schrödinger equation

Gaussian quantum Monte Carlo

Path integral Monte Carlo

Reptation Monte Carlo

Variational Monte Carlo

Methods for simulating the Ising model:

Swendsen–Wang algorithm — entire sample is divided into equal-spin clusters

Wolff algorithm — improvement of the Swendsen–Wang algorithm

Metropolis–Hastings algorithm

Auxiliary field Monte Carlo — computes averages of operators in many-body quantum mechanical problems

Cross-entropy method — for multi-extremal optimization and importance sampling

Large eddy simulation

Smoothed-particle hydrodynamics

Aeroacoustic analogy — used in numerical aeroacoustics to reduce sound sources to simple emitter types

Stochastic Eulerian Lagrangian method — uses Eulerian description for fluids and Lagrangian for structures

Explicit algebraic stress model

Computational magnetohydrodynamics (CMHD) — studies electrically conducting fluids

Geodesic grid

Quantum jump method — used for simulating open quantum systems, operates on wave function

Dynamic Design Analysis Method (DDAM) — for evaluating effect of underwater explosions on equipment

Cell lists

Coupled cluster

Density functional theory

DIIS — direct inversion in (or of) the iterative subspace

Now, if one were to pick any given set of these approaches, one would first have to ensure that the set is commutative: many of these approaches assume slightly different ontological premises and thus cannot be used together. This is not an issue for mathematics per se, since mathematics doesn’t claim to imply any relation to empirical data; it becomes an issue, however, when computational approaches are applied to empirical data.

Even between the simplest mathematical systems, for instance regular arithmetic and simple linear algebra, there is no rational transition from one to the other. We tend to assume there is, primarily because most of us made the transition initially as children and habituated the change in the basis of understanding the symbology involved. In terms of the process of learning, this is reinforced by the sense that simple algebra is somehow based on arithmetic, since arithmetic is always an assumed prerequisite. However, this is illusory. The prerequisite practice of arithmetic is simply the habituation of the ability to manipulate mathematical symbols in the most general sense; it in no way implies that there is any necessary or even contingent relation between the symbology of arithmetic and that of simple algebra. Within computational mathematics, which is only a small subset of mathematical systems in general, there are dozens of underlying mathematical systems that have no rational transitions between them, i.e. are non-commutative. Within computation, which generally runs on a ‘good enough’ approach, this only occasionally creates issues. However, if one is trying to model a given system accurately, rather than simply using a ‘good enough’ simulation to provide an optimization to a purely computational problem, this simultaneous use of non-commutative approaches cannot be permitted.

Once it is confirmed that the set is fully commutative (and we have no quick or simple means of confirming this), one would have to ensure that the operative ontology of all the approaches is a sensible one in terms of understanding and manipulating empirical biological data, i.e. one would have to determine that the operative ontology of the mathematical approaches is identical to the actual operative ontology of real biological systems. We have no theoretical means of accomplishing this, never mind a practical method.

Even within the small set of biologically inspired computational approaches, while it is true that the models behave in a manner that is somewhat similar to the biological systems they were inspired by, it is also true that they do not do so with any accuracy. This lack of realistic precision could be due to the model being a relatively closed system when compared with the actual biological system, or it could be due to the model failing to take into account or failing to accurately determine initial conditions for all relevant parameters, or it could be that the model is based on invalid ontological assumptions and merely mimics a certain aspect of reality without implying anything ontologically valid about reality. There’s no means to distinguish between these potential origins of inaccuracy except by modeling the system in question and its entire spatio-temporal environment, which is nothing other than the rest of reality itself, with all parameters accurately determined. The only feasible model we can ever have for that is reality itself.

While even the more complex single-celled systems are beyond the modeling capability of computational mathematics, multicellular, cell-differentiated systems are beyond the capability of computation in a more general sense. In a similar manner to the lack of commutativity between different mathematical systems, there is no commutativity between a single-cell system and a multicellular, cell-differentiated system, i.e. there is no rational transition between them, since the generation of a more comprehensive generic view is itself an ontological, not a rational, exercise.