Of Comets and Cosmological Presumption

Ian Wright, the lead scientist on the Ptolemy instrument, describes the organics found on the comet as a “frozen primordial soup”, but concedes some colleagues might not agree. “Potentially, that is what we are talking about, but I’ll get pilloried for saying so,” he added.

“If you were to put these materials on the surface of a primitive body like Earth, and give them the right amount of heat and whatever else is required, conceivably, you could form life,” he said.

Rosetta probe studies released, revealing fullest picture of comet yet, The Guardian

Now, whatever Mr. Wright’s credentials, he should be pilloried for the last statement, as should the reporter who reported it without question. If you put anything, or even nothing, together with the right amount of heat (whatever “right” means) and whatever else is required, you can form anything whatsoever, since the latter is completely indefinite.

The compounds found by the Philae lander on the comet in question may be “organic” compounds, but organic in this sense simply means they contain carbon. Methane, one of the most common gases found on other bodies within the solar system, is itself considered an organic compound in this sense, although it’s one of the simplest compounds to be considered organic. The distinction itself between organic and inorganic carbon compounds, while it may be useful in organizing research in chemistry, is completely arbitrary.

The blurring of such lines leads to the equivocations and unwarranted assumptions found in the thinking of researchers involved in this kind of work, since in common terms organic means “arising from life”. The analogical trope underlying the thought process is quite obviously sperm and egg, a simple visual metaphor, yet Mr. Wright and Mr. Goesmann appear to be completely oblivious to this underlying analogy guiding their thinking in a heuristic sense.

At the same time, they miss what should be an obvious inference from the active nature of the comet’s surface and what can be determined of its interior, which is that a comet is in some sense systemic, dynamic. This shouldn’t be a surprise, since materially comets are not largely distinct from asteroids, yet observationally they are very distinctive. The distinctive features and behaviour of comets can only be accounted for structurally and systemically in some way. This not only invalidates current material theories of comets, but makes the current theory of comet formation much more problematic. Simple accretion would, by itself, only generate a larger clump of ice, rock and dust. It cannot account for whatever systemic processes are producing the gas spewing from the sinkholes. Not unusually, theories prove to be massive oversimplifications.

Instead, though, what we get from these researchers is a vast, unwarranted jump from a comet being a “dirty snowball” (or, for other theorists, a “snowy dirtball”), to a comet being some sort of interstellar sperm fertilizing unsuspecting planets without so much as a by-your-leave.

Of course, while cosmology posits rocky planets such as this one, apparently now prime for fertilization, as having initially been in a liquid state (why they only go as far as liquid rather than gaseous is one of those mysteries of cosmological presumption), it then proceeds to treat such planets, once they have cooled and formed a crust, as if they were simple rocks in space with sufficient gravity to attract, post facto, enough gases to form an atmosphere. That the initial atmosphere is largely a function of the systemic nature of the planet, which remains far more active in every aspect than inactive, appears to escape notice. That the actual atmosphere on this planet is largely a function of far more complex processes of life is even further beyond cosmology’s conceptual reach. In the case of planetary formation the theory is quite literally half-baked, while the theory of comet formation is insufficiently defrosted.

Thoughts on the Capabilities and Limitations of 3D Brain Imaging Technology and Neuroscience in General, in Terms of Understanding the Mind

3D Brain Imaging Technology, Helen Thompson, The Guardian, July 30 2015

In reference to this type of technological uncovering of aspects of the neurological system and what it can achieve in terms of assisting in understanding mental phenomena such as mental illness (something specifically posited in the article as a potential for the technology), we need to first understand what it does and does not reveal about the neurological system, and what it by definition cannot reveal about the relation between that system and any mental phenomena.

There’s a huge number of undemonstrated assumptions, including some with demonstrated problems, underlying this kind of research. Many of them are probably necessary in a heuristic sense, in order to provide any sort of starting point, while others are inevitably going to lead to paradoxical issues. The difficulty here, as in many areas, is that the distinction between a heuristic (something used as a guide but not fully assumed as actually true) and a presumption that a guiding notion is actually true is difficult to maintain in practice. Predictably, the researchers will find that assumed structures have so little commonality between individuals that the notion of a “normal” brain becomes too problematic to be used as a baseline for analysing an “abnormal” brain. Just as predictably, specific messages assumed to pass between areas of the brain will be absent. How neuroscience copes with these challenges to its basic assumptions (so far, other research that has problematized these assumptions is largely ignored) will determine whether what is really only a proto-science at the moment falls into the common trap of the pseudo-scientific or matures appropriately into a proper science.

This also demonstrates the basic relation between technology and science. Technology is never simply applied science; rather, science comes along after technology reveals something and attempts to account for what has been revealed. This basis in accounting-for is the reason natural science is always associated with mathematics in the most general sense. Technology can reveal certain things about the brain, but it’s impotent in terms of understanding even the most basic things about the mind from that perspective, as is the science that accounts for what it does reveal.

One of the basic things that needs to be understood in order to form any relation between the neural system (or the body as a whole) and mental phenomena is the means by which they intra-act as a non-dual duality. Obviously they are not truly distinct entities, because they can never encounter each other as such. Yet as “things”, as unities grasped in a given set of data, they have no data points in common.

The mind treats the body as largely imaginary. In order to walk to the kitchen, say, I imagine myself doing it. But I don’t simply imagine it, or I’d still be sitting imagining. It’s difficult to think, though, of the specific difference between the mind triggering action, which requires that it somehow alter the state of the neurological system rather than merely reflecting it, and simply imagining that action.

As Plato determined, the ‘logistikon’ or faculty of reason is predicated on the ‘pharmakon’, which, despite both Foucault and Derrida, cannot be considered madness but, as the word implies, habit. To the neurological system, though, the mind would have to appear as mysterious (if it could experience mystery or lack of it), since it makes demands on something it doesn’t encounter, demands that may or may not be met. Further, it receives demands from the same something it doesn’t encounter, which it must at least attempt to execute.

In order to make any relation between consciousness, whether normal or abnormal (whatever that properly means), and the neurological system, we can’t rely on technology and what it’s capable of uncovering. At best technology may show that certain assumptions are incorrect and that those assumptions are part of what makes the interaction between mind and body appear paradoxical.

What Does Currency Represent?

We initially think of currency as representative, as a signifier of something. But when we try to think of this something, we can’t find anything; it appears to be a signifier of nothing, a representation of nothing.

More precisely, it represents nothing actual. Currency attempts to represent potentia itself. If I have a hundred dollars, I have the potential of issuing a demand for that amount’s worth of whatever I want. In this sense, currency attempts to represent what can only be actual in time future.

There’s a difficulty, though, as accumulation, or concentration, of currency occurs. I can easily issue a demand for a hundred dollars’ worth of anything available either locally or virtually. If I have a hundred trillion dollars, though, I can’t. Only a small percentage of the total amount of currency can be actualized at any given time, because with potential, any actualization destroys other potentials. Issuing more currency doesn’t inherently devalue it; what devalues it is any overall attempt to actualize more than the percentage that can be actualized at that time. Accumulation has a basic problem in that nothing can be accumulated in time future. The closest to accumulation that can occur is in time past. And even that can be accumulated only as personal and societal memory.

The metaphysics of presence is necessary to maintain the illusion of accumulation itself in terms of potential. If things do not simply endure from the past, as that metaphysics posits, but recur from the future as self-same, then we have inverted the provenance of what presences from the future to the past. Simultaneously we have inverted the possibility of accumulation of what has presenced from the past to the future, as substantialized potentia, as currency.

Why ‘Computational Biology’ is a Scientific Blunder

If computational biology were to be a feasible approach to anything but the simplest of problems, a first step would involve determining what approaches are ontologically appropriate and internally consistent, i.e. commutative.

The following is a brief list of approaches used in computational mathematics specifically.

Iterative method

Rate of convergence — the speed at which a convergent sequence approaches its limit

Order of accuracy — rate at which numerical solution of differential equation converges to exact solution

Series acceleration — methods to accelerate the speed of convergence of a series

Aitken’s delta-squared process — most useful for linearly converging sequences (a sketch appears at the end of this list)

Minimum polynomial extrapolation — for vector sequences

Richardson extrapolation

Shanks transformation — similar to Aitken’s delta-squared process, but applied to the partial sums

Van Wijngaarden transformation — for accelerating the convergence of an alternating series

Abramowitz and Stegun — book containing formulas and tables of many special functions

Digital Library of Mathematical Functions — successor of book by Abramowitz and Stegun

Curse of dimensionality

Local convergence and global convergence — whether you need a good initial guess to get convergence

Superconvergence

Discretization

Difference quotient
Computational complexity of mathematical operations

Smoothed analysis — measuring the expected performance of algorithms under slight random perturbations of worst-case inputs

Symbolic-numeric computation — combination of symbolic and numeric methods
Lattice QCD and Numerical Analysis
Collocation method — discretizes a continuous equation by requiring it only to hold at certain points

Level set method

Level set (data structures) — data structures for representing level sets

Sinc numerical methods — methods based on the sinc function, sinc(x) = sin(x) / x

ABS methods
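
To make at least one of the items above concrete, the following is a minimal Python sketch (the function name and the cos(x) fixed point are chosen purely for illustration) of Aitken’s delta-squared process applied to a linearly converging sequence; it is an illustration only, not a reference implementation.

```python
import math

def aitken(seq):
    """Aitken's delta-squared acceleration of a linearly converging sequence."""
    out = []
    for a0, a1, a2 in zip(seq, seq[1:], seq[2:]):
        denom = a2 - 2.0 * a1 + a0
        out.append(a2 - (a2 - a1) ** 2 / denom if denom != 0.0 else a2)
    return out

# Build a linearly converging sequence from the fixed-point iteration x = cos(x).
x, xs = 1.0, []
for _ in range(10):
    xs.append(x)
    x = math.cos(x)

print(xs[-1])           # plain iterate, still some distance from the limit ~0.739085
print(aitken(xs)[-1])   # accelerated estimate, much closer to the limit
```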

Error

Error analysis (mathematics)

Approximation

Approximation error

Condition number

Discretization error

Floating point number

Guard digit — extra precision introduced during a computation to reduce round-off error

Truncation — rounding a floating-point number by discarding all digits after a certain digit

Round-off error

Numeric precision in Microsoft Excel

Arbitrary-precision arithmetic

Interval arithmetic — represent every number by two floating-point numbers guaranteed to have the unknown number between them (a sketch appears at the end of this section)

Interval contractor — maps interval to subinterval which still contains the unknown exact answer

Interval propagation — contracting interval domains without removing any value consistent with the constraints
Loss of significance

Numerical error

Numerical stability

Error propagation:

Propagation of uncertainty

Significance arithmetic

Residual (numerical analysis)

Relative change and difference — the relative difference between x and y is |x − y| / max(|x|, |y|)

Significant figures

False precision — giving more significant figures than appropriate

Truncation error — error committed by doing only a finite number of steps
Affine arithmetic
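
As a small illustration of the error-control ideas listed above, here is a minimal Python sketch of interval arithmetic; the class name and the sample intervals are purely illustrative, and a serious implementation would also round the computed bounds outward.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    """A quantity carried as a pair of bounds that bracket the exact value."""
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

# (1 +/- 0.01) * (2 +/- 0.02) + (0.5 +/- 0.001)
x = Interval(0.99, 1.01)
y = Interval(1.98, 2.02)
z = Interval(0.499, 0.501)
print(x * y + z)   # an interval containing the exact result
```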

Elementary and special functions

Summation:

Kahan summation algorithm (a sketch appears at the end of this section)

Pairwise summation — slightly worse than Kahan summation but cheaper

Binary splitting

Multiplication:

Multiplication algorithm — general discussion, simple methods

Karatsuba algorithm — the first algorithm which is faster than straightforward multiplication

Toom–Cook multiplication — generalization of Karatsuba multiplication

Schönhage–Strassen algorithm — based on Fourier transform, asymptotically very fast

Fürer’s algorithm — asymptotically slightly faster than Schönhage–Strassen

Division algorithm — for computing quotient and/or remainder of two numbers

Long division

Restoring division

Non-restoring division

SRT division

Newton–Raphson division: uses Newton’s method to find the reciprocal of D, and multiplies that reciprocal by N to find the final quotient Q.

Goldschmidt division

Exponentiation:

Exponentiation by squaring

Addition-chain exponentiation

Multiplicative inverse algorithms: for computing a number’s multiplicative inverse (reciprocal).

Newton’s method
Polynomials:
Horner’s method

Estrin’s scheme — modification of the Horner scheme with more possibilities for parallelization

Clenshaw algorithm

De Casteljau’s algorithm

Square roots and other roots:

Integer square root

nth root algorithm

Shifting nth root algorithm — similar to long division

hypot — the function (x² + y²)^(1/2)

Alpha max plus beta min algorithm — approximates hypot(x,y)

Fast inverse square root — calculates 1 / √x using details of the IEEE floating-point system

Elementary functions (exponential, logarithm, trigonometric functions):

Trigonometric tables — different methods for generating them

CORDIC — shift-and-add algorithm using a table of arc tangents

BKM algorithm — shift-and-add algorithm using a table of logarithms and complex numbers

Gamma function:

Lanczos approximation

Spouge’s approximation — modification of Stirling’s approximation; easier to apply than Lanczos

AGM method — computes arithmetic–geometric mean; related methods compute special functions

FEE method (Fast E-function Evaluation) — fast summation of series like the power series for e^x

Gal’s accurate tables — table of function values with unequal spacing to reduce round-off error

Spigot algorithm — algorithms that can compute individual digits of a real number

Approximations of π:

Liu Hui’s π algorithm — first algorithm that can compute π to arbitrary precision

Leibniz formula for π — alternating series with very slow convergence

Wallis product — infinite product converging slowly to π/2

Viète’s formula — more complicated infinite product which converges faster

Gauss–Legendre algorithm — iteration which converges quadratically to π, based on arithmetic–geometric mean

Borwein’s algorithm — iteration which converges quartically to 1/π, and other algorithms

Chudnovsky algorithm — fast algorithm that calculates a hypergeometric series

Bailey–Borwein–Plouffe formula — can be used to compute individual hexadecimal digits of π

Bellard’s formula — faster version of Bailey–Borwein–Plouffe formula
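
To make one of the entries above concrete, here is a minimal Python sketch of the Kahan summation algorithm; the data are contrived solely to show the effect of the compensation term.

```python
def kahan_sum(values):
    total, comp = 0.0, 0.0
    for v in values:
        y = v - comp             # re-inject the error lost on the previous step
        t = total + y            # low-order bits of y may be lost here
        comp = (t - total) - y   # algebraically zero; numerically the lost part
        total = t
    return total

data = [1.0] + [1e-16] * 1_000_000
print(sum(data))        # 1.0: each tiny term is rounded away by naive summation
print(kahan_sum(data))  # ~1.0000000001, the correct value
```

(Python’s math.fsum applies a more thorough version of the same idea.)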

Numerical linear algebra

Numerical linear algebra — study of numerical algorithms for linear algebra problems
Types of matrices appearing in numerical analysis:
Sparse matrix

Band matrix

Bidiagonal matrix

Tridiagonal matrix

Pentadiagonal matrix

Skyline matrix

Circulant matrix

Triangular matrix

Diagonally dominant matrix

Block matrix — matrix composed of smaller matrices

Stieltjes matrix — symmetric positive definite with non-positive off-diagonal entries

Hilbert matrix — example of a matrix which is extremely ill-conditioned (and thus difficult to handle)

Wilkinson matrix — example of a symmetric tridiagonal matrix with pairs of nearly, but not exactly, equal eigenvalues

Convergent matrix – square matrix whose successive powers approach the zero matrix

Algorithms for matrix multiplication:

Strassen algorithm

Coppersmith–Winograd algorithm

Cannon’s algorithm — a distributed algorithm, especially suitable for processors laid out in a 2d grid

Freivalds’ algorithm — a randomized algorithm for checking the result of a multiplication

Matrix decompositions:

LU decomposition — lower triangular times upper triangular

QR decomposition — orthogonal matrix times triangular matrix

RRQR factorization — rank-revealing QR factorization, can be used to compute rank of a matrix

Polar decomposition — unitary matrix times positive-semidefinite Hermitian matrix

Decompositions by similarity:

Eigendecomposition — decomposition in terms of eigenvectors and eigenvalues

Jordan normal form — bidiagonal matrix of a certain form; generalizes the eigendecomposition

Weyr canonical form — permutation of Jordan normal form

Jordan–Chevalley decomposition — sum of commuting nilpotent matrix and diagonalizable matrix

Schur decomposition — similarity transform bringing the matrix to a triangular matrix

Singular value decomposition — unitary matrix times diagonal matrix times unitary matrix

Matrix splitting – expressing a given matrix as a sum or difference of matrices
Gaussian elimination

Row echelon form — matrix in which all entries below a nonzero entry are zero

Bareiss algorithm — variant which ensures that all entries remain integers if the initial matrix has integer entries

Tridiagonal matrix algorithm — simplified form of Gaussian elimination for tridiagonal matrices

LU decomposition — write a matrix as a product of an upper- and a lower-triangular matrix

Crout matrix decomposition

LU reduction — a special parallelized version of a LU decomposition algorithm

Block LU decomposition

Cholesky decomposition — for solving a system with a positive definite matrix

Minimum degree algorithm

Symbolic Cholesky decomposition

Iterative refinement — procedure to turn an inaccurate solution into a more accurate one

Direct methods for sparse matrices:

Frontal solver — used in finite element methods

Nested dissection — for symmetric matrices, based on graph partitioning

Levinson recursion — for Toeplitz matrices

SPIKE algorithm — hybrid parallel solver for narrow-banded matrices

Cyclic reduction — eliminate even or odd rows or columns, repeat
Iterative methods:
Jacobi method

Gauss–Seidel method

Successive over-relaxation (SOR) — a technique to accelerate the Gauss–Seidel method

Symmetric successive overrelaxation (SSOR) — variant of SOR for symmetric matrices

Backfitting algorithm — iterative procedure used to fit a generalized additive model, often equivalent to Gauss–Seidel

Modified Richardson iteration

Conjugate gradient method (CG) — assumes that the matrix is positive definite

Derivation of the conjugate gradient method

Nonlinear conjugate gradient method — generalization for nonlinear optimization problems

Biconjugate gradient method (BiCG)

Biconjugate gradient stabilized method (BiCGSTAB) — variant of BiCG with better convergence

Conjugate residual method — similar to CG but only assumes that the matrix is symmetric

Generalized minimal residual method (GMRES) — based on the Arnoldi iteration

Chebyshev iteration — avoids inner products but needs bounds on the spectrum

Stone’s method (SIP – Strongly Implicit Procedure) — uses an incomplete LU decomposition

Kaczmarz method

Preconditioner

Incomplete Cholesky factorization — sparse approximation to the Cholesky factorization

Incomplete LU factorization — sparse approximation to the LU factorization

Uzawa iteration — for saddle point problems

Underdetermined and overdetermined systems (systems that have no or more than one solution):

Numerical computation of null space — find all solutions of an underdetermined system

Moore–Penrose pseudoinverse — for finding solution with smallest 2-norm (for underdetermined systems) or smallest residual

Sparse approximation — for finding the sparsest solution (i.e., the solution with as many zeros as possible)
Eigenvalue algorithm — a numerical algorithm for locating the eigenvalues of a matrix

Power iteration (a sketch appears at the end of this section)

Inverse iteration

Rayleigh quotient iteration

Arnoldi iteration — based on Krylov subspaces

Lanczos algorithm — Arnoldi, specialized for symmetric (Hermitian) matrices

Block Lanczos algorithm — for when matrix is over a finite field

QR algorithm

Jacobi eigenvalue algorithm — select a small submatrix which can be diagonalized exactly, and repeat

Jacobi rotation — the building block, almost a Givens rotation

Jacobi method for complex Hermitian matrices

Divide-and-conquer eigenvalue algorithm

Folded spectrum method

LOBPCG — Locally Optimal Block Preconditioned Conjugate Gradient Method

Eigenvalue perturbation — stability of eigenvalues under perturbations of the matrix
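
As a sketch of the simplest of the eigenvalue algorithms above, here is power iteration in plain Python; the matrix and the normalization by the largest component are illustrative choices, and the method assumes a single dominant eigenvalue exists.

```python
def power_iteration(A, steps=100):
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(steps):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]  # w = A v
        lam = max(w, key=abs)        # component of largest magnitude estimates the eigenvalue
        v = [x / lam for x in w]     # renormalize to avoid overflow
    return lam, v

A = [[2.0, 1.0],
     [1.0, 3.0]]
print(power_iteration(A))            # dominant eigenvalue ~3.618 and its eigenvector
```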

Other concepts and algorithms

Orthogonalization algorithms:

Gram–Schmidt process (a sketch appears at the end of this section)

Householder transformation

Householder operator — analogue of Householder transformation for general inner product spaces

Givens rotation

Krylov subspace

Block matrix pseudoinverse

Bidiagonalization

Cuthill–McKee algorithm — permutes rows/columns in sparse matrix to yield a narrow band matrix

In-place matrix transposition — computing the transpose of a matrix without using much additional storage

Pivot element — entry in a matrix on which the algorithm concentrates

Matrix-free methods — methods that only access the matrix by evaluating matrix-vector products
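
For illustration, here is a minimal Python sketch of the classical Gram–Schmidt process on plain lists; in floating point the modified variant is usually preferred for stability, but the idea is the same.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        w = list(v)
        for q in basis:
            c = dot(w, q)                           # projection coefficient onto q
            w = [wi - c * qi for wi, qi in zip(w, q)]
        norm = dot(w, w) ** 0.5
        if norm > 1e-12:                            # skip (numerically) dependent vectors
            basis.append([wi / norm for wi in w])
    return basis

Q = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
print(Q)                                            # an orthonormal set
print(dot(Q[0], Q[1]), dot(Q[0], Q[2]))             # ~0.0, ~0.0
```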

Interpolation and approximation

Interpolation — construct a function going through some given data points

Nearest-neighbor interpolation — takes the value of the nearest neighbor

Polynomial interpolation

Polynomial interpolation — interpolation by polynomials

Linear interpolation

Runge’s phenomenon

Vandermonde matrix

Chebyshev polynomials

Chebyshev nodes

Lebesgue constant (interpolation)

Different forms for the interpolant:

Newton polynomial

Divided differences

Neville’s algorithm — for evaluating the interpolant; based on the Newton form (a sketch appears at the end of this section)

Lagrange polynomial

Bernstein polynomial — especially useful for approximation

Brahmagupta’s interpolation formula — seventh-century formula for quadratic interpolation
Bilinear interpolation

Trilinear interpolation

Bicubic interpolation

Tricubic interpolation

Padua points — set of points in R² with unique polynomial interpolant and minimal growth of Lebesgue constant

Hermite interpolation

Birkhoff interpolation

Abel–Goncharov interpolation
Spline interpolation — interpolation by piecewise polynomials

Spline (mathematics) — the piecewise polynomials used as interpolants

Perfect spline — polynomial spline of degree m whose mth derivative is ±1

Cubic Hermite spline

Centripetal Catmull–Rom spline — special case of cubic Hermite splines without self-intersections or cusps

Monotone cubic interpolation

Hermite spline

Bézier curve

De Casteljau’s algorithm

Composite Bézier curve

Generalizations to more dimensions:

Bézier triangle — maps a triangle to R³

Bézier surface — maps a square to R³

B-spline

Box spline — multivariate generalization of B-splines

Truncated power function

De Boor’s algorithm — generalizes De Casteljau’s algorithm

Non-uniform rational B-spline (NURBS)

T-spline — can be thought of as a NURBS surface for which a row of control points is allowed to terminate

Kochanek–Bartels spline

Coons patch — type of manifold parametrization used to smoothly join other surfaces together

M-spline — a non-negative spline

I-spline — a monotone spline, defined in terms of M-splines

Smoothing spline — a spline fitted smoothly to noisy data

Blossom (functional) — a unique, affine, symmetric map associated to a polynomial or spline

See also: List of numerical computational geometry topics
Trigonometric interpolation — interpolation by trigonometric polynomials

Discrete Fourier transform — can be viewed as trigonometric interpolation at equidistant points

Relations between Fourier transforms and Fourier series

Fast Fourier transform (FFT) — a fast method for computing the discrete Fourier transform

Bluestein’s FFT algorithm

Bruun’s FFT algorithm

Cooley–Tukey FFT algorithm

Split-radix FFT algorithm — variant of Cooley–Tukey that uses a blend of radices 2 and 4

Goertzel algorithm

Prime-factor FFT algorithm

Rader’s FFT algorithm

Bit-reversal permutation — particular permutation of vectors with 2^m entries used in many FFTs.

Butterfly diagram

Twiddle factor — the trigonometric constant coefficients that are multiplied by the data

Cyclotomic fast Fourier transform — for FFT over finite fields

Methods for computing discrete convolutions with finite impulse response filters using the FFT:

Overlap–add method

Overlap–save method

Sigma approximation

Dirichlet kernel — convolving any function with the Dirichlet kernel yields its trigonometric interpolant

Gibbs phenomenon
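
To give one concrete example from the interpolation entries above, here is a minimal Python sketch of Neville’s algorithm, which evaluates the interpolating polynomial at a point without constructing its coefficients; the sample data are illustrative only.

```python
def neville(xs, ys, x):
    """Evaluate at x the unique polynomial through the points (xs[i], ys[i])."""
    p = list(ys)                  # p[i] starts as the degree-0 interpolant y_i
    n = len(xs)
    for level in range(1, n):     # combine interpolants of increasing degree
        for i in range(n - level):
            p[i] = ((x - xs[i + level]) * p[i] + (xs[i] - x) * p[i + 1]) / (xs[i] - xs[i + level])
    return p[0]

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 2.0, 5.0, 10.0]        # samples of f(x) = x^2 + 1
print(neville(xs, ys, 1.5))       # 3.25, exact here since the interpolant reproduces f
```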

Other interpolants

Simple rational approximation

Polynomial and rational function modeling — comparison of polynomial and rational interpolation

Wavelet

Continuous wavelet

Transfer matrix

See also: List of functional analysis topics, List of wavelet-related transforms

Inverse distance weighting

Radial basis function (RBF) — a function of the form ƒ(x) = φ(|x−x0|)

Polyharmonic spline — a commonly used radial basis function

Thin plate spline — a specific polyharmonic spline: r² log r

Hierarchical RBF

Subdivision surface — constructed by recursively subdividing a piecewise linear interpolant

Catmull–Clark subdivision surface

Doo–Sabin subdivision surface

Loop subdivision surface

Slerp (spherical linear interpolation) — interpolation between two points on a sphere

Generalized quaternion interpolation — generalizes slerp for interpolation between more than two quaternions

Irrational base discrete weighted transform

Nevanlinna–Pick interpolation — interpolation by analytic functions in the unit disc subject to a bound

Pick matrix — the Nevanlinna–Pick interpolation has a solution if this matrix is positive semi-definite

Multivariate interpolation — the function being interpolated depends on more than one variable

Barnes interpolation — method for two-dimensional functions using Gaussians common in meteorology

Coons surface — combination of linear interpolation and bilinear interpolation

Lanczos resampling — based on convolution with a sinc function

Natural neighbor interpolation

Nearest neighbor value interpolation

PDE surface

Transfinite interpolation — constructs function on planar domain given its values on the boundary

Trend surface analysis — based on low-order polynomials of spatial coordinates; uses scattered observations
Approximation theory:
Orders of approximation

Lebesgue’s lemma

Curve fitting

Vector field reconstruction

Modulus of continuity — measures smoothness of a function

Least squares (function approximation) — minimizes the error in the L2-norm (a sketch appears at the end of this section)

Minimax approximation algorithm — minimizes the maximum error over an interval (the L∞-norm)

Equioscillation theorem — characterizes the best approximation in the L∞-norm

Unisolvent point set — function from given function space is determined uniquely by values on such a set of points

Stone–Weierstrass theorem — continuous functions can be approximated uniformly by polynomials, or certain other function spaces
Approximation by polynomials:
Linear approximation

Bernstein polynomial — basis of polynomials useful for approximating a function

Bernstein’s constant — error when approximating |x| by a polynomial

Remez algorithm — for constructing the best polynomial approximation in the L∞-norm

Bernstein’s inequality (mathematical analysis) — bound on maximum of derivative of polynomial in unit disk

Mergelyan’s theorem — generalization of Stone–Weierstrass theorem for polynomials

Müntz–Szász theorem — variant of Stone–Weierstrass theorem for polynomials if some coefficients have to be zero

Bramble–Hilbert lemma — upper bound on Lp error of polynomial approximation in multiple dimensions

Discrete Chebyshev polynomials — polynomials orthogonal with respect to a discrete measure

Favard’s theorem — polynomials satisfying suitable 3-term recurrence relations are orthogonal polynomials

Approximation by Fourier series / trigonometric polynomials:

Jackson’s inequality — upper bound for best approximation by a trigonometric polynomial

Bernstein’s theorem (approximation theory) — a converse to Jackson’s inequality

Fejér’s theorem — Cesàro means of partial sums of Fourier series converge uniformly for continuous periodic functions

Erdős–Turán inequality — bounds distance between probability and Lebesgue measure in terms of Fourier coefficients

Different approximations:

Moving least squares

Padé approximant

Padé table — table of Padé approximants

Hartogs–Rosenthal theorem — continuous functions can be approximated uniformly by rational functions on a set of Lebesgue measure zero

Szász–Mirakyan operator — approximation by e^(−nx) x^k on a semi-infinite interval

Szász–Mirakjan–Kantorovich operator

Baskakov operator — generalize Bernstein polynomials, Szász–Mirakyan operators, and Lupas operators

Favard operator — approximation by sums of Gaussians

Surrogate model — application: replacing a function that is hard to evaluate by a simpler function

Constructive function theory — field that studies connection between degree of approximation and smoothness

Universal differential equation — differential–algebraic equation whose solutions can approximate any continuous function

Fekete problem — find N points on a sphere that minimize some kind of energy

Carleman’s condition — condition guaranteeing that a measure is uniquely determined by its moments

Krein’s condition — condition that exponential sums are dense in weighted L2 space

Lethargy theorem — about distance of points in a metric space from members of a sequence of subspaces

Wirtinger’s representation and projection theorem
Extrapolation:
Linear predictive analysis — linear extrapolation

Unisolvent functions — functions for which the interpolation problem has a unique solution

Regression analysis

Isotonic regression

Curve-fitting compaction

Interpolation (computer graphics)
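
As a concrete instance of least-squares approximation from the entries above, here is a minimal Python sketch fitting a straight line by the closed-form normal equations; the data are made up for illustration.

```python
def fit_line(xs, ys):
    """Least-squares fit of y ~ m*x + c, minimizing the sum of squared residuals."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c = (sy - m * sx) / n
    return m, c

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 7.1, 8.8]    # roughly y = 2x + 1 with noise
print(fit_line(xs, ys))           # slope ~2, intercept ~1
```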

Finding roots of nonlinear equations

See Numerical linear algebra above for linear equations

Root-finding algorithm — algorithms for solving the equation f(x) = 0
Bisection method — simple and robust; linear convergence

Lehmer–Schur algorithm — variant for complex functions
Newton’s method — based on linear approximation around the current iterate; quadratic convergence (a sketch appears at the end of this section)

Kantorovich theorem — gives a region around solution such that Newton’s method converges

Newton fractal — indicates which initial condition converges to which root under Newton iteration

Quasi-Newton method — uses an approximation of the Jacobian:

Broyden’s method — uses a rank-one update for the Jacobian

Symmetric rank-one — a symmetric (but not necessarily positive definite) rank-one update of the Jacobian

Davidon–Fletcher–Powell formula — update of the Jacobian in which the matrix remains positive definite

Broyden–Fletcher–Goldfarb–Shanno algorithm — rank-two update of the Jacobian in which the matrix remains positive definite

Limited-memory BFGS method — truncated, matrix-free variant of BFGS method suitable for large problems

Steffensen’s method — uses divided differences instead of the derivative

Secant method — based on linear interpolation at last two iterates

False position method — secant method with ideas from the bisection method

Muller’s method — based on quadratic interpolation at last three iterates

Sidi’s generalized secant method — higher-order variants of secant method

Inverse quadratic interpolation — similar to Muller’s method, but interpolates the inverse

Brent’s method — combines bisection method, secant method and inverse quadratic interpolation

Ridders’ method — fits a linear function times an exponential to last two iterates and their midpoint

Halley’s method — uses f, f’ and f”; achieves cubic convergence

Householder’s method — uses first d derivatives to achieve order d + 1; generalizes Newton’s and Halley’s method
Methods for polynomials:
Aberth method

Bairstow’s method

Durand–Kerner method

Graeffe’s method

Jenkins–Traub algorithm — fast, reliable, and widely used

Laguerre’s method

Splitting circle method

Analysis:

Wilkinson’s polynomial

Numerical continuation — tracking a root as one parameter in the equation changes

Piecewise linear continuation
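
To ground the entries above, here is a minimal Python sketch of Newton’s method for a single nonlinear equation; the tolerance, the starting point and the example function are illustrative only.

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Solve f(x) = 0 by repeatedly stepping to the root of the local linearization."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / fprime(x)          # x_{k+1} = x_k - f(x_k) / f'(x_k)
    return x

# Example: the positive root of x^2 - 2, i.e. sqrt(2).
print(newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0))   # 1.41421356...
```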

Optimization

Mathematical optimization — algorithm for finding maxima or minima of a given function

Basic concepts

Active set

Candidate solution

Constraint (mathematics)

Constrained optimization — studies optimization problems with constraints

Binary constraint — a constraint that involves exactly two variables

Corner solution

Feasible region — contains all solutions that satisfy the constraints but may not be optimal

Global optimum and Local optimum

Maxima and minima

Slack variable

Continuous optimization

Discrete optimization
Algorithms for linear programming:
Simplex algorithm

Bland’s rule — rule to avoid cycling in the simplex method

Klee–Minty cube — perturbed (hyper)cube; simplex method has exponential complexity on such a domain

Criss-cross algorithm — similar to the simplex algorithm

Big M method — variation of simplex algorithm for problems with both “less than” and “greater than” constraints

Interior point method

Ellipsoid method

Karmarkar’s algorithm

Mehrotra predictor–corrector method

Column generation

k-approximation of k-hitting set — algorithm for specific LP problems (to find a weighted hitting set)

Linear complementarity problem
Decompositions:
Benders’ decomposition

Dantzig–Wolfe decomposition

Theory of two-level planning

Variable splitting

Basic solution (linear programming) — solution at vertex of feasible region

Fourier–Motzkin elimination

Hilbert basis (linear programming) — set of integer vectors in a convex cone which generate all integer vectors in the cone

LP-type problem

Linear inequality

Vertex enumeration problem — list all vertices of the feasible set

Convex optimization

Convex optimization

Quadratic programming

Linear least squares (mathematics)

Total least squares

Frank–Wolfe algorithm

Sequential minimal optimization — breaks up large QP problems into a series of smallest possible QP problems

Bilinear program

Basis pursuit — minimize L1-norm of vector subject to linear constraints

Basis pursuit denoising (BPDN) — regularized version of basis pursuit

In-crowd algorithm — algorithm for solving basis pursuit denoising

Linear matrix inequality

Conic optimization

Semidefinite programming

Second-order cone programming

Sum-of-squares optimization

Quadratic programming (see above)

Bregman method — row-action method for strictly convex optimization problems

Proximal gradient method — use splitting of objective function in sum of possible non-differentiable pieces

Subgradient method — extension of steepest descent for problems with a non-differentiable objective function

Biconvex optimization — generalization where objective function and constraint set can be biconvex

Nonlinear programming

Nonlinear programming — the most general optimization problem in the usual framework

Special cases of nonlinear programming:

See Linear programming and Convex optimization above

Geometric programming — problems involving signomials or posynomials

Signomial — similar to polynomials, but exponents need not be integers

Posynomial — a signomial with positive coefficients

Quadratically constrained quadratic program

Linear-fractional programming — objective is ratio of linear functions, constraints are linear

Fractional programming — objective is ratio of nonlinear functions, constraints are linear

Nonlinear complementarity problem (NCP) — find x such that x ≥ 0, f(x) ≥ 0 and xᵀ f(x) = 0

Least squares — the objective function is a sum of squares

Non-linear least squares

Gauss–Newton algorithm

BHHH algorithm — variant of Gauss–Newton in econometrics

Generalized Gauss–Newton method — for constrained nonlinear least-squares problems

Levenberg–Marquardt algorithm

Iteratively reweighted least squares (IRLS) — solves a weighted least-squares problem at every iteration

Partial least squares — statistical techniques similar to principal components analysis

Non-linear iterative partial least squares (NIPALS)

Mathematical programming with equilibrium constraints — constraints include variational inequalities or complementarities

Univariate optimization:

Golden section search (a sketch appears at the end of this section)

Successive parabolic interpolation — based on quadratic interpolation through the last three iterates
Guess value — the initial guess for a solution with which an algorithm starts

Line search

Backtracking line search

Wolfe conditions

Gradient method — method that uses the gradient as the search direction

Gradient descent

Stochastic gradient descent

Landweber iteration — mainly used for ill-posed problems

Successive linear programming (SLP) — replace problem by a linear programming problem, solve that, and repeat

Sequential quadratic programming (SQP) — replace problem by a quadratic programming problem, solve that, and repeat

Newton’s method in optimization

See also under Newton algorithm in the section Finding roots of nonlinear equations

Nonlinear conjugate gradient method

Derivative-free methods

Coordinate descent — move in one of the coordinate directions

Adaptive coordinate descent — adapt coordinate directions to objective function

Random coordinate descent — randomized version

Nelder–Mead method

Pattern search (optimization)

Powell’s method — derivative-free method based on conjugate directions

Rosenbrock methods — derivative-free method, similar to Nelder–Mead but with guaranteed convergence

Augmented Lagrangian method — replaces constrained problems by unconstrained problems with a term added to the objective function

Ternary search

Tabu search

Guided Local Search — modification of search algorithms which builds up penalties during a search

Reactive search optimization (RSO) — the algorithm adapts its parameters automatically

MM algorithm — majorize-minimization, a wide framework of methods

Least absolute deviations

Expectation–maximization algorithm

Ordered subset expectation maximization

Adaptive projected subgradient method

Nearest neighbor search

Space mapping — uses “coarse” (ideal or low-fidelity) and “fine” (practical or high-fidelity) models
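
As one concrete example of the univariate methods above, here is a minimal Python sketch of golden-section search; it assumes the function is unimodal on the bracketing interval, and the test function is illustrative.

```python
def golden_section(f, a, b, tol=1e-8):
    """Shrink a bracket [a, b] around the minimum, reusing one interior evaluation per step."""
    invphi = (5 ** 0.5 - 1) / 2                 # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                             # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                                   # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2

# Example: minimum of (x - 2)^2 + 1 on [0, 5].
print(golden_section(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0))   # ~2.0
```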

Optimal control and infinite-dimensional optimization

Optimal control

Pontryagin’s minimum principle — infinite-dimensional version of Lagrange multipliers

Costate equations — equation for the “Lagrange multipliers” in Pontryagin’s minimum principle

Hamiltonian (control theory) — minimum principle says that this function should be minimized

Types of problems:

Linear-quadratic regulator — system dynamics is a linear differential equation, objective is quadratic

Linear-quadratic-Gaussian control (LQG) — system dynamics is a linear SDE with additive noise, objective is quadratic

Optimal projection equations — method for reducing dimension of LQG control problem

Algebraic Riccati equation — matrix equation occurring in many optimal control problems

Bang–bang control — control that switches abruptly between two states

Covector mapping principle

Differential dynamic programming — uses locally-quadratic models of the dynamics and cost functions

DNSS point — initial state for certain optimal control problems with multiple optimal solutions

Legendre–Clebsch condition — second-order condition for solution of optimal control problem

Pseudospectral optimal control

Bellman pseudospectral method — based on Bellman’s principle of optimality

Chebyshev pseudospectral method — uses Chebyshev polynomials (of the first kind)

Flat pseudospectral method — combines Ross–Fahroo pseudospectral method with differential flatness

Gauss pseudospectral method — uses collocation at the Legendre–Gauss points

Legendre pseudospectral method — uses Legendre polynomials

Pseudospectral knotting method — generalization of pseudospectral methods in optimal control

Ross–Fahroo pseudospectral method — class of pseudospectral method including Chebyshev, Legendre and knotting

Ross–Fahroo lemma — condition to make discretization and duality operations commute

Ross’ π lemma — there is a fundamental time constant within which a control solution must be computed for controllability and stability

Sethi model — optimal control problem modelling advertising

Infinite-dimensional optimization

Semi-infinite programming — infinite number of variables and finite number of constraints, or other way around

Shape optimization, Topology optimization — optimization over a set of regions

Topological derivative — derivative with respect to a change in the shape

Generalized semi-infinite programming — finite number of variables, infinite number of constraints
Approaches to deal with uncertainty:

Markov decision process

Partially observable Markov decision process

Probabilistic-based design optimization

Robust optimization

Wald’s maximin model

Scenario optimization — constraints are uncertain

Stochastic approximation

Stochastic optimization

Stochastic programming

Stochastic gradient descent
Random optimization algorithms:
Random search — choose a point randomly in ball around current iterate

Simulated annealing

Adaptive simulated annealing — variant in which the algorithm parameters are adjusted during the computation.

Great Deluge algorithm

Mean field annealing — deterministic variant of simulated annealing

Bayesian optimization — treats objective function as a random function and places a prior over it

Evolutionary algorithm

Differential evolution

Evolutionary programming

Genetic algorithm, Genetic programming

Genetic algorithms in economics

MCACEA (Multiple Coordinated Agents Coevolution Evolutionary Algorithm) — uses an evolutionary algorithm for every agent

Simultaneous perturbation stochastic approximation (SPSA)

Luus–Jaakola

Particle swarm optimization

Stochastic tunneling

Harmony search — mimics the improvisation process of musicians

Convex analysis — the study of convex functions, i.e. functions f such that f(tx + (1 − t)y) ≤ tf(x) + (1 − t)f(y) for t ∈ [0,1]

Pseudoconvex function — function f such that ∇f · (y − x) ≥ 0 implies f(y) ≥ f(x)

Quasiconvex function — function f such that f(tx + (1 − t)y) ≤ max(f(x), f(y)) for t ∈ [0,1]

Subderivative

Geodesic convexity — convexity for functions defined on a Riemannian manifold

Duality (optimization)

Weak duality — dual solution gives a bound on the primal solution

Strong duality — primal and dual solutions are equivalent

Shadow price

Dual cone and polar cone

Duality gap — difference between primal and dual solution

Fenchel’s duality theorem — relates minimization problems with maximization problems of convex conjugates

Perturbation function — any function which relates to primal and dual problems

Slater’s condition — sufficient condition for strong duality to hold in a convex optimization problem

Total dual integrality — concept of duality for integer linear programming

Wolfe duality — for when objective function and constraints are differentiable

Farkas’ lemma

Karush–Kuhn–Tucker conditions (KKT) — first-order necessary conditions for a solution to be optimal

Fritz John conditions — variant of KKT conditions

Lagrange multiplier

Lagrange multipliers on Banach spaces

Semi-continuity

Complementarity — study of problems with constraints of the form 〈u, v〉 = 0

Mixed complementarity problem

Mixed linear complementarity problem

Lemke’s algorithm — method for solving (mixed) linear complementarity problems

Danskin’s theorem — used in the analysis of minimax problems

Maximum theorem — the maximum and maximizer are continuous as function of parameters, under some conditions
Relaxation (approximation) — approximating a given problem by an easier problem by relaxing some constraints

Lagrangian relaxation

Linear programming relaxation — ignoring the integrality constraints in a linear programming problem

Self-concordant function

Reduced cost — cost for increasing a variable by a small amount

Hardness of approximation — computational complexity of getting an approximate solution
Geometric median — the point minimizing the sum of distances to a given set of points

Chebyshev center — the centre of the smallest ball containing a given set of points
Iterated conditional modes — maximizing joint probability of Markov random field

Response surface methodology — used in the design of experiments

Automatic label placement

Compressed sensing — reconstruct a signal from knowledge that it is sparse or compressible

Cutting stock problem

Demand optimization

Destination dispatch — an optimization technique for dispatching elevators

Energy minimization

Entropy maximization

Highly optimized tolerance

Hyperparameter optimization

Inventory control problem

Newsvendor model

Extended newsvendor model

Assemble-to-order system

Linear programming decoding

Linear search problem — find a point on a line by moving along the line

Low-rank approximation — find best approximation, constraint is that rank of some matrix is smaller than a given number

Meta-optimization — optimization of the parameters in an optimization method

Multidisciplinary design optimization

Optimal computing budget allocation — maximize the overall simulation efficiency for finding an optimal decision

Paper bag problem

Process optimization

Recursive economics — individuals make a series of two-period optimization decisions over time.

Stigler diet

Space allocation problem

Stress majorization

Trajectory optimization

Transportation theory

Wing-shape optimization

Miscellaneous

Combinatorial optimization

Dynamic programming (a sketch appears at the end of this section)

Bellman equation

Hamilton–Jacobi–Bellman equation — continuous-time analogue of Bellman equation

Backward induction — solving dynamic programming problems by reasoning backwards in time

Optimal stopping — choosing the optimal time to take a particular action

Odds algorithm

Robbins’ problem

Global optimization:

BRST algorithm

MCS algorithm

Multi-objective optimization — there are multiple conflicting objectives

Benson’s algorithm — for linear vector optimization problems

Bilevel optimization — studies problems in which one problem is embedded in another

Optimal substructure

Dykstra’s projection algorithm — finds a point in intersection of two convex sets

Algorithmic concepts:

Barrier function

Penalty method

Trust region

Test functions for optimization:

Rosenbrock function — two-dimensional function with a banana-shaped valley

Himmelblau’s function — two-dimensional with four local minima, defined by f(x, y) = (x^2+y-11)^2 + (x+y^2-7)^2

Rastrigin function — two-dimensional function with many local minima

Shekel function — multimodal and multidimensional

Mathematical Optimization Society
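
As a small illustration of dynamic programming from the entries above, here is a minimal Python sketch of the 0/1 knapsack recursion; the weights, values and capacity are made up, and each pass applies a Bellman-style update to a table of best attainable values.

```python
def knapsack(weights, values, capacity):
    """Best total value of a subset of items whose total weight fits the capacity."""
    best = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        for c in range(capacity, w - 1, -1):   # iterate backwards so each item is used at most once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack(weights=[2, 3, 4, 5], values=[3, 4, 5, 8], capacity=7))   # 11 (weight-2 and weight-5 items)
```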

Numerical quadrature (integration)

Numerical integration — the numerical evaluation of an integral

Rectangle method — first-order method, based on (piecewise) constant approximation

Trapezoidal rule — second-order method, based on (piecewise) linear approximation

Simpson’s rule — fourth-order method, based on (piecewise) quadratic approximation (a sketch appears at the end of this section)

Adaptive Simpson’s method

Boole’s rule — sixth-order method, based on the values at five equidistant points

Newton–Cotes formulas — generalizes the above methods

Romberg’s method — Richardson extrapolation applied to trapezium rule

Gaussian quadrature — highest possible degree with given number of points

Chebyshev–Gauss quadrature — extension of Gaussian quadrature for integrals with weight (1 − x²)^(±1/2) on [−1, 1]

Gauss–Hermite quadrature — extension of Gaussian quadrature for integrals with weight exp(−x²) on (−∞, ∞)

Gauss–Jacobi quadrature — extension of Gaussian quadrature for integrals with weight (1 − x)^α (1 + x)^β on [−1, 1]

Gauss–Laguerre quadrature — extension of Gaussian quadrature for integrals with weight exp(−x) on [0, ∞)

Gauss–Kronrod quadrature formula — nested rule based on Gaussian quadrature

Gauss–Kronrod rules

Tanh-sinh quadrature — variant of Gaussian quadrature which works well with singularities at the end points

Clenshaw–Curtis quadrature — based on expanding the integrand in terms of Chebyshev polynomials

Adaptive quadrature — adapting the subintervals in which the integration interval is divided depending on the integrand

Monte Carlo integration — takes random samples of the integrand
Quantized state systems method (QSS) — based on the idea of state quantization

Lebedev quadrature — uses a grid on a sphere with octahedral symmetry

Sparse grid

Coopmans approximation

Numerical differentiation — computing derivatives from function values

Numerical smoothing and differentiation

Adjoint state method — approximates gradient of a function in an optimization problem

Euler–Maclaurin formula
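
As a sketch of the quadrature rules above, here is composite Simpson’s rule in minimal Python; the panel count and the test integrands are illustrative.

```python
import math

def simpson(f, a, b, n=100):
    """Composite Simpson's rule on n (even) panels: piecewise-quadratic approximation."""
    if n % 2:
        n += 1                                  # Simpson's rule needs an even number of panels
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

print(simpson(math.sin, 0.0, math.pi))          # ~2.0, the exact value
print(simpson(lambda x: x ** 3, 0.0, 1.0))      # 0.25 exactly; Simpson is exact for cubics
```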

Numerical methods for ordinary differential equations — the numerical solution of ordinary differential equations (ODEs)

Euler method — the most basic method for solving an ODE (a sketch appears at the end of this section)

Explicit and implicit methods — implicit methods need to solve an equation at every step

Backward Euler method — implicit variant of the Euler method

Trapezoidal rule — second-order implicit method

Runge–Kutta methods — one of the two main classes of methods for initial-value problems

Midpoint method — a second-order method with two stages

Heun’s method — either a second-order method with two stages, or a third-order method with three stages

Bogacki–Shampine method — a third-order method with four stages (FSAL) and an embedded fourth-order method

Cash–Karp method — a fifth-order method with six stages and an embedded fourth-order method

Dormand–Prince method — a fifth-order method with seven stages (FSAL) and an embedded fourth-order method

Runge–Kutta–Fehlberg method — a fifth-order method with six stages and an embedded fourth-order method

Gauss–Legendre method — family of A-stable methods with optimal order based on Gaussian quadrature

Butcher group — algebraic formalism involving rooted trees for analysing Runge–Kutta methods

List of Runge–Kutta methods

Linear multistep method — the other main class of methods for initial-value problems

Backward differentiation formula — implicit methods of order 2 to 6; especially suitable for stiff equations

Numerov’s method — fourth-order method for equations of the form y” = f(t,y)

Predictor–corrector method — uses one method to approximate solution and another one to increase accuracy

General linear methods — a class of methods encapsulating linear multistep and Runge-Kutta methods

Bulirsch–Stoer algorithm — combines the midpoint method with Richardson extrapolation to attain arbitrary order

Exponential integrator — based on splitting ODE in a linear part, which is solved exactly, and a nonlinear part

Methods designed for the solution of ODEs from classical physics:

Newmark-beta method — based on the extended mean-value theorem

Verlet integration — a popular second-order method

Leapfrog integration — another name for Verlet integration

Beeman’s algorithm — a two-step method extending the Verlet method

Dynamic relaxation

Geometric integrator — a method that preserves some geometric structure of the equation

Symplectic integrator — a method for the solution of Hamilton’s equations that preserves the symplectic structure

Variational integrator — symplectic integrators derived using the underlying variational principle

Semi-implicit Euler method — variant of Euler method which is symplectic when applied to separable Hamiltonians

Energy drift — phenomenon that energy, which should be conserved, drifts away due to numerical errors

Other methods for initial value problems (IVPs):

Bi-directional delay line

Partial element equivalent circuit

Methods for solving two-point boundary value problems (BVPs):

Shooting method

Direct multiple shooting method — divides interval in several subintervals and applies the shooting method on each subinterval

Methods for solving differential-algebraic equations (DAEs), i.e., ODEs with constraints:

Constraint algorithm — for solving Newton’s equations with constraints

Pantelides algorithm — for reducing the index of a DAE

Methods for solving stochastic differential equations (SDEs):

Euler–Maruyama method — generalization of the Euler method for SDEs

Milstein method — a method with strong order one

Runge–Kutta method (SDE) — generalization of the family of Runge–Kutta methods for SDEs

Methods for solving integral equations:

Nyström method — replaces the integral with a quadrature rule
Analysis:

Truncation error (numerical integration) — local and global truncation errors, and their relationships

Lady Windermere’s Fan (mathematics) — telescopic identity relating local and global truncation errors

Stiff equation — roughly, an ODE for which unstable methods need a very short step size, but stable methods do not

L-stability — method is A-stable and stability function vanishes at infinity

Dynamic errors of numerical methods of ODE discretization — logarithm of stability function

Adaptive stepsize — automatically changing the step size when that seems advantageous
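
As a minimal illustration of the ODE methods above, here is the explicit Euler method in Python, applied to y′ = y so that the first-order convergence is visible as the step size is halved; the example is illustrative only.

```python
import math

def euler(f, t0, y0, h, steps):
    """Explicit Euler: advance with the slope at the current point, y_{k+1} = y_k + h f(t_k, y_k)."""
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)
        t += h
    return y

# y' = y, y(0) = 1, so y(1) = e; halving h roughly halves the error (first order).
for h in (0.1, 0.05, 0.025):
    approx = euler(lambda t, y: y, 0.0, 1.0, h, int(round(1.0 / h)))
    print(h, approx, abs(approx - math.e))
```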

Numerical partial differential equations — the numerical solution of partial differential equations (PDEs)
Finite difference method — based on approximating differential operators with difference operators

Finite difference — the discrete analogue of a differential operator

Finite difference coefficient — table of coefficients of finite-difference approximations to derivatives

Discrete Laplace operator — finite-difference approximation of the Laplace operator

Eigenvalues and eigenvectors of the second derivative — includes eigenvalues of discrete Laplace operator

Kronecker sum of discrete Laplacians — used for Laplace operator in multiple dimensions

Discrete Poisson equation — discrete analogue of the Poisson equation using the discrete Laplace operator

Stencil (numerical analysis) — the geometric arrangements of grid points affected by a basic step of the algorithm

Compact stencil — stencil which only uses a few grid points, usually only the immediate and diagonal neighbours

Higher-order compact finite difference scheme

Non-compact stencil — any stencil that is not compact

Five-point stencil — two-dimensional stencil consisting of a point and its four immediate neighbours on a rectangular grid

Finite difference methods for heat equation and related PDEs:

FTCS scheme (forward-time central-space) — first-order explicit (a sketch appears at the end of this section)

Crank–Nicolson method — second-order implicit

Finite difference methods for hyperbolic PDEs like the wave equation:

Lax–Friedrichs method — first-order explicit

Lax–Wendroff method — second-order explicit

MacCormack method — second-order explicit

Upwind scheme

Upwind differencing scheme for convection — first-order scheme for convection–diffusion problems

Lax–Wendroff theorem — conservative scheme for hyperbolic system of conservation laws converges to the weak solution

Alternating direction implicit method (ADI) — update using the flow in x-direction and then using flow in y-direction
Finite difference methods for option pricing

Finite-difference time-domain method — a finite-difference method for electrodynamics
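
As a sketch of the finite-difference schemes above, here is the FTCS scheme for the one-dimensional heat equation in minimal Python; the grid, time step and initial data are illustrative, and the step sizes are chosen to respect the usual stability bound.

```python
def ftcs_heat(u, dx, dt, steps):
    """FTCS (forward-time, central-space) for u_t = u_xx with fixed boundary values."""
    r = dt / dx ** 2                     # must satisfy r <= 0.5 for stability
    for _ in range(steps):
        u = [u[0]] + [
            u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
            for i in range(1, len(u) - 1)
        ] + [u[-1]]
    return u

# Initial spike in the middle of the rod; it spreads and decays as it diffuses.
u0 = [0.0] * 10 + [1.0] + [0.0] * 10
print(max(ftcs_heat(u0, dx=0.1, dt=0.004, steps=50)))   # peak is now well below 1.0
```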

Finite element methods

Finite element method — based on a discretization of the space of solutions

Finite element method in structural mechanics — a physical approach to finite element methods

Galerkin method — a finite element method in which the residual is orthogonal to the finite element space

Discontinuous Galerkin method — a Galerkin method in which the approximate solution is not continuous

Rayleigh–Ritz method — a finite element method based on variational principles

Spectral element method — high-order finite element methods

hp-FEM — variant in which both the size and the order of the elements are automatically adapted
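
A minimal sketch of a Galerkin finite element method with piecewise-linear ("hat") elements for -u'' = f on (0, 1) with u(0) = u(1) = 0 (Python). The right-hand side, the lumped load approximation and the mesh size are illustrative assumptions.

```python
import numpy as np

def fem_1d_poisson(f, n_elements):
    """Piecewise-linear Galerkin FEM for -u'' = f on (0, 1), u(0) = u(1) = 0."""
    h = 1.0 / n_elements
    x = np.linspace(0.0, 1.0, n_elements + 1)
    # Stiffness matrix for hat functions at the interior nodes (tridiagonal).
    n_int = n_elements - 1
    A = (np.diag(2.0 * np.ones(n_int)) +
         np.diag(-1.0 * np.ones(n_int - 1), 1) +
         np.diag(-1.0 * np.ones(n_int - 1), -1)) / h
    # Load vector via a simple lumped approximation of the integrals of f * phi_i.
    b = h * f(x[1:-1])
    u = np.zeros(n_elements + 1)
    u[1:-1] = np.linalg.solve(A, b)
    return x, u

# Illustrative test: f = pi^2 sin(pi x), whose exact solution is u = sin(pi x)
x, u = fem_1d_poisson(lambda s: np.pi**2 * np.sin(np.pi * s), n_elements=64)
print(np.max(np.abs(u - np.sin(np.pi * x))))  # small discretization error
```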

Examples of finite elements:

Bilinear quadrilateral element — also known as the Q4 element

Constant strain triangle element (CST) — also known as the T3 element

Barsoum elements

Direct stiffness method — a particular implementation of the finite element method, often used in structural analysis

Trefftz method

Finite element updating

Extended finite element method — puts functions tailored to the problem in the approximation space

Functionally graded elements — elements for describing functionally graded materials

Superelement — particular grouping of finite elements, employed as a single element

Interval finite element method — combination of finite elements with interval arithmetic

Discrete exterior calculus — discrete form of the exterior calculus of differential geometry

Modal analysis using FEM — solution of eigenvalue problems to find natural vibrations

Céa’s lemma — solution in the finite-element space is an almost best approximation in that space of the true solution

Patch test (finite elements) — simple test for the quality of a finite element

MAFELAP (MAthematics of Finite ELements and APplications) — international conference held at Brunel University

NAFEMS — not-for-profit organisation that sets and maintains standards in computer-aided engineering analysis

Multiphase topology optimisation — technique based on finite elements for determining optimal composition of a mixture

Applied element method — for simulation of cracks and structural collapse

Wood–Armer method — structural analysis method based on finite elements used to design reinforcement for concrete slabs

Isogeometric analysis — integrates finite elements into conventional NURBS-based CAD design tools

Stiffness matrix — finite-dimensional analogue of differential operator

Combination with meshfree methods:

Weakened weak form — form of a PDE that is weaker than the standard weak form

G space — functional space used in formulating the weakened weak form

Smoothed finite element method

List of finite element software packages

Other methods

Spectral method — based on the Fourier transformation

Pseudo-spectral method

Method of lines — reduces the PDE to a large system of ordinary differential equations

Boundary element method (BEM) — based on transforming the PDE to an integral equation on the boundary of the domain

Interval boundary element method — a version using interval arithmetics

Analytic element method — similar to the boundary element method, but the integral equation is evaluated analytically

Finite volume method — based on dividing the domain into many small domains; popular in computational fluid dynamics

Godunov’s scheme — first-order conservative scheme for fluid flow, based on piecewise constant approximation

MUSCL scheme — second-order variant of Godunov’s scheme

AUSM — advection upstream splitting method

Flux limiter — limits spatial derivatives (fluxes) in order to avoid spurious oscillations

Riemann solver — a solver for Riemann problems (a conservation law with piecewise constant data)

Properties of discretization schemes — finite volume methods can be conservative, bounded, etc.

Discrete element method — a method in which the elements can move freely relative to each other

Extended discrete element method — adds properties such as strain to each particle

Movable cellular automaton — combination of cellular automata with discrete elements

Meshfree methods — do not use a mesh, but use a particle view of the field

Discrete least squares meshless method — based on minimization of weighted summation of the squared residual

Diffuse element method

Finite pointset method — represents the continuum by a point cloud

Moving Particle Semi-implicit Method

Method of fundamental solutions (MFS) — represents solution as linear combination of fundamental solutions

Variants of MFS with source points on the physical boundary:

Boundary knot method (BKM)

Boundary particle method (BPM)

Regularized meshless method (RMM)

Singular boundary method (SBM)

Methods designed for problems from electromagnetics:

Finite-difference time-domain method — a finite-difference method

Rigorous coupled-wave analysis — semi-analytical Fourier-space method based on Floquet’s theorem

Transmission-line matrix method (TLM) — based on analogy between electromagnetic field and mesh of transmission lines

Uniform theory of diffraction — specifically designed for scattering problems

Particle-in-cell — used especially in fluid dynamics

Multiphase particle-in-cell method — considers solid particles as both numerical particles and fluid

High-resolution scheme

Shock capturing method

Vorticity confinement — for vortex-dominated flows in fluid dynamics, similar to shock capturing

Split-step method

Fast marching method

Orthogonal collocation

Lattice Boltzmann methods — for the solution of the Navier-Stokes equations

Roe solver — for the solution of the Euler equation

Relaxation (iterative method) — a method for solving elliptic PDEs by converting them to evolution equations

Broad classes of methods:

Mimetic methods — methods that respect in some sense the structure of the original problem

Multiphysics — models consisting of various submodels with different physics

Immersed boundary method — for simulating elastic structures immersed within fluids

Multisymplectic integrator — extension of symplectic integrators, which are for ODEs

Stretched grid method — for problems whose solution can be related to the behaviour of an elastic grid

Techniques for improving these methods

Multigrid method — uses a hierarchy of nested meshes to speed up the methods

Domain decomposition methods — divides the domain into a few subdomains and solves the PDE on these subdomains

Additive Schwarz method

Abstract additive Schwarz method — abstract version of additive Schwarz without reference to geometric information

Balancing domain decomposition method (BDD) — preconditioner for symmetric positive definite matrices

Balancing domain decomposition by constraints (BDDC) — further development of BDD

Finite element tearing and interconnect (FETI)

FETI-DP — further development of FETI

Fictitious domain method — preconditioner constructed with a structured mesh on a fictitious domain of simple shape

Mortar methods — allow meshes on adjacent subdomains that do not match at the interface

Neumann–Dirichlet method — combines Neumann problem on one subdomain with Dirichlet problem on other subdomain

Neumann–Neumann methods — domain decomposition methods that use Neumann problems on the subdomains

Poincaré–Steklov operator — maps tangential electric field onto the equivalent electric current

Schur complement method — early and basic method on subdomains that do not overlap

Schwarz alternating method — early and basic method on subdomains that overlap

Coarse space — variant of the problem which uses a discretization with fewer degrees of freedom

Adaptive mesh refinement — uses the computed solution to refine the mesh only where necessary

Fast multipole method — hierarchical method for evaluating particle-particle interactions

Perfectly matched layer — artificial absorbing layer for wave equations, used to implement absorbing boundary conditions

Grids and meshes

Grid classification / Types of mesh:

Polygon mesh — consists of polygons in 2D or 3D

Triangle mesh — consists of triangles in 2D or 3D

Triangulation (geometry) — subdivision of given region in triangles, or higher-dimensional analogue

Nonobtuse mesh — mesh in which all angles are less than or equal to 90°

Point set triangulation — triangle mesh in which the points of a given set are all vertices of triangles

Polygon triangulation — triangle mesh inside a polygon

Delaunay triangulation — triangulation such that no vertex is inside the circumcircle of any triangle (a minimal sketch follows this group)

Constrained Delaunay triangulation — generalization of the Delaunay triangulation that forces certain required segments into the triangulation

Pitteway triangulation — for any point, triangle containing it has nearest neighbour of the point as a vertex

Minimum-weight triangulation — triangulation of minimum total edge length

Kinetic triangulation — a triangulation that moves over time

Triangulated irregular network

Quasi-triangulation — subdivision into simplices, where vertices are not points but arbitrarily sloped line segments

Volume mesh — consists of three-dimensional shapes

Regular grid — consists of congruent parallelograms, or higher-dimensional analogue

Unstructured grid

Geodesic grid — isotropic grid on a sphere
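
A minimal sketch of building a Delaunay triangulation of a 2D point set with scipy.spatial.Delaunay (Python); the number of points and the random seed are arbitrary choices for illustration.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(42)
points = rng.random((30, 2))           # 30 arbitrary points in the unit square

tri = Delaunay(points)
print(tri.simplices[:5])               # each row: indices of the 3 vertices of one triangle
print(tri.find_simplex([[0.5, 0.5]]))  # index of the triangle containing a query point (-1 if outside)
```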

Mesh generation

Image-based meshing — automatic procedure of generating meshes from 3D image data

Marching cubes — extracts a polygon mesh from a scalar field

Parallel mesh generation

Ruppert’s algorithm — creates a quality Delaunay triangulation from piecewise linear data

Apollonian network — undirected graph formed by recursively subdividing a triangle

Barycentric subdivision — standard way of dividing arbitrary convex polygons into triangles, or the higher-dimensional analogue

Chew’s second algorithm — improves a Delaunay triangulation by refining poor-quality triangles

Laplacian smoothing — improves polygon meshes by moving the vertices

Jump-and-Walk algorithm — for finding triangle in a mesh containing a given point

Spatial twist continuum — dual representation of a mesh consisting of hexahedra

Pseudotriangle — simply connected region between any three mutually tangent convex sets

Simplicial complex — all vertices, line segments, triangles, tetrahedra, …, making up a mesh

Analysis:

Lax equivalence theorem — a consistent method is convergent if and only if it is stable

Courant–Friedrichs–Lewy condition — stability condition for hyperbolic PDEs

Von Neumann stability analysis — all Fourier components of the error should be stable (a worked example follows this group)

Numerical diffusion — diffusion introduced by the numerical method, over and above that which is naturally present

False diffusion

Numerical resistivity — the same, with resistivity instead of diffusion

Weak formulation — a functional-analytic reformulation of the PDE necessary for some methods

Total variation diminishing — property of schemes that do not introduce spurious oscillations

Godunov’s theorem — linear monotone schemes can only be of first order

Motz’s problem — benchmark problem for singularity problems
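
As a worked illustration of Von Neumann stability analysis (a sketch, applied to the FTCS scheme for the heat equation listed earlier): substitute a Fourier mode u_j^n = G^n e^{i j θ} into the update rule and require |G| ≤ 1 for every θ.

$$
u_j^{n+1} = u_j^n + r\left(u_{j+1}^n - 2u_j^n + u_{j-1}^n\right), \qquad r = \frac{\alpha\,\Delta t}{\Delta x^2},
$$
$$
G(\theta) = 1 + r\left(e^{i\theta} - 2 + e^{-i\theta}\right) = 1 - 4r\sin^2\!\left(\tfrac{\theta}{2}\right), \qquad |G(\theta)| \le 1 \;\Longleftrightarrow\; r \le \tfrac{1}{2}.
$$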

Monte Carlo methods:

Direct simulation Monte Carlo

Quasi-Monte Carlo method

Markov chain Monte Carlo

Metropolis–Hastings algorithm (a minimal sketch follows this group)

Multiple-try Metropolis — modification which allows larger step sizes

Wang and Landau algorithm — extension of Metropolis Monte Carlo

Equation of State Calculations by Fast Computing Machines — 1953 article proposing the Metropolis Monte Carlo algorithm

Multicanonical ensemble — sampling technique that uses Metropolis–Hastings to compute integrals

Gibbs sampling

Coupling from the past

Reversible-jump Markov chain Monte Carlo

Dynamic Monte Carlo method

Kinetic Monte Carlo

Gillespie algorithm

Particle filter

Auxiliary particle filter

Reverse Monte Carlo

Demon algorithm
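
A minimal random-walk Metropolis–Hastings sketch (Python), sampling from an unnormalized one-dimensional target density. The target, proposal step size and burn-in length are illustrative assumptions.

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_samples, step=1.0, rng=None):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal."""
    rng = rng or np.random.default_rng(0)
    samples = np.empty(n_samples)
    x, logp = x0, log_target(x0)
    for i in range(n_samples):
        x_new = x + step * rng.normal()
        logp_new = log_target(x_new)
        # Accept with probability min(1, target(x_new) / target(x)); the
        # proposal is symmetric, so its densities cancel in the ratio.
        if np.log(rng.random()) < logp_new - logp:
            x, logp = x_new, logp_new
        samples[i] = x
    return samples

# Illustrative target: an (unnormalized) standard normal, so the output is easy to check
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_samples=20000)
print(samples[5000:].mean(), samples[5000:].std())  # roughly 0 and 1 after burn-in
```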

Pseudo-random number sampling

Inverse transform sampling — general and straightforward method but computationally expensive

Rejection sampling — sample from a simpler distribution but reject some of the samples

Ziggurat algorithm — uses a pre-computed table covering the probability distribution with rectangular segments

For sampling from a normal distribution:

Box–Muller transform (a minimal sketch follows this group)

Marsaglia polar method
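
A minimal sketch of the Box–Muller transform (Python): two independent uniform variates are mapped to two independent standard normal variates. The sample count is arbitrary.

```python
import numpy as np

def box_muller(n_pairs, rng=None):
    """Turn pairs of uniform(0,1) variates into pairs of independent standard normals."""
    rng = rng or np.random.default_rng(0)
    u1 = 1.0 - rng.random(n_pairs)  # in (0, 1], avoids log(0)
    u2 = rng.random(n_pairs)
    r = np.sqrt(-2.0 * np.log(u1))  # radius: square root of an exponential variate
    theta = 2.0 * np.pi * u2        # uniformly distributed angle
    return r * np.cos(theta), r * np.sin(theta)

z1, z2 = box_muller(100_000)
print(z1.mean(), z1.std())  # roughly 0.0 and 1.0
```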

Convolution random number generator — generates a random variable as a sum of other random variables

Indexed search

Variance reduction techniques:

Antithetic variates

Control variates

Importance sampling (a minimal sketch follows this group)

Stratified sampling

VEGAS algorithm
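
A minimal importance-sampling sketch (Python): estimating the small tail probability P(X > 4) for a standard normal X by drawing from a proposal shifted into the tail and reweighting by the density ratio. The shift and sample size are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 100_000
threshold = 4.0

# Naive Monte Carlo: almost no samples land in the tail, so the estimate is poor.
naive = (rng.standard_normal(n) > threshold).mean()

# Importance sampling: draw from N(threshold, 1) and reweight by target / proposal.
y = rng.normal(threshold, 1.0, n)
weights = norm.pdf(y) / norm.pdf(y, loc=threshold, scale=1.0)
estimate = np.mean((y > threshold) * weights)

print(naive, estimate, norm.sf(threshold))  # norm.sf gives the exact tail probability
```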

Low-discrepancy sequence

Constructions of low-discrepancy sequences

Event generator

Parallel tempering

Umbrella sampling — improves sampling in physical systems with significant energy barriers

Hybrid Monte Carlo

Ensemble Kalman filter — recursive filter suitable for problems with a large number of variables

Transition path sampling

Walk-on-spheres method — to generate exit-points of Brownian motion from bounded domains

Ensemble forecasting — produce multiple numerical predictions from slightly differing initial conditions or parameters

Bond fluctuation model — for simulating the conformation and dynamics of polymer systems

Iterated filtering

Metropolis light transport

Monte Carlo localization — estimates the position and orientation of a robot

Monte Carlo methods for electron transport

Monte Carlo method for photon transport

Monte Carlo methods in finance

Monte Carlo methods for option pricing

Quasi-Monte Carlo methods in finance

Monte Carlo molecular modeling

Path integral molecular dynamics — incorporates Feynman path integrals

Quantum Monte Carlo

Diffusion Monte Carlo — uses a Green function to solve the Schrödinger equation

Gaussian quantum Monte Carlo

Path integral Monte Carlo

Reptation Monte Carlo

Variational Monte Carlo

Methods for simulating the Ising model:

Swendsen–Wang algorithm — entire sample is divided into equal-spin clusters

Wolff algorithm — improvement of the Swendsen–Wang algorithm

Metropolis–Hastings algorithm

Auxiliary field Monte Carlo — computes averages of operators in many-body quantum mechanical problems

Cross-entropy method — for multi-extremal optimization and importance sampling

Also see the list of statistics topics

Large eddy simulation

Smoothed-particle hydrodynamics

Aeroacoustic analogy — used in numerical aeroacoustics to reduce sound sources to simple emitter types

Stochastic Eulerian Lagrangian method — uses Eulerian description for fluids and Lagrangian for structures

Explicit algebraic stress model

Computational magnetohydrodynamics (CMHD) — studies electrically conducting fluids

Geodesic grid

Quantum jump method — used for simulating open quantum systems, operates on wave function

Dynamic Design Analysis Method (DDAM) — for evaluating effect of underwater explosions on equipment

Cell lists

Coupled cluster

Density functional theory

DIIS — direct inversion in (or of) the iterative subspace

Now, if one were to pick any given set of approaches, one would first have to ensure that the set of approaches is commutative: many of these approaches assume slightly different ontological premises and thus cannot be used together. That is not an issue for mathematics per se, since mathematics doesn’t claim any relation to empirical data; it becomes an issue, however, when computational approaches are applied to empirical data.

Even within the simplest mathematical systems, for instance regular arithmetic and simple linear algebra, there is no rational transition between the two. We tend to assume there is primarily because most of us made the transition initially as children and habituated the change in the basis of understanding the symbology involved. In terms of the process of learning, this is reinforced by the sense that simple algebra is somehow based on arithmetic, since arithmetic is always an assumed prerequisite. However, this is illusory. The prerequisite practice of arithmetic is simply the habituating of the ability to manipulate mathematical symbols in the most general sense; it in no way implies that there is any necessary or even contingent relation between the symbology of arithmetic and that of simple algebra. Within computational mathematics, which is only a small subset of mathematical systems in general, there are dozens of underlying mathematical systems that have no rational transitions, i.e. are non-commutative. Within computation, which generally runs on a ‘good enough’ approach, this only occasionally creates issues. However, if one is trying to model a given system accurately rather than simply using a ‘good enough’ simulation to provide an optimization to a purely computational problem, this simultaneous use of non-commutative approaches cannot be permitted.

Once it is confirmed that the set is fully commutative (which we have no quick or simple means of doing) one would have to ensure that the operative ontology in all the approaches is a sensible one in terms of understanding and manipulating empirical biological data, i.e. determining that the operative ontology of the mathematical approaches is identical to the actual operative ontology of real biological systems. We have no theoretical means of accomplishing this, never mind a practical method.

Even within the small set of biologically inspired computational approaches, while it is true that the models behave in a manner that is somewhat similar to the biological systems they were inspired by, it is also true that they do not do so with any accuracy. This lack of realistic precision could be due to the model being a relatively closed system when compared with the actual biological system, or it could be due to the model failing to take into account or failing to accurately determine initial conditions for all relevant parameters, or it could be that the model is based on invalid ontological assumptions and merely mimics a certain aspect of reality without implying anything ontologically valid about reality. There’s no means to distinguish between these potential origins of inaccuracy except by modeling the system in question and its entire spatio-temporal environment, which is nothing other than the rest of reality itself, with all parameters accurately determined. The only feasible model we can ever have for that is reality itself.

While even the more complex single-celled systems are beyond the modeling capability of computational mathematics, multicellular, cell-differentiated systems are beyond the capability of computation in a more general sense. In a similar manner to the lack of commutativity between different mathematical systems, there is no commutativity between a single-cell system and a multicellular, cell-differentiated system, i.e. there is no rational transition, since the generation of a more comprehensive generic view is itself an ontological, not a rational exercise.

“Niche” Programming Languages

There are a number of very good “niche” programming languages available. But “niches” are not all alike.

Scala has a “niche” in parallel software. It’s a good language and you can use it for other things, but it buys you the most in that area.

Haskell has a “niche” in scientific programming. It’s also a good language and can be used for other things, but it buys you the most in that area.

LISP has a “niche” in recursive programming, which is useful for certain types of mathematical problems and for AI. It can be used for other things, but framing common business problems in LISP can be more of a challenge than it’s worth unless you’re really good with it.

Smalltalk, on the other hand, is also a “niche” language, but its niches seem to have little in common at first glance. It’s used in prototyping, due to the speed of revising iterative software, and simultaneously in production automation, including high-performance manufacturing such as semiconductor fabs. It’s used in DoD defense automation systems, as well as in telecom, where most of Iceland’s cell phone system runs on a Smalltalk back end. Another, seemingly unrelated niche is epidemiology software, such as Kendrick, used for the Ebola outbreak.

The commonality to these niches that stands out has less to do with specific language aspects that make certain types of calculations easier to frame than with the fact that, in each case, “shit has to work”, and it has to be written, tested and debugged productively.

However “shit has to work” and “has to be written, tested and debugged quickly and productively” seem like niches that shouldn’t be niche in the first place, especially given the percentage of overall software projects that are late, overbudget, or fail completely. According to a software productivity study undertaken in 2013 based on time to production per feature point and average number of defects, Smalltalk was 34x as productive as basic assembler, compared to Ruby at 16x, Java at 11x, C at 5x, and JavaScript/NodeJS at 1.2x.

It seems odd that CIOs, technical leads and developers would see a language that productive and that reliable as merely a “niche”.

Decapitalization of Industry and Desubstantialization of Currency

Somebody made a comment on a website the other day which, so far as it goes, is correct, but completely beside the point:

And what exactly would you like to point out? The two are one and the same, because a thriving, competitive marketplace always drives prices down. That’s, like, Capitalism 101, guys.

It is correct, of course, but the necessary assumption is that Capitalism as such is the primary operative mode of the economy. It’s beside the point precisely because Capitalism is no longer the primary operative mode of the economy, and, insofar as it is operative at all, it is simply eating itself.

Money, as itself insubstantial, cannot make money. Capitalism involves the pretense that it can, by taking surplus wages. That in itself, though, requires that there are surplus wages to be taken, which is no longer the case, ironically as a result predominantly of higher productivity. The phenomenal rise in the cost of property is only the most noticeable effect, and has occurred simply because those with money must put it somewhere, and property, with the rentier income it provides, is the only reliable intermediary state between having money and getting income from it.

While in individual situations a capitalist can make money from investment, whatever money is made is predicated on a further overall divestment of capital. Successful companies such as Apple and Google are only successful insofar as they disrupt and decapitalize companies with larger initial capital bases than the resulting capital bases of the newly successful firms. Thus overall, the success of specific companies doesn’t take away from the overall decapitalization of the economy, but rather accelerates it.

Money, as such, can only be measured relative to other money. While a rough estimate could theoretically be made of the absolute value by looking at the relative prices of the same goods or services over a given period, too many goods and services have not existed for long enough, while others have disappeared, to make any accurate estimates possible. This leads to the odd situation that we really don’t know what, if anything, any given money is worth.

Any attempt to base money on something else, such as energy (the petrodollar, Bitcoin) or a specific material (the gold standard) inevitably fails, since the values of energy and gold are themselves measured in money.

Money is, quite literally, worth whatever society as a whole believes it is worth. This belief is itself based on a mythos of substantialized currency, a mythos that was substituted for the initial mythos of currency as valuable due to the god’s blessing. The apparent testimony to the substantial nature of money came from the technology of the touchstone, but of course it, like any other measure of money, could only measure relative, not absolute value.

This mythos, though, is on its deathbed, only sustained at the moment by a combination of artificial life-support and fear of what will happen if it does in fact die. The desubstantialization of money via its digitization, combined with the decapitalization of the economy, has resulted in a situation where the operative economy is closer to feudalism than to capitalism proper, based as it is primarily on property as real estate.

The next financial crisis, if it follows the pattern of the others since the 1970s, will be “too big to bail”, since the amount of currency at risk is larger than any government’s ability to bail out without creating a domino effect on all other currencies. The desire for local stability creates global instability through the (from each local perspective sensible) intertwining of finance and other large industries globally.

The question is, what then? Nobody can simply resurrect the global financial system, since too many conflicting interests would have to agree. I don’t have an answer, and as far as I can tell, neither does anyone else. When the system crashes is entirely unpredictable; it may be this year, next year, or two hundred years from now, but with a much higher than zero probability, it inevitably will happen at some point.

The proper meaning of “currency” is “something generally accepted as meaningful” in a given society, as in “certain slang terms gained currency in the 1920s, only to be eclipsed by others and to have virtually disappeared by the 1940s”. Currency in the monetary sense is no more substantial than slang, and the idea that currency itself could not ‘lose currency’ is neither backed up by evidence nor sensible in itself.

Can We Recycle Paleontologists as Street Sweepers? Or Is That Beyond Them?

Actual quotes from a paleontologist, Steve Brusatte, made THIS YEAR: “In general it is very bird-like, but it’s big, and has these very short arms with full-blown wings.”

“When you see a dinosaur like this that’s pretty big, and has these short arms and bird-like wings, it begs that question: what are wings really for? We used to think pretty much anything that had wings was flying, but that’s not so clear now.”

Excuse me, but have you not noticed the number of big ass feathered bird-like creatures that run around various parts of the world and can’t fly? You know, exotic, rare species like chickens, turkeys and ostriches. Just because stupid zoologists jumped to the conclusion that their ancestors must have been able to fly, with no supporting evidence whatsoever, instead of actually thinking about the problems it might raise, doesn’t make you any less of a fucking idiot.

Any halfways intelligent seven year old should notice the resemblance between a skeleton of a wild turkey and the fossilized traces of skeletons of “dinosaurs” such as the Velociraptor. It wasn’t an extinct reptile, just a big ass carnivorous turkey that lived a long time ago. In fact it was obvious to a halfways intelligent seven year old: the halfways intelligent seven year old I was 40 years ago. I had no idea these idiots still thought that the Jurassic Park dinosaurs were “realistic”; it’s just a movie, not real life. Most 3 year olds know that movies and tv aren’t real. It wouldn’t be anything other than an absurd comedy if big ass turkeys were chasing people around in some messed up version of Disneyworld.

Guess what, you dipshits, dinosaurs never existed; they’re a figment of your halfwit minds. There were plenty of big ass birds, many of which, like big ass birds today, couldn’t fly. Birds are a group much more robust, diverse and adaptive than mammals have ever been. The “great extinction” is itself no more than a myth; birds simply changed. Exaptive change cannot be accounted for even when it occurs in your lifetime, much less when it occurred long before there were human beings to account for it.

Below: skeleton of a wild turkey. Not only has it not been extinct for millions of years, it died a decade ago. There’s plenty more still running around.

[Image: wild turkey skeleton]