Physlib.SpaceAndTime.Space.Derivatives.Div
11 declarations
Divergence of a function
#div — For a function `f : Space d → EuclideanSpace ℝ (Fin d)` (where `Space d` is represented by `EuclideanSpace ℝ (Fin d)`), the divergence is the scalar-valued function on `Space d` defined by the sum of the partial derivatives of its components. Specifically, for any point \(x\): \[ (\operatorname{div} f)(x) = \sum_{i} \partial_i f_i(x) \] where \(f_i(x)\) denotes the \(i\)-th component of the vector \(f(x)\) and \(\partial_i\) denotes the spatial derivative in the direction of the \(i\)-th basis vector.
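The component-sum definition can be checked numerically. The sketch below is purely illustrative (it is not Physlib code): the `divergence` helper approximates each partial derivative by a central finite difference, and the sample vector field is a hypothetical choice whose divergence is easy to compute by hand.

```python
import numpy as np

def divergence(f, x, h=1e-5):
    """Approximate (div f)(x) = sum_i d(f_i)/dx_i via central differences.
    `f` maps a point of R^d to a vector in R^d."""
    d = len(x)
    total = 0.0
    for i in range(d):
        e = np.zeros(d)
        e[i] = h
        total += (f(x + e)[i] - f(x - e)[i]) / (2 * h)
    return total

# Sample field on R^3: f(x) = (x0^2, x0*x1, x2).
# Exact divergence: 2*x0 + x0 + 1 = 3*x0 + 1.
f = lambda x: np.array([x[0]**2, x[0] * x[1], x[2]])
x = np.array([1.0, 2.0, 3.0])
print(divergence(f, x))  # ≈ 4.0, since 3*x0 + 1 = 4 at x0 = 1
```

Central differences are exact for the quadratic and linear components used here, so the approximation agrees with the analytic value up to floating-point rounding.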
Divergence notation
#divNotation — This notation represents the divergence operator applied to a function `f` from the space `Space d` to the `d`-dimensional Euclidean space `EuclideanSpace ℝ (Fin d)`. It is defined as a shorthand for the function `Space.div f`.
The divergence of the zero function mapping `Space d` to `EuclideanSpace ℝ (Fin d)` is the zero function mapping `Space d` to `ℝ`. Symbolically, \(\operatorname{div} 0 = 0\).
The Divergence of a Constant Function is Zero
#div_const — For any constant vector `c : EuclideanSpace ℝ (Fin d)`, the divergence of the constant function defined by `fun _ => c` is the zero function, i.e., \(\operatorname{div}(x \mapsto c) = 0\).
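Both the zero-function and constant-function facts can be sanity-checked with the same finite-difference scheme. The helper and sample fields below are illustrative only, not Physlib code:

```python
import numpy as np

def divergence(f, x, h=1e-5):
    """Central-difference approximation of the divergence at x."""
    d = len(x)
    return sum((f(x + h * np.eye(d)[i])[i] - f(x - h * np.eye(d)[i])[i]) / (2 * h)
               for i in range(d))

x = np.array([0.5, -1.0, 2.0])
zero = lambda p: np.zeros(3)                  # the zero field
const = lambda p: np.array([3.0, -2.0, 7.0])  # a constant field c
print(divergence(zero, x), divergence(const, x))  # 0.0 0.0
```

For a constant field each difference quotient is exactly zero, so the result is zero even before any limit is taken.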
Let `f g : Space d → EuclideanSpace ℝ (Fin d)` be two differentiable functions, where `EuclideanSpace ℝ (Fin d)` is the standard `d`-dimensional Euclidean space. Then the divergence of the sum of the functions is equal to the sum of their individual divergences: \[ \operatorname{div}(f + g) = \operatorname{div} f + \operatorname{div} g \] where \(\operatorname{div}\) denotes the divergence operator `Space.div`.
For any differentiable function `f : Space d → EuclideanSpace ℝ (Fin d)` and any scalar `k : ℝ`, the divergence of the scalar multiple `k • f` is equal to the scalar multiple of the divergence of `f`: \[ \operatorname{div}(k \cdot f) = k \cdot \operatorname{div} f \] Here, `Space d` is a `d`-dimensional real inner product space and `EuclideanSpace ℝ (Fin d)` is equipped with the standard Euclidean structure.
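The two linearity lemmas above (additivity and scalar homogeneity) follow from the linearity of differentiation, and a numerical check makes this concrete. Everything below is an illustrative sketch, not Physlib code; the sample fields are hypothetical:

```python
import numpy as np

def divergence(f, x, h=1e-5):
    """Central-difference approximation of the divergence at x."""
    d = len(x)
    return sum((f(x + h * np.eye(d)[i])[i] - f(x - h * np.eye(d)[i])[i]) / (2 * h)
               for i in range(d))

f = lambda p: np.array([np.sin(p[0]), p[1]**2, p[0] * p[2]])
g = lambda p: np.array([p[2], np.cos(p[1]), p[0]])
x = np.array([0.3, 1.2, -0.7])
k = 2.5

# div (f + g) = div f + div g
lhs_add = divergence(lambda p: f(p) + g(p), x)
rhs_add = divergence(f, x) + divergence(g, x)
# div (k * f) = k * div f
lhs_smul = divergence(lambda p: k * f(p), x)
rhs_smul = k * divergence(f, x)
print(abs(lhs_add - rhs_add) < 1e-6, abs(lhs_smul - rhs_smul) < 1e-6)  # True True
```

Because the difference quotient is itself linear in the field, both identities hold exactly up to floating-point rounding, mirroring the algebraic proofs.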
The Map `m ↦ div (f m)` is Linear if `f` is Linear
#div_linear_map — Let `M` be a real vector space and let `f : M → (Space → EuclideanSpace ℝ (Fin 3))` be a linear map. If for every `m : M`, the function `f m` is differentiable, then the map that assigns each `m` to the divergence of `f m`, denoted by `m ↦ Space.div (f m)`, is also a linear map. Here, `Space` represents a 3-dimensional real inner product space and `EuclideanSpace ℝ (Fin 3)` is the standard Euclidean space.
Divergence operator for distributions
#distDiv — The divergence operator for distributions, `distDiv`, is an `ℝ`-linear map that transforms a vector-valued distribution `T` into a scalar-valued distribution `distDiv T`. It is defined as the composition of the distributional Fréchet derivative with a trace operator. For a continuous linear map `u : Space d →L[ℝ] EuclideanSpace ℝ (Fin d)`, the trace is computed as \(\sum_i (u\,e_i)_i\), where \((e_i)\) is the standard orthonormal basis of `Space d` and \((u\,e_i)_i\) denotes the \(i\)-th component of the vector \(u\,e_i\).
Let `Space d` be a `d`-dimensional real inner product space equipped with the standard orthonormal basis \((e_i)\). For any distribution `T` and any Schwartz test function `η`, the distributional divergence of `T` applied to `η` is given by the sum: \[ (\operatorname{distDiv} T)(\eta) = \sum_{i} \big[(\partial T)(\eta)\,e_i\big]_i \] where \(\partial T\) is the distributional Fréchet derivative of `T`, and \([(\partial T)(\eta)\,e_i]_i\) denotes the \(i\)-th component of the vector resulting from applying the continuous linear map \((\partial T)(\eta)\) to the basis vector \(e_i\).
Let `Space d` be a `d`-dimensional real inner product space equipped with the standard orthonormal basis \((e_i)\). For any distribution `T` and any Schwartz test function `η`, the distributional divergence of `T` applied to `η` is equal to the sum of the \(i\)-th components of the distributional partial derivatives of `T` applied to `η`: \[ (\operatorname{distDiv} T)(\eta) = \sum_{i} \big[(\partial_i T)(\eta)\big]_i \] where \(\partial_i T\) is the distributional partial derivative of `T` in the direction of the \(i\)-th basis vector \(e_i\), and \([\,\cdot\,]_i\) denotes the \(i\)-th component of the resulting vector in `EuclideanSpace ℝ (Fin d)`.
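For a distribution induced by a smooth function, the "derivative followed by trace" composition that defines `distDiv` reduces to taking the trace of the Jacobian matrix, which is the classical divergence. The sketch below illustrates this with finite differences; the `jacobian` helper and the sample field are hypothetical, not Physlib code:

```python
import numpy as np

def jacobian(f, x, h=1e-5):
    """Central-difference Jacobian J with J[i, j] = d f_i / d x_j."""
    d = len(x)
    J = np.empty((d, d))
    for j in range(d):
        e = np.zeros(d)
        e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return J

f = lambda p: np.array([p[0] * p[1], p[1]**2, np.sin(p[2])])
x = np.array([1.0, 2.0, 0.0])
J = jacobian(f, x)

# Trace in the sense above: sum_i (J e_i)_i, i.e. the diagonal sum.
trace = sum(J[i, i] for i in range(3))
print(trace)  # ≈ 7.0: exact divergence is x1 + 2*x1 + cos(x2) = 2 + 4 + 1
```

Applying the Jacobian (the Fréchet derivative at `x`) to the basis vector \(e_i\) and taking the \(i\)-th component picks out the diagonal entry \(J_{ii}\), so the trace formula and the partial-derivative formula agree.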
Distributional divergence of a function equals \(-\int_E \langle f(x), \nabla \eta(x) \rangle \, dx\)
#distDiv_ofFunction — Let `E` be a `d`-dimensional Euclidean space (represented by `EuclideanSpace ℝ (Fin d)`). Let `f : E → E` be a distribution-bounded function, and let `T_f` denote the distribution induced by `f`. For any Schwartz test function `η`, the distributional divergence of `T_f` applied to `η` is given by: \[ (\operatorname{distDiv}\, T_f)(\eta) = -\int_E \langle f(x), \nabla \eta(x) \rangle \, dx \] where \(\nabla \eta\) is the gradient of `η` and \(\langle \cdot, \cdot \rangle\) is the standard inner product on `E`.
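In one dimension this identity reduces to the classical integration-by-parts formula \(\int f'(x)\,\eta(x)\,dx = -\int f(x)\,\eta'(x)\,dx\) (the boundary terms vanish because `η` is Schwartz). A numerical sketch, with hypothetical sample choices `f(x) = x` and the Gaussian test function `η(x) = exp(-x²)`:

```python
import numpy as np

# Grid wide enough that the Gaussian test function is negligible at the ends.
x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]

f = x                    # distribution-bounded sample function, f(x) = x
eta = np.exp(-x**2)      # Schwartz test function
grad_eta = -2 * x * eta  # its derivative (gradient in d = 1)

# Left side: integral of f'(x) * eta(x) dx, with f' = 1.
lhs = np.sum(np.ones_like(x) * eta) * dx
# Right side: -integral of f(x) * eta'(x) dx.
rhs = -np.sum(f * grad_eta) * dx
print(lhs, rhs)  # both ≈ sqrt(pi) ≈ 1.7725
```

Both sides converge to \(\int e^{-x^2}\,dx = \sqrt{\pi}\), matching the sign convention in the formula above.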
