Physlib.SpaceAndTime.Space.Derivatives.Grad
35 declarations
Gradient of a function
#grad
For a real-valued function $f$ on $d$-dimensional space, the gradient $\nabla f$ is a vector field mapping each point $x$ to an element of the Euclidean space $\mathbb{R}^d$. The $i$-th component of the gradient at a point $x$ is given by the partial derivative of $f$ with respect to the $i$-th spatial coordinate, denoted as $\partial_i f(x)$ or $\frac{\partial f}{\partial x_i}(x)$.
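As a concrete illustration (a worked example, not part of the library), take $d = 2$ and $f(x) = x_0^2 + 3 x_1$; computing each partial derivative gives

```latex
\nabla f(x) = \big(\partial_0 f(x),\ \partial_1 f(x)\big) = (2 x_0,\ 3),
```

so at $x = (1, 2)$ the gradient is $(2, 3)$.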
Gradient notation
#term∇
The notation $\nabla$ is introduced to represent the gradient operator `grad`. For a function $f$, $\nabla f$ denotes the gradient of $f$, which is a vector field mapping points in space to the Euclidean space $\mathbb{R}^d$.
The gradient of the zero function (defined by $f(x) = 0$ for all $x$) is the zero vector field: $\nabla 0 = 0$.
Let $f, g$ be two real-valued functions that are differentiable on all of space. The gradient of the sum of these functions is equal to the sum of their individual gradients: \[ \nabla (f + g) = \nabla f + \nabla g \] where $\nabla$ denotes the gradient operator.
The gradient of a constant function is zero
#grad_const
For any real constant $c$, the gradient of the constant function defined by $f(x) = c$ is the zero vector field: $\nabla c = 0$.
For a differentiable function $f$ and a real scalar $k$, the gradient of the scalar multiple of $f$ by $k$ is equal to $k$ times the gradient of $f$: \[ \nabla (k \cdot f) = k \cdot \nabla f \] where $\nabla$ denotes the gradient operator.
For any real-valued function $f$, the gradient of the negation of $f$ is equal to the negation of the gradient of $f$: \[ \nabla (-f) = -\nabla f \] where $\nabla$ denotes the gradient operator.
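Taken together, the zero, constant, sum, scalar-multiple, and negation rules say that $\nabla$ acts linearly; for instance (an illustrative combination, assuming $f$ and $g$ are differentiable):

```latex
\nabla (2 f - g + 5)
  = 2\,\nabla f - \nabla g + \nabla 5
  = 2\,\nabla f - \nabla g .
```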
Expansion of the gradient
#grad_eq_sum
For any real-valued function $f$ and any point $x$, the gradient $\nabla f(x)$ is equal to the sum over all coordinates $i$ of the partial derivative of $f$ at $x$ in the $i$-th direction, $\partial_i f(x)$, multiplied by the $i$-th standard basis vector $\mathbf{e}_i$: \[ \nabla f(x) = \sum_{i=0}^{d-1} \partial_i f(x)\, \mathbf{e}_i \] where $\partial_i f(x)$ corresponds to the spatial derivative `deriv i f x` and $\mathbf{e}_i$ corresponds to the unit vector `EuclideanSpace.single i 1`.
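A schematic Lean rendering of this statement (the binder style, the `Space d` domain, and the `example` form are assumptions made for illustration; `grad`, `deriv`, and `EuclideanSpace.single` are the identifiers mentioned above, and the proof is elided with `sorry` since this is a sketch, not the library source):

```lean
-- Schematic sketch only: the statement shape is inferred from the prose.
example {d : ℕ} (f : Space d → ℝ) (x : Space d) :
    grad f x = ∑ i : Fin d, deriv i f x • EuclideanSpace.single i (1 : ℝ) := by
  sorry -- in the library this is the lemma `grad_eq_sum`
```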
For any real-valued function $f$ and any point $x$, the $i$-th component of the gradient vector $\nabla f(x)$ (where $i \in \{0, \dots, d-1\}$) is equal to the partial derivative of $f$ at $x$ with respect to the $i$-th coordinate, denoted as $\partial_i f(x)$ or `deriv i f x`.
For a real-valued function $f$ and a point $x$, the inner product of the gradient $\nabla f(x)$ with the $i$-th standard basis vector $\mathbf{e}_i$ (where $\mathbf{e}_i$ is the vector with $1$ at the $i$-th component and $0$ elsewhere) is equal to the partial derivative of $f$ at $x$ with respect to the $i$-th coordinate, denoted as $\partial_i f(x)$. Mathematically, this is expressed as: \[ \langle \nabla f(x), \mathbf{e}_i \rangle = \partial_i f(x). \]
For a real-valued function $f$, a point $x$, and a vector $y$, the inner product of the gradient $\nabla f(x)$ and the vector $y$ is equal to the sum over all coordinate indices $i$ of the product of the $i$-th component of $y$ and the partial derivative of $f$ at $x$ with respect to the $i$-th coordinate. Mathematically, this is expressed as: \[ \langle \nabla f(x), y \rangle = \sum_{i=0}^{d-1} y_i\, \partial_i f(x) \] where $\partial_i f(x)$ (or `deriv i f x`) is the spatial derivative of $f$ at $x$ in the direction of the $i$-th standard basis vector.
For a real-valued function $f$, a vector $y$, and a point $x$, the inner product of $y$ and the gradient $\nabla f(x)$ is equal to the sum over all coordinate indices $i$ of the product of the $i$-th component of $y$ and the partial derivative of $f$ at $x$ with respect to the $i$-th coordinate. Mathematically, this is expressed as: \[ \langle y, \nabla f(x) \rangle = \sum_{i=0}^{d-1} y_i\, \partial_i f(x) \] where $y_i$ is the $i$-th component of $y$ and $\partial_i f(x)$ (also denoted `deriv i f x`) is the spatial derivative of $f$ at $x$ in the direction of the $i$-th standard basis vector.
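Both expansions follow from `grad_eq_sum` together with orthonormality of the basis: writing $y = \sum_j y_j \mathbf{e}_j$ and using $\langle \mathbf{e}_i, \mathbf{e}_j \rangle = \delta_{ij}$,

```latex
\langle \nabla f(x), y \rangle
  = \Big\langle \sum_{i} \partial_i f(x)\,\mathbf{e}_i,\ \sum_{j} y_j\,\mathbf{e}_j \Big\rangle
  = \sum_{i,j} \partial_i f(x)\, y_j\, \delta_{ij}
  = \sum_{i} y_i\, \partial_i f(x),
```

and the statement with the arguments swapped follows since the real inner product is symmetric.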
Let $f$ be a real-valued function. For any points $x, y$, the inner product between the gradient of $f$ at $x$ and the coordinate representation of $y$ with respect to the standard orthonormal basis is equal to the Fréchet derivative of $f$ at $x$ evaluated in the direction $y$: \[ \langle \nabla f(x), \text{basis.repr } y \rangle = \mathrm{D}f(x)(y) \] where $\nabla f(x)$ is the gradient vector in $\mathbb{R}^d$, $\text{basis.repr } y$ is the coordinate vector of $y$, and $\mathrm{D}f(x)(y)$ (denoted in Lean as `fderiv ℝ f x y`) is the Fréchet derivative.
Let $f$ be a real-valued function. For any points $x, y$, the inner product between the coordinate representation of $x$ (with respect to the standard orthonormal basis) and the gradient of $f$ at $y$ is equal to the Fréchet derivative of $f$ at $y$ evaluated in the direction $x$: \[ \langle \text{basis.repr } x, \nabla f(y) \rangle = \mathrm{D}f(y)(x) \] where $\text{basis.repr } x$ is the vector in $\mathbb{R}^d$ representing $x$, $\nabla f(y)$ is the gradient vector of $f$ at $y$, and $\mathrm{D}f(y)(x)$ (denoted as `fderiv ℝ f y x`) is the Fréchet derivative.
For a real-valued function $f$, let $\nabla f$ be the gradient of $f$ that maps each point in space to a coordinate vector in the Euclidean space $\mathbb{R}^d$. Let $\text{gradient } f$ be the gradient of $f$ as defined in Mathlib (the vector associated with the Fréchet derivative via the Riesz representation theorem). Let $\text{basis.repr}$ be the isometric isomorphism that maps a vector to its coordinates with respect to the standard orthonormal basis. Then the gradient $\nabla f$ is equal to the composition of the coordinate representation map and the Mathlib gradient: \[ \nabla f = \text{basis.repr} \circ \text{gradient } f. \]
For a real-valued function $f$, let $\text{gradient } f$ be the gradient of $f$ as defined in Mathlib (the vector associated with the Fréchet derivative via the Riesz representation theorem). Let $\nabla f$ be the gradient operator that maps each point to its coordinate vector in the Euclidean space $\mathbb{R}^d$. Let $\text{basis.repr}^{-1}$ be the inverse of the isometric isomorphism that maps a vector to its coordinates with respect to the standard orthonormal basis. Then the Mathlib gradient is equal to the composition of the inverse coordinate representation map and the coordinate-based gradient $\nabla f$: \[ \text{gradient } f = \text{basis.repr}^{-1} \circ \nabla f. \]
Expansion of the gradient in the standard orthonormal basis
#gradient_eq_sum
For any dimension $d$ and any real-valued function $f$, the gradient of $f$ at a point $x$ is equal to the sum over all indices $i$ of the partial derivative of $f$ in the direction of the $i$-th standard basis vector, multiplied by that basis vector: \[ \text{gradient } f\, x = \sum_{i=0}^{d-1} \partial_i f(x)\, b_i \] where $\partial_i f(x)$ is the spatial derivative `deriv i f x` and $b_i$ is the $i$-th orthonormal basis vector `basis i`.
Expansion of the gradient in the standard basis of `EuclideanSpace ℝ (Fin d)`
#euclid_gradient_eq_sum
Let $d$ be a natural number and `EuclideanSpace ℝ (Fin d)` be the $d$-dimensional Euclidean space. For any function $f$ on this space and any point $x$, the gradient of $f$ at $x$ is equal to the sum of its directional derivatives along the standard basis vectors multiplied by those basis vectors: \[ \text{gradient } f\, x = \sum_{i=0}^{d-1} \mathrm{D}f(x)(\mathbf{e}_i)\, \mathbf{e}_i \] where $\mathbf{e}_i$ is the $i$-th standard basis vector (defined as `EuclideanSpace.single i 1`) and $\mathrm{D}f(x)$ denotes the Fréchet derivative of $f$ at $x$.
The radial component of $\nabla f$ equals the radial derivative
#grad_inner_space_unit_vector
Let $d$ be a natural number, $f$ be a differentiable function, and $x$ be a point. The inner product of the gradient $\nabla f(x)$ with the unit vector in the direction of $x$ (represented by $\frac{1}{\|x\|}\, \text{basis.repr } x$) is equal to the derivative of the function $r \mapsto f\big(r\, x / \|x\|\big)$ evaluated at $r = \|x\|$. Mathematically, this is expressed as: \[ \left\langle \nabla f(x), \frac{1}{\|x\|} \text{basis.repr } x \right\rangle = \left. \frac{d}{dr} f\left( r \frac{x}{\|x\|} \right) \right|_{r = \|x\|} \] where $\nabla f(x)$ is the gradient of $f$ at $x$, $\|x\|$ is the Euclidean norm, and $\text{basis.repr } x$ is the coordinate representation of $x$ in $\mathbb{R}^d$.
Let $d$ be a natural number, $f$ be a differentiable function, and $x$ be a point. The inner product of the gradient $\nabla f(x)$ with the coordinate representation of $x$ (denoted $\text{basis.repr } x$) is equal to the norm $\|x\|$ multiplied by the radial derivative of $f$ evaluated at $r = \|x\|$. Mathematically, this is expressed as: \[ \left\langle \nabla f(x), \text{basis.repr } x \right\rangle = \|x\| \cdot \left. \frac{d}{dr} f\left( r \frac{x}{\|x\|} \right) \right|_{r = \|x\|} \] where $\nabla f(x)$ is the gradient of $f$ at $x$, $\|x\|$ is the Euclidean norm, and $\text{basis.repr } x$ is the representation of $x$ as a vector in the Euclidean space $\mathbb{R}^d$.
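As a consistency check (illustrative, not from the library), take $f(x) = \|x\|^2$: the radial restriction is $r \mapsto r^2$, whose derivative at $r = \|x\|$ is $2\|x\|$, so the lemma yields

```latex
\langle \nabla f(x), \text{basis.repr } x \rangle = \|x\| \cdot 2\|x\| = 2\|x\|^2,
```

in agreement with the squared-norm rule below, since $\langle 2\,\text{basis.repr } x,\ \text{basis.repr } x \rangle = 2\|x\|^2$ when `basis.repr` is an isometry.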
For any point $x$ in the $d$-dimensional real inner product space, the gradient of the squared norm function $x \mapsto \|x\|^2$ evaluated at $x$ is equal to twice the coordinate representation of $x$ with respect to the standard orthonormal basis: \[ \nabla (\|x\|^2) = 2 \cdot \text{basis.repr } x \] where $\|\cdot\|$ denotes the Euclidean norm and $\text{basis.repr } x$ is the vector in $\mathbb{R}^d$ whose components are the coordinates of $x$.
For any dimension $d$, the gradient of the function defined by $x \mapsto \langle x, x \rangle$ is the vector field that maps each point $x$ to twice its coordinate representation in the standard orthonormal basis. That is, $\nabla \langle x, x \rangle = 2 \cdot \text{basis.repr } x$, where $\langle \cdot, \cdot \rangle$ denotes the real inner product and $\text{basis.repr}$ is the isometric isomorphism from the space to the Euclidean space $\mathbb{R}^d$.
For any dimension $d$ and a fixed vector $x$ in the $d$-dimensional real inner product space, the gradient of the real-valued function $y \mapsto \langle y, x \rangle$ is the constant vector field whose value at any point is the coordinate representation of $x$ with respect to the standard orthonormal basis. That is: \[ \nabla (\langle y, x \rangle) = \text{basis.repr } x \] where $\langle \cdot, \cdot \rangle$ denotes the real inner product and $\text{basis.repr}$ is the isometric isomorphism mapping a vector to its coordinates in the Euclidean space $\mathbb{R}^d$.
For any dimension $d$ and a fixed vector $x$, the gradient of the real-valued function $y \mapsto \langle x, y \rangle$ is the constant vector field whose value at any point is the coordinate representation of $x$ with respect to the standard orthonormal basis. That is: \[ \nabla (\langle x, y \rangle) = \text{basis.repr } x \] where $\langle \cdot, \cdot \rangle$ denotes the real inner product on the space and $\text{basis.repr}$ is the isometric isomorphism mapping a vector to its coordinates in the Euclidean space $\mathbb{R}^d$.
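The squared-norm rule is consistent with the two fixed-vector rules: since $\|y\|^2 = \langle y, y \rangle$ is bilinear in its two slots, the Leibniz rule lets each slot contribute one fixed-vector gradient (an informal derivation for orientation, not the library's proof):

```latex
\nabla \big(\langle y, y \rangle\big)\big|_{y = x}
  = \text{basis.repr } x + \text{basis.repr } x
  = 2 \cdot \text{basis.repr } x .
```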
$\langle \nabla f, \eta \rangle$ is Integrable for Distribution-Bounded $f$ and Schwartz $\eta$
#integrable_isDistBounded_inner_grad_schwartzMap
Let $d = n + 1$ for some natural number $n$. If $f$ is a distribution-bounded function and $\eta$ is a Schwartz function, then the function $x \mapsto \langle \nabla f(x), \eta(x) \rangle$ is integrable over $\mathbb{R}^d$ with respect to the volume measure, where $\nabla f$ denotes the gradient of $f$ and $\langle \cdot, \cdot \rangle$ denotes the standard inner product.
$\langle \nabla f, \eta \rangle$ is Integrable in Spherical Coordinates for Distribution-Bounded $f$ and Schwartz $\eta$
#integrable_isDistBounded_inner_grad_schwartzMap_spherical
Let $d = n + 1$ for some natural number $n$. Suppose $f$ is a distribution-bounded function and $\eta$ is a Schwartz function. Let $h$ be the homeomorphism mapping a vector to its spherical coordinates (the direction on the unit sphere and the radial distance). Then the function mapping $x \mapsto \langle \nabla f(x), \eta(x) \rangle$, when transformed into spherical coordinates via $h$, is integrable with respect to the product measure $\mu_S \otimes \mu_r$, where $\mu_S$ is the measure on the unit sphere and $\mu_r$ is the radial measure on $(0, \infty)$ with density $r^{d-1}$ relative to the Lebesgue measure.
If $f$ is $C^{n+1}$, then $\nabla f$ is $C^n$
#contDiff_grad
Let $f$ be a real-valued function. If $f$ is continuously differentiable of order $n + 1$ ($C^{n+1}$), then its gradient field $\nabla f$, which maps each point to the vector of its partial derivatives, is continuously differentiable of order $n$ ($C^n$).
Gradient operator for distributions
#distGrad
Let $E$ be a $d$-dimensional real inner product space. The distributional gradient operator `distGrad` is a linear map that transforms a scalar-valued distribution $u$ into a vector-valued distribution $\nabla u$ valued in $\mathbb{R}^d$. For a distribution $u$, its gradient is defined by taking its distributional Fréchet derivative (which is a distribution valued in the dual space of $E$), applying the Riesz representation theorem to identify this dual with $E$ via the inner product, and then mapping the resulting vector to its coordinate representation in $\mathbb{R}^d$ using the standard orthonormal basis. Specifically, for a test function $\eta$ and a vector $v$, the gradient satisfies the relation \[ \langle (\nabla u)(\eta), v \rangle = (\mathrm{D}u)(\eta)\big(\text{basis.repr}^{-1} v\big) \] where $\text{basis.repr}^{-1}$ is the inverse basis representation.
$\langle (\nabla u)(\eta), v \rangle$ for Distributions
#distGrad_inner_eq
Let $E$ be a $d$-dimensional real inner product space equipped with a standard orthonormal basis representation $\text{basis.repr}$. For a scalar-valued distribution $u$, a Schwartz test function $\eta$, and a vector $v$, the inner product of the distributional gradient evaluation $(\nabla u)(\eta)$ with $v$ is equal to the evaluation of the distributional Fréchet derivative on the vector $\text{basis.repr}^{-1} v$. That is, \[ \langle (\nabla u)(\eta), v \rangle = (\mathrm{D}u)(\eta)\big(\text{basis.repr}^{-1} v\big) \] where $\text{basis.repr}^{-1}$ is the inverse of the basis representation mapping.
$\nabla u = w$ if the inner products agree, for distributions
#distGrad_eq_of_inner
Let $E$ be a $d$-dimensional real inner product space and let $\text{basis.repr}$ be its standard orthonormal basis representation. Let $u$ be a scalar-valued distribution and $w$ be a vector-valued distribution. If for every Schwartz test function $\eta$ and every vector $v$, the distributional Fréchet derivative satisfies the relation: \[ (\mathrm{D}u)(\eta)\big(\text{basis.repr}^{-1} v\big) = \langle w(\eta), v \rangle \] where $\langle \cdot, \cdot \rangle$ is the standard Euclidean inner product, then the distributional gradient of $u$ is equal to $w$ ($\nabla u = w$).
Expansion of the distributional gradient in the standard basis
#distGrad_eq_sum_basis
Let $E$ be a $d$-dimensional real inner product space equipped with a standard orthonormal basis $b$. For any scalar-valued distribution $u$ and any Schwartz test function $\eta$, the evaluation of the distributional gradient is given by the sum: \[ (\nabla u)(\eta) = \sum_{i=0}^{d-1} (\mathrm{D}u)(\eta)(b_i)\, \mathbf{e}_i \] where $(\mathrm{D}u)(\eta)(b_i)$ is defined through the directional derivative of the test function $\eta$ in the direction of the $i$-th basis vector $b_i$, and $\mathbf{e}_i$ denotes the $i$-th standard basis vector of the Euclidean space $\mathbb{R}^d$.
The distributional gradient equals the vector of partial derivatives
#distGrad_toFun_eq_distDeriv
Let $E$ be a $d$-dimensional real inner product space. For any scalar-valued distribution $u$ and any Schwartz test function $\eta$, the evaluation of the distributional gradient $(\nabla u)(\eta)$ is the vector in $\mathbb{R}^d$ whose components are the distributional partial derivatives of $u$ evaluated at $\eta$. Mathematically, this is expressed as: \[ \big((\nabla u)(\eta)\big)_i = (\partial_i u)(\eta) \] for each $i$, where $\nabla$ is the distributional gradient operator `distGrad` and $\partial_i$ is the $i$-th distributional partial derivative operator `distDeriv i`.
Evaluation of distributional gradient
#distGrad_apply
Let $E$ be a $d$-dimensional real inner product space. For any scalar-valued distribution $u$ and any Schwartz test function $\eta$, the evaluation of the distributional gradient $(\nabla u)(\eta)$ is the vector in $\mathbb{R}^d$ whose $i$-th component is the $i$-th distributional partial derivative of $u$ evaluated at $\eta$. Mathematically, this is expressed as: \[ \big((\nabla u)(\eta)\big)_i = (\partial_i u)(\eta) \] for each $i$, where $\nabla$ is the distributional gradient operator and $\partial_i$ is the $i$-th distributional partial derivative operator.
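For orientation, here is an informal check with the usual sign convention for distributional derivatives (the library's exact bookkeeping may differ): if $u_g$ is the distribution induced by a smooth function $g$ of suitable growth, integration by parts against a Schwartz function $\eta$ gives

```latex
(\partial_i u_g)(\eta)
  = -\int_{\mathbb{R}^d} g(x)\, \partial_i \eta(x)\, dx
  = \int_{\mathbb{R}^d} \partial_i g(x)\, \eta(x)\, dx,
```

so the components of $(\nabla u_g)(\eta)$ reproduce the classical partial derivatives of $g$ tested against $\eta$.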
Gradient continuous linear map
#gradSchwartz
The definition `gradSchwartz` represents the gradient operator as a continuous linear map from the space of real-valued Schwartz functions $\mathcal{S}(\mathbb{R}^d, \mathbb{R})$ to the space of vector-valued Schwartz functions $\mathcal{S}(\mathbb{R}^d, \mathbb{R}^d)$. For a given Schwartz function $\eta$, the map returns the vector field $\nabla \eta$, where the component in the direction of the $i$-th basis vector is the partial derivative of $\eta$ with respect to that coordinate.
For any real-valued Schwartz function $\eta$ and any point $x$, the value of the Schwartz gradient operator applied to $\eta$ at $x$, denoted `gradSchwartz η x`, is equal to the standard gradient $\nabla \eta(x)$.
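A schematic Lean statement of this compatibility (the `example` form, the binders, and the coercion `↑η` from Schwartz functions to plain functions are illustrative assumptions; `gradSchwartz` and `grad` are the identifiers from this file, and the proof is elided since this is a sketch):

```lean
-- Schematic sketch only; mirrors the prose statement above.
example {d : ℕ} (η : 𝓢(Space d, ℝ)) (x : Space d) :
    gradSchwartz η x = grad (↑η) x := by
  sorry -- the library states this as the lemma above
```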
