Physlib

Physlib.SpaceAndTime.Space.Derivatives.Grad

35 declarations

definition

Gradient of a function $f$

#grad

For a real-valued function $f: \text{Space } d \to \mathbb{R}$, the gradient $\nabla f$ is a vector field mapping each point $x \in \text{Space } d$ to an element of the Euclidean space $\mathbb{R}^d$. The $i$-th component of the gradient at point $x$ is given by the partial derivative of $f$ with respect to the $i$-th spatial coordinate, denoted $\partial_i f(x)$ or $\frac{\partial f}{\partial x_i}(x)$.
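
Based on the Lean identifiers cited elsewhere on this page (`deriv`, `EuclideanSpace.single`), the definition plausibly looks like the following sketch; the exact PhysLean signature and namespaces are assumptions, not quotations:

```lean
-- Hypothetical sketch of `grad`, assuming PhysLean's `Space` type and
-- spatial derivative `deriv`: the i-th component of the gradient at x
-- is the spatial derivative `deriv i f x`.
noncomputable def grad {d : ℕ} (f : Space d → ℝ) :
    Space d → EuclideanSpace ℝ (Fin d) :=
  fun x i => deriv i f x
```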

definition

Gradient notation $\nabla$

#term∇

The notation $\nabla$ is introduced to represent the gradient operator `grad`. For a function $f : \text{Space } d \to \mathbb{R}$, $\nabla f$ denotes the gradient of $f$, which is a vector field mapping points in $\text{Space } d$ to the Euclidean space $\mathbb{R}^d$.

theorem

$\nabla 0 = 0$

#grad_zero

The gradient of the zero function $f: \text{Space } d \to \mathbb{R}$ (defined by $f(x) = 0$ for all $x \in \text{Space } d$) is the zero vector field; that is, $\nabla 0 = 0$.

theorem

$\nabla (f_1 + f_2) = \nabla f_1 + \nabla f_2$

#grad_add

Let $f_1, f_2: \text{Space } d \to \mathbb{R}$ be two real-valued functions that are differentiable on $\text{Space } d$. The gradient of the sum of these functions is equal to the sum of their individual gradients: $\nabla (f_1 + f_2) = \nabla f_1 + \nabla f_2$, where $\nabla$ denotes the gradient operator.

theorem

The gradient of a constant function is zero

#grad_const

For any real constant $c$, the gradient of the constant function $f: \text{Space } d \to \mathbb{R}$ defined by $f(x) = c$ is the zero vector field; that is, $\nabla f = 0$.

theorem

$\nabla (k \cdot f) = k \cdot \nabla f$

#grad_smul

For a differentiable function $f: \text{Space } d \to \mathbb{R}$ and a real scalar $k \in \mathbb{R}$, the gradient of the scalar multiple $k \cdot f$ is equal to $k$ times the gradient of $f$: \[ \nabla (k \cdot f) = k \cdot \nabla f \] where $\nabla$ denotes the gradient operator.

theorem

$\nabla (-f) = -\nabla f$

#grad_neg

For any real-valued function $f: \text{Space } d \to \mathbb{R}$, the gradient of the negation of $f$ is equal to the negation of the gradient of $f$: \[ \nabla (-f) = -\nabla f \] where $\nabla$ denotes the gradient operator.
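
The five algebraic lemmas above (`grad_zero`, `grad_add`, `grad_const`, `grad_smul`, `grad_neg`) together say that `grad` is linear and annihilates constants. As hedged statement sketches in Lean (binder order, hypotheses, and notation setup are assumptions, and the proofs are omitted):

```lean
-- Hypothetical statement sketches; exact hypotheses and names are assumed.
example {d : ℕ} (f₁ f₂ : Space d → ℝ)
    (h₁ : Differentiable ℝ f₁) (h₂ : Differentiable ℝ f₂) :
    ∇ (f₁ + f₂) = ∇ f₁ + ∇ f₂ := sorry

example {d : ℕ} (k : ℝ) (f : Space d → ℝ) (hf : Differentiable ℝ f) :
    ∇ (k • f) = k • ∇ f := sorry
```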

theorem

Expansion of the gradient $\nabla f(x) = \sum_i \partial_i f(x) \mathbf{e}_i$

#grad_eq_sum

For any real-valued function $f: \text{Space } d \to \mathbb{R}$ and any point $x \in \text{Space } d$, the gradient $\nabla f(x)$ is equal to the sum over all coordinates $i \in \{0, \dots, d-1\}$ of the partial derivative of $f$ at $x$ in the $i$-th direction, $\partial_i f(x)$, multiplied by the $i$-th standard basis vector $\mathbf{e}_i$: \[ \nabla f(x) = \sum_{i=0}^{d-1} \partial_i f(x) \mathbf{e}_i \] where $\partial_i f(x)$ corresponds to the spatial derivative `deriv i f x` and $\mathbf{e}_i$ corresponds to the unit vector `EuclideanSpace.single i 1`.

theorem

$(\nabla f(x))_i = \partial_i f(x)$

#grad_apply

For any real-valued function $f: \text{Space } d \to \mathbb{R}$ and any point $x \in \text{Space } d$, the $i$-th component of the gradient vector $\nabla f(x)$ (where $i \in \{0, \dots, d-1\}$) is equal to the partial derivative of $f$ at $x$ with respect to the $i$-th coordinate, denoted $\partial_i f(x)$ or $\frac{\partial f}{\partial x_i}(x)$.

theorem

$\langle \nabla f(x), e_i \rangle = \partial_i f(x)$

#grad_inner_single

For a real-valued function $f: \text{Space } d \to \mathbb{R}$ and a point $x \in \text{Space } d$, the inner product of the gradient $\nabla f(x)$ with the $i$-th standard basis vector $e_i$ (where $e_i$ is the vector with $1$ at the $i$-th component and $0$ elsewhere) is equal to the partial derivative of $f$ at $x$ with respect to the $i$-th coordinate, denoted $\partial_i f(x)$. Mathematically, this is expressed as: $\langle \nabla f(x), e_i \rangle = \partial_i f(x)$

theorem

$\langle \nabla f(x), y \rangle = \sum_i y_i \partial_i f(x)$

#grad_inner_eq

For a real-valued function $f: \text{Space } d \to \mathbb{R}$, a point $x \in \text{Space } d$, and a vector $y \in \mathbb{R}^d$, the inner product of the gradient $\nabla f(x)$ and the vector $y$ is equal to the sum over all coordinate indices $i$ of the product of the $i$-th component of $y$ and the partial derivative of $f$ at $x$ with respect to the $i$-th coordinate. Mathematically, this is expressed as: $\langle \nabla f(x), y \rangle = \sum_i y_i \partial_i f(x)$, where $\partial_i f(x)$ (or $\frac{\partial f}{\partial x_i}(x)$) is the spatial derivative of $f$ at $x$ in the direction of the $i$-th standard basis vector.
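
As a concrete check of this identity (an illustrative example, not taken from the source): take $d = 2$ and $f(x) = x_0 x_1$. Then $\partial_0 f(x) = x_1$ and $\partial_1 f(x) = x_0$, so \[ \langle \nabla f(x), y \rangle = y_0 x_1 + y_1 x_0, \] which is exactly the directional derivative of $f$ at $x$ along $y$.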

theorem

$\langle x, \nabla f(y) \rangle = \sum_i x_i \partial_i f(y)$

#inner_grad_eq

For a real-valued function $f: \text{Space } d \to \mathbb{R}$, a vector $x \in \mathbb{R}^d$, and a point $y \in \text{Space } d$, the inner product of $x$ and the gradient $\nabla f(y)$ is equal to the sum over all coordinate indices $i$ of the product of the $i$-th component of $x$ and the partial derivative of $f$ at $y$ with respect to the $i$-th coordinate. Mathematically, this is expressed as: $\langle x, \nabla f(y) \rangle = \sum_i x_i \partial_i f(y)$, where $x_i$ is the $i$-th component of $x$ and $\partial_i f(y)$ (also denoted $\frac{\partial f}{\partial x_i}(y)$) is the spatial derivative of $f$ at $y$ in the direction of the $i$-th standard basis vector.

theorem

$\langle \nabla f(x), \text{repr}(y) \rangle = Df(x)(y)$

#grad_inner_repr_eq

Let $f: \text{Space } d \to \mathbb{R}$ be a real-valued function. For any points $x, y \in \text{Space } d$, the inner product between the gradient of $f$ at $x$ and the coordinate representation of $y$ with respect to the standard orthonormal basis is equal to the Fréchet derivative of $f$ at $x$ evaluated in the direction $y$: \[ \langle \nabla f(x), \text{repr}(y) \rangle = Df(x)(y) \] where $\nabla f(x)$ is the gradient vector in $\mathbb{R}^d$, $\text{repr}(y)$ is the coordinate vector of $y$, and $Df(x)(y)$ (denoted in Lean as `fderiv ℝ f x y`) is the Fréchet derivative.

theorem

$\langle \text{repr}(x), \nabla f(y) \rangle = Df(y)(x)$

#repr_grad_inner_eq

Let $f: \text{Space } d \to \mathbb{R}$ be a real-valued function. For any points $x, y \in \text{Space } d$, the inner product between the coordinate representation of $x$ (with respect to the standard orthonormal basis) and the gradient of $f$ at $y$ is equal to the Fréchet derivative of $f$ at $y$ evaluated in the direction $x$: \[ \langle \text{repr}(x), \nabla f(y) \rangle = Df(y)(x) \] where $\text{repr}(x)$ is the vector in $\mathbb{R}^d$ representing $x$, $\nabla f(y)$ is the gradient vector of $f$ at $y$, and $Df(y)(x)$ (denoted `fderiv ℝ f y x`) is the Fréchet derivative.

theorem

$\nabla f = \text{repr} \circ \text{gradient } f$

#grad_eq_gradiant

For a real-valued function $f : \text{Space } d \to \mathbb{R}$, let $\nabla f$ be the gradient of $f$ that maps each point in $\text{Space } d$ to a coordinate vector in the Euclidean space $\mathbb{R}^d$. Let $\text{gradient } f$ be the gradient of $f$ as defined in Mathlib (the vector in $\text{Space } d$ associated with the Fréchet derivative via the Riesz representation theorem). Let $\text{repr} : \text{Space } d \to \mathbb{R}^d$ be the isometric isomorphism that maps a vector to its coordinates with respect to the standard orthonormal basis of $\text{Space } d$. Then the gradient $\nabla f$ is equal to the composition of the coordinate representation map and the Mathlib gradient: $\nabla f = \text{repr} \circ \text{gradient } f$

theorem

$\text{gradient } f = \text{repr}^{-1} \circ \nabla f$

#gradient_eq_grad

For a real-valued function $f : \text{Space } d \to \mathbb{R}$, let $\text{gradient } f$ be the gradient of $f$ as defined in Mathlib (the vector in $\text{Space } d$ associated with the Fréchet derivative via the Riesz representation theorem). Let $\nabla f$ be the gradient operator that maps each point in $\text{Space } d$ to its coordinate vector in the Euclidean space $\mathbb{R}^d$. Let $\text{repr}^{-1} : \mathbb{R}^d \to \text{Space } d$ be the inverse of the isometric isomorphism that maps a vector to its coordinates with respect to the standard orthonormal basis of $\text{Space } d$. Then the Mathlib gradient is equal to the composition of the inverse coordinate representation map and the coordinate-based gradient $\nabla f$: $\text{gradient } f = \text{repr}^{-1} \circ \nabla f$
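
The two equalities above are inverse views of the same correspondence. In Lean terms the round trip might be sketched as follows; the name `Space.basis` for the `repr` isometry is an assumption, and the proofs are omitted:

```lean
-- Hypothetical pointwise sketches relating `grad` to Mathlib's `gradient`.
example {d : ℕ} (f : Space d → ℝ) (x : Space d) :
    ∇ f x = Space.basis.repr (gradient f x) := sorry

example {d : ℕ} (f : Space d → ℝ) (x : Space d) :
    gradient f x = Space.basis.repr.symm (∇ f x) := sorry
```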

theorem

Expansion of the gradient $\text{grad } f(x) = \sum_i \partial_i f(x) \mathbf{e}_i$ in the standard basis of $\text{Space } d$

#gradient_eq_sum

For any dimension $d$ and any real-valued function $f: \text{Space } d \to \mathbb{R}$, the gradient of $f$ at a point $x \in \text{Space } d$ is equal to the sum over all indices $i \in \{0, \dots, d-1\}$ of the partial derivative of $f$ in the direction of the $i$-th standard basis vector, multiplied by that basis vector: \[ \text{grad } f(x) = \sum_{i=0}^{d-1} \partial_i f(x) \mathbf{e}_i \] where $\partial_i f(x)$ is the spatial derivative `deriv i f x` and $\mathbf{e}_i$ is the $i$-th orthonormal basis vector `basis i`.

theorem

Expansion of the Gradient $\nabla f$ in the Standard Basis of $\mathbb{R}^d$

#euclid_gradient_eq_sum

Let $d$ be a natural number and $\mathbb{R}^d$ be the $d$-dimensional Euclidean space. For any function $f: \mathbb{R}^d \to \mathbb{R}$ and any point $x \in \mathbb{R}^d$, the gradient of $f$ at $x$ is equal to the sum of its directional derivatives along the standard basis vectors $\mathbf{e}_i$ multiplied by those basis vectors: \[ \nabla f(x) = \sum_{i} (D f(x) \cdot \mathbf{e}_i)\, \mathbf{e}_i \] where $\mathbf{e}_i$ is the $i$-th standard basis vector (defined as `EuclideanSpace.single i 1`) and $D f(x)$ denotes the Fréchet derivative of $f$ at $x$.

theorem

The radial component of $\nabla f(x)$ equals the radial derivative $\frac{\partial f}{\partial r}$

#grad_inner_space_unit_vector

Let $d$ be a natural number, $f: \text{Space } d \to \mathbb{R}$ a differentiable function, and $x \in \text{Space } d$ a point. The inner product of the gradient $\nabla f(x)$ with the unit vector in the direction of $x$ (represented by $\|x\|^{-1} \cdot \text{basis.repr } x$) is equal to the derivative of the function $r \mapsto f(r \cdot \frac{x}{\|x\|})$ evaluated at $r = \|x\|$. Mathematically, this is expressed as: \[ \left\langle \nabla f(x), \frac{1}{\|x\|} \text{basis.repr } x \right\rangle = \left. \frac{d}{dr} f\left( r \frac{x}{\|x\|} \right) \right|_{r = \|x\|} \] where $\nabla f(x)$ is the gradient of $f$ at $x$, $\|x\|$ is the Euclidean norm, and $\text{basis.repr } x$ is the coordinate representation of $x$ in $\mathbb{R}^d$.

theorem

$\langle \nabla f(x), \text{basis.repr } x \rangle = \|x\| \frac{\partial f}{\partial r}$

#grad_inner_space

Let $d$ be a natural number, $f: \text{Space } d \to \mathbb{R}$ a differentiable function, and $x \in \text{Space } d$ a point. The inner product of the gradient $\nabla f(x)$ with the coordinate representation of $x$ (denoted $\text{basis.repr } x$) is equal to the norm $\|x\|$ multiplied by the radial derivative of $f$ evaluated at $x$. Mathematically, this is expressed as: \[ \left\langle \nabla f(x), \text{basis.repr } x \right\rangle = \|x\| \cdot \left. \frac{d}{dr} f\left( r \frac{x}{\|x\|} \right) \right|_{r = \|x\|} \] where $\nabla f(x)$ is the gradient of $f$ at $x$, $\|x\|$ is the Euclidean norm, and $\text{basis.repr } x$ is the representation of $x$ as a vector in the Euclidean space $\mathbb{R}^d$.

theorem

$\nabla \|x\|^2 = 2 \cdot \text{basis.repr } x$

#grad_norm_sq

For any point $x$ in the $d$-dimensional real inner product space $\text{Space } d$, the gradient of the squared norm function $x \mapsto \|x\|^2$ evaluated at $x$ is equal to twice the coordinate representation of $x$ with respect to the standard orthonormal basis: \[ \nabla (\|x\|^2) = 2 \cdot \text{basis.repr } x \] where $\|x\|$ denotes the Euclidean norm and $\text{basis.repr } x$ is the vector in $\mathbb{R}^d$ whose components are the coordinates of $x$.

theorem

$\nabla \langle y, y \rangle = 2 \cdot \text{basis.repr } y$

#grad_inner

For any dimension $d$, the gradient of the function $f: \text{Space } d \to \mathbb{R}$ defined by $f(y) = \langle y, y \rangle$ is the vector field that maps each point $z \in \text{Space } d$ to twice its coordinate representation in the standard orthonormal basis. That is, $\nabla (\langle y, y \rangle)(z) = 2 \cdot \text{basis.repr } z$, where $\langle \cdot, \cdot \rangle$ denotes the real inner product and $\text{basis.repr}$ is the isometric isomorphism from $\text{Space } d$ to the Euclidean space $\mathbb{R}^d$.

theorem

$\nabla \langle y, x \rangle = \text{basis.repr } x$

#grad_inner_left

For any dimension $d$ and a fixed vector $x$ in the $d$-dimensional real inner product space $\text{Space } d$, the gradient of the real-valued function $y \mapsto \langle y, x \rangle$ is the constant vector field whose value at any point is the coordinate representation of $x$ with respect to the standard orthonormal basis. That is: \[ \nabla (\langle y, x \rangle) = \text{basis.repr } x \] where $\langle \cdot, \cdot \rangle$ denotes the real inner product and $\text{basis.repr}$ is the isometric isomorphism mapping a vector in $\text{Space } d$ to its coordinates in the Euclidean space $\mathbb{R}^d$.

theorem

$\nabla \langle x, y \rangle = \text{basis.repr } x$

#grad_inner_right

For any dimension $d \in \mathbb{N}$ and a fixed vector $x \in \text{Space } d$, the gradient of the real-valued function $y \mapsto \langle x, y \rangle$ is the constant vector field whose value at any point is the coordinate representation of $x$ with respect to the standard orthonormal basis. That is: \[ \nabla (\langle x, y \rangle) = \text{basis.repr } x \] where $\langle \cdot, \cdot \rangle$ denotes the real inner product on $\text{Space } d$ and $\text{basis.repr}$ is the isometric isomorphism mapping a vector in $\text{Space } d$ to its coordinates in the Euclidean space $\mathbb{R}^d$.

theorem

$\langle f, \nabla \eta \rangle$ is Integrable for Distribution-Bounded $f$ and Schwartz $\eta$

#integrable_isDistBounded_inner_grad_schwartzMap

Let $d = n + 1$ for some natural number $n$. If $f: \text{Space } d \to \mathbb{R}^d$ is a distribution-bounded function and $\eta: \text{Space } d \to \mathbb{R}$ is a Schwartz function, then the function $x \mapsto \langle f(x), \nabla \eta(x) \rangle$ is integrable over $\text{Space } d$ with respect to the volume measure, where $\nabla \eta$ denotes the gradient of $\eta$ and $\langle \cdot, \cdot \rangle$ denotes the standard inner product.

theorem

$\langle f, \nabla \eta \rangle$ is Integrable in Spherical Coordinates for Distribution-Bounded $f$ and Schwartz $\eta$

#integrable_isDistBounded_inner_grad_schwartzMap_spherical

Let $d = n + 1$ for some natural number $n$. Suppose $f: \text{Space } d \to \mathbb{R}^d$ is a distribution-bounded function and $\eta: \text{Space } d \to \mathbb{R}$ is a Schwartz function. Let $\Phi: \text{Space } d \setminus \{0\} \to S^{d-1} \times (0, \infty)$ be the homeomorphism mapping a vector to its spherical coordinates (the direction on the unit sphere $S^{d-1}$ and the radial distance). Then the function $x \mapsto \langle f(x), \nabla \eta(x) \rangle$, when transformed into spherical coordinates via $\Phi^{-1}$, is integrable with respect to the product measure $\sigma \otimes \mu$, where $\sigma$ is the measure on the unit sphere $S^{d-1}$ and $\mu$ is the radial measure on $(0, \infty)$ with density $r^{d-1}$ relative to the Lebesgue measure.

theorem

If $f$ is $C^{n+1}$, then $\nabla f$ is $C^n$

#contDiff_grad

Let $f: \text{Space} \to \mathbb{R}$ be a function. If $f$ is continuously differentiable of order $n+1$ ($C^{n+1}$), then its gradient field $\nabla f$, which maps each point $x$ to the vector of its partial derivatives, is continuously differentiable of order $n$ ($C^n$).

definition

Gradient operator for distributions $\nabla: \mathcal{D}'(\text{Space } d, \mathbb{R}) \to \mathcal{D}'(\text{Space } d, \mathbb{R}^d)$

#distGrad

Let $V = \text{Space } d$ be a $d$-dimensional real inner product space. The distributional gradient operator is a linear map that transforms a scalar-valued distribution $f \in \mathcal{D}'(V, \mathbb{R})$ into a vector-valued distribution $\nabla f \in \mathcal{D}'(V, \mathbb{R}^d)$. For a distribution $f$, its gradient is defined by taking its distributional Fréchet derivative $Df$ (which is a distribution valued in the dual space $V^*$), applying the Riesz representation theorem to identify $V^*$ with $V$ via the inner product, and then mapping the resulting vector to its coordinate representation in $\mathbb{R}^d$ using the standard orthonormal basis. Specifically, for a test function $\eta$ and a vector $y \in \mathbb{R}^d$, the gradient satisfies the relation $\langle (\nabla f)\,\eta, y \rangle = (Df)\,\eta\,(\mathbf{b}^{-1}(y))$, where $\mathbf{b}^{-1}$ is the inverse basis representation.

theorem

$\langle (\nabla f)(\eta), y \rangle = (Df)(\eta)(\mathbf{b}^{-1}(y))$ for Distributions

#distGrad_inner_eq

Let $V = \text{Space } d$ be a $d$-dimensional real inner product space equipped with a standard orthonormal basis representation $\mathbf{b} : V \to \mathbb{R}^d$. For a scalar-valued distribution $f \in \mathcal{D}'(V, \mathbb{R})$, a Schwartz test function $\eta \in \mathcal{S}(V, \mathbb{R})$, and a vector $y \in \mathbb{R}^d$, the inner product of the distributional gradient evaluation $(\nabla f)(\eta)$ with $y$ is equal to the evaluation of the distributional Fréchet derivative $(Df)(\eta)$ on the vector $\mathbf{b}^{-1}(y) \in V$. That is, \[ \langle (\nabla f)(\eta), y \rangle_{\mathbb{R}^d} = ((Df)(\eta))(\mathbf{b}^{-1}(y)) \] where $\mathbf{b}^{-1}$ is the inverse of the basis representation mapping.

theorem

$\nabla f = g$ if $(Df)(\eta)(y) = \langle g(\eta), \mathbf{b}(y) \rangle$ for distributions

#distGrad_eq_of_inner

Let $V = \text{Space } d$ be a $d$-dimensional real inner product space and let $\mathbf{b} : V \to \mathbb{R}^d$ be its standard orthonormal basis representation. Let $f \in \mathcal{D}'(V, \mathbb{R})$ be a scalar-valued distribution and $g \in \mathcal{D}'(V, \mathbb{R}^d)$ be a vector-valued distribution. If for every Schwartz test function $\eta \in \mathcal{S}(V, \mathbb{R})$ and every vector $y \in V$, the distributional Fréchet derivative $Df$ satisfies the relation \[ (Df)(\eta)(y) = \langle g(\eta), \mathbf{b}(y) \rangle_{\mathbb{R}^d} \] where $\langle \cdot, \cdot \rangle_{\mathbb{R}^d}$ is the standard Euclidean inner product, then the distributional gradient of $f$ is equal to $g$ ($\nabla f = g$).

theorem

Expansion of the distributional gradient $(\nabla f)(\eta)$ in the standard basis

#distGrad_eq_sum_basis

Let $V = \text{Space } d$ be a $d$-dimensional real inner product space equipped with a standard orthonormal basis $\{\mathbf{e}_i\}_{i \in \{0, \dots, d-1\}}$. For any scalar-valued distribution $f \in \mathcal{D}'(V, \mathbb{R})$ and any Schwartz test function $\eta \in \mathcal{S}(V, \mathbb{R})$, the evaluation of the distributional gradient $(\nabla f)(\eta) \in \mathbb{R}^d$ is given by the sum \[ (\nabla f)(\eta) = \sum_i -f(\partial_{\mathbf{e}_i} \eta)\, \mathbf{\hat{e}}_i \] where $\partial_{\mathbf{e}_i} \eta$ is the directional derivative of the test function $\eta$ in the direction of the $i$-th basis vector $\mathbf{e}_i$, and $\mathbf{\hat{e}}_i$ denotes the $i$-th standard basis vector of the Euclidean space $\mathbb{R}^d$.
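
The minus sign here is the usual integration-by-parts convention (a standard remark, not quoted from the source): if $f$ is represented by a continuously differentiable function $g$ with suitable decay, then for each $i$ \[ -\int g \, (\partial_{\mathbf{e}_i} \eta) = \int (\partial_i g) \, \eta, \] so the component $-f(\partial_{\mathbf{e}_i} \eta)$ reproduces the $i$-th component of the classical gradient of $g$ tested against $\eta$.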

theorem

The distributional gradient $(\nabla f)(\epsilon)$ equals the vector of partial derivatives $((\partial_i f)(\epsilon))_i$

#distGrad_toFun_eq_distDeriv

Let $V = \text{Space } d$ be a $d$-dimensional real inner product space. For any scalar-valued distribution $f \in \mathcal{D}'(V, \mathbb{R})$ and any Schwartz test function $\epsilon \in \mathcal{S}(V, \mathbb{R})$, the evaluation of the distributional gradient $(\nabla f)(\epsilon)$ is the vector in $\mathbb{R}^d$ whose components are the distributional partial derivatives of $f$ evaluated at $\epsilon$. Mathematically, this is expressed as \[ ((\nabla f)(\epsilon))_i = (\partial_i f)(\epsilon) \] for each $i \in \{0, \dots, d-1\}$, where $\nabla$ is the distributional gradient operator `distGrad` and $\partial_i$ is the $i$-th distributional partial derivative operator `distDeriv i`.

theorem

Evaluation of the distributional gradient $(\nabla f)(\epsilon) = ((\partial_i f)(\epsilon))_i$

#distGrad_apply

Let $V = \text{Space } d$ be a $d$-dimensional real inner product space. For any scalar-valued distribution $f \in \mathcal{D}'(V, \mathbb{R})$ and any Schwartz test function $\epsilon \in \mathcal{S}(V, \mathbb{R})$, the evaluation of the distributional gradient $(\nabla f)(\epsilon)$ is the vector in $\mathbb{R}^d$ whose $i$-th component is the $i$-th distributional partial derivative of $f$ evaluated at $\epsilon$. Mathematically, this is expressed as \[ ((\nabla f)(\epsilon))_i = (\partial_i f)(\epsilon) \] for each $i \in \{0, \dots, d-1\}$, where $\nabla$ is the distributional gradient operator and $\partial_i$ is the $i$-th distributional partial derivative operator.

definition

Gradient continuous linear map $\mathcal{S}(\text{Space } d, \mathbb{R}) \to \mathcal{S}(\text{Space } d, \mathbb{R}^d)$

#gradSchwartz

The definition `gradSchwartz` represents the gradient operator as a continuous linear map from the space of real-valued Schwartz functions $\mathcal{S}(\text{Space } d, \mathbb{R})$ to the space of vector-valued Schwartz functions $\mathcal{S}(\text{Space } d, \mathbb{R}^d)$. For a given Schwartz function $\eta$, the map returns the vector field $\nabla \eta$, where the component in the direction of the $i$-th basis vector is the partial derivative of $\eta$ with respect to that coordinate.
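
A plausible Lean signature for this map, assuming Mathlib's `SchwartzMap` and continuous-linear-map notation (the exact PhysLean definition is not quoted here):

```lean
-- Hypothetical signature sketch: the gradient as a continuous ℝ-linear map
-- between Schwartz spaces; the defining body is omitted.
noncomputable def gradSchwartz (d : ℕ) :
    SchwartzMap (Space d) ℝ →L[ℝ]
      SchwartzMap (Space d) (EuclideanSpace ℝ (Fin d)) :=
  sorry
```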

theorem

$\text{gradSchwartz}(\eta)(x) = \nabla \eta(x)$

#gradSchwartz_apply_eq_grad

For any real-valued Schwartz function $\eta \in \mathcal{S}(\text{Space } d, \mathbb{R})$ and any point $x \in \text{Space } d$, the value of the Schwartz gradient operator applied to $\eta$ at $x$, denoted $\text{gradSchwartz}(\eta)(x)$, is equal to the standard gradient $\nabla \eta(x)$.