
Physlib.SpaceAndTime.Space.Derivatives.Div

11 declarations

definition

Divergence of a function $f$

#div

For a function $f : \text{Space } d \to \mathbb{R}^d$ (where $\mathbb{R}^d$ is represented by `EuclideanSpace ℝ (Fin d)`), the divergence $\text{div}\, f$ is the scalar-valued function on $\text{Space } d$ defined as the sum of the partial derivatives of the components of $f$. Specifically, for any point $x \in \text{Space } d$: $$(\text{div}\, f)(x) = \sum_{i=0}^{d-1} \frac{\partial f_i}{\partial x_i}(x)$$ where $f_i(x)$ denotes the $i$-th component of the vector $f(x)$ and $\frac{\partial}{\partial x_i}$ denotes the spatial derivative in the direction of the $i$-th basis vector.
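As a quick sanity check of this definition, consider the identity field $f(x) = x$ (a worked example, not a declaration in the library). Each component is $f_i(x) = x_i$, so every diagonal partial derivative is $1$:

```latex
% Worked example (not from the library): the identity field f(x) = x.
% Each diagonal partial \partial f_i / \partial x_i equals 1, so the sum is d:
(\text{div}\, f)(x) = \sum_{i=0}^{d-1} \frac{\partial x_i}{\partial x_i}(x)
                   = \sum_{i=0}^{d-1} 1 = d
```

This matches the familiar fact that the divergence of the position field on $\mathbb{R}^d$ is the constant $d$.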

definition

Divergence notation $\nabla \cdot f$

#divNotation

The notation $\nabla \cdot f$ represents the divergence operator applied to a function $f$ from $\text{Space } d$ to the $d$-dimensional Euclidean space $\mathbb{R}^d$. It is shorthand for the function `Space.div f`.

theorem

$\nabla \cdot \mathbf{0} = 0$

#div_zero

The divergence of the zero function from $\text{Space } d$ to $\mathbb{R}^d$ is the zero function from $\text{Space } d$ to $\mathbb{R}$. Symbolically, $\nabla \cdot \mathbf{0} = 0$.

theorem

The Divergence of a Constant Function is Zero

#div_const

For any constant vector $\mathbf{v} \in \mathbb{R}^d$, the divergence of the constant function $f : \text{Space } d \to \mathbb{R}^d$ defined by $f(x) = \mathbf{v}$ is the zero function, i.e., $\nabla \cdot f = 0$.

theorem

$\nabla \cdot (f_1 + f_2) = \nabla \cdot f_1 + \nabla \cdot f_2$

#div_add

Let $f_1, f_2 : \text{Space } d \to \mathbb{R}^d$ be two differentiable functions, where $\mathbb{R}^d$ is the standard $d$-dimensional Euclidean space. Then the divergence of the sum equals the sum of the individual divergences: $$\nabla \cdot (f_1 + f_2) = \nabla \cdot f_1 + \nabla \cdot f_2$$ where $\nabla \cdot f$ denotes the divergence $\text{div}\, f$.

theorem

$\nabla \cdot (k f) = k (\nabla \cdot f)$

#div_smul

For any differentiable function $f : \text{Space } d \to \mathbb{R}^d$ and any scalar $k \in \mathbb{R}$, the divergence of the scalar multiple $k f$ equals the scalar multiple of the divergence of $f$: $$\nabla \cdot (k f) = k (\nabla \cdot f)$$ Here, $\text{Space } d$ is a $d$-dimensional real inner product space and $\mathbb{R}^d$ is equipped with the standard Euclidean structure.
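Together, `div_add` and `div_smul` give linearity of the divergence on differentiable fields. The following is a minimal Lean sketch of how the two lemmas might be combined; it assumes the `∇ ⋅` notation, the lemma names as listed on this page, and guessed differentiability side conditions, so the exact signatures in PhysLean may differ:

```lean
-- Illustrative sketch only: assumes `Space d`, the `∇ ⋅` notation for
-- `Space.div`, and the lemmas `div_add` / `div_smul` described above.
-- The differentiability hypotheses are written as plausible guesses.
example {d : ℕ} (f₁ f₂ : Space d → EuclideanSpace ℝ (Fin d)) (k : ℝ)
    (h₁ : Differentiable ℝ f₁) (h₂ : Differentiable ℝ f₂) :
    ∇ ⋅ (k • f₁ + f₂) = k • (∇ ⋅ f₁) + ∇ ⋅ f₂ := by
  -- Split the sum, then pull the scalar out of the first summand.
  rw [div_add (h₁.const_smul k) h₂, div_smul h₁]
```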

theorem

The Map $w \mapsto \nabla \cdot (f(w))$ is Linear if $f$ is Linear

#div_linear_map

Let $W$ be a real vector space and let $f : W \to (\text{Space } 3 \to \mathbb{R}^3)$ be a linear map. If for every $w \in W$ the function $f(w)$ is differentiable, then the map assigning to each $w \in W$ the divergence of $f(w)$, written $w \mapsto \nabla \cdot (f(w))$, is also a linear map. Here, $\text{Space } 3$ is a 3-dimensional real inner product space and $\mathbb{R}^3$ is the standard Euclidean space.

definition

Divergence operator for distributions $\text{distDiv}$

#distDiv

The divergence operator for distributions, $\text{distDiv}$, is an $\mathbb{R}$-linear map that transforms a distribution $f : \text{Space } d \to \mathbb{R}^d$ into a scalar-valued distribution $g : \text{Space } d \to \mathbb{R}$. It is defined as the composition of the distributional Fréchet derivative $Df$ with a trace operator $\text{tr} : \mathcal{L}(\text{Space } d, \mathbb{R}^d) \to \mathbb{R}$. For a continuous linear map $v \in \mathcal{L}(\text{Space } d, \mathbb{R}^d)$, the trace is computed as $\text{tr}(v) = \sum_{i=1}^d \langle v(e_i), e_i \rangle$, where $\{e_i\}$ is the standard orthonormal basis of $\text{Space } d$ and $\mathbb{R}^d$.
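For intuition (a worked identity, not a declaration on this page): when $f$ is an honest differentiable function, $Df(x)$ is the Jacobian of $f$ at $x$ viewed as a continuous linear map, and the trace picks out its diagonal entries, recovering the classical divergence:

```latex
% The trace of the Jacobian recovers the pointwise divergence:
\text{tr}\big(Df(x)\big) = \sum_{i=1}^{d} \langle Df(x)\, e_i, e_i \rangle
  = \sum_{i=1}^{d} \frac{\partial f_i}{\partial x_i}(x)
  = (\text{div}\, f)(x)
```

This is why composing the distributional derivative with $\text{tr}$ is the natural extension of $\text{div}$ to distributions.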

theorem

$\text{distDiv}\, f(\eta) = \sum_i (Df\, \eta\, e_i)_i$

#distDiv_apply_eq_sum_fderivD

Let $E = \text{Space } d$ be a $d$-dimensional real inner product space equipped with the standard orthonormal basis $\{e_i\}_{i \in \{1, \dots, d\}}$. For any distribution $f : E \to \mathbb{R}^d$ and any Schwartz test function $\eta \in \mathcal{S}(E, \mathbb{R})$, the distributional divergence of $f$ applied to $\eta$ is given by the sum: $$\text{distDiv}\, f(\eta) = \sum_{i=1}^d \big( (Df)(\eta)(e_i) \big)_i$$ where $Df$ is the distributional Fréchet derivative of $f$, and $\big( (Df)(\eta)(e_i) \big)_i$ denotes the $i$-th component of the vector obtained by applying the continuous linear map $(Df)(\eta) \in \mathcal{L}(E, \mathbb{R}^d)$ to the basis vector $e_i$.

theorem

$\text{distDiv}\, f(\eta) = \sum_i (\partial_i f(\eta))_i$

#distDiv_apply_eq_sum_distDeriv

Let $E = \text{Space } d$ be a $d$-dimensional real inner product space equipped with the standard orthonormal basis $\{e_i\}_{i \in \{1, \dots, d\}}$. For any distribution $f : E \to \mathbb{R}^d$ and any Schwartz test function $\eta \in \mathcal{S}(E, \mathbb{R})$, the distributional divergence of $f$ applied to $\eta$ equals the sum of the $i$-th components of the distributional partial derivatives of $f$ applied to $\eta$: $$\text{distDiv}\, f(\eta) = \sum_{i=1}^d (\partial_i f(\eta))_i$$ where $\partial_i f$ is the distributional partial derivative of $f$ in the direction of the $i$-th basis vector $e_i$, and $(\cdot)_i$ denotes the $i$-th component of the resulting vector in $\mathbb{R}^d$.

theorem

Distributional divergence of a function $f$ equals $-\int \langle f, \nabla \eta \rangle$

#distDiv_ofFunction

Let $E = \text{Space } d$ be a $d$-dimensional Euclidean space (where $d \ge 1$). Let $f : E \to \mathbb{R}^d$ be a distribution-bounded function, and let $T_f$ denote the distribution induced by $f$. For any Schwartz test function $\eta \in \mathcal{S}(E, \mathbb{R})$, the distributional divergence of $T_f$ applied to $\eta$ is given by: $$(\text{distDiv}\, T_f)(\eta) = -\int_E \langle f(x), \nabla \eta(x) \rangle \, dx$$ where $\nabla \eta$ is the gradient of $\eta$ and $\langle \cdot, \cdot \rangle$ is the standard inner product on $\mathbb{R}^d$.
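The minus sign here is the usual integration-by-parts sign. For a continuously differentiable $f$ whose products with Schwartz functions decay at infinity, integrating the total derivative $\partial_i(f_i\,\eta)$ over $E$ gives zero (a standard derivation, not a declaration on this page):

```latex
% Integration by parts in the i-th coordinate; the boundary term vanishes
% because \eta is Schwartz:
0 = \int_E \partial_i\big(f_i(x)\,\eta(x)\big)\,dx
  = \int_E \frac{\partial f_i}{\partial x_i}(x)\,\eta(x)\,dx
  + \int_E f_i(x)\,\frac{\partial \eta}{\partial x_i}(x)\,dx
```

Summing over $i$ yields $\int_E (\text{div}\, f)\,\eta\,dx = -\int_E \langle f, \nabla \eta \rangle\,dx$, which is exactly why $\text{distDiv}$ agrees with the classical divergence on functions and extends it to distributions.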