Stationary Partial Differential Equations

Yihong Du, in Handbook of Differential Equations: Stationary Partial Differential Equations, 2005

Proof

By Theorem 4.5, the positive solution curve of (4.4) with Ω = B is "⊂"-shaped with exactly one turning point at $(\mu_0, v_0)$, where $v_0 = \underline{v}_{\mu_0} = \bar{v}_{\mu_0}$. Denote $\xi_0 = v_0(0)$. Then for any $\varepsilon \in (0, \xi_0)$, we can find a unique $\mu_\varepsilon \in (\mu_0, \infty)$ such that

$\underline{v}_{\mu_\varepsilon}(0) = \varepsilon.$

By Theorem 4.5, we see that μ ε increases as ε decreases and μ ε → ∞ as ε → 0.

For any $\mu \in [\mu_0, \mu_\varepsilon)$, we can find a unique $\underline{a}_\mu = \underline{a}_\mu(\varepsilon) \in (0, 1)$ such that

$\underline{v}_\mu(\underline{a}_\mu) = \varepsilon.$

Clearly,

(4.20) $\lim_{\varepsilon \to 0} \underline{a}_\mu(\varepsilon) = 1$ for fixed $\mu \ge \mu_0$; $\quad \lim_{\mu \to \mu_\varepsilon - 0} \underline{a}_\mu(\varepsilon) = 0$ for fixed $\varepsilon \in (0, \xi_0)$.

Now we define

$\underline{\eta}_\mu = (\underline{a}_\mu)^2 \mu, \qquad \underline{u}_\mu(x) = \underline{v}_\mu(\underline{a}_\mu x) - \varepsilon, \qquad x \in \bar{B},$

and find that

$\underline{\Gamma}_\varepsilon = \{(\underline{\eta}_\mu, \underline{u}_\mu) : \mu_0 \le \mu < \mu_\varepsilon\}$

gives a piece of smooth solution curve of (4.17). Moreover, $\underline{\Gamma}_\varepsilon$ connects $(\underline{\eta}_{\mu_0}, \underline{u}_{\mu_0})$ and (0, 0) (as $\mu \to \mu_\varepsilon - 0$).

On the other hand, since ε ∈ (0, ξ0), for any μ ≥ μ0 we can find a unique $\bar{a}_\mu = \bar{a}_\mu(\varepsilon) \in (0, 1)$ satisfying $\bar{v}_\mu(\bar{a}_\mu) = \varepsilon$, and if we define

$\bar{\eta}_\mu = (\bar{a}_\mu)^2 \mu, \qquad \bar{u}_\mu(x) = \bar{v}_\mu(\bar{a}_\mu x) - \varepsilon, \qquad x \in \bar{B},$

we obtain another piece of smooth solution curve of (4.17):

$\bar{\Gamma}_\varepsilon = \{(\bar{\eta}_\mu, \bar{u}_\mu) : \mu_0 \le \mu < \infty\}.$

By Theorem 4.5, $\mu \mapsto \bar{a}_\mu$ is strictly increasing and

(4.21) $\lim_{\mu \to \infty} \bar{a}_\mu = 1.$

Therefore $\mu \mapsto \bar{\eta}_\mu$ is strictly increasing, and $\{(\mu, u(0)) : (\mu, u) \in \bar{\Gamma}_\varepsilon\}$ is a monotone curve in R² that connects $(\bar{\eta}_{\mu_0}, \bar{u}_{\mu_0}(0))$ to (∞, ∞). Since

$(\underline{\eta}_{\mu_0}, \underline{u}_{\mu_0}) = (\bar{\eta}_{\mu_0}, \bar{u}_{\mu_0}),$

we find that

$\Gamma(\varepsilon) = \underline{\Gamma}_\varepsilon \cup \bar{\Gamma}_\varepsilon$

gives a piecewise smooth (in fact smooth) solution curve of (4.17) connecting (0, 0) and (∞, ∞). By Lemma 4.4, it contains all the positive solutions of (4.17). We now determine the shape of this curve.

Recall that

$f''(u) > 0$ for $u \in (0, 1/2)$ and $f''(u) < 0$ for $u \in (1/2, \infty).$

We fix some ξ1 ∈ (0, 1/2) and suppose

$\varepsilon < \varepsilon_1 \equiv 1/2 - \xi_1.$

Then clearly f″(u + ε) > 0 for u ∈ (0, ξ1).

Now we choose $\lambda_{\xi_1} > \mu_0$ such that

$\underline{v}_\mu(0) < \xi_1$ when $\mu \ge \lambda_{\xi_1}.$

By shrinking ε1 we may assume that $\lambda_{\xi_1} < \mu_\varepsilon$ for all ε ∈ (0, ε1). We can now divide $\underline{\Gamma}_\varepsilon$ into two parts:

$\Gamma_\varepsilon^1 = \{(\underline{\eta}_\mu, \underline{u}_\mu) : \lambda_{\xi_1} \le \mu < \mu_\varepsilon\} \quad \text{and} \quad \Gamma_\varepsilon^2 = \{(\underline{\eta}_\mu, \underline{u}_\mu) : \mu_0 \le \mu < \lambda_{\xi_1}\}.$

We first analyze the shape of $\Gamma_\varepsilon^1$. Define

$\Lambda_\varepsilon^* = \sup_{\mu \in [\lambda_{\xi_1}, \mu_\varepsilon)} \underline{\eta}_\mu.$

By (4.20), one easily sees that there exists ε2 ∈ (0, ε1] such that when ε ∈ (0, ε2),

$\Lambda_\varepsilon^*$ is achieved at some $\mu^* \in (\lambda_{\xi_1}, \mu_\varepsilon)$, and $\lim_{\varepsilon \to 0} \Lambda_\varepsilon^* = \infty.$

By the implicit function theorem, $(\underline{\eta}_{\mu^*}, \underline{u}_{\mu^*})$ must be a degenerate solution of (4.17). Then, by Lemma 4.7, (4.19) and our choice of ξ1, the solutions of (4.17) near $(\underline{\eta}_{\mu^*}, \underline{u}_{\mu^*})$ form a smooth curve that turns to the left. Therefore we have an upper branch and a lower branch of positive solutions starting from this point, and both branches can be continued towards smaller values of μ. The lower branch can be continued until it reaches (0, 0), because (a) the continuation cannot meet a degenerate solution, by Lemma 4.7 and the fact that $u(0) < \xi_1$ on $\Gamma_\varepsilon^1$, and (b) the branch goes along $\Gamma_\varepsilon^1$. For the same reason, the upper branch can be continued until it reaches $(\underline{\eta}_{\lambda_{\xi_1}}, \underline{u}_{\lambda_{\xi_1}})$. This implies that $\Gamma_\varepsilon^1$ is exactly "⊃"-shaped.

Next we analyze the shape of $\Gamma_\varepsilon^2$. It is more convenient for our discussion to consider a bigger piece of the solution curve,

$\Gamma_\varepsilon^3 = \Gamma_\varepsilon^2 \cup \{(\bar{\eta}_\mu, \bar{u}_\mu) : \mu_0 \le \mu \le \lambda_{\xi_1}\},$

which contains part of $\bar{\Gamma}_\varepsilon$. We observe that any $(\mu, u) \in \Gamma_\varepsilon^3$ satisfies

(4.22) $\lambda_\varepsilon^* \le \mu \le \lambda_{\xi_1}, \qquad \underline{v}_{\lambda_{\xi_1}}(0) - \varepsilon \le \|u\|_\infty = u(0) \le \bar{v}_{\lambda_{\xi_1}}(0) - \varepsilon,$

where

$\lambda_\varepsilon^* = \inf\{\mu : (\mu, u) \in \Gamma_\varepsilon^3\}.$

Since $\bar{\eta}_\mu$ is increasing in μ, $\lambda_\varepsilon^*$ is achieved at some $\underline{\eta}_\mu$ with $\mu \in [\mu_0, \lambda_{\xi_1})$. Therefore $(\lambda_\varepsilon^*, \underline{u}_\mu)$ must be a degenerate solution of (4.17). Clearly

$\lambda_\varepsilon^* \le \underline{\eta}_{\mu_0} = (\underline{a}_{\mu_0}(\varepsilon))^2 \mu_0 < \mu_0.$

On the other hand, it is easy to see that $\underline{a}_\mu(\varepsilon) \to 1$ as ε → 0, uniformly for $\mu \in [\mu_0, \lambda_{\xi_1}]$. Hence

$\lim_{\varepsilon \to 0} \lambda_\varepsilon^* = \lim_{\varepsilon \to 0} \min\{(\underline{a}_\mu(\varepsilon))^2 \mu : \mu_0 \le \mu \le \lambda_{\xi_1}\} = \mu_0.$

We know from the above that $\Gamma_\varepsilon^3$ contains at least one degenerate solution, $(\lambda_\varepsilon^*, \underline{u}_\mu)$. If we can show that there exists ε3 ∈ (0, ε2) such that, whenever ε ∈ (0, ε3), any degenerate solution on $\Gamma_\varepsilon^3$ must make τ″(0) > 0 in (4.19) of Lemma 4.7, then a continuation argument much as before shows that $\Gamma_\varepsilon^3$ contains exactly one degenerate solution, at $\mu = \lambda_\varepsilon^*$, and that the curve turns to the right at this point. Hence $\Gamma_\varepsilon^3$ must be smooth and "⊂"-shaped. This tells us that the entire solution curve Γ(ε) is exactly S-shaped, with two turning points, at $\mu = \lambda_\varepsilon^*$ and $\mu = \Lambda_\varepsilon^*$ respectively. Clearly, this would finish the proof of Theorem 4.8.

It remains to show that there exists ε3 ∈ (0, ε2) such that any degenerate solution on $\Gamma_\varepsilon^3$ must make τ″(0) > 0 in (4.19) of Lemma 4.7 as long as ε ∈ (0, ε3). We argue indirectly. Suppose that for some $\varepsilon_k \to 0$ we can find degenerate solutions $(\mu_k, u_k) \in \Gamma_{\varepsilon_k}^3$ such that

$\tau_k''(0) = -\mu_k \dfrac{\int_B f''(u_k + \varepsilon_k)\,\phi_k^3\,dx}{\int_B f(u_k + \varepsilon_k)\,\phi_k\,dx} \le 0,$

where $\phi_k$ is the positive eigenfunction given in Lemma 4.7 when $(\mu, u) = (\mu_k, u_k)$. We may assume that $\|\phi_k\|_\infty = 1$.
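The displayed expression for $\tau_k''(0)$ is the standard turning-point formula. As a sketch of where it comes from (assuming, as in Lemma 4.7, that (4.17) has the form $-\Delta u = \eta f(u + \varepsilon)$ in B with $u|_{\partial B} = 0$, and that the solution curve through the degenerate solution is parametrized as $(\tau(s), u(s))$ with $\tau(0) = \mu_k$, $\tau'(0) = 0$ and $\dot{u}(0) = \phi_k$):

```latex
% Differentiate -\Delta u(s) = \tau(s) f(u(s)+\varepsilon_k) twice in s at s = 0,
% using \tau'(0) = 0 and \dot u(0) = \phi_k:
-\Delta \ddot u = \tau''(0)\, f(u_k+\varepsilon_k)
              + \mu_k f''(u_k+\varepsilon_k)\,\phi_k^{2}
              + \mu_k f'(u_k+\varepsilon_k)\,\ddot u .
% Multiply by \phi_k and integrate over B; the \ddot u terms cancel, because
% -\Delta\phi_k = \mu_k f'(u_k+\varepsilon_k)\phi_k and -\Delta is self-adjoint:
\tau''(0)\int_B f(u_k+\varepsilon_k)\phi_k\,dx
   + \mu_k \int_B f''(u_k+\varepsilon_k)\phi_k^{3}\,dx = 0 .
```

Solving the last identity for $\tau''(0)$ gives the quotient above.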

By (4.22), we may assume that $\mu_k \to \tilde{\mu}_0 \in [\mu_0, \lambda_{\xi_1}]$. Equation (4.22) also implies that $\|f(u_k + \varepsilon_k)\|_\infty$ is uniformly bounded. Therefore, by the equation for $u_k$ and a standard regularity and compactness argument, $\{u_k\}$ has a convergent subsequence in $C^1$. We may assume $u_k \to u_0$ in $C^1$. Moreover, from

$-\Delta \phi_k = \mu_k f'(u_k + \varepsilon_k)\,\phi_k, \qquad \phi_k|_{\partial B} = 0,$

we can use a similar regularity and compactness argument to obtain a C 1 convergent subsequence of ϕ k . We may assume ϕ k → ϕ0. Then we easily deduce

$-\Delta u_0 = \tilde{\mu}_0 f(u_0), \qquad u_0|_{\partial B} = 0, \qquad u_0 \ge 0, \qquad u_0 \not\equiv 0,$

and

$-\Delta \phi_0 = \tilde{\mu}_0 f'(u_0)\,\phi_0, \qquad \phi_0|_{\partial B} = 0, \qquad \phi_0 \ge 0, \qquad \|\phi_0\|_\infty = 1.$

This is to say that $(\tilde{\mu}_0, u_0)$ is a degenerate positive solution of (4.4) and $\phi_0$ is the corresponding positive eigenfunction. By Theorem 4.5, (4.4) has a unique degenerate positive solution, namely $(\mu_0, v_0)$, and by Lemma 4.3 and (4.13),

$\tau''(0) = -\mu_0 \dfrac{\int_B f''(u_0)\,\phi^3\,dx}{\int_B f(u_0)\,\phi\,dx} > 0.$

Therefore we must have $\tilde{\mu}_0 = \mu_0$, $u_0 = v_0$ and $\phi_0 = \phi$ (note that the positive eigenfunction is unique if it is normalized). Hence we have

$0 \ge \tau_k''(0) = -\mu_k \dfrac{\int_B f''(u_k + \varepsilon_k)\,\phi_k^3\,dx}{\int_B f(u_k + \varepsilon_k)\,\phi_k\,dx} \to -\mu_0 \dfrac{\int_B f''(u_0)\,\phi^3\,dx}{\int_B f(u_0)\,\phi\,dx} > 0.$

This contradiction finishes our proof.

Theorem 4.8 is illustrated by the bifurcation diagram in Figure 3.

Fig. 3. Bifurcation diagram for (4.17) with small ε > 0.

URL: https://www.sciencedirect.com/science/article/pii/S1874573305800117

Preparation of Catalysts VII

S. Szabó, I. Bakos, in Studies in Surface Science and Catalysis, 1998

3.1 Rhenium deposition via ionization of hydrogen adsorbed on Pt

First the potential sweep of the platinized Pt electrode was determined in 0.5 M H2SO4 solution (curve 1 in Fig. 1), and then the Pt electrode was resaturated with hydrogen. When the potential reached 0.05 V, 10 mg Re2O7 was introduced into the main compartment of the cell. As a result, the potential of the electrode rose to 0.425 V in 30 min. After Re deposition the cell was washed free of ReO4− ions with deoxygenated 0.5 M H2SO4, and the potential sweep of the electrode covered with adsorbed rhenium species was determined, also in 0.5 M H2SO4 (curve 2 in Fig. 1). Since the integrals of the hydrogen region of curve 1 and the rhenium region of curve 2 are the same, adsorbed hydrogen was replaced by rhenium species without any loss of charge. It also follows from this result that the adsorbed material was reoxidized into ReO4− ions quantitatively.

Fig. 1. Potential sweep of the platinized Pt electrode in 0.5 M H2SO4 (1). Potential sweep of the same electrode, also in 0.5 M H2SO4, after Re deposition via ionization of the preadsorbed hydrogen (2). Sweep rate: 2 × 10⁻³ V/s.

The potential of oxidation of the adsorbed species is practically the same as that published earlier [5]. It follows from this result that in this case ReO2 deposition may also be assumed.

URL: https://www.sciencedirect.com/science/article/pii/S0167299198801912

Critical Phenomena in Gravitational Collapse

C. Gundlach, in Encyclopedia of Mathematical Physics, 2006

The Dynamical Systems Picture

When we consider general relativity as an infinite-dimensional dynamical system, a solution curve is a spacetime. Points along the curve are Cauchy surfaces in the spacetime, which can be thought of as moments of time. An important difference between general relativity and other field theories is that the same spacetime can be sliced in many different ways, none of which is preferred. Therefore, to turn general relativity into a dynamical system, one has to fix a slicing (and in practice also coordinates on each slice). In the example of the spherically symmetric massless scalar field, using polar slicing and an area radial coordinate r, a point in phase space can be characterized by the two functions

[5] $Z = \{\phi(r),\; \partial\phi/\partial t\,(r)\}$

In spherical symmetry there are no degrees of freedom in the gravitational field, and Cauchy data for the metric can be reconstructed from Z using the Einstein constraints (Figure 1).

Figure 1. The phase-space picture for the black hole threshold in the presence of a critical point. The arrow lines are time evolutions, corresponding to spacetimes. The line without an arrow is not a time evolution, but a one-parameter family of initial data that crosses the black hole threshold at p = p *. (Reproduced with permission from Gundlach C (2003) Critical phenomena in gravitational collapse. Physics Reports 376: 339–405.)

The phase space consists of two halves: initial data whose time evolution always remains regular, and data which contain a black hole or form one during time evolution. The numerical evidence collected from individual one-parameter families of data suggests that the black hole threshold that separates the two is a smooth hypersurface. The mass-scaling law [1] can, therefore, be restated without explicit reference to one-parameter families. Let P be any function on phase space such that data sets with P > 0 form black holes, and data with P < 0 do not, and which is analytic in a neighborhood of the black hole threshold P = 0. The black hole mass as a function on phase space is then given by

[6] $M \approx F(P)\,P^{\gamma}$

for P > 0, where F(P) > 0 is an analytic function.
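In numerical work the exponent γ in [6] is extracted by regressing ln M against ln(p − p*) along a one-parameter family of initial data; a minimal sketch with synthetic data (p*, γ = 0.37 and the prefactor are illustrative stand-ins, not values from the text):

```python
import numpy as np

# Synthetic illustration: supercritical family members p > p_star whose
# black hole masses follow the scaling law M ~ (p - p_star)^gamma.
p_star, gamma_true = 0.5, 0.37
p = p_star + np.logspace(-8, -2, 20)
M = 2.3 * (p - p_star) ** gamma_true      # stand-in for measured masses

# gamma is the slope of ln M versus ln(p - p_star).
slope, intercept = np.polyfit(np.log(p - p_star), np.log(M), 1)
print(f"fitted gamma = {slope:.3f}")      # recovers 0.37 by construction
```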

Consider now the time evolution in this dynamical system, near the threshold ("critical surface") between black hole formation and dispersion. A phase-space trajectory that starts out in a critical surface by definition never leaves it. A critical surface is, therefore, a dynamical system in its own right, with one dimension fewer. If it has an attracting fixed point, such a point is called a critical point. It is an attractor of codimension 1, and the critical surface is its basin of attraction. The fact that the critical solution is an attractor of codimension 1 is visible in its linear perturbations: it has an infinite number of decaying perturbation modes tangential to (and spanning) the critical surface, and a single growing mode not tangential to the critical surface.

Any trajectory beginning near the critical surface, but not necessarily near the critical point, moves almost parallel to the critical surface toward the critical point. As the phase point approaches the critical point, its movement parallel to the surface slows down, while its distance and velocity out of the critical surface are still small. The phase point spends some time moving slowly near the critical point. Eventually, it moves away from the critical point in the direction of the growing mode, and ends up on an attracting fixed point.

This is the origin of universality: any initial data set that is close to the black hole threshold (on either side) evolves to a spacetime that approximates the critical spacetime for some time. When it finally approaches either the dispersion fixed point or the black hole fixed point, it does so on a trajectory that appears to be coming from the critical point itself. All near-critical solutions pass through one of these two funnels. All details of the initial data have been forgotten, except for the distance from the black hole threshold: the closer the initial phase point is to the critical surface, the closer the solution curve comes to the critical point, and the longer it remains close to it.

In all systems that have been examined, the black hole threshold contains at least one critical point. A fixed point of the dynamical system represents a spacetime with an additional continuous symmetry that generic solutions do not have. If the critical spacetime is time independent in the usual sense, we have type I critical phenomena; if the symmetry is scale invariance, we have type II critical phenomena. The attractor within the critical surface may also be a limit cycle, rather than a fixed point. In spacetime terms this corresponds to a discrete symmetry (DSS rather than CSS in type II, or a pulsating critical solution, rather than a stationary one, in type I).

URL: https://www.sciencedirect.com/science/article/pii/B0125126662000109

Mathematical Proof of the Carathéodory Theorem and Resulting Interpretations; Derivation of the Debye–Hückel Equation

J.M. Honig, in Thermodynamics (Third Edition), 2007

9.1.2 Carathéodory's Theorem

We have so far demonstrated that if (9.1.3) is holonomic and if (9.1.4) applies, these conditions are sufficient to guarantee that in any neighborhood of a given point x0 in the hyperspace there exist other points, corresponding to (9.1.5), that are not accessible from x0 via solution curves subject to the relation X · dx = 0.

Is the converse also true? That is to say, from the assumption of nonaccessibility can one deduce that $\sum_i X_i\,dx_i$ is holonomic? The answer is in the affirmative and is furnished through Carathéodory's theorem:

If every neighborhood of an arbitrary point x0 in a hyperspace contains points not accessible from it via solution curves of the equation $\sum_i X_i\,dx_i = 0$, then the Pfaffian form $\text{đ}L = \sum_i X_i\,dx_i$ is holonomic.

The proof of this important theorem is provided in the next two sections.
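As an illustration of what the theorem rules in and out (an example of mine, not from the text): the classical non-holonomic Pfaffian form in three variables is đL = dz − y dx, and for it, consistently with the contrapositive of Carathéodory's theorem, no neighborhood contains inaccessible points:

```latex
% Example (not from the text): dL = dz - y dx in R^3.
% By the Frobenius condition, a Pfaffian form \omega admits an integrating
% factor (is holonomic) only if \omega \wedge d\omega = 0. Here
\omega = dz - y\,dx, \qquad d\omega = dx \wedge dy, \qquad
\omega \wedge d\omega = dz \wedge dx \wedge dy \neq 0,
% so the form is NOT holonomic. Consistently, every point near x_0 is
% accessible along curves with dz = y\,dx: move in y at constant x and z
% (the form vanishes since dx = dz = 0), then use x-displacements at a
% suitable y to produce any required change in z.
```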

URL: https://www.sciencedirect.com/science/article/pii/B9780123738776500113

Mathematical context

In Transcendental Curves in the Leibnizian Calculus, 2017

4.4.5.2 The paracentric isochrone

In the paper where he solved this problem, Leibniz (1689a) also proposed a much more difficult variant of it, the paracentric isochrone problem: find the curve for which a ball rolling down it approaches or recedes from a given point at uniform speed. The solution curve is very complicated, depending, as we would say, on elliptic integrals. We shall see later how this difficulty was tackled, by Johann Bernoulli and others, in numerous creative ways in the 1690s. In his lectures, however, Bernoulli only goes so far as to derive several complicated differential equations for this problem, using the same approach as the second method above. He then leaves the problem with the remark, or rather confession of ignorance, that this gives the nature of the solution curve insofar as these differential equations are solvable by inverse tangent methods. We shall follow Bernoulli's method when deriving a differential equation for the paracentric isochrone in Section 8.3.

URL: https://www.sciencedirect.com/science/article/pii/B9780128132371500041

Analysis of Dynamic Models

Mark M. Meerschaert, in Mathematical Modeling (Fourth Edition), 2013

5.3 Phase Portraits

In Section 5.1 we introduced the eigenvalue test for stability in continuous time dynamical systems. This test is based on the idea of a linear approximation in the neighborhood of an isolated equilibrium point. In this section we will show how this simple idea can be used to obtain a graphical description of the behavior of a dynamical system near an equilibrium point. This information can then be used along with a sketch of the vector field to obtain a graphical description of the dynamics over the entire state space, called the phase portrait. Phase portraits are important in the analysis of nonlinear dynamical systems because, in most cases, it is not possible to obtain exact analytical solutions. At the end of this section we also include a brief discussion of some similar techniques for discrete time dynamical systems, again based on the idea of linear approximation.

Example 5.3. Consider the electrical circuit diagrammed in Figure 5.6. The circuit consists of a capacitor, a resistor, and an inductor in a simple closed loop. The effect of each component of the circuit is measured in terms of the relationship between current and voltage on that branch of the loop. An idealized physical model gives the relations

Figure 5.6. RLC circuit diagram for Example 5.3.

$C\,\dfrac{dv_C}{dt} = i_C \ \text{(capacitor)}, \qquad v_R = f(i_R) \ \text{(resistor)}, \qquad L\,\dfrac{di_L}{dt} = v_L \ \text{(inductor)},$

where $v_C$ represents the voltage across the capacitor, $i_R$ represents the current through the resistor, and so on. The function f(x) is called the v-i characteristic of the resistor. Usually f(x) has the same sign as x; this is called a passive resistor. Some control circuits use an active resistor, where f(x) and x have opposite signs for small x; see Example 5.4. In the classical linear model of the RLC circuit, we assume that f(x) = Rx, where R > 0 is the resistance. Kirchhoff's current law states that the sum of the currents flowing into a node equals the sum of the currents flowing out. Kirchhoff's voltage law states that the voltage drops along a closed loop must add up to zero. Determine the behavior of this circuit over time in the case where L = 1, C = 1/3, and f(x) = x³ + 4x.

We will use the five-step method. The results of step 1 are summarized in Figure 5.7. Step 2 is to select a modeling approach. We will model this problem using a continuous time dynamical system, which we will analyze by sketching the complete phase portrait.

Figure 5.7. Step 1 of the RLC circuit problem.

Suppose that we are given a dynamical system x′ = F(x), where x = (x1, …, xn) and F has continuous first partial derivatives in the neighborhood of an equilibrium point x0. Let A denote the matrix of first partial derivatives evaluated at the equilibrium point x0, as defined by (5.1). We have stated previously that for x near x0 the system x′ = F(x) behaves like the linear system x′ = A(x − x0). Now we will be more specific.

The phase portrait of a continuous time dynamical system is simply a sketch of the state space showing a representative selection of solution curves. It is not hard to draw the phase portrait for a linear system (at least on R 2 ) because we can always find an exact solution to a linear system of differential equations. Then we can just graph the solutions for a few initial conditions to get the phase portrait. We refer the reader to any textbook on differential equations for the details on how to solve linear systems of differential equations. For nonlinear systems, we can draw an approximate phase portrait in the neighborhood of each isolated equilibrium point by using the linear approximation.

A homeomorphism is a continuous function with a continuous inverse. The idea of a homeomorphism has to do with shapes and their generic properties. For example, consider a circle in the plane. The image of this circle under a homeomorphism

G : R 2 R 2

might be another circle, an ellipse, or even a square or a triangle. But it could not be a line segment. This would violate continuity. It also could not be a figure eight, because this would violate the property that G must have an inverse (so it must be one-to-one). There is a theorem that states that if the eigenvalues of A all have nonzero real parts, then there is a homeomorphism G that maps the phase portrait of the system x′ = Ax onto the phase portrait of x′ = F(x), with G(0) = x 0 (Hirsch and Smale (1974) p. 314). This theorem says that the phase portrait of x′ = F(x) around the point x0 looks just like that of the linear system, except for some distortion. It would be as if we drew the phase portrait of the linear system on a sheet of rubber which we could stretch any way we like, but could not tear. This is a very powerful result. It means that we can get an actual picture (good enough for almost all practical purposes) of the behavior of a nonlinear dynamical system near each isolated equilibrium point just by analyzing its linear approximation. Then, to finish up the phase portrait on the rest of the state space, we combine what we have learned about the behavior of solutions near the equilibrium points with the information contained in a sketch of the vector field.

Step 3 is to formulate the model. We begin by considering the state space. There are six state variables to begin with, but we can use Kirchoff's laws to reduce the number of degrees of freedom (the number of independent state variables) from six to two. Let x 1 = iR and notice that x 1 = iL = iC as well. Let x 2 = vC . Then we have

$x_2'/3 = x_1, \qquad v_R = x_1^3 + 4x_1, \qquad x_1' = v_L, \qquad x_2 + v_R + v_L = 0.$

Substitute to obtain

$x_2'/3 = x_1, \qquad x_2 + x_1^3 + 4x_1 + x_1' = 0,$

and then rearrange to get

(5.15) $x_1' = -x_1^3 - 4x_1 - x_2, \qquad x_2' = 3x_1.$

Now if we let x = (x 1, x 2), then Eq. (5.15) can be written in the form x′ = F(x) where F = (f 1, f 2) and

(5.16) $f_1(x_1, x_2) = -x_1^3 - 4x_1 - x_2, \qquad f_2(x_1, x_2) = 3x_1.$

This concludes step 3.
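The algebra of step 3 can be checked symbolically; a small sketch using sympy (the variable names are mine):

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2")

vR = x1**3 + 4*x1        # resistor: vR = f(iR) with f(x) = x^3 + 4x, iR = x1
vL = -x2 - vR            # Kirchhoff's voltage law: x2 + vR + vL = 0
x1_prime = vL            # inductor with L = 1: x1' = vL
x2_prime = 3*x1          # capacitor with C = 1/3: x2'/3 = x1

print(sp.expand(x1_prime))   # -> -x1**3 - 4*x1 - x2, matching (5.15)
print(x2_prime)              # -> 3*x1
```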

Step 4 is to solve the model. We will analyze the dynamical system (5.15) by sketching the complete phase portrait. Figure 5.8 shows a Maple graph of the vector field for this dynamical system. It is also a fairly simple matter to sketch the vector field by hand. Velocity vectors are horizontal on the line x1 = 0, where x2′ = 0, and vertical on the curve x2 = −x1³ − 4x1, where x1′ = 0. There is one equilibrium point, (0, 0), at the intersection of these two curves. From the vector field it is difficult to tell whether the equilibrium is stable or unstable. To obtain more information, we will analyze the linear system that approximates the behavior of (5.15) near the equilibrium (0, 0).

Figure 5.8. Graph of voltage x2 versus current x1 showing the vector field (5.16) for the RLC circuit problem of Example 5.3.

Computing the partial derivatives from (5.16), we obtain

(5.17) $\dfrac{\partial f_1}{\partial x_1} = -3x_1^2 - 4, \qquad \dfrac{\partial f_1}{\partial x_2} = -1, \qquad \dfrac{\partial f_2}{\partial x_1} = 3, \qquad \dfrac{\partial f_2}{\partial x_2} = 0.$

Evaluating the partial derivatives (5.17) at the equilibrium point (0, 0) and substituting back into Eq. (5.1), we obtain

$A = \begin{pmatrix} -4 & -1 \\ 3 & 0 \end{pmatrix}.$

The eigenvalues of this 2 × 2 matrix can be computed as the roots of the equation

$\begin{vmatrix} \lambda + 4 & 1 \\ -3 & \lambda \end{vmatrix} = 0.$

Evaluating the determinant, we obtain the equation

λ 2 + 4 λ + 3 = 0 ,

and then we obtain

$\lambda = -3, -1.$

Since both eigenvalues are negative, the equilibrium is stable.
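This computation is easy to confirm numerically; a minimal sketch:

```python
import numpy as np

A = np.array([[-4.0, -1.0],
              [ 3.0,  0.0]])           # linearization of (5.15) at (0, 0)
print(np.sort(np.linalg.eigvals(A)))   # [-3. -1.]: both negative, so stable
```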

To obtain additional information, we will solve the linear system x′ = Ax. In this case we have

(5.18) $\begin{pmatrix} x_1' \\ x_2' \end{pmatrix} = \begin{pmatrix} -4 & -1 \\ 3 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}.$

We will solve the linear system (5.18) by the method of eigenvalues and eigenvectors. We have already calculated the eigenvalues λ = −3,−1. To compute the eigenvector corresponding to the eigenvalue λ, we must find a nonzero solution to the equation

$\begin{pmatrix} \lambda + 4 & 1 \\ -3 & \lambda \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$

For λ = −3 we have

$\begin{pmatrix} 1 & 1 \\ -3 & -3 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$

from which we obtain

$\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 1 \\ -1 \end{pmatrix},$

so that

$\begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-3t}$

is one solution to the linear system (5.18). For λ = −1 we have

$\begin{pmatrix} 3 & 1 \\ -3 & -1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$

from which we obtain

$\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 1 \\ -3 \end{pmatrix},$

so that

$\begin{pmatrix} 1 \\ -3 \end{pmatrix} e^{-t}$

is another solution to the linear system (5.18). Then, the general solution to (5.18) can be written in the form

(5.19) $\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = c_1 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-3t} + c_2 \begin{pmatrix} 1 \\ -3 \end{pmatrix} e^{-t},$

where c 1, c 2 are arbitrary real constants.

Figure 5.9 shows the phase portrait for the linear system (5.18). This graph was obtained by plotting the solution curves (5.19) for a few select values of the constants c1, c2. For example, when c1 = 1 and c2 = −1, we plotted a parametric graph of

Figure 5.9. Graph of voltage x2 versus current x1 showing the linear approximation to the phase portrait near (0, 0) for the RLC circuit problem of Example 5.3.

$x_1(t) = e^{-3t} - e^{-t}, \qquad x_2(t) = -e^{-3t} + 3e^{-t}.$

We superimposed a graph of the linear vector field in order to indicate the orientation of the solution curves. Whenever you plot a phase portrait, be sure to add arrows to indicate the direction of the flow.
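A minimal matplotlib sketch of this construction (the grid range and the sampled constants c1, c2 are my choices):

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0.0, 5.0, 200)

# Solution curves (5.19) for a few choices of the constants c1, c2.
for c1 in (-1, 1):
    for c2 in (-1, 1):
        x1 = c1*np.exp(-3*t) + c2*np.exp(-t)
        x2 = -c1*np.exp(-3*t) - 3*c2*np.exp(-t)
        plt.plot(x1, x2)

# Superimpose the linear vector field (5.18) to show the flow direction.
g = np.linspace(-2.0, 2.0, 15)
X1, X2 = np.meshgrid(g, g)
plt.quiver(X1, X2, -4*X1 - X2, 3*X1)
plt.xlabel("x1 (current)"); plt.ylabel("x2 (voltage)")
plt.show()
```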

Figure 5.10 shows the complete phase portrait for the original nonlinear dynamical system (5.15). This picture was obtained by combining the information in Figures 5.8 and 5.9 and using the fact that the phase portrait of the nonlinear system (5.15) is homeomorphic to the phase portrait of the linear system (5.18). In this example there is not much qualitative difference between the behavior of the linear and the nonlinear systems.

Figure 5.10. Graph of voltage x2 versus current x1 showing the complete phase portrait for the RLC circuit problem of Example 5.3.

Step 5 is to answer the question. The question was to describe the behavior of the RLC circuit. The overall behavior can be described in terms of two quantities, the current through the resistor and the voltage drop across the capacitor. Regardless of the initial state of the circuit, both quantities eventually tend to zero. Furthermore, it is eventually true that either voltage is positive and current is negative, or vice versa. For a complete graphical description of the way that current and voltage behave over time, see Figure 5.10, where x 1 represents current and x 2 represents voltage. The behavior of other quantities of interest can easily be described in terms of these two variables (see Figure 5.7 for details). For example, the variable x 1 actually represents the current through any branch of the circuit loop.

Next, we will perform a sensitivity analysis to determine the effect of small changes in our assumptions on our general conclusions. First let us consider the capacitance C. In our example we assumed that C = 1/3. Now we will generalize our model by letting C remain indeterminate. In this case we obtain the dynamical system

(5.20) $x_1' = -x_1^3 - 4x_1 - x_2, \qquad x_2' = \dfrac{x_1}{C}$

in place of (5.15). Now we have

(5.21) $f_1(x_1, x_2) = -x_1^3 - 4x_1 - x_2, \qquad f_2(x_1, x_2) = \dfrac{x_1}{C}.$

For values of C near 1/3, the vector field for (5.21) is essentially the same as in Figure 5.8. Velocity vectors are still horizontal on the line x1 = 0 and vertical on the curve x2 = −x1³ − 4x1. There is still one equilibrium point, (0, 0), at the intersection of these two curves.

Computing the partial derivatives from Eq. (5.21), we obtain

(5.22) $\dfrac{\partial f_1}{\partial x_1} = -3x_1^2 - 4, \qquad \dfrac{\partial f_1}{\partial x_2} = -1, \qquad \dfrac{\partial f_2}{\partial x_1} = \dfrac{1}{C}, \qquad \dfrac{\partial f_2}{\partial x_2} = 0.$

Evaluating the partial derivatives (5.22) at the equilibrium point (0, 0) and substituting back into Eq. (5.1), we obtain

$A = \begin{pmatrix} -4 & -1 \\ 1/C & 0 \end{pmatrix}.$

The eigenvalues of this matrix can be computed as the roots of the equation

$\begin{vmatrix} \lambda + 4 & 1 \\ -1/C & \lambda \end{vmatrix} = 0.$

Evaluating the determinant, we obtain the equation

$\lambda^2 + 4\lambda + \dfrac{1}{C} = 0.$

The eigenvalues are

$\lambda = -2 \pm \sqrt{4 - \dfrac{1}{C}}.$

If C > 1/4, then we have two distinct real negative eigenvalues, and so the equilibrium is stable. In this case, the general solution to the linear system is

(5.23) $\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = c_1 \begin{pmatrix} 1 \\ -(2 + \alpha) \end{pmatrix} e^{(-2 + \alpha)t} + c_2 \begin{pmatrix} 1 \\ -(2 - \alpha) \end{pmatrix} e^{(-2 - \alpha)t},$

where α² = 4 − 1/C. The phase portrait of the linear system is about the same as Figure 5.9, except that the slopes of the straight-line solutions vary with C. Then for values of C greater than 1/4, the phase portrait for the original nonlinear system is a lot like the one shown in Figure 5.10. We conclude that our general conclusions about this RLC circuit are not sensitive to the exact value of C, as long as C > 1/4. A similar result may be expected for the inductance L. Generally speaking, the important characteristics of our solution (e.g., eigenvectors) depend continuously on these parameters.
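A quick numerical spot-check of this sensitivity conclusion (the sampled values of C are arbitrary):

```python
import numpy as np

for C in (0.3, 1/3, 0.5, 1.0, 2.0):       # sample values with C > 1/4
    A = np.array([[-4.0, -1.0],
                  [1.0/C,  0.0]])
    lam = np.sort(np.linalg.eigvals(A))
    print(f"C = {C:.3f}: eigenvalues = {lam}")  # two distinct negative reals
```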

Next, we consider the question of robustness. We assumed that the RLC circuit had v-i characteristic f(x) = x³ + 4x. Suppose more generally that f(0) = 0 and that f is strictly increasing. Now the dynamical system equations are

(5.24) $x_1' = -f(x_1) - x_2, \qquad x_2' = 3x_1.$

Now we have

(5.25) $f_1(x_1, x_2) = -f(x_1) - x_2, \qquad f_2(x_1, x_2) = 3x_1.$

Let R = f′(0). The linear approximation uses

$A = \begin{pmatrix} -R & -1 \\ 3 & 0 \end{pmatrix},$

and so the eigenvalues are the roots to the equation

$\begin{vmatrix} \lambda + R & 1 \\ -3 & \lambda \end{vmatrix} = 0.$

We compute that

$\lambda = \dfrac{-R \pm \sqrt{R^2 - 12}}{2}.$

As long as $R > \sqrt{12}$, we have two distinct real negative eigenvalues, and the behavior of the linear system is as depicted in Figure 5.9. Furthermore, the behavior of the original nonlinear system cannot be too different from Figure 5.10. We conclude that our model of this RLC circuit is robust with regard to our assumptions about the form of the v-i characteristic.

Example 5.4. Consider the nonlinear RLC circuit with L = 1, C = 1, and v-i characteristic f(x) = x³ − x. Determine the behavior of this circuit over time.

The modeling process is, of course, the same as for the previous example. Letting x 1 = iR and x 2 = vC , we obtain the dynamical system

(5.26) $x_1' = x_1 - x_1^3 - x_2, \qquad x_2' = x_1.$

See Figure 5.11 for a plot of the vector field. The velocity vectors are vertical on the curve x2 = x1 − x1³ and horizontal on the x2-axis. The only equilibrium is the origin, (0, 0). It is hard to tell from the vector field whether or not the origin is a stable equilibrium.

Figure 5.11. Graph of voltage x2 versus current x1 showing the vector field from (5.26) for the RLC circuit problem of Example 5.4.

The matrix of partial derivatives is

$A = \begin{pmatrix} 1 - 3x_1^2 & -1 \\ 1 & 0 \end{pmatrix}.$

Evaluate at x 1 = 0, x 2 = 0 to obtain the linear system

$\begin{pmatrix} x_1' \\ x_2' \end{pmatrix} = \begin{pmatrix} 1 & -1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix},$

which approximates the behavior of our nonlinear system near the origin. To obtain the eigenvalues we must solve

$\begin{vmatrix} \lambda - 1 & 1 \\ -1 & \lambda \end{vmatrix} = 0$

or λ² − λ + 1 = 0. The eigenvalues are

$\lambda = \dfrac{1}{2} \pm i\dfrac{\sqrt{3}}{2}.$

Since the real part of every eigenvalue is positive, the origin is an unstable equilibrium.

To obtain more information, we will solve the linear system. To find an eigenvector belonging to

$\lambda = \dfrac{1}{2} + i\dfrac{\sqrt{3}}{2},$

we solve

$\begin{pmatrix} -\frac{1}{2} + i\frac{\sqrt{3}}{2} & 1 \\ -1 & \frac{1}{2} + i\frac{\sqrt{3}}{2} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$

and we obtain

$x_1 = 2, \qquad x_2 = 1 - i\sqrt{3}.$

Then we have the complex solution

$\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 2 \\ 1 - i\sqrt{3} \end{pmatrix} e^{\left(\frac{1}{2} + i\frac{\sqrt{3}}{2}\right)t}.$

Taking real and imaginary parts yields two linearly independent real solutions u = (x 1, x 2), where

$x_1(t) = 2e^{t/2}\cos(t\sqrt{3}/2), \qquad x_2(t) = e^{t/2}\cos(t\sqrt{3}/2) + \sqrt{3}\,e^{t/2}\sin(t\sqrt{3}/2),$

and v = (x 1, x 2), where

$x_1(t) = 2e^{t/2}\sin(t\sqrt{3}/2), \qquad x_2(t) = e^{t/2}\sin(t\sqrt{3}/2) - \sqrt{3}\,e^{t/2}\cos(t\sqrt{3}/2).$

The general solution is c 1 u(t)+c 2 v(t). The phase portrait for this linear system is shown in Figure 5.12. This graph shows parametric plots of the solution for a few select values of c 1 and c 2. We superimpose a plot of the vector field in order to show the direction of the flow.

Figure 5.12. Graph of voltage x2 versus current x1 showing the linear approximation to the phase portrait near (0, 0) for the RLC circuit problem of Example 5.4.
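A brief numerical check of this eigenpair and of the two real solutions u and v (a sketch; numpy normalizes its eigenvectors, so they differ from (2, 1 − i√3) by a complex scalar):

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [1.0,  0.0]])
lam, vecs = np.linalg.eig(A)
print(lam)                          # 0.5 +/- 0.866j = 1/2 +/- i*sqrt(3)/2

# The complex solution (2, 1 - i*sqrt(3)) e^{(1/2 + i*sqrt(3)/2) t}:
t = np.linspace(0.0, 4.0, 5)
vec = np.array([2.0, 1.0 - 1j*np.sqrt(3.0)])
z = np.outer(vec, np.exp((0.5 + 0.5j*np.sqrt(3.0)) * t))
u, v = z.real, z.imag               # two linearly independent real solutions
print(u[:, 0], v[:, 0])             # u(0) = (2, 1), v(0) = (0, -sqrt(3))
```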

Note that, if we zoom in or zoom out on the origin in this linear phase portrait, it will look essentially the same. One of the defining characteristics of linear vector fields and linear phase portraits is that they look the same on every scale. The phase portrait for the nonlinear system in the neighborhood of the origin looks about the same, with some distortions. The solution curves near (0, 0) spiral outward, moving counterclockwise. If we continue to zoom in to the origin on a vector field or phase portrait for the nonlinear system, it will look more and more like the linear system. Farther away from the origin, the behavior of the nonlinear system may differ significantly from that of the linear system.

In order to obtain the complete phase portrait of the nonlinear system, we need to combine the information from Figures 5.11 and 5.12. It is apparent from the vector field in Figure 5.11 that the behavior of solution curves changes dramatically farther away from the origin. There is still a general counterclockwise flow, but the solution curves do not spiral out to infinity as in the linear phase portrait. Solution curves that begin far away from the origin look like they are moving toward the origin as they continue their counterclockwise flow. Since the solution curves near the origin are spiraling outwards, and the solution curves far away from the origin tend inward, and we know that solution curves do not cross, something interesting must be happening in the phase portrait. Whatever is happening, it is something that can never happen in a linear system. If a solution curve spirals outward in a linear phase portrait, then it must continue to spiral all the way out to infinity. In Section 6.3 we will explore the behavior of the dynamical system (5.26) using computational methods. We will wait until then to draw the complete phase portrait.

Before we leave the subject of linear approximation techniques, we should point out a few facts about discrete time dynamical systems. Suppose we have a discrete time dynamical system

$\Delta x = F(x)$

where x = (x 1,…, xn ), and let

$G(x) = x + F(x)$

denote the iteration function. At an equilibrium point x 0 we have G(x 0) = x 0. In Section 5.2 we used the approximation

$G(x) \approx x_0 + A(x - x_0)$

for values of x near x 0, where A is the matrix of partial derivatives evaluated at x = x 0 as defined by (5.12).

One way to obtain a graphical picture of the iteration function G(x) is to draw the image sets

$G(S) = \{G(x) : x \in S\}$

for various sets

$S = \{x : |x - x_0| = r\}.$

In dimension n = 2 the set S is a circle, and in dimension n = 3 it is a sphere. It is possible to show that, as long as the matrix A is nonsingular, there is a diffeomorphism H(x) that maps the image sets A(S) onto G(S) in a neighborhood of the point x0. If a point x lies inside of S, then G(x) will be inside of G(S). This allows a graphical interpretation of the dynamics. Figures 5.13 through 5.15 illustrate the dynamics of the docking problem from Example 5.2. In this case, G(S) = A(S), since G is linear. Starting at a state on (or inside) the set S, shown in Figure 5.13, the next state will be on (or inside, respectively) the set A(S), shown in Figure 5.14, and then the next state will be on (or inside, respectively) the set A²(S) = A(A(S)), shown in Figure 5.15. As n → ∞, the set Aⁿ(S) gradually shrinks in toward the origin.
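A short sketch of this picture (the matrix A below is an arbitrary contracting, nonsingular matrix chosen for illustration; it is not the docking-problem matrix of Example 5.2, which is not reproduced here):

```python
import numpy as np
import matplotlib.pyplot as plt

A = np.array([[0.6, -0.4],
              [0.4,  0.6]])                    # illustrative nonsingular matrix
theta = np.linspace(0.0, 2*np.pi, 200)
S = np.vstack([np.cos(theta), np.sin(theta)])  # S = {x : |x| = 1}

for k in range(3):                             # plot S, A(S), A^2(S)
    image = np.linalg.matrix_power(A, k) @ S
    plt.plot(image[0], image[1], label=f"A^{k}(S)")
plt.axis("equal"); plt.legend(); plt.show()
```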

Figure 5.13. Dynamics of the docking problem showing the initial condition $S = \{(x_1, x_2) : x_1^2 + x_2^2 = 1\}$.

Figure 5.14. Dynamics of the docking problem showing A(S) after one iteration.

Figure 5.15. Dynamics of the docking problem showing A²(S) after two iterations.

URL: https://www.sciencedirect.com/science/article/pii/B9780123869128500051

Free and mixed convection boundary-layer flow over moving surfaces

Ioan Pop, Derek B. Ingham, in Convective Heat Transfer, 2001

8.5.2 λ < 0

In this case it was found by Ingham (1986b) that no solutions of Equations (8.66)–(8.68) are possible for λ < λc (= −0.182), and this occurs at f(∞) = 0.348. As the value of f(∞) was further reduced towards zero, it was shown that a second branch of the solution curve develops. It appeared that λ ∼ −0.174 as f(∞) → 0+. Further, a study of the asymptotic solution of Equations (8.66) and (8.67) as η → ∞ shows that it must have an algebraic decay of the form

(8.73) $f \sim A_0(\eta + a_0)^{-1} + A_1(\eta + a_0)^{-2} + A_2(\eta + a_0)^{-3} + \cdots, \qquad \theta \sim B_0(\eta + a_0)^{-4} + B_1(\eta + a_0)^{-5} + B_2(\eta + a_0)^{-6} + \cdots,$

where a 0, A 0, B 0, … are constants,

(8.74) $A_0 = \tfrac{10}{3}, \qquad \lambda B_0 = \tfrac{20}{9}, \qquad \lambda B_1 = \tfrac{8}{3}A_1, \qquad \lambda B_2 = 2A_1^2, \qquad A_2 = \tfrac{3}{10}A_1^2,$

and $\lambda_\infty$ is the unknown value of λ for which f(∞) = 0.

A numerical solution of Equations (8.66) and (8.67), subject to the boundary conditions (8.68a) at η = 0 and (8.73) at $\eta = \eta_\infty$, gives

(8.75) $\lambda_\infty = -0.1739, \qquad f''(0) = -0.8873, \qquad \theta'(0) = 0.7467.$

The variation of f″(0), θ′(0) and λ as a function of f(∞) is shown in Table 8.4. This table clearly shows that the solution of Equations (8.66) and (8.67) which has, at large values of η, the algebraic decay given by expressions (8.73) is being approached, and that dual solutions exist for $\lambda_c < \lambda < \lambda_\infty$. Ingham (1986b) has demonstrated analytically that the solution of these equations is singular at $\lambda = \lambda_\infty$. Therefore, an important and novel outcome of the opposing flow case (λ < 0) of this problem is the singular nature of the solution curve of Equations (8.66)–(8.68), which terminates at f(∞) = 0. In all other similar problems where dual and singular solutions exist for nonlinear ordinary differential equations, the termination of the solution curve usually occurs in a quite predictable manner, but this does not occur in the present problem.

Table 8.4. Variation of f″(0), θ′(0) and λ as a function of f(∞) for Pr = 1.

f(∞)    f″(0)     θ′(0)    λ
0.7 −0.8479 0.8383 −0.1514
0.6 −0.8712 0.7967 −0.1672
0.5 −0.8861 0.7671 −0.1770
0.4 −0.8939 0.7486 −0.1814
0.3 −0.8955 0.7401 −0.1814
0.2 −0.8927 0.7411 −0.1783
0.1 −0.8883 0.7448 −0.1746
0.05 −0.8877 0.7460 −0.1742
0.02 −0.8874 0.7465 −0.1739
0.01 −0.8873 0.7467 −0.1739
0.0 −0.8873 0.7467 −0.1739

Typical results for f″(0), f(∞) and θ′(0) obtained from a direct numerical integration of Equations (8.66)–(8.68) are shown in Figure 8.13 for several values of λ of interest. Also shown in these figures are the limiting and asymptotic solutions. It is seen that there is excellent agreement between these solutions. Further, it is observed from Figure 8.13(a) that f″(0) > 0 on the lower branch of the solution curve for λ > 0.3817 and on the upper branch for λ > 0. This shows that for this range of values of λ the maximum fluid velocity occurs within the boundary-layer. This is not surprising, since as λ increases the buoyancy force becomes larger and it will eventually dominate over the motion caused by the plate. Also, θ′(0) > 0 for all values of λ for which solutions are possible, and this gives rise to temperature profiles in which the maximum temperature occurs at a point within the boundary-layer rather than on the plate.

Figure 8.13. Variation of (a) f″(0) and (b) θ′(0) with λ for Pr = 1. The numerical solutions are indicated by the solid lines, the asymptotic solution (8.69) for small values of λ is indicated by the broken line, and the limiting solutions are denoted by the symbol o.

It should be noted that the corresponding problem of a flat plate which moves horizontally has been studied in a similar way by Merkin and Ingham (1987). It was found that there is a unique solution for all positive values of the buoyancy parameter λ and that for negative values of λ the solution terminates in a singular manner with algebraic decay.

URL: https://www.sciencedirect.com/science/article/pii/B9780080438788500110

Simulation of Dynamic Models

Mark M. Meerschaert, in Mathematical Modeling (Fourth Edition), 2013

6.3 The Euler Method

One of the reasons we simulate dynamic models is to obtain accurate quantitative information about system behavior. For some applications the simple simulation techniques of the previous section are too imprecise. More sophisticated numerical analysis techniques are available, however, that can be used to provide accurate solutions to initial value problems for almost any differential equation model. In this section we present the simplest generally useful method for solving systems of differential equations to any desired degree of accuracy.

Example 6.3. Reconsider the RLC circuit problem of Example 5.4 in the previous chapter. Describe the behavior of this circuit.

Our analysis in Section 5.3 was successful only in determining the local behavior of the dynamical system

(6.8) $x_1' = x_1 - x_1^3 - x_2, \qquad x_2' = x_1$

in the neighborhood of (0, 0), which is the only equilibrium of this system. The equilibrium is unstable, with nearby solution curves spiraling counterclockwise and outward. A sketch of the vector field (see Fig. 5.11) reveals little new information. There is a general counterclockwise rotation to the flow, but it is hard to tell whether solution curves spiral inward, outward, or neither, in the absence of additional information.

We will use the Euler method to simulate the dynamical system in Eq. (6.8). Figure 6.19 gives an algorithm for the Euler method. Consider a continuous-time dynamical system model

Figure 6.19. Pseudocode for the Euler method.

$x' = F(x)$

with x = (x 1,…, xn ) and F = (f 1,…, fn ), along with the initial condition x(t 0) = x 0.

Starting from this initial condition, at each iteration the Euler method produces an estimate of x(t + h) based on the current estimate of x(t), using the fact that

$x(t + h) \approx x(t) + h\,F(x(t)).$

The accuracy of the Euler method increases as the step size h becomes smaller; i.e., as the number of steps N becomes larger. For small h the error in the estimate x(N) of the final value of the state variable x is roughly proportional to h. In other words, using twice as many steps (i.e., reducing h by half) produces results twice as accurate.
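A direct Python rendering of the pseudocode in Figure 6.19, applied to the system in Eq. (6.8) (the initial condition, time horizon, and step counts below are illustrative choices):

```python
import numpy as np

def F(x):
    """Right-hand side of the system in Eq. (6.8)."""
    x1, x2 = x
    return np.array([x1 - x1**3 - x2, x1])

def euler(F, x0, t0, T, N):
    """N Euler steps of size h = (T - t0)/N starting from x(t0) = x0."""
    h = (T - t0) / N
    x = np.array(x0, dtype=float)
    out = [x.copy()]
    for _ in range(N):
        x = x + h * F(x)            # x(t + h) ~ x(t) + h F(x(t))
        out.append(x.copy())
    return np.array(out)

# Accuracy check: doubling N should roughly halve the error in the final state.
for N in (1000, 2000, 4000):
    print(N, euler(F, [-1.0, -1.5], 0.0, 30.0, N)[-1])
```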

Figures 6.20 and 6.21 illustrate the results obtained by applying a computer implementation of the Euler method to Eq. (6.8). Each graph in Figs. 6.20 and 6.21 is the result of several simulation runs. For each set of initial conditions, we need to perform a sensitivity analysis on the input parameters T and N.

Figure 6.20. Graph of voltage x 2 versus current x 1 for the nonlinear RLC circuit problem: case x 1(0) = −1.0, x 2(0) = −1.5.

Figure 6.21. Graph of voltage x 2 versus current x 1 for the nonlinear RLC circuit problem: case x 1(0) = 0.1, x 2(0) = 0.3.

First, we enlarged T until any further enlargements produced essentially the same picture (the solution just cycled around a few more times). Then we enlarged N (i.e., decreased the step size) to check accuracy. If doubling N produced a graph that was indistinguishable from the one before, we judged that N was large enough for our purposes.

In Fig. 6.20 we started at x1(0) = −1, x2(0) = −1.5. The resulting solution curve spirals in toward the origin, with a counterclockwise rotation. However, before it gets too close to the origin, the solution settles into a more-or-less periodic behavior, cycling around the origin. When we start nearer the origin in Fig. 6.21, the same behavior occurs, except now the solution curve spirals outward. In both cases the solution approaches the same closed loop around the origin. This closed loop is called a limit cycle.

Figure 6.22 shows the complete phase portrait for this dynamical system. For any initial condition except (x 1, x 2) = (0, 0), the solution curve tends to the same limit cycle. If we begin inside the loop, the curve spirals outward; if we begin outside the loop, the curve moves inward. The kind of behavior we see in Fig. 6.22 is a phenomenon that cannot occur in a linear dynamical system. If a solution to a linear dynamical system spirals in toward the origin, it must spiral all the way into the origin. If it spirals outward, then it spirals all the way out to infinity. This observation has modeling implications, of course. Any dynamical system exhibiting the kind of behavior shown in Fig. 6.22 cannot be modeled adequately using linear differential equations.

Figure 6.22. Graph of voltage x 2 versus current x 1 showing the complete phase portrait for the nonlinear RLC circuit problem of Example 6.3.

The graphs in Figures 6.20–6.22 were produced using a spreadsheet implementation of the Euler method. The advantage of a spreadsheet implementation is that the computations and graphics are both performed on the same platform, and the results of changing initial conditions can be observed instantly. A simple computer program to implement this algorithm is effective, but the output is harder to interpret without graphics. Many graphing calculators and computer algebra systems also have built-in differential equation solvers, most of which are based on some variation of the Euler method. The Runge-Kutta method is one variation that uses a more sophisticated interpolation between x(t) and x(t + h); see Exercise 21 at the end of this chapter. No matter what kind of numerical method you use to solve differential equations, be sure to check your results by performing a sensitivity analysis on the parameters that control accuracy. Even the most sophisticated algorithms can produce serious errors unless they are used with care.

Next, we will perform a sensitivity analysis to determine the effect of small changes in our assumptions on our general conclusions. Here we will discuss the sensitivity to the capacitance C. Some additional questions of sensitivity and robustness are relegated to the exercises at the end of this chapter. In our example we assumed that C = 1. In the more general case we obtain the dynamical system

(6.9) $x_1' = x_1 - x_1^3 - x_2, \qquad x_2' = \dfrac{x_1}{C}.$

For any value of C > 0, the vector field is essentially the same as in Fig. 5.11. The velocity vectors are vertical on the curve x2 = x1 − x1³ and horizontal on the x2-axis. The only equilibrium is the origin, (0, 0).

The matrix of partial derivatives is

$A = \begin{pmatrix} 1 - 3x_1^2 & -1 \\ 1/C & 0 \end{pmatrix}.$

Evaluate at x 1 = 0, x 2 = 0 to obtain the linear system

(6.10) $\begin{pmatrix} x_1' \\ x_2' \end{pmatrix} = \begin{pmatrix} 1 & -1 \\ 1/C & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix},$

which approximates the behavior of our nonlinear system near the origin. To obtain the eigenvalues, we must solve

$\begin{vmatrix} \lambda - 1 & 1 \\ -1/C & \lambda \end{vmatrix} = 0,$

or λ² − λ + 1/C = 0. The eigenvalues are

(6.11) $\lambda = \dfrac{1 \pm \sqrt{1 - 4/C}}{2}.$

As long as 0 < C < 4, the quantity under the radical is negative, so we have two complex conjugate eigenvalues with positive real parts, making the origin an unstable equilibrium.

Next, we need to consider the phase portrait for the linear system. It is possible to solve the system in Eq. (6.10) in general by using the method of eigenvalues and eigenvectors, although it would be rather messy. Fortunately, in the present case it is not really necessary to determine a formula for the exact analytical solution to Eq. (6.10) in order to draw the phase portrait. We already know that the eigenvalues of this system are of the form λ = a ± ib, where a is positive. As we mentioned previously (in Section 5.1, during the discussion of step 2 for Example 5.1), this implies that the coordinates of any solution curve must be linear combinations of the two terms $e^{at}\cos(bt)$ and $e^{at}\sin(bt)$. In other words, every solution curve spirals outward. A cursory examination of the vector field for Eq. (6.10) tells us that the spirals must rotate counterclockwise. We thus see that for any 0 < C < 4, the phase portrait of the linear system in Eq. (6.10) looks much like the one in Fig. 5.12.

Our examination of the linear system in Eq. (6.10) shows that the behavior of the nonlinear system in the neighborhood of the origin must be essentially the same as in Fig. 6.22 for any value of C near the baseline case C = 1. To see what happens farther away from the origin, we need to simulate. Figures 6.23 through 6.26 show the results of simulating the dynamical system in Eq. (6.9) using the Euler method for several different values of C near 1. In each simulation run we started at the same initial condition as in Fig. 6.21.

Figure 6.23. Graph of voltage x 2 versus current x 1 for the nonlinear RLC circuit problem: case x 1(0) = 0.1, x 2(0) = 0.3, C = 0.5.

Figure 6.24. Graph of voltage x 2 versus current x 1 for the nonlinear RLC circuit problem: case x 1(0) = 0.1, x 2(0) = 0.3, C = 0.75.

Figure 6.25. Graph of voltage x 2 versus current x 1 for the nonlinear RLC circuit problem: case x 1(0) = 0.1, x 2(0) = 0.3, C = 1.5.

Figure 6.26. Graph of voltage x 2 versus current x 1 for the nonlinear RLC circuit problem: case x 1(0) = 0.1, x 2(0) = 0.3, C = 2.0.

In each case the solution curve spirals outward and is gradually attracted to a limit cycle. The limit cycle gets smaller as C increases. Several different initial conditions were used for each value of C tested (additional simulation runs are not shown). In each case, apparently, a single limit cycle attracts every solution curve away from the origin. We conclude that the RLC circuit of Example 6.3 has the behavior shown in Fig. 6.22 regardless of the exact value of the capacitance C, assuming that C is close to 1.
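These sensitivity runs are easy to reproduce with the same Euler scheme; a sketch (step counts and plotting details are my choices):

```python
import numpy as np
import matplotlib.pyplot as plt

def euler(F, x0, T, N):
    h = T / N
    x = np.array(x0, dtype=float)
    out = [x.copy()]
    for _ in range(N):
        x = x + h * F(x)
        out.append(x.copy())
    return np.array(out)

for C in (0.5, 0.75, 1.5, 2.0):
    F = lambda x, C=C: np.array([x[0] - x[0]**3 - x[1], x[0] / C])  # Eq. (6.9)
    traj = euler(F, [0.1, 0.3], 60.0, 60000)
    plt.plot(traj[:, 0], traj[:, 1], label=f"C = {C}")
plt.xlabel("x1 (current)"); plt.ylabel("x2 (voltage)")
plt.legend(); plt.show()
```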

URL: https://www.sciencedirect.com/science/article/pii/B9780123869128500063

First-order differential equations

Henry J. Ricardo, in A Modern Introduction to Differential Equations (Third Edition), 2021

Summary

An easy type of first-order ODE to solve is a separable equation, one that can be written in the form $\frac{dy}{dx} = f(x)g(y)$, where f denotes a function of x alone and g denotes a function of y alone. "Separating the variables" leads to the equation $\frac{dy}{g(y)} = f(x)\,dx$. It is possible that you cannot carry out one of the integrations in terms of elementary functions, or you may wind up with an implicit solution. Furthermore, the process of separation of variables may introduce singular solutions.

Another important type of first-order ODE is a linear equation, one that can be written in the form $a_1(x)y' + a_0(x)y = f(x)$, where $a_1(x)$, $a_0(x)$, and $f(x)$ are functions of the independent variable x alone. The standard form of such an equation is $\frac{dy}{dx} + P(x)y = Q(x)$. The equation is called homogeneous if $Q(x) \equiv 0$ and nonhomogeneous otherwise. Any homogeneous linear equation is separable.

After writing a first-order linear equation in the standard form $\frac{dy}{dx} + P(x)y = Q(x)$, we can solve it by the method of variation of parameters or by introducing an integrating factor, $\mu(x) = e^{\int P(x)\,dx}$.
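For instance (a worked example of mine, not from the text), with P(x) = 2/x and Q(x) = x on x > 0:

```latex
% Solve y' + (2/x)\,y = x by the integrating factor method:
\mu(x) = e^{\int (2/x)\,dx} = x^{2}, \qquad
(x^{2}y)' = x^{2}y' + 2xy = x^{3}
\;\Longrightarrow\; x^{2}y = \tfrac{1}{4}x^{4} + C
\;\Longrightarrow\; y = \tfrac{1}{4}x^{2} + C x^{-2}.
```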

A typical first-order differential equation can be written in the form $\frac{dy}{dx} = f(x, y)$. Graphically, this tells us that at any point (x, y) on a solution curve of the equation, the slope of the tangent line is given by the value of the function f at that point. We can outline the solution curves by piecing together such tangent line segments. Such a collection of tangent line segments is called a direction field or slope field of the equation. The set of points (x, y) such that f(x, y) = C, a constant, defines an isocline, a curve along which the slopes of the tangent lines are all the same (namely, C). In particular, the nullcline (or zero isocline) is a curve consisting of points at which the slopes of solution curves are zero. A differential equation in which the independent variable does not appear explicitly is called an autonomous equation. If the independent variable does appear, the equation is called nonautonomous. For an autonomous equation the slopes of the tangent line segments that make up the slope field depend only on the values of the dependent variable. Graphically, if we fix the value of the dependent variable, say x, by drawing a horizontal line x = C for any constant C, we see that all the tangent line segments along this line have the same slope, no matter what the value of the independent variable, say t. Another way to look at this is to realize that we can generate infinitely many solutions by taking any one solution and translating (shifting) its graph left or right. Even when we can't solve an equation, an analysis of its slope field can be very instructive. However, such a graphical analysis may miss certain important features of the integral curves, such as vertical asymptotes.

An autonomous first-order equation can be analyzed qualitatively by using a phase line or phase portrait. For an autonomous equation, the points x such that $\frac{dx}{dt} = f(x) = 0$ are called critical points. We also use the terms equilibrium points, equilibrium solutions, and stationary points to describe these key values. There are three kinds of equilibrium points for an autonomous first-order equation: sinks, sources, and nodes. An equilibrium solution y is a sink (or asymptotically stable solution) if solutions with initial conditions "sufficiently close" to y approach y as the independent variable tends to infinity. On the other hand, if solutions "sufficiently close" to an equilibrium solution y are asymptotic to y as the independent variable tends to negative infinity, then we call y a source (or unstable equilibrium solution). An equilibrium solution that shows any other kind of behavior is called a node (or semistable equilibrium solution). The First Derivative Test is a simple (but not always conclusive) test to determine the nature of equilibrium points.
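A quick illustration of the First Derivative Test (my example, not the text's):

```latex
% Example: the autonomous logistic equation dx/dt = f(x) = x(1 - x),
% with critical points x = 0 and x = 1. Since f'(x) = 1 - 2x:
f'(0) = 1 > 0 \;\Rightarrow\; x = 0 \text{ is a source (unstable);}
\qquad
f'(1) = -1 < 0 \;\Rightarrow\; x = 1 \text{ is a sink (asymptotically stable).}
```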

Suppose that we have an autonomous differential equation with a parameter α. A bifurcation point α 0 is a value of the parameter that causes a change in the nature of the equation's equilibrium solutions as α passes through the value α 0 . There are three main types of bifurcation for a first-order equation: (1) pitchfork bifurcation; (2) saddle-node bifurcation; and (3) transcritical bifurcation.

When we are trying to solve a differential equation, especially an IVP, it is important to understand whether the problem has a solution and whether any solution is unique. The Existence and Uniqueness Theorem provides simple sufficient conditions that guarantee that there is one and only one solution of an IVP. A standard proof of this result involves successive approximations, or Picard iterations.

URL: https://www.sciencedirect.com/science/article/pii/B9780128182178000099

Prolongations and Generalized Liapunov Functions

JOSEPH AUSLANDER, PETER SEIBERT, in International Symposium on Nonlinear Differential Equations and Nonlinear Mechanics, 1963

§ 1 General Concepts and Notations

Consider the system of differential equations

(1) $\dot{x} = f(x)$

where x and f are n-vectors and where f is defined in a region X of $R^n$. Let conditions be placed upon f sufficient to guarantee the existence and uniqueness of solutions through every point x0 of X, the solutions depending continuously on the initial value and being defined for all real t. Then, as is well known [3], the solution curves define a dynamical system or continuous flow on X. That is, there is a continuous map π of X × R onto X satisfying

(2) (a) $\pi(x, 0) = x$ $(x \in X)$;  (b) $\pi(\pi(x, t_1), t_2) = \pi(x, t_1 + t_2)$ $(x \in X;\ t_1, t_2 \in R)$.

The motions or orbits of the dynamical system (2) are the solution curves of (1).
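For a concrete instance (my example, not from the paper), the flow of the scalar equation $\dot{x} = -x$ is $\pi(x, t) = xe^{-t}$, and properties (2a) and (2b) can be checked directly:

```python
import numpy as np

def pi(x, t):
    """Flow of x' = -x: position at time t of the solution starting at x."""
    return x * np.exp(-t)

x, t1, t2 = 2.0, 0.7, -1.3
assert pi(x, 0.0) == x                                # (2a): pi(x, 0) = x
assert np.isclose(pi(pi(x, t1), t2), pi(x, t1 + t2))  # (2b): group property
print("flow axioms (2a), (2b) verified for x' = -x")
```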

The notions occurring in the first four sections of this paper may be formulated directly in terms of the dynamical system (2), without explicit reference to the system of differential equations which gives rise to it. Therefore, let us suppose that we are given a locally compact second countable metric space X, with metric d, and a dynamical system (2) on X. We shall suppress notationally the map π; if $x \in X$ and $t \in R$, we shall write xt in place of π(x, t).

We adopt the following definitions and notations. If $x \in X$, $xR^+ = [xt \mid t \geqq 0]$ is called the positive semiorbit of x. If $A \subset X$, $\bar{A}$ will denote the closure of A in X, and, if ε > 0,

$S_\varepsilon(A) = [\,y \in X \mid d(y, A) < \varepsilon\,].$

If $Q_0$ is a map of X into $2^X$ (the set of all subsets of X) and $A \subset X$, then

$Q_0(A) = \bigcup_{x \in A} Q_0(x).$

URL: https://www.sciencedirect.com/science/article/pii/B9780123956514500490