2.2 Optimum First Order Solution

2.2.1 Basic Assumptions

As noted earlier, equation (2.1-1) is indeterminate when the input variable is specified only by a sequence of discrete values, unless some assumption is made as to the behavior of the input variable between the discrete values. When only two discrete values (the current and one past value) are available at any one time, as is the case in virtually all first-order algorithms, the best possible assumption is that x varies linearly with time between the two discrete values. If x were significantly non-linear over the interval, then clearly two values would be insufficient to specify the function. So, we can represent the input function over an interval Δt by

x(t) = xp + (xc - xp) t/Δt,        0 ≤ t ≤ Δt
where the subscripts p and c denote "past" and "current" discrete values, corresponding to the initial and final values for that interval. The derivative of this x(t) during the interval is then just

dx/dt = (xc - xp)/Δt
so equation (2.1-1) can be rewritten

τD dy/dt + y = τN (xc - xp)/Δt + xp + (xc - xp) t/Δt        (2.2.1-1)

Now, if τD and τN are known constants, and we are given the discrete values of xp, yp, and xc for any given interval Δt, then equation (2.2.1-1) can be solved exactly for the current output value yc. This is the basis for the optimum solution for constant τ presented in Section 2.2.3.

If the coefficients τD and τN are variable, the situation is somewhat more complicated. We will consider only the case where these coefficients may be treated as functions of time, so that (2.1-1) remains linear. In this case the coefficients are specified for a given interval by their initial and final values (i.e., past and current). As with the input function x(t), this in itself is insufficient specification unless the coefficients can be assumed to vary linearly with time between the discrete instants. Therefore, we make the assumption that

τN(t) = τNP + (τNC - τNP) t/Δt        τD(t) = τDP + (τDC - τDP) t/Δt
On this basis, equation (2.2.1-1) can be written

[τDP + (τDC - τDP) t/Δt] dy/dt + y = [τNP + (τNC - τNP) t/Δt] (xc - xp)/Δt + xp + (xc - xp) t/Δt        (2.2.1-2)

where the subscripts P and C on τN and τD denote the past (initial) and current (final) values of those coefficients, in analogy with the subscripts p and c on x and y.

Knowing the initial and final values of x, τN, and τD, and the initial value of y for a given Δt, equation (2.2.1-2) can be solved exactly for the final (current) value of y. Therefore, this equation is the basis for the optimum solution to be presented in Section 2.2.4 for variable τ.
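As an illustrative sketch of these assumptions (the function and variable names below are not from the original text), the linear interpolations of x, τN, and τD over an update interval can be written in Python as:

    # Illustrative sketch of the Section 2.2.1 assumptions: the input x and the
    # coefficients tau_N and tau_D are all taken to vary linearly between their
    # past (t = 0) and current (t = dt) values over each update interval.

    def linear_interp(past, current, t, dt):
        """Linearly interpolated value at time t, with 0 <= t <= dt."""
        return past + (current - past) * (t / dt)

    # Example: evaluate the interpolants at the midpoint of a 0.1-second interval.
    dt = 0.1
    xp, xc = 2.0, 3.0            # past and current input values
    tau_np, tau_nc = 0.5, 0.6    # past and current values of tau_N
    tau_dp, tau_dc = 1.0, 0.9    # past and current values of tau_D

    t = 0.5 * dt
    x_t = linear_interp(xp, xc, t, dt)
    tau_n_t = linear_interp(tau_np, tau_nc, t, dt)
    tau_d_t = linear_interp(tau_dp, tau_dc, t, dt)
    dx_dt = (xc - xp) / dt       # the assumed (constant) derivative of x over the interval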

 

2.2.2 General First Order Recurrence Formulas

Before deriving the actual solutions of equations (2.2.1-1) and (2.2.1-2) we will consider the form these solutions must take. We require a formula that can be applied recursively to compute the current value of y based on the current value of x and the past values of x and y. We will find that, in general, the solution can be expressed as a linear combination of the three given values, i.e.,

yc = (f1) yp + (f2) xp + (f3) xc        (2.2.2-1)

where f1, f2, and f3 are all functions of τD, τN, and Δt. However, these three functions are obviously not independent, because if yp = xp = xc = m then clearly yc = m, so we have

m = m f1 + m f2 + m f3, which implies 1 = f1 + f2 + f3. Taking advantage of this fact, we can eliminate f1 from equation (2.2.2-1) to give

yc = (1 - f2 - f3) yp + (f2) xp + (f3) xc

If we then make the substitution xc = xp + (xc - xp) we have

yc = (1 - f2 - f3) yp + (f2) xp + (f3) xp + (f3)(xc - xp)

which can be written

yc = yp + A (xp - yp) + B (xc - xp)        (2.2.2-2)

where the coefficients A and B are defined as

A = f2 + f3        B = f3        (2.2.2-3)

Any valid lead/lag algorithm that computes yc as a linear function of yp, xp, and xc must be expressible in the form of equation (2.2.2-2), regardless of the assumptions made concerning the interpolation of the independent variable. Therefore, we define equation (2.2.2-2) as the standard recurrence formula for digital lead/lag simulations. In subsequent sections we will frequently identify algorithms simply by specifying their standard recurrence coefficients A and B.
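For reference, the standard recurrence formula amounts to a one-line update. The following illustrative Python sketch (the function name and argument order are not from the original text) is reused with the A and B coefficients derived in the next two sections:

    def lead_lag_step(yp, xp, xc, A, B):
        """One update of the standard recurrence formula (2.2.2-2):
        yc = yp + A*(xp - yp) + B*(xc - xp)."""
        return yp + A * (xp - yp) + B * (xc - xp)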

 

2.2.3 Optimum Recurrence Coefficients for Constant τ

When τD and τN are constant, the optimum recurrence coefficients can be determined by solving equation (2.2.1-1), which can be rewritten as follows

dy/dt + y/τD = [τN (xc - xp)/Δt + xp + (xc - xp) t/Δt] / τD
Recall that the solution of any equation of the form

dy/dt + F(t) y = G(t)        (2.2.3-1)

where F(t) and G(t) are functions of t, is given by

y e^(∫F dt) = ∫ G e^(∫F dt) dt + K        (2.2.3-2)

where K is a constant of integration. Making the substitutions

we have

y e^(t/τD) = (1/τD) ∫ [τN (xc - xp)/Δt + xp + (xc - xp) t/Δt] e^(t/τD) dt + K
Performing the integration and dividing through by e^(t/τD) gives

y(t) = τN (xc - xp)/Δt + xp + (xc - xp)(t - τD)/Δt + K e^(-t/τD)
To determine K we apply the initial condition y = yp at t = 0, which gives

K = yp - xp - (τN - τD)(xc - xp)/Δt
Inserting this back into the preceding equation and noting that y = yc at t = Δt, we arrive (after some algebraic manipulation) at the result

yc = (f1) yp + (f2) xp + (f3) xc        (2.2.3-3)

where

f1 = e^(-Δt/τD)

f2 = (τD - τN)(1 - e^(-Δt/τD))/Δt - e^(-Δt/τD)

f3 = 1 - (τD - τN)(1 - e^(-Δt/τD))/Δt
Note that, as required, the sum of f1, f2, and f3 is identically 1. Substituting the expressions for f2 and f3 into equations (2.2.2-3) gives the optimum recurrence coefficients for constant τ:

A = 1 - e^(-Δt/τD)        B = 1 - (τD - τN) A/Δt        (2.2.3-4)
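As an illustrative Python sketch (function and variable names are not from the original text, and the example values are hypothetical), the constant-τ coefficients of (2.2.3-4) can be combined with the standard recurrence (2.2.2-2) as follows:

    import math

    def constant_tau_coefficients(tau_n, tau_d, dt):
        """Recurrence coefficients A and B of (2.2.3-4) for constant tau_N and
        tau_D and update interval dt (assumes tau_d != 0 and dt > 0)."""
        A = 1.0 - math.exp(-dt / tau_d)
        B = 1.0 - (tau_d - tau_n) * A / dt
        return A, B

    # Example: drive a lag-dominant filter (tau_N = 0.5, tau_D = 2.0) with a unit step.
    A, B = constant_tau_coefficients(tau_n=0.5, tau_d=2.0, dt=0.1)
    yp, xp = 0.0, 0.0
    for _ in range(5):
        xc = 1.0                                   # step input held constant
        yc = yp + A * (xp - yp) + B * (xc - xp)    # standard recurrence (2.2.2-2)
        yp, xp = yc, xc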

 

2.2.4 Optimum Recurrence Coefficients for Variable τ

When τN and τD are variable, we base the optimum recurrence coefficients on equation (2.2.1-2). For convenience we define the following parameters

a = (τDC - τDP)/Δt        b = τDP

c = (xc - xp)(Δt + τNC - τNP)/Δt^2        d = xp + τNP (xc - xp)/Δt
In these terms equation (2.2.1-2) can be rewritten

dy/dt + y/(at + b) = (ct + d)/(at + b)        (2.2.4-1)

This is in the form of equation (2.2.3-1), so the general solution is given by equation (2.2.3-2) where

F(t) = 1/(at + b)        G(t) = (ct + d)/(at + b)
Therefore we have

y (at + b)^(1/a) = ∫ (ct + d)(at + b)^(1/a - 1) dt + K
Performing the integration and dividing through by (at + b)^(1/a) gives

y(t) = c (at + b)/[a (1 + a)] + d - c b/a + K (at + b)^(-1/a)        (2.2.4-2)

The constant of integration K is determined by the initial condition y = yp at t = 0, which gives

K = [yp - d + c b/(1 + a)] b^(1/a)
We can now compute yc by evaluating equation (2.2.4-2) at t = Δt. If we then replace a, b, c, and d with their respective definitions, we arrive at the result

yc = (f1) yp + (f2) xp + (f3) xc        (2.2.4-3)

where

Note that, as required, the sum of f1, f2, and f3 is identically 1. Substituting the expressions for f2 and f3 into equations (2.2.2-3) gives the optimum recurrence coefficients for variable τ:

A = 1 - (τDP/τDC)^(Δt/(τDC - τDP))        (2.2.4-4)

B = (Δt + τNC - τNP)(Δt - τDP A) / [Δt (Δt + τDC - τDP)] + τNP A/Δt        (2.2.4-5)
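As an illustrative Python sketch (function and variable names are not from the original text, and the example time constants are hypothetical), the variable-τ coefficients of (2.2.4-4) and (2.2.4-5) can be computed and used with the standard recurrence (2.2.2-2) as follows:

    import math

    def variable_tau_coefficients(tau_np, tau_nc, tau_dp, tau_dc, dt):
        """Recurrence coefficients A and B of (2.2.4-4) and (2.2.4-5) when tau_N
        and tau_D vary linearly over the interval dt.  Assumes tau_dp and tau_dc
        have the same sign and that dt + tau_dc - tau_dp != 0 (the a = -1 case
        discussed in Section 2.2.5)."""
        if tau_dc == tau_dp:
            # tau_D is constant over this interval, so (2.2.4-4) reduces to the
            # constant-tau expression for A.
            A = 1.0 - math.exp(-dt / tau_dp)
        else:
            A = 1.0 - (tau_dp / tau_dc) ** (dt / (tau_dc - tau_dp))
        B = ((dt + tau_nc - tau_np) * (dt - tau_dp * A)
             / (dt * (dt + tau_dc - tau_dp))
             + tau_np * A / dt)
        return A, B

    # Example: one update while tau_D ramps from 2.0 s down to 1.8 s.
    A, B = variable_tau_coefficients(tau_np=0.5, tau_nc=0.5,
                                     tau_dp=2.0, tau_dc=1.8, dt=0.1)
    yc = 0.0 + A * (0.0 - 0.0) + B * (1.0 - 0.0)   # standard recurrence (2.2.2-2)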

 

2.2.5 Discussion

When referring to the recurrence formulas (2.2.3-3) and (2.2.4-3), the term "optimum" is justified in the following sense: the total solution y(t) equals the solution of the homogeneous equation plus a particular solution for the given forcing function. The only ambiguity is in the particular solution, which depends on how we choose to interpolate the independent variable x(t). Note that the forcing function has no effect on the homogeneous solution, and that the particular solution is independent of the actual y(t) values. It follows that the coefficients of the "y terms" in the recurrence relation must be exactly as given in (2.2.3-3) and (2.2.4-3), regardless of how the x(t) function is interpolated. The only ambiguity in the recurrence formula is in the "x-term" coefficients (f2 and f3), and even those have a fully determined sum. In view of this, the term "optimum" is used throughout this document to denote a recurrence formula that has the exact "y-term" coefficient(s), and for which the "x-term" coefficients sum to the correct value.

With regard to the variable-τ solution, we would expect to find that it has as a special case the constant-τ solution, and in fact if (τNC - τNP) and (τDC - τDP) are zero then clearly equation (2.2.4-5) is equivalent to the constant-τ expression for B given by equation (2.2.3-4). However, it may not be self-evident that equation (2.2.4-4) reduces to the A of equation (2.2.3-4) for constant τ. To show that it does, we can rewrite equation (2.2.4-4) as follows

A = 1 - exp[ (Δt/(τDC - τDP)) ln(τDP/τDC) ]        (2.2.5-1)

If we now recall the series expansion of the natural log

ln(z) = (z - 1) - (z - 1)^2/2 + (z - 1)^3/3 - ...
we see that if the ratio τDP/τDC is close to 1 we can approximate the natural log in equation (2.2.5-1) very accurately by just the first term of the expansion, i.e.,

ln(τDP/τDC) ≈ (τDP/τDC) - 1 = (τDP - τDC)/τDC
in which case equation (2.2.5-1) becomes

A ≈ 1 - exp[ (Δt/(τDC - τDP)) (τDP - τDC)/τDC ] = 1 - e^(-Δt/τDC)
This makes it clear that as (τDC - τDP) goes to zero, and τDC approaches τDP, the variable-τ expression for A does in fact reduce to the constant-τ case given by equation (2.2.3-4).

We now demonstrate that the constant-τ response approaches the variable-τ response if a sufficiently small time interval Δt is used. First, notice that as Δt goes to zero the ratio of τDP to τDC can be made arbitrarily close to 1 for any finite value of dτD/dt. Also, as we have already seen, equation (2.2.4-4) is equivalent to A of equation (2.2.3-4) in the limit as τDP/τDC goes to 1. Therefore, as Δt goes to zero the A coefficient for both constant τ and variable τ is given by equation (2.2.3-4). Furthermore, if we recall the power series expansion

e^(-z) = 1 - z + z^2/2! - z^3/3! + ...
we see that as Δt goes to zero, A of equation (2.2.3-4) becomes simply A = Δt/τD. Substituting this into the expressions for B in equations (2.2.3-4) and (2.2.4-5), along with the stipulation that τNC ≈ τNP for a sufficiently small Δt, we see that both solutions give B = τN/τD. Thus, for sufficiently small Δt, the constant-τ and variable-τ solutions both reduce to the form

yc = yp + (Δt/τD)(xp - yp) + (τN/τD)(xc - xp)
Notice that if Δt is actually zero, but xc - xp does not vanish, then this can be written as

yc = yp + (τN/τD)(xc - xp)
which is the expected response to a step input, viz., an instantaneous change in x yields an instantaneous change in y with a magnitude amplified by the factor τN/τD.
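This limiting behavior is easy to check numerically. The following illustrative Python sketch (variable names and parameter values are hypothetical, not from the original text) applies a unit step over a very small interval using the constant-τ coefficients of (2.2.3-4):

    import math

    # With a very small step size, a unit step in x applied over one interval
    # changes y by approximately tau_N/tau_D, as claimed above.
    tau_n, tau_d, dt = 0.5, 2.0, 1.0e-6
    A = 1.0 - math.exp(-dt / tau_d)                # ~ dt/tau_d for small dt
    B = 1.0 - (tau_d - tau_n) * A / dt             # ~ tau_n/tau_d for small dt
    yp, xp, xc = 0.0, 0.0, 1.0                     # unit step in x
    yc = yp + A * (xp - yp) + B * (xc - xp)        # standard recurrence (2.2.2-2)
    print(yc, tau_n / tau_d)                       # both print as approximately 0.25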

Examination of equation (2.2.4-4) also shows that the variable-τ solution requires τDP and τDC to have the same sign, since if they didn't, the ratio τDP/τDC would be negative and the result of the exponentiation would, in general, be complex. Another way of stating this restriction is that τD can never pass through zero, which effectively prohibits a change of sign for a continuous function. In one sense, the "reason" for this restriction is that we divided by τD when we wrote equation (2.2.4-1). More fundamentally, equation (2.1-1) shows that when τD is zero the differential term in y vanishes and the equation is singular.

It may appear that equation (2.2.4-5) also exhibits a singularity, specifically when the parameter a = (τDC - τDP)/Δt equals -1. However, as long as the absolute value of aΔt/τDP = (τDC - τDP)/τDP is less than 1 (which corresponds to the requirement that τD never pass through zero during the interval) it can be shown that B remains analytic at a = -1, and is given by

B = (Δt + τNC - τNP)[Δt + τDC ln(τDC/τDP)]/Δt^2 + τNP/τDP
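This behavior can be checked numerically. The following illustrative Python sketch (names and parameter values are hypothetical, not from the original text) evaluates the B of (2.2.4-5) for intervals approaching the a = -1 configuration and compares it with the limiting value given above:

    import math

    def coeff_B(tau_np, tau_nc, tau_dp, tau_dc, dt):
        """The B coefficient of (2.2.4-5)."""
        A = 1.0 - (tau_dp / tau_dc) ** (dt / (tau_dc - tau_dp))
        return ((dt + tau_nc - tau_np) * (dt - tau_dp * A)
                / (dt * (dt + tau_dc - tau_dp))
                + tau_np * A / dt)

    dt, tau_dp, tau_np, tau_nc = 1.0, 2.0, 0.5, 0.5
    # Limiting value of B at a = -1 (i.e., tau_DC = tau_DP - dt), per the expression above.
    tau_dc_limit = tau_dp - dt
    B_limit = ((dt + tau_nc - tau_np)
               * (dt + tau_dc_limit * math.log(tau_dc_limit / tau_dp)) / dt**2
               + tau_np / tau_dp)
    for eps in (1e-2, 1e-4, 1e-6):
        tau_dc = tau_dp - dt + eps                 # approaches the a = -1 configuration
        print(coeff_B(tau_np, tau_nc, tau_dp, tau_dc, dt), B_limit)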
