# Introduction

The gradient adaptive lattice (GAL) filter is due to Griffiths (1977) and may be viewed as a natural extension of the least-mean-square (LMS) algorithm, since both use a stochastic-gradient approach [16]. First we derive the recursive formula for the order update; then we find the update for the desired response.

# II. Multistage Lattice Predictor [18]

Figure 2 shows a single-stage lattice predictor. The input-output relation of stage $m$ is characterized by a single parameter, the reflection coefficient $k_m$. We assume that the input is wide-sense stationary. To find $k_m$, we start with the cost function

$$J_m = E\big[|f_m(n)|^2 + |b_m(n)|^2\big], \qquad (1)$$

where $f_m(n)$ is the forward prediction error, $b_m(n)$ is the backward prediction error, and $E$ denotes the expected value. The relations for the lattice from stage $m-1$ to stage $m$ are

$$f_m(n) = f_{m-1}(n) + k_m^{*}\, b_{m-1}(n-1), \qquad (2)$$

$$b_m(n) = b_{m-1}(n-1) + k_m\, f_{m-1}(n). \qquad (3)$$

Using equations 1, 2 and 3 we have

$$J_m = \big(1+|k_m|^2\big)E\big[|f_{m-1}(n)|^2 + |b_{m-1}(n-1)|^2\big] + 2k_m^{*}E\big[b_{m-1}(n-1)f_{m-1}^{*}(n)\big] + 2k_m E\big[b_{m-1}^{*}(n-1)f_{m-1}(n)\big]. \qquad (4)$$

We want to find the value of $k_m$ that minimizes $J_m$. Differentiating,

$$\frac{\partial J_m}{\partial k_m^{*}} = k_m\,E\big[|f_{m-1}(n)|^2 + |b_{m-1}(n-1)|^2\big] + 2E\big[b_{m-1}(n-1)f_{m-1}^{*}(n)\big]. \qquad (5)$$

Equating this to zero, we find the optimum value of $k_m$ that minimizes $J_m$:

$$k_{m,o} = -\,\frac{2E\big[b_{m-1}(n-1)f_{m-1}^{*}(n)\big]}{E\big[|f_{m-1}(n)|^2 + |b_{m-1}(n-1)|^2\big]}. \qquad (6)$$

This is the Burg formula (1968). It assumes that the process is ergodic, so the expectations can be replaced by time averages. We then get, for the reflection coefficient of the $m$th stage of the lattice predictor, the estimate

$$\hat{k}_m(n) = -\,\frac{2\sum_{i=1}^{n} b_{m-1}(i-1)f_{m-1}^{*}(i)}{\sum_{i=1}^{n}\big[|f_{m-1}(i)|^2 + |b_{m-1}(i-1)|^2\big]}. \qquad (7)$$

It is clear that the estimate is data dependent. Equation 7 is a block estimator for the reflection coefficient $k_m$; we now look for a recursive formula to update it. First we define

$$E_{m-1}(n) = \sum_{i=1}^{n}\big[|f_{m-1}(i)|^2 + |b_{m-1}(i-1)|^2\big], \qquad (8)$$

which is the total energy of the forward and delayed backward errors at the input of the $m$th stage. Splitting off the last term of the sum gives the recursive formula

$$E_{m-1}(n) = E_{m-1}(n-1) + |f_{m-1}(n)|^2 + |b_{m-1}(n-1)|^2. \qquad (9)$$

To put equation 7 in recursive form, we start by writing its numerator as

$$\sum_{i=1}^{n} b_{m-1}(i-1)f_{m-1}^{*}(i) = \sum_{i=1}^{n-1} b_{m-1}(i-1)f_{m-1}^{*}(i) + b_{m-1}(n-1)f_{m-1}^{*}(n). \qquad (10)$$

Substituting equations 9 and 10 into 7, we find that

$$\hat{k}_m(n) = -\,\frac{2\Big[\sum_{i=1}^{n-1} b_{m-1}(i-1)f_{m-1}^{*}(i) + b_{m-1}(n-1)f_{m-1}^{*}(n)\Big]}{E_{m-1}(n-1) + |f_{m-1}(n)|^2 + |b_{m-1}(n-1)|^2}. \qquad (11)$$

Equation 11 is not yet in purely recursive form, so a few more steps are needed. First, use $\hat{k}_m(n-1)$ in place of $k_m$ in equations 2 and 3 and write them as

$$f_m(n) = f_{m-1}(n) + \hat{k}_m^{*}(n-1)\, b_{m-1}(n-1), \qquad (12)$$

$$b_m(n) = b_{m-1}(n-1) + \hat{k}_m(n-1)\, f_{m-1}(n). \qquad (13)$$

Second, use equations 12 and 13 together with 9 to write

$$2\,b_{m-1}(n-1)f_{m-1}^{*}(n) = b_{m-1}(n-1)\big[f_m(n) - \hat{k}_m^{*}(n-1)b_{m-1}(n-1)\big]^{*} + f_{m-1}^{*}(n)\big[b_m(n) - \hat{k}_m(n-1)f_{m-1}(n)\big]$$
$$= -\hat{k}_m(n-1)\big[E_{m-1}(n) - E_{m-1}(n-1)\big] + f_{m-1}^{*}(n)b_m(n) + b_{m-1}(n-1)f_m^{*}(n).$$

Then, using equation 7 at time $n-1$ for the first sum in the numerator of equation 11, the whole numerator becomes

$$2\sum_{i=1}^{n} b_{m-1}(i-1)f_{m-1}^{*}(i) = -\hat{k}_m(n-1)E_{m-1}(n) + f_{m-1}^{*}(n)b_m(n) + b_{m-1}(n-1)f_m^{*}(n).$$

This means

$$\hat{k}_m(n) = \hat{k}_m(n-1) - \frac{f_{m-1}^{*}(n)b_m(n) + b_{m-1}(n-1)f_m^{*}(n)}{E_{m-1}(n)}, \qquad m = 1,2,\ldots,M. \qquad (14)$$

At this point we make two modifications to equations 14 and 9, reflected in equations 15 and 16 below:

1. We introduce a step-size parameter $\tilde{\mu}$ to control the adjustment.
2. We introduce an averaging filter into the energy estimator.

$$\hat{k}_m(n) = \hat{k}_m(n-1) - \frac{\tilde{\mu}}{E_{m-1}(n)}\big[f_{m-1}^{*}(n)b_m(n) + b_{m-1}(n-1)f_m^{*}(n)\big], \qquad m = 1,2,\ldots,M, \qquad (15)$$

$$E_{m-1}(n) = \beta E_{m-1}(n-1) + (1-\beta)\big[|f_{m-1}(n)|^2 + |b_{m-1}(n-1)|^2\big]. \qquad (16)$$

Equation 16 takes into account the fact that we are dealing with a nonstationary environment with statistical variation. It equips the estimator with memory, so that the present value and the immediate past are both used.

# III. Desired Response Estimator [14]

Suppose we want to estimate a desired response $d(n)$. We consider the structure shown in figure 3, which is part of figure 1. The inputs are the backward errors $b_m(n)$ and the filter parameters are $h_m(n)$, which converge with time to give the desired response. For the estimation of the coefficients $h$ we again use the stochastic-gradient approach. The order update for the estimate of the desired response $d(n)$ is

$$y_m(n) = \sum_{k=0}^{m} h_k^{*}(n)\,b_k(n) = y_{m-1}(n) + h_m^{*}(n)\,b_m(n). \qquad (17)$$

The error is

$$e_m(n) = d(n) - y_m(n). \qquad (18)$$

The time update for the $m$th coefficient of figure 3 is

$$h_m(n+1) = h_m(n) + \frac{\mu}{\|b_m(n)\|^2}\,b_m(n)\,e_m^{*}(n), \qquad (19)$$

where $b_m(n)$ is the backward error of stage $m$ and the squared Euclidean norm of the vector of backward errors $[b_0(n), \ldots, b_m(n)]^T$ is defined as

$$\|b_m(n)\|^2 = \sum_{k=0}^{m} |b_k(n)|^2 = \|b_{m-1}(n)\|^2 + |b_m(n)|^2. \qquad (20)$$
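To make the updates concrete, the following minimal sketch (assuming NumPy) collects the recursions just derived into two helper functions. The names `gal_stage_update` and `desired_response_update`, the default step sizes, and the small constant `eps` guarding against division by zero are illustrative choices, not part of the original text.

```python
import numpy as np

def gal_stage_update(f_in, b_in_delayed, k, E, mu=0.1, beta=0.9, eps=1e-8):
    """One GAL stage at time n (equations 12, 13, 15 and 16).

    f_in         : f_{m-1}(n), forward error entering the stage
    b_in_delayed : b_{m-1}(n-1), delayed backward error entering the stage
    k            : reflection coefficient estimate k_m(n-1)
    E            : energy estimate E_{m-1}(n-1)
    """
    f_out = f_in + np.conj(k) * b_in_delayed            # equation 12
    b_out = b_in_delayed + k * f_in                      # equation 13
    E = beta * E + (1.0 - beta) * (abs(f_in) ** 2 + abs(b_in_delayed) ** 2)  # equation 16
    k = k - (mu / (E + eps)) * (np.conj(f_in) * b_out + b_in_delayed * np.conj(f_out))  # equation 15
    return f_out, b_out, k, E


def desired_response_update(h, b, d, mu=0.5, eps=1e-8):
    """Desired-response estimator of equations 17-20.

    h : coefficients h_0(n), ..., h_M(n) (updated in place and returned)
    b : backward errors b_0(n), ..., b_M(n)
    d : desired response d(n)
    """
    y, norm2 = 0.0, eps
    for m in range(len(h)):
        y = y + np.conj(h[m]) * b[m]                     # equation 17
        e = d - y                                        # equation 18
        norm2 = norm2 + abs(b[m]) ** 2                   # equation 20
        h[m] = h[m] + (mu / norm2) * b[m] * np.conj(e)   # equation 19
    return h, y, e
```

At each time step the stages are called in order $m = 1, \ldots, M$, feeding the outputs of one stage to the next, and the desired-response update then runs over the resulting backward errors.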
# IV. Adaptive Forward Linear Prediction [17]

Consider the forward predictor of order $m$ at time $n$; figure 4 shows the case of order four. The forward prediction problem is to estimate $u(i)$ at time $i$ from the vector $u_m(i-1) = [u(i-1), u(i-2), \ldots, u(i-m)]^T$ using the filter of figure 4 with tap-weight vector $w_{f,m}(n) = [w_{f,m,1}(n), w_{f,m,2}(n), \ldots, w_{f,m,m}(n)]^T$, optimized in the least-squares sense over the data available up to time $n$. The forward prediction error is

$$f_m(i) = u(i) - w_{f,m}^{H}(n)\,u_m(i-1), \qquad i = 1,2,\ldots,n. \qquad (21)$$

We refer to $f_m(i)$ as the forward a posteriori prediction error, since its value is based on the current weight vector $w_{f,m}(n)$. We define the forward a priori prediction error as

$$\eta_m(i) = u(i) - w_{f,m}^{H}(n-1)\,u_m(i-1), \qquad i = 1,2,\ldots,n, \qquad (22)$$

which is based on the past weight vector. The update formula for the weight vector of the forward predictor is the RLS recursion

$$w_{f,m}(n) = w_{f,m}(n-1) + k_m(n-1)\,\eta_m^{*}(n), \qquad (23)$$

where $k_m(n-1)$ is the gain vector defined by

$$k_m(n-1) = \Phi_m^{-1}(n-1)\,u_m(n-1), \qquad (24)$$

$$\Phi_m(n-1) = \sum_{i=1}^{n-1}\lambda^{\,n-1-i}\,u_m(i)\,u_m^{H}(i). \qquad (25)$$

In equation 24 we have the inverse of the exponentially weighted correlation matrix of the input. At this point we have described the adaptive forward prediction problem in terms of the weight vector $w_{f,m}(n)$. The forward prediction-error filter is also important, and we approach it using what we have so far. Define the $(m+1)$-by-1 vector $a_m(n)$, whose first element is one, as [15]

$$a_m(n) = \begin{bmatrix} 1 \\ -\,w_{f,m}(n) \end{bmatrix}. \qquad (26)$$

(Table 2 summarizes the notation.) With the partitioned input vector

$$u_{m+1}(i) = \begin{bmatrix} u(i) \\ u_m(i-1) \end{bmatrix},$$

the forward a posteriori and the forward a priori prediction errors become

$$f_m(i) = a_m^{H}(n)\,u_{m+1}(i), \qquad i = 1,2,\ldots,n, \qquad (27)$$

$$\eta_m(i) = a_m^{H}(n-1)\,u_{m+1}(i), \qquad i = 1,2,\ldots,n. \qquad (28)$$

The weight vector $w_{f,m}(n)$ can also be found by minimizing the exponentially weighted sum of squared forward errors; the solution expressed through $a_m(n)$ is the solution to the same minimization problem in a more convenient form. By the principle of orthogonality,

$$\sum_{i=1}^{n}\lambda^{\,n-i}\,u_m(i-1)\,f_m^{*}(i) = 0. \qquad (29)$$

The exponentially weighted forward error energy is

$$F_m(n) = \sum_{i=1}^{n}\lambda^{\,n-i}\,|f_m(i)|^2. \qquad (30)$$

Using equation 21 in equation 30, then equation 23 and the orthogonality condition of equation 29, we get the recursion

$$F_m(n) = \lambda F_m(n-1) + \eta_m(n)\,f_m^{*}(n). \qquad (31)$$

In this equation the product at the end is real valued.
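To make these quantities concrete, here is a minimal NumPy sketch of one exponentially weighted least-squares forward-prediction step. It builds the correlation matrix and gain vector of equations 24-25 directly from the data (no Riccati recursion), and applies equations 21-23 and 31. The function name, the regularization `delta` and the prewindowing convention $u(j)=0$ for $j<1$ are illustrative assumptions.

```python
import numpy as np

def forward_ls_predict(u, w_prev, F_prev, lam=0.99, delta=1e-3):
    """One exponentially weighted LS forward-prediction step at time n.

    u      : samples u(1), ..., u(n) as a 1-D array (u[0] holds u(1))
    w_prev : previous weight vector w_{f,m}(n-1), length m
    F_prev : previous weighted forward error energy F_m(n-1)
    """
    n, m = len(u), len(w_prev)

    def u_vec(i):
        # u_m(i) = [u(i), u(i-1), ..., u(i-m+1)]^T, with u(j) = 0 for j < 1
        return np.array([u[i - 1 - k] if i - k >= 1 else 0.0 for k in range(m)],
                        dtype=complex)

    # Phi_m(n-1) of equation 25, with a small regularization term delta*I (an assumption)
    Phi = delta * np.eye(m, dtype=complex)
    for i in range(1, n):                                   # i = 1, ..., n-1
        Phi += lam ** (n - 1 - i) * np.outer(u_vec(i), np.conj(u_vec(i)))
    gain = np.linalg.solve(Phi, u_vec(n - 1))               # k_m(n-1), equation 24

    eta = u[n - 1] - np.vdot(w_prev, u_vec(n - 1))          # a priori error, equation 22
    w = w_prev + gain * np.conj(eta)                        # RLS update, equation 23
    f = u[n - 1] - np.vdot(w, u_vec(n - 1))                 # a posteriori error, equation 21
    F = lam * F_prev + eta * np.conj(f)                     # equation 31
    return w, f, eta, F
```

The direct computation of $\Phi_m(n-1)$ is written for clarity only; in practice the inverse would be propagated recursively.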
# V. Adaptive Backward Linear Prediction [17]

Consider the backward linear predictor of order $m$, shown in figure 5, operating at time $n$. Its tap-weight vector is optimized in the least-squares sense over the data available up to time $n$. Let [15]

$$b_m(i) = u(i-m) - w_{b,m}^{H}(n)\,u_m(i), \qquad i = 1,2,\ldots,n, \qquad (32)$$

with

$$u_m(i) = [u(i), u(i-1), \ldots, u(i-m+1)]^T, \qquad w_{b,m}(n) = [w_{b,m,1}(n), w_{b,m,2}(n), \ldots, w_{b,m,m}(n)]^T.$$

Figure 5: Backward prediction

This is the backward prediction error for the input vector $u_m(i)$; $b_m(i)$ is the backward a posteriori prediction error, since it depends on the current value of the weight vector $w_{b,m}(n)$. We may define the backward a priori prediction error as

$$\psi_m(i) = u(i-m) - w_{b,m}^{H}(n-1)\,u_m(i), \qquad i = 1,2,\ldots,n, \qquad (33)$$

whose computation is based on the past weight vector $w_{b,m}(n-1)$. To obtain a recursion for adaptive backward linear prediction, we again use the RLS algorithm. The recursion for updating the tap-weight vector is

$$w_{b,m}(n) = w_{b,m}(n-1) + k_m(n)\,\psi_m^{*}(n). \qquad (34)$$

In equation 34, $\psi_m(n)$ is the backward a priori prediction error and

$$k_m(n) = \Phi_m^{-1}(n)\,u_m(n), \qquad (35)$$

where the matrix in equation 35 is the inverse of the exponentially weighted correlation matrix

$$\Phi_m(n) = \sum_{i=1}^{n}\lambda^{\,n-i}\,u_m(i)\,u_m^{H}(i). \qquad (36)$$

We may also analyze this problem as a backward prediction-error filter problem. In this case the tap-weight vector is the $(m+1)$-by-1 vector $c_m(n)$, whose last element $c_{m,m}(n)$ is one:

$$c_m(n) = \begin{bmatrix} -\,w_{b,m}(n) \\ 1 \end{bmatrix}. \qquad (37)$$

With the input vector $u_{m+1}(i)$ of size $m+1$, partitioned as

$$u_{m+1}(i) = \begin{bmatrix} u_m(i) \\ u(i-m) \end{bmatrix},$$

the backward a posteriori and the backward a priori prediction errors can be written as

$$b_m(i) = c_m^{H}(n)\,u_{m+1}(i), \qquad i = 1,2,\ldots,n, \qquad (38)$$

$$\psi_m(i) = c_m^{H}(n-1)\,u_{m+1}(i), \qquad i = 1,2,\ldots,n. \qquad (39)$$

By the principle of orthogonality, the tap inputs are orthogonal to the backward prediction error:

$$\sum_{i=1}^{n}\lambda^{\,n-i}\,u_m(i)\,b_m^{*}(i) = 0. \qquad (40)$$

The tap-weight vector $w_{b,m}(n)$ may also be seen as minimizing the exponentially weighted sum of squared backward errors

$$B_m(n) = \sum_{i=1}^{n}\lambda^{\,n-i}\,|b_m(i)|^2, \qquad (41)$$

and $c_m(n)$ is the solution to the same minimization problem. Using equation 32 in equation 41, then equation 34 and the orthogonality condition of equation 40, we get the recursion

$$B_m(n) = \lambda B_m(n-1) + \psi_m(n)\,b_m^{*}(n). \qquad (42)$$

To end this discussion, it is important to note that in the case of backward prediction the input vector $u_{m+1}(i)$ is partitioned with the desired response $u(i-m)$ as the last entry, whereas in the case of forward prediction it is partitioned with $u(i)$ as the first entry.

# VI. Conversion Factor [18]

First, we recall the gain vector $k_m(n) = \Phi_m^{-1}(n)\,u_m(n)$. It may be viewed as the tap-weight vector of a least-squares filter that operates on the data $u(1), u(2), \ldots, u(n)$ to produce the special response

$$\delta(i) = \begin{cases} 1, & i = n, \\ 0, & i = 1, 2, \ldots, n-1, \end{cases} \qquad (43)$$

where $\delta$ is an $n$-by-1 vector called the first coordinate vector. This vector has the property that its inner product with any time-dependent vector picks out the last element of that vector. Note that the desired response is normalized to one at $i = n$. We now define the estimation error of this filter as

$$\gamma_m(n) = 1 - k_m^{H}(n)\,u_m(n) = 1 - u_m^{H}(n)\,\Phi_m^{-1}(n)\,u_m(n), \qquad (44)$$

which is the error at time $n$ of the filter with tap weights $k_m(n)$ and input $u_m(n)$, as in figure 6. We can see from equation 44 that $\gamma_m(n)$ is real; moreover, with $\lambda$ between zero and one it is bounded as

$$0 < \gamma_m(n) \le 1. \qquad (45)$$

The same quantity can also be written in the equivalent form

$$\gamma_m(n) = \frac{1}{1 + \lambda^{-1}\,u_m^{H}(n)\,\Phi_m^{-1}(n-1)\,u_m(n)}. \qquad (46)$$

It is helpful to keep in mind that $\gamma_m(n)$ is produced by the filter of figure 6 with tap-weight vector $k_m(n)$.
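As a quick numerical check of the two expressions for the conversion factor, the following sketch (assuming NumPy) builds $\Phi_m(n-1)$ and $\Phi_m(n)$ recursively and evaluates equations 44 and 46. The regularization `delta` and the function name are illustrative assumptions.

```python
import numpy as np

def conversion_factor(U, lam=0.99, delta=1e-3):
    """Conversion factor gamma_m(n), equations 44-46.

    U : m-by-n array whose column i-1 holds the tap-input vector u_m(i), i = 1..n
    """
    m, n = U.shape
    # Phi_m(n-1), equation 36 evaluated at time n-1; delta*I is a small
    # regularization term (assumption, not in the text).
    Phi_prev = delta * np.eye(m, dtype=complex)
    for i in range(1, n):                                    # i = 1, ..., n-1
        u_i = U[:, i - 1]
        Phi_prev = lam * Phi_prev + np.outer(u_i, np.conj(u_i))
    u_n = U[:, n - 1]
    Phi = lam * Phi_prev + np.outer(u_n, np.conj(u_n))       # Phi_m(n)

    k = np.linalg.solve(Phi, u_n)                            # gain vector k_m(n), equation 35
    gamma_44 = 1.0 - np.vdot(k, u_n).real                    # equation 44
    gamma_46 = 1.0 / (1.0 + np.vdot(u_n, np.linalg.solve(Phi_prev, u_n)).real / lam)  # equation 46
    return gamma_44, gamma_46
```

Because $\Phi_m(n) = \lambda\Phi_m(n-1) + u_m(n)u_m^{H}(n)$ holds exactly in this construction, the two returned values agree to rounding error and both lie in $(0, 1]$, as equation 45 requires.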
# VII. Some Useful Interpretations of the Estimation Error [14]

Depending on the way it is used, the estimation error $\gamma_m(n)$ can be given three different interpretations:

1. It can be seen as a likelihood variable (Lee, 1981). This comes from a statistical formulation of the tap-input vector in terms of its log-likelihood function, when the input is assumed to have a joint Gaussian distribution.
2. It can be seen as an angle variable (Lee, 1981). This follows from equation 44: we may write
$$\gamma_m^{1/2}(n) = \cos\phi_m(n),$$
where $\phi_m(n)$ is the angle of a plane rotation.
3. It can be seen as a conversion factor (Carayannis, 1983): it can be used to obtain an a posteriori estimation error from the corresponding a priori estimation error. It is because of this third interpretation that we use the term conversion factor.

# VIII. Three Kinds of Estimation Error [14]

In linear least-squares estimation theory we meet three kinds of estimation error: the ordinary estimation error, the forward prediction error, and the backward prediction error. This means we have three interpretations of $\gamma_m(n)$ as a conversion factor.

1. For recursive least-squares estimation, the conversion factor equals the a posteriori estimation error divided by the a priori estimation error,
$$\gamma_m(n) = \frac{e_m(n)}{\xi_m(n)}, \qquad (47)$$
which can be seen from equation 44.
2. For adaptive forward linear prediction,
$$\gamma_m(n-1) = \frac{f_m(n)}{\eta_m(n)}. \qquad (48)$$
This can be seen by post-multiplying the Hermitian-transposed sides of equation 23 by $u_m(n-1)$ and then using equations 21, 22, 24 and 44.
3. For adaptive backward linear prediction,
$$\gamma_m(n) = \frac{b_m(n)}{\psi_m(n)}. \qquad (49)$$
As in case 2, if we post-multiply the Hermitian-transposed sides of equation 34 by $u_m(n)$ and use equations 32, 33, 35 and 44, we obtain equation 49.

The estimation error $\gamma$ can therefore be seen as a multiplicative correction: it is the common factor (either regular or delayed) in the conversion from a priori to a posteriori estimation errors, whether in ordinary estimation, forward prediction or backward prediction. We can use this conversion factor to find $e_m(n)$, $f_m(n)$ or $b_m(n)$ at time $n$ before the tap weights have been computed (Carayannis, 1983).
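The multiplicative-correction interpretation amounts to three one-line conversions, restated here as a small sketch; the names are illustrative.

```python
def a_posteriori_from_a_priori(gamma_n, gamma_n_minus_1, xi, eta, psi):
    """Equations 47-49: the conversion factor turns a priori errors into
    a posteriori errors before the new tap weights are available."""
    e = gamma_n * xi              # ordinary estimation error, equation 47
    f = gamma_n_minus_1 * eta     # forward prediction error, equation 48
    b = gamma_n * psi             # backward prediction error, equation 49
    return e, f, b
```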
# IX. Least Squares Lattice Predictor [13]

The input vector used by the prediction-error filters of order $m-1$ and the input vector $u_{m+1}(n)$ used at order $m$ have all but one entry in common: going from order $m-1$ to order $m$, the data are enlarged by the single past sample $u(n-m)$. The question is whether we can carry the information over from stage $m-1$ to stage $m$. The answer is yes, and it leads to the modular structure known as the lattice predictor. To find this important filtering structure we use the principle of orthogonality (the same result can also be reached under the umbrella of Kalman filter theory), and we arrive at the least-squares lattice predictor.

Let us begin with figure 7. The input is $u_m(n)$. The upper part is a forward prediction-error filter with tap-weight vector $a_{m-1}(n)$ and output $f_{m-1}(i)$; the lower part is a backward prediction-error filter with tap-weight vector $c_{m-1}(n)$ and output $b_{m-1}(i)$. The problem we want to solve may be stated as follows: given the forward prediction error $f_{m-1}(i)$ and the backward prediction error $b_{m-1}(i)$, find their order-updated values $f_m(i)$ and $b_m(i)$ efficiently. By efficiently we mean using the information already contained in $f_{m-1}(i)$ and $b_{m-1}(i)$, plus the fact that the input data are enlarged only by the past sample $u(i-m)$.

The past sample $u(i-m)$ needed to compute $f_m(i)$ is contained in $b_{m-1}(i-1)$. Thus, treating $b_{m-1}(i-1)$ as the input to a one-tap least-squares filter, $f_{m-1}(i)$ as the desired response, and $f_m(i)$ as the resulting least-squares approximation error, we can write

$$f_m(i) = f_{m-1}(i) + k_{f,m}^{*}(n)\,b_{m-1}(i-1), \qquad i = 1,2,\ldots,n. \qquad (50)$$

This is figure 8(a). To find the coefficient of this one-tap filter we use the principle of orthogonality: the error $f_m(i)$ produced by this filter is orthogonal to its input $b_{m-1}(i-1)$ in the exponentially weighted sense,

$$\sum_{i=1}^{n}\lambda^{\,n-i}\,b_{m-1}(i-1)\,f_m^{*}(i) = 0. \qquad (51)$$

Substituting equation 50 into equation 51 and solving for the coefficient,

$$k_{f,m}(n) = -\,\frac{\sum_{i=1}^{n}\lambda^{\,n-i}\,b_{m-1}(i-1)\,f_{m-1}^{*}(i)}{\sum_{i=1}^{n}\lambda^{\,n-i}\,|b_{m-1}(i-1)|^2}. \qquad (52)$$

In the denominator we recognize the delayed exponentially weighted backward error energy,

$$B_{m-1}(n-1) = \sum_{i=1}^{n}\lambda^{\,n-i}\,|b_{m-1}(i-1)|^2, \qquad b_{m-1}(0) = 0 \ \text{for all } m \ge 1, \qquad (53)$$

and in the numerator we introduce the exponentially weighted cross-correlation between the forward and the delayed backward prediction errors,

$$\Delta_{m-1}(n) = \sum_{i=1}^{n}\lambda^{\,n-i}\,b_{m-1}(i-1)\,f_{m-1}^{*}(i). \qquad (54)$$

Using equations 53 and 54 in equation 52, we see that the forward reflection coefficient is

$$k_{f,m}(n) = -\,\frac{\Delta_{m-1}(n)}{B_{m-1}(n-1)}. \qquad (55)$$

We use the same method to find the order update of the backward prediction error $b_m(i)$. The input of the one-tap filter is now $f_{m-1}(i)$, as in figure 8(b), and

$$b_m(i) = b_{m-1}(i-1) + k_{b,m}^{*}(n)\,f_{m-1}(i), \qquad i = 1,2,\ldots,n. \qquad (56)$$

It is clear that the corresponding orthogonality condition is

$$\sum_{i=1}^{n}\lambda^{\,n-i}\,f_{m-1}(i)\,b_m^{*}(i) = 0, \qquad (57)$$

which gives

$$k_{b,m}(n) = -\,\frac{\sum_{i=1}^{n}\lambda^{\,n-i}\,f_{m-1}(i)\,b_{m-1}^{*}(i-1)}{\sum_{i=1}^{n}\lambda^{\,n-i}\,|f_{m-1}(i)|^2}. \qquad (58)$$

Let us put

$$F_{m-1}(n) = \sum_{i=1}^{n}\lambda^{\,n-i}\,|f_{m-1}(i)|^2. \qquad (59)$$

This means equation 58 can be written as

$$k_{b,m}(n) = -\,\frac{\Delta_{m-1}^{*}(n)}{F_{m-1}(n)}. \qquad (60)$$

Equations 50 and 56 are the basic order-update recursions of the lattice predictor. For a physical interpretation we define the $n$-by-1 error vectors

$$f_m(n) = [f_m(1), f_m(2), \ldots, f_m(n)]^T, \qquad (61)$$

$$b_m(n) = [b_m(1), b_m(2), \ldots, b_m(n)]^T, \qquad b_m(n-1) = [0, b_m(1), b_m(2), \ldots, b_m(n-1)]^T. \qquad (62)$$

Based on equations 50 and 56 we may make the following statements, using the terminology of projection theory:

1. The vector $f_m(n)$ is the error that results from projecting $f_{m-1}(n)$ onto the delayed backward error vector $b_{m-1}(n-1)$; the forward reflection coefficient $k_{f,m}(n)$ is the parameter needed to make this projection.
2. The vector $b_m(n)$ is the error that results from projecting $b_{m-1}(n-1)$ onto $f_{m-1}(n)$; the backward reflection coefficient $k_{b,m}(n)$ is the parameter needed to make this second projection.

So we have the pair of interrelated order-update recursions

$$f_m(n) = f_{m-1}(n) + k_{f,m}^{*}(n)\,b_{m-1}(n-1), \qquad (64)$$

$$b_m(n) = b_{m-1}(n-1) + k_{b,m}^{*}(n)\,f_{m-1}(n), \qquad (65)$$

where $u(n)$ is the input at time $n$, the prediction order $m$ runs from one up to $M$, and the initial condition is

$$f_0(n) = b_0(n) = u(n). \qquad (63)$$

Figure 9 shows the resulting least-squares lattice predictor with $M$ stages. An important feature of the lattice structure is that its complexity grows only linearly with the order.

# X. Least Squares Lattice Version [13]

The forward and backward prediction errors are determined by equations 27 and 38 as

$$f_m(n) = a_m^{H}(n)\,u_{m+1}(n) \qquad (66)$$

and

$$b_m(n) = c_m^{H}(n)\,u_{m+1}(n). \qquad (67)$$

Using the time-shifting property of the input data, we write the partitioned vectors

$$f_{m-1}(n) = a_{m-1}^{H}(n)\,u_m(n) = \begin{bmatrix} a_{m-1}(n) \\ 0 \end{bmatrix}^{H} u_{m+1}(n), \qquad
b_{m-1}(n-1) = c_{m-1}^{H}(n-1)\,u_m(n-1) = \begin{bmatrix} 0 \\ c_{m-1}(n-1) \end{bmatrix}^{H} u_{m+1}(n).$$

Substituting these into equations 64 and 65 gives the order updates of the prediction-error filter vectors,

$$a_m(n) = \begin{bmatrix} a_{m-1}(n) \\ 0 \end{bmatrix} + k_{f,m}(n)\begin{bmatrix} 0 \\ c_{m-1}(n-1) \end{bmatrix}, \qquad
c_m(n) = \begin{bmatrix} 0 \\ c_{m-1}(n-1) \end{bmatrix} + k_{b,m}(n)\begin{bmatrix} a_{m-1}(n) \\ 0 \end{bmatrix},$$

so that

$$k_{f,m}(n) = a_{m,m}(n), \qquad k_{b,m}(n) = c_{m,0}(n),$$

where $a_{m,m}(n)$ is the last element of the vector $a_m(n)$ and $c_{m,0}(n)$ is the first element of the vector $c_m(n)$. In general we find

$$k_{f,m}(n) \ne k_{b,m}^{*}(n).$$

The order-update equations 64 and 65 show a very good property of the lattice predictor of order $M$: such a predictor contains a chain of forward prediction-error filters of orders $1, 2, \ldots, M$ and a chain of backward prediction-error filters of orders $1, 2, \ldots, M$, all in the one modular structure shown in figure 9.
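As a concrete illustration, here is a minimal NumPy sketch of one order-update stage computed in block (batch) form directly from equations 53-55, 59-60, 64 and 65. The function name and the guard constant `eps` are illustrative.

```python
import numpy as np

def lsl_order_update(f_prev, b_prev, lam=0.99, eps=1e-12):
    """Block-form order update of one least-squares lattice stage.

    f_prev : array of f_{m-1}(i), i = 1..n
    b_prev : array of b_{m-1}(i), i = 1..n
    """
    n = len(f_prev)
    w = lam ** np.arange(n - 1, -1, -1)              # weights lam^{n-i}, i = 1..n
    b_del = np.concatenate(([0.0], b_prev[:-1]))     # b_{m-1}(i-1), with b_{m-1}(0) = 0

    Delta = np.sum(w * b_del * np.conj(f_prev))      # equation 54
    B = np.sum(w * np.abs(b_del) ** 2)               # equation 53: B_{m-1}(n-1)
    F = np.sum(w * np.abs(f_prev) ** 2)              # equation 59: F_{m-1}(n)

    k_f = -Delta / (B + eps)                         # equation 55
    k_b = -np.conj(Delta) / (F + eps)                # equation 60

    f = f_prev + np.conj(k_f) * b_del                # equation 64
    b = b_del + np.conj(k_b) * f_prev                # equation 65
    return f, b, k_f, k_b
```

Starting from $f_0(i) = b_0(i) = u(i)$, calling this function repeatedly for $m = 1, \ldots, M$ builds the whole lattice of figure 9 in block form; the time-recursive version follows in the next section.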
# XI. Time Update Recursion [17]

From equations 55 and 60 we see that the reflection coefficients (forward and backward) are uniquely determined by three quantities: $\Delta_{m-1}(n)$, $B_{m-1}(n-1)$ and $F_{m-1}(n)$. Equations 31 and 42 provide the time updates for two of them. We still have to find the time-update equation for the third quantity, the exponentially weighted cross-correlation $\Delta_{m-1}(n)$.

To proceed, we recall equations 21 and 23 with $m-1$ in place of $m$:

$$f_{m-1}(i) = u(i) - w_{f,m-1}^{H}(n)\,u_{m-1}(i-1), \qquad i = 1,2,\ldots,n,$$

and

$$w_{f,m-1}(n) = w_{f,m-1}(n-1) + k_{m-1}(n-1)\,\eta_{m-1}^{*}(n).$$

Substituting these into equation 54 we get

$$\Delta_{m-1}(n) = \sum_{i=1}^{n}\lambda^{\,n-i}\big[u(i) - w_{f,m-1}^{H}(n-1)\,u_{m-1}(i-1)\big]^{*} b_{m-1}(i-1)
\;-\; \eta_{m-1}(n)\,k_{m-1}^{T}(n-1)\sum_{i=1}^{n}\lambda^{\,n-i}\,b_{m-1}(i-1)\,u_{m-1}^{*}(i-1).$$

This expression simplifies as follows. First, the second term is zero: by the principle of orthogonality (equation 40 written for order $m-1$ and delayed by one sample),

$$\sum_{i=1}^{n}\lambda^{\,n-i}\,b_{m-1}(i-1)\,u_{m-1}^{*}(i-1) = 0.$$

Second, the bracketed factor in the first term is the a priori forward prediction error of equation 22,

$$\eta_{m-1}(i) = u(i) - w_{f,m-1}^{H}(n-1)\,u_{m-1}(i-1), \qquad i = 1,2,\ldots,n.$$

This means that $\Delta$ can be written as

$$\Delta_{m-1}(n) = \sum_{i=1}^{n}\lambda^{\,n-i}\,b_{m-1}(i-1)\,\eta_{m-1}^{*}(i). \qquad (68)$$

We can split this summation as

$$\Delta_{m-1}(n) = \lambda\sum_{i=1}^{n-1}\lambda^{\,n-1-i}\,b_{m-1}(i-1)\,\eta_{m-1}^{*}(i) + b_{m-1}(n-1)\,\eta_{m-1}^{*}(n),$$

and the first term is simply $\lambda\,\Delta_{m-1}(n-1)$, so we write

$$\Delta_{m-1}(n) = \lambda\,\Delta_{m-1}(n-1) + b_{m-1}(n-1)\,\eta_{m-1}^{*}(n), \qquad (69)$$

which is the desired equation. It is similar to equations 31 and 42: in each of these three updates, the correction term is the product of an a posteriori and an a priori prediction error.

# XII. Exact Decoupling Property of the Least Squares Lattice Predictor [18]

An important property of this predictor is that the backward prediction errors at the different stages are uncorrelated with one another; in the time-averaged sense they are orthogonal. Keep in mind that the input $u(n)$ itself may be a correlated sequence, so the lattice transforms a correlated sequence into an uncorrelated one. Collect the backward a posteriori prediction errors of orders $0$ through $m$ in the $(m+1)$-by-1 vector

$$b(n) = [b_0(n), b_1(n), \ldots, b_m(n)]^T. \qquad (70)$$

The transformation from the input data to $b(n)$ is reciprocal, which means that this filtering operation preserves the information content of the input data. The tap-weight vector of the backward prediction-error filter of order $k$ is $c_k(n)$, so each backward error can be expressed in terms of the input as

$$b_k(n) = c_k^{H}(n)\,u_{k+1}(n), \qquad k = 0,1,\ldots,m. \qquad (71)$$

Substituting equation 71 into the vector of equation 70, we have the transformation [19]

$$b(n) = L_m(n)\,u_{m+1}(n), \qquad (72)$$

where $L_m(n)$ is the $(m+1)$-by-$(m+1)$ transformation matrix whose $k$th row is $c_k^{H}(n)$ padded with zeros. Because the last element of each $c_k(n)$ equals one, $L_m(n)$ is lower triangular with ones on its main diagonal; its determinant is one, so the inverse matrix exists. This confirms the reciprocal nature of the transformation in equation 72. Furthermore, the correlation between the backward prediction errors of orders $k$ and $m$ is zero for $k \ne m$: using the principle of orthogonality, the error $b_m(i)$ is perpendicular to the inputs $u_k(i)$, and therefore to $b_k(i)$, so $b_m(n)$ and $b_k(n)$ are uncorrelated in the time-averaged sense.

This property makes the lattice an ideal device for exact least-squares joint-process estimation. We may use the sequence of backward errors $b_m(n)$ produced in figure 9 to perform least-squares estimation of a desired response, as in figure 10. We may write the order update of the estimation error as

$$e_m(i) = e_{m-1}(i) - h_{m-1}^{*}(n)\,b_{m-1}(i), \qquad (75)$$

with the initial condition of the joint-process estimation

$$e_0(i) = d(i). \qquad (76)$$

The parameters $h_{m-1}(n)$ are called the joint-process estimation coefficients, or regression coefficients. The estimation of the desired response $d(n)$ thus proceeds on a stage-by-stage basis, jointly with the linear prediction process. Equation 75 is shown in figure 8(c); we use the index $i$ in the figure to be consistent with figures 8(a) and 8(b). The input is $b_{m-1}(i)$ and the desired response is $e_{m-1}(i)$ [18].
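The text does not give an explicit formula for the regression coefficients $h_{m-1}(n)$, so the closed-form exponentially weighted least-squares choice used in the following NumPy sketch is an assumption made in the spirit of the joint-process estimation just described; function and variable names are illustrative.

```python
import numpy as np

def joint_process_estimate(B, d, lam=0.99, eps=1e-12):
    """Joint-process estimation on top of the lattice (equations 75-76).

    B : 2-D array whose row m holds the backward errors b_m(i), i = 1..n,
        for m = 0..M (the stage outputs of figure 9)
    d : desired response d(i), i = 1..n
    """
    num_stages, n = B.shape
    w = lam ** np.arange(n - 1, -1, -1)        # weights lam^{n-i}
    e = d.astype(complex).copy()               # e_0(i) = d(i), equation 76
    h = np.zeros(num_stages, dtype=complex)
    for m in range(num_stages):
        b = B[m]
        # Exponentially weighted LS choice of h_m(n) (assumption, see lead-in).
        h[m] = np.sum(w * b * np.conj(e)) / (np.sum(w * np.abs(b) ** 2) + eps)
        e = e - np.conj(h[m]) * b              # order update of the error, equation 75
    return h, e
```

Because the backward errors of different orders are uncorrelated in the time-averaged sense, each coefficient can be fitted against its own stage output in this stage-by-stage fashion without disturbing the others.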
Figure 1: Lattice filter

Figure 2: Block diagram

Figure 3: The coefficients h

Figure 4: Forward prediction

Figure 6: Conversion factor

Figure 7: Block diagram

Figure 8: Recursion

# XIII. Simulation Results

In this part we use MATLAB. The desired response is the output of a first-order Wiener filter with coefficient $a = 0.3$. The input is a random signal, and it is applied both to the Wiener filter and to the lattice predictor, which is also of first order. We feed the desired signal $d(n)$ to the lattice predictor. The block diagram of the system is shown in figure 11. As we can see from the simulation results, the coefficient $h_1$ picks up the value $a = 0.3$ of the Wiener filter (figure 12).
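The MATLAB code itself is not reproduced in the text, so the following NumPy sketch re-creates the described experiment under one possible reading of it: the first-order Wiener filter is taken to mean $d(n) = a\,u(n-1)$ with $a = 0.3$, and a single-stage GAL lattice with the desired-response estimator of Sections II and III plays the role of the lattice predictor. The step sizes, data length and random seed are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, a = 20000, 0.3
u = rng.standard_normal(N)                # random (white) input u(n)
d = np.zeros(N)
d[1:] = a * u[:-1]                        # assumed reading: d(n) = a*u(n-1)

mu_k, mu_h, beta, eps = 0.05, 0.5, 0.9, 1e-8
k1, E0, b0_prev = 0.0, 0.0, 0.0           # stage-1 reflection coefficient, energy, delayed b_0
h = np.zeros(2)                           # regression coefficients h_0, h_1

for n in range(N):
    f0 = b0 = u[n]                        # stage 0: f_0(n) = b_0(n) = u(n)
    # stage 1 of the lattice (equations 12, 13, 15, 16; real-valued data)
    f1 = f0 + k1 * b0_prev
    b1 = b0_prev + k1 * f0
    E0 = beta * E0 + (1 - beta) * (f0 ** 2 + b0_prev ** 2)
    k1 = k1 - (mu_k / (E0 + eps)) * (f0 * b1 + b0_prev * f1)
    # joint-process estimation (equations 17-20)
    y, norm2 = 0.0, eps
    for m, bm in enumerate((b0, b1)):
        y += h[m] * bm                    # equation 17
        e = d[n] - y                      # equation 18
        norm2 += bm ** 2                  # equation 20
        h[m] += (mu_h / norm2) * bm * e   # equation 19
    b0_prev = b0

print(h)                                  # h[1] settles near a = 0.3, h[0] near zero
```

With these choices, $h_1$ converges to approximately 0.3 while $h_0$ stays near zero, consistent with the behavior described above.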
# References

1. A. Papoulis, Probability, Random Variables, and Stochastic Processes, 2002.
2. J. G. Proakis, Digital Communications, 2001.
3. R. J. Schilling, Engineering Analysis, 1988.
4. H. L. Van Trees, Detection, Estimation, and Modulation Theory, 1968.
5. J. Proakis, Introduction to Digital Signal Processing, 1988.
6. C. T. Chen, Linear System Theory and Design, 1984.
7. S. Haykin, Communication Systems, 1983.
8. T. H. Glisson, Introduction to System Analysis, 1985.
9. M. Schetzen, Airborne Doppler Radar, 2006.
10. M. Schetzen, The Volterra & Wiener Theories of Nonlinear Systems, 2006.
11. M. Schetzen, Discrete Systems Using MATLAB, 2004.
12. A. Grabel, Microelectronics, 1987.
13. Z. Sobih, International Journal of Engineering, vol. 7, 2013.
14. Z. Sobih, "Construction of the Sampled Signal up to any Frequency While Keeping the Sampling Rate Fixed," Signal Processing: An International Journal, vol. 7, 2013.
15. Z. Sobih, "Up/Down Converter Linear Model with Feed Forward and Feedback Stability Analysis," 2014.
16. Z. Sobih, "Generation of any PDF from a Set of Equally Likely Random Variables," Global Journal of Computer Science and Technology, vol. 14, 2014.
17. Z. Sobih, "Adaptive Filters," Global Journal of Researches in Engineering (F), vol. 14, 2014.
18. Z. Sobih, "Adaptive Filter to Pick Up a Wiener Filter from the Error with and without Noise," Global Journal of Researches in Engineering (F), vol. 15, 2015.
19. S. Haykin, Adaptive Filter Theory, 2001.