A MULTI-BEAM MODEL OF ANTENNA ARRAY PAT- TERN SYNTHESIS BASED ON CONIC TRUST REGION METHOD

Size: px
Start display at page:

Download "A MULTI-BEAM MODEL OF ANTENNA ARRAY PAT- TERN SYNTHESIS BASED ON CONIC TRUST REGION METHOD"

Transcription

1 Progress In Electromagnetics Research B, Vol. 53, 67 90, 013 A MULTI-BEAM MODEL OF ANTENNA ARRAY PAT- TERN SYNTHESIS BASED ON CONIC TRUST REGION METHOD Tiao-Jun Zeng and Quan-Yuan Feng * School of Information Science and Technology, Southwest Jiaotong University, Sichuan , China Abstract In this paper, we propose a multi-beam model for antenna array pattern synthesis AAPS problem. The model uses a conic trust region algorithm CTRA similarly proposed in this paper to optimize its cost function. Undoubtedly, whole algorithm efficiency ultimately lies on the CTRA, thereof, we propose a method to improve the iterative algorithm s efficiency. Unlike traditional trust region methods that resolve sub-problems, the CTRA efficiently searches a region via solving a inequation, by which it identifies new iteration points when a trial step is rejected. Thus, the proposed algorithm improves computational efficiency. Moreover, the CTRA has strong convergence properties with the local superlinear and quadratic convergence rate under mild conditions, and exhibits high efficiency and robustness. Finally, we apply the combinative algorithm to AAPS. Numerical results show that the method is highly robust, and computer simulations indicate that the algorithm excellently performs AAPS problem. 1. INTRODUCTION Although array pattern synthesis problems have been extensively investigated over the last several decades [1 9], most of them are single-beam methods, and very few focus on multi-beam case [10, 11]. A multi-beam antenna is an antenna which is able to create multibeams simultaneously. Since the size of antennas is fixed by physical considerations, the difficulty of accommodating them on satellites and other facilities with multiple antenna-based functions grows with the number of such functions if a separate antenna Received 15 June 013, Accepted 31 July 013, Scheduled August 013 * Corresponding author: Quan-Yuan Feng fengquanyuan@163.com.

2 68 Zeng and Feng is used for each. In this context, it is desirable for a single antenna to be able to generate multiple radiation patterns, with the excitation corresponding to each being selectable by suitable switching devices [1]. Multibeam antennas can be considered as effective approach to perform independent data streams between a transmitter and a receiver. In this paper, we propose a model to multi-beam AAPS, and this model is the extension of our previous work [7]. Under this model, the AAPS problem can be transformed into an unconstrained optimization problem. In recent years, the global optimization techniques, such as particle swarm optimization and differential evolution, have been used for AAPS problem [6, 13]. However, these algorithms are too time-consuming and limited in their application. Therefore, it is necessary to design a time-saving algorithms. Trust-region method of quadratic model for unconstrained optimization has been studied by many researchers [14 17]. Trust-region methods are robust, can be applied to ill-conditioned problems and have strong global convergence properties. Instead of quadratic approximation to objective function, Davidon [18] proposed conic model to approximate the objective function. Di and Sun [19] first presented a trust region method based on conic model for unconstrained optimization. The conic model has several advantages. First, if the objective function has strong nonquadratic behavior or its curvature changes severely, the quadratic model methods often produce a poor prediction of the minimizer of the function. In this case, conic model approximates the objective function better than a quadratic approximation, because it has more freedom in the model. Second, the quadratic model does not take into account the information concerning the function value in the previous iteration which is useful for algorithms. However, the conic model possesses richer interpolation information and satisfies four interpolation conditions of the function values and the gradient values at the current and the previous points. Using these rich interpolation information may improve the performance of the algorithms. Third, the initial and limited numerical results provided in [19] etc. show that the conic model methods give improvement over the quadratic model ones. Finally, the conic model methods have the similar global and local convergence properties as the quadratic model ones [0]. Recently, Many researchers have recently studied nonmonotone adaptive conic trust region methods NACTRM for unconstrained optimization problems [16, 19, 0]. NACTRM can automatically produce an adaptive trust region radius whenever a trial step is rejected, and decreases functional values after finite iterations. The

3 Progress In Electromagnetics Research B, Vol. 53, main disadvantage of NACTRM lies in identifying new trial iterations; it requires considerable computational time to repeatedly solve subproblems. Motivated by rectifying this shortcoming, we propose a solving-inequality conic trust region method SICTRM that adopts a different search approach at each iteration. The search direction d k = x k+1 x k is generated by solving the sub-problem of the cost function. If d k is rejected, the sub-problem does not need to be resolved. The search direction is generated by solving an inequality details in Section 3. The rest of the paper is organized as follows. Section presents the Multi-beam model. In Section 3, SICTRM is introduced and then proven as a well-defined algorithm with its convergence properties. Section 4 shows the experimental results and related discussions. Discussion and Conclusions are drawn in Sections 5 and 6, respectively. Notation: T is the transpose, is the complex conjugate, and represents the Frobenius norm.. MULTI-BEAM MODEL The array factor for a linear array with N isotropic elements can be expressed as: N pw, θ = w i e jφiθ = W T Sθ, θ [ 90, 90 ] 1 i=1 where θ is the direction of arrival of signal, j = 1, Sθ = [ 1, e jϕ θ,..., e ] jϕ θ T the steering vector, φi θ = πd i sin θ/λ the phase delay due to propagation, λ the wavelength of the transmitted signal, d i the position of the ith element of the antenna array d 1 = 0, and W = [ w 1, w,..., w N ] T the complexweight vector. Since the optimization approach is only applicable to real variable problems, a complex-to-real transform of the pw, θ is necessary. Firstly, introducing a power function P W, θ of the AF, i.e., P W, θ = pw, θ = pw, θp W, θ, then denote W 1 = [w1 1 w1... wn 1 ]T, W = [w1 w... wn ]T, Sθ = S 1 θ + js θ, S 1 θ = [1 cosφ θ... cosφ N θ ] T, S θ = [0 sinφ θ... sinφ N θ] T, W 1, W, S 1 θ and S θ R N. Thus, we obtain [7] P W, θ = [ W T 1 S 1 + W T 1 S 1 W T S + W T S + W T 1 S W T 1 S W T S 1 + W T S 1 ] R

4 70 Zeng and Feng Transition region Mainlobe region β α 1 Sidelobe region α α 3 α 4 Line 1:Multi-beam pattern Line :Array respond α 5 α θ d 1 θ θ θ θ d θ5θ6 θ7 θ8 d Figure 1. Definition of multi-beam synthesis. In the case where phase constraint is not considered, an optimal weight vector, W opt, is determined so that the pown or amplitude response P W, θ or pw, θ best approximates the desired pattern. The definition of multi-beam synthesis is illustrated in Figure 1, we propose a method for synthesizing multi-beam pattern of antenna array, it is formulated by a discreted angle penalty function based on pown response of antenna array as following: F W, A = F 1 W, A + F W, A 3 d 1 θ [ P m ] θ 1 θ [ P m ] F 1 W, A = α 1 P + + α P + F W, A = θ θ θ 1 θ 4 θ θ 3 θ θ 5 θ d α 1 d 1 [ P m ] f 1 P + + f 1 [ P f P + [ α 4 P + P f α 4 m ] + m ] + θ 3 β P θ [ P α 3 P + d θ θ 4 θ 6 θ θ 7 θ 8 θ [ P m ] + β P + f 4 P + + f 4 θ 6 θ 7 θ θ 5 α 3 α m ] 4 [ P m ] f 3 P + f 3 d 3 θ θ 8 [ α5 P

5 Progress In Electromagnetics Research B, Vol. 53, P m ] + + α 5 90 d 3 [ α 6 P P m ] + λ α 6 6 lg α i 5 where θ is the angle resolution, P = P W, θ defined by Eq., constant m > 0, β = r α i r is a given value, in general, and r 0 will satisfy the condition: β 100α i, f 1 = β α θ θ 1 θ θ 1 + α, f = β α 3 θ 3 θ 4 θ θ 4+α 3, f 3 = β α 4 θ 6 θ 5 θ θ 4+α 4, f 4 = β α 5 θ 7 θ 8 θ θ 8+α 5. P Similar roles of penalty terms f i, α j and lg α j are mentioned in reference [7]. When F W, θ ε ε is a very small constant, it obtain [ P W, θ m ] θ [ 90, θ 1 +θ 4, θ 5 +θ 8, 90], α i P W, θ + ε ω [θ, θ 3 ] + [θ 6, θ 7 ], β P W, ω ε i.e., P W, θ < α i and P W, ω = β, thereof, P SLL = 10 log P W,ω P W,θ > 10 log β α i > 0 db β > 100α i. Above analysis indicates that F ε is the sufficient condition of weight vector W converging to a optimal solution. In practice, the converging condition is too hard to achieve, because F is a bundle of many sub functions, every sub function has convergence errors generally. Thereof, it is necessary to introduce a more practical F function, here, average cost function ACF, i.e., 180/ θ to index algorithm convergency. The optimal value of the cost function, which is finally obtained by the optimizing method. In this paper, an algorithm based on hybrid trust region method is propped in next section, however, for applying the approach, we must first compute its gradient and Hessian matrices see the details in Appendix A. 3. ALGORITHM BASED ON THE CONIC TRUST REGION ALGORITHM We consider the following unconstrained optimization problem: i=1 min fx, x R n 6 where f : R n R is a continuously differentiable function. The optimization problem has become an important research focus given its wide range of potential applications. Throughout the paper, {x k } is a sequence of points generated by our algorithm. CTRA iteratively solves optimization problems. At each iteration, a trial step d k is α i

6 7 Zeng and Feng generated by solving the sub-problem min ϕ k d = fx k + gt k d 1 αk T d + 1 d T B k d 1 α T k d, α k, d R n 7 s.t. d k, 1 αk T d > 0 where g k = fx k R n, B k R n n is an approximate Hessian matrix of fx k, and k > 0 is a trust region radius. Some criteria are used to decide on accepting a trial step and the manner by which the trust region radius is adjusted. After obtaining a trial step d k, the trust region algorithms compute the ratio ρ k = fx k fx k +d k fx k ϕ k d k. Where ρ k indicates the approximate degree of fx k + d k and ϕ k d k, and ρ k 1 while k 0. In iterative computations, ρ k c 0 a positive constant means that ϕ k d k is approximate to fx k +d k, and trial step d k is accepted, so that k is enlarged to speed up the computations. On the other hand, if d k is not accepted ρ k < c 0, some methods resolve subproblem Eq. 7 by reducing the trust region radius until an acceptable step is found. Therefore, the subproblem may be solved several times during each iteration before an acceptable step is found, and these repetitious processes increase the total computational cost for large-scale problems. In the current paper, we propose a new algorithm to improve the computational efficiency of trust region methods. In the computation, B k is generated using the Broyden-Fletcher- Goldfarb-Shanno BFGS formula and is therefore is positive; d k = β k g k β k 0 is a descent direction of fx k. Meanwhile, fx k +d k ϕ k d k when d k { x x k k }, indicating that d k = β k g k is also the descent direction of ϕ k d under this condition. Thus, we obtain an approximate solution of the subproblem by solving the following inequality: gt k β kg k 1 αk T β kg k 1 β k g k T B k β k g k 1 α T k β k g k fx k fx k + β k g k 8 c 0 Let β = max{β k 8 holds}, then d k = β g k is an approximate solution of the subproblem and k = β g k is the trust region radius of the kth iteration. Algorithm 3.1 conic trust region algorithm for unconstrained optimization Step 0. Given 0.75 < c 0 < 1, c 1 > 1 and an initial symmetric positive definite matrix B 0, choose x 0 and set k = 0. Step 1. Compute g k ; if g k < ε ε is a small positive real number, then stop. Step. calculating solution d k of sub-problem Eq. 7.

7 Progress In Electromagnetics Research B, Vol. 53, Step 3. We compute ρ k. If ρ k µ, then we proceed to step 4. Otherwise, Solving inequality 8, we obtain d k = β g k and x k+1 = x k + d k. Let We then proceed to step 5. Step 4. We set k+1 = ρ k k and x k+1 = x k + d k k+1 = k, if d k < k k+1 [ k, c 1 k ], if d k = k Step 5. Generate α k+1, modify B k into B k+1 using the BFGS formula as an approximation to fx k+1, then return to step 1. Remarks. The method for generating α k+1 and B k+1 was as previously reported [16 18]. We assume that the matrices B k+1 are uniformly bounded to prove the convergence, and k, σ 0, 1 : α k k σ which ensures that the conic model function ϕ k d is bounded over the trust-region d k. We reiterate that our algorithm is reduced to a quadratic model-based algorithm if α k = 0 for all k. Under the smoothness assumptions made in this paper, note that the objective function is convex quadratic around a local minimizer. Therefore, choosing α k 0 asymptotically is suitable when x k is near the minimizer. The mild following assumptions, commonly used in the convergence analysis of most optimization algorithms [16 18], are proposed to analyze the convergence properties of Algorithm 3.1. Assumption 1. f is twice continuously differentiable and bounded below. Assumption. A strong local minimizer point x exists, such that fx is positive definite. d Set s = 1 α T d, d = s k 1+α T s. From d k, we obtain k s 1+α T s k, s k 1 + αk T s k + k α k s k + σ s, k and s k 1 σ. We consider a sequence {s k = τ k k { 1 σ 1, if g T { } k Bg k 0 τ k = min k g k 3 1 σ gk T Bg, 1, otherwise k s k Bx k, k 1 σ. Lemma 1. In equation fx k ϕ k d = gt k d g T k s 1 st Bs 1 g k min{ k 1 σ, g k B } holds. g k g k }, where. Clearly, s k k 1 σ, 1 α T k d 1 d T B k d = 1 α T k d

8 74 Zeng and Feng Proof: Case 1: g T k Bg k 0 means that τ k = 1. g T k s k 1 st k Bs k = gt k g k 1 g k min{ k 1 σ, g k B }. Case : gk T Bg k > 0. If min{ k g k 3 1 σ gk T Bg, 1} = k k 1 σ gk T s k 1 st k Bs k = g k 4 1 g k min{ k 1 σ, g k If min{ k 1 σ B }. and s k = gt k gk T Bg g k. k Thereof, k 1 σ 1 k 1 σ 1 g T k g T k Bg k k 1 σ g k g k 3 gk T Bg k gk T Bg 1 g k 4 k g k 3 gk T Bg, 1} = 1, then k k 1 σ g T k s k 1 st k Bs k = k 1 σ g k g k, then s k = gt k g k. g T k Bg k g T k Bg k = 1 g k 3 g T k Bg k 1, gk T Bg k g k 4 g T k Bg k 1 g k B g T k Bg k 1 σ k g k 3 1 k 1 σ gt k Bg k g k k 1 σ g k 1 k 1 σ 1 1 σ g k k g k 3 1 k 1 σ g k 1 g k min{ k 1 σ, g k B }. Lemma. Suppose d is a solution of Sub-problem 7; then the following inequality holds: fx k ϕ k d 1 g k min { k 1 σ, g } k B Proof. ϕ k d ϕ k d and Lemma 1, thus, the above inequality holds. Lemma 3. Suppose that Assumption holds, then M exists as a positive constant, such that fx k fx k + d k fx k ϕ k d k M d k, Proof. This proof is similar to that of Lemma 3. [17]. Lemma 4. Suppose that Assumption 1 holds, then Algorithm 3.1 is well defined. Proof. Assuming that the algorithm does not terminate at x k g k 0, then from Lemma and 3, we obtain fx k fx k + d k fx k ϕ k d k fx k ϕ k d k M d k { g k min k k 1 σ, g k B } M k { } g k min k 1 σ, g k B The definition of k, k 0 as β 0 implies that

9 Progress In Electromagnetics Research B, Vol. 53, fx k fx k +d k fx k ϕ k d k fx k ϕ k d k From the definition of ρ k, we obtain 0, i.e., fx k fx k +d k fx k ϕ k d k 1 ρ k = fx k fx k +d k fx k ϕ k d k 1 > c 0 Therefore, we can obtain trial step d k after finite computation, i.e., Algorithm 3.1 is well defined. Lemma 5. Suppose that Assumption 1 and hold, and is the sequence generated by Algorithm 3.1, then lim g k = 0 k Proof. This proof is similar to that of Lemma [7]. Lemma 6. The sequence {x k } is generated by Algorithm 3.1 and converging to x B. Suppose that lim k fx d k k d k = 0, where fx is positive definite. Therefore, the convergence rate is superlinear. Moreover, suppose that fx is Lipschitz-continuous in a neighborhood Nx, δ = { x R N x x < δ }, i.e., Lδ > 0 exists such that fx fy Lδ x y, x, y Nx, δ {x k } then quadratically converges to x. Proof. Given that B lim k fx d k k d k = 0 therefore, B k fx d k = o d k 9 Using Eq. 9 and the Taylor theorem, we obtain fx k fx k + d k fx k ϕ k d k = gt k d k 1 dt k f k d k o d k gk T + d 1 αk T d + 1 d T B k d 1 α T k d Thus, α k 0 as x k x 1 dt k Bk f d k +o d k 1 d T k B k f d k +o d k 1 d T k o d k + o d k = o d k

10 76 Zeng and Feng fx k fx k + d k fx k ϕ k d k o d k gt k d 1 α T k d + 1 d T B k d 1 α T k d o d k d k 0 as k. Therefore, 1 = fx k fx k + d k fx k ϕ k d k fx k ϕ k d k o d k 1 dt k B k, g kd k k 0, α k 0 fx k fx k +d k fx k ϕ k d k µ x k+1 = x k + d k holds for all sufficiently large k, and Algorithm 3.1 becomes the Newton method or quasi-newton method that superlinearly converges. Therefore, the sequence {x k } converges to x superlinearly. x k x, k > 0, k > k, x k Nx, δ. Set δ k = x k x. Using the mean value theorem, we obtain Thus, δ k+1 = x k+1 x = x k x + d k = δ k β k g k = δ k β k g k g β k g g = gx 1 ] = β k [ fx k δ k fx + tδ k δ k dt β k g δ k+1 = 0 [ 1 [ = β k fx k fx + tδ k ] ] δ k dt β k g 0 [ 1 β k β k 0 1 [ fx k fx + tδ k ] ] δ k dt β k g 0 fx k fx + tδ k δ k dt + β k g 1 β k Lδ δ k tdt + fx k 1 g 0 δ k+1 lim k δ k 1 Lδ β k lim gx k = g = 0 k implying that {x k } converges to x quadratically. Finally, the general algorithm is derived by minimizing the cost function, obtaining optimal weight vector W o, and outputing amplitude pattern pw o, θ. The algorithm is described as follows: Step 1. The cost function F W, A is constructed as Eq. 3. Step. Let X = W 1 W A then, computing X F and X F

11 Progress In Electromagnetics Research B, Vol. 53, Eqs. A1 and A43 respectively see the details in Appendix A. The two formulas are essential to optimizing algorithm. Finally, Algorithm 3.1 is used to minimize the cost function F X. Step 3. Base on the previous analysis, vector W converges a optimal solution, if 0, then array amplitude pattern is outputed. F 180/ θ 4. SIMULATION RESULTS In this section, several simulations are performed to test the multibeam synthesizing patterns validity Simulation 1: Comparison with NACTRM [16] Consider a synthesis problem using a 1-element half-wave-length spacing linear array with the following configurations: θ 1 = 6.0, θ = 4.5, θ 3 = 15.5, θ 4 = 14.0, θ 5 = 14.0, θ 6 = 15.5, θ 7 = 4.5 Table 1. Comparison of the SICTRM and NACTRM algorithms. No. Algorithm r m θ ACF DPMLL db PSLL db Time 1 SICTRM NACTRM SICTRM NACTRM SICTRM NACTRM SICTRM NACTRM SICTRM NACTRM SICTRM NACTRM SICTRM NACTRM SICTRM NACTRM SICTRM NACTRM SICTRM NACTRM s

12 78 Zeng and Feng and θ 8 = 6.0. The simulations based on the SICTRM and NACTRM algorithms were executed 10 times independently, the results are listed in Table 1. The two iterative algorithms optimize same cost functions in every stimulation, but the results show their distinct performance. Under NACTRM, average wasted time, average DPMLL the difference of peak mainlobe level and average PSLL the peak sidelobe level are 50.0 second, db and db, under SICTRM, they are 4.07 second, db and 1.99 db respectively. Undoubtedly, compared with NACTRM, the SICTRM algorithm exhibits stronger Normalized AF db θ degree a Log F Iteration Figure. a Radiation pattern and b cost function variety of 1- element line antenna array under SICTRM algorithm. Table. Optimal solution of 1-element line antenna array under SICTRM algorithm. Element W 1 W Element W α 1, α 3, W Element W 1 W α 5, b

13 Progress In Electromagnetics Research B, Vol. 53, search ability in these simulations. The reported minimum PSLL for synthesis of 1-element multi-beam array is 4.76 db in [11], our results is better than their result. We show the optimal normalized AF amplitude pattern, ACF variety and optimal solution in Figures a, b and Table respectively. 4.. Simulation : Comparison with DA Proposed in [7] and Dynamic Differential Evolution DDE [13] The DA method, a gradient-based optimizing algorithm, adopts a computing inequality to define step length for improving computation efficiency, the method has exhibited strong convergence properties. DDE is a new version of differential evolution algorithm, which used only one array to search the optimization. DDE significantly outperforms the differential evolution strategy in efficiency, robustness, and memory requirement [1]. Consider a synthesis problem using a 17-element half-wave-length spacing linear array with the following configurations: θ 1 = 4.0, θ = 3.0, θ 3 = 17.0, θ 4 = 16.0, θ 5 = 6.0, θ 6 = 7.0, θ 7 = 33.0 and θ 8 = Executing the three algorithm 10 times independently and listing results in Table 3 as following. Under DA, average iterative time, minimum PSLL, average DPMLL and PSLL are second, 8.75 db, db, and.98 db, respectively. Similarly, under DDE, they are second, 6.71 db, db, and 3.96 db, respectively. However, under SICTRM, they are 5.79 second, db, db, and 5.88 db, respectively. Obviously, the result indicates that the proposed algorithm has stronger ability to suppress sidelobe level. Finally, we show the optimal radiation pattern, ACF variety and optimal solution in Figures 3a, b, and Table 4, respectively Simulation 3: SICTRM Performance with 17 Mutual Coupling Elements In this simulation, we test the SICTRM performance in the presence of antenna elements coupled to each other. Taking the mutual coupling into account, the SICTRM is closer to the actual situation. The coupling coefficient Between any two antenna elements can be calculated []. In the case of an uniform linear array, these coefficients can be represented by a symmetric Toeplitz matrix [3, 4]. The mutual coupling between two elements is inversely related to their distance, and thus it is assumed that when the distance between two array elements is more than k inter-element spacing, the mutual

14 80 Zeng and Feng Table 3. Comparison of the SICTRM, DA and DDE algorithms. No. Algorithm r m θ ACF DPMLL PSLL Time db db s 1 SICTRM DA DDE SICTRM DA DDE SICTRM DA DDE SICTRM DA DDE SICTRM DA DDE SICTRM DA DDE SICTRM DA DDE SICTRM DA DDE SICTRM DA DDE SICTRM DA DDE coupling coefficients are assumed to be zero. Therefor, the mutual coupling matrix can be sufficiently modeled as follows: M = T opelitz 1, c 1, c,..., c k 1, 0 1 N k 10

15 Progress In Electromagnetics Research B, Vol. 53, Normalized AF db θ degree a LogF Iteration Figure 3. a Radiation pattern and b cost function variety of 17- element line antenna array under SICTRM. b Table 4. SICTRM. Optimal solution of 17-element line antenna array under Element W 1 W Element W α 1, α 3, W Element W 1 W α 5, Under this model, the true steering vector should be rewritten as S t = MSθ 11 Make minor modifications to the original program accordingly and execute 10 times. The results are shown in Table 5. In the simulation, the mutual coupling matrix

16 8 Zeng and Feng Table 5. SICTRM performance with 17 mutual coupling elements. No. Algorithm r m θ ACF DPMLL PSLL Time db db s 1 SICTRM SICTRM SICTRM SICTRM SICTRM SICTRM SICTRM SICTRM SICTRM SICTRM M = T opelitz 1, j, j, j, Other parameters are same as those of simulation. Under SICTRM, average iterative time, minimum PSLL, average DPMLL and PSLL are 7.96 second, 5.47 db, db and 1.57 db, respectively. The results show that our algorithm is valid even in the presence of larger array element coupling. Finally, we show the comparing radiation patterns in isolated and coupled cases, ACF variety and optimal solution in Figures 4a, b and Table 6 respectively. These two patterns are well matched. Normalized AF db Isolated coupled θ degree a Log F Iteration Figure 4. a Comparing radiation patterns and b cost function variety of 17-element line antenna array under SICTRM. b

17 Progress In Electromagnetics Research B, Vol. 53, Table 6. Optimal solution of 17-coupled-element line antenna array under SICTRM. Element W 1 W Element W α 1, α 3, W Element W 1 W α 5, DISCUSSION We have designed three simulations to validate our algorithm. Simulation results indicate that the proposed algorithm is valid to multi-beam antenna array pattern synthesis in both isolated and coupled cases. Of course, the performance of the algorithms relates to some pre-assumed parameters. In these parameters, the parameter r has impact on the convergence of the algorithm greatly. If r is too small r < 0, antenna sidelobe level will not be suppressed effectively. However, if r is too is too large, the algorithm is difficult to converge. The simulations results show that our assumptions 0 r 4 are reasonable. Generally, parameter m > 0. In the three simulations, m is more than 0.1 and less than 0.4. Similarly, if it is too big, the algorithm is difficult to converge. In theory, when the angle resolution θ is smaller, the antenna sidelobe level will be suppressed more effectively. But that will increase the computational burden. In fact, θ = 0.1 is a good choice. The parameter ACF is a convergence indicator and is assumed less than 1.

18 84 Zeng and Feng 6. CONCLUSION In this paper, we have presented a multi-beam model for antenna array pattern synthesis problem. Under the model, an antenna is able to create multibeams simultaneously. In order to quickly calculate the objective function to obtain the optimal solution, an iterative optimization algorithm named SICTRM is proposed. The SICTRM algorithm, unlike traditional trust region methods that resolve subproblems, can improve computational efficiency as indicated by the theoretical analysis and stimulation results. Computer simulations illustrate the general algorithm s good performance as it is applied to the AAPS problem. APPENDIX A. THE DERIVATION OF GRADIENT AND HESSIAN MATRIX OF COST FUNCTION The similar derivation can be seen in [7]. Let t 1 = S T 1 W 1, t = S T W, t 3 = S T W 1, and t 4 = S T 1 W, we obtain W1 P = t 1 + t S 1 + t 3 + t 4 S A1 For convenience, let F 11 = α 1 P + P m, α1 F1 = α P + m, m, F13 = f 1 P + F14 = β P, F 15 = f P + P α P f P f1 m, and F16 = α 3 P + P m. α3 Thereof, W1 F 11 = P α 1 W1 P + m α 1 P α 1 m 1 W1 P A Similarly, W1 F 1 = P α W1 P + m α P α m 1 W1 P A3 W1 F 13 = P f 1 W1 P + m f 1 P f 1 m 1 W1 P W1 F 14 = P β W1 P W1 F 15 = P f W1 P + m f P f m 1 W1 P A4 A5 A6 W1 F 16 = P α 3 W1 P + m α 3 P α 3 m 1 W1 P A7

19 Progress In Electromagnetics Research B, Vol. 53, From Eqs. A1 A7, we obtain, W1 F 1 W, A = 1 W1 F 11 + W1 F W1 F W1 F W1 F W1 F 16 A8 Similarly, W1 F W, A = 7 W1 F W1 F + 9 W1 F W1 F W1 F W1 F 6 A9 where W1 F 1 = P α 4 W1 P + m α 4 P α 4 m 1 W1 P A10 W1 F = P f 3 W1 P + m f 3 P f 3 m 1 W1 P W1 F 3 = P β W1 P W1 F 4 = P f 4 W1 P + m f 4 P f 4 m 1 W1 P A11 A1 A13 W1 F 5 = P α 5 W1 P + m α 5 P α 5 m 1 W1 P A14 W1 F 6 = P α 6 W1 P + m α 6 P α 6 m 1 W1 P A15 So, Similarly, W1 F W, A = W1 F 1 W, A + W1 F W, A W P = t 4 t 3 S 1 + t 1 + t S A16 A17 Thereof, Eq. A17 is substituted W1 P above equations correspondingly and obtains the formulations of W F 1 W, A and W F W, A. Finally, we get W F W, A= W F 1 W, A + W F W, A F 11 α 1 =α 1 P mp m α m 1 1, A18 F 13 α 1 =f 1 P f 1 α 1 mp m f m 1 1 f 1 α 1, F 14 α 1 =rβ P,

20 86 Zeng and Feng F 15 α 1 =f P f α 1 mp m f m 1 f α 1, F α 1 =f 3 P f 3 α 1 mp m f m 1 3 f 3 α 1, F 3 α 1 =rβ P, F 4 α 1 =f 4 P f 4 α 1 mp m f m 1 4 f 4 α 1, lg α 1 =1/α 1 F α 1 = 1 F 11 α F 13 α F 14 α F 15 α F α F 3 α F 4 α 1 λ/α 1 A19 Similarly, it is not difficult to compute F α i i =, 3,..., 6, finally, we can obtain A F = F α 1 F α... F α 6 T A0 Using Eqs. A16, A18 and A0, we obtain the formulation X F W, A as following: let X F = W1 F W F A F A1 Next, we calculate the Hessian matrices of the cost function. First, h 11 =t 1 +t S 1 +t 3 +t 4 S R N 1, h 1 =h 11 h T 11+h T 11 h 11 R N N h 1 =S 1 S1 T + S S T R N N, h = h 1 + h T 1 R N N h 31 =t 4 t 3 S 1 +t 1 +t S R N 1, h 3 =h 31 h T 31+h T 31 h 31 R N N then W 1 F 11 = +mm 1P m α1 h1 + P α 1 +mp m 1 α1 h A W 1 F 1 = +mm 1P m α h1 + P α +mp m 1 α h A3 W 1 F 13 = +mm 1P m f1 h1 + P f 1 +mp m 1 f1 h A4 W 1 F 14 =4h 1 + P βh A5 W 1 F 15 = +mm 1P m f m W 1 F 16 = +mm 1P m α3 m W 1 F 1 = +mm 1P m α4 m h1 + P f +mp m 1 f h A6 h1 + P α 3 +mp m 1 α h A7 h1 + P α 4 +mp m 1 α h A8 3 4

21 Progress In Electromagnetics Research B, Vol. 53, W 1 F = +mm 1P m f3 h1 + P f 3 +mp m 1 f3 h A9 W 1 F 3 = 4h 1 + P βh A30 W 1 F 4 = +mm 1P m f4 h1 + P f 4 +mp m 1 f4 h A31 W 1 F 5 = +mm 1P m α5 h1 + P α 5 +mp m 1 α5 h A3 W 1 F 6 = +mm 1P m α6 h1 + P α 6 +mp m 1 α6 h A33 W 1 F = W 1 F 1i + W 1 F i i = 1,,..., 6 A34 i i Similarly, h 3 is substituted h 1 above Eqs. A A33 correspondingly and obtains the formulations of W F 1i and W F i. Finally, we get W F = i W F 1i + i W F i i = 1,,... 6 A35 Based on the definition of Hessian matrix, we obtain A F and W 1 W F as following: F F α 1 α 1 α 1 α... F α 1 α 6 AF = α F F F α 1 α... α 6 = F F α α 1 α... F α α 6.. A F F α 6 α 6 α 1 α 6 α... F α 6 W 1 W F = = w 1 1 w 1. wn 1 F F w1 1 w 1 F w 1 w1. F wn 1 w1 w 1 F w F w1 1 w F w 1 w w 1 N. F... wn F w... F w w w 1 N w N F w N F w N

22 88 Zeng and Feng = W1 W F 1 W1 W F... W1 W F N N N F F F T W F =... A37 Similarly, w 1 w w N W 1 AF = W1 A F 1 W1 A F... W1 A F 6 N 6 A38 W W 1 F = W1 W F T A39 N N W AF = W A F 1 W A F... W A F 6 N 6 A40 AW 1 F = W1 AF T A41 6 N AW F = W AF T A4 6 N Using Eqs. A36 A41, we finally obtain W 1 F W 1 W F W 1 A F XF = W1 W F T W F W A F W1 A F T W A F T A F ACKNOWLEDGMENT N+6 N+6 A43 This work was supported by the National Natural Science Foundation of China NNSF under Grants , and , the National 863 Project of China under Grant 01AA01305, Sichuan Provincial Science and technology support Project under Grant 01GZ0101, and Chengdu Science and technology support Project under Grant 1DXYB347JH-00. REFERENCES 1. Li, X., W.-T. Li, X.-W. Shi, J. Yang, and J.-F. Yu, Modified differential evolution algorithm for pattern synthesis of antenna arrays, Progress In Electromagnetics Research, Vol. 137, , Lin, C., A.-Y. Qing, and Q.-Y. Feng, Synthesis of unequally spaced antenna arrays by using differential evolution, IEEE Transactions on Antennas and Propagation, Vol. 58, , Li, R., L. Xu, X.-W. Shi, N. Zhang, and Z.-Q. Lv, Improved differential evolution strategy for antenna array pattern synthesis

23 Progress In Electromagnetics Research B, Vol. 53, problems, Progress In Electromagnetics Research, Vol. 113, , Liu, D., Q.-Y. Feng, W.-B. Wang, and X. Yu, Synthesis of unequally spaced antenna arrays by using inheritance learning particle swarm optimization, Progress In Electromagnetics Research, Vol. 118, 05 1, Liu, Y., Z.-P. Nie, and Q. H. Liu, A new method for the synthesis of non-uniform linear arrays with shaped power patterns, Progress In Electromagnetics Research, Vol. 107, , Wang, W.-B., Q.-Y. Feng, and D. Liu, Application of chaotic particle swarm optimization algorithm to pattern synthesis of antenna arrays, Progress In Electromagnetics Research, Vol. 115, , Zeng, T. J. and Q. Feng, Penalty function solution to pattern synthesis of antenna array by a descent algorithm, Progress In Electromagnetics Research B, Vol. 49, , Caorsi, S., et al., Peak sidelobe level reduction with a hybrid approach based on GAs and difference sets, IEEE Transactions on Antennas and Propagation, Vol. 5, , Lizzi, L., G. Oliveri, and A. Massa, A time-domain approach to the synthesis of UWB antenna systems, Progress In Electromagnetics Research, Vol. 1, , Oliveri, G., Multibeam antenna arrays with common subarray layouts, IEEE Antennas and Wireless Propagation Letters, Vol. 9, , Manica, L., et al., Synthesis of multi-beam sub-arrayed antennas through an excitation matching strategy, IEEE Transactions on Antennas and Propagation, Vol. 59, 48 49, Bregains, J., et al., Synthesis of multiple-pattern planar antenna arrays with single prefixed or jointly optimized amplitude distributions, Microwave and Optical Technology Letters, Vol. 3, 74 78, Rocca, P., et al., Differential evolution as applied to electromagnetics, IEEE Antennas and Propagation Magazine, Vol. 53, 38 49, Shi, Z. and S. Wang, Nonmonotone adaptive trust region method, European Journal of Operational Research, Vol. 08, 8 36, Ahookhosh, M., et al., A nonmonotone trust-region line search method for large-scale unconstrained optimization, Applied Mathematical Modelling, Vol. 36, , 01.

24 90 Zeng and Feng 16. Zhang, J., et al., A nonmonotone adaptive trust region method for unconstrained optimization based on conic model, Applied Mathematics and Computation, Vol. 17, , Nocedal, J. and Y.-X. Yuan, Combining trust region and line search techniques, Department of Electrical Engineering and Computer Science, Northwestern University, Davidon, W. C., Conic approximations and collinear scalings for optimizers, SIAM Journal on Numerical Analysis, Vol. 17, 68 81, Di, S. and W. Sun, A trust region method for conic model to solve unconstraind optimizaions, Optimization methods and software, Vol. 6, 37 63, Ji, Y., et al., A new nonmonotone trust-region method of conic model for solving unconstrained optimization, Journal of Computational and Applied Mathematics, Vol. 33, , Deng, C., et al., Modified dynamic differential evolution for 0-1 knapsack problems, International Conference on Computational Intelligence and Software Engineering, CiSE 009, 1 4, Luebbers, R. and K. Kunz, Finite difference time domain calculations of antenna mutual coupling, IEEE Transactions on Electromagnetic Compatibility, Vol. 34, , Ye, Z. and C. Liu, Non-sensitive adaptive beamforming against mutual coupling, IET Signal Processing, Vol. 3, 1 6, Liao, B. and S.-C. Chan, Adaptive beamforming for uniform linear arrays with unknown mutual coupling, IEEE Antennas and Wireless Propagation Letters, Vol. 11, , 01.

A globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications

A globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications A globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications Weijun Zhou 28 October 20 Abstract A hybrid HS and PRP type conjugate gradient method for smooth

More information

Adaptive beamforming for uniform linear arrays with unknown mutual coupling. IEEE Antennas and Wireless Propagation Letters.

Adaptive beamforming for uniform linear arrays with unknown mutual coupling. IEEE Antennas and Wireless Propagation Letters. Title Adaptive beamforming for uniform linear arrays with unknown mutual coupling Author(s) Liao, B; Chan, SC Citation IEEE Antennas And Wireless Propagation Letters, 2012, v. 11, p. 464-467 Issued Date

More information

Global Convergence of Perry-Shanno Memoryless Quasi-Newton-type Method. 1 Introduction

Global Convergence of Perry-Shanno Memoryless Quasi-Newton-type Method. 1 Introduction ISSN 1749-3889 (print), 1749-3897 (online) International Journal of Nonlinear Science Vol.11(2011) No.2,pp.153-158 Global Convergence of Perry-Shanno Memoryless Quasi-Newton-type Method Yigui Ou, Jun Zhang

More information

Quasi-Newton methods: Symmetric rank 1 (SR1) Broyden Fletcher Goldfarb Shanno February 6, / 25 (BFG. Limited memory BFGS (L-BFGS)

Quasi-Newton methods: Symmetric rank 1 (SR1) Broyden Fletcher Goldfarb Shanno February 6, / 25 (BFG. Limited memory BFGS (L-BFGS) Quasi-Newton methods: Symmetric rank 1 (SR1) Broyden Fletcher Goldfarb Shanno (BFGS) Limited memory BFGS (L-BFGS) February 6, 2014 Quasi-Newton methods: Symmetric rank 1 (SR1) Broyden Fletcher Goldfarb

More information

230 L. HEI if ρ k is satisfactory enough, and to reduce it by a constant fraction (say, ahalf): k+1 = fi 2 k (0 <fi 2 < 1); (1.7) in the case ρ k is n

230 L. HEI if ρ k is satisfactory enough, and to reduce it by a constant fraction (say, ahalf): k+1 = fi 2 k (0 <fi 2 < 1); (1.7) in the case ρ k is n Journal of Computational Mathematics, Vol.21, No.2, 2003, 229 236. A SELF-ADAPTIVE TRUST REGION ALGORITHM Λ1) Long Hei y (Institute of Computational Mathematics and Scientific/Engineering Computing, Academy

More information

5 Quasi-Newton Methods

5 Quasi-Newton Methods Unconstrained Convex Optimization 26 5 Quasi-Newton Methods If the Hessian is unavailable... Notation: H = Hessian matrix. B is the approximation of H. C is the approximation of H 1. Problem: Solve min

More information

Algorithms for Constrained Optimization

Algorithms for Constrained Optimization 1 / 42 Algorithms for Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University April 19, 2015 2 / 42 Outline 1. Convergence 2. Sequential quadratic

More information

A MODIFIED TAGUCHI S OPTIMIZATION ALGORITHM FOR BEAMFORMING APPLICATIONS

A MODIFIED TAGUCHI S OPTIMIZATION ALGORITHM FOR BEAMFORMING APPLICATIONS Progress In Electromagnetics Research, Vol. 127, 553 569, 2012 A MODIFIED TAGUCHI S OPTIMIZATION ALGORITHM FOR BEAMFORMING APPLICATIONS Z. D. Zaharis * Telecommunications Center, Thessaloniki 54124, Greece

More information

Chapter 4. Unconstrained optimization

Chapter 4. Unconstrained optimization Chapter 4. Unconstrained optimization Version: 28-10-2012 Material: (for details see) Chapter 11 in [FKS] (pp.251-276) A reference e.g. L.11.2 refers to the corresponding Lemma in the book [FKS] PDF-file

More information

Step lengths in BFGS method for monotone gradients

Step lengths in BFGS method for monotone gradients Noname manuscript No. (will be inserted by the editor) Step lengths in BFGS method for monotone gradients Yunda Dong Received: date / Accepted: date Abstract In this paper, we consider how to directly

More information

Spectral gradient projection method for solving nonlinear monotone equations

Spectral gradient projection method for solving nonlinear monotone equations Journal of Computational and Applied Mathematics 196 (2006) 478 484 www.elsevier.com/locate/cam Spectral gradient projection method for solving nonlinear monotone equations Li Zhang, Weijun Zhou Department

More information

Higher-Order Methods

Higher-Order Methods Higher-Order Methods Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. PCMI, July 2016 Stephen Wright (UW-Madison) Higher-Order Methods PCMI, July 2016 1 / 25 Smooth

More information

Unconstrained optimization

Unconstrained optimization Chapter 4 Unconstrained optimization An unconstrained optimization problem takes the form min x Rnf(x) (4.1) for a target functional (also called objective function) f : R n R. In this chapter and throughout

More information

2. Quasi-Newton methods

2. Quasi-Newton methods L. Vandenberghe EE236C (Spring 2016) 2. Quasi-Newton methods variable metric methods quasi-newton methods BFGS update limited-memory quasi-newton methods 2-1 Newton method for unconstrained minimization

More information

Search Directions for Unconstrained Optimization

Search Directions for Unconstrained Optimization 8 CHAPTER 8 Search Directions for Unconstrained Optimization In this chapter we study the choice of search directions used in our basic updating scheme x +1 = x + t d. for solving P min f(x). x R n All

More information

Introduction. New Nonsmooth Trust Region Method for Unconstraint Locally Lipschitz Optimization Problems

Introduction. New Nonsmooth Trust Region Method for Unconstraint Locally Lipschitz Optimization Problems New Nonsmooth Trust Region Method for Unconstraint Locally Lipschitz Optimization Problems Z. Akbari 1, R. Yousefpour 2, M. R. Peyghami 3 1 Department of Mathematics, K.N. Toosi University of Technology,

More information

Global convergence of a regularized factorized quasi-newton method for nonlinear least squares problems

Global convergence of a regularized factorized quasi-newton method for nonlinear least squares problems Volume 29, N. 2, pp. 195 214, 2010 Copyright 2010 SBMAC ISSN 0101-8205 www.scielo.br/cam Global convergence of a regularized factorized quasi-newton method for nonlinear least squares problems WEIJUN ZHOU

More information

Programming, numerics and optimization

Programming, numerics and optimization Programming, numerics and optimization Lecture C-3: Unconstrained optimization II Łukasz Jankowski ljank@ippt.pan.pl Institute of Fundamental Technological Research Room 4.32, Phone +22.8261281 ext. 428

More information

GLOBAL CONVERGENCE OF CONJUGATE GRADIENT METHODS WITHOUT LINE SEARCH

GLOBAL CONVERGENCE OF CONJUGATE GRADIENT METHODS WITHOUT LINE SEARCH GLOBAL CONVERGENCE OF CONJUGATE GRADIENT METHODS WITHOUT LINE SEARCH Jie Sun 1 Department of Decision Sciences National University of Singapore, Republic of Singapore Jiapu Zhang 2 Department of Mathematics

More information

A derivative-free nonmonotone line search and its application to the spectral residual method

A derivative-free nonmonotone line search and its application to the spectral residual method IMA Journal of Numerical Analysis (2009) 29, 814 825 doi:10.1093/imanum/drn019 Advance Access publication on November 14, 2008 A derivative-free nonmonotone line search and its application to the spectral

More information

Convergence of a Two-parameter Family of Conjugate Gradient Methods with a Fixed Formula of Stepsize

Convergence of a Two-parameter Family of Conjugate Gradient Methods with a Fixed Formula of Stepsize Bol. Soc. Paran. Mat. (3s.) v. 00 0 (0000):????. c SPM ISSN-2175-1188 on line ISSN-00378712 in press SPM: www.spm.uem.br/bspm doi:10.5269/bspm.v38i6.35641 Convergence of a Two-parameter Family of Conjugate

More information

Research Article Nonlinear Conjugate Gradient Methods with Wolfe Type Line Search

Research Article Nonlinear Conjugate Gradient Methods with Wolfe Type Line Search Abstract and Applied Analysis Volume 013, Article ID 74815, 5 pages http://dx.doi.org/10.1155/013/74815 Research Article Nonlinear Conjugate Gradient Methods with Wolfe Type Line Search Yuan-Yuan Chen

More information

Methods that avoid calculating the Hessian. Nonlinear Optimization; Steepest Descent, Quasi-Newton. Steepest Descent

Methods that avoid calculating the Hessian. Nonlinear Optimization; Steepest Descent, Quasi-Newton. Steepest Descent Nonlinear Optimization Steepest Descent and Niclas Börlin Department of Computing Science Umeå University niclas.borlin@cs.umu.se A disadvantage with the Newton method is that the Hessian has to be derived

More information

Optimization: Nonlinear Optimization without Constraints. Nonlinear Optimization without Constraints 1 / 23

Optimization: Nonlinear Optimization without Constraints. Nonlinear Optimization without Constraints 1 / 23 Optimization: Nonlinear Optimization without Constraints Nonlinear Optimization without Constraints 1 / 23 Nonlinear optimization without constraints Unconstrained minimization min x f(x) where f(x) is

More information

Methods for Unconstrained Optimization Numerical Optimization Lectures 1-2

Methods for Unconstrained Optimization Numerical Optimization Lectures 1-2 Methods for Unconstrained Optimization Numerical Optimization Lectures 1-2 Coralia Cartis, University of Oxford INFOMM CDT: Modelling, Analysis and Computation of Continuous Real-World Problems Methods

More information

New hybrid conjugate gradient methods with the generalized Wolfe line search

New hybrid conjugate gradient methods with the generalized Wolfe line search Xu and Kong SpringerPlus (016)5:881 DOI 10.1186/s40064-016-5-9 METHODOLOGY New hybrid conjugate gradient methods with the generalized Wolfe line search Open Access Xiao Xu * and Fan yu Kong *Correspondence:

More information

Research Article Convex Polyhedron Method to Stability of Continuous Systems with Two Additive Time-Varying Delay Components

Research Article Convex Polyhedron Method to Stability of Continuous Systems with Two Additive Time-Varying Delay Components Applied Mathematics Volume 202, Article ID 689820, 3 pages doi:0.55/202/689820 Research Article Convex Polyhedron Method to Stability of Continuous Systems with Two Additive Time-Varying Delay Components

More information

Adaptive two-point stepsize gradient algorithm

Adaptive two-point stepsize gradient algorithm Numerical Algorithms 27: 377 385, 2001. 2001 Kluwer Academic Publishers. Printed in the Netherlands. Adaptive two-point stepsize gradient algorithm Yu-Hong Dai and Hongchao Zhang State Key Laboratory of

More information

Convex Optimization CMU-10725

Convex Optimization CMU-10725 Convex Optimization CMU-10725 Quasi Newton Methods Barnabás Póczos & Ryan Tibshirani Quasi Newton Methods 2 Outline Modified Newton Method Rank one correction of the inverse Rank two correction of the

More information

Improved Damped Quasi-Newton Methods for Unconstrained Optimization

Improved Damped Quasi-Newton Methods for Unconstrained Optimization Improved Damped Quasi-Newton Methods for Unconstrained Optimization Mehiddin Al-Baali and Lucio Grandinetti August 2015 Abstract Recently, Al-Baali (2014) has extended the damped-technique in the modified

More information

MATH 4211/6211 Optimization Quasi-Newton Method

MATH 4211/6211 Optimization Quasi-Newton Method MATH 4211/6211 Optimization Quasi-Newton Method Xiaojing Ye Department of Mathematics & Statistics Georgia State University Xiaojing Ye, Math & Stat, Georgia State University 0 Quasi-Newton Method Motivation:

More information

5 Handling Constraints

5 Handling Constraints 5 Handling Constraints Engineering design optimization problems are very rarely unconstrained. Moreover, the constraints that appear in these problems are typically nonlinear. This motivates our interest

More information

Gradient method based on epsilon algorithm for large-scale nonlinearoptimization

Gradient method based on epsilon algorithm for large-scale nonlinearoptimization ISSN 1746-7233, England, UK World Journal of Modelling and Simulation Vol. 4 (2008) No. 1, pp. 64-68 Gradient method based on epsilon algorithm for large-scale nonlinearoptimization Jianliang Li, Lian

More information

Step-size Estimation for Unconstrained Optimization Methods

Step-size Estimation for Unconstrained Optimization Methods Volume 24, N. 3, pp. 399 416, 2005 Copyright 2005 SBMAC ISSN 0101-8205 www.scielo.br/cam Step-size Estimation for Unconstrained Optimization Methods ZHEN-JUN SHI 1,2 and JIE SHEN 3 1 College of Operations

More information

Suppose that the approximate solutions of Eq. (1) satisfy the condition (3). Then (1) if η = 0 in the algorithm Trust Region, then lim inf.

Suppose that the approximate solutions of Eq. (1) satisfy the condition (3). Then (1) if η = 0 in the algorithm Trust Region, then lim inf. Maria Cameron 1. Trust Region Methods At every iteration the trust region methods generate a model m k (p), choose a trust region, and solve the constraint optimization problem of finding the minimum of

More information

Review of Classical Optimization

Review of Classical Optimization Part II Review of Classical Optimization Multidisciplinary Design Optimization of Aircrafts 51 2 Deterministic Methods 2.1 One-Dimensional Unconstrained Minimization 2.1.1 Motivation Most practical optimization

More information

Optimization and Root Finding. Kurt Hornik

Optimization and Root Finding. Kurt Hornik Optimization and Root Finding Kurt Hornik Basics Root finding and unconstrained smooth optimization are closely related: Solving ƒ () = 0 can be accomplished via minimizing ƒ () 2 Slide 2 Basics Root finding

More information

Shiqian Ma, MAT-258A: Numerical Optimization 1. Chapter 3. Gradient Method

Shiqian Ma, MAT-258A: Numerical Optimization 1. Chapter 3. Gradient Method Shiqian Ma, MAT-258A: Numerical Optimization 1 Chapter 3 Gradient Method Shiqian Ma, MAT-258A: Numerical Optimization 2 3.1. Gradient method Classical gradient method: to minimize a differentiable convex

More information

Quasi-Newton Methods. Javier Peña Convex Optimization /36-725

Quasi-Newton Methods. Javier Peña Convex Optimization /36-725 Quasi-Newton Methods Javier Peña Convex Optimization 10-725/36-725 Last time: primal-dual interior-point methods Consider the problem min x subject to f(x) Ax = b h(x) 0 Assume f, h 1,..., h m are convex

More information

DIRECTIVITY OPTIMIZATION IN PLANAR SUB-ARRAYED MONOPULSE ANTENNA

DIRECTIVITY OPTIMIZATION IN PLANAR SUB-ARRAYED MONOPULSE ANTENNA Progress In Electromagnetics Research Letters, Vol. 4, 1 7, 2008 DIRECTIVITY OPTIMIZATION IN PLANAR SUB-ARRAYED MONOPULSE ANTENNA P. Rocca, L. Manica, and A. Massa ELEDIA Department of Information Engineering

More information

Improving L-BFGS Initialization for Trust-Region Methods in Deep Learning

Improving L-BFGS Initialization for Trust-Region Methods in Deep Learning Improving L-BFGS Initialization for Trust-Region Methods in Deep Learning Jacob Rafati http://rafati.net jrafatiheravi@ucmerced.edu Ph.D. Candidate, Electrical Engineering and Computer Science University

More information

On the convergence properties of the modified Polak Ribiére Polyak method with the standard Armijo line search

On the convergence properties of the modified Polak Ribiére Polyak method with the standard Armijo line search ANZIAM J. 55 (E) pp.e79 E89, 2014 E79 On the convergence properties of the modified Polak Ribiére Polyak method with the standard Armijo line search Lijun Li 1 Weijun Zhou 2 (Received 21 May 2013; revised

More information

AM 205: lecture 19. Last time: Conditions for optimality Today: Newton s method for optimization, survey of optimization methods

AM 205: lecture 19. Last time: Conditions for optimality Today: Newton s method for optimization, survey of optimization methods AM 205: lecture 19 Last time: Conditions for optimality Today: Newton s method for optimization, survey of optimization methods Optimality Conditions: Equality Constrained Case As another example of equality

More information

AM 205: lecture 19. Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods

AM 205: lecture 19. Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods AM 205: lecture 19 Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods Quasi-Newton Methods General form of quasi-newton methods: x k+1 = x k α

More information

Newton-type Methods for Solving the Nonsmooth Equations with Finitely Many Maximum Functions

Newton-type Methods for Solving the Nonsmooth Equations with Finitely Many Maximum Functions 260 Journal of Advances in Applied Mathematics, Vol. 1, No. 4, October 2016 https://dx.doi.org/10.22606/jaam.2016.14006 Newton-type Methods for Solving the Nonsmooth Equations with Finitely Many Maximum

More information

Quasi-Newton Methods

Quasi-Newton Methods Newton s Method Pros and Cons Quasi-Newton Methods MA 348 Kurt Bryan Newton s method has some very nice properties: It s extremely fast, at least once it gets near the minimum, and with the simple modifications

More information

Quasi-Newton methods for minimization

Quasi-Newton methods for minimization Quasi-Newton methods for minimization Lectures for PHD course on Numerical optimization Enrico Bertolazzi DIMS Universitá di Trento November 21 December 14, 2011 Quasi-Newton methods for minimization 1

More information

8 Numerical methods for unconstrained problems

8 Numerical methods for unconstrained problems 8 Numerical methods for unconstrained problems Optimization is one of the important fields in numerical computation, beside solving differential equations and linear systems. We can see that these fields

More information

Introduction to Nonlinear Optimization Paul J. Atzberger

Introduction to Nonlinear Optimization Paul J. Atzberger Introduction to Nonlinear Optimization Paul J. Atzberger Comments should be sent to: atzberg@math.ucsb.edu Introduction We shall discuss in these notes a brief introduction to nonlinear optimization concepts,

More information

AN EIGENVALUE STUDY ON THE SUFFICIENT DESCENT PROPERTY OF A MODIFIED POLAK-RIBIÈRE-POLYAK CONJUGATE GRADIENT METHOD S.

AN EIGENVALUE STUDY ON THE SUFFICIENT DESCENT PROPERTY OF A MODIFIED POLAK-RIBIÈRE-POLYAK CONJUGATE GRADIENT METHOD S. Bull. Iranian Math. Soc. Vol. 40 (2014), No. 1, pp. 235 242 Online ISSN: 1735-8515 AN EIGENVALUE STUDY ON THE SUFFICIENT DESCENT PROPERTY OF A MODIFIED POLAK-RIBIÈRE-POLYAK CONJUGATE GRADIENT METHOD S.

More information

FLEXIBLE ARRAY BEAMPATTERN SYNTHESIS USING HYPERGEOMETRIC FUNCTIONS

FLEXIBLE ARRAY BEAMPATTERN SYNTHESIS USING HYPERGEOMETRIC FUNCTIONS Progress In Electromagnetics Research M, Vol. 18, 1 15, 2011 FLEXIBLE ARRAY BEAMPATTERN SYNTHESIS USING HYPERGEOMETRIC FUNCTIONS L. Tu and B. P. Ng Nanyang Technological University 50 Nanyang Ave., Singapore

More information

Part 3: Trust-region methods for unconstrained optimization. Nick Gould (RAL)

Part 3: Trust-region methods for unconstrained optimization. Nick Gould (RAL) Part 3: Trust-region methods for unconstrained optimization Nick Gould (RAL) minimize x IR n f(x) MSc course on nonlinear optimization UNCONSTRAINED MINIMIZATION minimize x IR n f(x) where the objective

More information

Quasi-Newton Methods

Quasi-Newton Methods Quasi-Newton Methods Werner C. Rheinboldt These are excerpts of material relating to the boos [OR00 and [Rhe98 and of write-ups prepared for courses held at the University of Pittsburgh. Some further references

More information

Numerical Methods for PDE-Constrained Optimization

Numerical Methods for PDE-Constrained Optimization Numerical Methods for PDE-Constrained Optimization Richard H. Byrd 1 Frank E. Curtis 2 Jorge Nocedal 2 1 University of Colorado at Boulder 2 Northwestern University Courant Institute of Mathematical Sciences,

More information

DESIGN AND OPTIMIZATION OF EQUAL SPLIT BROADBAND MICROSTRIP WILKINSON POWER DI- VIDER USING ENHANCED PARTICLE SWARM OPTI- MIZATION ALGORITHM

DESIGN AND OPTIMIZATION OF EQUAL SPLIT BROADBAND MICROSTRIP WILKINSON POWER DI- VIDER USING ENHANCED PARTICLE SWARM OPTI- MIZATION ALGORITHM Progress In Electromagnetics Research, Vol. 118, 321 334, 2011 DESIGN AND OPTIMIZATION OF EQUAL SPLIT BROADBAND MICROSTRIP WILKINSON POWER DI- VIDER USING ENHANCED PARTICLE SWARM OPTI- MIZATION ALGORITHM

More information

Notes on Numerical Optimization

Notes on Numerical Optimization Notes on Numerical Optimization University of Chicago, 2014 Viva Patel October 18, 2014 1 Contents Contents 2 List of Algorithms 4 I Fundamentals of Optimization 5 1 Overview of Numerical Optimization

More information

Second Order Optimization Algorithms I

Second Order Optimization Algorithms I Second Order Optimization Algorithms I Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Chapters 7, 8, 9 and 10 1 The

More information

Convex Optimization. Problem set 2. Due Monday April 26th

Convex Optimization. Problem set 2. Due Monday April 26th Convex Optimization Problem set 2 Due Monday April 26th 1 Gradient Decent without Line-search In this problem we will consider gradient descent with predetermined step sizes. That is, instead of determining

More information

Array Antennas. Chapter 6

Array Antennas. Chapter 6 Chapter 6 Array Antennas An array antenna is a group of antenna elements with excitations coordinated in some way to achieve desired properties for the combined radiation pattern. When designing an array

More information

Newton s Method. Ryan Tibshirani Convex Optimization /36-725

Newton s Method. Ryan Tibshirani Convex Optimization /36-725 Newton s Method Ryan Tibshirani Convex Optimization 10-725/36-725 1 Last time: dual correspondences Given a function f : R n R, we define its conjugate f : R n R, Properties and examples: f (y) = max x

More information

ON THE USE OF RANDOM VARIABLES IN PARTICLE SWARM OPTIMIZATIONS: A COMPARATIVE STUDY OF GAUSSIAN AND UNIFORM DISTRIBUTIONS

ON THE USE OF RANDOM VARIABLES IN PARTICLE SWARM OPTIMIZATIONS: A COMPARATIVE STUDY OF GAUSSIAN AND UNIFORM DISTRIBUTIONS J. of Electromagn. Waves and Appl., Vol. 23, 711 721, 2009 ON THE USE OF RANDOM VARIABLES IN PARTICLE SWARM OPTIMIZATIONS: A COMPARATIVE STUDY OF GAUSSIAN AND UNIFORM DISTRIBUTIONS L. Zhang, F. Yang, and

More information

New Inexact Line Search Method for Unconstrained Optimization 1,2

New Inexact Line Search Method for Unconstrained Optimization 1,2 journal of optimization theory and applications: Vol. 127, No. 2, pp. 425 446, November 2005 ( 2005) DOI: 10.1007/s10957-005-6553-6 New Inexact Line Search Method for Unconstrained Optimization 1,2 Z.

More information

OPER 627: Nonlinear Optimization Lecture 14: Mid-term Review

OPER 627: Nonlinear Optimization Lecture 14: Mid-term Review OPER 627: Nonlinear Optimization Lecture 14: Mid-term Review Department of Statistical Sciences and Operations Research Virginia Commonwealth University Oct 16, 2013 (Lecture 14) Nonlinear Optimization

More information

Numerical solutions of nonlinear systems of equations

Numerical solutions of nonlinear systems of equations Numerical solutions of nonlinear systems of equations Tsung-Ming Huang Department of Mathematics National Taiwan Normal University, Taiwan E-mail: min@math.ntnu.edu.tw August 28, 2011 Outline 1 Fixed points

More information

1 Numerical optimization

1 Numerical optimization Contents 1 Numerical optimization 5 1.1 Optimization of single-variable functions............ 5 1.1.1 Golden Section Search................... 6 1.1. Fibonacci Search...................... 8 1. Algorithms

More information

A new nonmonotone Newton s modification for unconstrained Optimization

A new nonmonotone Newton s modification for unconstrained Optimization A new nonmonotone Newton s modification for unconstrained Optimization Aristotelis E. Kostopoulos a George S. Androulakis b a a Department of Mathematics, University of Patras, GR-265.04, Rio, Greece b

More information

A new ane scaling interior point algorithm for nonlinear optimization subject to linear equality and inequality constraints

A new ane scaling interior point algorithm for nonlinear optimization subject to linear equality and inequality constraints Journal of Computational and Applied Mathematics 161 (003) 1 5 www.elsevier.com/locate/cam A new ane scaling interior point algorithm for nonlinear optimization subject to linear equality and inequality

More information

Static unconstrained optimization

Static unconstrained optimization Static unconstrained optimization 2 In unconstrained optimization an objective function is minimized without any additional restriction on the decision variables, i.e. min f(x) x X ad (2.) with X ad R

More information

A New High-Resolution and Stable MV-SVD Algorithm for Coherent Signals Detection

A New High-Resolution and Stable MV-SVD Algorithm for Coherent Signals Detection Progress In Electromagnetics Research M, Vol. 35, 163 171, 2014 A New High-Resolution and Stable MV-SVD Algorithm for Coherent Signals Detection Basma Eldosouky, Amr H. Hussein *, and Salah Khamis Abstract

More information

Lecture 18: November Review on Primal-dual interior-poit methods

Lecture 18: November Review on Primal-dual interior-poit methods 10-725/36-725: Convex Optimization Fall 2016 Lecturer: Lecturer: Javier Pena Lecture 18: November 2 Scribes: Scribes: Yizhu Lin, Pan Liu Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer:

More information

A ROBUST BEAMFORMER BASED ON WEIGHTED SPARSE CONSTRAINT

A ROBUST BEAMFORMER BASED ON WEIGHTED SPARSE CONSTRAINT Progress In Electromagnetics Research Letters, Vol. 16, 53 60, 2010 A ROBUST BEAMFORMER BASED ON WEIGHTED SPARSE CONSTRAINT Y. P. Liu and Q. Wan School of Electronic Engineering University of Electronic

More information

Worst-Case Complexity Guarantees and Nonconvex Smooth Optimization

Worst-Case Complexity Guarantees and Nonconvex Smooth Optimization Worst-Case Complexity Guarantees and Nonconvex Smooth Optimization Frank E. Curtis, Lehigh University Beyond Convexity Workshop, Oaxaca, Mexico 26 October 2017 Worst-Case Complexity Guarantees and Nonconvex

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 6 Optimization Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 6 Optimization Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted

More information

1618. Dynamic characteristics analysis and optimization for lateral plates of the vibration screen

1618. Dynamic characteristics analysis and optimization for lateral plates of the vibration screen 1618. Dynamic characteristics analysis and optimization for lateral plates of the vibration screen Ning Zhou Key Laboratory of Digital Medical Engineering of Hebei Province, College of Electronic and Information

More information

Line Search Methods for Unconstrained Optimisation

Line Search Methods for Unconstrained Optimisation Line Search Methods for Unconstrained Optimisation Lecture 8, Numerical Linear Algebra and Optimisation Oxford University Computing Laboratory, MT 2007 Dr Raphael Hauser (hauser@comlab.ox.ac.uk) The Generic

More information

Comparative study of Optimization methods for Unconstrained Multivariable Nonlinear Programming Problems

Comparative study of Optimization methods for Unconstrained Multivariable Nonlinear Programming Problems International Journal of Scientific and Research Publications, Volume 3, Issue 10, October 013 1 ISSN 50-3153 Comparative study of Optimization methods for Unconstrained Multivariable Nonlinear Programming

More information

Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization

Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization Frank E. Curtis, Lehigh University involving joint work with James V. Burke, University of Washington Daniel

More information

Unconstrained minimization of smooth functions

Unconstrained minimization of smooth functions Unconstrained minimization of smooth functions We want to solve min x R N f(x), where f is convex. In this section, we will assume that f is differentiable (so its gradient exists at every point), and

More information

MATH 4211/6211 Optimization Basics of Optimization Problems

MATH 4211/6211 Optimization Basics of Optimization Problems MATH 4211/6211 Optimization Basics of Optimization Problems Xiaojing Ye Department of Mathematics & Statistics Georgia State University Xiaojing Ye, Math & Stat, Georgia State University 0 A standard minimization

More information

Handling nonpositive curvature in a limited memory steepest descent method

Handling nonpositive curvature in a limited memory steepest descent method IMA Journal of Numerical Analysis (2016) 36, 717 742 doi:10.1093/imanum/drv034 Advance Access publication on July 8, 2015 Handling nonpositive curvature in a limited memory steepest descent method Frank

More information

1. Introduction. We analyze a trust region version of Newton s method for the optimization problem

1. Introduction. We analyze a trust region version of Newton s method for the optimization problem SIAM J. OPTIM. Vol. 9, No. 4, pp. 1100 1127 c 1999 Society for Industrial and Applied Mathematics NEWTON S METHOD FOR LARGE BOUND-CONSTRAINED OPTIMIZATION PROBLEMS CHIH-JEN LIN AND JORGE J. MORÉ To John

More information

Pure Strategy or Mixed Strategy?

Pure Strategy or Mixed Strategy? Pure Strategy or Mixed Strategy? Jun He, Feidun He, Hongbin Dong arxiv:257v4 [csne] 4 Apr 204 Abstract Mixed strategy evolutionary algorithms EAs) aim at integrating several mutation operators into a single

More information

Numerical Optimization Professor Horst Cerjak, Horst Bischof, Thomas Pock Mat Vis-Gra SS09

Numerical Optimization Professor Horst Cerjak, Horst Bischof, Thomas Pock Mat Vis-Gra SS09 Numerical Optimization 1 Working Horse in Computer Vision Variational Methods Shape Analysis Machine Learning Markov Random Fields Geometry Common denominator: optimization problems 2 Overview of Methods

More information

ORIE 6326: Convex Optimization. Quasi-Newton Methods

ORIE 6326: Convex Optimization. Quasi-Newton Methods ORIE 6326: Convex Optimization Quasi-Newton Methods Professor Udell Operations Research and Information Engineering Cornell April 10, 2017 Slides on steepest descent and analysis of Newton s method adapted

More information

Lecture 14: October 17

Lecture 14: October 17 1-725/36-725: Convex Optimization Fall 218 Lecture 14: October 17 Lecturer: Lecturer: Ryan Tibshirani Scribes: Pengsheng Guo, Xian Zhou Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer:

More information

Penalty and Barrier Methods General classical constrained minimization problem minimize f(x) subject to g(x) 0 h(x) =0 Penalty methods are motivated by the desire to use unconstrained optimization techniques

More information

OPER 627: Nonlinear Optimization Lecture 9: Trust-region methods

OPER 627: Nonlinear Optimization Lecture 9: Trust-region methods OPER 627: Nonlinear Optimization Lecture 9: Trust-region methods Department of Statistical Sciences and Operations Research Virginia Commonwealth University Sept 25, 2013 (Lecture 9) Nonlinear Optimization

More information

SECTION: CONTINUOUS OPTIMISATION LECTURE 4: QUASI-NEWTON METHODS

SECTION: CONTINUOUS OPTIMISATION LECTURE 4: QUASI-NEWTON METHODS SECTION: CONTINUOUS OPTIMISATION LECTURE 4: QUASI-NEWTON METHODS HONOUR SCHOOL OF MATHEMATICS, OXFORD UNIVERSITY HILARY TERM 2005, DR RAPHAEL HAUSER 1. The Quasi-Newton Idea. In this lecture we will discuss

More information

Stochastic Quasi-Newton Methods

Stochastic Quasi-Newton Methods Stochastic Quasi-Newton Methods Donald Goldfarb Department of IEOR Columbia University UCLA Distinguished Lecture Series May 17-19, 2016 1 / 35 Outline Stochastic Approximation Stochastic Gradient Descent

More information

MS&E 318 (CME 338) Large-Scale Numerical Optimization

MS&E 318 (CME 338) Large-Scale Numerical Optimization Stanford University, Management Science & Engineering (and ICME) MS&E 318 (CME 338) Large-Scale Numerical Optimization 1 Origins Instructor: Michael Saunders Spring 2015 Notes 9: Augmented Lagrangian Methods

More information

Efficient Quasi-Newton Proximal Method for Large Scale Sparse Optimization

Efficient Quasi-Newton Proximal Method for Large Scale Sparse Optimization Efficient Quasi-Newton Proximal Method for Large Scale Sparse Optimization Xiaocheng Tang Department of Industrial and Systems Engineering Lehigh University Bethlehem, PA 18015 xct@lehigh.edu Katya Scheinberg

More information

Improving the Convergence of Back-Propogation Learning with Second Order Methods

Improving the Convergence of Back-Propogation Learning with Second Order Methods the of Back-Propogation Learning with Second Order Methods Sue Becker and Yann le Cun, Sept 1988 Kasey Bray, October 2017 Table of Contents 1 with Back-Propagation 2 the of BP 3 A Computationally Feasible

More information

Preconditioned conjugate gradient algorithms with column scaling

Preconditioned conjugate gradient algorithms with column scaling Proceedings of the 47th IEEE Conference on Decision and Control Cancun, Mexico, Dec. 9-11, 28 Preconditioned conjugate gradient algorithms with column scaling R. Pytla Institute of Automatic Control and

More information

An artificial neural networks (ANNs) model is a functional abstraction of the

An artificial neural networks (ANNs) model is a functional abstraction of the CHAPER 3 3. Introduction An artificial neural networs (ANNs) model is a functional abstraction of the biological neural structures of the central nervous system. hey are composed of many simple and highly

More information

and P RP k = gt k (g k? g k? ) kg k? k ; (.5) where kk is the Euclidean norm. This paper deals with another conjugate gradient method, the method of s

and P RP k = gt k (g k? g k? ) kg k? k ; (.5) where kk is the Euclidean norm. This paper deals with another conjugate gradient method, the method of s Global Convergence of the Method of Shortest Residuals Yu-hong Dai and Ya-xiang Yuan State Key Laboratory of Scientic and Engineering Computing, Institute of Computational Mathematics and Scientic/Engineering

More information

Gradient-Based Optimization

Gradient-Based Optimization Multidisciplinary Design Optimization 48 Chapter 3 Gradient-Based Optimization 3. Introduction In Chapter we described methods to minimize (or at least decrease) a function of one variable. While problems

More information

Maria Cameron. f(x) = 1 n

Maria Cameron. f(x) = 1 n Maria Cameron 1. Local algorithms for solving nonlinear equations Here we discuss local methods for nonlinear equations r(x) =. These methods are Newton, inexact Newton and quasi-newton. We will show that

More information

Nonlinear Programming

Nonlinear Programming Nonlinear Programming Kees Roos e-mail: C.Roos@ewi.tudelft.nl URL: http://www.isa.ewi.tudelft.nl/ roos LNMB Course De Uithof, Utrecht February 6 - May 8, A.D. 2006 Optimization Group 1 Outline for week

More information

A COMBINED CLASS OF SELF-SCALING AND MODIFIED QUASI-NEWTON METHODS

A COMBINED CLASS OF SELF-SCALING AND MODIFIED QUASI-NEWTON METHODS A COMBINED CLASS OF SELF-SCALING AND MODIFIED QUASI-NEWTON METHODS MEHIDDIN AL-BAALI AND HUMAID KHALFAN Abstract. Techniques for obtaining safely positive definite Hessian approximations with selfscaling

More information