MIT 18.338 Term Project: Numerical Experiments on Circular Ensembles and Jack Polynomials with Julia

N. Kemal Ure

May 15, 2013

Abstract

This project studies numerical experiments related to averages of Jack polynomials on circular ensembles. The main aim of the project is to produce an efficient numerical implementation of sampling from circular ensembles and of evaluating Jack polynomials. The Julia computing language is utilized for this purpose. The numerical results reinforce the theoretical statements, and it is shown that the parallel Julia implementation in particular provides superior computation speed compared to the MATLAB and non-parallel Julia implementations.

1 Background

This section provides the basic facts about circular ensembles [1] and Jack polynomials [2].

1.1 Circular Ensembles

Circular ensembles are measures on spaces of unitary matrices (i.e., complex square matrices U such that U*U = UU* = I, where U* is the conjugate transpose and I is the identity matrix). Circular ensembles are parametrized by a real number β > 0, and the most commonly encountered circular ensembles are:

1. Circular Orthogonal Ensemble (COE), β = 1, symmetric unitary matrices
2. Circular Unitary Ensemble (CUE), β = 2, unitary matrices
3. Circular Symplectic Ensemble (CSE), β = 4, unitary quaternion matrices

Let U(n) denote the group of n × n unitary matrices and let U be a random matrix drawn according to the Haar measure on this group. Let (λ_1, ..., λ_n) be the eigenvalues of the random matrix U. Since U is a unitary matrix, its eigenvalues lie on the unit circle; thus λ_k = e^{iθ_k} for some θ_k ∈ [0, 2π). It is a known result from random matrix theory that the probability density function of θ = (θ_1, ..., θ_n) is given as

p(\theta) = \frac{1}{Z_{n,\beta}} \prod_{1 \le k < j \le n} \left| e^{i\theta_k} - e^{i\theta_j} \right|^{\beta}, \qquad (1)

where Z_{n,β} is a normalization constant.

1.2 Jack Polynomials

Jack polynomials are a set of multivariate polynomials parametrized by a parameter β > 0. Let P_k(n) denote the space of symmetric homogeneous polynomials of degree k in n variables. For a fixed β, Jack polynomials are indexed by κ (a partition of k), and they serve as a basis for P_k(n). There are many different definitions of Jack polynomials; the one most useful to us is their connection to circular ensembles. Macdonald [3] showed that

\int_{[0,2\pi]^n} J_\kappa^\beta(e^{i\theta_1}, \ldots, e^{i\theta_n}) \, \overline{J_\lambda^\beta(e^{i\theta_1}, \ldots, e^{i\theta_n})} \prod_{j<k} \left| e^{i\theta_j} - e^{i\theta_k} \right|^{\beta} d\theta_1 \cdots d\theta_n = \delta_{\kappa,\lambda}. \qquad (2)

Eq. (2), in conjunction with the p.d.f. in Eq. (1), shows that the Jack polynomials orthogonalize the circular ensembles. Let X be a square matrix with eigenvalues (λ_1, ..., λ_n), and define J_κ^β(X) = J_κ^β(λ_1, ..., λ_n). Thus, for a fixed β and two partitions κ ≠ λ, we can state that

\mathbb{E}\left[ J_\kappa^\beta(U) \, \overline{J_\lambda^\beta(U)} \right] = 0, \qquad (3)

where the expectation is taken over the random unitary matrix U distributed according to the probability measure defined in Eq. (1). The basic aim of this project is to evaluate the expectation in Eq. (3) by Monte Carlo simulation, to verify that the expectation indeed converges to zero as the number of samples increases. It is clear that in order to evaluate this expression we need two subroutines:

1. Sample a random unitary matrix and find its eigenvalues.
2. Evaluate the Jack polynomials at these eigenvalues.

The next two sections describe the theoretical properties used in the efficient numerical implementation of these subroutines, based on the works [4, 5]; a minimal sketch of the resulting Monte Carlo loop follows.
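To make the plan concrete, here is a minimal sketch of the Monte Carlo estimator for Eq. (3). It assumes the two subroutines above: circular(n, beta), which returns a random unitary matrix from the circular ensemble (Section 2; implemented in the appendix), and a hypothetical helper jack(kappa, x, alpha) standing in for the Jack evaluator of Section 3, with the parameter convention α = 2/β used throughout this project. On recent versions of Julia, eigvals additionally requires "using LinearAlgebra".

# Monte Carlo estimate of E[J_kappa(U) * conj(J_lambda(U))] over a circular ensemble
function jack_average(kappa, lambda, n, beta, T)
    alpha = 2/beta
    acc = 0.0 + 0.0im
    for t = 1:T
        lam = eigvals(circular(n, beta))   # eigenvalues lie on the unit circle
        acc += jack(kappa, lam, alpha) * conj(jack(lambda, lam, alpha))
    end
    return acc/T   # should approach zero for kappa != lambda as T grows
end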

2 Sampling from Circular Ensembles

We focus our attention on a specific class of unitary matrices, the unitary upper Hessenberg matrices, which are almost upper triangular: all entries below the first subdiagonal are zero. Here is an example of a unitary upper Hessenberg matrix:

    0.5693 + 0.094i    0.0042 + 0.0132i    0.2468 + 0.7785i
    0.8168             0.0014 - 0.0097i    0.2614 + 0.5153i
    0                  0.9999              0.9999

In [4], Ammar et al. showed that there is a one-to-one correspondence between n × n unitary upper Hessenberg matrices and the Schur parameters γ_j ∈ C, j = 1, ..., n, which form a finite sequence of complex numbers such that

|\gamma_j| < 1, \quad 1 \le j < n, \qquad |\gamma_n| = 1. \qquad (4)

Given Schur parameters {γ_j}, we can construct the corresponding unitary upper Hessenberg matrix as

H(\{\gamma\}) = G_1(\gamma_1) \cdots G_{n-1}(\gamma_{n-1}) \tilde{G}_n(\gamma_n), \qquad (5)

where G_j(γ_j) is a Givens reflector; the work [6] also gives the generalization of this construction to arbitrary β. Thus, to sample a random unitary upper Hessenberg matrix, we simply generate a random sequence of Schur parameters satisfying Eq. (4) and then use the formula in Eq. (5) to compute the matrix.

3 Efficient Evaluation of Jack Polynomials

Jack polynomials can be expressed in the monomial basis through a summation over semi-standard Young tableaux (SSYT),

J_\lambda^\beta = \sum_{T \in \mathrm{SSYT}} f_T(\beta) \, x^T,

but this method is computationally very inefficient. Demmel and Koev [5] developed an algorithm that evaluates the Jack polynomial recursively, based on the principle of dynamic programming. The recursion works as

J_\lambda^\beta(x_1, \ldots, x_n) = \sum_{\mu \le \lambda} J_\mu^\beta(x_1, \ldots, x_{n-1}) \, x_n^{|\lambda/\mu|} \, c_{\mu\lambda}, \qquad (6)

where c_{μλ} is a coefficient that depends on both partitions λ and μ. As can be seen from the recursion, the algorithm first evaluates the Jack polynomials in n − 1 variables and then uses linear combinations of these polynomials to compute the Jack polynomial in n variables. Koev provides a MATLAB implementation of the algorithm on his website (http://math.mit.edu/~plamen/software/jack.m).
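As a complement to the description above, the following is a minimal self-contained sketch of the Schur-parameter construction of Eqs. (4) and (5), mirroring the circular(n, beta) routine in the appendix. The magnitude law |γ_j|^2 ~ Beta(1, βj/2), sampled here by inverse transform, is the one used in the appendix code; the sign and conjugation convention of the 2 × 2 reflector below is one standard choice, so this is a sketch rather than a verbatim restatement of [4].

# Sample an n-by-n random unitary upper Hessenberg matrix from random
# Schur (Verblunsky) parameters, following Eqs. (4)-(5)
function sample_hessenberg(n, beta)
    H = fill(exp(2*pi*im*rand()), 1, 1)     # gamma_n: random phase with |gamma_n| = 1
    for j = 1:n-1
        r = sqrt(1 - rand()^(2/(beta*j)))   # |gamma_j|^2 ~ Beta(1, beta*j/2)
        g = r * exp(2*pi*im*rand())         # attach a uniform random phase
        rho = sqrt(1 - abs2(g))             # complementary parameter
        H = [1 zeros(1, j); zeros(j, 1) H]  # embed the current block
        G = [conj(g) rho; rho -g]           # unitary 2x2 Givens reflector
        H[1:2, :] = G*H[1:2, :]             # apply the reflector to the top two rows
    end
    return H
end

A quick sanity check is that H'*H equals the identity up to rounding error and that every eigenvalue of H has unit modulus, which is the behavior visible in Fig. 1 below.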

4 Results

4.1 Sampling Unitary Matrices

The algorithm that samples unitary upper Hessenberg matrices (described in Section 2) was implemented in the Julia language. The code is available in the appendix.

[Figure 1: Eigenvalue spread of random unitary Hessenberg matrices for different sample sizes; four panels with t = 10, 100, 1000, 10000.]

Fig. 1 displays the sampled eigenvalues of random unitary matrices (with n = 3) on the complex plane for different sample sizes (10, 100, 10^3, and 10^4). The figure shows that as the sample size increases, the eigenvalues of the sampled matrices cover the entire unit circle. These figures indicate that the algorithm succeeds in generating random matrices with eigenvalues on the unit circle.

4.2 Monte Carlo Evaluation of Eq. (3)

The recursive Jack polynomial evaluation algorithm described in Section 3 was implemented in Julia, based on Koev's MATLAB implementation. The random matrices sampled with the code described above were then fed to this routine, and the outputs of the Jack polynomials were averaged over many samples and different partitions. Fig. 2 shows that as the number of samples increases, the absolute value of the expectation in Eq. (3) decays to zero. Fig. 3 plots the average number of samples required for the expectation to converge to within ε = 0.0001 for different values of β; it shows that fewer samples are required for convergence as β increases. A small sketch of such a convergence check is given below.
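The convergence experiment behind Fig. 3 can be organized as in the following sketch, which tracks the running mean of the sampled products and reports the first sample index at which its absolute value drops below the tolerance ε. Here sample_product() is a hypothetical stand-in for one draw of the product in Eq. (3) (cf. samplejackaverage in the appendix), and the stopping rule is one simple choice, not necessarily the exact criterion used to produce the figure.

# Number of Monte Carlo samples until the running mean for Eq. (3) falls below eps
function samples_to_converge(sample_product, eps, Tmax)
    acc = 0.0 + 0.0im
    for t = 1:Tmax
        acc += sample_product()
        if abs(acc/t) < eps
            return t    # first index where |running mean| < eps
        end
    end
    return Tmax         # tolerance not reached within the sample budget
end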

[Figure 2: Absolute value of the average of Jack polynomials on circular ensembles vs. number of samples (up to 10 × 10^4 samples).]

[Figure 3: Average number of samples required for the expectation to converge to within ε = 0.0001, for β between 0 and 50.]

4.3 MATLAB - Julia Comparisons

Table 1 shows the running times (in seconds) of the MATLAB, Julia, and parallel Julia implementations of the numerical procedure described in this section (the calculation behind Fig. 2) for different sample sizes. The table clearly indicates the computational power of Julia: for the larger sample sizes, Julia is almost 20 times faster than the MATLAB implementation. The table also shows that parallelizing the code yields additional savings in computation time.

Number of Samples | MATLAB | Julia | Parallel Julia (4 Cores)
              500 |    5.1 |   1.7 |                      1.3
             1000 |   10.2 |   1.9 |                      1.8
             2000 |   20.3 |   2.4 |                      1.9
             5000 |   51.8 |   3.6 |                      2.1
            50000 |  516.4 |  25.6 |                      3.9

Table 1: Comparison of running times (in seconds) of the MATLAB, Julia, and parallel Julia implementations.

References

[1] F. J. Dyson. The threefold way. Algebraic structure of symmetry groups and ensembles in quantum mechanics. J. Math. Phys. 3:1199, 1962.

[2] H. Jack. A class of symmetric polynomials with a parameter. Proceedings of the Royal Society of Edinburgh, Section A: Mathematics 69:1-18, 1971.

[3] I. G. Macdonald. Symmetric Functions and Hall Polynomials. Oxford Mathematical Monographs, 2nd ed. New York: Oxford University Press, 1995. ISBN 0-19-853489-2. MR 1354144.

[4] G. Ammar, W. Gragg, and L. Reichel. Constructing a unitary Hessenberg matrix from spectral data. In Numerical Linear Algebra, Digital Signal Processing and Parallel Algorithms, pp. 385-395. Springer Berlin Heidelberg, 1991.

[5] J. Demmel and P. Koev. Accurate and efficient evaluation of Schur and Jack functions. Mathematics of Computation 75, no. 253 (2006): 223-239.

[6] P. J. Forrester and E. M. Rains. Jacobians and rank 1 perturbations relating to unitary Hessenberg matrices. arXiv preprint math/0505552, 2005.

Appendix: Julia Codes

# Calculate the empirical average of Jack polynomials
# on circular ensembles
require("circular.jl")
require("jackeval.jl")
require("samplejackaverage.jl")

n = 3
beta = 7
alpha = 2/beta
t = 10000
k1 = [1 1]
k2 = [2]
xx = zeros(1, n)
jj1 = JackEvaluator(k1, xx, alpha)
jj2 = JackEvaluator(k2, xx, alpha)
v = 0 + 0im
tic()
for i = 1:t
    println("At Iteration ", i)
    v = v + samplejackaverage(jj1, jj2, n, beta)
end
println(abs(v/t))   # empirical average after t samples
toc()

# Parallel implementation of Circular Jack
@everywhere require("circular.jl")
@everywhere require("jackeval.jl")
@everywhere require("samplejackaverage.jl")
@everywhere n = 3
@everywhere beta = 50
@everywhere alpha = 2/beta

t = 100
@everywhere k1 = [1 1]
@everywhere k2 = [2]
@everywhere xx = zeros(Complex128, 1, n)
@everywhere jj1 = JackEvaluator(k1, xx, alpha)
@everywhere jj2 = JackEvaluator(k2, xx, alpha)
v = 0 + 0im
tic()
# Sum the samples across the workers, then normalize
v = @parallel (+) for i = 1:t
    samplejackaverage(jj1, jj2, n, beta)
end
println(abs(v/t))
toc()

require("circular.jl")
require("jackeval.jl")

# One Monte Carlo sample of J_kappa(U) * conj(J_lambda(U)) for Eq. (3)
function samplejackaverage(JE1, JE2, n, beta)
    e = eigvals(circular(n, beta))
    return evaluate(JE1, e) * conj(evaluate(JE2, e))
end

# Code for the circular ensemble, translated from A. Edelman's code
function circular(n, beta)
    # Compute Schur parameters (Verblunsky coefficients)
    rn = rand(1, n-1)                  # Random numbers for Schur magnitudes
    rp = rand(1, n-1)                  # Random numbers for Schur phases
    e = (beta/2)*[1:(n-1)]             # The Theta subscripts (v-1)/2
    a = Array(Float64, 1, n-1)
    einv = 1 ./ e
    for i = 1:n-1
        a[i] = sqrt(1 - rn[i]^einv[i]) # Schur parameter magnitudes from Theta
    end
    rho = sqrt(1 - a.^2)               # Complementary parameters
    a = a .* exp(2*pi*im*rp)           # Random phase
    # Compute the unitary upper Hessenberg matrix
    z = exp(2*pi*im*rand(1))
    for j = 1:n-1
        z = [1 zeros(1, j); zeros(j, 1) z]
        G = [conj(a[j]) rho[j]; rho[j] -a[j]]   # 2x2 Givens reflector
        z[1:2, :] = G*z[1:2, :]
    end
    return z
end

# 18.338 Project
# Plamen Koev's MATLAB code converted to Julia

# Type Definition
type JackEvaluator
    x
    ja
    alpha
    Lp
    Lmax
    n
    lma
    kappa
    function JackEvaluator(kappa, xx, alpha1)
        x = xx
        alpha = alpha1
        n = length(x)
        Lp = sum(sign(kappa))          # number of nonzero parts of kappa
        lma = zeros(Lp, 1)             # mixed-radix weights for encoding partitions
        kappa = kappa[1:Lp]
        Lmax = kappa + 1
        ja = Inf*ones(prod(Lmax+1), n) + 0im*ones(prod(Lmax+1), n)  # memo table
        new(x, ja, alpha, Lp, Lmax, n, lma, kappa)
    end
end

# Function Definitions

# The main Jack function evaluator
function evaluate(self::JackEvaluator, xeval)
    self.x = xeval
    f = 0.0 + 0.0im
    if self.Lp > 0
        self.lma[self.Lp] = 1
        for i = self.Lp-1:-1:1
            self.lma[i] = self.lma[i+1]*self.Lmax[i+1]
        end
        # Initialize the memo table, then evaluate recursively
        initialize(self, 1, 0)
        f = jack1(self, self.n, 0, nmu(self, self.kappa), nmu(self, self.kappa))
    else
        f = 1.0 + 0.0im
    end
    return f
end

# Initialize Function: mark all memo-table entries as unset
function initialize(self::JackEvaluator, k, l)
    if k <= self.Lp
        m = self.Lmax[k] - 1
        if k > 1
            m = min(m, part(self, l, k-1))
        end
        for i = 1:m
            l = l + self.lma[k]
            self.ja[l+1, 1:self.n] = Inf*ones(1, self.n) + 0im*ones(1, self.n)
            initialize(self, k+1, l)
        end
    end
end

# Part Function: i-th part of the partition encoded by the index pn
function part(self::JackEvaluator, pn, i)
    if i > length(self.lma)
        f = 0
    elseif i == 1
        f = floor(pn/self.lma[i])
    else
        f = floor(mod(pn, self.lma[i-1])/self.lma[i])
    end
    return f
end

# Jack1 Function: the dynamic-programming recursion of Eq. (6)
function jack1(self::JackEvaluator, j, k, lambda, l)
    s = 1.0 + 0.0im
    if 1 <= j && l > 0
        t = self.ja[l+1, j]
        if k == 0 && t != Inf
            s = t                      # reuse the memoized value
        elseif part(self, l, j+1) > 0
            s = 0.0 + 0.0im            # more than j parts: the polynomial vanishes
        elseif j == 1
            s = self.x[1]^part(self, l, 1)*prod(1 + self.alpha*[0:part(self, l, 1)-1])
        else
            if k == 0
                i = 1
            else
                i = k
            end
            s = jack1(self, j-1, 0, l, l)*AB(self, lambda, l)*
                self.x[j]^lm(self, lambda, l)
            while part(self, l, i) > 0
                if part(self, l, i) > part(self, l, i+1)
                    if part(self, l, i) > 1
                        s = s + jack1(self, j, i, lambda, l - self.lma[i])
                    else
                        s = s + jack1(self, j-1, 0, l - self.lma[i], l - self.lma[i])*
                                AB(self, lambda, l - self.lma[i])*
                                self.x[j]^lm(self, lambda, l - self.lma[i])
                    end
                end
                i = i + 1
            end
        end
        if k == 0
            self.ja[l+1, j] = s        # memoize
        end
    end
    return s
end

# AB Function: the coefficient c_{mu,lambda} of Eq. (6)
function AB(self::JackEvaluator, l, m)
    f = 1
    for i = 1:self.Lp
        for j = 1:part(self, m, i)
            if lt(self, l, j) == lt(self, m, j)
                f = f/(lt(self, m, j) - i + self.alpha*(part(self, m, i) - j + 1))
            else
                f = f/(lt(self, m, j) - i + 1 + self.alpha*(part(self, m, i) - j))
            end
        end
    end
    for i = 1:self.Lp
        for j = 1:part(self, l, i)
            if lt(self, l, j) == lt(self, m, j)
                f = f*(lt(self, l, j) - i + self.alpha*(part(self, l, i) - j + 1))
            else
                f = f*(lt(self, l, j) - i + 1 + self.alpha*(part(self, l, i) - j))
            end
        end
    end
    return f
end

# lm function: number of boxes in the skew shape l/m
function lm(self::JackEvaluator, l, m)
    f = 0
    for i = 1:self.Lp
        f = f + part(self, l, i) - part(self, m, i)
    end
    return f
end

# lt function: number of parts of l that are >= q (conjugate partition)
function lt(self::JackEvaluator, l, q)
    i = 1
    f = 0
    while part(self, l, i) >= q
        f = f + 1
        i = i + 1
    end
    return f
end

# nmu function: encode the partition l as a single mixed-radix index
function nmu(self::JackEvaluator, l)
    f = 0
    for i = 1:self.Lp
        f = self.Lmax[i]*f
        if i <= length(l)
            f = f + l[i]
        end
    end
    return f
end