A Spectral Algorithm For Latent Junction Trees - Supplementary Material
Ankur P. Parikh, Le Song, Mariya Ishteva, Gabi Teodoru, Eric P. Xing

1 Discussion of Conditions for Observable Representation

The observable representation exists only if the transformations F_i = P[O_i | S_i] have rank \tau_i := k_{S_i} and P[O_{-i} | S_i] also has rank \tau_i (so that P(O_i, O_{-i}) has rank \tau_i). Thus, it is required that #states(O_i) >= #states(S_i). This can be achieved either by making O_i consist of a few high dimensional observations, or of many smaller dimensional ones. In the case when #states(O_i) > #states(S_i), we need to project F_i to a lower dimensional space, such that it can be inverted, using a tensor U_i. In this case, we define F_i := P[O_i | S_i] \times_{O_i} U_i. Following this through the computation gives us, for a leaf clique l,

    P(C_l) = P(O_l, O_{-l}) \times_{O_{-l}} (P(O_l, O_{-l}) \times_{O_l} U_l)^{-1}.

A good choice of U_i can be obtained by performing a singular value decomposition of the matricized version of P(O_i, O_{-i}) (variables in O_i are arranged into the rows and those in O_{-i} into the columns).

For HMMs and latent trees, this rank condition can be expressed simply as requiring the conditional probability tables of the underlying model to not be rank-deficient. However, junction trees encode significantly more complex latent structures that introduce more subtle considerations. While we consider a general characterization of the models for which the observable representation exists to be future work, here we try to give some intuition about the types of latent structures for which the rank condition may fail. First, the rank condition can fail if there are not enough observed nodes/states, so that #states(O_i) < \tau_i. Intuitively, this corresponds to a model where the latent space is too expressive and inherently represents an intractable model (e.g., a set of n binary variables connected by a hidden node with 2^n states is equivalent to a clique of size n). However, there are more subtle considerations unique to non-tree models.
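For intuition, the SVD-based choice of U_i described above can be sketched numerically. The following is a minimal illustration with made-up dimensions and random stand-in distributions (the variable names and sizes are ours, not the authors'):

```python
import numpy as np

# Hypothetical setup: O_i has 8 joint states, O_-i has 6, separator S_i has tau = 3 states.
rng = np.random.default_rng(0)
tau = 3

# Build a rank-tau joint "probability" matrix P(O_i, O_-i): rows index O_i, columns O_-i.
F = rng.random((8, tau))    # stand-in for P[O_i | S_i]
G = rng.random((tau, 6))    # stand-in for P[O_-i | S_i] weighted by P(S_i)
P_joint = F @ G
P_joint /= P_joint.sum()

# U_i: top tau left singular vectors of the matricized P(O_i, O_-i).
U, s, Vt = np.linalg.svd(P_joint)
U_i = U[:, :tau]            # 8 x 3 projection

# After projection, U_i^T P(O_i, O_-i) is tau x 6 with full row rank tau,
# so it can be inverted (pseudo-inverted) in the observable representation.
projected = U_i.T @ P_joint
print(np.linalg.matrix_rank(projected))  # 3
```

The same construction works for any #states(O_i) > \tau_i; the projection simply keeps the \tau_i directions of O_i that carry information about the separator.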
In general, our method is not limited to non-triangulated graphs (see the factorial HMM in Figure 3 of the main paper), but the process of triangulation can introduce artificial dependencies that lead to complications. Consider Figure 1(a), which shows a DAG and its corresponding junction tree. To construct P(C_{AD}), we may set P[O_i, O_{-i}] = P[D, F] based on the junction tree topology. However, in the original model before triangulation, D is independent of F because of the v-structure. As a result, P[D, F] does not have rank k and thus cannot be inverted. Note, however, that choosing P[O_i, O_{-i}] = P[D, E] is valid.

Finally, consider Figure 1(b), ignoring the orange node for now and assuming the variables are binary. In this model, the hidden variables are largely redundant, since integrating out A and B would simply give a chain. F_r must be set to P[F | S_r] = P[F | A, B]. If we think of A, B as just one large variable, then it is clear that P[F | A, B] = P[F | D] P[D | A, B]. However, D only has two states while (A, B) has 4, so P[F | A, B] only has rank 2. Now consider adding the orange node. In this case we could set F_r to P[F, G | S_r] = P[F, G | A, B], whose matricized version has rank 4. Note that once the orange node has been added, integrating out A and B no longer produces a simple chain, but a more complicated structure. Thus, we believe that rigorously characterizing the existence of the observable representation in more detail may shed light on the intrinsic complexity/redundancy of latent variable models in the context of linear and tensor algebra.

2 Linear Systems to Improve Stability

As derived in Section 6 of the main paper, for non-root and non-leaf nodes:

    \tilde{P}(C_i) \times_{O_{-i}} P(O_i, O_{-i}) = P(O_{\alpha_i(1)}, O_{\alpha_i(2)}, O_{-i})    (1)
which then implies that

    \tilde{P}(C_i) = P(O_{\alpha_i(1)}, O_{\alpha_i(2)}, O_{-i}) \times_{O_{-i}} P(O_i, O_{-i})^{-1}    (2)

However, there may be many choices of O_{-i} (which we denote O_{-i}^{(1)}, ..., O_{-i}^{(N)}) for which the above equality is true:

    \tilde{P}(C_i) \times_{O_{-i}} P(O_i, O_{-i}^{(1)}) = P(O_{\alpha_i(1)}, O_{\alpha_i(2)}, O_{-i}^{(1)})
    \tilde{P}(C_i) \times_{O_{-i}} P(O_i, O_{-i}^{(2)}) = P(O_{\alpha_i(1)}, O_{\alpha_i(2)}, O_{-i}^{(2)})
    ...
    \tilde{P}(C_i) \times_{O_{-i}} P(O_i, O_{-i}^{(N)}) = P(O_{\alpha_i(1)}, O_{\alpha_i(2)}, O_{-i}^{(N)})

This defines an over-constrained linear system, and we can solve for \tilde{P}(C_i) using least squares. In the case where #states(O_i) > #states(S_i) and the projection tensor U_i is needed, the system of equations becomes:

    \tilde{P}(C_i) \times_{S_i} (P(O_i, O_{-i}^{(1)}) \times_{O_i} U_i) = P(O_{\alpha_i(1)}, O_{\alpha_i(2)}, O_{-i}^{(1)})
    \tilde{P}(C_i) \times_{S_i} (P(O_i, O_{-i}^{(2)}) \times_{O_i} U_i) = P(O_{\alpha_i(1)}, O_{\alpha_i(2)}, O_{-i}^{(2)})
    ...
    \tilde{P}(C_i) \times_{S_i} (P(O_i, O_{-i}^{(N)}) \times_{O_i} U_i) = P(O_{\alpha_i(1)}, O_{\alpha_i(2)}, O_{-i}^{(N)})

In this case, one good choice of U_i is the top singular vectors of the matrix formed by the horizontal concatenation of P(O_i, O_{-i}^{(1)}), ..., P(O_i, O_{-i}^{(N)}). This linear system method allows for more robust estimation, especially at smaller sample sizes (at the cost of more computation). It can be applied to the leaf case as well. One does not need to set up the linear system with all the valid choices; a subset is also acceptable.

[Figure 1: Models and their corresponding junction trees, where constructing the observable representation poses complications. See Section 1.]

3 Sample Complexity Theorem

We prove the sample complexity theorem.

3.1 Notation

For simplicity, the main text describes the algorithm in the context of a binary tree. However, in the sample complexity proof we adopt a more general notation. Let C_i be a clique, and let C_{\alpha_i(1)}, ..., C_{\alpha_i(\gamma_i)} denote its \gamma_i children. Let \mathcal{C} denote the set of all the cliques in the junction tree, and |\mathcal{C}| denote its size. Define d_max to be the maximum order of a tensor in the observable representation and e_max to be the maximum number of
observed variables in any tensor. Furthermore, let \tau_i = k_{S_i} (i.e., the number of states associated with the separator). We can now write the transformed representation for junction trees with more than 3 neighbors as:

    Root:     \tilde{P}(C_i) = P(C_i) \times_{S_{\alpha_i(1)}} F_{\alpha_i(1)} \cdots \times_{S_{\alpha_i(\gamma_i)}} F_{\alpha_i(\gamma_i)}
    Internal: \tilde{P}(C_i) = P(C_i) \times_{S_i} F_i^{-1} \times_{S_{\alpha_i(1)}} F_{\alpha_i(1)} \cdots \times_{S_{\alpha_i(\gamma_i)}} F_{\alpha_i(\gamma_i)}
    Leaf:     \tilde{P}(C_i) = P(C_i) \times_{S_i} F_i^{-1}

and the observable representation as:

    Root:     \tilde{P}(C_i) = P(O_{\alpha_i(1)}, ..., O_{\alpha_i(\gamma_i)}) \times_{O_{\alpha_i(1)}} U_{\alpha_i(1)} \cdots \times_{O_{\alpha_i(\gamma_i)}} U_{\alpha_i(\gamma_i)}
    Internal: \tilde{P}(C_i) = P(O_{\alpha_i(1)}, ..., O_{\alpha_i(\gamma_i)}, O_{-i}) \times_{O_{-i}} (P(O_i, O_{-i}) \times_{O_i} U_i)^{-1} \times_{O_{\alpha_i(1)}} U_{\alpha_i(1)} \cdots \times_{O_{\alpha_i(\gamma_i)}} U_{\alpha_i(\gamma_i)}
    Leaf:     \tilde{P}(C_i) = P(R_i, O_{-i}) \times_{O_{-i}} (P(O_i, O_{-i}) \times_{O_i} U_i)^{-1}

Sometimes, when the multiplication indices are clear, we will omit them to simplify notation.

Rearranged version: We will often find it convenient to rearrange the tensors into lower order tensors for the purpose of taking certain norms (such as the spectral norm). We define R(·) as the rearranging operation. For example, R(P(O_i, O_{-i})) is the matricized version of P(O_i, O_{-i}), with O_i being mapped to the rows and O_{-i} being mapped to the columns. More generally, if P(C_i) has order d_i, then R(P(C_i)) is a rearrangement of P(C_i) such that it has order equal to the number of neighbors of C_i. The set of modes of P(C_i) corresponding to a single separator of C_i get mapped to a single mode in R(P(C_i)). We let R(S_{\alpha_i(j)}) denote the mode that S_{\alpha_i(j)} maps to in R(P(C_i)). In the case of the root, R(P(C_i)) rearranges P(C_i) into a tensor of order \gamma_i. For other clique nodes, R(P(C_i)) rearranges P(C_i) into a tensor of order \gamma_i + 1.

Example: In Figure 1 of the main text, P(C_{BCDE}) = P(B, D | C, E). C_{BCDE} has 3 neighbors, so R(P(C_{BCDE})) is of order 3, where the modes correspond to {B, C}, {B, D}, {C, E} respectively. This rearrangement can be applied to the other quantities. For example, R(P(O_{\alpha_i(1)}, ..., O_{\alpha_i(\gamma_i)})) is a rearranging of P(O_{\alpha_i(1)}, ..., O_{\alpha_i(\gamma_i)}) into a tensor of order \gamma_i where O_{\alpha_i(1)}, ..., O_{\alpha_i(\gamma_i)} correspond to one mode each.
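In code, the rearrangement R(·) amounts to grouping modes and flattening each group, e.g. with a reshape (plus a transpose when the grouped modes are not adjacent). A minimal sketch with made-up, disjoint mode groups (our own illustration):

```python
import numpy as np

# An order-4 tensor with modes of sizes 2, 3, 2, 3.
T = np.arange(2 * 3 * 2 * 3, dtype=float).reshape(2, 3, 2, 3)

# Suppose the separators group the modes as {1, 2}, {3}, {4}.
# R(T) then has order 3: the first two modes are flattened into one mode of size 6.
R_T = T.reshape(2 * 3, 2, 3)

# Only the indexing changes; the entries are preserved.
print(R_T.shape)                              # (6, 2, 3)
print(R_T[1 * 3 + 2, 1, 2] == T[1, 2, 1, 2])  # True
```

With C-ordered arrays, the grouped index is just the row-major flattening of the original pair of indices, which is why a plain `reshape` suffices here.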
Similarly, R(F_i) is the matricized version of F_i, and R(U_i) is the matricized version of U_i. Thus, we can define the rearranged quantities:

    Root:     R(\tilde{P}(C_i)) = R(P(C_i)) \times_{R(S_{\alpha_i(1)})} R(F_{\alpha_i(1)}) \cdots \times_{R(S_{\alpha_i(\gamma_i)})} R(F_{\alpha_i(\gamma_i)})
    Internal: R(\tilde{P}(C_i)) = R(P(C_i)) \times_{R(S_i)} R(F_i)^{-1} \times_{R(S_{\alpha_i(1)})} R(F_{\alpha_i(1)}) \cdots \times_{R(S_{\alpha_i(\gamma_i)})} R(F_{\alpha_i(\gamma_i)})
    Leaf:     R(\tilde{P}(C_i)) = R(P(C_i)) \times_{R(S_i)} R(F_i)^{-1}
and the observable representation as:

    Root:     R(\tilde{P}(C_i)) = R(P(O_{\alpha_i(1)}, ..., O_{\alpha_i(\gamma_i)})) \times_{R(S_{\alpha_i(1)})} R(U_{\alpha_i(1)}) \cdots \times_{R(S_{\alpha_i(\gamma_i)})} R(U_{\alpha_i(\gamma_i)})
    Internal: R(\tilde{P}(C_i)) = R(P(O_{\alpha_i(1)}, ..., O_{\alpha_i(\gamma_i)}, O_{-i})) \times_{R(S_i)} R((P(O_i, O_{-i}) \times_{O_i} U_i)^{-1}) \times_{R(S_{\alpha_i(1)})} R(U_{\alpha_i(1)}) \cdots \times_{R(S_{\alpha_i(\gamma_i)})} R(U_{\alpha_i(\gamma_i)})    (3)
    Leaf:     R(\tilde{P}(C_i)) = R(P(R_i, O_{-i})) \times_{R(O_{-i})} R((P(O_i, O_{-i}) \times_{O_i} U_i)^{-1})

Furthermore, define the following:

    \alpha = \min_{C_i \in \mathcal{C}} \sigma_{\tau_i}(P[O_i, O_{-i}])    (4)
    \beta  = \min_{C_i \in \mathcal{C}} \sigma_{\tau_i}(F_i)    (5)

The proof is based on the technique of HKZ [2], but has differences due to the junction tree topology and higher order tensors.

3.2 Tensor Norms

We briefly define several tensor norms that generalize the matrix norms. For more information about matrix norms, see [1].

Frobenius Norm: Just as for matrices, the Frobenius norm of a tensor of order N is defined as:

    \|T\|_F = \sqrt{ \sum_{i_1, ..., i_N} T(i_1, ..., i_N)^2 }    (6)

Elementwise One Norm: Similarly, the elementwise one norm of a tensor of order N is defined as:

    \|T\|_1 = \sum_{i_1, ..., i_N} |T(i_1, ..., i_N)|    (7)

Spectral Norm: For tensors of order N, the spectral norm [3] can be defined as:

    \|T\|_2 = \sup_{v_i \, : \, \|v_i\|_2 \le 1} T \times_1 v_1 \cdots \times_N v_N    (8)

In our case, we will find it more convenient to use the rearranged spectral norm, which we define as:

    \|T\|_2 = \|R(T)\|_2    (9)

where R(·) was defined in the previous section.

Induced One Norm: For matrices, the induced one norm is the max column sum of the matrix: \|M\|_1 = \sup_{v \, : \, \|v\|_1 \le 1} \|Mv\|_1. We can generalize this to the max slice sum of a tensor, where some modes are maxed over and the others are summed over. Let \sigma denote the modes that will be maxed over. Then:

    \|T\|_{1,\sigma} = \sup_{v_i \, : \, \|v_i\|_1 \le 1, \, i \in \sigma} \|T \times_{\sigma_1} v_{\sigma_1} \cdots \times_{\sigma_{|\sigma|}} v_{\sigma_{|\sigma|}}\|_1    (10)

In the Appendix, we prove several lemmas regarding these tensor norms.

3.3 Concentration Bounds

Define:

    \epsilon(O_1, ..., O_d) = \|\hat{P}(O_1, ..., O_d) - P(O_1, ..., O_d)\|_F    (11)
    \epsilon(O_1, ..., O_{d-e}, o_{d-e+1}, ..., o_d) = \|\hat{P}(O_1, ..., O_{d-e}, o_{d-e+1}, ..., o_d) - P(O_1, ..., O_{d-e}, o_{d-e+1}, ..., o_d)\|_F    (12)

where 1, ..., d-e index the d-e non-evidence variables, d-e+1, ..., d index the e evidence variables, and d is the total number of modes of the tensor. As the number of samples N gets large, we expect these quantities to be small.
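The norms above are straightforward to compute for a concrete tensor. The following sketch (our own, with a single matricization standing in for R(·)) also illustrates how they are ordered:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3, 3))  # an order-3 tensor, each mode of dimension k = 3

frob = np.sqrt((T ** 2).sum())             # Frobenius norm, Eq. (6)
one = np.abs(T).sum()                      # elementwise one norm, Eq. (7)
spec = np.linalg.norm(T.reshape(3, 9), 2)  # rearranged spectral norm, Eq. (9)

# The elementwise one norm dominates the Frobenius norm (by Cauchy-Schwarz),
# and the Frobenius norm dominates the spectral norm of any matricization.
print(spec <= frob <= one)  # True
```

Note that the Frobenius norm is invariant under rearrangement, while the rearranged spectral norm generally depends on which matricization is taken.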
Lemma 1 (variant of HKZ [2]) If the algorithm independently samples N observations, then with probability at least 1 - \delta,

    \epsilon(O_1, ..., O_d) \le \sqrt{ \frac{1}{N} \ln \frac{2|\mathcal{C}|}{\delta} } + \sqrt{ \frac{1}{N} }    (13)

    \sum_{o_{d-e+1}, ..., o_d} \epsilon(O_1, ..., O_{d-e}, o_{d-e+1}, ..., o_d) \le \sqrt{ \frac{k^{e_max} o^{e_max}}{N} \ln \frac{2|\mathcal{C}|}{\delta} } + \sqrt{ \frac{k^{e_max} o^{e_max}}{N} }    (14)

for all tuples (O_1, ..., O_d) that are used in the spectral algorithm. The proof is the same as that of HKZ [2], except that the union bound is taken over 2|\mathcal{C}| events instead of 3 (since each transformed quantity in the spectral algorithm is composed of at most two such terms). The last bound can be made tighter, identically to HKZ, but for simplicity we do not pursue that approach here.

3.4 Singular Value Bounds

This is the generalized version of Lemma 9 in HKZ [2], which is stated below for completeness:

Lemma 2 (variant of HKZ [2]) Suppose \epsilon(O_i, O_{-i}) \le \varepsilon \cdot \sigma_{\tau_i}(P(O_i, O_{-i})) for some \varepsilon < 1/2. Let \varepsilon_0 = \epsilon(O_i, O_{-i})^2 / ((1 - \varepsilon) \sigma_{\tau_i}(P(O_i, O_{-i})))^2. Then:

    1. \varepsilon_0 < 1
    2. \sigma_{\tau_i}(\hat{P}(O_i, O_{-i}) \times_{O_i} \hat{U}_i) \ge (1 - \varepsilon_0) \sigma_{\tau_i}(P(O_i, O_{-i}))
    3. \sigma_{\tau_i}(P(O_i, O_{-i}) \times_{O_i} \hat{U}_i) \ge \sqrt{1 - \varepsilon_0} \, \sigma_{\tau_i}(P(O_i, O_{-i}))
    4. \sigma_{\tau_i}(P(O_i | S_i) \times_{O_i} \hat{U}_i) \ge \sqrt{1 - \varepsilon_0} \, \sigma_{\tau_i}(P(O_i | S_i))

It follows that if \epsilon(O_i, O_{-i}) \le \sigma_{\tau_i}(P(O_i, O_{-i}))/3, then \varepsilon_0 \le (1/9)/(4/9) = 1/4. It then follows that:

    1. \sigma_{\tau_i}(\hat{P}(O_i, O_{-i}) \times_{O_i} \hat{U}_i) \ge (3/4) \sigma_{\tau_i}(P(O_i, O_{-i}))
    2. \sigma_{\tau_i}(P(O_i, O_{-i}) \times_{O_i} \hat{U}_i) \ge (\sqrt{3}/2) \sigma_{\tau_i}(P(O_i, O_{-i}))
    3. \sigma_{\tau_i}(P(O_i | S_i) \times_{O_i} \hat{U}_i) \ge (\sqrt{3}/2) \sigma_{\tau_i}(P(O_i | S_i))

3.5 Bounding the Transformed Quantities

Define:

    1. Root:     \tilde{P}(C_i) := P(O_{\alpha_i(1)}, ..., O_{\alpha_i(\gamma_i)}) \times_{O_{\alpha_i(1)}} \hat{U}_{\alpha_i(1)} \cdots \times_{O_{\alpha_i(\gamma_i)}} \hat{U}_{\alpha_i(\gamma_i)}
    2. Leaf:     \tilde{P}_{r_i}(C_i) := P(R_i = r_i, O_{-i}) \times_{O_{-i}} P_{\hat{U}}(O_i, O_{-i})^{-1}
    3. Internal: \tilde{P}(C_i) := P(O_{\alpha_i(1)}, ..., O_{\alpha_i(\gamma_i)}, O_{-i}) \times_{O_{-i}} P_{\hat{U}}(O_i, O_{-i})^{-1} \times_{O_{\alpha_i(1)}} \hat{U}_{\alpha_i(1)} \cdots \times_{O_{\alpha_i(\gamma_i)}} \hat{U}_{\alpha_i(\gamma_i)}

(these are the observable quantities formed with the true probabilities, but with the empirical \hat{U}'s). Similarly, we define \tilde{F}_i = P[O_i | S_i] \times_{O_i} \hat{U}_i. We have also abbreviated P(O_i, O_{-i}) \times_{O_i} \hat{U}_i as P_{\hat{U}}(O_i, O_{-i}). We seek to bound the following three quantities:

    \delta_i^{root} := \|(\hat{P}(C_i) - \tilde{P}(C_i)) \times_{O_{\alpha_i(1)}} \tilde{F}_{\alpha_i(1)}^{-1} \cdots \times_{O_{\alpha_i(\gamma_i)}} \tilde{F}_{\alpha_i(\gamma_i)}^{-1}\|_1    (15)
    \delta_i^{internal} := \|(\hat{P}(C_i) - \tilde{P}(C_i)) \times_{O_i} \tilde{F}_i \times_{O_{\alpha_i(1)}} \tilde{F}_{\alpha_i(1)}^{-1} \cdots \times_{O_{\alpha_i(\gamma_i)}} \tilde{F}_{\alpha_i(\gamma_i)}^{-1}\|_{1, S_i}    (16)
    \xi_i^{leaf} := \sum_{r_i} \|(\hat{P}_{r_i}(C_i) - \tilde{P}_{r_i}(C_i)) \times_{O_i} \tilde{F}_i\|_1    (17)
Lemma 3 If \epsilon(O_i, O_{-i}) \le \sigma_{\tau_i}(P(O_i, O_{-i}))/3, then:

    \delta_i^{root} \le \frac{(2k)^{d_max}}{3^{d_max/2} \beta^{d_max}} \epsilon(O_{\alpha_i(1)}, ..., O_{\alpha_i(\gamma_i)})    (18)

    \delta_i^{internal} \le \frac{2^{d_max+3} k^{d_max}}{3^{d_max/2+1} \beta^{d_max}} \left( \frac{\epsilon(O_{\alpha_i(1)}, ..., O_{\alpha_i(\gamma_i)}, O_{-i})}{\sigma_{\tau_i}(P(O_i, O_{-i}))} + \frac{\epsilon(O_i, O_{-i})}{\sigma_{\tau_i}(P(O_i, O_{-i}))^2} \right)    (19)

    \xi_i^{leaf} \le \frac{8 k^{d_max}}{3} \left( \frac{\epsilon(O_i, O_{-i})}{\sigma_{\tau_i}(P(O_i, O_{-i}))^2} + \frac{\sum_{r_i} \epsilon(R_i = r_i, O_{-i})}{\sigma_{\tau_i}(P(O_i, O_{-i}))} \right)    (20)

We define \Delta = \max_i(\delta_i^{root}, \delta_i^{internal}, \xi_i^{leaf}) (over all i).

Proof.

Case \delta_i^{root}: For the root, \tilde{P}(C_i) = P(O_{\alpha_i(1)}, ..., O_{\alpha_i(\gamma_i)}) \times_{O_{\alpha_i(1)}} \hat{U}_{\alpha_i(1)} \cdots \times_{O_{\alpha_i(\gamma_i)}} \hat{U}_{\alpha_i(\gamma_i)}, and similarly \hat{P}(C_i) = \hat{P}(O_{\alpha_i(1)}, ..., O_{\alpha_i(\gamma_i)}) \times_{O_{\alpha_i(1)}} \hat{U}_{\alpha_i(1)} \cdots \times_{O_{\alpha_i(\gamma_i)}} \hat{U}_{\alpha_i(\gamma_i)}. Then:

    \delta_i^{root} = \|(\hat{P}(O_{\alpha_i(1)}, ..., O_{\alpha_i(\gamma_i)}) - P(O_{\alpha_i(1)}, ..., O_{\alpha_i(\gamma_i)})) \times_{O_{\alpha_i(1)}} \hat{U}_{\alpha_i(1)} \tilde{F}_{\alpha_i(1)}^{-1} \cdots \times_{O_{\alpha_i(\gamma_i)}} \hat{U}_{\alpha_i(\gamma_i)} \tilde{F}_{\alpha_i(\gamma_i)}^{-1}\|_1
      \le k^{d_max} \prod_{j=1}^{\gamma_i} \frac{1}{\sigma_{\tau_{\alpha_i(j)}}(\tilde{F}_{\alpha_i(j)})} \prod_{j=1}^{\gamma_i} \|\hat{U}_{\alpha_i(j)}\|_2 \, \|\hat{P}(O_{\alpha_i(1)}, ..., O_{\alpha_i(\gamma_i)}) - P(O_{\alpha_i(1)}, ..., O_{\alpha_i(\gamma_i)})\|_2
      \le k^{d_max} \prod_{j=1}^{\gamma_i} \frac{1}{\sigma_{\tau_{\alpha_i(j)}}(\tilde{F}_{\alpha_i(j)})} \epsilon(O_{\alpha_i(1)}, ..., O_{\alpha_i(\gamma_i)})

Between the first and second lines we use Lemma 8 to convert from the elementwise one norm to the spectral norm, and Lemma 6 (submultiplicativity); Lemma 6 is applied again, together with \|\hat{U}_{\alpha_i(j)}\|_2 = 1 and the fact that the spectral norm is dominated by the Frobenius norm, to reach the last line. Thus, by Lemma 2 (\sigma_{\tau_{\alpha_i(j)}}(\tilde{F}_{\alpha_i(j)}) \ge (\sqrt{3}/2)\beta and \gamma_i \le d_max),

    \delta_i^{root} \le \frac{(2k)^{d_max}}{3^{d_max/2} \beta^{d_max}} \epsilon(O_{\alpha_i(1)}, ..., O_{\alpha_i(\gamma_i)})    (21)

Case \xi_i^{leaf}: We note that \hat{P}_{r_i}(C_i) = \hat{P}(R_i = r_i, O_{-i}) \times_{O_{-i}} \hat{P}_{\hat{U}}(O_i, O_{-i})^{-1} and similarly \tilde{P}_{r_i}(C_i) = P(R_i = r_i, O_{-i}) \times_{O_{-i}} P_{\hat{U}}(O_i, O_{-i})^{-1}. Again, we use Lemma 8 to convert from the one norm to the spectral norm, and Lemma 6 for submultiplicativity:

    \xi_i^{leaf} = \sum_{r_i} \|(\hat{P}_{r_i}(C_i) - \tilde{P}_{r_i}(C_i)) \times_{O_i} \hat{U}_i \, P[O_i | S_i]\|_1
      \le k^{d_max} \|P[O_i | S_i]\|_{1, S_i} \|\hat{U}_i\|_2 \sum_{r_i} \|\hat{P}_{r_i}(C_i) - \tilde{P}_{r_i}(C_i)\|_2    (22)

Note that \|P[O_i | S_i]\|_{1, S_i} = 1 and \|\hat{U}_i\|_2 = 1. Furthermore,

    \hat{P}_{r_i}(C_i) - \tilde{P}_{r_i}(C_i)
      = P(R_i = r_i, O_{-i}) \times_{O_{-i}} (\hat{P}_{\hat{U}}(O_i, O_{-i})^{-1} - P_{\hat{U}}(O_i, O_{-i})^{-1})
        + (\hat{P}(R_i = r_i, O_{-i}) - P(R_i = r_i, O_{-i})) \times_{O_{-i}} \hat{P}_{\hat{U}}(O_i, O_{-i})^{-1}

so that

    \|\hat{P}_{r_i}(C_i) - \tilde{P}_{r_i}(C_i)\|_2
      \le P(R_i = r_i) \cdot \frac{1 + \sqrt{5}}{2} \cdot \frac{\epsilon(O_i, O_{-i})}{\min(\sigma_{\tau_i}(\hat{P}_{\hat{U}}(O_i, O_{-i})), \sigma_{\tau_i}(P_{\hat{U}}(O_i, O_{-i})))^2}
        + \frac{\epsilon(R_i = r_i, O_{-i})}{\sigma_{\tau_i}(\hat{P}_{\hat{U}}(O_i, O_{-i}))}

The first term follows from Eq. (35), and we have used \|P(R_i = r_i, O_{-i})\|_F \le P(R_i = r_i), which holds by Lemma 7. Thus, using Lemma 2, \sum_{r_i} P(R_i = r_i) = 1, and (1 + \sqrt{5})/2 \le 2:

    \xi_i^{leaf} \le \frac{8 k^{d_max}}{3} \left( \frac{\epsilon(O_i, O_{-i})}{\sigma_{\tau_i}(P(O_i, O_{-i}))^2} + \frac{\sum_{r_i} \epsilon(R_i = r_i, O_{-i})}{\sigma_{\tau_i}(P(O_i, O_{-i}))} \right)    (23)

Case \delta_i^{internal}: Here \tilde{P}(C_i) = P(O_{\alpha_i(1)}, ..., O_{\alpha_i(\gamma_i)}, O_{-i}) \times_{O_{-i}} P_{\hat{U}}(O_i, O_{-i})^{-1} \times_{O_{\alpha_i(1)}} \hat{U}_{\alpha_i(1)} \cdots \times_{O_{\alpha_i(\gamma_i)}} \hat{U}_{\alpha_i(\gamma_i)}. Similarly, \hat{P}(C_i) = \hat{P}(O_{\alpha_i(1)}, ..., O_{\alpha_i(\gamma_i)}, O_{-i}) \times_{O_{-i}} \hat{P}_{\hat{U}}(O_i, O_{-i})^{-1} \times_{O_{\alpha_i(1)}} \hat{U}_{\alpha_i(1)} \cdots \times_{O_{\alpha_i(\gamma_i)}} \hat{U}_{\alpha_i(\gamma_i)}.
Again, we use Lemma 8 to convert from the one norm to the spectral norm and Lemma 6 for submultiplicativity:

    \delta_i^{internal} = \|(\hat{P}(C_i) - \tilde{P}(C_i)) \times_{O_i} \tilde{F}_i \times_{O_{\alpha_i(1)}} \tilde{F}_{\alpha_i(1)}^{-1} \cdots \times_{O_{\alpha_i(\gamma_i)}} \tilde{F}_{\alpha_i(\gamma_i)}^{-1}\|_{1, S_i}
      \le k^{d_max} \|P[O_i | S_i]\|_{1, S_i} \|\hat{U}_i\|_2 \prod_{j=1}^{\gamma_i} \frac{1}{\sigma_{\tau_{\alpha_i(j)}}(\tilde{F}_{\alpha_i(j)})} \|\hat{P}(C_i) - \tilde{P}(C_i)\|_2    (24)

Note that \|P[O_i | S_i]\|_{1, S_i} = 1 and \|\hat{U}_i\|_2 = 1. Decomposing the difference as in the leaf case,

    \hat{P}(C_i) - \tilde{P}(C_i)
      = (\hat{P}(O_{\alpha_i(1)}, ..., O_{\alpha_i(\gamma_i)}, O_{-i}) - P(O_{\alpha_i(1)}, ..., O_{\alpha_i(\gamma_i)}, O_{-i})) \times_{O_{-i}} \hat{P}_{\hat{U}}(O_i, O_{-i})^{-1} \times_{O_{\alpha_i(1)}} \hat{U}_{\alpha_i(1)} \cdots
      + P(O_{\alpha_i(1)}, ..., O_{\alpha_i(\gamma_i)}, O_{-i}) \times_{O_{-i}} (\hat{P}_{\hat{U}}(O_i, O_{-i})^{-1} - P_{\hat{U}}(O_i, O_{-i})^{-1}) \times_{O_{\alpha_i(1)}} \hat{U}_{\alpha_i(1)} \cdots

which gives

    \|\hat{P}(C_i) - \tilde{P}(C_i)\|_2
      \le \frac{\epsilon(O_{\alpha_i(1)}, ..., O_{\alpha_i(\gamma_i)}, O_{-i})}{\sigma_{\tau_i}(\hat{P}_{\hat{U}}(O_i, O_{-i}))}
        + \frac{1 + \sqrt{5}}{2} \cdot \frac{\epsilon(O_i, O_{-i})}{\min(\sigma_{\tau_i}(\hat{P}_{\hat{U}}(O_i, O_{-i})), \sigma_{\tau_i}(P_{\hat{U}}(O_i, O_{-i})))^2}    (25)

The second term follows from Eq. (35). We have also used the fact that \|P(O_{\alpha_i(1)}, ..., O_{\alpha_i(\gamma_i)}, O_{-i})\|_F \le 1, via Lemma 7. Using Lemma 2,

    \delta_i^{internal} \le \frac{(2k)^{d_max}}{3^{d_max/2} \beta^{d_max}} \left( \frac{4}{3} \cdot \frac{\epsilon(O_{\alpha_i(1)}, ..., O_{\alpha_i(\gamma_i)}, O_{-i})}{\sigma_{\tau_i}(P(O_i, O_{-i}))} + \frac{8}{3} \cdot \frac{\epsilon(O_i, O_{-i})}{\sigma_{\tau_i}(P(O_i, O_{-i}))^2} \right)    (26)

    \delta_i^{internal} \le \frac{2^{d_max+3} k^{d_max}}{3^{d_max/2+1} \beta^{d_max}} \left( \frac{\epsilon(O_{\alpha_i(1)}, ..., O_{\alpha_i(\gamma_i)}, O_{-i})}{\sigma_{\tau_i}(P(O_i, O_{-i}))} + \frac{\epsilon(O_i, O_{-i})}{\sigma_{\tau_i}(P(O_i, O_{-i}))^2} \right)    (27)

3.6 Bounding the Propagation of Error

We now show how all these errors propagate over the junction tree. For this section, assume the clique nodes are numbered 1, 2, ..., |\mathcal{C}| in breadth-first order (so that 1 is the root). Moreover, let \hat{\Phi}_{1:c}(\mathcal{C}) be the transformed
factors accumulated so far if we computed the joint probability from the root down (instead of from the bottom up), using the estimated quantities \hat{P}(C_i); let \Phi_{1:c}(\mathcal{C}) be the analogous product formed from the \tilde{P}(C_i). For example,

    \hat{\Phi}_{1:1}(\mathcal{C}) = \hat{P}(C_1)    (28)
    \hat{\Phi}_{1:2}(\mathcal{C}) = \hat{P}(C_1) \times_{S_2} \hat{P}(C_2)    (29)
    \hat{\Phi}_{1:3}(\mathcal{C}) = \hat{P}(C_1) \times_{S_2} \hat{P}(C_2) \times_{S_3} \hat{P}(C_3)    (30)

(Note how this is very computationally inefficient: the tensors get very large. However, it is useful for proving statistical properties.) The modes of \Phi_{1:c} can then be partitioned into mode groups M_1, ..., M_{d_c} (where each mode group consists of the variables on the corresponding separator edge). We now prove the following lemma.

Lemma 4 Define \Delta = \max_i(\delta_i^{root}, \delta_i^{internal}, \xi_i^{leaf}). Then,

    \sum_{x_{1:c}} \|\hat{\Phi}_{1:c}(\mathcal{C}) - \Phi_{1:c}(\mathcal{C})\|_{1, M_1, ..., M_{d_c}} \le (1 + \Delta)^{c-1} \delta_1^{root} + (1 + \Delta)^{c-1} - 1    (31)

where x_{1:c} ranges over all the observed variables in cliques 1 through c. Note that when c = |\mathcal{C}|, \hat{\Phi}_{1:c}(\mathcal{C}) = \hat{P}[x_1, ..., x_{|O|}], and thus

    \sum_{x} |\hat{P}[x_1, ..., x_{|O|}] - P[x_1, ..., x_{|O|}]| \le (1 + \Delta)^{|\mathcal{C}|-1} \delta_1^{root} + (1 + \Delta)^{|\mathcal{C}|-1} - 1    (32)

Proof. The proof is by induction on c. The base case follows directly from the definition of \delta_1^{root}. For the induction step, assume the claim holds for c; we prove that it holds for c + 1:

    \sum_{x_{1:c+1}} \|\hat{\Phi}_{1:c+1}(\mathcal{C}) - \Phi_{1:c+1}(\mathcal{C})\|_{1, M_1, ..., M_{d_{c+1}}}
      = \sum_{x_{1:c+1}} \|\Phi_{1:c}(\mathcal{C})(\hat{P}(C_{c+1}) - \tilde{P}(C_{c+1})) + (\hat{\Phi}_{1:c}(\mathcal{C}) - \Phi_{1:c}(\mathcal{C}))(\hat{P}(C_{c+1}) - \tilde{P}(C_{c+1})) + (\hat{\Phi}_{1:c}(\mathcal{C}) - \Phi_{1:c}(\mathcal{C}))\tilde{P}(C_{c+1})\|_{1, M_1, ..., M_{d_{c+1}}}
      \le \sum_{x_{1:c+1}} \|\Phi_{1:c}(\mathcal{C})(\hat{P}(C_{c+1}) - \tilde{P}(C_{c+1}))\|_{1, M_1, ..., M_{d_{c+1}}}
        + \sum_{x_{1:c+1}} \|(\hat{\Phi}_{1:c}(\mathcal{C}) - \Phi_{1:c}(\mathcal{C}))(\hat{P}(C_{c+1}) - \tilde{P}(C_{c+1}))\|_{1, M_1, ..., M_{d_{c+1}}}
        + \sum_{x_{1:c+1}} \|(\hat{\Phi}_{1:c}(\mathcal{C}) - \Phi_{1:c}(\mathcal{C}))\tilde{P}(C_{c+1})\|_{1, M_1, ..., M_{d_{c+1}}}

Now we consider two cases: when C_{c+1} is an internal clique and when it is a leaf clique.
Case 1: Internal clique. Here the summation over x_{c+1} is irrelevant, since C_{c+1} contains no evidence. We use Lemma 5 to break up the three terms.

The first term:

    \|\Phi_{1:c}(\mathcal{C})(\hat{P}(C_{c+1}) - \tilde{P}(C_{c+1}))\|_{1, M_1, ..., M_{d_{c+1}}}
      \le \|(\hat{P}(C_{c+1}) - \tilde{P}(C_{c+1})) \times_{O_{c+1}} \tilde{F}_{c+1} \times_{O_{\alpha_{c+1}(1)}} \tilde{F}_{\alpha_{c+1}(1)}^{-1} \cdots\|_{1, S_{c+1}} \cdot \|\Phi_{1:c}(\mathcal{C})\|_{1, M_1, ..., M_{d_c}}

The first factor above is simply \delta_{c+1}^{internal}, while the second equals one, since it is a joint distribution.

The second term:

    \|(\hat{\Phi}_{1:c}(\mathcal{C}) - \Phi_{1:c}(\mathcal{C}))(\hat{P}(C_{c+1}) - \tilde{P}(C_{c+1}))\|_{1, M_1, ..., M_{d_{c+1}}}
      \le \|\hat{\Phi}_{1:c}(\mathcal{C}) - \Phi_{1:c}(\mathcal{C})\|_{1, M_1, ..., M_{d_c}} \cdot \delta_{c+1}^{internal}
      \le ((1 + \Delta)^{c-1} \delta_1^{root} + (1 + \Delta)^{c-1} - 1) \Delta    (via the induction hypothesis)

The third term:

    \|(\hat{\Phi}_{1:c}(\mathcal{C}) - \Phi_{1:c}(\mathcal{C}))\tilde{P}(C_{c+1})\|_{1, M_1, ..., M_{d_{c+1}}}
      \le \|\hat{\Phi}_{1:c}(\mathcal{C}) - \Phi_{1:c}(\mathcal{C})\|_{1, M_1, ..., M_{d_c}} \cdot \|\tilde{P}(C_{c+1}) \times_{O_{c+1}} \tilde{F}_{c+1} \times_{O_{\alpha_{c+1}(1)}} \tilde{F}_{\alpha_{c+1}(1)}^{-1} \cdots\|_{1, S_{c+1}}
      \le (1 + \Delta)^{c-1} \delta_1^{root} + (1 + \Delta)^{c-1} - 1

where the second factor equals one because, once transformed back, it is a conditional probability table.

Case 2: Leaf clique. Again we use Lemma 5 to break up the three terms, now keeping the summation over the evidence x_{c+1}.

The first term:

    \sum_{x_{1:c+1}} \|\Phi_{1:c}(\mathcal{C})(\hat{P}_{r_{c+1}}(C_{c+1}) - \tilde{P}_{r_{c+1}}(C_{c+1}))\|_{1, M_1, ..., M_{d_{c+1}}}
      \le \|\Phi_{1:c}(\mathcal{C})\|_{1, M_1, ..., M_{d_c}} \sum_{x_{c+1}} \|(\hat{P}_{r_{c+1}}(C_{c+1}) - \tilde{P}_{r_{c+1}}(C_{c+1})) \times_{O_{c+1}} \tilde{F}_{c+1}\|_{1, S_{c+1}}
      \le 1 \cdot \xi_{c+1}^{leaf}

The first factor equals one because it is a joint distribution, and the second factor is the bound on the transformed leaf quantity proved earlier (since r_{c+1} = x_{c+1}).

The second term:

    \sum_{x_{1:c+1}} \|(\hat{\Phi}_{1:c}(\mathcal{C}) - \Phi_{1:c}(\mathcal{C}))(\hat{P}_{r_{c+1}}(C_{c+1}) - \tilde{P}_{r_{c+1}}(C_{c+1}))\|_{1, M_1, ..., M_{d_{c+1}}}
      \le ((1 + \Delta)^{c-1} \delta_1^{root} + (1 + \Delta)^{c-1} - 1) \xi_{c+1}^{leaf}
      \le ((1 + \Delta)^{c-1} \delta_1^{root} + (1 + \Delta)^{c-1} - 1) \Delta

The third term:

    \sum_{x_{1:c+1}} \|(\hat{\Phi}_{1:c}(\mathcal{C}) - \Phi_{1:c}(\mathcal{C}))\tilde{P}_{r_{c+1}}(C_{c+1})\|_{1, M_1, ..., M_{d_{c+1}}}
      \le \|\hat{\Phi}_{1:c}(\mathcal{C}) - \Phi_{1:c}(\mathcal{C})\|_{1, M_1, ..., M_{d_c}} \sum_{x_{c+1}} \|\tilde{P}_{r_{c+1}}(C_{c+1}) \times_{O_{c+1}} \tilde{F}_{c+1}\|_{1, S_{c+1}}
      \le (1 + \Delta)^{c-1} \delta_1^{root} + (1 + \Delta)^{c-1} - 1
Combining these terms proves the induction step.

3.7 Putting It All Together

We use the fact from HKZ [2] that (1 + a/t)^t \le 1 + 2a for a \le 1/2. Thus \Delta is the main source of error, and we set \Delta \le O(\epsilon_{total}/|\mathcal{C}|). By Lemma 3, it suffices to require

    \frac{2^{d_max+3} k^{d_max}}{3^{d_max/2+1} \beta^{d_max}} \left( \frac{\sum_{o_{d-e+1}, ..., o_d} \epsilon(O_1, ..., O_{d-e}, o_{d-e+1}, ..., o_d)}{\alpha} + \frac{\sum_{o_{d-e+1}, ..., o_d} \epsilon(O_1, ..., O_{d-e}, o_{d-e+1}, ..., o_d)}{\alpha^2} \right) \le K \epsilon_{total} / |\mathcal{C}|

where K is some constant. This implies

    \sum_{o_{d-e+1}, ..., o_d} \epsilon(O_1, ..., O_{d-e}, o_{d-e+1}, ..., o_d) \le \frac{K \, 3^{d_max/2+1} \alpha^2 \beta^{d_max} \epsilon_{total}}{2^{d_max+3} k^{d_max} |\mathcal{C}|}    (33)

Now using the concentration bound (Lemma 1) gives the requirement

    \sqrt{ \frac{k^{e_max} o^{e_max}}{N} \ln \frac{2|\mathcal{C}|}{\delta} } + \sqrt{ \frac{k^{e_max} o^{e_max}}{N} } \le \frac{K \, 3^{d_max/2+1} \alpha^2 \beta^{d_max} \epsilon_{total}}{2^{d_max+3} k^{d_max} |\mathcal{C}|}

Solving for N:

    N \ge O\left( \frac{(2k)^{2 d_max} k^{e_max} o^{e_max} |\mathcal{C}|^2 \ln \frac{|\mathcal{C}|}{\delta}}{3^{d_max} \alpha^4 \beta^{2 d_max} \epsilon_{total}^2} \right)    (34)

and this completes the proof.

4 Appendix

4.1 Matrix Perturbation Bounds

This is Theorem 3.8 from Stewart and Sun [4]. Let A \in R^{m \times n} with m \ge n, and let \tilde{A} = A + E. Then

    \|\tilde{A}^+ - A^+\|_2 \le \frac{1 + \sqrt{5}}{2} \max(\|A^+\|_2^2, \|\tilde{A}^+\|_2^2) \|E\|_2    (35)

4.2 Tensor Norm Bounds

For matrices it is true that \|Mv\|_1 \le \|M\|_1 \|v\|_1. We prove the generalization to tensors.

Lemma 5 Let T_1 and T_2 be tensors, and let \sigma be a set of (labeled) modes shared by them. Then

    \|T_1 \times_\sigma T_2\|_1 \le \|T_1\|_1 \|T_2\|_{1, \sigma}    (36)
Proof.

    \|T_1 \times_\sigma T_2\|_1
      = \sum_{i \notin \sigma} \sum_{j \notin \sigma} \left| \sum_x T_1(i, \sigma = x) T_2(j, \sigma = x) \right|
      \le \sum_{i \notin \sigma} \sum_x |T_1(i, \sigma = x)| \sum_{j \notin \sigma} |T_2(j, \sigma = x)|
      \le \left( \max_x \sum_{j \notin \sigma} |T_2(j, \sigma = x)| \right) \|T_1\|_1
      = \|T_2\|_{1, \sigma} \|T_1\|_1

We prove a restricted analog of the fact that the spectral norm is submultiplicative for matrices, i.e., \|AB\|_2 \le \|A\|_2 \|B\|_2.

Lemma 6 Let T be a tensor of order N and let M be a matrix. Then

    \|T \times_1 M\|_2 \le \|T\|_2 \|M\|_2    (37)

Proof.

    \|T \times_1 M\|_2
      = \sup_{v_m, v_2, ..., v_N} \sum_{i_1, ..., i_N, m} T(i_1, ..., i_N) M(i_1, m) v_m(m) v_2(i_2) \cdots v_N(i_N)
      = \sup_{v_m, v_2, ..., v_N} T \times_1 (M v_m) \times_2 v_2 \cdots \times_N v_N
      \le \sup_{\|u\|_2 \le \|M\|_2, \, v_2, ..., v_N} T \times_1 u \times_2 v_2 \cdots \times_N v_N
      \le \|M\|_2 \|T\|_2

since \|M v_m\|_2 \le \|M\|_2 whenever \|v_m\|_2 \le 1.

Lemma 7 Let T be a tensor of order N and let v_1, ..., v_n be vectors with \|v_i\|_2 \le 1. Then

    \|T \times_1 v_1 \cdots \times_n v_n\|_F \le \|T \times_1 v_1 \cdots \times_{n-1} v_{n-1}\|_F \le \cdots \le \|T\|_F    (38)

Proof. It suffices to show that \sup_{\|v\|_2 \le 1} \|T \times_n v\|_F \le \|T\|_F. By submultiplicativity of the Frobenius norm, \|T \times_n v\|_F \le \|T\|_F \|v\|_F \le \|T\|_F, since \|v\|_F = \|v\|_2.

Lemma 8 Let T be a tensor of order N, where each mode is of dimension k. Then, for any \sigma,

    \|T\|_{1, \sigma} \le k^N \|T\|_2    (39)

Proof. It suffices to prove this for \sigma = \emptyset (which corresponds to the elementwise one norm), since \|T\|_{1, \sigma} \le \|T\|_1 for any \sigma. Note that \|T\|_1 \le k^N \max(|T|) (where \max(|T|) is the largest element of T in absolute value). Similarly, \max(|T|) \le \|T\|_2, which implies that \|T\|_1 \le k^N \|T\|_2.
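The two inequalities used in the proof of Lemma 8 are easy to sanity-check numerically on a random tensor (our own check; a single matricization stands in for R(·)):

```python
import numpy as np

rng = np.random.default_rng(1)
k, N = 3, 3
T = rng.standard_normal((k,) * N)   # order-N tensor, each mode of dimension k

one = np.abs(T).sum()                        # elementwise one norm ||T||_1
mx = np.abs(T).max()                         # max(|T|)
spec = np.linalg.norm(T.reshape(k, -1), 2)   # spectral norm of a matricization

# ||T||_1 <= k^N * max(|T|), and max(|T|) <= ||T||_2, giving Lemma 8.
print(one <= k**N * mx <= k**N * spec)  # True
```

The second inequality holds because every entry of a matricization can be extracted with unit coordinate vectors, so no entry can exceed the spectral norm.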
References

[1] R.A. Horn and C.R. Johnson. Matrix Analysis. Cambridge University Press, 1990.

[2] D. Hsu, S. Kakade, and T. Zhang. A spectral algorithm for learning hidden Markov models. In Proc. Annual Conf. Computational Learning Theory, 2009.

[3] N.H. Nguyen, P. Drineas, and T.D. Tran. Tensor sparsification via a bound on the spectral norm of random tensors. arXiv preprint, 2010.

[4] G.W. Stewart and J. Sun. Matrix Perturbation Theory. Academic Press, 1990.
More information2.3 Algebraic approach to limits
CHAPTER 2. LIMITS 32 2.3 Algebraic approac to its Now we start to learn ow to find its algebraically. Tis starts wit te simplest possible its, and ten builds tese up to more complicated examples. Fact.
More information. If lim. x 2 x 1. f(x+h) f(x)
Review of Differential Calculus Wen te value of one variable y is uniquely determined by te value of anoter variable x, ten te relationsip between x and y is described by a function f tat assigns a value
More informationChapter 5 FINITE DIFFERENCE METHOD (FDM)
MEE7 Computer Modeling Tecniques in Engineering Capter 5 FINITE DIFFERENCE METHOD (FDM) 5. Introduction to FDM Te finite difference tecniques are based upon approximations wic permit replacing differential
More informationREVIEW LAB ANSWER KEY
REVIEW LAB ANSWER KEY. Witout using SN, find te derivative of eac of te following (you do not need to simplify your answers): a. f x 3x 3 5x x 6 f x 3 3x 5 x 0 b. g x 4 x x x notice te trick ere! x x g
More informationExplicit Interleavers for a Repeat Accumulate Accumulate (RAA) code construction
Eplicit Interleavers for a Repeat Accumulate Accumulate RAA code construction Venkatesan Gurusami Computer Science and Engineering University of Wasington Seattle, WA 98195, USA Email: venkat@csasingtonedu
More informationOrder of Accuracy. ũ h u Ch p, (1)
Order of Accuracy 1 Terminology We consider a numerical approximation of an exact value u. Te approximation depends on a small parameter, wic can be for instance te grid size or time step in a numerical
More information3.1 Extreme Values of a Function
.1 Etreme Values of a Function Section.1 Notes Page 1 One application of te derivative is finding minimum and maimum values off a grap. In precalculus we were only able to do tis wit quadratics by find
More informationHOW TO DEAL WITH FFT SAMPLING INFLUENCES ON ADEV CALCULATIONS
HOW TO DEAL WITH FFT SAMPLING INFLUENCES ON ADEV CALCULATIONS Po-Ceng Cang National Standard Time & Frequency Lab., TL, Taiwan 1, Lane 551, Min-Tsu Road, Sec. 5, Yang-Mei, Taoyuan, Taiwan 36 Tel: 886 3
More informationPolynomial Interpolation
Capter 4 Polynomial Interpolation In tis capter, we consider te important problem of approximatinga function fx, wose values at a set of distinct points x, x, x,, x n are known, by a polynomial P x suc
More information1. Consider the trigonometric function f(t) whose graph is shown below. Write down a possible formula for f(t).
. Consider te trigonometric function f(t) wose grap is sown below. Write down a possible formula for f(t). Tis function appears to be an odd, periodic function tat as been sifted upwards, so we will use
More informationBob Brown Math 251 Calculus 1 Chapter 3, Section 1 Completed 1 CCBC Dundalk
Bob Brown Mat 251 Calculus 1 Capter 3, Section 1 Completed 1 Te Tangent Line Problem Te idea of a tangent line first arises in geometry in te context of a circle. But before we jump into a discussion of
More informationch (for some fixed positive number c) reaching c
GSTF Journal of Matematics Statistics and Operations Researc (JMSOR) Vol. No. September 05 DOI 0.60/s4086-05-000-z Nonlinear Piecewise-defined Difference Equations wit Reciprocal and Cubic Terms Ramadan
More informationMore on generalized inverses of partitioned matrices with Banachiewicz-Schur forms
More on generalized inverses of partitioned matrices wit anaciewicz-scur forms Yongge Tian a,, Yosio Takane b a Cina Economics and Management cademy, Central University of Finance and Economics, eijing,
More informationMath Spring 2013 Solutions to Assignment # 3 Completion Date: Wednesday May 15, (1/z) 2 (1/z 1) 2 = lim
Mat 311 - Spring 013 Solutions to Assignment # 3 Completion Date: Wednesday May 15, 013 Question 1. [p 56, #10 (a)] 4z Use te teorem of Sec. 17 to sow tat z (z 1) = 4. We ave z 4z (z 1) = z 0 4 (1/z) (1/z
More informationVolume 29, Issue 3. Existence of competitive equilibrium in economies with multi-member households
Volume 29, Issue 3 Existence of competitive equilibrium in economies wit multi-member ouseolds Noriisa Sato Graduate Scool of Economics, Waseda University Abstract Tis paper focuses on te existence of
More informationOnline Learning: Bandit Setting
Online Learning: Bandit Setting Daniel asabi Summer 04 Last Update: October 0, 06 Introduction [TODO Bandits. Stocastic setting Suppose tere exists unknown distributions ν,..., ν, suc tat te loss at eac
More informationGlobal Output Feedback Stabilization of a Class of Upper-Triangular Nonlinear Systems
9 American Control Conference Hyatt Regency Riverfront St Louis MO USA June - 9 FrA4 Global Output Feedbac Stabilization of a Class of Upper-Triangular Nonlinear Systems Cunjiang Qian Abstract Tis paper
More informationDerivatives of Exponentials
mat 0 more on derivatives: day 0 Derivatives of Eponentials Recall tat DEFINITION... An eponential function as te form f () =a, were te base is a real number a > 0. Te domain of an eponential function
More informationLecture 21. Numerical differentiation. f ( x+h) f ( x) h h
Lecture Numerical differentiation Introduction We can analytically calculate te derivative of any elementary function, so tere migt seem to be no motivation for calculating derivatives numerically. However
More information1. Introduction. We consider the model problem: seeking an unknown function u satisfying
A DISCONTINUOUS LEAST-SQUARES FINITE ELEMENT METHOD FOR SECOND ORDER ELLIPTIC EQUATIONS XIU YE AND SHANGYOU ZHANG Abstract In tis paper, a discontinuous least-squares (DLS) finite element metod is introduced
More informationPolynomial Interpolation
Capter 4 Polynomial Interpolation In tis capter, we consider te important problem of approximating a function f(x, wose values at a set of distinct points x, x, x 2,,x n are known, by a polynomial P (x
More information1. Which one of the following expressions is not equal to all the others? 1 C. 1 D. 25x. 2. Simplify this expression as much as possible.
004 Algebra Pretest answers and scoring Part A. Multiple coice questions. Directions: Circle te letter ( A, B, C, D, or E ) net to te correct answer. points eac, no partial credit. Wic one of te following
More informationNumerical Experiments Using MATLAB: Superconvergence of Nonconforming Finite Element Approximation for Second-Order Elliptic Problems
Applied Matematics, 06, 7, 74-8 ttp://wwwscirporg/journal/am ISSN Online: 5-7393 ISSN Print: 5-7385 Numerical Experiments Using MATLAB: Superconvergence of Nonconforming Finite Element Approximation for
More information2.1 THE DEFINITION OF DERIVATIVE
2.1 Te Derivative Contemporary Calculus 2.1 THE DEFINITION OF DERIVATIVE 1 Te grapical idea of a slope of a tangent line is very useful, but for some uses we need a more algebraic definition of te derivative
More informationQuantum Mechanics Chapter 1.5: An illustration using measurements of particle spin.
I Introduction. Quantum Mecanics Capter.5: An illustration using measurements of particle spin. Quantum mecanics is a teory of pysics tat as been very successful in explaining and predicting many pysical
More informationSin, Cos and All That
Sin, Cos and All Tat James K. Peterson Department of Biological Sciences and Department of Matematical Sciences Clemson University Marc 9, 2017 Outline Sin, Cos and all tat! A New Power Rule Derivatives
More informationRegularized Regression
Regularized Regression David M. Blei Columbia University December 5, 205 Modern regression problems are ig dimensional, wic means tat te number of covariates p is large. In practice statisticians regularize
More information1 The concept of limits (p.217 p.229, p.242 p.249, p.255 p.256) 1.1 Limits Consider the function determined by the formula 3. x since at this point
MA00 Capter 6 Calculus and Basic Linear Algebra I Limits, Continuity and Differentiability Te concept of its (p.7 p.9, p.4 p.49, p.55 p.56). Limits Consider te function determined by te formula f Note
More informationMinimizing D(Q,P) def = Q(h)
Inference Lecture 20: Variational Metods Kevin Murpy 29 November 2004 Inference means computing P( i v), were are te idden variables v are te visible variables. For discrete (eg binary) idden nodes, exact
More informationLecture 15. Interpolation II. 2 Piecewise polynomial interpolation Hermite splines
Lecture 5 Interpolation II Introduction In te previous lecture we focused primarily on polynomial interpolation of a set of n points. A difficulty we observed is tat wen n is large, our polynomial as to
More informationCalculus I Homework: The Derivative as a Function Page 1
Calculus I Homework: Te Derivative as a Function Page 1 Example (2.9.16) Make a careful sketc of te grap of f(x) = sin x and below it sketc te grap of f (x). Try to guess te formula of f (x) from its grap.
More informationTopics in Generalized Differentiation
Topics in Generalized Differentiation J. Marsall As Abstract Te course will be built around tree topics: ) Prove te almost everywere equivalence of te L p n-t symmetric quantum derivative and te L p Peano
More informationMathematics 5 Worksheet 11 Geometry, Tangency, and the Derivative
Matematics 5 Workseet 11 Geometry, Tangency, and te Derivative Problem 1. Find te equation of a line wit slope m tat intersects te point (3, 9). Solution. Te equation for a line passing troug a point (x
More informationDedicated to the 70th birthday of Professor Lin Qun
Journal of Computational Matematics, Vol.4, No.3, 6, 4 44. ACCELERATION METHODS OF NONLINEAR ITERATION FOR NONLINEAR PARABOLIC EQUATIONS Guang-wei Yuan Xu-deng Hang Laboratory of Computational Pysics,
More informationERROR BOUNDS FOR THE METHODS OF GLIMM, GODUNOV AND LEVEQUE BRADLEY J. LUCIER*
EO BOUNDS FO THE METHODS OF GLIMM, GODUNOV AND LEVEQUE BADLEY J. LUCIE* Abstract. Te expected error in L ) attimet for Glimm s sceme wen applied to a scalar conservation law is bounded by + 2 ) ) /2 T
More informationChapter 2 Limits and Continuity
4 Section. Capter Limits and Continuity Section. Rates of Cange and Limits (pp. 6) Quick Review.. f () ( ) () 4 0. f () 4( ) 4. f () sin sin 0 4. f (). 4 4 4 6. c c c 7. 8. c d d c d d c d c 9. 8 ( )(
More informationCS522 - Partial Di erential Equations
CS5 - Partial Di erential Equations Tibor Jánosi April 5, 5 Numerical Di erentiation In principle, di erentiation is a simple operation. Indeed, given a function speci ed as a closed-form formula, its
More informationOn convexity of polynomial paths and generalized majorizations
On convexity of polynomial pats and generalized majorizations Marija Dodig Centro de Estruturas Lineares e Combinatórias, CELC, Universidade de Lisboa, Av. Prof. Gama Pinto 2, 1649-003 Lisboa, Portugal
More informationHOMEWORK HELP 2 FOR MATH 151
HOMEWORK HELP 2 FOR MATH 151 Here we go; te second round of omework elp. If tere are oters you would like to see, let me know! 2.4, 43 and 44 At wat points are te functions f(x) and g(x) = xf(x)continuous,
More informationLecture XVII. Abstract We introduce the concept of directional derivative of a scalar function and discuss its relation with the gradient operator.
Lecture XVII Abstract We introduce te concept of directional derivative of a scalar function and discuss its relation wit te gradient operator. Directional derivative and gradient Te directional derivative
More informationContinuity. Example 1
Continuity MATH 1003 Calculus and Linear Algebra (Lecture 13.5) Maoseng Xiong Department of Matematics, HKUST A function f : (a, b) R is continuous at a point c (a, b) if 1. x c f (x) exists, 2. f (c)
More informationName: Answer Key No calculators. Show your work! 1. (21 points) All answers should either be,, a (finite) real number, or DNE ( does not exist ).
Mat - Final Exam August 3 rd, Name: Answer Key No calculators. Sow your work!. points) All answers sould eiter be,, a finite) real number, or DNE does not exist ). a) Use te grap of te function to evaluate
More informationGradient Descent etc.
1 Gradient Descent etc EE 13: Networked estimation and control Prof Kan) I DERIVATIVE Consider f : R R x fx) Te derivative is defined as d fx) = lim dx fx + ) fx) Te cain rule states tat if d d f gx) )
More informationLab 6 Derivatives and Mutant Bacteria
Lab 6 Derivatives and Mutant Bacteria Date: September 27, 20 Assignment Due Date: October 4, 20 Goal: In tis lab you will furter explore te concept of a derivative using R. You will use your knowledge
More informationDigital Filter Structures
Digital Filter Structures Te convolution sum description of an LTI discrete-time system can, in principle, be used to implement te system For an IIR finite-dimensional system tis approac is not practical
More informationTaylor Series and the Mean Value Theorem of Derivatives
1 - Taylor Series and te Mean Value Teorem o Derivatives Te numerical solution o engineering and scientiic problems described by matematical models oten requires solving dierential equations. Dierential
More informationSolutions to the Multivariable Calculus and Linear Algebra problems on the Comprehensive Examination of January 31, 2014
Solutions to te Multivariable Calculus and Linear Algebra problems on te Compreensive Examination of January 3, 24 Tere are 9 problems ( points eac, totaling 9 points) on tis portion of te examination.
More informationMTH 119 Pre Calculus I Essex County College Division of Mathematics Sample Review Questions 1 Created April 17, 2007
MTH 9 Pre Calculus I Essex County College Division of Matematics Sample Review Questions Created April 7, 007 At Essex County College you sould be prepared to sow all work clearly and in order, ending
More informationMVT and Rolle s Theorem
AP Calculus CHAPTER 4 WORKSHEET APPLICATIONS OF DIFFERENTIATION MVT and Rolle s Teorem Name Seat # Date UNLESS INDICATED, DO NOT USE YOUR CALCULATOR FOR ANY OF THESE QUESTIONS In problems 1 and, state
More informationSome Review Problems for First Midterm Mathematics 1300, Calculus 1
Some Review Problems for First Midterm Matematics 00, Calculus. Consider te trigonometric function f(t) wose grap is sown below. Write down a possible formula for f(t). Tis function appears to be an odd,
More informationParameter Fitted Scheme for Singularly Perturbed Delay Differential Equations
International Journal of Applied Science and Engineering 2013. 11, 4: 361-373 Parameter Fitted Sceme for Singularly Perturbed Delay Differential Equations Awoke Andargiea* and Y. N. Reddyb a b Department
More informationFinite Difference Method
Capter 8 Finite Difference Metod 81 2nd order linear pde in two variables General 2nd order linear pde in two variables is given in te following form: L[u] = Au xx +2Bu xy +Cu yy +Du x +Eu y +Fu = G According
More informationWHEN GENERALIZED SUMSETS ARE DIFFERENCE DOMINATED
WHEN GENERALIZED SUMSETS ARE DIFFERENCE DOMINATED VIRGINIA HOGAN AND STEVEN J. MILLER Abstract. We study te relationsip between te number of minus signs in a generalized sumset, A + + A A, and its cardinality;
More information7 Semiparametric Methods and Partially Linear Regression
7 Semiparametric Metods and Partially Linear Regression 7. Overview A model is called semiparametric if it is described by and were is nite-dimensional (e.g. parametric) and is in nite-dimensional (nonparametric).
More informationSolve exponential equations in one variable using a variety of strategies. LEARN ABOUT the Math. What is the half-life of radon?
8.5 Solving Exponential Equations GOAL Solve exponential equations in one variable using a variety of strategies. LEARN ABOUT te Mat All radioactive substances decrease in mass over time. Jamie works in
More informationLIMITATIONS OF EULER S METHOD FOR NUMERICAL INTEGRATION
LIMITATIONS OF EULER S METHOD FOR NUMERICAL INTEGRATION LAURA EVANS.. Introduction Not all differential equations can be explicitly solved for y. Tis can be problematic if we need to know te value of y
More information