Exam Solutions
Q1. Constructing a balanced sequence containing three kinds of stimuli

Here we design a balanced cyclic sequence for three kinds of stimuli (labeled {0, 1, 2}), in which every three-element sequence (except for the sequence {0, 0, 0}) occurs exactly once. We do this by extending the finite field GF(3) to make a field of size 27, GF(27). GF(3) is the field containing {0, 1, 2}, with addition and multiplication defined (mod 3). The polynomial $x^3 + 2x + 1 = 0$ has no solutions in GF(3), so we add a formal quantity $\xi$ to GF(3), and assert that $\xi^3 + 2\xi + 1 = 0$, and that $\xi$ satisfies the associative, commutative, and distributive laws for addition and multiplication with itself and with the elements of GF(3).

A. Using $\xi^3 + 2\xi + 1 = 0$, express $\xi^r$ in terms of $\xi^2$, $\xi^1$, and $\xi^0$, for $r = 3, \ldots, 26$. Note that the sequence of coefficients of $\xi^0$, considered cyclically, has the property that every sequence of three labels (except for the sequence {0, 0, 0}) occurs exactly once.

Field operations are mod 3, i.e., we can replace 3 by 0, $-1$ by 2, $-2$ by 1, etc. So, for example, $\xi^3 + 2\xi + 1 = 0$ implies $\xi^3 = -2\xi - 1 = \xi + 2$. So, using the field properties and the equation satisfied by $\xi$, we find for example that $\xi^4 = \xi \cdot \xi^3 = \xi(\xi + 2) = \xi^2 + 2\xi$, and $\xi^5 = \xi \cdot \xi^4 = \xi^3 + 2\xi^2 = 2\xi^2 + \xi + 2$. Working similarly, the table of coefficients (each power written as $a\xi^2 + b\xi + c$, with the coefficients a, b, c listed alongside) is:

    k     ξ^k                   a  b  c
    0     1                     0  0  1
    1     ξ                     0  1  0
    2     ξ^2                   1  0  0
    3     ξ + 2                 0  1  2
    4     ξ^2 + 2ξ              1  2  0
    5     2ξ^2 + ξ + 2          2  1  2
    6     ξ^2 + ξ + 1           1  1  1
    7     ξ^2 + 2ξ + 2          1  2  2
    8     2ξ^2 + 2              2  0  2
    9     ξ + 1                 0  1  1
    10    ξ^2 + ξ               1  1  0
    11    ξ^2 + ξ + 2           1  1  2
    12    ξ^2 + 2               1  0  2
    13    2                     0  0  2
    14    2ξ                    0  2  0
    15    2ξ^2                  2  0  0
    16    2ξ + 1                0  2  1
    17    2ξ^2 + ξ              2  1  0
    18    ξ^2 + 2ξ + 1          1  2  1
    19    2ξ^2 + 2ξ + 2         2  2  2
    20    2ξ^2 + ξ + 1          2  1  1
    21    ξ^2 + 1               1  0  1
    22    2ξ + 2                0  2  2
    23    2ξ^2 + 2ξ             2  2  0
    24    2ξ^2 + 2ξ + 1         2  2  1
    25    2ξ^2 + 1              2  0  1
    26    1                     0  0  1
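As a cross-check (our sketch, not part of the original exam; all names are our own), the table can be rebuilt by repeatedly multiplying by $\xi$ and reducing with $\xi^3 = \xi + 2$, and the de-Bruijn-like property of the constant-term sequence can be verified directly:

```python
# Rebuild the powers of xi in GF(27) = GF(3)[x]/(x^3 + 2x + 1), and check that
# every nonzero triple occurs exactly once in the cyclic constant-term sequence.

def times_xi(p):
    """Multiply a*x^2 + b*x + c (stored as [c, b, a], coefficients in GF(3))
    by x, reducing with x^3 = x + 2 (mod 3)."""
    c, b, a = p
    # x*(a x^2 + b x + c) = a x^3 + b x^2 + c x = b x^2 + (a + c) x + 2a
    return [(2 * a) % 3, (a + c) % 3, b]

powers = []
p = [1, 0, 0]                       # xi^0 = 1
for k in range(26):
    powers.append(p)
    p = times_xi(p)
assert p == [1, 0, 0]               # xi^26 = 1: the multiplicative group has order 26

seq = [q[0] for q in powers]        # coefficients of xi^0, for k = 0, ..., 25
windows = {tuple(seq[(k + i) % 26] for i in range(3)) for k in range(26)}
assert len(windows) == 26 and (0, 0, 0) not in windows   # all 26 nonzero triples, once each
print(seq)
```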

B. Consider the sequence of the coefficients of $\xi^0$ that occur in the expansion of $\xi^k$, for $k = 0, \ldots, 25$. Note that the second half of the sequence can be obtained from the first half of the sequence by exchanging 1 and 2. Why must this be true?

As the above table shows, $\xi^{13} = 2$. So $\xi^{k+13} = 2\xi^k$, so the coefficient of $\xi^0$ in $\xi^{k+13}$ is twice the coefficient of $\xi^0$ in $\xi^k$, which means that 1 is replaced by $2 \cdot 1 = 2$, and 2 is replaced by $2 \cdot 2 = 4 = 1$.

C. What is the size of the multiplicative group generated by $\xi$? Show that for all elements g in this group, $g^{26} = 1$.

The group has 26 elements, $\xi^0 = 1, \xi, \xi^2, \ldots, \xi^{25}$, since the first time that the identity recurs in the above table is $\xi^{26} = 1$. (This in turn means that the above table contains all the nonzero elements of the field GF(27), since this field has 27 elements.) Since the order of every element of a finite group must divide the size of the group, the only possible orders are the factors of 26, namely 1, 2, 13, and 26. To see that this implies that $g^{26} = 1$ for all group elements g: if $g^1 = 1$, then $g^{26} = (g^1)^{26} = 1$; if $g^2 = 1$, then $g^{26} = (g^2)^{13} = 1$; and if $g^{13} = 1$, then $g^{26} = (g^{13})^2 = 1$. So in all cases, $g^{26} = 1$.

D. Which of the following maps are automorphisms of the multiplicative group (i.e., are 1-1 maps that preserve multiplication)? $U(g) = g^2$, $X(g) = g^3$, $Y(g) = g^{-1}$.

$U(g) = g^2$ cannot be an automorphism, since $\xi^{13} = 2 \neq 1$ but $U(\xi^{13}) = \xi^{26} = 1 = U(1)$, so U maps two different elements onto 1.

$X(g) = g^3$ is an automorphism; its inverse is $X \circ X$, since $X(X(X(g))) = ((g^3)^3)^3 = g^{27} = g^{26}\,g = g$.

$Y(g) = g^{-1}$ is an automorphism, since the group is commutative: $Y(gh) = (gh)^{-1} = g^{-1}h^{-1} = Y(g)Y(h)$. It is its own inverse, since $Y(Y(g)) = (g^{-1})^{-1} = g$.
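Part D can also be checked in the exponent (again our sketch, not the exam's): since the multiplicative group is cyclic of order 26, writing $g = \xi^k$ turns U, X, and Y into the maps $k \mapsto 2k$, $k \mapsto 3k$, and $k \mapsto -k$ (mod 26), and a map is an automorphism exactly when it is a bijection:

```python
# Check part D on exponents: g = xi^k, so U, X, Y act as k -> 2k, 3k, -k (mod 26).
N = 26
U_img = {(2 * k) % N for k in range(N)}
X_img = {(3 * k) % N for k in range(N)}
Y_img = {(-k) % N for k in range(N)}
print(len(U_img), len(X_img), len(Y_img))   # 13 26 26: U is not 1-1; X and Y are

# X's inverse is X o X: cubing three times raises to the 27th power,
# and 27 = 26 + 1, so k -> 27k = k (mod 26).
assert all((3 * 3 * 3 * k) % N == k for k in range(N))
```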

E. Which of the above maps are automorphisms of the field GF(27) (i.e., are 1-1 maps that preserve multiplication and addition)?

$U(g) = g^2$ cannot be a field automorphism, since (as in D) it is not even an automorphism of the multiplicative group.

$X(g) = g^3$ is a field automorphism, since $X(g + h) = (g + h)^3 = g^3 + 3g^2h + 3gh^2 + h^3 = g^3 + h^3 = X(g) + X(h)$; the cross terms vanish because $3 = 0$ in GF(3).

$Y(g) = g^{-1}$ is not a field automorphism. One way to see this is to try some random choices to see if $Y(g + h) = Y(g) + Y(h)$; a convenient way to do this is $g = 1$, $h = \xi$. From the above table, $1 + \xi = \xi^9$, so $Y(1 + \xi) = Y(\xi^9) = \xi^{-9} = \xi^{26-9} = \xi^{17} = 2\xi^2 + \xi$. On the other hand, $Y(1) + Y(\xi) = 1 + \xi^{25} = 1 + (2\xi^2 + 1) = 2\xi^2 + 2$. So Y does not preserve addition.

Q2. Decomposing group representations

Recall that if we have a group G with a representation $U_1$ in $V_1$ and another representation $U_2$ in $V_2$, we can define a group representation $U_1 \otimes U_2$ on $V_1 \otimes V_2$ by $(U_1 \otimes U_2)_g(v_1 \otimes v_2) = U_{1,g}(v_1) \otimes U_{2,g}(v_2)$. Here we will show that if $V_1$ and $V_2$ are the same (i.e., if $V = V_1 = V_2$) and $U_1$ and $U_2$ are the same ($U = U_1 = U_2$), we can decompose $U \otimes U$ into a symmetric and an antisymmetric component, and determine their characters. We will do this by decomposing $V \otimes V$ into $\mathrm{sym}(V \otimes V)$ and $\mathrm{anti}(V \otimes V)$, and seeing how each transformation in $U \otimes U$ acts in these components.

A. Recall that a linear transformation A on V acts on a typical element $\mathrm{sym}(x \otimes y) = \frac{1}{2}(x \otimes y + y \otimes x)$ of $\mathrm{sym}(V \otimes V)$ by $(A \otimes A)\,\mathrm{sym}(x \otimes y) = \frac{1}{2}(A(x) \otimes A(y) + A(y) \otimes A(x)) = \mathrm{sym}(Ax \otimes Ay)$, and similarly for the action of A in $\mathrm{anti}(V \otimes V)$. Given that A on V has distinct eigenvalues $\lambda_1, \ldots, \lambda_m$ and eigenvectors $v_1, \ldots, v_m$ (and that V has dimension m), find the eigenvalues and eigenvectors for the action of A in $\mathrm{sym}(V \otimes V)$ and $\mathrm{anti}(V \otimes V)$, and their traces, denoted $\mathrm{tr}(\mathrm{sym}(A \otimes A))$ and $\mathrm{tr}(\mathrm{anti}(A \otimes A))$.

A convenient basis for $\mathrm{sym}(V \otimes V)$ is $\mathrm{sym}(v_j \otimes v_k)$, with $j \le k$. These are eigenvectors of $A \otimes A$, since

$(A \otimes A)\,\mathrm{sym}(v_j \otimes v_k) = \tfrac{1}{2}\left(Av_j \otimes Av_k + Av_k \otimes Av_j\right) = \tfrac{1}{2}\left(\lambda_j\lambda_k\,(v_j \otimes v_k) + \lambda_k\lambda_j\,(v_k \otimes v_j)\right) = \lambda_j\lambda_k\,\mathrm{sym}(v_j \otimes v_k).$

This calculation also shows that the eigenvalues are the products $\lambda_j\lambda_k$. We don't consider pairs j, k for which $j > k$, since these have already been counted: $\mathrm{sym}(v_j \otimes v_k) = \mathrm{sym}(v_k \otimes v_j)$, for any pair j, k in $\{1, \ldots, m\}$.

A similar calculation holds for the action of A in $\mathrm{anti}(V \otimes V)$, except that $\mathrm{anti}(v_j \otimes v_k)$ is only a basis element (and an eigenvector) if $j < k$, since $\mathrm{anti}(v_j \otimes v_j) = \frac{1}{2}(v_j \otimes v_j - v_j \otimes v_j) = 0$. We also don't consider pairs j, k for which $j > k$, since these have already been counted: $\mathrm{anti}(v_j \otimes v_k) = -\mathrm{anti}(v_k \otimes v_j)$.

Thus, the eigenvalues for the action of A in $\mathrm{sym}(V \otimes V)$ are all pairwise products $\lambda_j\lambda_k$ with $j \le k$ (including the $\lambda_j^2$), a total of $\frac{m(m-1)}{2} + m = \frac{m(m+1)}{2}$, corresponding to the dimension of $\mathrm{sym}(V \otimes V)$. So $\mathrm{tr}(\mathrm{sym}(A \otimes A)) = \sum_{j \le k} \lambda_j\lambda_k$.

The eigenvalues for the action of A in $\mathrm{anti}(V \otimes V)$ are all pairwise products $\lambda_j\lambda_k$ with $j < k$ (excluding the $\lambda_j^2$), a total of $\frac{m(m-1)}{2}$ quantities, corresponding to the dimension of $\mathrm{anti}(V \otimes V)$. So $\mathrm{tr}(\mathrm{anti}(A \otimes A)) = \sum_{j < k} \lambda_j\lambda_k$.

B. Our next step is to express $\mathrm{tr}(\mathrm{sym}(A \otimes A))$ (i.e., the trace of A acting in $\mathrm{sym}(V \otimes V)$) and $\mathrm{tr}(\mathrm{anti}(A \otimes A))$ (i.e., the trace of A acting in $\mathrm{anti}(V \otimes V)$) in terms of easier quantities. Given the same setup as above, find the trace of $A \otimes A$ (acting in $V \otimes V$) and the trace of $A^2$ (acting in V), and relate these to $\mathrm{tr}(\mathrm{sym}(A \otimes A))$ and $\mathrm{tr}(\mathrm{anti}(A \otimes A))$.

To calculate $\mathrm{tr}(A \otimes A)$ (the sum of its eigenvalues), we use the basis $v_j \otimes v_k$ (all j and k from 1 to m) for $V \otimes V$. The eigenvalue corresponding to $v_j \otimes v_k$ is $\lambda_j\lambda_k$, as $(A \otimes A)(v_j \otimes v_k) = (Av_j) \otimes (Av_k) = (\lambda_j v_j) \otimes (\lambda_k v_k) = \lambda_j\lambda_k\,(v_j \otimes v_k)$. So

$\mathrm{tr}(A \otimes A) = \sum_{j,k=1}^m \lambda_j\lambda_k = \left(\sum_{j=1}^m \lambda_j\right)^2 = (\mathrm{tr}\,A)^2.$

This can be rephrased as $(\mathrm{tr}\,A)^2 = \sum_{j,k=1}^m \lambda_j\lambda_k = 2\sum_{j<k}\lambda_j\lambda_k + \sum_{j=1}^m \lambda_j^2$.

To calculate $\mathrm{tr}(A^2)$, we use the basis $v_j$ for V. The eigenvalue corresponding to $v_j$ is $\lambda_j^2$: $A^2 v_j = A(Av_j) = A(\lambda_j v_j) = \lambda_j Av_j = \lambda_j^2 v_j$. So $\mathrm{tr}(A^2) = \sum_{j=1}^m \lambda_j^2$.

To express $\mathrm{tr}(\mathrm{sym}(A \otimes A))$ in terms of $\mathrm{tr}(A)$ and $\mathrm{tr}(A^2)$: from $(\mathrm{tr}\,A)^2 = 2\sum_{j<k}\lambda_j\lambda_k + \sum_j \lambda_j^2$, we have $\sum_{j<k}\lambda_j\lambda_k = \frac{1}{2}\left((\mathrm{tr}\,A)^2 - \mathrm{tr}(A^2)\right)$. Therefore,

$\mathrm{tr}(\mathrm{sym}(A \otimes A)) = \sum_{j<k}\lambda_j\lambda_k + \sum_j \lambda_j^2 = \tfrac{1}{2}\left((\mathrm{tr}\,A)^2 - \mathrm{tr}(A^2)\right) + \mathrm{tr}(A^2) = \tfrac{1}{2}\left((\mathrm{tr}\,A)^2 + \mathrm{tr}(A^2)\right),$ and

$\mathrm{tr}(\mathrm{anti}(A \otimes A)) = \sum_{j<k}\lambda_j\lambda_k = \tfrac{1}{2}\left((\mathrm{tr}\,A)^2 - \mathrm{tr}(A^2)\right).$
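A quick numerical check of part B (our sketch; names are our own): build $A \otimes A$ with a Kronecker product, project onto the symmetric and antisymmetric subspaces with the swap operator, and compare traces:

```python
# Verify tr(sym(A⊗A)) = ((tr A)^2 + tr(A^2))/2 and the anti counterpart numerically.
import numpy as np

m = 4
rng = np.random.default_rng(0)
A = rng.standard_normal((m, m))

AA = np.kron(A, A)                       # A ⊗ A acting on V ⊗ V
P = np.zeros((m * m, m * m))             # swap operator: P(x ⊗ y) = y ⊗ x
for i in range(m):
    for j in range(m):
        P[j * m + i, i * m + j] = 1.0
S = (np.eye(m * m) + P) / 2              # projector onto sym(V ⊗ V)
T = (np.eye(m * m) - P) / 2              # projector onto anti(V ⊗ V)

tr_sym = np.trace(S @ AA)                # trace of A⊗A restricted to the sym subspace
tr_anti = np.trace(T @ AA)
trA, trA2 = np.trace(A), np.trace(A @ A)
assert np.allclose(tr_sym, (trA**2 + trA2) / 2)
assert np.allclose(tr_anti, (trA**2 - trA2) / 2)
```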

C. Finally, recalling that the character is defined by $\chi_L(g) = \mathrm{tr}(L_g)$, express $\chi_{\mathrm{sym}(U \otimes U)}$ and $\chi_{\mathrm{anti}(U \otimes U)}$ in terms of $\chi_U$.

$\chi_{\mathrm{sym}(U \otimes U)}(g) = \mathrm{tr}\left(\mathrm{sym}(U \otimes U)_g\right) = \tfrac{1}{2}\left((\mathrm{tr}\,U_g)^2 + \mathrm{tr}(U_g^2)\right) = \tfrac{1}{2}\left((\chi_U(g))^2 + \chi_U(g^2)\right),$

where we used $U_g^2 = U_{g^2}$ (U is a representation). Similarly,

$\chi_{\mathrm{anti}(U \otimes U)}(g) = \mathrm{tr}\left(\mathrm{anti}(U \otimes U)_g\right) = \tfrac{1}{2}\left((\mathrm{tr}\,U_g)^2 - \mathrm{tr}(U_g^2)\right) = \tfrac{1}{2}\left((\chi_U(g))^2 - \chi_U(g^2)\right).$

Q3. Mutual information: synergy, redundancy, etc.

Consider a stimulus S that is equally likely to have one of several values, $\{1, \ldots, N\}$, and two neurons that are influenced by it. We only concern ourselves with snapshots, so the responses $R_1$ and $R_2$ can be considered to be binary, {0, 1}. In this scenario, we can calculate the information conveyed by each neuron alone, $I_1 = H(S) + H(R_1) - H(S, R_1)$ and $I_2 = H(S) + H(R_2) - H(S, R_2)$, as well as the information conveyed by the entire population, $I_{1+2} = H(S) + H(R_1, R_2) - H(S, R_1, R_2)$. Note that $H(S, R_1, R_2)$ is the entropy of the table of 4N entries, listing the probability $p(S, R_1, R_2)$ that each value of S is associated with one of the four output patterns that $R_1$ and $R_2$ can produce (i.e., $\{R_1 = 0, R_2 = 0\}$, $\{R_1 = 0, R_2 = 1\}$, $\{R_1 = 1, R_2 = 0\}$, $\{R_1 = 1, R_2 = 1\}$). This table also determines the joint probabilities of S and each $R_i$, since $p(S, R_1) = p(S, R_1, R_2 = 0) + p(S, R_1, R_2 = 1)$, and similarly $p(S, R_2) = p(S, R_1 = 0, R_2) + p(S, R_1 = 1, R_2)$.

Find an example of $p(S, R_1, R_2)$ that illustrates each of the behaviors below, or, alternatively, show that the behavior is impossible. It may be convenient to work in terms of the stimulus probabilities $p(S)$ and the conditional probabilities $p(R_1, R_2 \mid S)$ (the probability that particular values of $R_1$ and $R_2$ occur, given a value of S), noting that $p(S, R_1, R_2) = p(R_1, R_2 \mid S)\,p(S)$.

A. $I_1 > 0$, $I_2 > 0$, $I_{1+2} = I_1 + I_2$ (independent channels)

Take $N = 4$ and consider S to be a two-bit binary number (00, 01, 10, 11), each with equal probability ($p(S) = 1/4$). Have each neuron care only about one of the stimulus bits and ignore the other. Since the bits (the channels) are independent, the information in each channel should add. To work this out as a simple example: we can have $R_1$ code the first bit with perfect reliability, and $R_2$ code the second bit with perfect reliability. So we expect that $I_1 = I_2 = 1$, and $I_{1+2} = 2$. To verify: if $S = s_1 s_2$ (as bits), then we can take $p(R_1, R_2 \mid S) = 1$ if $R_1 = s_1$ and $R_2 = s_2$, and zero otherwise. To calculate $I_{1+2}$: Express the stimulus-response relationship as a table.

The joint table $p(S, R_1, R_2)$ is:

                (R1,R2)=(0,0)   (0,1)   (1,0)   (1,1)
    S = 00          1/4           0       0       0
    S = 01            0         1/4       0       0
    S = 10            0           0     1/4       0
    S = 11            0           0       0     1/4

Since there are four equally likely inputs ($p(S) = 1/4$), the input entropy is $\log_2 4 = 2$. Since there are also four equally likely outputs, the output entropy is also $\log_2 4 = 2$. There are also 4 equally likely table entries, so the table entropy is $\log_2 4 = 2$. The information is $I_{1+2} = 2 + 2 - 2 = 2$.

To calculate $I_1$: Reduce the above table, by summing over values of $R_2$:

    p(S, R1):    R1 = 0    R1 = 1
    S = 00         1/4         0
    S = 01         1/4         0
    S = 10           0       1/4
    S = 11           0       1/4

The input entropy is $\log_2 4 = 2$. The output entropy is $\log_2 2 = 1$. There are 4 equally likely table entries, so the table entropy is $\log_2 4 = 2$. The information is $I_1 = 2 + 1 - 2 = 1$.

The above spells things out in much more detail than necessary; these are all straightforward consequences of a basic property of information, namely, that for independent channels, information adds.

The idea of course generalizes to neurons that are not fully reliable (i.e., $p(R_i = s_i) = p_i$); in this case, the table for $I_{1+2}$ is (each entry is $p(R_1, R_2 \mid S)$; multiply by $p(S) = 1/4$ to get the joint table)

              (R1,R2)=(0,0)     (0,1)           (1,0)           (1,1)
    S = 00    p1 p2             p1 (1-p2)       (1-p1) p2       (1-p1)(1-p2)
    S = 01    p1 (1-p2)         p1 p2           (1-p1)(1-p2)    (1-p1) p2
    S = 10    (1-p1) p2         (1-p1)(1-p2)    p1 p2           p1 (1-p2)
    S = 11    (1-p1)(1-p2)      (1-p1) p2       p1 (1-p2)       p1 p2

and the table for $I_1$ is (again conditionals, to be multiplied by 1/4)

    p(R1|S):    R1 = 0    R1 = 1
    S = 00      p1        1-p1
    S = 01      p1        1-p1
    S = 10      1-p1      p1
    S = 11      1-p1      p1

Here, $I_i = 1 + p_i \log_2 p_i + (1 - p_i)\log_2(1 - p_i)$, which is nonzero provided $p_i \neq 1/2$.
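A numerical check of part A (our sketch, not the exam's): compute $I_1$, $I_2$, and $I_{1+2}$ directly from the joint table via $I = H(S) + H(R) - H(S, R)$:

```python
# Compute I1, I2, I1+2 from a joint table p(S, R1, R2) and check part A.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def infos(joint):                     # joint has shape (N, 2, 2): p(S, R1, R2)
    H_S = entropy(joint.sum(axis=(1, 2)))
    I1 = H_S + entropy(joint.sum(axis=(0, 2))) - entropy(joint.sum(axis=2))
    I2 = H_S + entropy(joint.sum(axis=(0, 1))) - entropy(joint.sum(axis=1))
    I12 = H_S + entropy(joint.sum(axis=0)) - entropy(joint)
    return I1, I2, I12

# Part A: S = (s1, s2), R1 = s1, R2 = s2, perfectly reliable.
joint = np.zeros((4, 2, 2))
for s1 in (0, 1):
    for s2 in (0, 1):
        joint[2 * s1 + s2, s1, s2] = 0.25
print(infos(joint))                   # (1.0, 1.0, 2.0): I1 + I2 = I1+2
```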

B. $I_1 > 0$, $I_2 > 0$, $I_{1+2} = \max(I_1, I_2)$ (completely redundant channels)

We can ensure that $I_{1+2} = I_1 = I_2$ by making $R_1$ and $R_2$ identical. To spell this out: Let S be binary, and take the joint table

              (R1,R2)=(0,0)   (0,1)   (1,0)   (1,1)
    S = 0       p/2             0       0     (1-p)/2
    S = 1       (1-p)/2         0       0     p/2

That is, the only response configurations with nonzero probability are those for which $R_1 = R_2$, and $R_1$ and $R_2$ have reliability $p(R_i = S) = p$. The table for $I_1$ is

    p(S, R1):    R1 = 0     R1 = 1
    S = 0        p/2        (1-p)/2
    S = 1        (1-p)/2    p/2

For these, $I_{1+2} = I_1 = I_2 = 1 + p\log_2 p + (1 - p)\log_2(1 - p)$.

C. $I_1 > 0$, $I_2 > 0$, $\max(I_1, I_2) < I_{1+2} < I_1 + I_2$ (partially redundant channels)

We can create partial redundancy by starting from A, and eliminating any single input (for example, S = 11), so that now, if either neuron indicates a 1 input, then the other input bit must be 0 and the other neuron's activity is uninformative. So $N = 3$; $p(S = 00) = p(S = 01) = p(S = 10) = 1/3$. For $I_{1+2}$, the table is

              (R1,R2)=(0,0)   (0,1)   (1,0)   (1,1)
    S = 00        1/3           0       0       0
    S = 01          0         1/3       0       0
    S = 10          0           0     1/3       0

The input entropy, the output entropy, and the table entropy are all $\log_2 3$ (three equally likely possibilities for all), so $I_{1+2} = \log_2 3 + \log_2 3 - \log_2 3 = \log_2 3 \approx 1.58$.

For $I_1$,

    p(S, R1):    R1 = 0    R1 = 1
    S = 00         1/3        0
    S = 01         1/3        0
    S = 10           0      1/3

The input entropy and the table entropy are still $\log_2 3$, but the output entropy is $h = -\tfrac{2}{3}\log_2\tfrac{2}{3} - \tfrac{1}{3}\log_2\tfrac{1}{3} \approx 0.918$. So $I_1 = \log_2 3 + h - \log_2 3 = h$, and indeed, $h = \max(I_1, I_2) < I_{1+2} = \log_2 3 < I_1 + I_2 = 2h \approx 1.84$.

This of course also extends to less-than-reliable neurons, as above.

D. $I_{1+2} > I_1 + I_2$ (synergistic channels)

We can set up a scenario in which neither neuron alone has any information about the stimulus, but the pair does. To do this, we caricature the idea that when S = 0, the neurons are asynchronous, but when S = 1, they are synchronized. For $I_{1+2}$, the table is

              (R1,R2)=(0,0)   (0,1)   (1,0)   (1,1)
    S = 0           0         1/4     1/4       0
    S = 1         1/4           0       0     1/4

The input entropy is 1, the output entropy is 2 (all four possible outputs have probability 1/4), and the table entropy is 2. So $I_{1+2} = 1 + 2 - 2 = 1$. But for each neuron alone, the information is 0:

    p(S, Ri):    Ri = 0    Ri = 1
    S = 0          1/4       1/4
    S = 1          1/4       1/4

since the input entropy is 1, the output entropy is 1, and the table entropy is 2, so $I_i = 1 + 1 - 2 = 0$.

E. $I_{1+2} < \max(I_1, I_2)$ (occluding channels)

This is impossible, because of the Data Processing Inequality. In detail: the response variable $R_1$ can be derived from the response variable $(R_1, R_2)$ by ignoring $R_2$. Therefore, $I_{1+2}$ cannot be less than $I_1$. Similarly, $I_{1+2}$ cannot be less than $I_2$. So $I_{1+2} < \max(I_1, I_2)$ is impossible.

F. $I_{1+2} < \min(I_1, I_2)$ (strongly occluding channels)

This is impossible; it is a special case of E.
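The synergistic example of part D can be checked the same way (our sketch, not the exam's):

```python
# The XOR-like synergy example of part D, checked directly.
import numpy as np

def H(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

joint = np.zeros((2, 2, 2))              # p(S, R1, R2)
joint[0, 0, 1] = joint[0, 1, 0] = 0.25   # S = 0: the neurons disagree
joint[1, 0, 0] = joint[1, 1, 1] = 0.25   # S = 1: the neurons agree
I12 = H(joint.sum((1, 2))) + H(joint.sum(0)) - H(joint)
I1 = H(joint.sum((1, 2))) + H(joint.sum((0, 2))) - H(joint.sum(2))
print(I1, I12)                           # 0.0 and 1.0: I1+2 > I1 + I2
```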

Q4. Mutual information with additive noise

Here we will determine the mutual information between a Gaussian stimulus s and a response r that are related by $r = gs + a$, where g is a constant (the gain), and a is an additive noise, uncorrelated with the input. For definiteness, we assume that the stimulus s has variance $V_S$, that the noise term is drawn from a Gaussian with variance $V_A$, and that both have mean 0.

A. For a Gaussian distribution of variance V, $p_V(x) = \frac{1}{\sqrt{2\pi V}}\,e^{-x^2/2V}$, calculate the differential entropy.

The differential entropy is $H_V = -\int p_V(x)\log_2 p_V(x)\,dx$. Via standard steps:

$H_V = -\int p_V(x)\log_2 p_V(x)\,dx = -\frac{1}{\ln 2}\int p_V(x)\ln p_V(x)\,dx = \frac{1}{\ln 2}\int p_V(x)\left(\ln\sqrt{2\pi V} + \frac{x^2}{2V}\right)dx.$

The first term in parentheses is just a constant; its contribution to the integral can be calculated from $\int p_V(x)\,dx = 1$ (since $p_V$ is a probability distribution). The second term is proportional to $x^2$; its contribution can be calculated from $\int x^2\,p_V(x)\,dx = V$ (since $p_V$ has variance V). So,

$H_V = \frac{1}{\ln 2}\left(\ln\sqrt{2\pi V} + \frac{1}{2}\right) = \frac{1}{2}\log_2(2\pi V) + \frac{1}{2}\log_2 e = \frac{1}{2}\log_2(2\pi e V).$

B. How is the output distributed (i.e., what is p(r))? What is its differential entropy?

Since $r = gs + a$, it is a sum of two independent Gaussian components, both of mean 0. Therefore r is distributed as a Gaussian, of mean 0. Its variance is the sum of the variances of the two terms:

$V_R = \langle r^2\rangle = \langle(gs + a)^2\rangle = g^2\langle s^2\rangle + 2g\langle sa\rangle + \langle a^2\rangle = g^2 V_S + V_A,$

where $\langle sa\rangle = 0$ because we have hypothesized that s and a are uncorrelated. Since r is distributed as a Gaussian with the above variance, its differential entropy is

$H_R = \frac{1}{2}\log_2\left(2\pi e\,(g^2 V_S + V_A)\right).$

C. What is the distribution of the output, conditional on a particular value of the input, say $s = s_0$? I.e., what is $p(r \mid s_0)$? What is its differential entropy?

With $s = s_0$ given, $r = gs_0 + a$, so $p(r \mid s_0)$ is a Gaussian with mean $gs_0$ and variance $V_A$. The mean does not affect the differential entropy, as it just translates the distribution. So $H_{R \mid s_0} = \frac{1}{2}\log_2(2\pi e V_A)$.

D. Calculate the mutual information between the input and the output.

We do this by comparing the unconditional entropy of the output, $H_R$, with the average conditional entropy, $\langle H_{R \mid s}\rangle$, i.e., $I(S, R) = H_R - \int p(s)\,H_{R \mid s}\,ds$. Since $H_{R \mid s}$ is constant (part C), this reduces to $I(S, R) = H_R - H_{R \mid s_0}$, for any $s_0$. Thus,

$I(S, R) = \frac{1}{2}\log_2\left(2\pi e\,(g^2 V_S + V_A)\right) - \frac{1}{2}\log_2\left(2\pi e V_A\right) = \frac{1}{2}\log_2\frac{g^2 V_S + V_A}{V_A}.$

E. Calculate the signal-to-noise ratio, namely, the ratio of the variance of the signal term, gs, to the variance of the noise term, a. Relate this to the mutual information.

$\mathrm{SNR} = \frac{\langle(gs)^2\rangle}{\langle a^2\rangle} = \frac{g^2 V_S}{V_A}$, so $I(S, R) = \frac{1}{2}\log_2(\mathrm{SNR} + 1)$, a classic result.

F. Calculate the correlation coefficient between the input and the output, $C = \frac{\langle(s - \langle s\rangle)(r - \langle r\rangle)\rangle}{\sqrt{\langle(s - \langle s\rangle)^2\rangle\,\langle(r - \langle r\rangle)^2\rangle}}$. Relate this to the mutual information.

For the numerator of C: $\langle sr\rangle = \langle s(gs + a)\rangle = g\langle s^2\rangle + \langle sa\rangle = gV_S$. So

$C = \frac{gV_S}{\sqrt{V_S\,(g^2 V_S + V_A)}}$, from which $C^2 = \frac{g^2 V_S}{g^2 V_S + V_A}$ and $1 - C^2 = \frac{V_A}{g^2 V_S + V_A}$.

Therefore,

$I(S, R) = \frac{1}{2}\log_2\frac{g^2 V_S + V_A}{V_A} = \frac{1}{2}\log_2\frac{1}{1 - C^2} = -\frac{1}{2}\log_2(1 - C^2),$

another classic result.
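Both classic results can be checked by simulation (our sketch; the parameter values are arbitrary):

```python
# Simulate r = g*s + a and compare I = (1/2) log2(1 + SNR) with the
# correlation-coefficient form I = -(1/2) log2(1 - C^2) from part F.
import numpy as np

rng = np.random.default_rng(1)
g, V_S, V_A, n = 2.0, 1.5, 0.5, 200_000
s = rng.normal(0.0, np.sqrt(V_S), n)
a = rng.normal(0.0, np.sqrt(V_A), n)
r = g * s + a

snr = g**2 * V_S / V_A
I_snr = 0.5 * np.log2(1 + snr)
C = np.corrcoef(s, r)[0, 1]              # empirical correlation coefficient
I_corr = -0.5 * np.log2(1 - C**2)
print(I_snr, I_corr)                     # agree to Monte Carlo accuracy
```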

Q5. Toy examples of ICA

A. Take four data points, located at $(\pm 1, \pm 1)$ (see diagram I), and consider their projection onto a unit vector $v_\theta = (\cos\theta, \sin\theta)$, to form a distribution of points. How does the variance of this distribution depend on $\theta$? For which projections is it maximized (equivalently, what direction(s) would be selected by PCA)?

The projected distribution consists of the four points at positions $x_i = \pm\cos\theta \pm \sin\theta$ along $v_\theta$. Their mean is zero. So the variance is

$V = \langle x^2\rangle = \tfrac{1}{4}\left((\cos\theta + \sin\theta)^2 + (\cos\theta - \sin\theta)^2 + (-\cos\theta + \sin\theta)^2 + (-\cos\theta - \sin\theta)^2\right) = \cos^2\theta + \sin^2\theta = 1,$

which is independent of $\theta$. From the point of view of PCA, no directions are special (i.e., each accounts for the same amount of variance).

B. As in A, but determine how the kurtosis of the projected distribution depends on $\theta$. Recall, kurtosis is defined by $\kappa = \langle(x - \langle x\rangle)^4\rangle / V^2$, where $V = \langle(x - \langle x\rangle)^2\rangle$ is the variance. For which directions is kurtosis largest?

Since $\langle x\rangle = 0$ and $V = 1$,

$\kappa = \langle x^4\rangle = \tfrac{1}{4}\left((\cos\theta + \sin\theta)^4 + (\cos\theta - \sin\theta)^4 + (-\cos\theta + \sin\theta)^4 + (-\cos\theta - \sin\theta)^4\right) = \cos^4\theta + 6\cos^2\theta\sin^2\theta + \sin^4\theta.$

One could graph it; or, note that

$\kappa = \cos^4\theta + 6\cos^2\theta\sin^2\theta + \sin^4\theta = (\cos^2\theta + \sin^2\theta)^2 + 4\cos^2\theta\sin^2\theta = 1 + (2\cos\theta\sin\theta)^2 = 1 + \sin^2 2\theta.$

So kurtosis is largest when $\sin^2 2\theta$ is largest, i.e., when $\sin 2\theta = \pm 1$, which happens when $2\theta = \pi/2, 3\pi/2, \ldots$, i.e., when $\theta = \pi/4, 3\pi/4, \ldots$. Using maximum kurtosis as a criterion, ICA would identify the oblique axes as the unmixed coordinates.

C. As in A, but determine how the entropy of the projected distribution depends on $\theta$. For which projections is it minimized?

The projected distribution consists of the four points $\pm\cos\theta \pm \sin\theta$ along $v_\theta$. At all typical angles, i.e., angles that are not multiples of $\pi/4$, the four projection points are distinct. In these cases, each value $\pm\cos\theta \pm \sin\theta$ has probability 1/4, and the distribution has entropy $-4 \cdot \tfrac{1}{4}\log_2\tfrac{1}{4} = 2$.

At atypical angles that correspond to the cardinal directions $\theta = 0, \pi/2, \pi, \ldots$, the points coincide in pairs, and each pair projects onto one of two values, $\pm 1$. So these values have probability 1/2, and the distribution has entropy $-2 \cdot \tfrac{1}{2}\log_2\tfrac{1}{2} = 1$.

At atypical angles that correspond to the oblique directions $\theta = \pi/4, 3\pi/4, \ldots$, two of the points coincide (and project to 0), and two do not, and project to $\pm\sqrt{2}$. So the point 0 has probability 1/2, and each of the points $\pm\sqrt{2}$ has probability 1/4. The distribution has entropy $-\tfrac{1}{2}\log_2\tfrac{1}{2} - 2 \cdot \tfrac{1}{4}\log_2\tfrac{1}{4} = 3/2$.

So the entropy is minimized at the cardinal angles, $\theta = 0, \pi/2, \pi, \ldots$. Using minimum entropy as a criterion, ICA would identify the cardinal axes as the unmixed coordinates.

D. Same as in A, but take five data points, located at $(\pm 1, \pm 1)$ and also at the origin (see diagram II), and consider their projection onto a unit vector $v_\theta = (\cos\theta, \sin\theta)$, to form a distribution of points. How does the variance of this distribution depend on $\theta$? For which projections is it maximized (equivalently, what direction(s) would be selected by PCA)?

The projected distribution consists of five points, four at positions $x_i = \pm\cos\theta \pm \sin\theta$ and one at the origin. So the variance is

$V = \langle x^2\rangle = \tfrac{1}{5}\left((\cos\theta + \sin\theta)^2 + (\cos\theta - \sin\theta)^2 + (-\cos\theta + \sin\theta)^2 + (-\cos\theta - \sin\theta)^2 + 0\right) = \tfrac{4}{5}\left(\cos^2\theta + \sin^2\theta\right) = \tfrac{4}{5},$

again independent of $\theta$. So again, from the point of view of PCA, no directions are special.

E. As in B, but determine how the kurtosis of the projected distribution depends on $\theta$. For which directions is kurtosis largest?

Since $\langle x\rangle = 0$ and $V = 4/5$,

$\kappa = \frac{\langle x^4\rangle}{(4/5)^2} = \frac{\tfrac{4}{5}\left(\cos^4\theta + 6\cos^2\theta\sin^2\theta + \sin^4\theta\right)}{(4/5)^2} = \tfrac{5}{4}\left(\cos^4\theta + 6\cos^2\theta\sin^2\theta + \sin^4\theta\right).$

This of course differs from the result in B only by a scale factor and an offset, so again the kurtosis is largest when $\sin^2 2\theta$ is largest, i.e., when $\theta = \pi/4, 3\pi/4, \ldots$. Using maximum kurtosis as a criterion, ICA would identify the oblique axes as the unmixed coordinates.

F. As in B, but determine how the entropy of the projected distribution depends on $\theta$. For which projections is it minimized?

The projected distribution consists of the five points $\pm\cos\theta \pm \sin\theta$ and the origin, along $v_\theta$. At all typical angles, i.e., angles that are not multiples of $\pi/4$, the five projection points are distinct. In these cases, each has probability 1/5, and the distribution has entropy $-5 \cdot \tfrac{1}{5}\log_2\tfrac{1}{5} = \log_2 5 \approx 2.32$.

At angles that correspond to the cardinal directions $\theta = 0, \pi/2, \pi, \ldots$, four of the points coincide in pairs, and there is one singleton. The two values $\pm 1$ (the results of the coincident pairs) each have probability 2/5, and the origin has probability 1/5. So the distribution has entropy $-2 \cdot \tfrac{2}{5}\log_2\tfrac{2}{5} - \tfrac{1}{5}\log_2\tfrac{1}{5} = \log_2 5 - \tfrac{4}{5} \approx 1.52$.

At angles that correspond to the oblique directions $\theta = \pi/4, 3\pi/4, \ldots$, three of the points coincide (and project to 0), and two do not, and project to $\pm\sqrt{2}$. So the point 0 has probability 3/5, and each of the points $\pm\sqrt{2}$ has probability 1/5. The distribution has entropy $-\tfrac{3}{5}\log_2\tfrac{3}{5} - 2 \cdot \tfrac{1}{5}\log_2\tfrac{1}{5} = \log_2 5 - \tfrac{3}{5}\log_2 3 \approx 1.37$.

So the entropy is minimized at the oblique angles, $\theta = \pi/4, 3\pi/4, \ldots$. Using minimum entropy as a criterion, ICA would identify the oblique axes as the unmixed coordinates. Adding one point makes a difference for entropy, but not for the kurtosis.
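All of Q5 can be reproduced by scanning the projection angle numerically (our sketch, not the exam's):

```python
# Scan theta for the 4-point and 5-point toy examples: variance, kurtosis,
# and discrete entropy of the projected distribution.
import numpy as np

pts4 = np.array([(1, 1), (1, -1), (-1, 1), (-1, -1)], dtype=float)
pts5 = np.vstack([pts4, [0.0, 0.0]])

def stats(points, theta):
    x = points @ np.array([np.cos(theta), np.sin(theta)])   # projected positions
    V = np.mean(x**2)                                       # the mean is zero here
    kurt = np.mean(x**4) / V**2
    _, counts = np.unique(np.round(x, 12), return_counts=True)
    p = counts / counts.sum()
    H = -(p * np.log2(p)).sum()                             # entropy of the point masses
    return V, kurt, H

for th in np.linspace(0, np.pi, 9):                         # steps of pi/8
    print(f"theta={th:.3f}  4 pts: {stats(pts4, th)}  5 pts: {stats(pts5, th)}")
# 4 points: kurtosis peaks at the oblique angles, entropy is smallest at the
# cardinal angles; 5 points: both criteria select the oblique axes.
```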

Q6. Linear systems and feedback

The diagram shows a linear system with input S(t) and output R(t); the filters $F_i$, $G_i$, and H are linear filters with transfer functions $F_i(\omega)$, $G_i(\omega)$, and $H(\omega)$.

[Diagram: S(t) enters a summing junction followed by $F_1$; its output X(t) enters a second summing junction followed by $F_2$; its output Y(t) passes through $F_3$ to give R(t). $G_1$, $G_2$, and H are feedback filters: $G_2$ closes a loop around $F_2$ and $F_3$, while $G_1$ and H close loops back into the first summing junction.]

A. Find the Fourier transform $R(\omega)$ of R(t) in terms of the Fourier transform $S(\omega)$ of S(t).

Write X(t) for the output of $F_1$ and Y(t) for the output of $F_2$. Then (omitting the $\omega$-argument throughout, and using the standard rules for the transfer functions of combined linear systems),

$Y = F_2(X + F_3 G_2 Y)$, from which $Y = \frac{F_2 X}{1 - F_2 F_3 G_2}$.

Similarly, $X = F_1(S + F_2 G_1 X + F_3 H Y)$, and, substituting the above equation for Y,

$X = F_1 S + F_1 F_2 G_1 X + \frac{F_1 F_2 F_3 H}{1 - F_2 F_3 G_2}\,X$, from which $X = \frac{F_1 S\,(1 - F_2 F_3 G_2)}{(1 - F_1 F_2 G_1)(1 - F_2 F_3 G_2) - F_1 F_2 F_3 H}$.

Using $Y = \frac{F_2 X}{1 - F_2 F_3 G_2}$:

$Y = \frac{F_1 F_2 S}{(1 - F_1 F_2 G_1)(1 - F_2 F_3 G_2) - F_1 F_2 F_3 H}$, so

$R = F_3 Y = \frac{F_1 F_2 F_3\,S}{(1 - F_1 F_2 G_1)(1 - F_2 F_3 G_2) - F_1 F_2 F_3 H}.$

B. Write out the relationship between $S(\omega)$ and $R(\omega)$ when (a) the feedback $G_1$ between $F_1$ and $F_2$ is absent, (b) the feedback $G_2$ between $F_2$ and $F_3$ is absent, or (c) the feedback H between $F_1$ and $F_3$ is absent.

This is readily done by setting the appropriate filter in the above circuit to zero:

(a) Setting $G_1 = 0$ yields $R = \frac{F_1 F_2 F_3\,S}{(1 - F_2 F_3 G_2) - F_1 F_2 F_3 H}$.

(b) Setting $G_2 = 0$ yields $R = \frac{F_1 F_2 F_3\,S}{(1 - F_1 F_2 G_1) - F_1 F_2 F_3 H}$.

(c) Setting $H = 0$ yields $R = \frac{F_1 F_2 F_3\,S}{(1 - F_1 F_2 G_1)(1 - F_2 F_3 G_2)}$.

C. Say that it is known that $F_1 = F_2 = F_3 = F$, and that $G_1 = G_2 = G$. Now consider the four configurations above (the full configuration, and the three configurations of part B, in which one of the three feedback filters has been removed). Do they all produce different outputs?

The transfer functions are given by:

Full system: $R = \frac{F^3\,S}{(1 - F^2 G)^2 - F^3 H}$

$G_1$ removed: $R = \frac{F^3\,S}{(1 - F^2 G) - F^3 H}$

$G_2$ removed: $R = \frac{F^3\,S}{(1 - F^2 G) - F^3 H}$, the same as $G_1$ removed.

H removed: $R = \frac{F^3\,S}{(1 - F^2 G)^2}$

So no: removing $G_1$ and removing $G_2$ produce identical transfer functions, even though the two configurations are wired differently.
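The transfer-function algebra can be verified symbolically (our sketch, not part of the exam), by solving the two loop equations for X and Y:

```python
# Verify the Q6 algebra with sympy: solve the loop equations and compare R/S
# against the closed-form answer derived above.
import sympy as sp

F1, F2, F3, G1, G2, H, S = sp.symbols('F1 F2 F3 G1 G2 H S')
X, Y = sp.symbols('X Y')
eqs = [sp.Eq(Y, F2 * (X + F3 * G2 * Y)),                # loop closed by G2
       sp.Eq(X, F1 * (S + F2 * G1 * X + F3 * H * Y))]   # loops closed by G1 and H
sol = sp.solve(eqs, [X, Y], dict=True)[0]
R = F3 * sol[Y]
claimed = F1 * F2 * F3 * S / ((1 - F1 * F2 * G1) * (1 - F2 * F3 * G2) - F1 * F2 * F3 * H)
assert sp.simplify(R - claimed) == 0
print("transfer function checked")
```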
