
Theoretical Physics Notes 2: Dirac Notation

This installment of the notes covers Dirac notation, which proves to be very useful in many ways. For example, it gives a convenient way of expressing amplitudes in quantum mechanics: the complex numbers which, multiplied by their complex conjugates, give the probability for something to occur. It also lets us go smoothly from finite-dimensional vector spaces (where matrix notation suffices) to the infinite-dimensional vector spaces of functions (where matrices would become infinite, sometimes needing a continuous index). So, in the following you will often see expressions in Dirac notation compared with the more familiar matrix notation used in linear algebra (and a few equivalents from tensor notation as well, when appropriate).

1 Dirac Notation for Vector Spaces

We are used to vectors as little arrows: objects with a magnitude and a direction. Or they may be represented as lists of numbers, where the rule for adding or subtracting vectors is to add (or subtract) the numbers kept at corresponding positions in the lists. It is more useful to think of these as merely special cases of more general vector spaces, where the rules for addition and scalar multiplication, called the linear structure of the vector space, are fundamental to the definition. In other words, a vector space is anything in which we can take two members of the space and form linear combinations of them: given vectors |A⟩ and |B⟩ and two numbers (real or complex) α and β, if it makes sense that α|A⟩ + β|B⟩ is also a vector (which we may then call |αA + βB⟩), then that is the very definition of a vector space (real or complex, depending on the numbers used). This clearly includes our component lists and little arrows, but also other examples of vast importance to theoretical physics, in particular vector spaces of functions obeying linear differential equations. This means vector notation can be used in Maxwell's equations, and also of course for the abstract Hilbert space used to represent quantum systems (for which the Dirac notation already being used above was invented).

Given any vector space, there always exists a dual vector space, whose objects are linear maps from the vectors to ordinary numbers (real or complex). Dirac notation makes clear the distinction between these objects, called covectors or one-forms, and the regular vectors. If you think of vectors as little arrows, each starting at the origin, then covectors are linear functions on the space of such arrows (as explained below). Dirac notation is often called bra-ket notation, because in it a number is represented by a bracket ⟨w|v⟩: the number produced by a covector ⟨w| acting on a vector |v⟩.

Inner products are often described as generalizations of the ordinary dot product familiar from vector analysis, but that is actually a bit misleading. Inner products are actually a generalization of something more basic than dot products: a dot product is an inner product together with a map which allows us to identify specific vectors with specific covectors (and vice versa), as we will see below.
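As a quick, informal check of the claim that solutions of a linear differential equation form a vector space, here is a small numerical sketch (the equation y'' + y = 0, the grid, and the coefficients are chosen purely for illustration) verifying that a linear combination of two solutions is again a solution:

import numpy as np

# Two independent solutions of the linear ODE  y'' + y = 0  on a grid.
x = np.linspace(0.0, 2.0 * np.pi, 2001)
y1, y2 = np.cos(x), np.sin(x)

alpha, beta = 2.0 - 1.0j, 0.5j            # arbitrary complex coefficients
y = alpha * y1 + beta * y2                # the combination alpha*y1 + beta*y2

# Check that y'' + y = 0 numerically (away from the endpoints, where the
# finite-difference second derivative is least accurate).
ypp = np.gradient(np.gradient(y, x), x)
print(np.max(np.abs((ypp + y)[5:-5])))    # small: zero to grid accuracy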

2 Summary of Dirac Notation

After each object, the equivalents from linear algebra (LA) and tensor notation (T) are given in parentheses, with some definitions to follow after the summary.

|v⟩ denotes a vector (LA: column vector; T: type (1,0) tensor, whose components carry one upstairs index, e.g. v^i).

⟨w| denotes a covector, also known as a one-form (LA: row vector; T: type (0,1) tensor, with one downstairs index on its components).

⟨w|v⟩ denotes a number, formed by the inner product of a covector with a vector (LA: row vector on the left, matrix product with a column vector on the right; T: sum over the index of products of covector components with same-index vector components).

|v⟩⟨w| denotes an outer product of a vector with a covector (LA: column vector on the left, matrix product with a row vector on the right, giving a matrix; T: the set of all products of a single covector component with a single vector component, with no sum over indices).

3 Some Definitions and Examples

3.1 Covectors are Different than Vectors, Usually

Given any vector space, the space of covectors (one-forms) is defined to be the space of linear maps from the vectors to regular numbers (where the numbers can be real or complex). That is, a covector is a machine which is ready to eat a vector and spit out a number, and to do so in a linear way. This means that if a covector ⟨φ| is fed a linear combination of vectors, the number it produces obeys

⟨φ|αA + βB⟩ = α⟨φ|A⟩ + β⟨φ|B⟩

for all vectors |A⟩ and |B⟩ and all numbers α and β. This means that the zero covector is a perfectly acceptable (but dull) covector, and in fact the space of covectors is, somewhat confusingly, a vector space in its own right! So covectors are always defined with respect to some specific vector space which we have in mind. (Fortunately, when a space of covectors is considered as a vector space of its own, its covectors are members of the original vector space, so we never have too many vector spaces to worry about at one time.)
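As a concrete illustration of the linear-algebra column of the summary above, here is a minimal numpy sketch, with made-up numbers, representing kets as column vectors and bras as conjugate-transposed row vectors, and checking that a bra acts linearly on kets:

import numpy as np

# Kets as column vectors (shape (3, 1)); the bra <w| is the conjugate transpose of |w>.
v = np.array([[1.0 + 2.0j], [3.0], [-1.0j]])    # |v>
w = np.array([[0.5j], [1.0], [2.0]])            # |w>
bra_w = w.conj().T                              # <w|  (a row vector)

inner = (bra_w @ v)[0, 0]    # <w|v> : a single complex number
outer = v @ bra_w            # |v><w|: a 3x3 matrix (a linear operator)

# Linearity of the covector <w| on a combination a|v1> + b|v2>:
v1 = np.array([[1.0], [0.0], [1.0j]])
v2 = np.array([[0.0], [2.0], [1.0]])
a, b = 1.0 - 1.0j, 3.0
lhs = (bra_w @ (a * v1 + b * v2))[0, 0]
rhs = a * (bra_w @ v1)[0, 0] + b * (bra_w @ v2)[0, 0]
print(inner, outer.shape, np.isclose(lhs, rhs))   # ..., (3, 3), True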

In two dimensions, if we think of a vector as an arrow with its magnitude and direction, then a covector is given by something which looks a bit like a ruler: a set of evenly spaced tick-marks. Looking closer, the tick-marks are actually little parallel lines, each perpendicular to the edge of the ruler, and for the covector/one-form we should think of these lines as extending infinitely far in each direction. But just as changing a vector's direction gives a different vector, it is only fair to consider a covector as having its own specific orientation, and if we change that orientation then it is a different covector. In other words, a ruler's tick marks and the way it is pointing are both needed before we know which covector it represents.

Inner products are then easy to picture: to get the inner product of a covector with a vector, count the total number of the covector's lines that the arrow representing the vector intersects. If the vector is mostly parallel to the ruler, count the number as positive. If the vector is mostly oppositely oriented to the ruler, count the number as negative. And if the vector is perpendicular to the ruler, by being parallel to the tick-mark lines, count the number as zero. As an exercise, you should verify that the property of acting linearly on vectors, which is built into the definition of a covector, requires that the ruler's ticks be extended into parallel lines, as described above. In three dimensions, there are two independent directions perpendicular to the line formed by the very edge of the ruler, so a covector is made up not of evenly spaced parallel lines, but rather of evenly spaced parallel planes, extending infinitely far in every direction perpendicular to the ruler's edge. Generally, in n dimensions a covector is made of evenly spaced (n-1)-dimensional spaces (R^(n-1)), called hypersurfaces because they generalize a surface in the 3-D case.

So if a vector is like an arrow, a covector is like a ruler, capable of giving out a number when fed an arrow. To use a ruler properly, you line up its edge parallel to the arrow |v⟩ you want to measure; more importantly, you keep the tick marks perpendicular to |v⟩. The ruler is now the covector representation of |v⟩, namely ⟨v|! It is a unit covector; recall from the previous notes that units must be specified in order to turn dimensionful quantities into numbers, and the spacing of the tick marks on the ruler defines one unit. There are many such unit covectors, but lining one up with the direction of |v⟩ produces ⟨v|. The inner product ⟨v|v⟩ then gives the length of the arrow, exactly as should be produced when a ruler is used to measure a length.

So much for the arrow interpretation. If a vector is thought of as a column vector in linear algebra, then a covector is just a row vector: matrix multiplication is linear, and so the inner product will automatically satisfy the necessary linear property. Again, these are clearly different objects than the column vectors. So, which is the vector and which is the covector in the ordinary dot product of two vectors? And why wasn't this emphasized way back when you first took a vector analysis course? It turns out that in some situations we can identify vectors with covectors. These include the enormously important cases of ordinary Euclidean space in an orthonormal vector basis, and also Hilbert space in quantum mechanics; in those, you may be sloppy about which is which. But the two must always transform oppositely under changes of coordinate system (which is the tensor analysis viewpoint). So it is best to think of one vector in a dot product w · v (either one, say v) as the vector, and the other vector together with the dot as the covector (here w ·, which explains the strange half-bracket notation ⟨w| introduced above).
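The statement that vectors and covectors must transform oppositely to keep the bracket unchanged can be checked directly. In the small sketch below (the change-of-basis matrix and the components are invented at random), column-vector components transform with the inverse of the change-of-basis matrix S while row-covector components transform with S itself, and the inner product is untouched:

import numpy as np

rng = np.random.default_rng(1)
S = rng.standard_normal((3, 3))      # an (almost surely invertible) change-of-basis matrix
v = rng.standard_normal((3, 1))      # components of a vector, as a column
w = rng.standard_normal((1, 3))      # components of a covector, as a row

# Vector components transform with S^(-1); covector components transform with S.
v_new = np.linalg.inv(S) @ v
w_new = w @ S

print(np.isclose((w @ v)[0, 0], (w_new @ v_new)[0, 0]))   # True: the bracket is unchanged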

In linear algebra, non-orthonormal vector bases show up as a metric: a matrix placed in between the components of w, written as a row vector, and the components of v, written as a column vector. The covector is in that case not the row vector made of the components of w, but rather that row vector multiplied by the matrix representing the metric on its right (which makes a new row vector). So the metric matrix represents the dot in the notation w · v, and its presence ensures that covectors transform the right way under changes of coordinates, to leave inner products alone. Again, it should be emphasized that a covector-with-vector inner product never needs any metric between them: a dot is only required to take an inner product of a vector with another vector.

3.2 Inner Products

There are many different contexts in mathematical physics where inner products between a vector space and its dual space occur, and so Dirac notation can be useful. Mathematicians call this type of inner product bilinear, meaning that it is linear on the vector space and on the dual space (which, you will recall, is also a vector space itself). In Dirac notation, bilinearity looks like

⟨w₁ + αw₂|v₁ + βv₂⟩ = ⟨w₁|v₁⟩ + α*⟨w₂|v₁⟩ + β⟨w₁|v₂⟩ + α*β⟨w₂|v₂⟩.

Notice that we take the complex conjugate of complex numbers which appear as labels in the dual space, such as α in the example above. When the number appears outside the covector label, though, we write the conjugate explicitly: ⟨w₁ + αw₂| = ⟨w₁| + α*⟨w₂|, and so you'll note that for self-consistency we had better have ⟨w₁ + αw₂|v⟩ = ⟨w₁|v⟩ + α*⟨w₂|v⟩ for every vector |v⟩. The complex conjugate rule, although it may be confusing at first, is very convenient: when it makes sense to use the same labels on covectors as on vectors (as with orthonormal bases in finite dimensions, or in Hilbert space for quantum mechanics), we have a positive real inner product when a vector is "dotted with itself":

⟨v₁ + αv₂|v₁ + αv₂⟩ ≥ 0.

This is, of course, the extension of the rule alluded to above for linear algebra on complex vector spaces: we take the conjugate of the row vector before multiplying by the column vector. (The quotes are to remind you that really we are forming the covector which goes canonically with the vector being labeled: the row vector is really a covector, ready to give us a number when fed a column vector.)

This forms our first example of an inner product: the familiar one from linear algebra in a finite-dimensional vector space. For it, we define the covector by the complex conjugate transpose, also known as the Hermitian conjugate, of the column vector which shares its label, and then multiply by the other vector. In short,

⟨w|v⟩ = Σᵢ wᵢ* vᵢ = (w*)ᵀ v = w† v.

Going across this from left to right, we have first the Dirac notation, then tensor notation (in which vᵢ stands for the number in row i of the column vector representing |v⟩), and finally matrix notation in both the third and last versions.
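Tying this formula to the metric discussion above, here is a small numpy sketch (the metric g and the components are invented for illustration): in an orthonormal basis the covector paired with w is simply the conjugate-transposed row vector, so ⟨w|v⟩ = w†v, while with a non-trivial metric the covector is the row vector w†g.

import numpy as np

w = np.array([1.0 + 1.0j, 2.0, 0.0])
v = np.array([0.5, 1.0j, 3.0])

# Orthonormal basis: the covector paired with w is just w's conjugate transpose.
print(w.conj() @ v)                     # <w|v> = sum_i w_i* v_i = w† v

# Non-orthonormal basis: a metric matrix g sits between the components,
# and the covector paired with w is the row vector w† g, not w† alone.
g = np.array([[2.0, 0.5, 0.0],          # an invented real, symmetric, positive metric
              [0.5, 1.0, 0.0],
              [0.0, 0.0, 3.0]])
covector_w = w.conj() @ g               # a new row vector: w† g
print(covector_w @ v)                   # the inner product w† g v

# 'Dotting a vector with itself' gives a real, non-negative number in both cases:
print(w.conj() @ w, w.conj() @ g @ w)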

In the matrix versions, the matrix multiplication of the objects is understood (and was written out explicitly in the tensor notation before). For matrices, T stands for transpose: you switch the row and column indices of any matrix to take its transpose, so the transpose of a column vector produces a row vector. The final version introduces dagger notation: the dagger (†) is short for complex conjugate transpose in finite dimensions (and adjoint in infinite dimensions, defined when we get to linear operators). Again, you'll notice that covectors and vectors are genuinely different, even when it makes sense to use the same labels for them. In fact, it is true in general that ⟨w|v⟩ = ⟨v|w⟩* when we exchange the vectors and covectors (whenever it makes sense to do so in their labels), and so we get a different number unless the number happens to be real.

Our next examples are extremely important ones: the vector spaces of functions on some fixed interval, area, or volume. We define the dual space of covectors by taking the complex conjugate of the function, and integrating over the domain:

⟨g|f⟩ = ∫ g*(x) f(x) dⁿx,

where the integration is over the whole domain in question, whose dimension is written as n above. Notice that we can use this to define a norm, or length, of these vectors whenever the integral of |f|² over the domain isn't infinite, because it gives a positive real number whenever f is nonzero over some portion of the domain: we can define the length by ‖f‖ = √⟨f|f⟩. (Mathematicians call this the L² norm.) In fact, this is often used to define the vector space more precisely. As we will see in the discussion of linear operators acting on functions, it is crucially important to specify the boundary conditions of the functions at the edges of the domain. One such specification, very common in quantum mechanics, is that the function is normalizable: its norm cannot be infinite. This does indeed form a vector space, as you can check: ‖f + g‖ will be finite whenever ‖f‖ and ‖g‖ both are. This definition even works when the domain is infinitely large (such as the whole real line). The all-important boundary condition comes from physical considerations: in quantum mechanics, setting the overall probability of finding a particle somewhere on the real line equal to 1 makes us consider wavefunctions of norm 1 only. (Careful, as the functions of norm 1 do not form a vector space! Add one such function to itself and you will see why.)

When the domain is finite, boundary conditions become even more important, and again their specification usually comes from physical considerations. For instance, the shape of a violin string which is clamped at both ends leads one to consider functions on a line segment which go to zero at both endpoints, called homogeneous Dirichlet boundary conditions: for example, y(a) = 0 and y(b) = 0 would be required of allowed functions y(x) defined between x = a and x = b. Adding two such functions produces another, so we have a vector space. On the other hand, replacing the condition y(b) = 0 with y(b) = 1 (which is an inhomogeneous Dirichlet boundary condition) does not give a vector space: add two such functions together, and you'll have y_total(b) = 2, so the sum isn't again an allowed function.
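As a numerical sketch of this integration inner product on the interval [0, 1] (the grid, the discretized integral, and the particular functions below are all just illustrative choices, both functions obeying the homogeneous Dirichlet conditions just described):

import numpy as np

x = np.linspace(0.0, 1.0, 4001)
dx = x[1] - x[0]

# Two functions on [0, 1] obeying homogeneous Dirichlet conditions f(0) = f(1) = 0.
f = np.sin(np.pi * x)
g = np.sin(2.0 * np.pi * x) * np.exp(1j * x)      # complex-valued functions are allowed

def braket(g, f):
    """<g|f> = integral of g*(x) f(x) dx, approximated by a simple Riemann sum."""
    return np.sum(g.conj() * f) * dx

print(np.sqrt(braket(f, f).real))   # the L2 norm ||f||; equals sqrt(1/2) for sin(pi x)

# The sum f + g still vanishes at both endpoints, so it is again an allowed function:
h = f + g
print(abs(h[0]), abs(h[-1]))        # both (essentially) zero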

This doesn't mean that vector spaces are useless in this case; if we agree to subtract away some fixed reference function (such as the straight line between 0 at a and 1 at b) from every function we consider, the resulting functions will be zero at both ends, and will again form a vector space.

Before moving on to another bracket-product example, a couple of remarks about this integration product, and about vector spaces of functions in general, are in order. First, vector spaces of functions on an interval are always infinite-dimensional: given any finite list of functions, we can always find another one which isn't a combination of those already in the list. (Note that we refer here not to the dimension n of the domain of the functions, which is finite; rather, it is the abstract space of functions itself which is infinite-dimensional: there are a lot of possible functions, all linearly independent of each other.) Second, the dual space can actually be larger than the original vector space in infinite dimensions (in finite dimensions, dual spaces always have the same dimension as the vector space they are dual to). For example, if our vector space consists of continuous functions on a line, the dual space will not only include such functions but will also include distributions: Dirac delta functions, which produce the value of f at a single point when their inner-product integral is taken with some f(x). On the other hand, the dual of the dual space (our original vector space) does not have Dirac deltas in it, because the integral of a Dirac delta squared does not return a number (in fact, it does not make sense mathematically). Finally, it is very helpful to consider this example as being essentially the same as the first example, where the inner product was a sum over a finite number of vector and covector components. (We'll consider just n = 1, functions on a line, for the discussion here.) The index i has simply become continuous, and turned into the label x. Instead of having one number for each index i (an n-dimensional vector), we have one for each value of x (a function). In the inner product, the sum over i becomes an integral over dx, and then the analogy is complete.

A third example of an inner product which shows up in theoretical physics, but which at first looks quite different from those above, is defined on certain vector spaces of matrices, where the matrices themselves are the vectors. A convenient inner product is to define ⟨A|B⟩ = Tr(A†B), where Tr denotes the matrix trace (the sum of the diagonal elements) of the matrix product of the Hermitian conjugate of A with B. (Note that this does not even require that A and B be square matrices, just that they have the same numbers of rows and columns. When they are not square, A†B and BA† are square matrices of different sizes, but Tr(BA†) gives the same number, by the cyclic property of the trace.) For square matrices, this definition is very convenient because it does not change when we make a unitary change of basis: replacing A by U⁻¹AU and B by U⁻¹BU leaves their inner product unchanged if U is unitary (U⁻¹ = U†), as you should check. We can also use this to define the norm of a matrix: ⟨A|A⟩ is the sum of |A_ij|² over all the entries, which is never negative, and for matrices which can be diagonalized by a unitary change of basis it equals the sum of the absolute squares of the eigenvalues of A. Often, the inner product will be defined with an extra 1/n accompanying the trace for n-by-n matrices; this gives the n-dimensional unit matrix a norm of 1.
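A quick numerical check of this matrix inner product and of its invariance under a unitary change of basis (the random matrices and the unitary below are arbitrary examples):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

def braket(A, B):
    """<A|B> = Tr(A† B)."""
    return np.trace(A.conj().T @ B)

# <A|A> is real and non-negative: it is the sum of |A_ij|^2 over all entries.
print(braket(A, A).real, np.sum(np.abs(A) ** 2))

# Invariance under a unitary change of basis A -> U^(-1) A U, with U unitary:
U, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
A2, B2 = U.conj().T @ A @ U, U.conj().T @ B @ U
print(np.isclose(braket(A, B), braket(A2, B2)))    # True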

3.3 Outer Products are Linear Operators

As defined above, an inner product is a bracket, with the bra covector and the ket vector sandwiched together to give a number. What, then, is an outer product, where the ket (|·⟩) is to the left of the bra (⟨·|)? In linear algebra, this would correspond to the matrix product of a column vector on the left with a row vector on the right. Instead of producing a number (a 1-by-1 matrix), this produces a matrix (n by n, where n is the dimension of the vector space). A matrix is a linear operator: it acts on column vectors to give other column vectors, and does so linearly. It can also act on row (co)vectors (multiplying the matrix on its right) to give other row (co)vectors, and that operation is also linear (doubling the input covector doubles the output covector). As the Dirac notation immediately suggests, this interpretation extends to infinite-dimensional vector spaces: an outer product is a linear operator, which can act linearly on the vectors and also linearly on covectors.

The anti-sandwiched bra and ket in an outer product can operate on a ket to the right: the inner product that ket forms with the bra part of the outer product produces a number, that number then multiplies the ket part of the outer product, and the result of the operation is that ket times the number. Similarly, the ket part of the outer product is ready to form an inner product with a bra covector on the left, producing a number to multiply the bra from the outer product. Of course, either result can then operate on yet another vector (from the dual space of the result of the first operation) to produce an ordinary number. In other words, an outer product can operate on both a bra (on the left) and a ket (to the right) to produce just a number: the product of the two inner products created by this operation.

One simple but important example of an outer product is a projection operator onto a single one-dimensional subspace of the original vector (or covector) space. To create this, suppose that we have chosen a basis for our vector space, say {|φᵢ⟩}, where the index i runs over all members of the basis (this means that each member is linearly independent of the others, and so cannot be written as a linear combination of them, and that any vector can be written uniquely as a linear combination of basis vectors). We can always then create the canonically dual basis of covectors (which we'll use the same labels for). These are constructed to satisfy ⟨φᵢ|φⱼ⟩ = δᵢⱼ: each basis covector is orthogonal to all basis vectors except the one which shares its label, and it has an inner product of 1 with that one. Then the projection onto the one-dimensional subspace spanned by the first basis vector, for instance, is given by P₁ = |φ₁⟩⟨φ₁|. When it acts on a vector, it leaves unchanged that part of the vector which is a multiple of |φ₁⟩, and removes the rest of the vector (and likewise for covectors).

For an ordinary example, suppose that we are in regular 3-dimensional space, and we're using the usual Cartesian basis x̂, ŷ, and ẑ. These will still come in two types each: once as a basis vector and once as a basis covector. Then the projection onto the second basis vector is given by P₂ = |ŷ⟩⟨ŷ|.
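Here is the same projector written out as an outer product in numpy (a minimal sketch of the 3-D Cartesian example; the test vector is made up):

import numpy as np

xhat = np.array([[1.0], [0.0], [0.0]])   # |x>
yhat = np.array([[0.0], [1.0], [0.0]])   # |y>
zhat = np.array([[0.0], [0.0], [1.0]])   # |z>

P2 = yhat @ yhat.T                       # |y><y|: the projector onto the y direction

v = 4.0 * xhat - 2.0 * yhat + 7.0 * zhat
print((P2 @ v).ravel())                  # [ 0. -2.  0.]: the part of v along y
print(np.allclose(P2 @ P2, P2))          # True: projecting twice is the same as once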

When it acts upon an arbitrary vector |v⟩, written as |v⟩ = v_x|x̂⟩ + v_y|ŷ⟩ + v_z|ẑ⟩, it returns just P₂|v⟩ = v_y|ŷ⟩. This represents the shadow of |v⟩ in just the y direction.

Projection operators have many nice features, some of which we'll summarize in an upcoming handout on linear algebra. Repeated application of a projection operator doesn't do anything after the first projection, so they satisfy P² = P (as is easy to check with the above examples). This is also true of projections onto subspaces of dimension higher than 1, which are created by adding up several different one-dimensional projection operators (and these also equal themselves when squared, as you can easily check). One of the most important such sums is the summation over the entire set of projection operators: this doesn't change anything, so it is the identity operator, Σᵢ Pᵢ = I, which is called a completeness relation.

Just as general projections can be written as sums of outer products, so can any general linear operator be written as a linear combination of outer products. So outer products are very useful in general, as providing a basis for linear operators. An example (using familiar 3-dimensional space once again) is given by a reshuffling of the basis vectors into one another: consider the operator R = |ŷ⟩⟨x̂| + |ẑ⟩⟨ŷ| + |x̂⟩⟨ẑ|. It is easy to see that when this acts on a vector, it simply relabels the x, y, and z components cyclically. So it is a rotation by 120 degrees about the axis parallel to the vector (x, y, z) = (1, 1, 1).
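To close the example, here is a short numpy check (basis vectors and test vector as above) of the completeness relation and of the claim that R acts as a 120-degree rotation about (1, 1, 1):

import numpy as np

xhat, yhat, zhat = np.eye(3)[:, [0]], np.eye(3)[:, [1]], np.eye(3)[:, [2]]

# Completeness: summing the projectors onto a full basis gives the identity operator.
I = xhat @ xhat.T + yhat @ yhat.T + zhat @ zhat.T
print(np.allclose(I, np.eye(3)))                   # True

# R = |y><x| + |z><y| + |x><z| relabels the components cyclically.
R = yhat @ xhat.T + zhat @ yhat.T + xhat @ zhat.T
v = np.array([[1.0], [2.0], [3.0]])
print((R @ v).ravel())                             # [3. 1. 2.]: each component moves to the next slot

# R fixes the (1, 1, 1) axis, and applying it three times gives the identity,
# as expected for a rotation by 120 degrees about that axis.
axis = np.ones((3, 1))
print(np.allclose(R @ axis, axis), np.allclose(np.linalg.matrix_power(R, 3), np.eye(3)))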
