Machine Learning Week_1 Linear Algebra Review 7-12

asmurmur 2021-10-09 16:00

4.7 Video: Matrix Matrix Multiplication

In this video we'll talk about matrix-matrix multiplication, or how to multiply two matrices together. Later, when we talk about the method in linear regression for solving for the parameters theta 0 and theta 1 all in one shot, without needing an iterative algorithm like gradient descent, it turns out that matrix-matrix multiplication is one of the key steps that you need to know.

So let's, as usual, start with an example. Let's say I have two matrices and I want to multiply them together. Let me again just run through this example and then I'll tell you a little bit of what happened. So the first thing I'm gonna do is I'm going to pull out the first column of this matrix on the right. And I'm going to take this matrix on the left and multiply it by a vector that is just this first column.

image

And it turns out, if I do that, I'm going to get the vector 11, 9. So this is the same matrix-vector multiplication as you saw in the last video.

I worked this out in advance, so I know it's 11, 9. And then the second thing I want to do is I'm going to pull out the second column of this matrix on the right. And I'm then going to take this matrix on the left, so take that matrix, and multiply it by that second column on the right. So again, this is a matrix-vector multiplication step which you saw from the previous video. And it turns out that if you multiply this matrix and this vector you get 10, 14. And by the way, if you want to practice your matrix-vector multiplication, feel free to pause the video and check this product yourself.

Then I'm just gonna take these two results and put them together, and that'll be my answer. So it turns out the outcome of this product is gonna be a two by two matrix. And the way I'm gonna fill in this matrix is just by taking my elements 11, 9, and plugging them here. And taking 10, 14 and plugging them into the second column, okay? So that was the mechanics of how to multiply a matrix by another matrix. You basically look at the second matrix one column at a time and you assemble the answers. And again, we'll step through this much more carefully in a second. But I just want to point out also, this first example is a 2x3 matrix. Multiply that by a 3x2 matrix, and the outcome of this product turns out to be a 2x2 matrix. And again, we'll see in a second why this was the case. All right, that was the mechanics of the calculation.
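The slide with the actual matrices isn't reproduced here, so the following Octave sketch uses a 2 by 3 and a 3 by 2 matrix assumed from the lecture slide; they reproduce the 11, 9 and 10, 14 columns quoted above.

% Assumed values of the 2 by 3 and 3 by 2 matrices from the slide
A = [1, 3, 2; 4, 0, 1]
B = [1, 3; 0, 1; 5, 2]

% Multiply A by each column of B, as in the transcript
A * B(:, 1)    % first column of the product:  [11; 9]
A * B(:, 2)    % second column of the product: [10; 14]

% The full product is the 2 by 2 matrix [11, 10; 9, 14]
A * B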

Let's actually look at the details and look at what exactly happened. Here are the details. I have a matrix A and I want to multiply that with a matrix B and the result will be some new matrix C.

image

It turns out you can only multiply together matrices whose dimensions match. So A is an m x n matrix, so m rows, n columns. And we multiply it with an n x o matrix. And it turns out this n here must match this n here. So the number of columns in the first matrix must equal the number of rows in the second matrix. And the result of this product will be an m x o matrix, like the matrix C here. And in the previous video everything we did corresponded to the special case of o being equal to 1. That was the case of B being a vector. But now we're gonna deal with the case of values of o larger than 1. So here's how you multiply together the two matrices. What I'm going to do is I'm going to take the first column of B and treat that as a vector, and multiply the matrix A by the first column of B. And the result of that will be an m by 1 vector, and I'm gonna put that over here.

Then I'm gonna take the second column of B, right? So this is another n by 1 vector. So this column here, this is n by 1. It's an n-dimensional vector. I'm gonna multiply this matrix with this n by 1 vector. The result will be an m-dimensional vector, which we'll put there, and so on.

And then I'm gonna take the third column, multiply it by this matrix. I get an m-dimensional vector. And so on, until you get to the last column. The matrix times the last column gives you the last column of C.

Just to say that again, the ith column of the matrix C is obtained by taking the matrix A and multiplying the matrix A with the ith column of the matrix B for the values of i = 1, 2, up through o. So this is just a summary of what we did up there in order to compute the matrix C.
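Here is a minimal Octave sketch of that column-by-column procedure, using arbitrary example matrices (any A and B with matching inner dimensions would do):

% Column-by-column matrix-matrix multiplication, with arbitrary example matrices
A = [1, 2; 3, 4; 5, 6];              % m x n  (3 x 2)
B = [1, 0, 2; 0, 1, 3];              % n x o  (2 x 3)

C = zeros(size(A, 1), size(B, 2));   % m x o result
for i = 1:size(B, 2)
  C(:, i) = A * B(:, i);             % ith column of C = A times ith column of B
end

C
A * B                                % same result computed directly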

image

Let's look at just one more example. Let's say I want to multiply together these two matrices. So what I'm going to do is first pull out the first column of my second matrix. That was my matrix B on the previous slide, and I therefore have this matrix times that vector. And so, oh, let's do this calculation quickly. This is going to be equal to 1, 3 times 0, 3, so that gives 1 x 0 + 3 x 3. And the second element is going to be 2, 5 times 0, 3, so that's gonna be 2 x 0 + 5 x 3. And that is 9, 15. Oh, actually let me write that in green. So this is 9, 15. And then next I'm going to pull out the second column of this and do the corresponding calculations. So that's this matrix times this vector 1, 2. Let's also do this quickly, so that's 1 x 1 + 3 x 2, so that was that row. And let's do the other one. So let's see, that gives me 2 x 1 + 5 x 2, and so that is going to be equal to, let's see, 1 x 1 + 3 x 2 is 7 and 2 x 1 + 5 x 2 is 12. So now I have these two, and so my outcome, the product of these two matrices, is going to be this goes here and this goes here. So I get 9, 15 and 7, 12. And you may notice also that when you multiply a 2x2 matrix with another 2x2 matrix, the resulting dimension is going to be that first 2 by that second 2. So the result is itself also a 2x2 matrix.
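A quick Octave check of this 2 x 2 example (the matrices are as read off in the transcript):

% Checking the 2 by 2 example just worked through
A = [1, 3; 2, 5]
B = [0, 1; 3, 2]
A * B           % gives [9, 7; 15, 12]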

Finally, let me show you one more neat trick that you can do with matrix-matrix multiplication. Let's say, as before, that we have four houses whose prices we wanna predict.

image

Only now, we have three competing hypotheses shown here on the right. So if you want to apply all three competing hypotheses to all four of your houses, it turns out you can do that very efficiently using a matrix-matrix multiplication. So here on the left is my usual matrix, same as from the last video, where these values are my house sizes, and I've put 1s here on the left as well. And what I am going to do is construct another matrix where the first column is this -40 and 0.25 and the second column is this 200, 0.1 and so on. And it turns out that if you multiply these two matrices, what you find is that this first column, I'll draw that in blue. Well, how do you get this first column?

Our procedure for matrix-matrix multiplication is, the way you get this first column, is you take this matrix and you multiply it by this first column. And we saw in the previous video that this is exactly the predicted housing prices of the first hypothesis, right, of this first hypothesis here.

And how about the second column? Well, the way you get the second column is, you take this matrix and you multiply it by this second column. And so the second column turns out to be the predictions of the second hypothesis up there, and similarly for the third column.

image

And so I didn't step through all the details, but hopefully you can just feel free to pause the video and check the math yourself and check that what I just claimed really is true. But it turns out that by constructing these two matrices, what you can therefore do is very quickly apply all 3 hypotheses to all 4 house sizes to get all 12 predicted prices output by your 3 hypotheses on your 4 houses.
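As a sketch of this trick in Octave: the first two hypotheses (-40 + 0.25x and 200 + 0.1x) are the ones quoted above, while the house sizes and the third hypothesis are assumed values for illustration only.

% Apply 3 hypotheses to 4 houses with a single matrix-matrix multiplication
sizes = [2104; 1416; 1534; 852];    % assumed house sizes (not given in this section)
X = [ones(4, 1), sizes]             % 4 x 2 data matrix: a column of 1s, then the sizes

% Each column holds one hypothesis' parameters [theta0; theta1];
% the third column is a made-up hypothesis for illustration
Theta = [-40, 200, -150;
         0.25, 0.1, 0.4];

predictions = X * Theta             % 4 x 3 matrix: 12 predicted prices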

So with just one matrix multiplication step you managed to make 12 predictions. And even better, it turns out that there are lots of good linear algebra libraries that will do that matrix multiplication step for you. Pretty much any reasonable programming language that you might be using, certainly all of the top ten most popular programming languages, will have great linear algebra libraries. And there'll be good linear algebra libraries that are highly optimized to do that matrix-matrix multiplication very efficiently, including taking advantage of any sort of parallel computation that your computer may be capable of, whether your computer has multiple cores or multiple processors, or, within a processor, the sort of parallelism called SIMD parallelism that your computer can take advantage of. And there are very good free libraries that you can use to do this matrix-matrix multiplication very efficiently, so that you can very efficiently make lots of predictions with lots of hypotheses.

unfamiliar words

  1. mechanic [məˈkænɪk] n. mechanic; mechanics; machinist; method

    • exp1: N a person whose job is repairing machines, especially the engines of vehicles
      a car mechanic

    • exp2: N the science of movement and force (mechanics)

    • exp3: N the way sth works or is done (the mechanics of sth)
      The exact mechanics of how payment will be made will be decided later.

  2. assemble [əˈsembl] v. to gather; to put together; to fit together; to collect
    exp: to fit together all the separate parts of sth, for example a piece of furniture
    The shelves are easy to assemble.

  3. parallel [ˈpærəlel] adj. parallel; closely similar; (computing) running in parallel

    • parallel lines
    • a parallel case
    • parallel processing
  4. parallelism [ˈpærəlelɪzəm] n. similarity; a similar feature; (computing) parallel execution
    exp: N the state of being similar; a similar feature
    I think he exaggerates the parallelism between the two cases.

unfamiliar words in transcript

So that was the mechanics of how to multiply a matrix by another matrix.

You assemble the answers.

Including taking advantage of any sort of parallel computation that your computer may be capable of, whether your computer has multiple cores or multiple processors, or, within a processor, the sort of parallelism called SIMD parallelism that your computer can take advantage of.

4.8 Reading: Matrix Matrix Multiplication

We multiply two matrices by breaking it into several vector multiplications and concatenating the result.

\[\begin{bmatrix} a & b \\ c & d \\ e & f \end{bmatrix} \ast \begin{bmatrix} w & x\\ y & z \end{bmatrix} = \begin{bmatrix} a \ast w + b \ast y & a \ast x + b \ast z \\ c \ast w + d \ast y & c \ast x + d \ast z \\ e \ast w + f \ast y & e \ast x + f \ast z \end{bmatrix} \]

An m x n matrix multiplied by an n x o matrix results in an m x o matrix. In the above example, a 3 x 2 matrix times a 2 x 2 matrix resulted in a 3 x 2 matrix.

To multiply two matrices, the number of columns of the first matrix must equal the number of rows of the second matrix.

For example:

% Initialize a 3 by 2 matrix 
A = [1, 2; 3, 4; 5, 6]

% Initialize a 2 by 1 matrix 
B = [1; 2] 

% We expect a resulting matrix of (3 by 2)*(2 by 1) = (3 by 1) 
mult_AB = A*B

% Make sure you understand why we got that result

- - - - - - - - - - - - - - - - - - - - - - - -  - - - - - - - - - -
After running:
- - - - - - - - - - - - - - - - - - - - - - - -  - - - - - - - - - -

A =

   1   2
   3   4
   5   6

B =

   1
   2

mult_AB =

    5
   11
   17

unfamiliar words

  1. concatenate [kɒnˈkætɪˌneɪt] v. to link together; to join end to end
    The Concat function will concatenate two or more strings from the source into a single string value on the target side.

unfamiliar words in transcript

We multiply two matrices by breaking it into several vector multiplications and concatenating the result.

4.9 Video: Matrix Multiplication Properties

Matrix multiplication is really useful, since you can pack a lot of computation into just one matrix multiplication operation. But you should be careful of how you use them. In this video, I wanna tell you about a few properties of matrix multiplication.

When working with just real numbers or when working with scalars, multiplication is commutative. And what I mean by that is that if you take 3 times 5, that is equal to 5 times 3. And the ordering of this multiplication doesn't matter. And this is called the commutative property of multiplication of real numbers. This property means you can reverse the order in which you multiply things.

image

This is not true for matrix multiplication. So concretely, if A and B are matrices, then in general, A times B is not equal to B times A. So, just be careful of that. It's not okay to arbitrarily reverse the order in which you multiply matrices. Matrix multiplication is not commutative, is the fancy way of saying it. As a concrete example, here are two matrices, this matrix 1 1 0 0 times 0 0 2 0, and if you multiply these two matrices you get this result on the right. Now let's swap around the order of these two matrices. So I'm gonna take these two matrices and just reverse them. It turns out if you multiply these two matrices, you get the second answer on the right. And clearly, right, these two results are not equal to each other.

So, in fact, in general if you have a matrix operation like A times B, if A is an m by n matrix, and B is an n by m matrix, just as an example, then it turns out that the matrix A times B, right, is going to be an m by m matrix, whereas the matrix B times A is going to be an n by n matrix. So the dimensions don't even match, right? So A x B and B x A may not even be the same dimension. In the example on the left, I have all two by two matrices, so the dimensions were the same, but in general, reversing the order of the matrices can even change the dimension of the outcome. So, matrix multiplication is not commutative.
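A quick Octave sketch of that dimension argument, with arbitrary example matrices:

% A*B and B*A need not even have the same dimensions
A = [1, 2, 3; 4, 5, 6]      % 2 by 3  (m by n)
B = [1, 0; 0, 1; 1, 1]      % 3 by 2  (n by m)
size(A * B)                 % 2 by 2  (m by m)
size(B * A)                 % 3 by 3  (n by n)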

Here's the next property I want to talk about. So, when talking about real numbers or scalars, let's say I have 3 x 5 x 2. I can either multiply 5 x 2 first, then I can compute this as 3 x 10. Or, I can multiply 3 x 5 first, and I can compute this as 15 x 2. And both of these give you the same answer, right? Both of these are equal to 30. So it doesn't matter whether I multiply 5 x 2 first or whether I multiply 3 x 5 first, because, sort of, well, 3 x (5 x 2) = (3 x 5) x 2. And this is called the associative property of real number multiplication. It turns out that matrix multiplication is associative.

image

So concretely, let's say I have a product of three matrices A x B x C. Then I can compute this either as A x (B x C) or I can compute this as (A x B) x C, and these will actually give me the same answer. I'm not gonna prove this, but you can just take my word for it, I guess. So just to be clear what I mean by these two cases, let's look at the first one, right, this first case. What I mean by that is, if you actually wanna compute A x B x C, what you can do is first compute B x C, so set D = B x C, then compute A x D. And so this here is really computing A x B x C. Or, for this second case, you can set E = A x B, then compute E times C. And this is then the same as A x B x C, and it turns out that both of these options are guaranteed to give you the same answer. And so we say that matrix multiplication thus enjoys the associative property. Okay? And don't worry about the terminology associative and commutative. That's what it's called, but I'm not really going to use this terminology later in this class, so don't worry about memorizing those terms.
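A minimal numerical check of associativity in Octave, with example matrices chosen arbitrarily:

% (A*B)*C equals A*(B*C)
A = [1, 2; 3, 4];
B = [0, 1; 1, 0];
C = [2, 0; 0, 3];

D = B * C;      % compute B*C first ...
A * D           % ... then A*(B*C)

E = A * B;      % or compute A*B first ...
E * C           % ... then (A*B)*C -- same result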

Finally, I want to tell you about the Identity Matrix, which is a special matrix. So let's again make the analogy to what we know of real numbers. When dealing with real numbers or scalar numbers, the number 1, you can think of it as the identity of multiplication. And what I mean by that is that for any number z, 1 x z = z x 1. And that's just equal to the number z for any real number z.

image

So 1 is the identity for multiplication and so it satisfies this equation. It turns out that in the space of matrices there's an identity matrix as well, and it's usually denoted I, or sometimes written I subscript n x n if we want to make its dimension explicit. So I subscript n x n is the n x n identity matrix.

And so there's a different identity matrix for each dimension n. And here are a few examples. Here's the 2 x 2 identity matrix, here's the 3 x 3 identity matrix, here's the 4 x 4 identity matrix. So the identity matrix has the property that it has ones along the diagonal.

All right, and so on. And 0 everywhere else. And so, by the way, the 1 x 1 identity matrix is just the number 1, the 1 x 1 matrix with just a 1 in it, so it's not a very interesting identity matrix. And informally, when I or others are being sloppy, very often we'll write the identity matrix using the following notation: we'll draw square brackets, just write one one one dot dot dot one, and then we'll maybe somewhat sloppily write a bunch of zeros there. And this big zero and this big zero are meant to denote that this matrix is zero everywhere except for the diagonal. So this is just how I might sloppily write this identity matrix.

And it turns out that the identity matrix has the property that for any matrix A, A times the identity equals I times A equals A, so that's a lot like this equation that we have up here. Right? So 1 times z equals z times 1 equals z itself. So I times A equals A times I equals A.

image

Just to make sure we have the dimensions right. So if A is an m by n matrix, then this identity matrix here, that's an n by n identity matrix.

And if A is m by n, then this identity matrix here, right, for matrix multiplication to make sense, has to be an m by m matrix, because this m has to match up with that m. And in either case, the outcome of this process is that you get back the matrix A, which is m by n.

So whenever we write the identity matrix I, you know, very often the dimension, right, will be implicit from the context. So these two I matrices are actually matrices of different dimensions. One may be n by n, the other is m by m. But when we want to make the dimension of the matrix explicit, then sometimes we'll write it as I subscript n by n, kind of like we had up here. But very often, the dimension will be implicit.
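A short Octave sketch of that point, using an assumed non-square example matrix so the two identity matrices really do have different dimensions:

% The identity matrix's dimension is implied by the matrix it multiplies
A = [1, 2, 3; 4, 5, 6]      % 2 by 3 example matrix
A * eye(3)                  % needs the 3 by 3 identity; result equals A
eye(2) * A                  % needs the 2 by 2 identity; result also equals A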

Finally, I just wanna point out that earlier I said that AB is not, in general, equal to BA. Right? For most matrices A and B, this is not true. But when B is the identity matrix, this does hold true: A times the identity matrix does equal the identity times A; it's just that this is not true for other matrices B in general. (Note: strictly speaking, A x I and I x A use the same identity matrix only when A is square; for a non-square A the two identity matrices have different dimensions, as discussed above.)

So, that's it for the properties of matrix multiplication and the special matrices, like the identity matrix, that I wanted to tell you about. In the next and final video of our linear algebra review, I'm going to quickly tell you about a couple of special matrix operations, and after that you'll know everything you need to know about linear algebra for this class.

unfamiliar words

  1. commutative [kəˈmjuːtətɪv] adj. commutative (the order of the operands does not affect the result)
    exp: ADJ giving the same result whatever the order in which the quantities are shown
    The groups of the solvable equations are commutative.

  2. arbitrary [ˈɑːbɪtrəri] adj. arbitrary; based on personal whim rather than reason; random
    Arbitrary arrests and detention without trial were common.

  3. in which: in it; within which

    • He had a vision of a world in which there would be no wars.

    • There is a sense in which we are all to blame for the tragedy.

  4. associative [əˈsəʊʃiətɪv] adj. associative; relating to association
    exp: ADJ giving the same result no matter what order the parts of a calculation are done, for example (a × b) × c = a × (b × c) (mathematics: the associative law)

  5. sloppy [ˈslɒpi] adj. careless; messy; slapdash
    exp: that shows a lack of care, thought or effort
    sloppy thinking

  6. implicit [ɪmˈplɪsɪt] adj. implied; not directly expressed
    exp: ADJ suggested without being directly expressed

    • Implicit in his speech was the assumption that they were guilty.
    • implicit criticism

unfamiliar words in transcript

When working with just real numbers or when working with scalars, multiplication is commutative.

It's not okay to arbitrarily reverse the order in which you multiply matrices.

And this is called the associative property of real number multiplication.

And informally, when I or others are being sloppy, very often we'll write the identity matrix using the following notation.

So whenever we write the identity matrix I, you know, very often the dimension, right, will be implicit from the context.

4.10 Reading: Matrix Multiplication Properties

Matrix multiplication is not commutative: \(A∗B \neq B∗A\)

Matrix multiplication is associative: \((A∗B)∗C = A∗(B∗C)\)

The identity matrix, when multiplied by any matrix of the same dimensions, results in the original matrix. It's just like multiplying numbers by 1. The identity matrix simply has 1's on the diagonal (upper left to lower right diagonal) and 0's elsewhere.

\[\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \]

When multiplying the identity matrix after some matrix (A∗I), the square identity matrix's dimension should match the other matrix's columns. When multiplying the identity matrix before some other matrix (I∗A), the square identity matrix's dimension should match the other matrix's rows.

% Initialize random matrices A and B 
A = [1,2;4,5]
B = [1,1;0,2]

% Initialize a 2 by 2 identity matrix
I = eye(2)

% The above notation is the same as I = [1,0;0,1]

% What happens when we multiply I*A ? 
IA = I*A 

% How about A*I ? 
AI = A*I 

% Compute A*B 
AB = A*B 

% Is it equal to B*A? 
BA = B*A 

% Note that IA = AI but AB != BA

- - - - - - - - - - - - - - - - - - - - - - - -  - - - - - - - - - -
After running:
- - - - - - - - - - - - - - - - - - - - - - - -  - - - - - - - - - -

A =

   1   2
   4   5

B =

   1   1
   0   2

I =

Diagonal Matrix

   1   0
   0   1

IA =

   1   2
   4   5

AI =

   1   2
   4   5

AB =

    1    5
    4   14

BA =

    5    7
    8   10

unfamiliar words

4.11 Video: Inverse and Transpose

In this video, I want to tell you about a couple of special matrix operations, called the matrix inverse and the matrix transpose operation. Let's start by talking about the matrix inverse, and as usual we'll start by thinking about how it relates to real numbers.

In the last video, I said that the number one plays the role of the identity in the space of real numbers, because one times anything is equal to itself. It turns out that real numbers have this property that each number has an inverse; for example, given the number three, there exists some number, which happens to be three inverse, so that that number times three gives you back the identity element one. And so three inverse, of course, is just one third. And given some other number, maybe twelve, there is some number which is the inverse of twelve, written as twelve to the minus one, or really this is just one twelfth, so that when you multiply these two things together the product is equal to the identity element one again. Now it turns out that in the space of real numbers, not everything has an inverse. For example, the number zero does not have an inverse, right? Because zero's inverse, one over zero, is undefined; one over zero is not well defined. And what we want to do, in the rest of this slide, is figure out what it means to compute the inverse of a matrix.

image

Here's the idea: If A is an n by n matrix, and it has an inverse (I will say a bit more about that later), then the inverse is going to be written A to the minus one, and A times this inverse, A to the minus one, which equals A inverse times A, is going to give us back the identity matrix. Okay? Only matrices that are m by m, for some m, have this notion of an inverse. So a matrix that is m by m is also called a square matrix, and it's called square because the number of rows is equal to the number of columns. Right, and it turns out only square matrices have inverses; so if A is a square matrix, m by m, its inverse satisfies this equation over here. Let's look at a concrete example. Let's say I have a matrix, three, four, two, sixteen. So this is a two by two matrix, so it's a square matrix, and so it may have an inverse. And it turns out that I happen to know the inverse of this matrix is zero point four, minus zero point one, minus zero point zero five, zero point zero seven five. And if I take this matrix and multiply these together, it turns out what I get is the two by two identity matrix, I, this is I two by two. Okay? And so on this slide, you know, this matrix is the matrix A, and this matrix is the matrix A-inverse.

And it turns out that, just as A times A-inverse gives the identity, if you compute A-inverse times A you also get back the identity matrix. So how did I find this inverse, or how did I come up with this inverse over here? It turns out that sometimes you can compute inverses by hand, but almost no one does that these days. And it turns out there is very good numerical software for taking a matrix and computing its inverse. So again, this is one of those things where there are lots of open source libraries that you can link to from any of the popular programming languages to compute inverses of matrices. Let me show you a quick example. How I actually computed this inverse: what I did was I used software called Octave.

So let me bring that up. We will see a lot about Octave later; let me just quickly show you an example. I set my matrix A to be equal to that matrix on the left, typing three, four; two, sixteen, so that's my matrix A, right? This is the matrix 3 4, 2 16 that I have down here on the left.

And the software lets me compute the inverse of A very easily: it's pinv(A). And so this is, right, this matrix here, zero point four, minus zero point one, and so on. This gives the numerical solution to what is the inverse of A. So let me just write inverseOfA = pinv(A), so that I can now verify that A times A-inverse is the identity matrix. I type A times the inverse of A, and the result of that is this matrix: one, one on the diagonal and essentially ten to the minus seventeen, ten to the minus sixteen elsewhere. So up to numerical precision, up to a little bit of round-off error that my computer had when multiplying these matrices, the numbers off the diagonal are essentially zero. So A times the inverse is essentially the identity matrix. I can also verify that the inverse of A times A is also equal to the identity: ones on the diagonal and values that are essentially zero, except for a little bit of round-off error, on the off-diagonals.
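Here is a brief Octave sketch of the commands just described (the matrix values are taken from the example above):

% Compute and verify the inverse of the example matrix
A = [3, 4; 2, 16]
inverseOfA = pinv(A)     % roughly [0.4, -0.1; -0.05, 0.075]

A * inverseOfA           % identity, up to round-off error on the off-diagonals
inverseOfA * A           % also the identity, up to round-off error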

In the definition of what the inverse of a matrix is, I had the caveat that the matrix must always be a square matrix, and I also had the caveat "if A has an inverse." Exactly which matrices have an inverse is beyond the scope of this linear algebra review, but one intuition you might take away is that, just as the number zero doesn't have an inverse, if A is, say, the matrix of all zeros, then this matrix A also does not have an inverse, because there is no A-inverse matrix such that this matrix of all zeros times some other matrix will give you the identity matrix. And there are a few other matrices with properties similar to this that also don't have an inverse.

(Note: if a matrix has a row of all zeros, then multiplying that row by any column vector gives 0, so every column of the product has a 0 in that row. The product therefore always has an all-zero row and can never equal the identity matrix.)

In this review I don't want to go too deeply into what it means for a matrix to have an inverse, but it turns out that for our machine learning applications this shouldn't be an issue. More precisely, for the learning algorithms where this may be an issue, namely whether or not an inverse matrix appears, I will tell you, when we get to those learning algorithms, just what it means for a matrix to have or not have an inverse and how to fix it in case you are working with matrices that don't have inverses. But the intuition, if you want one, is that you can think of matrices that don't have an inverse as being, in some sense, too close to zero. So, just to wrap up the terminology, a matrix that doesn't have an inverse is sometimes called a singular matrix or a degenerate matrix, and so this all-zeros matrix over here is an example of a matrix that is singular, or a matrix that is degenerate.
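A tiny Octave illustration of why the all-zeros matrix is singular: multiplying it by any matrix always yields zeros, so no choice of matrix can ever produce the identity.

% The all-zeros matrix cannot have an inverse
Z = zeros(2, 2)
M = [1, 2; 3, 4]    % an arbitrary example matrix
Z * M               % all zeros -- never the identity, whatever M is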

image

Finally, the last special matrix operation I want to tell you about is the matrix transpose. So suppose I have a matrix A; if I compute the transpose of A, that's what I get here on the right. This is the transpose of A, which is written A superscript T, and the way you compute the transpose of a matrix is as follows. To get the transpose, I am going to first take the first row of A, 1 2 0; that becomes the first column of the transpose. And then I'm going to take the second row of A, 3 5 9, and that becomes the second column of the matrix A transpose. And another way of thinking about how to compute the transpose is as if you're taking this sort of 45 degree axis and you are mirroring or flipping the matrix along that 45 degree axis. So here's the more formal definition of a matrix transpose.

Let's say A is an m by n matrix, and let's let B equal A transpose, like so. Then B is going to be an n by m matrix, with the dimensions reversed. So here we have a 2x3 matrix, and so the transpose becomes a 3x2 matrix. And moreover, B_ij is equal to A_ji; so the ij element of this matrix B is going to be the ji element of that earlier matrix A. So for example, B_12 is going to be equal to, look at this matrix, B_12 is going to be equal to this element 3, 1st row, 2nd column. And that's equal to this, which is A two one (A_21), second row, first column, right, which is equal to 3. And as another example, B_32, right, B_32 is this element 9, and that's equal to A two three (A_23), which is this element up here, nine.
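A short Octave check of this example (the matrix values are as read off from the transcript):

% Transpose example: A is 2 by 3, A' is 3 by 2
A = [1, 2, 0; 3, 5, 9]
B = A'
B(1, 2)         % = 3, the same as A(2, 1)
B(3, 2)         % = 9, the same as A(2, 3)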

image

And so that wraps up the definition of what it means to take the transpose of a matrix, and that in fact concludes our linear algebra review. So by now hopefully you know how to add and subtract matrices as well as multiply them, and you also know the definitions of the inverse and transpose of a matrix, and these are the main operations used in linear algebra for this course. In case this is the first time you are seeing this material, I know this was a lot of linear algebra presented very quickly, and it's a lot to absorb, but there's no need to memorize all the definitions we just went through. If you download a copy of either these slides or the lecture notes from the course website and use them as a reference, then you can always refer back to the definitions of matrix multiplication, transposes, and so on. And the lecture notes on the course website also have pointers to additional linear algebra resources which you can use to learn more about linear algebra by yourself.

And next, with these new tools, we'll be able in the next few videos to develop more powerful forms of linear regression that can deal with a lot more data, a lot more features, a lot more training examples. And later on, after linear regression, we'll continue using these linear algebra tools to derive more powerful learning algorithms as well.

unfamiliar words

  1. happen to [ˈhæpən tu] by chance; as it happens

    • You don't happen to know his name, do you?

    • If you happen to talk to him, have him call me.

  2. caveat [ˈkeɪviˌæt] n. a warning; a proviso
    exp: N a warning that particular things need to be considered before sth can be done
    One caveat: these numbers likely vary somewhat in reliability, collected as they were from, as Touch puts it, "a range of online sources."

  3. singular [ˈsɪŋɡjələr] n. the singular (form) adj. singular; exceptional; remarkable

    • exp1: N a form of a noun or verb that refers to one person or thing
      The singular of 'bacteria' is 'bacterium'.

    • exp2: ADJ unusual; strange
      a singular style of dress

  4. degenerate [dɪˈdʒenəreɪt] v. to degenerate; to deteriorate adj. degenerate; decadent

    • exp1: V to become worse, for example by becoming lower in quality or weaker
      Her health degenerated quickly.

    • exp2: ADJ having moral standards that have fallen to a level that is very low and unacceptable to most people
      a degenerate popular culture

  5. degree [dɪˈɡriː] n. a degree (a unit for measuring angles or temperature); an amount or level of something

    • an angle of ninety degrees (90˚)

    • Her job demands a high degree of skill.

unfamiliar words in Transcript

I happen to know the inverse of this matrix.

I had the caveat that the matrix must always be a square matrix.

This is an example of a matrix that is singular, or a matrix that is degenerate.

If you're taking this sort of 45 degree axis and you are mirroring or you are flipping the matrix along that 45 degree axis.

4.12 Reading: Inverse and Transpose

The inverse of a matrix A is denoted \(A^{-1}\). Multiplying by the inverse results in the identity matrix.

A non-square matrix does not have an inverse matrix. We can compute inverses of matrices in Octave with the \(pinv(A)\) function and in MATLAB with the \(inv(A)\) function. Matrices that don't have an inverse are singular or degenerate.

The transposition of a matrix is like rotating the matrix 90° in the clockwise direction and then reversing it. We can compute the transposition of matrices in MATLAB with the transpose(A) function or with A':

\[A = \begin{bmatrix} a & b \\ c & d \\ e & f \end{bmatrix} \]

\[A^{T} = \begin{bmatrix} a & c & e \\ b & d & f \end{bmatrix} \]

In other words:

\[A_{ij} = A^{T}_{ji} \]

% Initialize matrix A 
A = [1,2,0;0,5,6;7,0,9]

% Transpose A 
A_trans = A' 

% Take the inverse of A 
A_inv = inv(A)

% What is A^(-1)*A? 
A_invA = inv(A)*A

- - - - - - - - - - - - - - - - - - - - - - - -  - - - - - - - - - -
After running:
- - - - - - - - - - - - - - - - - - - - - - - -  - - - - - - - - - -

A =

   1   2   0
   0   5   6
   7   0   9

A_trans =

   1   0   7
   2   5   0
   0   6   9

A_inv =

   0.348837  -0.139535   0.093023
   0.325581   0.069767  -0.046512
  -0.271318   0.108527   0.038760

A_invA =

   1.00000  -0.00000   0.00000
   0.00000   1.00000  -0.00000
  -0.00000   0.00000   1.00000

unfamiliar words
