Why does NumPy matrix-multiplication broadcasting work in one direction but not in the transposed direction?

Problem description

Consider the following matrix product between two arrays:

import numpy as np
A = np.random.rand(2,10,10)
B = np.random.rand(2,2)
C = A.T @ B

…and all is well. I take the above to be a 1×2 by 2×2 vector-matrix product, broadcast over the 10×10 second and third dimensions of A. Inspecting the result C confirms this intuition: np.allclose(C[i,j], A.T[i,j] @ B) holds for all i, j.
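
For example, spelled out as an explicit loop (a minimal sketch of that check):

import numpy as np

A = np.random.rand(2,10,10)
B = np.random.rand(2,2)
C = A.T @ B          # A.T has shape (10,10,2), so C has shape (10,10,2)

# each C[i,j] is the 1x2-by-2x2 vector-matrix product described above
for i in range(10):
    for j in range(10):
        assert np.allclose(C[i,j], A.T[i,j] @ B)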

Now, mathematically speaking, I should also be able to compute C.T as B.T @ A, but:

B.T @ A                                                                
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-32-ffdbb14ca160> in <module>
----> 1 B.T @ A

ValueError: matmul: Input operand 1 has a mismatch in its core dimension 0, with gufunc signature (n?,k),(k,m?)->(n?,m?) (size 10 is different from 2)

So broadcasting-wise, a 10×10×2 tensor and a 2×2 matrix are compatible for a matrix product, but a 2×2 matrix and a 2×10×10 tensor are not?

Extra info: I want to be able to compute the "quadratic product" A.T @ B @ A, and having to write a for loop to manually "broadcast" over one of the dimensions really irks me. It feels like this should be doable more elegantly. I'm quite experienced with Python and NumPy, but I rarely go beyond two-dimensional arrays.

What am I missing here? Is there something about how transposition acts on tensors in NumPy that I don't understand?

Tags: python, numpy, array-broadcasting

Solution


In [194]: A = np.random.rand(2,10,10)
     ...: B = np.random.rand(2,2)
In [196]: A.T.shape                                                             
Out[196]: (10, 10, 2)

In [197]: C = A.T @ B                                                           
In [198]: C.shape                                                               
Out[198]: (10, 10, 2)

The einsum equivalent is:

In [199]: np.allclose(np.einsum('ijk,kl->ijl',A.T,B),C)                         
Out[199]: True

or incorporating the transpose into the indexing:

In [200]: np.allclose(np.einsum('kji,kl->ijl',A,B),C)                           
Out[200]: True

Note that k is the summed dimension. j and l are other dot dimensions. i is a kind of 'batch' dimension.
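
A sketch of that batching behaviour, checking matmul against an explicit loop over the batch axis i (same A and B as above):

import numpy as np

A = np.random.rand(2,10,10)
B = np.random.rand(2,2)
C = A.T @ B    # A.T: (i=10, j=10, k=2) @ B: (k=2, l=2)

# matmul treats the leading axis i as a batch: each (10,2) slice of A.T
# is matrix-multiplied by B independently
C_loop = np.stack([A.T[i] @ B for i in range(10)])
assert np.allclose(C, C_loop)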

Or, matching your per-element check: np.einsum('k,kl->l', A.T[i,j], B).

To get C.T, the einsum result indices should be lji, i.e. 'lk,jki->lji':

In [201]: np.allclose(np.einsum('lk,jki->lji', B.T, A.transpose(1,0,2)), C.T)      
Out[201]: True

In [226]: np.allclose(np.einsum('ij,jkl->ikl', B.T, A), C.T)                       
Out[226]: True
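
Since the summed axis j here is the first axis of A, the [226] contraction can also be written with np.tensordot; a sketch, where axes=1 pairs the last axis of B.T with the first axis of A:

import numpy as np

A = np.random.rand(2,10,10)
B = np.random.rand(2,2)
C = A.T @ B

# result[i,k,l] = sum_j B.T[i,j] * A[j,k,l]
assert np.allclose(np.tensordot(B.T, A, axes=1), C.T)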

Matching [201] with @ requires a further transpose:

In [225]: np.allclose((B.T@(A.transpose(1,0,2))).transpose(1,0,2), C.T)          
Out[225]: True

With einsum we can place the axes in any order, but with matmul the order is fixed: (batch, i, k)@(batch, k, l) -> (batch, i, l) (where the batch dimensions can be broadcast).
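
A sketch of that fixed signature, showing both the failing call and the repaired one:

import numpy as np

A = np.random.rand(2,10,10)
B = np.random.rand(2,2)

# matmul's signature is (..., n, k) @ (..., k, m) -> (..., n, m).
# A.T @ B reads as (10 batch, 10, 2) @ (2, 2): k = 2 on both sides, OK.
# B.T @ A reads as (2, 2) @ (2 batch, 10, 10): k = 2 vs 10, ValueError.
try:
    B.T @ A
except ValueError as e:
    print(e)    # core dimension mismatch: size 10 is different from 2

# moving a size-10 axis of A into the batch slot lines the core dims up
assert np.allclose((B.T @ A.transpose(1,0,2)).transpose(1,0,2), (A.T @ B).T)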

Your example might be easier to follow if A had shape (2,10,9) and B shape (2,3), with C then coming out as (9,10,3):

In [229]: A = np.random.rand(2,10,9); B = np.random.rand(2,3)                   
In [230]: C = A.T @ B                                                           
In [231]: C.shape                                                               
Out[231]: (9, 10, 3)
In [232]: C.T.shape                                                             
Out[232]: (3, 10, 9)

In [234]: ((B.T) @ (A.transpose(1,0,2))).shape                                    
Out[234]: (10, 3, 9)
In [235]: ((B.T) @ (A.transpose(1,0,2))).transpose(1,0,2).shape                   
Out[235]: (3, 10, 9)
In [236]: np.allclose(((B.T) @ (A.transpose(1,0,2))).transpose(1,0,2), C.T)        
Out[236]: True
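
As for the "quadratic product" A.T @ B @ A from the question: if the intent is the quadratic form A.T[i,j] @ B @ A.T[i,j] for every i, j (one plausible reading; the sketch below assumes it, back with the original square shapes), a single einsum call avoids the manual loop:

import numpy as np

A = np.random.rand(2,10,10)
B = np.random.rand(2,2)

# Q[i,j] = A.T[i,j] @ B @ A.T[i,j], a quadratic form per (i,j)
Q = np.einsum('kji,kl,lji->ij', A, B, A)

# the same with @, by giving each length-2 vector an explicit row/column shape
Q2 = (A.T[..., None, :] @ B @ A.T[..., :, None])[..., 0, 0]
assert np.allclose(Q, Q2)

# spot-check one element against the scalar product
i, j = 3, 7
assert np.allclose(Q[i,j], A.T[i,j] @ B @ A.T[i,j])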
