PEP 465: A dedicated infix operator for matrix multiplication

This PEP proposes a new binary operator to be used for matrix multiplication, called @. (Mnemonic: @ is * for mATrices.)

A new binary operator is added to the Python language, together with the corresponding in-place version:

====  ==========================  ===========================
Op    Precedence/associativity    Methods
====  ==========================  ===========================
@     Same as *                   __matmul__, __rmatmul__
@=    n/a                         __imatmul__
====  ==========================  ===========================
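
As an illustration of how these methods hook into the new operator (a minimal sketch, not part of the PEP's specification; the toy Mat2 class and its list-of-lists layout are assumptions made here for illustration):

class Mat2:
    """Illustrative toy 2x2 matrix, just to show the @ / @= protocol."""

    def __init__(self, rows):
        self.rows = rows  # [[a, b], [c, d]]

    def __matmul__(self, other):
        # self @ other: ordinary 2x2 matrix multiplication
        (a, b), (c, d) = self.rows
        (e, f), (g, h) = other.rows
        return Mat2([[a * e + b * g, a * f + b * h],
                     [c * e + d * g, c * f + d * h]])

    def __rmatmul__(self, other):
        # other @ self, tried when other's __matmul__ returns NotImplemented
        return NotImplemented

    def __imatmul__(self, other):
        # self @= other: in-place variant; here we simply rebind the rows
        self.rows = (self @ other).rows
        return self

print((Mat2([[1, 2], [3, 4]]) @ Mat2([[11, 12], [13, 14]])).rows)
# -> [[37, 40], [85, 92]], matching the worked example below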

No implementations of these methods are added to the builtin or standard library types. However, a number of projects have reached consensus on the recommended semantics for these operations; see below for details.

For details on how this operator will be implemented in CPython, see the implementation details below.

In numerical code, there are two important operations which compete for use of Python’s * operator: elementwise multiplication, and matrix multiplication. In the nearly twenty years since the Numeric library was first proposed, there have been many attempts to resolve this tension; none have been really satisfactory. Currently, most numerical Python code uses * for elementwise multiplication, and function/method syntax for matrix multiplication; however, this leads to ugly and unreadable code in common circumstances. The problem is bad enough that significant amounts of code continue to use the opposite convention (which has the virtue of producing ugly and unreadable code in different circumstances), and this API fragmentation across codebases then creates yet more problems. There does not seem to be any good solution to the problem of designing a numerical API within current Python syntax – only a landscape of options that are bad in different ways. The minimal change to Python syntax which is sufficient to resolve these problems is the addition of a single new infix operator for matrix multiplication.

Matrix multiplication has a singular combination of features which distinguish it from other binary operations, which together provide a uniquely compelling case for the addition of a dedicated infix operator:

  • Just as for the existing numerical operators, there exists a vast body of prior art supporting the use of infix notation for matrix multiplication across all fields of mathematics, science, and engineering; @ harmoniously fills a hole in Python’s existing operator system.
  • @ greatly clarifies real-world code.
  • @ provides a smoother onramp for less experienced users, who are particularly harmed by hard-to-read code and API fragmentation.
  • @ benefits a substantial and growing portion of the Python user community.
  • @ will be used frequently – in fact, evidence suggests it may be used more frequently than // or the bitwise operators.
  • @ allows the Python numerical community to reduce fragmentation, and finally standardize on a single consensus duck type for all numerical array objects.

When we crunch numbers on a computer, we usually have lots and lots of numbers to deal with. Trying to deal with them one at a time is cumbersome and slow – especially when using an interpreted language. Instead, we want the ability to write down simple operations that apply to large collections of numbers all at once. The n-dimensional array is the basic object that all popular numeric computing environments use to make this possible. Python has several libraries that provide such arrays, with numpy being at present the most prominent.

When working with n-dimensional arrays, there are two different ways we might want to define multiplication. One is elementwise multiplication:

[[1, 2],     [[11, 12],     [[1 * 11, 2 * 12],
 [3, 4]]  x   [13, 14]]  =   [3 * 13, 4 * 14]]

and the other is matrix multiplication:

[[1, 2],     [[11, 12],     [[1 * 11 + 2 * 13, 1 * 12 + 2 * 14],
 [3, 4]]  x   [13, 14]]  =   [3 * 11 + 4 * 13, 3 * 12 + 4 * 14]]
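
Both operations are one-liners on numpy arrays; here is a quick sketch reproducing the two results above (an illustrative check, assuming numpy and a Python new enough to provide this PEP's @):

import numpy as np

# Illustrative check, not from the PEP's text:
a = np.array([[1, 2], [3, 4]])
b = np.array([[11, 12], [13, 14]])

print(a * b)  # elementwise: [[11 24] [39 56]]
print(a @ b)  # matrix product: [[37 40] [85 92]]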

Elementwise multiplication is useful because it lets us easily and quickly perform many multiplications on a large collection of values, without writing a slow and cumbersome for loop. And this works as part of a very general schema: when using the array objects provided by numpy or other numerical libraries, all Python operators work elementwise on arrays of all dimensionalities. The result is that one can write functions using straightforward code like a * b + c / d, treating the variables as if they were simple values, but then immediately use this function to efficiently perform this calculation on large collections of values, while keeping them organized using whatever arbitrarily complex array layout works best for the problem at hand.
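
A concrete sketch of that schema (the rescale function here is an illustrative assumption, not from the PEP):

import numpy as np

def rescale(x, lo, hi):
    # Illustrative example: written as if x were one number...
    return (x - lo) / (hi - lo)

print(rescale(5.0, 0.0, 10.0))             # a single value: 0.5
print(rescale(np.arange(6.0), 0.0, 10.0))  # a whole array at once: [0. 0.1 0.2 0.3 0.4 0.5]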

Matrix multiplication is more of a special case. It’s only defined on 2d arrays (also known as “matrices”), and multiplication is the only operation that has an important “matrix” version – “matrix addition” is the same as elementwise addition; there is no such thing as “matrix bitwise-or” or “matrix floordiv”; “matrix division” and “matrix to-the-power-of” can be defined but are not very useful, etc. However, matrix multiplication is still used very heavily across all numerical application areas; mathematically, it’s one of the most fundamental operations there is.

Because Python syntax currently allows for only a single multiplication operator *, libraries providing array-like objects must decide: either use * for elementwise multiplication, or use * for matrix multiplication. And, unfortunately, it turns out that when doing general-purpose number crunching, both operations are used frequently, and there are major advantages to using infix rather than function call syntax in both cases. Thus it is not at all clear which convention is optimal, or even acceptable; often it varies on a case-by-case basis.

Nonetheless, network effects mean that it is very important that we pick just one convention. In numpy, for example, it is technically possible to switch between the conventions, because numpy provides two different types with different __mul__ methods. For numpy.ndarray objects, * performs elementwise multiplication, and matrix multiplication must use a function call (numpy.dot). For numpy.matrix objects, * performs matrix multiplication, and elementwise multiplication requires function syntax. Writing code using numpy.ndarray works fine. Writing code using numpy.matrix also works fine. But trouble begins as soon as we try to integrate these two pieces of code together. Code that expects an ndarray and gets a matrix, or vice-versa, may crash or return incorrect results. Keeping track of which functions expect which types as inputs, and return which types as outputs, and then converting back and forth all the time, is incredibly cumbersome and impossible to get right at any scale. Functions that defensively try to handle both types as input and DTRT, find themselves floundering into a swamp of isinstance and if statements.
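
For instance, such a defensive helper ends up looking something like this sketch (the helper and its normalization policy are illustrative assumptions, not from the PEP):

import numpy as np

def safe_matmul(a, b):
    # Illustrative pre-@ defensive code: normalize both conventions
    # to plain ndarrays before multiplying.
    if isinstance(a, np.matrix):
        a = np.asarray(a)
    if isinstance(b, np.matrix):
        b = np.asarray(b)
    return np.dot(a, b)

x = np.array([[1, 2], [3, 4]])
print(safe_matmul(x, np.matrix(x)))  # same answer whichever type callers pass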

PEP 238 split / into two operators: / and //. Imagine the chaos that would have resulted if it had instead split int into two types: classic_int, whose __div__ implemented floor division, and new_int, whose __div__ implemented true division. This, in a more limited way, is the situation that Python number-crunchers currently find themselves in.

In practice, the vast majority of projects have settled on the convention of using * for elementwise multiplication, and function call syntax for matrix multiplication (e.g., using numpy.ndarray instead of numpy.matrix). This reduces the problems caused by API fragmentation, but it doesn’t eliminate them. The strong desire to use infix notation for matrix multiplication has caused a number of specialized array libraries to continue to use the opposing convention (e.g., scipy.sparse, pyoperators, pyviennacl) despite the problems this causes, and numpy.matrix itself still gets used in introductory programming courses, often appears in StackOverflow answers, and so forth. Well-written libraries thus must continue to be prepared to deal with both types of objects, and, of course, are also stuck using unpleasant funcall syntax for matrix multiplication. After nearly two decades of trying, the numerical community has still not found any way to resolve these problems within the constraints of current Python syntax (see below).

This PEP proposes the minimum effective change to Python syntax that will allow us to drain this swamp. It splits * into two operators, just as was done for /: * for elementwise multiplication, and @ for matrix multiplication. (Why not the reverse? Because this way is compatible with the existing consensus, and because it gives us a consistent rule that all the built-in numeric operators also apply in an elementwise manner to arrays; the reverse convention would lead to more special cases.)

So that’s why matrix multiplication doesn’t and can’t just use *. Now, in the rest of this section, we’ll explain why it nonetheless meets the high bar for adding a new operator.

Right now, most numerical code in Python uses syntax like numpy.dot(a, b) or a.dot(b) to perform matrix multiplication. This obviously works, so why do people make such a fuss about it, even to the point of creating API fragmentation and compatibility swamps?

Matrix multiplication shares two features with ordinary arithmetic operations like addition and multiplication on numbers: (a) it is used very heavily in numerical programs – often multiple times per line of code – and (b) it has an ancient and universally adopted tradition of being written using infix syntax. This is because, for typical formulas, this notation is dramatically more readable than any function call syntax. Here’s an example to demonstrate:

One of the most useful tools for testing a statistical hypothesis is the linear hypothesis test for OLS regression models. It doesn’t really matter what all those words I just said mean; if we find ourselves having to implement this thing, what we’ll do is look up some textbook or paper on it, and encounter many mathematical formulas that look like:

S = (Hβ − r)ᵀ(HVHᵀ)⁻¹(Hβ − r)

Here the various variables are all vectors or matrices.

Now we need to write code to perform this calculation. In current numpy, matrix multiplication can be performed using either the function or method call syntax. Neither provides a particularly readable translation of the formula:

import numpy as np
from numpy.linalg import inv, solve

# Using dot function:
S = np.dot((np.dot(H, beta) - r).T,
           np.dot(inv(np.dot(np.dot(H, V), H.T)), np.dot(H, beta) - r))

# Using dot method:
S = (H.dot(beta) - r).T.dot(inv(H.dot(V).dot(H.T))).dot(H.dot(beta) - r)

With the @ operator, the direct translation of the above formula becomes:

S = (H @ beta - r).T @ inv(H @ V @ H.T) @ (H @ beta - r)

Notice that there is now a transparent, 1-to-1 mapping between the symbols in the original formula and the code that implements it.

Of course, an experienced programmer will probably notice that this is not the best way to compute this expression. The repeated computation of Hβ − r should perhaps be factored out; and, expressions of the form dot(inv(A), B) should almost always be replaced by the more numerically stable solve(A, B). When using @, performing these two refactorings gives us:

# Version 1 (as above)
S = (H @ beta - r).T @ inv(H @ V @ H.T) @ (H @ beta - r)

# Version 2
trans_coef = H @ beta - r
S = trans_coef.T @ inv(H @ V @ H.T) @ trans_coef

# Version 3
S = trans_coef.T @ solve(H @ V @ H.T, trans_coef)

Notice that when comparing between each pair of steps, it’s very easy to see exactly what was changed. If we apply the equivalent transformations to the code using the .dot method, then the changes are much harder to read out or verify for correctness:

# Version 1 (as above)
S = (H.dot(beta) - r).T.dot(inv(H.dot(V).dot(H.T))).dot(H.dot(beta) - r)

# Version 2
trans_coef = H.dot(beta) - r
S = trans_coef.T.dot(inv(H.dot(V).dot(H.T))).dot(trans_coef)

# Version 3
S = trans_coef.T.dot(solve(H.dot(V).dot(H.T)), trans_coef)

Readability counts! The statements using @ are shorter, contain more whitespace, can be directly and easily compared both to each other and to the textbook formula, and contain only meaningful parentheses. This last point is particularly important for readability: when using function-call syntax, the required parentheses on every operation create visual clutter that makes it very difficult to parse out the overall structure of the formula by eye, even for a relatively simple formula like this one. Eyes are terrible at parsing non-regular languages. I made and caught many errors while trying to write out the ‘dot’ formulas above. I know they still contain at least one error, maybe more. (Exercise: find it. Or them.) The @ examples, by contrast, are not only correct, they’re obviously correct at a glance.

If we are even more sophisticated programmers, and writing code that we expect to be reused, then considerations of speed or numerical accuracy might lead us to prefer some particular order of evaluation. Because @ makes it possible to omit irrelevant parentheses, we can be certain that if we do write something like (a @ b) @ c, then our readers will know that the parentheses must have been added intentionally to accomplish some meaningful purpose.

  • If one operand is >2d, and another operand is 1d, then the above rules apply unchanged, with 1d->2d promotion performed before broadcasting. E.g., arr(10, 2, 3) @ arr(3) first promotes to arr(10, 2, 3) @ arr(3, 1), then broadcasts the right argument to create the aligned operation arr(10, 2, 3) @ arr(10, 3, 1), multiplies to get an array with shape (10, 2, 1), and finally removes the added dimension, returning an array with shape (10, 2). Similarly, arr(2) @ arr(10, 2, 3) produces an intermediate array with shape (10, 1, 3), and a final array with shape (10, 3). (These shape rules are checked in the sketch after this list.)

  • 0d (scalar) inputs raise an error. Scalar * matrix multiplication is a mathematically and algorithmically distinct operation from matrix @ matrix multiplication, and is already covered by the elementwise * operator. Allowing scalar @ matrix would thus both require an unnecessary special case, and violate TOOWTDI.
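
    A minimal sketch checking these shape rules with numpy, which implements these semantics for @ (the exact exception type raised for scalar operands varies by numpy version, so this is hedged accordingly):

    import numpy as np

    # Illustrative check of the shape rules above, not from the PEP's text:
    a = np.ones((10, 2, 3))

    print((a @ np.ones(3)).shape)   # (10, 2): 1d right operand promoted, then squeezed
    print((np.ones(2) @ a).shape)   # (10, 3): 1d left operand promoted, then squeezed

    try:
        a @ 3  # 0d (scalar) operands are rejected
    except (TypeError, ValueError) as exc:  # exception type is version-dependent
        print(type(exc).__name__, exc)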
  • We group existing Python projects which provide array- or matrix-like types based on what API they currently use for elementwise and matrix multiplication.

    Projects which currently use * for elementwise multiplication, and function/method calls for matrix multiplication:

    The developers of the following projects have expressed an intention to implement @ on their array-like types using the above semantics:

    • numpy
    • pandas
    • blaze
    • theano

    The following projects have been alerted to the existence of the PEP, but it’s not yet known what they plan to do if it’s accepted. We don’t anticipate that they’ll have any objections, though, since everything proposed here is consistent with how they already do things:

    • pycuda
    • panda3d

    Projects which currently use * for matrix multiplication, and function/method calls for elementwise multiplication:

    The following projects have expressed an intention, if this PEP is accepted, to migrate from their current API to the elementwise-*, matmul-@ convention (i.e., this is a list of projects whose API fragmentation will probably be eliminated if this PEP is accepted):

    • numpy (numpy.matrix)
    • scipy.sparse
    • pyoperators
    • pyviennacl

    The following projects have been alerted to the existence of the PEP, but it’s not known what they plan to do if it’s accepted (i.e., this is a list of projects whose API fragmentation may or may not be eliminated if this PEP is accepted):

    • cvxopt

    Projects which currently use * for matrix multiplication, and which don’t really care about elementwise multiplication of matrices:

    There are several projects which implement matrix types, but from a very different perspective than the numerical libraries discussed above. These projects focus on computational methods for analyzing matrices in the sense of abstract mathematical objects (i.e., linear maps over free modules over rings), rather than as big bags full of numbers that need crunching. And it turns out that from the abstract math point of view, there isn’t much use for elementwise operations in the first place; as discussed in the Background section above, elementwise operations are motivated by the bag-of-numbers approach. So these projects don’t encounter the basic problem that this PEP exists to address, making it mostly irrelevant to them; while they appear superficially similar to projects like numpy, they’re actually doing something quite different. They use * for matrix multiplication (and for group actions, and so forth), and if this PEP is accepted, their expressed intention is to continue doing so, while perhaps adding @ as an alias. These projects include:

    • sympy
    • sage

    New functions operator.matmul and operator.__matmul__ are added to the standard library, with the usual semantics.
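
    Since operator.matmul is simply the functional form of @, it can be handed to higher-order functions; a brief illustrative sketch (not from the PEP's text):

    from functools import reduce
    import operator

    import numpy as np

    # Illustrative: reduce chains the matrix products as ((eye @ m) @ eye).
    mats = [np.eye(2), np.array([[1, 2], [3, 4]]), np.eye(2)]
    print(reduce(operator.matmul, mats))  # [[1. 2.] [3. 4.]]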

    A corresponding function, PyNumber_MatrixMultiply, is added to the C API.

    A new AST node is added named MatMult, along with a new token ATEQUAL and new bytecode opcodes BINARY_MATRIX_MULTIPLY and INPLACE_MATRIX_MULTIPLY.
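
    Both additions are visible from Python itself; a small sketch (the opcode names shown are those of CPython 3.5-3.10; later CPythons replaced them with the generic BINARY_OP):

    import ast
    import dis

    # Illustrative: the parser produces a MatMult BinOp node for a @ b.
    print(ast.dump(ast.parse("a @ b", mode="eval")))

    # Illustrative: the compiler emits BINARY_MATRIX_MULTIPLY (on 3.5-3.10).
    dis.dis(compile("a @ b", "<demo>", "eval"))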

    Two new type slots are added; whether this is to PyNumberMethods or a new PyMatrixMethods struct remains to be determined.

    Why @ instead of some other spelling? There isn’t any consensus across other programming languages about how this operator should be named; here we discuss the various options.

    Restricting ourselves only to symbols present on US English keyboards, the punctuation characters that don’t already have a meaning in Python expression context are: @, backtick, $, !, and ?. Of these options, @ is clearly the best; ! and ? are already heavily freighted with inapplicable meanings in the programming context, backtick has been banned from Python by BDFL pronouncement (see PEP 3099), and $ is uglier, even more dissimilar to * and ⋅, and has Perl/PHP baggage. $ is probably the second-best option of these, though.

    Symbols which are not present on US English keyboards start at a significant disadvantage (having to spend 5 minutes at the beginning of every numeric Python tutorial just going over keyboard layouts is not a hassle anyone really wants). Plus, even if we somehow overcame the typing problem, it’s not clear there are any that are actually better than @. Some options that have been suggested include:

    • U+00D7 MULTIPLICATION SIGN: A × B
    • U+22C5 DOT OPERATOR: A ⋅ B
    • U+2297 CIRCLED TIMES: A ⊗ B
    • U+00B0 DEGREE: A ° B

    What we need, though, is an operator that means “matrix multiplication, as opposed to scalar/elementwise multiplication”. There is no conventional symbol with this meaning in either programming or mathematics, where these operations are usually distinguished by context. (And U+2297 CIRCLED TIMES is actually used conventionally to mean exactly the wrong things: elementwise multiplication – the “Hadamard product” – or outer product, rather than matrix/inner product like our operator.) @ at least has the virtue that it looks like a funny non-commutative operator; a naive user who knows maths but not programming couldn’t look at A * B versus A × B, or A * B versus A ⋅ B, or A * B versus A ° B and guess which one is the usual multiplication, and which one is the special case.

    Finally, there is the option of using multi-character tokens. Some options:

    • Matlab and Julia use a .* operator. Aside from being visually confusable with *, this would be a terrible choice for us because in Matlab and Julia, * means matrix multiplication and .* means elementwise multiplication, so using .* for matrix multiplication would make us exactly backwards from what Matlab and Julia users expect.
    • APL apparently used +.×, which by combining a multi-character token, confusing attribute-access-like . syntax, and a unicode character, ranks somewhere below U+2603 SNOWMAN on our candidate list. If we like the idea of combining addition and multiplication operators as being evocative of how matrix multiplication actually works, then something like +* could be used – though this may be too easy to confuse with *+, which is just multiplication combined with the unary + operator.
    • PEP 211 suggested ~*. This has the downside that it sort of suggests that there is a unary * operator that is being combined with unary ~, but it could work.
    • R uses %*% for matrix multiplication. In R this forms part of a general extensible infix system in which all tokens of the form %foo% are user-defined binary operators. We could steal the token without stealing the system.
    • Some other plausible candidates that have been suggested: >< (= ascii drawing of the multiplication sign ×); the footnote operator [*] or |*| (but when used in context, the use of vertical grouping symbols tends to recreate the nested parentheses visual clutter that was noted as one of the major downsides of the function syntax we’re trying to get away from); ^*.

    So, it doesn’t matter much, but @ seems as good or better than any of the alternatives:

    • It’s a friendly character that Pythoneers are already used to typing in decorators, but the decorator usage and the math expression usage are sufficiently dissimilar that it would be hard to confuse them in practice.
    • It’s widely accessible across keyboard layouts (and thanks to its use in email addresses, this is true even of weird keyboards like those in phones).
    • It’s round like * and ⋅.
    • The mATrices mnemonic is cute.
    • The swirly shape is reminiscent of the simultaneous sweeps over rows and columns that define matrix multiplication.
    • Its asymmetry is evocative of its non-commutative nature.
    • Whatever, we have to pick something.

    There was a long discussion about whether @ should be right- or left-associative (or even something more exotic). Almost all Python operators are left-associative, so following this convention would be the simplest approach, but there were two arguments that suggested matrix multiplication might be worth making right-associative as a special case:

    First, matrix multiplication has a tight conceptual association with function application/composition, so many mathematically sophisticated users have an intuition that an expression like RSx proceeds from right-to-left, with first S transforming the vector x, and then R transforming the result. This isn’t universally agreed (and not all number-crunchers are steeped in the pure-math conceptual framework that motivates this intuition), but at the least this intuition is more common than for other operations like 2⋅3⋅4 which everyone reads as going from left-to-right.

    Second, if expressions like Mat @ Mat @ vec appear often in code, then programs will run faster (and efficiency-minded programmers will be able to use fewer parentheses) if this is evaluated as Mat @ (Mat @ vec) than if it is evaluated like (Mat @ Mat) @ vec.
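
    A rough sketch of why the grouping matters (the shapes here are illustrative assumptions, not from the PEP):

    import numpy as np

    # Illustrative shapes for the cost comparison:
    n = 1000
    Mat = np.ones((n, n))
    vec = np.ones(n)

    # (Mat @ Mat) @ vec does a matrix-matrix product first: ~n**3 operations,
    # plus ~n**2 more for the final matrix-vector product.
    left = (Mat @ Mat) @ vec

    # Mat @ (Mat @ vec) is two matrix-vector products: only ~2 * n**2 operations.
    right = Mat @ (Mat @ vec)

    assert np.allclose(left, right)  # same answer, very different cost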

    However, weighing against these arguments are the following:

    Regarding the efficiency argument, empirically, we were unable to find any evidence that Mat @ Mat @ vec type expressions actually dominate in real-life code. Parsing a number of large projects that use numpy, we found that when forced by numpy’s current funcall syntax to choose an order of operations for nested calls to dot, people actually use left-associative nesting slightly more often than right-associative nesting.
