Efficient element-wise multiplication of a matrix and a vector in TensorFlow

The simplest code to do this relies on the broadcasting behavior of tf.multiply(), which follows NumPy's broadcasting rules:

    x = tf.constant(5.0, shape=[5, 6])
    w = tf.constant([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
    xw = tf.multiply(x, w)
    max_in_rows = tf.reduce_max(xw, 1)

    sess = tf.Session()
    print(sess.run(xw))
    # ==> [[0.0, 5.0, 10.0, 15.0, 20.0, 25.0],
    #      … Read more
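Since tf.multiply() follows the same rules as NumPy broadcasting, the pattern above can be sketched in plain NumPy (a minimal stand-in for the TensorFlow snippet, using the same shapes and values):

```python
import numpy as np

# Same shapes as the TensorFlow snippet: a 5x6 matrix of fives
# and a length-6 weight vector.
x = np.full((5, 6), 5.0)
w = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])

# Broadcasting stretches w (shape (6,)) across every row of x (shape (5, 6)).
xw = x * w                    # elementwise product, shape (5, 6)
max_in_rows = xw.max(axis=1)  # per-row maximum, shape (5,)

print(xw[0])        # [ 0.  5. 10. 15. 20. 25.]
print(max_in_rows)  # [25. 25. 25. 25. 25.]
```

No copy of the vector is ever materialized; broadcasting handles the row-wise scaling implicitly, which is what makes this approach efficient.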

matrix multiplication algorithm time complexity

Using linear algebra, there exist algorithms that achieve better complexity than the naive O(n^3). Strassen's algorithm achieves a complexity of O(n^2.807) by reducing the number of multiplications required for each 2×2 block multiplication from 8 to 7. The fastest known matrix multiplication algorithms are the Coppersmith–Winograd algorithm and its refinements, with a complexity of roughly O(n^2.373). Unless the matrix is … Read more
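As a concrete illustration of the 8-to-7 reduction, here is a minimal recursive sketch of Strassen's scheme for square matrices whose size is a power of two (padding to a power-of-two size and the usual cutover to naive multiplication for small blocks are omitted for brevity):

```python
import numpy as np

def strassen(A, B):
    """Multiply square matrices A and B (size a power of two) using
    7 recursive block multiplications per level instead of 8."""
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]

    # Strassen's seven products.
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)

    # Reassemble the four quadrants of the result.
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
print(np.allclose(strassen(A, B), A @ B))  # True
```

Solving the recurrence T(n) = 7·T(n/2) + O(n^2) gives the O(n^log2(7)) ≈ O(n^2.807) bound quoted above.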

numerically stable way to multiply log probability matrices in numpy

logsumexp works by evaluating the right-hand side of the identity

    log(∑ exp[a]) = max(a) + log(∑ exp[a – max(a)])

That is, it pulls the max out before starting to sum, to prevent overflow in exp. The same trick can be applied before taking vector dot products:

    log(exp[a] ⋅ exp[b]) = log(∑ exp[a] × exp[b]) = log(∑ exp[a … Read more
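The same max-shifting trick extends to whole matrices of log probabilities. A minimal sketch, assuming strictly finite log entries (the function name logdot is my own, not from the original answer):

```python
import numpy as np

def logdot(logA, logB):
    """Compute log(exp(logA) @ exp(logB)) without overflowing exp.
    Shifts each row of logA and each column of logB by its max,
    multiplies in ordinary space, then shifts back."""
    max_A = logA.max(axis=1, keepdims=True)  # per-row max, shape (m, 1)
    max_B = logB.max(axis=0, keepdims=True)  # per-column max, shape (1, n)
    C = np.exp(logA - max_A) @ np.exp(logB - max_B)
    return np.log(C) + max_A + max_B

rng = np.random.default_rng(1)
A = rng.random((3, 4)) + 1e-3  # strictly positive "probabilities"
B = rng.random((4, 2)) + 1e-3
print(np.allclose(logdot(np.log(A), np.log(B)), np.log(A @ B)))  # True
```

Every argument passed to exp is ≤ 0 after the shift, so nothing can overflow; the shifts are added back in log space at the end.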

How to multiply two vector and get a matrix?

Normal matrix multiplication works as long as the vectors have the right shape. Remember that * in NumPy is elementwise multiplication; matrix multiplication is available with numpy.dot() (or with the @ operator, in Python 3.5+):

    >>> numpy.dot(numpy.array([[1], [2]]), numpy.array([[3, 4]]))
    array([[3, 4],
           [6, 8]])

This is called an "outer product." You can get it … Read more
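For reference, a short sketch of the same outer product computed three equivalent ways (np.outer flattens its inputs first, so it accepts plain 1-D sequences directly):

```python
import numpy as np

col = np.array([[1], [2]])  # column vector, shape (2, 1)
row = np.array([[3, 4]])    # row vector, shape (1, 2)

p1 = np.dot(col, row)          # matrix product (2,1) x (1,2) -> (2,2)
p2 = col @ row                 # same product via the @ operator
p3 = np.outer([1, 2], [3, 4])  # np.outer works on flat 1-D inputs

print(p1)  # [[3 4]
           #  [6 8]]
```

The dot/@ forms require the (n, 1) and (1, m) shapes shown; np.outer is the more forgiving spelling because it ignores the input shapes entirely.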