Efficient element-wise multiplication of a matrix and a vector in TensorFlow

What would be the most efficient way to multiply (element-wise) a 2D tensor (matrix):

x11 x12 .. x1N
...
xM1 xM2 .. xMN

by a vertical vector:

w1
...
wN

to obtain a new matrix:

x11*w1 x12*w2 ... x1N*wN
...
xM1*w1 xM2*w2 ... xMN*wN

To give some context, we have M data samples in a batch that can be processed in parallel, and each N-element sample must be multiplied by weights w stored in a variable to eventually pick the largest Xij*wj for each row i.
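
For concreteness, here is the operation I'm after expressed with plain NumPy broadcasting (a small sketch with made-up values, just to pin down the shapes):

import numpy as np

x = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])      # M = 2 samples, N = 3 features
w = np.array([10.0, 0.1, 1.0])       # N weights

xw = x * w                           # broadcasts w across each row: x[i, j] * w[j]
best_j = xw.argmax(axis=1)           # index of the largest x[i, j] * w[j] for each row i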

Danieldaniela asked 10/12, 2015 at 1:32 Comment(0)

The simplest code to do this relies on the broadcasting behavior of tf.multiply()*, which follows NumPy's broadcasting rules:

import tensorflow as tf

x = tf.constant(5.0, shape=[5, 6])
w = tf.constant([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
xw = tf.multiply(x, w)               # broadcasts w across each row of x
max_in_rows = tf.reduce_max(xw, 1)   # row-wise maximum of x[i, j] * w[j]

sess = tf.Session()
print(sess.run(xw))
# ==> [[0.0, 5.0, 10.0, 15.0, 20.0, 25.0],
#      [0.0, 5.0, 10.0, 15.0, 20.0, 25.0],
#      [0.0, 5.0, 10.0, 15.0, 20.0, 25.0],
#      [0.0, 5.0, 10.0, 15.0, 20.0, 25.0],
#      [0.0, 5.0, 10.0, 15.0, 20.0, 25.0]]

print(sess.run(max_in_rows))
# ==> [25.0, 25.0, 25.0, 25.0, 25.0]

* In older versions of TensorFlow, tf.multiply() was called tf.mul(). You can also use the * operator (i.e. xw = x * w) to perform the same operation.
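
If you are on TensorFlow 2.x, where eager execution replaces tf.Session(), a minimal sketch of the same computation would look like this (same constants as above; tf.argmax is added in case you also need the column index of the winning product):

import tensorflow as tf  # assumes TensorFlow 2.x, eager execution

x = tf.constant(5.0, shape=[5, 6])
w = tf.constant([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])

xw = x * w                                # same broadcasting as tf.multiply(x, w)
max_in_rows = tf.reduce_max(xw, axis=1)   # [25. 25. 25. 25. 25.]
argmax_in_rows = tf.argmax(xw, axis=1)    # [5 5 5 5 5]

print(xw.numpy())
print(max_in_rows.numpy())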

Resor answered 10/12, 2015 at 2:8 Comment(3)
The link for the documentation is dead. This is the actual one: tf.multiply – Acarus
Don't forget the parentheses: print(sess.run(xw)) and print(sess.run(max_in_rows)) – Isogonic
I applied this to a dense matrix and a sparse vector, which does not work. – Planospore

As @mrry pointed out, __mul__ (aka *) does the job, e.g. x * w.

I just want to point out that tf.multiply() and __mul__() behave differently if the matrix is sparse, because __mul__ does dense-to-sparse broadcasting under the hood.

An example:

import tensorflow as tf

x = tf.sparse.from_dense(tf.constant(5.0, shape=[5, 6]))  # SparseTensor
w = tf.constant([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])           # EagerTensor

tf.multiply(x, w)            # <---- error: tf.multiply() does not accept a SparseTensor

xw = x * w                   # <---- OK: dense-to-sparse broadcasting, result is a SparseTensor
xw = tf.sparse.to_dense(xw)  # convert into an EagerTensor if desired

However, if the vector is sparse, * won't work because, as stated above, the broadcasting only goes from the dense side to the sparse side. The simplest approach in this case is probably to convert the vector to a dense tensor and multiply two dense tensors.

x = tf.constant(5.0, shape=[5, 6])
w = tf.sparse.from_dense(tf.constant([0.0, 1.0, 2.0, 3.0, 4.0, 5.0]))

xw = tf.multiply(x, tf.sparse.to_dense(w))
#                      ^^^^^^^^^^^^^^^   <---- convert to a dense tensor
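
As a quick sanity check, here is a small sketch (assuming TensorFlow 2.x) confirming that the sparse route and the plain dense product agree:

x_dense = tf.constant(5.0, shape=[5, 6])
w_dense = tf.constant([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])

sparse_route = tf.sparse.to_dense(tf.sparse.from_dense(x_dense) * w_dense)
dense_route = x_dense * w_dense

print(tf.reduce_all(sparse_route == dense_route).numpy())  # True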
Solutrean answered 23/6, 2023 at 2:15 Comment(0)
