index a list inside the while_loop TensorFlow function

Problem Description

Hello.

I have an issue. I have a list (a real Python list) of placeholders. My list is of length n (= T in the code below) and looks as follows:

my_list = [[D0, K], [D1, K], ... [Dn, K]]

where the Di are not necessarily of the same size. That is why I used a list (I cannot convert this list to a tensor without padding).
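
For concreteness, my setup looks something like this (the sizes here are made up for the example):

import tensorflow as tf

K = 3
dims = [5, 2, 7]  # example D0, D1, D2; in my case each Di can differ
my_list = [tf.placeholder(tf.float32, shape=(D, K)) for D in dims]  # each entry has shape [Di, K]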

What I want to do is this:

temp = []
for step in range(T):
    temp.append(tf.reduce_sum(x[step], axis=0))

sum_vn_t = tf.stack(temp)

where x is my_list of length n as defined previously. This piece of code just transforms my input list x, which looks like:

[[D0, K], [D1, K], ... [Dn, K]]

into

[n, K]

where I sum over the rows of each Di so that the jth row of my new tensor of size [n, K] contains sum([Dj, K], axis=0).

The problem is that if I use a native Python for loop, I am not sure backpropagation will actually work (I'm quite new to TensorFlow, but I think that if I don't use the while_loop function, my operations won't be added to the graph, so a native Python for loop doesn't make sense?).
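
To illustrate my concern, here is a minimal check I can think of, using the example my_list above (graph mode): tf.gradients returns one gradient per placeholder for the loop-built graph, though I am not sure this alone proves training will behave correctly:

temp = [tf.reduce_sum(t, axis=0) for t in my_list]
sum_vn_t = tf.stack(temp)                               # shape [n, K]
grads = tf.gradients(tf.reduce_sum(sum_vn_t), my_list)
print(grads)                                            # no None entries, so ops were added to the graph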

So I tried to rewrite this piece of code using the TensorFlow while_loop. The code is as follows:

def reduce_it(i, outputs):
    i_row = tf.reduce_sum(x[i], axis=0)  # x[i] throws an error because i is a Tensor
    outputs = outputs.write(i, i_row)

    return i+1, outputs

temp = tf.TensorArray(dtype=tf.float32, infer_shape=False, size=1,
                      dynamic_size=True)
_, temp = tf.while_loop(lambda i, *args: tf.less(i, T),
                        reduce_it, [0, temp])
temp = temp.stack()

I've already seen someone ask this, but nobody was able to give them a workaround. I tried to convert the Tensor i into an integer by carrying a NumPy array through the loop, appending an element to it at each iteration and using its shape as the index:

def reduce_it(arr, outputs):
    idx = arr.shape[0] - 1 # use shape[0] of array as i
    i_row = tf.reduce_sum(x[idx], axis=0)
    outputs = outputs.write(tf.constant([idx]), i_row)
    arr = np.append(arr, 0)
    return arr, outputs

temp = tf.TensorArray(dtype=tf.float32, infer_shape=False, size=1,
                      dynamic_size=True)
_, temp = tf.while_loop(lambda arr, *args: tf.less(arr.shape[0], T),
                        reduce_it, [np.array([0]), temp])
temp = temp.stack()

but it doesn't work, because the shape of my array arr changes during the loop. I might need the shape_invariants option of while_loop for that, but I didn't manage to get working code...
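
For reference, this is the shape_invariants pattern I believe is needed (adapted from the example in the tf.while_loop documentation); note that even with it, the real problem remains, because the index is still a Tensor:

i0 = tf.constant(0)
a0 = tf.constant([0])
c = lambda i, a: tf.less(i, T)
b = lambda i, a: [i + 1, tf.concat([a, [0]], axis=0)]  # `a` grows by one element per step
_, a_final = tf.while_loop(c, b, loop_vars=[i0, a0],
                           shape_invariants=[i0.get_shape(), tf.TensorShape([None])])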

I have also converted my list to a tensor by adding padding, so that my tensor is of size [T, max(Di), K], but then I still need to know which dimension Di I am working with at each iteration of my loop. That means I need to create a tensor (a 1-D array) of size n holding Di at position i:

my_tensor = [D1, D2, ..., Dn]

Then I need to gather Di inside my while loop, but if I simply do:

my_dim = tf.gather(my_tensor, i)

I will only get back a tensor, and I need an integer.

I don't think I can define a session and recover my_dim.eval(), as this code is part of my module, which is only called later during training (and that is when I create a session).

Can any TF experts think of a workaround or a hack?

Thank you in advance

Note: padding would also be a solution, but later in my code I need to retrieve each of my initial matrices of size [Di, K]. So if I pad each [Di, K] so that I can build a tensor of shape:

[n, max(Dn), K]

then I still need to recover each [Di, K] to be able to use tf.matmul() (for example) with the correct dimensions. So padding is actually not a solution for me.
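
The closest I get to recovering the blocks is a dynamic slice (strided slicing does accept tensor-valued bounds), assuming padded is my [n, max(Dn), K] tensor and my_tensor holds the sizes as above, but the result then has unknown static shape and I am not sure all my later operations accept that:

my_dim = tf.gather(my_tensor, i)  # still a Tensor, as noted above
x_i = padded[i, :my_dim, :]       # values of the original [Di, K] block, static shape [?, K]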

I hope my post is clear enough.

Tags: list, tensorflow, indexing, while-loop

Solution


Please find below a potential solution, though I would not recommend it for large values of T (this method creates as many operations as there are elements in my_list).

Otherwise, your idea of padding your tensors with zeros seems a good one. If I understand your final goal correctly, those additional zeros should not affect your tf.reduce_sum(x[idx], axis=0) (however, this solution may also not be recommended for large T, for the same reason as before).

Finally, you could also try converting your code to use tf.SparseTensor and tf.sparse_reduce_sum() instead.


Solution using tf.case() in tf.while_loop()

import tensorflow as tf
import numpy as np

T = 10
my_list = [tf.ones((np.random.randint(2, 42))) for i in range(T)] # list of random size tensors

def reduce_it(i, outputs):
    get_lambda_for_list_element = lambda idx: lambda: my_list[idx]
    cases = {tf.equal(i, idx): get_lambda_for_list_element(idx) for idx in range(len(my_list))}
    x = tf.case(cases, exclusive=True)

    # It's not clear to me what my_list contains / what your loop is supposed to compute.
    # Here's a toy example supposing the loop computes:
    #       outputs[i] = tf.reduce_sum(my_list[i]) for i in range(T)
    i_row = tf.reduce_sum(x)
    indices = tf.range(0, T)
    outputs = tf.where(tf.equal(indices, i), tf.tile(tf.expand_dims(i_row, 0), [T]), outputs)

    return i+1, outputs

temp = tf.zeros((T))
_, temp = tf.while_loop(lambda i, *args: tf.less(i, T), reduce_it, [0, temp])

with tf.Session() as sess:
    res = sess.run(temp)
    print(res)
    # [37.  2. 22. 16. 37. 40. 10.  3. 12. 26.]

    # Checking if values are correct:
    print([sess.run(tf.reduce_sum(tensor)) for tensor in my_list])
    # [37.0, 2.0, 22.0, 16.0, 37.0, 40.0, 10.0, 3.0, 12.0, 26.0]
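
A variant of the same idea, closer to the original attempt in the question, accumulates the results with tf.TensorArray instead of tf.where (a sketch under the same assumptions about my_list):

def reduce_it_ta(i, outputs):
    get_lambda_for_list_element = lambda idx: lambda: my_list[idx]
    cases = {tf.equal(i, idx): get_lambda_for_list_element(idx) for idx in range(len(my_list))}
    x = tf.case(cases, exclusive=True)
    return i + 1, outputs.write(i, tf.reduce_sum(x))

ta = tf.TensorArray(dtype=tf.float32, size=T)
_, ta = tf.while_loop(lambda i, *args: tf.less(i, T), reduce_it_ta, [0, ta])
temp_ta = ta.stack()  # same values as `temp` above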

Solution using tf.pad()

import tensorflow as tf
import numpy as np

T = 10
my_list = [tf.ones((np.random.randint(2, 42))) for i in range(T)]  # list of random size tensors

dims = [t.get_shape().as_list()[0] for t in my_list]
max_dims = max(dims)

my_padded_list = [tf.squeeze(
    # Padding with zeros:
    tf.pad(tf.expand_dims(t, 0),
           tf.constant([[0, 0], [int(np.floor((max_dims - t.get_shape().as_list()[0]) / 2)),
                                 int(np.ceil((max_dims - t.get_shape().as_list()[0]) / 2))]],
                       dtype=tf.int32),
           "CONSTANT"))
    for t in my_list]

my_padded_list = tf.stack(my_padded_list)
outputs_with_padding = tf.reduce_sum(my_padded_list, axis=1)

with tf.Session() as sess:
    # [13. 11. 24.  9. 16.  8. 24. 34. 35. 32.]
    res = sess.run(outputs_with_padding)
    print(res)

    # Checking if values are correct:
    print([sess.run(tf.reduce_sum(tensor)) for tensor in my_list])
    # [13.0, 11.0, 24.0, 9.0, 16.0, 8.0, 24.0, 34.0, 35.0, 32.0]
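
If some later operation is not invariant to the padded zeros (a mean, for instance), the true lengths can still correct for them; a hypothetical sketch reusing dims from above:

lengths = tf.constant(dims, dtype=tf.float32)  # the true Di for each row
row_means = outputs_with_padding / lengths     # mean over the real Di values only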

Solution using tf.SparseTensor

import tensorflow as tf
import numpy as np

T = 4
K = 2
max_length = 42
my_list = [np.random.rand(np.random.randint(1, max_length + 1), K) for i in range(T)]  # list of random size tensors

x = tf.sparse_placeholder(tf.float32)
res = tf.sparse_reduce_sum(x, axis=1)

with tf.Session() as sess:
    # Preparing inputs for sparse placeholder:
    indices = np.array([ [t, i, k] for t in range(T) 
                         for i in range(my_list[t].shape[0]) 
                         for k in range(my_list[t].shape[1]) ], dtype=np.int64)
    values = np.concatenate([t.reshape((-1)) for t in my_list])
    dense_shape = np.array([T, max_length, K], dtype=np.int64)
    sparse_feed_dict = {x: tf.SparseTensorValue(indices, values, dense_shape)}
    # or implicitly, sparse_feed_dict = {x: (indices, values, dense_shape)}
    print(sess.run(res, feed_dict=sparse_feed_dict))
    # [[2.160928   3.38365   ]
    #  [13.332438  14.3232155]
    #  [6.563875   6.540451  ]
    #  [3.3114233  2.138658  ]]

    # Checking if values are correct:
    print([sess.run(tf.reduce_sum(tensor, axis=0)) for tensor in my_list])
    # [array([ 2.16092795,  3.38364983]),
    #  array([13.33243797, 14.32321563]),
    #  array([ 6.56387488,  6.54045109]),
    #  array([ 3.31142322,  2.13865792])]
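
As a side note, more recent TensorFlow versions (2.x) offer tf.RaggedTensor, which handles variable-length rows directly, without padding or sparse bookkeeping; a minimal sketch of the same reduction:

import tensorflow as tf  # TF 2.x
import numpy as np

T, K = 4, 2
my_list = [np.random.rand(np.random.randint(1, 42), K).astype(np.float32) for i in range(T)]

rt = tf.ragged.stack([tf.constant(m) for m in my_list])  # shape [T, None, K], dim 1 is ragged
print(tf.reduce_sum(rt, axis=1))                         # dense [T, K], one row per Di block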
