tensorflow: accessing the inner elements of a variable and freezing some of them

Problem description

Given a weight tensor (here simply a matrix), how can I select some of its elements and add them to a list of frozen variables, while the rest of the matrix keeps being trained in TensorFlow? Example: I have created a variable W of size 20*20. How can I pick out a few elements, such as W[0][1] and W[13][15], and freeze them in the optimizer?

    ........
    def rnn_cell(rnn_input, state, weight):
        with tf.variable_scope('rnn_cell', reuse=True):
            W = tf.get_variable('W', [n_inputs + n_neurons, n_neurons])
            b = tf.get_variable('b', [1, n_neurons],
                                initializer=tf.constant_initializer(0.0))
        return tf.tanh(tf.matmul(tf.concat([rnn_input, state], 1), weight) + b)

    # Scatter the single entry W[0][0] into an otherwise-zero [178, 150] tensor.
    part_W = tf.scatter_nd([[0, 0]], [W[0][0]], [178, 150])
    # Forward value equals W, but gradients only flow through part_W.
    W_2 = part_W + tf.stop_gradient(-part_W + W)

    state = init_state
    rnn_outputs = []
    for rnn_input in rnn_inputs:
        state = rnn_cell(rnn_input, state, W_2)
        rnn_outputs.append(state)
    final_state = rnn_outputs[-1]

    logits = fully_connected(final_state, n_outputs, activation_fn=None)
    xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
    loss = tf.reduce_mean(xentropy)
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
    training_op = optimizer.minimize(loss)
    correct = tf.nn.in_top_k(logits, y, 1)
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
    init = tf.global_variables_initializer()

    with tf.Session() as sess:
        init.run()
        for epoch in range(n_epochs):
            for iteration in range(mnist.train.num_examples // batch_size):
                X_batch, y_batch = mnist.train.next_batch(batch_size)
                X_batch = X_batch.reshape((-1, n_steps, n_inputs))
                h = np.zeros([batch_size, n_neurons])
                sess.run(training_op, feed_dict={X: X_batch, y: y_batch, p: h})
            acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch, p: h})
            q = np.zeros([10000, n_neurons])
            acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test, p: q})
            print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)

Tags: python, tensorflow

Solution


I would recombine the weight with a copy of itself passed through tf.stop_gradient. For example,

import tensorflow as tf

w = tf.Variable(tf.zeros((10, 10)))
# Random boolean mask: True entries stay trainable, False entries are frozen.
mask = tf.cast(tf.random_uniform((10, 10), 0, 2, dtype=tf.int32), tf.bool)
# Same forward value as w, but gradients only flow where the mask is True.
w = tf.where(mask, w, tf.stop_gradient(w))
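Applied to the 20*20 example in the question, the same pattern can freeze exactly W[0][1] and W[13][15]. Below is a minimal sketch, assuming TensorFlow 1.x graph mode; the NumPy mask, the toy loss and the gradient check are illustrative additions, not part of the original answer:

import numpy as np
import tensorflow as tf

w = tf.Variable(tf.random_normal((20, 20)), name='W')

# Boolean mask: True where an entry stays trainable, False where it is frozen.
mask_np = np.ones((20, 20), dtype=bool)
mask_np[0, 1] = False
mask_np[13, 15] = False
mask = tf.constant(mask_np)

# Forward value is identical to w, but gradients only flow through the
# entries where the mask is True, so W[0][1] and W[13][15] never move.
w_masked = tf.where(mask, w, tf.stop_gradient(w))

# Sanity check with a toy loss: the frozen entries receive a zero gradient.
loss = tf.reduce_sum(tf.square(w_masked))
grad_w = tf.gradients(loss, [w])[0]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    g = sess.run(grad_w)
    print(g[0, 1], g[13, 15])  # both 0.0; every other entry equals 2 * W[i][j]

In the question's code, w_masked would play the role of W_2 (i.e. be passed as weight into rnn_cell); the optimizer can then minimize the loss as usual, since the frozen entries always see a zero gradient and are left untouched by the update.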
