InvalidArgumentError in TensorFlow 1.3

Problem description

I implemented an RNN for sentiment analysis using TensorFlow 1.3. After training and testing the model, I want to write a function that takes a review as input and returns its sentiment. When I run the program, I get this error and I don't understand why:

Traceback (most recent call last):
  File "/home/andreas/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1327, in _do_call
    return fn(*args)
  File "/home/andreas/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1306, in _run_fn
    status, run_metadata)
  File "/usr/lib/python3.6/contextlib.py", line 88, in __exit__
    next(self.gen)
  File "/home/andreas/.local/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 466, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype int32 and shape [?,250]
     [[Node: Placeholder = Placeholder[dtype=DT_INT32, shape=[?,250], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/andreas/sentiment_analysis_with_rnn/sentiment_analysis_rnn.py", line 169, in <module>
    find_sentiment("This is bad")
  File "/home/andreas/sentiment_analysis_with_rnn/sentiment_analysis_rnn.py", line 166, in find_sentiment
    print(sess.run([prediction], {x: review_array}))
  File "/home/andreas/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 895, in run
    run_metadata_ptr)
  File "/home/andreas/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1124, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/andreas/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1321, in _do_run
    options, run_metadata)
  File "/home/andreas/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1340, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype int32 and shape [?,250]
     [[Node: Placeholder = Placeholder[dtype=DT_INT32, shape=[?,250], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

Caused by op 'Placeholder', defined at:
  File "/home/andreas/sentiment_analysis_with_rnn/sentiment_analysis_rnn.py", line 95, in <module>
    x = tf.placeholder(tf.int32, [None, MAX_WORDS])
  File "/home/andreas/.local/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 1548, in placeholder
    return gen_array_ops._placeholder(dtype=dtype, shape=shape, name=name)
  File "/home/andreas/.local/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 2094, in _placeholder
    name=name)
  File "/home/andreas/.local/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
    op_def=op_def)
  File "/home/andreas/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2630, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/andreas/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1204, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder' with dtype int32 and shape [?,250]
     [[Node: Placeholder = Placeholder[dtype=DT_INT32, shape=[?,250], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

Here is my function:

def find_sentiment(text):
    vocab_processor = tf.contrib.learn.preprocessing.VocabularyProcessor(MAX_WORDS)
    review_array = np.array(list(vocab_processor.fit_transform(text)))
    print(review_array)

    x = tf.placeholder(tf.int32, [None, MAX_WORDS])

    with tf.Session() as sess:
        saver = tf.train.Saver()
        saver.restore(sess, 'saved/model.ckpt')
        print(sess.run([prediction], {x: review_array}))


find_sentiment("This is bad")

This is how I train and test the RNN:

MAX_WORDS = 250

# pad short reviews and truncate long reviews so that all reviews have the same size
vocab_processor = tf.contrib.learn.preprocessing.VocabularyProcessor(MAX_WORDS)

x_data = np.array(list(vocab_processor.fit_transform(data)))
y_output = np.array(labels)

vocabulary_size = len(vocab_processor.vocabulary_)

np.random.seed(22)
shuffle_indices = np.random.permutation(np.arange(len(x_data)))

x_shuffled = x_data[shuffle_indices]
y_shuffled = y_output[shuffle_indices]

TRAIN_DATA = 24000
TOTAL_DATA = len(x_data)

# neural networks perform better when datasets are huge
train_data = x_shuffled[:TRAIN_DATA]
train_target = y_shuffled[:TRAIN_DATA]
tf.reset_default_graph()

x = tf.placeholder(tf.int32, [None, MAX_WORDS])
y = tf.placeholder(tf.int32, [None])

with tf.Session() as session:
    session.run(tf.global_variables_initializer())

    for epoch in range(epochs):
        num_batches = len(train_data) // batch_size
        num_batches += 1

        for i in range(num_batches):
            lower_bound = i * batch_size  # lower bound of batch
            upper_bound = np.min([len(train_data), ((i+1) * batch_size)])  # upper bound of batch

            x_train_batch = train_data[lower_bound:upper_bound]
            y_train_batch = train_target[lower_bound:upper_bound]
            session.run(train_step, feed_dict={x: x_train_batch, y: y_train_batch})

            train_loss, train_acc = session.run([loss, accuracy], feed_dict={x: x_train_batch, y: y_train_batch})

        # hyperparameters that can be tuned to improve the model: increase the number of epochs,
        # increase the batch size, or use a larger dataset for training
        test_loss, test_acc = session.run([loss, accuracy], feed_dict={x: test_data, y: test_target})
        print("Epoch:", epoch+1, "Test loss:", test_loss, "Test accuracy:", test_acc)
    # saver = tf.train.Saver()
    # save_path = saver.save(session, "saved/model.ckpt")

What exactly am I doing wrong? My code works for the training data, which is a 2D array whose rows have length 250. When I run find_sentiment like this:

def find_sentiment(text):
    vocab_processor = tf.contrib.learn.preprocessing.VocabularyProcessor(MAX_WORDS)
    review_array = np.array(list(vocab_processor.fit_transform(text)))
    print(review_array)

    x = tf.placeholder(tf.int32, [MAX_WORDS])

    print(len(review_array))

    review_array.reshape(MAX_WORDS)

    with tf.Session() as sess:
        saver = tf.train.Saver()
        saver.restore(sess, 'saved/model.ckpt')
        print(sess.run([prediction], {x: review_array}))

I get this output:

[[1 0 0 ... 0 0 0]
 [2 0 0 ... 0 0 0]
 [3 0 0 ... 0 0 0]
 ...
 [5 0 0 ... 0 0 0]
 [6 0 0 ... 0 0 0]
 [7 0 0 ... 0 0 0]]
11

It says my matrix has length 11, but it should have length 250. I also get this error:

Traceback (most recent call last):
  File "/home/andreas/sentiment_analysis_with_rnn/sentiment_analysis_rnn.py", line 173, in <module>
    find_sentiment("This is bad")
  File "/home/andreas/sentiment_analysis_with_rnn/sentiment_analysis_rnn.py", line 165, in find_sentiment
    review_array.reshape(MAX_WORDS)
ValueError: cannot reshape array of size 2750 into shape (250,)

2750/11 = 250, which equals MAX_WORDS. What am I missing?
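For illustration, here is a small sketch of what seems to produce those shapes (this is my reading of VocabularyProcessor with its default tokenizer, not code from the original script): fit_transform expects an iterable of documents, so a bare string is iterated character by character.

import numpy as np
import tensorflow as tf

MAX_WORDS = 250

# A bare string is iterated character by character, so "This is bad" is seen
# as 11 one-character "documents", each padded to MAX_WORDS.
vp = tf.contrib.learn.preprocessing.VocabularyProcessor(MAX_WORDS)
chars = np.array(list(vp.fit_transform("This is bad")))
print(chars.shape)   # (11, 250) -> 11 * 250 = 2750 values

# Wrapping the review in a one-element list treats it as a single document.
vp = tf.contrib.learn.preprocessing.VocabularyProcessor(MAX_WORDS)
review = np.array(list(vp.fit_transform(["This is bad"])))
print(review.shape)  # (1, 250)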

Tags: python, tensorflow, deep-learning

Solution


I'm not sure exactly how TensorFlow placeholders work, but judging from the stack trace and my understanding, you are not feeding any data for the placeholder y when you do

print(sess.run([prediction], {x: review_array}))

which would explain the You must feed a value for placeholder tensor 'Placeholder_1' message.
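If it helps, here is a minimal sketch of one way the inference call could be written so that the placeholder the graph was built on actually receives a value. It assumes the module-level x, prediction, MAX_WORDS and the fitted vocab_processor from the training script are still in scope, and that the review is wrapped in a list; this is a sketch of the idea, not a tested fix.

def find_sentiment(text):
    # Reuse the vocabulary learned during training; a one-element list is
    # treated as a single document, giving shape (1, MAX_WORDS).
    review_array = np.array(list(vocab_processor.transform([text])))

    with tf.Session() as sess:
        saver = tf.train.Saver()
        saver.restore(sess, 'saved/model.ckpt')
        # Feed the existing placeholder x that prediction was built from,
        # instead of creating a new, unconnected placeholder here.
        print(sess.run(prediction, feed_dict={x: review_array}))

find_sentiment("This is bad")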

