AttributeError on __enter__ when using tensorflow

Problem description

import tensorflow as tf
import numpy as np
tf.enable_eager_execution()

x_data = [[1,2,1,1],[2,1,3,2],[3,1,3,4],[4,1,5,5],[1,7,5,5],[1,2,5,6],[1,6,6,6],[1,7,7,7]]
y_data = [[0,0,1],[0,0,1],[0,0,1],[0,1,0],[0,1,0],[0,1,0],[1,0,0],[1,0,0]]

x_data = np.asarray(x_data, dtype=np.float32)
y_data = np.asarray(y_data, dtype=np.float32)
nb_classes = 3

W = tf.Variable(tf.random_normal([4, nb_classes]), name = 'weight')
b = tf.Variable(tf.random_normal([nb_classes]), name = 'bias')
variables = [W,b]

def hypothesis(X):
    hypo = tf.nn.softmax(tf.matmul(X,W) + b)
    return hypo 

def cost_fn(X,Y):
    logits = hypothesis(X)
    cost = -tf.reduce_sum(Y * tf.log(logits), axis = 1)
    cost_mean = tf.reduce_mean(cost)
    return cost_mean

def grad_fn(X,Y):
    with tf.GradientTape as tape:
        cost =  cost_fn(X,Y)
        grads = tape.gradient(cost, variables)
        return grads

I am trying to do classification and build a gradient function for a gradient descent optimizer. The error occurs in the last part of the code:

with tf.GradientTape as tape:

An AttributeError: __enter__ is raised and I don't understand why. Can someone explain the cause of the error or how to fix it?

Tags: tensorflow

Solution


You are missing the parentheses after GradientTape. The with statement needs an instance of tf.GradientTape, not the class itself; since the class object does not provide an __enter__ method for the with statement, Python raises AttributeError: __enter__. It should be as follows.

def grad_fn(X,Y):
    with tf.GradientTape() as tape:   # note the parentheses: create a tape instance, not the class
        cost = cost_fn(X,Y)
        grads = tape.gradient(cost, variables)
        return grads
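
For context, here is a minimal sketch of how the corrected grad_fn could be plugged into a training loop in the same TF 1.x eager setup. The tf.train.GradientDescentOptimizer, the learning rate, and the step count are assumptions for illustration and are not part of the original question or answer.

# Hypothetical training loop (an assumption, not from the original post), reusing
# x_data, y_data, variables, cost_fn and the corrected grad_fn defined above.
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)

for step in range(2001):
    grads = grad_fn(x_data, y_data)                   # gradients of the cost w.r.t. [W, b]
    optimizer.apply_gradients(zip(grads, variables))  # one gradient-descent update of W and b
    if step % 200 == 0:
        # in eager mode the cost is an EagerTensor, so .numpy() gives a plain float
        print("step {:4d}  cost {:.4f}".format(step, cost_fn(x_data, y_data).numpy()))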
