tf.keras plot_model: add_node() received a non node class object

Problem description

I'm getting back into Python and have been trying out some TensorFlow and Keras things. I wanted to use the plot_model function, and after sorting out some graphviz issues I'm now getting this error -

TypeError: add_node() received a non node class object:

I've tried to find the answer myself but came up short, because the only answers I found seemed to relate to tf. Any suggestions or alternative ideas would be greatly appreciated. Here are the code and the error message - this is my first question here, so apologies if I've missed anything; please let me know.

I'm using miniconda3 with Python 3.8.

import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Conv2D, MaxPool2D, Flatten, Dense, Dropout
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import plot_model
from tensorflow.keras.callbacks import EarlyStopping

from numpy import argmax
from matplotlib import pyplot
from random import randint

tf.keras.backend.set_floatx("float64")
(x_train, y_train), (x_test, y_test) = mnist.load_data()

x_train, x_test = x_train / 255.0, x_test / 255.0
x_train = x_train[..., tf.newaxis]
x_test = x_test[..., tf.newaxis]

class mnist_model(Model):
    def __init__(self):
        super(mnist_model, self).__init__()
        self.conv = Conv2D(32, 3, activation = tf.nn.leaky_relu, kernel_initializer = 'he_uniform', input_shape = (28, 28, 3))
        self.pool = MaxPool2D((2,2))
        self.flat = Flatten()
        self.den1 = Dense(128, activation = tf.nn.relu, kernel_initializer = 'he_normal')
        self.drop = Dropout(0.25)
        self.den2 = Dense(10, activation = tf.nn.softmax)

    def call(self, inputs):
        n = self.conv(inputs)
        n = self.pool(n)
        n = self.flat(n)
        n = self.den1(n)
        n = self.drop(n)
        return self.den2(n)

model = mnist_model()

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

limit = EarlyStopping(monitor = 'val_loss', patience = 5)

history = model.fit(x_train, y_train, batch_size=64, epochs = 1, verbose = 2, validation_split = 0.15, steps_per_epoch = 100, callbacks = [limit])
print("\nTraining finished\n\nTesting 10000 samples")
model.evaluate(x_test, y_test, verbose = 1)
print("Testing finished\n")


plot_model(model, show_shapes = True, rankdir = 'LR')


##################################################################################################################################################################
                              ## Error message: ##

Train on 51000 samples, validate on 9000 samples

Training finished

Testing 10000 samples
10000/10000 [==============================] - 7s 682us/sample - loss: 0.2447 - accuracy: 0.9242
Testing finished

Traceback (most recent call last):

  File "C:\Users\Thomas\Desktop\Various Python\Tensorflow\Tensorflow_experimentation\tc_mnist.py", line 60, in <module>
    plot_model(model, show_shapes = True, rankdir = 'LR')

  File "C:\Users\Thomas\miniconda3\envs\tensorflow\lib\site-packages\tensorflow_core\python\keras\utils\vis_utils.py", line 283, in plot_model
    dpi=dpi)

  File "C:\Users\Thomas\miniconda3\envs\tensorflow\lib\site-packages\tensorflow_core\python\keras\utils\vis_utils.py", line 131, in model_to_dot
    dot.add_node(node)

  File "C:\Users\Thomas\miniconda3\envs\tensorflow\lib\site-packages\pydotplus\graphviz.py", line 1281, in add_node
    'class object: {}'.format(str(graph_node))

TypeError: add_node() received a non node class object: <pydotplus.graphviz.Node object at 0x00000221C7E3E888>



Tags: python-3.x, tensorflow2.0, tf.keras

Solution


I think the root cause of the issue is shape inference for subclassed models, where model.summary shows multiple as the Output Shape. I added a model call to the subclassed model, as shown below.

def model(self):
    # Wrap the subclassed layers in a functional Model so Keras can infer shapes
    x = tf.keras.layers.Input(shape=(28, 28, 1))
    return Model(inputs=[x], outputs=self.call(x))

With this modification, shape inference is automatic in the functional API. Because functional and Sequential models are static graphs of layers, their shapes can easily be inferred. A subclassed model, however, is a piece of Python code (the call method), and there is no graph of layers to infer from. We cannot know how the layers are connected to each other (because that is defined in the body of call, not as an explicit data structure), so we cannot infer input/output shapes.
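A minimal usage sketch, assuming the model() method above has been added to the mnist_model class; the to_file name is only illustrative:

model = mnist_model()

# Build a functional view of the same layers; its summary shows concrete
# output shapes instead of "multiple".
wrapped = model.model()
wrapped.summary()

# Plot the functional wrapper rather than the subclassed instance.
plot_model(wrapped, show_shapes=True, rankdir='LR', to_file='mnist_model.png')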

The plot_model output then looks like this:

See the full code here for reference.
