How to merge multiple inputs (four) into a neural network for binary image classification?

Problem description

How would I build a neural network for binary classification that takes four inputs (i.e., instead of feeding in one full image, I want to feed in four equally sized image patches/tiles, which together are labeled as class 1 or class 2)?

My current implementation follows a simple Sequential model with binary classes, and I would like to convert it to the scheme described above.

train_generator = train_datagen.flow_from_directory(
        r'MY\\TRAINING\\PATH',
        classes = ['Class1', 'Class2'],
        target_size=(100, 100), 
        batch_size=16,
        shuffle=False,
        class_mode='binary')

validation_generator = validation_datagen.flow_from_directory(
        r'MY\\VALIDATION\\PATH',
        classes = ['Class1', 'Class2'],
        target_size=(100, 100), 
        batch_size=8,
        # Use binary labels
        shuffle=False,
        class_mode='binary')

model = tf.keras.models.Sequential([
                                    tf.keras.layers.Conv2D(8, (3, 3), activation='relu', input_shape=(100, 100, 3)),
                                    tf.keras.layers.MaxPooling2D((2, 2)),
                                    tf.keras.layers.Conv2D(16, (3, 3), activation='relu'),
                                    tf.keras.layers.MaxPooling2D((2, 2)),
                                    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
                                    tf.keras.layers.Flatten(),
                                    tf.keras.layers.Dense(1, activation='sigmoid')
                                    ])

For example, the file structure of my data looks like this:

├── training
│   ├── Class1_Quadrant1
│   │   ├── IMG_1.jpg
│   │   ├── IMG_2.jpg
│   │   ├── IMG_3.jpg
│   │   ├── etc.jpg
│   ├── Class1_Quadrant2
│   │   ├── IMG_1.jpg
│   │   ├── IMG_2.jpg
│   │   ├── IMG_3.jpg
│   │   ├── etc.jpg
│   ├── Class1_Quadrant3
│   │   ├── IMG_1.jpg
│   │   ├── IMG_2.jpg
│   │   ├── IMG_3.jpg
│   │   ├── etc.jpg
│   ├── Class1_Quadrant4
│   │   ├── IMG_1.jpg
│   │   ├── IMG_2.jpg
│   │   ├── IMG_3.jpg
│   │   ├── etc.jpg

My validation set has the same structure.

Ideally, something like this image, if that helps:

[Image: neural network example]

Edit:

# Input Model 1
input1 = Input(shape=(100, 100, 3))
conv11 = Conv2D(32, kernel_size = 4, activation = 'relu')(input1)
pool11 = MaxPooling2D(pool_size = (2, 2))(conv11)
conv12 = Conv2D(16, kernel_size = 4, activation = 'relu')(pool11)
pool12 = MaxPooling2D(pool_size = (2, 2))(conv12)
conv13 = Conv2D(8, kernel_size = 4, activation = 'relu')(pool12)
pool13 = MaxPooling2D(pool_size = (2, 2))(conv13)
flat1 = Flatten()(pool13)

# ... same structure repeated up to Input 4


# Merge All Input Models
merge = concatenate([flat1, flat2, flat3, flat4])

# Dense layers
hidden1 = Dense(1, activation = 'relu')(merge)
hidden2 = Dense(1, activation = 'relu')(hidden1)
hidden3 = Dense(1, activation = 'relu')(hidden2)
hidden4 = Dense(1, activation = 'relu')(hidden3)
output = Dense(1, activation = 'sigmoid')(hidden4)
model = Model(inputs = [input1, input2, input3, input4], outputs = output)

# Lastly my data comes from 

train_datagen = ImageDataGenerator(rescale = 1 / 255.0)
validation_datagen = ImageDataGenerator(rescale = 1 / 255.0)

train_generator1 = train_datagen.flow_from_directory(
    r'training\\path\\Q1',  
    classes = ['D1', 'D2'],
    target_size = (100, 100),  
    batch_size = 16,
    shuffle = False,
    class_mode = 'binary'
    )

validation_generator1 = validation_datagen.flow_from_directory(
    r'validation\\path\\Q1',  
    classes = ['D1', 'D2'],
    target_size = (100, 100), 
    batch_size = 8,
    shuffle = False,
    class_mode = 'binary'
    )

# this continues on for the 4 training and 4 validation generators
# until I get the error thrown here

model.compile(
    optimizer = tensorflow.optimizers.Adam(learning_rate = 1e-4),
    loss = 'binary_crossentropy',
    metrics = ['accuracy']
    )

history = model.fit(
    [train_generator1, train_generator2, train_generator3, train_generator4],  
    epochs = 100,
    verbose = 1,
    validation_data = [validation_generator1, validation_generator2, validation_generator3, validation_generator4],
    )

Traceback:

history = model.fit(
    [train_generator1, train_generator2, train_generator3, train_generator4],  
    epochs = 100,
    verbose = 1,
    validation_data = [validation_generator1, validation_generator2, validation_generator3, validation_generator4],
    )
Traceback (most recent call last):

  File "<ipython-input-23-332d3f7e4bba>", line 5, in <module>
    validation_data = [validation_generator1, validation_generator2, validation_generator3, validation_generator4],

  File "C:\Users\Eitan Flor\anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py", line 108, in _method_wrapper
    return method(self, *args, **kwargs)

  File "C:\Users\Eitan Flor\anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1063, in fit
    steps_per_execution=self._steps_per_execution)

  File "C:\Users\Eitan Flor\anaconda3\lib\site-packages\tensorflow\python\keras\engine\data_adapter.py", line 1104, in __init__
    adapter_cls = select_data_adapter(x, y)

  File "C:\Users\Eitan Flor\anaconda3\lib\site-packages\tensorflow\python\keras\engine\data_adapter.py", line 971, in select_data_adapter
    _type_name(x), _type_name(y)))

ValueError: Failed to find data adapter that can handle input: (<class 'list'> containing values of types {"<class 'tensorflow.python.keras.preprocessing.image.DirectoryIterator'>"}), <class 'NoneType'>

Tags: python, tensorflow, keras, conv-neural-network

Solution


This is not possible with the Sequential API, since a Sequential model has exactly one input and one output: https://www.tensorflow.org/guide/keras/sequential_model

Try the functional API instead: https://www.tensorflow.org/guide/keras/functional
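Note that the ValueError in the traceback comes from passing a plain Python list of DirectoryIterators to model.fit, which Keras has no data adapter for. One way around it is to zip the four iterators into a single generator that yields ([x1, x2, x3, x4], y). A minimal sketch, with dummy NumPy batches standing in for the real iterators, and assuming all four quadrant directories use shuffle=False so their files and labels stay aligned:

```python
import numpy as np

def combine_generators(*gens):
    # Zip several image generators into one multi-input generator.
    # Assumes all generators iterate in lock-step (shuffle=False) over
    # directories whose files line up, so the labels from the first
    # generator are valid for every quadrant.
    while True:
        batches = [next(g) for g in gens]
        xs = [x for x, _ in batches]   # one input array per quadrant
        y = batches[0][1]              # shared binary labels
        yield xs, y

# Dummy stand-in for a DirectoryIterator (hypothetical shapes,
# matching target_size=(100, 100) and batch_size=16 above).
def dummy_iterator():
    while True:
        yield np.zeros((16, 100, 100, 3)), np.zeros((16,))

combined = combine_generators(dummy_iterator(), dummy_iterator(),
                              dummy_iterator(), dummy_iterator())
xs, y = next(combined)
print(len(xs), xs[0].shape, y.shape)  # 4 (16, 100, 100, 3) (16,)
```

With the real iterators, the combined generator would be passed to model.fit as the single x argument, together with steps_per_epoch = len(train_generator1) (and likewise validation_steps for a combined validation generator), since a plain generator has no length.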

