Custom layer with Keras: is it possible to set output neurons to 0 in the output of a softmax layer, based on zeros as data in an input layer?

Problem description

I have a neural network with 13 output neurons in the last layer, using softmax activation (soft_out). I also know exactly, based on the input values, that certain neurons in the output layer should have a value of 0. So I have a special input layer of 13 neurons (inp), each of which is either 0 or 1.

Is it possible to force, say, output neuron no. 3 to have a value of 0 if input neuron no. 3 is set to 1?

On top of that, it must still act as a softmax layer, so the remaining neurons must sum to 1; the output rows therefore have to be renormalized. For example, if the surviving values in a row are 0.2 and 0.3, they should become 0.4 and 0.6.

These are the steps:

1. Clear the soft_out neurons where the corresponding inp neuron == 1.
2. Calculate the sum of each row in soft_out.
3. Check which rows sum to 0.
4. Correct soft_out in the rows whose sum is 0 to an arbitrary constant value.
5. Calculate the row sums of soft_out again.
6. Check which row sums are 0 and set those sums to 1.
7. Return soft_out / sum for each row (so the output is adjusted to sum to 1 per row).
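In formula form (the mask notation m_j = 1 − inp_j is mine, not from the original post), the intended output for each row is

    y_i = m_i * soft_out_i / Σ_j (m_j * soft_out_j)

falling back to a constant row whenever the denominator would be 0.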

Implemented with numpy, the steps look like this. Input data:

import numpy as np

# random 0/1 mask (5 samples, 13 actions) and a random softmax-like output
inp = np.random.choice([0, 1], size=5*13, p=[.5, .5]).reshape(5, 13)
soft_out = np.around(np.random.random_sample((5, 13)), 2)

# force some edge cases: row 3 fully masked, row 4 masked except column 12
inp[3, :] = 1
inp[4, :] = 1
inp[4, 12] = 0
soft_out[4, 12] = 0
print("inp", inp, "\n")
print("soft_out", soft_out, "\n")

inp [[1 1 0 0 1 1 1 1 1 1 0 0 0]
 [0 0 0 0 1 0 0 0 1 1 0 0 1]
 [1 0 1 1 0 0 0 1 0 0 0 0 1]
 [1 1 1 1 1 1 1 1 1 1 1 1 1]
 [1 1 1 1 1 1 1 1 1 1 1 1 0]]

soft_out [[0.8  0.16 0.42 0.44 0.67 0.39 0.38 0.54 0.75 0.06 0.62 0.67 0.86]
 [0.87 0.28 0.51 0.92 0.89 0.97 0.1  0.17 0.73 0.43 0.84 0.96 0.57]
 [0.16 0.33 0.62 0.37 0.42 0.54 0.1  0.54 0.92 0.51 0.89 0.86 0.96]
 [0.53 0.59 0.6  0.63 0.57 0.95 0.41 0.1  0.32 0.81 0.87 0.35 0.16]
 [0.13 0.57 0.92 0.87 0.82 0.08 0.74 0.78 0.2  0.22 0.64 0.06 0.  ]]

# 0. find out where inp is set to 1 and where it is 0
mask_nonzero = np.where(inp != 0)
print("mask_nonzero", mask_nonzero, "\n")

mask_zero = np.where(inp == 0)
print("mask_zero", mask_zero, "\n")

# 1. clear those soft_out values where inp is 1
soft_out[mask_nonzero] = 0
print("soft_out", soft_out, "\n")

# 2. calculate the sum of the rows
row_sum_soft_out = np.sum(soft_out, axis=-1)
print("row_sum_soft_out", row_sum_soft_out, "\n")

# 3. reshape in order to find the rows whose sum is zero
#    (these are the rows whose soft_out values have to be corrected)
row_sum_soft_out = row_sum_soft_out.reshape(5, 1)
print("row_sum_soft_out", row_sum_soft_out, "\n")

mask_sum_zero = np.where(row_sum_soft_out == 0)

# 4. correct soft_out in the rows where the sum is 0 to an arbitrary
#    constant value (here 1) ...
soft_out[mask_sum_zero[0]] = 1
print("soft_out", soft_out, "\n")

print("mask_sum_zero", mask_sum_zero, "\n")
# ... then clear the masked positions again
soft_out[mask_nonzero] = 0

# 5. calculate the sum of the rows in soft_out again
row_sum_soft_out = np.sum(soft_out, axis=-1)

# 6. check where the sum is still 0 and set it to 1 to avoid dividing by zero
mask_sum_zero = np.where(row_sum_soft_out == 0)
row_sum_soft_out[mask_sum_zero] = 1

row_sum_soft_out = row_sum_soft_out.reshape(5, 1)
# 7. return soft_out / sum per row (so the output sums to 1 per row)
y = soft_out / row_sum_soft_out
print("soft_out", y)
print(np.sum(y, axis=-1), "\n")

mask_nonzero (array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4], dtype=int64), array([ 0, 1, 4, 5, 6, 7, 8, 9, 4, 8, 9, 12, 0, 2, 3, 7, 12, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], dtype=int64))

mask_zero (array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 4], dtype=int64), array([ 2, 3, 10, 11, 12, 0, 1, 2, 3, 5, 6, 7, 10, 11, 1, 4, 5, 6, 8, 9, 10, 11, 12], dtype=int64))

soft_out [[0.   0.   0.42 0.44 0.   0.   0.   0.   0.   0.   0.62 0.67 0.86]
 [0.87 0.28 0.51 0.92 0.   0.97 0.1  0.17 0.   0.   0.84 0.96 0.  ]
 [0.   0.33 0.   0.   0.42 0.54 0.1  0.   0.92 0.51 0.89 0.86 0.  ]
 [0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.  ]
 [0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.  ]]

row_sum_soft_out [3.01 5.62 4.57 0. 0. ]

row_sum_soft_out [[3.01]
 [5.62]
 [4.57]
 [0.  ]
 [0.  ]]

soft_out [[0.   0.   0.42 0.44 0.   0.   0.   0.   0.   0.   0.62 0.67 0.86]
 [0.87 0.28 0.51 0.92 0.   0.97 0.1  0.17 0.   0.   0.84 0.96 0.  ]
 [0.   0.33 0.   0.   0.42 0.54 0.1  0.   0.92 0.51 0.89 0.86 0.  ]
 [1.   1.   1.   1.   1.   1.   1.   1.   1.   1.   1.   1.   1.  ]
 [1.   1.   1.   1.   1.   1.   1.   1.   1.   1.   1.   1.   1.  ]]

mask_sum_zero (array([3, 4], dtype=int64), array([0, 0], dtype=int64))

soft_out [[0.         0.         0.13953488 0.1461794  0.         0.         0.         0.         0.         0.         0.20598007 0.22259136 0.28571429]
 [0.15480427 0.04982206 0.09074733 0.16370107 0.         0.17259786 0.01779359 0.03024911 0.         0.         0.14946619 0.17081851 0.        ]
 [0.         0.07221007 0.         0.         0.09190372 0.11816193 0.02188184 0.         0.20131291 0.11159737 0.19474836 0.18818381 0.        ]
 [0.         0.         0.         0.         0.         0.         0.         0.         0.         0.         0.         0.         0.        ]
 [0.         0.         0.         0.         0.         0.         0.         0.         0.         0.         0.         0.         1.        ]]
[1. 1. 1. 0. 1.]

(Note that row 3, where every input neuron is 1, stays all-zero and its sum is 0: with everything masked there is nothing left to renormalize.)
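For reference, the same seven steps can be collapsed into a few vectorized numpy lines. This is only a sketch of the logic above, and the helper name mask_and_renormalize is made up:

import numpy as np

def mask_and_renormalize(inp, soft_out, fill=1.0):
    # 1. clear the positions where inp == 1
    y = np.where(inp == 0, soft_out, 0.0)
    # 2./3. find the rows whose sum is 0
    zero_rows = y.sum(axis=-1) == 0
    # 4. fill those rows with a constant, keeping the masked positions at 0
    y[zero_rows] = np.where(inp[zero_rows] == 0, fill, 0.0)
    # 5./6. recompute the row sums and guard against division by zero
    row_sum = y.sum(axis=-1, keepdims=True)
    row_sum[row_sum == 0] = 1.0
    # 7. renormalize each row to sum to 1
    return y / row_sum

Called on fresh copies of inp and soft_out, it reproduces the y computed step by step above.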

Can someone help with writing the Keras backend layer?

Tags: backend, keras-layer, softmax

Solution


I came up with the following code in the end; change the shape (shape=(32, 13)) as needed:

from keras import backend as K
from keras.layers import Lambda
import tensorflow as tf

def mask_output2(x):
    # soft_out is the output of the previous softmax layer, shape=(batch_size, 13) in my case
    # inp is the tensor containing 0s and 1s, where 1 means that the action was already used, shape=(batch_size, 13)
    inp, soft_out = x
    # add a very small value in order to avoid having 0 everywhere
    c = K.constant(0.0000001, dtype='float32', shape=(32, 13))
    y = soft_out + c
    # clear the invalid actions' values (keep y where inp == 0, zero elsewhere)
    y = Lambda(lambda x: K.switch(K.equal(x[0], 0), x[1], K.zeros_like(x[1])))([inp, y])
    y_sum = K.sum(y, axis=-1)
    # correct the sum if it is 0 to avoid dividing by zero
    y_sum_corrected = Lambda(lambda x: K.switch(K.equal(x[0], 0), K.ones_like(x[0]), x[0]))([y_sum])
    # renormalize so each row sums to 1: first calculate 1/sum ...
    y_sum_corrected = tf.divide(1, y_sum_corrected)
    # ... then multiply the (32, 13) tensor with the (32,) tensor
    y = tf.einsum('ij,i->ij', y, y_sum_corrected)
    return y
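To use this in a model, the function is wrapped in a Lambda layer. Below is a minimal sketch of the wiring; the 'features' input and the layer sizes are illustrative assumptions, not part of the original answer:

from keras.layers import Input, Dense, Lambda
from keras.models import Model

# hypothetical regular input plus the 0/1 mask input
features = Input(shape=(20,), name='features')
inp = Input(shape=(13,), name='inp')

hidden = Dense(64, activation='relu')(features)
soft_out = Dense(13, activation='softmax')(hidden)

# apply the masking/renormalization on top of the softmax output
masked = Lambda(mask_output2)([inp, soft_out])

model = Model(inputs=[features, inp], outputs=masked)
model.compile(optimizer='adam', loss='categorical_crossentropy')

Note that K.constant(..., shape=(32, 13)) inside mask_output2 hardcodes the batch size, so this wiring only accepts batches of exactly 32 unless the shape is changed, as noted above.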
