Is there a way to adjust the output of a Keras model?

Problem description

I am trying to build a recommendation engine using a stacked autoencoder, working with the MovieLens dataset. Below is a sample of the data. The row index is the user and the column index is the movie. Each cell holds the rating (between 1 and 5) that the user gave the movie; 0 means the user has not watched it.

mat = array([[[0. , 3.5, 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ]],
       [[0. , 0. , 4. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ]],
       [[4. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ]],
       [[0. , 0. , 0. , 0. , 0. , 3. , 0. , 0. , 0. , 4. ]],
       [[0. , 3. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ]]], dtype=float32)
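(For reference, a matrix in this shape can be built from (user, movie, rating) triplets. The triplets below and the extra middle axis are only assumptions chosen to reproduce the sample above.)

import numpy as np

# hypothetical (user_id, movie_id, rating) triplets, both ids 0-based
ratings = [(0, 1, 3.5), (1, 2, 4.0), (2, 0, 4.0),
           (3, 5, 3.0), (3, 9, 4.0), (4, 1, 3.0)]

n_users, n_movies = 5, 10
mat = np.zeros((n_users, 1, n_movies), dtype=np.float32)  # middle axis of 1 matches the sample
for user, movie, rating in ratings:
    mat[user, 0, movie] = rating  # movies the user never watched stay 0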

I have now built an autoencoder, and it trains on the whole matrix. However, I don't want it to train on the cells the user never rated (the cells containing 0). So I want to change the model's output so that, after every epoch, the indices that are 0 in the original data are set back to 0 in the output, but doing this gives me a @property error. Is there a way to adjust the output, or some way to write a setter for this _output attribute?

import tensorflow as tf
import keras 
from keras.layers import Input, Dense
from keras import Model
import numpy as np

def build_model(input_shape,train=True):
    getin = Input(input_shape)
    encoder = Dense(128,activation='sigmoid')(getin)
    encoder = Dense(64,activation='sigmoid')(encoder)
    encoder = Dense(32,activation='sigmoid')(encoder)
    decoder = Dense(64,activation='sigmoid')(encoder)
    decoder = Dense(128,activation='sigmoid')(decoder)
    decoder = Dense(input_shape,activation='sigmoid',name='op')(decoder)
    model = Model(inputs=getin,outputs=decoder)
    model.compile(optimizer='rmsprop',loss='mean_absolute_percentage_error')
    return model
model = build_model(mat[0].shape[1])

for i in range(50):   #for epochs
    model.fit(mat,mat,epochs=1) #fitting the model
    model.output = tf.multiply(model.output,tf.cast(mat==0,tf.float32)) #Now changing some values of trained output tensor to zero where original dataset contains zero

Error message -

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in __setattr__(self, name, value)
   2548       try:
-> 2549         super(tracking.AutoTrackable, self).__setattr__(name, value)
   2550       except AttributeError:

AttributeError: can't set attribute

During handling of the above exception, another exception occurred:

AttributeError                            Traceback (most recent call last)
<ipython-input-23-379ab2b4f62f> in <module>
      1 for _ in range(2):
      2     model.fit(mat,mat,epochs=1,batch_size=100)
----> 3     model.output = tf.multiply(model.output,tf.cast(mat==0,tf.float32))

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/network.py in __setattr__(self, name, value)
    454                          ' Always start with this line.'), None)
    455 
--> 456     super(Network, self).__setattr__(name, value)
    457 
    458     # Keep track of metric instance created in subclassed model/layer.

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in __setattr__(self, name, value)
   2552             ('Can\'t set the attribute "{}", likely because it conflicts with '
   2553              'an existing read-only @property of the object. Please choose a '
-> 2554              'different name.').format(name))
   2555       return
   2556 

AttributeError: Can't set the attribute "output", likely because it conflicts with an existing read-only @property of the object. Please choose a different name.

Update - I got a solution working in PyTorch.

import torch
import torch.nn as nn
from torch.nn import Linear
import torch.optim as optim

# nmovie / nuser are the number of movie columns / user rows in mat
class SAE(nn.Module):
    def __init__(self):
        super(SAE, self).__init__()
        self.fc1 = Linear(nmovie, 50)
        self.fc2 = Linear(50, 25)
        self.fc3 = Linear(25, 10)
        self.fc4 = Linear(10, 25)
        self.fc5 = Linear(25, 50)
        self.fc6 = Linear(50, nmovie)
        self.activation = nn.Sigmoid()

    def forward(self, x):
        x = self.activation(self.fc1(x))
        x = self.activation(self.fc2(x))
        x = self.activation(self.fc3(x))
        x = self.activation(self.fc4(x))
        x = self.activation(self.fc5(x))
        x = self.fc6(x)
        return x

sae = SAE()
sae.cuda()

criterion = nn.MSELoss()
opt = optim.RMSprop(sae.parameters(),lr=0.1,weight_decay=0.2)

for epoch in range(1,50):
    train_loss = 0
    s = 0.
    for i in range(nuser):
        getin = mat[i].unsqueeze(0)
        target = getin.clone()
        if torch.sum(target.data >0) > 0:
            output = sae(getin)
            target.requires_grad = False

            # This is what I want: set the output to 0 wherever the original rating is 0
            output[target == 0] = 0

            loss = criterion(output, target)
            mean_corrector = nmovie/float(torch.sum(target.data>0)+ 1e-10)
            loss.backward()
            train_loss += torch.sqrt(loss.data*mean_corrector)
            s+=1.
            opt.step()
    print('epoch: {}, loss: {}'.format(epoch,train_loss/s))
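Once trained, recommendations can be read off the reconstructed ratings. A minimal inference sketch, assuming mat is a 2-D FloatTensor on the same device as sae and that the top 3 unseen movies are wanted:

with torch.no_grad():
    user = 0
    preds = sae(mat[user].unsqueeze(0)).squeeze(0)  # predicted ratings for every movie
    preds[mat[user] > 0] = -1.0                     # ignore movies the user has already rated
    recommended = torch.topk(preds, k=3).indices    # indices of the top 3 unseen movies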

But I would still like to know how to do this with Keras.

Tags: python, tensorflow, oop, keras, autoencoder

Solution
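One way to get the same masking behaviour in Keras without assigning to model.output (which is a read-only @property) is to move the mask into a custom loss function, so that cells that are 0 in the original data contribute nothing to the gradient. A minimal sketch, assuming the mat and build_model from the question and a masked variant of the mean-absolute-percentage-error loss used there:

from keras import backend as K

def masked_mape(y_true, y_pred):
    # 1.0 where the user actually rated the movie, 0.0 where the original cell is 0
    mask = K.cast(K.not_equal(y_true, 0), K.floatx())
    # percentage error on the rated cells only; K.epsilon() guards against division by zero
    pct_err = K.abs(y_true - y_pred) / (K.abs(y_true) + K.epsilon())
    return 100.0 * K.sum(pct_err * mask) / K.maximum(K.sum(mask), 1.0)

model = build_model(mat[0].shape[1])                  # same architecture as in the question
model.compile(optimizer='rmsprop', loss=masked_mape)  # only the loss changes
model.fit(mat, mat, epochs=50)

With this, the unrated cells never enter the loss, so there is no need to modify the output tensor after each epoch.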

