How to apply weight constraints in a neural network in TensorFlow?

Problem description

I am using TensorFlow eager execution. In my neural network I want to apply a weight constraint to the hidden layer. This is my approach:

kernel_constraint=tf.keras.constraints.MaxNorm(max_value=0.5, axis=0)

But when I run a small example, I can see that the kernel constraint is not applied (i.e., some of the updated weights are > 0.5).

Here is my example:

import tensorflow as tf
import numpy as np

tf.enable_eager_execution()


model = tf.keras.Sequential([
  tf.keras.layers.Dense(2, activation=tf.sigmoid,
                        kernel_constraint=tf.keras.constraints.MaxNorm(max_value=0.5, axis=0),
                        input_shape=(2,)),  # input shape required
  tf.keras.layers.Dense(2, activation=tf.sigmoid)
])

#set the weights
weights = [np.array([[0.25, 0.25], [0.2, 0.3]]), np.array([0.35, 0.35]),
           np.array([[0.4, 0.5], [0.45, 0.55]]), np.array([0.6, 0.6])]

model.set_weights(weights)

model.get_weights()

features = tf.convert_to_tensor([[0.05, 0.10]])
labels = tf.convert_to_tensor([[0.01, 0.99]])

#define the loss function
def loss(model, x, y):
  y_ = model(x)
  return tf.losses.mean_squared_error(labels=y, predictions=y_)

#define the gradient calculation
def grad(model, inputs, targets):
  with tf.GradientTape() as tape:
    loss_value = loss(model, inputs, targets)
  return loss_value, tape.gradient(loss_value, model.trainable_variables) 

#create the optimizer and global step
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
global_step = tf.train.get_or_create_global_step()

#optimization step
loss_value, grads = grad(model, features, labels)
optimizer.apply_gradients(zip(grads, model.trainable_variables), global_step)
#check the updated weights
model.get_weights()

What am I missing?

Tags: python, tensorflow, keras

Solution
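
Keras weight constraints such as MaxNorm are only applied by Keras' own training routines (for example, model.fit). In a hand-written eager training loop, optimizer.apply_gradients just updates the variables; nothing ever calls the constraint attached to the layer, which is why the first layer's kernel can end up with values above 0.5. Note that MaxNorm(max_value=0.5, axis=0) rescales each column of the kernel so that its L2 norm is at most 0.5.

The fix is to re-apply the constraints yourself after each optimizer step. Below is a minimal sketch; the helper name clamp_weights is our own, and it assumes the constraint attribute that tf.keras attaches to the variables it creates (if your version lacks it, read the constraint from layer.kernel_constraint instead):

#re-apply each variable's Keras constraint after a gradient step
def clamp_weights(model):
  for variable in model.trainable_variables:
    if variable.constraint is not None:
      variable.assign(variable.constraint(variable))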


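With that helper, the optimization step from the example becomes:

loss_value, grads = grad(model, features, labels)
optimizer.apply_gradients(zip(grads, model.trainable_variables), global_step)
clamp_weights(model)  #enforce MaxNorm(0.5) on the first layer's kernel

#the first layer's kernel columns now have L2 norm <= 0.5
model.get_weights()

Alternatively, training a compiled model through model.fit should apply the constraints for you, since Keras' built-in loop invokes them after each update.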