python - TensorFlow: how to add regularization to a model
Problem description
I want to add regularization to my optimizer, like this:
tf.train.AdadeltaOptimizer(learning_rate=1).minimize(loss)
but I don't know how to design the "loss" function for the code below.
The site I was reading is: https://blog.csdn.net/marsjhao/article/details/72630147
Could someone give me some advice or discuss this with me?
def train_nn_classifier_model_new(
        my_optimizer,
        steps,
        batch_size,
        hidden_units,
        training_examples,
        training_targets,
        validation_examples,
        validation_targets):

    periods = 10
    steps_per_period = steps / periods

    # Create a DNNClassifier object.
    my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
    dnn_classifier = tf.estimator.DNNClassifier(
        feature_columns=construct_feature_columns(training_examples),
        hidden_units=hidden_units,
        optimizer=my_optimizer
    )

    # Create input functions.
    training_input_fn = lambda: my_input_fn(training_examples,
                                            training_targets["deal_or_not"],
                                            batch_size=batch_size)
    predict_training_input_fn = lambda: my_input_fn(training_examples,
                                                    training_targets["deal_or_not"],
                                                    num_epochs=1,
                                                    shuffle=False)
    predict_validation_input_fn = lambda: my_input_fn(validation_examples,
                                                      validation_targets["deal_or_not"],
                                                      num_epochs=1,
                                                      shuffle=False)

    # Train the model, but do so inside a loop so that we can periodically assess
    # loss metrics.
    print("Training model...")
    print("LogLoss (on training data):")
    training_log_losses = []
    validation_log_losses = []
    for period in range(0, periods):
        # Train the model, starting from the prior state.
        dnn_classifier.train(
            input_fn=training_input_fn,
            steps=steps_per_period
        )
        # Take a break and compute predictions.
        training_probabilities = dnn_classifier.predict(input_fn=predict_training_input_fn)
        training_probabilities = np.array([item['probabilities'] for item in training_probabilities])
        print(training_probabilities)

        validation_probabilities = dnn_classifier.predict(input_fn=predict_validation_input_fn)
        validation_probabilities = np.array([item['probabilities'] for item in validation_probabilities])

        training_log_loss = metrics.log_loss(training_targets, training_probabilities)
        validation_log_loss = metrics.log_loss(validation_targets, validation_probabilities)
        # Occasionally print the current loss.
        print("  period %02d : %0.2f" % (period, training_log_loss))
        # Add the loss metrics from this period to our list.
        training_log_losses.append(training_log_loss)
        validation_log_losses.append(validation_log_loss)
    print("Model training finished.")

    # Output a graph of loss metrics over periods.
    plt.ylabel("LogLoss")
    plt.xlabel("Periods")
    plt.title("LogLoss vs. Periods")
    plt.tight_layout()
    plt.plot(training_log_losses, label="training")
    plt.plot(validation_log_losses, label="validation")
    plt.legend()

    return dnn_classifier

result = train_nn_classifier_model_new(
    my_optimizer=tf.train.AdadeltaOptimizer(learning_rate=1),
    steps=30000,
    batch_size=250,
    hidden_units=[150, 150, 150, 150],
    training_examples=training_examples,
    training_targets=training_targets,
    validation_examples=validation_examples,
    validation_targets=validation_targets
)
Solution
Regularization is added to the loss function. Your optimizer, AdadeltaOptimizer, does not support regularization parameters. If you want regularization built into the optimizer, you should use tf.train.ProximalAdagradOptimizer, which has l2_regularization_strength and l1_regularization_strength parameters whose values you can set. These parameters are part of the original Proximal Adagrad algorithm.
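As a sketch of what that drop-in replacement could look like in the question's code (a TF 1.x configuration fragment; the learning rate and regularization strengths below are illustrative placeholders, not tuned values):

```python
# Replace AdadeltaOptimizer with ProximalAdagradOptimizer, which applies
# L1/L2 regularization as part of its update rule (TF 1.x API).
# The strength values are placeholders, not tuned settings.
my_optimizer = tf.train.ProximalAdagradOptimizer(
    learning_rate=0.1,
    l1_regularization_strength=0.001,  # encourages sparse weights
    l2_regularization_strength=0.001)  # penalizes large weights

result = train_nn_classifier_model_new(
    my_optimizer=my_optimizer,
    steps=30000,
    batch_size=250,
    hidden_units=[150, 150, 150, 150],
    training_examples=training_examples,
    training_targets=training_targets,
    validation_examples=validation_examples,
    validation_targets=validation_targets)
```

The rest of the training function stays unchanged, since the estimator just receives a different optimizer object.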
Otherwise you have to apply regularization to a custom loss function yourself, but DNNClassifier does not allow a custom loss function. You would have to build your network manually for that.
How to add regularization: check it here.
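For the manual route, "apply regularization to your loss" just means adding a weight-penalty term to the data loss before minimizing it. A minimal NumPy illustration of the arithmetic (the function name and weight values here are hypothetical, independent of any TensorFlow API):

```python
import numpy as np

def regularized_loss(base_loss, weights, l1=0.0, l2=0.0):
    """Add L1 and L2 penalty terms to a base (data) loss.

    base_loss: scalar data loss (e.g. log loss)
    weights:   list of weight arrays from the network
    l1, l2:    regularization strengths
    """
    l1_penalty = sum(np.abs(w).sum() for w in weights)      # sum of |w|
    l2_penalty = sum((w ** 2).sum() for w in weights)       # sum of w^2
    return base_loss + l1 * l1_penalty + l2 * l2_penalty

# Example: two small weight matrices and a base loss of 0.5.
w = [np.array([[1.0, -2.0]]), np.array([[0.5]])]
total = regularized_loss(0.5, w, l1=0.01, l2=0.1)
# L1 penalty = 1 + 2 + 0.5 = 3.5; L2 penalty = 1 + 4 + 0.25 = 5.25
# total = 0.5 + 0.01 * 3.5 + 0.1 * 5.25 = 1.06
```

In a hand-built TF 1.x network, the analogous step is collecting the penalty (for instance via tf.losses.get_regularization_loss() when layers are given regularizers), adding it to the data loss, and passing that sum to the optimizer's minimize().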