Custom LearningRateScheduler in Keras

Problem Description

I am implementing a decaying learning rate based on the accuracy from the previous epoch.

Capturing the metrics:

import tensorflow as tf

class CustomMetrics(tf.keras.callbacks.Callback):
  def on_train_begin(self, logs=None):
    # Containers for the per-epoch metrics and learning rates
    self.metrics = {'loss': [], 'accuracy': [], 'val_loss': [], 'val_accuracy': []}
    self.lr = []

  def on_epoch_end(self, epoch, logs=None):
    print(f"\nEPOCH {epoch} Calling from METRICS CLASS")
    logs = logs or {}
    # Store this epoch's metrics so other callbacks can read them later
    self.metrics['loss'].append(logs.get('loss'))
    self.metrics['accuracy'].append(logs.get('accuracy'))
    self.metrics['val_loss'].append(logs.get('val_loss'))
    self.metrics['val_accuracy'].append(logs.get('val_accuracy'))
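
Side note: the self.lr list above is initialized but never filled. If you also want to record the learning rate each epoch, one possible sketch (the subclass name is mine, and it reads the rate straight from the optimizer):

import tensorflow as tf

class CustomMetricsWithLR(CustomMetrics):
  # Hypothetical extension of the class above that also records the LR
  def on_epoch_end(self, epoch, logs=None):
    super().on_epoch_end(epoch, logs)
    # Keras attaches the model to the callback, so the optimizer is reachable here
    self.lr.append(float(tf.keras.backend.get_value(self.model.optimizer.lr)))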

Custom learning rate decay:

from tensorflow.keras.callbacks import LearningRateScheduler

def changeLearningRate(epoch):
  initial_learningrate = 0.1
  #print(f"EPOCH {epoch}, Calling from ChangeLearningRate:")
  lr = 0.0
  if epoch != 0:
    if custom_metrics_dict.metrics['accuracy'][epoch] < custom_metrics_dict.metrics['accuracy'][epoch-1]:
      print(f"Accuracy @ epoch {epoch} is less than accuracy at epoch {epoch-1}")
      print("[INFO] Decreasing Learning Rate.....")
      lr = initial_learningrate*(0.1)
      print(f"LR Changed to {lr}")
  return lr

Model preparation:

from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

input_layer = Input(shape=(2,))
layer1 = Dense(32, activation='tanh',
               kernel_initializer=tf.random_uniform_initializer(0, 1, seed=30))(input_layer)
output = Dense(2, activation='softmax',
               kernel_initializer=tf.random_uniform_initializer(0, 1, seed=30))(layer1)

model = Model(inputs=input_layer, outputs=output)

custom_metrics_dict = CustomMetrics()
lrschedule = LearningRateScheduler(changeLearningRate, verbose=1)

optimizer = tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9)
model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, Y_train, epochs=4, validation_data=(X_test, Y_test),
          batch_size=16, callbacks=[custom_metrics_dict, lrschedule])

It fails with an index out of range error. From what I noticed, the LRScheduler code is being called multiple times every epoch. I cannot figure out how to set up the function call properly. What can I try next?

Tags: python, tensorflow, keras, deep-learning

Solution


The scheduler function's signature, def scheduler(epoch, lr), means you are supposed to take the learning rate from the lr argument. You should not hard-code initial_learningrate = 0.1: if you do, the learning rate never actually decays, and you keep returning the same value every time the accuracy drops. As for the out-of-range exception: you only check that epoch is not 0, so at epoch = 1 you access both custom_metrics_dict.metrics['accuracy'][epoch] and custom_metrics_dict.metrics['accuracy'][epoch-1]. But the scheduler runs at the beginning of each epoch, before that epoch's on_epoch_end has stored anything, so at that point custom_metrics_dict.metrics['accuracy'] holds only one value (the accuracy of epoch 0) and indexing it at position 1 fails.
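
To see that ordering for yourself, here is a small self-contained sketch; the toy data, model, and the EndLogger class are mine, purely to expose when each callback fires:

import numpy as np
import tensorflow as tf

# Toy data and model, only to demonstrate callback ordering
X = np.random.rand(32, 2).astype("float32")
Y = tf.keras.utils.to_categorical(np.random.randint(0, 2, size=32), 2)

model = tf.keras.Sequential([tf.keras.layers.Dense(2, activation="softmax", input_shape=(2,))])
model.compile(optimizer="sgd", loss="categorical_crossentropy", metrics=["accuracy"])

def schedule(epoch, lr):
  print(f"schedule() called at the START of epoch {epoch}")
  return lr

class EndLogger(tf.keras.callbacks.Callback):
  def on_epoch_end(self, epoch, logs=None):
    print(f"on_epoch_end fired for epoch {epoch}")

model.fit(X, Y, epochs=2, verbose=0,
          callbacks=[tf.keras.callbacks.LearningRateScheduler(schedule), EndLogger()])
# Output order: schedule(0), end(0), schedule(1), end(1); the schedule for
# epoch N runs before epoch N's accuracy has been recorded anywhere.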

I ran your code successfully using this function:

from tensorflow.keras.callbacks import LearningRateScheduler

def changeLearningRate(epoch, lr):
  print(f"EPOCH {epoch},  Calling from ChangeLearningRate: {custom_metrics_dict.metrics['accuracy']}")
  # The scheduler runs at the start of an epoch, so the newest stored
  # accuracy belongs to epoch-1; compare it with the one from epoch-2
  if epoch > 1:
    if custom_metrics_dict.metrics['accuracy'][epoch-1] < custom_metrics_dict.metrics['accuracy'][epoch-2]:
      print(f"Accuracy @ epoch {epoch-1} is less than accuracy at epoch {epoch-2}")
      print("[INFO] Decreasing Learning Rate.....")
      lr = lr * 0.1
      print(f"LR Changed to {lr}")
  return lr
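
As an aside, if the goal is simply to cut the learning rate whenever a monitored metric stops improving, Keras already ships a built-in callback for this, ReduceLROnPlateau, which needs no manual bookkeeping. A minimal sketch with illustrative parameter values:

from tensorflow.keras.callbacks import ReduceLROnPlateau

reduce_lr = ReduceLROnPlateau(monitor='accuracy',  # metric to watch
                              factor=0.1,          # multiply the LR by 0.1 on plateau
                              patience=1,          # epochs without improvement to wait
                              verbose=1)

model.fit(X_train, Y_train, epochs=4, validation_data=(X_test, Y_test),
          batch_size=16, callbacks=[reduce_lr])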
