Why the same values?

Problem description

I am writing a custom function that takes some parameters and returns a tuple, which is just a list of values from the model's training history.

Its parameters are model, generator, callback, a batch-size list (bs), an epoch list (ep), and a shape list (sh), and it returns the values as a tuple.

The values are collected by lines like shape_acc = shape_acc.append(h.history["accuracy"]).

As expected, it should return different values, but when I use this code:

b_acc, b_val, e_acc, e_val, s_acc, s_val = check_best_value(model, gen, callback, bs_lis, ep_lis, tg_lis)

b_acc = np.array(bs_acc) 
print(b_acc)

e_acc = np.array(ep_acc)
print(ep_acc)

it returns the same array of values twice:

[[0.13667651 0.1494592  0.15437561]]
[[0.13667651 0.1494592  0.15437561]]

Why is this happening? I think the problem lies in the indentation of these lines:


            shape_acc = shape_acc.append(h.history["accuracy"])
            shape_val = shape_val.append(h.history["val_accuracy"])

        epoch_acc = epoch_acc.append(h.history["accuracy"])
        epoch_val = epoch_val.append(h.history["val_accuracy"])                

    bs_acc = bs_acc.append(h.history["accuracy"])
    bs_val = bs_val.append(h.history["val_accuracy"])

    return bs_acc , bs_val , epoch_acc , epoch_val , shape_acc , shape_val

Can anyone suggest a solution and shed some light on this? Here is my original code:


def check_best_value(model, generator , callback , bs , ep , sh):
    bs_acc = []
    bs_val = []

    epoch_acc = []
    epoch_val = []

    shape_acc = []
    shape_val = []


    callback = EarlyStopping(monitor = 'val_loss' , patience = 1 , verbose = 1 , 
    restore_best_weights=True , mode = 'auto')

    for bs_size in bs:
        for ep_size in ep:
            for img_size in sh:
                train_gen = generator.flow_from_directory(TRAIN_DIR , 
                                                                target_size = (img_size , img_size),
                                                                batch_size = bs_size ,
                                                                class_mode = 'categorical',
                                                                subset = 'training',
                                                                seed = 14)
                
                val_gen = generator.flow_from_directory(TRAIN_DIR , 
                                                                target_size = (img_size , img_size),
                                                                batch_size = bs_size ,
                                                                class_mode = 'categorical',
                                                                subset = 'validation',
                                                                seed = 14)
                h = model.fit(train_gen,
                                steps_per_epoch=train_gen.samples // bs_size,
                                epochs=ep_size,
                                validation_data=val_gen,
                                validation_steps= val_gen.samples // bs_size,
                                callbacks = [callback],
                                verbose=0)
                
            shape_acc = shape_acc.append(h.history["accuracy"])
            shape_val = shape_val.append(h.history["val_accuracy"])

        epoch_acc = epoch_acc.append(h.history["accuracy"])
        epoch_val = epoch_val.append(h.history["val_accuracy"])                

    bs_acc = bs_acc.append(h.history["accuracy"])
    bs_val = bs_val.append(h.history["val_accuracy"])
    
    return bs_acc , bs_val , epoch_acc , epoch_val , shape_acc , shape_val 

Tags: python, loops, keras, deep-learning, hyperparameters

Solution
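Judging from the code shown, the indentation is not the root cause. Two things are likely going on:

1. `list.append` mutates the list in place and returns `None`, so `shape_acc = shape_acc.append(...)` replaces the list with `None` after the first call.
2. Every `append` reads the same `h`, the `History` object from the most recent `model.fit` call. So even without the reassignment, all three pairs of lists would record the history of the last run only, and the printed arrays would be identical. (Note also that the snippet converts `bs_acc`/`ep_acc` to arrays, while the function returns them under the names `b_acc`/`e_acc`.)

A minimal demonstration of the `append` pitfall:

```python
# list.append mutates in place and returns None,
# so reassigning its result destroys the list.
acc = []
acc = acc.append(0.95)   # acc is now None, not [0.95]
print(acc)               # -> None

# The fix: call append without reassigning.
acc = []
acc.append(0.95)
acc.append(0.97)
print(acc)               # -> [0.95, 0.97]
```

In `check_best_value`, drop the reassignments (write `bs_acc.append(...)` on its own) and move each `append` inside the loop it belongs to, e.g. `shape_acc.append(h.history["accuracy"])` inside the `img_size` loop, so each configuration's history is recorded before `h` is overwritten by the next fit.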

