How to add a new layer on top of BERT?

Problem Description

Can you help me add a new BiLSTM layer on top of the BERT output?

output_layer = model.get_pooled_output()

# Here, we make alterations to add the extra features
output_layer_extra_features = tf.concat(
    [output_layer, tf.convert_to_tensor(extra_features, dtype=tf.float32)],
    axis=1)

hidden_size = output_layer_extra_features.shape[-1].value

output_weights = tf.get_variable(
    "output_weights", [num_labels, hidden_size],
    initializer=tf.truncated_normal_initializer(stddev=0.02))

output_bias = tf.get_variable(
    "output_bias", [num_labels], initializer=tf.zeros_initializer())

with tf.variable_scope("loss"):
    if is_training:
        # I.e., 0.1 dropout
        output_layer_extra_features = tf.nn.dropout(output_layer_extra_features, keep_prob=0.9)

    logits = tf.matmul(output_layer_extra_features, output_weights, transpose_b=True)
    logits = tf.nn.bias_add(logits, output_bias)
    probabilities = tf.nn.softmax(logits, axis=-1)
    log_probs = tf.nn.log_softmax(logits, axis=-1)

    one_hot_labels = tf.one_hot(labels, depth=num_labels, dtype=tf.float32)

    per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1)
    loss = tf.reduce_mean(per_example_loss)

Tags: machine-learning, artificial-intelligence

Solution
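
The snippet in the question classifies from `model.get_pooled_output()`, which is a single `[batch, hidden]` vector per example, so there is no token sequence left for an LSTM to scan. To insert a BiLSTM, start from `model.get_sequence_output()` instead, which returns `[batch, seq_len, hidden]`, run the recurrent layer over it, and pool the result before the classification head. Below is a minimal sketch using TF's Keras layers (the helper name `add_bilstm_head` and the layer sizes are illustrative, not from the original code):

```python
import tensorflow as tf


def add_bilstm_head(sequence_output, num_labels, lstm_units=128):
    """Run a BiLSTM over BERT token embeddings and produce classification logits.

    sequence_output: [batch, seq_len, hidden] tensor,
                     e.g. from model.get_sequence_output().
    """
    # Bidirectional LSTM; without return_sequences it emits the final
    # forward/backward states concatenated -> [batch, 2 * lstm_units]
    pooled = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(lstm_units))(sequence_output)

    # Classification head on top of the BiLSTM summary vector
    logits = tf.keras.layers.Dense(num_labels)(pooled)
    return logits
```

In pure TF 1.x graph code (matching the question's style), the same layer can be built with `tf.nn.bidirectional_dynamic_rnn` and two `tf.nn.rnn_cell.LSTMCell` instances. Either way, the resulting pooled vector can be concatenated with `extra_features` exactly as in the question before the `output_weights` matmul and softmax loss.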

