Trying to find information gain but not sure how to handle conditional entropy

Problem description

Dataset: https://raw.githubusercontent.com/Kuntal-G/Machine-Learning/master/R-machine-learning/data/banknote-authentication.csv

How do I calculate conditional entropy and find the best information gain from a dataset like this?

Code for calculating entropy:

def entropy(column):
    """ Calculates the entropy"""
    values, counts = np.unique(column, return_counts=True)
    entropy_val = 0
    for i in range(len(counts)):
        entropy_val += (
            (-counts[i] / sum(counts)) * math.log2(counts[i] / (sum(counts)))
        )

    return entropy_val

where "column" is one feature of the dataframe, e.g. df[0]. I'm a bit confused about where to go from here... Can anyone point me in the right direction? My end goal is to find the best information gain.

entropy_vals = {}
entropy_vals = entropy(X[0]), entropy(X[1]), entropy(X[2]), entropy(X[3]), entropy(y)

print(entropy_vals)
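For reference, the conditional entropy of the labels after a split is just the weighted average of the label entropy in each resulting subset, H(y | split) = (n_left/n) * H(y_left) + (n_right/n) * H(y_right), and information gain is H(y) minus that. Below is a minimal sketch of that idea reusing the entropy() function above; the function names conditional_entropy/information_gain and the idea of splitting at an arbitrary threshold are assumptions for illustration, not part of the original post:

def conditional_entropy(feature, labels, threshold):
    """Weighted average of the label entropy in the two subsets
    produced by splitting `feature` at `threshold`."""
    left = labels[feature <= threshold]
    right = labels[feature > threshold]
    n = len(labels)
    return (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)


def information_gain(feature, labels, threshold):
    """Entropy of the labels before the split minus the conditional entropy after it."""
    return entropy(labels) - conditional_entropy(feature, labels, threshold)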


import math

import numpy as np
import pandas as pd

# Banknote data: four numeric feature columns plus a 0/1 class label in the last column.
df = pd.read_csv('data_banknote_authentication.txt', header=None)
print(df)


y = df.iloc[:, -1]
X = df.iloc[:, :4]


def count_labels(rows):
    """Counts number of each unique value in selected column."""
    counts = {}
    for row in rows:
        label = row
        if label not in counts:
            counts[label] = 1
        else:
            counts[label] += 1
    return counts


def entropy(column):
    """ Calculates the entropy"""
    values, counts = np.unique(column, return_counts=True)
    entropy_val = 0
    for i in range(len(counts)):
        entropy_val += (
                (-counts[i] / sum(counts)) * math.log2(counts[i] / (sum(counts)))
        )

    return entropy_val


entropy_vals = {}
entropy_vals = entropy(X[0]), entropy(X[1]), entropy(X[2]), entropy(X[3]), entropy(y)

print(entropy_vals)


def check_unique(data):
    label_col = data[data.columns[-1]]
    print(label_col)
    unique_features = np.unique(label_col)
    if len(unique_features) == 1:
        return True
    else:
        return False


def categorize_data(data):
    label_col = data[data.columns[-1]]
    values, counts = np.unique(label_col, return_counts=True)
    print(values, counts)
    index = counts.argmax()
    category = values[index]

    return category



def split(data):
    x_less = data[data <= np.mean(data)]
    x_greater = data[data > np.mean(data)]

    return x_less, x_greater
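To tie the pieces together, here is a hypothetical loop (not from the original post) that, like split() above, uses each feature column's mean as the split point, computes the information gain of that split on the labels y using the information_gain sketch from earlier, and reports the column with the largest gain:

# Gain of a mean-threshold split for each of the four feature columns.
gains = {}
for col in X.columns:
    gains[col] = information_gain(X[col], y, np.mean(X[col]))

best_feature = max(gains, key=gains.get)  # column with the highest information gain
print(gains)
print('best split on column', best_feature)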

Tags: python, pandas, numpy, dataset, entropy

Solution

