Classifying and Computing Entropy Using Numpy

Problem Description

I am trying to perform the following task:

For a given column of data (stored as a numpy array), "bin" the data in a greedy fashion: I test the current object together with the next object and compute the entropy of that pair.

The pseudocode looks like this:

split_data(feature):
        Set BestValues = None
        Set BestGain = 0
        For Each Value in Feature:
                Calculate CurrentGain As InformationGain(Entropy(Feature) - Entropy(Value + Next Value))
                If CurrentGain > BestGain:
                        Set BestValues = Value, Next Value
                        Set BestGain = CurrentGain

        return BestValues
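
For reference, one possible rendering of this loop in runnable Python (a sketch, not from the original post): it assumes the entropy() helper defined further down, which takes a 2-D array whose last column holds the class label, and reading "Value + Next Value" as a two-value row mask via numpy.isin is an interpretation, not an established API.

import numpy

# A sketch of the pseudocode above, assuming the entropy() helper
# defined later in this post (labels in the last column)
def split_data(dataset, feature):
    values = numpy.unique(dataset[:, feature])
    best_values, best_gain = None, 0.0
    # Test each adjacent pair of feature values as a candidate bin
    for value, next_value in zip(values, values[1:]):
        # Keep only the rows whose feature value falls in the pair
        mask = numpy.isin(dataset[:, feature], (value, next_value))
        current_gain = entropy(dataset) - entropy(dataset[mask])
        if current_gain > best_gain:
            best_values, best_gain = (value, next_value), current_gain
    return best_values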

I currently have Python code that looks like this:

import math
import numpy

# This function finds the total entropy for a given dataset
def entropy(dataset):
    # Running total across all classes
    total_entropy = 0
    # Determine the classes (unique labels in the last column)
    classes = numpy.unique(dataset[:,-1])

    # Loop through each "class", or label
    for aclass in classes:
        # Count the rows that carry the current label
        currFreq = 0
        # Loop through each row in the dataset
        for row in dataset:
            # If that row has the same label as the current class, increment the frequency
            if aclass == row[-1]:
                currFreq = currFreq + 1

        # The current probability is the # of occurrences / total rows
        currProb = currFreq / len(dataset)
        # Every class returned by numpy.unique occurs at least once,
        # so currProb > 0 and the entropy formula is always safe here
        total_entropy = total_entropy + (-currProb * math.log(currProb, 2))

    # Return the total entropy
    return total_entropy
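
As an aside, the same result can be computed without the inner counting loop; the version below is a vectorized sketch, not part of the original code:

# A vectorized equivalent of entropy(): numpy.unique with
# return_counts=True yields every label's frequency in one call
def entropy_fast(dataset):
    _, counts = numpy.unique(dataset[:, -1], return_counts=True)
    probs = counts / len(dataset)
    return float(-numpy.sum(probs * numpy.log2(probs)))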

# This function gets the conditional entropy of the labels for a single attribute
def entropy_by_attribute(dataset, feature):
    # The attribute is the specific feature (column) of the dataset
    attribute = dataset[:,feature]
    # The target_variables are the unique labels in the last column
    target_variables = numpy.unique(dataset[:,-1])
    # The unique values in the column we are evaluating
    variables = numpy.unique(attribute)
    # The entropy for the attribute in question
    entropy_attribute = 0

    # Loop through each of the possible values
    for variable in variables:
        denominator = 0
        entropy_each_feature = 0
        # For every entry in the column, count how often the current value occurs
        for row in attribute:
            if row == variable:
                denominator = denominator + 1

        # Now loop through each class
        for target_variable in target_variables:
            numerator = 0
            # Count the rows whose feature equals the current value
            # and whose label equals the current class
            for row in dataset:
                if row[feature] == variable and row[-1] == target_variable:
                    numerator = numerator + 1

            # use eps to protect against division by zero and log(0)
            fraction = numerator/(denominator+numpy.finfo(float).eps)
            entropy_each_feature = entropy_each_feature + (-fraction * math.log(fraction+numpy.finfo(float).eps, 2))

        # Weight the per-value entropy by how often that value occurs;
        # the weighted term is added, not subtracted, so the result stays non-negative
        big_fraction = denominator / len(dataset)
        entropy_attribute = entropy_attribute + (big_fraction * entropy_each_feature)

    # Return that entropy
    return entropy_attribute
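
The same conditional entropy can also be written with boolean masks in place of the inner loops; again, this is a sketch rather than the poster's code:

# A vectorized sketch of entropy_by_attribute(): select the rows for
# each feature value with a mask and reuse entropy() on the subset
def entropy_by_attribute_fast(dataset, feature):
    column = dataset[:, feature]
    total = 0.0
    for value in numpy.unique(column):
        subset = dataset[column == value]      # rows taking this value
        weight = len(subset) / len(dataset)    # P(feature == value)
        total += weight * entropy(subset)      # weighted label entropy
    return total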

# This function calculates the information gain
def infogain(dataset, feature):
    # Grab the entropy of the whole dataset
    total_entropy = entropy(dataset)
    # Grab the conditional entropy for the feature being evaluated
    feature_entropy = entropy_by_attribute(dataset, feature)
    # Information gain is the reduction in entropy; with the signs fixed
    # above it is already non-negative, so no abs() is needed
    gain = total_entropy - feature_entropy

    # Return the information gain
    return gain
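
As a sanity check, here is a small usage example with toy data (not from the original post); the last column is the class label:

import numpy

# Feature 0 separates the classes perfectly; feature 1 is uninformative
dataset = numpy.array([
    [1, 7, 0],
    [1, 8, 0],
    [2, 7, 1],
    [2, 8, 1],
])
print(entropy(dataset))      # 1.0 -- two equally likely labels
print(infogain(dataset, 0))  # ~1.0 -- feature 0 is a perfect split
print(infogain(dataset, 1))  # ~0.0 -- feature 1 tells us nothing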

However, I am not sure how to do the following:

  1. For a single feature, obtain its total entropy
  2. For a single feature, determine the entropy under my binning technique, where I test a value together with the next value

I cannot logically picture how to develop code that accomplishes 1 and 2, and I am struggling with it. I will keep updating this post as I make progress.
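
For concreteness, one possible reading of points 1 and 2, reusing the helpers above (the column index, value pair, and variable names are illustrative placeholders):

# Point 1: total entropy of the labels, and the conditional entropy
# of a single feature, using the functions defined above
total = entropy(dataset)
feature_entropy = entropy_by_attribute(dataset, 0)   # feature column 0

# Point 2: entropy of a two-value bin -- keep only the rows whose
# feature value is one of the two adjacent values under test
value, next_value = 1.0, 2.0                         # placeholder pair
pair_mask = numpy.isin(dataset[:, 0], (value, next_value))
pair_entropy = entropy(dataset[pair_mask])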

Tags: python, numpy, statistics

Solution


The following function handles the entropy calculation for each column (feature):

import math

def entropy(attributes, dataset, targetAttr):
    freq = {}
    data_entropy = 0.0

    # Find the column index of the target attribute; the loop leaves
    # index pointing at targetAttr, so no further adjustment is needed
    index = 0
    for item in attributes:
        if targetAttr == item:
            break
        index = index + 1

    # Tally how often each value of that column occurs
    for item in dataset:
        if item[index] in freq:
            # Increase the frequency count
            freq[item[index]] += 1.0
        else:
            # First occurrence: initialize the count to 1
            freq[item[index]] = 1.0

    # Sum -p * log2(p) over the value frequencies
    for count in freq.values():
        data_entropy = data_entropy + (-count / len(dataset)) * math.log(count / len(dataset), 2)
    return data_entropy
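
A quick usage sketch (toy data, not from the answer): with attributes naming the columns and the label last, the call computes the entropy of the "Play" column.

attributes = ["Outlook", "Windy", "Play"]   # hypothetical column names
dataset = [
    ["sunny", "true",  "no"],
    ["rainy", "false", "yes"],
    ["sunny", "false", "yes"],
    ["rainy", "true",  "no"],
]
print(entropy(attributes, dataset, "Play"))  # 1.0 for a 2/2 label split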
