Python 3 Pandas DataFrame KeyError

Problem description

I have a DataFrame named crawls that looks like this: (image of the DataFrame)

When I run this code:

crawl_stats = (
    crawls['updated']
    .groupby(crawls.index.get_level_values('url'))
    .agg({
        'number of crawls': 'count',
        'proportion of updates': 'mean',
        'number of updates': 'sum'
    })
)

it raises this error:

KeyError                                  Traceback (most recent call last)
<ipython-input-62-180f1041465d> in <module>
      8 crawl_stats = (
      9     crawls['updated']
---> 10         .groupby(crawls.index.get_level_values('url'))
     11         # .groupby('url')
     12         .agg({

/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pandas/core/indexes/base.py in _get_level_values(self, level)
   3155         """
   3156 
-> 3157         self._validate_index_level(level)
   3158         return self
   3159 

/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pandas/core/indexes/base.py in _validate_index_level(self, level)
   1942         elif level != self.name:
   1943             raise KeyError('Level %s must be same as name (%s)' %
-> 1944                            (level, self.name))
   1945 
   1946     def _get_level_number(self, level):

KeyError: 'Level url must be same as name (None)'

I tried this modified code:

crawl_stats = (
    crawls['updated']
    # .groupby(crawls.index.get_level_values('url'))
    .groupby('url')
    .agg({
        'number of crawls': 'count',
        'proportion of updates': 'mean',
        'number of updates': 'sum'
    })
)

It also raises an error:

KeyError                                  Traceback (most recent call last)
<ipython-input-63-8c5f0f6f7c86> in <module>
      9     crawls['updated']
     10         # .groupby(crawls.index.get_level_values('url'))
---> 11         .groupby('url')
     12         .agg({
     13             'number of crawls': 'count',       
3293             # Add key to exclusions

    KeyError: 'url'

I have already tried other suggestions from Stack Overflow, but it still doesn't work. Can someone help me fix this? Thanks!

Here is the code I used to create the crawls DataFrame:

import numpy as np
import pandas as pd

def make_crawls_dataframe(crawl_json_records):
    """Creates a Pandas DataFrame from the given list of JSON records.

    The DataFrame corresponds to the following relation:

        crawls(primary key (url, hour), updated)

    Each hour in which a crawl happened for a page (regardless of
    whether it found a change) should be represented.  `updated` is
    a boolean value indicating whether the check for that hour found
    a change.

    The result is sorted by URL in ascending order and **further**
    sorted by hour in ascending order among the rows for each URL.

    Args:
      crawl_json_records (list): A list of JSON objects such as the
                                 crawl_json variable above.

    Returns:
      DataFrame: A table whose schema (and sort order) is described
                 above.
    """
    url = []
    hour = []
    updated = []

    # For each crawled page: repeat its URL once per check, number the
    # checks 1..n as hours, and flag the hours whose check found a change.
    for record in crawl_json_records:
        temp_url = [record['url']]
        temp_len = record["number of checks"]
        temp_checks = record["positive checks"]

        url.extend(temp_url * temp_len)
        hour.extend(range(1, temp_len + 1))

        temp_updated = [0] * temp_len
        for check in temp_checks:
            temp_updated[check - 1] = 1
        updated.extend(temp_updated)

    # For the full dataset, url, hour and updated each hold 521674 items.
    columns = ['url', 'hour', 'updated']
    # Note: np.array coerces the mixed str/int columns to a single string dtype.
    data = np.array((url, hour, updated)).T
    df = pd.DataFrame(data=data, columns=columns)
    df.index += 1
    # df.index = df['url']
    return df.sort_values(by=['url', 'hour'], ascending=True)

crawls = make_crawls_dataframe(crawl_json)
crawls.head(50)  # displays the DataFrame shown in the image above

Tags: python, python-3.x, pandas

Solution


Both of your attempts fail for the same underlying reason: make_crawls_dataframe returns a DataFrame with a plain integer index (the df.index = df['url'] line is commented out), so the index has no level named 'url'.

In the first version, crawls.index.get_level_values('url') asks that unnamed integer index for a level called 'url', which raises KeyError: 'Level url must be same as name (None)'.

In the second version, you select crawls['updated'] before grouping. That produces a Series that no longer carries the url column, so .groupby('url') has nothing to look up and raises KeyError: 'url'.

The fix is to group the whole DataFrame by its url column and select the updated column afterwards.
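
Here is a minimal sketch of the corrected aggregation, with two caveats. First, passing a dict of new-name/function pairs to .agg on a grouped single column (as in your code) was deprecated in pandas 0.20 and removed in 0.25, so the list-then-rename form is safer. Second, because your frame is built through np.array((url, hour, updated)).T, every column is currently a string, so updated is cast to int before aggregating:

# Cast updated to numeric; np.array(...) coerced all columns to strings.
crawls['updated'] = crawls['updated'].astype(int)

crawl_stats = (
    crawls
    .groupby('url')['updated']        # group the DataFrame, then select the column
    .agg(['count', 'mean', 'sum'])    # one aggregate per output column
    .rename(columns={
        'count': 'number of crawls',
        'mean': 'proportion of updates',
        'sum': 'number of updates',
    })
)

Alternatively, if you give crawls the (url, hour) index that its docstring promises, your original get_level_values('url') call works as written:

crawls = crawls.set_index(['url', 'hour'])

crawl_stats = (
    crawls['updated']
    .groupby(crawls.index.get_level_values('url'))  # or simply .groupby(level='url')
    .agg(['count', 'mean', 'sum'])
)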

