Why is my for loop overwriting instead of appending to the CSV?

Problem description

I'm trying to scrape the Interactive Brokers (IB) website. I've built the list of URLs to iterate over and I can extract the information I need, but the data frame seems to keep getting overwritten instead of appended to.

import pandas as pd
from pandas import DataFrame as df
from bs4 import BeautifulSoup
import csv
import requests

base_url = "https://www.interactivebrokers.com/en/index.php?f=2222&exch=mexi&showcategories=STK&p=&cc=&limit=100"
n = 1

url_list = []

while n <= 2:
    url = (base_url + "&page=%d" % n)
    url_list.append(url)
    n = n+1

def parse_websites(url_list):
    for url in url_list:
        html_string = requests.get(url)
        soup = BeautifulSoup(html_string.text, 'lxml') # Parse the HTML as a string
        table = soup.find('div',{'class':'table-responsive no-margin'}) #Grab the first table
        df = pd.DataFrame(columns=range(0,4), index = [0]) # I know the size 

        for row_marker, row in enumerate(table.find_all('tr')):
            column_marker = 0
            columns = row.find_all('td')
            try:
                df.loc[row_marker] = [column.get_text() for column in columns]
            except ValueError:
                # Safely skip rows where [column.get_text() for column in columns] is an empty list.
                continue

        print(df)
        df.to_csv('path_to_file\\test1.csv')

parse_websites(url_list)

Can you take a look at my code and tell me what I'm doing wrong?

Tags: python, python-3.x, pandas, pandas-datareader

Solution


If you want to append data frames to the file, one solution is to write in append mode:

df.to_csv('path_to_file\\test1.csv', mode='a', header=False)
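Note that with header=False the file never gets a header row, while the default header=True would repeat the header on every append. A minimal sketch of one way to handle this (the save_page helper name is just an illustration, and the path is the same placeholder as in the question) writes the header only when the file does not exist yet:

import os
import pandas as pd

def save_page(df, path='path_to_file\\test1.csv'):
    # Append each page's data frame to the same CSV, writing the
    # header row only on the first call, before the file exists.
    df.to_csv(path, mode='a', header=not os.path.isfile(path), index=False)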

Otherwise, you should create the data frame outside the loop, as suggested in the comments.
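A rough sketch of that second approach, reusing the parsing logic from the question (the selectors and URL list are taken as given), collects the rows from every page and writes the CSV once, after the loop:

import pandas as pd
import requests
from bs4 import BeautifulSoup

def parse_websites(url_list):
    frames = []  # one data frame per page, instead of overwriting a single df
    for url in url_list:
        html_string = requests.get(url)
        soup = BeautifulSoup(html_string.text, 'lxml')
        table = soup.find('div', {'class': 'table-responsive no-margin'})
        rows = []
        for row in table.find_all('tr'):
            cells = [column.get_text(strip=True) for column in row.find_all('td')]
            if cells:  # skip header rows and rows without <td> cells
                rows.append(cells)
        frames.append(pd.DataFrame(rows))
    # Combine all pages and write the file a single time.
    result = pd.concat(frames, ignore_index=True)
    result.to_csv('path_to_file\\test1.csv', index=False)
    return result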

