Creating a DataFrame by looping through multiple read_html links

Problem description

I'm new to Python and I'm trying to scrape a table from multiple pages of a website.

After reading several sites and watching videos, I managed to write code that scrapes a single page and saves it to Excel. Pagination simply changes the page=x value at the end of the URL. I tried looping over multiple pages and building a DataFrame, but failed.

Single-page scrape

import pandas as pd
import requests
from bs4 import BeautifulSoup
from tabulate import tabulate

urlbase = "https://www.olx.in/coimbatore/?&page=1"
res = requests.get(urlbase)
soup = BeautifulSoup(res.content, 'lxml')
table = soup.find('table', id="offers_table")
# read_html returns a list of DataFrames parsed from the HTML string
df = pd.read_html(str(table), header=1)

# Give the unnamed columns meaningful names, then keep rows with at least
# 3 non-NaN values and write the selected columns to Excel
df[0].rename(index=str, columns={"Unnamed: 0": "Full Desc", "Unnamed: 2": "Detail",
                                 "Unnamed: 3": "Price", "Unnamed: 4": "Time"}, inplace=True)
df[0].dropna(thresh=3).to_excel('new.xlsx', sheet_name='Page_2',
                                columns=['Detail', 'Price', 'Time'], index=False)
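As a side note, pandas.read_html can also locate the table directly through its id attribute, which is roughly equivalent to the BeautifulSoup lookup above. A minimal sketch, assuming the same URL and table id as in the question:

import pandas as pd
import requests

# Let read_html find the table by its id attribute instead of using
# BeautifulSoup first. read_html returns a list of matching tables
# (and raises ValueError if none are found), so take the first element.
res = requests.get("https://www.olx.in/coimbatore/?&page=1")
tables = pd.read_html(res.text, attrs={"id": "offers_table"}, header=1)
page_df = tables[0]
print(page_df.columns)  # inspect the parsed headers before renaming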

Scraping multiple pages

import pandas as pd
import requests
from bs4 import BeautifulSoup
from tabulate import tabulate

urlbase = "https://www.olx.in/coimbatore/?&page="

# Fetch pages 1 to 3 by appending the page number to the base URL
for x in range(1, 4):
    res = requests.get(urlbase + str(x))

Then I want to build a single DataFrame by combining the DataFrames created from each page. I don't know how to create multiple DataFrames inside a loop and combine them (the combining step on its own is sketched below).
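For reference, the combining step itself comes down to collecting the per-page DataFrames in a list and calling pandas.concat once. A toy sketch, with made-up data purely for illustration:

import pandas as pd

# Stand-ins for the DataFrames scraped from each page
page1 = pd.DataFrame({"Detail": ["a", "b"], "Price": [10, 20]})
page2 = pd.DataFrame({"Detail": ["c"], "Price": [30]})

frames = [page1, page2]                          # accumulate one DataFrame per page
combined = pd.concat(frames, ignore_index=True)  # stack them into a single DataFrame
print(combined)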

Tags: python, pandas, dataframe

Solution


You're almost there; you can use:

import pandas as pd
import requests
from bs4 import BeautifulSoup

urlbase = "https://www.olx.in/coimbatore/?&page="

frames = []
for x in range(1, 4):
    res = requests.get(urlbase + str(x))
    soup = BeautifulSoup(res.content, 'lxml')
    table = soup.find('table', id="offers_table")
    df = pd.read_html(str(table), header=1)
    df[0].rename(index=str, columns={"Unnamed: 0": "Full Desc", "Unnamed: 2": "Detail",
                                     "Unnamed: 3": "Price", "Unnamed: 4": "Time"}, inplace=True)
    # Collect the cleaned DataFrame for this page
    frames.append(df[0].dropna(thresh=3))

# Stack all per-page DataFrames into one and write it to Excel
combined = pd.concat(frames)
combined.to_excel('new.xlsx', sheet_name='Page_2',
                  columns=['Detail', 'Price', 'Time'], index=False)
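One caveat: if a requested page has no offers table (for example a page number past the last page), soup.find returns None and pd.read_html(str(None)) raises a ValueError. A hedged variant of the loop above that simply skips such pages might look like this (the page range is an assumption):

import pandas as pd
import requests
from bs4 import BeautifulSoup

urlbase = "https://www.olx.in/coimbatore/?&page="

frames = []
for x in range(1, 4):
    res = requests.get(urlbase + str(x))
    soup = BeautifulSoup(res.content, 'lxml')
    table = soup.find('table', id="offers_table")
    if table is None:
        # No offers table on this page: skip it instead of crashing
        continue
    df = pd.read_html(str(table), header=1)[0]
    df.rename(index=str, columns={"Unnamed: 0": "Full Desc", "Unnamed: 2": "Detail",
                                  "Unnamed: 3": "Price", "Unnamed: 4": "Time"}, inplace=True)
    frames.append(df.dropna(thresh=3))

if frames:
    pd.concat(frames, ignore_index=True).to_excel(
        'new.xlsx', sheet_name='Page_2',
        columns=['Detail', 'Price', 'Time'], index=False)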
