How to convert a web-scraped table to CSV?

Problem Description

I learned some Python in one of my courses a year ago, but I haven't used it much since, so this may or may not be a simple question.

I'm trying to scrape the all-time top lifetime grossing movies chart from Box Office Mojo, and I'd like to get the rank, title, and lifetime gross for the top 10 movies of the 2010s. I've been playing around with Python and can pull the whole table in, but I don't know how to manipulate it from there, let alone write out a CSV file. Any guidance/tips?

Here's what will print the whole table for me (the first few lines were copied from an old web-scraping assignment to help get me started):

    import requests
    from bs4 import BeautifulSoup as soup

    url = "https://www.boxofficemojo.com/chart/top_lifetime_gross/"
    # browser-style User-Agent so the site doesn't reject the request
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                             'AppleWebKit/537.36 (KHTML, like Gecko) '
                             'Chrome/71.0.3578.98 Safari/537.36'}
    page_html = requests.get(url, headers=headers)

    page_soup = soup(page_html.text, "html.parser")

    # the scroll-table wrapper div contains the chart table
    boxofficemojo_table = page_soup.find("div", {"class": "a-section imdb-scroll-table-inner"})
    complete_table = boxofficemojo_table.get_text()
    print(complete_table)

Tags: python, html, web-scraping, beautifulsoup

Solution


1. Using pd.read_html

You can use pd.read_html for this; it parses every table it finds on the page straight into a list of DataFrames, so writing one out as CSV takes only a few lines:

    import pandas as pd

    # read_html returns a list of every <table> found on the page
    tables = pd.read_html('https://www.boxofficemojo.com/chart/top_lifetime_gross/')
    for i, table in enumerate(tables):
        # number the files so a later table doesn't overwrite an earlier one
        table.to_csv(f'Data_{i}.csv', index=False)
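
Note that when given a URL, read_html fetches the page itself, and some sites reject Python's default user agent. The question's own trick of sending a browser-style User-Agent via requests combines naturally with read_html, since read_html also accepts raw HTML text. A minimal sketch, reusing the headers from the question:

    import pandas as pd
    import requests

    url = 'https://www.boxofficemojo.com/chart/top_lifetime_gross/'
    # browser-style User-Agent copied from the question's code
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                             'AppleWebKit/537.36 (KHTML, like Gecko) '
                             'Chrome/71.0.3578.98 Safari/537.36'}
    html = requests.get(url, headers=headers).text

    # parse the already-fetched HTML instead of letting pandas fetch the URL
    tables = pd.read_html(html)
    tables[0].to_csv('Data.csv', index=False)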

2. Using BS4

    import pandas as pd
    from bs4 import BeautifulSoup
    import requests

    URL = 'https://www.boxofficemojo.com/chart/top_lifetime_gross/'
    print('\n>> Extracting data with Beautiful Soup from: ' + URL)

    try:
        res = requests.get(URL)
        res.raise_for_status()
    except Exception as e:
        raise SystemExit(repr(e))

    print('\n<> URL status code =', res.status_code)
    soup = BeautifulSoup(res.text, 'lxml')
    table = soup.find('table')  # the chart page's single data table

    # collect the text of every cell, row by row; include <th> cells
    # so the header row isn't silently dropped
    list_of_rows = []
    for row in table.find_all('tr'):
        list_of_cells = [cell.text for cell in row.find_all(['td', 'th'])]
        list_of_rows.append(list_of_cells)

    # first row is the header, the rest are data
    data = pd.DataFrame(list_of_rows[1:], columns=list_of_rows[0])
    data.dropna(axis=0, how='all', inplace=True)
    print(data.head(10))

    data.to_csv('Table.csv', index=False)
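
To get from this full table to what the question actually asks for (rank, title, and lifetime gross for the top 10 movies of the 2010s), you can filter and slice the DataFrame before writing it out. A minimal sketch, assuming the header row yields the column names 'Rank', 'Title', 'Lifetime Gross', and 'Year' as the chart currently uses; adjust the names if the site's layout changes:

    # column names are assumptions based on the current chart layout
    data['Year'] = pd.to_numeric(data['Year'], errors='coerce')
    decade = data[(data['Year'] >= 2010) & (data['Year'] <= 2019)]
    top10 = decade[['Rank', 'Title', 'Lifetime Gross']].head(10)
    top10.to_csv('Top10.csv', index=False)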
