Python Webscrape HTML to CSV File For Loop

Problem Description

I'm still fairly new to Python, and I'm having trouble getting my for loop to pull all of the web links from a site. Here is my code:

import requests
import csv
from bs4 import BeautifulSoup
j= [["Population and Housing Unit Estimates"]] # Title
k= [["Web Links"]] # Column Headings
example_listing='https://www.census.gov/programs-surveys/popest.html' #Source
r=requests.get(example_listing) #Grab page source html
html_page=r.text
soup=BeautifulSoup(html_page,'html.parser') #Build Beautiful Soup object to help parse the html
with open('HTMLList.csv','w',newline="") as f: #Choose what you want to grab
    writer=csv.writer(f,delimiter=' ',lineterminator='\r')
    writer.writerows(j)
    writer.writerows(k)
    for link in soup.find_all('a'):
        f.append(link.get('href'))
        if not f:
            ""
        else:
            writer.writerow(f)
f.close()

Any help is greatly appreciated. I really don't know where to go from here. Thanks!

Tags: python, csv, for-loop, beautifulsoup

Solution

Assuming you are trying to save the URLs from the site to a CSV file, one URL per row: first, don't reuse f for anything but the file; a file object has no append method, which is why f.append(link.get('href')) fails. You can write each link directly to the CSV by wrapping it in a list: writer.writerow([link.get('href')]). Hope this helps. Otherwise, please edit your question and add more details.

import csv
import requests
from bs4 import BeautifulSoup

j = [["Population and Housing Unit Estimates"]]  # Title
k = [["Web Links"]]  # Column heading

example_listing = 'https://www.census.gov/programs-surveys/popest.html'  # Source page
r = requests.get(example_listing)  # Grab the page's HTML source
html_page = r.text
soup = BeautifulSoup(html_page, 'html.parser')  # Build a BeautifulSoup object to parse the HTML
with open('HTMLList.csv', 'w', newline="") as f:  # Output file
    writer = csv.writer(f, delimiter=' ', lineterminator='\r')
    writer.writerows(j)
    writer.writerows(k)
    for link in soup.find_all('a'):
        url = link.get('href')
        if url:  # Skip <a> tags without an href attribute
            writer.writerow([url])
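
A couple of follow-up notes. The delimiter=' ' and lineterminator='\r' arguments give you a space-delimited file with bare carriage-return line endings; since each row here holds a single value, dropping both arguments yields a conventional CSV. Also, many href values on a page like this are relative paths (for example /programs-surveys/popest/data.html), so the output will mix absolute and relative links. If you want absolute URLs in the file, here is a minimal sketch using urljoin from the standard library's urllib.parse; the page URL and output filename are carried over from above, and the variable name page_url is my own:

import csv
import requests
from urllib.parse import urljoin
from bs4 import BeautifulSoup

page_url = 'https://www.census.gov/programs-surveys/popest.html'
soup = BeautifulSoup(requests.get(page_url).text, 'html.parser')

with open('HTMLList.csv', 'w', newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Population and Housing Unit Estimates"])  # Title
    writer.writerow(["Web Links"])                               # Column heading
    for link in soup.find_all('a'):
        url = link.get('href')
        if url:
            # urljoin resolves relative hrefs against the page URL
            # and leaves already-absolute URLs unchanged.
            writer.writerow([urljoin(page_url, url)])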
