Python web scraping and downloading specific zip files on Windows

Problem description

I am trying to download and stream the contents of specific zip files from a web page.

The page lists the zip files with labels and links in a table structure, like this:

Filename    Flag    Link    
testfile_20190725_csv.zip  Y  zip
testfile_20190725_xml.zip  Y  zip 
testfile_20190724_csv.zip  Y  zip 
testfile_20190724_xml.zip  Y  zip 
testfile_20190723_csv.zip  Y  zip 
testfile_20190723_xml.zip  Y  zip 
(etc.)

The word "zip" above is a link to the zip file. I want to download only the CSV zip files, and only the first x (say 7) that appear on the page; none of the XML zip files.
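
In other words, the selection rule is: keep a filename only if it ends in _csv.zip, and stop after the first 7 matches. A minimal sketch of just that rule (the names filenames and max_files are hypothetical, with sample data taken from the table above):

# Hypothetical illustration of the selection rule only
filenames = [
    "testfile_20190725_csv.zip", "testfile_20190725_xml.zip",
    "testfile_20190724_csv.zip", "testfile_20190724_xml.zip",
]
max_files = 7
# keep CSV zips in page order, capped at the first max_files matches
wanted = [f for f in filenames if f.endswith("_csv.zip")][:max_files]
print(wanted)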

A sample of the page's HTML is below:

<tr>
 <td class="labelOptional_ind">
  testfile_20190725_csv.zip
 </td>
 <td class="labelOptional" width="15%">
  <div align="center">
  Y
  </div>
 </td>
 <td class="labelOptional" width="15%">
  <div align="center">
   <a href="/test1/servlets/mbDownload?doclookupId=671334586">
    zip
   </a>
  </div>
 </td>
</tr>
<tr>
 <td class="labelOptional_ind">
  testfile_20190725_xml.zip
 </td>
 <td class="labelOptional" width="15%">
  <div align="center">
  N
  </div>
 </td>
 <td class="labelOptional" width="15%">
  <div align="center">
   <a href="/test1/servlets/mbDownload?doclookupId=671190392">
    zip
   </a>
  </div>
 </td>
</tr>
<tr>
 <td class="labelOptional_ind">
  testfile_20190724_csv.zip
 </td>
 <td class="labelOptional" width="15%">
  <div align="center">

I think I am almost there, but need a little help. What I have been able to do so far:

1. Check whether the local download folder exists, and create it if it does not
2. Set up BeautifulSoup, read all of the main labels from the page (the first column of the table), and read all of the zip links, i.e. the "a hrefs"
3. For testing, manually set one variable to one of the labels and another variable to the corresponding zip file link, download the file, and stream the zip file's CSV contents

What I need help with is capturing all of the main labels and their corresponding links, then looping through each label/link pair, skipping any XML labels/links, and downloading/streaming only the CSV ones.

Here is my code:

# Read zip files from page, download file, extract and stream output
from io import BytesIO
from zipfile import ZipFile
import urllib.request
from urllib.parse import urljoin
import os
import requests
from bs4 import BeautifulSoup

# check for download directory existence; create if not there
if not os.path.isdir('f:\\temp\\downloaded'):
    os.makedirs('f:\\temp\\downloaded')

# Get labels and zip file download links
mainurl = "http://www.test.com/"
url = "http://www.test.com/thisapp/GetReports.do?Id=12331"

# get page and setup BeautifulSoup
r = requests.get(url)
soup = BeautifulSoup(r.content, "html.parser")

# Get all file labels and filter so only use CSVs
mainlabel = soup.find_all("td", {"class": "labelOptional_ind"})
for td in mainlabel:
    if "_csv" in td.text:
        print(td.text)

# Get all <a href> urls (the hrefs are relative, so join them to the site root)
for link in soup.find_all('a'):
    print(urljoin(mainurl, link.get('href')))

# QUESTION: HOW CAN I LOOP THROUGH ALL FILE LABELS AND FIND ONLY THE
# CSV LABELS AND THEIR CORRESPONDING ZIP DOWNLOAD LINK, SKIPPING ANY
# XML LABELS/LINKS, THEN LOOP AND EXECUTE THE CODE BELOW FOR EACH, 
# REPLACING zipfilename WITH THE MAIN LABEL AND zipurl WITH THE ZIP 
# DOWNLOAD LINK?

# Test downloading and streaming
zipfilename = 'testfile_20190725_xml.zip'
zipurl = 'http://www.test.com/thisdownload/servlets/thisDownload?doclookupId=674992379'
outputFilename = "f:\\temp\\downloaded\\" + zipfilename

# Download the zip file into memory
url = urllib.request.urlopen(zipurl)
zippedData = url.read()

# Save zip file to disk
print("Saving to", outputFilename)
with open(outputFilename, 'wb') as output:
    output.write(zippedData)

# Unzip and stream CSV file
with ZipFile(BytesIO(zippedData)) as my_zip_file:
    for contained_file in my_zip_file.namelist():
        # write each line to a local file and also print it
        with open("unzipped_and_read_" + contained_file + ".file", "wb") as output:
            for line in my_zip_file.open(contained_file).readlines():
                print(line)
                output.write(line)

Tags: python, web-scraping, beautifulsoup

Solution

To get all the required links, you can use the find_all() method with a custom function. The function matches <td> tags whose text ends with "csv.zip".

Here, data is the HTML snippet from the question:

from bs4 import BeautifulSoup

soup = BeautifulSoup(data, 'html.parser')

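# Match the <td> cells whose text ends with "csv.zip", then take the next
# <a> in document order, which is the link cell of the same table row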
for td in soup.find_all(lambda tag: tag.name=='td' and tag.text.strip().endswith('csv.zip')):
    link = td.find_next('a')
    print(td.get_text(strip=True), link['href'] if link else '')

Prints:

testfile_20190725_csv.zip /test1/servlets/mbDownload?doclookupId=671334586
testfile_20190724_csv.zip 
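
(The second row prints an empty link because the snippet in the question is cut off before that row's <a> tag; the link['href'] if link else '' guard covers exactly that case. On the full page, find_next('a') walks forward from the matched <td> and returns the zip link in the same row.)

To turn the matches into actual downloads, the same loop can be combined with the download/streaming code from the question. Below is a sketch under the question's assumptions (mainurl as the site root, the f:\temp\downloaded folder, and a hypothetical cap of max_files = 7 CSV zips); urljoin resolves the relative hrefs against the site root:

from io import BytesIO
from urllib.parse import urljoin
from zipfile import ZipFile

import requests
from bs4 import BeautifulSoup

mainurl = "http://www.test.com/"
url = "http://www.test.com/thisapp/GetReports.do?Id=12331"
max_files = 7  # only the first 7 CSV zips

soup = BeautifulSoup(requests.get(url).content, "html.parser")

count = 0
for td in soup.find_all(lambda tag: tag.name == 'td'
                                    and tag.text.strip().endswith('csv.zip')):
    if count >= max_files:
        break
    link = td.find_next('a')
    if link is None:
        continue  # skip rows without a download link
    count += 1

    zipfilename = td.get_text(strip=True)
    zipurl = urljoin(mainurl, link['href'])

    # Download the zip into memory and save a copy to disk
    zippedData = requests.get(zipurl).content
    print("Saving to", "f:\\temp\\downloaded\\" + zipfilename)
    with open("f:\\temp\\downloaded\\" + zipfilename, "wb") as output:
        output.write(zippedData)

    # Stream the contents of the archive line by line
    with ZipFile(BytesIO(zippedData)) as my_zip_file:
        for contained_file in my_zip_file.namelist():
            for line in my_zip_file.open(contained_file):
                print(line)

requests is used for the download step as well, instead of urllib.request, purely to keep the sketch on one library; either works.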
