For loop not working when web scraping multiple URLs: only one URL is scraped

Problem description

I am trying to scrape multiple websites for different types of products. I was able to scrape a single URL successfully, so I created a list of URLs to scrape and then export each product's name and price to a CSV file. However, it does not seem to work as intended.

Below is my code:

#imports
import pandas as pd
import requests
from bs4 import BeautifulSoup

#Product Websites For Consolidation
urls = [
    'https://www.aeroprecisionusa.com/ar15/lower-receivers/stripped-lowers?product_list_limit=all',
    'https://www.aeroprecisionusa.com/ar15/lower-receivers/complete-lowers?product_list_limit=all',
]
for url in urls:
    headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:71.0) Gecko/20100101 Firefox/71.0"}
    page = requests.get(url, headers=headers)
    soup = BeautifulSoup(page.content, 'html.parser')


    #Locating All Products On Page
    all_products_on_page = soup.find(class_='products wrapper container grid products-grid')
    individual_items = all_products_on_page.find_all(class_='product-item-info')


    #Breaking Down Product By Name And Price
    aero_product_name = [item.find(class_='product-item-link').text for item in individual_items]
    aero_product_price = [p.text if (p := item.find(class_='price')) is not None else 'no price' for item in individual_items]


    Aero_Stripped_Lowers_Consolidated = pd.DataFrame(
        {'Aero Product': aero_product_name,
        'Prices': aero_product_price,
        })

    Aero_Stripped_Lowers_Consolidated.to_csv('MasterPriceTracker.csv')

The code exports the product names and prices to a CSV file as intended, but only for the second URL, the "complete-lowers" one. I'm not sure what I messed up in the for loop that prevents it from scraping both URLs. I have verified that the HTML structure is the same for both URLs.

Any help would be greatly appreciated!

Tags: python, web-scraping, beautifulsoup, python-requests

Solution


Move the to_csv call outside the loop. Because it is inside the loop, the CSV file is rewritten on every iteration, so only the last URL's data ends up in the file.
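The overwrite behavior can be demonstrated in isolation: DataFrame.to_csv opens the file in write mode (mode='w') by default, so each call replaces the previous contents. A minimal sketch, with placeholder frames standing in for the two URLs' results and a temporary file path:

```python
import os
import tempfile

import pandas as pd

# Two frames standing in for the two URLs' scraped results
first = pd.DataFrame({'Aero Product': ['lower A'], 'Prices': ['$99.99']})
second = pd.DataFrame({'Aero Product': ['lower B'], 'Prices': ['$149.99']})

path = os.path.join(tempfile.mkdtemp(), 'MasterPriceTracker.csv')

# Calling to_csv twice on the same path, as the loop in the question does
for frame in (first, second):
    frame.to_csv(path)

# Only the second frame survives: to_csv defaults to mode='w'
print(pd.read_csv(path, index_col=0))
```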

Inside the loop, append each page's data to a DataFrame created before the loop starts. Also, headers does not need to be redefined on every iteration, so I pulled it outside the loop as well.

import pandas as pd
import requests
from bs4 import BeautifulSoup

#Product Websites For Consolidation
urls = [
    'https://www.aeroprecisionusa.com/ar15/lower-receivers/stripped-lowers?product_list_limit=all',
    'https://www.aeroprecisionusa.com/ar15/lower-receivers/complete-lowers?product_list_limit=all',
]

Aero_Stripped_Lowers_Consolidated = pd.DataFrame(columns=['Aero Product', 'Prices'])
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:71.0) Gecko/20100101 Firefox/71.0"}

for url in urls:
    page = requests.get(url, headers=headers)
    soup = BeautifulSoup(page.content, 'html.parser')


    #Locating All Products On Page
    all_products_on_page = soup.find(class_='products wrapper container grid products-grid')
    individual_items = all_products_on_page.find_all(class_='product-item-info')


    #Breaking Down Product By Name And Price
    aero_product_name = [item.find(class_='product-item-link').text for item in individual_items]
    aero_product_price = [p.text if (p := item.find(class_='price')) is not None else 'no price' for item in individual_items]


    # Accumulate this page's results; DataFrame.append was removed in
    # pandas 2.0, so use pd.concat instead
    Aero_Stripped_Lowers_Consolidated = pd.concat([
        Aero_Stripped_Lowers_Consolidated,
        pd.DataFrame({'Aero Product': aero_product_name,
                      'Prices': aero_product_price}),
    ], ignore_index=True)

Aero_Stripped_Lowers_Consolidated.to_csv('MasterPriceTracker.csv')
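Growing a DataFrame by concatenating to it inside the loop copies the accumulated data on every iteration. A common alternative is to collect one frame per URL in a list and concatenate once after the loop; a sketch with placeholder data standing in for the scraped name/price lists:

```python
import pandas as pd

# Placeholder per-URL results standing in for the scraped lists
scraped = [
    {'Aero Product': ['lower A', 'lower B'], 'Prices': ['$99.99', '$149.99']},
    {'Aero Product': ['lower C'], 'Prices': ['no price']},
]

# One frame per URL, collected in a list
frames = [pd.DataFrame(data) for data in scraped]

# A single concat after the loop avoids repeatedly copying the growing frame
combined = pd.concat(frames, ignore_index=True)
print(combined)
```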
