How can I get a website to consistently return content from a GET request when it is inconsistent?

Problem Description

I posted a similar question earlier, but I think this is a more refined version of it.

I'm trying to scrape: https://www.prosportstransactions.com/football/Search/SearchResults.php?Player=&Team=&BeginDate=&EndDate=&PlayerMovementChkBx=yes&submit=Search&start=0

When I send GET requests to the URL, my code randomly throws an error. After debugging, I saw the following happening. A GET request is sent to a URL like this one (an example URL; it can happen on any page): https://www.prosportstransactions.com/football/Search/SearchResults.php?Player=&Team=&BeginDate=&EndDate=&PlayerMovementChkBx=yes&submit=Search&start=2400

The page then displays "There were no matching transactions found." However, if I refresh the page, the content loads. I'm using BeautifulSoup and Selenium, and I've added sleep statements to my code hoping they would help, but to no avail. Is this a problem on the website's side? It makes no sense to me that one GET request returns nothing while the exact same request returns something. Also, is there anything I can do to fix it, or is it out of my hands?
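To confirm the behavior happens at the HTTP level (and not just inside Selenium), it can help to request the same page twice with plain requests and check whether the results table is present each time. This is a minimal sketch, not part of the original question; it checks for the table itself rather than relying on the exact "no matching transactions" wording:

import requests
from bs4 import BeautifulSoup

# Example URL from the question; any start= offset can show the problem
url = ('https://www.prosportstransactions.com/football/Search/SearchResults.php'
       '?Player=&Team=&BeginDate=&EndDate=&PlayerMovementChkBx=yes'
       '&submit=Search&start=2400')

for attempt in range(2):
    response = requests.get(url)
    soup = BeautifulSoup(response.content, 'html.parser')
    table = soup.find('table', attrs={'class': 'datatable center'})
    print('attempt %d: table %s' % (attempt, 'found' if table else 'missing'))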

Here is a sample of my code:

import time

from bs4 import BeautifulSoup
from selenium import webdriver


def scrapeWebsite(url, start, stop):
    driver = webdriver.Chrome(executable_path='/Users/Downloads/chromedriver')
    print(start, stop)

    madeDict = {"Date": [], "Team": [], "Name": [], "Relinquished": [], "Notes": []}

    #for i in range(0, 214025, 25):
    for i in range(start, stop, 25):
        print("Current Page: " + str(i))
        currUrl = url + str(i)

        driver.get(currUrl)
        # Sleep so dynamically loaded content has a chance to render
        time.sleep(1)
        soupPage = BeautifulSoup(driver.page_source, 'html.parser')

        info = soupPage.find("table", attrs={'class': 'datatable center'})
        time.sleep(1)
        extractedInfo = info.findAll("td")  # raises here when the table is missing

The error occurs on the last line. findAll raises an error because find returns None when the page content is empty (meaning the GET request returned nothing), and None has no findAll method.
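Until the root cause is known, the crash itself can be avoided by checking whether the table was found before extracting cells. A minimal sketch meant to slot into the loop above (the skip-on-empty behavior is an illustration, not part of the original code):

        info = soupPage.find("table", attrs={'class': 'datatable center'})
        if info is None:
            # Page came back empty; skip (or re-request) instead of crashing
            print("Empty page at " + currUrl)
            continue
        extractedInfo = info.findAll("td")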

Tags: beautifulsoup

Solution


I put together a workaround using try/except.

The request loop is probably too fast for the page to keep up.

See the example below; it works like a charm:

import time

import requests
from bs4 import BeautifulSoup

URL = 'https://www.prosportstransactions.com/football/Search/SearchResults.php?Player=&Team=&BeginDate=&EndDate=' \
      '&PlayerMovementChkBx=yes&submit=Search&start=%s'


def scrape(start=0, stop=214525):
    for page in range(start, stop, 25):
        current_url = URL % page

        print('scrape: current %s' % page)
        while True:
            try:
                response = requests.get(current_url)
                if response.ok:
                    soup = BeautifulSoup(response.content.decode('utf-8'), features='html.parser')

                    table = soup.find("table", attrs={'class': 'datatable center'})
                    # If the page came back empty, table is None and the
                    # AttributeError below sends us around the retry loop again
                    trs = table.find_all('tr')

                    # Keep the header row only on the first page; skip it after
                    slice_pos = 1 if page > 0 else 0
                    for tr in trs[slice_pos:]:
                        yield tr.find_all('td')

                    break
            except Exception as exception:
                print(exception)
                # Back off briefly before retrying the same page
                time.sleep(1)


for columns in scrape():
    values = [column.text.strip() for column in columns]
    # Continue your code ...
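From there, the yielded rows can be folded back into the dictionary shape the question builds. A sketch under the assumption that every data row has exactly the five columns Date, Team, Name, Relinquished, and Notes (notice rows may not, hence the length check):

madeDict = {"Date": [], "Team": [], "Name": [], "Relinquished": [], "Notes": []}

for columns in scrape():
    values = [column.text.strip() for column in columns]
    if len(values) == len(madeDict):
        # Cell order is assumed to match the key order of madeDict
        for key, value in zip(madeDict, values):
            madeDict[key].append(value)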
