Scraping skips some page URLs and does not write all values to the CSV

Problem description

I am new to Python; I only started yesterday. I want to scrape a website and collect the data in a dictionary. All imports are added at the beginning of the Python script.

title_and_urls = {} #dictionary
totalNumberOfPages = 12
for x in range(1,int(totalNumberOfPages)+1):
    url_pages = 'https://abd.com/api?&page=' +str(x)+'&year=2017'
    resp = requests.get(url_pages, timeout=60)
    soup = BeautifulSoup(resp.text, 'lxml')
    for div in soup.find_all('div', {"class": "block2"}):
        a = div.find('a')
        h3 = a.find('h3')
        print(h3,url_pages) #prints correct
        title_and_urls[h3.text] = base_enthu_url+a.attrs['href']

print(title_and_urls)


with open('dict.csv', 'wb') as csv_file:
    writer = csv.writer(csv_file)
    for key, value in title_and_urls.items():
       writer.writerow([key, value])

There are a couple of problems here:
1. I have 12 pages in total, but it skips pages 7 and 8.
2. The print line print(h3,url_pages) prints 60 items, while the CSV file contains only 36.

I appreciate any help and explanation. Please suggest best practices.

Tags: python, python-2.7, beautifulsoup

Solution


Wrap the request and parsing in a try/except block, so one failed page does not abort the whole loop:

import csv
import requests
from bs4 import BeautifulSoup

title_and_urls = {}  # dictionary of title -> url
totalNumberOfPages = 12
for x in range(1, int(totalNumberOfPages) + 1):
    try:
        url_pages = 'https://abd.com/api?&page=' + str(x) + '&year=2017'
        resp = requests.get(url_pages, timeout=60)
        soup = BeautifulSoup(resp.text, 'lxml')
        for div in soup.find_all('div', {"class": "block2"}):
            a = div.find('a')
            h3 = a.find('h3')
            print(h3, url_pages)
            title_and_urls[h3.text] = base_enthu_url + a.attrs['href']
    except Exception as e:
        # a page that times out or fails to parse is skipped instead of
        # crashing the loop; printing x shows which pages were skipped
        print('page %s skipped: %s' % (x, e))


with open('dict.csv', 'wb') as csv_file:
    writer = csv.writer(csv_file)
    for key, value in title_and_urls.items():
        writer.writerow([key, value])
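
As for the count mismatch (60 items printed, only 36 rows written): a dictionary keeps a single entry per key, so every repeated h3.text silently overwrites an earlier one before the CSV is ever written. If duplicates should be kept, collect (title, url) pairs in a list instead. A minimal sketch under that assumption (base_enthu_url here is a placeholder for the value defined elsewhere in your script):

import csv
import requests
from bs4 import BeautifulSoup

base_enthu_url = 'https://abd.com'  # placeholder; use the value from your own script
totalNumberOfPages = 12
rows = []  # a list of (title, url) pairs keeps duplicates, unlike a dict

for x in range(1, totalNumberOfPages + 1):
    url_pages = 'https://abd.com/api?&page=' + str(x) + '&year=2017'
    try:
        resp = requests.get(url_pages, timeout=60)
    except requests.exceptions.RequestException as e:
        print('page %s skipped: %s' % (x, e))
        continue
    soup = BeautifulSoup(resp.text, 'lxml')
    for div in soup.find_all('div', {"class": "block2"}):
        a = div.find('a')
        h3 = a.find('h3')
        rows.append((h3.text, base_enthu_url + a.attrs['href']))

with open('dict.csv', 'wb') as csv_file:  # 'wb' is the Python 2.7 mode; on Python 3 use open('dict.csv', 'w', newline='')
    writer = csv.writer(csv_file)
    writer.writerows(rows)

If the row count now matches the print count, the duplicate keys were the cause of the missing entries.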
