Web scraping with webdriver in BeautifulSoup

Problem Description

I'm trying to do paginated web scraping with BeautifulSoup, so I'm using webdriver to page through to the other pages. However, I'm really not sure of any other way to fetch content from a dynamic web page with webdriver and fit it into my code. Below is the complete code where I tried to implement webdriver, but webdriver doesn't work. The site I want to scrape is [link here][1]

import re
import requests
from bs4 import BeautifulSoup
from selenium import webdriver

MAX_PAGE_NUM = 5   # number of result pages to scrape
MAX_PAGE_DIG = 3   # pad page numbers to three digits, e.g. 001

driver = webdriver.Firefox()

for i in range(1, MAX_PAGE_NUM + 1):
    page_num = (MAX_PAGE_DIG - len(str(i))) * "0" + str(i)
    raw = requests.get('').text

driver.get(raw)

raw = raw.replace("</br>", "")

soup = BeautifulSoup(raw, 'html.parser')

name = soup.find_all('div', {'class' :'cbp-vm-companytext'})
phone = [re.findall('\>.*?<',d.find('span')['data-content'])[0][1:][:-1] for d in soup.find_all('div',{'class':'cbp-vm-cta'})]
addresses = [x.text.strip().split("\r\n")[-1].strip() for x in soup.find_all("div", class_='cbp-vm-address')]

print(addresses)
print(name)

num_page_items = len(addresses)
with open('results.csv', 'a') as f:
    for i in range(num_page_items):
        f.write(name[i].text + "," + phone[i] + "," + addresses[i] + "," +  "\n")

Obviously I've added webdriver to the code incorrectly. What do I need to fix so that webdriver works properly?

Tags: python, beautifulsoup, webdriver

Solution


If you use Selenium to read the page, then you can also use Selenium to search for the elements on the page.

Some items don't have companytext, so if you collect the companytext elements separately and the address/phone elements separately, you can end up with wrong pairs: (second name, first phone, first address), (third name, second phone, second address), and so on. It is better to find the element that groups name, phone and address together, and then search for name, phone and address inside that group. If it can't find a name within the group, then you have to put an empty name or search a different element which holds the name. I found that some items display an image with a logo instead of a name, and their name is in <img alt="...">.
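To see why separate find_all() lists fall out of step, here is a minimal sketch with made-up data (on the real page the gap comes from listings whose name exists only as a logo image):

names = ["Boutique A", "Boutique C"]    # "Boutique B" has only a logo, no companytext
phones = ["03-111", "03-222", "03-333"]

for name, phone in zip(names, phones):
    print(name, phone)

# Boutique A 03-111
# Boutique C 03-222   <- wrong pair: this is actually Boutique B's phone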

Writing the CSV data to the file with a plain write() is not a good idea, because an address may contain commas, which would create extra columns. Use the csv module instead; it quotes the address with " " so it stays in a single column.
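A quick sketch of that quoting behavior, using only the standard library (the row data is made up):

import csv
import sys

writer = csv.writer(sys.stdout)
# the comma inside the address makes csv quote the whole field automatically
writer.writerow(["Some Shop", "03-1234 5678", "12, Jalan Example, Selangor"])
# output: Some Shop,03-1234 5678,"12, Jalan Example, Selangor"

The full Selenium version: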

from selenium import webdriver
import csv

MAX_PAGE_NUM = 5

#driver = webdriver.Chrome()
driver = webdriver.Firefox()

with open('results.csv', 'w', newline='') as f:   # newline='' avoids blank rows on Windows
    csv_writer = csv.writer(f)
    csv_writer.writerow(["Business Name", "Phone Number", "Address"])

    for page_num in range(1, MAX_PAGE_NUM+1):
        #page_num = '{:03}'.format(page_num)
        url = 'https://www.yellowpages.my/listing/results.php?keyword=boutique&where=selangor&screen={}'.format(page_num)
        driver.get(url)
        for item in driver.find_elements_by_xpath('//div[@id="content_listView"]//li'):
            try:
                name = item.find_element_by_xpath('.//div[@class="cbp-vm-companytext"]').text
            except Exception as ex:
                #print('ex:', ex)
                # fall back to the logo's alt text when the item has no companytext
                name = item.find_element_by_xpath('.//a[@class="cbp-vm-image"]/img').get_attribute('alt')

            # data-content holds an HTML snippet; drop the trailing "</a>" and
            # keep only the text after the last ">"
            phone = item.find_element_by_xpath('.//div[@class="cbp-vm-cta"]//span[@data-original-title="Phone"]').get_attribute('data-content')
            phone = phone[:-4].split(">")[-1]

            # the address block spans several lines; the street address is the last one
            address = item.find_element_by_xpath('.//div[@class="cbp-vm-address"]').text
            address = address.split('\n')[-1]

            print(name, '|', phone, '|', address)
            csv_writer.writerow([name, phone, address])
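Note: the find_element_by_* / find_elements_by_* helpers used above were removed in Selenium 4. On a current Selenium the equivalent calls take a By locator, with the same XPath expressions, roughly like this:

from selenium.webdriver.common.by import By

for item in driver.find_elements(By.XPATH, '//div[@id="content_listView"]//li'):
    name = item.find_element(By.XPATH, './/div[@class="cbp-vm-companytext"]').text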

BTW: you don't have to convert the page number to three digits, i.e. 001; it also works with 1. But if you want to convert it, then use string formatting:

page_num = '{:03}'.format(i)
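Equivalent standard-library spellings, if you prefer them:

page_num = f'{i:03}'        # f-string (Python 3.6+)
page_num = str(i).zfill(3)  # zero-pad via str.zfill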

You can also use only requests and BeautifulSoup, without Selenium.

If you have to get the HTML from Selenium, then you have driver.page_source. But driver.get() needs a url, so then you don't need requests at all:

driver.get(url)
html = driver.page_source
soup = BeautifulSoup(html, 'html.parser')

EDIT: a version which uses only requests and BeautifulSoup, without Selenium. It only works when I use "lxml"; there seem to be some mistakes in the HTML, and "html.parser" can't parse it correctly.

import requests
from bs4 import BeautifulSoup as BS
import csv
#import webbrowser

MAX_PAGE_NUM = 5

#headers = {
#  "User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:74.0) Gecko/20100101 Firefox/74.0"
#}

with open('results.csv', 'w', newline='') as f:
    csv_writer = csv.writer(f)
    csv_writer.writerow(["Business Name", "Phone Number", "Address"])

    for page_num in range(1, MAX_PAGE_NUM+1):
        #page_num = '{:03}'.format(page_num)
        url = 'https://www.yellowpages.my/listing/results.php?keyword=boutique&where=selangor&screen={}'.format(page_num)

        response = requests.get(url) #, headers=headers)
        soup = BS(response.text, 'lxml')
        #soup = BS(response.text, 'html.parser')

        #with open('temp.html', 'w') as fh:
        #    fh.write(response.text)
        #webbrowser.open('temp.html')

        #all_items = soup.find('div', {'id': 'content_listView'}).find_all('li')
        #print('len:', len(all_items))

        #for item in all_items:
        for item in soup.find('div', {'id': 'content_listView'}).find_all('li'):
            try:
                name = item.find('div', {'class': 'cbp-vm-companytext'}).text
            except Exception as ex:
                #print('ex:', ex)
                name = item.find('a', {'class': 'cbp-vm-image'}).find('img')['alt']

            phone = item.find('div', {'class': 'cbp-vm-cta'}).find('span', {'data-original-title': 'Phone'})['data-content']
            phone = phone[:-4].split(">")[-1].strip()

            address = item.find('div', {'class': 'cbp-vm-address'}).text
            address = address.split('\n')[-1].strip()

            print(name, '|', phone, '|', address)
            csv_writer.writerow([name, phone, address])
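Note that "lxml" is a third-party parser (pip install lxml), while "html.parser" ships with Python. A small sketch for picking whichever is available:

from bs4 import BeautifulSoup as BS

try:
    import lxml  # only to check that the lxml parser is installed
    PARSER = 'lxml'
except ImportError:
    PARSER = 'html.parser'  # stdlib fallback; may mis-parse this page's broken HTML

soup = BS('<p>example</p>', PARSER)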
