Find indexed elements within a div using Selenium

Problem description

I am scraping the front end of a webpage and am having trouble getting the HTML text of a div nested inside another div.

Basically, I am simulating a click for each event listed on the page. From there, I want to scrape the event's date and time, as well as its location.

Here is an example of one of the pages I am trying to scrape:

https://www.bandsintown.com/e/1013664851-los-grandes-de-la-banda-at-aura-nightclub?came_from=257&utm_medium=web&utm_source=home&utm_campaign=event

<div class="eventInfoContainer-54d5deb3">
    <div class="lineupContainer-570750d2"> 
    <div class="eventInfoContainer-9e539994">
        <img src="assets.bandsintown.com/images.clock.svg">
        <div>Sunday, April 21st, 2019</div> <!-- *** -->
        <div class="eventInfoContainer-50768f6d">5:00PM</div> <!-- *** -->
     </div> 
<div class="eventInfoContainer-1a68a0e1">
    <img src="assets.bandsintown.com/images.clock.svg">
    <div class="eventInfoContainer-2d9f07df">
        <div>Aura Nightclub</div> <!-- *** -->
        <div>283 1st St., San Jose, CA 95113</div> <!-- *** -->
</div>

I have marked the elements I want to extract with asterisks: the date, the time, the venue, and the address. Here is my code:

base_url = 'https://www.bandsintown.com/?came_from=257&page='
events = []
eventContainerBucket = []
for i in range(1, 2):
    driver.get(base_url + str(i))

    # get event links
    event_list = driver.find_elements_by_css_selector('div[class^=eventList-] a[class^=event-]')
    # collect the href attribute of each event in event_list
    events.extend(list(event.get_attribute("href") for event in event_list))



# iterate through all events and open them.
for event in events:
    driver.get(event)
    uniqueEventContainer = driver.find_elements_by_css_selector('div[class^=eventInfoContainer-]')[0]
   
    print "Event information: "+ uniqueEventContainer.text

This prints:

Event information: Sunday, April 21st, 2019
3:00 PM
San Francisco Brewing Co.
3150 Polk St, Sf, CA 94109
View All The Fourth Son Tour Dates

My problem is that I cannot access the nested eventInfoContainer divs individually. For example, the date div is at position [1] because it is the second element (after the img) inside its parent div "eventInfoContainer-9e539994". That parent div "eventInfoContainer-9e539994" is itself at position [1], since it is likewise the second element of its own parent div "eventInfoContainer-54d5deb3" (coming after "lineupContainer").

By this logic, shouldn't I be able to access the date text with the following code (accessing the element at position [1], whose parent is the element at position [1] inside the container at position [0])?

for event in events:
    driver.get(event)
    uniqueEventContainer = driver.find_elements_by_css_selector('div[class^=eventInfoContainer-]')[0][1][1]

I get the following error:

TypeError: 'WebElement' object does not support indexing

Tags: python, selenium, indexing, web-scraping, beautifulsoup

Solution


When you index into the list of WebElements (which is what find_elements_by_css_selector('div[class^=eventInfoContainer-]') returns), you get a single WebElement, and a WebElement cannot be indexed any further. You can, however, split the WebElement's text to produce a list that can be indexed.
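As an aside that is not part of the original answer: although a WebElement itself cannot be indexed, you can call find_elements on it and index the resulting Python list. A minimal sketch, assuming the asker's driver already has an event page loaded, Selenium 3's find_elements_by_* API, and the same container order used in the snippets below:

containers = driver.find_elements_by_css_selector('div[class^=eventInfoContainer-]')
date_block = containers[1]                                # the div holding the clock icon, date and time
child_divs = date_block.find_elements_by_xpath('./div')   # direct <div> children only
date = child_divs[0].text    # 'Sunday, April 21st, 2019'
time = child_divs[1].text    # '5:00PM'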

If there is a regular structure across pages, you could load the HTML of the div into BeautifulSoup. Example with your URL:

from selenium import webdriver
from bs4 import BeautifulSoup as bs

d = webdriver.Chrome()
d.get('https://www.bandsintown.com/e/1013664851-los-grandes-de-la-banda-at-aura-nightclub?came_from=257&utm_medium=web&utm_source=home&utm_campaign=event')
# parse the outer event info container's HTML with BeautifulSoup
soup = bs(d.find_element_by_css_selector('[class^=eventInfoContainer-]').get_attribute('outerHTML'), 'lxml')
date = soup.select_one('img + div').text        # first div right after the clock icon
time = soup.select_one('img + div + div').text  # the div after the date
venue = soup.select_one('[class^=eventInfoContainer-]:nth-of-type(3) div > div').text
address = soup.select_one('[class^=eventInfoContainer-]:nth-of-type(3) div + div').text

print(date, time, venue, address)

If the line breaks are consistent:

containers = d.find_elements_by_css_selector('div[class^=eventInfoContainer-]')
array = containers[0].text.split('\n')
date = array[3]
time = array[4]
venue = array[5]
address = array[6]
print(date, time, venue, address)

Using indexing and splitting:

from selenium import webdriver
from bs4 import BeautifulSoup as bs

d = webdriver.Chrome()
d.get('https://www.bandsintown.com/e/1013664851-los-grandes-de-la-banda-at-aura-nightclub?came_from=257&utm_medium=web&utm_source=home&utm_campaign=event')
containers = d.find_elements_by_css_selector('div[class^=eventInfoContainer-]')
date_time = containers[1].text.split('\n')      # second matched container holds the date and time
i_date = date_time[0]
i_time = date_time[1]
venue_address = containers[3].text.split('\n')  # fourth matched container holds the venue and address
venue = venue_address[0]
address = venue_address[1]
print(i_date, i_time, venue, address)
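
As a follow-up sketch (also not part of the original answer), the same container indexing can be dropped back into the loop over event links from the question, assuming events has been collected as shown there and the container order is identical on every event page:

for event in events:
    d.get(event)
    containers = d.find_elements_by_css_selector('div[class^=eventInfoContainer-]')
    date_time = containers[1].text.split('\n')
    venue_address = containers[3].text.split('\n')
    print(date_time[0], date_time[1], venue_address[0], venue_address[1])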
