
Problem description

I am scraping data from a website called Startup India, trying to extract each profile's URL and name. Some profiles have no URL, and for those I still want to record the name and set the URL to a placeholder. I have tried several alternatives, such as try-except statements and if-else statements, but none of them work, so I need help.

Here is the code:

import logging
import os

import scrapy
from selenium import webdriver
from selenium.webdriver.chrome.options import Options as ChromeOptions

CHROME_DRIVER_WINDOW_PATH = "C:/Users/RAJ/PycharmProjects/WebCrawler/WebCrawler/WebCrawler/spiders/chromedriver.exe"


class ProductSpider(scrapy.Spider):
    name = "product_spider"
    allowed_domains = ['www.startupindia.gov.in']  # domains only, not full URLs
    start_urls = [
        'https://www.startupindia.gov.in/content/sih/en/search.html?industries=sih:industry/agriculture&sectors=sih:industry/agriculture/dairy-farming&states=sih:location/india/andhra-pradesh&roles=Startup&page=0']

    def __init__(self):
        opts = ChromeOptions()
        opts.add_argument("--headless")  # run without a visible browser window

        # the options object must be passed in, otherwise --headless is ignored
        self.driver = webdriver.Chrome(executable_path=CHROME_DRIVER_WINDOW_PATH, options=opts)

    def parse(self, response):
        self.driver.get(response.url)

        links = self.driver.find_elements_by_xpath("//*[@id='persona-results']//a[@class='img-wrap']")

        for link in links:
            try:
                link.click()  # click the image card; the profile opens in a new tab
                # switch to the newly opened tab (always the last handle)
                self.driver.switch_to.window(self.driver.window_handles[-1])
                logging.info(self.driver.current_url)
                self.scrape_data()

                # return to the results tab
                self.driver.switch_to.window(self.driver.window_handles[0])
            except Exception as e:
                logging.error(e)
        # company_url = self.driver.find_element_by_css_selector('div.container div.company-name span a')
        # company_url_text = company_url.text

    def scrape_data(self):
        url_of_comp = self.driver.find_element_by_css_selector('div.container div.company-name span > a').text
        name = self.driver.find_element_by_css_selector('div.container div.company-name p').text
        logging.info(url_of_comp)
        logging.info(name)

Any help with the code would be appreciated.

Tags: python, selenium, selenium-webdriver

Solution


You don't have to crawl each detail page to scrape the name and URL; the listing page alone has everything you need.

See the updated parse function:

    def parse(self, response):
        self.driver.get(response.url)

        item_list = []
        # each result card is an <a class="img-wrap"> whose href is the profile URL
        list_items = self.driver.find_elements_by_xpath("//*[@id='persona-results']//a[@class='img-wrap']")
        for item in list_items:
            items = {
                "url": item.get_attribute("href"),
                "name": item.find_element_by_xpath('./div/div[@class="events-details"]/h3').text,
            }
            item_list.append(items)
            yield items
        print(item_list)
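The question also asked how to keep going when an element is missing on some profiles. One way to do that (a sketch, not part of the original answer; `safe_text` is a hypothetical helper name) is to wrap each lookup in a small function that catches the lookup exception and returns a default instead:

```python
# Hypothetical helper: run any zero-argument lookup (e.g. a Selenium
# find_element call) and fall back to a default if it raises.
def safe_text(lookup, default="N/A"):
    try:
        return lookup()
    except Exception:
        # Selenium raises NoSuchElementException here; catching broadly
        # keeps the example self-contained.
        return default


# Illustrative usage inside the loop (item is a Selenium WebElement):
#   name = safe_text(lambda: item.find_element_by_xpath(
#       './div/div[@class="events-details"]/h3').text)
#   url = safe_text(lambda: item.get_attribute("href"))
```

With this, a profile that is missing a URL or a name still yields an item with a placeholder value instead of aborting the whole loop.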
