Writing a loop: BeautifulSoup and lxml for getting page content in a page-to-page skip setting

Problem Description

Update: here is an image of one of the more than 6600 target pages: https://europa.eu/youth/volunteering/organisation/48592 - see below for images, an explanation and a description of the targets and of the data that is wanted.

I am new to working with data in the field of volunteering services. Any help is appreciated. Over the past few days I have learned a lot from coding heroes such as αԋɱҽԃ αмєяιcαη and KunduK.

Basically, our goal is to get a quick overview of a set of opportunities for free volunteering in Europe. I have the list of URLs I want to use to fetch the data, and I can do this for one such URL. At the moment I am working on an approach to dig deeper into Python programming: I already have several parser parts that work - see the overview over a few pages below. By the way: I think we should gather the information with pandas and store it in a csv...

...and so on and so forth... - [Note: not every URL and id is backed by a content page - so we need an incremental n+1 setup] so that we can count through the pages one by one and increment by n+1. A rough sketch of that idea is shown right below.
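A minimal sketch of that probing idea, assuming (and this is only an assumption about the site) that an id without a content page does not answer with HTTP 200:

import requests

BASE = "https://europa.eu/youth/volunteering/organisation/{}"

def probe(start, stop):
    # walk the ids one by one (the n+1 idea) and yield only the ones
    # that answer with a real page
    with requests.Session() as session:
        for org_id in range(start, stop):
            response = session.get(BASE.format(org_id))
            if response.status_code == 200:   # assumption: missing ids are not 200
                yield org_id, response.content

# usage sketch:
# for org_id, content in probe(48590, 48600):
#     print(org_id)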

See the examples:

Approach: I used CSS selectors; XPath and CSS selectors do the same job, but - with BS or lxml we can use either one, or mix them with find() and findall().

So here I run this mini approach:

from bs4 import BeautifulSoup
import requests

url = 'https://europa.eu/youth/volunteering/organisation/50160'
response = requests.get(url)
soup = BeautifulSoup(response.content, 'lxml')

# CSS path to the <i> tag that holds the organisation name
tag_info = soup.select('.col-md-12 > p:nth-child(3) > i:nth-child(1)')
print(tag_info[0].text)

Output: Norwegian Judo Federation
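Since CSS selectors can be mixed with find() / find_all(), roughly the same lookup can also be written like this - the class name and the rough position of the <i> tag are simply taken from the selector above, so treat them as assumptions about the page:

from bs4 import BeautifulSoup
import requests

url = 'https://europa.eu/youth/volunteering/organisation/50160'
response = requests.get(url)
soup = BeautifulSoup(response.content, 'lxml')

# mix: a CSS selector for the container, then find_all()/find() for the rest
col = soup.select_one('.col-md-12')
if col is not None:
    paragraphs = col.find_all('p')        # all <p> tags inside the column
    if len(paragraphs) >= 3:
        tag = paragraphs[2].find('i')     # roughly the third <p>, first <i>
        if tag is not None:
            print(tag.get_text(strip=True))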

Mini approach 2:

from lxml import html
import requests

url = 'https://europa.eu/youth/volunteering/organisation/50160'
response = requests.get(url)
tree = html.fromstring(response.content)

# XPath: the <p> whose text contains the word 'Norwegian'
tag_info = tree.xpath("//p[contains(text(),'Norwegian')]")
print(tag_info[0].text)

Output: Norwegian Judo Federation (NJF) is a center organisation for Norwegian Judo clubs. NJF has 65 member clubs, which have about 4500 active members. 73 % of the members are between ages of 3 and 19. NJF is organized in The Norwegian Olympic and Paralympic Committee and Confederation of Sports (NIF). We are a member organisation in European Judo Union (EJU) and International Judo Federation (IJF). NJF offers and organizes a wide range of educational opportunities to our member clubs.
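The XPath above keys on the literal word 'Norwegian', which of course only works for this one organisation. A more generic sketch of the same idea (the class name and position are again just taken from the CSS selector above, so they are assumptions):

from lxml import html
import requests

url = 'https://europa.eu/youth/volunteering/organisation/50160'
response = requests.get(url)
tree = html.fromstring(response.content)

# aim at roughly the same element as '.col-md-12 > p:nth-child(3) > i:nth-child(1)'
tag_info = tree.xpath("//*[contains(@class, 'col-md-12')]/p[3]/i[1]")
if tag_info:
    print(tag_info[0].text_content().strip())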

And so on and so forth. What I want to achieve: the aim is to gather all the interesting information from all 6800 pages - that means information such as:

see a picture here

...and then iterate to the next page, fetch all the information, and so on. So I tried the next step to gain more experience: ...gathering the information from all the pages. Note: we have 6926 pages.


The question is - regarding the URLs - how do I find out which one is the first URL and which one is the last - idea: what if we iterate from zero to 10,000!?

Using the numbers in the URLs!?

import requests
from bs4 import BeautifulSoup
import pandas as pd

numbers = [48592, 50160]


def Main(url):
    with requests.Session() as req:
        for num in numbers:
            response = req.get(url.format(num))
            soup = BeautifulSoup(response.content, 'lxml')
            tag_info = soup.select('.col-md-12 > p:nth-child(3) > i:nth-child(1)')
            print(tag_info[0].text)


Main("https://europa.eu/youth/volunteering/organisation/{}/")

But here I run into problems. I guess I overlooked something when combining the ideas from the parts above. Again: I think we should gather the information with pandas and store it in a csv...
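One way to combine the pieces - iterate over the ids, skip ids without a content page, gather everything with pandas and store it in a csv - could look roughly like this. The selector, the column names and the assumption that a missing id simply yields no match are taken from the snippets above, not verified against all pages:

import requests
from bs4 import BeautifulSoup
import pandas as pd

numbers = [48592, 50160]   # or e.g. range(0, 10000) for the brute-force idea

rows = []
with requests.Session() as session:
    for num in numbers:
        response = session.get(f"https://europa.eu/youth/volunteering/organisation/{num}")
        soup = BeautifulSoup(response.content, 'lxml')
        tag_info = soup.select('.col-md-12 > p:nth-child(3) > i:nth-child(1)')
        if not tag_info:       # no match: id without a content page, go on to n+1
            continue
        rows.append({"id": num, "name": tag_info[0].text})

df = pd.DataFrame(rows)
df.to_csv("organisations.csv", index=False)
print(df.head())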

Tags: python, loops, web-scraping, beautifulsoup

Solution


import requests
from bs4 import BeautifulSoup
import re
import csv
from tqdm import tqdm


first = "https://europa.eu/youth/volunteering/organisations_en?page={}"
second = "https://europa.eu/youth/volunteering/organisation/{}_en"


def catch(url):
    with requests.Session() as req:
        pages = []
        print("Loading All IDS\n")
        for page in tqdm(range(0, 347)):
            r = req.get(url.format(page))
            soup = BeautifulSoup(r.content, 'html.parser')
            # collect the organisation ids linked from this listing page
            numbers = [item.get("href").split("/")[-1].split("_")[0] for item in soup.findAll(
                "a", href=re.compile("^/youth/volunteering/organisation/"), class_="btn btn-default")]
            pages.extend(numbers)
        return pages


def parse(url):
    links = catch(first)
    with requests.Session() as req:
        with open("Data.csv", 'w', newline="", encoding="UTF-8") as f:
            writer = csv.writer(f)
            writer.writerow(["Name", "Address", "Site", "Phone",
                             "Description", "Scope", "Rec", "Send", "PIC", "OID", "Topic"])
            print("\nParsing Now... \n")
            for link in tqdm(links):
                r = req.get(url.format(link))
                soup = BeautifulSoup(r.content, 'html.parser')
                # the main <section> holds all the fields we want
                task = soup.find("section", class_="col-sm-12").contents
                name = task[1].text
                add = task[3].find(
                    "i", class_="fa fa-location-arrow fa-lg").parent.text.strip()
                try:
                    site = task[3].find("a", class_="link-default").get("href")
                except:
                    site = "N/A"
                try:
                    phone = task[3].find(
                        "i", class_="fa fa-phone").next_element.strip()
                except:
                    phone = "N/A"
                desc = task[3].find(
                    "h3", class_="eyp-project-heading underline").find_next("p").text
                scope = task[3].findAll("span", class_="pull-right")[1].text
                rec = task[3].select("tbody td")[1].text
                send = task[3].select("tbody td")[-1].text
                pic = task[3].select(
                    "span.vertical-space")[0].text.split(" ")[1]
                oid = task[3].select(
                    "span.vertical-space")[-1].text.split(" ")[1]
                topic = [item.next_element.strip() for item in task[3].select(
                    "i.fa.fa-check.fa-lg")]
                writer.writerow([name, add, site, phone, desc,
                                 scope, rec, send, pic, oid, "".join(topic)])


parse(second)

Note: I have tested the first 10 pages. If you want more speed, I suggest you use concurrent.futures. And if any errors come up, handle them with try/except.
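A rough sketch of that concurrent.futures idea, assuming the per-page work is moved into its own small function (fetch_one() below is a hypothetical helper, not part of the code above):

from concurrent.futures import ThreadPoolExecutor, as_completed
import requests
from bs4 import BeautifulSoup

def fetch_one(link):
    # hypothetical helper: download one organisation page and return (id, name)
    r = requests.get(f"https://europa.eu/youth/volunteering/organisation/{link}_en")
    soup = BeautifulSoup(r.content, 'html.parser')
    section = soup.find("section", class_="col-sm-12")
    name = section.contents[1].text if section else "N/A"
    return link, name

links = ["48592", "50160"]   # in practice: the ids collected by catch()
with ThreadPoolExecutor(max_workers=10) as executor:
    futures = [executor.submit(fetch_one, link) for link in links]
    for future in as_completed(futures):
        try:
            print(future.result())
        except Exception as exc:   # as suggested: wrap the risky parts in try/except
            print("failed:", exc)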

