Iterating over a list with BeautifulSoup

Problem description

I'm using BeautifulSoup4 to build a JSON-style list containing the "title", "company", "location", "date posted", and "link" from a public LinkedIn job search. I've got it formatted the way I want, but it only picks up a single job listing from the page, and I'd like it to iterate over every job on the page in the same format.

For example, I'm trying to achieve this:

[{'title': 'Job 1', 'company': 'company 1.', 'location': 'sunny side, California', 'date posted': '2 weeks ago', 'link': 'example1.com'}]

[{'title': 'Job 2', 'company': 'company 2.', 'location': 'runny side, California', 'date posted': '2 days ago', 'link': 'example2.com'}]

I tried changing lines 48, 52, 56, 60, and 64 from contents.find to contents.findAll, but that returns everything in one go rather than in the per-job order I'm trying to achieve.
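For context, the difference between the two calls explains this: find returns only the first matching tag, while find_all returns every match in a single list, so stringifying that list concatenates all the values. A minimal illustration:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<p>a</p><p>b</p>', 'html.parser')

# find: first match only
print(soup.find('p').text)                    # → a

# find_all: every match, as one list
print([p.text for p in soup.find_all('p')])   # → ['a', 'b']
```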

from io import StringIO
from html.parser import HTMLParser

from bs4 import BeautifulSoup
import requests


class MLStripper(HTMLParser):
    # MLStripper was referenced but not defined in the original snippet;
    # this is the common html.parser-based implementation it relies on.
    def __init__(self):
        super().__init__()
        self.text = StringIO()

    def handle_data(self, d):
        self.text.write(d)

    def get_data(self):
        return self.text.getvalue()


def strip_tags(html):
    s = MLStripper()
    s.feed(html)
    return s.get_data()


def search_website(url):
    # Search HTML Page
    result = requests.get(url)
    content = result.content

    soup = BeautifulSoup(content, 'html.parser')

    # Job List
    jobs = []

    for contents in soup.find_all('body'):
        # Title
        title = contents.find('h3', attrs={'class': 'result-card__title job-result-card__title'})
        formatted_title = strip_tags(str(title))

        # Company
        company = contents.find('h4', attrs={'class': 'result-card__subtitle job-result-card__subtitle'})
        formatted_company = strip_tags(str(company))

        # Location
        location = contents.find('span', attrs={'class': 'job-result-card__location'})
        formatted_location = strip_tags(str(location))

        # Date Posted
        posted = contents.find('time', attrs={'class': 'job-result-card__listdate'})
        formatted_posted = strip_tags(str(posted))

        # Apply Link
        links = contents.find('a', attrs={'class': 'result-card__full-card-link'})
        formatted_link = links.get('href')

        # Add a new compiled job to our list
        jobs.append({'title': formatted_title,
                     'company': formatted_company,
                     'location': formatted_location,
                     'date posted': formatted_posted,
                     'link': formatted_link
                     })

    # Return our jobs
    return jobs


link = "https://www.linkedin.com/jobs/search/?currentJobId=1396095018&distance=25&f_E=3%2C4&f_LF=f_AL&geoId=102250832&keywords=software%20engineer&location=Mountain%20View%2C%20California%2C%20United%20States"


print(search_website(link))

I'd like the output to look like:

[{'title': 'x', 'company': 'x', 'location': 'x', 'date posted': 'x', 'link': 'x'}] [{'title': 'x', 'company': 'x', 'location': 'x', 'date posted': 'x', 'link': 'x'}] +..

The output when switching to findAll returns:

[{'title': 'x''x''x''x''x', 'company': 'x''x''x''x''x', 'location': 'x''x''x''x', 'date posted': 'x''x''x''x', 'link': 'x''x''x''x'}]

Tags: python, list, dictionary, web-scraping, beautifulsoup

Solution


Here's a simplified version of your code, but it should get you going:

import requests
from bs4 import BeautifulSoup as bs

result = requests.get('https://www.linkedin.com/jobs/search/?distance=25&f_E=2%2C3&f_JT=F&f_LF=f_AL&geoId=102250832&keywords=software%20engineer&location=Mountain%20View%2C%20California%2C%20United%20States')

soup = bs(result.content, 'html.parser')

# Job List
jobs = []

for contents in soup.find_all('body'):
    # Title
    title = contents.find('h3', attrs={'class': 'result-card__title job-result-card__title'})

    # Company
    company = contents.find('h4', attrs={'class': 'result-card__subtitle job-result-card__subtitle'})

    # Location
    location = contents.find('span', attrs={'class': 'job-result-card__location'})

    # Date Posted
    posted = contents.find('time', attrs={'class': 'job-result-card__listdate'})

    # Apply Link
    link = contents.find('a', attrs={'class': 'result-card__full-card-link'})

    # Add a new compiled job to our list
    jobs.append({'title': title.text,
                 'company': company.text,
                 'location': location.text,
                 'date posted': posted.text,
                 'link': link.get('href')
                 })

for job in jobs:
    print(job)

Output:

{'title': 'Systems Software Engineer - Controls', 'company': 'Blue River Technology', 'location': 'Sunnyvale, California', 'date posted': '1 day ago', 'link': 'https://www.linkedin.com/jobs/view/systems-software-engineer-controls-at-blue-river-technology-1380882942?position=1&pageNum=0&trk=guest_job_search_job-result-card_result-card_full-click'}
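To get one dict per job, the key is to loop over the individual result cards rather than the single <body> tag, so each card yields its own entry. A self-contained sketch of that pattern (the HTML below is mock data, and the class names are assumed to match the page's markup at the time):

```python
from bs4 import BeautifulSoup

# Mock markup mimicking two LinkedIn result cards (class names assumed).
html = """
<ul>
  <li class="result-card">
    <h3 class="result-card__title">Job 1</h3>
    <h4 class="result-card__subtitle">company 1</h4>
    <span class="job-result-card__location">sunny side, California</span>
    <time class="job-result-card__listdate">2 weeks ago</time>
    <a class="result-card__full-card-link" href="https://example1.com">apply</a>
  </li>
  <li class="result-card">
    <h3 class="result-card__title">Job 2</h3>
    <h4 class="result-card__subtitle">company 2</h4>
    <span class="job-result-card__location">runny side, California</span>
    <time class="job-result-card__listdate">2 days ago</time>
    <a class="result-card__full-card-link" href="https://example2.com">apply</a>
  </li>
</ul>
"""

soup = BeautifulSoup(html, 'html.parser')

jobs = []
# Iterate over each card, not over <body>, so every job becomes its own dict.
for card in soup.find_all('li', attrs={'class': 'result-card'}):
    jobs.append({
        'title': card.find('h3').text.strip(),
        'company': card.find('h4').text.strip(),
        'location': card.find('span', attrs={'class': 'job-result-card__location'}).text.strip(),
        'date posted': card.find('time').text.strip(),
        'link': card.find('a', attrs={'class': 'result-card__full-card-link'}).get('href'),
    })

for job in jobs:
    print(job)
```

Since find is scoped to the card it is called on, each iteration picks up only that card's fields, which produces the separate-dict output the question asks for.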

