How can I get Python to compare scraped data against a populated CSV?

Problem description

I have a Python script that creates a CSV file and populates it with the names of the current US state supreme court justices. The scraping and the CSV writing already work.

I am trying to have Python run daily (I have already set up Windows Task Scheduler), scrape the same pages, compare the new data against the old CSV, alert the user to whether the scraped data matches what is in the CSV, track any changes, and update the CSV with the newly scraped data.

I am new to Python, so I am not sure how to proceed with my code.

What can I add to my code to accomplish this? Thanks! Here is my current code:

import requests
from bs4 import BeautifulSoup
import pandas as pd

# Pages to scrape (avoid naming this `list`, which shadows the built-in)
urls = ['https://ballotpedia.org/Alabama_Supreme_Court',
        'https://ballotpedia.org/Alaska_Supreme_Court',
        'https://ballotpedia.org/Arizona_Supreme_Court',
        'https://ballotpedia.org/Arkansas_Supreme_Court',
        'https://ballotpedia.org/California_Supreme_Court',
        'https://ballotpedia.org/Colorado_Supreme_Court',
        'https://ballotpedia.org/Connecticut_Supreme_Court',
        'https://ballotpedia.org/Delaware_Supreme_Court']

temp_dict = {}

for page in urls:
    r = requests.get(page)
    soup = BeautifulSoup(r.content, 'html.parser')

    # Collect the linked justice names from each court's sortable wikitable
    temp_dict[page.split('/')[-1]] = [
        item.text
        for item in soup.select("table.wikitable.sortable.jquery-tablesorter a")
    ]

# One column per court; shorter columns are padded with NaN
df = pd.DataFrame.from_dict(temp_dict, orient='index').transpose()
df.to_csv('State Supreme Court Justices.csv')
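
For the compare/alert/update step the question describes, here is a minimal sketch, assuming the script above has already written 'State Supreme Court Justices.csv' at least once and that df is the freshly scraped frame it builds; the helper name and the per-court diff logic are illustrative, not part of the original code. equals() is a strict whole-frame check, so the set comparison below does the more useful per-court reporting:

import os
import pandas as pd

CSV_PATH = 'State Supreme Court Justices.csv'  # file written by the script above

def compare_and_update(df, csv_path=CSV_PATH):
    """Compare a freshly scraped DataFrame with the saved CSV, report
    per-court differences, then overwrite the CSV with the new data."""
    if not os.path.exists(csv_path):
        df.to_csv(csv_path)  # first run: just create the file
        return
    old_df = pd.read_csv(csv_path, index_col=0)
    if old_df.equals(df):
        print("No changes: today's scrape matches the existing CSV.")
    else:
        for court in df.columns:  # report additions/removals per court
            old = set(old_df[court].dropna()) if court in old_df else set()
            new = set(df[court].dropna())
            if old != new:
                print(f'{court}: added {sorted(new - old)}, removed {sorted(old - new)}')
    df.to_csv(csv_path)  # keep the CSV current either way

# After building df as in the script above:
# compare_and_update(df)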

Tags: python, pandas, csv, web-scraping, beautifulsoup

Solution

I don't quite understand your question, so let me give you an example.

from simplified_scrapy import Spider, SimplifiedDoc, SimplifiedMain
from datetime import datetime
import requests

class MySpider(Spider):
  name = 'ballotpedia.org'
  allowed_domains = ['ballotpedia.org']
  start_urls = [
    'https://ballotpedia.org/Alabama_Supreme_Court',
    'https://ballotpedia.org/Alaska_Supreme_Court',
    'https://ballotpedia.org/Arizona_Supreme_Court',
    'https://ballotpedia.org/Arkansas_Supreme_Court',
    'https://ballotpedia.org/California_Supreme_Court',
    'https://ballotpedia.org/Colorado_Supreme_Court',
    'https://ballotpedia.org/Connecticut_Supreme_Court',
    'https://ballotpedia.org/Delaware_Supreme_Court'
  ]
  # refresh_urls = True # For debugging. If refresh_urls = True, start_urls will be crawled again.
  custom_down = True # All pages are downloaded with the custom method below

  def customDown(self, url):
    if url["url"] not in self.start_urls: return ""
    r = requests.get(url['url']) # Use requests to download the page; return the HTML string
    return r.content.decode('utf-8')

  def extract(self, url, html, models, modelNames):
    if url["url"] not in self.start_urls: return True
    doc = SimplifiedDoc(html)
    lstA = doc.select('table.wikitable sortable jquery-tablesorter').listA(url=url["url"]) # Get the link data for subsequent crawling
    return {"Urls": lstA, "Data": lstA} # Return the data to the framework

  # To re-crawl start_urls on a schedule, override this method.
  # It returns an array of hour/minute pairs.
  def plan(self):
    if datetime.now().weekday() >= 5: # Skip weekends (Saturday and Sunday)
      return []
    else:
      return [{'hour': 8, 'minute': 30}, {'hour': 18, 'minute': 0}]

SimplifiedMain.startThread(MySpider()) # Start crawling
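
The plan() override above only re-crawls the pages on a schedule; it does not do the comparison the question asks about. One way to bolt that on, as a sketch with a hypothetical snapshot file and helper (not part of simplified_scrapy), is to diff each page's extracted names against the previous run from inside extract():

import json
import os

SNAPSHOT = 'justices_snapshot.json'  # hypothetical cache of the last run

def diff_and_save(court, names):
    """Print what changed for one court since the last run, then persist
    the new list so the next run has something to compare against."""
    snapshot = {}
    if os.path.exists(SNAPSHOT):
        with open(SNAPSHOT) as f:
            snapshot = json.load(f)
    old, new = set(snapshot.get(court, [])), set(names)
    if old != new:
        print(f'{court} changed: +{sorted(new - old)} -{sorted(old - new)}')
    snapshot[court] = sorted(new)
    with open(SNAPSHOT, 'w') as f:
        json.dump(snapshot, f, indent=2)

# Called from extract(), for example (url and lstA as in the spider above):
# diff_and_save(url['url'].split('/')[-1], [a['text'] for a in lstA])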
