
Problem description

I scraped a website and collected all of its data, and I want to store it in a JSON file so that I can use it as an API. The scraping function is called in a for loop; the code below is what I tried.

How can I append to the chapters list while keeping the title unchanged? I will also have more books, which will be a list as well. This is what I want to get:

{
  "title":"Kingdom",
   "chapters":[
    {
      "chapter-title": "Kingdom - 12",
      "images" : [
        "image1.jpg",
        "image2.jpg",
        "image3.jpg"
      ]
    },
    {
      "chapter-title": "Kingdom - 13",
      "images" : [
        "image1.jpg",
        "image2.jpg",
        "image3.jpg"
      ]
    },
    {
      "chapter-title": "Kingdom - 14",
      "images" : [
        "image1.jpg",
        "image2.jpg",
        "image3.jpg"
      ]
    },
    {
      "chapter-title": "Kingdom - 15",
      "images" : [
        "image1.jpg",
        "image2.jpg",
        "image3.jpg"
      ]
    }
  ]
}

This is what I get instead:

{
  "title":"Kingdom",
   "chapters":[
    {
      "chapter-title": "Kingdom - 12",
      "images" : [
        "image1.jpg",
        "image2.jpg",
        "image3.jpg"
      ]
    }
  ]
}{
  "title":"Kingdom",
   "chapters":[
    {
      "chapter-title": "Kingdom - 13",
      "images" : [
        "image1.jpg",
        "image2.jpg",
        "image3.jpg"
      ]
    }
  ]
}{
  "title":"Kingdom",
   "chapters":[
    {
      "chapter-title": "Kingdom - 14",
      "images" : [
        "image1.jpg",
        "image2.jpg",
        "image3.jpg"
      ]
    }
  ]
}
def getAllImages(url=""):
    chrome_options = Options()
    chrome_options.add_argument("--headless")

    driver = webdriver.Chrome(chrome_options=chrome_options)

    try:
        driver.get(url)
        driver.implicitly_wait(2)
    except Exception as e:
        print("Error Getting Images Page :", e)

    print(driver.title)

    divs = driver.find_elements_by_class_name("page-break ")

    images = []
    for div in divs:
        image = div.find_elements_by_tag_name("img")

        [images.append(j.get_attribute("src").strip()) for j in image]

    chapters ={"chapter-title": driver.title, "images": images}

    list = [{"title":"Kingdom","chapters":[]}]
    list[0]["chapters"].append(chapters)
    
    toJson = json.dumps(list, ensure_ascii=False, indent=2)
    with open("./" + "manga.json", "r+") as f:
        if len(f.read()) == 0:
            f.write(toJson)
        else:
            f.write(",\n" + toJson)

    print("Successfully created ")

for url in links:
    getAllImages(url)

Tags: python, json, selenium-webdriver, web-scraping, python-jsonschema

Solution
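The concatenated objects appear because each call to `getAllImages` builds a brand-new top-level `{"title": ..., "chapters": [...]}` object and appends its JSON text to the file, instead of appending the new chapter to the structure already stored there. One way to fix this (a minimal sketch, not the only approach; the helper name `save_chapter` and the default file name `manga.json` mirror the question's code) is to read the existing JSON back, append to its `chapters` list in memory, and rewrite the whole file:

```python
import json
import os

def save_chapter(chapter, path="manga.json", title="Kingdom"):
    """Append one chapter dict to the book stored in `path`,
    creating the file with an empty chapters list if needed."""
    # Load the existing data, or start fresh if the file is empty/missing
    if os.path.exists(path) and os.path.getsize(path) > 0:
        with open(path, "r", encoding="utf-8") as f:
            data = json.load(f)
    else:
        data = {"title": title, "chapters": []}

    # Append the new chapter and rewrite the whole file as one JSON object
    data["chapters"].append(chapter)
    with open(path, "w", encoding="utf-8") as f:
        json.dump(data, f, ensure_ascii=False, indent=2)
```

In `getAllImages`, you would then replace the file-writing block at the end with a single call such as `save_chapter({"chapter-title": driver.title, "images": images})`. Because the file always holds exactly one JSON object, re-reading and rewriting it keeps the output valid on every iteration of the loop; if you later want a list of books, the same pattern works with a top-level list that you search by title before appending.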
