Can't write the results to a csv file in some customized manner

Problem Description

I've created a script to parse, from different containers of a webpage, the singers and their concerning links along with the actors and their concerning links. The script is doing fine. What I can't do is write the results to a csv file accordingly.

Link to the webpage: https://www.hindigeetmala.net/movie/2_states.htm

I've tried with:

import csv
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

base_url = 'https://www.hindigeetmala.net'
link = 'https://www.hindigeetmala.net/movie/2_states.htm'

res = requests.get(link)
soup = BeautifulSoup(res.text,"lxml")

with open("hindigeetmala.csv","w",newline="") as f:
    writer = csv.writer(f)
    writer.writerow(['singer_records','actor_records'])

    for item in soup.select("tr[itemprop='track']"):
        try:
            singers = [i.get_text(strip=True) for i in item.select("span[itemprop='byArtist']") if i.get_text(strip=True)]
        except Exception: singers = ""

        try:
            singer_links = [urljoin(base_url,i.get("href")) for i in item.select("a:has(> span[itemprop='byArtist'])") if i.get("href")]
        except Exception: singer_links = ""
        singer_records = [i for i in zip(singers,singer_links)]

        try:
            actors = [i.get_text(strip=True) for i in item.select("a[href^='/actor/']") if i.get("href")]
        except Exception: actors = ""
        try:
            actor_links = [urljoin(base_url,i.get("href")) for i in item.select("a[href^='/actor/']") if i.get("href")]
        except Exception: actor_links = ""
        actor_records = [i for i in zip(actors,actor_links)]
        song_name = item.select_one("span[itemprop='name']").get_text(strip=True)
        writer.writerow([singer_records,actor_records,song_name])
        print(singer_records,actor_records,song_name)

If I execute the script as is, this is the output I get.

When I try with writer.writerow([*singer_records,*actor_records,song_name]) instead, I get this type of output: only the first pair of tuples gets written.
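As a minimal sketch of why both attempts misbehave (using hypothetical record data): csv.writer calls str() on every non-string cell, so a list or tuple lands in the file as its Python repr rather than as separate, readable values; unpacking with * only spreads the tuples into separate cells, each still rendered as a tuple repr.

import csv
import io

# Hypothetical records standing in for singer_records/actor_records
records = [('Singer A', 'https://www.hindigeetmala.net/singer/a.htm'),
           ('Singer B', 'https://www.hindigeetmala.net/singer/b.htm')]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow([records])    # one cell holding the whole list's repr
writer.writerow([*records])   # two cells, each holding one tuple's repr
print(buf.getvalue())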

This is my expected output.

How can I write the names and their links to the csv file according to the third image?

PS: For brevity, all of the output images represent the first column of the csv file.

Tags: python, python-3.x, web-scraping

Solution


Based on SIM's feedback, I think this is what you are looking for (I just added a function for formatting the lists):

import csv
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

base_url = 'https://www.hindigeetmala.net'
link = 'https://www.hindigeetmala.net/movie/2_states.htm'

res = requests.get(link)
soup = BeautifulSoup(res.text, "lxml")


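# Collapse a list of (name, link) tuples into a one-element list holding a
# single quoted, comma-joined string, e.g. [('A', 'u1'), ('B', 'u2')] ->
# ["'A','u1','B','u2'"], so each record group occupies exactly one csv cell.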
def merge_results(inpt):
    return [','.join(nested_items for nested_items in
                     [','.join("'" + tuple_item + "'" for tuple_item in item)
                      for item in inpt])]


with open("hindigeetmala.csv", "w", newline="") as f:
    writer = csv.writer(f)
    # header includes song_name so it matches the three columns written per row
    writer.writerow(['singer_records', 'actor_records', 'song_name'])

    for item in soup.select("tr[itemprop='track']"):
        try:
            singers = [i.get_text(strip=True) for i in item.select(
                "span[itemprop='byArtist']") if i.get_text(strip=True)]
        except Exception:
            singers = ""

        try:
            singer_links = [urljoin(base_url, i.get("href")) for i in item.select(
                "a:has(> span[itemprop='byArtist'])") if i.get("href")]
        except Exception:
            singer_links = ""
        singer_records = [i for i in zip(singers, singer_links)]

        try:
            actors = [i.get_text(strip=True) for i in item.select(
                "a[href^='/actor/']") if i.get("href")]
        except Exception:
            actors = ""
        try:
            actor_links = [urljoin(base_url, i.get("href")) for i in item.select(
                "a[href^='/actor/']") if i.get("href")]
        except Exception:
            actor_links = ""
        actor_records = [i for i in zip(actors, actor_links)]
        song_name = item.select_one(
            "span[itemprop='name']").get_text(strip=True)
        writer.writerow(merge_results(singer_records) +
                        merge_results(actor_records)+[song_name])
        print(singer_records, actor_records, song_name)
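For reference, here is what merge_results produces for a couple of hypothetical records: each group of names and links collapses into one quoted, comma-joined string, i.e. a single csv cell.

sample = [('Singer A', 'https://www.hindigeetmala.net/singer/a.htm'),
          ('Singer B', 'https://www.hindigeetmala.net/singer/b.htm')]
print(merge_results(sample))
# ["'Singer A','https://www.hindigeetmala.net/singer/a.htm','Singer B','https://www.hindigeetmala.net/singer/b.htm'"]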
