Web scraping with BeautifulSoup and collecting table text values

Problem description

My code is below; it collects data from the NSE website. Basically I want to collect two pieces of information:

  1. What the Announcement subject is
  2. Whether any PDF file is available, and if so, print its link.

I am able to get the PDF link, but I am unable to read the Announcement subject, which should be:

MIC Electronics Limited has informed the Exchange regarding 'Resolution Plan of M/s. Cosyn Consortium in the matter of M/s. MIC Electronics Limited has been approved by Hon'ble NCLT, Hyderabad Bench'

Any help is appreciated.

import requests
import json
import bs4

base_url = 'https://www.nseindia.com'
url = 'https://www.nseindia.com/corporates/directLink/latestAnnouncementsCorpHome.jsp'

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36'}

response = requests.get(url, headers=headers)
jsonStr = response.text.strip()

# The endpoint returns a JavaScript-style object literal with unquoted keys,
# so quote the known keys to turn the response into valid JSON.
keys_needing_quotes = ['company:', 'date:', 'desc:', 'link:', 'symbol:']
for key in keys_needing_quotes:
    jsonStr = jsonStr.replace(key, '"%s":' % (key[:-1]))

data = json.loads(jsonStr)
data = data['rows']
# print(data)

symbol_list = ['MIC']
for x in range(0, len(data)):
    if data[x]['symbol'] in symbol_list:
        # Fetch the announcement detail page for the matching symbol
        response = requests.get(base_url + data[x]['link'], headers=headers)
        soup = bs4.BeautifulSoup(response.text, 'html.parser')
        print(soup)

        try:
            # The first link on the page is assumed to be the PDF attachment
            pdf_file = base_url + soup.find_all('a', href=True)[0]['href']
            print("File_Link:", pdf_file)
        except:
            print('PDF not found')

Tags: python, python-3.x, beautifulsoup

Solution


Alternatively, you can use:

for s in soup.find_all('td', 'tablehead'):
    if 'Announcement' in s.text:
        break

print(s.find_next_sibling().text)
# output: 
# MIC Electronics Limited has informed the Exchange regarding 'Resolution Plan of M/s. Cosyn Consortium in the matter of M/s. MIC Electronics Limited has been approved by Hon'ble NCLT, Hyderabad Bench 
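
Building on this, a minimal sketch that ties the subject lookup back into the original loop might look like the following. It assumes the detail page keeps the same 'tablehead' layout as above; variable names such as row, cell, sibling and subject are just for illustration.

for row in data:
    if row['symbol'] in symbol_list:
        response = requests.get(base_url + row['link'], headers=headers)
        soup = bs4.BeautifulSoup(response.text, 'html.parser')

        # Announcement subject: the cell that follows the 'Announcement' header cell
        subject = None
        for cell in soup.find_all('td', 'tablehead'):
            if 'Announcement' in cell.text:
                sibling = cell.find_next_sibling()
                if sibling is not None:
                    subject = sibling.text.strip()
                break
        print("Subject:", subject if subject else 'not found')

        # As in the original code, the first link on the page is assumed to be the PDF
        pdf = soup.find('a', href=True)
        if pdf:
            print("File_Link:", base_url + pdf['href'])
        else:
            print('PDF not found')

find_next_sibling() returns the cell immediately after the matched header cell, which is where the announcement text sits in this table layout.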
