Python code sometimes runs and sometimes doesn't

Problem description

I am building a database (a pandas DataFrame) to store links to news articles from the past week for a list of companies. I wrote some Python code, but it sometimes runs and sometimes doesn't, and it never raises an error. Since it produces no logs or errors, I'm finding it hard to pin down the cause.

I tried clearing the browser cache, since I'm working in a Jupyter notebook, and I also tried other applications such as Spyder. I see the same problem in the Jupyter notebook and the other applications.


import requests
import pandas as pd
from bs4 import BeautifulSoup

links_output=[]

class Newspapr:
    def __init__(self,term):
        self.term=term
        # Google News search restricted to the past week (tbs=qdr:w)
        self.url='https://www.google.com/search?q={0}&safe=active&tbs=qdr:w,sdb:1&tbm=nws&source=lnt&dpr=1'.format(self.term)

    def NewsArticlerun(self):
        response=requests.get(self.url)
        soup=BeautifulSoup(response.text,'html.parser')
        links=soup.select(".r a")

        # follow at most the first 5 result links
        numOpen = min(5, len(links))
        for i in range(numOpen):
            response_links = "https://www.google.com" + links[i].get("href")
            print(response_links)
            links_output.append({"Weblink":response_links})
        # return the DataFrame so the caller can use it
        return pd.DataFrame.from_dict(links_output)



list_of_companies=["Wipro","Reliance","icici bank","vedanta", "DHFL","yesbank","tata motors","tata steel","IL&FS","Jet airways","apollo tyres","ashok leyland","Larsen & Toubro","Mindtree","Infosys","TCS","AxisBank","Mahindra & Mahindra"]

for i in list_of_companies:
    # wrap the company name in quotes for an exact-phrase search
    comp_list = '"' + i + '"'
    call_code=Newspapr(comp_list)
    call_code.NewsArticlerun()

I expect the web links to be printed and collected into a pandas DataFrame.

Tags: python, python-3.x, beautifulsoup

Solution


This most likely happens because no user-agent is specified. The default requests user-agent is python-requests; Google recognizes it and blocks the request, so you receive completely different HTML with different selectors. Check your user-agent.
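
As a quick check, you can print the user-agent that requests sends by default (a minimal sketch; requests.utils.default_headers() is part of the public requests API):

import requests

# the headers requests attaches when you don't override them;
# the User-Agent will look like "python-requests/2.22.0"
print(requests.utils.default_headers()["User-Agent"])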

Pass a user-agent in the request headers:

# identify as a regular desktop browser instead of python-requests
headers = {
    'User-agent':
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582"
}
requests.get("YOUR_URL", headers=headers)

If you want to scrape a lot of news results frequently, one thing you can do is randomize the user-agent on every request: rotate user-agents by adding them to a list and picking one with random.choice(), as in the sketch below.
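
A minimal sketch of that rotation (the user-agent strings below are arbitrary examples, not a vetted list):

import random
import requests

# a few example user-agent strings to rotate through (illustrative only)
user_agents = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.1 Safari/605.1.15",
    "Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0",
]

# pick a different user-agent for each request
headers = {"User-Agent": random.choice(user_agents)}
response = requests.get("https://www.google.com/search",
                        params={"q": "wipro", "tbm": "nws"},
                        headers=headers)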


Code and full example in the online IDE:


from bs4 import BeautifulSoup
import requests, lxml

headers = {
    "User-Agent":
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582"
}

params = {
    "q": "best potato recipes", # query
    "hl": "en",                 # language 
    "gl": "us",                 # country to search from
    "tbm": "nws",               # news results filter
}

html = requests.get('https://www.google.com/search', headers=headers, params=params)
soup = BeautifulSoup(html.text, 'lxml')

for result in soup.select('.dbsr'):
    title = result.select_one('.nDgy9d').text
    link = result.a['href']
    source = result.select_one('.WF4CUc').text
    snippet = result.select_one('.Y3v8qd').text
    date_published = result.select_one('.WG9SHc span').text
    
    print(f'{title}\n{link}\n{snippet}\n{date_published}\n{source}\n')

    # code to save to DataFrame

------
'''
9 Best Potato Recipes for Sides, Desserts, or Entrées
https://www.themanual.com/food-and-drink/9-best-potato-recipes-for-sides-desserts-or-entrees/
9 Best Potato Recipes for Sides, Desserts, or Entrées · Potato Latkes with 
Sour Cream and Applesauce · Smoked Hasselback Potatoes · Potato Salad.
3 weeks ago
The Manual
...
'''
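
To fill in the "# code to save to DataFrame" placeholder above, one straightforward approach (a sketch that reuses the same request and selectors as the example) is to collect each result as a dict and build the DataFrame once after the loop:

from bs4 import BeautifulSoup
import requests, lxml
import pandas as pd

headers = {
    "User-Agent":
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582"
}
params = {"q": "best potato recipes", "hl": "en", "gl": "us", "tbm": "nws"}

html = requests.get('https://www.google.com/search', headers=headers, params=params)
soup = BeautifulSoup(html.text, 'lxml')

news_results = []
for result in soup.select('.dbsr'):
    news_results.append({
        "title": result.select_one('.nDgy9d').text,
        "link": result.a['href'],
        "source": result.select_one('.WF4CUc').text,
        "snippet": result.select_one('.Y3v8qd').text,
        "date_published": result.select_one('.WG9SHc span').text,
    })

# build the DataFrame once, after the loop
df = pd.DataFrame(news_results)
print(df.head())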

Alternatively, you can achieve the same thing with the Google News Results API from SerpApi. It's a paid API with a free plan.

The difference in your case is that you don't have to figure out why something isn't being extracted properly, because that is already done for the end user; all you need to do is iterate over the structured JSON and grab the data you want.

Code to integrate:

import os
from serpapi import GoogleSearch

params = {
  "engine": "google",
  "q": "best potato recipe",
  "tbm": "nws",
  "api_key": os.getenv("API_KEY"),
}

search = GoogleSearch(params)
results = search.get_dict()

for news_result in results["news_results"]:
  print(f"Title: {news_result['title']}\nLink: {news_result['link']}\n")

  # code to save to DataFrame


------
'''
Title: 9 Best Potato Recipes for Sides, Desserts, or Entrées
Link: https://www.themanual.com/food-and-drink/9-best-potato-recipes-for-sides-desserts-or-entrees/
...
'''

Disclaimer: I work for SerpApi.

