404 HTTP error, even though the page is visible in a browser

Problem description

I am trying to map this website, but I run into a problem when I try to crawl it fully: I get a 404 error even though the URL exists.

Here is my code:

import csv
from urllib.request import urlopen
from bs4 import BeautifulSoup
import re

csvFile = open("C:/Users/Pichau/codigo/govbr/brasil/govfederal/govbr/arquivos/teste.txt", 'wt')
paginas = set()

def getLinks(pageUrl):
    global paginas
    html = urlopen("https://www.gov.br/pt-br/" + pageUrl)
    bsObj = BeautifulSoup(html, "html.parser")
    writer = csv.writer(csvFile)
    for link in bsObj.findAll("a"):
        if 'href' in link.attrs:
            if link.attrs['href'] not in paginas:
                # new page found
                newPage = link.attrs['href']
                print(newPage)
                paginas.add(newPage)
                getLinks(newPage)
                csvRow = []
                csvRow.append(newPage)
                writer.writerow(csvRow)

getLinks("")
csvFile.close()

Here is the error message I get when I run the code:

#wrapper
/
#main-navigation
#nolivesearchGadget
#tile-busca-input
#portal-footer
http://brasil.gov.br
Traceback (most recent call last):
  File "c:\Users\Pichau\codigo\govbr\brasil\govfederal\govbr\teste2.py", line 26, in <module>
    getLinks("")
  File "c:\Users\Pichau\codigo\govbr\brasil\govfederal\govbr\teste2.py", line 20, in getLinks
    getLinks(newPage)
  File "c:\Users\Pichau\codigo\govbr\brasil\govfederal\govbr\teste2.py", line 20, in getLinks
    getLinks(newPage)
  File "c:\Users\Pichau\codigo\govbr\brasil\govfederal\govbr\teste2.py", line 20, in getLinks
    getLinks(newPage)
  [Previous line repeated 4 more times]
  File "c:\Users\Pichau\codigo\govbr\brasil\govfederal\govbr\teste2.py", line 10, in getLinks
    html = urlopen("https://www.gov.br/pt-br/"+pageUrl)
  File "C:\Users\Pichau\AppData\Local\Programs\Python\Python39\lib\urllib\request.py", line 214, in urlopen
    return opener.open(url, data, timeout)
  File "C:\Users\Pichau\AppData\Local\Programs\Python\Python39\lib\urllib\request.py", line 523, in open
    response = meth(req, response)
  File "C:\Users\Pichau\AppData\Local\Programs\Python\Python39\lib\urllib\request.py", line 632, in http_response
    response = self.parent.error(
  File "C:\Users\Pichau\AppData\Local\Programs\Python\Python39\lib\urllib\request.py", line 561, in error
    return self._call_chain(*args)
  File "C:\Users\Pichau\AppData\Local\Programs\Python\Python39\lib\urllib\request.py", line 494, in _call_chain
    result = func(*args)
  File "C:\Users\Pichau\AppData\Local\Programs\Python\Python39\lib\urllib\request.py", line 641, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 404: Not Found
PS C:\Users\Pichau\codigo\govbr>

I tried this with just the main URL alone and it worked fine, but as soon as I append the pageUrl variable to the URL, it gives me this error. How can I fix it?

Tags: python, beautifulsoup, web-crawler, screen-scraping, urllib

Solution


From what I can see, you are right: the page is there... for those of us using a browser. My guess is that a basic anti-bot mechanism is at work, one that blocks uncommon User-Agent values; in other words, it only serves the page to clients that look like browsers. Since the User-Agent is a header we control, we can set it ourselves so the server no longer responds with a 404.

I can't type out code at the moment, but you'll want to combine this with the StackOverflow answer describing how to change headers in urllib: write a bit of code that applies that answer and sets the "User-Agent" header to the value `Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36`, which I took from here.

Once the User-Agent header is changed, you should be able to download the page successfully.
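A minimal sketch of that fix, using only the standard-library `urllib.request` (the helper name `open_with_ua` is mine, not from the linked answer):

```python
from urllib.request import Request, urlopen

# Browser-style User-Agent string quoted in the answer above.
BROWSER_UA = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
              "AppleWebKit/537.36 (KHTML, like Gecko) "
              "Chrome/90.0.4430.93 Safari/537.36")

def open_with_ua(url):
    # Wrap the URL in a Request that carries a browser-like
    # User-Agent header, so the server's bot filter lets it through.
    req = Request(url, headers={"User-Agent": BROWSER_UA})
    return urlopen(req)

# In the crawler, replace:
#   html = urlopen("https://www.gov.br/pt-br/" + pageUrl)
# with:
#   html = open_with_ua("https://www.gov.br/pt-br/" + pageUrl)
```

The rest of the crawler stays the same: `urlopen` accepts a `Request` object in place of a URL string, so the return value can be passed straight to BeautifulSoup.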
