Web scraping script returns an empty list

Problem Description

I'm trying to write my first web scraper for a test website. The site requires logging in, and I followed a tutorial on how to handle that situation.

import requests
from lxml import html



payload = {
    "email": "test_test@test.com",
    "password": "123qweasd",
    "_token": "3ow4dl7COwnRHa8a6nvNGp4eLkF3wQapT3otGXjR"
}

rs = requests.session()

login_url = 'https://cloud.webscraper.io/login'
log_page = rs.get(login_url)

tree = html.fromstring(log_page.content)
auth_token = list(set(tree.xpath("//input[@name='_token']/@value")))[0]

login = rs.post(login_url,data=payload, headers=dict(referer=login_url))

url = "https://cloud.webscraper.io/sitemaps"
result = rs.get(url, headers=dict(referer=url))

tree = html.fromstring(result.text)
sidebar_cat = tree.xpath('//*[@id="main-menu-inner"]/ul')

print(sidebar_cat)

I expect this script to list the categories from the sidebar, but it returns an empty list every time. The current output is

[]

Process finished with exit code 0

Tags: python, web-scraping, python-requests

Solution


You already extract the `_token` value, but then post a hardcoded one. Pass the extracted value into `payload` instead:

import requests
from lxml import html

rs = requests.Session()

login_url = 'https://cloud.webscraper.io/login'

# Fetch the login page first so the session picks up its cookies
log_page = rs.get(login_url)

# Extract the CSRF token from the hidden input on the login form
tree = html.fromstring(log_page.content)
auth_token = tree.xpath("//input[@name='_token']/@value")[0]

# Use the freshly extracted token instead of a hardcoded value
payload = {
    "email": "test_test@test.com",
    "password": "123qweasd",
    "_token": auth_token
}

login = rs.post(login_url, data=payload, headers=dict(referer=login_url))

url = "https://cloud.webscraper.io/sitemaps"
result = rs.get(url, headers=dict(referer=url))

tree = html.fromstring(result.text)
sidebar_cat = tree.xpath('//*[@id="main-menu-inner"]/ul')

print(sidebar_cat)
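Note that even after a successful login, `print(sidebar_cat)` will show lxml `Element` objects (e.g. `[<Element ul at 0x...>]`) rather than readable category names. A minimal sketch of extracting the text instead, using a hypothetical stand-in for the sidebar markup (the real page's structure may differ, but the pattern is the same):

```python
from lxml import html

# Hypothetical stand-in for the page's sidebar markup; the real
# structure may differ, but the extraction pattern is the same.
sample = """
<div id="main-menu-inner">
  <ul>
    <li><a href="/sitemaps">Sitemaps</a></li>
    <li><a href="/jobs">Scraping jobs</a></li>
    <li><a href="/exports">Exports</a></li>
  </ul>
</div>
"""

tree = html.fromstring(sample)

# xpath on the <ul> returns Element objects; pull the text out of
# each link to get the category names themselves.
categories = [a.text_content().strip()
              for a in tree.xpath('//*[@id="main-menu-inner"]/ul/li/a')]
print(categories)  # ['Sitemaps', 'Scraping jobs', 'Exports']
```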
