Some things used in Python web scraping

carious 2018-11-26 08:30

Plain requests

>>> import requests
>>> response = requests.get('http://www.baidu.com')
>>> response.text  # print the page source
>>> response.headers
{'Cache-Control': 'private, no-cache, no-store, proxy-revalidate, no-transform', 'Connection': 'Keep-Alive', 'Content-Encoding': 'gzip', 'Content-Type': 'text/html', 'Date': 'Mon, 26 Nov 2018 00:21:32 GMT', 'Last-Modified': 'Mon, 23 Jan 2017 13:28:36 GMT', 'Pragma': 'no-cache', 'Server': 'bfe/1.0.8.18', 'Set-Cookie': 'BDORZ=27315; max-age=86400; domain=.baidu.com; path=/', 'Transfer-Encoding': 'chunked'}
>>> response.status_code
200

>>> headers = {'User-Agent':'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36'}
>>> response = requests.get('http://www.baidu.com', headers=headers)  # request again with a custom User-Agent header
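
Beyond a bare GET, requests can also carry query parameters and a timeout, and lets you set the encoding before reading .text. A minimal sketch, reusing the same User-Agent as above (Baidu's /s path and wd parameter are used here only as an illustration):

import requests

headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36'}

# GET with query parameters and a timeout; requests URL-encodes params for us
response = requests.get('http://www.baidu.com/s',
                        params={'wd': 'python'},
                        headers=headers,
                        timeout=10)

if response.status_code == 200:
    response.encoding = response.apparent_encoding  # guess the real encoding before reading .text
    print(response.text[:200])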

Fetching binary content and saving an image file

>>> response = requests.get('https://timgsa.baidu.com/timg?image&quality=80&size=b9999_10000&sec=1543204467171&di=19de509bd81641d74f3ac61472898d8e&imgtype=0&src=http%3A%2F%2Fimage.biaobaiju.com%2Fuploads%2F20180803%2F20%2F1533299921-zRLwijpYoE.jpg')
>>> response.content  # raw bytes of the response body
>>> with open('./1.jpg', 'wb') as f:
...     f.write(response.content)
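
For larger binaries, loading the whole body via response.content keeps everything in memory at once; a common alternative is to stream the download in chunks. A rough sketch (the URL and chunk size here are placeholders):

import requests

url = 'http://example.com/big.jpg'  # placeholder URL
response = requests.get(url, stream=True)  # defer downloading the body
with open('./big.jpg', 'wb') as f:
    for chunk in response.iter_content(chunk_size=8192):  # read ~8 KB at a time
        if chunk:  # skip keep-alive chunks
            f.write(chunk)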

Using selenium to drive a browser

>>> from selenium import webdriver
>>> driver = webdriver.Chrome()  # start a browser session (requires a ChromeDriver on PATH)
>>> driver.get('http://m.weibo.cn')  # open Weibo
>>> driver.get('http://www.zhihu.com')  # open Zhihu
>>> driver.get('http://www.taobao.com')  # open Taobao
>>> driver.page_source  # get the rendered page source
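
With dynamically rendered pages it is usually safer to wait for an element before reading page_source or interacting with the page. A sketch using selenium's explicit waits; the Chrome driver and the id 'q' for Taobao's search box are assumptions for illustration only:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get('http://www.taobao.com')

# wait up to 10 seconds for the search box to be present
# (the id 'q' is assumed here purely as an example)
box = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, 'q')))
box.send_keys('python')

print(driver.page_source[:200])  # source after the page has rendered
driver.quit()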
