How to crawl through a proxy with Python's Scrapy framework
1. Create a new file named "middlewares.py" in your Scrapy project:
# Import base64; we only need it if the proxy requires authentication
import base64

# Start your middleware class
class ProxyMiddleware(object):
    # Overwrite process_request
    def process_request(self, request, spider):
        # Set the location of the proxy
        request.meta['proxy'] = "http://YOUR_PROXY_IP:PORT"

        # Use the following lines if your proxy requires authentication
        proxy_user_pass = "USERNAME:PASSWORD"
        # Set up basic authentication for the proxy
        # (base64.b64encode replaces the deprecated base64.encodestring
        # and avoids the trailing newline it used to append)
        encoded_user_pass = base64.b64encode(proxy_user_pass.encode()).decode()
        request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass
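If you have several proxies, the same middleware pattern extends to picking one per request. The following is a minimal sketch, not part of the original tutorial; it assumes a hypothetical PROXY_LIST of proxy URLs (the placeholders must be replaced with real host:port values):

import base64
import random
from urllib.parse import urlparse

# Hypothetical list of proxies; replace with your own endpoints
PROXY_LIST = [
    "http://USERNAME:PASSWORD@PROXY_IP_1:PORT",
    "http://PROXY_IP_2:PORT",  # a proxy without authentication
]

class RandomProxyMiddleware(object):
    def process_request(self, request, spider):
        proxy = random.choice(PROXY_LIST)
        parsed = urlparse(proxy)
        if parsed.username:
            # Send the credentials in the Proxy-Authorization header
            # and keep only scheme://host:port in request.meta['proxy']
            request.meta['proxy'] = "http://%s:%s" % (parsed.hostname, parsed.port)
            user_pass = "%s:%s" % (parsed.username, parsed.password)
            encoded = base64.b64encode(user_pass.encode()).decode()
            request.headers['Proxy-Authorization'] = 'Basic ' + encoded
        else:
            request.meta['proxy'] = proxy

Register it in DOWNLOADER_MIDDLEWARES exactly as in step 2 below, just under the new class name.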
2. Add the following to your project's settings file (./project_name/settings.py):
DOWNLOADER_MIDDLEWARES = {
    # 'scrapy.contrib.downloadermiddleware.httpproxy' in old Scrapy versions
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,
    # A lower priority number means process_request runs earlier, so the
    # proxy is already set when HttpProxyMiddleware sees the request
    'project_name.middlewares.ProxyMiddleware': 100,
}
Just these two steps, and requests now go through the proxy. Give it a quick test ^_^
from scrapy.spiders import CrawlSpider  # 'scrapy.contrib.spiders' in old Scrapy versions

class TestSpider(CrawlSpider):
    name = "test"
    allowed_domains = ["whatismyip.com"]
    # The following url is subject to change; you can get the latest one from:
    # http://www.whatismyip.com/faq/automation.asp
    start_urls = ["http://xujian.info"]

    def parse(self, response):
        with open('test.html', 'wb') as f:
            f.write(response.body)
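If the proxy is working, test.html will show the page as fetched from the proxy's IP. Another quick check, not in the original tutorial, is to fetch an endpoint that echoes the caller's IP. A minimal sketch, assuming http://httpbin.org/ip is reachable:

import scrapy

class IPCheckSpider(scrapy.Spider):
    name = "ipcheck"
    # httpbin echoes the origin IP of the request as JSON
    start_urls = ["http://httpbin.org/ip"]

    def parse(self, response):
        # With the proxy middleware enabled, this should log the
        # proxy's IP rather than your own
        self.logger.info("Origin IP response: %s", response.text)

Run it with "scrapy crawl ipcheck" from the project root.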
3. Use a random user-agent
By default, Scrapy sends every request with a single user-agent, which makes the crawler easy for a site to detect and block. The code below picks a user-agent at random from a predefined list, so different pages are fetched with different agents.
Add the following to settings.py:
DOWNLOADER_MIDDLEWARES = {
    # Disable the built-in UserAgentMiddleware
    # ('scrapy.contrib.downloadermiddleware.useragent' in old Scrapy versions)
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    'Crawler.comm.rotate_useragent.RotateUserAgentMiddleware': 400,
}
Note: "Crawler" is the name of your project, i.e. the name of its top-level directory. Below is the middleware code:
#!/usr/bin/python
# -*- coding: utf-8 -*-
import random

# 'scrapy.contrib.downloadermiddleware.useragent' in old Scrapy versions
from scrapy.downloadermiddlewares.useragent import UserAgentMiddleware

class RotateUserAgentMiddleware(UserAgentMiddleware):
    def __init__(self, user_agent=''):
        self.user_agent = user_agent

    def process_request(self, request, spider):
        # Pick a user-agent at random for this request
        ua = random.choice(self.user_agent_list)
        if ua:
            request.headers.setdefault('User-Agent', ua)

    # The default user_agent_list below contains Chrome strings; for more
    # user-agent strings (IE, Firefox, Opera, Netscape, ...) see
    # http://www.useragentstring.com/pages/useragentstring.php
    user_agent_list = [
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
        "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
    ]
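To confirm that the rotation actually takes effect, you can point a throwaway spider at an endpoint that echoes the User-Agent header back. A minimal sketch, not part of the original tutorial, assuming http://httpbin.org/user-agent is reachable; repeated runs should log different agents:

import scrapy

class UACheckSpider(scrapy.Spider):
    name = "uacheck"
    # httpbin echoes the request's User-Agent header as JSON
    start_urls = ["http://httpbin.org/user-agent"]

    def parse(self, response):
        # With RotateUserAgentMiddleware enabled, repeated runs should
        # report different user-agent strings from the list
        self.logger.info("Reported user-agent: %s", response.text)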