When crawling with Scrapy, you sometimes need to decide which URLs or which pages to crawl based on parameters passed to the Spider.
For example, the URL of the 放置奇兵 forum on Baidu Tieba looks like this, where the kw parameter specifies the forum name and the pn parameter pages through the posts.
https://tieba.baidu.com/f?kw=放置奇兵&ie=utf-8&pn=250
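Note that the kw value has to be percent-encoded in the actual request. As a small aside that is not part of the original article, a URL like this can be assembled with the standard library's urllib.parse.urlencode, which takes care of encoding the Chinese keyword:

from urllib.parse import urlencode

def build_tieba_url(kw, pn):
    # kw: forum name, pn: post offset that Tieba uses for paging
    query = urlencode({'kw': kw, 'ie': 'utf-8', 'pn': pn})
    return 'https://tieba.baidu.com/f?' + query

print(build_tieba_url('放置奇兵', 250))
# prints: https://tieba.baidu.com/f?kw=%E6%94%BE%E7%BD%AE%E5%A5%87%E5%85%B5&ie=utf-8&pn=250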
If we want to pass the forum name and page number to the Spider as arguments, so that they control which forum and which pages are crawled, there are two ways to get those parameters into the Spider.
Method 1
Pass arguments to the spider with the -a option of the scrapy crawl command.
# -*- coding: utf-8 -*-
import scrapy


class TiebaSpider(scrapy.Spider):
    name = 'tieba'                          # Tieba spider
    allowed_domains = ['tieba.baidu.com']   # domains allowed to be crawled
    start_urls = []                         # spider start URLs

    # Command format: scrapy crawl tieba -a tiebaName=放置奇兵 -a pn=250
    def __init__(self, tiebaName=None, pn=None, *args, **kwargs):
        print('<tieba name>: ' + tiebaName)
        super(TiebaSpider, self).__init__(*args, **kwargs)
        self.start_urls = ['https://tieba.baidu.com/f?kw=%s&ie=utf-8&pn=%s' % (tiebaName, pn)]

    def parse(self, response):
        print(response.request.url)
        # Result: https://tieba.baidu.com/f?kw=%E6%94%BE%E7%BD%AE%E5%A5%87%E5%85%B5&ie=utf-8&pn=250
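As a hedged variant of the spider above (my own sketch, not from the original post): Scrapy's base Spider stores every -a argument as an attribute on the spider instance, so the same effect can be achieved without overriding __init__ at all, by reading those attributes in start_requests:

import scrapy

class TiebaSpider(scrapy.Spider):
    name = 'tieba'
    allowed_domains = ['tieba.baidu.com']

    def start_requests(self):
        # -a arguments arrive as string attributes; fall back to defaults when omitted
        tieba_name = getattr(self, 'tiebaName', '放置奇兵')
        pn = getattr(self, 'pn', '0')
        url = 'https://tieba.baidu.com/f?kw=%s&ie=utf-8&pn=%s' % (tieba_name, pn)
        yield scrapy.Request(url, callback=self.parse)

    def parse(self, response):
        print(response.request.url)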
Method 2
Model a new, purpose-built command on the source code of Scrapy's crawl command.
settings.py
First, add the following setting to settings.py to point Scrapy at the directory that holds the custom command.
# Directory where custom Scrapy commands live
COMMANDS_MODULE = 'baidu_tieba.commands'
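For this setting to resolve, baidu_tieba.commands typically has to be an importable package inside the project. Under that assumption, the project layout would look roughly like this (the file names beyond run.py are illustrative, not taken from the original project):

baidu_tieba/
    scrapy.cfg
    baidu_tieba/
        __init__.py
        settings.py
        pipelines.py
        commands/
            __init__.py      # makes 'baidu_tieba.commands' importable
            run.py           # the custom command defined below
        spiders/
            __init__.py
            tieba_spider.py  # the TiebaSpider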
run.py
Create the command file in that directory. Here the file is named run.py, and the command will later be invoked as:
scrapy run [-option option_value]
import scrapy.commands.crawl as crawl
from scrapy.commands import ScrapyCommand
from scrapy.exceptions import UsageError


class Command(crawl.Command):

    def add_options(self, parser):
        # Add options to the command
        ScrapyCommand.add_options(self, parser)
        parser.add_option("-k", "--keyword", type="str", dest="keyword", default="",
                          help="set the tieba's name you want to crawl")
        parser.add_option("-p", "--pageNum", type="int", action="store", dest="pageNum", default=0,
                          help="set the page number you want to crawl")

    def process_options(self, args, opts):
        # Handle the option values passed in on the command line
        ScrapyCommand.process_options(self, args, opts)
        if opts.keyword:
            tiebaName = opts.keyword.strip()
            if tiebaName != '':
                self.settings.set('TIEBA_NAME', tiebaName, priority='cmdline')
        else:
            raise UsageError("You must specify the tieba's name to crawl, use -k TIEBA_NAME!")
        self.settings.set('PAGE_NUM', opts.pageNum, priority='cmdline')

    def run(self, args, opts):
        # Start the spider
        self.crawler_process.crawl('tieba')
        self.crawler_process.start()
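Because the command stores TIEBA_NAME and PAGE_NUM in the settings with cmdline priority, the spider could also read them directly instead of being patched from a pipeline. A minimal sketch of that alternative (my own, not taken from the article), relying on the fact that a running spider can access its crawler's settings via self.settings:

import scrapy

class TiebaSpider(scrapy.Spider):
    name = 'tieba'
    allowed_domains = ['tieba.baidu.com']

    def start_requests(self):
        # TIEBA_NAME / PAGE_NUM were injected by the custom run command
        tieba_name = self.settings.get('TIEBA_NAME', '')
        pn = self.settings.getint('PAGE_NUM', 0)
        url = 'https://tieba.baidu.com/f?kw=%s&ie=utf-8&pn=%s' % (tieba_name, pn)
        yield scrapy.Request(url, callback=self.parse)

    def parse(self, response):
        print(response.request.url)

The original article instead uses a pipeline for this initialization, as shown next.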
pipelines.py
In BaiduTiebaPipeline's open_spider() method, the arguments passed in through the run command are used to initialize TiebaSpider; in this example they are used to set its start_urls.
# -*- coding: utf-8 -*-
import json


class BaiduTiebaPipeline(object):

    @classmethod
    def from_settings(cls, settings):
        return cls(settings)

    def __init__(self, settings):
        self.settings = settings

    def open_spider(self, spider):
        # Called when the spider is opened
        spider.start_urls = ['https://tieba.baidu.com/f?kw=%s&ie=utf-8&pn=%s' %
                             (self.settings['TIEBA_NAME'], self.settings['PAGE_NUM'])]

    def close_spider(self, spider):
        # Called when the spider is closed
        pass

    def process_item(self, item, spider):
        # Append the post content to a file
        with open('tieba.txt', 'a', encoding='utf-8') as f:
            json.dump(dict(item), f, ensure_ascii=False, indent=2)
        return item
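process_item() serializes dict(item), which presumes an Item class with named fields. A hypothetical minimal items.py is sketched below; the field names are my assumptions and are not taken from the original project:

# -*- coding: utf-8 -*-
import scrapy

class BaiduTiebaItem(scrapy.Item):
    title = scrapy.Field()   # post title
    author = scrapy.Field()  # post author
    link = scrapy.Field()    # post URL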
Once that is set up, don't forget to enable BaiduTiebaPipeline in settings.py.
ITEM_PIPELINES = {
    'baidu_tieba.pipelines.BaiduTiebaPipeline': 50,
}
Example run
That's it. Start the Tieba spider with a command of the following form.
scrapy run -k 放置奇兵 -p 250
This concludes this article on how to pass arguments into a Spider in Scrapy.
Original article: https://blog.csdn.net/pengjunlee/article/details/90604736