1. Install pyinstaller (pip install pyinstaller)
2. Install pywin32 (pip install pywin32)
3. Install whatever other modules your project depends on
Note:
A Scrapy project packaged with PyInstaller can no longer be launched the usual way via cmdline.execute:
cmdline.execute('scrapy crawl douban -o test.csv --nolog'.split())
Instead, I use the CrawlerProcess approach to run the spider and produce the output.
Here is an example:
1. Create a crawl.py (you can name it anything) in the root directory of the Scrapy project.
The code of crawl.py is as follows:
# -*- coding: utf-8 -*-
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
from douban.spiders.douban_spider import Douban_spider

# imports required for packaging
import urllib.robotparser
import scrapy.spiderloader
import scrapy.statscollectors
import scrapy.logformatter
import scrapy.dupefilters
import scrapy.squeues
import scrapy.extensions.spiderstate
import scrapy.extensions.corestats
import scrapy.extensions.telnet
import scrapy.extensions.logstats
import scrapy.extensions.memusage
import scrapy.extensions.memdebug
import scrapy.extensions.feedexport
import scrapy.extensions.closespider
import scrapy.extensions.debug
import scrapy.extensions.httpcache
import scrapy.extensions.statsmailer
import scrapy.extensions.throttle
import scrapy.core.scheduler
import scrapy.core.engine
import scrapy.core.scraper
import scrapy.core.spidermw
import scrapy.core.downloader
import scrapy.downloadermiddlewares.stats
import scrapy.downloadermiddlewares.httpcache
import scrapy.downloadermiddlewares.cookies
import scrapy.downloadermiddlewares.useragent
import scrapy.downloadermiddlewares.httpproxy
import scrapy.downloadermiddlewares.ajaxcrawl
import scrapy.downloadermiddlewares.chunked
import scrapy.downloadermiddlewares.decompression
import scrapy.downloadermiddlewares.defaultheaders
import scrapy.downloadermiddlewares.downloadtimeout
import scrapy.downloadermiddlewares.httpauth
import scrapy.downloadermiddlewares.httpcompression
import scrapy.downloadermiddlewares.redirect
import scrapy.downloadermiddlewares.retry
import scrapy.downloadermiddlewares.robotstxt
import scrapy.spidermiddlewares.depth
import scrapy.spidermiddlewares.httperror
import scrapy.spidermiddlewares.offsite
import scrapy.spidermiddlewares.referer
import scrapy.spidermiddlewares.urllength
import scrapy.pipelines
import scrapy.core.downloader.handlers.http
import scrapy.core.downloader.contextfactory

from douban.pipelines import DoubanPipeline
from douban.items import DoubanItem
import douban.settings

if __name__ == '__main__':
    setting = get_project_settings()
    process = CrawlerProcess(settings=setting)
    process.crawl(Douban_spider)
    process.start()
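Incidentally, the long import block exists only so that PyInstaller discovers all of Scrapy's submodules. The same effect can be achieved by declaring them as hidden imports in the spec file. A minimal sketch, assuming you keep the crawl.spec generated in step 2 below, edit it, and rebuild with pyinstaller crawl.spec; collect_submodules is part of PyInstaller's hook utilities, and urllib.robotparser still has to be listed separately since it is not a scrapy submodule:

# crawl.spec (excerpt) -- only the Analysis call changes; the rest of
# the generated spec file can stay as-is
from PyInstaller.utils.hooks import collect_submodules

a = Analysis(
    ['crawl.py'],
    # register every scrapy submodule plus urllib.robotparser as hidden
    # imports, replacing the manual import block inside crawl.py
    hiddenimports=collect_submodules('scrapy') + ['urllib.robotparser'],
)

Either way works; the manual imports are simply more explicit about what ends up in the exe.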
2. In the directory containing crawl.py, run pyinstaller crawl.py. This generates dist, along with build and crawl.spec (both deletable, unless you use the spec-file variant sketched above).
3. Create a folder named scrapy in the same directory as crawl.exe, then copy the VERSION and mime.types files from your installed scrapy package directory into that new scrapy folder. (These two files can also be bundled automatically; see the sketch below.)
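A sketch of the automatic variant, under the same spec-file assumption as above; with PyInstaller's default one-folder build the files end up in dist/crawl/scrapy/ beside crawl.exe:

# crawl.spec (excerpt) -- locate the installed scrapy package and ship
# its VERSION and mime.types files into a "scrapy" folder beside the exe
import os
import scrapy

scrapy_dir = os.path.dirname(scrapy.__file__)

a = Analysis(
    ['crawl.py'],
    datas=[
        (os.path.join(scrapy_dir, 'VERSION'), 'scrapy'),
        (os.path.join(scrapy_dir, 'mime.types'), 'scrapy'),
    ],
)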
4. Distribute the program: this means douban/dist together with douban/scrapy.cfg.
Without scrapy.cfg, the configuration in settings.py and pipelines.py cannot be read.
5. Tested successfully on another machine.
6. As for custom pipelines and settings: the exe produced by PyInstaller apparently cannot read them. If anyone knows how to solve this, please share. (A possible workaround is sketched below.)
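For what it's worth, a workaround that may address this: get_project_settings() locates the project either through scrapy.cfg on disk or through the SCRAPY_SETTINGS_MODULE environment variable. Setting that variable at the top of crawl.py should let the frozen exe resolve douban.settings (and the ITEM_PIPELINES it declares) without scrapy.cfg sitting next to it. A sketch, untested against this particular project:

# at the very top of crawl.py, before get_project_settings() is called
import os
os.environ.setdefault('SCRAPY_SETTINGS_MODULE', 'douban.settings')

from scrapy.utils.project import get_project_settings
setting = get_project_settings()  # now resolves douban.settings directly

Since crawl.py already imports DoubanPipeline and douban.settings, PyInstaller bundles those modules into the exe; the missing piece is only telling Scrapy where to find them at runtime.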
This concludes the walkthrough of the steps for packaging a Scrapy project with PyInstaller.
Original article: https://blog.csdn.net/vample/article/details/86224021