Scrapy Shell is an interactive command-line tool for debugging crawlers; you can use it to inspect and analyze the pages being crawled.
neo@MacBook-Pro /tmp % scrapy shell http://www.netkiller.cn
2017-09-01 15:23:05 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: scrapybot)
2017-09-01 15:23:05 [scrapy.utils.log] INFO: Overridden settings: {'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter', 'LOGSTATS_INTERVAL': 0}
2017-09-01 15:23:05 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage']
2017-09-01 15:23:05 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-09-01 15:23:05 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-09-01 15:23:05 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-09-01 15:23:05 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-09-01 15:23:05 [scrapy.core.engine] INFO: Spider opened
2017-09-01 15:23:05 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.netkiller.cn> (referer: None)
[s] Available Scrapy objects:
[s]   scrapy     scrapy module (contains scrapy.Request, scrapy.Selector, etc)
[s]   crawler    <scrapy.crawler.Crawler object at 0x103b2afd0>
[s]   item       {}
[s]   request    <GET http://www.netkiller.cn>
[s]   response   <200 http://www.netkiller.cn>
[s]   settings   <scrapy.settings.Settings object at 0x1049019e8>
[s]   spider     <DefaultSpider 'default' at 0x104be2a90>
[s] Useful shortcuts:
[s]   fetch(url[, redirect=True]) Fetch URL and update local objects (by default, redirects are followed)
[s]   fetch(req)                  Fetch a scrapy.Request and update local objects
[s]   shelp()                     Shell help (print this help)
[s]   view(response)              View response in a browser
>>>
response holds the page the crawler fetched; you can extract the content you need with methods such as css() and xpath().
The css() method selects elements in the HTML using CSS selectors.
>>> response.css('title')
[<Selector xpath='descendant-or-self::title' data='<title>Netkiller ebook - Linux ebook</ti'>]
>>> response.css('title').extract()
['<title>Netkiller ebook - Linux ebook</title>']
>>> response.css('title::text').extract()
['Netkiller ebook - Linux ebook']
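The ::text pseudo-element above strips the surrounding tags and keeps only the character data. What it does can be sketched with the standard library's html.parser (a stand-in for illustration only; Scrapy's own selectors are built on lxml/parsel, not on this class):

```python
from html.parser import HTMLParser

class TitleText(HTMLParser):
    """Collect only the text inside <title>, mimicking response.css('title::text')."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == 'title':
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == 'title':
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.chunks.append(data)

parser = TitleText()
parser.feed('<html><head><title>Netkiller ebook - Linux ebook</title></head></html>')
title_text = ''.join(parser.chunks)
print(title_text)  # Netkiller ebook - Linux ebook
```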
Selecting by class
>>> response.css('a.ulink')[1].extract()
'<a class="ulink" href="http://netkiller.github.io/" target="_top">http://netkiller.github.io</a>'
>>> response.css('a.ulink::text')[3].extract()
'http://netkiller.sourceforge.net'
Working with result lists
>>> response.css('a::text').extract_first()
'简体中文'
>>> response.css('a::text')[1].extract()
'繁体中文'
>>> response.css('div.blockquote')[1].css('a.ulink::text').extract()
['Netkiller Architect 手札', 'Netkiller Developer 手札', 'Netkiller PHP 手札',
 'Netkiller Python 手札', 'Netkiller Testing 手札', 'Netkiller Java 手札',
 'Netkiller Cryptography 手札', 'Netkiller Linux 手札', 'Netkiller FreeBSD 手札',
 'Netkiller Shell 手札', 'Netkiller Security 手札', 'Netkiller Web 手札',
 'Netkiller Monitoring 手札', 'Netkiller Storage 手札', 'Netkiller Mail 手札',
 'Netkiller Docbook 手札', 'Netkiller Project 手札', 'Netkiller Database 手札',
 'Netkiller PostgreSQL 手札', 'Netkiller MySQL 手札', 'Netkiller NoSQL 手札',
 'Netkiller LDAP 手札', 'Netkiller Network 手札', 'Netkiller Cisco IOS 手札',
 'Netkiller H3C 手札', 'Netkiller Multimedia 手札', 'Netkiller Perl 手札',
 'Netkiller Amateur Radio 手札']
Regular expressions
>>> response.css('title::text').re(r'Netkiller.*')
['Netkiller ebook - Linux ebook']
>>> response.css('title::text').re(r'N\w+')
['Netkiller']
>>> response.css('title::text').re(r'(\w+) (\w+)')
['Netkiller', 'ebook', 'Linux', 'ebook']
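Note how the last call returns a flat list even though the pattern has two capture groups. The selector's .re() behaves like re.findall() with the group tuples flattened, which can be reproduced with the standard re module:

```python
import re

title = 'Netkiller ebook - Linux ebook'

# re.findall() with two groups returns a list of tuples, one per match.
pairs = re.findall(r'(\w+) (\w+)', title)
print(pairs)  # [('Netkiller', 'ebook'), ('Linux', 'ebook')]

# Scrapy's .re() flattens those groups into a single list.
flat = [group for pair in pairs for group in pair]
print(flat)   # ['Netkiller', 'ebook', 'Linux', 'ebook']
```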
XPath selectors work the same way:

>>> response.xpath('//title')
[<Selector xpath='//title' data='<title>Netkiller ebook - Linux ebook</ti'>]
>>> response.xpath('//title/text()').extract_first()
'Netkiller ebook - Linux ebook'
XPath selectors also support the re() method for regular-expression extraction.
>>> response.xpath('//title/text()').re(r'(\w+)')
['Netkiller', 'ebook', 'Linux', 'ebook']
>>> response.xpath('//div[@class="time"]/text()').re('[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}')
['2017-09-21 02:01:38']
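The same timestamp pattern works in plain Python, and the extracted string can then be parsed into a datetime for further processing. A minimal sketch, using a hypothetical fragment shaped like the page's time div:

```python
import re
from datetime import datetime

# Hypothetical markup modeled on the page's <div class="time"> block.
html = '<div class="time">2017-09-21 02:01:38</div>'
pattern = r'[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}'

stamp = re.search(pattern, html).group()
# Turn the matched string into a datetime object.
when = datetime.strptime(stamp, '%Y-%m-%d %H:%M:%S')
print(stamp)      # 2017-09-21 02:01:38
print(when.year)  # 2017
```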
Extracting HTML attribute values, such as image URLs.
>>> response.xpath('//img/@src').extract()
['graphics/spacer.gif', 'graphics/note.gif', 'graphics/by-nc-sa.png', '/images/weixin.jpg', 'images/neo.jpg', '/images/weixin.jpg']
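The src values above mix relative and root-relative paths. Before downloading the images you usually want absolute URLs; inside Scrapy, response.urljoin(src) does this against the response's own URL, and the standard library equivalent is urllib.parse.urljoin (the base URL below is an assumption for illustration):

```python
from urllib.parse import urljoin

# Assumed base URL of the crawled page.
base = 'http://www.netkiller.cn/'

# A few of the mixed relative and root-relative src values from above.
srcs = ['graphics/spacer.gif', '/images/weixin.jpg', 'images/neo.jpg']

absolute = [urljoin(base, src) for src in srcs]
print(absolute)
# ['http://www.netkiller.cn/graphics/spacer.gif',
#  'http://www.netkiller.cn/images/weixin.jpg',
#  'http://www.netkiller.cn/images/neo.jpg']
```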
Filtering by attribute and class
>>> response.xpath('//a/@href')[0].extract()
'http://netkiller.github.io/'
>>> response.xpath('//a/text()')[0].extract()
'简体中文'
>>> response.xpath('//div[@class="blockquote"]')[1].css('a.ulink::text').extract()
['Netkiller Architect 手札', 'Netkiller Developer 手札', 'Netkiller PHP 手札',
 'Netkiller Python 手札', 'Netkiller Testing 手札', 'Netkiller Java 手札',
 'Netkiller Cryptography 手札', 'Netkiller Linux 手札', 'Netkiller FreeBSD 手札',
 'Netkiller Shell 手札', 'Netkiller Security 手札', 'Netkiller Web 手札',
 'Netkiller Monitoring 手札', 'Netkiller Storage 手札', 'Netkiller Mail 手札',
 'Netkiller Docbook 手札', 'Netkiller Project 手札', 'Netkiller Database 手札',
 'Netkiller PostgreSQL 手札', 'Netkiller MySQL 手札', 'Netkiller NoSQL 手札',
 'Netkiller LDAP 手札', 'Netkiller Network 手札', 'Netkiller Cisco IOS 手札',
 'Netkiller H3C 手札', 'Netkiller Multimedia 手札', 'Netkiller Perl 手札',
 'Netkiller Amateur Radio 手札']
Use | to match multiple rules in one expression
>>> response.xpath('//ul[@class="topnews_nlist"]/li/h2/a/@href|//ul[@class="topnews_nlist"]/li/a/@href').extract()
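The | operator unions the results of both expressions, here picking up links whether or not they sit inside an h2. A stdlib sketch of the same pairing of rules, using hypothetical markup shaped like the news list (note that xml.etree's limited XPath has no | operator, so the two expressions are run separately and concatenated; Scrapy's lxml-backed selectors evaluate the union natively):

```python
import xml.etree.ElementTree as ET

# Hypothetical markup modeled on the topnews_nlist block.
html = '''
<ul class="topnews_nlist">
  <li><h2><a href="/top.html">Top story</a></h2></li>
  <li><a href="/second.html">Second story</a></li>
</ul>
'''

root = ET.fromstring(html)
# ElementTree's XPath subset lacks "|", so evaluate the two rule sets
# separately and merge the href lists.
hrefs = ([a.get('href') for a in root.findall('li/h2/a')]
         + [a.get('href') for a in root.findall('li/a')])
print(hrefs)  # ['/top.html', '/second.html']
```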