
How to implement a simple spider crawler with Scrapy

python · 水墨上仙 · 2285 views

A simple spider (crawler) program implemented with Scrapy.
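The spider below imports `PoetryAnalysisItem` from `poetry_analysis.items`, a module the original post does not show. A minimal definition matching the fields the spider fills in might look like this; the field names are inferred from the keys assigned in `parse_poem`, and the module layout is an assumption:

```python
# poetry_analysis/items.py (assumed layout -- not shown in the original post)
from scrapy.item import Item, Field


class PoetryAnalysisItem(Item):
    # One Field per key the spider assigns in parse_poem()
    text = Field()
    url = Field()
    title = Field()
    author = Field()
    date = Field()
```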

# Standard Python library imports
 
# 3rd party imports
# NOTE: these import paths are from pre-1.0 Scrapy. Modern Scrapy moved
# CrawlSpider/Rule to scrapy.spiders, replaced SgmlLinkExtractor with
# scrapy.linkextractors.LinkExtractor, and deprecated HtmlXPathSelector
# in favour of response.xpath().
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
 
# My imports
from poetry_analysis.items import PoetryAnalysisItem
 
HTML_FILE_NAME = r'.+\.html'
 
class PoetryParser(object):
    """
    Provides common parsing method for poems formatted this one specific way.
    """
    date_pattern = r'(\d{2} \w{3,9} \d{4})'
 
    def parse_poem(self, response):
        hxs = HtmlXPathSelector(response)
        item = PoetryAnalysisItem()
        # All poetry text is in pre tags
        text = hxs.select('//pre/text()').extract()
        item['text'] = ''.join(text)
        item['url'] = response.url
        # head/title contains title - a poem by author
        title_text = hxs.select('//head/title/text()').extract()[0]
        item['title'], item['author'] = title_text.split(' - ')
        item['author'] = item['author'].replace('a poem by', '')
        for key in ['title', 'author']:
            item[key] = item[key].strip()
        # date_pattern is a class attribute, so it must be referenced via self
        item['date'] = hxs.select("//p[@class='small']/text()").re(self.date_pattern)
        return item
 
 
class PoetrySpider(CrawlSpider, PoetryParser):
    name = 'example.com_poetry'
    allowed_domains = ['www.example.com']
    root_path = 'someuser/poetry/'
    start_urls = ['http://www.example.com/someuser/poetry/recent/',
                  'http://www.example.com/someuser/poetry/less_recent/']
    rules = [Rule(SgmlLinkExtractor(allow=[start_urls[0] + HTML_FILE_NAME]),
                                    callback='parse_poem'),
             Rule(SgmlLinkExtractor(allow=[start_urls[1] + HTML_FILE_NAME]),
                                    callback='parse_poem')]
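The string handling inside `parse_poem` can be exercised without running Scrapy at all. The sketch below replays the title/author split and the date regex on sample strings; the poem title, author, and date are made up for illustration:

```python
import re

# Same pattern as PoetryParser.date_pattern
date_pattern = r'(\d{2} \w{3,9} \d{4})'

# head/title text has the form "title - a poem by author"
title_text = 'Ode to Autumn - a poem by John Keats'
title, author = title_text.split(' - ')
author = author.replace('a poem by', '').strip()
title = title.strip()
print(title)   # Ode to Autumn
print(author)  # John Keats

# Selector.re() behaves like re.findall() over the extracted text
dates = re.findall(date_pattern, 'Posted 19 September 1819 by the editor')
print(dates)   # ['19 September 1819']
```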

