Scraping Information from Second-Level Link Pages with a Crawler

I. Scrapy environment setup: refer to my earlier post on setting up a virtual environment for the crawler framework.

II. Configuring Scrapy settings

1. Set the user agent

Open the target page in the browser, switch to the developer tools and refresh the page; then select the request for that page in the Network tab, copy the User-Agent value from its request headers, and paste it into settings.py:

    USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36'

2. Switch robots.txt compliance off (you know why)

    ROBOTSTXT_OBEY = False

3. Set the maximum number of concurrent requests

    # Configure maximum concurrent requests performed by Scrapy (default: 16)
    CONCURRENT_REQUESTS = 2

4. Set a download delay so the crawler behaves more like a human visitor and the IP is less likely to be blocked

    DOWNLOAD_DELAY = 3

5. Enable the downloader middleware

    DOWNLOADER_MIDDLEWARES = {
       'xymtest.middlewares.XymtestDownloaderMiddleware': 543,
    }
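
The class registered above lives in xymtest/middlewares.py; the skeleton that scrapy startproject generates is usually enough. A minimal sketch of what such a middleware can look like is shown below (the debug logging is only an illustrative assumption, not something the original project requires):

    class XymtestDownloaderMiddleware:
        # Called for every outgoing request; returning None tells Scrapy to keep
        # processing the request through the remaining middlewares and the downloader.
        def process_request(self, request, spider):
            spider.logger.debug('Downloading %s' % request.url)
            return None

        # Called for every downloaded response; it must return a Response (or a new
        # Request) so that the spider callbacks receive it.
        def process_response(self, request, response, spider):
            return response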

6. Enable the item pipeline

    ITEM_PIPELINES = {
       'xymtest.pipelines.XymtestPipeline': 300,
    }
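
The registered XymtestPipeline sits in xymtest/pipelines.py and receives every item the spider yields. As a rough sketch (writing to a JSON-lines file named items.jl is my own assumption; the database storage mentioned at the end of this post comes later), it could look like this:

    import json

    class XymtestPipeline:
        # Open the output file when the spider starts (the file name is an assumption).
        def open_spider(self, spider):
            self.file = open('items.jl', 'w', encoding='utf-8')

        def close_spider(self, spider):
            self.file.close()

        # Called for every item yielded by the spider; the item must be returned so
        # that any later pipelines can continue processing it.
        def process_item(self, item, spider):
            self.file.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
            return item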

7. Uncomment the last few lines of settings.py (the HTTP cache settings)

    HTTPCACHE_ENABLED = True
    HTTPCACHE_EXPIRATION_SECS = 0
    HTTPCACHE_DIR = 'httpcache'
    HTTPCACHE_IGNORE_HTTP_CODES = []
    HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
III. Writing the spider code

1. Define the items to scrape

    import scrapy

    class XymtestItem(scrapy.Item):
        # define the fields for your item here like:
        title = scrapy.Field()
        content = scrapy.Field()

2. Create a .py file under the spiders directory and write the spider code.

    import scrapy
    from scrapy import Request

    from xymtest.items import XymtestItem


    class testInformation(scrapy.Spider):
        name = 'test'
        # Only the domain itself goes here -- do not append the user-id path segment.
        allowed_domains = ['blog.test.net']
        # The list pages share the prefix https://blog.test.net/u42/article/list/ followed by a
        # page number: .../list/1 is the first page, .../list/2 the second, and so on. An int
        # cannot be concatenated to a string, so the page number is converted with str().
        start_urls = ['https://blog.test.net/u42/article/list/' + str(x) for x in range(1, 4)]

        def parse(self, response):
            # XPath of the first title:  //*[@id="mainBox"]/main/div[2]/div[1]/h4/a
            # XPath of the second title: //*[@id="mainBox"]/main/div[2]/div[2]/h4/a
            # The common part is //*[@id="mainBox"]/main/div[2]; only the div index after it
            # changes, so //*[@id="mainBox"]/main/div[2]/div selects every title block at once.
            li_list = response.xpath('//*[@id="mainBox"]/main/div[2]/div')

            # Loop over all title blocks; xq is one entry of the list.
            for xq in li_list:
                item = XymtestItem()
                # Relative to each block, the title text sits at h4/a/text(), i.e. the part of
                # //*[@id="mainBox"]/main/div[2]/div[n]/h4/a/text() after the common prefix.
                item_list = xq.xpath('h4/a/text()').extract()
                # The first text node is blank, so the title is the second one; checking the
                # length first avoids an IndexError on blocks that yield no usable text.
                if len(item_list) > 1:
                    # strip() removes the surrounding whitespace.
                    item['title'] = item_list[1].strip()
                    # The href of each title points to the second-level (detail) page.
                    url = xq.xpath('h4/a/@href').extract()[0]
                    # Request(url, meta={'item': item}, callback=self.parse_detail) hands the
                    # partially filled item over to the second-level callback.
                    yield Request(url, meta={'item': item}, callback=self.parse_detail)

        def parse_detail(self, response):
            item = response.meta['item']
            # XPath of the content to scrape on the second-level page.
            item['content'] = response.xpath('//*[@id="mainBox"]/main/div[1]/div[2]/div/div/span/text()').extract()[0]

            yield item
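
Once this file is saved in the project's spiders/ directory, the crawl can be started from the project root with scrapy crawl test, where test is the name defined in the spider class.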
Alright, the code above implements a basic two-level link crawl. The next step is to store the scraped data in a database so we can actually use it. Want to know more? Keep following this blog!
