Scrapy crawler: scraping doctor information from a plastic surgery website

The business department at my company needed the doctor information listed on a website, so I was asked to scrape it.
URL: https://www.010yt.com/doc/
Since I had already learned Scrapy, I used it for this project.
The first step is to install the packages the project needs from the command line (cmd):

    pip install scrapy
    pip install requests
    pip install pymysql

These are the three packages this project depends on.
Once they are installed, create a Scrapy project:

    scrapy startproject <project_name>

    cd <project_name>

    scrapy genspider <spider_name> <domain>

The commands above are the template for creating a Scrapy project.
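After these commands finish, Scrapy generates its standard project skeleton; for this project (project name hos, spider name doctor) it looks roughly like this:

    hos/
        scrapy.cfg            # deploy configuration
        hos/
            __init__.py
            items.py          # item definitions
            middlewares.py    # spider / downloader middlewares
            pipelines.py      # item pipelines
            settings.py       # project settings
            spiders/
                __init__.py
                doctor.py     # the spider created by genspider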

Then open the project in PyCharm.
Once it is imported, the first thing to do is adjust the configuration file (settings.py).
For the details of what to change, I followed this article: scrapy案例.
Here is my settings.py:

    # Scrapy settings for hos project
    #
    # For simplicity, this file contains only settings considered important or
    # commonly used. You can find more settings consulting the documentation:
    #
    #     https://docs.scrapy.org/en/latest/topics/settings.html
    #     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
    #     https://docs.scrapy.org/en/latest/topics/spider-middleware.html
    
    BOT_NAME = 'hos'
    
    SPIDER_MODULES = ['hos.spiders']
    NEWSPIDER_MODULE = 'hos.spiders'
    
    
    # Crawl responsibly by identifying yourself (and your website) on the user-agent
    USER_AGENT = 'Mozilla/5.0'
    
    # Obey robots.txt rules
    ROBOTSTXT_OBEY = False
    
    # Configure maximum concurrent requests performed by Scrapy (default: 16)
    CONCURRENT_REQUESTS = 100
    
    # Configure a delay for requests for the same website (default: 0)
    # See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
    # See also autothrottle settings and docs
    DOWNLOAD_DELAY = 0
    # The download delay setting will honor only one of:
    CONCURRENT_REQUESTS_PER_DOMAIN = 100
    CONCURRENT_REQUESTS_PER_IP = 100
    
    # Disable cookies (enabled by default)
    COOKIES_ENABLED = False
    
    # Disable Telnet Console (enabled by default)
    #TELNETCONSOLE_ENABLED = False
    
    # Override the default request headers:
    DEFAULT_REQUEST_HEADERS = {
      'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
      'Accept-Language': 'en',
    'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36',
    }
    
    # Enable or disable spider middlewares
    # See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
    #SPIDER_MIDDLEWARES = {
    #    'hos.middlewares.HosSpiderMiddleware': 543,
    #}
    
    # Enable or disable downloader middlewares
    # See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
    #DOWNLOADER_MIDDLEWARES = {
    #    'hos.middlewares.HosDownloaderMiddleware': 543,
    #}
    
    # Enable or disable extensions
    # See https://docs.scrapy.org/en/latest/topics/extensions.html
    #EXTENSIONS = {
    #    'scrapy.extensions.telnet.TelnetConsole': None,
    #}
    
    # Configure item pipelines
    # See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
    ITEM_PIPELINES = {
       'hos.pipelines.HosPipeline': 300,
    }
    
    # Enable and configure the AutoThrottle extension (disabled by default)
    # See https://docs.scrapy.org/en/latest/topics/autothrottle.html
    #AUTOTHROTTLE_ENABLED = True
    # The initial download delay
    #AUTOTHROTTLE_START_DELAY = 5
    # The maximum download delay to be set in case of high latencies
    #AUTOTHROTTLE_MAX_DELAY = 60
    # The average number of requests Scrapy should be sending in parallel to
    # each remote server
    #AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
    # Enable showing throttling stats for every response received:
    #AUTOTHROTTLE_DEBUG = False
    
    # Enable and configure HTTP caching (disabled by default)
    # See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
    #HTTPCACHE_ENABLED = True
    #HTTPCACHE_EXPIRATION_SECS = 0
    #HTTPCACHE_DIR = 'httpcache'
    #HTTPCACHE_IGNORE_HTTP_CODES = []
    #HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

Next, define the fields you want to scrape in items.py.
Again, here is the code:

    # Define here the models for your scraped items
    #
    # See documentation in:
    # https://docs.scrapy.org/en/latest/topics/items.html

    import scrapy


    class HosItem(scrapy.Item):
        # define the fields for your item here like:
        # name = scrapy.Field()
        docname = scrapy.Field()     # doctor name
        sex = scrapy.Field()         # gender
        zhicheng = scrapy.Field()    # professional title
        college = scrapy.Field()     # school / alma mater
        nianxian = scrapy.Field()    # years of practice
        tsxiangmu = scrapy.Field()   # featured procedures
        zxxiangmu = scrapy.Field()   # procedures text from the profile page
        docjianjie = scrapy.Field()  # doctor bio
        city = scrapy.Field()        # city
        hos = scrapy.Field()         # hospital

With items.py done, the actual scraping logic goes into the spider file (<spider_name>.py, which is doctor.py in this project):

    import scrapy
    from ..items import HosItem


    class DoctorSpider(scrapy.Spider):
        name = 'doctor'
        allowed_domains = ['www.010yt.com']
        start_urls = ['https://www.010yt.com/doc/show-21.html']
        offset = 21

        def parse(self, response):
            items = HosItem()

            # The doctor's name is only present on real doctor pages, so use it
            # (together with the hospital link) to decide whether this id holds a record.
            docname = response.xpath('normalize-space(/html/body/div[3]/div[1]/h1/text())').get()
            hos_link = response.xpath('/html/body/div[3]/div[1]/div[4]/div[1]/ul/li[4]/span/a/@href').get()

            if docname and hos_link:
                items["docname"] = docname
                items["sex"] = ''
                items["zhicheng"] = response.xpath('normalize-space(/html/body/div[3]/div[1]/div[4]/div[1]/ul/li[1]/span/text())').get()
                items["college"] = ''
                items["nianxian"] = response.xpath('normalize-space(/html/body/div[3]/div[1]/div[4]/div[1]/ul/li[6]/span/text())').get()
                items["tsxiangmu"] = response.xpath('normalize-space(/html/body/div[3]/div[1]/div[4]/div[1]/ul/li[3]/span/text())').get()
                items["zxxiangmu"] = response.xpath('normalize-space(/html/body/div[3]/div[1]/div[4]/div[2]/p[2]/text())').get()
                items["docjianjie"] = response.xpath('normalize-space(/html/body/div[3]/div[1]/div[4]/div[2]/p[1]/text())').get()
                items["city"] = response.xpath('normalize-space(/html/body/div[3]/div[1]/div[4]/div[1]/ul/li[5]/span/text())').get()
                items["hos"] = response.xpath('normalize-space(/html/body/div[3]/div[1]/div[4]/div[1]/ul/li[4]/span/a/text())').get()
                yield items

            # Whether or not this page held a record, request the next detail
            # page until the upper bound of ids is reached.
            if self.offset < 18000:
                self.offset += 1
                url = 'https://www.010yt.com/doc/show-{}.html'.format(self.offset)
                yield scrapy.Request(url=url, callback=self.parse)
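One thing worth noting (this is my own sketch, not part of the original project): because each page only schedules the next id after it has been parsed, the crawl stays essentially serial even with CONCURRENT_REQUESTS set to 100. Generating all the detail-page URLs up front in start_requests lets Scrapy fetch them in parallel:

    import scrapy


    class DoctorSpider(scrapy.Spider):
        name = 'doctor'
        allowed_domains = ['www.010yt.com']

        def start_requests(self):
            # id range 21..17999 taken from the original offset logic
            for i in range(21, 18000):
                yield scrapy.Request(
                    url='https://www.010yt.com/doc/show-{}.html'.format(i),
                    callback=self.parse)

        def parse(self, response):
            # extraction logic identical to the version above
            ...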

After the data has been extracted, it is handed over to the pipeline for processing.
In the pipeline I store the data in a local MySQL database.
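The pipeline code below assumes a table named doc already exists in the mydbb database. Something along these lines would create it; the column types are my own guess, since the original schema is not shown:

    import pymysql

    con = pymysql.connect(host='localhost', user='root', password='', db='mydbb', charset='utf8')
    with con.cursor() as cur:
        # create the target table with one column per item field
        cur.execute("""
            CREATE TABLE IF NOT EXISTS doc (
                id INT AUTO_INCREMENT PRIMARY KEY,
                city VARCHAR(100),
                docname VARCHAR(100),
                sex VARCHAR(10),
                zhicheng VARCHAR(100),
                college VARCHAR(200),
                nianxian VARCHAR(50),
                tsxiangmu TEXT,
                zxxiangmu TEXT,
                docjianjie TEXT,
                hos VARCHAR(200)
            ) DEFAULT CHARSET=utf8
        """)
    con.commit()
    con.close()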

    # Define your item pipelines here
    #
    # Don't forget to add your pipeline to the ITEM_PIPELINES setting
    # See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html


    # useful for handling different item types with a single interface
    from itemadapter import ItemAdapter

    import pymysql


    class HosPipeline:
        def process_item(self, item, spider):
            print(item)
            # connect to the local MySQL database and insert one row per item
            con = pymysql.connect(host='localhost', user='root', password='', db='mydbb', charset='utf8')
            sql = "INSERT INTO doc(city,docname,sex,zhicheng,college,nianxian,tsxiangmu,zxxiangmu,docjianjie,hos) VALUES(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)"
            cursor = con.cursor()
            cursor.execute(sql, (item['city'], item['docname'], item['sex'], item['zhicheng'], item['college'],
                                 item['nianxian'], item['tsxiangmu'], item['zxxiangmu'], item['docjianjie'], item['hos']))

            con.commit()
            cursor.close()
            print("cursor closed")
            con.close()
            print("connection closed")

            return item
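Opening and closing a new connection for every item works, but over roughly 18,000 pages it is wasteful. A variant I would consider (my own sketch, same table and columns as above) opens the connection once per crawl using Scrapy's open_spider/close_spider hooks:

    import pymysql


    class HosPipeline:
        def open_spider(self, spider):
            # one connection for the whole crawl
            self.con = pymysql.connect(host='localhost', user='root', password='',
                                       db='mydbb', charset='utf8')
            self.cursor = self.con.cursor()

        def process_item(self, item, spider):
            sql = ("INSERT INTO doc(city,docname,sex,zhicheng,college,nianxian,"
                   "tsxiangmu,zxxiangmu,docjianjie,hos) "
                   "VALUES(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)")
            self.cursor.execute(sql, (item['city'], item['docname'], item['sex'],
                                      item['zhicheng'], item['college'], item['nianxian'],
                                      item['tsxiangmu'], item['zxxiangmu'],
                                      item['docjianjie'], item['hos']))
            self.con.commit()
            return item

        def close_spider(self, spider):
            self.cursor.close()
            self.con.close()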

Finally, the main file has to be created by hand; it is used to start the crawl programmatically:

    from scrapy import cmdline
    
    cmdline.execute('scrapy crawl doctor'.split())
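This is equivalent to running the crawl from the project directory on the command line:

    scrapy crawl doctor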

Then just run main.py and the crawl starts.
