First create a project from the command line (e.g. `scrapy startproject cnblog`, matching the `cnblog` package imported below), then write each of the following files in turn.

The items file:

```python
# -*- coding: utf-8 -*-
import scrapy


class CnblogItem(scrapy.Item):
    title = scrapy.Field()
    link = scrapy.Field()
```

The spider file:

```python
# -*- coding: utf-8 -*-
import scrapy
from cnblog.items import CnblogItem


class CnblogSpider(scrapy.Spider):
    name = "cnblog"
    allowed_domains = ["cnblogs.com"]
    url = 'https://www.cnblogs.com/sitehome/p/'
    offset = 1
    start_urls = [url + str(offset)]

    def parse(self, response):
        item = CnblogItem()
        item['title'] = response.xpath('//a[@class="titlelnk"]/text()').extract()
        item['link'] = response.xpath('//a[@class="titlelnk"]/@href').extract()
        yield item
        print("Page {0} crawled".format(self.offset))
        if self.offset < 15:
            self.offset += 1
            url2 = self.url + str(self.offset)
            print(url2)
            yield scrapy.Request(url=url2, callback=self.parse)
```

The pipelines file:

```python
class FilePipeline(object):
    def process_item(self, item, spider):
        data = ''
        with open('cnblog.txt', 'a', encoding='utf-8') as f:
            titles = item['title']
            links = item['link']
            for i, j in zip(titles, links):
                data += i + ' ' + j + '\n'
            f.write(data)
        return item
```

Update the settings file:

```python
# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'User-Agent': "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36",
}
ITEM_PIPELINES = {
    'cnblog.pipelines.FilePipeline': 300,  # enable the pipeline that saves results to the txt file
}
```

Write a main file. Scrapy cannot be debugged directly from the IDE, but you can write a main file and run it, then debug the code in the IDE like a normal project:

```python
# -*- coding: utf-8 -*-
from scrapy import cmdline

# --nolog runs the crawl without showing the log; remove it if you need the detailed output
cmdline.execute("scrapy crawl cnblog --nolog".split())
```
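If you prefer to stay entirely inside Python while debugging, a common alternative to the cmdline-based main file above is Scrapy's CrawlerProcess API. The sketch below is not from the original post; the spider module path `cnblog.spiders.cnblog_spider` is an assumption and should be adjusted to wherever you actually saved `CnblogSpider`.

```python
# -*- coding: utf-8 -*-
# A minimal sketch, assuming this file also sits in the project root (next to
# scrapy.cfg): run the spider in-process instead of shelling out via cmdline,
# which makes it easy to set breakpoints inside parse().
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

from cnblog.spiders.cnblog_spider import CnblogSpider  # hypothetical module path, adjust as needed

if __name__ == '__main__':
    process = CrawlerProcess(get_project_settings())  # picks up settings.py, including ITEM_PIPELINES
    process.crawl(CnblogSpider)
    process.start()  # blocks until the crawl finishes
```

Run it the same way as the main file, with `python` from the project root.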

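As a side note on the pipeline design: the FilePipeline above reopens cnblog.txt for every item it receives. A hedged variant, not part of the original post, uses Scrapy's `open_spider`/`close_spider` hooks so the file is opened once per crawl; the file name and item fields are the same as in the pipeline shown earlier.

```python
# A sketch of an alternative FilePipeline that keeps one file handle open
# for the whole crawl instead of reopening the file per item.
class FilePipeline(object):
    def open_spider(self, spider):
        self.f = open('cnblog.txt', 'a', encoding='utf-8')

    def close_spider(self, spider):
        self.f.close()

    def process_item(self, item, spider):
        for title, link in zip(item['title'], item['link']):
            self.f.write(title + ' ' + link + '\n')
        return item
```

Either version is registered the same way through ITEM_PIPELINES in settings.py.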