I have recently been working through Prof. Song Tian's MOOC "Python Web Crawler and Information Extraction". These are my notes.

Week 1: The Requests library

First, install the requests library. It is as simple as `pip install requests`. A minimal example of the library:

```python
import requests

url = 'http://www.baidu.com'
r = requests.get(url)
print(r.status_code)
r.encoding = 'utf-8'
print(r.text)
```

(Important) Exception handling in the Requests library and the use of the params argument.

1. Example: crawling JD.com

Crawling a JD product page:

```python
import requests

url = 'https://item.jd.com/100006713417.html'
try:
    r = requests.get(url)
    print(r.status_code)
    print(r.text[:1000])
except:
    print('An exception occurred')
```

The result is clearly not the information we want: the links in the response point to JD's login page.

2. Solving the problem

After searching around, I found that JD performs source review on incoming requests; this can be worked around by modifying the headers and cookie parameters. To find the cookie value: open the page, press F12, switch to the Network tab, refresh the page, and locate the corresponding request, as shown in the screenshot.

```python
import requests

url = 'https://item.jd.com/100006713417.html'
cookiestr = 'unpl=V2_ZzNtbRAHQ0ZzDk9WKBlbDWJXQF5KBBYRfQ0VBHhJWlEyABBaclRCFnQUR11nGlUUZwYZWEdcRxxFCEVkexhdBGAAE19BVXMlRQtGZHopXAFvChZVRFZLHHwJRVRyEVQDZwQRWENncxJ1AXZkMEAaDGAGEVxHVUARRQtDU34dXjVmMxBcQ1REHXAPQ11LUjIEKgMWVUtTSxN0AE9dehpcDG8LFF1FVEYURQhHVXoYXAJkABJtQQ%3d%3d;__jdv=122270672|mydisplay.ctfile.com|t_1000620323_|推光|ca1b7783b1694ec29bd594ba2a7ed236|1598597100230;__jdu=15985970988021899716240;shshshfpa=7645286e-aab6-ce64-5f78-039ee4cc7f1e-1598597100;areaId=22;ipLoc-djd=22-1930-49324-0;PCSYCityID=CN_510000_510100_510116;shshshfpb=uxViv6Hw0rcSrj5Z4lZjH4g%3D%3D;__jdc=122270672;__jda=122270672.15985970988021899716240.1598597098.1598597100.1599100842.2;shshshfp=f215b3dcb63dedf2e335349645cbb45e;3AB9D23F7A4B3C9B=4BFMWHJNBVGI6RF55ML2PWUQHGQ2KQMS4KJIAGEJOOL3ESSN35PFEIXQFE352263KVFC2JIKWUJHDRXXMXGAAANAPA;shshshsID=2f3061bf1cc51a3f6162742028f11a80_5_1599101419724;__jdb=122270672.11.15985970988021899716240|2.1599100842;wlfstk_smdl=mwti16fwg6li5o184teuay0iftfocdez'
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.14 Safari/537.36 Edg/83.0.478.13",
    "cookie": cookiestr
}
try:
    r = requests.get(url=url, headers=headers)
    r.raise_for_status()
    r.encoding = r.apparent_encoding
    print(r.text[:1000])
except:
    print('Crawl failed')
```

A much simpler user-agent works as well:

```python
import requests

kv = {'user-agent': 'Mozilla/5.0'}
url = "https://item.jd.com/100006713417.html"
try:
    r = requests.get(url, headers=kv)
    r.encoding = r.apparent_encoding
    r.raise_for_status()
    print(r.text[:1000])
except:
    print('Error')
```
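The reason the plain user-agent swap works can be inspected without touching the network: requests exposes the headers it would send through its Session and PreparedRequest objects. A minimal sketch (the URL is the JD product page from the example above; no request is actually sent, since `prepare_request` only builds the request object):

```python
import requests

url = 'https://item.jd.com/100006713417.html'

s = requests.Session()
# By default, requests announces itself honestly, which is exactly what
# source review rejects:
print(s.headers['User-Agent'])  # e.g. 'python-requests/2.x.x'

# A User-Agent set at request level overrides the session default
req = requests.Request('GET', url, headers={'User-Agent': 'Mozilla/5.0'})
prepared = s.prepare_request(req)
print(prepared.headers['User-Agent'])  # 'Mozilla/5.0'
```

So the server never learns that the client is a Python script; it only sees whatever User-Agent string we merged in.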
Both approaches get us the result we wanted.

2. Example: crawling Amazon

```python
import requests

url = "https://www.amazon.cn/dp/B072C3KZ48/ref=sr_1_5?keywords=Elizabeth+Arden+%E4%BC%8A%E4%B8%BD%E8%8E%8E%E7%99%BD%E9%9B%85%E9%A1%BF&qid=1599103843&sr=8-5"
try:
    r = requests.get(url)
    print(r.status_code)
    print(r.encoding)
    print(r.request.headers)
    r.encoding = r.apparent_encoding
    print(r.text[:5000])
except:
    print('Error')
```

The same problem appears. As the instructor explains, Amazon also performs source review: without modifying the headers parameter, the request announces itself to Amazon's server as coming from the python-requests library, hence the error. The fix is the same as above: change the user-agent. Result: there is still a problem, and the hint in the response suggests it may be cookie-related, so we look up the page's cookie, add it to headers, and the problem is solved.

3. Crawling images

What should we do when we want to crawl an image from a web page? An image link is simply a URL ending in .jpg whose content is the picture. Following the instructor's hints I wrote the code below (the beginning of the image URL was lost from my notes; only its tail survives):

```python
import os
import requests

url = '...2818876915&fm=26&gp=0.jpg'  # URL truncated in the original notes
root = 'd:/pics/'
path = root + url.split('/')[-1]
try:
    if not os.path.exists(root):
        os.makedirs(root)
    if not os.path.exists(path):
        r = requests.get(url)
        with open(path, 'wb') as f:
            f.write(r.content)
        print('File saved successfully')
    else:
        print('File already exists')
except:
    print('Crawl error')
```

Here os is imported to check whether the directory and file already exist; the image is saved as expected.

Automatic lookup of an IP address's location

I ran into a small problem with this exercise. The code first:

```python
import requests

url_1 = 'https://www.ip138.com/iplookup.asp?ip=112.44.101.245&action=2'
ip_address = input('Please input your ip address')
url = url_1 + ip_address + '&action=2'
if ip_address:
    try:
        r = requests.get(url)
        print(r.status_code)
        print(r.text[-500:])
    except:
        print('error')
else:
    print('ip address cannot be empty')
```

The program kept raising errors, so I removed the try/except block to see where the problem was. Sure enough, it was source review again: the error indicates that the ip138 site inspects the request source, so I tried modifying the user-agent parameter in headers. With a different user-agent it worked:

```python
import requests

url_1 = 'https://www.ip138.com/iplookup.asp?ip='
ip_address = input('Please input your ip address')
kv = {'user-agent': 'chrome/5.0'}
url = url_1 + ip_address + '&action=2'
if ip_address:
    try:
        r = requests.get(url, headers=kv)
        print(r.status_code)
        r.encoding = r.apparent_encoding
        print(r.text)
    except:
        print('error')
else:
    print('ip address cannot be empty')
```

Week 2: The BeautifulSoup library

1. Install the BeautifulSoup library

In CMD: `pip install beautifulsoup4`

2. Fetch a page's source with the requests library

```python
import requests

r = requests.get("https://python123.io/ws/demo.html")
print(r.text)
```

3. Using the bs4 library

```python
import requests
from bs4 import BeautifulSoup

r = requests.get("https://python123.io/ws/demo.html")
demo = r.text
soup = BeautifulSoup(demo, 'html.parser')
print(soup.prettify())
```
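As a quick offline way to try the parser, BeautifulSoup can also be fed an HTML string directly. The markup below is a made-up miniature of the demo page, not the real one:

```python
from bs4 import BeautifulSoup

# Tiny hand-written HTML so the parser can be exercised without a network
html = ('<html><body><p class="title"><b>The demo python page</b></p>'
        '<a href="http://example.com/1" id="link1">Basic Python</a>'
        '</body></html>')

soup = BeautifulSoup(html, 'html.parser')
print(soup.p.string)            # .string descends through the single <b> child
print(soup.a['href'])           # attributes are accessed like a dict
print(len(soup.find_all('a')))  # find_all returns a list of matching tags
```

The same navigation (tag attribute access, `.string`, `find_all`) works identically on the fetched demo page.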
4. Basic components of the bs4 library

I got a little lost here, mainly because before HTML I did not know much about the three main forms of information organization: YAML, XML, and JSON. (I had previously built a blog with YAML.)

5. Example: crawling the Chinese university rankings and extracting the information from the page

The find_all(str) method, combined with the regular-expression library re: `re.compile('b')` passed to find_all matches all tags whose names begin with 'b'. The find_all logic is wrapped into three clearly structured functions; the code and results are below:

```python
from bs4 import BeautifulSoup
import bs4
import requests

def getHTMLText(url):
    try:
        r = requests.get(url, timeout=30)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return r.text
    except:
        return ""

def fillUniList(ulist, html):
    soup = BeautifulSoup(html, 'html.parser')
    for tr in soup.find('tbody').children:
        if isinstance(tr, bs4.element.Tag):
            tds = tr('td')
            # note: my first version stored tds[3] (a Tag) without .string
            ulist.append([tds[0].string, tds[1].string, tds[2].string, tds[3].string])

def printUniList(ulist, num):
    print("{:^10}\t{:^6}\t{:^10}\t{:^10}".format('排名', '学校名称', '省份', '总分'))
    for i in range(num):
        u = ulist[i]
        print("{:^10}\t{:^6}\t{:^10}\t{:^10}".format(u[0], u[1], u[2], u[3]))

def main():
    uinfo = []
    url = 'http://www.zuihaodaxue.com/zuihaodaxuepaiming2018.html'
    html = getHTMLText(url)
    fillUniList(uinfo, html)
    printUniList(uinfo, 20)

main()
```

The columns turn out not to be aligned; the fix is to use the format function with an appropriate fill character to align the output.

Week 3: Regular expressions

1. Syntax

The main functions of the re library: search, match, findall, split, finditer, sub, compile.
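The seven functions listed above can be tried on a small example. A quick sketch (the sample string and patterns are my own, not from the course):

```python
import re

text = 'PYANB31CM 100081, TSU45 100084'

# compile builds a reusable pattern object; here: a run of 5-6 digits
pat = re.compile(r'\d{5,6}')

print(re.search(r'\d{6}', text).group(0))   # first match anywhere: '100081'
print(re.match(r'[A-Z]+', text).group(0))   # match is anchored at the start: 'PYANB'
print(pat.findall(text))                    # every match: ['100081', '100084']
print(re.split(r',\s*', text))              # split on commas
print([m.group(0) for m in pat.finditer(text)])  # iterator of match objects
print(re.sub(r'\d{6}', '******', text))     # replace every 6-digit run
```

The key distinction to remember is that match only succeeds at the beginning of the string, while search scans the whole string for the first occurrence.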
