Straight to the requirements and the code.
First, the link and page to scrape: http://211.81.31.34/uhtbin/cgisirsi/x/0/0/57/49?user_id=LIBSCI_ENGI&password=LIBSC
After logging in, go to My Account, then Checkouts/Holds/Requests, then Checkout History to find the content we want to scrape.
The goal is to store each record's title, author, checkout date, return date, and call number from the checkout history into a MongoDB database. That is the whole requirement.
Let's begin.
Software versions used:
- Python 2.7.11
- MongoDB 3.2.1
- PyCharm 5.0.4
- MongoDB Management Studio 1.9.3
- 360 Speed Browser (version not checked)
1. The login module
Logging in from Python 2 is usually done with the urllib and urllib2 modules. First, inspect the page source:
- <form name="loginform" method="post" action="/uhtbin/cgisirsi/?ps=nPdFje4RP9/理工大學(xué)館/125620449/303">
- <!-- Copyright (c) 2004, Sirsi Corporation - myProfile login or view myFavorites -->
- <!-- Copyright (c) 1998 - 2003, Sirsi Corporation - Sets the default values for USER_ID, ALT_ID, and PIN prompts. - The USER_ID, ALT_ID, and PIN page variables will be returned. -->
- <!-- If the user has not logged in, first try to default to the ID based on the IP address - the $UO and $Uf will be set. If that fails, then default to the IDs in the config file. If the user has already logged in, default to the logged in user's IDs, unless the user is a shared login. -->
- <!-- only user ID is used if both on -->
- <div class="user_name">
- <label for="user_id">借閱證號(hào)碼:</label>
- <input class="user_name_input" type="text" name="user_id" id="user_id" maxlength="20" value=""/>
- </div>
- <div class="password">
- <label for="password">個(gè)人密碼:</label>
- <input class="password_input" type="password" name="password" id="password" maxlength="20" value=""/>
- </div>
- <input type="submit" value="用戶登錄" class="login_button"/>
Locate the action of the page's form element; the method is POST. But we soon notice that the action URL is not fixed: it changes randomly, and a refresh turns it into something like this:

    <form name="loginform" method="post" action="/uhtbin/cgisirsi/?ps=1Nimt5K1Lt/理工大學(xué)館/202330426/303">

The string between /?ps= and the following / changes randomly on every load, so we need another module, BeautifulSoup, to fetch the current action URL at runtime:
- url = "http://211.81.31.34/uhtbin/cgisirsi/x/0/0/57/49?user_id=LIBSCI_ENGI&password=LIBSC"
- res = urllib2.urlopen(url).read()
- soup = BeautifulSoup(res, "html.parser")
- login_url = "http://211.81.31.34" + soup.findAll("form")[1]['action'].encode("utf8")
After that, urllib and urllib2 can simulate the login as usual (see the sketch after this list). Here are the BeautifulSoup methods we will need for the HTML parsing later:
1. soup.contents: returns a tag's child nodes as a list
2. soup.children: a generator for iterating over a tag's child nodes
3. soup.parent: gets an element's parent node
4. soup.find_all(name, attrs, recursive, text, **kwargs): searches all descendant tags of the current tag and filters them against the given conditions
5. soup.find_all("a", class_="xx"): searching by CSS class (note the trailing underscore; class is a Python keyword)
6. soup.find(name, attrs, recursive, text, **kwargs): equivalent to find_all with limit=1, except that it returns the first matching element itself rather than a list
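Putting the login pieces together, here is a minimal sketch (the credentials are placeholders; the flow matches the snippets above):

    # -*- coding: utf-8 -*-
    # Minimal login sketch (Python 2). YOUR_ID / YOUR_PASSWORD are placeholders;
    # everything else follows the steps described above.
    import urllib
    import urllib2
    from bs4 import BeautifulSoup

    base = "http://211.81.31.34"
    entry = base + "/uhtbin/cgisirsi/x/0/0/57/49?user_id=LIBSCI_ENGI&password=LIBSC"

    # Step 1: fetch the login page and read the randomized form action.
    soup = BeautifulSoup(urllib2.urlopen(entry).read(), "html.parser")
    login_url = base + soup.findAll("form")[1]["action"].encode("utf8")

    # Step 2: POST the credentials to the freshly scraped action URL.
    params = urllib.urlencode({"user_id": "YOUR_ID", "password": "YOUR_PASSWORD"})
    home_html = urllib2.urlopen(urllib2.Request(login_url, params)).read()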
2. Parsing the returned HTML
First, look at the structure of the HTML we need to handle:
- <tbody id="tblSuspensions">
- <!-- OCLN changed Listcode to Le to support charge history -->
- <!-- SIRSI_List Listcode="LN" -->
- <tr>
- <td class="accountstyle" align="left">
- <!-- SIRSI_Conditional IF List_DC_Exists="IB" AND NOT List_DC_Comp="IB^" -->
- <!-- Start title here -->
- <!-- Title -->
- 做人要低調(diào),說話要幽默 孫郡鎧編著
- </td>
- <td class="accountstyle author" align="left">
- <!-- Author -->
- 孫郡鎧 編著
- </td>
- <td class="accountstyle due_date" align="center">
- <!-- Date Charged -->
- 2015/9/10,16:16
- </td>
- <td class="accountstyle due_date" align="left">
- <!-- Date Returned -->
- 2015/9/23,15:15
- </td>
- <td class="accountstyle author" align="center">
- <!-- Call Number -->
- B821-49/S65
- </td>
- </tr>
- <tr>
- <td class="accountstyle" align="left">
- <!-- SIRSI_Conditional IF List_DC_Exists="IB" AND NOT List_DC_Comp="IB^" -->
- <!-- Start title here -->
- <!-- Title -->
- 我用一生去尋找 潘石屹的人生哲學(xué) 潘石屹著
- </td>
- <td class="accountstyle author" align="left">
- <!-- Author -->
- 潘石屹, 1963- 著
- </td>
- <td class="accountstyle due_date" align="center">
- <!-- Date Charged -->
- 2015/9/10,16:16
- </td>
- <td class="accountstyle due_date" align="left">
- <!-- Date Returned -->
- 2015/9/25,15:23
- </td>
- <td class="accountstyle author" align="center">
- <!-- Call Number -->
- B821-49/P89
- </td>
- </tr>
In all that markup, note this line:

    <tbody id="tblSuspensions">

This tag marks where the borrowing records start, so we iterate over the children of the element with id="tblSuspensions":
    for i, k in enumerate(BeautifulSoup(detail, "html.parser").find(id='tblSuspensions').children):
        # keep only real tags; skip the whitespace and comment nodes in between
        if isinstance(k, element.Tag):
            bookhtml.append(k)
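To see why the isinstance(k, element.Tag) filter is needed: the whitespace and HTML comments between the <tr> tags also count as children, appearing as NavigableString and Comment nodes. A tiny standalone demo:

    # Demo: a tag's children include whitespace/comment nodes, not just tags.
    from bs4 import BeautifulSoup, element

    snippet = '<tbody id="t"><!-- SIRSI_List --> <tr><td>x</td></tr></tbody>'
    for k in BeautifulSoup(snippet, "html.parser").find(id="t").children:
        # prints: Comment False, NavigableString False, Tag True
        print type(k).__name__, isinstance(k, element.Tag)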
3. Extracting the fields we need
This step is straightforward; BeautifulSoup (from bs4) extracts them easily:
    for i in bookhtml:
        name = i.find(class_="accountstyle").getText()                           # title
        author = i.find(class_="accountstyle author", align="left").getText()    # author
        Date_Charged = i.find(class_="accountstyle due_date", align="center").getText()
        Date_Returned = i.find(class_="accountstyle due_date", align="left").getText()
        bookid = i.find(class_="accountstyle author", align="center").getText()  # call number
        bookinfo.append(
            [name.strip(), author.strip(), Date_Charged.strip(), Date_Returned.strip(), bookid.strip()])
Here getText() extracts the text content, and strip() removes leading and trailing whitespace while keeping interior spaces: for s = " a a ", s.strip() yields "a a".
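As a self-contained illustration, here is the same extraction run against the first sample row from section 2 (a sketch; the field values are copied from the HTML above):

    # -*- coding: utf-8 -*-
    # Run the extraction from above against one sample row (Python 2 + bs4).
    from bs4 import BeautifulSoup

    row = """<tr>
      <td class="accountstyle" align="left"> 做人要低調(diào),說話要幽默 孫郡鎧編著 </td>
      <td class="accountstyle author" align="left"> 孫郡鎧 編著 </td>
      <td class="accountstyle due_date" align="center"> 2015/9/10,16:16 </td>
      <td class="accountstyle due_date" align="left"> 2015/9/23,15:15 </td>
      <td class="accountstyle author" align="center"> B821-49/S65 </td>
    </tr>"""

    i = BeautifulSoup(row, "html.parser")
    print i.find(class_="accountstyle").getText().strip()                           # title
    print i.find(class_="accountstyle author", align="left").getText().strip()      # author
    print i.find(class_="accountstyle due_date", align="center").getText().strip()  # charged
    print i.find(class_="accountstyle due_date", align="left").getText().strip()    # returned
    print i.find(class_="accountstyle author", align="center").getText().strip()    # call number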
4. Connecting to the database
NoSQL is supposedly the coming thing, so I picked MongoDB for the novelty, and it turned out to be quite a hassle. The installation steps are covered in my previous post.
1. Import the Python driver for MongoDB:

    import pymongo

2. Create the connection between Python and MongoDB:

    # connect to the local MongoDB server: database "book", collection "book"
    conn = pymongo.MongoClient("mongodb://root:root@localhost:27017")
    db = conn.book
    collection = db.book
3. Save the scraped content into the database:

    # one document per borrowed book, keyed by student ID plus a running number
    user = {"_id": xuehao_ben,
            "Bookname": name.strip(),
            "Author": author.strip(),
            "Rent_Day": Date_Charged.strip(),
            "Return_Day": Date_Returned.strip()}
    j += 1
    collection.insert(user)
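A quick way to verify that the writes landed (a minimal sketch, assuming the same connection string as above):

    # Sanity check: count the stored documents and read one back.
    import pymongo

    conn = pymongo.MongoClient("mongodb://root:root@localhost:27017")
    collection = conn.book.book
    print collection.count()     # number of stored records
    print collection.find_one()  # one sample document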
That completes the basics, but a scraper that stops there is pointless; the interesting part is below.
5. Fetching the whole school's borrowing records
Everyone at our school's library has the same default password; presumably nobody is bored enough to change it, and probably nobody has even used this site to check their own records. So a loop over student IDs can fetch the entire school's borrowing history. It wasn't quite that simple, though: I wanted str(0001) to turn the number into a zero-padded string, but such a literal errors out in the command-line Python (at the 1), and PyCharm silently drops the three leading zeros (in Python 2 a leading 0 marks an octal literal, so the zeros vanish; in Python 3 the literal is a syntax error). I fell back on four brute-force nested for loops; a cleaner zero-padding alternative is sketched below, followed by the complete script.
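For reference, a single counter with zero-padding enumerates the same IDs without nested loops (a sketch; the actual script below keeps the four loops):

    # Zero-pad one counter instead of concatenating four digits.
    # str(0001) just gives "1" in Python 2 (octal literal), so format instead:
    for n in range(0, 6000):      # "0000" .. "5999", same range as the loops below
        xuehao = "%04d" % n       # str(n).zfill(4) works too
        if xuehao == "0000":
            continue              # the script below also skips 0000
        # xunhuan(xuehao)         # call into the crawl routine defined below

Now the complete script: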
    # encoding=utf8
    import urllib
    import urllib2
    import pymongo
    import socket
    from bs4 import BeautifulSoup
    from bs4 import element

    # connect to the database
    conn = pymongo.MongoClient("mongodb://root:root@localhost:27017")
    db = conn.book
    collection = db.book

    # crawl one student's borrowing history
    def xunhuan(xuehao):
        try:
            socket.setdefaulttimeout(60)  # give the slow school server a full minute
            # fetch the login page and scrape the randomized form action
            url = "http://211.81.31.34/uhtbin/cgisirsi/x/0/0/57/49?user_id=LIBSCI_ENGI&password=LIBSC"
            res = urllib2.urlopen(url).read()
            soup = BeautifulSoup(res, "html.parser")
            login_url = "http://211.81.31.34" + soup.findAll("form")[1]['action'].encode("utf8")
            # log in (account prefix and password redacted)
            params = {
                "user_id": "<account prefix redacted>" + xuehao,
                "password": "<password redacted>"
            }
            print params
            params = urllib.urlencode(params)
            req = urllib2.Request(login_url, params)
            jieyue_res = urllib2.urlopen(req).read()  # home page HTML after login
            # the second 'rootbar' link leads to "My Account"
            houmian = BeautifulSoup(jieyue_res, "html.parser").find_all('a', class_='rootbar')[1]['href']
            houmian = urllib.quote(houmian.encode('utf8'))
            url_myaccount = "http://211.81.31.34" + houmian
            myaccounthtml = urllib.urlopen(url_myaccount).read()
            detail_url = ''
            print "connected; starting to scrape"
            # pull the link out of each 'gatelist_table' entry;
            # the checkout-history link ends up in detail_url
            for i in BeautifulSoup(myaccounthtml, "html.parser").find_all('ul', class_='gatelist_table')[0].children:
                if isinstance(i, element.NavigableString):
                    continue
                for ii in i.children:
                    detail_url = ii['href']
                    break
            detail_url = "http://211.81.31.34" + urllib.quote(detail_url.encode('utf8'))
            detail = urllib.urlopen(detail_url).read()
            bookhtml = []
            bookinfo = []
            # handle accounts that have never borrowed a book
            try:
                for i, k in enumerate(BeautifulSoup(detail, "html.parser").find(id='tblSuspensions').children):
                    if isinstance(k, element.Tag):
                        bookhtml.append(k)
                j = 1
                for i in bookhtml:
                    name = i.find(class_="accountstyle").getText()
                    author = i.find(class_="accountstyle author", align="left").getText()
                    Date_Charged = i.find(class_="accountstyle due_date", align="center").getText()
                    Date_Returned = i.find(class_="accountstyle due_date", align="left").getText()
                    bookid = i.find(class_="accountstyle author", align="center").getText()
                    bookinfo.append(
                        [name.strip(), author.strip(), Date_Charged.strip(), Date_Returned.strip(), bookid.strip()])
                    xuehao_ben = str(xuehao) + "_" + str(j)
                    user = {"_id": xuehao_ben,
                            "Bookname": name.strip(),
                            "Author": author.strip(),
                            "Rent_Day": Date_Charged.strip(),
                            "Return_Day": Date_Returned.strip()}
                    j += 1
                    collection.insert(user)
            except Exception, ee:
                print ee
                print "this user has never borrowed a book"
                user = {"_id": xuehao,
                        "Bookname": "this user",
                        "Author": "has never",
                        "Rent_Day": "borrowed",
                        "Return_Day": "a book"}
                collection.insert(user)
            print "********" + str(xuehao) + "_Finish" + "**********"
        except Exception, e:
            print e
            print "socket timed out; retrying"
            xunhuan(xuehao)

    # brute-force enumeration of four-digit student IDs (first digit 0 to 5)
    for i1 in range(0, 6):
        for i2 in range(0, 10):
            for i3 in range(0, 10):
                for i4 in range(0, 10):
                    xueha = str(i1) + str(i2) + str(i3) + str(i4)
                    if xueha == '0000':
                        print "=======crawl started=========="
                    else:
                        print xueha + "begin"
                        xunhuan(xueha)
    conn.close()
    print "End!!!"
Here is part of what MongoDB Management Studio displayed (screenshot omitted).
To sum up: this scraper hit a lot of problems and I asked a lot of people for help, but the end result is still not ideal. Even with the try/except blocks I still get error 10060, a connection timeout (I can only blame the school's server, TT). Also, you can see that the field order varies between documents in the database. I haven't fully worked out why; suggestions are welcome. Most likely it is because Python 2 dicts are unordered, so pymongo sends the fields in arbitrary order; a possible fix is sketched below.
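If the field order matters, one option is to hand pymongo an ordered mapping instead of a plain dict; bson.son.SON, which ships with pymongo, preserves insertion order. A sketch, as a drop-in replacement for the insert step in the script above:

    # Sketch: preserve field order by using an ordered mapping instead of a dict.
    from bson.son import SON

    user = SON([("_id", xuehao_ben),
                ("Bookname", name.strip()),
                ("Author", author.strip()),
                ("Rent_Day", Date_Charged.strip()),
                ("Return_Day", Date_Returned.strip())])
    collection.insert(user)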
That is all for this post; I hope it helps with your studies.