Python Crawler Package BeautifulSoup: Examples (Part 3)
Let's build a crawler step by step to scrape jokes from Qiushibaike (糗事百科). We start without the BeautifulSoup package and parse the page by hand. Note that all the code below is Python 2 (it relies on the urllib2 module).

Step 1: request the URL and fetch the page source
# -*- coding: utf-8 -*-
# @Author: HaonanWu
# @Date: 2016-12-22 16:16:08
# @Last Modified by: HaonanWu
# @Last Modified time: 2016-12-22 20:17:13
import urllib2

if __name__ == '__main__':
    # Request the URL and fetch the page source
    url = 'http://www.qiushibaike.com/textnew/page/1/?s=4941357'
    user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36'
    headers = {'User-Agent': user_agent}
    try:
        request = urllib2.Request(url=url, headers=headers)
        response = urllib2.urlopen(request)
        content = response.read()
    except urllib2.HTTPError as e:  # HTTPError is a subclass of URLError, so catch it first
        print e
        exit()
    except urllib2.URLError as e:
        print e
        exit()

    print content.decode('utf-8')
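The snippet above only runs on Python 2, where urllib2 exists. As a rough sketch (not part of the original article), the same fetch step in Python 3 would use urllib.request, which absorbed urllib2; the URL and headers below are taken from the article and the site may no longer be reachable:

```python
# Python 3 sketch of the same fetch step (urllib2 was merged into urllib.request).
import urllib.request
import urllib.error

url = 'http://www.qiushibaike.com/textnew/page/1/?s=4941357'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                         'AppleWebKit/537.36 (KHTML, like Gecko) '
                         'Chrome/54.0.2840.99 Safari/537.36'}

def fetch(url, headers):
    """Return the decoded page source, or None on HTTP/URL errors."""
    request = urllib.request.Request(url, headers=headers)
    try:
        with urllib.request.urlopen(request) as response:
            return response.read().decode('utf-8')
    except urllib.error.HTTPError as e:   # raised for 4xx/5xx status codes
        print(e)
    except urllib.error.URLError as e:    # raised for DNS/connection failures
        print(e)
    return None
```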
Step 2: extract the content with a regular expression

First, look at the page source to work out where the content you need lives and what identifies it, then write a regular expression to match and extract it. Note that by default . in a regular expression cannot match \n, so you have to set the re.S flag.
# -*- coding: utf-8 -*-
# @Author: HaonanWu
# @Date: 2016-12-22 16:16:08
# @Last Modified by: HaonanWu
# @Last Modified time: 2016-12-22 20:17:13
import urllib2
import re

if __name__ == '__main__':
    # Request the URL and fetch the page source
    url = 'http://www.qiushibaike.com/textnew/page/1/?s=4941357'
    user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36'
    headers = {'User-Agent': user_agent}
    try:
        request = urllib2.Request(url=url, headers=headers)
        response = urllib2.urlopen(request)
        content = response.read()
    except urllib2.HTTPError as e:
        print e
        exit()
    except urllib2.URLError as e:
        print e
        exit()

    # Extract the data; re.S makes '.' match newlines as well
    regex = re.compile('<div class="content">.*?<span>(.*?)</span>.*?</div>', re.S)
    items = re.findall(regex, content)
    for item in items:
        print item
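To see why re.S matters, here is a tiny self-contained check of the same pattern against a hand-written snippet in the page's format (the HTML below is made up for illustration):

```python
import re

# Minimal stand-in for the page structure the article's regex targets.
html = '''<div class="content">
<span>line one<br/>line two</span>
</div>'''

pattern = '<div class="content">.*?<span>(.*?)</span>.*?</div>'

# Without re.S, '.' stops at newlines, so the match fails.
print(re.findall(pattern, html))        # []

# With re.S, '.' also matches '\n', so the span body is captured.
print(re.findall(pattern, html, re.S))  # ['line one<br/>line two']
```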
Step 3: clean up the data and save it to files
# -*- coding: utf-8 -*-
# @Author: HaonanWu
# @Date: 2016-12-22 16:16:08
# @Last Modified by: HaonanWu
# @Last Modified time: 2016-12-22 21:41:32
import urllib2
import re
import os

if __name__ == '__main__':
    # Request the URL and fetch the page source
    url = 'http://www.qiushibaike.com/textnew/page/1/?s=4941357'
    user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36'
    headers = {'User-Agent': user_agent}
    try:
        request = urllib2.Request(url=url, headers=headers)
        response = urllib2.urlopen(request)
        content = response.read()
    except urllib2.HTTPError as e:
        print e
        exit()
    except urllib2.URLError as e:
        print e
        exit()

    # Extract the data; re.S makes '.' match newlines as well
    regex = re.compile('<div class="content">.*?<span>(.*?)</span>.*?</div>', re.S)
    items = re.findall(regex, content)

    # Save each joke to its own numbered .txt file
    path = './qiubai'
    if not os.path.exists(path):
        os.makedirs(path)
    count = 1
    for item in items:
        # Clean up: drop raw '\n' characters, turn <br/> into real newlines
        item = item.replace('\n', '').replace('<br/>', '\n')
        filepath = path + '/' + str(count) + '.txt'
        f = open(filepath, 'w')
        f.write(item)
        f.close()
        count += 1
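The cleanup step (drop the page's raw newlines, then turn each <br/> into a real line break) can be checked in isolation; the sample string below is invented for illustration:

```python
import os
import tempfile

# A captured <span> body as the regex would return it: an HTML line break
# plus a literal newline left over from the page's own formatting.
item = 'first line<br/>second line\n'

# Drop the raw '\n' characters, then turn each <br/> into a real newline.
item = item.replace('\n', '').replace('<br/>', '\n')
print(item)

# Write it out the same way the script does, one numbered .txt per joke
# (a temp directory stands in for './qiubai' here).
path = tempfile.mkdtemp()
filepath = os.path.join(path, '1.txt')
with open(filepath, 'w') as f:
    f.write(item)
```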
Step 4: scrape the content of multiple pages
# -*- coding: utf-8 -*-
# @Author: HaonanWu
# @Date: 2016-12-22 16:16:08
# @Last Modified by: HaonanWu
# @Last Modified time: 2016-12-22 20:17:13
import urllib2
import re
import os

if __name__ == '__main__':
    path = './qiubai'
    if not os.path.exists(path):
        os.makedirs(path)
    user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36'
    headers = {'User-Agent': user_agent}
    # re.S makes '.' match newlines as well
    regex = re.compile('<div class="content">.*?<span>(.*?)</span>.*?</div>', re.S)
    count = 1
    for cnt in range(1, 35):
        print 'Page ' + str(cnt)
        # Only the page number changes in the URL from page to page
        url = 'http://www.qiushibaike.com/textnew/page/' + str(cnt) + '/?s=4941357'
        try:
            request = urllib2.Request(url=url, headers=headers)
            response = urllib2.urlopen(request)
            content = response.read()
        except urllib2.HTTPError as e:
            print e
            exit()
        except urllib2.URLError as e:
            print e
            exit()
        # Extract the data
        items = re.findall(regex, content)
        # Save each joke to its own numbered .txt file
        for item in items:
            # Clean up: drop raw '\n' characters, turn <br/> into real newlines
            item = item.replace('\n', '').replace('<br/>', '\n')
            filepath = path + '/' + str(count) + '.txt'
            f = open(filepath, 'w')
            f.write(item)
            f.close()
            count += 1
    print 'Done'
Parsing the page source with BeautifulSoup
# -*- coding: utf-8 -*-
# @Author: HaonanWu
# @Date: 2016-12-22 16:16:08
# @Last Modified by: HaonanWu
# @Last Modified time: 2016-12-22 21:34:02
import urllib2
from bs4 import BeautifulSoup

if __name__ == '__main__':
    url = 'http://www.qiushibaike.com/textnew/page/1/?s=4941357'
    user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36'
    headers = {'User-Agent': user_agent}
    request = urllib2.Request(url=url, headers=headers)
    response = urllib2.urlopen(request)

    # BeautifulSoup accepts the response object directly
    soup = BeautifulSoup(response, 'lxml')
    items = soup.find_all("div", class_="content")
    for item in items:
        try:
            content = item.span.string
        except AttributeError as e:  # raised when a div has no <span> child
            print e
            exit()
        if content:
            print content + "\n"
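One detail worth knowing about the .string used above: it returns the text only when a tag has exactly one child; with mixed children (for example a <br/> inside the span) it returns None, which is why the code checks the result before printing. A small illustration on hand-written HTML, assuming bs4 is installed (the markup is invented, and the stdlib html.parser stands in for lxml):

```python
from bs4 import BeautifulSoup

html = ('<div class="content"><span>plain text</span></div>'
        '<div class="content"><span>line one<br/>line two</span></div>')

soup = BeautifulSoup(html, 'html.parser')  # stdlib parser, no lxml needed
items = soup.find_all('div', class_='content')

print(items[0].span.string)          # plain text
print(items[1].span.string)          # None -- the <br/> makes the span multi-child
print(items[1].span.get_text('\n'))  # get_text() still recovers the text
```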
Below is code that uses BeautifulSoup to scrape book titles and prices from Packt. Comparing it with the example above shows how bs4 reads tags and tag contents. (I haven't studied this part properly myself yet, so for now I'm just following the same pattern.)
# -*- coding: utf-8 -*-
# @Author: HaonanWu
# @Date: 2016-12-22 20:37:38
# @Last Modified by: HaonanWu
# @Last Modified time: 2016-12-22 21:27:30
import urllib2
import re
from bs4 import BeautifulSoup

url = "https://www.packtpub.com/all"
try:
    html = urllib2.urlopen(url)
except urllib2.HTTPError as e:
    print e
    exit()

soup_packtpage = BeautifulSoup(html, 'lxml')
all_book_title = soup_packtpage.find_all("div", class_="book-block-title")
price_regexp = re.compile(u"\s+\$\s\d+\.\d+")
for book_title in all_book_title:
    try:
        print "Book's name is " + book_title.string.strip()
    except AttributeError as e:
        print e
        exit()
    # find_next walks forward in document order to the first text node
    # matching the price pattern
    book_price = book_title.find_next(text=price_regexp)
    try:
        print "Book's price is " + book_price.strip()
    except AttributeError as e:
        print e
        exit()
    print ""
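The key call in the book example is find_next(text=...), which scans forward in document order from the title tag until it hits the first text node matching the price pattern. A minimal reproduction on made-up HTML (the markup below is invented and the real Packt page differs; in current bs4 the text= argument is a legacy alias for string=):

```python
import re
from bs4 import BeautifulSoup

# Invented markup imitating a title block followed by a price elsewhere.
html = '''
<div class="book-block-title">Learning Python</div>
<div class="price"> $ 39.99</div>
<div class="book-block-title">Mastering Go</div>
<div class="price"> $ 44.50</div>
'''

soup = BeautifulSoup(html, 'html.parser')
price_regexp = re.compile(r'\s+\$\s\d+\.\d+')

for title in soup.find_all('div', class_='book-block-title'):
    # find_next scans forward from this tag for the first matching text node,
    # so each title picks up the price that follows it.
    price = title.find_next(text=price_regexp)
    print(title.string.strip(), '->', price.strip())
```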
That's all for this article. I hope it helps with your studies, and thank you for supporting 腳本之家.