Preface
This article shows how to extract div tags with BeautifulSoup under Python 3. The example is shared for reference and study; the details follow below.
Sample code:
# -*- coding:utf-8 -*-
# Python 3
# XiaoDeng
# http://tieba.baidu.com/p/2460150866
# Tag operations
from bs4 import BeautifulSoup
import urllib.request
import re

# If you start from a URL, the page can be fetched like this:
# html_doc = "http://tieba.baidu.com/p/2460150866"
# req = urllib.request.Request(html_doc)
# webpage = urllib.request.urlopen(req)
# html = webpage.read()

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" rel="external nofollow" class="sister" id="xiaodeng"><!-- Elsie --></a>,
<a href="http://example.com/lacie" rel="external nofollow" rel="external nofollow" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" rel="external nofollow" class="sister" id="link3">Tillie</a>;
<a href="http://example.com/lacie" rel="external nofollow" rel="external nofollow" class="sister" id="xiaodeng">Lacie</a>
and they lived at the bottom of a well.</p>
<div class="ntopbar_loading"><img src="https://www.herecours.com/d/file/p/2023/0706/20230706172337242419.jpg">加载中…</div>
<div class="SG_connHead">
<span class="title" comp_title="个人资料">个人资料</span>
<span class="edit">
</span>
<div class="info_list">
<ul class="info_list1">
<li><span class="SG_txtc">博客等级:</span><span id="comp_901_grade"><img src="https://www.herecours.com/d/file/p/2023/0706/20230706172337242421.jpg" real_src="https://www.herecours.com/d/file/p/2023/0706/20230706172337242422.jpg" /></span></li>
<li><span class="SG_txtc">博客积分:</span><span id="comp_901_score"><strong>0</strong></span></li>
</ul>
<ul class="info_list2">
<li><span class="SG_txtc">博客访问:</span><span id="comp_901_pv"><strong>3,971</strong></span></li>
<li><span class="SG_txtc">关注人气:</span><span id="comp_901_attention"><strong>0</strong></span></li>
<li><span class="SG_txtc">获赠金笔:</span><strong id="comp_901_d_goldpen">0支</strong></li>
<li><span class="SG_txtc">赠出金笔:</span><strong id="comp_901_r_goldpen">0支</strong></li>
<li class="lisp" id="comp_901_badge"><span class="SG_txtc">荣誉徽章:</span></li>
</ul>
</div>
<div class="atcTit_more"><span class="SG_more"><a href="http://blog.sina.com.cn/" rel="external nofollow" rel="external nofollow" target="_blank">更多>></a></span></div>
<p class="story">...</p>
"""
soup = BeautifulSoup(html, 'html.parser')   # the parsed document object

# Find divs with a given class name (and, optionally, a given text content)
for k in soup.find_all('div', class_='atcTit_more'):   # , string='更多'
    print(k)
    # <div class="atcTit_more"><span class="SG_more"><a href="http://blog.sina.com.cn/" target="_blank">更多>></a></span></div>
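The loop above prints each matching tag in full. If the goal is the data inside the div rather than its markup, standard BeautifulSoup calls such as get_text(), find() and attribute access can be chained onto each result, and select() offers an equivalent CSS-selector spelling. Below is a minimal sketch continuing from the soup object built above; the variable names are illustrative only:

# Continuing from the soup object above: pull text and link targets
# out of each matching div instead of printing the raw tag.
for k in soup.find_all('div', class_='atcTit_more'):
    print(k.get_text())            # visible text only, e.g. 更多>>
    link = k.find('a')             # first <a> tag nested inside the div
    if link is not None:
        print(link.get('href'))    # its href attribute
        print(link.get('target'))  # any other attribute can be read the same way

# The same divs can also be matched with a CSS selector:
for k in soup.select('div.atcTit_more'):
    print(k.get_text())

select() accepts any CSS selector, so deeper structures in the sample page (for example div.info_list li) can be reached in the same way.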
Summary
That is all for this article. I hope it is of some help to your study or work. If you have any questions, feel free to leave a comment, and thank you for supporting 服务器之家.
Original article: http://www.cnblogs.com/dengyg200891/p/6060129.html