Python code for scraping web page information (scraping a website's source code)

Scraping Chinese text from a web page with Python

# -*- coding: utf-8 -*-
# Python 2 example: collect sub-page links from a listing page, then pull product info from each.

import urllib
import re

# Regular expression that limits which sub-page URLs are collected
regex = r'<a href="(.+?)" target="_blank"><strong class="'
pat = re.compile(regex)

url = ""  # the listing-page URL was omitted in the original post

info = urllib.urlopen(url).read()
sub_pages = re.findall(pat, info)

# Extract all product information from each sub-page
regex = r'<td>(.+?)&nbsp;</td>'
pat = re.compile(regex)

for page in sub_pages:
    content = urllib.urlopen(page).read()
    info = re.findall(pat, content)
    print '\n'.join(info)  # try joining with newlines like this
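
The snippet above is Python 2 (urllib.urlopen and the print statement no longer exist in Python 3). Below is a rough Python 3 sketch of the same idea; the listing URL is still left blank because it was omitted in the original, and the regexes are the reconstructed ones from above, so adjust both to the actual target site:

import re
import urllib.request

# Reconstructed patterns from the snippet above; verify them against the real page markup.
link_pat = re.compile(r'<a href="(.+?)" target="_blank"><strong class="')
cell_pat = re.compile(r'<td>(.+?)&nbsp;</td>')

url = ''  # fill in the listing-page URL (omitted in the original)
html = urllib.request.urlopen(url).read().decode('utf-8')  # use the site's real encoding

for sub_page in link_pat.findall(html):
    content = urllib.request.urlopen(sub_page).read().decode('utf-8')
    print('\n'.join(cell_pat.findall(content)))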

How do you extract web page information with Python?

import urllib2  # Python 2 module

page = urllib2.urlopen(url)  # url is the address of the page you want to fetch
contents = page.read()
# contents now holds the whole page, i.e. its source code
print(contents)
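
In Python 3 the same call lives in urllib.request; a minimal sketch, assuming url again holds the page address:

import urllib.request

page = urllib.request.urlopen(url)  # url is the page address, as above
contents = page.read()              # raw bytes of the whole page, i.e. its source code
print(contents)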

How do you scrape web page content with a Python crawler?

Crawler workflow

Viewed abstractly, a web crawler boils down to the following steps.

Request the page. Act like a browser and open the target site.

Fetch the data. Once the site is open, the data we need can be collected automatically.

Save the data. Once we have the data, it needs to be persisted to a local file, a database, or some other storage.

So how do we write our own crawler program in Python? Here I want to highlight one Python library in particular: Requests.

Using Requests

Requests is a Python library for making HTTP requests, and it is very simple and convenient to use.

Simulating HTTP requests

Sending a GET request

When we open the Douban homepage in a browser, the most basic request being sent is a GET request:

import requests

res = requests.get('')  # the Douban homepage URL was omitted in the original

print(res)
print(type(res))

The output is:

<Response [200]>
<class 'requests.models.Response'>
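
Going one step further, the Response object exposes what you typically need next as plain attributes; a minimal sketch, with the Douban homepage URL filled in from the prose above:

import requests

res = requests.get('https://www.douban.com')  # the homepage mentioned above
print(res.status_code)   # 200 on success
print(res.encoding)      # encoding requests inferred from the response headers
print(res.text[:200])    # first 200 characters of the decoded HTML body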

How do you crawl website data with Python?

Here is a brief walkthrough, scraping static and dynamic site data as the two examples. The test environment is Windows 10 + Python 3.6 + PyCharm 5.0. The main content is as follows.

Scraping static site data (data that sits in the page source): Qiushibaike as the example

1. Suppose the data we want covers four fields: user nickname, content, number of laughs, and number of comments.

The corresponding page source contains the data we need, so the fields can be read straight out of the HTML.

2. Matching that page structure, the main code is very simple and uses requests + BeautifulSoup: requests fetches the page and BeautifulSoup parses it. A sketch is shown below.
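
A minimal sketch, assuming the posts sit in div.article blocks and that the nickname, content, and counts live in h2, div.content, and i.number tags; neither these selectors nor the URL come from the original, so check them against the real page source:

import requests
from bs4 import BeautifulSoup

url = 'https://www.qiushibaike.com/text/'   # assumed listing URL
headers = {'User-Agent': 'Mozilla/5.0'}     # many sites reject the default client string

res = requests.get(url, headers=headers, timeout=10)
res.raise_for_status()
soup = BeautifulSoup(res.text, 'html.parser')

for item in soup.find_all('div', class_='article'):                    # assumed post container
    nickname = item.find('h2').get_text(strip=True)                     # user nickname
    content = item.find('div', class_='content').get_text(strip=True)   # post text
    counts = [i.get_text(strip=True) for i in item.find_all('i', class_='number')]
    laughs = counts[0] if counts else ''                                 # number of laughs
    comments = counts[1] if len(counts) > 1 else ''                      # number of comments
    print(nickname, laughs, comments, content[:30])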

Running the program scrapes the data successfully.

Scraping dynamic site data (data that is not in the page source but in a JSON or similar file): Renrendai as the example

1. Suppose we want the loan data, covering five fields: annual interest rate, loan title, term, amount, and progress.

Opening the page source shows that the data is not there; only after pressing F12 and analyzing the network traffic do we find it in a JSON file.

2. Once we have the URL of that JSON file, we can scrape the corresponding data. The packages are similar to the ones above, plus the json package to parse the JSON response. A sketch follows.
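
A minimal sketch under the assumption that the F12 network panel revealed a JSON endpoint; the URL and the field names (annualRate, title, term, amount, progress) are placeholders for illustration rather than values from the original, so replace them with whatever the network panel actually shows:

import json
import requests

url = 'https://www.renrendai.com/loan/list.json'   # placeholder JSON endpoint found via F12
headers = {'User-Agent': 'Mozilla/5.0'}

res = requests.get(url, headers=headers, timeout=10)
res.raise_for_status()
data = json.loads(res.text)                        # equivalently, data = res.json()

for loan in data.get('loans', []):                 # assumed top-level key
    print(loan.get('annualRate'), loan.get('title'),
          loan.get('term'), loan.get('amount'), loan.get('progress'))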

Running the program scrapes the data successfully.

That covers scraping both kinds of data, static and dynamic. Overall, neither example is hard; both are entry-level crawlers against fairly simple page structures. The most important skill is traffic analysis: inspect the requests, pick the page apart, and extract what you need. Once you are comfortable with that, the Scrapy framework makes crawling more convenient and more efficient. If the pages you target are more complex, with CAPTCHAs, encryption, and so on, they call for much more careful analysis; there are tutorials online worth searching for. I hope the material above is helpful.

Looking for Python code that fetches a web page

In Python 3.x, the urllib.request module fetches web pages: urllib.request.urlopen opens the URL and returns a response stream, read() pulls out the raw bytes, and decode() turns those bytes into text using the page's encoding. (The encoding can be found in the page source, e.g. <meta http-equiv="content-type" content="text/html;charset=gbk" />; in the example below it is gbk.) The result is the page's source code.

For example, to fetch the source of this page:

import urllib.request

# The page URL was omitted in the original post.
html = urllib.request.urlopen('').read().decode('gbk')  # decode with the page's own encoding
print(html)
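
If you would rather not hard-code gbk, the response headers usually declare the charset; a small sketch (the URL is a placeholder):

import urllib.request

with urllib.request.urlopen('http://www.example.com/') as resp:   # placeholder URL
    charset = resp.headers.get_content_charset() or 'gbk'         # fall back if no charset is declared
    html = resp.read().decode(charset)
print(html[:200])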

The documentation for the urllib.request.urlopen function follows:

urllib.request.urlopen(url, data=None, [timeout, ]*, cafile=None, capath=None, cadefault=False, context=None)

Open the URL url, which can be either a string or a Request object.

data must be a bytes object specifying additional data to be sent to the server, or None if no such data is needed. data may also be an iterable object and in that case Content-Length value must be specified in the headers. Currently HTTP requests are the only ones that use data; the HTTP request will be a POST instead of a GET when the data parameter is provided.

data should be a buffer in the standard application/x-www-form-urlencoded format. The urllib.parse.urlencode() function takes a mapping or sequence of 2-tuples and returns a string in this format. It should be encoded to bytes before being used as the data parameter. The charset parameter in the Content-Type header may be used to specify the encoding. If the charset parameter is not sent with the Content-Type header, the server following the HTTP 1.1 recommendation may assume that the data is encoded in ISO-8859-1 encoding. It is advisable to use the charset parameter with the encoding used in the Content-Type header with the Request.

The urllib.request module uses HTTP/1.1 and includes a Connection: close header in its HTTP requests.

The optional timeout parameter specifies a timeout in seconds for blocking operations like the connection attempt (if not specified, the global default timeout setting will be used). This actually only works for HTTP, HTTPS and FTP connections.

If context is specified, it must be an ssl.SSLContext instance describing the various SSL options. See HTTPSConnection for more details.

The optional cafile and capath parameters specify a set of trusted CA certificates for HTTPS requests. cafile should point to a single file containing a bundle of CA certificates, whereas capath should point to a directory of hashed certificate files. More information can be found in ssl.SSLContext.load_verify_locations().

The cadefault parameter is ignored.

For http and https urls, this function returns a http.client.HTTPResponse object which has the following HTTPResponse Objects methods.

For ftp, file, and data urls and requests explicitly handled by legacy URLopener and FancyURLopener classes, this function returns a urllib.response.addinfourl object which can work as a context manager and has methods such as:

geturl(): return the URL of the resource retrieved, commonly used to determine if a redirect was followed

info(): return the meta-information of the page, such as headers, in the form of an email.message_from_string() instance (see Quick Reference to HTTP Headers)

getcode(): return the HTTP status code of the response.

Raises URLError on errors.

Note that None may be returned if no handler handles the request (though the default installed global OpenerDirector uses UnknownHandler to ensure this never happens).

In addition, if proxy settings are detected (for example, when a *_proxy environment variable like http_proxy is set), ProxyHandler is default installed and makes sure the requests are handled through the proxy.

The legacy urllib.urlopen function from Python 2.6 and earlier has been discontinued; urllib.request.urlopen() corresponds to the old urllib2.urlopen. Proxy handling, which was done by passing a dictionary parameter to urllib.urlopen, can be obtained by using ProxyHandler objects.

Changed in version 3.2: cafile and capath were added.

Changed in version 3.2: HTTPS virtual hosts are now supported if possible (that is, if ssl.HAS_SNI is true).

New in version 3.2: data can be an iterable object.

Changed in version 3.3: cadefault was added.

Changed in version 3.4.3: context was added.
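
Tying those parameters together, here is a minimal sketch of a POST made with urlopen, using the data and timeout parameters described above; the endpoint httpbin.org/post is just an illustrative echo service, not something from the original answer:

import urllib.parse
import urllib.request

# Form-encode the fields, then encode the string to bytes, as the documentation requires.
payload = urllib.parse.urlencode({'q': 'python'}).encode('utf-8')

# Passing data switches the request from GET to POST; timeout is in seconds.
with urllib.request.urlopen('https://httpbin.org/post', data=payload, timeout=10) as resp:
    print(resp.getcode())                # HTTP status code of the response
    print(resp.read().decode('utf-8'))   # response body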

How do you use Python to scrape a static website and its internal resources?

This is very straightforward: the requests + BeautifulSoup combination handles it easily. Below is a brief walkthrough that interested readers can try for themselves, again using Qiushibaike (a static site) as the example.

1. First, install the requests module. Just type the command "pip install requests" in a cmd window.

2. Next, install the bs4 module, which contains BeautifulSoup. As with requests, just run the install command "pip install bs4".

3. Finally, use requests + BeautifulSoup together to crawl Qiushibaike: requests fetches the page, and BeautifulSoup parses it and extracts the data. The main steps are as follows.

Assume the data to scrape covers these fields: user nickname, content, number of laughs, and number of comments.

Open the corresponding page source and the field information is directly visible, nested inside the various tags; what remains is parsing those tags to pull the data out.

Based on that page content, the test code is very simple: find the matching tags and extract their text. A sketch is given below.
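
A sketch of that test code, this time also persisting the rows to a local CSV file (the "save the data" step from the crawler workflow earlier). As before, the URL and the tag/class names are assumptions about the page markup rather than values from the original:

import csv
import requests
from bs4 import BeautifulSoup

url = 'https://www.qiushibaike.com/text/'                 # assumed listing URL
res = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'}, timeout=10)
soup = BeautifulSoup(res.text, 'html.parser')

rows = []
for item in soup.find_all('div', class_='article'):      # assumed post container
    nickname = item.find('h2').get_text(strip=True)
    content = item.find('div', class_='content').get_text(strip=True)
    counts = [i.get_text(strip=True) for i in item.find_all('i', class_='number')]
    rows.append([nickname, content] + counts[:2])

# Persist the scraped rows so they survive after the program exits.
with open('qiushibaike.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    writer.writerow(['nickname', 'content', 'laughs', 'comments'])
    writer.writerows(rows)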

Running the program scrapes the site data successfully.

At this point we have finished using Python to crawl a static website. Overall the process is very simple and covers the most basic crawling skills; anyone with some Python background who works through the example above will pick it up quickly. Of course, you could also use urllib together with regular-expression matching instead; both approaches work, and there are detailed tutorials and references online if you want to dig deeper. I hope the material shared above is helpful, and comments and additions are welcome.
