Reference
class urlfetch.Response(r, **kwargs)

A Response object.
>>> import urlfetch
>>> response = urlfetch.get("http://docs.python.org/")
>>> response.total_time
0.033042049407959
>>> response.status, response.reason, response.version
(200, 'OK', 10)
>>> type(response.body), len(response.body)
(<type 'str'>, 8719)
>>> type(response.text), len(response.text)
(<type 'unicode'>, 8719)
>>> response.getheader('server')
'Apache/2.2.16 (Debian)'
>>> response.getheaders()
[
    ('content-length', '8719'),
    ('x-cache', 'MISS from localhost'),
    ('accept-ranges', 'bytes'),
    ('vary', 'Accept-Encoding'),
    ('server', 'Apache/2.2.16 (Debian)'),
    ('last-modified', 'Tue, 26 Jun 2012 19:23:18 GMT'),
    ('connection', 'close'),
    ('etag', '"13cc5e4-220f-4c36507ded580"'),
    ('date', 'Wed, 27 Jun 2012 06:50:30 GMT'),
    ('content-type', 'text/html'),
    ('x-cache-lookup', 'MISS from localhost:8080')
]
>>> response.headers
{
    'content-length': '8719',
    'x-cache': 'MISS from localhost',
    'accept-ranges': 'bytes',
    'vary': 'Accept-Encoding',
    'server': 'Apache/2.2.16 (Debian)',
    'last-modified': 'Tue, 26 Jun 2012 19:23:18 GMT',
    'connection': 'close',
    'etag': '"13cc5e4-220f-4c36507ded580"',
    'date': 'Wed, 27 Jun 2012 06:50:30 GMT',
    'content-type': 'text/html',
    'x-cache-lookup': 'MISS from localhost:8080'
}
Raises: ContentLimitExceeded
body

Response body.

Raises: ContentLimitExceeded, ContentDecodingError
content

Response content.

cookies

Cookies in dict.

cookiestring

Cookie string.
classmethod from_httplib(connection, **kwargs)

Make a Response object from an httplib response object.
headers

Response headers.

Response headers is a dict with all keys in lower case.

>>> import urlfetch
>>> response = urlfetch.get("http://docs.python.org/")
>>> response.headers
{
    'content-length': '8719',
    'x-cache': 'MISS from localhost',
    'accept-ranges': 'bytes',
    'vary': 'Accept-Encoding',
    'server': 'Apache/2.2.16 (Debian)',
    'last-modified': 'Tue, 26 Jun 2012 19:23:18 GMT',
    'connection': 'close',
    'etag': '"13cc5e4-220f-4c36507ded580"',
    'date': 'Wed, 27 Jun 2012 06:50:30 GMT',
    'content-type': 'text/html',
    'x-cache-lookup': 'MISS from localhost:8080'
}
json

Load response body as JSON.

Raises: ContentDecodingError
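Conceptually, a json property decodes the raw body and hands it to the standard json module. A minimal sketch of that idea (load_json_body is an illustrative helper, not part of urlfetch; the real property also wraps decode failures in ContentDecodingError):

```python
import json

def load_json_body(body, charset="utf-8"):
    """Decode a raw response body and parse it as JSON.

    Illustrative only: mirrors what a Response.json property can do,
    assuming the body is bytes or str in the given charset.
    """
    if isinstance(body, bytes):
        body = body.decode(charset)
    return json.loads(body)

print(load_json_body(b'{"status": "ok", "count": 2}'))
```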
next()
read(chunk_size=65536)

Read content (for streaming and large files).

Parameters: chunk_size (int) – size of each chunk; default is 65536.
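The streaming pattern behind read() is the usual read-until-empty loop: call read(chunk_size) repeatedly and stop at the first empty chunk. A sketch using an in-memory stream in place of a real response:

```python
import io

def iter_chunks(fileobj, chunk_size=65536):
    """Yield successive chunks from a file-like object until EOF.

    Works with any object exposing read(n), including a Response
    used for streaming large bodies.
    """
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            break
        yield chunk

# Simulate a 150 kB body with an in-memory stream.
body = io.BytesIO(b"x" * 150000)
sizes = [len(c) for c in iter_chunks(body, chunk_size=65536)]
print(sizes)
```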
reason = None

Reason phrase returned by server.

status = None

Status code returned by server.

total_time = None

Total time the request took, in seconds.

version = None

HTTP protocol version used by server: 10 for HTTP/1.0, 11 for HTTP/1.1.
class urlfetch.Session(headers={}, cookies={}, auth=None)

A session object.

urlfetch.Session can hold common headers and cookies. Every request issued by a urlfetch.Session object will carry these headers and cookies. urlfetch.Session plays a role in handling cookies, just like a cookiejar.

Parameters:
- headers (dict) – Initial headers.
- cookies (dict) – Initial cookies.
- auth (tuple) – (username, password) for basic authentication.
cookies

Cookies in dict.

cookiestring

Cookie string.

It's assignable; assigning a new value will change cookies correspondingly.

>>> s = Session()
>>> s.cookiestring = 'foo=bar; 1=2'
>>> s.cookies
{'1': '2', 'foo': 'bar'}
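The two views stay in sync because a cookie string like 'foo=bar; 1=2' and a cookie dict are straightforward to convert between. A sketch of that conversion (illustrative helpers, not urlfetch's internals):

```python
def cookies_from_string(cookiestring):
    """Parse a Cookie header string like 'foo=bar; 1=2' into a dict."""
    cookies = {}
    for pair in cookiestring.split(';'):
        pair = pair.strip()
        if not pair:
            continue
        key, _, value = pair.partition('=')
        cookies[key] = value
    return cookies

def cookies_to_string(cookies):
    """Serialize a cookie dict back into a Cookie header string."""
    return '; '.join('%s=%s' % (k, v) for k, v in sorted(cookies.items()))

print(cookies_from_string('foo=bar; 1=2'))  # {'foo': 'bar', '1': '2'}
```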
headers = None

Headers.

Remove a cookie from the default cookies.

Add a cookie to the default cookies.
urlfetch.request(url, method='GET', params=None, data=None, headers={}, timeout=None, files={}, randua=False, auth=None, length_limit=None, proxies=None, trust_env=True, max_redirects=0, source_address=None, **kwargs)

Request a URL.

Parameters:
- url (string) – URL to be fetched.
- method (string) – (optional) HTTP method, one of GET, DELETE, HEAD, OPTIONS, PUT, POST, TRACE, PATCH. GET is the default.
- params (dict/string) – (optional) Dict or string to attach to the URL as a querystring.
- headers (dict) – (optional) HTTP request headers.
- timeout (float) – (optional) Timeout in seconds.
- files – (optional) Files to be sent.
- randua – (optional) If True or a path string, use a random user-agent in headers instead of 'urlfetch/' + __version__.
- auth (tuple) – (optional) (username, password) for basic authentication.
- length_limit (int) – (optional) If None, no limit on content length; if the limit is reached, an exception is raised ('Content length is more than …').
- proxies (dict) – (optional) HTTP proxies, e.g. {'http': '127.0.0.1:8888', 'https': '127.0.0.1:563'}.
- trust_env (bool) – (optional) If True, urlfetch reads settings such as HTTP_PROXY and HTTPS_PROXY from the environment.
- max_redirects (int) – (optional) Max redirects allowed within a request. Default is 0, which means redirects are not allowed.
- source_address (tuple) – (optional) A (host, port) tuple to specify the source address to bind to. Ignored on Python versions prior to 2.7/3.2.

Returns: A Response object.
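The auth=(username, password) parameter corresponds to standard HTTP Basic authentication: the credentials are base64-encoded into an Authorization header. A sketch of that encoding per RFC 7617 (basic_auth_header is an illustrative helper; urlfetch's internals may differ in detail):

```python
import base64

def basic_auth_header(username, password):
    """Build the Authorization header value implied by auth=(user, pass).

    Standard HTTP Basic auth: base64("user:password"), prefixed
    with the "Basic " scheme token.
    """
    token = base64.b64encode(('%s:%s' % (username, password)).encode('utf-8'))
    return 'Basic ' + token.decode('ascii')

print(basic_auth_header('user', 'passwd'))  # Basic dXNlcjpwYXNzd2Q=
```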
urlfetch.fetch(*args, **kwargs)

Fetch a URL.

fetch() is a wrapper of request(). It calls get() by default. If the parameter data or files is supplied, post() is called.
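The dispatch rule above is simple enough to state as code. A sketch of the documented behavior (pick_method is illustrative, not urlfetch source):

```python
def pick_method(data=None, files=None):
    """Mirror fetch()'s dispatch rule: POST when data or files is
    supplied, otherwise GET."""
    if data or files:
        return 'POST'
    return 'GET'

print(pick_method())                      # GET
print(pick_method(data={'q': 'python'}))  # POST
```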
urlfetch.get(url, params=None, data=None, headers={}, timeout=None, files={}, randua=False, auth=None, length_limit=None, proxies=None, trust_env=True, max_redirects=0, source_address=None, **kwargs)

Issue a GET request.

urlfetch.post(url, params=None, data=None, headers={}, timeout=None, files={}, randua=False, auth=None, length_limit=None, proxies=None, trust_env=True, max_redirects=0, source_address=None, **kwargs)

Issue a POST request.

urlfetch.head(url, params=None, data=None, headers={}, timeout=None, files={}, randua=False, auth=None, length_limit=None, proxies=None, trust_env=True, max_redirects=0, source_address=None, **kwargs)

Issue a HEAD request.

urlfetch.put(url, params=None, data=None, headers={}, timeout=None, files={}, randua=False, auth=None, length_limit=None, proxies=None, trust_env=True, max_redirects=0, source_address=None, **kwargs)

Issue a PUT request.

urlfetch.delete(url, params=None, data=None, headers={}, timeout=None, files={}, randua=False, auth=None, length_limit=None, proxies=None, trust_env=True, max_redirects=0, source_address=None, **kwargs)

Issue a DELETE request.

urlfetch.options(url, params=None, data=None, headers={}, timeout=None, files={}, randua=False, auth=None, length_limit=None, proxies=None, trust_env=True, max_redirects=0, source_address=None, **kwargs)

Issue an OPTIONS request.

urlfetch.trace(url, params=None, data=None, headers={}, timeout=None, files={}, randua=False, auth=None, length_limit=None, proxies=None, trust_env=True, max_redirects=0, source_address=None, **kwargs)

Issue a TRACE request.

urlfetch.patch(url, params=None, data=None, headers={}, timeout=None, files={}, randua=False, auth=None, length_limit=None, proxies=None, trust_env=True, max_redirects=0, source_address=None, **kwargs)

Issue a PATCH request.
Exceptions
Helpers
urlfetch.parse_url(url)

Return a dictionary of the parsed URL, including scheme, netloc, path, params, query, fragment, uri, username, password, host, port and http_host.
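Most of those fields map directly onto what the standard library's urlsplit already exposes. A rough approximation (parse_url_sketch is illustrative; urlfetch's exact semantics, e.g. for uri or http_host with a non-default port, may differ):

```python
from urllib.parse import urlsplit

def parse_url_sketch(url):
    """Approximate parse_url() with the standard library.

    Field names follow the ones listed above; a subset only.
    """
    parts = urlsplit(url)
    return {
        'scheme': parts.scheme,
        'netloc': parts.netloc,
        'path': parts.path,
        'query': parts.query,
        'fragment': parts.fragment,
        'username': parts.username,
        'password': parts.password,
        'host': parts.hostname,
        'port': parts.port,
    }

info = parse_url_sketch('http://user:pwd@example.com:8080/a/b?x=1#top')
print(info['host'], info['port'], info['query'])  # example.com 8080 x=1
```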
urlfetch.random_useragent(filename=True)

Return a random User-Agent string picked from a file.

Parameters: filename (string) – (optional) Path to the file from which a random user-agent is picked. By default it is True, and a file shipped with this module is used.
Returns: A user-agent string.
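The underlying idea is just "pick one line at random from a text file". A sketch under that assumption (random_line is an illustrative helper; the file format of the list shipped with urlfetch is not specified here):

```python
import random
import tempfile

def random_line(path):
    """Pick one non-empty line at random from a text file."""
    with open(path) as f:
        lines = [line.strip() for line in f if line.strip()]
    return random.choice(lines)

# Demo with a throwaway file of made-up user-agent strings.
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as f:
    f.write('agent-a\nagent-b\nagent-c\n')
    path = f.name

print(random_line(path))
```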
urlfetch.url_concat(url, args, keep_existing=True)

Concatenate a URL and an argument dictionary.

>>> url_concat("http://example.com/foo?a=b", dict(c="d"))
'http://example.com/foo?a=b&c=d'

Parameters:
- url (string) – URL to concatenate to.
- args (dict) – Args to concatenate.
- keep_existing (bool) – (optional) Whether to keep the args already in the URL; default is True.
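The same behavior can be sketched with the standard library's query-string helpers (url_concat_sketch is illustrative; the keep_existing=False branch is one interpretation of the flag, not necessarily urlfetch's exact semantics):

```python
from urllib.parse import urlencode, urlsplit, urlunsplit, parse_qsl

def url_concat_sketch(url, args, keep_existing=True):
    """Append args to a URL's querystring.

    With keep_existing=False, existing args whose keys also appear
    in args are dropped before the new ones are appended.
    """
    parts = urlsplit(url)
    existing = parse_qsl(parts.query)
    if keep_existing:
        merged = existing + list(args.items())
    else:
        kept = [(k, v) for k, v in existing if k not in args]
        merged = kept + list(args.items())
    return urlunsplit(parts._replace(query=urlencode(merged)))

print(url_concat_sketch('http://example.com/foo?a=b', {'c': 'd'}))
```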