Using Python to sign into a website, fill in a form, then sign out

import urllib
import urllib2

name = "name field"
data = {"name": name}
encoded_data = urllib.urlencode(data)
content = urllib2.urlopen("http://www.abc.com/messages.php?action=send", encoded_data)
print content.readlines()

Just replace http://www.abc.com/messages.php?action=send with the URL your form is being submitted to. In reply to your comment: if the URL is the URL where your form is located, and you need to … Read more
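The snippet above is Python 2. Under Python 3 the same flow uses urllib.parse and urllib.request instead; a minimal sketch (the URL and field name are placeholders carried over from the answer):

```python
from urllib.parse import urlencode
from urllib.request import urlopen

def encode_form(fields):
    """Encode a dict of form fields as application/x-www-form-urlencoded bytes."""
    return urlencode(fields).encode("ascii")

data = encode_form({"name": "name field"})

# Passing bytes as the second argument makes urlopen() send a POST,
# mirroring the Python 2 snippet above (replace with your form's action URL):
# with urlopen("http://www.abc.com/messages.php?action=send", data) as resp:
#     print(resp.read())
```

Note that in Python 3 the body must be bytes, hence the `.encode("ascii")` step that the Python 2 version did not need.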

Mock exception raised in function using Pytest

You can mock error raising via the side_effect parameter:

Alternatively side_effect can be an exception class or instance. In this case the exception will be raised when the mock is called.

In your case, this can be used like this (assuming call_api is defined in module foo):

import pytest
from unittest.mock import patch

def test_api():
    with … Read more
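A self-contained sketch of the same idea using only unittest.mock (here call_api and ApiError are local stand-ins for the asker's real function and exception, not names from the question):

```python
from unittest.mock import Mock

class ApiError(Exception):
    """Stand-in for whatever exception the real call_api raises."""

# side_effect set to an exception class or instance makes the mock
# raise that exception whenever it is called.
call_api = Mock(side_effect=ApiError("service unavailable"))

def fetch():
    """Caller under test: falls back when the (mocked) API raises."""
    try:
        return call_api()
    except ApiError:
        return "fallback"
```

With pytest the assertion would read `with pytest.raises(ApiError): call_api()`; the Mock behaves identically either way.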

Python ‘requests’ library – define specific DNS?

requests uses urllib3, which ultimately uses httplib.HTTPConnection as well, so the techniques from https://stackoverflow.com/questions/4623090/python-set-custom-dns-server-for-urllib-requests (now deleted, it merely linked to Tell urllib2 to use custom DNS) still apply, to a certain extent. The urllib3.connection module subclasses httplib.HTTPConnection under the same name, having replaced the .connect() method with one that calls self._new_conn. In turn, this delegates … Read more
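One blunt way to exploit that shared httplib/socket plumbing (a sketch of my own, not taken from the linked answers): monkey-patch socket.getaddrinfo so every hostname lookup made by requests/urllib3 first consults your own table. The hostname and IP below are hypothetical.

```python
import socket

# Hypothetical override table: hostname -> IP to use instead of real DNS.
DNS_OVERRIDES = {"example.internal": "127.0.0.1"}

_real_getaddrinfo = socket.getaddrinfo

def patched_getaddrinfo(host, port, *args, **kwargs):
    """Resolve overridden hosts from the table; fall through to real DNS otherwise."""
    host = DNS_OVERRIDES.get(host, host)
    return _real_getaddrinfo(host, port, *args, **kwargs)

socket.getaddrinfo = patched_getaddrinfo
```

Be aware this affects every connection in the process, including libraries you did not intend to touch; scoping it with a context manager is usually wiser.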

urllib.urlretrieve with custom header

I found a way where you only have to add a few extra lines of code:

import urllib.request

opener = urllib.request.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
urllib.request.install_opener(opener)
urllib.request.urlretrieve("type URL here", "path/file_name")

Should you wish to learn the details, you can refer to the Python documentation: https://docs.python.org/3/library/urllib.request.html
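If you would rather not install a process-wide opener, a per-request Request object carries headers too (urlretrieve itself accepts no headers argument, which is why the opener trick above exists). A sketch, with the helper name and URL being my own placeholders:

```python
import shutil
import urllib.request

def retrieve_with_headers(url, path, headers=None):
    """Download url to path, sending custom headers on this request only."""
    req = urllib.request.Request(url, headers=headers or {})
    with urllib.request.urlopen(req) as resp, open(path, "wb") as out:
        shutil.copyfileobj(resp, out)

# The Request object records the header without touching global state:
req = urllib.request.Request("http://example.com/",
                             headers={"User-Agent": "Mozilla/5.0"})
```

Unlike install_opener(), this leaves other code in the same process unaffected.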

How to handle urllib’s timeout in Python 3?

Catch the different exceptions with explicit clauses, and check the reason for the exception with URLError (thank you Régis B. and Daniel Andrzejewski):

from socket import timeout
from urllib.error import HTTPError, URLError

try:
    response = urllib.request.urlopen(url, timeout=10).read().decode('utf-8')
except HTTPError as error:
    logging.error('HTTP Error: Data of %s not retrieved because %s\nURL: %s', name, error, url)
except … Read more
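Rounding out the truncated snippet: a URLError clause can inspect error.reason to distinguish a connect-phase timeout, and a bare socket.timeout clause covers reads that stall after the connection opens. A sketch wrapping it in a function (the function name and return convention are mine):

```python
import logging
import urllib.request
from socket import timeout
from urllib.error import HTTPError, URLError

def fetch(url, timeout_s=10):
    """Return the decoded page body, or None if a timeout or HTTP error occurred."""
    try:
        return urllib.request.urlopen(url, timeout=timeout_s).read().decode("utf-8")
    except HTTPError as error:
        logging.error("HTTP Error %s for %s", error, url)
    except URLError as error:
        # A timeout during connect surfaces as URLError wrapping socket.timeout.
        if isinstance(error.reason, timeout):
            logging.error("Timed out connecting to %s", url)
        else:
            logging.error("URL Error %s for %s", error.reason, url)
    except timeout:
        # A timeout during read() surfaces as a bare socket.timeout.
        logging.error("Socket timed out reading %s", url)
    return None
```

Note that since Python 3.10 socket.timeout is an alias of TimeoutError, so the clauses above keep working across versions.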

How to download any(!) webpage with correct charset in python?

When you download a file with urllib or urllib2, you can find out whether a charset header was transmitted:

fp = urllib2.urlopen(request)
charset = fp.headers.getparam('charset')

You can use BeautifulSoup to locate a meta element in the HTML:

soup = BeautifulSoup.BeautifulSoup(data)
meta = soup.findAll('meta', {'http-equiv': lambda v: v.lower() == 'content-type'})

If neither is available, browsers typically fall back to user … Read more
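In Python 3 the header half of this collapses to one call (`resp.headers.get_content_charset()` on a live response), and a crude regex scan can stand in for BeautifulSoup when it is not installed. A sketch; both helper names are my own:

```python
import re
from email.message import Message

def charset_from_content_type(content_type):
    """Return the charset parameter of a Content-Type header value, if any.

    On a live urlopen() response the equivalent is
    resp.headers.get_content_charset().
    """
    msg = Message()
    msg["Content-Type"] = content_type
    return msg.get_content_charset()

def charset_from_meta(html_bytes):
    """Crude fallback: scan the raw bytes for a <meta ... charset=...> declaration."""
    m = re.search(rb'<meta[^>]+charset=["\']?([\w-]+)', html_bytes, re.I)
    return m.group(1).decode("ascii").lower() if m else None
```

The regex is deliberately loose (it matches both `<meta charset="...">` and the older `http-equiv` form); a real HTML parser remains the robust choice when available.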

What command to use instead of urllib.request.urlretrieve?

Deprecated is one thing; might become deprecated at some point in the future is another. If it suits your needs, I'd continue using urlretrieve. That said, you can use shutil.copyfileobj:

from urllib.request import urlopen
from shutil import copyfileobj

with urlopen(my_url) as in_stream, open('my_filename', 'wb') as out_file:
    copyfileobj(in_stream, out_file)

Error!: SQLSTATE[HY000] [1045] Access denied for user 'divattrend_liink'@'localhost' (using password: YES)