
Requests: What You Need To Build Useful Apps

Today’s article was written by guest author Abdur-Rahmaan Janhangeer. He is an organising member of the Python Mauritius User Group and the Arabic coordinator for the Python docs, and he was kind enough to submit this article to be posted here.

Requests is a popular Python library, particularly praised for its intuitive API. If you are new to Python, it should be a top priority on your learning path. Today we are going to give you some useful building blocks to get started immediately. Have fun!

Requests gets 1M+ downloads per day, yes, per day! That’s really crazy: it’s the lifetime download count of an average package! It has also been used, among other things, to help humanity see the first image of a black hole. Sphinx, Celery and Tornado use it too!

How Do Server Requests Work?

Requests gets its name from the requests made to servers. A server listens for requests for resources. You tell it: “give me a packet of sugar”; it looks for it, and if it finds it, it says “ok, found it!”, else it says “uh oh, not found”. In real life, your browser asks for images, webpages, etc. Here are some types of requests:

  • GET
  • POST
  • PUT

The most used type is the GET request. While opening a normal webpage, your browser makes a GET request.
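Each of these verbs maps onto a helper function in requests (requests.get, requests.post, requests.put). As a small sketch, we can build a request for each method without actually sending anything over the network (example.com is just a placeholder domain):

```python
import requests

# Build (but do not send) one request per HTTP method.
# requests.Request plus .prepare() constructs the request object offline.
for method in ("GET", "POST", "PUT"):
    prepared = requests.Request(method, "https://example.com/").prepare()
    print(prepared.method, prepared.url)
```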

Comparing Normal Code To Requests Code

Here is what you’d do with the standard library to get a byte representation of a webpage (which gives the page source):

import urllib.request

# "https://example.com" is a placeholder; substitute the page you want
content = urllib.request.urlopen("https://example.com").read()

With requests (install it with pip install requests), you do:

import requests as r

req = r.get("https://example.com")  # placeholder URL

req has many attributes to try. Let’s explore some!

Requests Exploration

Try the following in a shell:

>>> import requests as r
>>> req = r.get("https://example.com")  # placeholder URL
>>> req.status_code  # 200 means ok!
>>> req.reason       # e.g. 'OK'
>>> req.text         # the body as text
>>> req.content      # the body as bytes
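If you want to see what the numeric status codes mean without hitting a server, requests ships a lookup table of named codes as part of its public API:

```python
import requests

# Named status codes: handy for readable comparisons like
# req.status_code == requests.codes.ok
print(requests.codes.ok)         # 200
print(requests.codes.not_found)  # 404
```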

Start Cooking: Using Beautiful Soup

Here’s a BeautifulSoup snippet usage with requests:

from bs4 import BeautifulSoup
import requests as r

req = r.get("https://example.com")  # placeholder URL
soup = BeautifulSoup(req.content, 'html.parser')
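To get a feel for what soup gives you, here is a minimal sketch that parses a small in-memory HTML snippet (made up for the example) the same way it would parse req.content:

```python
from bs4 import BeautifulSoup

# A made-up page, standing in for req.content
html = b"<html><body><h1>Hello</h1><p>First paragraph</p></body></html>"
soup = BeautifulSoup(html, "html.parser")
print(soup.h1.text)  # Hello
print(soup.p.text)   # First paragraph
```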

As a side note, if you have a url like this (example.com is a placeholder):

https://example.com/?name=dimitri

you can make a get request like this:

req = r.get('https://example.com/', params={'name': 'dimitri'})

and you will get the same thing.
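You can check that params builds exactly that query string without sending anything, by preparing the request and inspecting the url it would use (example.com is a placeholder):

```python
import requests

# Build the request offline and inspect the URL it would hit.
prepared = requests.Request(
    "GET", "https://example.com/", params={"name": "dimitri"}
).prepare()
print(prepared.url)  # https://example.com/?name=dimitri
```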

Getting JSON data

JSON data is similar to a Python dictionary. Here is a snippet to try out:

import requests as r

req = r.get("https://example.com/api")  # placeholder; the original pointed at a fake-user-data API

It’s an API that generates fake user data. req.json() returns a Python dictionary ready to use.
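Under the hood, req.json() amounts to running a JSON parser over the response body. A sketch with a made-up payload standing in for req.content:

```python
import json

# A made-up JSON body, standing in for req.content
body = b'{"name": "dimitri", "registered": true}'
data = json.loads(body)
print(data["name"])        # dimitri
print(data["registered"])  # True
```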

Download File

Here’s a function that downloads a file to a path of your choice, given its url:

import requests

def download_file(file_url, path):
    r = requests.get(file_url, allow_redirects=True)
    with open(path, 'wb') as f:  # ensures the file is closed properly
        f.write(r.content)

Since r.content returns a bytes object, we write the file in wb (write binary) mode, thus getting our file. It works for any file url. Try it!

Upload File

POST requests are used to send data to the server. A typical POST request looks like this:

r.post('https://example.com/login', data={'name': 'moi', 'password': '1234'})  # placeholder URL

Let’s say that while dissecting a page you find a form that uses POST to upload a file, looking like this:

<form action="/upload" method="POST" enctype="multipart/form-data">
<input type="file" name="file_to_upload" />
<input type="submit" />
</form>

We can write a script like this to upload a file, an image in this case:

import requests as r

files = {'file_to_upload': open('<path_to>/image.png', 'rb')}  # key matches the input's name attribute
req = r.post("https://example.com/upload", files=files)  # placeholder URL

The above snippet is useful if you have to upload to your own custom backend. But in the real world, sites add extra inputs with different name attributes to detect automatic submissions, and you might have to modify it to:

r.post("https://example.com/upload", files=files, data={'<name>': '<value>'})  # placeholder URL
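If you are curious what such a POST actually sends, you can prepare one offline and inspect it (all names and values here are made up; example.com is a placeholder):

```python
import requests

# Prepare (but do not send) a multipart POST with a fake in-memory "file"
# plus an extra form field, then inspect what would go over the wire.
files = {"file_to_upload": ("image.png", b"fake image bytes", "image/png")}
prepared = requests.Request(
    "POST",
    "https://example.com/upload",
    files=files,
    data={"csrf_token": "abc123"},  # made-up extra field
).prepare()
print(prepared.headers["Content-Type"])  # multipart/form-data; boundary=...
```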

Caveats To Look For

It should be borne in mind that requests does not execute the JavaScript on a web page. If you are doing web scraping, there are other tools for that, like Selenium.

Requests is an incredible tool to add to your arsenal; make good use of it!

Thanks for reading this far :)

I'd love to hear what you have to say, so please feel free to leave a comment below, or read the contact page for more ways to get in touch with me.

Note that to get notified when new articles are published, you can either: