Pushing down the response time of your web app but still running into problems with third-party APIs? Requests provides solid tools to rein in the unruly external calls that haunt your web app's response time metrics. In this poster, I will show how to expect failure from these external calls and how to make multiple calls concurrently, improving both the overall speed and the reliability of your app.
Despite all the caching and off-loading, there are still times during a web request when you need to make an HTTP request to another service. This is a killer for web apps, because your app's response time now depends on the response time of another HTTP service.
We have all relied on Facebook’s, Google’s, or Flickr’s APIs to build dynamic web apps, and the easiest place to put those requests is inside your own request/response cycle. But your server’s response time will be directly impacted.
As processing power increases, it is more often open files and connections that take down servers. An external request inside your request/response cycle can leave connections open, decreasing your overall server throughput and ultimately crashing your web server(s).
The Requests library is “HTTP for humans,” but it also provides plenty of methods to make sure your response time does not suffer from slow external (or internal) APIs.
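For example, bounding a request is a single keyword argument. A minimal sketch (the URL below is a stand-in, not a real endpoint):

```python
import requests

# Stand-in URL; replace with the real API endpoint you depend on.
url = "http://localhost:1/weather"

try:
    # Fail after 3 seconds instead of blocking the worker indefinitely.
    response = requests.get(url, timeout=3.0)
    response.raise_for_status()
    data = response.json()
except requests.exceptions.RequestException:
    # Covers timeouts, connection errors, and bad status codes alike.
    data = None
```

Without `timeout`, Requests will wait on a stalled server for as long as the socket stays open, which is exactly how a slow API drags down your own response times.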
We work directly with weather providers and their APIs to display the most up-to-date, personalized weather information. Caching and prerendering are not options; we have to make multiple API requests during the request/response cycle. In the past, this meant a lot of timeouts and a lot of cleanup.
Now, with Requests, we explicitly set a timeout and anticipate failure within a known period of time. In this presentation, I will show how we expect failure, and fail fast, when requesting external APIs: giving the user multiple chances to get the full experience and, when things can’t be recovered, providing a clean backup experience.
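The pattern can be sketched roughly as follows (the endpoint, retry count, timeout, and fallback payload are illustrative assumptions, not our production values):

```python
import requests

# Hypothetical backup payload served when the API can't be reached.
FALLBACK = {"summary": "Weather temporarily unavailable"}

def fetch_forecast(url, retries=2, timeout=1.5):
    """Try a flaky API a few times, then fall back cleanly."""
    for attempt in range(retries):
        try:
            response = requests.get(url, timeout=timeout)
            response.raise_for_status()
            return response.json()
        except requests.exceptions.RequestException:
            continue  # expect failure: retry rather than crash
    return FALLBACK  # unrecoverable: serve the backup experience

result = fetch_forecast("http://localhost:1/forecast")
```

Because every attempt is bounded by the timeout, the worst-case cost of a dead API is known up front instead of being an open connection that lingers indefinitely.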
Requests has a companion feature that was recently split out of the core library: GRequests. It lets you queue up multiple requests and leverage greenlets to issue them concurrently, completing even more requests within an acceptable amount of time.
I will talk about how to combine the responses from these concurrent requests to keep your HTTP responses fast and reliable. I will present the real-life code that lets us handle thousands of requests a minute from an EC2 micro instance, with multiple external API calls during the bulk of those requests.