Optimizing Rate Limiting in NGINX for API Stability

While enhancing our open source project, we encountered issues with rate limiting on the Zoom APIs. After discussing whether this concern was worth addressing, we decided to proceed.

For more information on rate limiting, refer to the Zoom API rate limit documentation, where you'll find a detailed description. Our goal is to implement a solution that helps developers stay within the daily request limits.

For instance, suppose a customer is allowed to make 4 requests per second with a maximum of 2000 requests per day. To simulate this scenario and better understand how to manage it, we explored various approaches and tools. This exploration is crucial because if you exceed these limits when using the Zoom API, you will consistently receive an HTTP 429 “Too Many Requests” error, which we aim to avoid.

To achieve this, I decided to use a standalone WireMock instance and an NGINX server, both configured and managed through a Docker Compose file.

Below is the configuration for my docker-compose.yaml file:

version: '3.8'
services:
  nginx:
    container_name: nginx
    image: nginx:latest
    ports:
      - "8991:80"  
    volumes:
      - ./nginx/conf/nginx.conf:/etc/nginx/nginx.conf  # Mount the custom NGINX config
      - ./nginx/logs:/var/log/nginx  # Mount a directory for NGINX logs
    depends_on:
      - wiremock
    networks:
      - app-network

  wiremock:
    container_name: wiremock
    image: rodolpheche/wiremock
    ports:
      - "8992:8080"
    volumes:
      - ./wiremock:/home/wiremock
    networks:
      - app-network

networks:
  app-network:
    driver: bridge

As you review the configuration, there are several key aspects to note:

NGINX Volumes:

  1. Exposure of NGINX Logs: I use this to monitor current return statuses, which helps in quickly identifying and resolving issues with the server responses.
  2. Exposure of NGINX Configuration: Customization of NGINX is necessary to set up rate limiting for each API. Details of this setup will be discussed later.

Wiremock Volumes:

Exposing Wiremock directories is crucial for flexible and dynamic testing:

  • Mappings: This folder contains all JSON files specifying the APIs you wish to mock. It allows for easy updates and additions to the API endpoints being simulated.
  • __files: This directory holds all JSON responses for the mocked APIs, enabling you to customize the output of each mock request.
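
For illustration, a minimal mapping for a mocked GET /v2/users endpoint could look like the sketch below; the file names (get-users.json under ./wiremock/mappings and users-response.json under ./wiremock/__files) and the stubbed response are assumptions for this example, not part of the actual project:

{
  "request": {
    "method": "GET",
    "urlPath": "/v2/users"
  },
  "response": {
    "status": 200,
    "headers": {
      "Content-Type": "application/json"
    },
    "bodyFileName": "users-response.json"
  }
}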

These configurations are designed to facilitate development and testing by allowing immediate access to logs and easy customization of responses and rate limits.
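
With these files in place, the whole environment can be brought up with Docker Compose. The commands below assume they are run from the project root (use docker-compose instead of docker compose on older installations):

# Start NGINX and WireMock in the background
docker compose up -d

# Check that both containers are running
docker compose ps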

Setting Up Rate Limiting on NGINX

To effectively manage rate limiting, we have configured NGINX as a proxy server positioned in front of a standalone Wiremock. This setup allows us to precisely control the rate of requests directed towards our services.

Understanding NGINX Rate Limiting: NGINX implements rate limiting using the leaky bucket algorithm. This method is particularly effective for managing uneven traffic and smoothing out bursts of incoming requests. For a comprehensive explanation of this algorithm, you can refer to the NGINX documentation on rate limiting.

NGINX Configuration Example: Below is the nginx.conf file we use to implement rate limiting. This configuration demonstrates how we set up NGINX to handle different request rates, ensuring our services remain stable under various load conditions:

# Automatically adjust the number of worker processes
worker_processes auto;  
# Logging configuration
error_log /var/log/nginx/error.log warn;
# PID file location  
pid /var/run/nginx.pid;  

events {
# Max number of simultaneous connections per worker
    worker_connections 1024;  
}

http {
    # Log format
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';  
    # Access log location
    access_log /var/log/nginx/access.log main;  
    # Define a rate limiting zone
    limit_req_zone $binary_remote_addr zone=mylimit:10m rate=4r/s;  

    server {
        # NGINX will listen on port 80
        listen 80; 


        location / {
            # Apply the rate limit
            limit_req zone=mylimit burst=3 nodelay; 
            # Redirect 503 errors to 429 
            error_page 503 =429 /custom_429.html;   
            # Forward the host header
            proxy_set_header Host $host;  
            # Proxy requests to WireMock running on port 8080
            proxy_pass http://wiremock:8080;
        }

        location = /custom_429.html {
            internal;
            default_type text/html;
            return 429 'Too many requests. Please try again later.';
        }
    }
}
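
Whenever you change nginx.conf, it is worth validating the syntax and reloading without restarting the container. With the container name from the Docker Compose file above, that could look like this:

# Validate the mounted configuration inside the running container
docker exec nginx nginx -t

# Reload NGINX so the new configuration takes effect
docker exec nginx nginx -s reload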

It is worth highlighting the most crucial things you should not forget:

  • When you hit the rate limit you have configured, NGINX returns HTTP code 503 Service Unavailable by default. Because of that, we need to rewrite this logic: the error_page 503 =429 /custom_429.html; directive together with a custom location replaces HTTP code 503 with 429:
        location = /custom_429.html {
            internal;
            default_type text/html;
            return 429 'Too many requests. Please try again later.';
        }
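
To quickly verify this rewrite, you can fire a short burst of requests and print only the status codes; anything beyond the configured rate plus burst should now come back as 429 instead of 503. This assumes the stack is running locally and that a WireMock mapping exists for /v2/users, as sketched earlier:

# Send 10 requests back to back and print only the HTTP status codes
for i in $(seq 1 10); do
  curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8991/v2/users
done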

Another thing you need to focus on is the burst parameter in your rate limit setup:

  • first of all, you need to prepare your rate limiting zone:
# Define a rate limiting zone    
limit_req_zone $binary_remote_addr zone=mylimit:10m rate=4r/s;  
  • zone=mylimit:10m: This part of the directive sets up a named zone (mylimit) for storing the states of requests.
    • mylimit: The name assigned to this rate limiting zone.
    • 10m: This specifies how much memory NGINX should allocate for this zone. Here, 10m stands for 10 megabytes, which determines the capacity of the zone to keep track of incoming requests. The amount of memory allocated influences how many client addresses and their corresponding states can be stored. Generally, more memory allows for more client addresses to be tracked.
  • rate=4r/s: This defines the allowed rate of requests that clients can make.
    • 4r/s means four requests per second. This setting restricts each client identified by their IP address to no more than four requests per second, on average.

This is the setup for a rate limit of 4 requests per second, which is the base rate limit on the Zoom API.
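
The burst parameter itself is applied on the limit_req directive inside the location block. As a rough sketch of the two behaviours documented for NGINX: without nodelay, excess requests are queued and released at the configured rate; with nodelay (the variant used in this setup), the burst slots are served immediately and anything beyond them is rejected:

# Queued variant: up to 3 excess requests wait and are released at 4 r/s
limit_req zone=mylimit burst=3;

# Immediate variant (used above): up to 3 excess requests are served right
# away, anything beyond that is rejected with the error status
limit_req zone=mylimit burst=3 nodelay;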

Here are some useful commands for testing this whole setup.

Commands for viewing logs in real time

#for docker 
docker exec -it <nginx-container-id> tail -f /var/log/nginx/access.log

#for the log file exposed into your local folder 
tail -f ./nginx/logs/access.log

Command for testing with ab (Apache HTTP server benchmarking tool)

ab -n 10 -c 5 http://<address_to_nginx>/v2/users
  • ab: This is the command for Apache Benchmark, a tool included with the Apache HTTP server software but commonly used to test any HTTP server.
  • -n 10: This option specifies the total number of requests to make during the benchmarking session. In this case, it’s set to 10 requests.
  • -c 5: This option sets the level of concurrency, meaning the number of multiple requests to perform at a time. In this case, 5 requests are issued concurrently.
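
After an ab run, the exposed access log shows how many requests actually made it through versus how many were rejected. The paths below assume the volume layout from the Docker Compose file; note that grep counts every matching line, so clear or rotate the log between runs if you want exact per-run numbers:

# Count successful requests in the access log
grep -c ' 200 ' ./nginx/logs/access.log

# Count rate-limited requests in the access log
grep -c ' 429 ' ./nginx/logs/access.log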

There is a GitHub repository where you can find the whole setup.

Summary

Here is a summary of how to set up rate limiting on a specific API to avoid having your account locked or disabled, which is crucial to manage carefully. A key aspect to focus on is the burst setting. While it is possible to set burst=1, this affects how many requests successfully pass through the NGINX proxy server. Initially, we encountered a problem where only 2 out of 10 requests were allowed through, despite our target of allowing 4 requests per second: with nodelay, roughly one request plus the number of burst slots is served from a simultaneous batch, so burst=1 lets only 2 requests through, while burst=3 allows the intended 4.

This discrepancy is largely due to the leaky bucket algorithm used by NGINX, which makes it necessary to find the right burst setting if you need to strictly manage throughput and guarantee a specific number of requests per second. Adjusting the burst setting helps align the actual throughput with your intended rate limit, ensuring stable performance and compliance with API usage policies.
