Dump the code

Nginx reverse proxy cache

Created 10 months ago
Posted By admin
10min read
Adding caching to a reverse proxy in Nginx can help improve performance by serving cached content instead of fetching it from the backend server every time a request is made.

Configuring the cache

The proxy_cache_path directive is used within the http block to define the location and parameters of the cache storage for the proxy module. When a request is made to a backend server, Nginx can store the response in the cache to improve performance and reduce the load on the upstream server.

The proxy_cache_path directive typically looks like this:

proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m use_temp_path=off;
  • /path/to/cache: This is the path to the directory where Nginx will store the cached data.
  • levels=1:2: This parameter defines the subdirectory hierarchy of the cache. With 1:2, Nginx creates two levels of directories: the first named after one character of the cache key's MD5 hash, and the second after the next two characters (e.g. /path/to/cache/c/29/...). This keeps any single directory from holding too many files.
  • keys_zone=my_cache:10m: This parameter defines the shared memory zone that holds the cache keys and metadata. The zone is named my_cache and has a size of 10 megabytes (one megabyte can hold roughly 8,000 keys).
  • max_size=10g: This parameter sets the maximum size of the cache. In this example, the maximum size is 10 gigabytes.
  • inactive=60m: This parameter defines how long a cached item may go unrequested before it is removed from the cache, regardless of whether it has expired. In this example, it's set to 60 minutes.
  • use_temp_path=off: This parameter tells Nginx to write temporary files directly into the cache directory rather than to a separate temporary location first, avoiding an unnecessary copy of the response data.

By configuring proxy_cache_path appropriately, you can control the caching behavior of Nginx when acting as a reverse proxy. Cached content can significantly improve response times for frequently requested resources and reduce the load on the backend servers.

Setting up the cache behavior

The proxy_cache directive and related caching directives are typically used within a location block inside a server block. This is where you specify the conditions under which caching should occur, as well as the caching behavior itself.

http {
    proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m use_temp_path=off;

    server {
        location / {
            proxy_pass http://backend_server;
            proxy_cache my_cache; 
            proxy_cache_key $host$uri$is_args$args;
            add_header X-Proxy-Cache $upstream_cache_status;
        }
    }
}

  • proxy_cache: Enables caching for a specific location.
  • proxy_cache_key: Defines the key used for caching. In this example, it's based on the host, URI, and query string.
  • add_header X-Proxy-Cache: Adds a custom header to the response indicating the cache status.

When a client makes a request, the proxy_cache directive enables caching for the location, using the cache zone named my_cache to store copies of responses. The proxy_pass directive defines the backend server to which requests that cannot be answered from the cache are forwarded.

The proxy_cache_key directive plays a crucial role in uniquely identifying cached items by concatenating the host name, URI, and query string arguments. For example, a request for http://example.com/page?x=1 produces the key example.com/page?x=1.

Before reaching out to the backend server, Nginx checks this cache using the generated key. If the requested resource is found (cache hit), Nginx serves the cached content directly to the client, reducing response times and minimizing the load on the backend server.

In case of a cache miss, the request is forwarded to the backend server, and subsequent responses meeting the caching criteria are stored in the cache for future use.

This configuration strikes a balance between performance optimization and ensuring that cached content aligns with the current state of the backend resources.
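The cache key also determines what counts as "the same" resource. As a sketch (reusing the my_cache zone and backend_server upstream from the example above; the $scheme variant is an alternative, not part of the original configuration), including $scheme in the key keeps HTTP and HTTPS responses in separate cache entries:

location / {
    proxy_pass http://backend_server;
    proxy_cache my_cache;
    # Adding $scheme means http:// and https:// requests for the same path
    # get distinct cache entries instead of sharing one.
    proxy_cache_key $scheme$host$uri$is_args$args;
    add_header X-Proxy-Cache $upstream_cache_status;
}

For reference, when proxy_cache_key is not set, Nginx's default key is $scheme$proxy_host$request_uri.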

Cache control

The proxy_cache_valid directive is used to control caching behavior for responses from proxied servers. It specifies the time during which the response is considered valid and can be served from the cache without contacting the upstream server again.
For example, if you have the following configuration:

proxy_cache_valid 200 302 10m;
This means that responses with HTTP status codes 200 and 302 will be cached and considered valid for 10 minutes. After 10 minutes, the cached entry is considered expired, and the next request for the resource causes Nginx to fetch a fresh copy from the upstream server.

It's important to note that proxy_cache_valid only affects the caching duration. It doesn't control when the cache is refreshed or when a stale cache is served while a new response is fetched from the upstream server. To control those aspects, you may need additional directives such as proxy_cache_use_stale, proxy_cache_background_update, and others.
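Multiple proxy_cache_valid lines can set different validity periods per status code; the durations below are illustrative:

proxy_cache_valid 200 302 10m;
proxy_cache_valid 404      1m;
proxy_cache_valid any      5m;

The special value any applies to all remaining response codes, and if only a time is given (e.g. proxy_cache_valid 10m;), it applies to 200, 301, and 302 responses.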

Refresh the cache

A cache refresh occurs when the validity period specified by proxy_cache_valid expires. When a client requests a resource, Nginx checks its cache to see if a valid (not expired) copy of the resource is available. If the resource is still valid according to the proxy_cache_valid settings, Nginx serves it directly from the cache without contacting the upstream server.

If the cached content has expired, Nginx will initiate a request to the upstream server to fetch a fresh copy of the resource. Once the new response is obtained, it replaces the expired content in the cache, and subsequent requests for the same resource will be served from the updated cache.

Serve a stale cache

Whether a stale cache entry is served while a new response is fetched from the upstream server depends on the configuration of the proxy_cache_use_stale directive.
The proxy_cache_use_stale directive allows you to control under what conditions Nginx can serve stale content while it's in the process of fetching a new response from the upstream server.

Here is an example of how you might use proxy_cache_use_stale:

location / {
    proxy_pass http://backend;
    proxy_cache my_cache;
    proxy_cache_valid 200 302 10m;
    proxy_cache_use_stale error timeout updating http_500 http_502;
    proxy_cache_background_update on;
}
In this example, the proxy_cache_use_stale directive lists error, timeout, updating, and specific HTTP status codes (http_500, http_502). This means that Nginx will serve stale content in the following scenarios:

1. If an error occurs while connecting to, or reading the response from, the upstream server (error).
2. If the upstream server takes too long to respond (timeout).
3. While a background update is in progress (updating).
4. If the upstream server returns one of the specified HTTP status codes.

In the updating case, because proxy_cache_background_update is on, Nginx serves the stale content immediately while fetching a fresh copy from the upstream server in the background; once the new response is obtained, it replaces the stale content in the cache. In the error, timeout, and status-code cases, the stale copy is served because a fresh response could not be obtained.

Revalidate cache

The proxy_cache_revalidate directive in Nginx is used to control whether the cached content should be revalidated with the origin server before serving it to clients. When proxy_cache_revalidate is set to on, Nginx will revalidate the cached content by sending conditional requests to the origin server based on certain conditions.

Here's a breakdown of the behavior when proxy_cache_revalidate is set to on:

1. Conditional requests:
   - When a client requests a resource, Nginx checks if the resource is already in the cache.
   - If the resource is in the cache, Nginx sends a conditional request to the origin server to check if the cached content is still valid.

2. Conditional GET requests:
   - The conditional request is a GET carrying an If-Modified-Since and/or If-None-Match header.
   - If the origin server responds with a "Not Modified" (304) status code, it means the cached content is still valid, and Nginx serves the cached content to the client.

3. If-Modified-Since and If-None-Match Headers:
   - The If-Modified-Since header is used if the cached content has a Last-Modified timestamp.
   - The If-None-Match header is used if the cached content has an ETag.

4. Freshness check:
   - The purpose of this process is to ensure that the cached content is still fresh and hasn't changed on the origin server since it was cached.
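As an illustration (the resource path, timestamp, and ETag below are hypothetical), a revalidation round-trip looks roughly like this:

# Nginx -> origin: revalidating an expired cache entry
GET /style.css HTTP/1.1
Host: backend_server
If-Modified-Since: Tue, 02 Jan 2024 10:00:00 GMT
If-None-Match: "abc123"

# Origin -> Nginx: content unchanged, so the cached copy is reused
HTTP/1.1 304 Not Modified

A 304 response carries no body, so the bandwidth cost of revalidation is much lower than refetching the full resource.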

Here's an example of how to use proxy_cache_revalidate:

proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m;

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend_server;
        proxy_cache my_cache;
        # Entries expire after 10 minutes and are then revalidated
        proxy_cache_valid 200 302 10m;
        proxy_cache_revalidate on;
    }
}
In this example, the proxy_cache_revalidate on; directive enables cache revalidation for the specified location. When a client requests a resource whose cached copy has expired, Nginx revalidates it with the origin server before serving it, ensuring that clients receive fresh content whenever possible.

Note: Enabling proxy_cache_revalidate may introduce additional requests to the origin server for validation, so it's important to consider the impact on backend server load and response times.

Nginx cache minimum uses

The proxy_cache_min_uses directive in Nginx is used to set the minimum number of requests that must be made to a particular resource before it gets cached. This means that the content won't be cached until it has been requested a specified minimum number of times.

For example, if you set proxy_cache_min_uses 3; it means that Nginx will only start caching the responses for a specific resource after it has been requested three times. Until that threshold is reached, Nginx will continue forwarding the requests to the backend server without caching the responses.

This directive can be useful in scenarios where you want to ensure that only frequently accessed content gets cached. By setting a minimum number of uses, you can avoid caching content that might be requested infrequently, saving cache space for more popular resources.

Here's an example of how you might use proxy_cache_min_uses in an Nginx configuration:

http {
    # Cache path and settings (proxy_cache_path is only valid in the http context)
    proxy_cache_path /var/cache/nginx/my_cache levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m;

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://backend_server;
            proxy_cache my_cache;

            # Set the minimum number of requests before caching
            proxy_cache_min_uses 3;
        }
    }
}

In this example, content will only be cached if it's requested three or more times. Adjust the value according to your specific requirements and the popularity of your resources.