Traffic Mirroring to Load Test a Web Application

Generating realistic traffic against your application is hard. Successful load testing gives assurance that the application is high quality and can serve a large number of users.

First of all, why do we need load testing?

When testing our website, app, or API endpoint under load, we are simulating how it will perform when hundreds, thousands, or millions of users visit it in real life. Our system might perform completely differently for one user (functional testing) than for many (load testing), due to the limits of the system’s resources. So, to understand, analyze, and fix errors, bugs, and bottlenecks before they happen in production, we need to understand the real user load; it is not wise to neglect the real people who will be using our system or product.

KPIs like response time, error rate, memory usage, and CPU might look top-notch when running functional tests. But when scaled to millions of users, with tests running from all over the globe, the application can suddenly behave like a completely different one.

The questions that load testing will answer for us:

  • How much load can our application handle?
  • How much load can our system handle?
  • For microservices, which instance or container size is ideal for a certain load?

and many more.


There are a lot of ways to generate load on an application, but it is very hard to generate realistic load with tools like JMeter.

For an application that is already built and running in production, we can load test by mirroring the real traffic to a staging or test server.


I will be using a very basic Python/Flask-based application.

#  Project TrafficLoader is developed by Fahad Ahammed on 3/7/20, 11:12 AM.
#  Last modified at 3/7/20, 11:10 AM.
#  Github: fahadahammed
#  Email:
#  Copyright (c) 2020. All rights reserved.

from flask import Flask, request
import os

app = Flask(__name__)

ENV = os.getenv('ENV', 'prod')  # two env: 1: dev, 2: prod and default is prod

APPLICATION_NAME = f"TrafficLoader_Application-{ENV}"

FILE_NAME_TO_SAVE_DATA = f"data-{APPLICATION_NAME}.txt"  # per-environment data file

def save_to_file(content):
    try:
        with open(FILE_NAME_TO_SAVE_DATA, "a") as file:
            file.write(str(content) + "\n")
    except Exception as e:
        return e

@app.route('/', methods=["GET"])
def hello():
    return f"Hello from {APPLICATION_NAME}"

@app.route('/save', methods=["POST"])
def hello_save():
    to_save = request.json  # { "message": "Hello..." }
    save_to_file(content=to_save["message"] + f" From {APPLICATION_NAME}")
    return f"Saving in {APPLICATION_NAME}"

if __name__ == "__main__":
    app.secret_key = b'_5#y__RO__4Q8z\n\xec]/'
    app.config["ENV"] = ENV
    if ENV == "dev":
        app.run(host="", port=15002, debug=True)
    else:
        app.run(host="", port=15001)

This application has two endpoints.

  1. “/”: Returns “Hello from TrafficLoader_Application-{ENV}”.
  2. “/save”: A POST endpoint that takes a JSON body such as:

{ "message": "Hello !" }
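The “/save” endpoint can be exercised from Python’s standard library as well as curl; a minimal sketch (the host, and that the prod app is reachable on port 15001, are assumptions based on the settings below):

```python
import json
import urllib.request

# Hypothetical target: the prod instance of the app (port 15001, local host).
url = "http://localhost:15001/save"

# Build the JSON body the endpoint expects.
payload = json.dumps({"message": "Hello !"}).encode("utf-8")
req = urllib.request.Request(
    url,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# With the app running, this should return "Saving in TrafficLoader_Application-prod":
# print(urllib.request.urlopen(req).read().decode())
```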


Now, this application can run in two environments.

  1. dev
  2. prod

We can set the environment on Ubuntu/macOS via

export ENV=dev

Without any environment variable assigned, it will by default run in production mode, selecting ENV=prod and port 15001; when the ENV variable is set to “dev”, it will use port 15002.

Also, according to that ENV, the application derives some values to help our testing.

  1. Route “/” will return “Hello from TrafficLoader_Application-{ENV}”.
  2. Route “/save” will append each request’s message, tagged with the app environment, to “data-TrafficLoader_Application-{ENV}.txt”.
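The per-environment names can be sketched as a small helper (a restatement of the app’s own f-strings above, not part of the app itself):

```python
def names_for(env: str):
    """Derive the app name and data file name for a given ENV,
    mirroring the application's f-strings."""
    application_name = f"TrafficLoader_Application-{env}"
    file_name_to_save_data = f"data-{application_name}.txt"
    return application_name, file_name_to_save_data

print(names_for("prod"))
# ('TrafficLoader_Application-prod', 'data-TrafficLoader_Application-prod.txt')
```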



$ python3 
* Serving Flask app "app" (lazy loading)
* Environment: prod
* Debug mode: off
* Running on (Press CTRL+C to quit)


$ curl
Hello from TrafficLoader_Application-prod


$ python3
* Serving Flask app "app" (lazy loading)
* Environment: dev
* Debug mode: on
* Running on (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 180-761-601


$ curl
Hello from TrafficLoader_Application-dev


For traffic mirroring, I will be using nginx.


upstream production-server {
    server localhost:15001;  # prod app; local host is an assumption
}

upstream dev-server {
    server localhost:15002;  # dev app; local host is an assumption
}

server {
    listen 40000;

    location / {
        mirror /mirror;
        proxy_pass http://production-server/;
    }

    location = /mirror {
        internal;  # not reachable by clients, only via mirror
        proxy_pass http://dev-server$request_uri;
    }
}

Here, I have declared two upstreams, for production and dev. The server block listens on port 40000, and its default location “/” proxies to the production-server upstream, i.e. port 15001.

For mirroring, I set ‘mirror /mirror;’ inside location “/”, which makes nginx send a copy of every request to location “/mirror”; that location in turn proxies the copied request, with its original URI, to the dev-server upstream.


$ curl
Hello from TrafficLoader_Application-prod

So, our nginx proxy is working as expected: the request went to the prod upstream and thus to the prod server/app.

This request is also sent to the dev server internally.


Normal request and POST request (screenshots omitted):
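Given the app’s save logic (the message plus “ From {APPLICATION_NAME}”), mirroring one POST of { "message": "Hello !" } through the nginx port should leave a corresponding line in each environment’s data file. A sketch of the expected contents:

```python
# Expected data-file line in each environment after one mirrored POST.
message = "Hello !"
expected = {
    env: f"{message} From TrafficLoader_Application-{env}"
    for env in ("prod", "dev")
}
for env, line in expected.items():
    print(f"data-TrafficLoader_Application-{env}.txt -> {line}")
```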


This way, we can mirror production traffic to our desired testing server, and the real users will experience no side effects.

Fahad Ahammed is a System Administrator and DevOps engineer in the Software Department of OWSL. As a Linux enthusiast, he breathes in server consoles and terminals. You can have a look at his website here:
