
Log Distributor

This repository implements a log packet distributor using Akka HTTP and FastAPI. It simulates a distributed system where log packets are sent to multiple analysers for processing. The system is designed to demonstrate weighted round-robin distribution of packets, health monitoring, and basic load testing.
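The distributor's real selection logic lives in the /distributor service; purely to illustrate the weighted round-robin idea (a minimal Python sketch, not the repository's actual code), the core of such a scheduler can look like this:

import itertools
import threading

class WeightedRoundRobin:
    """Cycle through analysers, visiting each one in proportion to its weight."""

    def __init__(self, weights: dict[str, int]):
        # Expand {"analyser1": 2, "analyser2": 1} into
        # ["analyser1", "analyser1", "analyser2"] and cycle over it forever.
        expanded = [name for name, weight in weights.items() for _ in range(weight)]
        self._cycle = itertools.cycle(expanded)
        self._lock = threading.Lock()  # selection must stay safe under concurrent requests

    def next_target(self) -> str:
        with self._lock:
            return next(self._cycle)

A production version would additionally skip targets currently marked unhealthy, which is where the health monitoring comes in.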

Video Demos

Associated Content

Architecture Overview

[Architecture diagram]

Prerequisites

Docker and Docker Compose (everything below runs in containers).

Running with Docker

Use Docker Compose to build the images and start the stack. Detached mode keeps the console uncluttered and gives slightly higher throughput because the services do not write logs to the terminal. Without --detach you can watch the logs in real time, but on my local machine the services crashed after a few minutes under the sheer volume of output.

docker compose up --build --detach   # build all images and run in the background

For subsequent runs when the images already exist you can skip the build step:

docker compose up --detach           # start the existing containers in background

The services will be available on the following ports:

Service        URL / Description
Analyser 1–6   http://localhost:8001 – http://localhost:8006 (one port per analyser)
Distributor    http://localhost:8080
Locust UI      http://localhost:8089
Frontend       http://localhost:5173

Prometheus and Grafana definitions exist in the repository but are commented out in docker-compose.yaml. You can enable them by uncommenting the respective sections if you require metrics collection and dashboards.

Task and Deliverables

Define a data model for a log message and packet

Develop a multi-threaded web server that accepts a POST request for a log packet

Design the logic for high-throughput non-blocking thread-safe distribution

Handle the condition of one or more analyzers going offline for a while, and then coming back online

Set up a working demo (e.g. Docker Compose with JMeter, but feel free to use whatever you would like) to show:

Video Demo

Include clear instructions on how to run the demo locally (e.g. we should be able to run your demo)

# Start the Docker containers
docker compose up --build --detach

Give me a 1-page write-up on what other conditions you might want to handle or improvements you might want to add given more time, what would be your testing strategy, etc.

Tech stack

Akka HTTP for the distributor, FastAPI (Python) for the analysers, React + TypeScript (Vite) for the frontend, Locust for load testing, and Docker Compose for orchestration.

Testing the stack

With the services running you can send a sample packet to the distributor:

curl -X POST http://localhost:8080/distributor/packet \
  -H "Content-Type: application/json" \
  -d '{"id":"1","timestamp":0,"messages":[{"level":"INFO","message":"hello"}]}'

Locust will also start generating traffic automatically when the stack is up. Visit http://localhost:8089 to see the Locust dashboard.
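The traffic comes from locustfile.py; a minimal Locust user that posts packets to the distributor would look something like this (the payload simply mirrors the curl example, and the class name is illustrative):

from locust import HttpUser, task

class DistributorUser(HttpUser):
    @task
    def send_packet(self):
        # POST one packet per task iteration to the distributor endpoint.
        self.client.post(
            "/distributor/packet",
            json={
                "id": "1",
                "timestamp": 0,
                "messages": [{"level": "INFO", "message": "hello"}],
            },
        )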

Using the Frontend

Open http://localhost:5173 in your browser once the containers are running. The React web UI exposes several controls and charts.

Using these controls you can test scenarios such as taking an analyser offline during a swarm and observing how the distributor rebalances traffic.

Cleaning up

docker compose down

This stops and removes all containers.

Testing and Results

Load Testing (High Throughput)

Using JMeter I captured a maximum load of around 5000 requests per second; the test plan is included in the repository. Using Locust (which is integrated into the Docker Compose setup) I reached a maximum of around 1800 requests per second. The load-testing script is located in locustfile.py.
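To reproduce the JMeter run from the command line, non-GUI mode is the usual approach; assuming the included plan is named loadtest.jmx (a placeholder — check the repository for the actual filename):

jmeter -n -t loadtest.jmx -l results.jtl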

[JMeter results: ~5000 requests per second]

[Locust dashboard]

Failure and Recovery Testing

I tested failure and recovery of analysers by simulating failures with the /internal/simulate/fail endpoint. This marks an analyser as down, and the distributor stops routing packets to it. The /internal/simulate/begin endpoint brings the analyser back online, after which the distributor resumes routing packets to it. I tested at two analyser failure ratios: 1:1 and 1:6.
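For example, assuming the simulation endpoints accept POST and are exposed on each analyser's own port, taking analyser 1 down and bringing it back looks like:

curl -X POST http://localhost:8001/internal/simulate/fail    # mark analyser 1 as down
curl -X POST http://localhost:8001/internal/simulate/begin   # bring analyser 1 back online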

[Failure and recovery test results (two runs)]

Throughput with Concurrent Requests

Interestingly, I observed that throughput increases once no new users are hatching and the existing users are each issuing several thousand requests. I suspect this comes down to how my Akka HTTP service is configured, but it is worth noting that steady-state throughput can be significantly higher than throughput during ramp-up.
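If you want to dig into this, the Akka HTTP server settings in application.conf are the first knobs I would try; the keys below are real Akka HTTP settings, but the values are illustrative rather than the repository's configuration:

akka.http.server {
  max-connections  = 1024  # concurrent open connections the server will accept
  pipelining-limit = 16    # maximum in-flight requests per connection
  backlog          = 100   # length of the TCP accept queue
}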

[Throughput with concurrent requests]

Repository structure

/                – top‑level Docker Compose file
/analyser        – FastAPI service that accepts log packets
/distributor     – Akka HTTP application distributing packets to analysers
/frontend        – React + TypeScript web interface (Vite)
/prometheus      – example Prometheus configuration (currently unused)
/locustfile.py   – Locust load‑testing script