This repository implements a log packet distributor using Akka HTTP and FastAPI. It simulates a distributed system where log packets are sent to multiple analysers for processing. The system is designed to demonstrate weighted round-robin distribution of packets, health monitoring, and basic load testing.
Use Docker Compose to build the images and start the stack. Detached mode keeps the console uncluttered and gives slightly higher throughput, because the services do not write logs to the terminal. Without `--detach` you can watch the logs in real time, but on my local system the services would crash after a few minutes due to the sheer volume of output.
```shell
docker compose up --build --detach   # build all images and run in the background
```
For subsequent runs when the images already exist you can skip the build step:
```shell
docker compose up --detach   # start the existing containers in the background
```
The services will be available on the following ports:
| Service | URL / Description |
|---|---|
| Analyser 1‑6 | http://localhost:8001 … http://localhost:8006 |
| Distributor | http://localhost:8080 |
| Locust UI | http://localhost:8089 |
| Frontend | http://localhost:5173 |
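Once the stack is up, you can quickly check which services are reachable. The sketch below probes each base URL from the table above (a minimal sketch using only the standard library; it only tests connectivity, not correctness):

```python
# Probe each service's base URL; a connection error means the
# container is not (yet) up. Ports taken from the table above.
import urllib.request
import urllib.error

SERVICES = {
    "distributor": "http://localhost:8080",
    "analyser-1": "http://localhost:8001",
    "locust": "http://localhost:8089",
    "frontend": "http://localhost:5173",
}

def is_reachable(url: str, timeout: float = 2.0) -> bool:
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # the server responded, even if with an error status
    except OSError:
        return False  # connection refused or timed out

# Example usage (requires the stack to be running):
# for name, url in SERVICES.items():
#     print(f"{name}: {'up' if is_reachable(url) else 'down'}")
```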
Prometheus and Grafana definitions exist in the repository but are commented out in `docker-compose.yaml`. You can enable them by uncommenting the respective sections if you require metrics collection and dashboards.
Analyser failure and recovery can be simulated via the `/internal/simulate/fail` and `/internal/simulate/begin` endpoints.

```shell
# Start the Docker containers
docker compose up --build --detach
```
- `distributor` – exposes HTTP routes and uses Akka Typed actors for weighted round‑robin distribution of packets to the analysers.
- `analyser` – each container provides an API to process log packets and exposes Prometheus metrics.
- `frontend` – a simple web application served on port 5173.
- `locust-master` and `locust-worker` – generate traffic against the distributor for benchmarking.

With the services running you can send a sample packet to the distributor:
```shell
curl -X POST http://localhost:8080/distributor/packet \
  -H "Content-Type: application/json" \
  -d '{"id":"1","timestamp":0,"messages":[{"level":"INFO","message":"hello"}]}'
```
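The weighted round‑robin strategy the distributor uses can be sketched as follows. This is a minimal Python illustration with hypothetical weights; the actual implementation lives in the Akka Typed actors of the `distributor` service:

```python
# Minimal weighted round-robin sketch: each analyser appears in the
# schedule `weight` times, so it receives a proportional share of packets.
from itertools import cycle

def build_schedule(weights: dict[str, int]) -> cycle:
    """Expand each analyser into the rotation `weight` times."""
    slots = [name for name, w in weights.items() for _ in range(w)]
    return cycle(slots)

# Hypothetical weights: analyser1 gets twice the traffic of analyser2.
schedule = build_schedule({"analyser1": 2, "analyser2": 1})
picks = [next(schedule) for _ in range(6)]
print(picks)  # → ['analyser1', 'analyser1', 'analyser2', 'analyser1', 'analyser1', 'analyser2']
```

A real distributor would also skip analysers that are marked as down, which is what the simulate endpoints described below let you exercise.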
Locust will also start generating traffic automatically when the stack is up. Visit http://localhost:8089 to see the Locust dashboard.
Open http://localhost:5173 in your browser once the containers are running. The React web UI exposes several controls and charts. Failure and recovery are triggered through the `/internal/simulate/begin` and `/internal/simulate/fail` endpoints, which the UI calls for you. Using these controls you can test scenarios such as taking an analyser offline during a swarm and observing how the distributor rebalances traffic.
```shell
docker compose down
```
This stops and removes all containers.
I was able to capture a maximum load of around 5000 requests per second using JMeter; the test plan is included in the repository. Using Locust (which is integrated into the Docker Compose setup) I achieved a maximum of around 1800 requests per second. The load-testing script is located in `locustfile.py`.
I tested the failure and recovery of analysers by simulating failures with the `/internal/simulate/fail` endpoint. This endpoint marks an analyser as down, and the distributor stops routing packets to it. The `/internal/simulate/begin` endpoint brings the analyser back online, after which the distributor resumes routing packets to it. I tested with different ratios: 1:1 and 1:6.
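A fail/recover cycle can also be scripted against the endpoints directly. The sketch below assumes the simulate endpoints accept an empty POST (the exact request body is an assumption; check the analyser's API for details):

```python
# Sketch of a fail/recover cycle against one analyser. Uses only the
# standard library; the empty-POST assumption is noted in the lead-in.
import urllib.request

def simulate_url(port: int, action: str) -> str:
    """Build the internal simulation URL for an analyser on `port`."""
    return f"http://localhost:{port}/internal/simulate/{action}"

def post(url: str) -> None:
    urllib.request.urlopen(urllib.request.Request(url, method="POST"))

# Example usage (requires the stack to be running):
# post(simulate_url(8001, "fail"))   # take analyser 1 offline
# ...watch the distributor rebalance traffic, then:
# post(simulate_url(8001, "begin"))  # bring it back online
```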
Interestingly, I observed that throughput increases once Locust stops hatching new users and the existing users are simply issuing several thousand requests at a steady rate. I suspect this is down to how my Akka HTTP service is configured, but it is worth noting that throughput can be significantly higher when no new users are hatching.
- `/` – top‑level Docker Compose file
- `/analyser` – FastAPI service that accepts log packets
- `/distributor` – Akka HTTP application distributing packets to analysers
- `/frontend` – React + TypeScript web interface (Vite)
- `/prometheus` – example Prometheus configuration (currently unused)
- `/locustfile.py` – Locust load‑testing script