Benchmarking RESTful APIs | Part II: On the Cloud
Recap π€
In our previous post, we benchmarked frameworks from several different languages. Our test server was my Raspberry Pi 3 Model B from 2016. It was a good experiment, but this time around we needed something more realistic.
TL;DR - We used Kubernetes 😱 on the cloud (Google Cloud Platform) -> jump to the benchmarks section.
Let's get back to our story, in case you are following the storyline from the first post 📖
Dave went back to his peers to show the benchmark results from the initial tests he ran on his Raspberry Pi 3. While some of his peers liked the idea and appreciated the outcome, others pointed out that they would need to see the tests on real production-grade hardware to believe the results! Dave went back to his garage to tweak his test bench.
Intro
Our setup is quite straightforward: each RESTful service gets the same amount of CPU and memory (enforced via k8s config). The entire setup takes a few minutes to initialize, thanks to tools like the gcloud CLI, Terraform & Pulumi, so you can get the environment up and running without much hassle. If you want to run the benchmark without fancy infra (i.e. without a private VPC etc.), we recommend the CLI wrapper, as it is built over the gcloud SDK. For the adventurous type, we have a slightly more elaborate setup with Terraform (GCP) & Pulumi (GCP & DigitalOcean).
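If you'd rather provision things by hand than use the wrapper, the gist of it is a small GKE cluster plus `kubectl apply`. A rough sketch (cluster name, zone, node size and manifest directory are our assumptions here, not the exact values from the project):

```shell
# Create a small GKE cluster (names/sizes are illustrative; tune to your project)
gcloud container clusters create bench-cluster \
  --zone us-central1-a \
  --num-nodes 3 \
  --machine-type e2-standard-2

# Point kubectl at the new cluster
gcloud container clusters get-credentials bench-cluster --zone us-central1-a

# Deploy all the benchmark services from their manifests
kubectl apply -f k8s/
```

The CLI wrapper in the repo automates essentially these steps for you.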
Environment Review
Kubernetes is a planet-scale tool that can orchestrate containerized applications and more.
Check out our series on Kubernetes; it is swell! 👇
Since we didn't want the application to scale as the load increases, we put some limits in place. The config ensures that the deployments stay put and do not auto-scale in the K8s cluster. The whole point of this exercise is to simulate a prod environment (but without auto-scaling), then load test and measure performance.
Quite the step up from the test on Raspberry Pi 3, isn't it? π
It took us a while to figure out the right configuration for the cluster so that you could replicate the tests on your own with the optimal amount of resources. The K8s environment can be set up on the GCP free tier (at the time of writing this article).
The source code link for this entire project is given in the references section! 👇
Let's review our K8s config file
The Deployment config looks like this - 👇
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rest-net-http-golang
spec:
  selector:
    matchLabels:
      app: rest-net-http-golang
  template:
    metadata:
      labels:
        app: rest-net-http-golang
    spec:
      containers:
        - name: rest-net-http-golang
          image: ghcr.io/gochronicles/benchmark-rest-frameworks/rest-net-http-golang:latest
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              memory: "1Gi"
              cpu: "500m"
          ports:
            - containerPort: 3000
```
Notice that we have allocated 1Gi of memory and 500m of CPU (half a vCPU). The same constraint is applied to every framework, which ensures that the amount of compute given to each deployment is consistent.
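If you want to double-check that the limits actually landed on a running deployment, a quick `kubectl` jsonpath query does the trick (deployment name taken from the config above):

```shell
# Print the resource limits of the first container in the deployment's pod template
kubectl get deployment rest-net-http-golang \
  -o jsonpath='{.spec.template.spec.containers[0].resources.limits}'
# should report cpu 500m and memory 1Gi
```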
```yaml
apiVersion: v1
kind: Service
metadata:
  name: rest-net-http-golang
spec:
  type: LoadBalancer # provide a public IP for the service
  selector:
    app: rest-net-http-golang
  ports:
    - port: 80
      targetPort: 3000
```
The Service config exposes the RESTful apps outside the cluster via a public IP so that our benchmarking client can connect and run load simulations.
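Once the LoadBalancer is provisioned (it can take a minute or two on GCP), you can grab the public IP to feed to the load testing client, for example:

```shell
# Fetch the external IP assigned to the Service by the cloud load balancer
kubectl get service rest-net-http-golang \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```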
Our attack tool for load testing 👾
This time around, we decided to play around with different benchmark & load testing tools. Finally, we chose Hey.
Hey is a drop-in replacement for the ab tool (Apache Benchmark).
```shell
hey -c 800 -n 35000 http://ip-addr-url/
```
This command sends a total of 35k requests with 800 concurrent workers to each RESTful API service on the K8s cluster.
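If you'd rather run for a fixed window instead of a fixed request count, hey also supports a duration flag; the 30-second window below is our choice, not a value from the original test:

```shell
# Same concurrency, but run for 30 seconds instead of a fixed number of requests
hey -z 30s -c 800 http://ip-addr-url/
```

This is handy for comparing frameworks on sustained throughput rather than time-to-complete.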
Honorable mentions -
1. Locust! 🛩️ -
- This would have been the ideal tool for this test, for a couple of important reasons:
- We could deploy this Python-based, web-app-like load testing tool on the K8s cluster and run benchmarks from within the cluster network (no need for a public IP).
- It comes with a nice UI dashboard to visualize the results.
- However, the test results were the same across frameworks; it looked like we couldn't schedule enough workers to really push the throttle on the RESTful APIs.
- We had a limit on the number of processors we could deploy on our GCP instance (the free tier has an 8-vCPU limit for the entire project).
- If you want to tinker with Locust, here's the k8s config we created.
2. Apache Benchmark -
- A good old tool we could still use, but the results came in better and faster with hey, which shares similar CLI options.
- A CPU monitoring tool (htop) revealed that ab didn't take advantage of all the CPU cores, whereas hey fired up all CPU cores with the same parameters out of the box.
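For completeness, the Locust experiment above can also be driven from the command line without the UI dashboard. A headless invocation roughly mirroring our hey parameters might look like this (the locustfile name and host URL are placeholders, not from the repo):

```shell
# Run Locust headless: 800 simulated users, spawning 100 per second, for 1 minute
locust -f locustfile.py --headless -u 800 -r 100 \
  --run-time 1m --host http://ip-addr-url
```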
Benchmarks π
The order from slowest to fastest framework in the benchmark results is as expected: Go frameworks are at a minimum 10x faster than Node- & Python-based frameworks. The interesting bit, however, is that FastAPI (a Python framework) isn't too far off from NestJS (which is only about 12% faster).
Closing thoughts π€
The results are as we anticipated - Go-based frameworks are at least 10x faster than Node- & Python-based frameworks. One thing surprised us and suggests an area for more research -
- In our local testing, Gin has always performed faster than net/http (Golang). However, in this test, it scored lower. The source code for this service and the Kubernetes config can be found here and here respectively.
Let us know in the comments if you found a better way to do these tests.
Your feedback 👏 and support 🤗 mean a lot; do share some love 🥰 by sharing our posts on social media, and subscribe to our newsletter! Until next time! 👋😊
References