Micro-services Using go-kit: Rate Limiting
In the service world, we usually need to limit the number of incoming requests so that our services do not get overwhelmed.
Overview
In the previous article, we already added logging functionality to the Lorem service. However, it is not production-ready yet. We still need to add more middleware layers to the service in order to extend its capabilities.
Therefore, in this article I will cover one of those middleware capabilities. Specifically, I will talk about rate limiting implemented as a service middleware.
Rate Limiting Service
In the service world, we usually need to limit the number of requests in order to protect our services from being overwhelmed. A single request can consume a large amount of CPU, or it can be greedy in its memory consumption. Combined with many requests arriving within a short period of time, this can lead to performance degradation.
Token Bucket Limiter
For this purpose, I am going to use a token-bucket rate limiting algorithm. In brief, we have a bucket of tokens, and each request must take a token in order to continue processing. If no tokens are left, the request cannot be completed. The bucket is also refilled with tokens at a fixed interval. For more details about this algorithm, please read the token bucket article on Wikipedia.
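To make the idea concrete, here is a minimal, self-contained sketch of a token bucket in Go. It is only an illustration of the algorithm, not the implementation we will use; the type and function names are made up for this sketch, and in the next step we switch to an existing library instead.

```go
package main

import (
	"fmt"
	"time"
)

// tokenBucket is a toy token bucket: a buffered channel holds the tokens
// and a background goroutine puts one token back per refill interval.
type tokenBucket struct {
	tokens chan struct{}
}

func newTokenBucket(capacity int, refillInterval time.Duration) *tokenBucket {
	tb := &tokenBucket{tokens: make(chan struct{}, capacity)}
	for i := 0; i < capacity; i++ {
		tb.tokens <- struct{}{} // start with a full bucket
	}
	go func() {
		for range time.Tick(refillInterval) {
			select {
			case tb.tokens <- struct{}{}: // refill one token
			default: // bucket is already full
			}
		}
	}()
	return tb
}

// take reports whether a token was available for this request.
func (tb *tokenBucket) take() bool {
	select {
	case <-tb.tokens:
		return true
	default:
		return false
	}
}

func main() {
	tb := newTokenBucket(5, time.Second)
	for i := 1; i <= 7; i++ {
		fmt.Printf("request %d allowed: %v\n", i, tb.take())
	}
}
```

Since the loop fires all seven requests immediately and the bucket only holds five tokens, the last two requests are rejected.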
Fortunately, there is a Go library from Juju that implements this algorithm. Furthermore, Go-kit has a built-in middleware function for it as well, so both of them will minimize our effort.
Middleware in Go-kit
In Go-kit, an endpoint middleware is a function that takes an endpoint.Endpoint as input and returns an endpoint.Endpoint as well.
```go
// Go-kit Middleware Endpoint
type Middleware func(Endpoint) Endpoint
```
Just to note, endpoint.Endpoint is itself a function:
```go
// Go-kit Endpoint Function
type Endpoint func(ctx context.Context, request interface{}) (response interface{}, err error)
```
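To see how these two types fit together, here is a small, self-contained sketch of an endpoint middleware. The annotateMiddleware name and the echo endpoint are hypothetical and exist only to show the wrapping pattern; the real rate-limiting middleware is written in Step 1 below.

```go
package main

import (
	"context"
	"fmt"

	"github.com/go-kit/kit/endpoint"
)

// annotateMiddleware returns a Middleware that runs some extra code
// before delegating to the endpoint it wraps.
func annotateMiddleware(tag string) endpoint.Middleware {
	return func(next endpoint.Endpoint) endpoint.Endpoint {
		return func(ctx context.Context, request interface{}) (interface{}, error) {
			fmt.Println("before endpoint:", tag)
			return next(ctx, request) // pass execution to the wrapped endpoint
		}
	}
}

func main() {
	// A trivial endpoint that just echoes its request.
	var e endpoint.Endpoint = func(ctx context.Context, request interface{}) (interface{}, error) {
		return request, nil
	}

	// Wrapping: middleware(endpoint) yields a new endpoint.
	e = annotateMiddleware("rate limiting goes here")(e)

	resp, _ := e(context.Background(), "hello")
	fmt.Println(resp)
}
```

Because applying a middleware to an endpoint yields another endpoint.Endpoint, middlewares can be stacked by applying them one after another, which is exactly what happens in Step 2 when the rate limiter wraps the Lorem endpoint.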
Step by Step
It is time to turn all of this into code. Amazingly, you don't need to make big changes to the existing code. I will take the existing lorem-logging code and enhance it by adding a rate limiter. Before we begin, we need to download the required library first.
```sh
# juju library
go get github.com/juju/ratelimit
```
Step 1: instrument.go
Create a file and name it instrument.go. Then add a NewTokenBucketLimiter function. The function takes a token bucket as an argument and returns an endpoint.Middleware. One note: before passing execution on to the next endpoint, we check whether a token is available by using the bucket's TakeAvailable function.
```go
package lorem_rate_limit

import (
	"context"
	"errors"
	"github.com/go-kit/kit/endpoint"
	"github.com/juju/ratelimit"
)

// ErrLimitExceed is returned when no token is left in the bucket.
var ErrLimitExceed = errors.New("Rate Limit Exceed")

func NewTokenBucketLimiter(tb *ratelimit.Bucket) endpoint.Middleware {
	return func(next endpoint.Endpoint) endpoint.Endpoint {
		return func(ctx context.Context, request interface{}) (interface{}, error) {
			if tb.TakeAvailable(1) == 0 { // no token left for this request
				return nil, ErrLimitExceed
			}
			return next(ctx, request)
		}
	}
}
```
Step 2: main.go
In this step, we need to create the token bucket using the ratelimit.NewBucket function. This example refills the bucket with one token every second, up to a maximum of five tokens. Please note that these numbers are too small for a production use case; I use them for simulation purposes only. Next, wrap the existing endpoint with the middleware function.
```go
func main() {
	ctx := context.Background()
	errChan := make(chan error)

	// Logging domain.
	var logger log.Logger
	{
		logger = log.NewLogfmtLogger(os.Stdout)
		logger = log.With(logger, "ts", log.DefaultTimestampUTC)
		logger = log.With(logger, "caller", log.DefaultCaller)
	}

	var svc lorem_rate_limit.Service
	svc = lorem_rate_limit.LoremService{}
	svc = lorem_rate_limit.LoggingMiddleware(logger)(svc)

	rlbucket := ratelimit.NewBucket(1*time.Second, 5)

	e := lorem_rate_limit.MakeLoremLoggingEndpoint(svc)
	e = lorem_rate_limit.NewTokenBucketLimiter(rlbucket)(e)

	endpoint := lorem_rate_limit.Endpoints{
		LoremEndpoint: e,
	}

	r := lorem_rate_limit.MakeHttpHandler(ctx, endpoint, logger)

	// HTTP transport
	go func() {
		fmt.Println("Starting server at port 8080")
		handler := r
		errChan <- http.ListenAndServe(":8080", handler)
	}()

	go func() {
		c := make(chan os.Signal, 1)
		signal.Notify(c, syscall.SIGINT, syscall.SIGTERM)
		errChan <- fmt.Errorf("%s", <-c)
	}()

	fmt.Println(<-errChan)
}
```
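Compared with the lorem-logging version of main.go, the only new pieces are the rlbucket created by ratelimit.NewBucket and the line that wraps the endpoint with NewTokenBucketLimiter; everything else stays the same.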
Step 3: Test
That's all you need to make it run. Very simple, isn't it? Now it is time to test it. Run the server and make several requests. At some point, you will get an error response telling you Rate Limit Exceed.
```
# output logs
ts=2017-03-19T04:33:57.97656492Z caller=logging.go:31 function=Word min=20 max=20 result=persentiscere took=3.417µs
ts=2017-03-19T04:33:58.123130597Z caller=logging.go:31 function=Word min=20 max=20 result=exclamaverunt took=2.678µs
ts=2017-03-19T04:33:58.258280166Z caller=logging.go:31 function=Word min=20 max=20 result=difficultates took=2.934µs
ts=2017-03-19T04:33:58.84378762Z caller=logging.go:31 function=Word min=20 max=20 result=recognoscimus took=3.47µs
ts=2017-03-19T04:33:59.338875968Z caller=logging.go:31 function=Word min=20 max=20 result=cognosceremus took=4.142µs
ts=2017-03-19T04:33:59.755599747Z caller=server.go:110 err="Rate Limit Exceed"
ts=2017-03-19T04:33:59.923025144Z caller=server.go:110 err="Rate Limit Exceed"
ts=2017-03-19T04:34:00.086307562Z caller=logging.go:31 function=Word min=20 max=20 result=similitudinem took=3.108µs
ts=2017-03-19T04:34:00.224307681Z caller=server.go:110 err="Rate Limit Exceed"
```
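If you prefer to script the test instead of issuing the requests by hand, a small client like the sketch below will do. The URL is my assumption based on the route the Lorem service exposed in the previous articles; adjust the path to whatever your MakeHttpHandler actually registers.

```go
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// Assumed route for the Lorem service; change it to match your handler.
	url := "http://localhost:8080/lorem/word/20/20"

	// Fire more requests than the bucket capacity so the limiter kicks in.
	for i := 1; i <= 10; i++ {
		resp, err := http.Get(url)
		if err != nil {
			fmt.Printf("request %d failed: %v\n", i, err)
			continue
		}
		body, _ := ioutil.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("request %d: status=%d body=%s\n", i, resp.StatusCode, body)
	}
}
```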
Options
Maybe you feel unsatisfied that the service returns an error whenever no token is available. It is pretty harsh. It would be better to make the request wait for a while until a token becomes available. For this purpose, go-kit provides the NewTokenBucketThrottler middleware. To make this work, just replace the limiter line:
```go
// add import: ratelimitkit "github.com/go-kit/kit/ratelimit"

// replace this line
// e = lorem_rate_limit.NewTokenBucketLimiter(rlbucket)(e)
// with
e = ratelimitkit.NewTokenBucketThrottler(rlbucket, time.Sleep)(e)
```
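With the throttler in place, a request that finds the bucket empty is no longer rejected with Rate Limit Exceed; instead it waits (via the time.Sleep function passed in) until a token becomes available and then continues to the next endpoint. The trade-off is extra latency instead of an error response.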
Conclusion
Whenever you create a service, do not forget to limit the number of requests it has to process, especially for services that consume a lot of CPU and/or memory. That way your service's performance stays at an acceptable level.
I am still amazed at how go-kit works. It is so modular that whenever I want to add another capability to the service, it works like a charm.
That's all from me. You can still find the source code on my GitHub account.
Reference
- Go Programming Blueprints – Second Edition by Mat Ryer